My research combines three threads:
The early stages of the visual system are relatively well understood: physiological models of how light is measured and encoded in the retina, transmitted along the optic nerve to the LGN, and passed on to V1 are widely accepted. Beyond V1, however, cortical visual processing is much less well understood. Strong evidence suggests that the visual system continually adapts to the statistics of its stimuli, and that this adaptation is guided by feedback from other brain regions signaling reward and attentional modulation. My goal is to uncover how visual abilities arise in biologically plausible models, and to apply those insights to artificial visual systems.
When exploring a class of learning algorithms such as support vector machines, neural networks, or Gaussian processes, there are always hyper-parameters to optimize. Traditionally, models were simple enough that this optimization could be done by hand or by grid search. Recent results such as Pinto et al. (2009) and Coates et al. (2011) suggest that these traditional methods are unreliable and can give a misleading picture of which learning algorithms work best. I am applying Bayesian optimization techniques to develop efficient and practical algorithms for hyper-parameter selection.
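To make the idea concrete, here is a minimal sketch of Bayesian optimization for a single hyper-parameter. Everything in it is illustrative: the `objective` function is a hypothetical validation error as a function of log learning rate (a real run would wrap model training), and the kernel length-scale, candidate grid, and iteration counts are arbitrary choices, not a method from the text. The sketch fits a Gaussian-process surrogate to past evaluations and picks the next trial point by maximizing expected improvement.

```python
import numpy as np
from math import erf, sqrt, pi

def objective(log_lr):
    # Hypothetical validation error as a function of log10 learning rate.
    # In practice this would train a model and return held-out error.
    return 0.1 * (log_lr + 3.0) ** 2 + 0.2

def rbf(a, b, length=1.0):
    # Squared-exponential kernel between two 1-D arrays of points.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(x_tr, y_tr, x_te, noise=1e-6):
    # Standard GP regression posterior via a Cholesky factorization.
    K = rbf(x_tr, x_tr) + noise * np.eye(len(x_tr))
    L = np.linalg.cholesky(K)
    Ks = rbf(x_tr, x_te)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_tr))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = 1.0 - np.sum(v ** 2, axis=0)  # rbf(x, x) == 1 on the diagonal
    return mu, np.sqrt(np.maximum(var, 1e-12))

def expected_improvement(mu, sigma, best):
    # EI for minimization: expected amount by which we beat the incumbent.
    z = (best - mu) / sigma
    cdf = 0.5 * (1.0 + np.vectorize(erf)(z / sqrt(2.0)))
    pdf = np.exp(-0.5 * z ** 2) / sqrt(2.0 * pi)
    return (best - mu) * cdf + sigma * pdf

def bayes_opt(n_iter=5):
    xs = [-6.0, -4.0, -0.5]             # small initial design
    ys = [objective(x) for x in xs]
    grid = np.linspace(-6.0, 0.0, 201)  # candidate log learning rates
    for _ in range(n_iter):
        x_tr, y_tr = np.array(xs), np.array(ys)
        # Center the targets so the zero-mean GP prior is reasonable.
        mu, sigma = gp_posterior(x_tr, y_tr - y_tr.mean(), grid)
        ei = expected_improvement(mu, sigma, min(ys) - y_tr.mean())
        x_next = float(grid[np.argmax(ei)])  # most promising candidate
        xs.append(x_next)
        ys.append(objective(x_next))
    i = int(np.argmin(ys))
    return xs[i], ys[i], xs, ys

best_x, best_y, xs, ys = bayes_opt()
print(f"best log_lr = {best_x:.2f}, val error = {best_y:.3f}")
```

The point of the surrogate model is that each new trial is chosen where the model predicts either a low error or high uncertainty, so evaluations concentrate in promising regions rather than being spread uniformly as in grid search.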
There is a strong computational and statistical connection between the way we learn to see and the way we learn to hear. For example, von Melchner et al. (2005) showed that cortex is adaptive and flexible enough that baby ferrets whose optic and auditory nerves have been cross-wired still grow up with some visual and auditory ability. I am interested in models and learning techniques that work for both sound and images with only minor adjustments. For my Master's degree I studied feature extraction and classification algorithms for recorded music.