
Bruno Olshausen

Bruno Adolphus Olshausen
Alma mater: Stanford University (BS, MS), California Institute of Technology (PhD)
Scientific career
Thesis: Neural routing circuits for forming invariant representations of visual objects (1994)

Bruno Adolphus Olshausen is an American neuroscientist and professor at the University of California, Berkeley, known for his work on computational neuroscience, vision science, and sparse coding. He currently serves as a Professor in the Helen Wills Neuroscience Institute and the UC Berkeley School of Optometry, with an affiliated appointment in Electrical Engineering and Computer Sciences. He is also the Director of the Redwood Center for Theoretical Neuroscience at UC Berkeley.

Career


Olshausen received his B.S. and M.S. degrees in Electrical Engineering from Stanford University in 1986 and 1987, respectively. He earned his Ph.D. in Computation and Neural Systems from the California Institute of Technology in 1994. After completing his doctoral studies, he held postdoctoral positions in the Department of Psychology at Cornell University and at the Center for Biological and Computational Learning at the Massachusetts Institute of Technology.[1][2]

Olshausen has served in several editorial and advisory roles. In 2009, he was awarded a fellowship of the Wissenschaftskolleg zu Berlin and a fellowship of the Canadian Institute for Advanced Research's Neural Computation and Adaptive Perception program.

His academic appointments include:

  • Assistant Professor (1996–2001), Department of Psychology and Center for Neuroscience, University of California, Davis
  • Associate Professor (2001–2005), Department of Psychology and Center for Neuroscience, UC Davis
  • Associate Professor (2005–2010), Helen Wills Neuroscience Institute and School of Optometry, UC Berkeley
  • Professor (2010–present), Helen Wills Neuroscience Institute and School of Optometry, UC Berkeley

Research


Olshausen's research focuses on understanding the information processing strategies employed by the visual system for tasks such as object recognition and scene analysis. His approach combines studying neural response properties with mathematical modeling to develop functional theories of vision. This work aims both to advance understanding of brain function and to develop new algorithms for image analysis based on biological principles. He has also contributed to technological applications, including image and signal processing, alternatives to backpropagation for unsupervised learning, memory storage and computation, and analog data compression systems.

Neural coding


One of Olshausen's most significant contributions is demonstrating how the principle of sparse coding can explain the response properties of neurons in visual cortex. His 1996 paper in Nature with David J. Field showed how the receptive field properties of simple cells in the primary visual cortex (V1) could emerge from learning a sparse code for natural images.[3] The paper built on two earlier reports that gave additional technical details.[4][5]

Features learned by the generalized Hebbian algorithm running on 8-by-8 patches of the Caltech 101 dataset.

The paper observed that simple cells have localized, oriented, bandpass (Gabor-like) receptive fields. Earlier methods, such as the generalized Hebbian algorithm, produce Fourier-like receptive fields that are neither localized nor oriented; with sparse coding, however, such Gabor-like receptive fields do emerge.

Specifically, consider an image $I(x,y)$ and a set of basis functions (receptive fields) $\phi_i(x,y)$. The image can be approximately represented as a linear sum of the basis functions: $I(x,y) \approx \sum_i a_i \phi_i(x,y)$. If so, then the image can be coded by the coefficients $a_i$, a code which may have better properties than directly coding the pixel values of the image.
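
As an illustration, here is a minimal NumPy sketch of this linear generative model (without any sparsity constraint yet). All names, dimensions, and data are illustrative: 8-by-8 patches are flattened to 64-dimensional vectors, the basis functions are the columns of a matrix, and random data stands in for natural-image patches.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_basis = 64, 64                      # 8x8 patches, one basis function per pixel (illustrative)

Phi = rng.standard_normal((n_pixels, n_basis))  # columns play the role of the receptive fields phi_i
Phi /= np.linalg.norm(Phi, axis=0)              # normalize each basis function

I = rng.standard_normal(n_pixels)               # stand-in for a flattened natural-image patch

# Code the image as coefficients a such that I ~= Phi @ a (here by ordinary least squares).
a, *_ = np.linalg.lstsq(Phi, I, rcond=None)
I_hat = Phi @ a                                 # reconstruction: sum_i a_i * phi_i
print(float(np.linalg.norm(I - I_hat)))         # near zero for a complete basis
```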

The algorithm proceeds as follows:[4]

  • Initialize the basis functions $\phi_i$ to random values for all $i$, and initialize the gain (norm) of each $\phi_i$ to a reasonable value.
  • Choose a sparseness cost function $S$ corresponding to a bell-shaped prior peaked at zero. Examples include $S(x) = |x|$, $S(x) = \log(1+x^2)$, and $S(x) = -e^{-x^2}$.
  • Loop
    • Sample a batch of images $I$.
      • For each image in the batch, solve for the coefficients $a_i$ that minimize the loss function $E$ given below.
      • Define the reconstructed image $\hat{I}(x,y) = \sum_i a_i \phi_i(x,y)$.
    • Update each basis function by Hebbian learning: $\Delta\phi_i(x,y) = \eta \langle a_i [I(x,y) - \hat{I}(x,y)] \rangle$. Here, $\eta$ is the learning rate and the expectation $\langle\cdot\rangle$ is over all images in the batch.
    • Update the norm (gain) of each $\phi_i$ so that the variance of its coefficient $a_i$ stays near a target value $\sigma^2$. Adjust the learning rate.

The key part of the algorithm is the loss function
$E = \sum_{x,y} \left[ I(x,y) - \sum_i a_i \phi_i(x,y) \right]^2 + \lambda \sum_i S\!\left(\frac{a_i}{\sigma}\right),$
where the first term is the image reconstruction loss and the second term is the sparsity loss. Minimizing the first term leads to accurate image reconstruction, and minimizing the second term leads to sparse coefficients, that is, a vector $a$ with many near-zero entries. The hyperparameter $\lambda$ balances the importance of image reconstruction against sparsity.
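
A compact sketch of the learning loop described above is given below. It is a sketch of the idea rather than a reimplementation of the published code: random synthetic patches stand in for whitened natural-image patches, the $\log(1+x^2)$ sparseness cost is assumed, the inner coefficient inference uses plain gradient descent (the original work used a conjugate-gradient solver), and all step sizes, batch sizes, and variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_basis = 64, 64            # 8x8 patches, complete basis (illustrative)
lam, sigma = 0.1, 1.0                 # sparsity weight lambda and coefficient scale sigma
eta_a, eta_phi = 0.05, 0.01           # step sizes for inference and learning (illustrative)

Phi = rng.standard_normal((n_pixels, n_basis))
Phi /= np.linalg.norm(Phi, axis=0)    # columns are the basis functions phi_i, unit norm

def infer(I, Phi, n_steps=200):
    """Minimize E(a) = ||I - Phi a||^2 + lam * sum_i log(1 + (a_i/sigma)^2) by gradient descent."""
    a = np.zeros(Phi.shape[1])
    for _ in range(n_steps):
        resid = I - Phi @ a
        grad = -2.0 * Phi.T @ resid + lam * (2.0 * a / sigma**2) / (1.0 + (a / sigma)**2)
        a -= eta_a * grad
    return a

for step in range(200):                                  # outer learning loop
    batch = rng.standard_normal((10, n_pixels))          # stand-in for whitened natural-image patches
    dPhi = np.zeros_like(Phi)
    coeff_var = np.zeros(n_basis)
    for I in batch:
        a = infer(I, Phi)
        resid = I - Phi @ a                              # I - I_hat
        dPhi += np.outer(resid, a)                       # Hebbian term: a_i * (I - I_hat)
        coeff_var += a**2
    Phi += eta_phi * dPhi / len(batch)                   # basis-function update, averaged over the batch
    # Gain adaptation: rescale each phi_i so the variance of its coefficient drifts toward sigma^2.
    gain = (coeff_var / len(batch) / sigma**2 + 1e-8) ** 0.05
    Phi *= gain
```

With whitened natural-image patches in place of the random data, basis functions trained in this way tend toward the localized, oriented, bandpass shapes reported in the 1996 paper.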

Building on the 1996 paper, he developed a theory that the Gabor-like filters found in V1 perform sparse coding with an overcomplete basis set, such that the code is well matched to the statistics of images encountered in the natural environment of humans.[6][7]

References

  1. ^ "Bruno's cv". www.rctn.org. Retrieved 2024-11-18.
  2. ^ "Bruno Olshausen | EECS at UC Berkeley". www2.eecs.berkeley.edu. Retrieved 2024-11-18.
  3. ^ Olshausen, Bruno A.; Field, David J. (June 1996). "Emergence of simple-cell receptive field properties by learning a sparse code for natural images". Nature. 381 (6583): 607–609. doi:10.1038/381607a0. ISSN 1476-4687.
  4. ^ a b Olshausen, B.; Field, D. (May 1996). "Natural image statistics and efficient coding". Network: Computation in Neural Systems. 7 (2): 333–339. doi:10.1088/0954-898X/7/2/014. ISSN 0954-898X.
  5. ^ Olshausen, B.; Field, D. (1995). "Sparse coding of natural images produces localized, oriented, bandpass receptive fields". Technical Report CCN-110-95. Department of Psychology, Cornell University, Ithaca, New York.
  6. ^ Olshausen, Bruno A.; Field, David J. (1997-12-01). "Sparse coding with an overcomplete basis set: A strategy employed by V1?". Vision Research. 37 (23): 3311–3325. doi:10.1016/S0042-6989(97)00169-7. ISSN 0042-6989.
  7. ^ Simoncelli, Eero P; Olshausen, Bruno A (March 2001). "Natural Image Statistics and Neural Representation". Annual Review of Neuroscience. 24 (1): 1193–1216. doi:10.1146/annurev.neuro.24.1.1193. ISSN 0147-006X.