How does the brain make sense of the world?
We can recognise tens of thousands of objects, yet despite this vast number, recognition is remarkably quick and accurate, completed within a few hundred milliseconds.
Recognition depends on a multitude of dynamic transformations of information in the brain, from low-level visual attributes through higher-level visual representations to semantic meaning – not simply the name of an object, but access to its relevant properties and how it relates to other objects. Our ability to rapidly recognise objects in our environment is fundamental to acting appropriately in the world. The rapid extraction of semantic meaning from vision provides a platform for complex behaviours such as object identification, object use and navigational planning, and without access to semantics, we would be unable to communicate with others about our environment.
My research asks what neural dynamics and mechanisms allow vision to activate semantics:
- How quickly do we access object semantics from vision?
- How are different kinds of semantic information activated over time?
- How do neural oscillations and connectivity support this?
- How are superordinate (e.g. animal) and basic-level (e.g. tiger) semantic information represented differently?
By combining MEG, EEG, fMRI and neuropsychology, we take a multi-modal approach to studying semantic representations in the brain.