We aim to understand the strategies and algorithms of human decision-making using behavioral experiments, mathematical modeling, and neural networks. We are also interested in the neural basis of decision-making. Some of our work has more to do with encoding and representation than with decision-making.
The lab's current focus areas are described in the sections below.
We have also worked extensively on visual and multisensory perception, working memory, visual attention, theories of neural coding and perceptual computation, experimental tests of these theories, exploration/exploitation, and social networks, but those are currently less active.
Our neural modeling work falls into a tradition that takes behavioral data and the problems faced by the organism as starting points for understanding neural processes; for this, we use the language of probability theory and machine learning. Our modeling usually starts from principles of optimality or rationality, but often ends up with suboptimal twists. This contrasts with another tradition, rooted in physics, in which neural measurements, and in particular temporal dynamics, are central, and theories are formulated in terms of differential equations and dynamical systems.
Planning can be defined as the mental simulation of futures, as in navigation, career planning, programming, or national policy. Cognitive science has mostly studied planning in relatively simple sequential decision tasks, where simplicity can be measured, for example, by the size of the state space. Many real-world decision tasks are more complex. In various projects, we are exploring how people plan (mentally simulate future states) in relatively complex tasks, where exhaustive calculation is intractable for people but we can still maintain experimental control and fit and compare models.
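As a toy illustration (not a model from any of our papers), one standard way to formalize planning when exhaustively simulating all futures is intractable is depth-limited tree search with a heuristic evaluation at the frontier; all details below are illustrative.

```python
# Hypothetical sketch: depth-limited tree search with a heuristic value
# function, one common formalization of planning under limited resources.

def plan_value(state, depth, successors, heuristic):
    """Estimated value of `state` when simulating at most `depth` steps ahead."""
    children = successors(state)
    if depth == 0 or not children:
        return heuristic(state)  # stop simulating; fall back on evaluation
    return max(plan_value(c, depth - 1, successors, heuristic) for c in children)

# Toy state space: from integer s you can move to s+1 or s+2, up to 10;
# the heuristic is simply the state's value.
def successors(s):
    return [s + 1, s + 2] if s < 10 else []

best = plan_value(0, 3, successors, lambda s: s)  # simulate 3 steps ahead
```

Varying the depth parameter is one simple way to capture how far ahead a person mentally simulates.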
While we have extensive experience in the lab comparing Bayesian/optimal models to alternatives (e.g. Shen and Ma 2016), those alternatives are often unsatisfactory because they are specific to the task at hand. We are interested in more general approximate frameworks.
Most of our work on visual and multisensory perception focuses on questions of inference and probabilistic representation. In Bayesian inference, an observer builds beliefs about world states based on observations and assumptions about the statistical structure of the world. If the assumptions are correct, then the Bayesian observer achieves optimal performance. When necessary, Bayesian observers integrate pieces of information in a way that takes into account the uncertainty of these pieces. We call this probabilistic computation. There is evidence from simple perceptual tasks that humans and monkeys perform probabilistic computation and are sometimes close to optimal. Relatively little work has been done on more complex perceptual decisions, such as extracting visual structure or categories from simple features. We are interested both in optimality and in probabilistic computation in such tasks, in particular ones in which the observer needs to integrate information from multiple items into a global, categorical judgment. We are also interested in confidence ratings.
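As a minimal sketch of what we mean by probabilistic computation, consider Bayesian cue combination with Gaussian noise: the optimal estimate weights each cue by its reliability (inverse variance). The parameter values below are arbitrary.

```python
import numpy as np

def combine_cues(x1, sigma1, x2, sigma2):
    """Bayesian combination of two conflicting Gaussian measurements."""
    w1 = sigma2**2 / (sigma1**2 + sigma2**2)      # reliability-based weight on cue 1
    mu = w1 * x1 + (1 - w1) * x2                  # posterior mean
    sigma = np.sqrt(sigma1**2 * sigma2**2 / (sigma1**2 + sigma2**2))  # posterior sd
    return mu, sigma

# The more reliable cue (sd = 1) pulls the estimate toward itself, and the
# combined estimate is more reliable than either cue alone.
mu, sigma = combine_cues(0.0, 1.0, 3.0, 2.0)
```

The same uncertainty-weighted logic generalizes to the global, categorical judgments described above, where evidence from multiple items must be integrated.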
Current projects:
We study a categorization task in which taking sensory uncertainty into account would help categorization performance, and ask whether people indeed do so.
Causal inference in multisensory perception | PLoS ONE 2007 | Frontiers in Psychology 2013 | PLoS Computational Biology 2018
When you hear a sound and see something happening at the same time, these two stimuli might or might not have anything to do with each other. The brain must figure out which stimuli belong together and which don't.
Sameness judgment | PNAS 2012
Judging whether a set of stimuli are the same or different is an important cognitive function and might underlie more abstract notions of similarity and equivalence.
Confidence ratings | Psychological Review 2017 | Neural Computation 2018
We ask whether confidence ratings are based on posterior probabilities, and if so, how criteria are placed.
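The causal inference entry above can be sketched with the standard Gaussian model: the observer computes the posterior probability that the visual and auditory measurements share a single cause versus arising from two independent causes. The noise and prior parameters below are illustrative, not fitted values.

```python
import numpy as np

def p_common(x_v, x_a, sigma_v, sigma_a, sigma_p, prior_common=0.5):
    """Posterior probability that visual and auditory measurements x_v, x_a
    share one cause, under Gaussian noise (sd sigma_v, sigma_a) and a
    zero-mean Gaussian prior over source location (sd sigma_p)."""
    # likelihood of the measurements under one shared source
    var1 = (sigma_v**2 * sigma_a**2 + sigma_v**2 * sigma_p**2
            + sigma_a**2 * sigma_p**2)
    like1 = np.exp(-((x_v - x_a)**2 * sigma_p**2 + x_v**2 * sigma_a**2
                     + x_a**2 * sigma_v**2) / (2 * var1)) / (2 * np.pi * np.sqrt(var1))
    # likelihood under two independent sources
    like2 = (np.exp(-x_v**2 / (2 * (sigma_v**2 + sigma_p**2)))
             / np.sqrt(2 * np.pi * (sigma_v**2 + sigma_p**2))
             * np.exp(-x_a**2 / (2 * (sigma_a**2 + sigma_p**2)))
             / np.sqrt(2 * np.pi * (sigma_a**2 + sigma_p**2)))
    post = prior_common * like1
    return post / (post + (1 - prior_common) * like2)
```

Measurements that are close together favor a common cause; widely discrepant ones favor independent causes, so the stimuli are perceptually "unbound".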
Visual search: see "Visual attention" below. Change detection: see "Working memory" below. Neural implementation: see "Neural coding and computation" below.

Traditional models of working memory assert that items are remembered in an all-or-none fashion. We have shown that quality limitations are more important than quantity limitations: the memory of each item is noisy, and the more items have to be remembered, the higher the noise level. This type of model is also called a "resource model". In another line of work, we take the decision stage of working memory tasks seriously; for example, to understand change detection, one has to understand the process of comparing the remembered stimuli with the current stimuli. Along these lines, we are currently examining whether working memory contains a useful representation of uncertainty. Finally, we recently wrote about what determines whether a neural network maintains a memory using persistent activity versus sequential activity.
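The resource picture can be sketched as follows (all parameter values are illustrative): each item's memory is corrupted by noise whose level grows with set size, and, in the variable-precision variant, precision itself fluctuates from trial to trial.

```python
import numpy as np

rng = np.random.default_rng(0)

def recall_errors(set_size, n_trials=5000, j_total=10.0, power=1.0, tau=1.0):
    """Simulated recall errors for one probed item under a variable-precision
    resource model: mean precision declines with set size, and precision on
    any given trial is gamma-distributed around that mean."""
    j_bar = j_total / set_size**power                        # mean precision per item
    j = rng.gamma(shape=j_bar / tau, scale=tau, size=n_trials)
    return rng.normal(0.0, 1.0 / np.sqrt(j))                 # noisier memory when j is low

# Recall gets noisier as more items must be remembered.
sd_1 = recall_errors(1).std()
sd_8 = recall_errors(8).std()
```

Fitting the rate at which precision falls with set size (the `power` parameter here) is one way such models are compared against all-or-none "slot" accounts.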
Current projects:
These papers develop the theory that the precision of encoding an item in working memory is variable from trial to trial and from item to item.
Resource-rational theory of set size effects | eLife 2018
We propose a conceptually new way of thinking about resource limitations: not as a strict limitation, but as the result of a rational trade-off between performance and neural cost. The law of diminishing returns makes an appearance.
Delayed estimation | Journal of Vision 2004 | PNAS 2012 | Psychological Review 2014
This is a task we introduced in working memory studies. Observers estimate the value of a remembered stimulus feature on a continuum. Models of set size effects are frequently tested using delayed estimation.
Change detection and change localization | Current Biology 2011 | PNAS 2012 | PLOS ONE 2012 | PLOS Computational Biology 2013 | PDF | Journal of Vision 2017
Classic tasks with a twist: we always systematically vary the magnitude of change. Our main questions: what processes underlie the dependence of change detection behavior on set size, and does uncertainty (reliability) get taken into account in change detection decisions? We have asked these questions in both humans and non-human primates. Our main modeling framework combines limited resources, variable precision, and Bayesian inference. See also the data and code available here.
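The trade-off in the resource-rational entry above can be illustrated numerically (the cost functions and weights here are illustrative, not those from the paper): expected error falls off with precision with diminishing returns, while neural cost grows linearly in precision, so the rational observer settles on an intermediate precision level.

```python
import numpy as np

# Illustrative resource-rational trade-off: pick the precision J that
# minimizes total cost = behavioral cost + neural cost.
J = np.linspace(0.1, 20.0, 2000)       # candidate precision levels
behavioral_cost = 1.0 / J              # error shrinks with J: diminishing returns
neural_cost = 0.1 * J                  # spikes are metabolically expensive
J_opt = J[np.argmin(behavioral_cost + neural_cost)]
# Analytically, the minimum of 1/J + 0.1*J lies at J = sqrt(10) ~ 3.16,
# far below the maximum available precision.
```

Because the behavioral cost of remembering more items rises with set size, the same logic predicts set size effects without positing a hard resource limit.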
We have several projects on distributed attention and several on selective attention. Our distributed attention projects use visual search with simple, briefly presented stimuli. First, we examine the effects of the number of items (set size) on precision and performance, similar to working memory. Signal detection theories of visual search often do not contain resource limitations; we have shown that resource limitations are the rule rather than the exception. Second, we ask how close decision-making in visual search is to optimal. Third, we ask how distractor heterogeneity (diversity) affects search behavior. Signal detection theory studies of visual search have usually focused on homogeneous distractors, for computational tractability. However, homogeneous distractors are not very natural, so we study heterogeneous distractors, still within a signal detection theory framework. In the realm of selective attention, we have worked on the role of uncertainty in attention (probabilistic computation), detecting microsaccades, and attentional deficits in ADHD.
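To make the optimal-versus-heuristic contrast concrete, here is a toy target-detection setup (all parameters assumed for illustration): the Bayesian rule marginalizes over which location contains the target by averaging local likelihood ratios, whereas a common "max rule" heuristic simply thresholds the largest measurement.

```python
import numpy as np

rng = np.random.default_rng(1)

def optimal_present(x, sigma=1.0, prior=0.5):
    """Bayes-optimal target-present decision: average the local likelihood
    ratios N(x; 1, sigma) / N(x; 0, sigma) over possible target locations."""
    local_llr = (x - 0.5) / sigma**2          # log likelihood ratio per location
    lr = np.mean(np.exp(local_llr))           # marginalize over target location
    return lr * prior / (1 - prior) > 1.0

def max_rule_present(x, criterion=1.5):
    """Heuristic: report 'present' if any single measurement exceeds a criterion."""
    return x.max() > criterion

def simulate(rule, present, n_trials=2000, n_items=4):
    yes = 0
    for _ in range(n_trials):
        x = rng.normal(0.0, 1.0, size=n_items)   # distractor measurements
        if present:
            x[0] += 1.0                          # target adds signal at one location
        yes += rule(x)
    return yes / n_trials

hit_rate = simulate(optimal_present, present=True)
false_alarm_rate = simulate(optimal_present, present=False)
```

With heterogeneous distractors, the local likelihood ratios simply use a different distractor distribution, which is what makes the signal detection framework extendable beyond the homogeneous case.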
Current projects:
These papers compare optimal against heuristic decision rules in visual search.
Distributed attention: set size effects | Journal of Vision 2012 | Journal of Vision 2013
These papers study the effects of set size on visual search decisions.
Distributed attention: heterogeneous distractors | Nature Neuroscience 2011 | Journal of Vision 2012 | Journal of Vision 2013 | Neural Computation 2015 | PLoS ONE 2016
These papers use heterogeneous distractors in visual search.
Distributed attention: multiple-object tracking | Journal of Vision 2009
Tracking multiple objects at once requires dividing attention, but it also has an inference (decision-making) component.
Selective attention: the role of uncertainty | PNAS 2018
We asked whether the brain takes into account the level of uncertainty in setting decision criteria, when uncertainty is manipulated through attention.
Selective attention: ADHD | Computational Psychiatry 2018
We use a new experimental paradigm and a computational model to separately characterize perceptual and executive deficits.
Selective attention: detecting microsaccades | Journal of Vision 2017
Microsaccades have been used as a marker of selective attention. This paper develops a Bayesian method to detect microsaccades in noisy eye tracker data.
Bayesian inference is a successful mathematical framework for describing how humans and other animals make perceptual decisions under uncertainty. This raises the question of how neural circuits implement, and learn to implement, Bayesian inference. Our lab has developed theories for such implementation; these theories establish experimentally testable correspondences between neural population activity and Bayesian behavior. We have proposed how Bayesian cue combination could be implemented using populations of cortical neurons; we call this form of coding probabilistic population codes (PPCs). Physiologists have since confirmed several predictions arising from this framework. We have generalized the theories to more complex computations, including decision-making, visual search, and categorization, often including detailed human behavioral experiments. We have shown that behaviorally relevant perceptual uncertainty can be decoded from fMRI activity. Most recently, we discovered that generic neural networks can easily learn to approximate Bayesian computation. Ongoing NIH-funded research in collaboration with the laboratory of Andreas Tolias strives to elucidate how neural populations encode uncertainty in primary visual cortex.
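The core PPC idea can be sketched with independent Poisson neurons and Gaussian tuning curves (all parameters below are illustrative): the log likelihood over the stimulus is linear in the spike counts, and higher gain yields a narrower likelihood, so the population's overall activity level carries the encoded uncertainty.

```python
import numpy as np

rng = np.random.default_rng(2)

pref = np.linspace(-10.0, 10.0, 41)            # preferred stimuli, dense and even

def mean_rates(s, gain, sigma_tc=2.0):
    """Gaussian tuning curves with an overall gain factor."""
    return gain * np.exp(-0.5 * (s - pref)**2 / sigma_tc**2)

def log_likelihood(r, s_grid, sigma_tc=2.0):
    """For independent Poisson neurons, log p(r|s) = sum_i r_i log f_i(s) - f_i(s)
    + const; for a dense translation-invariant population, sum_i f_i(s) is
    ~constant in s, so the log likelihood is linear in the spike counts r."""
    kernel = -0.5 * (s_grid[:, None] - pref[None, :])**2 / sigma_tc**2
    ll = kernel @ r
    return ll - ll.max()

def likelihood_sd(ll, s_grid):
    p = np.exp(ll)
    p /= p.sum()
    m = (s_grid * p).sum()
    return np.sqrt(((s_grid - m)**2 * p).sum())

s_grid = np.linspace(-5.0, 5.0, 201)
r_low = rng.poisson(mean_rates(0.0, gain=2.0))     # few spikes: high uncertainty
r_high = rng.poisson(mean_rates(0.0, gain=20.0))   # many spikes: low uncertainty
sd_low = likelihood_sd(log_likelihood(r_low, s_grid), s_grid)
sd_high = likelihood_sd(log_likelihood(r_high, s_grid), s_grid)
```

The linearity of the log likelihood in the spike counts is what makes downstream computations such as cue combination implementable by simple operations on population activity.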
Current projects:
Foundational paper of probabilistic population codes, with an application to cue combination.
Decision-making | Neuron 2008
Tasks in which the observer accumulates evidence over time. We introduce an alternative to the drift-diffusion model.
Experimental tests of the likelihood component of probabilistic population codes | Nature Neuroscience 2015
In these papers, we show that behaviorally relevant likelihood functions (and associated uncertainty) can be decoded on a trial-by-trial basis from either the BOLD response in early visual cortex or multi-unit activity in V1.
Physiological tests of Poisson-like variability | Journal of Neuroscience 2012
In this paper, we examine some of the basic predictions of Poisson-like PPCs in monkey primary visual cortex.
More complex probabilistic inference with PPCs | Nature Neuroscience 2011 (visual search) | PNAS 2013 (categorization) | Multisensory Research 2013 (causal inference)
These papers describe how more complex computations could be implemented using probabilistic population codes.
Neural population coding of multiple stimuli | Journal of Neuroscience 2015
We ask what happens if a single population has to encode multiple stimuli.
Probabilistic inference with generic neural networks | Nature Communications 2017
This paper uses a radically different approach from the papers above: instead of manually constructing networks, we train very simple neural networks, with comparable results. This paper reflects our current thinking on the neural implementation of Bayesian computation.
Neural mechanisms of working memory | Nature Neuroscience 2019
In neural networks, we examine what task properties and what intrinsic network properties determine whether the mechanisms of working memory maintenance are more sequential (across the population) or more persistent (within single neurons).