Model-Based Cognition: Hierarchical Reasoning and Sequential Planning
This Cosyne 2018 workshop was organized by: Kevin Miller, Kim Stachenfeld, Bas van Opheusden, Roozbeh Kiani
The workshop took place on March 5–6, 2018, at Beaver Run Resort in Breckenridge, Colorado.
The slides are available on Google Drive.
Decision making in a complex natural environment requires humans and animals to construct internal models of the world around them. These internal models support a wide variety of flexible behaviors, ranging from relatively simple learning procedures (e.g. outcome revaluation) to decision-making in complex, elaborately structured domains (e.g. games like chess). Although there is a growing consensus that humans and animals rely on models of their environment for goal-oriented behavior, it has proven challenging to draft theories and design experiments to study model-based reasoning and planning in the brain.
Consequently, many questions remain about the mechanisms by which models of the environment are built, revised, and deployed during decision-making; our workshop will seek to address these questions. A guiding principle of the workshop is to consider decision-making in natural environments as a hierarchy of inference processes that generate a sequence of actions or action plans to attain a goal. In this hierarchical framework, a high-level strategy guides lower-level choices, and the outcomes of those choices in turn inform the strategy. Choosing a good strategy requires an internal model of the world that is rarely known explicitly and must therefore be inferred from past experience. A complete understanding of this framework must explain how models of the environment are learned, how suitable decision strategies are selected and executed based on such models, how these strategies guide ongoing choices, and how these processes adapt to improve performance in dynamic environments.
Our workshop builds on this framework and aims to provide new avenues to overcome existing challenges. We will identify points of connection across various perspectives from animal physiology, human neuroscience, and machine learning, and we will provide a forum for discussing recent advances in the field and their theoretical and conceptual implications.
Speakers
Thomas Akam, Postdoctoral Fellow, Champalimaud Institute and Oxford University
Studying model-based cognition in rodents using multi-step decision tasks
Google Scholar
Bruno Averbeck, Senior Investigator, NIMH
Bayesian and reinforcement learning models of reversal learning
Webpage • Google Scholar
David Foster, Associate Professor, University of California Berkeley
Hippocampal sequences and learning
Webpage • Google Scholar
Stephanie Groman, Associate Research Scientist, Yale University
Model-free and model-based influences in addiction-like behaviors in rats
Webpage • Google Scholar
Sam Gershman, Assistant Professor, Harvard University
What is the model in model-based reinforcement learning?
Webpage • Google Scholar
Joshua Gold, Professor, University of Pennsylvania
A bias-variance trade-off in human inference
Webpage • ResearchGate
Jessica Hamrick, Research Scientist, DeepMind
Metareasoning and mental simulation in humans and artificial agents
Webpage • Google Scholar
Ben Hayden, Assistant Professor, University of Minnesota
Transformation of options to choices in economic choice
Webpage • Google Scholar
Roozbeh Kiani, Assistant Professor, NYU
Hierarchical decisions about choice and change of strategy
Webpage • Google Scholar
Rani Moran, Postdoctoral Research Associate, Max Planck UCL Centre for Computational Psychiatry and Ageing Research, UCL
Interaction between model-based and model-free systems in human reinforcement learning
Webpage • Google Scholar
David Reichert, Research Scientist, DeepMind
Deep reinforcement learning with imagination-augmented agents
Google Scholar
Geoffrey Schoenbaum, Senior Investigator, NIMH
Dopamine neurons respond to errors in the prediction of sensory features of expected rewards
Webpage • Google Scholar
Hyojung Seo, Assistant Professor, Yale University
Decision-making and reasoning in the prefrontal cortex
Webpage • Google Scholar
Alireza Soltani, Assistant Professor, Dartmouth College
Model adoption through hierarchical decision making and learning
Webpage
Matthijs van der Meer, Assistant Professor, Dartmouth College
Reward revaluation biases hippocampal sequence content away from the preferred outcome
Webpage • Google Scholar
Bas van Opheusden, Graduate Student, Princeton University and New York University
Expertise in sequential decision-making relies on attention and tree search
Google Scholar
Xiaohong Wan, Professor, Beijing Normal University
Neural systems for decision-making and metacognition
Webpage • Google Scholar
Marco Wittmann, Postdoctoral fellow, University of Oxford
Multiple time-linked reward representations in anterior cingulate cortex
Webpage • Google Scholar
About the Organizers
•Kevin Miller recently completed his PhD at Princeton University with Matthew Botvinick and Carlos Brody. He is interested in the algorithmic and neural mechanisms of human and animal decision-making, broadly construed. His recent work focuses on using the tools of rodent neuroscience to understand the mechanisms of model-based planning. Webpage • Google Scholar
•Kimberly Stachenfeld is a research scientist at DeepMind and is pursuing her PhD at the Princeton Neuroscience Institute with Matthew Botvinick. She is interested in the intersection of machine learning and animal learning, and her recent research centers on theoretical perspectives on learning and planning in the hippocampus. Webpage • Google Scholar
•Bas van Opheusden is a graduate student in neuroscience with Wei Ji Ma and Nathaniel Daw at New York University. He is broadly interested in human decision-making in complex sequential environments like board and video games: how people adopt sophisticated strategies with little-to-no training, and how they adapt their strategies to specific opponents. Google Scholar
•Roozbeh Kiani is an assistant professor in the Center for Neural Science at NYU. His lab focuses on understanding the neural mechanisms by which sensory and mnemonic information is used to guide behavior in complex environments. Webpage • Google Scholar