Bayesian Adaptive Direct Search (BADS)
BADS is a fast hybrid Bayesian optimization algorithm designed to solve difficult optimization problems, in particular those that arise when fitting computational models (e.g., via maximum likelihood estimation).
BADS has been extensively tested on fitting a variety of computational models and is currently used in many computational labs around the world (see Google Scholar for example applications).
In our benchmark on real model-fitting problems, BADS performed on par with or better than many other common and state-of-the-art optimizers, as shown in the original BADS paper (Acerbi and Ma, 2017).
BADS requires no specific tuning and runs off-the-shelf, similarly to other Python optimizers such as scipy.optimize.minimize.
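The calling pattern is the familiar one of passing an objective function and a starting point to an optimizer. Below is a minimal sketch of such a maximum-likelihood fit, written here with scipy.optimize.minimize and a toy Gaussian negative log-likelihood (the data and parameterization are made up for illustration); the BADS interface follows the same function-plus-starting-point pattern, with parameter bounds.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Toy data: the "model" is simply a Gaussian with unknown mean and SD.
rng = np.random.default_rng(0)
data = rng.normal(loc=1.5, scale=0.8, size=200)

def neg_log_likelihood(theta):
    """Negative log-likelihood of the data under the Gaussian model."""
    mu, log_sigma = theta              # optimize log(sigma) to keep sigma > 0
    sigma = np.exp(log_sigma)
    return -np.sum(norm.logpdf(data, loc=mu, scale=sigma))

x0 = np.array([0.0, 0.0])              # starting point (mu, log sigma)
result = minimize(neg_log_likelihood, x0, method="Nelder-Mead")
print(result.x, result.fun)            # maximum-likelihood estimate and its NLL
```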
Variational Bayesian Monte Carlo (VBMC)
VBMC is an approximate Bayesian inference method designed to fit computational models with a limited budget of potentially noisy likelihood evaluations. It is useful for computationally expensive models or for quick inference and model evaluation (Acerbi, 2018, 2020).
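The input that such a method works with is a log joint (log likelihood plus log prior), which may only be available as a noisy estimate, for example when the likelihood itself is approximated by simulation. The sketch below shows what such a target function might look like; the toy model, priors, and noise level are illustrative assumptions, not part of the VBMC interface.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
data = rng.normal(loc=0.5, scale=1.0, size=100)

def log_prior(theta):
    """Independent Gaussian priors on the model parameters (illustrative)."""
    mu, log_sigma = theta
    return norm.logpdf(mu, 0, 5) + norm.logpdf(log_sigma, 0, 2)

def noisy_log_likelihood(theta, noise_sd=0.5):
    """Log-likelihood plus simulated estimation noise, mimicking a likelihood
    that can only be evaluated approximately (e.g., by simulation)."""
    mu, log_sigma = theta
    ll = np.sum(norm.logpdf(data, loc=mu, scale=np.exp(log_sigma)))
    return ll + rng.normal(0, noise_sd)

def log_joint(theta):
    """Target that a sample-efficient inference method would evaluate sparingly."""
    return log_prior(theta) + noisy_log_likelihood(theta)

print(log_joint(np.array([0.0, 0.0])))
```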
Tutorial in Bayesian statistics
Bayesian Statistics Part 1: Basics and Bayes factors
Presenter: Ronald van den Berg
Video |
Slides
Part 1 reviewed frequentist hypothesis testing (based on p values) and contrasted this approach with Bayesian hypothesis testing (using Bayes factors). Basic concepts of Bayesian statistics were reviewed (posteriors, priors, etc.), and several standard hypothesis tests were discussed from both the frequentist and the Bayesian perspective, including correlation tests, t-tests, and ANOVA.
Prerequisites: basic probability theory, basic frequentist statistics
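To make the Bayes factor idea concrete, here is a small sketch (not part of the tutorial materials) that computes the Bayes factor for a fair-coin null hypothesis (theta = 0.5) against a uniform-prior alternative in a binomial experiment; the counts are made up.

```python
import numpy as np
from scipy.stats import binom
from scipy.special import betaln, gammaln

# Made-up data: k "successes" in n trials.
n, k = 100, 62

# Marginal likelihood under H0: theta fixed at 0.5.
log_m0 = binom.logpmf(k, n, 0.5)

# Marginal likelihood under H1: theta ~ Beta(1, 1), i.e. a uniform prior.
# Integrating the binomial likelihood over the prior yields a beta function.
log_choose = gammaln(n + 1) - gammaln(k + 1) - gammaln(n - k + 1)
log_m1 = log_choose + betaln(k + 1, n - k + 1)

bf10 = np.exp(log_m1 - log_m0)   # evidence for H1 relative to H0
print(f"BF10 = {bf10:.2f}")
```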
-------------------------------------------------------------------------------------------
Part 2 covered Bayesian parameter estimation with a practical emphasis. The first section briefly introduced parameter estimation as a statistical paradigm for scientific inference and surveyed software options for doing it in Python. The second section was a hands-on tutorial using PyMC3 to work through analyses in worksheets.
Prerequisites: basic probability theory, basic frequentist statistics
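For a flavor of the hands-on part, the sketch below shows a minimal PyMC3 analysis that estimates the mean and standard deviation of a Gaussian from simulated data; the data and priors are placeholders and are not taken from the worksheets.

```python
import numpy as np
import pymc3 as pm

# Made-up data set; in the tutorial, the data come from the worksheets.
rng = np.random.default_rng(2)
data = rng.normal(loc=2.0, scale=1.5, size=80)

with pm.Model() as model:
    mu = pm.Normal("mu", mu=0.0, sigma=10.0)        # prior on the mean
    sigma = pm.HalfNormal("sigma", sigma=5.0)       # prior on the SD
    obs = pm.Normal("obs", mu=mu, sigma=sigma, observed=data)
    trace = pm.sample(1000, tune=1000)              # draw posterior samples

print(pm.summary(trace))                            # posterior means and intervals
```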
Four-in-a-row
In the paper A computational model of decision tree search (2017), we conducted a set of experiments on the game four-in-a-row, played on a 4-by-9 board. We also introduced a computational model of human moves. Here, you can try out the experiments and explore the data.
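For a concrete sense of the game, here is a small sketch that checks whether a player has four pieces in a row (horizontally, vertically, or diagonally) on the 4-by-9 board used in the experiments; it only illustrates the win condition, not the decision tree model of human moves.

```python
import numpy as np

ROWS, COLS = 4, 9   # board size used in the four-in-a-row experiments

def has_four_in_a_row(board, player):
    """Return True if `player` (1 or 2) has four aligned pieces on `board`,
    a ROWS x COLS integer array with 0 marking empty squares."""
    mask = (board == player)
    directions = [(0, 1), (1, 0), (1, 1), (1, -1)]   # right, down, two diagonals
    for r in range(ROWS):
        for c in range(COLS):
            for dr, dc in directions:
                cells = [(r + i * dr, c + i * dc) for i in range(4)]
                if all(0 <= rr < ROWS and 0 <= cc < COLS and mask[rr, cc]
                       for rr, cc in cells):
                    return True
    return False

board = np.zeros((ROWS, COLS), dtype=int)
board[1, 2:6] = 1                     # player 1 occupies four adjacent columns
print(has_four_in_a_row(board, 1))    # True
```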
Variable-precision (VP) models
The variable-precision model is currently (2016) the best available model of set size effects in visual working memory. In this model, the observer has a noisy representation of all items in a memory array. The precision of this representation is itself modeled as a random variable, possibly reflecting fluctuations in attention, and mean precision decreases monotonically with set size. The VP model consistently outperforms the fixed-capacity, item-limit model of Pashler (1988) and Cowan (2001), as well as more recent variants. Here, we provide simple, stand-alone Matlab scripts to analyze data from two common paradigms: delayed estimation and change detection. In its basic form, the model has three parameters for change detection and four for delayed estimation. Note that the VP model here (with a gamma distribution on precision) differs slightly from the one implemented in MemToolbox. Email us if you have any questions.
Variable-precision model on delayed-estimation data
Most basic variable-precision model (use this if you are just starting)
Complete package (many variants)
VP on change detection data with controlled magnitude change
VP on change detection data with uncontrolled magnitude change
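As a rough illustration of the model described above (and not a substitute for the Matlab scripts), the sketch below simulates delayed-estimation errors with precision drawn from a gamma distribution whose mean follows a power law of set size. For simplicity, the von Mises concentration is set equal to the precision, which glosses over the exact precision-to-concentration mapping; all parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_vp_errors(set_size, n_trials, j1=30.0, alpha=1.2, tau=10.0):
    """Simulate delayed-estimation errors (radians) under a variable-precision model.
    Mean precision follows a power law of set size: J_bar = j1 * set_size**(-alpha).
    Trial-to-trial precision is gamma-distributed with mean J_bar and scale tau."""
    j_bar = j1 * set_size ** (-alpha)
    precision = rng.gamma(shape=j_bar / tau, scale=tau, size=n_trials)
    # Simplification: treat precision directly as the von Mises concentration.
    kappa = precision
    errors = rng.vonmises(mu=0.0, kappa=kappa, size=n_trials)
    return errors

for n in (1, 2, 4, 8):
    err = simulate_vp_errors(n, n_trials=5000)
    print(f"set size {n}: SD of estimation error = {err.std():.2f} rad")
```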
Delayed-estimation benchmark data and factorial model comparison
Delayed estimation is a psychophysical paradigm developed in 2004 by Patrick Wilken and Wei Ji Ma that is used to probe the contents of working memory. Observers remember one or multiple items and, after a delay, report on a continuous scale the feature value of the stimulus at one probed location. This benchmark data set contains data from 10 experiments conducted in 6 laboratories. Additional data sets are welcome; email us if you have any to add. Below, we also provide complete code to analyze the data.
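As a minimal example of working with data of this kind, the sketch below computes the estimation error on each trial and summarizes its circular spread per set size. The column names (set_size, target, response, in radians) are hypothetical; the benchmark files have their own formats.

```python
import numpy as np
import pandas as pd

def circular_sd(angles):
    """Circular standard deviation (radians) of a set of angles."""
    r = np.abs(np.mean(np.exp(1j * angles)))      # mean resultant length
    return np.sqrt(-2.0 * np.log(r))

# Hypothetical trial table; real benchmark files have their own formats.
df = pd.DataFrame({
    "set_size": [1, 1, 2, 2, 4, 4],
    "target":   [0.3, -1.2, 2.0, 0.1, -2.5, 1.4],    # remembered feature (rad)
    "response": [0.4, -1.0, 1.5, 0.4, -1.8, 0.2],    # reported feature (rad)
})

# Wrap the response error to (-pi, pi] before summarizing.
df["error"] = np.angle(np.exp(1j * (df["response"] - df["target"])))
summary = df.groupby("set_size")["error"].apply(circular_sd)
print(summary)
```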
Change detection
Change detection is a classic paradigm developed by W.A. Phillips (1974) and Harold Pashler (1988) to assess the limitations of visual short-term memory. Our lab has made two improvements to this paradigm: first, we vary the magnitude of change on a continuum, so that we can plot entire psychometric curves and thus have more power to compare models. Second, we test new models, especially noise-based (continuous-resource) models, and find that they outperform item-limit (slot) models.
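To illustrate the first point, the sketch below fits a simple psychometric curve, the proportion of "change" responses as a function of change magnitude, using a scaled cumulative Gaussian with guess and lapse rates. This is a generic curve fit with made-up data for illustration, not one of the noise-based models compared in our papers.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(delta, mu, sigma, guess, lapse):
    """P('change') as a function of change magnitude `delta` (degrees)."""
    return guess + (1.0 - guess - lapse) * norm.cdf(delta, loc=mu, scale=sigma)

# Made-up data: change magnitudes and the proportion of 'change' responses.
magnitudes = np.array([0., 10., 20., 30., 45., 60., 90.])
p_change   = np.array([0.12, 0.20, 0.41, 0.60, 0.78, 0.88, 0.95])

popt, _ = curve_fit(psychometric, magnitudes, p_change,
                    p0=[30.0, 20.0, 0.1, 0.05],
                    bounds=([0., 1., 0., 0.], [90., 90., 0.5, 0.5]))
print(dict(zip(["mu", "sigma", "guess", "lapse"], popt)))
```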
References on the concepts:
To download
Full (664 MB)
Light (9.2 MB) - excludes the large .mat files containing analysis output; these can be regenerated using the provided code
Bayesian microsaccade detection