Bayesian Adaptive Direct Search (BADS)

BADS is a fast hybrid Bayesian optimization algorithm designed to solve difficult optimization problems, in particular those that arise when fitting computational models (e.g., via maximum likelihood estimation). BADS has been tested extensively on a variety of computational models and is currently used in many computational labs around the world (see Google Scholar for example applications). In our benchmark with real model-fitting problems, BADS performed on par with or better than many other common and state-of-the-art optimizers, as shown in the original BADS paper (Acerbi and Ma, 2017). BADS requires no specific tuning and runs off-the-shelf much like other Python optimizers, such as those in scipy.optimize.
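
As a minimal sketch, a BADS run through the PyBADS package looks much like a SciPy optimization call. The objective, bounds, and starting point below are purely illustrative, and the exact constructor arguments may differ slightly between PyBADS versions, so check the documentation linked below.

```python
# Illustrative PyBADS call (toy objective; see the PyBADS docs for details).
import numpy as np
from pybads import BADS

def target(x):
    # Toy objective standing in for, e.g., a negative log-likelihood.
    return np.sum((x - 1.0) ** 2)

x0 = np.array([2.0, 2.0])                      # starting point
lb, ub = np.full(2, -5.0), np.full(2, 5.0)     # hard bounds
plb, pub = np.full(2, -2.0), np.full(2, 2.0)   # plausible bounds

bads = BADS(target, x0, lb, ub, plb, pub)
result = bads.optimize()
print(result["x"], result["fval"])             # best point and value found
```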

Code (Python) | Code (MATLAB) | Documentation
Source paper: Acerbi, L., & Ma, W. J. (2017). Practical Bayesian optimization for model fitting with Bayesian Adaptive Direct Search. Advances in neural information processing systems, 30.
Contact

Inverse Binomial Sampling (IBS)

The fate of scientific hypotheses often relies on the ability of a computational model to explain the data, quantified in modern statistical approaches by the likelihood function. The log-likelihood is the key element for parameter estimation and model evaluation. However, the log-likelihood of complex models in fields such as computational biology and neuroscience is often intractable to compute analytically or numerically. In those cases, researchers can often only estimate the log-likelihood by comparing observed data with synthetic observations generated by model simulations. Standard techniques to approximate the likelihood via simulation either use summary statistics of the data or are at risk of producing substantial biases in the estimate. Here, we explore another method, inverse binomial sampling (IBS), which can estimate the log-likelihood of an entire data set efficiently and without bias. For each observation, IBS draws samples from the simulator model until one matches the observation; the log-likelihood estimate is then a function of the number of samples drawn. The variance of this estimator is uniformly bounded, achieves the minimum possible variance for an unbiased estimator, and can itself be estimated in a calibrated way. We provide theoretical arguments in favor of IBS and an empirical assessment of the method for maximum-likelihood estimation with simulation-based models. As case studies, we take three model-fitting problems of increasing complexity from computational and cognitive neuroscience. In all problems, IBS generally produces lower error in the estimated parameters and maximum log-likelihood values than alternative sampling methods with the same average number of samples. Our results demonstrate the potential of IBS as a practical, robust, and easy-to-implement method for log-likelihood evaluation when exact techniques are not available.
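
To illustrate the core idea (this is not the full toolbox), the sketch below estimates the log-likelihood of a data set by simulating until each observed response is matched; simulate_model is a hypothetical stand-in for the user's simulator, which takes parameters, a stimulus, and a random generator and returns one simulated response.

```python
# Minimal IBS sketch: for each trial, draw simulated responses until one matches
# the observed response; with K draws needed, the per-trial log-likelihood
# estimate is -(1 + 1/2 + ... + 1/(K-1)), which is unbiased for log p.
import numpy as np

def ibs_loglik(simulate_model, theta, stimuli, responses, seed=None):
    rng = np.random.default_rng(seed)
    total = 0.0
    for s, r in zip(stimuli, responses):
        k = 1
        while simulate_model(theta, s, rng) != r:   # keep sampling until a match
            k += 1
        total -= np.sum(1.0 / np.arange(1, k))      # harmonic-sum estimate of log p(r | s, theta)
    return total
```

Averaging several independent repeats of this estimate further reduces its variance, which is what the full implementation does.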

Code | Documentation
Source paper: van Opheusden, B., Acerbi, L., & Ma, W. J. (2020). Unbiased and efficient log-likelihood estimation with inverse binomial sampling. PLoS computational biology, 16(12), e1008483.

Tutorial in Bayesian statistics

Bayesian Statistics Part 1: Basics and Bayes factors
Presenter: Ronald van den Berg
Video | Slides
Part 1 reviewed frequentist hypothesis testing (based on p values) and contrasted this approach with Bayesian hypothesis testing (using Bayes factors). Basic concepts of Bayesian statistics were reviewed (posteriors, priors, etc.), and several standard hypothesis tests were discussed from both the frequentist and the Bayesian perspective, including correlation, the t-test, and ANOVA.
Prerequisites: basic probability theory, basic frequentist statistics
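
As a self-contained numerical illustration of a Bayes factor (not taken from the slides), the sketch below compares H0: p = 0.5 against H1: p ~ Beta(a, b) for binomial data, using the exact marginal likelihoods.

```python
# Bayes factor for a binomial rate: H0 fixes p = 0.5, H1 places a Beta(a, b)
# prior on p. BF10 > 1 favors H1 (a biased coin).
import numpy as np
from scipy.special import betaln, gammaln

def bayes_factor_binomial(k, n, a=1.0, b=1.0):
    log_choose = gammaln(n + 1) - gammaln(k + 1) - gammaln(n - k + 1)
    log_m0 = log_choose + n * np.log(0.5)                          # marginal likelihood under H0
    log_m1 = log_choose + betaln(k + a, n - k + b) - betaln(a, b)  # marginal likelihood under H1
    return np.exp(log_m1 - log_m0)

print(bayes_factor_binomial(65, 100))   # roughly 11.5: evidence for a biased coin
```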

Bayesian Statistics Part 2: Parameter estimation and practice
Presenter: Gianni Galbiati
Video | Slides | Slides (with notes)
Code example (presidential heights) | Code example (Aspen’s change detection task)
Part 2 covered Bayesian parameter estimation with a practical emphasis. The first section briefly covered parameter estimation as a statistical paradigm for scientific inference, along with software options for doing it in Python. The second section was a hands-on tutorial using PyMC3 to complete the analyses in the worksheets.
Prerequisites: basic probability theory, basic frequentist statistics
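
For readers who want a feel for the workflow without the worksheets, here is a minimal PyMC3 sketch of Bayesian parameter estimation; the data and priors are made up and are not the ones used in the tutorial code.

```python
# Minimal PyMC3 example: posterior over the mean and spread of simulated heights.
import numpy as np
import pymc3 as pm

heights = np.random.normal(178, 7, size=50)        # fake data (cm)

with pm.Model():
    mu = pm.Normal("mu", mu=170, sigma=20)         # prior on the mean
    sigma = pm.HalfNormal("sigma", sigma=10)       # prior on the standard deviation
    pm.Normal("obs", mu=mu, sigma=sigma, observed=heights)
    trace = pm.sample(1000, tune=1000, return_inferencedata=True)

print(pm.summary(trace))                           # posterior means and credible intervals
```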



Four-in-a-row

In the paper "Revealing the impact of expertise on human planning with a two-player board game" (2021), we conducted a set of experiments on the game four-in-a-row, played on a 4-by-9 board. We also introduced a computational model of human moves. Here, you can try out the experiments and explore the data.

Code | Try experiment | Explore data
Source paper: van Opheusden, B., Galbiati, G., Kuperwajs, I., Bnaya, Z., & Ma, W. J. (2021). Revealing the impact of expertise on human planning with a two-player board game.

Variable-precision (VP) models

The variable-precision (VP) model is currently (2016) the best available model of set size effects in visual working memory. In this model, the observer has a noisy representation of all items in a memory array. The precision of this representation is itself modeled as a random variable, possibly reflecting fluctuations in attention. Mean precision decreases monotonically with set size. The VP model consistently outperforms the fixed-capacity, item-limit models of Pashler (1988) and Cowan (2001), as well as more recent variants. Here, we provide simple, stand-alone MATLAB scripts to analyze data from two common paradigms: delayed estimation and change detection. In its basic form, the model has three parameters (for change detection) or four (for delayed estimation). Note that the VP model here (with a gamma distribution on precision) is slightly different from the one implemented in MemToolbox. Email us if you have any questions.
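
To make the model concrete, here is a rough simulation of the delayed-estimation variant; the parameter names and values are illustrative and do not match the provided scripts exactly.

```python
# Rough VP-model simulation for delayed estimation: precision is gamma-distributed
# across trials with a mean that decays with set size, and the report error is
# von Mises with concentration kappa matched to the sampled precision.
import numpy as np
from scipy.special import i0, i1

def kappa_from_J(J):
    # Numerically invert J = kappa * I1(kappa) / I0(kappa) (von Mises Fisher information).
    grid = np.linspace(1e-6, 500, 20000)
    return np.interp(J, grid * i1(grid) / i0(grid), grid)

def simulate_vp_errors(n_trials, set_size, j1bar=60.0, alpha=1.0, tau=30.0, seed=None):
    rng = np.random.default_rng(seed)
    mean_J = j1bar * set_size ** (-alpha)                        # mean precision falls with set size
    J = rng.gamma(shape=mean_J / tau, scale=tau, size=n_trials)  # trial-to-trial precision
    kappa = kappa_from_J(J)
    return rng.vonmises(0.0, kappa, size=n_trials)               # estimation errors (radians)

errors = simulate_vp_errors(1000, set_size=4)
```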

Source papers:
1. Van den Berg, R., Shin, H., Chou, W. C., George, R., & Ma, W. J. (2012). Variability in encoding precision accounts for visual short-term memory limitations. Proceedings of the National Academy of Sciences, 109(22), 8780-8785.
2. Keshvari, S., Van den Berg, R., & Ma, W. J. (2012). Probabilistic computation in human perception under variability in encoding precision. PLoS One, 7(6), e40216.
3. Keshvari, S., Van den Berg, R., & Ma, W. J. (2013). No evidence for an item limit in change detection. PLoS computational biology, 9(2), e1002927.
VP on delayed-estimation data
Most basic variable-precision model (use this if you are just starting)
Complete package (many variants)
Code authors

VP on change detection data with controlled magnitude change
GitHub
To download
All data and code (664 MB)
Light (9.2 MB): excludes the large .mat files containing analysis output; these can be regenerated using the provided code
Code authors

VP on change detection data with uncontrolled magnitude change
GitHub
To download
Code authors

Delayed-estimation benchmark data and factorial model comparison

Delayed estimation is a psychophysical paradigm developed in 2004 by Patrick Wilken and Wei Ji Ma to probe the contents of working memory. Observers remember one or more items and, after a delay, report on a continuous scale the feature value of the stimulus at a probed location. This benchmark data set contains data from 10 experiments and 6 laboratories. Additional data sets are welcome; email us if you have any to add. Below, we also provide complete code to analyze the data.

Source papers:
1. Wilken, P., & Ma, W. J. (2004). A detection theory account of change detection. Journal of vision, 4(12), 11-11.
2. Zhang, W., & Luck, S. J. (2008). Discrete fixed-resolution representations in visual working memory. Nature, 453(7192), 233-235. | Zhang lab | Luck lab
3. Bays, P. M., Catalao, R. F., & Husain, M. (2009). The precision of visual working memory is set by allocation of a shared resource. Journal of vision, 9(10), 7-7. | Bays lab | Husain lab
4. Rademaker, R. L., Tredway, C. H., & Tong, F. (2012). Introspective judgments predict the precision and likelihood of successful maintenance of visual working memory. Journal of vision, 12(13), 21-21. | Tong lab
5. Van den Berg, R., Shin, H., Chou, W. C., George, R., & Ma, W. J. (2012). Variability in encoding precision accounts for visual short-term memory limitations. Proceedings of the National Academy of Sciences, 109(22), 8780-8785.
6. Van den Berg, R., Awh, E., & Ma, W. J. (2014). Factorial comparison of working memory models. Psychological review, 121(1), 124.

Change detection

Change detection is a classic paradigm developed by W.A. Phillips (1974) and Harold Pashler (1988) to assess the limitations of visual short-term memory. Our lab has made two improvements to this paradigm. First, we vary the magnitude of change on a continuum, so that we can plot entire psychometric curves and thus have more power to compare models. Second, we tested new models, especially noise-based (continuous-resource) models, and found that they outperform item-limit (slot) models.

References on the concepts:
GitHub
To download
Full (664 MB)
Light (9.2 MB): excludes the large .mat files containing analysis output; these can be regenerated using the provided code
Code authors

Bayesian microsaccade detection