Lyman Command-Line Interface¶
The Nipype workflows for processing MRI data and estimating univariate models
are controlled through a set of command-line scripts, in concert with the
experiment and design information. To learn what each script does and how to
use it, run <script_name> -h. The same help information is reproduced on this
page.
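For example:

run_fmri.py -h
    Print the usage message and option descriptions for the subject-level
    processing script.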
run_fmri.py¶
usage: run_fmri.py [-h] [-subjects [SUBJECTS [SUBJECTS ...]]]
[-plugin {linear,multiproc,ipython,torque,sge,slurm}]
[-nprocs NPROCS] [-queue QUEUE] [-dontrun]
[-experiment EXPERIMENT] [-altmodel ALTMODEL]
[-workflows [{preproc,model,reg,ffx} [{preproc,model,reg,ffx} ...]]]
[-regspace {epi,mni}] [-regexp REGEXP] [-timeseries]
[-residual] [-unsmoothed]
Process subject-level data in lyman.
This script controls the workflows that process data from raw Nifti files
through a subject-level fixed effects model. It is based on the FSL FEAT
processing stream and is enhanced with Freesurfer tools for coregistration.
The other main difference is that the design generation is performed with
custom code from the `moss.glm` package, although the design matrix
creation uses the same rules as in FEAT and is expected to give highly
similar results.
By using Nipype's parallel machinery, the execution of this script can be
distributed across a local or managed cluster. The script can thus be run
for several subjects at once, and (with a large enough cluster) all of the
subjects can be processed in the time it takes to process a single run of
data linearly.
At each stage of the pipeline, a number of static image files are created
to summarize the results of the processing and facilitate quality
assurance. These files are stored in the output directories alongside the
data they correspond with and can be easily browsed using the ziegler
web-app.
The processing is organized into four large workflows that save their
outputs in the analysis_dir hierarchy and can be executed independently.
The structure of these workflows is represented in detail with the graphs
that are on the website and in the source distribution. Briefly:
preproc:
Preprocess the raw timeseries files by realigning, skull-stripping,
and filtering. Additionally, artifact detection is performed and
coregistration to the anatomy is estimated, although the results of
these stages are not applied to the data until later in the
pipeline. A smoothed and an unsmoothed version of the final
timeseries are always written to the analysis_dir.
model:
Estimate the timeseries model and generate inferential maps for the
contrasts of interest. This model is estimated in the native run
space, and separate models can be estimated for the smoothed and
unsmoothed versions of the data.
reg:
Align the data from each run into a common space. There are two
options for the target space: `mni`, which uses nonlinear
normalization to the MNI template (this requires that the
run_warp.py script has been executed), and `epi`, which transforms
runs 2-n into the space of the first run. By default this registers
the summary statistic images from the model, but it is also
possible to transform the preprocessed timeseries without having
run the model workflow. Additionally, there is an option to
transform the unsmoothed version of these data. (The results are
saved separately for each of these choices, so it is possible to
use several or all of them.) The ROI mask generation script
(make_masks.py) produces masks in the epi space, so this workflow
must be run before doing ROI/decoding analyses.
ffx:
Estimate the across-run fixed effects model. This model combines
the summary statistics from each of the runs and produces a single
set of model results, organized by contrast, for each subject. It
is possible to fit the ffx model in either the mni or epi space and
on either smoothed or unsmoothed data. Fixed effects results in the
mni space can be used for volume-based group analysis, and the
results in the epi space can be used with the surface-based group
pipeline.
Many details of these workflows can be configured by setting values in the
experiment file. Additionally, it is possible to preprocess the data for an
experiment once and then estimate several different models using altmodel
files.
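As a rough illustration, an experiment file is a Python module that defines
these options as module-level variables. The names below are only meant to
suggest the kinds of options lyman reads (acquisition parameters,
preprocessing settings, design and contrast definitions); check the
experiment file documentation for the exact names your version expects.

    # $LYMAN_DIR/nback.py -- illustrative sketch, not a complete template
    TR = 2.0                           # repetition time in seconds
    smooth_fwhm = 6                    # kernel for the "smoothed" stream
    hpf_cutoff = 128                   # high-pass filter cutoff in seconds
    condition_names = ["easy", "hard"]
    contrasts = [("hard-easy", ["hard", "easy"], [1, -1])]

An altmodel file (e.g. $LYMAN_DIR/nback-parametric.py) follows the same
conventions but typically redefines only the design and contrast values that
differ from the base experiment.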
If you do not delete your cache directory after running (this behavior is
configured in the project file), repeated use of this script will only rerun
the nodes whose inputs have changed. Otherwise, you will have to rerun at the
level of whole workflows.
Examples
--------
Note that the parameter switches match any unique short version
of the full parameter name.
run_fmri.py -w preproc model reg ffx
Run every stage of processing for the default experiment for each
subject defined in $LYMAN_DIR/subjects.txt. Coregistration will be
performed for smoothed model outputs in the mni space. The processing
will be distributed locally with the MultiProc plugin using 4
processes.
run_fmri.py -s subj1 subj2 subj3 -w preproc -p sge -q batch.q
Run preprocessing of the default experiment for subjects `subj1`,
`subj2`, and `subj3` with distributed execution in the `batch.q` queue
of the Sun Grid Engine.
run_fmri.py -s pilot_subjects -w preproc -e nback -n 8
Preprocess the subjects enumerated in $LYMAN_DIR/pilot_subjects.txt
with the experiment details in $LYMAN_DIR/nback.py. Distribute the
execution locally with 8 parallel processes.
run_fmri.py -s subj1 -w model reg ffx -e nback -a parametric
Fit the model, register, and combine across runs for subject `subj1`
with the experiment details defined in $LYMAN_DIR/nback-parametric.py.
This assumes preprocessing has been performed for the nback experiment.
run_fmri.py -w preproc reg -t -u -reg epi
Preprocess the default experiment for all subjects, and then align
the unsmoothed timeseries into the epi space. This is the standard set
of processing that must be performed before multivariate analyses.
run_fmri.py -w reg ffx -reg epi
Align the summary statistics for all subjects into the epi space and
then combine across runs. This is the standard processing that must
be added to use surface-based group analyses.
run_fmri.py -w preproc model reg ffx -dontrun
Set up all of the workflows for the default experiment, but do not
actually submit them for execution. This can be useful for testing
before starting a large job.
Usage Details
-------------
optional arguments:
-h, --help show this help message and exit
-subjects [SUBJECTS [SUBJECTS ...]]
list of subject ids, name of file in lyman directory,
or full path to text file with subject ids
-plugin {linear,multiproc,ipython,torque,sge,slurm}
workflow execution plugin
-nprocs NPROCS number of MultiProc processes to use
-queue QUEUE which queue for PBS/SGE execution
-dontrun don't actually execute the workflows
-experiment EXPERIMENT
experimental paradigm
-altmodel ALTMODEL alternate model to fit
-workflows [{preproc,model,reg,ffx} [{preproc,model,reg,ffx} ...]]
which workflows to run
-regspace {epi,mni} common space for registration and fixed effects
-regexp REGEXP perform cross-experiment epi registration
-timeseries perform registration on preprocessed timeseries
-residual perform registration on residual timeseries
-unsmoothed use unsmoothed data for model, reg, and ffx
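Note that the -subjects argument can list ids directly, name a text file in
the lyman directory (e.g. -s pilot_subjects for $LYMAN_DIR/pilot_subjects.txt),
or give a full path to such a file. The file itself is plain text with one
subject id per line:

    subj1
    subj2
    subj3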
run_group.py¶
usage: run_group.py [-h] [-subjects [SUBJECTS [SUBJECTS ...]]]
[-plugin {linear,multiproc,ipython,torque,sge,slurm}]
[-nprocs NPROCS] [-queue QUEUE] [-dontrun]
[-experiment EXPERIMENT] [-altmodel ALTMODEL]
[-regspace {mni,fsaverage}] [-unsmoothed] [-output OUTPUT]
Perform a basic group analysis in lyman.
This script currently only handles one-sample group mean tests on each of
the fixed-effects contrasts. It is possible to run the group model in the
volume or on the surface, although the actual model changes depending on
this choice.
The volume model runs FSL's FLAME mixed effects for hierarchical inference,
which uses the lower-level variance estimates, and it applies standard
GRF-based correction for multiple comparisons. The details of the
model-fitting procedure are set in the experiment file, along with the
thresholds used for correction.
The surface model uses a standard ordinary least squares fit and does
correction with an approach based on a Monte Carlo simulation of the null
distribution of cluster sizes for smoothed Gaussian data. Fortunately, the
simulations are cached so this runs very quickly. Unfortunately, the cached
simulations used a whole-brain search space, so this will be overly
conservative for partial-brain acquisitions.
Because of how GRF-based correction works, the thresholded volume images
only have positive voxels. It is up to you to define "negative" versions of
any contrasts where you are interested in relative deactivation. The
surface correction does not have this constraint, and the test sign is
configurable in the experiment file (and will thus apply to all contrasts).
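For example, using the (name, conditions, weights) contrast convention
sketched above for the experiment file, relative deactivation can be tested
by adding a sign-flipped copy of a contrast:

    contrasts = [
        ("hard-easy", ["hard", "easy"], [1, -1]),
        ("easy-hard", ["hard", "easy"], [-1, 1]),  # "negative" version
    ]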
By default the results are written under `group` next to the subject level
data in the lyman analysis directory, although the output directory name
can be changed.
Examples
--------
Note that the parameter switches match any unique short version
of the full parameter name.
run_group.py
With no arguments, this will process the default experiment with the
subjects defined in $LYMAN_DIR/subjects.txt in the MNI space using the
MultiProc plugin with 4 processes.
run_group.py -s pilot_subjects -r fsaverage -o pilot -unsmoothed
This will process the subjects defined in a file at
$LYMAN_DIR/pilot_subjects.txt as above but with the surface workflow.
Unsmoothed fixed effects parameter estimates will be sampled to the
surface and smoothed there. The resulting files will be stored under
<analysis_dir>/<experiment>/pilot/fsaverage/<contrast>/<hemi>
run_group.py -e nback -a parametric -p sge -q batch.q
This will process an alternate model for the `nback` experiment using
the SGE plugin by submitting jobs to the batch.q queue.
Usage Details
-------------
optional arguments:
-h, --help show this help message and exit
-subjects [SUBJECTS [SUBJECTS ...]]
list of subject ids, name of file in lyman directory,
or full path to text file with subject ids
-plugin {linear,multiproc,ipython,torque,sge,slurm}
workflow execution plugin
-nprocs NPROCS number of MultiProc processes to use
-queue QUEUE which queue for PBS/SGE execution
-dontrun don't actually execute the workflows
-experiment EXPERIMENT
experimental paradigm
-altmodel ALTMODEL alternate model to fit
-regspace {mni,fsaverage}
common space for group analysis
-unsmoothed use unsmoothed fixed effects outputs
-output OUTPUT output directory name
run_warp.py¶
usage: run_warp.py [-h] [-subjects [SUBJECTS [SUBJECTS ...]]]
[-plugin {linear,multiproc,ipython,torque,sge,slurm}]
[-nprocs NPROCS] [-queue QUEUE] [-dontrun]
Estimate a volume-based normalization to the MNI template.
This script can use either FSL tools (FLIRT and FNIRT) or ANTS to estimate a
nonlinear warp from the native anatomy to the MNI152 (nonlinear) template.
The normalization method is controlled through a variable in the project file.
Using ANTS can provide substantially improved accuracy, although ANTS can be
difficult to install, so this is not the default. The two methods are mutually
exclusive, and the outputs will overwrite each other.
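As a sketch, the relevant project file setting might look like the following;
the variable name is an assumption here, so confirm it against the project.py
template generated for your study:

    # $LYMAN_DIR/project.py (excerpt; variable name assumed)
    normalization = "ants"   # or "fsl" to use FLIRT/FNIRT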
Unlike other lyman scripts, the output is written to the `data_dir`, rather than
the `analysis_dir`.
This script will also produce a static image of the target overlaid on the
moving image for quality control. This is best viewed using ziegler.
Examples
--------
run_warp.py
With no arguments, this will estimate the warp for all subjects using
multiprocessing.
Usage Details
-------------
optional arguments:
-h, --help show this help message and exit
-subjects [SUBJECTS [SUBJECTS ...]]
list of subject ids, name of file in lyman directory,
or full path to text file with subject ids
-plugin {linear,multiproc,ipython,torque,sge,slurm}
workflow execution plugin
-nprocs NPROCS number of MultiProc processes to use
-queue QUEUE which queue for PBS/SGE execution
-dontrun don't actually execute the workflows
make_masks.py¶
usage: make_masks.py [-h] [-s [SUBJECTS [SUBJECTS ...]]] -roi ROI [-exp EXP]
[-orig ORIG] [-label LABEL] [-native] [-hemi {lh,rh}]
[-sample {white,graymid,pial,cortex}]
[-proj PROJ PROJ PROJ PROJ] [-save_native] [-aseg]
[-erode ERODE] [-id [ID [ID ...]]] [-contrast CONTRAST]
[-thresh THRESH] [-nvoxels NVOXELS] [-unsmoothed]
[-altmodel ALTMODEL] [-serial] [-debug]
Create masks in native functional space from a variety of sources.
Currently this can start with ROIs defined as a surface label on
fsaverage, labels defined on each subject's native surface, ROIs
defined on the high-res volume in Freesurfer space, or a statistical
volume from a subject-level analysis.
You can always pass a filepath (possibly with ``subj`` and ``hemi``
string format keys) to -orig, and the program will work out what to
do from the file type and other arguments. Alternatively, if files
are in expected places (Freesurfer data hierarchy, lyman analysis
hierarchy) there are shortcuts for the corresponding image type.
The processing is almost entirely dependent on external binaries
from FSL and Freesurfer, so both must be available.
The resulting masks are defined in the space of the first functional
run. This is also the target of the ``-regspace epi`` registration
in the main lyman fmri workflows.
The processing here is closely tied to these fmri workflows and requires
subject-level preprocessing to have been performed. This program
should be executed from a directory containing a project.py file
that defines the relevant data and analysis paths.
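A minimal project.py for this purpose only needs the path variables
referenced throughout these docs; a real project file typically defines
additional options as well:

    # project.py -- minimal sketch of the path settings
    data_dir = "/path/to/study/data"          # masks and warps are written here
    analysis_dir = "/path/to/study/analysis"  # fmri workflow outputs live here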
The script will also write a mosaic png with the mask overlaid on
the mean functional image defining the epi space. This image can be
viewed in ziegler. Additionally, it will write a json file with the command
line argument dictionary for provenance tracking.
If an IPython cluster is running, the processing will be executed
in parallel by default on all available engines. This can be avoided
by using the -serial option.
Examples
--------
make_masks.py -s subj1 -roi ifs -label ifs -sample graymid
Create a mask that will be saved to <data_dir>/subj1/masks/ifs.nii.gz
that is defined on the common surface in bilateral files at
<data_dir>/fsaverage/label/$hemi.ifs.label. When transforming from
surface to volume space, project halfway from the white surface to
the pial surface at each vertex and label all intersected voxels.
make_masks.py -roi lh.ifs -label ifs -hemi lh -proj frac 0 1 .1
Create a mask and save to <data_dir>/subj1/masks/lh.ifs.nii.gz
that is defined on the common surface in the single, unilateral file
<data_dir>/fsaverage/label/lh.ifs.label. This will create a mask for
each subject defined in $LYMAN_DIR/subjects.txt. When transforming from
surface to volume space, take samples in steps of 10% of the cortical
thickness between the white and pial surfaces and label any intersected
voxels.
make_masks.py -roi V1 -orig labels/%(hemi)s.%(subj)s_V1.label -native \
-sample graymid -s labeled_subjects
Create masks from labels defined on native surfaces and stored outside
the Freesurfer subjects directory hierarchy. Masks will be generated
for all subjects defined in $LYMAN_DIR/labeled_subjects.txt.
make_masks.py -roi ifs_hard -label ifs -contrast hard \
-thresh 2.3 -sample white
Create a mask defined from the bilateral 'ifs' label as above that is
intersected with a mask created by thresholding the fixed effect zstat
for the 'hard' contrast of the default experiment at 2.3. This requires
that registration and fixed effects have been run in the `epi` space.
make_masks.py -roi wm -aseg -id 2 41 -erode 3
Create a white matter mask from the aseg (Freesurfer auto-segmentation
volume) for the default set of subjects. The hires mask will be eroded
in 3 iterations before transformation to functional space.
make_masks.py -roi caudate_hard -aseg -id 11 50 \
-exp stroop -contrast name_color -thresh 4
Create a mask by intersecting activations in the 'name_color' contrast
for the 'stroop' experiment with a caudate mask from the automatic
segmentation.
Usage Details
-------------
optional arguments:
-h, --help show this help message and exit
-s [SUBJECTS [SUBJECTS ...]], -subjects [SUBJECTS [SUBJECTS ...]]
lyman subjects argument
-roi ROI will form name of output mask file
-exp EXP experiment (can use default from project.py)
-orig ORIG path to original file with subj and hemi format keys
-label LABEL label name if in Freesurfer hierarchy
-native orig label is defined on native surface
-hemi {lh,rh} hemisphere if unilateral label - otherwise combine
both
-sample {white,graymid,pial,cortex}
shortcut for projection arguments
-proj PROJ PROJ PROJ PROJ
projection args passed directly to mri_label2vol
-save_native save label file after warping from common space
-aseg atlas image is aseg.mgz
-erode ERODE erode the hires volume with this many steps.
-id [ID [ID ...]] roi id(s) if orig is index volume
-contrast CONTRAST first-level contrast to binarize z-stat map
-thresh THRESH z-stat threshold
-nvoxels NVOXELS take top <n> voxels from z-stat
-unsmoothed use unsmoothed fixed effects zstats
-altmodel ALTMODEL stat file is from alternative model
-serial force serial execution
-debug enable debug mode
anatomy_snapshots.py¶
usage: anatomy_snapshots.py [-h] [-subjects [SUBJECTS [SUBJECTS ...]]]
Generate static images summarizing the Freesurfer reconstruction.
This script is part of the lyman package. LYMAN_DIR must be defined.
The subject arg can be one or more subject IDs, name of subject file, or
path to subject file. Running with no arguments will use the default
subjects.txt file in the lyman directory.
Dependencies:
- Nibabel
- PySurfer
The resulting files can be most easily viewed using the ziegler app.
optional arguments:
-h, --help show this help message and exit
-subjects [SUBJECTS [SUBJECTS ...]]
lyman subjects argument
surface_snapshots.py¶
usage: surface_snapshots.py [-h] [-subjects [SUBJECTS [SUBJECTS ...]]]
[-experiment EXPERIMENT] [-altmodel ALTMODEL]
[-level {subject,group}]
[-regspace {mni,fsaverage,epi}] [-output OUTPUT]
[-geometry GEOMETRY]
Plot the outputs of lyman analyses on a 3D surface mesh.
This script uses PySurfer to generate surface images, which can provide
considerably more information about the distribution of activation than
volume-based images. Because the 3D rendering can be difficult to work
with, the script is outside of the Nipype workflows that actually generate
the results. Unfortunately, that means the script cannot be parallelized
and does not cache its intermediate results.
Images can be generated either at the group level or at the subject level,
in which case the fixed-effects outputs are plotted. Currently, the
statistics are plotted as Z statistics (even for Freesurfer results, which
are stored as -log10[p]), and regions that were not included in the
analysis mask are grayed out to represent their non-inclusion. For the
group-level plots, some aspects of how the results are rendered onto the
cortex can be controlled through parameters in the experiment file. Other
parameters are available as command-line options.
It is important to emphasize that because this script must be executed
separately from the processing workflows, it is possible for the static
images to get out of sync with the actual results. It is up to the user
to ensure that this does not transpire by always updating the snapshots
when rerunning the workflows.
Examples
--------
Note that the parameter switches match any unique short version
of the full parameter name.
surface_snapshots.py
With no arguments, this will make snapshots for the default experiment
at the group level in MNI space.
surface_snapshots.py -r fsaverage -o pilot
Make snapshots from the outputs of the surface workflow that are stored
in <analysis_dir>/<experiment>/pilot/fsaverage. The -log10(p) maps that
are written to Freesurfer will be converted to Z stats before plotting.
surface_snapshots.py -l subject -e nback -a parametric -r epi
Make snapshots of the fixed-effects model outputs on the native surface
for an alternate model of the `nback` experiment for all subjects
defined in the $LYMAN_DIR/subjects.txt file.
surface_snapshots.py -s subj1 subj2 -r mni -l subject -g smoothwm
Plot the default experiment fixed effects model outputs for subjects
`subj1` and `subj2` in MNI space on the `smoothwm` surface of the
fsaverage brain.
Usage Details
-------------
optional arguments:
-h, --help show this help message and exit
-subjects [SUBJECTS [SUBJECTS ...]]
list of subject ids, name of file in lyman directory,
or full path to text file with subject ids
-experiment EXPERIMENT
experimental paradigm
-altmodel ALTMODEL alternate model to fit
-level {subject,group}
analysis level to make images from
-regspace {mni,fsaverage,epi}
common space where data are registered
-output OUTPUT group analysis output name
-geometry GEOMETRY surface geometry for the rendering.
view_ffx_results.py¶
usage: view_ffx_results.py [-h] [-subject SUBJECT] [-experiment EXPERIMENT]
[-altmodel ALTMODEL] [-contrast CONTRAST]
[-unsmoothed] [-vlims VLIMS VLIMS VLIMS] [-debug]
Display single-subject fixed effects results in Freeview. This script is a
simple wrapper for the `freeview` binary that plugs in relevant paths to files
in the lyman results hierarchy.
optional arguments:
-h, --help show this help message and exit
-subject SUBJECT subject id
-experiment EXPERIMENT
experimental paradigm
-altmodel ALTMODEL show results from model name
-contrast CONTRAST contrast name
-unsmoothed show unsmoothed results
-vlims VLIMS VLIMS VLIMS
custom colormap limits
-debug print freeview output in terminal