Pep++ - A System for Visual Neurophysiology Experiments

Pep++ is a visual neurophysiology system I developed based on a Silicon Graphics Elan as the visual stimulus generator, a CED 1401-plus for data acquisition (hosted by a PC/DOS computer), and a master Intel machine running NeXTStep. The system is controlled entirely through a graphical user interface, and allows one to easily set up new stimuli, change the loop structures of an experiment, and analyze the data in real time. Other neat features include API embedding of a MESA spreadsheet workspace, which lets one define complex loop algorithms, and an API link to Chartsmith for the automatic generation of slides. Data from Pep++ can be read into Matlab, S-Plus, and Mathematica for further analysis. The network protocols are machine independent, so either the visual stimulator or the data acquisition system (or both) may be implemented on any other type of hardware, as long as it conforms to the protocols.
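Machine independence of the protocols comes down to defining messages by their byte layout on the wire rather than by any host's in-memory struct layout. The sketch below illustrates the idea with a hypothetical message header (the field names and sizes are illustrative, not the actual Pep++ protocol), packed by hand into big-endian bytes so that SGI, PC, and NeXT hosts all read the same thing:

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical message header: type tag, sequence number, payload length.
   Packed explicitly into big-endian bytes so the layout is identical on
   every host, regardless of its native endianness. */
typedef struct {
    uint16_t type;     /* message tag (values would be protocol-defined) */
    uint32_t seq;      /* sequence number */
    uint32_t length;   /* payload length in bytes */
} MsgHeader;

/* Write the header into buf (at least 10 bytes); returns bytes written. */
size_t pack_header(const MsgHeader *h, uint8_t *buf)
{
    buf[0] = (uint8_t)(h->type >> 8);
    buf[1] = (uint8_t)(h->type);
    buf[2] = (uint8_t)(h->seq >> 24);
    buf[3] = (uint8_t)(h->seq >> 16);
    buf[4] = (uint8_t)(h->seq >> 8);
    buf[5] = (uint8_t)(h->seq);
    buf[6] = (uint8_t)(h->length >> 24);
    buf[7] = (uint8_t)(h->length >> 16);
    buf[8] = (uint8_t)(h->length >> 8);
    buf[9] = (uint8_t)(h->length);
    return 10;
}
```

A matching unpack routine on the receiving side reverses the shifts; because no host-dependent struct padding or byte order is ever on the wire, any conforming machine can play either role.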


Beyond Pep++ - The P^4 System.

There are some limitations to the Pep++ system that we are addressing with the design of a new system for psychophysical and physiological experiments -- in both acute and awake preparations.

The main limitations of Pep++ are the following:

  • There are not many video configurations one can set on the Silicon Graphics (except on the Infinite Reality graphics engine), and the fastest frame rate one can achieve is 120 Hz in interlaced mode (which brings additional problems). As some cells in the LGN and V1 can follow 60 Hz or 72 Hz (the only non-interlaced modes on the SGI), it would be desirable to have a system that can reach rates of about 160-200 Hz.
  • Because the stimuli are generated by the SGI, one cannot generate a new stimulus while another is being shown (the CPU is constantly busy). It would be great if we could generate the next stimulus in the experiment while the current one is being displayed. This can be achieved with a multiprocessor SGI machine -- but that is very expensive...
  • The SGI is not really a real-time machine. Even when the IRIX OS is stripped down to its basic components, a task can still be preempted by the OS, and if this happens during a stimulation period it may slow the rate at which frames are shown. We would then have to repeat the stimulus or throw the data away. It would be better to have a dedicated real-time machine for stimulus presentation.
  • Data acquisition and analysis take place on different computers connected to the SGI through the network. Sometimes the network slows down (or crashes!), and so does the entire experiment. We would like fast communication between the machines, independent of the traffic on the local area network.

The new system is based on a network of Texas Instruments TMS320C4x parallel-processing DSPs in combination with GreenSpring's IndustryPack modules for I/O. The nodes of such a network are called TIMs (Texas Instruments Modules). The idea is that one can get TIMs off the shelf that do a variety of cool things. For example, one can get TIMs that consist of a processor and some memory (a general-purpose computing node), modules with a DSP and digital input-output or D/A and A/D converters, and, of course, modules that contain a DSP and a graphics display controller. Each processor has six bidirectional links that allow it to "talk" to other modules at 20 MBytes/sec, so the peak communication bandwidth per node is on the order of 120 MBytes/sec (!!!). We are working with Traquair Data Systems on the design of our hardware platform.

A small distributed-kernel OS (called Parallel-C, from 3L Ltd in the UK) can route messages between any two tasks in the network and provides real-time scheduling, timer, DMA, and interrupt services. This gives us a uniform programming environment for the whole network of DSPs. The project is in its early stages, but we have already started to develop on a DSP board with two modules (the graphics module and a general-purpose computing node).

The graphics module is the SMT-304 from Sundance, which includes a 50 MFLOPS TMS320C40 parallel digital signal processor, a Weitek Power 9100 graphics controller, 4 MBytes of VRAM, 1 MByte of fast (zero wait-state) SRAM, a Brooktree Bt445 true-colour RAMDAC, and 16 MBytes of page-mode DRAM. The Brooktree Bt445 is one of the latest RAMDACs on the market, and with it we can achieve pixel clock rates of up to 150 MHz. At this pixel rate we should be able to get a high spatial resolution (1024x800 pixels) at the quite high frame rate of 180 Hz (!).

Other nodes in the network can compute the next stimulus sequence concurrently with the graphics TIM displaying the current one. This means the next stimulus will likely be ready to show as soon as the present one is completed. Message routing between any two tasks in the network is very fast (latencies under 1 msec), so there is no need for external synchronization signals between the input/output boards and the display system. A real-time system like this can also be used in awake setups: for example, one TIM could monitor eye movements and send messages to the graphics display in real time to modify the stimulus on the fly.

The Parallel-C software is logically independent of the underlying hardware and can be configured to run on networks with different numbers of processors. This makes the system very flexible: one can add processing power by buying more TIMs and re-deploying some of the tasks onto them. As long as our software is written to take advantage of tasks that can run in parallel, we can speed up the system very easily...

If you have any questions or interest in participating in this project please send me email...