This page is a result of a CNS*95 workshop which focused on the use of computers in computational neuroscience. We don't claim that this represents complete coverage of the topic, though we hope that it eventually will. In fact, we don't even guarantee that any of this makes sense right now; we thought it better to put this on the web early than to polish it. In other words, it's like 99% of the other web pages out there. We expect the information it contains to be constantly changing to reflect what's going on right now in the CN, scientific visualization, and simulation worlds.

So, if you have additions, corrections, suggestions, complaints, cute graphics, etc., please send them to Michael Stiber, who has volunteered to maintain this page. We're especially interested in pointers to your relevant work (on the Web would be nice); please let us know from which part of this document you'd like it linked. If you'd like information about your work included here, but you can't put it on your own server, let us know that too, and we'll put it on our site. Thanks go to Rich Murphey and Upi Bhalla for their help in adding information to this page.

What's New?

Yes, I know this page hasn't been kept up. If someone would like to claim ownership, I'll be happy to cede it. Otherwise, we shall allow this page to more-or-less gracefully transform into a historical document.

Don't like the name "NeuroGeek"? Here's a short blurb about the trendiness of being a geek, taken from The Computists' Communique 5(39):

It's chic to be geek. (If that doesn't rhyme, you're either a geek or a nerd.) Keyboard-phobic executives are Out; anyone who enjoys using computers is In. Newsweek says so; Business Week says so; and TV Guide says so, in this week's review of "Dweebs". Of course, most of us are supposed to be rich: "If I had a million for every time I was given a wedgie... Wait! I do!" Jeff Jarvis writes, "So don't think of them as nerds or losers; in today's society, they are the winners. Think of them as Friends with real jobs, more money, more brains... and bad wardrobes." [TV Guide, 10/28/95, p. 6.]

(Geeks were carnival performers who bit the heads off live chickens or snakes -- possibly from Middle Low German for "fool"...)

We're looking for cute artwork, logos, etc. to add interest and color to this page.

There's now a revision date at the end of this document, so you can "easily" check to see if this page has been recently updated.

A people section has been added, to provide pointers to people and their work of possible interest.

Quick Index:

  What are people doing now?
  What could people be doing that they're not?
  Analysis Tools (non-commercial starred)
  Resources
  People


What are people doing now?


Integration algorithms

  1. Euler (forward & backward). (A minimal sketch comparing the two follows this list.)
  2. Runge-Kutta (4th and 5th order).
  3. Predictor/corrector methods.
  4. Crank-Nicolson, used by both Genesis and Neuron, and described in detail in Mascagni's chapter in Methods in Neuronal Modeling.
  5. Backward differentiation with variable stepsize and order (EPISODE, Byrne & Hindmarsh).
  6. DASSL, a BDF method with variable order (1-5) and variable step size, L. Petzold.
  7. ODEPACK, Hindmarsh & others.
  8. CVODE, Cohen & Hindmarsh.
  9. LAPACK (linear algebra routines, used for the implicit solves rather than as an integrator itself) by Ed Anderson, Z. Bai, Chris Bischof, Jim Demmel, Jack Dongarra, Jeremy Du Croz, Anne Greenbaum, Sven Hammarling, Alan McKenney, Susan Ostrouchov, and Danny Sorensen. Also available in C by J. Demmel & Xiaoye Li.
  10. Mixed methods, Rush & Larsen.
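
To make item 1 concrete, here is a minimal sketch (ours, not taken from any of the packages above) comparing forward and backward Euler on a passive membrane equation, dV/dt = -(V - E_L)/tau. All parameter values are purely illustrative; the step size is deliberately chosen larger than the forward-Euler stability limit.

    # Minimal illustrative sketch: forward vs. backward Euler on a passive
    # membrane, dV/dt = -(V - E_L)/tau.  Parameter values are arbitrary.
    E_L = -70.0   # resting potential (mV)
    tau = 10.0    # membrane time constant (ms)
    V0  = -20.0   # initial condition (mV)
    dt  = 25.0    # larger than the forward-Euler stability limit of 2*tau (ms)

    def forward_euler(V, dt):
        # Explicit update: V_{n+1} = V_n + dt * f(V_n)
        return V + dt * (-(V - E_L) / tau)

    def backward_euler(V, dt):
        # Implicit update: V_{n+1} = V_n + dt * f(V_{n+1}); the membrane
        # equation is linear in V, so the implicit step has a closed form.
        return (V + dt * E_L / tau) / (1.0 + dt / tau)

    Vf = Vb = V0
    for n in range(8):
        Vf, Vb = forward_euler(Vf, dt), backward_euler(Vb, dt)
        print(f"step {n + 1}: forward = {Vf:10.2f}   backward = {Vb:8.2f}")

    # Forward Euler oscillates and diverges because dt > 2*tau; backward Euler
    # decays monotonically toward E_L.

The same stiffness argument is what motivates Crank-Nicolson and the BDF codes further down the list.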

Testimonials, Benchmarks & Practical Experiences

Mike Stiber writes his own simulators using ODEPACK. He's especially pleased with the root-finding in the LSODAR solver, which locates zeros of user-supplied constraint functions while integrating, allowing easy instrumentation of the simulation for spike detection, etc.
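
The same root-finding idea can be sketched in a few lines with a modern tool. The example below is ours, not Stiber's code: SciPy's solve_ivp "events" stand in for LSODAR's constraint functions, and the FitzHugh-Nagumo model and threshold are illustrative choices.

    # Illustrative sketch: spike detection via root-finding during integration.
    # LSODAR locates zeros of user-supplied functions g(t, y); SciPy's event
    # mechanism plays the same role here.  Model and threshold are arbitrary.
    import numpy as np
    from scipy.integrate import solve_ivp

    def fhn(t, y, I=0.5, a=0.7, b=0.8, eps=0.08):
        v, w = y
        return [v - v**3 / 3.0 - w + I, eps * (v + a - b * w)]

    def spike(t, y):
        # Constraint function: zero when v crosses the "spike" threshold.
        return y[0] - 1.0

    spike.direction = 1      # report upward crossings only
    spike.terminal = False   # keep integrating after each event

    sol = solve_ivp(fhn, (0.0, 200.0), [-1.0, 1.0], method="LSODA",
                    events=spike, max_step=0.5)
    print("spike times:", np.round(sol.t_events[0], 2))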

Rogene M. Eichler West of the Neuroscience Department at the University of Minnesota says, "We have compared DASSL (L. Petzold) to the popular Crank-Nicolson and found that when high accuracy is requested, DASSL is 60% faster... Our comparison was on the Rallpack set of standards..."

Rob Butera of the Mathematical Research Branch, NIDDK, NIH, says, "I now use CVODE, which is available from NETLIB. CVODE is an extension of LSODE written entirely in C - no more mixed language programming! I have found my code to run twice as fast as RADAU5 using similar error tolerances and similar order methods. Artie Sherman (here at NIH) tells me that he has found LSODE code in FORTRAN to be faster than comparable CVODE code in C. Still, I like the convenience of not having to hassle with mixed-language programming."


What could people be doing that they're not?

  1. PDE solvers.
  2. Variable spatial stepsize (dynamic recompartmentalization).
  3. Non-global time (each compartment with its own time & stepsize).
  4. Higher-order integration methods.
  5. Each compartment/neuron with its own integrator "instance"? (A rough sketch follows this list.)
  6. Mixed discrete/continuous methods.
  7. Parallel machines.
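
None of the packages above does item 5, as far as we know; the following is a rough, purely speculative sketch of the idea. Each compartment owns its own adaptive integrator instance and chooses its own internal steps, exchanging neighbour voltages only at coarse synchronization points. The two-compartment model, parameters, and use of SciPy's solve_ivp are all our own illustrative choices.

    # Speculative sketch: per-compartment integrator instances with non-global
    # time.  Topology and parameters are made up for illustration.
    from scipy.integrate import solve_ivp

    class Compartment:
        def __init__(self, v0, tau, E_L, g_couple):
            self.v = v0
            self.tau, self.E_L, self.g = tau, E_L, g_couple

        def advance(self, t0, t1, v_neighbour):
            # Private integrator instance for this compartment only; the solver
            # picks its own internal steps within [t0, t1].
            def rhs(t, y):
                v = y[0]
                return [-(v - self.E_L) / self.tau + self.g * (v_neighbour - v)]
            sol = solve_ivp(rhs, (t0, t1), [self.v], method="RK45", rtol=1e-6)
            self.v = sol.y[0, -1]
            return sol.t.size   # internal time points this compartment used

    soma = Compartment(v0=0.0,   tau=5.0,  E_L=-70.0, g_couple=0.05)
    dend = Compartment(v0=-70.0, tau=50.0, E_L=-70.0, g_couple=0.05)

    t, dt_sync = 0.0, 1.0
    for _ in range(20):
        vs, vd = soma.v, dend.v     # freeze neighbour values for this window
        n_s = soma.advance(t, t + dt_sync, vd)
        n_d = dend.advance(t, t + dt_sync, vs)
        t += dt_sync
    print(f"t = {t:.0f}: soma {soma.v:.2f} mV ({n_s} points), "
          f"dend {dend.v:.2f} mV ({n_d} points)")

The obvious cost is the splitting error introduced by freezing neighbour values between exchanges, which is one reason the idea remains on the wish list.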


Analysis Tools (non-commercial starred)

  1. Mathtools.net portal: a free scientific portal for MATLAB/MIDEVA m-files and toolboxes, plus Excel/Java/Fortran/C++ resources and links. *
  2. Mathtools.com: complementary products for MATLAB, such as MIDEVA (a fast MATLAB replacement), MATCOM (a compiler for MATLAB), Visual MATCOM (integrates m-files into Visual C++), and others, all available for download.
  3. DSTool: Poincaré sections, bifurcation diagrams. *
  4. PyDSTool, developed in collaboration with one of the original DSTool developers. It is not meant as a complete replacement for DSTool; rather, it is an "open" rather than "closed" working environment. It also offers features aimed squarely at neural modelling, such as a template kit for compartmental modelling, support for mixed discrete/continuous models (#6 on the list above), stiff integrators that partially support multiple time-scale integration (#3 on the list above), continuation/bifurcation analysis, and dimension-analysis tools for time-series data.
  5. Phaseplane/XPP/XPPAUT (G. Bard Ermentrout): a graphical software package for the analysis of dynamical systems. Includes an interface to AUTO.
  6. AUTO: probably the most popular program for bifurcation analysis of ODEs. (A brute-force sketch of what such a diagram shows follows this list.) The author states about the latest version: "To get a copy of AUTO94, people can send email to me at doedel@ama.caltech.edu. So far I have installed about 20-30 copies. I want to start slowly, in case there are some remaining bugs and portability problems."
  7. KAOS *
  8. Ye Olde Spreadsheet
  9. MATLAB
  10. Mathematica
  11. Maple
  12. S-PLUS data analysis system
  13. OCTAVE (MATLAB-ish) *
  14. MuPAD computer algebra system *
  15. See also the Applied Chaos Laboratory's list.
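
As a rough illustration of the kind of output these tools produce: AUTO and XPPAUT compute bifurcation diagrams properly, by numerical continuation, whereas the sketch below (ours, with a Hopf normal form chosen purely as a test system) just sweeps a parameter by brute force and records the long-term oscillation amplitude.

    # Brute-force "bifurcation diagram": sweep a parameter, integrate, record
    # the long-term amplitude.  AUTO does this far better by continuation.
    # The Hopf normal form and parameter range are illustrative only.
    import numpy as np
    from scipy.integrate import solve_ivp

    def hopf(t, y, mu):
        x, v = y
        r2 = x * x + v * v
        return [mu * x - v - x * r2, x + mu * v - v * r2]

    for mu in np.linspace(-0.2, 0.4, 7):
        sol = solve_ivp(hopf, (0.0, 200.0), [0.1, 0.0], args=(mu,),
                        rtol=1e-8, atol=1e-10)
        tail = sol.y[0, sol.t > 150.0]          # discard the transient
        amp = 0.5 * (tail.max() - tail.min())   # ~0 below the Hopf point,
        print(f"mu = {mu:+.2f}   amplitude = {amp:.3f}")  # ~sqrt(mu) above it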


Resources

Testimonials, Benchmarks & Practical Experiences

Rogene M. Eichler West of the Neuroscience Department at the University of Minnesota says, "We wrote a simulator using LAPACK routines and found that it performs 2.5 times faster than either Genesis or Neuron (for a fixed level of accuracy) on both the Rallpack standards and a few other tests." Eichler West goes on to note that, typically, any special-purpose simulator will be faster than one meant to be both general-purpose and reasonably easy to use, so raw performance is only one metric. A technical report on all these performance comparisons (and others) will be available soon.

"Also, a technical report available now is: 'A Renumbering Method to Decrease Matrix Banding in Equations Describing Branched Neuron-like Structures', R. M. Eichler West and G. L. Wilcox, Minnesota Supercomputer Institute Research Report UMSI 95/167, August 1995. (To be submitted to Journal of Computational Neuroscience). This would be of interest to folks using morphometrically realistic compartmental models."


People