This is a bit off-topic.
I'm starting to think about setting up a regression testing system for some simulation programs. If anybody knows of good information on this subject (some code would be best), I'd appreciate hearing about it.
There are roughly two types of regression testing: one that tests the whole program (run the new version and compare the results with the reference), and one that tests individual parts (single routines, or small collections of routines that cooperate) by comparing with reference results. For the latter type, unit testing, several packages exist. My own, ftnunit, can be found at http://flibs.sf.net.
The first type is a trifle more complicated. In my company we use a framework in Python to run the new versions and compare the results automatically. A long time ago I created a general script in Tcl to do this - but because it is general, you need to supply a number of things yourself. It will all depend on the run procedure, what exactly you want to automate, how to compare the results and so on. But I can provide more details :).
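To make the idea concrete, here is a minimal sketch of the comparison half of such a driver - not the framework mentioned above, just an illustration. It assumes the simulation writes a whitespace-separated numeric table (one row per time step), and that a run passes when every value agrees with the reference within a relative tolerance; the file names and layout are invented for the example.

```python
def parse_table(path):
    """Read a whitespace-separated numeric table: one row per time step."""
    rows = []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if parts:
                rows.append([float(p) for p in parts])
    return rows

def compare_outputs(new_path, ref_path, rel_tol=1e-6):
    """Return a list of (row, col, new_value, ref_value) entries where the
    relative difference exceeds rel_tol; an empty list means the run passes."""
    new_rows, ref_rows = parse_table(new_path), parse_table(ref_path)
    mismatches = []
    for i, (nrow, rrow) in enumerate(zip(new_rows, ref_rows)):
        for j, (n, r) in enumerate(zip(nrow, rrow)):
            scale = max(abs(r), 1e-30)  # guard against zero reference values
            if abs(n - r) / scale > rel_tol:
                mismatches.append((i, j, n, r))
    return mismatches

# Tiny demo with fabricated data in a temporary directory
import os, tempfile
d = tempfile.mkdtemp()
ref = os.path.join(d, "reference.out")
new = os.path.join(d, "new.out")
with open(ref, "w") as f:
    f.write("0.0 1.000\n1.0 2.000\n")
with open(new, "w") as f:
    f.write("0.0 1.000\n1.0 2.005\n")
print(compare_outputs(new, ref, rel_tol=1e-3))
```

In a real driver you would wrap this in a loop over case directories, run the executable first (e.g. via the subprocess module), and report the mismatches per case.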
Thanks Arjen. It's the first type that I'm proposing to use. For a given reference case (input file), a model generates an output file containing a set of variable values at fixed time intervals. We will need to choose a subset of these output variables, and a set of times for computing the deviation from the reference output. For this computation we'll need to specify the metric (e.g. sqrt of sum of squares of % deviation), and a criterion for significance.
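The metric described above (sqrt of sum of squares of % deviation over a chosen subset of variables and times, plus a significance threshold) could be sketched roughly like this; the dictionary layout, variable names, and threshold value are all invented for the example, and it assumes the reference values are nonzero:

```python
import math

def deviation_metric(new, ref, variables, times):
    """Sqrt of the sum of squared percent deviations over the chosen
    variables and times. `new` and `ref` map time -> {variable: value}
    (this layout is an assumption, not a fixed format)."""
    sq = []
    for t in times:
        for v in variables:
            r = ref[t][v]  # assumed nonzero, since we divide by it
            sq.append(((new[t][v] - r) / r * 100.0) ** 2)
    return math.sqrt(sum(sq))

# Demo with fabricated data: two output times, two variables
ref = {0.0:  {"T": 300.0, "p": 1.0e5},
       10.0: {"T": 320.0, "p": 1.1e5}}
new = {0.0:  {"T": 300.0, "p": 1.0e5},
       10.0: {"T": 326.4, "p": 1.1e5}}  # T deviates by 2 % at t = 10

m = deviation_metric(new, ref, variables=["T", "p"], times=[0.0, 10.0])
print(f"metric = {m:.3f} %")
print("significant" if m > 1.0 else "ok")  # 1.0 % is an arbitrary criterion
```

Whether to sum or average the squares (RMS), and what threshold counts as significant, are exactly the choices you would have to pin down per model.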
This could be done using either a GUI or a command-line program plus a text editor. For the GUI option, Excel/VBA, although a bit clunky, is quite suitable for this sort of thing. There may be better choices.
If you have access to the journal article, I am immodestly advertising this paper: A Continuous Integration Platform for the Deterministic Safety Analyses Code System AC2
It describes how we do regression testing regularly for some of our simulation programs.