Intel® oneAPI Math Kernel Library
Ask questions and share information with other developers who use Intel® Math Kernel Library.

Interesting Observations

JohnNichols
Valued Contributor III
2,976 Views

I have had a lot of fun and some heartache playing with the MKL library and the PARDISO solver.

The PARDISO solver is very fast.  The problem I am solving is the hydraulic network problem, in this case using the Gradient Method. The PARDISO solver gives a much better first estimate of the solution vector, which can be measured using the error on the sum of the hydraulic heads. Its first pass gives a fourfold better estimate than the other solvers (all that I have tried).

But the solver then converges very slowly to the "best estimate" of the answer, taking 8000 iterations to do what the others do in 20. So the speed of the individual iterations is lost in the slow convergence.

I have played with passing the A array into the PARDISO solver in different formats, both as a sparse and as a dense matrix; it makes no difference.  Rum, really. I look step by step at the solution vector, and it is a long way from the other solvers' solution vectors, even on the first iteration for the fully dense solvers.

Interestingly, if I run with phase = 33 I get errors, but it works; if I use phase = 13 it also works, slowly, but I do not get the errors.
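
A note on those phases (a sketch, not the thread's code): phase 13 runs analysis, factorization and solve in one call, while phase 33 is the solve step alone and assumes that phases 11 and 22 have already run against the same pt handle, so a cold call with phase 33 will normally report an error. The variable names below (pt, iparm, anz, ia, ja, b, ddum, etc.) follow the setup that appears later in this thread:

    // Sketch: splitting phase 13 into its component phases.
    foreach (int ph in new int[] { 11, 22, 33 })   // analysis, factorization, solve
    {
        int phase = ph;
        Pardiso.pardiso(pt, ref maxfct, ref mnum, ref mtype, ref phase,
                ref n, anz, ia, ja, idum, ref nrhs,
                iparm, ref msglvl, b, ddum, ref err);   // solution lands in b when iparm[5] = 1
        if (err != 0)
        {
            Console.WriteLine("PARDISO error " + err + " in phase " + phase);
            break;
        }
    }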

It does solve, but in the end it is not faster than the other solvers - interesting, really.

Question: Should I try another MKL solver?

JMN

0 Kudos
75 Replies
mecej4
Honored Contributor III
1,183 Views

All the symptoms point to bugs in the arguments passed to the solver. No linear equations solver should lead you to conclude that it "...gives a much better first estimate of the solution vector". Unless the linear equations being solved are ill-conditioned, and there is little reason to expect linearized fluid network flow equations to be such, any linear equations solver should give essentially the same solution (e.g., to 10 significant digits). Likewise, since you are in charge of generating the sequence of linear approximations to the underlying nonlinear equations, whether you use Pardiso or some other linear solver should not affect convergence to the solution.
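
One quick, solver-independent way to test that (a sketch; A, x and b are the dense matrix and vectors the program already holds): compute the relative residual for each solver's solution. For a well-conditioned system, every correct solver should return a value near machine precision, and two correct solutions should agree to many digits.

    // Relative residual ||b - A*x|| / ||b||; a correct solve of a
    // well-conditioned system gives a very small value.
    static double RelativeResidual(double[,] A, double[] x, double[] b)
    {
        int n = b.Length;
        double rr = 0.0, bb = 0.0;
        for (int i = 0; i < n; i++)
        {
            double ax = 0.0;
            for (int j = 0; j < n; j++) ax += A[i, j] * x[j];
            double r = b[i] - ax;
            rr += r * r;
            bb += b[i] * b[i];
        }
        return Math.Sqrt(rr / bb);
    }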

You are welcome to post complete code and data for an example, naming at least one other linear solver than Pardiso.

0 Kudos
JohnNichols
Valued Contributor III
1,183 Views

Here is the C# code derived from the C code which forms the basis for the EPA-NET original program, the latest iteration has a lot of bells and whistles, but the basic structure is unchanged.

File.in is the input file.  The answer is written to the screen for simplicity.

It uses the MathNet solve routine as shown in the simulation routine in the main program.  You can see the original Gaussian solver still embedded in the code; I have not yet removed those routines. I also tested the results against the Fortran solver from Conte and de Boor's standard text on the matter.

The code has been tested against known examples and the original C program from the EPA.

As noted, the matrix is not ill-conditioned.  It solves rapidly.

JMN

0 Kudos
JohnNichols
Valued Contributor III
1,183 Views

So this is the PARDISO solver; it uses identical routines except for:

1. Solver

2. the routine to adjust the heads at each iteration. 

The following is the code to adjust the heads. The F matrix, as defined by the original developers of the technique, holds the X solution from AX = B, but in this case the X solution is clearly not correct and so F is not properly updated - i.e., I use the B vector.

                int retA = simulation(J, Nnodes, ipvt, F, err, sw, sw1);
                ///
                /* Update heads at each node using formula:    */
                /*   H = H + ACCEL*F                           */
                /* where ACCEL = overrelaxation parameter,     */
                maxdiff = 0;
                if (iter > 1 && iter < 11)
                {
                    a = Data.ACCEL;
                }
                else
                {
                    a = Data.ACCEL;
                }
                for (int i = 0; i < Nnodes; i++)
                {
                    if (Math.Abs(F[i]) > maxdiff)
                    {
                        maxdiff = Math.Abs(F[i]);
                    }
                    if (SNODE[i].Tank == -1)
                    {
                        SNODE[i].H += (a * F[i]);
                    }
                    //Console.WriteLine("Set " + m.ToString() + " Node : " + i.ToString() + " " + SNODE[i].H.ToString());
                    if (SNODE[i].H < -1000)
                    {
                        Console.WriteLine("Probable Error in Nodes at Node " + SNODE[i].ID.ToString());
                        return (SNODE[i].ID);
                    }
                }

I am clearly not setting up the input matrix A to the PARDISO solver correctly. The code for that is shown below.

private static void SetSparseMatrix(double[,] aIN, int nIN, double[] bIN,
        out int n, out int[] ia, out int[] ja, out int mtype, out int nhrs,
        out double[] b, out double[] a)
    {
        n = nIN;
        a = new double[n * n];
        ja = new int[n * n];
        ia = new int[n + 1];
        b = new double[n];
        int count = 0;
        ia[0] = 1;
        for (int k = 0; k < n; k++)
        {
            b[k] = bIN[k];
        }

        int counter = 0;
        for (int j = 0; j < n; j++)   // note: columns outer, so this builds CSR of A^T
        {
            int tally = 0;
            counter++;
            for (int i = 0; i < n; i++)
            {
                if (Math.Abs(aIN[i, j]) < 0.0000001)
                {
                    // treated as zero - skip
                }
                else
                {
                    a[count] = aIN[i, j];
                    ja[count] = i + 1;   // one-based index
                    count++;
                    tally++;
                }
            }
            ia[counter] = ia[counter - 1] + tally;
            //Console.WriteLine(ia[counter].ToString() + "  " + tally.ToString());
        }
        mtype = 11;
        nhrs = 1;
    }

I spent last night looking at the input and output data. Clearly my array is not ordered properly in going from the C# to the Fortran, but I cannot see the error and I have tried every ordering permutation I can think of.
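
For reference, a row-major, one-based CSR builder for A itself might look like the sketch below (an illustration only - the working code posted later in this thread keeps the column-wise traversal, i.e., CSR of A^T, and switches to zero-based indexing instead):

    // Sketch: one-based CSR of A itself - rows outer, columns inner.
    // Requires: using System; using System.Collections.Generic;
    private static void SetSparseMatrixCSR(double[,] aIN, int n,
        out int[] ia, out int[] ja, out double[] a)
    {
        var values = new List<double>();
        var columns = new List<int>();
        ia = new int[n + 1];
        ia[0] = 1;                              // one-based row pointers
        for (int i = 0; i < n; i++)             // rows outer
        {
            for (int j = 0; j < n; j++)         // columns inner
            {
                if (Math.Abs(aIN[i, j]) >= 0.0000001)
                {
                    values.Add(aIN[i, j]);
                    columns.Add(j + 1);         // one-based column index
                }
            }
            ia[i + 1] = values.Count + 1;
        }
        a = values.ToArray();
        ja = columns.ToArray();
    }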

My next step is to go back to the Fortran, code in the problem, and call the solver with the correct Fortran data - a simple solution, but I really want to be able to run the code in AutoCAD so we can exchange data in a simple format.

Clearly I am making a mistake in the input to the PARDISO solver.

JMN

0 Kudos
JohnNichols
Valued Contributor III
1,183 Views

The code will work once I get the X vector correct - I have no doubt about that - and it will solve in fewer iterations.

But sending the A matrix properly from C# to Fortran may be beyond my skill set.

JMN

0 Kudos
mecej4
Honored Contributor III
1,183 Views

(NOTE: I had not seen #4 and #5 when I looked at the program posted in #3, and the comments below do not take into account anything said in #4 and #5)

I get identical results with Pardiso as I did from the files in #3 (the FileResult.Out files produced match byte for byte), but before I post the relevant source code modifications, please oblige me with answers to the following two questions:

  • Do you need to use MathNet at all, once you decide to use MKL/Pardiso to solve your equations? If not, removing all ties to MathNet would make the code cleaner and simpler to distribute to your users.
  • In routine simulation(), you set the sparse matrix t equal to the transpose of the dense matrix a. Is this deliberately done? If so, I am curious about the reason. 

The test problem that you provided is small enough to facilitate getting the bugs out of the Pardiso calls, but you will probably need a significantly larger network to make the program's running-speed issues stand out. You probably have some large networks represented in data files.

0 Kudos
JohnNichols
Valued Contributor III
1,183 Views

Dear mecej4:

So, to provide the back story: I have been asked to take part in a large proposal from the Federal Government related to infrastructure.  One of the areas of interest is the exchange of information about water supply networks.  The first issue is that there are no formal standards for the ready exchange of water supply model information, as is required if we want to address the Feds' ideas in the proposal.

There are several different programs that can be purchased for the analysis of water supply systems. Most are based on the EPANET model developed by Rossman.  But for commercial reasons there are several different data formats, and some have proprietary software for viewing the data.  I was talking to one major city modeller, and his problem was that he was now completely tied to one brand of software and could not view his models in anything else. This is an issue in disasters.

Rossman has retired and the EPA is talking about putting the code into the public domain, but it is already in the public domain. The C model you compiled the other day using the cl compiler (I had tried using VS and had problems) is the latest iteration from Rossman, from 2008.  I prefer VS to the make files - just an old habit.

When I started preparing the proposal, which was submitted last week, I was looking at what was available and stumbled across Rossman's 1991 code for the original modelling program. The C code ran nicely in the Dev-C++ IDE, but it stumbled when I moved it to VS (I am not that good with C - I really do not like such an uncontrolled language), and I was not really that interested in the code, but in the algorithms. The code did, however, give me the ability to check the answers, which allowed me to write and debug my code.  (I checked the 91 code against the 2008 code and the algorithm is the same; one can see the 91 code embedded in the 2008 code.)  The Rossman '91 code, however, had an ASCII data structure that was really nice and simple - it is actually quite brilliant. Pity it got lost along the way.

I have two very large and complex models, one of 2000 nodes and one of about 3000 nodes. The models have been run on a variety of software over the years, but the basic data is stored inside AutoCAD models that are easy to maintain and use for other purposes. The two models are really complex and inter-related - great test models. The issue is getting the data out of the AutoCAD models and the results back in from a standalone program. I had tried one of the models in the EPA-NET software and it just stood still, so to speak, and doing the data exchange proved how "some word" that idea was.

I have a Hardy Cross loop program - from Streeter and Wylie - that I have used to look at both models, but it is in Fortran, and the results give me some concern as to whether all of the loops are picked up and whether there is hydraulic accuracy.  My thoughts were to use a Fortran program and access the DXF file data, or to use a C# program and access the data structure in AutoCAD directly - the latter is, I feel, the better option long term.

So to answer your questions,

1. No, I do not need MathNet at all - it was merely a very convenient method for solving matrix problems and moving away from the Rossman and Conte and de Boor solvers; the calls are a dream. MathNet showed me I could speed up the process a lot.  It is tolerably fast for small models, but will fail above 2000 nodes. It and the other solvers need to be pulled from the code.  PARDISO is way faster, which is important if the models are to become useful in real time, which is the long-range goal of the Feds for disasters.

2. I want to get the code down to the barest bones so I can concentrate on the real issues - checking the hydraulic accuracy and the exchange of data using direct access to the AutoCAD database.  Some research points to an issue of convergence on the Rossman-type models with low pipe flows.

No matter how you look at it, AutoCAD is here for the long haul.  The point is not to sell the software; the point is to look for a data exchange standard and to provide a sample of how the program can run "very quickly".  Then let the real experts deal with the commercial issues.

"In routine simulation(), you set the sparse matrix t equal to the transpose of the dense matrix a. Is this deliberately done? If so, I am curious about the reason."   Not deliberate - I stole the ideas and really just got them working with my problem.

The hydraulic accuracy issue is one reason why I have long been loath to do any real work with the Hardy Cross stuff: I had nothing to check the big results files against. Hence finding Rossman's 91 code and the proposal request coming at the same time prompted me to look at the issue again.  (There are two other people who work with me on other projects who are screaming about me taking time for this stuff, but I do not want to promise what I cannot deliver, and, no matter how you look at it, you are one of the most valuable MKL problem solvers in the world, so solving this now will give me a way to compare the different codes against large models and decide which gives more consistent results.)

The final issue is moving away from the AutoCAD data as the real repository of the model. If I put the model into EPANET and then make amendments in EPANET, I am not sure the AutoCAD is correct; but the AutoCAD can be easily plotted and given to the field people. Again, disaster planning becomes a problem. It is like Catch 22.5.

My problem is understanding how to translate the A matrix as an array in C# into a form PARDISO understands; I know I have this badly wrong.

Thanks for the help.

JMN

0 Kudos
mecej4
Honored Contributor III
1,183 Views

Thanks for the detailed explanation of the background. It is quite interesting in itself. I take it that you have the tools to produce an ASCII data file (similar to your File.in) from the network data that is stored inside AutoCAD. I should like to see such data files for a couple of large problems (around 2000 nodes, as you hinted).

I have attached a modified version of the program that you posted in #3, which runs with MKL/Pardiso and produces the same results as the original version. It reports "22 iterations", as did the original. I built it at the Intel C++ configured command line:

     csc WSAprg.cs /r:MathNet.Numerics.dll

The attached zip contains your input file as well as the output file from the new version, and a copy of the console output. Almost all the changes are inside routine simulation(). I have made use of the fact that MKL/Pardiso allows the CSR IA and JA arrays to contain zero-based indices.

With suitable adjustments, I found that the same code works with the Basel/Lugano versions of Pardiso 4/5 as well. Zero-based indices are not allowed in those versions, and the Pardiso call has an additional argument, dparm.

I am impressed with the quality of your C# code. I am barely able to program in C#, but I was able to adapt your code to work with Pardiso with just a bit of effort.

I look forward to seeing data files for a couple of big pipe networks.

0 Kudos
JohnNichols
Valued Contributor III
1,183 Views

Dear mecej4:

Brilliant-

So if I may: the first thing you do is:

    int nnz = 0; int nrhs = 1;
    IntPtr[] pt = new IntPtr[64];
    int[] iparm = new int[64];
    int maxfct, mnum, phase, msglvl, mtype;
    double[] ddum = new double[1];
    int[] idum = new int[1];

set nnz to count the number of nonzero elements in the A array

set the number of RHS vectors to 1

create the IntPtr array - the handle PARDISO uses for its internal data

create the iparm parameter vector

create the five integers required for the PARDISO control

create a ddum vector and an idum vector (dummy arguments)

-----------------------------------------------------------------------------------------------

then

    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            if (a[j, i] != 0.0) nnz++;

you count the number of nonzero elements

-------------------------------------------------------------------------------------------------

then

    int[] ia = new int[n + 1];
    int[] ja = new int[nnz];
    double[] anz = new double[nnz];

you create the three vectors to hold the sparse array, using the count, and n+1 for the pointers to each row in vector ia.

---------------------------------------------------------------------------------------------------

then

    /* Form ia, ja, anz arrays for CSR matrix representation of A^T */
    nnz = 0;
    for (int i = 0; i < n; i++) {
        ia[i] = nnz;
        for (int j = 0; j < n; j++) if (a[j, i] != 0.0) {
            ja[nnz] = j; anz[nnz] = a[j, i]; ++nnz;
        }
    }
    ia[n] = nnz;

you have transposed a into anz with a clever step and created the three vectors

ia is counted from zero? not 1?
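
A tiny worked example may make the layout concrete. For

    A = | 4  0  1 |
        | 0  3  0 |
        | 2  0  5 |

the loop above stores A^T row by row, so the zero-based arrays come out as

    anz = { 4, 2, 3, 1, 5 }
    ja  = { 0, 2, 1, 0, 2 }
    ia  = { 0, 2, 3, 5 }     // ia[n] = nnz = 5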

------------------------------------------------------------------------------------------------------------------

    for (int i = 0; i < 64; i++) iparm[i] = 0;

    iparm[0] = 1;  /* No solver default */
    iparm[1] = 2;  /* Fill-in reordering from METIS */
    iparm[5] = 1;  /* Write solution into b */
    iparm[7] = 2;  /* Max number of iterative refinement steps */
    iparm[9] = 13; /* Perturb the pivot elements with 1E-13 */
    iparm[10] = 1; /* Use nonsymmetric permutation and scaling MPS */
    iparm[11] = 0; /* Use A as is */
    iparm[12] = 1; /* Maximum weighted matching algorithm is switched on
                    * (default for nonsymmetric matrices) */
    iparm[34] = 1; /* Zero-based indexing of ia, ja */
    maxfct = 1;    /* Maximum number of numerical factorizations */
    mnum = 1;      /* Which factorization to use */
    msglvl = 0;    /* Print statistical information to file */
    mtype = 11;    /* Real unsymmetric matrix */
    err = 0;       /* Initialize error flag */

    for (int i = 0; i < 64; i++) pt[i] = IntPtr.Zero;

    phase = 13;
    Pardiso.pardiso(pt, ref maxfct, ref mnum, ref mtype, ref phase,
            ref n, anz, ia, ja, idum, ref nrhs,
            iparm, ref msglvl, b, ddum, ref err);
    if (err != 0) {
        Console.WriteLine("\nERROR during solution: " + err);
        return 0;
    }
    return 1;

----------------------------------------------------------------------------------------------------------------------------

you moved on to a fairly standard analysis using PARDISO -

Brilliant and elegant simplicity - well done, old chap.

JMN

0 Kudos
mecej4
Honored Contributor III
1,183 Views

ia is counted from zero? not 1?

Exactly. See Line-13 of the last (i.e., fifth) code fragment in #9. MKL/Pardiso allows using zero-based indices, if desired. 

0 Kudos
JohnNichols
Valued Contributor III
1,182 Views

I was counting from 1 - blast you, zero-based arrays. K and R should have matched FORTRAN; it is not as if they would not have been aware of the issue.

I got it working in VS - the only thing I did was copy the mkl_rt DLL to the executable location. Just out of interest, how does one distribute this type of program if it needs mkl_rt to execute?

I will now make some large files for the analysis so we can compare the results - it will take a few days; it is easy but tedious.

I think I embarked on this odyssey because I did not want to have to write different output routines (I already had three) to take the AutoCAD data and put it in different ASCII formats, and deal with the multitude of issues there for different programs. And then, if you change the data, you are all the way back to square one.

Plus the 91 ASCII format is so nice and friendly I thought - why change? Just get straight to the AutoCAD data at last. This is a good first step and the file is user friendly.

The interesting problem now is the pump curves; there are several variations in the different codes. I do not like any of them, and some are plain wrong - Streeter and Wylie's is not nice at all, for example.  The original EPANET one is strange, and trying to untangle the math was not fun.

Thanks heaps.

0 Kudos
mecej4
Honored Contributor III
1,182 Views

I got it working in VS - only thing I did was copy the mkl_rt file to the executable location - just out of interest how does one distribute this type of program if it needs MKL_rt to execute?

Making a local copy of the MKL DLL is not necessary. If %PATH% contains the directory holding MKL_RT.DLL, the CSC compiler will find it during compilation. The resulting program will find that and the other needed MKL DLLs at run time, again via %PATH%.
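
For example, from a command prompt (the directory shown is illustrative only - the actual location depends on the MKL version and where it was installed):

    rem Add the MKL redistributable directory to PATH for build and run:
    set PATH=C:\Program Files (x86)\IntelSWTools\compilers_and_libraries\windows\redist\intel64\mkl;%PATH%
    csc WSAprg.cs /r:MathNet.Numerics.dll
    WSAprg.exe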

Your users can download one of the redistributable packages at https://software.intel.com/en-us/articles/intelr-composer-redistributable-libraries-by-version . See also https://software.intel.com/en-us/articles/which-mkl-libraries-are-redistributables/ .

0 Kudos
JohnNichols
Valued Contributor III
1,182 Views

Dear mecej4:

I sorted out the issue of the PATH; it was an interesting 30 minutes whilst I delved into the guts of the environment variables and worked out what was in and what had been left out.

I noticed you had a post - somewhere - where you talked about the use of a good editor and the command line compiler as your preferred work environment.  Back in the days of Microsoft Fortran 3.3.1 - two floppy disks and a COMPAQ portable - I had issues dealing with very large DXF files from AutoCAD; I was writing DXF-generating routines at the time for the design of sewers for Hunter Water and the Melbourne Water Board. One would get very large files, and the only editor that worked was VEDIT - it will still open files that NOTEPAD and NOTEPAD++ choke on. VEDIT had the nice advantage that it would also run the command line compiler inside the program and return the errors as a text file that it opened.  It was sweet.

It was quite easy to sit and loop using a simple batch file, watch TV and the children, and drink a cuppa.  When I look at the directories with some programs in them, I still find the old batch files.  VEDIT is one of those quirky programs from an interesting group of people.  I recently bought the updated version for Windows; I still have the DOS version from '88.

But now that I use VS, it is really hard to go back to the command line. It has a lot of bells and whistles that get in the road, but it has some advantages.

I started to look at getting the data out of the DXF file format - it has been fun.

NODE 1 291242.753487868 1556107.39237795 0.3 50 0.3
NODE 2 291258.028648731 1556091.14512478 0.3 50 0.3
NODE 3 291513.724307427 1556048.99620458 0.3 50 0.3
NODE 4 291521.696715193 1555978.18418654 0.3 50 0.3
NODE 5 291559.497917055 1555947.41915687 0.3 50 0.3
NODE 6 291686.94407948 1555926.30799208 0.3 50 0.3
NODE 7 291745.639308541 1555961.69256434 0.3 50 0.3
NODE 8 291809.4114204 1555951.56444368 0.3 50 0.3
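
A minimal sketch of reading such records (the file name is made up, and the meanings of the three trailing numeric fields are not documented here, so they are stored generically):

    // Sketch: parse whitespace-separated NODE records like those above.
    // Requires: using System; using System.Globalization; using System.IO;
    foreach (string line in File.ReadLines("nodes.txt"))
    {
        string[] f = line.Split((char[])null, StringSplitOptions.RemoveEmptyEntries);
        if (f.Length < 7 || f[0] != "NODE") continue;
        int id = int.Parse(f[1]);
        double x = double.Parse(f[2], CultureInfo.InvariantCulture);
        double y = double.Parse(f[3], CultureInfo.InvariantCulture);
        double v1 = double.Parse(f[4], CultureInfo.InvariantCulture);  // field meanings
        double v2 = double.Parse(f[5], CultureInfo.InvariantCulture);  // not documented
        double v3 = double.Parse(f[6], CultureInfo.InvariantCulture);  // in the thread
        Console.WriteLine("node {0}: ({1}, {2})", id, x, y);
    }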

Getting all of the data types in the AutoCAD drawing checked was interesting.  I should have a working file by Monday.

JMN

0 Kudos
JohnNichols
Valued Contributor III
1,182 Views

Sometimes programming is frustrating. 

JMN

0 Kudos
JohnNichols
Valued Contributor III
1,181 Views

Dear mecej4:

You asked for some larger sample files.  Here is the Tamworth model at the 100 node stage.  It is going to take me a few weeks to sort out the tanks and the pumps from the limited data that I have from the original results.

It has a total of about 1800 nodes. I wrote a nice little program that now takes the DXF file apart and recovers the model data, so except for messy data like pumps and tanks, the model proper is the AutoCAD file.  It used to be very messy to output the results as AutoLISP text files and then configure them into one file depending on the modelling program.

At the moment I use an intermediate data file, but I will ultimately combine the programs so they run as one unit.

The original EPANET program had a very nice data structure based on one line per entity, which makes it easy to create the file; the latest input file uses codes and it is not as nice - IMHO.

My other problem with Tamworth is that I do not have the DEM data and will need to find it so I can get residual head data.  I have a few scattered spot levels and that is all.

I got the program running in SI, which is nice, as all my data is in SI. 

I am also using Tamworth to try to work out all of the errors that can be in a data file and trap them.

So let me know what you think. But it is fast - thanks to you.

Regards

JMN

0 Kudos
mecej4
Honored Contributor III
1,181 Views

JMN: Sorry, I had been out of town from last Saturday, visiting family at the other end of the continent.

Please note that I am not conversant with the input data conventions for this program and I have not seen any documentation of the same. Therefore, all that I can do is to run the EXE on a data file that you name as suitable input. If some sections in the data file are missing or the program detects an error in the input data file, I get stuck. 

I noticed that the routine FindNode is called lots of times, and that you use linear search within it. It can be made much faster by using binary search instead. Here are the changes to do that.

1. In InputNodes(), just before the "return" statement, check that the input data is in ascending order w.r.t. ID:

            for (int i = 1; i < Nnodes; i++) if (SNODE[i].ID < SNODE[i-1].ID) {
                Console.WriteLine("InputNodes : SNODE[].ID not in ascending order");
                Environment.Exit(1);
            }

2. Replace the body of FindNode with the following:

        private static int FindNode(
            constants Data,
            Snode[] SNODE,
            int Nnodes,
            int n1,
            int kn,
            ref int nodeIDIndex)
        {
            int idx = 0, lo = 0, hi = Nnodes - 1;

            // Binary search: node IDs are assumed sorted in ascending order.
            while (lo <= hi) {
                idx = (hi + lo) / 2;
                if (SNODE[idx].ID > n1) hi = idx - 1;
                else if (SNODE[idx].ID < n1) lo = idx + 1;
                else break;
            }
            nodeIDIndex = idx;
            if (SNODE[idx].H != Data.unKnown) kn++;
            return kn;
        }

Are you aware of utilities such as DXF2EPA (http://www.water-simulation.com/wsp/2005/06/03/dxf2epa-autocad-dxf-file-conversion-utility-for-epanet/) that read an AutoCAD DXF file and output an EPA-2 input file? I tried the program on your TAM-WAT.dxf file, but it asked me to select "layers", which I do not know how to do.

0 Kudos
JohnNichols
Valued Contributor III
1,181 Views

Dear mecej4:

Thank you for the comments and the code.  To answer your questions:

1. I have tried the DXF-to-EPA converters; the issue is that the layers used in AutoCAD can vary so widely, and the method used to hold the node and demand data is not fully standardized. It was actually using one of these programs that rekindled the interest - along with the NIST grant issues. I developed my models and the AutoLISP routines I use in the late 1980s, long before the EPA stuff was released, so I used the WATSYS file format standard in Australia. I also have routines that will export the AutoCAD drawings in EPA format. The EPANET GUI is not as nice as having the data in AutoCAD, which is just so much more useful and can be plotted in many ways. Most EPANET models in the USA are in US units and I work in metric, which is also a hassle with layer names.

2. There should only be one model - so as soon as you take the model into an ASCII file format, there is the temptation to make changes and not update the AutoCAD drawing or the EPANET GUI.  This is just normal human behaviour.

3. I am looking to let the WSA program access the DXF file directly and then send the contour plot of the results back to the DXF file so one can view the results.  It is very hard to visualize an 1800-node model.

4. I finally got the 1800-node file for Tamworth working in WSA.  I was trying to do it in small pieces, but it was easier to look at the entire file, find the errors in the model, and fix them. The large model is running, but it has a few issues, so I am still looking at it and trying to sort out the reservoirs and pumps.  Once I do that I will put the model into EPANET format and compare the results.  I am fixing a couple of issues with the demand data - I did not think it through when I set up my program. Once I have that fixed I will put the code and the 1800-node model up again.

5. I also have to teach myself how to profile the code to spot the bottlenecks; I have not done that before.

Thanks again - you really help so much.

JMN

0 Kudos
JohnNichols
Valued Contributor III
1,181 Views

Dear mecej4:

I just tried the code changes. They make the program faster, and they also stop a crash problem I was having: after 88 iterations, when the head out of balance got below 0.26 metres, it would just crash.

Thank you so much.

JMN

0 Kudos
JohnNichols
Valued Contributor III
1,181 Views

Dear mecej4:

I will upload the files first, as I just lost the lot when a file upload went awry.

JMN

0 Kudos
JohnNichols
Valued Contributor III
1,181 Views

Dear mecej4:

I was able to get the 1800-node Tamworth model running on the WSA program.  But the demand data format I have does not easily shoehorn into the EPA model format, so I went back and fixed the program to take demand data in residential, industrial, parkland and leakage format.  This is explained in manual.txt.

I then moved on to getting a much better report format out, so I could start to check for common errors.  This is shown in Filerpt.txt. I did this with the smaller FileM.IN so I could look at the results and make some sense of them. I still have bugs in the pump control routines, but I am trapping more conceptual errors in the model now.

Then I looked to loop the program through time, i.e., 1-hour increments for 24 hours.  This tells you whether the pump sizes are correct and how the reservoir levels change.  I used FileM.in for the same reason again.

I could not get the program to loop - it turns out the arrays:

                    double[] F = new double[Data.maxN];
                    double[,] J = new double[Data.maxN, Data.maxN];
                    int[] ipvt = new int[Data.maxN];

were declared in the routine 'hsolve', but on the second loop, when they were recreated, the program crashed without giving an error - just gone.

It took a while to work out, but moving these arrays outside the looping routine meant they were allocated once and then simply had to be re-zeroed at each use. The program stopped crashing.  Re-zeroing turned out to be computationally expensive, so I used the Array.Clear routine and got a big jump in performance.
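
A sketch of that pattern, assuming an hourly driver loop around hsolve (names follow the fragment above):

    // Allocate once, outside the time loop ...
    double[] F = new double[Data.maxN];
    double[,] J = new double[Data.maxN, Data.maxN];
    int[] ipvt = new int[Data.maxN];

    for (int hour = 0; hour < 24; hour++)
    {
        // ... and re-zero cheaply at each use. Array.Clear also handles
        // rectangular arrays: Length is the total element count.
        Array.Clear(F, 0, F.Length);
        Array.Clear(J, 0, J.Length);
        Array.Clear(ipvt, 0, ipvt.Length);
        // hsolve(...) for this hour
    }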

But now the program will loop about 9 times and then consistently crash. I get the error shown in untitled.bmp in the previous post. The information I could find on the Internet suggests this is the PARDISO DLL corrupting the data structures. The error pops up on a write statement, but not necessarily the same one.  I assume it is accessing the corrupted data structure.

Just to make sure it was not my code, I put the MathNet solver back into the program, and it runs nicely, but slowly, for the 24 hours.

Interestingly, I put the Tamworth data into the EPANET format and tried to run the model in EPANET.  I am still trying to work out what the error codes mean.  I will sort it out, but with so many small screens of data it is a steep learning curve.

Any ideas? I have insufficient experience to understand what is happening, and it does not break on the same write line each time, so it is hard to track down. If I look at the variables I get a lot of "variable not available as it may have been optimized away", even though I turned optimization off in the project file.  Is there a way I can check the data structures upon return?
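
One thing that may be worth checking (a suggestion, not a confirmed diagnosis): if pardiso is called with phase = 13 on every pass without ever releasing its internal storage, memory use grows with each loop. PARDISO reserves phase = -1 for releasing that storage. A sketch, using the call signature from the earlier posts:

    // After the solution from this pass has been copied out, release all
    // internal PARDISO memory held under this pt handle:
    phase = -1;
    Pardiso.pardiso(pt, ref maxfct, ref mnum, ref mtype, ref phase,
            ref n, ddum, ia, ja, idum, ref nrhs,
            iparm, ref msglvl, ddum, ddum, ref err);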

Regards

JMN

0 Kudos
mecej4
Honored Contributor III
1,175 Views

I think that there are bugs in the program, and I think that finding and fixing these bugs is harder in C# than in Fortran. For example, C# does not allow uninitialized variables, but the way it accomplishes this is to initialize all numerical types to zero. That is of no help at all, because zero is not necessarily a meaningful initial value (for example, for the speed of light).

I have done a bit of fiddling with the code that you posted in #19. The reason for the program crashing is that there are errors that are detected and a message printed ("Reservoir has a head below the bottom"), but the calculation is allowed to continue. From that point on, the results are probably junk -- for example, with FileM.in, the first head error reported after that message is printed is thousands of times larger than the distance from the Earth to the Sun.
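
A sketch of failing fast at that point (the message text is from the run described above; the field name Bottom and the surrounding loop are assumptions, not the program's actual code):

    // Treat a reservoir drawn down below its floor as fatal rather than
    // continuing the calculation with a physically meaningless state:
    if (SNODE[i].H < SNODE[i].Bottom)   // Bottom: assumed field name
    {
        Console.WriteLine("Reservoir has a head below the bottom at node " + SNODE[i].ID);
        Environment.Exit(1);
    }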

To catch these errors, a good strategy is to lower the array sizes declared in your Constants module to the minimum values needed to run the test problem. Doing so will let you catch bugs such as the following: the code in SetHourlyDemands indicates that the array Demand[] should equal Data.maxHRS in size. However, the line where the array is allocated contains "double[] Demand = new double[Data.maxPipes]". For the problem in FileM.in, maxPipes=14, but maxHRS=24. Such errors are masked when your arrays are declared with a "generous" allowance, but as you increase the test problem size the errors will come alive one by one.
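
The fix is a one-line change (sizes quoted from the post):

    // Demand[] is indexed by hour, so size it by maxHRS, not maxPipes:
    double[] Demand = new double[Data.maxHRS];   // was: new double[Data.maxPipes]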

I guessed that one reason that  Tank-1 was pumped dry was that its area of cross-section was too low. I changed the area from 520 to 5200 m2. The program ran to completion and gave plausible results. Perhaps your program is not yet capable of treating the aftermath of one or more tanks being sucked dry?

QUESTION/SUGGESTION: Although you use MKL, the contents of our posts are losing relevance to this Forum. Any suggestions for not overstaying our welcome? Switch to E-mail, perhaps, with files dumped to Skydrive, Dropbox, etc.?

0 Kudos
Reply