Hello everyone!
Has anyone got any ideas what might cause the drop in performance of the BLAS function GEMV() compared to a simple serial computation of the same problem? Let me explain my question more clearly.
I've written a program that compares the performance of GEMV() to a simple serial matrix-vector multiplication routine. Each routine (the serial one and GEMV()) is called 100,000 times and the total time needed for the computations is recorded in a text file. This is done to simulate a program that uses an iterative method to find voltages and currents in an inductive network.
With a matrix size of 1000x1000, GEMV() performs approximately 3.3 times as fast (using 4 cores) as the serial version. But with increasing matrix size this advantage shrinks considerably: for a 1500x1500 matrix GEMV() performs about 1.7 times as fast as the serial computation, and for a 2000x2000 matrix GEMV() using 4 cores takes about the same amount of time as the serial computation.
What is causing this behavior? Has it got something to do with cache, memory access patterns or something completely different? Any ideas what might be causing this and any suggestions on how to keep the performance up for large matrices would be greatly appreciated.
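To make the setup concrete, here is a minimal sketch of the kind of comparison I mean. My actual code isn't shown here, so this assumes C with a CBLAS interface (such as MKL's), double precision, and illustrative sizes:

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <cblas.h>

/* Wall-clock timer (POSIX); a CPU-time clock would overcount
 * a multithreaded BLAS by summing time across threads. */
static double now_sec(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec * 1e-9;
}

/* Naive serial y = A*x for a row-major n x n matrix. */
static void serial_gemv(int n, const double *A, const double *x, double *y)
{
    for (int i = 0; i < n; i++) {
        double sum = 0.0;
        for (int j = 0; j < n; j++)
            sum += A[(size_t)i * n + j] * x[j];
        y[i] = sum;
    }
}

int main(void)
{
    const int n = 2000, reps = 1000;  /* illustrative; my runs use 100,000 reps */
    double *A = malloc((size_t)n * n * sizeof *A);
    double *x = malloc(n * sizeof *x);
    double *y = malloc(n * sizeof *y);
    for (size_t i = 0; i < (size_t)n * n; i++) A[i] = 1.0 / (double)(i + 1);
    for (int i = 0; i < n; i++) x[i] = 1.0;

    double t0 = now_sec();
    for (int r = 0; r < reps; r++)
        serial_gemv(n, A, x, y);
    double t_serial = now_sec() - t0;

    t0 = now_sec();
    for (int r = 0; r < reps; r++)
        cblas_dgemv(CblasRowMajor, CblasNoTrans, n, n,
                    1.0, A, n, x, 1, 0.0, y, 1);
    double t_blas = now_sec() - t0;

    printf("serial: %.3f s   dgemv: %.3f s\n", t_serial, t_blas);
    free(A); free(x); free(y);
    return 0;
}
```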
Gregor Seitlinger
A 2000 x 2000 dense matrix occupies 16 or 32 MB (2000 x 2000 elements at 4 bytes in single precision or 8 bytes in double), which is probably more than what you have in L2 or L3 cache.
Are you surprised by the timing results because you expected linear speed-up with the number of threads?
Amdahl's "law" has something to say about how much speed-up to expect, not just by using parallel programming, but by dedicating more resources in general.
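To put numbers on that: Amdahl's law gives a speed-up of 1/((1-p) + p/n) when a fraction p of the run time is parallelizable across n processors. A quick sketch (the p values are illustrative, not measured from your program):

```c
#include <stdio.h>

/* Amdahl's law: speed-up on n processors when a fraction p of
 * the work is parallelizable and the rest stays serial. */
static double amdahl(double p, int n)
{
    return 1.0 / ((1.0 - p) + p / n);
}

int main(void)
{
    /* Illustrative values: even with 90% parallel work,
     * 4 cores give at most ~3.1x, never 4x. */
    printf("p=0.90, n=4: %.2fx\n", amdahl(0.90, 4)); /* ~3.08x */
    printf("p=0.75, n=4: %.2fx\n", amdahl(0.75, 4)); /* ~2.29x */
    return 0;
}
```

Working backwards, your observed 3.3x on 4 cores would correspond to roughly p = 0.93, before memory effects enter the picture.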
There is an excellent review of the issues in A minicourse on multithreaded programming.
For big matrices, the speed of RAM is important (for level-2 BLAS).
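To put that more concretely (my numbers below are illustrative): level-2 BLAS routines like GEMV() do O(n^2) operations on O(n^2) data, so every matrix element is loaded from memory and used only once, and once the matrix no longer fits in cache the run time is set by memory bandwidth rather than by the number of cores:

```c
#include <stdio.h>

int main(void)
{
    /* dgemv does ~2*n*n flops while streaming n*n doubles of the matrix. */
    const double n = 2000.0;
    double flops = 2.0 * n * n;                  /* one multiply-add per element */
    double bytes = n * n * 8.0;                  /* matrix traffic in doubles */
    printf("flops/byte: %.2f\n", flops / bytes); /* only 0.25 */

    /* Illustrative: sustaining 10 GFLOP/s at 0.25 flops/byte would
     * require 40 GB/s of memory bandwidth. */
    double gflops = 10.0;
    printf("bandwidth needed: %.0f GB/s\n", gflops / (flops / bytes));
    return 0;
}
```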
It turns out that it is indeed a cache issue: the larger matrices no longer fit into the L2 cache (8 MB), and that is the reason for the decrease in speed.
I guess my best option is to use some sort of divide-and-conquer approach to get some performance back. Hopefully I can block the matrix without so much overhead that the performance I gain from working on smaller blocks is eaten up by the blocking code.
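In case it helps anyone else, here is the rough shape of the blocking I have in mind (a hypothetical sketch assuming C with OpenMP, not my final code): split A into fixed row panels, one per thread, so that across the solver's many iterations each panel can stay resident in its core's cache.

```c
/* Row-panel blocked y = A*x for a row-major n x n matrix.
 * Each thread owns a fixed panel of rows, so a panel that fits
 * in that core's cache can stay warm across repeated calls. */
#include <omp.h>

void blocked_gemv(int n, const double *A, const double *x, double *y)
{
    #pragma omp parallel
    {
        int nt  = omp_get_num_threads();
        int tid = omp_get_thread_num();
        /* Static row partition: thread tid handles rows [lo, hi). */
        int lo = (int)((long long)n * tid / nt);
        int hi = (int)((long long)n * (tid + 1) / nt);
        for (int i = lo; i < hi; i++) {
            double sum = 0.0;
            for (int j = 0; j < n; j++)
                sum += A[(size_t)i * n + j] * x[j];
            y[i] = sum;
        }
    }
}
```

The static partition is the point: the same rows land on the same thread every call, which is what lets a panel stay cache-resident between iterations. Calling cblas_dgemv on each panel instead of the hand-written inner loops would be a variant of the same idea.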
Gregor Seitlinger