hareson

Beginner


03-04-2013
02:37 AM

RAM Performance Question

Hello All

I'm building a new workstation for solving a lot of banded matrices. I only have to solve for a single solution, and I use an LU decomposition of the matrix via the function **'cgbtrf'**. The matrices are around 300,000 x 1024 and upwards.

For this algorithm, is higher frequency with higher CAS latency better than lower frequency with lower CAS latency? I.e., will bandwidth or fetch delay be my bottleneck?

I'm considering 1066 MHz CAS 7 vs. 1333 MHz CAS 9 in 8 GB modules, to leave room for expanding beyond my initial 64 GB.

Or will this not affect anything, because it will all be bottlenecked by the CPUs (2x E5-2640)?
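For context on the memory side of the question: a rough sketch of the band-storage size that LAPACK's banded LU works on. Assuming "300,000 x 1024" means n ≈ 300,000 unknowns with a total bandwidth around 1024 (the kl = ku = 511 split is my assumption, not from the post):

```python
# Rough band-storage footprint for LAPACK's banded LU ( ?gbtrf ).
# Assumed sizes: n = 300,000 unknowns, kl = ku = 511 sub/super-diagonals.
# cgbtrf is single-precision complex, i.e. 8 bytes per element, and the
# factorization needs ldab = 2*kl + ku + 1 rows of band storage.

def gbtrf_storage_bytes(n, kl, ku, elem_bytes=8):
    ldab = 2 * kl + ku + 1           # leading dimension of the band array AB
    return ldab * n * elem_bytes     # total bytes held in AB

n, kl, ku = 300_000, 511, 511
size = gbtrf_storage_bytes(n, kl, ku)
print(f"band storage: {size / 2**30:.2f} GiB")  # a few GiB, well under 64 GB
```

Even with the extra kl rows of fill-in, a single factorization fits easily in 64 GB, so the question really is bandwidth vs. latency, not capacity.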

Best regards

Henrik Andresen

12 Replies


I think you need to be more concerned about the amount of available memory ( 64 GB looks very good! ) and the performance of your CPU.
Regarding CAS latency numbers:
>>For this algorithm, is higher frequency higher CAS latency better than lower frequency lower CAS latency?
This is not always true ( take a look at a similar thread / see below ).
>>...I.e. is it the bandwidth or the fetch delay that will be my bottleneck?
I think yes.
>>...I'm considering 1066 MHz CAS 7 vs. 1333 MHz CAS 9...
Please take a look at a similar thread:
Forum Topic: **Laptop SODIMM memory with 9-9-9-24 latency vs. 10-10-10-27 latency ( Non-ECC )**
Web-link: software.intel.com/en-us/forums/topic/364897
and follow a couple of **Wikipedia** links, since they provide more technical detail on how DIMMs with different CAS latencies and frequencies can be compared.

SKost

Valued Contributor II


03-04-2013
06:08 AM


>>...The matrices are around 300.000 x 1024 and upwards...
I've done a quick verification: a matrix with dimensions **300000x1024** ( let's say a double-precision floating-point type ) is equivalent, in terms of how much memory is needed, to a matrix with dimensions **17527x17527**, and it will need ~2.3 GiB of memory. So, it is significantly less than the 64 GB of memory available on your computer.
A matrix with dimensions **17527x17527** could be processed on a computer with ~8 GB of memory, but as fast a CPU as possible is needed.
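The equivalence can be checked in a couple of lines ( pure Python; 8-byte doubles assumed ):

```python
import math

elems = 300_000 * 1024                 # elements in a 300000 x 1024 layout
side = math.isqrt(elems)               # side of the equivalent square matrix
print(side)                            # 17527
print(f"{elems * 8 / 2**30:.2f} GiB")  # ~2.29 GiB as doubles
```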

SKost

Valued Contributor II


03-04-2013
06:22 AM


hareson

Beginner


03-04-2013
07:17 AM


Hi Sergey

Thank you for your replies.

The reason for my selection of memory is how I expect the LU decomposition to work. The wiki pages state that compilers choose according to CAS latency, but for read/write operations it will still matter unless a lot of the data is cached beforehand. For 1k rows, the data usage will be 8 KB per row and 8 MB for all data relevant to the operations on a single matrix, assuming the out-of-reach data has to be read anyway.

In the multi-threaded case the data needs to be shared with the other CPUs, which makes the data size exceed the cache available on any one CPU. So, depending on the algorithm, it will be one or the other, which is why I thought to ask in case anyone has tested this.
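As a sanity check on the working-set figures above ( the row length and row count are this post's numbers ):

```python
# Working set touched while operating within the band, per this post's figures.
doubles_per_row = 1024
bytes_per_row = doubles_per_row * 8           # 8 KB per row
rows_touched = 1024                           # "1k rows" in flight at once
working_set = bytes_per_row * rows_touched    # 8 MiB total
print(bytes_per_row, working_set)
```

8 MiB is in the same ballpark as a Xeon L3 cache, which is why it is hard to say up front whether the kernel stays cache-resident or spills to DRAM.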

Regarding memory size: I need to keep other data resident and run multiple cases at once for full CPU utilization, which is how I will benefit from the 64 GB.

But as you write, I might be overshadowed by other effects.

Thank you


>>>>...A matrix with dimensions **17527x17527** could be processed on a computer with ~8GB of memory but as fast a CPU
>>>>as possible is needed...
I'd like to give a short explanation of why I did that verification: I simply wanted to see how the initial size of your matrix maps to the size of a square matrix, and ~**16Kx16K** is what I use most of the time.
Now, let's get practical; here are real numbers for multiplication of **16Kx16K** matrices using different algorithms:
...
**[ Algorithm 1 - Single-threaded - 'double' data type ]**
Matrix sizes: 16384x16384
Time to calculate: 3379.5433 sec
...
**[ Algorithm 2 - Single-threaded - 'double' data type ]**
Matrix sizes: 16384x16384
Time to calculate: 78.4685 sec
...
As you can see, the 2nd algorithm is ~43x faster. Of course, multi-threaded implementations of both algorithms will run faster, but they are both CPU-bound (!).
>>...But as you write, I might be overshadowed by other effects...
I wouldn't worry about CAS latency numbers for the DIMMs, because even with the fastest memory, matrix multiplication is more CPU-bound than RAM-bound. In order to get results faster you need to use advanced matrix multiplication algorithms, like:
Strassen - O( n^2.807 )
Strassen-Winograd - O( n^2.807 )
Kronecker-based ( Tensor Product ) - I don't have an exact asymptotic complexity; it is about ~O( n^2.5 ) ( really fast! )
Coppersmith-Winograd - O( n^2.376 )
Virginia Vassilevska Williams - O( n^2.3727 )
However, since your matrices are not square, you can't use them directly. For example, the Strassen algorithm requires that both matrices are square.
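For reference, a minimal illustrative Strassen recursion for square matrices whose size is a power of two, using plain Python lists ( a teaching sketch, nowhere near MKL speed ):

```python
# Strassen matrix multiplication: 7 recursive products instead of 8 naive ones,
# giving O( n^log2(7) ) ~ O( n^2.807 ). Inputs: n x n lists-of-lists, n a power of 2.

def add(X, Y):
    return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def sub(X, Y):
    return [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def split(M):
    n = len(M) // 2
    return ([row[:n] for row in M[:n]], [row[n:] for row in M[:n]],
            [row[:n] for row in M[n:]], [row[n:] for row in M[n:]])

def join(C11, C12, C21, C22):
    return [r1 + r2 for r1, r2 in zip(C11, C12)] + \
           [r1 + r2 for r1, r2 in zip(C21, C22)]

def strassen(A, B):
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    A11, A12, A21, A22 = split(A)
    B11, B12, B21, B22 = split(B)
    # The seven Strassen products.
    M1 = strassen(add(A11, A22), add(B11, B22))
    M2 = strassen(add(A21, A22), B11)
    M3 = strassen(A11, sub(B12, B22))
    M4 = strassen(A22, sub(B21, B11))
    M5 = strassen(add(A11, A12), B22)
    M6 = strassen(sub(A21, A11), add(B11, B12))
    M7 = strassen(sub(A12, A22), add(B21, B22))
    C11 = add(sub(add(M1, M4), M5), M7)
    C12 = add(M3, M5)
    C21 = add(M2, M4)
    C22 = add(add(sub(M1, M2), M3), M6)
    return join(C11, C12, C21, C22)
```

Strassen trades one multiplication for extra additions at every recursion level, which is why the asymptotic win only pays off for fairly large matrices.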

SKost

Valued Contributor II


03-04-2013
05:17 PM


Take a look ( as soon as you have time ) at two recently created threads related to matrix multiplication:
Forum Topic: **AVX performance question**
Web-link: software.intel.com/en-us/forums/topic/373607
Forum Topic: **Matrix Multiplication**
Web-link: software.intel.com/en-us/forums/topic/365581

SKost

Valued Contributor II


03-04-2013
05:19 PM


Bernard

Black Belt


03-04-2013
10:06 PM


>>>I wouldn't worry about CAS latency numbers for some DIMMs because even with the fastest memory in case of matrix multiplication processing is more CPU-bound then RAM-bound. In order to get results faster you need to use Advanced matrix multiplication algorithms, like:>>>

Yes, that is true. The better option is to invest in a powerful CPU rather than in CAS 7 vs. CAS 9 latency memory. I suppose the programs influenced by memory bandwidth can be described as those with a high ratio of load/store operations.

@hareson

I have found a few links about the impact of memory bandwidth on scientific applications.

Link: stackoverflow.com/questions/2952277/when-is-a-program-limited-by-the-memory-bandwidth


Bernard

Black Belt


03-04-2013
10:21 PM


@hareson

There is also the STREAM synthetic benchmark, which measures memory bandwidth performance.
Link: www.cs.virginia.edu/stream/ref.html
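A toy, pure-Python version of STREAM's "triad" kernel ( a[i] = b[i] + s*c[i] ) shows the idea; the real benchmark is C/Fortran, and a Python run mostly measures interpreter overhead rather than DRAM:

```python
import time

# Toy STREAM triad: a = b + s*c over large arrays.
n, s = 1_000_000, 3.0
b = [1.0] * n
c = [2.0] * n

t0 = time.perf_counter()
a = [bi + s * ci for bi, ci in zip(b, c)]   # the triad kernel
t1 = time.perf_counter()

bytes_moved = 3 * n * 8                     # read b, read c, write a ( 8 B each )
print(f"toy triad rate: {bytes_moved / (t1 - t0) / 1e9:.2f} GB/s")
```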



hareson

Beginner


03-05-2013
01:38 AM


Thank you all for your replies. I'll put my money into CPU power instead of super-optimized RAM, then.

Again, thank you.


Bernard

Black Belt


03-05-2013
03:17 AM


>>>Thank you all for your replies. I'll pool my money into CPU power instead of super optimized RAM then>>>

You are welcome.


>>...Better option is to invest in powerful CPU than in CAS 7 or 9 latency memory...
The price for 16 GB of memory is ~100 USD, and the price of the upgrade from an Intel Core i7-3840QM to an Intel Core Extreme Edition was ~800 USD. The expected performance improvement from using DIMMs with CAS 7 instead of CAS 9 is unknown ( I don't expect it to be greater than 0.5% ), but the performance improvement with the Intel Core Extreme Edition could be greater than 25%.
It is very important that **all DIMMs** have the same CAS latency; if DIMMs with different CAS latencies are used in a system, then the slowest ( the higher number ) CAS latency will be set for all of them.

SKost

Valued Contributor II


03-05-2013
05:25 AM



Bernard

Black Belt


03-05-2013
09:18 PM


>>>Price for 16GB of memory is ~100USD and price for the upgrade from Intel Core i7-3840QM to Intel Core Extreme Edition was ~ 800USD>>>

So which option would you choose?
