Hello all,

I'm building a new workstation for solving a large number of banded matrices. I only have to solve for a single solution, and I use LU decomposition of the matrix via the LAPACK function `cgbtrf`. The matrices are around 300,000 x 1024 and upwards.
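For reference, here is a minimal sketch of a banded solve from Python via SciPy's `solve_banded`, which calls LAPACK's `*gbsv` driver, itself built on the same `*gbtrf`/`*gbtrs` family mentioned above. A tiny tridiagonal system stands in for the real 300,000-row problem:

```python
import numpy as np
from scipy.linalg import solve_banded

# Tridiagonal system in LAPACK banded storage: one superdiagonal (u=1)
# and one subdiagonal (l=1), rows ordered from uppermost diagonal down.
ab = np.array([
    [ 0.0, -1.0, -1.0, -1.0],  # superdiagonal (first entry unused)
    [ 4.0,  4.0,  4.0,  4.0],  # main diagonal
    [-1.0, -1.0, -1.0,  0.0],  # subdiagonal (last entry unused)
])
b = np.array([1.0, 2.0, 2.0, 1.0])

# (l, u) = (1, 1): one band below and one above the diagonal.
x = solve_banded((1, 1), ab, b)

# Cross-check against a dense solve of the same matrix.
dense = np.diag([4.0] * 4) + np.diag([-1.0] * 3, 1) + np.diag([-1.0] * 3, -1)
assert np.allclose(dense @ x, b)
```

Banded storage keeps only the diagonals, which is what makes the 300,000 x 1024 problem fit in memory at all.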

For this algorithm, is higher frequency with higher CAS latency better than lower frequency with lower CAS latency? That is, will bandwidth or fetch delay be my bottleneck?

I'm considering 1066 MHz CAS 7 vs. 1333 MHz CAS 9 in 8 GB modules, to leave room for expanding beyond my initial 64 GB.

Or will this not affect anything, because it will all be bottlenecked by the CPUs (2x Xeon E5-2640)?

Best regards

Henrik Andresen

See the forum topic **Laptop SODIMM memory with 9-9-9-24 latency vs. 10-10-10-27 latency (Non-ECC)** at software.intel.com/en-us/forums/topic/364897 and follow a couple of the Wikipedia links there, since they provide more technical detail on how DIMMs with different CAS latencies and frequencies can be compared.
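The comparison method those links describe boils down to converting CAS latency from clock cycles into nanoseconds. A quick sketch of that arithmetic for the two kits from the original question (DDR3-1066 CL7 vs. DDR3-1333 CL9):

```python
# First-word latency in nanoseconds: CAS cycles times the I/O clock period.
# For DDR memory the I/O clock is half the transfer rate (MT/s / 2).

def cas_latency_ns(transfer_rate_mts: float, cas_cycles: int) -> float:
    io_clock_mhz = transfer_rate_mts / 2.0   # DDR: two transfers per clock
    cycle_time_ns = 1000.0 / io_clock_mhz    # one clock period in ns
    return cas_cycles * cycle_time_ns

print(f"DDR3-1066 CL7: {cas_latency_ns(1066, 7):.2f} ns")
print(f"DDR3-1333 CL9: {cas_latency_ns(1333, 9):.2f} ns")
```

The two options come out at roughly 13.1 ns vs. 13.5 ns of absolute latency, so the DDR3-1333 kit gives ~25% more bandwidth at essentially the same access delay.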

A **300000x1024** matrix (let's say a double-precision floating-point type) "equals", in terms of how much memory is needed, a square matrix with dimensions **17527x17527**, and it will need ~2.46 GB of memory. So it is significantly less than the 64 GB of memory available on your computer. A matrix with dimensions **17527x17527** could be processed on a computer with ~8 GB of memory, but as fast a CPU as possible is needed.
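The size arithmetic above can be checked directly, using the figures from the post (double precision, 8 bytes per element):

```python
import math

elements = 300_000 * 1024          # entries in the original matrix
bytes_needed = elements * 8        # 8 bytes per double-precision value

# Side length of a square matrix holding the same number of entries.
square_side = math.isqrt(elements)

print(f"square equivalent: {square_side} x {square_side}")
print(f"memory needed: {bytes_needed / 1e9:.2f} GB")   # ~2.46 GB
```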

Hi Sergey,

Thank you for your replies.

The reason for my memory selection is how I expect the LU decomposition to work. The wiki pages state that memory controllers choose timings according to CAS latency, but for read/write operations it will still matter unless much of the data is cached beforehand. For 1k rows, the data usage will be 8 KB per row and 8 MB for all data relevant to the operations on a single matrix, assuming the out-of-reach data has to be read anyway.
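Those working-set figures follow directly from the row width; a quick check (1024 double-precision values per row and a 1024-row active band, as in the post):

```python
row_values = 1024                        # entries per row
bytes_per_value = 8                      # double precision

row_bytes = row_values * bytes_per_value # bytes for one row
window_bytes = row_bytes * 1024          # 1024-row active band

print(f"per row:  {row_bytes // 1024} KB")     # 8 KB
print(f"per band: {window_bytes // 2**20} MB") # 8 MB
```

An 8 MB band is already larger than the per-core L2 on an E5-2640, which is exactly why cache behaviour matters here.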

In the multi-threaded case the data needs to be shared between CPUs, which makes the data size exceed the cache available on any one CPU. So, depending on the algorithm, it will be one or the other, which is why I thought to ask in case anyone has tested this.

Regarding memory size: I need to keep other data in memory and run multiple cases at once for full CPU utilization, which is how I will benefit from the 64 GB.

But as you write, I might be overshadowed by other effects.

Thank you

>>...A matrix with dimensions **17527x17527** could be processed on a computer with ~8 GB of memory, but as fast a CPU as possible is needed...

I'd like to give a short explanation of why I did that verification: I simply wanted to see how the initial size of your matrix compares to a square matrix, and ~**16Kx16K** is what I use most of the time. Now let's get practical; here are real numbers for multiplication of **16Kx16K** matrices using different algorithms:

**[ Algorithm 1 - Single-threaded - 'double' data type ]** Matrix sizes: 16384x16384. Time to calculate: 3379.5433 sec

**[ Algorithm 2 - Single-threaded - 'double' data type ]** Matrix sizes: 16384x16384. Time to calculate: 78.4685 sec

As you can see, the 2nd algorithm is ~43x faster. Of course, multi-threaded implementations of both algorithms will run faster, but they are both CPU-bound (!).

>>...But as you write, I might be overshadowed by other effects...

I wouldn't worry about CAS latency numbers for the DIMMs, because even with the fastest memory, matrix multiplication is more CPU-bound than RAM-bound. To get results faster you need to use advanced matrix multiplication algorithms, such as:

- Strassen - O( n^2.8074 )
- Strassen-Winograd - O( n^2.8074 )
- Kronecker-based ( tensor product ) - I don't have an exact asymptotic complexity; it is about ~O( n^2.5 ) ( really fast! )
- Coppersmith-Winograd - O( n^2.3760 )
- Virginia Vassilevska Williams - O( n^2.3727 )

However, since your matrices are not square, you can't use them directly. For example, the Strassen algorithm requires that both matrices are square.
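The gap between the two algorithms above can be reproduced in miniature: a naive triple-loop product against an optimized BLAS call (here via NumPy's `@`, on small matrices so the sketch runs quickly; the exact speedup depends on the machine, so no timings are claimed here):

```python
import numpy as np

def naive_matmul(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Textbook O(n^3) triple-loop matrix product."""
    n, k = a.shape
    k2, m = b.shape
    assert k == k2
    c = np.zeros((n, m))
    for i in range(n):
        for j in range(m):
            s = 0.0
            for p in range(k):
                s += a[i, p] * b[p, j]
            c[i, j] = s
    return c

rng = np.random.default_rng(42)
a = rng.standard_normal((64, 64))
b = rng.standard_normal((64, 64))

# Both routines compute the same product; the BLAS-backed operator is
# dramatically faster on large inputs, yet both are CPU-bound.
assert np.allclose(naive_matmul(a, b), a @ b)
```

The point stands regardless of the memory kit: cache-blocked, vectorized BLAS code dominates the naive loop long before DRAM timings enter the picture.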

Forum Topic: **AVX performance question** Web-link: software.intel.com/en-us/forums/topic/373607

Forum Topic: **Matrix Multiplication** Web-link: software.intel.com/en-us/forums/topic/365581

>>>I wouldn't worry about CAS latency numbers for the DIMMs, because even with the fastest memory, matrix multiplication is more CPU-bound than RAM-bound. To get results faster you need to use advanced matrix multiplication algorithms...>>>

Yes, that is true. The better option is to invest in a more powerful CPU than in CAS 7 vs. CAS 9 memory. I suppose the programs influenced by memory bandwidth could be described as those with a high ratio of load/store operations.

@hareson

I have found a few links about the impact of memory bandwidth on scientific applications:

Link: stackoverflow.com/questions/2952277/when-is-a-program-limited-by-the-memory-bandwidth

@hareson

There is also the STREAM benchmark, which measures memory bandwidth performance.
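STREAM itself is a C benchmark, but its core idea can be sketched in a few lines: time a large array copy and divide the bytes moved by the elapsed time. This is only a rough stand-in for the real benchmark, not a replacement:

```python
import time
import numpy as np

n = 20_000_000                      # ~160 MB per array, well beyond cache
a = np.ones(n)
b = np.empty_like(a)

start = time.perf_counter()
np.copyto(b, a)                     # STREAM "Copy" kernel: b[i] = a[i]
elapsed = time.perf_counter() - start

# Copy reads one array and writes one: 2 * 8 bytes moved per element.
bandwidth_gbs = 2 * 8 * n / elapsed / 1e9
print(f"approximate copy bandwidth: {bandwidth_gbs:.1f} GB/s")
```

The real STREAM also measures Scale, Add, and Triad kernels and pins threads to cores; this sketch only gives a ballpark single-threaded copy figure.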

Thank you all for your replies. I'll put my money into CPU power instead of super-optimized RAM, then.

Again, thank you.

>>>Thank you all for your replies. I'll put my money into CPU power instead of super-optimized RAM, then>>>

You are welcome.

It is best when **all DIMMs** have the same CAS latency; if DIMMs with different CAS latencies are used in a system, then the slowest ( the higher number ) CAS latency will be applied to all of them.

>>>Price for 16GB of memory is ~100 USD and the price for the upgrade from an Intel Core i7-3840QM to an Intel Core Extreme Edition was ~800 USD>>>

So which option would you choose?
