
Intel Community › Software › Software Development SDKs and Libraries › Intel® oneAPI Math Kernel Library


zx-81

Beginner


03-13-2012 04:24 PM


Can MKL handle vectors larger than 2 billion?

I'm new to the Intel compiler tools, and I would like to use MKL to accelerate some multi-gigapixel processing software I developed.

However, from a first glance at some of the MKL example programs, it looks like vector/matrix sizes and dimensions are specified as plain "int" values in the C/C++ interface.

This would mean that the length of vectors/arrays is inherently limited to about 2 billion elements, even in the 64-bit version of the library, because in the Windows 64-bit data model a plain "int" is only 32 bits wide. That limit would apply despite the presence of enough RAM (e.g. 128 GB) to store vectors of dozens of billions of floating-point numbers.

So my question is: what is the maximum vector size/length that the most important MKL routines, such as sparse matrix-vector multiply, DFT, and the PARDISO solver, can handle?

Are there special call interfaces (as in the FFTW library) that allow true 64-bit input dimensions?

Thanks in advance for any answers.


1 Reply

Gennady_F_Intel

Moderator


03-13-2012 08:02 PM


--Gennady

