
Intel Community › Software › Software Development SDKs and Libraries › Intel® oneAPI Math Kernel Library


Hazra__Dhiraj_Kumar

Beginner


12-25-2014 09:21 PM

Using FEAST for large matrix

Hello,

I am presently working with FEAST to find eigenvalues and eigenvectors of a symmetric matrix. I need to solve an N × N problem with N ~ 10^6–10^8.

Now I have a few queries:

1. Since the matrix is this large, it is not possible to allocate the storage on a desktop (it has 8 GB RAM). Is there any way to handle a matrix of this size?

2. The matrix is also expected to be sparse, so I plan to store it in a compressed format, which should save some memory. But the eigenvector matrix is also of dimension N × N, and I have to pre-allocate it before calling FEAST, so compressed storage of the input matrix will not be of much help. Is there any way to solve this problem?

3. Since the FEAST fpm array exposes the 64 iparm entries of MKL PARDISO, I have seen that iparm(60) enables out-of-core (disk-based) storage. Can I use that in FEAST to solve this large problem? However, even in that case I guess I still have to pass the pre-allocated (N × N) eigenvector array to FEAST. Can I somehow use disk space for that as well?
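For scale, a rough back-of-envelope on the eigenvector storage may help frame the question. The sketch below assumes (my reading of the FEAST reverse-communication interfaces, not stated in this thread) that the eigenvector workspace is an n × m0 block of double-precision values, where m0 is the estimated number of eigenvalues in the search interval, rather than a full n × n array:

```python
# Back-of-envelope memory estimate for a dense n x m0 eigenvector block.
# Assumption (hedged): FEAST's workspace is n x m0 doubles, where m0 is
# the estimated eigenvalue count in the search interval, not n x n.

def workspace_gib(n, m0, bytes_per_entry=8):
    """Memory in GiB for an n x m0 dense array of 8-byte doubles."""
    return n * m0 * bytes_per_entry / 2**30

# n = 10^6 with m0 = 100 fits easily; n = 10^8 with the same m0
# already needs tens of GiB, far beyond an 8 GB desktop.
print(f"n=1e6, m0=100: {workspace_gib(10**6, 100):.2f} GiB")
print(f"n=1e8, m0=100: {workspace_gib(10**8, 100):.2f} GiB")
```

Even under this optimistic n × m0 assumption, the upper end of the stated size range dominates desktop memory, which is what motivates the questions above.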

My program works for moderately sized matrices (10000 × 10000).

I would appreciate any help in this regard.

Thanks,

Dhiraj


2 Replies

Alexander_K_Intel2

Employee


12-25-2014 09:29 PM

Hi,

You are correct: the general approach to reducing the memory footprint of internal MKL PARDISO is the out-of-core (OOC) algorithm (iparm(60)). However, memory for the matrix Q still has to be allocated. The only way to reduce the size of Q is to divide the search interval into several subintervals, each with a reduced number of eigenvalues (but you have to know such an estimate for each subinterval). Then call the Extended Eigensolver (EE) functionality for each subinterval in a loop and collect the eigenvalues found in each.
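The subinterval strategy above can be sketched as follows. This is not MKL FEAST itself: SciPy's dense symmetric interval solver (`scipy.linalg.eigh` with `subset_by_value`) stands in for the per-subinterval EE call, purely to illustrate that each solve returns an eigenvector block of width equal to the eigenvalue count in that subinterval only:

```python
# Illustration of the subinterval strategy (SciPy standing in for FEAST):
# instead of one solve over [emin, emax] with a wide eigenvector block,
# split the interval and solve each piece with a narrower block.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n = 200
A = rng.standard_normal((n, n))
A = (A + A.T) / (2 * np.sqrt(n))  # symmetric test matrix, spectrum roughly in [-1.5, 1.5]

emin, emax = -2.0, 2.0
n_sub = 4
edges = np.linspace(emin, emax, n_sub + 1)

all_vals = []
for lo, hi in zip(edges[:-1], edges[1:]):
    # One "EE call" per subinterval; the eigenvector block returned
    # here is only n x (number of eigenvalues found in (lo, hi]).
    vals, vecs = eigh(A, subset_by_value=(lo, hi))
    all_vals.append(vals)

vals_split = np.sort(np.concatenate(all_vals))
vals_full = eigh(A, subset_by_value=(emin, emax), eigvals_only=True)
print("max difference:", np.max(np.abs(vals_split - vals_full)))
```

Because each half-open subinterval tiles the full search interval, the union of the per-subinterval spectra matches a single solve over the whole interval, while the peak eigenvector storage is set by the largest per-subinterval count rather than the total.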

Thanks,

Alex

Hazra__Dhiraj_Kumar

Beginner


12-25-2014 10:30 PM

Hello Alex,

Thanks a lot for your quick reply. I shall try your suggestion.

Thanks,

Dhiraj

