Intel® oneAPI Math Kernel Library
Ask questions and share information with other developers who use Intel® Math Kernel Library.

Automatic offload using MATLAB

Laila_Q_
Beginner

Hello,

I'm a master's degree student who is planning to conduct an experiment on offloading in mobile cloud computing. However, no simulator gives me the ability to control how tasks are offloaded (basically, how tasks are distributed), and I'm going to use GA and PSO to partition the tasks. I found the automatic offload in MATLAB, which can be enabled with a simple statement. I have an application that was created for the sake of this experiment (meaning I will use MATLAB to partition this application and distribute it to be executed on several computers).

My question is: can I use automatic offload in MATLAB and modify its default behavior to use the above-mentioned algorithms, or should I switch to the Parallel Computing Toolbox to do this?

Your prompt reply is really appreciated.

 

1 Solution
Ying_H_Intel
Employee

Hi Laila,

Right, your question seems to be outside the scope of this forum. And unless you actually choose MATLAB (although it should support distributed computation), it is not really a MATLAB question either.

From your description, there seem to be three key topics: offloading, distributed computation, and offloading in mobile cloud computing.

About offloading, there are various tools and references from third parties, for example:
 
https://en.wikipedia.org/wiki/Computation_offloading
http://ieeexplore.ieee.org/document/7363622/?reload=true
Computation offloading is the technique migrating computations from client to server to exploit the powerful resources of the server. We observed some former approaches to the computation offloading, and found out they placed a huge burden on programmers to write annotations and substantially limited the computations to be offloaded. In order to overcome these problems, we propose an offloading system transferring the states without annotations and giving programmers freedom to use JavaScript features and DOM (Document Object Model) API in the offloaded computations. Our approach is based on the technique called snapshot, which safely saves and restores the states of web applications.

I guess you may have found the article https://software.intel.com/en-us/articles/using-intel-math-kernel-library-with-mathworks-matlab-on-intel-xeon-phi-coprocessor-system, which is why you asked the question here. The Automatic Offload (AO) in that article is mainly based on Intel MKL's ability to automatically offload work to the Intel Xeon Phi coprocessor; a number of MATLAB algorithms call MKL functions underneath, so MATLAB gets the same functionality on the Xeon Phi coprocessor. The MKL AO capability itself comes from the Intel Compiler offload support.
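
To make the mechanism concrete, here is a minimal sketch of what AO looks like at the C/C++ level. It assumes an MKL version with Xeon Phi Automatic Offload support (where mkl_mic_enable() is available); the matrix size is only an illustration, since AO kicks in only for sufficiently large problems:

    #include <cstdio>
    #include <mkl.h>

    int main()
    {
        // Enable Automatic Offload; equivalently, set MKL_MIC_ENABLE=1 in the
        // environment before starting the program (or before starting MATLAB).
        mkl_mic_enable();

        const MKL_INT n = 4096;   // large enough that AO may offload part of the work
        double *a = (double *)mkl_malloc(n * n * sizeof(double), 64);
        double *b = (double *)mkl_malloc(n * n * sizeof(double), 64);
        double *c = (double *)mkl_malloc(n * n * sizeof(double), 64);
        for (MKL_INT i = 0; i < n * n; i++) { a[i] = 1.0; b[i] = 2.0; c[i] = 0.0; }

        // dgemm is one of the AO-enabled routines: MKL decides at run time whether
        // to keep the work on the host or split it with the coprocessor.
        cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    n, n, n, 1.0, a, n, b, n, 0.0, c, n);

        printf("c[0] = %f\n", c[0]);
        mkl_free(a); mkl_free(b); mkl_free(c);
        return 0;
    }

The point is that the calling code does not change at all; that is also why MATLAB picks up AO automatically when its built-in linear algebra goes through MKL.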

For example, if you want to write a C/C++ program with the Intel C/C++ Compiler and offload the work to a GPU or coprocessor, you can learn from:
https://software.intel.com/en-us/articles/offload-runtime-for-the-intelr-xeon-phitm-coprocessor
https://software.intel.com/en-us/articles/how-to-offload-computation-to-intelr-graphics-technology

The Intel(R) C++ Compiler provides a feature that enables offloading general-purpose compute kernels to processor graphics. For example, the program below can be offloaded to processor graphics using a synchronous offload:


    // Inside a host function, with indata and outdata (arrays of rgb pixels)
    // and size_of_image already defined:

    #pragma offload target(gfx) pin(indata, outdata : length(size_of_image))
    _Cilk_for (int i = 0; i < size_of_image; i++)
    {
        process_image(indata[i], outdata[i]);   // one pixel per iteration
    }

    // Code called inside the offloaded loop must also be marked for the graphics
    // target. Here rgb is assumed to be a struct with three float channels.
    __declspec(target(gfx))
    void process_image(rgb &indataset, rgb &outdataset)
    {
        // Sepia-tone transform; each channel is clamped at 255.
        float temp;

        temp = (0.393f * indataset.red) + (0.769f * indataset.green) + (0.189f * indataset.blue);
        outdataset.red = (temp > 255.f) ? 255.f : temp;

        temp = (0.349f * indataset.red) + (0.686f * indataset.green) + (0.168f * indataset.blue);
        outdataset.green = (temp > 255.f) ? 255.f : temp;

        temp = (0.272f * indataset.red) + (0.534f * indataset.green) + (0.131f * indataset.blue);
        outdataset.blue = (temp > 255.f) ? 255.f : temp;
    }
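
For context, a minimal host-side driver around that fragment might look like the sketch below. The wrapper name apply_sepia, the image size, and the rgb definition are assumptions for illustration; in a real program the input buffer would be filled from an actual image:

    struct rgb { float red, green, blue; };   // assumed pixel layout

    __declspec(target(gfx))
    void process_image(rgb &indataset, rgb &outdataset);   // as defined above

    void apply_sepia(rgb *indata, rgb *outdata, int size_of_image)
    {
        #pragma offload target(gfx) pin(indata, outdata : length(size_of_image))
        _Cilk_for (int i = 0; i < size_of_image; i++)
        {
            process_image(indata[i], outdata[i]);
        }
    }

    int main()
    {
        const int size_of_image = 1920 * 1080;   // placeholder size
        rgb *indata  = new rgb[size_of_image];    // would come from a real image
        rgb *outdata = new rgb[size_of_image];

        apply_sepia(indata, outdata, size_of_image);

        delete[] indata;
        delete[] outdata;
        return 0;
    }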

As for distributed computation, see https://en.wikipedia.org/wiki/Distributed_computing. There are many methodologies there too.

With Intel software technology, distribution is mainly done with Intel MPI: you use this high-performance MPI message library to develop applications that can run on multiple cluster interconnects. If you need this, you may refer to https://software.intel.com/en-us/get-started-with-mpi-for-linux.
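
If you go down the cluster route, the basic pattern with Intel MPI (or any MPI implementation) is sketched below; the contiguous split of tasks across ranks is just a placeholder for whatever partitioning your GA/PSO scheme would produce:

    #include <mpi.h>
    #include <cstdio>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank = 0, size = 1;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   // this process's id
        MPI_Comm_size(MPI_COMM_WORLD, &size);   // number of processes in the job

        // Placeholder partitioning: give each rank a contiguous slice of tasks.
        const int total_tasks = 100;
        const int begin = rank * total_tasks / size;
        const int end   = (rank + 1) * total_tasks / size;

        double local_sum = 0.0;
        for (int t = begin; t < end; t++)
            local_sum += t;                     // stand-in for real per-task work

        double global_sum = 0.0;
        MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("processed %d tasks, sum = %f\n", total_tasks, global_sum);

        MPI_Finalize();
        return 0;
    }

You would compile this with the Intel MPI compiler wrappers (for example mpiicpc) and launch it across the machines with mpirun.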

Back to your question about offloading in mobile cloud computing: the first thing to decide is how you build the mobile cloud, for example with a service such as Amazon Web Services or something else. Depending on that service, you may then select the programming model the mobile cloud supports.

Best Regards,

Ying

5 Replies
Shaojuan_Z_Intel
Employee

Hi Laila,

I assume your question is related to a MATLAB toolbox, so you may need to consult with the MATLAB team. Other than that, if you have any concerns about Intel Math Kernel Library, please let us know. Thanks!

Laila_Q_
Beginner

Please provide me with a link.

Laila_Q_
Beginner

I posted the question on the MATLAB forums on Dec 18 and have had no reply.

Please guide me on where I can get the answer!

Shaojuan Z. (Intel) wrote:

Hi Laila,

I assume your question is related to a MATLAB toolbox, so you may need to consult with the MATLAB team. Other than that, if you have any concerns about Intel Math Kernel Library, please let us know. Thanks!

mecej4
Honored Contributor III

There seems to be no connection between what you want to do with Matlab and MKL. The subject of this forum is MKL (Math Kernel Library).

Matlab may use MKL behind the scenes, but the details are of concern only if you wish to call MKL routines from Matlab directly.
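
For instance, a minimal sketch of a MEX gateway that calls an MKL routine directly from MATLAB might look like this (the function name mkl_matmul and the minimal error checking are only for illustration):

    #include "mex.h"
    #include "mkl.h"

    // Hypothetical usage from MATLAB: C = mkl_matmul(A, B)
    void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[])
    {
        if (nrhs != 2)
            mexErrMsgTxt("Usage: C = mkl_matmul(A, B)");

        const mwSize m = mxGetM(prhs[0]);   // rows of A
        const mwSize k = mxGetN(prhs[0]);   // cols of A (= rows of B)
        const mwSize n = mxGetN(prhs[1]);   // cols of B
        if (mxGetM(prhs[1]) != k)
            mexErrMsgTxt("Inner matrix dimensions must agree.");

        plhs[0] = mxCreateDoubleMatrix(m, n, mxREAL);

        // MATLAB stores matrices column-major, which matches CblasColMajor.
        cblas_dgemm(CblasColMajor, CblasNoTrans, CblasNoTrans,
                    (MKL_INT)m, (MKL_INT)n, (MKL_INT)k,
                    1.0, mxGetPr(prhs[0]), (MKL_INT)m,
                         mxGetPr(prhs[1]), (MKL_INT)k,
                    0.0, mxGetPr(plhs[0]), (MKL_INT)m);
    }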

We cannot provide any links because, frankly, your question seems to be outside the scope of this forum.
