Hello,

I'm a master's degree student who wants to conduct an experiment on offloading in mobile cloud computing. However, no simulator gives me the ability to control how tasks are offloaded **(basically, how tasks are distributed)**, and I am going to use GA and PSO to partition the tasks. I found automatic offload in Matlab, which can be done with a simple statement. I have an application that was created for the sake of this experiment **(this means I will use Matlab to partition this application and distribute it for execution on several computers).**

My question is: can I use automatic offload in Matlab and modify its default behavior to use the above-mentioned algorithms, or shall I switch to the Parallel Computing Toolbox to do so?

Your prompt reply is really appreciated.

Hi Laila,

Right, your question seems to be outside the scope of this forum. And unless you are committed to Matlab (which does support distributed computation), it is not even a Matlab question.

According to your description, there are three key phrases: **offload**, **distribute computation**, and **offloading in mobile cloud computing**.
Regarding offloading, there are various tools from third parties; see, for example,
https://en.wikipedia.org/wiki/Computation_offloading

http://ieeexplore.ieee.org/document/7363622/?reload=true

Computation offloading is the technique migrating computations from client to server to exploit the powerful resources of the server. We observed some former approaches to the computation offloading, and found out they placed a huge burden on programmers to write annotations and substantially limited the computations to be offloaded. In order to overcome these problems, we propose an offloading system transferring the states without annotations and giving programmers freedom to use JavaScript features and DOM (Document Object Model) API in the offloaded computations. Our approach is based on the technique called snapshot, which safely saves and restores the states of web applications.

I guess you may have found the article https://software.intel.com/en-us/articles/using-intel-math-kernel-library-with-mathworks-matlab-on-i... and so asked the question here. The AO (Automatic Offload) in that article is mainly based on Intel MKL's capability to automatically offload work to the Xeon Phi coprocessor; some Matlab operations call MKL functions underneath, so Matlab gets the same functionality on the Xeon Phi coprocessor. The MKL AO capability in turn comes from the Intel Compiler.
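For context, MKL Automatic Offload on a Xeon Phi coprocessor is typically switched on through environment variables rather than code changes. A minimal sketch (the work-division value and report level below are illustrative, and assume a system with an installed Xeon Phi coprocessor):

```shell
# Enable MKL Automatic Offload to the Xeon Phi coprocessor
export MKL_MIC_ENABLE=1
# Optionally steer the host/coprocessor work split (fraction of work sent to the card)
export MKL_MIC_WORKDIVISION=0.5
# Print a per-call report of what was actually offloaded
export OFFLOAD_REPORT=2
# Then launch Matlab as usual; MKL-backed operations (e.g. large matrix
# multiplications) may now offload automatically
matlab -nodisplay
```

Note that this only controls *whether and how much* MKL offloads, not *which* tasks are offloaded, so it would not by itself give you the GA/PSO-driven partitioning you describe.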

For example, if you want to write a C program with the Intel C/C++ Compilers and offload tasks to a GPU or coprocessor, you can learn from:

https://software.intel.com/en-us/articles/offload-runtime-for-the-intelr-xeon-phitm-coprocessor

https://software.intel.com/en-us/articles/how-to-offload-computation-to-intelr-graphics-technology

The Intel(R) C++ Compiler provides a feature which enables offloading general-purpose compute kernels to processor graphics.

- This program can be offloaded to processor graphics using synchronous offload as follows:

```cpp
#pragma offload target(gfx) pin(indata, outdata : length(size_of_image))
{
    _Cilk_for (int i = 0; i < size_of_image; i++)
    {
        process_image(indata[i], outdata[i]);
    }
}

__declspec(target(gfx))
void process_image(rgb &indataset, rgb &outdataset)
{
    /* Sepia-tone transform: weighted mix of the input channels,
       clamped to the 255 maximum */
    float temp;

    temp = (0.393f * indataset.red) + (0.769f * indataset.green) + (0.189f * indataset.blue);
    outdataset.red = (temp > 255.f) ? 255.f : temp;

    temp = (0.349f * indataset.red) + (0.686f * indataset.green) + (0.168f * indataset.blue);
    outdataset.green = (temp > 255.f) ? 255.f : temp;

    temp = (0.272f * indataset.red) + (0.534f * indataset.green) + (0.131f * indataset.blue);
    outdataset.blue = (temp > 255.f) ? 255.f : temp;
}
```

As for **distributing**, see https://en.wikipedia.org/wiki/Distributed_computing. There are lots of methodologies there too.

With Intel software technology this is mainly done via Intel MPI: you use this high-performance MPI message library to develop applications that can run on multiple cluster interconnects. If you need this, you may refer to https://software.intel.com/en-us/get-started-with-mpi-for-linux.
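As a rough sketch of what that looks like in practice (the source file name, hostfile, and rank count below are placeholders, and the commands assume an Intel MPI installation with its environment sourced):

```shell
# Compile an MPI-based C program with the Intel MPI compiler wrapper
mpiicc -o my_app my_app.c

# Launch 4 MPI ranks across the machines listed in hosts.txt
mpirun -n 4 -f hosts.txt ./my_app
```

Each rank runs the same binary on a different machine (or core) and coordinates through MPI message passing, which is the usual building block for distributing work across several computers.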

Back to your question about offloading **in mobile cloud computing**: the first question is how you build the mobile cloud. Using tools like Amazon Web Services, or something else? Depending on the services, you may consider selecting the pr...

Best Regards,

Ying

Hi Laila,

I assume your question is related to a Matlab toolbox, so you may need to consult the Matlab team. Other than that, if you have any concerns about the Intel Math Kernel Library, please let us know. Thanks!

Please provide me with a link

I posted the question in the Matlab forums on Dec 18 and have had no reply.

Please guide me to where I can get an answer!

Shaojuan Z. (Intel) wrote:

Hi Laila,

I assume your question is related to a Matlab toolbox, so you may need to consult the Matlab team. Other than that, if you have any concerns about the Intel Math Kernel Library, please let us know. Thanks!

There seems to be no connection between what you want to do with Matlab and MKL. The subject of this forum is MKL (Math Kernel Library).

Matlab may use MKL behind the scenes, but the details are of concern only if you wish to call MKL routines from Matlab directly.

We cannot provide any links because, frankly, your question seems to be outside the scope of this forum.

