I am trying to run my application in native mode on an Intel Xeon Phi coprocessor. When I execute my code there, solve(lprec) returns -1, which is lpsolve.NOTRUN, whereas the same code running on a regular Intel processor works fine and returns lpsolve.OPTIMAL. I am not able to figure out this odd behavior; could you please help with this issue?
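For reference, here is a minimal sketch of how solve() is typically called and its return code checked with the lp_solve C API. The toy model below is only an illustration of the call pattern, not my actual application code:

```c
#include <stdio.h>
#include "lp_lib.h"

int main(void) {
    /* Build a tiny LP: maximize x + y subject to x + y <= 4 (toy model) */
    lprec *lp = make_lp(0, 2);          /* 0 rows, 2 columns */
    if (lp == NULL)
        return 1;

    double obj[] = {0, 1.0, 1.0};       /* index 0 is ignored by lp_solve */
    set_obj_fn(lp, obj);
    set_maxim(lp);

    double row[] = {0, 1.0, 1.0};
    add_constraint(lp, row, LE, 4.0);

    int ret = solve(lp);
    /* lp_lib.h defines OPTIMAL = 0 and NOTRUN = -1, among other codes */
    printf("solve() returned %d\n", ret);

    if (ret == OPTIMAL) {
        double vars[2];
        get_variables(lp, vars);
        printf("x = %g, y = %g\n", vars[0], vars[1]);
    }

    delete_lp(lp);
    return 0;
}
```

This toy program returns OPTIMAL on the host; my question is why the same kind of call path ends up at NOTRUN when the application is built for and run natively on the Xeon Phi.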
Tags: Parallel Computing
0 Replies