The same computations are run on both a CPU and a GPU, and the execution time is measured on each processor.
Does the difference between these execution times provide useful information for improving CPU and GPU architectures, or for designing a new architecture in either case?
If so, how can this information be used to achieve that?
Could you please point me to some references on this topic?
Thank you in advance.
I look forward to hearing from you.
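To make the comparison concrete, here is a minimal sketch of the CPU-side measurement, assuming a simple dot product as the workload (the function name, problem size, and use of `time.perf_counter` are illustrative choices, not anything from the thread). Measuring the GPU side of the same computation would use the framework's own timers (for example CUDA events or OpenCL profiling queues), since wall-clock timing around an asynchronous kernel launch can under-count the actual device time.

```python
import time

def dot(a, b):
    # Straightforward scalar dot product on the CPU; a GPU version would
    # typically launch one thread per element and reduce the partial products.
    return sum(x * y for x, y in zip(a, b))

n = 100_000  # illustrative problem size
a = [1.0] * n
b = [2.0] * n

t0 = time.perf_counter()
result = dot(a, b)
cpu_seconds = time.perf_counter() - t0

print(f"CPU time: {cpu_seconds:.6f} s, result = {result}")
```

Comparing such timings across problem sizes (not just one point) is what reveals architectural behavior: where the GPU's launch and transfer overhead dominates, where the CPU's caches stop helping, and so on.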
There is a fundamental difference between CPU and GPU design. From a high-level point of view, a CPU such as Intel Haswell is optimized for out-of-order and speculative execution of code that exhibits complex branching. A GPU, on the other hand, is optimized for massively parallel data processing by in-order shader cores running code with little branching.
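The branching distinction above can be sketched in plain Python (purely illustrative, not actual GPU code; both function names are invented for this example). The first version takes a data-dependent branch on every element, the pattern that out-of-order CPUs hide with branch prediction and speculation; the second expresses the same result as one uniform per-element expression, the shape that in-order GPU lanes execute efficiently because every lane does identical work.

```python
def branchy_threshold(xs, t):
    # Data-dependent branch per element: friendly to a speculating,
    # out-of-order CPU core, hostile to lock-step GPU lanes.
    out = []
    for x in xs:
        if x > t:
            out.append(x)
        else:
            out.append(0)
    return out

def branchless_threshold(xs, t):
    # Same result as a uniform expression: (x > t) is 1 or 0, so each
    # element is computed identically with no divergent control flow.
    return [x * (x > t) for x in xs]

data = [3, 7, 1, 9, 4]
print(branchy_threshold(data, 4))    # [0, 7, 0, 9, 0]
print(branchless_threshold(data, 4)) # [0, 7, 0, 9, 0]
```

On real GPU hardware, threads in a warp that take different branch paths are serialized (branch divergence), which is why rewriting branchy code into this uniform form often matters far more on a GPU than on a CPU.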