This is strange: I am running a program on Windows that reads a file from disk and then executes a number of OpenCL kernels on the CPU device.
When I run it on my 4-year-old laptop (i7 Q720 with a mechanical hard drive), total time is around 90 ms.
When I run it on my 1-year-old desktop (i7 3770 with an SSD), total time is around 600 ms.
The OS is the same (Windows 7), the SDK version is the same (the latest), the compiler is the same (VS 2012), and the code is the same.
I don't know what command-line arguments you're using, but the program defaults to using the GPU, which the 3770 has. My guess is that you're seeing the performance difference between the HD Graphics 4000 on the i7 3770 and the CPU on the i7 Q720. For many reasons, integrated GPUs aren't always faster than just running on the CPU.
@James I can't see the integrated HD 4000 on the chip. I do have a Radeon 7700 card in the machine, so perhaps that prevents me from seeing the HD 4000.
This turned out to be my bad - the problem has been resolved.
Well, this is still a problem for me.
80 ms on the i7 Q720 CPU, and 650 ms on the i7 3770 CPU.
Any ideas on why a four-year-old chip is almost 10 times faster than one from last year?