I am a university researcher working on secure sensor data processing. We've implemented some of our (mathematically intensive) algorithms inside an enclave (compiled with the provided Intel compiler and trusted Intel standard C/C++ library) and compared this against a non-enclave implementation (compiled with the Microsoft Visual C++ compiler and Microsoft's standard libraries).
We're seeing substantial performance increases with the SGX implementation (typically >10x increases).
We had informal discussions with an Intel rep approximately 12 months ago, who stated that the Intel C/C++ libraries are optimised with IPP, which benefits floating-point and string-based operations (on which we rely heavily).
Is this correct, and could someone elaborate on it? Finally, what would be the best way to 'equalise' this performance (e.g. something analogous to using the SGX libraries in the untrusted implementation), so we can gauge the overhead of SGX more accurately?
Hope someone can help.
EDIT: Specifically, we make heavy use of vector data structures (including matrices implemented as vector<vector<float>>), floating-point math (primarily multiplication and exponentiation, including roots), and string manipulation (atoi, strtok and strdup).
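To make the comparison concrete, a minimal microbenchmark kernel covering that workload might look like the sketch below (the function names are illustrative, not from our codebase). Running an identical kernel in both the trusted and untrusted builds helps separate library effects from SGX overhead.

```cpp
#include <cmath>
#include <cstdlib>
#include <cstring>
#include <vector>

// Matrix as vector<vector<float>>, matching the layout described above.
using Matrix = std::vector<std::vector<float>>;

// Naive multiply: exercises the floating-point multiply/add paths.
Matrix matmul(const Matrix& a, const Matrix& b) {
    size_t n = a.size(), m = b[0].size(), k = b.size();
    Matrix c(n, std::vector<float>(m, 0.0f));
    for (size_t i = 0; i < n; ++i)
        for (size_t j = 0; j < m; ++j)
            for (size_t p = 0; p < k; ++p)
                c[i][j] += a[i][p] * b[p][j];
    return c;
}

// k-th root via pow, exercising the exponentiation paths.
float root(float x, float k) { return std::pow(x, 1.0f / k); }

// Parse comma-separated integers with strdup/strtok/atoi,
// the same string routines named in the question.
std::vector<int> parse_csv(const char* line) {
    std::vector<int> out;
    char* copy = strdup(line);
    for (char* tok = strtok(copy, ","); tok; tok = strtok(nullptr, ","))
        out.push_back(atoi(tok));
    free(copy);
    return out;
}
```

Timing these three functions separately in each build would also show whether the speed-up comes mainly from the math routines or from the string routines.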
On Windows, sgx_tstdc.lib is linked with the Intel optimized math, string and cryptography libraries.
To compare performance, I'd first suggest comparing an enclave running in HW mode against one running in Simulation mode. Note that you have to change the Simulation build settings, since compiler optimizations are turned off there by default. To ensure you have the same compiler settings, you can create a new build profile based on Prerelease and change the linker libraries to the Simulation ones.
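One way to keep the measurement identical across the two builds is a small timing harness like the sketch below. Note this times from the untrusted side: in the SGX build you would pass a lambda that wraps the ECALL, since timekeeping facilities inside an enclave are restricted.

```cpp
#include <chrono>

// Average wall-clock time of a callable, in milliseconds.
// In the SGX build, wrap the ECALL in the lambda so the enclave
// transition cost is included in the measurement.
template <typename F>
double time_ms(F&& f, int reps = 100) {
    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < reps; ++i) f();
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(t1 - t0).count() / reps;
}
```

Using the same `reps` count and the same input data in HW mode, Simulation mode, and the plain Visual C++ build gives three directly comparable numbers.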
I can think of a few possible suspects, the compiler and libraries chief among them. Instead of changing the SGX implementation, you can change your regular implementation to use the Intel compiler and libraries; free packages are available for academic use.
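For the untrusted build, that switch is mostly a matter of build configuration. A sketch using Intel's classic compilers is below; the exact flag names vary by compiler version, so check the documentation for the one you install.

```shell
# Windows: Intel C++ Classic (icl) - optimize for the host CPU
# and link against the IPP libraries
icl /O2 /QxHost /Qipp app.cpp

# Linux equivalent with icc
icc -O2 -xHost -ipp app.cpp -o app
```

Building the untrusted implementation this way links it against the same IPP-optimized math and string routines the trusted libraries use, which should largely 'equalise' the library side of the comparison.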