I would like to process, say, 1 GB of data in Intel SGX. I thought of reducing the overhead by dividing the data into chunks of, say, 8 MB. For each chunk, my application enters the enclave, processes the data, and exits. The process is repeated until the whole 1 GB has been processed. Will this approach take less time than processing all the data at once?
Processing 1 GB of data at once is not possible on Windows, as there is no EPC paging support in the Intel CPUs currently on the market. Dynamic paging support will come with the SGX v2 instruction set (hardware support), which has not yet been released.
Linux does support EPC paging, so it is possible to process 1 GB of data there, but performance would suffer badly because of EPC page swap-in/swap-out. Processing the data in chunks is a much better solution for the scenario you describe.
Yes, provided each chunk has no dependency on the other chunks. Note that the maximum EPC size on the platform may be only ~100 MB; on Linux, the driver will encrypt EPC pages and page them out to system memory once usage exceeds the EPC size.