In one of the links on virtualization - http://www.intel.com/content/www/us/en/virtualization/virtualization-technology/intel-virtualization-technology.html - I see the following statement: "I/O virtualization features facilitate offloading of multi-core packet processing to network adapters as well as direct assignment of virtual machines to virtual functions, including disk I/O". I am specifically interested in understanding the details of virtualized access to a disk for I/O. Basically, I am trying to run 2 Linux kernel instances on a single E5-2667 CPU. Since the Linux instances run natively on the CPU cores (no hypervisor), I want both of them to access different partitions on the same physical disk at the same time using some form of virtualization. I am looking for details on what kind of virtualization is available for disk access.
Hello, you might be looking for this information on Intel(R) Data Direct I/O Technology (Intel(R) DDIO). Here are some details:
Intel® Data Direct I/O Technology (Intel® DDIO) is a feature introduced with the Intel® Xeon® processor E5 family and Intel® Xeon® processor E7 v2 family as a key feature of Intel® Integrated I/O. Intel DDIO is the latest Intel innovation in intelligent, system-level I/O performance improvements. Intel created Intel DDIO to allow Intel® Ethernet Controllers and adapters to talk directly with the processor cache of the Intel Xeon processor E5 family and Intel Xeon processor E7 v2 family. Intel DDIO makes the processor cache the primary destination and source of I/O data rather than main memory, helping to deliver increased bandwidth, lower latency, and reduced power consumption.
Intel DDIO re-architects the flow of I/O data into and out of the processor
The “classic” I/O mode—prior to Intel DDIO—dates from an era when I/O was slow and processor caches were a small, scarce resource. Classically, incoming data from an Ethernet controller or adapter went first into the host processor's main memory. When the processor wanted to operate on the data, it then read the data into cache from memory. Thus, a memory write and a memory read occurred before the processor even did anything with the data. Conversely, outgoing data from the processor to the external I/O first triggered a read from memory to cache, followed by a write back to memory as the data was evicted from the cache. In architectures prior to the Intel Xeon processor E5 family and Intel Xeon processor E7 v2 family, an additional, speculative read would be triggered from the I/O hub.
The world has changed. 10 Gigabit Ethernet (10 GbE) is being adopted broadly in the data center, and with the Intel Xeon processor E5 family and Intel Xeon processor E7 v2 family, last-level cache is now 20 MB, no longer a scarce resource. The insight behind Intel DDIO is to recognize that the classical model’s multiple memory accesses, which degrade performance and increase system power consumption, can be eliminated with a more efficient flow of I/O data by making the processor cache the primary destination and source of I/O data.
Increased bandwidth, reduced latency, and reduced power consumption
The mix of these benefits in a particular server or workstation depends on the workload.
Thanks for the response, but I was looking for information on how disk virtualization can be achieved for the specific scenario I listed. The part of the statement I am interested in is "direct assignment of virtual machines to virtual functions, including disk I/O"; it would be helpful if you could throw some light on that.
I don't have an example to show you, but I found an article that may give you some clues: https://software.intel.com/en-us/articles/intel-virtualization-technology-for-directed-io-vt-d-enhan...
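To make the VT-d article a bit more concrete: on Linux, "direct assignment" of a storage device usually means unbinding a whole PCI storage controller from its host driver and handing it to the `vfio-pci` driver so one guest owns it directly (this requires VT-d enabled in the BIOS and `intel_iommu=on` on the kernel command line). The sketch below only prints the sysfs writes involved rather than executing them; `vfio_bind_commands` is a hypothetical helper for illustration and `0000:03:00.0` is a placeholder PCI address, not a device from your system.

```shell
#!/bin/sh
# Hedged sketch of VT-d direct device assignment on Linux.
# vfio_bind_commands only PRINTS the sysfs writes that would rebind a PCI
# device to vfio-pci; run them as root against a real device at your own
# risk. "0000:03:00.0" is a placeholder PCI address.
vfio_bind_commands() {
    dev="$1"
    # Detach the device from its current host driver:
    echo "echo $dev > /sys/bus/pci/devices/$dev/driver/unbind"
    # Force the next probe to use vfio-pci:
    echo "echo vfio-pci > /sys/bus/pci/devices/$dev/driver_override"
    # Trigger a re-probe so vfio-pci claims the device:
    echo "echo $dev > /sys/bus/pci/drivers_probe"
}

# Quick check that an IOMMU is active on this host (the directory exists
# and is populated only when VT-d/IOMMU support is enabled):
if [ -d /sys/kernel/iommu_groups ]; then
    echo "IOMMU groups directory present"
fi

vfio_bind_commands 0000:03:00.0
```

Note this assigns the whole controller to one guest; it does not by itself split one disk's partitions between two bare-metal kernel instances, which is closer to what the original question describes.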