Intel’s David Cowperthwaite on Iris™ Pro & Virtualization


David Cowperthwaite of Intel’s Visual and Parallel Computing Group joins Steve Waskul at SIGGRAPH 2014 to talk about the latest trends, including what’s new with Intel® Iris™ and Intel® Iris™ Pro Graphics.

The interview begins with Steve asking David about the trends in workstation and desktop virtualization and remote access. David explains that the primary driving factors are data integrity, centralization and security. Perhaps you have HIPAA obligations (the Health Insurance Portability and Accountability Act, which governs the privacy and security of health information), financial auditing requirements, or significant investments in intellectual property. In these cases there is a strong argument for keeping this information in a secure central repository and extending access via desktop and application virtualization. David explains that secure delivery of virtualized desktops and workstations in these environments protects critical content while easing system management and allowing for greater flexibility within a company. It also lets users work anytime, from anywhere, on a wide range of devices, and lets IT managers automate desktop setup and provisioning.

With remote workstations and outside-in access, Dave explains, you can log in from a wide range of devices and be up and running in seconds. There is no need to download large data sets: they stay in the data center, connected to compute via high-speed links. This also has real implications for boosting the capabilities of remote collaboration.

As the discussion moves forward, David and Steve discuss specifically how Iris™ Pro Graphics fits into the desktop/workstation virtualization paradigm. Several companies are now building high-density servers and appliances that can pack 10 or more Core i7-class processors into a single 1U of rack space. HP’s Moonshot system, for example, has a 4.3U form factor and can support up to 45 cartridges drawn from a range of server, storage and networking options. In a configuration like the one David describes for creating virtual desktops, HP’s system could accommodate up to 45 ProLiant m710 server cartridges with Intel® Xeon® E3-1284L v3 processors featuring integrated Intel® Iris™ Pro Graphics P5200. With Intel® Iris™ Pro Graphics, Dave explains, Intel has doubled the execution units and added eDRAM to the package, so in a relatively small part you can deliver 3D graphics performance comparable to many discrete cards.

To run entire desktops on a node, you can now use virtualization technology to split up the processor and GPU in a way that preserves all of the features and APIs that Intel® Iris™ Pro Graphics offers. Dave specifically mentions a technology Intel developed for the Xen hypervisor called GVT-g, which allows a GPU to be shared among many concurrent users. Each virtual desktop keeps a copy of Intel’s native graphics driver, and on a time-sliced basis an agent in the hypervisor directly assigns the full GPU resource to each virtual machine. So during its “time slice,” each virtual machine gets a full, dedicated GPU. What sounds very promising about this approach is that you should still have the same experience as a user that you had before virtualization. Dave says this is not a compromise or a “throwback” to what you had a few years ago: you can still run all the current APIs such as DirectX 11.1, OpenGL 4.2, OpenCL and the Intel Media SDK tools. So with this technology, the rich, demanding applications of the technical space can run in virtualized, shared, remote environments.
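The time-slicing idea David describes can be sketched in a few lines. This is an illustrative round-robin scheduler, not Intel’s actual GVT-g code: each slice, one virtual machine is handed the whole GPU, and the queue rotates so every VM gets its turn.

```python
from collections import deque

def schedule_gpu(vms, slices):
    """Yield (slice_index, vm) pairs: during each time slice,
    exactly one VM owns the full GPU, round-robin style."""
    queue = deque(vms)
    for i in range(slices):
        yield i, queue[0]
        queue.rotate(-1)  # move the current VM to the back of the line

if __name__ == "__main__":
    for i, vm in schedule_gpu(["vm0", "vm1", "vm2"], 6):
        print(f"slice {i}: GPU -> {vm}")
```

With three VMs, each one gets every third slice, which is why the per-VM experience degrades gracefully rather than losing features: the APIs see a complete GPU, just for a fraction of the time.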

Stepping back and looking at the big picture, David suggests that per 1U of form factor you should be able to virtualize 30 to 34 desktops on current systems, or roughly three desktops per processor. This does of course depend on the applications being run, but even if performance drops to, say, a “Core i5 with HD 4600” experience at the remote desktop, that may be well within what the remote infrastructure can deliver anyway, given that a typical enterprise environment only has so much bandwidth. If users do have a more intensive graphics requirement, David explains that it is still economical to give them an entire node and dedicate a complete GPU to a single virtual machine. Nodes available today offer up to 32 GB of RAM each, so there is enough memory to give one VM on a node 12 GB, for example, while another two VMs on the same node get 8 GB each.
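The memory arithmetic above is easy to check. Here is a minimal sketch (the function name is hypothetical, not a real provisioning tool) that verifies a proposed VM memory split fits within a node’s 32 GB, using the figures from the interview:

```python
def fits_on_node(vm_ram_gb, node_ram_gb=32):
    """Return True if the requested VM memory sizes (in GB)
    fit within the node's total RAM."""
    return sum(vm_ram_gb) <= node_ram_gb

# One 12 GB VM plus two 8 GB VMs = 28 GB, within a 32 GB node.
print(fits_on_node([12, 8, 8]))
# Three 12 GB VMs = 36 GB, which would exceed the node.
print(fits_on_node([12, 12, 12]))
```

In practice a hypervisor also reserves memory for itself, so real headroom would be somewhat less than the raw subtraction suggests.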

As the interview progresses, Steve asks David how IT professionals are responding to this solution. David explains that feedback has been very positive so far; IT professionals like its efficiency. The solutions present the individual nodes on the network as a number of individual machines, so it’s not some strange new beast that has to be managed in a new way. One of the nice things about this approach is that virtual machine images are portable: IT managers can move them from one platform to another, and a VM’s performance scales with the cores, RAM and GPU share allocated to it. The mix of VMs running on these platforms can even change over the course of a single day. In a content production environment, for example, you might have many designers working during the day and a lot of rendering work to perform in the evening. This solution lets IT managers run one set of VMs during the day for the designers and a different set at night tuned for rendering tasks.
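The day/night switch David describes could be driven by something as simple as a time-of-day rule. This is a toy sketch of the idea (the shift hours and VM-set names are assumptions for illustration, not anything from the interview):

```python
def vm_set_for_hour(hour):
    """Pick which set of VM images to run based on the hour (0-23).
    Assumed day shift of 8:00-20:00 runs designer desktops;
    outside that window, the node runs rendering VMs."""
    return "designer-desktops" if 8 <= hour < 20 else "render-nodes"

print(vm_set_for_hour(10))  # mid-morning: designer desktops
print(vm_set_for_hour(23))  # late night: rendering VMs
```

A real deployment would do this through the virtualization platform’s scheduling or orchestration tooling rather than ad hoc code, but the portability of VM images is what makes the swap practical at all.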

From the interview it seems clear that for many organizations the concept of using high-density server configurations for remote access will make sense. We look forward to seeing how this continues to evolve over the coming months.