Intel Fellow David Blythe catches up with Steve Waskul at SIGGRAPH 2014 and they discuss a wide range of interesting topics.
The interview starts off with David talking about some of the areas he was hoping to learn more about at SIGGRAPH 2014. The first is display technology, and there is certainly a lot going on in this arena. One interesting topic is autostereoscopy, or displaying images with 3D depth without the use of special headgear or glasses. There are several approaches to this, including eye-tracking technologies and lenticular lenses or parallax barriers that deliver a different image to each eye to create the stereo effect. One area that has really captured David’s attention is the push to 4K displays and the question of whether we are now reaching diminishing returns with further increases in display resolution. If so, the big question becomes what should change about the display next. Would increasing the dynamic range be the next area to exploit, so you get much brighter displays or a much wider range of brightness?
Another topic David finds interesting is how cameras capture images and how the information they gather can be processed and analyzed. He explains that, to him, this is essentially the graphics pipeline in reverse. Instead of taking a synthetic scene and generating a final set of pixels, it starts with the pixels and works back toward the original scene, allowing for analysis along the way. This enables augmented reality, for example, and other ways to enhance the information coming in from a camera. David likes the fact that this process can use similar kinds of hardware, even though it is really a whole different space of algorithms and usages.
Next, Steve brings up some interesting work that Doug Trumbull has done (see his interview on this site) projecting high-frame-rate movies stereoscopically with high-brightness projectors. Doug has suggested that this creates an entirely new form of cinema that engages the audience in a far more compelling way than current cinemas with their relatively low brightness and lower frame rates. David agrees that it will be very interesting to see what happens and how we move next toward improving the visual experience. They again highlight some of the options teams are exploring: increasing the range of colors available, increasing the brightness of the display technology, increasing the resolution of the imagery, and increasing the frame rate at which individual frames are displayed. So the question is what the next big thing will be, or where we should go next in the quest to improve the visual experience via display technology.
As an Intel Fellow, David is in an interesting position, as part of his mission is to look at these technologies and help Intel make the right decisions for its technology road map. Most of the folks reading this will understand that any of these directions adds computational complexity, or cycles to process. For example, if you increase the frame rate of a movie from 24 frames per second to 96 frames per second, you have four times the number of frames to process. If you add stereo images as well, the number of frames you need to create doubles again. Each of these possibilities pushes a system in a different way. Increasing the color range or brightness changes the number of bits rather than the number of pixels, but it still takes more processing power to deal with the additional bits. We agree with David that it will be very interesting to see which direction display technology pursues next, and how tech companies and ISVs respond with solutions that meet content creators’ demand for workflows that deliver content effectively to the new display technologies, however they are improved.
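The arithmetic above can be made concrete with a short sketch. This is purely illustrative and not from the interview; the function name, baseline resolution, and bit depths are assumptions chosen for the example.

```python
# Illustrative sketch (not from the interview): how each display improvement
# multiplies the raw data a system must process per second.

def pixels_per_second(width, height, fps, stereo=False):
    """Raw pixel throughput for a given display configuration."""
    eyes = 2 if stereo else 1
    return width * height * fps * eyes

# Hypothetical baseline: a 1080p movie at 24 fps, mono.
base = pixels_per_second(1920, 1080, 24)

# Raising the frame rate from 24 to 96 fps quadruples the frame count...
hfr = pixels_per_second(1920, 1080, 96)
print(hfr // base)  # 4

# ...and adding stereo doubles it again, for 8x the baseline overall.
hfr_stereo = pixels_per_second(1920, 1080, 96, stereo=True)
print(hfr_stereo // base)  # 8

# Wider color range or higher dynamic range, by contrast, changes bits per
# pixel rather than pixel count: e.g. going from 8 to 12 bits per channel
# is a 1.5x increase in data per pixel at the same resolution.
print(12 / 8)  # 1.5
```

The point the numbers make is the one David raises: frame rate and stereo multiply pixel counts, while dynamic range and color gamut grow the cost per pixel instead.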
During the conversation David points out that another area he is interested in is how, as display resolution goes up, we may not need to do as much complicated shading at each pixel and could instead shade at a more “coarse” level. Mr. Blythe mentions that some papers at SIGGRAPH describe techniques along those lines. Looking at a methodology that is a bit more subtle than brute force could be one way to keep processing horsepower up to speed with increases in display resolution.
You can tell from the interview that David enjoys the challenges of this industry. One area he talks about is the advances in real-time rendering, just one of the many topics being showcased at SIGGRAPH. Areas like global illumination, removing various kinds of aliasing artifacts (temporal and spatial), learning what modern game engines are doing now, and looking at how Intel can support those efforts are all interesting to David.
If you are not involved in these kinds of discussions, you might be amazed at how much intelligent thought and creative energy goes into creating high-fidelity visualization. Watching this interview will give you a small insight into this. Just one example is David’s discussion of tessellation (basically, covering a surface with geometric shapes so that no overlaps or gaps are created). David has been working on tessellation for a long time, and he notes that there was at least one talk at SIGGRAPH 2014 on techniques for using tessellation with characters and subdivision surfaces.
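To make the parenthetical definition a bit more tangible, here is a minimal, hypothetical sketch of tessellation in that sense: covering a rectangle with triangles that neither overlap nor leave gaps. The grid scheme and function name are assumptions for illustration only; real-time pipelines use hardware tessellation stages with far more sophisticated subdivision schemes.

```python
# Minimal tessellation sketch (illustrative assumption, not David's method):
# split the unit square into an nx-by-ny grid, two triangles per cell.

def tessellate_quad(nx, ny):
    """Return (vertices, triangles); triangles index into vertices."""
    verts = [(x / nx, y / ny) for y in range(ny + 1) for x in range(nx + 1)]
    tris = []
    for y in range(ny):
        for x in range(nx):
            i = y * (nx + 1) + x                     # lower-left corner of cell
            tris.append((i, i + 1, i + nx + 2))      # lower-right triangle
            tris.append((i, i + nx + 2, i + nx + 1)) # upper-left triangle
    return verts, tris

verts, tris = tessellate_quad(4, 4)
print(len(verts), len(tris))  # 25 32
```

Because every cell is covered by exactly two triangles that share edges with their neighbors, the triangle areas sum to the area of the square, which is exactly the no-gaps, no-overlaps property the definition describes.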
One great thing you’ll notice during the interview is that David is truly interested in understanding the issues that game developers, researchers, and others are facing so that his team can help drive development forward at Intel. This is something we have seen with many of our SIGGRAPH interviewees. By understanding industry concerns and participating in the latest research, and in events like SIGGRAPH where the visualization community comes together to discuss the issues it faces, possible solutions, and success stories, it seems clear that participants at the show want to do their part to move the entire industry forward.
As the interview moves forward, Steve mentions that he has engaged with several of David’s colleagues from his days working at SGI. In a funny reply, David explains that he was on a panel about GPU computing for mobile, and almost everyone on the panel had worked at SGI at some point in their career. He points out that those were great days, and that many of the people he worked with at SGI are now dispersed throughout the industry, still making contributions.
The conversation concludes with a look at some of the papers being presented at SIGGRAPH 2014 and the interesting work being done with multi-lens cameras and their ability to provide depth information as well as images that are in focus.