Q: Let’s start with the background of high dynamic range (HDR).
BILL BAGGELAAR: High dynamic range is not new. It is the way most of us see the world every day. Our eyes are high dynamic range and wide color gamut sensitive equipment. On a bright sunny day, they can see cloud highlight details while still being able to see into the shadows. In dark environments, we can start to see detail in extremely low to practically no light. Some cameras are at least getting closer to being able to detect some of the same things on those sunny days. They have also gotten better in the dark, not as good as our eyes, but certainly better as the sensor technology improves.

On the other hand, the television sets and other displays that we have typically used to watch content have a much lower dynamic range than our eyes, or even than the cameras being used to capture the content. So TV viewing, up to this point, has been what we now call Standard Dynamic Range (SDR), where we have maybe on the order of seven to nine stops of exposure, depending on the display. With these new displays, we are now talking about Ultra-HD (4K/UHD) and High Dynamic Range (HDR) that could potentially go up to 20 stops. But more practically for consumers, we're talking maybe 12 to 14 stops.

We are inherently contrast-sensitive beings, so what this increased dynamic range gives us is an increased sense of sharpness, detail, clarity, color and saturation: all the things that we see in the real world but are not able to realize on today's consumer displays, or even in theaters, for that matter. And since we are incredibly adaptable beings, when we're watching a movie or a TV show in a particularly dark or bright location, our eyes and brain adapt. We may notice right away that the contrast doesn't quite feel right, but we adapt to it pretty quickly and can watch without being distracted by it.
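A "stop" is a doubling of light, so the usable contrast ratio grows as two to the power of the stop count. A quick sketch of the figures cited above (the stop counts are from the interview; the script itself is only illustrative):

```python
def contrast_ratio(stops: int) -> int:
    # Each stop of dynamic range doubles the ratio between the
    # brightest and darkest distinguishable levels.
    return 2 ** stops

# SDR displays: roughly 7 to 9 stops, per the interview
print(contrast_ratio(7), contrast_ratio(9))    # 128:1 up to 512:1
# Practical consumer HDR: roughly 12 to 14 stops
print(contrast_ratio(12), contrast_ratio(14))  # 4096:1 up to 16384:1
# Theoretical HDR ceiling mentioned: 20 stops
print(contrast_ratio(20))                      # 1048576:1
```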
But now, with these higher dynamic range displays, we're able to start giving people more picture detail in order to provide a more immersive experience. As I mentioned, better color saturation goes along with the wider color gamut piece that is part of HDR. So HDR is not just about contrast, or resolution, or even color; it's about being able to combine all three to represent images more accurately on consumer displays. This gives content creators an expanded canvas to represent things that are true to life, or they can even go hyper-real. For finishing movies for theaters, we work in the P3 color space, which is a wider color gamut than the TV standard, Rec. 709. Oftentimes there are very specific colors, particular purples, reds or translucent colors, that cannot be displayed in SDR/Rec. 709, so we have to do additional color correction to nicely squash them down into Rec. 709 for consumer displays. There are all sorts of saturated colors, from blue to orange to purple to green, that we can now represent on consumer displays that we've never been able to represent before, and that provides something closer to the original artistic vision, representing what the DP and Director originally intended for the viewers to see.
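The "squashing" of P3 colors into Rec. 709 can be made concrete with a little linear algebra. The sketch below assumes D65 white for both gamuts and uses simple clipping, which is far cruder than the manual color correction described above; it derives the linear-light RGB-to-XYZ matrices from the published primaries and shows that a fully saturated P3 red has no in-gamut Rec. 709 equivalent:

```python
import numpy as np

def rgb_to_xyz(primaries, white):
    # primaries: [(xr, yr), (xg, yg), (xb, yb)] chromaticities;
    # white: (xw, yw). Standard derivation: scale the primaries'
    # XYZ columns so that RGB = (1, 1, 1) maps to the white point.
    M = np.array([[x / y for x, y in primaries],
                  [1.0, 1.0, 1.0],
                  [(1 - x - y) / y for x, y in primaries]])
    xw, yw = white
    W = np.array([xw / yw, 1.0, (1 - xw - yw) / yw])
    return M * np.linalg.solve(M, W)  # scale each column

D65 = (0.3127, 0.3290)
M_709 = rgb_to_xyz([(0.640, 0.330), (0.300, 0.600), (0.150, 0.060)], D65)
M_p3  = rgb_to_xyz([(0.680, 0.320), (0.265, 0.690), (0.150, 0.060)], D65)

# Linear-light P3 (D65) -> Rec. 709 conversion matrix
p3_to_709 = np.linalg.inv(M_709) @ M_p3

red_p3 = np.array([1.0, 0.0, 0.0])    # fully saturated P3 red
red_709 = p3_to_709 @ red_p3
print(red_709)                        # G and B go negative: out of gamut
clipped = np.clip(red_709, 0.0, 1.0)  # crude "squash"; real grading is smarter
```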
Q: So this is basically getting us as close to reality as is technically possible.
BILL BAGGELAAR: Yes, within certain constraints. There is the reality of staring at the sun on a bright sunny day; it hurts your eyes, and we obviously don't want to get that real. We don't want people to be hurt by the content. So, within certain limits, yes. An additional sense of reality is possible, but I don't know that we necessarily need to focus just on the reality piece. I think content creators can present the vision that they want consumers to experience more accurately, although that doesn't necessarily mean real. It may mean more immersive, or it may even mean creating colors that exist but that you don't normally see in the real world; we can now represent those in a way that we're not able to with today's display technology.
Q: Is it the camera delivering more in order to get that result? Or is it more of a postproduction process, or perhaps both?
BILL BAGGELAAR: A bit of both. Certainly it is better to start with captured images that have more inherent information in them. The cameras have to be able to capture a wider dynamic range and a wider color gamut in order to truly take advantage of that in post. It doesn't mean that you absolutely have to start with wider sources, but there are diminishing returns when starting from "narrower" sources. You're always going to have the potential to get better results when you capture with a camera that has higher resolution, wider dynamic range and wider color gamut than the intended display, which is typically what we do. We've got the Sony cameras that capture in S-Log 1, 2 or 3, and we've got S-Gamut, which is a much wider gamut than P3. And as Rec. 2020 comes along, S-Gamut is actually even wider than Rec. 2020. Film has always been very wide as well, and many other digital cameras have a much wider color gamut than the displays we actually have today, or even those planned for the near future, so starting from a wider color gamut helps to make sure that you can represent those colors on the displays.
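Log capture curves like S-Log exist to fit a scene's many stops into a camera file without crushing shadows or clipping highlights. The toy curve below is not Sony's actual S-Log math (those formulas are published in Sony's technical documentation); it is only a simplified illustration of why a logarithmic mapping preserves detail across a wide stop range:

```python
import math

def toy_log_encode(linear, stops=14):
    # Toy logarithmic capture curve -- NOT Sony's S-Log formula,
    # just the principle. Scene light is clamped to a floor 'stops'
    # below reference white; each doubling of light (one stop) then
    # gets an equal slice of the encoded output range [0, 1].
    floor = 2.0 ** -stops
    v = max(linear, floor)
    return (math.log2(v) + stops) / stops

# Deep shadow, 18% mid-gray, and reference white all land on the
# curve, with the same encoded step per stop everywhere:
for light in [2 ** -12, 0.18, 1.0]:
    print(f"{light:8.6f} -> {toy_log_encode(light):.3f}")
```

Because every stop gets the same share of code values, shadow detail survives quantization as well as highlight detail does, which is the practical point of capturing in a log format before grading.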