Jeffrey, Meats and Chad at Studio

This is Part II of our interview with Jeffrey Jasper, CEO of nphinit LLC, former CTO of New Deal Studios, and winner of the Academy Award for Best Visual Effects on Interstellar. Did you miss Part I? View it here.

Christopher: As I recall, where we left off, we had finished talking about Interstellar and were bringing things up to the present. I’d like to start specifically with the Jon Stewart project, but maybe, and this is your call, we can talk about it in a more general way: ways of producing an animation pipeline with really amazing turnaround times. First the architecture of that, and then the hardware piece, because we focused first on the fastest workstation rendering solutions, then the fastest storage server solutions. As I recall, it was our NVMe Storage Server X10 for the week’s work and the Storage Server Rack X90-SE for archive, using StableBit DrivePool. The creative was a 2D/3D hybrid, mixed reality and animation as well. Can you talk about how that came to be? I think that would be a good place to start, and kind of bring us up to the present.

Jeffrey: Okay. I’ll cover pretty much what I’m allowed to talk about. I was approached a couple of years ago and told that they were going to do a project with Jon Stewart and HBO. It was going to be quick-turnaround daily animation, so they were going to release a funny animated short every day. Pretty much everyone’s first reaction is that that’s impossible, or at least really complicated and expensive, if you want it to look good. They wanted high-quality animation, and it was going to be streaming on HBO Now and Go originally. I was super excited by the challenge. Shannon had done a ton of research already; I just started poring through her research, started researching on my own, and we began figuring out ways we could pull this off. We were exploring both 2D and 3D, and we expected to use both, but initially we focused on 3D because we knew we could find more efficiencies there. We also wanted to set ourselves apart from a lot of the 3D TV content that you see now, which is rendered really flat, with simpler animation because of cost and time issues. So we had a partnership with Otoy and were looking at using their OctaneRender to give a really nice, physically correct, rendered look to the 3D content. Over the course of the project we probably built six different animation pipelines to solve issues as they came up. We had animation styles that were straight-up 2D, like South Park; also what we called two-and-a-half-D, which were 3D characters with 2D features, so cartoon eyes, mouths, stuff like that. We also had full 3D characters that could fit into real-world, photographic environments. We were able to do all of those styles in a single day and churn out three to four shorts of three to five minutes each per day with amazing animation and render quality.
It was a real testament to the incredible talent we had on the show and pushing the technology to the limits.

Christopher: That’s extraordinary. I’ve seen that too with Octane in particular, just because of our client Billy Brooks and all the work he’s done; you really can do a lot with Octane.

Jeffrey: Yeah. Otoy was a partner, and the nice thing about using physically correct rendering was that it set the look apart from what you see on TV for animation. We did experiments with game-engine, real-time rendering, and some of our stuff was real-time rendered, like our 2.5D work. It’s a nice speed benefit, but it has that 3D CGI look that you can’t escape from. With something cartoony it doesn’t really matter, but some of our 3D content was supposed to feel more grounded in reality, and having a physically based renderer helped with that. The lighting is accurate and real and it feels real, even though it’s a stylistic character.

Christopher: Moving into the specifics of the pipeline. Obviously you were using OctaneRender. What other core applications were you using?

Jeffrey: I can’t go into too much detail on that. We actually used different software to cover everything from 2D to 3D, and we did have partnerships with a few different companies who did development work on their software to cover our needs. One thing I can say is that we used Adobe products heavily, which is common, since you’re doing any sort of editorial now in a quick-edit environment, and of course everybody uses Photoshop and those tools. We weren’t using Adobe Character Animator in production, but we had started experimenting with it and were really impressed.

Christopher: Adobe Character Animator? Really?

Jeffrey: I was definitely thinking of converting our straight 2D work over to it. All of the 2.5D work and almost all of our animation was Maya based. We also had a partnership with a company called Reallusion; we used character generation tools from them, as well as their 3D animation tool iClone, during the development process. I am continuing the relationship with them into my next projects with nphinit. They are developing a really fantastic suite for quick animation workflows as well as indie game and XR development. We had lots of custom glue tying everything together, including stuff for facial and body performance capture that one of our guys developed in-house.

Christopher: It’s so funny, just to jump in here. I met James Martin, in the suit, at SIGGRAPH, just out of the blue; he was doing his crazy stuff in his suit, the guy from Reallusion. Then he starts talking: yeah, my company is Reallusion, and I went, what? All of a sudden it’s like, you worked with him and him? I was impressed with the software – everything real time.

Jeffrey: They are doing real-time body and face capture on a simple gaming box, using professional systems like Faceware and Xsens. It was a big push from us to integrate high-quality performance capture with their software. That is something really innovative, because it brings a high level of professional performance capture down to an indie budget. It was a really wonderful partnership, and their developers are amazing. They took our ideas and ran with them to create really innovative stuff, like real-time mixing between visemes and face capture, with masking to limit things to certain parts of the face and ramps to control the expressiveness. We threw them ideas and they were like, yeah, we can do that. They are beasts.

Christopher: He’s quite a character as well, he really is.

Jeffrey: Oh yeah Jimmy is a lot of fun. And the relationship with Faceware and Xsens and stuff was also really nice. They both make wonderful tools that are key to our current work at nphinit.

Christopher: You said Faceware and…

Jeffrey: Faceware and Xsens. Xsens is the body performance capture, and Faceware does markerless facial capture. And then Reallusion took that… because typically, when you’re doing performance capture like that onto 3D characters on a set, you have a box that’s handling the body capture, a box that’s handling face capture, and if you’re doing hand capture, a box that’s doing hand capture. Those feed into a box that aggregates the data into the system, and then you have another box that’s rendering it. It becomes a very complicated and expensive setup, and Reallusion has been able to give you the same system running on a gaming rig with a decent GPU, similar to the little mini systems we bought from… To be able to do full-body performance capture, in real time, with PBR rendering, on a box like that is just amazing.

Christopher: Can you explain PBR?

Jeffrey: PBR is physically based rendering. A lot of the game engines now use what are called PBR materials and PBR rendering to try to bring a level of realism to the render pipeline. So you get much higher-quality rendering, including things like global illumination, as well as more physically correct materials, in real time and in viewports of more traditional 3D software.
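For readers unfamiliar with the term: one standard building block of PBR material models is the Fresnel term, which makes surfaces more reflective at grazing angles. A minimal sketch in Python, purely illustrative and not tied to any particular engine:

```python
def fresnel_schlick(cos_theta: float, f0: float) -> float:
    """Schlick's approximation of Fresnel reflectance, a common
    ingredient of physically based material models. `f0` is the
    reflectance at normal incidence (head-on)."""
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

# A typical dielectric reflects about 4% head-on (cos_theta = 1.0)
# but far more at grazing angles (cos_theta near 0), which is why
# even rough, matte surfaces show bright edge highlights.
head_on = fresnel_schlick(1.0, 0.04)   # 0.04
grazing = fresnel_schlick(0.1, 0.04)   # roughly 0.61
```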

Christopher: I think maybe this would be good before we jump into game engines and VR, which I do want to cover with you. In the workflow you put together for this concept show at HBO, there were three core pieces of hardware that you got from us, and it started with the i-X2 dual Xeon workstation.

i-X2 Mediaworkstation

Jeffrey: Yes, the first ones were the big quad GPU systems, the i-X2 superbeasts.

Christopher: Yes, and then we got you the i7X, several dozen i-X Minis, and also the X24 storage server, which is our beast 24-bay hot-swap NVMe unit. I think it’d be great to talk about how those came in, starting with the i-X2 and how you used them for the show.

Jeffrey: In general, to go through the list: we had the quad-GPU i-X2 systems, the Xeons, which were geared toward high-quality rendering in both Octane and Redshift, and those systems were phenomenally fast. We could distribute renders across them, and because they were so quick, especially in Octane, which scales linearly across all four video cards, they helped us limit the amount of cloud rendering we needed to do. Then we wanted systems that could do lighter-duty render service but be super-fast generalist workstations; those were our lead animation workstations. So we went with the i-X Mediaworkstation with the dual 1080 cards, and I just love those systems. They were a really nice sweet spot for a workstation. Then we had a general-purpose workstation, the i-X Mini with a single 1080 card, which was still incredibly fast in a tiny form factor. For your Maya animator, or someone working in something that doesn’t scale across multiple GPUs, like a game engine, or something like iClone, which is pretty much animation software built on top of a game engine and really can’t take advantage of multiple GPUs, those workstations weren’t doing any Octane or Redshift rendering; they were just phenomenal workstations for everything else, general workhorses. On the storage side, we had the 90-bay…

Christopher: Oh that’s right, you had the big 90… that product is our X90-SE Storage Enclosure. The X90-SE can provide over a Petabyte (1PB) of storage in a single 4U enclosure.

Jeffrey: Yeah, the X90-SE with the little head-unit server that had SSD storage in it. I set it up as our video archiving system. I can’t remember the name of the 1U server that had the SSDs in it.

Christopher: It had ten – that’s our NVMe Storage Server X10, with 10 front hot-swap NVMe SSDs.

Jeffrey: Yes. So with that we had a hot tier: as new video came in, it would go to the hot tier on the X10 flash storage, and then as it became what we call cold data, no longer being touched or used, it would slip over into the X90-SE 90-bay storage unit, which had big six-terabyte enterprise drives from Hitachi. So that was the bulk storage, and the 90-bay is scalable, so you can always add more 90-bay units onto it and build out a ridiculous system. We weren’t doing RAID on that system, which was the unique thing. We did disk pooling with data tiering.

Christopher: Disk pooling – yes, StableBit DrivePool.

Jeffrey: Yes. So we used disk pooling on that, and that can scale to crazy petabyte storage sizes, which was really nice. The reason we did drive pooling versus a RAID-type system was that a SAN solution would be prohibitively expensive at the scale we were working at. We were generating about 300 gigabytes per day on average, so over time a SAN would get very, very expensive, and RAID recovery times were just too dangerous. Even with RAID 6, there’s too much chance of a multi-disk failure and losing data. With drive pooling we could have file-level redundancy, and using flash storage as the hot tier, the NVMe flash memory, solved the performance issues for us. So it was a really nice video archive tool that, compared to what most studios spend on SAN storage, was a fraction of the price. It was just great. Our bulk pipeline was actually cloud based, but at the main studio we had the NVMe Storage Server X24 as a type of edge cache to the cloud.
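The hot/cold tiering Jeffrey describes boils down to a simple policy: files on the fast NVMe tier that haven’t been touched within some window migrate to the bulk pool. A purely illustrative Python sketch of that policy (the directory names and two-week cutoff are assumptions, not how DrivePool itself works internally):

```python
import os
import shutil
import time

COLD_AGE_SECONDS = 14 * 24 * 3600  # assumption: two weeks untouched = "cold"

def tier_cold_files(hot_dir: str, cold_dir: str,
                    max_age: float = COLD_AGE_SECONDS) -> None:
    """Move files that haven't been accessed recently from the fast
    NVMe hot tier to the bulk cold tier (the big drive pool)."""
    now = time.time()
    for name in os.listdir(hot_dir):
        src = os.path.join(hot_dir, name)
        # Only plain files; age is measured from last access time.
        if os.path.isfile(src) and now - os.path.getatime(src) > max_age:
            shutil.move(src, os.path.join(cold_dir, name))
```

In practice a tiering daemon would run this kind of sweep on a schedule, leaving recently used footage on flash and letting the archive pool absorb everything else.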

Christopher: Edge cache, can you just expand on that for a sec?

Jeffrey: So in that case, the cloud is your cold storage, and the active projects we were working on would be synced locally to that 24-bay NVMe Storage Server X24. Once again we were doing disk pooling on that, because given the structure of the data, it was always in constant sync with the cloud; we just needed a fast active disk pool that would cache our active cloud data. The NVMe speed overcame any latency lost by not doing a RAID 10-type storage system. It also simplified the setup, since it’s all software-based storage: there’s no RAID card or RAID card caching, so you don’t have to worry about a RAID card failing and hosing anything up, or having to have a battery backup on the RAID to handle hard shutdowns or anything like that.
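An edge cache in this sense is just a read-through cache in front of cloud storage: serve an asset from the local NVMe pool when it is present, and pull it from the cloud only on a miss. A minimal sketch, where `fetch_from_cloud` is a stand-in for whatever sync client a studio actually uses:

```python
import os

def get_asset(rel_path: str, cache_dir: str, fetch_from_cloud) -> str:
    """Read-through edge cache: return a local path for the asset,
    downloading it from cloud cold storage only on a cache miss."""
    local = os.path.join(cache_dir, rel_path)
    if not os.path.exists(local):
        os.makedirs(os.path.dirname(local), exist_ok=True)
        fetch_from_cloud(rel_path, local)  # cache miss: pull once
    return local
```

Every later request for the same asset is served straight off the fast local pool, which is why the NVMe tier hides most of the cloud round-trip.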

Christopher: I don’t think there’s anybody who doesn’t love the idea of RAID becoming extinct.

Jeffrey: There are different solutions we could have used. If you were to build the same system on Linux, you could do ZFS or Btrfs or any of the other software-based scale-out storage solutions, but disk pooling with NVMe flash storage for active data was a big win for us.

Christopher: That’s great. You also – it’s interesting – you got very excited about our i-X Mediaworkstation, which is the full-size tower that you can expand to 4 GPUs, or the i-X Mini, which is up to two GPUs?

i-X Mediaworkstation

Jeffrey: The one that was kind of my favorite was the i-X, but all of them were perfect for their niche. The big i-X2 quad-GPU Xeon system, you just could not beat for rendering; it was rock solid, a workhorse. For anyone doing heavy GPU rendering, the i-X2 is just amazing. Then, for a general system where you’re doing both animation and some GPU rendering, but maybe the bulk of your rendering is cloud based or on a farm rather than local, the i-X was really great. You can do a local test render and have a really good idea of the result, because the dual 1080s are fast cards, and now you can throw in 1080 Ti cards, or take your pick depending on how much RAM you want. Even the little Minis were great general workstations, great for size and performance; they were unstoppable, and they could be upgraded to a dual-card setup if need be, to bring them up to the level of the i-X systems we had. They were all phenomenal systems and very flexible for growth over time.

Christopher: That’s great. I think the longer I’ve been in the business, the more I want to build systems where, for at least five years, all you need to do is upgrade the drives and the GPU and not worry about anything else.

Jeffrey: The biggest reason working with you guys was so good for the studio was that not only did you have really good hardware, you have amazing support to back it up. A lot of places, you buy your hardware and your support is who knows where, and if you have major issues, it’s a huge hassle. You guys bent over backwards supporting us, and for our setups you were very knowledgeable when we were doing configurations, helping really fine-tune things for the type of work we were doing. The studio system we set up, compared to what most studios are building now, was extremely cost effective; it ended up being way cheaper than what a lot of studios would tend to build out. Just phenomenal bang for your buck.

Christopher: That’s gratifying to hear, thank you for that feedback. We bend over backwards to find the best solution for each situation and each client. So thanks.

That brings us up to the present and one of the burning questions: do you think game engines are going to replace compositing programs, or more?

Jeffrey: No, but game engines will be used more and more as a final rendering solution. Especially if you look at a lot of TV-based rendering of 3D content, or hard-surface rendering, so spaceships and that sort of thing: game engines can pull off a lot of that kind of rendering pretty effectively and are just getting better and better, and over time the difference between a game engine and a current ray tracer is going to diminish greatly. And then I think you’ll see GPU compute on the high end push the ray-trace pipeline that is used now for film and TV work.

Christopher: I missed that, you said push the ray trace?

Jeffrey: Yeah, raise the level of quality and performance. As real time starts to eat away at the bottom for those ray tracers, the companies doing ray-trace rendering are going to have to move up in quality, toward more and more physically based rendering, more like a path tracer such as Octane, but they’re going to have to do it at the scale of production scenes, which can be massive in size. That is kind of the Achilles heel of GPU rendering: a lot of the rendering is limited by…

Christopher: VRAM.

Jeffrey: …the memory that’s on the GPU, but the rendering companies working on this are getting around those limitations. As system memory becomes faster and faster, it can be used to offload from GPU memory onto system memory, to scale to larger production scenes.

Christopher: Exactly. There’s a good point you made: when you want a GPU-based rendering solution, you do have to deal with the VRAM limitation, and being able to work around it is important. For Octane, that’s out-of-core rendering: go into the UI, select out-of-core rendering, and you can corral your assets into system RAM…

Jeffrey: Yep, but you currently take a speed penalty when you do that. As that performance bottleneck goes away, it becomes less of an issue, and you can do it for both geometry and textures, which companies like Redshift are already doing. I know OTOY is working on doing it for everything as well. It just becomes less and less of an issue over time, maybe even light field rendering down the road. Rendering always moves up and gets more compute heavy.
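The out-of-core idea can be illustrated with a toy placement policy: keep the most frequently used assets resident in VRAM up to a budget, and spill the rest to system RAM, where access is slower. This greedy sketch is purely illustrative; real renderers use their own heuristics and page data dynamically rather than partitioning up front:

```python
def partition_by_vram(assets: dict, vram_budget: int):
    """Greedy sketch of out-of-core asset placement.

    `assets` maps asset name -> (size, access_count). Assets are
    considered hottest-first; whatever fits in `vram_budget` stays
    resident on the GPU, the rest spills to system RAM."""
    resident, spilled, used = [], [], 0
    hottest_first = sorted(assets.items(),
                           key=lambda kv: kv[1][1], reverse=True)
    for name, (size, _hits) in hottest_first:
        if used + size <= vram_budget:
            resident.append(name)
            used += size
        else:
            spilled.append(name)
    return resident, spilled
```

The speed penalty Jeffrey mentions comes from every access to a spilled asset crossing the PCIe bus instead of reading local GPU memory.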

Christopher: Before we move on, I’d like to touch on Redshift. I met the founder at a couple of different shows, and the VRAM limitations that are more easily felt in Octane, you don’t experience so much in Redshift. Is that because it’s shuffling assets into different areas, handled in the background? What was your experience?

Jeffrey: They’re both doing it in the background, but in Redshift you’re still going to take a performance hit when you move to using system memory, just because it’s slower to go through the I/O needed to shift things around. Redshift does both geometry and textures out of core, and that means it can currently scale to larger scenes than, say, Octane, but it’s still better to try to fit your scene into GPU memory if at all possible, because that gets you the best performance. The two renderers fit different niches, though. Redshift is a biased renderer, more in the style of a V-Ray-type rendering engine, and the workflow and type of rendering it does is similar to what people are used to with CPU-based biased rendering. Octane is a truly unbiased rendering engine, going for the most physically correct rendering possible. Biased rendering has some cheats, so you can tweak it for speed or style, which helps with production deadlines and such. Octane is much, much simpler, because there aren’t really many settings to tweak; you get beautiful rendering because it’s done physically correctly. They serve different markets. I’d say Redshift’s biggest competitor is V-Ray more than Octane, even though they’re both GPU-based renderers, while Octane is more like Maxwell.

Christopher: Yeah, it’s going for super photorealism. And again, for “our listeners,” the difference between biased and unbiased rendering?

Jeffrey: Unbiased just means you’re calculating as accurately as possible, to generate as physically accurate a solution as possible, pretty much as realistic as humanly possible. With biased rendering, you can introduce cheats into the system, like irradiance caching or that sort of thing, that help with getting rid of noise, increasing render performance, or even shifting to a more stylized look, which is a little more challenging to do with an unbiased renderer. So it’s just a difference in how the lighting is calculated within the scene, geared toward performance, realism, or stylization.
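The trade-off can be demonstrated with a toy Monte Carlo estimator. Clamping rare, very bright “firefly” samples is a classic biased cheat: it cuts noise dramatically but systematically darkens the result. Everything here is illustrative, not how any particular renderer works:

```python
import random

def render_pixel(samples: int, clamp: float = None) -> float:
    """Average many light samples for one pixel. With clamp=None the
    estimator is unbiased (its expectation equals the true value);
    clamping bright samples reduces variance but biases it low."""
    total = 0.0
    for _ in range(samples):
        # Toy light path: a rare, very bright contribution (a "firefly").
        s = 100.0 if random.random() < 0.001 else 0.5
        if clamp is not None:
            s = min(s, clamp)
        total += s
    return total / samples

# True expected value of the unbiased estimator:
# 0.999 * 0.5 + 0.001 * 100.0 = 0.5995
```

Run with enough samples, the unclamped average converges to the true value but stays noisy at low sample counts, while the clamped one is smooth but settles at a value that is simply wrong: exactly the speed-versus-correctness trade Jeffrey describes.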

Christopher: So performance and realism – maybe that’s a good jumping-off point for what’s next. What do you see in the future, and what are you currently working on that you find compelling and interesting, in terms of the technology but also the experience? Anything else you think is important, like the importance of game engines or light field technology awareness.

Jeffrey: I think for a lot of 3D animation work, game engines are really starting to come into their own for that market. If you look at the demo projects that Epic has done with Unreal and that Unity has done, these are really sophisticated, high-quality rendered projects done in game engines. They’re proving that you can turn out high-quality work while rendering in real time on a gaming box. So that’s a big focus for my work currently at nphinit. I’m also very interested, although it’s still very forward-thinking right now, in light field technology, and six degrees of freedom for VR, so being able to move around a fully captured volume. Some people call it holographic capture; I just like to call it reality capture. You’re capturing a moving, physical scene, and you’re able to move about it in real time. Once things like light field headsets start coming out, you’ll be able to relive those moments, and it’s a very compelling idea that has, I think, major implications. Lots of things need to be solved to make it happen, so it’s a very compelling space for future work. nphinit is focused on entertainment technology, which covers a huge swath of things, including XR (which includes VR and AR), 360 video, event-based systems, etc. We are using lessons learned over the years to try to come up with not only innovative solutions but entertaining solutions that have mass-market appeal. We have a VR platform through our partner company in China, SAMOHO, which rides on cable and mobile systems there, so people can enjoy 360 video on their TV, in a headset we make that is like the Oculus Go, or via mobile. We are making content both for China specifically and for international markets. The platform supports passive viewing experiences, so users can watch the cable companies’ content in a virtual theater or watch 360 videos, but it supports interactive experiences as well.
We can scale from a phone all the way up to large venues with our content.

Six degrees of freedom (6DoF) diagram

Christopher: Six DOF, or do they say 6DoF, and light field technology. It seems those are two pieces: light field technology is the visual experience, whereas six DOF is really about the physical, sensory experience, your sense of self in that space. Because I have to say, still to this day, I put on those goggles and I go, this is weird. That’s the first thing that comes to me. It’s okay, it’s interesting, but I’m looking around and going, really?

Jeffrey: There’s kind of an attitude now where you put on the goggles and they have 360 video, and you can look around and it looks amazing and you feel like you’re standing in a movie, but then you want to move and you can’t. It feels limiting. So then you have to move over to game engines and real-time rendering to be able to move around, but you do that and it just feels like you’re in a first-person shooter. It doesn’t feel real; it feels like you’re in a game.

Christopher: It feels like you’re an extra.

Jeffrey: Yeah, an extra in the game.

Christopher: You do, you feel like you’re an extra, like, what do I do?

Jeffrey: Even with interactive VR experiences, there are limitations on how you can move. With the Vive or the Oculus Rift tracking your movement, you’re still somewhat limited, so even moving around in the game, they have to do things like click-jumps or teleports with the hand controllers. It doesn’t feel natural and real, and I think until we get it feeling natural and real, it’s always going to be kind of a tech-y market. A lot of the content now is either a cool, artsy demo, a first-person game, some sort of commercial product tie-in, or just a passive viewing experience. Everybody’s holy grail is that fully immersive experience where you have full freedom to move about a highly realistic environment.

Christopher: There’s something about—I just imagined a farming family in Nebraska looking at those people with the goggles on, going “What?” It’s not like an iPhone, right in your pocket, now…

Jeffrey: The experiences are trying to become more mobile. You get some dude with a computer strapped onto his back and a monster headset on his head, moving around in a motion capture space; it’s early days still. That sort of bulk needs to go away, and for the family in Nebraska, AR might be the gateway, where your phone is your portal into your virtual world. Then, as AR glasses become something that looks more like a standard pair of sunglasses, it’s a much easier sell to a general audience than the VR and mixed-reality headsets coming out now, even the newer ones that don’t need a computer or external tracking. Those will get picked up, but their biggest market right now is going to be gamers, high-end gamers. As those two things converge into something mobile, portable, and even stylish, then you’ll find your more general audience. The cell phone might die with that, and you’ll just have your contact lenses or your sunglasses, and that’s your whole XR experience.

Christopher: And at the same time, something like a phone has become a trusted device, so it might be the portal through which – I think Apple is onto something. It seems like a good prospect, I guess, of the prospects currently in play. How would you differentiate AR from VR?

Jeffrey: Well, VR is pretty much what the title says: it’s a virtual world; you’re going into a completely different space, separated from reality. Whereas with AR, you’re adding generated elements into the existing world. The simplest example is sticking a little character onto the desk in front of you: the AR device handles tracking and surface detection, so the character sticks to the surface and is rendered within the scene with correct lighting. With virtual reality, you’re in a whole different reality. A lot of people think virtual reality has more potential in the future, because the limitations of our reality fall away, but I think they’ll both be equally important for different use cases, and they may even converge somewhat. But yeah, I think that’s the basic gist of it.

Christopher: I think the augmented reality piece you described, a CG character – I think I even saw this on a Facebook group somewhere, where this guy is dancing on somebody’s desk, this little miniature man. That’s a good illustration of augmented reality.

Jeffrey: Yeah. A really good one I saw yesterday was a virtual pet demo. A guy, actually in a real park, had a little virtual stick, and he threw it, and the little virtual dog chased it across the grass of the park, grabbed it, and ran back with it. The virtual pet had a lot of personality.

Christopher: Where did you see that?

Jeffrey: It was one of Apple’s ARKit demos that I came across online. It was a virtual pet game, but they did an AR test with it. The game is actually all CG based, but they made an AR example with the same dog character. Personally, I thought the AR demo was much more compelling than the virtual pet game, because of how cool it is: you can take your virtual pet for a walk in the real world, play games, and see it running around your house, that sort of thing.

Christopher: Wow. And there are other practical applications, like being able to provide a doctor an exact copy of the person he’s going to operate on before the operation…

Jeffrey: Or you can have a doctor inspecting your AR avatar and robots doing the actual operation on your body.

Christopher: Yeah, that too. Well Jeffrey, I think that will do it. Thank you for a wonderful conversation.

Jeffrey: Thanks, talk to you later.