Jeffrey Jasper
Digital Effects Supervisor on Academy Award-winning Interstellar
The Work
CTO: New Deal Studios and HBO Projects
Jeffrey talks about his Oscar-winning work, GPU rendering on our workstations, asset management with our NVMe storage servers, and how they help him lead game-changing, ultra-fast content creation teams.
The Workflow Details
Fastest workstation rendering, fastest storage servers, animation pipelines
Christopher: I think I'd like to start specifically with the Jon Stewart project, but maybe, and this is your call, we can talk about it in a more general way: ways of looking at producing an animation pipeline with really amazing turnaround times. First the architecture of that, but then also the hardware piece, because we focused first on the fastest workstation rendering solutions, then the fastest storage server solutions. As I recall, our NVMe Storage Server X10 for the week's work and the Server Rack X90-SE for archive using StableBit DrivePool. The creative was a 2D/3D hybrid, mixed reality and animation as well. Can you talk about how that came to be? I think that would be a good place to start and kind of bring us up to the present.
Jeffrey: Okay. I'll cover what I'm allowed to talk about, which is pretty much that I was approached a couple of years ago and told they were going to do a project with Jon Stewart and HBO. It was going to be quick-turnaround daily animation, so they were going to release a funny animated short every day.
Pretty much everyone's first reaction is that that's impossible, or at least really complicated and expensive if you want it to look good. They wanted high-quality animation, and it was originally going to stream on HBO Now and HBO Go.
I was super excited by the challenge.
Jeffrey: Shannon had done a ton of research already; I started poring through her research and researching on my own, and we started figuring out ways we could pull this off. We explored both 2D and 3D and expected to use both, but initially we focused on 3D because we knew we could find more efficiencies there. We also wanted to set ourselves apart from a lot of the 3D TV content you see now, which is rendered quite flat and animated more simply because of cost and time constraints.
So we had a partnership with Otoy and were looking at using their OctaneRender to give a really nice, physically correct rendered look to the 3D content. We spent a good deal of time on this over the course of the project; we probably built six different animation pipelines to solve issues as they came up. We had animation styles that were straight-up 2D, like South Park, as well as what we called two-and-a-half D: 3D characters with 2D features, so cartoon eyes, mouths, stuff like that. We also had full 3D characters that could fit into real-world, photographic environments, and we were able to do all of those styles in a single day and churn out three to four shorts of three to five minutes each per day, with amazing animation and render quality. It was a real testament to the incredible talent we had on the show and to pushing the technology to its limits.
The Hardware
Three powerful machines
Christopher: In the workflow that you put together for this concept show at HBO, there were three core pieces of hardware that you got from us, and it started with the i-X2 dual Xeon workstation.
Jeffrey: Yes, the first ones were the big quad GPU systems, the i-X2 superbeasts.
Christopher: Yes, and then we got you several dozen of the i-X and i-X mini, and also the X24 storage server, which is our beast 24-bay hot-swap NVMe unit. I think it'd be great to talk about how those came in, starting with the i-X2 and how you used these for the show.
The big i-X2 quad GPU Xeon system, you just could not beat those systems for rendering. It was just rock solid and a workhorse for rendering and stuff. For anyone doing heavy GPU rendering, the i-X2 is just amazing.
The Hardware
A detailed breakdown
Jeffrey: In general, to go through the list: we had the quad-GPU i-X2 systems, the Xeon ones, which were geared towards high-quality rendering in both Octane and Redshift, and those systems were phenomenally fast for rendering. We could distribute renders across them, and because they were so quick, especially with Octane, which scales linearly across all four video cards, they helped us limit the amount of cloud rendering we needed to do.
And then we wanted systems that could handle lighter-duty rendering but be super-fast generalist workstations; those were our lead animation workstations. So we went with the i-X Mediaworkstation with the dual 1080 cards, and I just love those systems. They were a really nice sweet spot for a workstation.
And then we had a general-purpose workstation, the i-X Mini with a single 1080 card, which was still incredibly fast in a tiny form factor. Those were for your Maya animator, or someone working in something that doesn't scale across multiple GPUs, like a game engine, or something like iClone, which is pretty much animation software built on top of a game engine and really can't take advantage of multiple GPUs. Those workstations weren't doing any Octane or Redshift rendering; they were just phenomenal workstations for everything else, kind of general workhorses. On the storage side, we had the 90 bay…
Christopher: Oh that’s right, you had the big 90… that product is our X90-SE Storage Enclosure. The X90-SE can provide over a Petabyte (1PB) of storage in a single 4U enclosure.
Jeffrey: Yeah, the X90-SE with the little head unit server that had SSD storage in it. I set it up for our video archiving system. I can't remember the name of the 1U server that had the SSDs in it.
Christopher: It had ten; that's our NVMe Storage Server X10, with 10 front hot-swap NVMe SSDs.
Jeffrey: Yes. So with that we had a hot tier: as new video came in, it would go to the hot tier on the X10 flash storage, and then as it became what we call cold data, no longer being touched or used, it would slip over into the X90-SE 90-bay storage unit, which had big six-terabyte enterprise drives from Hitachi in it. So that was the bulk storage, and the 90-bay is scalable, so you can always add more 90-bay units onto it and build out a ridiculous system. The unique thing was that we weren't doing RAID on that system. We did disk pooling with data tiering.
Christopher: Disk pooling, yes, StableBit DrivePool.
The Storage Setup
Disk pooling with data tiering
Jeffrey: Yes. So we used disk pooling on that, and that can scale to crazy petabyte storage sizes, so it was really nice. The reason we did drive pooling versus a RAID-type system was that a SAN solution would be prohibitively expensive for the scale we were working at. We were generating about 300 gigabytes per day on average, so over time a SAN would get very, very expensive, and RAID recovery times were just too dangerous. Even with RAID 6, there's too much chance of a multi-disk failure and losing data. With drive pooling we could still have file-level redundancy, and using flash storage as the hot tier, the NVMe flash memory, solved the performance issues for us. So it was a really nice video archive tool that, compared to what most studios spend on SAN storage, was a fraction of the price. It was just great. Our bulk pipeline was actually cloud based, but at the main studio we had the NVMe Storage Server X24 as a type of edge cache to the cloud.
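To make the hot/cold tiering idea concrete, here is a minimal sketch of an age-based demotion pass in Python. The mount points, the 14-day threshold, and the script itself are illustrative assumptions only; the production setup relied on StableBit DrivePool's own balancing and placement rules rather than a custom script like this.

    import shutil
    import time
    from pathlib import Path

    # Hypothetical mount points for illustration only.
    HOT_TIER = Path("/mnt/hot")      # fast NVMe flash tier (e.g. the X10)
    COLD_TIER = Path("/mnt/cold")    # bulk HDD tier (e.g. the X90-SE 90-bay)
    COLD_AFTER_DAYS = 14             # assumed threshold for calling data "cold"

    def demote_cold_files(hot=HOT_TIER, cold=COLD_TIER, max_age_days=COLD_AFTER_DAYS):
        """Move files not accessed within max_age_days from the hot tier to the cold tier."""
        cutoff = time.time() - max_age_days * 86400
        for path in hot.rglob("*"):
            if path.is_file() and path.stat().st_atime < cutoff:
                dest = cold / path.relative_to(hot)
                dest.parent.mkdir(parents=True, exist_ok=True)
                shutil.move(str(path), str(dest))  # demote to bulk storage

    if __name__ == "__main__":
        demote_cold_files()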
Christopher: Edge cache, can you just expand on that for a sec?
Jeffrey: So in that case, the cloud is your cold storage, and the active projects we were working on would be synced locally to that 24-bay NVMe Storage Server X24. Once again we were doing disk pooling on it because of the structure of the data; it was always in constant sync with the cloud, and we just needed a fast active disk pool that would cache our active cloud data. The NVMe speed also overcame any performance lost by not doing a RAID 10-type storage system. And it simplified the setup, since it's all software-based storage: there's no RAID card or RAID-card caching, so you don't have to worry about a RAID card failing and hosing anything, or having to have a battery backup on the RAID controller to handle hard shutdowns or anything like that.
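As a rough illustration of the edge-cache arrangement, the sketch below mirrors the currently active project folders from a cloud-side master copy onto the local NVMe pool using rsync. The remote path, local cache path, and project names are hypothetical; the real pipeline kept the pool in constant sync with the cloud through its own tooling rather than a script like this.

    import subprocess
    from pathlib import Path

    # Hypothetical locations for illustration only.
    CLOUD_REMOTE = "render@cloud-gw:/projects/"     # cloud-side master (cold) storage
    LOCAL_CACHE = Path("/mnt/nvme-pool/projects")   # local NVMe disk pool (edge cache)
    ACTIVE_PROJECTS = ["short_0412", "short_0413"]  # projects currently in production

    def refresh_edge_cache():
        """Mirror each active project from the cloud master down to the local NVMe cache."""
        LOCAL_CACHE.mkdir(parents=True, exist_ok=True)
        for project in ACTIVE_PROJECTS:
            subprocess.run(
                ["rsync", "-a", "--delete",
                 f"{CLOUD_REMOTE}{project}/",
                 str(LOCAL_CACHE / project)],
                check=True,
            )

    if __name__ == "__main__":
        refresh_edge_cache()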
Christopher: I don’t think there’s anybody who doesn’t love the idea of RAID becoming extinct.
Jeffrey: There are different solutions we could have used. If you were to build the same system on Linux, you could do ZFS or Btrfs or any of the other software-based, scale-out storage solutions, but disk pooling with NVMe flash storage for the active data was a big win for us.
We had the quad GPU system i-X2 which were geared towards the high quality rendering in both Octane and Redshift, and those systems were phenomenally fast for rendering. It helped us limit the amount of cloud rendering we needed to do.
The Resulting Workflow
“just phenomenal systems”
Christopher: You got very excited about our i-X Mediaworkstation, which is the full-size tower you can expand to 4 GPUs, or the i-X mini, which goes up to two GPUs?
Jeffrey: The one that was kind of my favorite was the i-X but all of them were kind of perfect for their niche. The big i-X2 quad GPU Xeon system, you just could not beat those systems for rendering. It was just rock solid and a workhorse for rendering and stuff. For anyone doing heavy GPU rendering, the i-X2 is just amazing.
And then for a general system where you're doing both animation and some GPU rendering, but maybe the bulk of your rendering is cloud based or on a farm rather than local, the i-X was just really great. You can do a local test render and get a really good idea of the result because the dual 1080s are fast cards, and now you can throw in 1080 Ti cards or whatever you choose, depending on how much RAM you want. And even the little minis were great general workstations, great for size and performance. They were unstoppable, and they could be upgraded to a dual-card setup if need be, to bring them up to the level of the i-X systems that we had. So yeah, they were all just phenomenal systems and very flexible for growth over time.
Christopher: That's great. I think the longer I've been in the business, the more I want to build systems where, at least for five years, all you need to do is upgrade the drives and the GPU and not worry about anything else.
Jeffrey: I mean, the biggest reason working with you guys was so good for the studio was that not only did you have really good hardware, you have amazing support to back it up. At a lot of places you buy your hardware and your support is often who knows where, and if you have major issues it's a huge hassle. You guys bent over backwards supporting us, and for our setups you were very knowledgeable when we were doing configurations, helping really fine-tune things for the type of work we were doing. The studio system we set up, compared to what most studios are building now, was extremely cost effective; it ended up being way cheaper than what a lot of studios would tend to build out. So it was just phenomenal bang for the buck.
Christopher: That’s gratifying to hear, thank you for that feedback. We bend over backwards to find the best solution for each situation and each client. So thanks.
[On Interstellar] we used Rhino and Modo to break the model out into all of its pieces and convert them into buildable assets. Just as if we were going to build a house, you need accurate real-world blueprints or CAD for that. We were doing that for the models and figuring out how all the various parts should go together on a real physical model, as well as the materials.
The Product
i-X2: The Ultimate Workstation
With dual Intel® Xeon® processors, space for up to 10 solid-state drives, and NVMe SSDs like the M.2 Samsung 960 Pro, the i-X2 is the fastest workstation for CPU and GPU rendering in a small form factor. With NVIDIA GeForce, Quadro, or Tesla GPU accelerators, or AMD Radeon Pro cards like the WX 9100, it is well equipped for heavy workloads with 24/7 reliability and a sound choice for render engines like OctaneRender, Redshift, and V-Ray, as well as Cinema 4D, Maya, ZBrush, and other texture-, modeling-, simulation-, compositing-, and render-heavy media work.
The Product
NVMe Storage Server X24
The NVMe Storage Server X24 is the mid-sized version of the NVMe Storage Server X48, with all the same functionality: space for up to 24 NVMe SSDs in a 2U chassis, dual Xeon E5 v4 CPUs, and up to 3 TB of RAM. With flexible connectivity options, it provides lightning-fast asset management server capabilities with dependable 24/7/365 reliability.
For the full interview with Jeffrey Jasper, click here.