What does the role of Unreal Developer entail in Virtual Production?

Dr. Jodi Nelson-Tabor
10 min read · Jul 26, 2021

Written by Dr Andrew MacQuarrie

Due to COVID, a lot of our work took place on Teams, using screen sharing to explore and adapt the scene together in Unreal. Here we work through the shot list, adding fences and digital stand-in characters to create a storyboard.

What is Unreal and why do we use it in VP?

The main way in which Virtual Production (VP) differs from classic green-screen filming is that a fully composited video is available on set, showing how the final output will look. Achieving this requires that the digital scene be rendered and combined live with the camera feed. To make this work, the location and lens properties of a physical camera are tracked in 3D space. This data is then used to define the location and lens properties of a virtual camera inside the digital scene. This means the views from both physical and virtual cameras match, so they can be composited convincingly in real time.
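To make that mapping concrete, here is a minimal sketch in Unreal C++. The FTrackedCameraSample struct and the function are illustrative stand-ins for whatever pose and lens data a given tracking system provides; they are not part of Unreal or any particular tracking plugin.

```cpp
// Minimal sketch: apply a tracked camera pose and lens state to a virtual
// CineCamera so its view matches the physical camera. FTrackedCameraSample is
// a hypothetical stand-in for whatever data the tracking system provides.
#include "CineCameraActor.h"
#include "CineCameraComponent.h"

struct FTrackedCameraSample
{
    FVector  Position;       // studio-space position of the physical camera, in cm
    FRotator Orientation;    // physical camera orientation
    float    FocalLengthMm;  // current lens focal length, in mm
};

void ApplyTrackingToVirtualCamera(const FTrackedCameraSample& Sample,
                                  ACineCameraActor* VirtualCamera)
{
    // Move the virtual camera so its viewpoint matches the physical camera.
    VirtualCamera->SetActorLocationAndRotation(Sample.Position, Sample.Orientation);

    // Match the lens so both cameras share the same field of view.
    VirtualCamera->GetCineCameraComponent()->SetCurrentFocalLength(Sample.FocalLengthMm);
}
```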

One of the simplest ways to achieve this is to use a game engine such as Unreal or Unity. These tools may come from the gaming community, but they are an excellent fit here because they already perform most of the necessary functions. In a games context, these engines are designed to assemble 3D assets, handle input from the player and render the final output to the screen. While in games the virtual camera's position is changed by the player via an input controller, in VP the virtual camera's position is updated based on tracking data from a physical camera. A number of game engines can do this, but Unreal tends to be favoured by the VP community as it has a number of high-quality tools that aid filmmaking.

What is the role of an Unreal Developer in VP?

The Unreal Developer in VP manages the digital assets. This means getting the virtual scene to look good, but also ensuring it works well from a technical perspective. The role also involves supporting all the activities that interface with the digital assets in some way, such as creating storyboards and physical props, as well as sound and lighting design.

There is a delicate balance to be struck between the visual quality of the scene and its optimisation. The scene must be optimised enough to run at real-time frame rates during filming, but also ideally have a high enough visual quality that offline rendering in post isn't required to make the shots look good. This is further complicated by the fact that the recording and compositing processes can take a substantial amount of GPU power, meaning the scene must perform at approximately twice your desired filming frame rate. For example, if you want to film at 30fps, your scene must be able to render at 60fps when not under the additional load of recording and compositing. The Unreal Developer must ensure that scenes are optimised enough to allow this but are also of a high visual quality.
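Expressed as frame times, that rule of thumb looks like the small calculation below; the 2x factor is an approximation taken from the example above, not a measured profile of any particular setup.

```cpp
// Back-of-the-envelope frame budget: if recording and compositing roughly double
// the per-frame cost, the scene alone should render in about half the frame budget,
// i.e. at about twice the target filming frame rate.
#include <iostream>

int main()
{
    const double FilmingFps        = 30.0;                 // desired on-set frame rate
    const double FrameBudgetMs     = 1000.0 / FilmingFps;  // ~33.3 ms per frame
    const double SceneOnlyTargetMs = FrameBudgetMs / 2.0;  // ~16.7 ms, i.e. ~60 fps

    std::cout << "Per-frame budget at " << FilmingFps << " fps: "
              << FrameBudgetMs << " ms\n";
    std::cout << "Scene alone should render in about " << SceneOnlyTargetMs
              << " ms (" << 1000.0 / SceneOnlyTargetMs
              << " fps) to leave headroom for recording and compositing.\n";
    return 0;
}
```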

In larger productions there are likely a number of roles involved in the process of digital scene creation, each with different skill sets. If you’re designing your world from scratch you may need 3D artists to create the virtual assets, or you might get a studio to digitise physical props, environments and people by 3D scanning them. In our case, we were using pre-existing assets sourced through online asset stores, so we generally didn’t need to create our own. We did however heavily customise our environment — changing lighting, moving objects, combining assets from various sources, etc. — to make the scene fit the requirements of our shoot. Often scenes acquired through asset stores such as the Unreal Marketplace allow quite a lot of customisation, and moving and scaling individual components can be fairly trivial. Some other aspects can be trickier, such as getting real and virtual objects to match and coexist in a believable way.

My experience on the ‘How to Be Good’ (H2BG) production

Other members of the team could install Unreal and download the scene to explore the environment for themselves. However, using 3D software of any kind, and particularly the complicated interfaces presented by game engines, can involve a very steep learning curve. Not all team members will have the large amount of time required to become proficient with these technologies. This means that the Unreal Developer is often the touchpoint for anything related to the virtual scene.

For us, this started with digital location scouting. Due to COVID, these sessions happened remotely on Teams. We used screen sharing to explore the digital assets together, mapping out which aspects looked good and which angles might work, and discussing what changes the scene might need to make it work for the script.

We then went through a very detailed previz stage, in which a shot list was created in tandem with a digital storyboard. This stage was labour intensive, but we all agreed that it was critical and worth the time spent on it. During this process we added virtual stand-in characters, posed and resized to match their real-world equivalents. In our case, we used Mixamo characters; they didn't look much like our actors, but at this stage that didn't matter too much. We defined each shot by placing a virtual camera in the scene and setting its lens properties, then positioned the digital stand-ins and props appropriately for the shot. We then took a screen grab from each camera, turning the shot list into a storyboard.
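As a rough sketch of that last step, assuming each shot is defined by a camera actor placed in the level, a hypothetical helper like the one below switches the view to a shot's camera and asks Unreal to save the next rendered frame. (In the editor, the high-resolution screenshot tool or the HighResShot console command do the same job manually.)

```cpp
// Illustrative sketch of turning per-shot virtual cameras into storyboard frames.
// The helper name is ours, not an engine API.
#include "Camera/CameraActor.h"
#include "GameFramework/PlayerController.h"
#include "UnrealClient.h"   // FScreenshotRequest

void CaptureStoryboardFrame(APlayerController* PC,
                            ACameraActor* ShotCamera,
                            const FString& OutputFilename)
{
    // Switch the viewport to render from this shot's virtual camera.
    PC->SetViewTargetWithBlend(ShotCamera, /*BlendTime=*/0.0f);

    // Ask the engine to write the next rendered frame to disk. In practice you
    // may want to wait a frame after switching the view target before capturing.
    FScreenshotRequest::RequestScreenshot(OutputFilename,
                                          /*bShowUI=*/false,
                                          /*bAddFilenameSuffix=*/true);
}
```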

A virtual camera placed in the scene. These allow the location and lens properties of each shot to be defined early, as well as generating images for a storyboard. Virtual characters from Mixamo act as digital stand-ins.
A poster version of the storyboard became a shared physical object that facilitated discussions on set.

This storyboard forms a critical document that facilitates discussion, but it can also highlight issues early. For example, in one shot we realised the protagonist would be standing in a puddle in the virtual scene. As the VP process we were using didn't support reflections within the virtual scene, this puddle would have immediately broken the illusion. We knew early on that we needed to either remove the puddle or change the shot's location. We could also use the storyboard to identify which objects were virtual and which were physical within each shot — a distinction that's critical, as it determines whose job it is to ensure that they're built in time for the shoot. The storyboard also served as an invaluable shared physical object to facilitate discussions on set; we printed ours out onto posters, and many on-set discussions took place around these boards. When some elements are virtual and others physical, it's useful to have an image of all the components together to allow easy pointing and gesturing that everyone can understand.

Getting real and virtual objects to blend together is a difficult process, as our Production Designer Alison Cross has highlighted in an earlier post on the topic. In her post she discusses an important role that the Unreal Developer plays: helping others in the team access the information they need about the virtual scene. For a Production Designer, this may be learning what the textures look like, what measurements certain objects have, what the lighting might look like, etc. Likewise the Sound Designer, as discussed by Ian Thompson, may want to design sound that is physically accurate, such as knowing what materials are in the scene, what the distances are, or what objects may be around making noise. For example, in our case there was a power transformer in the virtual scene that Ian determined would be making a humming noise.

The sound designer may want to incorporate certain virtual objects from the scene, such as this power transformer that emitted a humming noise in our soundscape.

Another aspect that requires collaboration is the lighting design. As discussed by our DoP and Co-Director, Peter Bathurst, the lighting can be designed in previz, matching virtual and physical lights. Having similar lighting means that physical foreground objects will composite convincingly onto the virtual background. Even the best laid plans don't always work out, however, and it was important for us to be able to dynamically change both virtual and physical lighting on set to create the right look and feel, and to handle issues such as green spill as they arose.

The Unreal Developer's last role comes during the edit and post-production phases. At this point, they work closely with the editing and VFX team, as discussed by Walter Stabb in last week's post. This might involve making adjustments to the virtual scene to create a more convincing blend with the captured footage, or re-rendering the background to be composited with an edited foreground plate. This final step may also involve collecting B-roll. While this would normally be filmed on set, in VP such establishing shots or cutaways can be captured after the fact inside the virtual scene using Unreal. A limitation here is that this can only involve the virtual assets — any physical props or characters required need to be captured during the shoot.

The Unreal Developer may also be required to come up with creative technical solutions to achieve complex shots. One example was that we wanted to capture some long walking sequences — much longer than the physical green-screen setup allowed. To achieve these shots, a treadmill set to a slow walking speed was used. This required that the virtual camera in the scene move in a way that convincingly matched the character's walking motion, while the physical camera remained largely stationary. In another shot, the character is seen from further away than the green-screen studio allowed, requiring creative solutions to billboard the character into the virtual scene in post-production.
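One way that camera motion could be handled is sketched below, with illustrative names and values: the tracked pose is applied as usual, but an extra offset that accumulates at walking speed is added so the virtual scene drifts past the largely stationary physical camera. The speed and direction would be tuned to match the actor's pace on the treadmill; this is a sketch of the idea, not our exact implementation.

```cpp
// Sketch of the treadmill trick (illustrative names, not a specific plugin API).
// The physical camera and its tracked pose stay roughly still; an offset that
// accumulates at walking speed is added to the virtual camera, so the background
// moves past and the shot reads as a long continuous walk.
#include "CineCameraActor.h"

void UpdateVirtualCameraForTreadmillShot(ACineCameraActor* VirtualCamera,
                                         const FVector& TrackedPosition,   // from camera tracking, in cm
                                         const FRotator& TrackedRotation,  // from camera tracking
                                         const FVector& WalkDirection,     // unit vector in scene space
                                         float WalkSpeedCmPerSec,          // e.g. ~100 cm/s for a slow walk
                                         float DeltaSeconds,
                                         FVector& InOutWalkOffset)         // persists between frames
{
    // Accumulate how far the character has "walked" through the virtual scene so far.
    InOutWalkOffset += WalkDirection * WalkSpeedCmPerSec * DeltaSeconds;

    // Apply the tracked pose plus the walking offset to the virtual camera.
    VirtualCamera->SetActorLocationAndRotation(TrackedPosition + InOutWalkOffset,
                                               TrackedRotation);
}
```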

Actor Connor Creighton (Tom) walking on the travelator. Some shots presented a technical challenge when the tracking of the physical camera had to be combined with movement from a treadmill.
Some shots would have been impossible to film fully in the studio. This image shows an establishing aerial shot that was captured using Unreal in post-production. The shot made use of footage captured in the studio, billboarded into the environment. This provides parallax as the camera moves around the shadowy figure seen here from a large distance.

What benefits does VP bring?

For me, the main benefit of VP was the way it supported decision making. VP allowed a dynamic, agile way of working. Making lots of decisions up front during previz allowed those decisions to be discussed and revisited, resulting in interesting ideas that might otherwise have been missed. Because the real-time composite gives everyone on set an idea of what the final result will look like, changes to lighting, object placement, performances and so on can all take place in the moment. I did find, however, that this dynamic approach could be hampered by the time some changes required, such as needing to rebuild the lighting inside Unreal.

One of the main promises of VP is that the real-time composite will be “final pixel”, i.e. that no post-production will be required. This appeared overly optimistic. It seemed there would always be changes you wanted to make in post, e.g. fixing green spill, getting a better matte from your chromakey process, or changing some virtual objects so they better match their real-world equivalents, so some post process is always likely. While some of these aspects might be addressed through the use of LED screens as opposed to green-screen filming, these present their own complexities, and are beyond the scope of this project.

What are the main challenges for VP?

Coming from a software background, the main issues I see for VP are around the skill sets required and the development of a shared working practice. The game engines that VP makes use of are complex pieces of software, which are currently not designed to facilitate VP out of the box. This means some amount of Unreal knowledge will be needed inside any team attempting a VP shoot using this technology. Current film production teams tend not to have this knowledge, so will either need to bring in new people or up-skill. If bringing in new team members with Unreal experience, it's likely they'll come from a games or programming background, and therefore not have filmmaking experience. Building a shared lexicon to facilitate communication then becomes critical.

This coming together of the software world and the filmmaking world may also have issues around working practices. Software has for a long time favoured an iterative process, in which small prototypes are made and tested quickly to identify issues early on. This is critical in software, where even a small bug can make a system completely unusable, and can take days to fix. The idea of having all of your work come together and be tested on one single, mission critical day would be horrifying to most software engineers, because in tech such days always fail.

In filmmaking, however, such mission critical days are common. There are often things you can't easily check in advance. You may not have access to certain props, lighting rigs or studio space until the day of the shoot, making it impossible to know for sure that some combination of these will work on the day. In our case, for example, we had hired some fence panels for the days of the shoot. As a result, we couldn't test what the fence panels looked like under the studio lights until we were in there, meaning we had to work to get the virtual and physical panels to match while in the studio. On a complex shoot with a lot of moving parts, this proved to be a difficult task that there wasn't time for, resulting in the decision to “fix it in post”.

Dr. Drew MacQuarrie is a lecturer in virtual reality and games at the University of Greenwich. His research focuses on virtual reality content creation through film, on which he has previously worked with organisations such as the BBC and Dimension Studios. He is also interested in the user experience of emerging content types, and how affordances can impact the ways in which viewers engage with digital media. More information about these projects can be found here.

A team of filmmakers and academics at the University of Greenwich have created a micro-short film entitled How To Be Good, in collaboration with industry leaders at Storyfutures Academy and Mo-Sys Engineering, to explore and document workflows in virtual production. In the first article of the series, principal investigator Dr Jodi Nelson-Tabor discusses what virtual production means in different contexts and how producing How To Be Good sheds important light on how VP can be managed and harnessed to create films that would otherwise be cost prohibitive and complex to shoot.

To follow the series, click on the following: 1, 2, 3, 4, 5, 6, 7, 8, 9


Dr Jodi Nelson-Tabor is the Business Development and Training Manager for Final Pixel.