- Explanation of in-camera VFX
- Illustrating in-camera VFX
- Camera tracker, the starting point for in-camera VFX technology
- It all started with Unreal Engine
- The LED wall that revolutionized VFX
- A pattern that controls even the lighting with LEDs, and a pattern that incorporates an LED wall into a conventional shooting style
Explanation of in-camera VFX
Vol.01 of the Virtual Production Field Guide covered the LED studio and how it has changed the way virtual production works. This volume examines in-camera visual effects (in-camera VFX).
Here in Japan, in-camera VFX has become a hot topic thanks to its frequent use in the historical drama “Dosuru Ieyasu.” In fact, it relies on exactly the same camera-tracking and real-time rendering technology as the green-screen workflow described in the previous volume.
In the first pattern, (1) in-camera VFX, the camera in the virtual space moves in step with the position information of the real camera, and the image seen by that virtual camera is displayed on the LED wall. The real camera then simply records the wall together with the subject in the foreground, just as in an ordinary shoot, with no compositing required.
In the second pattern, (2), the LED wall is replaced with a green screen, while the camera tracking, real-time rendering, and the rest of the system operate in exactly the same way.
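To make that shared mechanism concrete, here is a minimal per-frame sketch in Python. Every name in it (CameraPose, render_virtual_camera, per_frame) is a hypothetical stand-in for the tracker, the real-time engine, and the LED or keyer output; it illustrates the flow of data, not any actual API.

```python
from dataclasses import dataclass

@dataclass
class CameraPose:
    """One sample from the tracker mounted on the real camera."""
    pan: float      # degrees
    tilt: float     # degrees
    roll: float     # degrees
    x: float        # position in the studio, metres
    y: float
    z: float
    zoom_mm: float  # lens focal length

def render_virtual_camera(pose: CameraPose) -> str:
    """Stand-in for the real-time engine: render the virtual set as seen
    from a virtual camera placed at the same pose as the real one."""
    return f"render@({pose.x:.1f},{pose.y:.1f},{pose.z:.1f}) {pose.zoom_mm}mm"

def per_frame(pose: CameraPose, pattern: int, live_plate: str = "") -> str:
    frame = render_virtual_camera(pose)
    if pattern == 1:
        # (1) In-camera VFX: the rendered frame goes to the LED wall and the
        # real camera photographs wall + subject together; no compositing.
        return f"LED wall <- {frame}"
    # (2) Green screen: key the live camera image and composite it over the
    # same rendered frame in real time.
    return f"composite({live_plate} keyed over {frame})"

pose = CameraPose(pan=12.0, tilt=-3.5, roll=0.0, x=1.2, y=1.6, z=4.0, zoom_mm=35.0)
print(per_frame(pose, pattern=1))
print(per_frame(pose, pattern=2, live_plate="camera A feed"))
```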
Illustrating in-camera VFX
To me, the horizontal line in the diagram separating real-time rendering from pre-rendering (offline rendering) marks a bigger divide than the vertical line separating the LED wall from the green screen.
Whether it is a green screen or an LED wall, the virtual camera moves through the virtual space based on the information from the tracker attached to the real camera; the images rendered by that virtual camera are then either composited with the green-screen footage or projected onto the LED wall.
Compared with the screen process, where the camera position and lighting are constrained by footage shot in advance, in-camera VFX lets you choose angles freely, and the lighting of the virtual space can be adjusted to match the lighting on the performers in the studio.
This is what clearly sets it apart from conventional composite photography, and I feel it greatly expands the creative possibilities of a scene.
Camera tracker, the starting point for in-camera VFX technology
Now, let’s look at the in-camera VFX system itself, starting with the parts near the camera that are common to both (1) and (2).
As mentioned above, a tracker is fixed to the real camera to convey its state to the camera in the virtual space: pan, tilt, roll, zoom (lens focal length), position in space, and so on. Most Japanese studios use trackers based on infrared markers, with stYpe’s RedSpy and Mo-Sys’ StarTracker splitting the market between them.
Overseas, I hear that about half of installations use the infrared-marker type, while the other half take the opposite approach: infrared cameras installed around the studio track markers attached to the recording camera. Typical examples are OptiTrack and Vicon.
There are other types as well, such as ViveMars, which obtains position information from base stations that emit infrared; VGI’s LinkBox, which determines position with a stereo fisheye camera; and Pixotope Camera Tracking, a hybrid of image recognition and infrared markers.
Each has its strengths and weaknesses, accuracy varies, and prices range from a few hundred thousand yen to nearly 10 million yen.
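As an illustration of what such a tracker delivers every frame, here is a rough Python sketch of receiving tracking samples over UDP. Real devices speak binary protocols (FreeD or vendor-specific formats), so the ASCII packet layout, port number, and field names here are purely hypothetical.

```python
import socket

TRACKER_PORT = 40000  # hypothetical port; real trackers use their own defaults

def parse_packet(data: bytes) -> dict:
    """Decode a hypothetical ASCII packet 'pan,tilt,roll,x,y,z,zoom,focus'.
    Real protocols such as FreeD are binary, with fixed scaling per field."""
    pan, tilt, roll, x, y, z, zoom, focus = (float(v) for v in data.decode().split(","))
    return {"pan": pan, "tilt": tilt, "roll": roll,
            "x": x, "y": y, "z": z, "zoom": zoom, "focus": focus}

def listen(port: int = TRACKER_PORT) -> None:
    """Receive one tracking sample per frame and hand it to the renderer
    (in Unreal Engine this role is played by Live Link and similar plugins)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", port))
    while True:
        data, _addr = sock.recvfrom(1024)
        print(parse_packet(data))

# Parsing a single example packet, without a live tracker attached:
print(parse_packet(b"12.0,-3.5,0.0,1.2,1.6,4.0,35.0,2.8"))
```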
It all started with Unreal Engine
The camera data from the tracker is sent to Epic Games’ Unreal Engine (UE), which builds the virtual space. It is no exaggeration to say that UE created the culture of virtual production. Virtual studio systems such as Brainstorm, TriCaster, and Vizrt have existed for some time, but their output looked unmistakably like CG and could not match UE’s photoreal real-time rendering; they now integrate with UE as well.
Virtual production functions were added in UE 4.27; UE 5.1 was billed as virtual-production ready, with full support for in-camera VFX; and the current UE 5.2 keeps improving in stability. With a workstation equipped with a fast GPU you can build an in-camera VFX system using UE’s functions alone, but some parts are difficult without engineering knowledge.
Applications known as “media servers,” or virtual production solutions, cover that gap. For (1) in-camera VFX the standard has been disguise, and for (2) green screen it has been Zero Density’s Reality. Recently, however, SMODE for in-camera VFX, along with applications such as Pixotope and Aximmetry that support both (1) and (2), have also entered the Japanese market.
In the case of (2), green screen with real-time compositing, that is essentially the whole pipeline: the CG background and foreground generated by the virtual production solution are combined with the green-screen image recorded by the camera and displayed on the monitor. If you think about it, it is fairly simple compared with LED-based ICVFX.
For (1) in-camera VFX, the images generated by these systems are sent to the LED wall via an LED processor (controller). Brompton’s Tessera series has a large market share, followed by NovaStar and Sony’s ZRCT-300 controller for its Crystal LED panels.
The processor manages frequency and color together with the LED panels. Frequency is involved in problems such as flicker, so it is important and should not be treated as an afterthought.
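As a rough illustration of why refresh rate matters here, the sketch below counts how many LED refresh cycles fit inside one camera exposure. The refresh rates and shutter value are hypothetical examples; the general point is that the more cycles each exposure averages over, the less visible any mismatch becomes, and genlock removes the residual drift.

```python
def refresh_cycles_per_exposure(refresh_hz: float, shutter_s: float) -> float:
    """How many LED refresh cycles fall inside one camera exposure."""
    return refresh_hz * shutter_s

shutter = 1 / 48  # e.g. a 180-degree shutter at 24 fps
for refresh_hz in (1920, 3840, 7680):  # example panel refresh rates
    cycles = refresh_cycles_per_exposure(refresh_hz, shutter)
    print(f"{refresh_hz} Hz refresh: ~{cycles:.0f} cycles per exposure")
# Few cycles per exposure (or an unsynced scan) shows up as banding or flicker;
# many cycles, plus genlock, keeps the wall looking like a steady surface.
```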
The LED wall that revolutionized VFX
The key component of LED in-camera VFX is, of course, the LED wall itself. It is the single biggest equipment expense, and if this part falls short, nothing else will save the result.
Size and shape are very important factors, but they are easy to understand, so let’s set them aside and start with pixel pitch, brightness (dynamic range), and reflectance.
In terms of pixel pitch, the 1.5mm class is becoming mainstream in Japan. Overseas, by contrast, coarser pitches of 2.3mm to 2.8mm are common, and the LED surface and the studio tend to be made correspondingly larger. In PRN Magazine Vol.15, published a year and a half ago, I wrote that a useful rule of thumb is to treat the pitch in millimeters multiplied by 1,000 as the minimum distance from the LED to the subject: with a 1.5mm pitch, keep the subject at least 1.5m from the LED; with 2.8mm, you need to stay close to 3m away.
A brief caveat: all of this varies with sensor size, focal length, aperture, distance from the camera, and so on, and there is no end to the qualifications you could add, so please take it as a rough guide based on my own rule of thumb.
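Written out as a tiny calculation, purely to show the arithmetic of the rule of thumb above:

```python
def min_subject_distance_m(pitch_mm: float) -> float:
    """Rule of thumb from the text: pitch (mm) x 1,000 = minimum distance (mm),
    which reads as the same number in metres."""
    distance_mm = pitch_mm * 1000
    return distance_mm / 1000

for pitch in (1.5, 2.3, 2.8):
    print(f"{pitch} mm pitch -> keep the subject at least "
          f"~{min_subject_distance_m(pitch):.1f} m from the LED")
```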
The finer the pitch, the higher the resolution; and the higher the resolution, the more machine power is needed to drive the wall. Be prepared for that extra cost when setting up a studio. The current trend in Japan is to choose a smaller studio with a finer pitch.
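A back-of-the-envelope sketch of that trade-off, using hypothetical wall dimensions, shows how quickly the pixel count, and with it the rendering and send-out load, grows as the pitch gets finer:

```python
def wall_pixels(width_m: float, height_m: float, pitch_mm: float) -> tuple[int, int]:
    """Pixel dimensions of an LED wall of the given physical size and pitch."""
    return round(width_m * 1000 / pitch_mm), round(height_m * 1000 / pitch_mm)

for pitch in (2.8, 1.5):
    w, h = wall_pixels(12.0, 5.0, pitch)  # e.g. a 12 m x 5 m main wall
    print(f"{pitch} mm pitch: {w} x {h} px = {w * h / 1e6:.1f} Mpx "
          f"(4K UHD is about 8.3 Mpx)")
```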
Next is brightness. The main background LED does not actually need that much peak brightness.
In terms of dynamic range, however, it must be able to express a full HDR range. The subject standing in front of the LED has the full contrast of the real world, so if the background’s range is clearly narrower than that, the background will inevitably feel flat even though it moves in sync with the camera. At the very least, I want a dynamic range that exceeds the gradation of the recording camera.
The displays we normally see have a brightness range of only about 100 nits under the SDR (standard dynamic range) standard, while HDR (high dynamic range) is said to cover roughly 1,000 to 10,000 nits. Accordingly, the LED wall should offer around 1,000 nits, which makes it possible to express tones much closer to the real world.
Another important factor is reflectance. Reflections are not a problem when viewing images in a dark space such as a theater, but virtual production, by its nature, also has to light the subject in the foreground. One strength of LED in-camera VFX is that the LEDs on the ceiling and in front of the subject provide a natural base of ambient light; but because LEDs are surface light sources, hard light such as sunlight, with its crisp shadows, still has to come from point-source fixtures like those used on any ordinary set. The light from those fixtures, and the light the LED panels cast on one another, should not be underestimated, so it is desirable for the panels to reflect as little as possible and leave the background LED wall unaffected.
Panels labeled “non-glare” are of course low-reflectance, but there is no published figure for how much this varies from model to model, so at this stage the only way to check is with your own eyes. Reflectance is an important property of LED panels beyond virtual production, and I would like to see something like a standardized, quantifiable value for it.
All of these elements are then recorded by the camera, producing the final image without any compositing.
I am often asked what kind of camera is suitable; in a nutshell, a large-sensor camera with genlock. Some argue that genlock is unnecessary as long as the LEDs have a high refresh rate, but it is better to have it. When shooting an LED wall, whether for ICVFX or not, moiré tends to appear when the LED surface is in focus, so a large-sensor camera with shallow depth of field is preferable. I want a sensor of at least Super 35 size.
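To show why shallow depth of field helps, here is a rough thin-lens sketch: if the wall sits well outside the depth of field, each LED pixel is blurred across several of its neighbours before it reaches the sensor, which suppresses moiré. The focal length, aperture, and distances are hypothetical examples, and the formula is an approximation.

```python
def blur_at_led_plane_mm(focal_mm: float, f_number: float,
                         subject_m: float, led_m: float) -> float:
    """Approximate diameter, measured on the LED wall itself, of the defocus
    blur when the camera is focused on the subject: roughly the entrance
    pupil diameter times (LED distance - subject distance) / subject distance."""
    pupil_mm = focal_mm / f_number
    return pupil_mm * (led_m - subject_m) / subject_m

# e.g. a 50 mm lens at f/2.8 on a Super 35 camera, subject 3 m away,
# LED wall another 1.5 m behind the subject (1.5 mm pitch panel)
blur_mm = blur_at_led_plane_mm(focal_mm=50, f_number=2.8, subject_m=3.0, led_m=4.5)
pitch_mm = 1.5
print(f"~{blur_mm:.1f} mm of blur on the wall, i.e. roughly "
      f"{blur_mm / pitch_mm:.0f} LED pixels averaged together")
```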
A pattern that controls even the lighting with LEDs, and a pattern that incorporates an LED wall into a conventional shooting style
As anyone who has seen behind-the-scenes footage of virtual production shoots will know, most LED studio walls are arranged in a curve. Part of the reason is that, with the camera in the center, you can swing it left and right and still keep the LED background filling the frame; but because LED light is also highly directional, it is better for the panels to face the subject as much as possible. Even the parts of the wall that never appear in the shot work as natural ambient light and as reflections in reflective objects. For the same reason, the ceiling is covered with LEDs as well, so that the whole set is enclosed.
Alternatively, the panels may be arranged flat. Stage C at Kadokawa Daiei Studio, and the in-camera VFX shoots for NHK’s “Dosuru Ieyasu,” use a flat LED wall in the studio. In this pattern the left, right, and ceiling are lit in the same way as in conventional green-screen photography, and the LEDs are not used as ambient light.
In-camera VFX of this kind is often described as a “moving painted backdrop.” If the image of an enclosed LED studio is that of a subject placed inside a virtual space, this usage is closer to a virtual space seen through a large window; it takes the innovative in-camera VFX method and brings it back toward conventional shooting.
Last time I wrote that the screen process is more familiar and more frequently used on set. This approach goes one step further than that: the ambient light is created with conventional lighting methods, while the background incorporates the in-camera VFX technique, making it a use of the LED wall that is one step beyond the screen process.
Still, there is a big difference between treating the virtual space as what lies outside a large window and treating the real subject, and part of the set, as existing inside a virtual space enclosed by LEDs.
In whatever form, I hope this in-camera VFX method will gradually spread through real shooting sites. ICVFX seems to open up new horizons once you set aside the received wisdom of video production and adopt a new way of thinking.
About the author: Motomi Kobayashi
Started his career as a cinematographer shooting music videos, working with Spitz, Ulfuls, Shiina Ringo, SEKAI NO OWARI, and others. He has worked across genres, including the films “Night Picnic” and “Pandora’s Box,” the drama “Suteki na Sen TAXI,” and the grand opening of the 2017 NHK Kouhaku Uta Gassen. He also serves as a VFX advisor on virtual productions and is the CTO of Chapter 9, a company that produces CG background assets.