Categories
animators Autodesk Blog Post FBX interview Mocap Motion editing MotionBuilder

Film production in Autodesk MotionBuilder

We are excited to have a guest post from our friend and long-time creator of amazing MotionBuilder tools and plugins, Neill3d, AKA Sergei Solokhin. He also has a site, https://docs.openmobu.org/, and we shared a lot of his past MotionBuilder work here: https://www.riggingdojo.com/2019/02/12/open-motionbuilder-rigging-tools/

Thank you and enjoy!

The Wall and MoPlugs Dev Story by Neill3d – Sergei Solokhin

  This is the story of a feature film project I contributed to. Only now have I found enough time to finish and share some of my thoughts on it, so better late than never.

Intro

  The project is a feature film, a kind of virtual essay, titled The Wall. I won’t go into details about the film’s plot, script, or art production here; you can learn more from the film’s page, from interviews with the screenwriter David Hare and the director Cam Christiansen, and from other materials, with links at the end of this article. Here I’d like to share my personal experience of working on the project and discuss some of the technical details and solutions I implemented.

  Before I started working on the film, I was leading a small mocap studio, AnimArt. I also actively maintained Neill3d.com, a blog about mocap and animation, especially the MotionBuilder platform, where I shared my thoughts, scripts, and plugins. That sharing paid off in full.

One day, I was contacted by Cam Christiansen:

Friday, March 8th, 2013 at 5:56 PM
“Sergey, is there a time we can talk about developing some MotionBuilder tools for us? I’m not sure what your correct email is.”

  As a part-time job, I took on the task of developing several scripts aimed at optimizing large scenes. Throughout 2013, I worked on various scripts and plugins tailored to the project’s needs, and they proved to be useful.

  Cam had a vision to avoid switching between applications and to concentrate the creation process in one place. That required implementing editing, post-vis, and rendering features inside MotionBuilder. It sounded like a huge, even crazy, undertaking, but I was open to such engineering and animation challenges.

  After I had created the scripts and several plugins, Cam asked whether I would like to join the film project on a permanent basis and become fully immersed in the production.

  I decided it was a great chance and a great journey to join. Running the mocap studio had turned into routine management, and I was looking for an opportunity to get back to development and to take part in something ambitious. So that’s what I did.

There is no plan, there is a way

  I joined the project on a remote basis, starting with several ideas on how to implement the director’s vision for facial animation visualization, managing the dense geometric meshes of the world, and creating procedural animation elements, such as constructing walls.

  An interesting fact: the team was based in Canada, but they were allowed to spend three months of the winter in Thailand and continue working in a warm and pleasant environment. This remains one of my favorite memories, and a favorite story about team organization during cold winter evenings.

  I began working in the winter from Thailand, where one of the team members, Mitch Barany, was on a nearby island. Initially, I managed to make progress, which gave hope that building such a creative platform and assembling film scenes efficiently was achievable.

  Content creation was handled in a post-production manner: the artists were not expected to prepare optimized models, clean up animations, and so on. The scenes became more and more detailed, and this increasingly distanced us from real-time performance and stability.

  Another stumbling block was the need for me to develop plugins while simultaneously working on production. Naturally, I could not guarantee their immediate stability. This led to delays, pushed deadlines further, and prevented us from reaching the desired results. I, for my part, tried not to interfere with the content at the early stage and focused on creating more and more software solutions to address the tasks. However, it felt more like a project within a project.

  After about a year and a half, it became clear that even with new software solutions and supporting scripts, we were not making effective progress on the film. This was a difficult moment; the film’s budget was limited, and most of the team left the project. Eventually, only two of us remained: the director, who also worked as an artist-animator-designer, Cam Christiansen, and me.

  At the very moment when I could have despaired and started looking for a new job, my child was born. Since I was actively taking part in caring for the baby, I didn’t dwell on negative thoughts and kept working. Somehow, the film’s producers continued to fund my work. Whether they still believed the film could be saved or felt it would be wrong to leave me unemployed at such a critical time remains a mystery. Personally, I lean toward the second explanation.

  One way or another, I continued to receive financial support, even though there was little hope left and no longer a proper team. I kept working, delving deeper into the film’s content to fix the situation, not just through software solutions but through overall optimization. Meanwhile, Cam kept drawing, building models, and assembling new scenes for the film. Time flew by for me; with a small child, I had no set schedule. I worked whenever I could, especially while the baby slept. I stayed on duty with my daughter and used every moment to return to work on the project.

  Time passed, and the moment came when several scenes of the film were completed, and our work began to attract interest again. In my situation, with a small child, I simply did not have the time to worry or feel sad about the project’s fate; I was grateful for the opportunity and kept working. Perhaps this detachment from the state of affairs helped me overcome that barrier relatively easily and avoid a moral downturn. When the baby grew older and the infant colic phase passed, allowing me to get more sleep and take a fresh look at the state of the film, I discovered that a real chance had emerged. We had managed to rework and optimize a significant amount: the massive environment geometries, the character models, and the skeletal rigging; we had also found solutions for facial animation. The plugins were more stable, and scenes were rendering smoothly. Now, it was just a matter of continuing and not giving up.

  This marked a turning point after which the film could be viewed in parts. There was still a great deal of work ahead. Yet the producers believed that with trust and patience, we were beginning to make progress. That’s exactly what happened: our funding was extended, and for another year or so, we continued with additional resources and team members.

   With the extra support and a strengthened team, we assembled all the necessary parts of the film in the required quality for cinema screens, and the film was approved and accepted by the producers. 

  The film The Wall premiered at the Annecy International Animation Film Festival in France, one of the most prestigious animation festivals in the world. Additionally, it was screened at the Toronto International Film Festival (TIFF), where it garnered attention for its unique blend of animation and documentary storytelling.

Happy End!

  Now, after sharing this story, I would like to devote some attention to the various aspects of the project that required software solutions.

Solutions

Correction Shapes

  When an artist creates a previs and then improves it through high-quality iterations, one of the primary tools needed is sculpting and the creation of corrective shapes. Once the camera is chosen and the angle is set, it becomes possible to finalize the animation from the camera’s perspective and make pose and silhouette adjustments using this utility.

  The solution involved two main components. The first was the sculpting tool itself. I developed a minimal set of brushes for working with geometry, including pull/push, drag, smooth, freeze, erase, and paint. Additionally, it was essential to ensure support for working with a Wacom pen, enabling operations to respond to the pen’s pressure for intensity control.

  Behind the scenes, this tool operates as a deformation constraint that displays changes to the model’s geometry on top of the standard deformation.

NOTE: in MotionBuilder the evaluation order is as follows: keyframe animation first, then constraints, blendshapes, skinning deformation, the deformed constraint, and finally the point cache.

  The deformed constraint is used for a preview of the changes made to the shape. If the result was satisfactory, it could be saved as a new blendshape on a model. In this case, I developed the logic for calculating the corrective shape under skinning. The reason for this is that the character deformation we see on screen is a combination of the base mesh, followed by blendshapes and the skinning applied on top. To save the corrective blendshape by sculpting geometry in a given frame, it is required to subtract the entire skinning effect from the final pose and record the difference between the resulting positions and the currently active blendshapes on top of the base geometry.
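  Below is a minimal NumPy sketch of that corrective-shape calculation; the function name and array layouts are illustrative assumptions, not the production tool’s API.

```python
# Sketch: compute a corrective blend shape "under" skinning, as described above.
import numpy as np

def corrective_delta(sculpted_pos, base_pos, active_deltas, skin_matrices):
    """
    sculpted_pos  : (N, 3) final sculpted vertex positions (post-skinning).
    base_pos      : (N, 3) base mesh vertex positions.
    active_deltas : (N, 3) summed deltas of the blend shapes active on this frame.
    skin_matrices : (N, 4, 4) blended (weighted) skinning matrix per vertex.
    Returns (N, 3) deltas to store in the new corrective shape.
    """
    n = len(base_pos)
    hom = np.ones((n, 4))
    hom[:, :3] = sculpted_pos
    # Remove the skinning effect: bring the sculpted result back to pre-skinning space.
    pre_skin = np.einsum('nij,nj->ni', np.linalg.inv(skin_matrices), hom)[:, :3]
    # What remains after subtracting the base mesh and the already-active shapes
    # is the corrective shape itself.
    return pre_skin - (base_pos + active_deltas)
```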

  This became a convenient and essential tool for the workflow and iterations during the film’s production.

  I would also like to highlight a second utility related to this topic: the blendshape toolkit. I developed this manager to display current animated blendshapes and provide additional functionality for duplicating, organizing, saving, and loading from an external file.

  The utility also includes critical production operations for creating duplicates of the model mesh. In this case, the duplicate has all deformations, constraints, skinning baked into the mesh. There is also an operation for calculating the difference between the deformed and base rigs. Another essential operation is Combine Models. In our scenario, many environment models were assembled from separate pieces, and with MotionBuilder, we could significantly enhance scene performance by merging static geometries into a single model.

Character Animation

  For character animation, no special tricks or custom rigs were necessary. The functionality provided by MotionBuilder was more than sufficient, requiring only a bit of skill and expertise. I have experience working with character animation in a motion capture studio. In this context, all movement was bone-based, with carefully selected rotation centers, a simple HumanIK standard skeleton, and bones for facial animation.  

The animations were retargeted from data recorded at a large studio in the UK, using a standard pipeline for mocap retargeting and animation cleanup.

Facial Animation

  The motion capture session included simultaneous facial capture on video. The director’s vision was to transfer this facial animation onto the 3D characters’ faces. The 3D character models were designed to resemble the actors used in the capture, but were not exact replicas; they were artistically modeled versions, not precise 3D scans. This created the need to retarget the video footage onto the characters’ faces, with additional processing and stylization.

  To achieve this in real-time, a specific concept was adopted. The idea was to divide the video footage into facial zones: mouth, left and right eyes with eyebrows, and nose. These four sections would then be projected onto the 3D model. This approach allowed for adjustments that matched the proportions of the 3D model’s face. Additionally, the face itself would need to be animated, with the video projection providing extra visual details.

  To implement this approach, I developed a tool for dynamic masks and a specialized shader capable of projecting up to six images onto a model over the standard diffuse texture.

  The dynamic masks allowed a specific region of the image to be masked out using the alpha channel. Since the video was captured with an HMC (head-mounted camera), a static mask was sufficient to isolate the desired fragment. However, the masks tool also supported animated regions, so they could be fully dynamic if needed.

  Under the hood, the masks utilized the NV_path_rendering extension. This allowed for real-time drawing of spline shapes filled with color in the required texture color channel, which the shader then used for image masking. To optimize performance, four masks for different facial regions were packed into a single texture.

  The shader utilized the Camera object to read projection matrices and positioning, which were then used for projecting the texture onto the 3D model. The texture projection was depth-clipped to ensure visibility only on the face front geometry and not on the back of the head. Additionally, it was clipped based on the selected dynamic mask. 
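  As a rough illustration of that projection math, here is a minimal NumPy sketch: vertices are transformed by the capture camera’s view-projection matrix, the clip-space result gives the UVs for sampling the face video, and points behind the projector or outside the frame are rejected. All names here are illustrative assumptions, not the shader’s actual code.

```python
import numpy as np

def project_uv(world_pos, view_proj):
    """world_pos: (N, 3) vertex positions; view_proj: (4, 4) projector camera matrix."""
    hom = np.concatenate([world_pos, np.ones((len(world_pos), 1))], axis=1)
    clip = hom @ view_proj.T
    w = clip[:, 3:4]
    ndc = clip[:, :3] / w                 # perspective divide
    uv = ndc[:, :2] * 0.5 + 0.5           # [-1, 1] -> [0, 1] texture space
    # Reject anything behind the projector (e.g. the back of the head)
    # or outside the projected frame; the dynamic mask clips it further.
    visible = (w[:, 0] > 0) & np.all((uv >= 0) & (uv <= 1), axis=1)
    return uv, visible
```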

 The head model itself was animated from the same footage by tracking 2D face landmarks and driving bones from them.

Post processing and Composition Master

  An essential part of iterative work on film scenes was post-processing. The idea here was to create a tool to avoid rendering and post-processing in a separate program, instead performing this directly in MotionBuilder. Using the OpenReality SDK, I began by writing a scene manager plugin to intercept calls before and after scene rendering. From there, I integrated a framebuffer into which the scene could be rendered. 

  The framebuffer was there to extract color and depth maps from the 3D scene, pass them through post-processing GLSL shaders, and render the final texture on-screen over the 3D scene.

  The composition master itself was structured as a nested hierarchy of input data (texture, color, procedural texture), effects, and output objects. The root of this tree was the final output image, with effects added in reverse order and each computation branch ending in an input image. The blending element had two branches.

  The post-processing graph was built according to this tree logic, allowing it to be saved and loaded from a separate file. The utility itself was written in Python, utilizing standard MotionBuilder interface elements without direct use of Qt.
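  A minimal Python sketch of that tree logic, purely for illustration (the production tool evaluated the effects with GLSL shaders and exposed the graph through MotionBuilder UI elements):

```python
# Sketch: the root is the final output, effects are applied in reverse order,
# every branch ends in an input image, and a blend node has two branches.
class InputImage:
    def __init__(self, image):
        self.image = image
    def evaluate(self):
        return self.image

class Effect:
    def __init__(self, child, fn):
        self.child, self.fn = child, fn      # fn: image -> image (e.g. blur, grade)
    def evaluate(self):
        return self.fn(self.child.evaluate())

class Blend:
    def __init__(self, a, b, mix):
        self.a, self.b, self.mix = a, b, mix  # two branches plus a blend factor
    def evaluate(self):
        ia, ib = self.a.evaluate(), self.b.evaluate()
        return ia * (1.0 - self.mix) + ib * self.mix

# Example: final image = blurred scene color blended 30% with a fog pass.
# graph = Blend(Effect(InputImage(color), blur), InputImage(fog), 0.3)
# result = graph.evaluate()
```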

  Among the source materials for post-processing, we eventually added additional elements such as rendering 3D objects into masks, shadows, reflections (cubemap) and procedural textures like gradients. We also included the scene depth texture (useful for effects like depth of field), color texture, and supplementary data from the scene, such as fog areas, decal regions, and similar elements.

  3D decals, regions, and a post-process filter: the concept behind this feature was to automate the creation of a screenshot from a set camera angle, which would then be sent directly to Photoshop. In Photoshop, a new layer of details could be added and then sent back as a projection onto the geometry. This workflow allowed for dynamic creativity in enhancing visual elements. Moreover, the blending and visibility of decals could be animated, offering even greater flexibility for creative expression.

  The result was a highly functional and flexible utility for implementing the director’s creative visions.

TECH NOTE: for composition effects I used compute shaders. The downside compared to rendering a full-screen quad and processing it with a fragment shader is that you have to implement derivatives and interpolation logic manually.

OpenGL derivatives – https://registry.khronos.org/OpenGL-Refpages/gl4/html/dFdx.xhtml
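For illustration, here is a minimal NumPy sketch of the kind of manual screen-space derivative a compute shader has to approximate with finite differences between neighbouring texels (a fragment shader gets dFdx/dFdy for free):

```python
import numpy as np

def dfdx(img):
    """Forward difference along x for an (H, W) image, zero at the right edge."""
    d = np.zeros_like(img)
    d[:, :-1] = img[:, 1:] - img[:, :-1]
    return d
```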

TECH NOTE: for shading I used nvFX, a nice shader effect library by Tristan Lorach, which helps manage a structure on top of GLSL shaders, such as includes, combinations of shaders, and so on.

Composite Master Toolkit – https://www.youtube.com/watch?v=Xss74zEdCdc

Shading

  The drawing context itself couldn’t be altered, as it was predefined by MotionBuilder and required the use of OpenGL. However, by leveraging extensions, certain stages of geometry preparation and rendering could be made significantly more efficient. These stages, which we could capture, became key points for implementing features and optimizations.

  The OpenReality SDK provides an extensive toolkit for creating plugins targeting various parts of the software. For debugging and schematic graphics within a scene, I utilized FBModel when drawing with a Custom Drawing handler. This approach offers a straightforward and efficient way to render geometry directly into the scene.

Key Tools and Techniques

  • FBLayeredTexture/FBTexture: These were used with Dynamic Texture to store textures in video memory or for a custom mix between textures.
  • FBShader: This allowed customization of how models were rendered, from individual objects to entire groups with material grouping. A model could have multiple shaders assigned within the scene.
    • Shaders were occasionally used as metadata for rendering rather than for direct visualization. For example, a masking shader could specify which masks its models should be rendered into. This shader didn’t handle visualization itself but prepared a list of models for additional rendering into specific masks.
    • Shaders followed either a short pipeline with hooks like pre-render and ShadeModel, or a longer Material Effect pipeline, which allowed more detailed control during geometry passes, material changes, and similar processes.
  • FBManipulator: This versatile tool enabled the creation of viewport manipulators, either as temporary tools or always-active background elements. Indirect use of manipulators significantly enhanced flexibility for handling user interactions, supporting tasks such as rendering overlays (HUD rendering stage) and processing user input seamlessly.
  • FBEvaluateManager: This provided valuable hooks, such as the OnRenderingPipelineEvent, at two critical points: before and after scene rendering. These hooks were instrumental in attaching custom framebuffers and capturing scene visuals into textures, as sketched below. This capability laid the foundation for post-processing plugins.
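  As a rough sketch of that last hook, here is what the pattern could look like from Python. It assumes OnRenderingPipelineEvent and its timing enum are exposed to pyfbsdk the same way as in the C++ OpenReality SDK (in production this was a C++ plugin), so exact names may differ between versions.

```python
from pyfbsdk import FBEvaluateManager, FBGlobalEvalCallbackTiming

def on_pipeline_event(control, event):
    # Event and enum names are taken from the C++ SDK and assumed to match in Python.
    if event.Timing == FBGlobalEvalCallbackTiming.kFBGlobalEvalCallbackBeforeRender:
        # e.g. bind a custom framebuffer so the scene renders into a texture
        pass
    elif event.Timing == FBGlobalEvalCallbackTiming.kFBGlobalEvalCallbackAfterRender:
        # e.g. resolve the framebuffer, run post-processing shaders, draw the result
        pass

FBEvaluateManager().OnRenderingPipelineEvent.Add(on_pipeline_event)
```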

 Production Real-Time

  The scenes and models for the project were originally created in Autodesk Maya and Maxon Cinema4D. Since these assets were typically intended for an offline renderer, artists prepared them without any special real-time optimization. In our case, however, we were working on a fully real-time platform. This presented significant challenges due to the sheer number of detailed models, dense meshes, and separate objects, all of which heavily impacted performance and stability.

  To address these issues, I developed a range of tools and methods aimed at optimizing scenes and improving real-time performance.

Optimization Techniques

  1. Cleanup Manager
    Since the scenes were full of imports from external programs, they often included far more objects, textures, and materials than were actually in use. To tackle this, I developed a Cleanup Manager to analyze dependencies within a scene and remove unnecessary elements efficiently (see the sketch below).
  2. Texture Optimization
    • Texture Manager and DDS Batch Manager: The scenes contained a large number of textures, which consumed significant video memory and often lacked mipmap levels. A key optimization involved converting these textures to the DDS format, which is supported by MotionBuilder (MoBu). DDS offers compression, faster loading, and built-in mipmap support, reducing memory usage and improving performance.
    • The batch manager is available on github – https://github.com/Neill3d/BatchDDS
  3. Geometry Consolidation
    • I implemented functionality in the Blendshape Toolkit to merge multiple small models into a single geometry. MotionBuilder’s scene graph struggles with numerous small objects, and consolidating them significantly improved scene management and rendering efficiency.
  4. Leveraging OpenGL 4 Extensions
    • I utilized OpenGL 4 extensions, such as NV Bindless Graphics, to enable more efficient rendering of geometry, reducing overhead and boosting frame rates.
  5. Scene Simplifications
    • Replacing some complex scenes with cubemap backgrounds to reduce rendering costs.
    • Substituting 3D crowd characters with a sprite system or animated textures, significantly lowering the computational load for background elements.
  6. Baking Constraints and Logic
    • Many scenes relied on relation constraints or logic-driven constraint boxes, which were performance bottlenecks. I baked these into C++ code, enabling the use of parallel evaluation with MotionBuilder’s Evaluation Manager. This allowed for much faster and more stable scene evaluations.

  These tools and methods collectively stabilized performance in real-time environments, making it possible to handle the originally unoptimized assets while maintaining high-quality visual output.
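  For illustration, here is a minimal pyfbsdk sketch of the Cleanup Manager idea from point 1: find materials and textures that nothing references and delete them. The real tool analyzed many more dependency types and offered a review step before deleting; this is only a simplified assumption of how such a scan could look.

```python
from pyfbsdk import FBSystem

scene = FBSystem().Scene

used_materials = set()
used_textures = set()

# A material is "used" if some model references it.
for comp in scene.Components:
    if hasattr(comp, "Materials"):          # FBModel-like components carry a Materials list
        for mat in comp.Materials:
            used_materials.add(mat)

# A texture is "used" if a used material references it (diffuse channel only here).
for mat in used_materials:
    tex = mat.GetTexture()
    if tex:
        used_textures.add(tex)

# Collect first, then delete, to avoid mutating lists while iterating.
for tex in [t for t in scene.Textures if t not in used_textures]:
    tex.FBDelete()
for mat in [m for m in scene.Materials if m not in used_materials]:
    mat.FBDelete()
```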

GPU Cache

  To squeeze more performance out of frame rendering, I began studying NVIDIA’s CAD scene demos, where the bindless approach demonstrated a significant boost in efficiency.

5.8 FPS with 40–60 seconds of loading time when using the 3D scene directly, compared to 60 FPS with just 8 seconds of loading time when utilizing GPU cache.

Video – https://www.youtube.com/watch?v=-1OAM5uxMA8

Extensions used for bindless graphics (see the link to the article in the final section):

  • GL_NV_shader_buffer_load
  • GL_ARB_bindless_texture

  In this case, GPU Cache involves preparing geometry, textures, and materials into a format optimized for fast GPU loading. The data is structured specifically for vertex attributes, ensuring efficient alignment:

  • Geometry Optimization:
    The geometry is reorganized into a format tailored for rapid GPU access. Vertex attributes are sorted by material and indexed to the appropriate material, allowing for streamlined rendering.
  • Material Consolidation:
    All materials are packed into a single array with a defined structure, reducing overhead and simplifying material lookups.
  • Texture Optimization:
    Textures are preprocessed into a GPU-friendly format, including precomputed mipmap levels. This not only speeds up loading but also mitigates texture aliasing and jittering when viewed at a distance in perspective.

  For texture compression, I utilized OpenGL extensions to upload uncompressed data to the GPU, allowing the driver to determine the most suitable compression format. I then extracted the compressed data and stored it in a file cache. On subsequent loads, the precompressed textures could be quickly uploaded to the GPU, dramatically improving load times for large static environments.
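  A minimal PyOpenGL sketch of that driver-side compression trick, under the assumption of an active GL context and a single mip level; the function and cache handling are illustrative, not the production implementation.

```python
from OpenGL.GL import (
    glGenTextures, glBindTexture, glTexImage2D, glGetTexLevelParameteriv,
    glGetCompressedTexImage, GL_TEXTURE_2D, GL_COMPRESSED_RGBA, GL_RGBA,
    GL_UNSIGNED_BYTE, GL_TEXTURE_COMPRESSED_IMAGE_SIZE, GL_TEXTURE_INTERNAL_FORMAT)

def cache_compressed(width, height, pixels):
    tex = glGenTextures(1)
    glBindTexture(GL_TEXTURE_2D, tex)
    # Upload uncompressed data with a generic compressed internal format;
    # the driver picks the concrete compression format.
    glTexImage2D(GL_TEXTURE_2D, 0, GL_COMPRESSED_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels)
    internal = glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_INTERNAL_FORMAT)
    size = glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_COMPRESSED_IMAGE_SIZE)
    # Read the compressed blob back; write it to the file cache so that next time
    # it can be re-uploaded directly (e.g. with glCompressedTexImage2D).
    blob = glGetCompressedTexImage(GL_TEXTURE_2D, 0)
    return internal, size, blob
```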

This approach has multiple benefits:

  1. Reduced Scene Load Times: Precompressed textures and optimized geometry significantly decreased loading times.
  2. Improved GPU Memory Usage: Compressed textures made more efficient use of video memory.
  3. Increased Visual Stability: Generating mip levels enhanced image quality, reducing texture flickering at a distance.

  This system provided a substantial performance boost for static environments, making real-time rendering faster and more stable.

Lighting

  In standard real-time OpenGL, shaders are typically limited to eight light sources per object. For film production, this limitation needed to be addressed. To overcome it, I implemented a clustered lighting approach.

With this method:

  • The scene is divided into regions, or clusters.
  • Light sources are assigned to lists corresponding to the clusters they intersect in the current frame.
  • Each cluster calculates lighting only for the sources within its bounds, rather than processing all the lights in the scene.

  This optimization significantly reduced the computational load while allowing for a much higher number of light sources in a scene. The implementation of this method is well-documented in https://www.aortiz.me/2018/12/21/CG.html
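  For illustration, here is a minimal Python sketch of the light-assignment step, with clusters simplified to axis-aligned boxes in view space (the real scheme follows the linked article more closely):

```python
import numpy as np

def assign_lights(cluster_min, cluster_max, light_pos, light_radius):
    """
    cluster_min/max : (C, 3) AABB corners of each cluster in view space.
    light_pos       : (L, 3) light positions in view space.
    light_radius    : (L,)   light influence radii.
    Returns a per-cluster list of light indices; the shader then shades each
    fragment using only its cluster's list.
    """
    lists = [[] for _ in range(len(cluster_min))]
    for li, (p, r) in enumerate(zip(light_pos, light_radius)):
        # Closest point on every cluster box to the light centre.
        closest = np.clip(p, cluster_min, cluster_max)
        dist2 = np.sum((closest - p) ** 2, axis=1)
        for ci in np.nonzero(dist2 <= r * r)[0]:
            lists[ci].append(li)
    return lists
```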

 A short tech demo of the clustered lighting feature:

 video – https://www.youtube.com/watch?v=87rAFa49jos

Shadows

 For global (sun) shadows, I used a cascade shadow mapping technique.

  The camera frustum was divided into four regions based on distance, and the scene was rendered into four separate textures. This is a well-known technique, and I will provide a link at the end of the article for further reading on it.
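  As an illustration of how those split distances can be chosen, here is a minimal Python sketch using a common blend of uniform and logarithmic splits; the lambda weight is an assumed value, not the project’s actual setting.

```python
def cascade_splits(near, far, cascades=4, lam=0.75):
    """Distances at which the camera frustum is cut into cascade regions."""
    splits = []
    for i in range(1, cascades + 1):
        f = i / cascades
        uniform = near + (far - near) * f          # evenly spaced split
        logarithmic = near * (far / near) ** f     # logarithmically spaced split
        splits.append(lam * logarithmic + (1.0 - lam) * uniform)
    return splits

# e.g. cascade_splits(0.1, 1000.0) gives the four distances; each region is then
# rendered into its own shadow texture.
```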

  Additionally, shadows could be visualized separately as a filter for the composition master, enhancing control over the final scene appearance.

  For creating local depth effects within the scene, I also implemented 3D fog, adding atmospheric depth to specific areas of the environment.

To create reflection effects, there was an option for additional visualization into a 2D texture or a cube map (in the case of spherical reflections, such as for external car windows).

Nvidia article – https://developer.download.nvidia.com/SDK/10.5/opengl/src/cascaded_shadow_maps/doc/cascaded_shadow_maps.pdf

Final Rendering

  As we neared the finish line, one serious and important barrier remained. In the final image, there was a lot of flickering during motion, especially with meshes and fences modeled with fine and thin geometry. I spent a lot of time thinking about how to address this and realized that we would need to tackle the problem head-on.

  I implemented supersampled rendering: the frame is rendered as a grid of tiles whose combined resolution is several times greater than the final output resolution, and the result is downsampled to reduce artifacts. Combined with standard 16x multisampling and roughly 3×3 (sometimes 5×5) tiles, this approach produced the high-quality image we needed, one that could be shown in a cinema and still be watchable. On a large screen, any flickering is amplified and can be unsettling to the audience.
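  A minimal NumPy sketch of the tiled supersampling idea: render an S×S grid of tiles, stitch them into one large image, and box-filter it down to the output resolution. The render_tile callable stands in for a MotionBuilder render of a sub-frustum; this is an illustration, not the production code.

```python
import numpy as np

def supersample(render_tile, width, height, s=3):
    """render_tile(ix, iy) -> (height, width, 3) image of tile (ix, iy)."""
    big = np.zeros((height * s, width * s, 3))
    for iy in range(s):
        for ix in range(s):
            big[iy * height:(iy + 1) * height,
                ix * width:(ix + 1) * width] = render_tile(ix, iy)
    # Downsample: average each s x s block of supersampled pixels into one output pixel.
    return big.reshape(height, s, width, s, 3).mean(axis=(1, 3))
```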

One more issue we had when rendering large scenes was z-fighting.

Logarithmic Depth

  First and foremost, to improve the depth accuracy in scene rendering, I used logarithmic depth.

Comparing a render with linear depth to one with logarithmic depth (see the video below), the z-fighting effect is no longer noticeable.

Video – https://www.youtube.com/watch?v=SwAhiech3IU
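For reference, here is a minimal Python sketch of one common logarithmic depth formulation (as in the article linked in the resources section); in practice this runs in the vertex shader, and the result is multiplied by w so the perspective divide restores it.

```python
import math

def logarithmic_depth(clip_w, far_plane):
    """clip_w: post-projection w of the vertex; returns a depth value in [-1, 1]."""
    f_coef = 2.0 / math.log2(far_plane + 1.0)
    return math.log2(max(1e-6, clip_w + 1.0)) * f_coef - 1.0
```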

Simulation

  One of the tasks I faced was vehicle simulation. In the film, the main characters travel between scenes by car, and there are also scenes with urban traffic or people riding scooters.

  MotionBuilder already includes integrated PhysX and ODE (Open Dynamics Engine). The first version of the vehicle simulation was created using the built-in physics, but it lacked stability and fine control over the vehicle’s behavior.

  I had experience with vehicle simulation using the open-source Newton Physics Engine in a 3D car tuning project. So, I decided to integrate this physics engine into MotionBuilder, aiming to record vehicle simulations and later process the baked animations.

  The integration proved to be a very complex and time-consuming task, and I can honestly say it did not pay off. Once the simulation was working, allowing for vehicle passes to be recorded, I realized that running a simulation while having a predefined scenario was quite tedious and not very productive. It became clear that it was much easier to create a small vehicle rig for manual and procedural animation, using various pivot points, wheel projections onto surfaces, and calculating wheel rotation and steering based on the car’s trajectory.
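  As a small illustration of that procedural approach, here is a Python sketch of the wheel math: spin from distance travelled over wheel radius, steering from the change of heading along the trajectory. The functions are illustrative simplifications of the actual rig.

```python
import math

def wheel_spin_angle(distance_travelled, wheel_radius):
    """Accumulated wheel rotation in degrees for a given path length."""
    return math.degrees(distance_travelled / wheel_radius)

def steering_angle(prev_heading, next_heading):
    """Signed change of heading (degrees) used to aim the front wheels."""
    return (next_heading - prev_heading + 180.0) % 360.0 - 180.0
```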

Video – Car Simulation for MotionBuilder

  Although implementing vehicle simulation in MotionBuilder was my idea, dream, and a strong desire, within the scope of the project, I can say that it ultimately proved to be unnecessary.

Rigging

  In the end, that’s what I had to do—it turned out to be much more convenient and practical than redoing simulations with a joystick and trying to hit a specific timing, only to later edit over it.

I developed several rigs for the luggage, scooter, and vehicle.

Video of a luggage rig – https://www.youtube.com/watch?v=KgxZS86UHo8

Procedural Animation

  I also developed additional utilities for parametric animation, such as for tossing objects and for rotating and turning wheels when moving along a given trajectory, especially in the case of large urban traffic. This turned out to be an even faster and more convenient approach than using a rig for each individual vehicle.

  Some scenes in the film required specialized types of procedural animation. For example, in the scene with the construction of a wall, where elements needed to fall from above and arrange themselves into a structure along a predefined path.

Wall Bricks Constraint
  In this case, it was very convenient and efficient to create a new FBConstraint. I also used simple mathematical expressions in the input properties, which allowed many such constraints to be used at once, giving each element individual movement and distributing the motion over time.

An FBConstraint is a block of animation logic that runs in a parallel thread, and multiple constraints can execute in parallel with each other.
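For illustration, here is a minimal Python sketch of the per-brick logic such a constraint could evaluate: each brick drops from above and settles at its target position in the wall, with a per-brick time offset so the wall assembles progressively. The easing and parameters are assumptions, not the production constraint.

```python
def brick_position(t, target_pos, drop_height=5.0, start=0.0, duration=1.0):
    """
    t          : current time in seconds.
    target_pos : (x, y, z) final position of the brick in the wall.
    Returns the brick position at time t.
    """
    local = (t - start) / duration
    if local <= 0.0:
        return (target_pos[0], target_pos[1] + drop_height, target_pos[2])
    if local >= 1.0:
        return target_pos
    eased = local * local * (3.0 - 2.0 * local)   # smoothstep fall
    return (target_pos[0], target_pos[1] + drop_height * (1.0 - eased), target_pos[2])

# Distributing bricks over time: brick i starts at start = i * stagger,
# so the wall grows along the path as time advances.
```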

Conclusion

What Worked Well
  Many of the director’s ideas were successfully realized, and I was open to any new concepts, always striving to push boundaries and achieve the desired results.
MotionBuilder proved to be an incredibly flexible, expandable, and efficient platform for production.
  I practiced recording video messages, demos of features, and updates. This was an excellent way to keep everyone informed, and I continue to use this approach to this day.
Even with basic OpenGL, by using OpenGL 4 extensions and NVIDIA bindless graphics, and by relying on top-tier graphics cards, there was great opportunity for optimization and for creating new real-time effects.
  And, of course, the film was completed!

What Didn’t Work So Well
  At times, my personal ambitions took over, and I took on much larger tasks than the project required. For example, a realistic vehicle simulation is impressive and comprehensive, but in the end, it wasn’t very effective for the project itself.
Unit testing was introduced at a late stage, which made it less effective than it could have been.

  I would like to thank Cam Christiansen for the opportunity to participate in the creation of the film and to live out such a grand dream without boundaries. Cam also had a vision for the tools needed for the film and creativity, and he demonstrated immense patience, waiting for the implementation to be ready and fine-tuned—after all, everything was done on the fly, so stability wasn’t always achieved on the first try.

  A huge thanks to NFB (National Film Board of Canada) and producers David Christensen and Bonnie Thompson for their support during the tough times the project faced, especially at a critical moment in my life related to the birth of my daughter. Their support helped me avoid a very difficult situation.

  There are many things I could have done differently, but perhaps I expected too much versatility from my work, when a more effective approach would have been to simply create the tool needed at the moment and put it into action. These tools were not final products; they were makeshift tools for the film, very much in a prototype stage. Perhaps I was wrong to expect to create final products on top of this, although it was nice to dream about it.

  I also want to note that I developed a unit testing system for features closer to the end of the film. The instability caused by changes in the system often led to breakdowns in existing scenes and logic, delaying the work on the film. Looking back at the process now, I would have introduced unit testing at an earlier stage to have better control over the systems as changes were made.

  That said, I am glad the film was completed. MotionBuilder is truly an incredible and flexible platform, even extending beyond the animation system. At a time when Unity and Unreal Engine lacked advanced animation systems and nonlinear editing, I believe MotionBuilder was an excellent choice, ahead of its time.

Links and resources

Here is a set of links to topics and resources that I’ve mentioned in the article.

The Wall film on Canadian Film Board – https://mediaspace.nfb.ca/epk/wall

Autodesk MotionBuilder – https://www.autodesk.com/products/motionbuilder/overview

MoPlugs project on github – https://github.com/Neill3d/MoPlugs

Neill3d Youtube channel – https://www.youtube.com/@Neill3d

Newton physics engine – https://newtondynamics.com/forum/newton.php

Nvidia bindless graphics – https://www.nvidia.com/en-us/drivers/bindless-graphics/

nvFX by Tristan Lorach – https://github.com/tlorach/nvFX

Logarithmic depth buffer – https://www.gamedeveloper.com/programming/logarithmic-depth-buffer

Clustering lighting – https://www.aortiz.me/2018/12/21/CG.html

Categories
interview

Day in the life: Rigging Supervisor

Todd Widup – Rigging Supervisor at
Sony Pictures Imageworks

Todd has been a mentor for us for years and has worked on some really big projects so I asked him to share a bit of his work and how he manages his day.

Guardians of the Galaxy Vol. 3. © 2023 MARVEL.

Alright I suppose I should do a brief introduction.  Hi, my name is Todd Widup and I am a Rigging Supervisor at Sony Pictures Imageworks where I work on VFX projects. 

That is to say, I currently work exclusively on our VFX films that come in from clients. Over the last couple of years, that has included films like “The Marvels”, “Guardians of the Galaxy Vol. 3”, and currently the “Ghostbusters” sequel, along with a couple of unannounced projects.

So, without further ado, here is “a day in the life of a Rigging Supervisor at Sony Pictures Imageworks”.

Typical day

Normally, my typical day starts at 9:00am.

I log in and start checking emails, seeing what assets were published on my shows, yes, shows…I am usually on two shows at a time.  I also look over what models need to be reviewed, what rigs are ready for review, and I review assets that are still in modeling for the shows.

Now, since I’m in the Midwest and Sony is on the West Coast, I have about 2 hours before everyone else starts working, so I go through and get some time to do some testing or work on an asset, if there is one…and there usually is. 

I also use this time to write a few python scripts to assist with automating some things I do routinely for rigs or asset prep work.

Once the teams start logging in, it’s meeting time lol.  Most days, I have roughly 4 hours of meetings, but there are occasions where it is closer to 7 hours. 

I have show production touch bases, team touch bases, animation rounds and show rounds as well as department meetings and show tech meetings.

In between all those meetings, I am usually trying to sync with my artists, helping where I can and assisting our juniors or interns with tasks.

Having taught for the Apprenticeship program with Rigging Dojo for years, and having taught at a state university, I watch my new artists a lot, helping them pick up workflows and techniques with our proprietary systems and tools. 

In addition, as a supervisor, I am involved with the interview process for new artists.

Miles Morales (Shameik Moore) and Gwen Stacy (Hailee Steinfeld) take on The Spot (Jason Schwartzman) in Columbia Pictures and Sony Pictures Animation’s SPIDER-MAN™: ACROSS THE SPIDER-VERSE.

My normal day ends someplace between 7 and 8 pm my time.  Normally my last meeting is around 6:30/7pm, and then I wrap up a few emails and messages and commit anything I have been working on that day so it is backed up, just in case of a systems issue. 

As you can see, it’s lots of meetings and reviews, and a little on the box time, but all well worth it.

Student advice

As a supervisor and artist, I get asked a lot of questions… especially from the mentor students I have had in the past, along with the classes I have taught at a state university.  So here are a few.

Do you have any tips on keeping track of it all or what has or hasn’t worked well with the spread out teams?

I have had the pleasure of having a phenomenal group of coordinators on each show.   They help to keep me sane and focused and track most things.  Beyond having a personal coordinator, I use Google Calendar for tracking key dates so I get pop-up notifications on a lot. 

Additionally, in the studio, we use Shotgrid for task tracking which is a huge help as well.

What do you look for in a reel?

I get that a lot from students in my college classes.  In general, I’m looking for good samples of deformation, joint placement, variety of work (hard surface, bodies, faces) and python scripting.
Showing those 4 things goes a long way for me.

Thank you Todd for giving us a look into your day!
Rigging Dojo

You can find more about Todd here

https://www.linkedin.com/in/todd-widup-51633a1/

And learn more about our Apprenticeships if you want to have experienced mentors like Todd guide your career path and level up your skills.

Categories
Blog Post interview video

The Rigging Buddies Podcast #26: Alicia Carvalho

This is a great interview with Rigging Dojo Alumna, Alicia Carvalho, Rigging Supervisor at Digital Dimension. There is some great discussion about deformation and the need for good skinning and deformation skills. Alicia shares some must-hear advice on how to prepare information before rigging, and insights on how to lead a team and work with production to get great results. This is a must-listen for new and experienced character TDs alike, so go show them support.

Also DO NOT miss the last half hour+ of this podcast. It is an outstanding discussion of women in the workforce, bias, and the impact on career paths. It is incredibly important; whether you are management or an artist, you need to hear this.

Huge thanks to Miquel for doing the Rigging Buddies Podcast and keeping the interviews going for the community. Be sure to check out all the great podcasts and the mGear rigging tools he has created for Maya http://www.mgear-framework.com/ (We also start our Rigging 101 students on mGear to explore concepts)

About Rigging Dojo Rigging Class

Our Rigging 101 Class spends a good deal of time on the foundations of what makes good skinning and deformation from the ground up and anyone who has taken our class will recognize this quote that we start our skinning topic with.

“My biggest gauge is that I still look for deformations that are really high quality. You can’t easily teach someone to have an “eye” for what looks right. Someone who knows what an arm should look like when it moves is way more valuable to me than the person that can build a fancy toolset. To me it is still comes down to the same thing, do they “get it.” No matter how they get there, did they hit a creative goal. Some people get hung up on technique and process. They forget that they are creating something. “

Categories
Blog Post interview Leadership rigtip

Get to know: Nina Fricker – Lead Technical Animator at Insomniac Games


https://www.linkedin.com/in/nina-fricker-9182921/

Let’s start off with some questions from our friend Izzy Cheng

Hi Izzy!!!

What is some advice you’d give to people getting into Technical Art or Technical Animation?

AH!! Where to start?? This is a topic I could talk about a LOT.

The main advice I give to aspiring character TDs is to work with a modeler and animator on some characters. This has numerous benefits. First, the group will push each other to be better at their craft. The rigger is going to find areas where the model needs improvement to get good deformation. The animator is going to find problems with the rig that will require better weighting and controls from the rigger. Everyone can help critique performances. Find people who will really push quality. Ideally at the end of all of this, all three will have great demo reel pieces that each individual wouldn’t have been able to achieve on their own. Another benefit is that you’re basically emulating production. This is how it works in a studio, so getting this kind of experience, and more importantly, getting comfortable and adept at the iteration cycle between departments shows companies that you’re production-ready! Find a way to highlight this collaboration on your resume and demo reel. I’d love to see examples of how the iteration loop between everyone improved the end character and performance. Make sure to talk about this in your interviews! Let people know what animators hated in your rigs, and how you addressed their concerns. It eases my mind, as a hiring manager, to know that you’re comfortable receiving and responding to criticism.

As a veteran in the game industry, what keeps you from burning out?
I have a lot of interests outside of work that keep me balanced. I love to work out, cook, learn, garden. Work/life balance is really important in order to sustain a long career in the game industry. I have been incredibly fortunate to work for studios that take good care of their employees.

Do you have a favorite project you worked on at Insomniac and why?
Ratchet & Clank: Into the Nexus

This was my favorite because it was the project that I felt like the North Carolina studio really hit its stride. As a group we had gone through shipping a few titles together, and had learned to work and collaborate with each other incredibly well. In addition to the working relationship of the team, the project had a really fun plot line and character line up. Into the Nexus had two awesome female characters (Talwyn & Vendra) which was a great new challenge for me. They both had to deliver a wide range of emotion. The animators on the project brought out some spectacular performances from them which I’m still very proud of to this day.

What traits make a good *Lead* Technical Animator?

To me the most important trait of a good lead, regardless of discipline, is that your focus is on making your team successful. Going from an individual contributor to a lead required a major mental shift in what being good at my job looks like. This didn’t come easy after almost a decade of straight-up production work. I still struggle with not being able to do as much work myself. As with all aspects of my career, I’ve received wonderful guidance and mentorship from so many people at Insomniac. They’ve helped me realize that I make a big impact as a force multiplier through leading. This came in the form of CONSTANT reassurance that the success of my team was first and foremost and that my feeling of not doing enough showable production work was a normal reaction. It’s become very fulfilling for me to see the amazing things the folks on my team accomplish.

Great Questions from Izzy, thank you for that, now let us get to some of our own.

Were you always into computer graphics and games or how did you find your way into the industry.

I pretty much decided that I wanted to work in 3D when I was 12 years old. This was when I saw Jurassic Park in the theaters (yes, I’m old). The “Welcome to Jurassic Park” scene where we first see the brachiosaurus blew my mind. Eventually I saw a “making-of” for the film and fell in love with the magic of bringing digital characters to life.
Just a couple years later Toy Story was released in theaters. A full CG animated film. Once again, mind blown.
Around this same time, I had a wonderful teacher in middle school who taught AutoCAD to 7th and 8th graders. Back in the early/mid 90s, this was basically unheard of. Having access to his high-end Linux machines and learning this sophisticated software gave me confidence that I could make things on a computer and learn complicated concepts. I give him a lot of credit for my comfort level with technology at a very early age.
At this point I set a life goal of being a VFX Supervisor at ILM. Granted, at the time, I had absolutely NO idea what that meant. It was just a job title that I saw under the names of people in behind the scenes pieces, so I thought that’s what I was going to be when I grew up.
While in high school, a good friend of mine showed me how to use trueSpace 2 and Bryce 3D. We got both installed on my home machine, which freed me to start tinkering around on my own. Also in high school, the clincher of my career direction came out in theaters. The Matrix. Up until then, being techy and computer obsessed was just nerdy. The Matrix was not only technologically inspiring, but it made me feel like being techno-savvy was super bad-ass! I very much wanted to be part of the VFX/CG animation world.
The combination of all these things led me to Full Sail to study Computer Animation. I wanted to get into a job where I was working on 3D characters as soon as possible. It wasn’t until I got to the Character Setup class that I learned that rigging was where I wanted to go with my career. This is also when games as a career started to surface as an option. I didn’t much care which area of entertainment I went into as long as it meant I could work on 3D characters.

Can you share your learning curve and experience over the years as a TD going from finishing school to getting your first job?

The transition from school (where I had both Rigging Dojo founders Brad Clark and Chad Moore as instructors) to my first job lasted roughly 3 months. In this respect, I consider myself INCREDIBLY fortunate.
While still in school, during my rigging class, I had a lab instructor who left towards the end of the course to work at a company called Turbine. Since rigging was something that really sparked my interests, I kept in touch with him throughout the rest of my time at Full Sail. I’d send him my group project rigs and he was gracious enough to give me feedback and advice when I ran into technical issues. Not long after I graduated, this lab instructor turned mentor was looking for an entry level rigger to join his team at Turbine. Thankfully he saw some potential in me and hired me to fill that position!
In my first couple years at Turbine, I learned a ton about the ins and outs of production. There’s so much more to being a developer than the specific craft you’re trained in. It was quite intimidating at first to learn all of that and a game engine. At Full Sail, we didn’t get any exposure to engines or production pipelines. I get the impression that has changed at most schools, and both are now a regular part of 3D programs.

What does your day or week look like now that you are on the Tech Animator side vs. more of a rigging or pipeline TD?

Tech Animation at Insomniac means supporting the rigging pipeline and Maya tools for artists, primarily animators and riggers, but in some cases other departments as well. As a lead, my main responsibilities throughout a week involve jumping around to a number of different things. Depending on what’s going on and where we are in production, things can change week to week. Here are some of the things that I do regularly:
Meet with various feature teams to evaluate progress, plan goals, collaborate on a plan of execution for the next set of goals, etc. It’s in this area that I get closest to our games. The work is very close to the heart of what our audiences will experience.
Meet with the riggers on my team to discuss their goals, both short and long term. This is also where I get feedback from them on how things are going on the team/project/studio.
Provide rigging support for projects. I generally try to stay out of important tasks because the amount of time I can spend on production work can vary greatly day to day. I’ll take on smaller rigging tasks when they pop up. This helps the people on my team stay more focused on the larger things they’re working on. I also really enjoy working on prototypes for a new idea.
Fix bugs both in the game and in our tools.
Work with other leads and the project manager to schedule. Because production is constantly changing and evolving, we evaluate and adjust on a weekly basis.
Collaborate with the character TDs in both studios on direction of our tools.
Participate in code reviews.

Can you talk about developing for VR projects vs. a more traditional game and some things you learned or overcame that might have been a surprise?

As a studio working on our 4th VR title, we’ve learned an incredible amount about developing games for VR. To me the most surprising aspect of working in VR is how easy it is to trick your brain into accepting what you’re seeing as real. Back on Edge of Nowhere development, we had areas of the game where you’d walk along cliff sides that overlooked steep edges. My hands would get really clammy and sweaty every time I ran across them. I truly believe that VR is something you need to experience first hand to really understand it. It’s a very visceral experience to have your fear of heights triggered just by playing a game. It’s an exciting medium to play in and we’re pushing the boundaries of exploration in VR with our latest title, Stormland.

This is a behind the scenes teaser (I make a brief appearance):

https://www.youtube.com/watch?v=Gla5gObbERs

And here is our trailer!

https://www.youtube.com/watch?v=DJBXA8gN-5k

At Rigging Dojo we get a high percent of female students and some of our most successful students have been women, what has your experience been as a female in Tech and games?

I am extremely fortunate to have spent so much of my career at Insomniac where gender is a non-issue. My leaders and colleagues create a safe, professional and collaborative culture in which everyone is able to thrive. It’s not something I take for granted. What I find the most troubling is that women still only make up a small percentage of the industry. I think we’re hovering somewhere around 15-20%. I thought after 17 years I’d see a more balanced population, but the increase has been meager at best. This makes me sad, and it’s why I got involved with a mentoring program. The least I can do is play a small part in helping more women make their way into this line of work that I love.

You have been a mentor for artists like Izzy who we just interviewed, do you still do mentoring and what was that experience like?

Izzy and I were paired up through a mentorship program called Game Mentor Online which is unfortunately no longer running. It was an excellent program started by Women In Game International that I really enjoyed and wanted to continue with. Since it never came back online, I haven’t been actively seeking a mentoring program, but I would like to find one that has a similar structure and vibe to it. I miss it, and as mentioned above, it’s a way for me to help women break into our industry.
As a side note, Izzy was WELL on her way to a budding career as a Character TD when I started working with her. She’s incredibly smart, hard-working and relentlessly learning new things. I’m so incredibly proud of her! <3

If you could give your past self any advice on working, life and the games industry what would it be?

There was a long time where I was very self conscious and fearful of not knowing things. If a topic came up in conversation that I didn’t understand or wasn’t familiar with, I’d just listen and try to figure things out. It really weighed on my self esteem. On the outside I’d nod along like I was keeping up but, internally I was upset and convinced that I was stupid. I felt like a fraud and that soon I’d be discovered and fired. Eventually… we’re talking years… I had a bit of a mental shift. There came a point when I got so tired of feeling so terrible about myself despite my career still moving forward. I can’t remember the catalyst, but I started experimenting with speaking up. I tried it out a little, here and there, and saw no perceivable adverse effect. As time went on, I got more and more comfortable with putting myself out there and asking questions when something was raised or referenced that I didn’t know or understand. Now, I’m on the complete other end of the spectrum. I ask about anything I don’t know. Completely shameless.

There were a few surprising things that came from this 180 (okay, maybe not THAT surprising, but it was for me)…

1) Nothing bad ever came of it. Not once. No one ever shamed me or made fun of me or thought less of me. In most cases, people have been happy to explain and help me.
2) I learned a lot from my peers. So often people were more than happy to take the time to teach me.
3) A lot of people were in the same boat. So many times I’d hear echoes from others of “oh yeah, I don’t know either”. There are even times when people who appear to nod like they understand will admit they don’t once the topic is cracked open! Why do we do that?! I think that showing vulnerability is difficult and uncomfortable, so we tend to do what’s more comfortable. We nod and pretend to know.
My advice to my younger self would be: Let your vulnerable and authentic side show. It’s okay to be imperfect and not know everything. We’re all in good company. Give your peers the benefit of the doubt that they’re more helpful than harmful.

Last question – what book are you reading right now or last finished?

I usually keep both a fun book and informational book going at once.

I just finished Trevor Noah’s Born a Crime: Stories from a South African Childhood. I highly recommend listening to the audiobook version. Now I’m looking for something to read/listen to next. Any recommendations?
On the informational side, I’m reading Being Wrong: Adventures in the Margin of Error by Kathryn Schulz. It’s an interesting dive into the psychology of being wrong. This kind of stuff is fascinating!

How can people best find you online?

Twitter would be the easiest way, although I don’t post too often: @NinaFricker

 Thank you so much for taking the time for us.

You bet! The pleasure was all mine!

P.S. Want to see someone interviewed? Let us know so we can talk with them! Our next interview will be with Sophie @ Insomniac Games California

A character TD/rigger on the awesome  title! Congrats!

Then next after her in our women in Tech Art series will be Julia Bystrova, Lead Character Rigger at Tangent Animation, who just finished up work on the all-Blender CG film from Netflix called “Next Gen” by http://www.tangent-animation.ca/ 

We hope to have more Blender training available this coming year as it expands and matures its animation and rigging tool set along with major UI improvements (Blender Rigging for Netflix Next Gen )

Categories
Blog Post interview Microcast

Blender Rigging for Netflix Next Gen

Like many people, we took notice of the work for the new movie coming out from Netflix called “Next Gen” by  http://www.tangent-animation.ca/ 
Today we talked with Rigging Lead David Hearn (https://www.levelpixellevel.com) about rigging and about working with Blender on a large scale production. (We were also joined by friend and Blender master Charles Wardlaw )

Interview:

Listen to the interview podcast here, or on our microcast TechArtJam.com

Check out the trailer:
Next Gen
Be sure to check out the great Blender Robot projects that David has posted on his blog. Here is one of the latest “Machine Making Ep 6”
Main goals of this Machine:
  • To model and rig a full robot in Blender 3D
  • Build a double-jointed leg system using IK
  • Add an interesting city background and push the final composition.
  • Download the asset here: https://gum.co/LpTEb
  • Full Animation Test: https://youtu.be/WUj4sFzvGpU

https://www.youtube.com/watch?v=Y-og3yQPSHc

https://www.levelpixellevel.com/machinemaking/machine-making-episode-6-full-robot (check out the full blog post for more behind the scenes)

Blender Rigging Features:

First check out the latest build Blender 2.8
https://www.blender.org/2-8/

Check out the features for yourself by playing with these files provided by the community.

Great Rigify Addon – a great place for beginning rigging in Blender
BlenRig – an auto-rigging solution
A great place to start rigging in Blender:

https://www.youtube.com/watch?v=0U3NjTvwdWI

Blender Bendy Bones Example
Blender 2.8 New Armature Display settings
Blender 2.8 new collections and groups
Blender 2.8 Animation + Eevee
Great Intro To Python In Blender
Learning Blender:
Free Blender Models