Day 4: nodelling some components together, and finally some actual rigging
Continuing to build on the previous video, with some updates to the core control shapes for better direction and selection. The model and nodes were also moved up from the world origin (this was done off screen due to crash recovery).
Myth: Having more than one shape node under a transform is a problem
Busted: You can have as many shapes as needed. The problem is that some scripts and tools assume there is only one shape node, and/or that the shape node is named transform+Shape. Write better tools.
Rigging up pedals:
Try to keep pivots clean: either at world zero (0,0,0), or local in the position you need (put there by the buffer)
Rotate order for the pedal – make sure the driving axis is not influenced by the other two axes (like X in rotate order XYZ)
Build the rig function in the graph and don’t rely on the hierarchy of nodes to make the rig work; use good graph connections so that if there are changes in the hierarchy, the rig function won’t break (see the rotate order comment above). Need rotate order help? Check our infographic.
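A quick way to see why rotate order matters: Euler rotations compose as matrix products, and matrix products don’t commute, so the same three channel values give a different orientation in a different order. A minimal pure-Python sketch of the underlying math (illustration only, not Maya code; the function names are ours):

```python
import math

def rot_x(deg):
    """3x3 rotation matrix about X (column-vector convention)."""
    r = math.radians(deg)
    c, s = math.cos(r), math.sin(r)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(deg):
    """3x3 rotation matrix about Y (column-vector convention)."""
    r = math.radians(deg)
    c, s = math.cos(r), math.sin(r)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def mat_mul(a, b):
    """Multiply two 3x3 matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# With column vectors, the right-most matrix is applied first:
xy = mat_mul(rot_y(90), rot_x(90))  # apply X first, then Y
yx = mat_mul(rot_x(90), rot_y(90))  # apply Y first, then X
print(xy == yx)  # False: same channel values, different orientation
```

Which is why picking a rotate order where the driving channel composes cleanly (and isn’t fighting the other two axes) matters for something like a pedal spin.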
Locking attributes is an interface issue, not a “rig” issue; locking shouldn’t break the rig’s actual functionality.
Making the pedals stay flat (oriented to the world) is the rigging goal (AKA an anti-counter-animation rig). The concept: the rotation data of the wheel control can be reversed and used to drive the pedal, so the rig does the counter animation for you.
Rigging Dojo note: Raf starts talking about custom nodes vs. Maya default nodes, and how connections like the one we are about to make cause Maya to create unitConversion nodes to deal with degree/radian conversions between connections. Rigging Dojo alumnus Ryan Porter has released an open-source set of angle nodes to help keep the graph clean. If you need or want to stay in vanilla Maya, you can use the animBlendNodeAdditiveRotation node to reverse the rotation without extra nodes or unit conversions.
rotateX -> inputs A and B of the animBlendNodeAdditiveRotation node, then set Weight B to -1; you have now zeroed out the rotation from the input.
Back to Raf: pedals_M_staff_Ctrl.rotateX -> multDoubleLinear (* -1) -> pedals_M_pedalLeft_ctrl_srtBuffer
Now drive the mesh (temporary, to test the rig; note that in the video a multDoubleLinear node is created by mistake where an add node should be used)
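The counter-animation hookup above reduces to simple arithmetic. A plain-Python sketch (not Maya API code; the function names are ours, and only the * -1 relationship comes from the video):

```python
import math

def counter_rotation(wheel_rotate_x_deg):
    """Equivalent of the multDoubleLinear with its second input set to -1."""
    return wheel_rotate_x_deg * -1.0

def pedal_world_rotation(wheel_rotate_x_deg):
    # Parent (wheel) rotation + buffer counter-rotation = net zero,
    # so the pedal stays flat no matter how far the wheel spins.
    return wheel_rotate_x_deg + counter_rotation(wheel_rotate_x_deg)

# Maya inserts unitConversion nodes on angle connections because the
# graph works in radians internally; the conversion factor is pi/180.
DEG_TO_RAD = math.pi / 180.0

print(pedal_world_rotation(270.0))  # 0.0 -- the pedal stays flat
```

This is also why the add-vs-multiply mistake in the video matters: the reversed value has to be *added* on top of the inherited rotation to cancel it out.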
Notes on left and right components (the example is the left and right pedals): it can all still be thought of as a middle component.
We asked about a shot need, like a gag where animators need to counter-rotate the arms as if they broke.
GREAT TIP!!! As long as it is for a single shot, it’s fine to do a one-off.
“Don’t complicate anything you design trying to preemptively address every case you can think of… address the minimal requirements as well as you can and as quickly as you can, and let them find out what they like about the main use cases.”
Rig the other side:
Duplicate the pedals_M_pedalLeft_ctrl_srtBuffer and name it Right
Move the buffer to translate the rig over to the right pedal location
At this point Raf hit some bad issues with the rig because so many connections were going to one node that affected many areas… time to break up the connections.
Under pedals_M_cmpnt break up the deform output system
deform (create new nodes)
These new deform nodes replace the quick connections done before, so re-connect the incoming connections from the controls to the deform nodes and then back out to the mesh:
pedals_M_pedalLeft_ctrl_srtBuffer -> addDoubleLinear -> the new deform pedal_m_pedalLeft_srt -> back to pedal_L_footRest_mesh
Time to drive the wheel mesh with the control rig, so follow the process for component hookups outlined so far and check out the video.
Need to create a pedals_M_cmpnt output.
Hook up the connections to drive the wheel. We need the world transform here, as this is the master control (or close to it).
When you make “god” nodes or master nodes, always add offsets to these controls as a safety backup. The offsets let you expose the control to animators without creating problems trying to deal with Maya pivots.
… This section gets a bit complex and hard to follow in text, so make sure you watch the video and build the nodes as you follow along.
Quality comes from iterations. Build a rig that you are not afraid to edit… building a robust and clean rig allows this.
Q: Can you add an attribute to turn off the auto counter rotation on the pedals?
A: Adding an attribute to allow the effect to be turned off can create a bit of a cycle on the graph, but it isn’t too bad and won’t cause a problem. Add a custom “blend” attribute to pedals_M_staff_ctrl and feed it into the multDoubleLinear value to change the multiplier.
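The answer above boils down to scaling the -1 multiplier by the blend value. A plain-Python sketch of that arithmetic (the attribute and node names come from the notes; the function itself is ours, just to show the idea):

```python
def pedal_counter_rotation(wheel_rotate_x_deg, blend):
    """blend = 1.0 gives the full counter animation; blend = 0.0 turns
    the effect off. The product feeds what would be the second input of
    the multDoubleLinear node in the rig."""
    multiplier = -1.0 * blend
    return wheel_rotate_x_deg * multiplier

print(pedal_counter_rotation(90.0, 1.0))  # -90.0: fully countered
print(pedal_counter_rotation(90.0, 0.5))  # -45.0: half blend
```

At blend 0 the multiplier collapses to zero and the pedal simply inherits the wheel rotation again.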
NOTE: Thank you to the other TDs reading over our documents and catching problems, spelling or things we missed, it makes it better and we appreciate it.
Let’s get started with a bit of background about you. What led you to computer graphics and rigging as your specialty?
I guess it all started around 1997-1998 when I was 16 years old and I got interested in computer graphics and 3D. I remember I saw a commercial on TV for a collectible magazine to learn how to start doing VFX like they do in Hollywood. So I started this collection and this was both my starting point in the field and how I got started self-teaching. This collection was oriented to 3D Studio DOS and it was everything I had to involve myself in the 3D field for the web in those early days.
In 2000, I already had some experience with 3D Max when I started college. I studied Digital Design & Graphic Design, but during those years at college I didn’t specialize in CG; it was more of a generalist education in the design field (graphic design, web design, introduction to web scripting, art theory, photography), rather than the kind of studies where you finish with a very specialized demo reel to start looking for a job in your particular specialty, such as modeling, look, rigging, etc. During those years at college, I took all the classes offered that were related to the animation industry, and the introduction to Maya class was one of them.
During my last years at college, I started working as a web designer. Those were probably my first steps into scripting.
In 2006 I got my first job related to the 3D animation field. It was a very small company in my hometown, Barcelona, where I got the opportunity to work as a 3D generalist for an animated TV series development, and also for the toon version of the FC Barcelona players. I did everything from modeling to rigging to texture, lighting, and the final comp. As I mentioned, it was a very small company! At that time, I was the only one at the studio with rigging skills, so I modeled and rigged all the characters for the TV series development. That was my first time writing code for rigging.
Where did you go from there, you were working as a generalist but then did you have an area you liked best?
During my following steps at different companies, I’ve been primarily working as a character modeler, lead character modeler, or modeling supervisor, but I’ve been rigging as a freelancer for The SPA Animation Studios and Aardman, or rigging my own stuff in my free time. Here you can see some examples of my rigging work:
Given all of that, it is difficult for me to choose a preference between working as a modeler or as a rigger, because I enjoy both the artistic and the technical side. So, I think that’s why I’m very conscious about the relevance of a good topology and all the implications of the modeling work on the rigging side.
During all these years I’ve also been working in the educational field, as a lecturer and online lecturer/instructor, specifically on this subject. I’ve recently finished my collaboration work as a lecturer for The CG Master Academy (CGMA) about the technical side of modeling work (http://3d.cgmasteracademy.com/instructors/)
It seems like you made a big jump from being a generalist to going through Aardman and then ending up at Disney. What was that transition like for you as an artist, and did it change your workflow or skill set?
Yes, it was a big jump, but it was a process that took eight years from when I started working as a generalist to when I landed here at Disney as a character modeler. During those eight years I’ve been working on my artistic and technical abilities, always oriented towards characters, but I have the impression that now I’m more focused on the artistic side, mainly because at Disney I have unlimited resources to keep growing as an artist, from my friends and coworkers seated around me, the feedback I receive from my supervisors and art directors, and because of the legacy and tradition that the company has.
On the other hand, now that I am spending 70% of my time using zBrush, it means that I spend most of my time on the artistic side, working on the volumes and shapes and am therefore less focused on the topology or neutralization which always come later on during the modeling process.
You mentioned your experience modeling and rigging and how, because you have done both, you are more aware of topology when modeling. How do you approach topology differently, and can you share some tips for both TDs and modelers wanting to improve?
Definitely having experience in both fields dictates a lot of the way I work, especially if I have the opportunity to model and rig the same character.
There are some basic rules about topology that more or less everybody knows, such as uniformity on the topology density, facial topology layout, avoiding triangles, nGons, or poles with more than 5 edges, etc., but there are a few simple topology things that are very helpful for the rigging side that I always do on my characters.
I always use quadrant/quarter meshes, especially on the character’s limbs. This basically means that I have a cylinder mesh for the limbs, which makes it easy to identify the middle edgeloops to split the mesh in quarters. For me, this is always very useful in terms of joint placement or even painting weights.
Another important thing I always do on my characters is to maintain consistency and uniformity on connection areas, which basically means to use clearance layouts on connection areas, keeping all the poles, for example on the shoulder area, on the same edgeloop. This consistency helps me to keep an easily readable topology while working, for future examination of the model, and also as an easy way to share parts from my different models. Also, on the rigging work, it helps me to paint the skin weights, because the weight distribution grows homogeneously thanks to the fact that all the interruptions on the mesh and poles happen at the same level, since they are all on the same edge loop.
Another good habit as a modeler is to be consistent with your topologies, especially in production. If there is not a universal mesh, it is a good practice to try to identify some landmarks or areas to keep in the same spot between your different characters.
Lastly, I like to publish my models with the mouth and eyelids closed, because it is easy for me to track the correspondence between loops coming from the upper lip/lid to the lower lip/lid. This makes it easier to keep nice topology, paint weights, and even to get a texture on the eyelids which doesn’t stretch when you close the eyelids.
All of this, obviously, is not a “must”, but for me it is always helpful and makes my work easier as a modeler and also as a rigger.
(An example of the facial topology layout I use for my own projects, which is ready for production)
Modeling tools have gone through a big revolution over a short time. How has this affected your work? And as far as technology and skills go, what do you feel is needed to get characters to the next level a production requires?
For sure, modeling is probably one of the areas which have evolved fastest in recent years, due mainly to the implementation of zBrush in the majority of production pipelines. This implementation means that you can create high-polycount detailed assets, as well as bake all the detail into displacement maps, or normal maps in case of VFX or video games. One of the pros of using sculpting software like zBrush is that it is beneficial even when working on traditional character modeling for animation, where the baking details techniques are not required because the characters don’t demand that amount of detail. I’m thinking of the traditional process of modeling, where we basically push vertices to get the desired shape. This process has become easier thanks to sculpting software because now, instead of tweaking vertices, we move and push these vertices using brushes, and this results in, not only a big improvement in performance and speed, but also the capacity to try and experiment faster, so iterations on the modeling process have become easier.
Before I started working at Disney Animation, I wasn’t a 100% zBrush user. I had used it in the past, but it wasn’t as definitively an important part of my workflow. But since starting here, I have realized all the benefits of using it in production, especially in terms of speed, the fast tweaks you can do on your model, and how easy it is to get some polypaint on it to help you to present the character for approval.
Also, here at Disney, we tend to use a universal topology to populate the different worlds of our movies. This topology evolves from show to show to accommodate the different needs of each department or the different demands of each show. In general, we tend to use a quite dense topology in order to be able to have enough resolution to recreate the variety of designs that exist in our shows. So having the ability to work easily with dense meshes makes zBrush an important tool in our pipeline.
Simplicity in rigging is a big challenge, both for speed and for avoiding animator overwhelm. What are your thoughts on how to keep animation controls intuitive?
All animators have their own preferences in terms of how to make the controls more intuitive for them, but in general what they want is to have all the manipulators around the character to avoid traveling with the cursor to the right side in order to change values in the channel box.
This means you have to find a balance between a manageable number of controls on the character and how to relate their different transformations and axes (translate, rotate, and scale in XYZ) with all the different behaviors or deformations you’re providing to the character, especially in areas with a large density of controls, like the facial area. Of course, there is always the opportunity to provide different levels of control to the animators, such as main controls or micro controls, for instance, the micro controls for the lips. But at least, on the main level, to provide an intuitive balance between control vs. manipulators.
Your Troglodita rig caught our attention, and that of many other artists. Would you walk us through some of its making? Deformations on it are stellar and the face rig has really nice control, and I am guessing there are quite a few fixes or corrective shapes on the rig to help with deformations.
The Troglodita was a freelance project I did for The SPA Studios. They had an older version of the character done in 3dMax, so in 2012 they contacted me to create a new version of the character, new modeling and rigging, but this time for Maya.
It was a great project because of the complexity of the rig. Sergio Pablos, the owner and director of The SPA Studios, who has a background as a Disney 2D animator, requested a rig with a broad range of deformation that was able to, for example, move, rotate, and scale the majority of the controls on the rig to give him the desired range of plasticity/expressivity.
For both the body and the facial rig, I wrote a python autorig. The body autorig is a standard kind of body rig, only with an extra level of control, such as the body hair deformations, in line with the customer’s expectations.
The facial rig was trickier, as it was basically a bunch of guided deformers sliding over a nurbs skull surface in order to help the animator keep the focus on the animation instead of on the constant volume preservation or fixing the interpenetration with the different parts of the face. This was very important, especially on the muzzle, where I wanted each control to describe a non-uniform arc trajectory, when it was translated through a single axis according to the precise muzzle shape at that particular moment. This guided system allowed me to create a coordinate system based on the UV coords of the nurbs surface to build a dependency/relationship system for the facial controls, and to enable a PDS system to drive a bunch of corrective facial shapes.
As a skin influences the facial deformation, I used a mix between joints and a nurbs patch system that was dependent on a bunch of nurbs curves which received the direct input from the animation controls.
So, in order to be able to create this kind of system, I wrote a bunch of extra python tools to help me to manage the work with the corrective shapes, including exporting the raw meshes to work with the shapes separately from the rig, splitting the corrective shapes, connecting them to the final rig, and updating them when necessary. It was a ton of work, but at the end I was very proud of the work done, because in a lot of aspects, it pushed me to unknown areas at that time.
(Example of the work done for the Troglodita rig)
We haven’t talked much about scripting, but you mentioned that you first started writing scripts for your rigs when you were a one-man show. How did you continue to learn and advance your scripting ability? Do you find that it has benefited you as a modeler also and not just when rigging?
I started with scripting in college and then at my first jobs when my career was focused on the web design industry. I remember some fun projects I did with action script, the scripting language for Adobe Flash. Later, when my career started to focus again on the 3D animation industry, I used this amateur scripting knowledge to start doing small things with MEL, such as the zipperRig and some other stuff I’ve never shown publicly. At that time, I remember some friends at work recommending that I switch to Python, so I decided to enroll in a couple of scripting workshops for MEL & Python. I’ve kept in contact with my coding guru friends, including Angel Pavon (http://www.agedito.com/blog/), who is always willing to give me some coding advice.
Did scripting help myself as a modeler? Yeah, of course. Scripting is always a plus, even on modeling—especially when you realize that you are doing any kind of labor too often, from procedural modeling tools to tools designed to help you split facial shapes. Scripting is always a good thing to have in your toolset, as it’s often a time-saving tool, especially on tasks which require trial and error.
What do you tell students who don’t want to learn programming or more technical aspects of modeling and the software?
Modeling and sculpting is a discipline more attached to the artistic side, so not all modelers develop their skills on the technical side, especially these days where, thanks to sculpting software, you can start with just a piece of digital clay.
Definitely, the artistic eye is the most powerful tool for a modeler—and it will be even more in the near future. So scripting isn’t ever a “must”, but as I was saying before it could be very beneficial for some particular modeling tasks, or at least in order to be able to communicate with someone with technical skills to develop a tool for that particular kind of modeling work.
I have a few more rigging questions, the zipper being one of them. It is a really cool looking rig. What was the process to get the zipper effect and how did you get the models to line up and not go through each other?
The zipper rig was one of the first autorigs I wrote with MEL. It was more than 8 years ago so I don’t remember all the details, but it was just a bunch of point-on-curve info nodes on a curve constrained between the editPoints of the main curves of each half of the zipper. Then, depending on the current state of that particular point on the zipper, open or closed, the point-on-curve info node returns a position along the uCoord of the curve, where 0 or 1 equals open and 0.5 equals closed. But as I said, it was a long time ago so I don’t remember all of the finer details.
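The uCoord mapping described can be sketched as a tiny function. This is only our guess at the arithmetic implied by the answer (the original setup is described from memory), written in plain Python rather than MEL:

```python
def zipper_u(side, close_amount):
    """Hypothetical uCoord for one zipper point.

    side: 0.0 for a point on the left half of the zipper, 1.0 for the
    right half (matching u = 0 or u = 1 when fully open).
    close_amount: 0.0 = open, 1.0 = closed; as the zipper closes, both
    sides converge on u = 0.5, where the teeth meet.
    """
    return side + (0.5 - side) * close_amount

print(zipper_u(0.0, 0.0))  # 0.0  left tooth, fully open
print(zipper_u(1.0, 1.0))  # 0.5  right tooth, fully closed
```

In the actual rig, a value like this would drive the parameter attribute of each point-on-curve info node so the two halves meet without interpenetrating.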
How do you deal with the re-rigging process early on in production when models are being changed and tweaked so much?
This is always part of the process, so I usually write my rigging scripts in different modules or stages and I like to save my WIP files in these different stages so I can easily re-do some parts of the work if necessary. Also, on these different rigging stages, I like to create ‘sets of things’ to make certain kinds of nodes more visible for me as a rigger. These sets are part of my creation process but they never go to animation. An example of these is my different joint sets, which I use to help me with the skinning process, where instead of adding all the joint hierarchy at the same time, I gradually add the different joint chains as I’m going through the skinning process.
No talk about model topology is complete without discussing the default pose to rig. Do you have a preference? Why?
This is always a tricky point. From a modeling point of view, modeling on a TPose is always easier, especially when we have to deal with modeling fingers and hands. Modeling while your piece is aligned with the world axis is always easier, but you can find ways to work with an unattached arm aligned to the world if your goal is to end with an APose character. The same idea could be applied if we’re talking about an easier way to place a character rig guide for joint placement during the rig creation. TPose is always easier, for the same reason.
From the deformation standpoint, APose is probably the most efficient way to get a neutral base for your shoulder deformation, because this is exactly the upper arm’s middle range of motion, especially if you’re only using joints for the deformation. On the other hand, if your pipeline allows you to work with PSDs, TPose would be my choice because this would be the neutral mid-point for all the corrective shapes for the shoulder/upper arm area.
Another thing to keep in mind is the look or appeal that the character has when we deliver the model to rigging. TPose is always kind of a weird pose because, from a design and proportion standpoint, it is harder to appreciate the shoulders’ width, the upper arm vs. the lower arm proportions, or even the whole arm vs. the body proportions. At the end, as a modeler, you always end up deforming the area at different ranges of motion to see any weaknesses in proportions.
Related to this topic, especially for the shoulder articulation area, I always recommend the work of Brian Tindall(www.hippydrome.com) as a reference for modeling for articulation, to study pivot joint placement and how the corrective shapes help to achieve the nicest deformations, especially with their gifs where you can see the effect of using only the skin cluster vs. adding the corrective shape over the joint deformation.
Deformations are always a balancing act between character design, topology, and animation needs. How do you approach skinning and the final deformation work? Can you talk about your approach to bone placement and skinning vs. what you do with corrective shapes, helper bones, or custom deformers?
I don’t have any particular secret way of painting my weights. What I usually do is paint them in a very methodical way, blocking all the influences and only releasing two of them each time I paint or modify the influences: the one I want to paint, and the one I want to steal influences from.
When I worked as a freelance rigger, I always tried to avoid 3rd party plugins, such as pose readers to drive my corrective shapes or custom deformers, because I didn’t want to have to install them on the animators’ computers. For this reason, I always approached corrective shapes using a bunch of standard Maya nodes, accepting that I would have some limitations. But by putting extra effort into helper bones, especially in articulations such as the elbows, knees, or even knuckles and fingers, I can define behaviors for these helper bones that help me keep volume preservation in those particular areas.
Do you use any publicly available scripts or tools when rigging? Do you have any modeling scripts or tools/tips you can’t live without?
I do not use a lot of external scripts but there are a couple that have become very useful on my pipeline:
abSymMesh, a very useful script to check mesh symmetry and to create symmetrical/asymmetrical shapes. I use it constantly, not only for facial shapes, but also as a checking script when I bring my meshes over from zBrush. http://www.supercrumbly.com/3d.php?sid=112#.VmSI2dATE3U
Is there anything you wish would be fixed or changed in the software in order to improve production work and make it easier?
One of the things I would like to change or fix is the limitation on the number of inputs/outputs of Maya’s nodes. Sometimes, to build a rig or any kind of process that requires multiple instances of the same kind of calculation, you end up with an unnecessary number of duplicated nodes, because each node can only input/output a single value, or three at best with the XYZ/RGB channels, instead of an unlimited number of inputs/outputs.
I would love to see this reviewed, because it would help to get better performances and maintenance on the rigs, instead of ending up with custom nodes, which are easy to track and update in a company environment, but difficult to manage for small projects or freelance riggers.
Do you see any major advancements in rigging for characters in the near future, or are the techniques mostly standardized now?
This kind of question is always difficult to answer. I’ve been rigging with Maya my entire career, so I guess my answer is going to be conditioned by that.
As riggers, we know the animators’ needs and the kind of controls they need to perform their work, so very often we end up with a recurrent pattern of node structures to deliver the standard package, such as switchable IK/FK limbs with bend deformations and pin elbows/knees on a biped character. This means we’re using a lot of nodes to create one specific rig behavior that is a kind of standard in the industry. So, my guess is that at some point we will end up having custom nodes to achieve these results while being more efficient in terms of node economy and rig performance.
What have you found, training-wise, to have been the most helpful to you for growing as a character TD and modeler?
I consider myself to be self-taught in the fields of modeling and char TD. However, I enrolled in some rigging and scripting workshops because they are always an interesting way to get started with a new tool or piece of software.
But what most motivates me to learn and keep learning is to be inspired by other artists. By just looking at and studying their work you can learn things, or establish ideas or solutions to follow. Sometimes you can see something on a TD reel or in a “making of” that catches your attention and makes you ask yourself, “How do they do that?” Then, you start a journey of trying to figure out how to accomplish that goal, following your own paths and ideas and trying new things. This gives you a learning path where you have to find your own way, your own solutions. Maybe you end up with something totally different, but I bet you’ll have learned a lot of things along the way. Also, talking with other people about your own questions and having their point of view of how to approach a problem is something very effective in terms of continuous learning. You can do a single thing in so many different ways…
You have been teaching now, so on the flip side, what has been hard to get students to understand when it comes to animation-ready modeling?
I’ve been lecturing for some time and also recording educational stuff but it’s always been more related to the modeling side with an approach to the technical side, in terms of how to deliver a proficient model for rigging, or how to create facial shapes ready for animation.
One of the questions that comes up often when I’m teaching is regarding the density of facial meshes for animation or production. Obviously, when you are starting with topology puzzles, it is always a pain to add density on your mesh, so students, or even people who only create characters for still images, tend to focus only on the shape of the mesh, without adding enough topology density to enable a successful facial deformation or performance, so they are often surprised when they see examples of characters for production.
Related to this topic, people also ask me “What if I model a low version of the character and then I subdivide the mesh to get enough detail for facial deformation/performance?” And my answer is that I prefer to manually model the mesh that will receive the deformation, from the rig or even from the shape modeling, because I like to have control and to identify the main edgeloops of, for example, the face, because this will allow me to decide which are the edgeloops which will become a wrinkle or fold, which are the two edgeloops in between the folds to use to create the wrinkle… I like to have this kind of decision-making power over my meshes, so if instead of this I modeled a low version and then subdivided it, I would get a mesh with some areas that would get unnecessary detail and others that would be without the required density for facial performance, which is more important for future deformations.
Lastly, it is always really insightful and interesting to hear what a “Day in the life” looks like for a production artist at Disney working on a big feature animation project. Can you share with us what your day looks like?
A regular day at Disney Animation starts around 9:00 and we usually begin with the department rounds where the character modeling supervisor of the show and the production assistant come to each office to check how your character is going and when it will be ready to show to the art department. At Disney, we don’t work with the traditional orthographic views of the character, so we normally get a couple of ¾ views from the art department, and from those we create the sculpture of the character in zBrush with all the elements that help us to sell the model. This includes polypaint, slightly posed, etc. Then, when the modeler and the supervisor are happy with the model, we show it to the art director to get feedback from their side. This process includes more rounds, including the art dept., draw-overs, etc. Once all of the people involved with the sculpture are satisfied with the result, we present it to the directors of the show. If we get the approval from them, we start the whole process of neutralizing the character to get it ready for production.
Character creation at Disney Animation is a strong iterative process until all of the pieces match perfectly, so when a character has been approved and published, it goes into a test and a calisthenics process where even the smallest point could be iterated and polished even more.
One of the things I most like about being at Disney is that we are fortunate to have access to a huge library of documentation recorded and archived from past productions, from the early days at Disney to the current shows, where we can still learn, so there is always something interesting to watch or listen to while I’m working.
We also have a program called Educational Enhancement where we have an annual budget to spend on continued education, so it is very easy to keep learning new things.
Obviously, there is also time for fun and recreation. The studio is always organizing events, screenings, presentations, or even concerts to maintain a creative atmosphere all around—and of course there are the foosball and ping pong breaks!
Any last tips, advice, or anything that you see is missing in current reels or training?
One of the things I would like to see more often on reels is a balance between technical and artistic stuff. The majority of the time we can see modeling reels with impressive artistic skills but I miss the technical side a little bit, such as topology, or even facial shapes, which prove your skills and control, not only on neutral or posed shapes, but also on shapes in movement.
On the other hand, we can find impressive rigging reels, from the technical side, but often I miss the artistic eye on the rig performance a little bit, such as on body or facial deformations. We should always prepare our reels thinking not only from the position we are aiming for but also on the customer who we will deliver our job to, modelers to riggers, and riggers to animators. They are the ones who will judge our work from a different perspective, so keeping them in mind when we work on our reels is always a plus.
Last question—a fun one. Which book are you reading right now, or which was the one you finished most recently?
Actually, I’m very passionate about my work, whether it be modeling or rigging, so if I have some free time and I’m not going out with family or friends, I gladly spend this time on something work-related, either for a personal project or on learning something new related to CG. But if I finally get some extra time, for instance when I’m on a flight, I always read. My last book was Creativity Inc. by Ed Catmull.
Thank you for your time and for sharing your amazing work with us!
Character TD jobs require cross-discipline understanding, and something we see often is artists transitioning to Maya from Max who have a hard time with the way Maya approaches things like transforms. To make it worse, the same name often means something different in each package even though the features almost do the same thing, so let Rigging Dojo help you out.
If you need more rigging-focused training, check out our on-demand class.
What are the accepted uses (if any) of Joints (Bones) vs. Locators (Point Helpers), and more importantly, when, where, and how do you use Groups effectively?
Is it best to group a bunch of objects together first before skinning them to a Joint?
How do you create a Dummy control in Maya?
What are shapeNodes?
There are more, but simply understanding the makeup of your 3D objects goes a long way toward understanding the software.
Translation to Maya
First, let’s look at some Max tools compared to Maya, then we will talk details.
Bones == Joints
Point Helper == Locators
Expose Transform == Locator + Maya math nodes
Dummy == Null Transform (group with no children and no shapeNode)
Editable Poly Object == Transform + shapeNode
Geometry type (poly/nurbs/shapes) == shapeNodes
Creating a “group node” with nothing selected creates an empty transform node, automatically named “null”, in Maya.
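A minimal sketch of both behaviors using `maya.cmds` (this only runs inside a Maya session; the names are just what Maya generates or what we pass in):

```python
from maya import cmds

# With empty=True (nothing to group), group() creates a bare
# transform node -- Maya auto-names it "null1", "null2", etc.
null = cmds.group(empty=True)

# Grouping an existing node instead parents it under a new
# transform (a plain transform with no shape node).
cube = cmds.polyCube(name='cubeA')[0]
grp = cmds.group(cube, name='geo_grp')
```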
Grouping transform nodes together before skinning doesn’t do anything extra for you, because grouping is not a mesh combine. You still have separate skin clusters for each piece of geometry; it isn’t like attaching many meshes together under an editPoly mesh object.
If you group geometry together, you are just parenting the nodes under a new transform node. Maya creates a “null”, renames it to “group”, and automatically parents any nodes you had selected under the new “group” node. These are not like groups in Max, which act as if the different objects are combined into one, locking the selection of the children. Maya has “Asset containers” that allow for similar functionality.
If you wanted all your geo under just one skin cluster (for speed’s sake: the fewer skin clusters, the better), you would have to do a mesh-combine operation first to put all the geometry shapes under a single transform node, like attaching mesh objects in Max.
(Note that Maya 2015+ allows you to mesh-combine with an option to also combine skin weights, allowing for some cool workflows and more flexibility to update models later)
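A quick sketch of that mesh-combine step with `maya.cmds` (Maya only; the two cubes stand in for your actual geometry pieces):

```python
from maya import cmds

cubeA = cmds.polyCube(name='partA')[0]
cubeB = cmds.polyCube(name='partB')[0]

# polyUnite merges the shapes under a single new transform,
# like Attach in Max -- one mesh means one skinCluster later.
combined, unite_node = cmds.polyUnite(cubeA, cubeB, name='body_geo')
```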
Locators in Maya have some extra attributes beyond their transform node, including independent control over the size and position of their cross shape.
Locators can also output their world-space location, which you can then connect to other nodes. This is similar to the Expose Transform tool in Max, though not as complex unless combined with other math nodes in Maya.
(For example, the Maya distance tool uses the locators’ world-position attribute to drive the distance measurement, so you can parent them into any hierarchy and still get a true measurement.)
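The distance setup above can be sketched like this with `maya.cmds` (Maya only; node and locator names are just illustrative):

```python
from maya import cmds

locA = cmds.spaceLocator(name='locA')[0]
locB = cmds.spaceLocator(name='locB')[0]
dist = cmds.createNode('distanceBetween')

# The locator SHAPE exposes a worldPosition attribute, so the
# measurement stays correct no matter how the locators are parented.
cmds.connectAttr(locA + 'Shape.worldPosition[0]', dist + '.point1')
cmds.connectAttr(locB + 'Shape.worldPosition[0]', dist + '.point2')

cmds.xform(locB, translation=(3, 4, 0), worldSpace=True)
cmds.getAttr(dist + '.distance')  # 5.0 (a 3-4-5 triangle)
```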
Joints are the most costly nodes in Maya and have some different attributes than regular transforms. Think of a joint as a transform node plus orients (where it aims), scale inversion (which causes a child joint to translate instead of scaling when the parent is scaled), etc.
They offer the most control and act almost like a Max list controller, combining what would otherwise take two or three group transform nodes and some math operations into a single node.
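A small sketch of the scale-inversion attribute mentioned above, using `maya.cmds` (Maya only; joint names are hypothetical):

```python
from maya import cmds

cmds.select(clear=True)
root = cmds.joint(name='root_jnt', position=(0, 0, 0))
child = cmds.joint(name='child_jnt', position=(5, 0, 0))

# segmentScaleCompensate is the "scale inversion" behavior:
# when on (the default), scaling root_jnt pushes child_jnt
# away instead of stretching it.
cmds.getAttr(child + '.segmentScaleCompensate')  # True by default
```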
Shape nodes are the node type responsible for displaying and holding geometry and other icons, like the locator cross or the cone on a light. They are child nodes of the “transform” node (our null node that holds translate, rotate, and scale). If you edit a polygon cube in Maya, moving points around or extruding faces happens to the “shape”. Shape nodes can be re-parented to other transform nodes, but you have to do this with a script; you can’t re-parent them directly in the Maya GUI.
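That scripted shape re-parenting can be sketched like this with `maya.cmds` (Maya only; the curve and group names are just placeholders):

```python
from maya import cmds

# Move a curve's shape node under a different transform.
ctrl = cmds.circle(name='ctrl')[0]
target = cmds.group(empty=True, name='new_parent')

shape = cmds.listRelatives(ctrl, shapes=True)[0]
# shape=True re-parents the shape node itself (no GUI equivalent);
# relative=True leaves the shape's local data untouched.
cmds.parent(shape, target, shape=True, relative=True)
```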