Models with multiple meshes


    • Models with multiple meshes

      Hey Guys,

      I am having trouble understanding how to go about something; it wasn't really covered in the book beyond maybe a passing comment.

      Here is a scenario: we have a game with, say, a Tank actor in it, for a game like Battlefield or the old BattleTanx game on the N64. The tank has maybe 3-4 different meshes in it: the main body, the turret, the cannon on the turret, and maybe the tracks. In the model-loading SDK I am using (ASSIMP), when I load a model I am given a list of the meshes in that model. Most of the test models I have been using have a single mesh, but when I tried a tank model I only got the base. I understand that I could technically just loop through each of these meshes and render each one, but from the way the book explained the scene graph, hierarchical objects such as a tank should really be creating child nodes for each of these additional meshes. So, some questions:

      a) How are these individually represented in the physics system without separating them? Is it a single rigid body that represents the entire actor, or individual bodies for the parts of that actor? And I suppose if the actor changes, say the turret rotates, then we would update the shape of the rigid body so that the collisions match the visual representation more accurately?

      b) How are these individual orientations tracked? If the player moves his mouse left, it should rotate the tank's turret to the left, but if my actor only has one World Transform to position and orient the tank base, how would I go about storing the other parts? Would it be in a special Actor Component?

      c) Say we're playing an FPS and you are walking along and a sniper decides to peg you off from a distance, so we would switch the actor's Bullet representation to a rag-doll style. How is this managed? How is each limb identified and reported to the visual representation of the object? I know it would be via an event, but my question is more about how an object would go from

      Human Actor, kinematically moved with a single collision hull
      to
      Human Actor, moved by physics, with multiple collision hulls.

      I have read through the book looking for ideas, and have thought about how I would go about this, but I am drawing blanks here.
    • RE: Models with multiple meshes

      Well, the way I would do it (and I'm almost a complete noob) is by representing each part of the mesh as its own actor, and they are all children of the main "tank" actor. Each child can have its own orientation and, if needed, its own collision component.

      About the ragdoll: the way Unity does it is to create two actors, one a playable character and the other a ragdoll. When the character dies you simply switch between them and transfer the bone data to the ragdoll as its initial skeleton. It might be problematic if you have characters dying all the time, but for something like an FPS it should be all right.
    • You mean that the Game Logic actor has a list of children? That kind of breaks the Actor Component architecture, doesn't it? So my game view sends an event 'Event_Actor_TurretMove'; how does it identify the turret from the rest of the components to apply the transformation?
    • As with anything else, there are multiple ways to do things. I'll let Mike weigh in on the specifics of Bullet and the GCC graphics system and just explain abstractly how we solved this type of problem on Rat Race.

      In Rat Race, the main character (Tina) was represented by four separate meshes. This allowed us to easily swap out meshes when she changed clothes. We could have shoes, tops, skirts, and other bits of clothing all be compatible with each other. We had some physics in the game for a while, though in the end we decided to pull it out and do almost everything with animation. It was a comedy/adventure game so having that direct control was important.

      Graphically, there was a single root node that represented the canonical Tina position within the scene graph. Each mesh was a separate node that was attached to the root as a child. Whenever anything needed to update Tina's position, it just updated the root node.

      On the physics side, we had a single collision box surrounding Tina. As the box moved or was affected by physical forces, we just had it affect the root node. Any translation or rotation was propagated down the scene graph as normal, meaning that if Tina got knocked over or was forced to rotate due to some physical force, it would correctly rotate each mesh.

      Now, let's dig into your questions (and the comments left by Dash).


      a) How are these individually represented in the physics system without separating them? Is it a single rigid body that represents the entire actor, or individual bodies for the parts of that actor? And I suppose if the actor changes, say the turret rotates, then we would update the shape of the rigid body so that the collisions match the visual representation more accurately?

      If it were me, I'd have two physics boxes. One box is the tank itself and the other is the cannon. They would be attached together with a rotational constraint. Some physics systems allow you to treat this as a single object, others do not. The key is that you have two volumes and that they are constrained together. If the tank bounces, the cannon should bounce too. This should all happen automatically when you apply the constraint.

      Many physics systems also have the concept of a motor, which allows you to move an object along its constraints. In this case, you would just manipulate the motor when the cannon rotated. Even if you don't have that, it should still be easy. Since you have two physics volumes, you can apply forces to them separately.
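      To make that concrete, here is a rough sketch of what the constraint and motor could look like in Bullet, assuming you already have btRigidBody objects for the hull and the turret; the pivot points, axis, and motor values are made up for illustration:

      Source Code

      #include <btBulletDynamicsCommon.h>

      // Attach the turret to the hull with a hinge so it can only yaw around
      // the vertical axis, then drive it with the hinge's built-in motor.
      void AttachTurret(btDiscreteDynamicsWorld* pDynamicsWorld,
                        btRigidBody* pHullBody, btRigidBody* pTurretBody)
      {
          // Pivot points in each body's local space (illustrative values).
          btVector3 pivotInHull(0.0f, 1.0f, 0.0f);     // top of the hull
          btVector3 pivotInTurret(0.0f, -0.25f, 0.0f); // bottom of the turret
          btVector3 axis(0.0f, 1.0f, 0.0f);            // rotate around "up"

          btHingeConstraint* pHinge = new btHingeConstraint(
              *pHullBody, *pTurretBody, pivotInHull, pivotInTurret, axis, axis);

          // true = don't generate collisions between the two linked bodies.
          pDynamicsWorld->addConstraint(pHinge, true);

          // Drive the turret through the motor instead of applying raw torques:
          // target velocity in radians/sec, capped by a maximum motor impulse.
          pHinge->enableAngularMotor(true, 1.5f, 10.0f);
      }

      If your mouse input maps to a desired turret angle rather than a velocity, you can recompute the target velocity each frame from the difference between the hinge's current angle (btHingeConstraint::getHingeAngle()) and the angle you want.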

      The graphics side should be easy enough, just follow the same pattern that we did on Rat Race. You would create a single root node with some child nodes for each different mesh. If you get a cannon rotation, you can rotate that child node independently of the parent. Separating your object into multiple scene nodes and physics volumes is fairly common in games.
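      To illustrate the node hierarchy, here is a minimal, self-contained sketch; the SceneNode and Transform types are toy stand-ins (a real engine would use 4x4 matrices and store mesh handles), not classes from the GCC code base:

      Source Code

      #include <cstdio>
      #include <memory>
      #include <vector>

      // Toy transform: a yaw angle plus a ground-plane offset, just enough to
      // show how children inherit the parent's motion.
      struct Transform
      {
          float yaw = 0.0f;
          float x = 0.0f, z = 0.0f;

          Transform Combine(const Transform& local) const
          {
              // A real engine would multiply 4x4 matrices here.
              return { yaw + local.yaw, x + local.x, z + local.z };
          }
      };

      struct SceneNode
      {
          const char* name = "";
          Transform local;   // transform relative to the parent node
          std::vector<std::shared_ptr<SceneNode>> children;

          void Render(const Transform& parentWorld) const
          {
              Transform world = parentWorld.Combine(local);  // inherit parent motion
              std::printf("%s: yaw=%.2f pos=(%.1f, %.1f)\n", name, world.yaw, world.x, world.z);
              for (const auto& child : children)
                  child->Render(world);
          }
      };

      int main()
      {
          auto hull   = std::make_shared<SceneNode>(SceneNode{ "hull" });
          auto turret = std::make_shared<SceneNode>(SceneNode{ "turret" });
          auto cannon = std::make_shared<SceneNode>(SceneNode{ "cannon" });
          hull->children.push_back(turret);
          turret->children.push_back(cannon);

          hull->local.x = 10.0f;     // move the whole tank: turret and cannon follow
          turret->local.yaw = 0.5f;  // rotate only the turret: cannon follows, hull doesn't
          hull->Render(Transform{});
          return 0;
      }

      Running this prints the hull at its new position with zero yaw, while the turret and the cannon sit at the same position rotated by 0.5 radians, which is exactly the behavior you want when the turret spins on top of a moving hull.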

      Now, all that having been said, you'll likely need to make some modifications to the GCC code base to make this all work. At the very least, you'll need to expose more functionality from Bullet. Once you get the core idea working, the rest should fall into place.


      b) How are these individual orientations tracked? If the player moves his mouse left, it should rotate the tank's turret to the left, but if my actor only has one World Transform to position and orient the tank base, how would I go about storing the other parts? Would it be in a special Actor Component?

      Your actor's TransformComponent object should represent the root node. You might actually only need to extend the PhysicalComponent interface and write a MultiPhysicalComponent class that can handle multiple physical objects. You'll also probably need a new Renderable component to handle setting up the scene nodes and to be able to swap out meshes.
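      As a rough sketch of the shape such a component might take (everything below is hypothetical; only the Bullet types are real, and a real version would derive from the engine's component base class and sync with the TransformComponent):

      Source Code

      #include <vector>

      class btRigidBody;        // Bullet types, forward-declared for the sketch
      class btTypedConstraint;

      // One component that owns several physics bodies plus the constraints
      // that tie them together (hull, turret, cannon, tracks, ...).
      class MultiPhysicalComponent
      {
      public:
          struct Piece
          {
              btRigidBody* pBody = nullptr;
              // you would also store this piece's offset from the actor's root
              // transform and which scene node it drives
          };

          void AddPiece(btRigidBody* pBody)         { m_pieces.push_back({ pBody }); }
          void AddConstraint(btTypedConstraint* pC) { m_constraints.push_back(pC); }

          // Called every frame: write the root piece's transform back into the
          // actor's TransformComponent and the other pieces into their child
          // scene nodes (omitted here).
          void Update(float deltaSeconds) { (void)deltaSeconds; }

      private:
          std::vector<Piece> m_pieces;
          std::vector<btTypedConstraint*> m_constraints;
      };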


      c) Say we're playing an FPS and you are walking along and a sniper decides to peg you off from a distance, so we would switch the actor's Bullet representation to a rag-doll style. How is this managed? How is each limb identified and reported to the visual representation of the object? I know it would be via an event, but my question is more about how an object would go from
      Human Actor, kinematically moved with a single collision hull
      to
      Human Actor, moved by physics, with multiple collision hulls.

      This one I don't know about and will have to defer to Mike. I've never worked on a game with ragdoll physics and I'm pretty sure he has, since I believe Thief 3 did exactly what you're describing. Worst case, it shouldn't be much harder than swapping out the actor's PhysicalComponent. Best case, it's just a function call.
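      To give that "worst case" a concrete shape, here is a small self-contained sketch of swapping the physics representation when a character dies; none of these names come from GCC or Bullet, and the ragdoll construction itself is elided:

      Source Code

      #include <memory>
      #include <vector>

      struct BonePose { const char* boneName; float worldTransform[16]; };

      struct PhysicsRepresentation
      {
          virtual ~PhysicsRepresentation() = default;
      };

      // Single kinematic hull used while the character is alive and animation-driven.
      struct KinematicHull : PhysicsRepresentation { };

      // One rigid body per major bone, joined by constraints, driven by the simulation.
      struct Ragdoll : PhysicsRepresentation
      {
          explicit Ragdoll(const std::vector<BonePose>& pose)
          {
              // Build a body per bone at pose[i].worldTransform and constrain
              // neighbouring bones together (omitted).
              (void)pose;
          }
      };

      struct Character
      {
          std::unique_ptr<PhysicsRepresentation> physics = std::make_unique<KinematicHull>();

          // Pulled from the animation system; returns the pose at the moment of death.
          std::vector<BonePose> CurrentAnimatedPose() const { return {}; }

          void OnKilled()
          {
              // Seed the ragdoll with the last animated pose, then swap representations.
              std::vector<BonePose> pose = CurrentAnimatedPose();
              physics = std::make_unique<Ragdoll>(pose);
              // From here on, the scene node for each limb reads its transform from
              // the corresponding ragdoll body instead of the animation system.
          }
      };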


      Well, the way I would do it (and I'm almost a complete noob) is by representing each part of the mesh as its own actor, and they are all children of the main "tank" actor. Each child can have its own orientation and, if needed, its own collision component.

      This can also work. Honestly, you don't need full actors as children for this kind of composite actor. It tends to be wasteful in terms of memory and performance. Still, it can be a valid strategy. For example, all of the props in Rat Race were full actors and many of them (like coffee cups) could be attached to other actors. We had an Attacher and Attachee component that we used to manage the connection. It gets to be a pain sometimes, though, especially with complex objects and animations. That's why we chose not to use this system for Tina.


      About the ragdoll: the way Unity does it is to create two actors, one a playable character and the other a ragdoll. When the character dies you simply switch between them and transfer the bone data to the ragdoll as its initial skeleton. It might be problematic if you have characters dying all the time, but for something like an FPS it should be all right.

      That also seems wasteful to me. I suppose it's fine as long as both actors are never in memory at the same time, but you're still incurring the cost of instantiating a full actor whenever someone dies. That seems wrong to me. At the worst, you should only be recreating the physics component.


      You mean that the Game Logic actor has a list of children? That kind of breaks the Actor Component architecture, doesn't it? So my game view sends an event 'Event_Actor_TurretMove'; how does it identify the turret from the rest of the components to apply the transformation?

      I'm not convinced that an event is the right thing here. Events are really good at notifying a bunch of objects that something happened without coupling all those systems together. They're generally bad at single-call situations. For example, sending an event that every actor is subscribed to asking the actor with ID 6 to rotate is a bad idea. All actors will subscribe to the message and you'll execute a lot of statements like this:

      Source Code

      if (m_id != pEvent->GetId())
          return;


      You don't need any of this since the control comes from the view layer. You already have a reference to the controlled actor in HumanView (search for the VSetControlledActor() function). For other tanks, you can do the exact same thing: the AiView would be the one sending the event anyway, so it can just call directly into the actor attached to it. This doesn't break the architecture because the View's job is to send commands back to the logic for processing.
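      For example (TankView, RotateTurret, and the member names here are hypothetical; VSetControlledActor() is the existing hook this would build on):

      Source Code

      #include <memory>

      // Hypothetical sketch of a view calling straight into its controlled actor
      // instead of broadcasting an event that every actor has to filter.
      struct TankActor
      {
          void RotateTurret(float radians)
          {
              // update the turret child node / hinge motor here
              (void)radians;
          }
      };

      struct TankView
      {
          std::shared_ptr<TankActor> m_pControlledActor;  // set via something like VSetControlledActor()

          void OnMouseMoveX(float deltaPixels)
          {
              const float kRadiansPerPixel = 0.002f;      // made-up sensitivity
              if (m_pControlledActor)
                  m_pControlledActor->RotateTurret(deltaPixels * kRadiansPerPixel);
          }
      };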

      -Rez