AI Component vs AI View

    • AI Component vs AI View

      The book discusses using an AI view to handle events and processes. It also discusses using AI components as part of the actor system to change AI behavior. However, much of the AI in Teapot Wars is handled by neither; instead it's done through script processes. All the AI view really does is act as a reference for how many AI teapots to spawn (one per AI view).

      How would you incorporate components and the view? Does the view act as a state machine and AI components as different states?
    • I think we just recently had a discussion about this; let me see if I can dig it up. In the meantime you could look around as well. It was quite an extensive talk if I recall correctly.
    • Hmmm, so the conversation was actually about other components. However, I DID have this conversation with my colleague specifically about AI in views or components, and our conclusion was that AI could actually belong in multiple places, depending on the nature of the AI. For example:

      In a game like Command & Conquer, the AI would not be a single actor but would command an army of actors. In this case the AI may be better off in the logic (script or code), where the view interfaces with the case-specific 'commander' object.

      In a game like Unreal Tournament, the AI would always be a single actor and would benefit from sharing the same command interface as the player, so it would likely be best to simulate the AI and convert its responses directly into game commands.
    • I remember we discussed AI views and how many to use per AI, but not the distinction between the view and the AI implementation. Deciding on a game-by-game basis makes sense. I was trying to think of a general AI engine implementation, though. Now that I think about it, AI varies so much from game to game that it would be difficult to have a general AI system in an engine. You could do general pathfinding and other simple actions, but probably not specific AI states or logic.
    • A really handy system that I plan to implement is an AI behavior tree. I use one called RAIN {Indie} for our Unity project, and it is completely arbitrary what data you put in and how you use it.
    • This completely depends on the game and how your game architecture is set up, but I'll talk a bit about some of my common architecture philosophies when building the AI system for a game.

      The AI system is usually split into multiple levels. At the lowest level is the mathematical side, which includes general kinematic math (if applicable) as well as path finding. This is very low level and usually implemented in C++ for maximum performance. You will likely be calling into this system very frequently. Above that is the decision making layer, where individual agents make decisions about how they should behave. There are a myriad of techniques here and the ones you choose will be largely dependent on the game you're making (Drawn to Life used Hierarchical Finite State Machines while The Sims 4 uses a utility-based scoring algorithm). Above that, you have the tuning layer. A lot of people omit this layer but it is just as important as the others. This layer is where you have all the little values that can be tweaked by someone (sometimes a designer, but more often it'll be you). There can be dozens of these for a simple game or thousands for a complex game (like The Sims, which has dozens per interaction).
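
      Just to sketch how those layers relate (everything below is made-up illustration; in a real engine the pathfinding call would be a binding into the C++ layer):

      -- Low-level layer: stand-in for the C++ pathfinder binding.
      Pathfinder = {}
      function Pathfinder.FindPath(startPos, endPos)
          return { startPos, endPos }
      end

      -- Tuning layer: plain data that can be tweaked without touching any logic.
      GuardTuning = {
          aggroRadius       = 10,
          fleeHealthPercent = 0.25,
      }

      -- Decision layer: combines percept data with tuning values to pick a behavior.
      function ChooseGuardBehavior(guard, target)
          if guard.healthPercent < GuardTuning.fleeHealthPercent then
              return "flee"
          elseif guard.distanceToTarget < GuardTuning.aggroRadius then
              return "attack", Pathfinder.FindPath(guard.position, target.position)
          end
          return "patrol"
      end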

      Conceptually, that's how you break it down. Sometimes you add layers or move them around if necessary. For example, an RTS like Command & Conquer will have relatively simple AI for each agent (respond if attacked, shoot at things close to you, etc.) while the real decision making and tactical AI will be one layer up. Sometimes there are multiple AI systems all working together, like one for military and another for building up your economy. I think Master of Orion II did that, but I could be wrong. A blackboard architecture can be helpful here.

      Another important concept is that of percepts and actions. A percept is a perception of the world; it's the input coming into the AI system. This is all the data the agent has to make its decision. Actions are how an agent acts upon the world. This can take many forms, but it's usually a command of some kind sent back with the information on how to proceed.
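
      As a tiny sketch (hypothetical names throughout), the flow is percept in, action command out:

      local nearestEnemyId = 42                 -- stand-in actor id

      local percept = {                         -- everything the agent is allowed to know this frame
          myHealth       = 0.8,
          ammo           = 12,
          visibleEnemies = { nearestEnemyId },
      }

      local function Think(p)
          if #p.visibleEnemies > 0 and p.ammo > 0 then
              return { type = "Attack", targetId = p.visibleEnemies[1] }
          end
          return { type = "Wander" }
      end

      local action = Think(percept)             -- the command sent back for the game to execute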

      There are different levels of completeness and correctness as well. Chess is a completely deterministic and open game. The AI has 100% of the information about the world and its actions are completely deterministic. Now let's say that we changed the rules of chess so that when one piece attacks another, there's a formula for determining the outcome, like a summed weight based on material point cost (so a pawn attacking a queen would have a 10% chance of success). All of a sudden, the game is completely different and the AI has to take that non-deterministic outcome into account. That makes the gamestate still open, but the outcome of actions unknown. It gets even trickier if we change the rules again to say that you can only see into the adjacent squares. Now the world is no longer open, so the AI can't guarantee the position of anything it can't see. Most video games are non-deterministic and not open.
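
      Just to make that capture formula concrete (my own toy code, using the standard material values):

      local pieceValue = { pawn = 1, knight = 3, bishop = 3, rook = 5, queen = 9 }

      local function captureChance(attacker, defender)
          return pieceValue[attacker] / (pieceValue[attacker] + pieceValue[defender])
      end

      print(captureChance("pawn", "queen"))     --> 0.1, the 10% chance mentioned above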

      On The Sims 4, percepts take the form of an AutonomyRequest object, which contains all relevant data for the AI system, including the Sim that's making the decision (which in turn includes all of their motives and current actions). The sim uses this data to formulate a new action and pushes a fully created Interaction object onto itself. This object represents the action the sim is doing and it's exactly the same object used when the player clicks on something and selects it. The only difference is the source of the interaction and the priority (interactions chosen by the player override AI interactions and certain interactions (like death) override everything). This concept is very important; there should be little or no difference between the AI selecting something and the player selecting something.

      Before I dig deeper into the implementation and how it fits into the GCC architecture, I need to talk a little more about the component system and some things that it's missing. Specifically, it's missing the concept of Lua components. In a real game engine, you'd have a second component system that lived entirely in script. In my engine, the ScriptComponent creates a Lua ScriptEntity class, which directly maps to that ScriptComponent (the ScriptEntity Lua class effectively inherits from the C++ ScriptComponent class). This ScriptEntity has a table of components which are composed very much like the C++ components. In fact, they exist in the same data file as part of the ScriptComponent definition. This lets me write components in Lua and attach them to different entities in the game. These are usually gameplay-specific components. For example, in my farm simulation game, each almond tree has the AlmondTree component, which defines all the data and methods necessary for simulating an almond tree.
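
      To give a rough idea of the shape of such a component (this is a sketch, not my actual AlmondTree code, and the attachment plumbing is simplified):

      AlmondTree = {}
      AlmondTree.__index = AlmondTree

      function AlmondTree.Create(definition)
          local self = setmetatable({}, AlmondTree)
          self.growthRate = definition.growthRate or 1.0   -- data from the component definition
          self.growth     = 0
          return self
      end

      function AlmondTree:Update(deltaTime)
          self.growth = self.growth + self.growthRate * deltaTime
      end

      -- Attached to the entity's component table, much like the C++ components:
      -- scriptEntity.components.AlmondTree = AlmondTree.Create({ growthRate = 0.5 })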

      One of these components is the AiThinker component, which defines the ChooseAndPushBestAction() function. Whenever an entity needs to decide what to do, it calls this function and a new action is pushed onto it.

      My action system is essentially a state machine where each atomic step in the action is its own state. For example, walking up to a tree and fertilizing it is an action with four states. In order, they are:

      1) Path to Target
      2) Set Rotation
      3) Loop Animation
      4) Call Function

      Each state executes in turn. Once it's done, it jumps to the next state. The first state causes the agent to path to the target of the action. This spins up a C++ Process that calls into the pathing system, gets the correct path, then causes the agent to follow it. This is done 100% in C++. When that process terminates, it calls back into Lua and progresses the state. This is an example of the lowest layer of AI working.

      Once all four states are done, the action terminates and moves to the next action in the queue. If there are no actions remaining in the queue, the AiThinker component is notified and a new action is found and pushed onto the queue. This starts the process all over.
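
      In rough sketch form (names made up; the exact ChooseAndPushBestAction() call shape is an assumption), the flow looks like this:

      local FertilizeTreeAction = {
          states = { "PathToTarget", "SetRotation", "LoopAnimation", "CallFunction" },
          currentState = 1,
      }

      -- Called by whatever did the work for the current state (e.g. the C++ pathing
      -- Process calling back into Lua once the agent reaches its destination).
      function FertilizeTreeAction:OnStateComplete(agent)
          self.currentState = self.currentState + 1
          if self.currentState <= #self.states then
              print("entering state " .. self.states[self.currentState])
          else
              -- the whole action is done; pop it and refill the queue if necessary
              table.remove(agent.actionQueue, 1)
              if #agent.actionQueue == 0 then
                  agent.components.AiThinker:ChooseAndPushBestAction()
              end
          end
      end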

      Now, back to GCC. The AITeapotView is pretty much deprecated. It used to manage the percepts and send the actions back to the game, but that all got ripped out when I rewrote the scripting system and rebuilt the AI in Lua. You'll notice that the functions are pretty much stubbed out with simple return values. Even the m_pPathingGraph member is no longer used; it's accessed directly in the BaseGameLogic class now.

      As for components, my architecture above is exactly how I'd build it if this were a real engine. You want it to be a component so that agents can share the same code and each one can have different settings. If you look at DecisionTreeBrain.lua at the two decision nodes at the bottom of the file, you'll see an example of how not to architect it. I have the _closeDistance member in IsObjectCloseNode and the _lowValuePercentage member of the IsHealthLowNode class hard-coded in. In a real game, I'd have those be configurable parameters in the component's definition XML so that it can be different for every enemy. A cowardly goblin might run when its hit points are at 50% while the mighty ogre might never run (_lowValuePercentage == 0). This is the tuning layer, and it's just as critical as the others. With it, you can write a single AI system that is heavily parameterized and can be used by every agent.
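
      As a sketch of the parameterized version (not the real DecisionTreeBrain.lua code, just the idea):

      local function CreateIsHealthLowNode(definitionData)
          local node = {
              -- 0.5 for the cowardly goblin, 0 for the ogre that never runs
              lowValuePercentage = definitionData.lowValuePercentage or 0.25,
          }
          function node:Decide(teapot)
              return (teapot.hitPoints / teapot.maxHitPoints) <= self.lowValuePercentage
          end
          return node
      end

      -- Two very different enemies sharing exactly the same node code:
      local goblinNode = CreateIsHealthLowNode({ lowValuePercentage = 0.5 })
      local ogreNode   = CreateIsHealthLowNode({ lowValuePercentage = 0 })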

      On Drawn to Life, we had over 100 states that were shared among a huge number of enemies. Each enemy had maybe one unique state while the rest of their behavior was governed by previously written states. The core AI system was exactly the same for every agent. Same with The Sims 4, and RatRace, and Barbie, and every other AI system I've ever written.

      Actually, The Sims 4 is a bit different in that we DO have a few different AI systems, but they were all shared by all sims and run in different circumstances. It also uses a slightly different architecture in that there's a single AutonomyService class which contains a queue of requests from all sims. This keeps multiple sims from running their AI update in the same frame. This kind of thing is pretty common in games with heavy AI updates. I can't go into details quite yet, but I submitted a talk to GDC about this exact thing, so hopefully that all gets approved.

      Hope that helps. Let me know if you want me to expand on anything.

      -Rez
    • That helps a lot. I kept looking at the AITeapotView and wondering why it was there since it did almost nothing.

      Using Teapot Wars to make sure I have this right: if you were to extend it with AI components, the TeapotStateMachine (in the TeapotAI.lua file) would be an AI component and the TeapotStates (in TeapotStates.lua) would be the actions/states the component uses. In addition, the HardCodedBrain, which has the Think() function, could also be part of an AI component.


      Also, I look forward to your talk at GDC; hopefully it gets chosen.

    • That's mostly correct. The TeapotStateMachine class would be the component and the states would be used by it. The HardCodedBrain class, along with the various other subclasses of TeapotBrain, is used by the TeapotStateMachine. The brains wouldn't be incorporated into the component; they would be used just like they're being used right now. In fact, all you really need to do is turn TeapotStateMachine into a component and have that component be configurable with an initial state and the appropriate brain to use. That would be the first step.
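
      A minimal sketch of that first step (the SetState()/Update() calls are assumptions about the TeapotStateMachine interface, not necessarily what TeapotAI.lua exposes, and the example data names are illustrative):

      TeapotAiComponent = {}
      TeapotAiComponent.__index = TeapotAiComponent

      -- The component is configurable purely through its definition data, e.g.:
      --   { initialState = "PatrolState", brainClass = "HardCodedBrain" }
      function TeapotAiComponent.Create(componentData, stateMachine, brain)
          local self = setmetatable({}, TeapotAiComponent)
          self.stateMachine = stateMachine   -- the existing TeapotStateMachine, now owned by the component
          self.brain        = brain          -- chosen from componentData.brainClass
          self.stateMachine:SetState(componentData.initialState)
          return self
      end

      function TeapotAiComponent:OnUpdate(deltaMs)
          self.stateMachine:Update(deltaMs)
      end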

      After that, you'd want to refactor how the brains and states work.

      -Rez
    • Hi,

      I am bringing this one back to life...

      My design is a bit different, I think. I have an AIComponent that acts as a base component to be overridden, and it is the 'brain'.
      It contains the state machine, does the thinking, and attaches new states.
      I then have an AI system for each 'brain type' that calls update/think on each component.

      My question is: where would you handle events that the AI might be interested in?

      It seems like a bad idea to add listeners to components, or indeed to things within the component (i.e., brains). My current solution is for each system to forward events to the appropriate components to handle. But I am hoping there is a better approach that doesn't require an AI system for every different AIComponent implementation. Or have I missed something completely?
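
      To make the forwarding approach concrete, what I have now looks roughly like this (in Lua for consistency with the rest of the thread; names just illustrative):

      AiSystem = { components = {} }

      -- The system is the only event listener; it fans events out to whichever
      -- AI components have registered an interest in that event type.
      function AiSystem:OnEvent(event)
          for _, component in ipairs(self.components) do
              if component.handledEvents[event.type] then
                  component:HandleEvent(event)
              end
          end
      end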

      cheers in advance :)
    • There are several ways to handle this sort of thing. Check out this post for some ideas:
      mcshaffry.com/GameCode/thread.php?threadid=1920&sid=

      -Rez