"I can't wait to *see* what you come up with"


    • "I can't wait to *see* what you come up with"

      (I apologize if I posted this in an inappropriate board. I also apologize for creating two accounts - I have not received the confirmation e-mail, and resetting the password did not work)

      Hello, Mr. Mike and Rez,

      I would like to thank you for your book: both the book and the source code are fantastic. The game forum is also a great resource; it is not common to find authors who go out of their way to help their readers. I have browsed the forums every now and then since I purchased the book in 2012; however, I had never registered before.
      Since then, I have been playing around with your game engine implementation. I truly enjoy the flexible architecture of the engine. In fact, I chose GCC4 to be the reference for my research project. Since last year, I have been working on flexible, customizable ways to implement accessible games considering different user disabilities. There is some work regarding the design of accessible games; however, I could not find extensive work regarding the implementation.

      My goal was to provide different specializations to an IO-free base game logic in order to, at a later step, tailor the IO to suit different interaction (dis)abilities. GCC4's architecture suited my needs perfectly.

      In the last two months, I have been documenting the most relevant features and approaches of the engine. Some weeks ago, I made my code repository public. The result is UGE, which is available at <github.com/francogarcia/uge>.
      Some parts of UGE are organized or named slightly differently from GCC4. Some features I have yet to implement: they are either missing from my engine or still use your source code. Some features need refactoring, and sloppy code worsens the implementation*.

      In general, UGE extends, improves and adds new features to GCC4, exploring both the data- and event-driven architectures and components to provide customizable, profile-driven interaction personalization without modifying the game logic. For instance, some new features include:

      - 3D audio support;
      - Input mapping;
      - Data-driven player profile tailoring;
      - Data-driven event setup (for instance, using the profile to enable or disable game events to present the game interface for different interaction abilities);
      - IO-free game logic;
      - IO-free scene graph;
      - IO device abstraction;
      - Abstract scene graph for rendering and spatial audio;
      - More cross-platform code (although some parts are still Windows only).

      Currently, it only compiles in Visual Studio / C++ 2012. It does not compile in previous versions due to some C++11 features. I found out today that what I thought was a clever forward-declared enum class does not work in Visual Studio 2013, although it did in 2012.
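      For readers unfamiliar with the feature in question, here is a minimal sketch of the C++11 forward-declared (opaque) enum class pattern; the names are made up for illustration and are not from UGE:

      ```cpp
      // C++11 opaque enum declaration: with a fixed underlying type, the full
      // enumerator list is not needed yet, so headers can avoid heavy includes.
      enum class ActorState : int;

      // A function can take the enum by value using only the forward declaration.
      void LogState(ActorState state);

      // The full definition can live elsewhere (e.g. in a separate header).
      enum class ActorState : int { Idle = 0, Running = 1, Dead = 2 };

      // Casting to the underlying type is well-defined once the type is fixed.
      int StateToInt(ActorState state)
      {
          return static_cast<int>(state);
      }

      void LogState(ActorState state)
      {
          (void)StateToInt(state); // placeholder body for the sketch
      }
      ```

      Compiler support for opaque enum declarations varied across Visual C++ releases, which would explain the behavior difference between 2012 and 2013.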

      For a quick overview of the engine, I created a brief, illustrated guide. It is available at <github.com/francogarcia/uge/ra…%20in%20a%20Nutshell.pptx> (PowerPoint) or at <github.com/francogarcia/uge-ev…E%20in%20a%20Nutshell.pdf> (PDF). This guide also details my approach to developing accessible games with UGE.

      The game prototypes are proofs of concept so far. However, like Teapot Wars, they offer a cohesive example featuring all of the main approaches UGE uses to create a Universally-Accessible game (more details about this in the last link). Hence this topic's title. With UGE, though, you can disable the graphics and have an audio-only gaming experience. :)

      I am currently evaluating it and asking for developers' feedback regarding the engine. As my work is derivative of yours, I assume you are the best developers to whom I could ask for feedback. Should you wish to provide it, it would greatly help me improve the engine. Suggestions and criticism are always welcome.
      To avoid repeating myself, this is the link to the evaluation: <gamedev.net/topic/655200-looki…ine-for-accessible-games/>.

      Once again, thanks!
      Franco

      * Edit: after re-reading this, it sounds ambiguous. I am referring to my own implementation here.


    • It's hard to give feedback without seeing the code. Reading through your documentation, the architecture looks to be exactly the same as ours with a few tweaks here and there. Is there something more concrete that you did that you'd like us to comment on?

      I think the concept of improving the accessibility of games is really cool, so I'm interested to see what you come up with. :)

      -Rez
    • Thanks for the input, Rez!

      You are correct. As I wrote, I based my architecture on yours - but with a different goal.

      Here is a very raw tech demo: <github.com/francogarcia/uge/wiki/Game-Tech-Demo-Tutorial>.

      I am discussing the approach with IGDA Game Accessibility SIG researchers.
      I think it can turn into something very nice!

      Once again, thanks!
      Franco

      Edit:
      By the way, I just remembered something I wanted to ask.
      In Chapter 11, you talked about the COG GUI system.
      Do you have any reference for it?
      I searched for it some time ago without success.


    • Hello, Rez,

      I apologize for the late reply.

      As I wrote before, I am exploring your awesome architecture with a different goal - run-time adaptations. Most of the changes to the architecture consider future support for assistive technologies.
      The goal is to address the main accessibility strategies with a simple approach.
      Your data-driven actor approach is really great and promotes great run-time flexibility.
      I am trying to extend it to make it even more flexible - for accessibility purposes. Let me try to explain.

      Yuan, Folmer and Harris Jr., in Game Accessibility: A Survey, outlined three main strategies explored in accessible games to improve game accessibility:

      i) {Enhance, replace, reduce} stimuli: this is related to visual and hearing impairments;
      ii) Reduce time constraints: this is related to cognitive impairments;
      iii) {Reduce, replace} input: this is related to motor impairments.

      As the Game Logic can be simulated with IO-free components, the IO-related components can also be defined at a later time. Thus, instead of using a one-pass approach to create the data-driven actors, it is possible to use a two-pass approach:

      1) The first creates the Game Logic actor with the IO-free components;
      2) The second adds IO-related components to (1).

      An entire IO-free game with the relevant components and events defines the entire game simulation (1). This is great because it allows having an entire game that can be specialized for any desired IO. In particular, this can be explored for accessibility purposes. Think of the game resulting from (1) as a template for creating games with the same gameplay - it is just a matter of defining how the player interacts with the game.
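      The two-pass creation above can be sketched roughly like this; every class and function name here is hypothetical, not UGE's actual API:

      ```cpp
      #include <memory>
      #include <vector>

      struct Component { virtual ~Component() = default; };
      struct PhysicsComponent : Component { }; // IO-free (pure game logic)
      struct RenderComponent  : Component { }; // IO-related (output)

      struct Actor
      {
          std::vector<std::shared_ptr<Component>> components;
          void Attach(std::shared_ptr<Component> c)
          {
              components.push_back(std::move(c));
          }
      };

      // Pass 1: build the actor from IO-free components only, so the game
      // logic can be simulated without any input or output devices.
      std::shared_ptr<Actor> CreateLogicActor()
      {
          auto actor = std::make_shared<Actor>();
          actor->Attach(std::make_shared<PhysicsComponent>());
          return actor;
      }

      // Pass 2: later (e.g. when a player profile is loaded), attach the
      // IO-related components chosen for the current interaction needs.
      void AttachIOComponents(Actor& actor, bool useGraphics)
      {
          if (useGraphics)
          {
              actor.Attach(std::make_shared<RenderComponent>());
          }
      }
      ```

      The key point is that pass 1 yields a complete, simulatable game, and pass 2 can be repeated with different component sets for different profiles.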

      To explore this in a data-driven way, a player profile describes an abstraction of the user interaction needs that should be satisfied.
      To achieve this, a player profile, at run-time:

      a) Creates and configures IO subsystems;
      b) Defines the desired input device and automation, if needed;
      c) Adds output related components to the actors;
      d) Registers relevant event handlers to the desired events.
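      The four steps above might look something like the following sketch; the `PlayerProfile` and `EventManager` types and all field names are assumptions for illustration, not UGE's real classes:

      ```cpp
      #include <functional>
      #include <map>
      #include <string>
      #include <vector>

      struct PlayerProfile
      {
          std::string outputSubsystem;               // (a) e.g. "graphics" or "audio"
          std::string inputDevice;                   // (b) e.g. "keyboard", "switch"
          bool automateInput = false;                // (b) automation for motor needs
          std::vector<std::string> outputComponents; // (c) components to attach
          std::vector<std::string> enabledEvents;    // (d) events to listen for
      };

      // (d) A toy event manager: profiles only register handlers for the
      // events their interaction needs require.
      struct EventManager
      {
          std::map<std::string, std::vector<std::function<void()>>> handlers;

          void Register(const std::string& event, std::function<void()> fn)
          {
              handlers[event].push_back(std::move(fn));
          }

          std::size_t Trigger(const std::string& event)
          {
              std::size_t fired = 0;
              for (auto& fn : handlers[event]) { fn(); ++fired; }
              return fired;
          }
      };

      void ApplyProfile(const PlayerProfile& profile, EventManager& events)
      {
          for (const auto& name : profile.enabledEvents)
          {
              // A real handler would produce the stimulus chosen by the
              // profile (visual, aural, ...); this is an empty placeholder.
              events.Register(name, [] {});
          }
      }
      ```

      Events a profile does not enable are simply never registered, so the game logic can keep dispatching them at no cost to the player.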

      As components are attached at run-time, it is possible, at any time, to add IO components to the actors (2). This results in a game with optional input (with scripted AI) and output data. As it is always possible to attach and detach components and change data, this greatly helps with (i) and (ii) - and, with the AI component, even (iii).
      Although this works nicely for graphical games, it does not work well for every stimulus - for instance, it is not suitable for audio-only games.

      There is another thing that can be explored in a data-driven fashion: events. As events are registered at run-time, it is possible to fetch the desired handlers from an XML resource and choose which to enable per profile.
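      As a rough illustration of reading event choices from a resource, here is a toy reader that pulls event names out of a profile fragment. UGE's real XML schema and loader are not shown; the `Event` tag and `name` attribute are assumptions, and a real implementation would use a proper XML parser rather than string scanning:

      ```cpp
      #include <string>
      #include <vector>

      // Extracts every value of name="..." from the given text.
      // Illustrative only - not robust XML parsing.
      std::vector<std::string> ExtractEventNames(const std::string& xml)
      {
          std::vector<std::string> names;
          const std::string key = "name=\"";
          for (std::size_t pos = xml.find(key); pos != std::string::npos;
               pos = xml.find(key, pos))
          {
              pos += key.size();
              std::size_t end = xml.find('"', pos);
              if (end == std::string::npos) break;
              names.push_back(xml.substr(pos, end - pos));
              pos = end;
          }
          return names;
      }

      // A hypothetical profile fragment: it enables only the events that
      // produce stimuli the player can perceive.
      const char* kProfileXml =
          "<EventList>\n"
          "  <Event name=\"actor_moved\" enabled=\"true\"/>\n"
          "  <Event name=\"play_sound\" enabled=\"true\"/>\n"
          "</EventList>\n";
      ```

      Each extracted name would then be mapped to a handler and registered with the event manager when the profile is loaded.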

      This works well alongside components: events are great for decoupling the implementation, providing instant stimulus feedback (i), and defining a communication protocol to automate game commands (iii). This also covers many of the existing accessibility guidelines and strategies, and allows exploring different assistive technologies for IO.
      Thus, both combined cover (i), (ii) and (iii).

      As everything in the player profile is created at run-time, it is possible to start mixing different existing components and handlers to define the game for different interaction needs. As everything is configurable and text-based, it is always possible to refine the settings - this is even more interesting considering it is possible to override some Game Logic settings if needed. Thus, if tweaking the data parameters is enough to create an accessible game, it is not necessary to code anything.
      When this is not possible, the new specialization adds new components or events (or both). Provided a possible combination exists, it might enable one more group of players to play. A nice side effect is that every new implementation can improve the old profiles and aid the creation of new ones, with decreasing need for code. This effectively means that considering a new interaction need may improve the game for other players as well.

      What is interesting here is that, provided the profiles scale well, it might be possible to cover multiple interaction needs at once after several profiles have been defined. After all, the problem is reduced to choosing the desired components, handlers and data from the available implementations to customize the game.
      If it does work, it could improve game accessibility, as it is an easy-to-implement, yet flexible and very customizable approach.

      What do you think, Rez? Maybe you could offer me some insights and/or advice.

      For very small demos, it seems to be working well. I have yet to code a decent demo for it - I will start working on one next week.
      At worst, it offers an easy way to explore many of the game accessibility guidelines.


      I do something similar in my own engine with 'users': users can have any number of components attached to them, and these components relate to different IO such as rendering, input, audio (humans), etc. It's worked out pretty well for me so far, but I haven't had a chance to stress test it in a development environment to see how it works in practice - only small amounts of testing.
    • Hello, mholley,

      Thanks for your reply - that is great news!
      As the entire approach is data-driven, it becomes a matter of fine-tuning the text profile to enable and disable what helps the user, and changing values to tweak the game logic.

      As the entire interaction is defined at run-time, it is easy to change input and output devices to play the game. This also makes it easier to register new ones (such as assistive technologies).

      As the game logic is already implemented, most of the coding effort for new profiles goes into creating new event handlers, components and/or IO subsystems. It is an easy way to create an accessible game from the first implementation, with the added benefit of being able to reuse new and old features and letting the user choose what best suits his/her needs.

      I will have a bit more free time soon and will start coding a nice demo.

      Thanks for the help! :)
      Franco
