Better abstraction of rendering technology through IRenderer?

    • Hi Everyone,

      I'm currently writing my own game/engine, which works in a pretty similar way to the GCC4 code, except that I'm making it cross-platform, since I spend a lot of time switching between Linux and Windows. As a result I'm currently using SDL (and thus OpenGL) for my graphics rendering, but I'm not sure it will give me the kind of flexibility I'm looking for. Right now I just want something that renders 3D graphics easily, but at a later point I may want to dig deeper into the graphics side and use raw OpenGL, or something more established like Ogre3D.

      What I'm therefore looking to do is add an abstraction layer for graphics rendering, so that I can plug in whichever renderer I like. I read about GCC4's IRenderer interface for abstracting graphics and decided to take a look at the code, but it isn't quite what I expected. I was expecting IRenderer to declare functions like DrawLine(), DrawMesh(), DrawTriangleStrip(), ToggleFullscreen(), etc., so that I could pass the interface my game data and let each implementation of IRenderer decide how to perform the action, allowing the rest of the code to care only about IRenderer.
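
      Something like this minimal sketch is what I had in mind (all names here are my own, not from GCC4; Vec3, Color, Mesh and Mat4 stand in for whatever math and mesh types the engine already has):

        // Rough sketch of the interface I expected; each backend
        // (OpenGL, D3D, Ogre3D, ...) would provide a concrete subclass.
        struct Vec3; struct Color; struct Mesh; struct Mat4;

        class IRenderer
        {
        public:
            virtual ~IRenderer() {}

            virtual bool Init(int width, int height, bool fullscreen) = 0;
            virtual void Shutdown() = 0;

            virtual void BeginFrame() = 0;
            virtual void EndFrame() = 0;

            virtual void DrawLine(const Vec3& from, const Vec3& to, const Color& color) = 0;
            virtual void DrawMesh(const Mesh& mesh, const Mat4& world) = 0;
            virtual void DrawTriangleStrip(const Vec3* verts, int count) = 0;
            virtual void ToggleFullscreen() = 0;
        };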

      However, there seem to be quite a few other parts of the GCC4 code that hook directly into the current renderer. For example, creation of the D3D device is done directly in GameCodeApp::InitInstance(). Similarly, there are individual scene node classes for both D3D9 and D3D11, and there are multiple places in RenderComponent.cpp that have to test which renderer is in use and act differently.

      My question is whether there is a better way to abstract the renderer. Is the GCC4 code this way because there is a fundamental flaw in the approach I described, or simply because the engine was never expected to use anything other than DirectX? It'd be great to get some opinions before I program my way into a dead end.

      Thanks!

    • I'll give you an idea of how I structure my rendering engine (which hides OpenGL in a separate library).

      - Windowing: an independent layer that abstracts windowing for the different systems (WinAPI, X11). A window is handed to the render system as a render target, and this layer also sets the window up for OpenGL.
      - Low level wrappers: one for anything OpenGL needs an implementation for (Framebuffer, Texture, Buffer, BufferArray, Shader, Program, Uniform, etc.).
      - High level objects: these use the low level objects, which stay abstracted away (Meshes, Materials, Compositors, Viewports, Render Targets).
      - Scene objects: these contribute to the scene and use instances of the high level objects (Scene Nodes, Entities (Cameras, Lights, Renderables, etc.)).
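
      As a concrete example of a low level wrapper, here is a simplified sketch of a texture class, assuming a loader such as GLEW provides the GL entry points (this is the shape of the thing, not my exact code):

        #include <GL/glew.h>

        // The rest of the engine only ever sees this class;
        // no raw GLuint or gl* call leaks out of the library.
        class Texture
        {
        public:
            Texture(int width, int height, const void* rgbaPixels)
            {
                glGenTextures(1, &m_handle);
                glBindTexture(GL_TEXTURE_2D, m_handle);
                glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                             GL_RGBA, GL_UNSIGNED_BYTE, rgbaPixels);
                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
            }

            ~Texture() { glDeleteTextures(1, &m_handle); }

            void Bind(int unit) const
            {
                glActiveTexture(GL_TEXTURE0 + unit);
                glBindTexture(GL_TEXTURE_2D, m_handle);
            }

        private:
            GLuint m_handle;
        };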

      Essentially, if I wanted to put these to use I would (see the sketch after this list):

      - Create a window
      - Create a render system (OpenGL)
      - Add the window as a render target to the render system
      - Create a scene
      - Add a scene node for a camera to the scene
      - Create a viewport on the window using the camera (which creates the connection)
      - Create scene nodes for lights, meshes, and particle effects, which in turn create everything they need (Buffers, Materials (Textures, Shaders/Programs)).
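
      In code, that sequence reads roughly like this (pseudocode against my own API; every name here is illustrative):

        // Window + render system; the window doubles as a render target.
        Window* window = windowSystem->CreateWindow("Game", 1280, 720);
        RenderSystem* renderSystem = new GLRenderSystem();
        renderSystem->AddRenderTarget(window);

        // Scene with a camera node.
        Scene* scene = new Scene();
        Camera* camera = scene->CreateCamera("main");
        scene->GetRootNode()->CreateChild()->Attach(camera);

        // The viewport ties the camera to a region of the render target.
        Viewport* viewport = window->CreateViewport(camera);

        // Content nodes create their own high/low level objects internally.
        scene->GetRootNode()->CreateChild()->Attach(scene->CreateLight("sun"));
        scene->GetRootNode()->CreateChild()->Attach(scene->CreateEntity("hero", "hero.mesh"));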
    • I've noticed some problems as well while trying to port to OpenGL. Although there is some abstraction, there's a lot of DirectX-specific code scattered around the codebase. Are there any plans to fix this? As it stands it's not great learning material; it feels more like a legacy project that we have to maintain. :(
    • In a real project, you would abstract things even further. You're seeing DirectX stuff scattered around because we're using DXUT to manage the Windows app, which was a deliberate choice. The advantage is that we could avoid explaining a lot of Windows-specific plumbing; DXUT takes care of a LOT of little annoying things that we simply didn't have the page count to cover. It's also easier to understand: you turn everything over to DXUT and give it callbacks so it can notify you when something you care about happens.
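
      If you haven't seen the DXUT style, it looks roughly like this (based on the D3D11 samples shipped with the June 2010 DirectX SDK; check your SDK version for the exact signatures):

        #include <DXUT.h>

        // DXUT owns the window and the main loop; you just register callbacks.
        void CALLBACK OnFrameMove(double fTime, float fElapsedTime, void* pUserContext)
        {
            // update game logic here
        }

        void CALLBACK OnD3D11FrameRender(ID3D11Device* pd3dDevice,
            ID3D11DeviceContext* pd3dImmediateContext,
            double fTime, float fElapsedTime, void* pUserContext)
        {
            // render the scene here
        }

        int WINAPI wWinMain(HINSTANCE, HINSTANCE, LPWSTR, int)
        {
            DXUTSetCallbackFrameMove(OnFrameMove);
            DXUTSetCallbackD3D11FrameRender(OnD3D11FrameRender);

            DXUTInit(true, true, NULL);
            DXUTCreateWindow(L"Sample");
            DXUTCreateDevice(D3D_FEATURE_LEVEL_11_0, true, 800, 600);
            DXUTMainLoop();
            return DXUTGetExitCode();
        }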

      The disadvantage is what you're seeing right now: we're tightly coupled to DXUT, which means we're tightly coupled to Windows and DirectX. It would be non-trivial to port the code to anything else. As I said in my previous post, today I would probably choose something like SDL, since it offers many of the same benefits while still being multi-platform. Were we to do a 5th edition, we might consider that.
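
      For comparison, the SDL2 version of that bootstrapping is short and identical on every platform (a minimal sketch):

        #include <SDL.h>

        int main(int argc, char* argv[])
        {
            SDL_Init(SDL_INIT_VIDEO);

            // One call works on Windows, Linux, and Mac alike.
            SDL_Window* window = SDL_CreateWindow("Game",
                SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
                1280, 720, SDL_WINDOW_OPENGL);
            SDL_GLContext context = SDL_GL_CreateContext(window);

            bool running = true;
            while (running)
            {
                SDL_Event e;
                while (SDL_PollEvent(&e))
                    if (e.type == SDL_QUIT)
                        running = false;

                // render with OpenGL here...
                SDL_GL_SwapWindow(window);
            }

            SDL_GL_DeleteContext(context);
            SDL_DestroyWindow(window);
            SDL_Quit();
            return 0;
        }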

      To answer the original question, my own engine has a project called RenderSkin which abstracts all DirectX functionality. All DirectX code lives there and hides behind abstract interfaces, which are what the rest of the engine uses. I also have an OS project that does the same thing for OS calls. Porting my engine to OpenGL would mean rewriting the implementation of the RenderSkin, but nothing else in the engine would have to change. Likewise, porting to Mac would require a new implementation of the OS layer. The other 11 projects in the solution that make up my engine are completely platform- and renderer-independent.
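
      The interfaces themselves are nothing exotic. Sketching the idea (simplified, not my literal headers):

        // Public RenderSkin headers: pure interfaces, no D3D types anywhere.
        class Texture
        {
        public:
            virtual ~Texture() {}
            virtual int GetWidth() const = 0;
            virtual int GetHeight() const = 0;
        };

        class Geometry
        {
        public:
            virtual ~Geometry() {}
            virtual void Render() = 0;
        };

        // Inside the RenderSkin project only: the one place that includes
        // D3D headers. An OpenGL port reimplements these and nothing else.
        // class D3DTexture  : public Texture  { /* wraps ID3D11Texture2D */ };
        // class D3DGeometry : public Geometry { /* vertex/index buffers  */ };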

      The creation process in my engine is relatively straightforward (steps 1-3 are sketched in code below):
      1. Application subclass is instantiated.
      2. The Application subclass overrides the CreateRenderSkinFactory() function so that it creates and returns the appropriate factory. This is the game-specific app and is the only place I include a non-interface file from the render skin.
      3. In the engine layer, Application::Init() calls this factory method to create the render skin.
      4. Application::Init() then creates the appropriate Scene class (not a DirectX scene; think of this as a human view).
      5. The Scene object is initialized, which causes it to call into the factory and create a RenderDevice.
      6. The RenderDevice encapsulates the core render functionality, whether it's DirectX or OpenGL (the Scene doesn't know or care).
      7. Graphics resources like textures are created through the render skin factory. The caller gets back a Texture interface.
      8. Rendering happens by sending a RenderCommand object to the Scene.
      9. When the Scene's thread decides to render, it goes through all the render commands and renders each one by calling Geometry::Render(). Again, Geometry is an interface from the RenderSkin. Under the covers, the concrete class contains the DirectX data necessary to actually render it.
      10. Rinse & repeat.
      The important thing here is that the process is exactly the same for OpenGL. There's no difference as far as the engine is concerned. The only system that cares is the implementation layer of RenderSkin.
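
      Condensed into code, steps 1 through 3 look something like this (names approximate, stubs in place of the real classes):

        class IRenderSkinFactory { public: virtual ~IRenderSkinFactory() {} };

        // All D3D code lives behind this concrete factory.
        class D3DRenderSkinFactory : public IRenderSkinFactory {};

        // Engine layer: knows only the abstract factory.
        class Application
        {
        public:
            virtual ~Application() {}
            void Init()
            {
                // Step 3: the engine asks the game-specific subclass for the factory.
                m_pFactory = CreateRenderSkinFactory();
                // Steps 4-6 follow: the Scene is created and builds its
                // RenderDevice through this factory, never seeing a D3D type.
            }
        protected:
            // Step 2: overridden by the game app, the only place that
            // includes a concrete RenderSkin header.
            virtual IRenderSkinFactory* CreateRenderSkinFactory() = 0;
            IRenderSkinFactory* m_pFactory;
        };

        // Step 1: the subclass that actually gets instantiated.
        class MyGameApp : public Application
        {
        protected:
            virtual IRenderSkinFactory* CreateRenderSkinFactory()
            {
                return new D3DRenderSkinFactory(); // a GL port returns a GL factory here
            }
        };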

      -Rez
    • Thanks for the reply, Rez. That's pretty similar to the approach I took. My rendering code has a number of interfaces that the rest of the code uses to get things done without knowing about the concrete implementations. I didn't separate my rendering code into its own project, though. It's something I still might do, because on occasion it's very tempting to just use one of the concrete classes "for testing purposes", which at some point I'll probably forget about and leave in.
    • To be honest, I don't know how it was handled on The Sims 4 or any other professional project I've been on. I've spent my career in gameplay and AI, so I've never been on that side of the fence. Mike has, so he might have a better answer. :)

      -Rez