SceneGraph question


    • SceneGraph question

      Hi!

      If Mike or someone else could answer this.

      In the book it is written that the SceneGraph is on training wheels and far from what you would put in a real engine. After googling and reading articles, most of what I found about scene graphs is that they keep track of "what's connected to what", mostly in terms of matrices.

      So I find myself lacking information and hopefully I can get an answer here: what can be improved in the SceneGraph? What should be added or changed? And on top of those two questions, can anyone point me to places where I could read and learn how to implement a more complete scene graph?

      Thank you :)
    • RE: SceneGraph question

      Oh, I can think of a few things - but perhaps the most important improvement has to do with knowing what graphics hardware is present and optimizing the output of the SceneGraph node iteration to take best advantage of it.

      The current implementation just runs through the tree and makes all the DirectX calls to draw everything. What a real game engine would do is hold these calls in batches, and even try to optimize the batches by minimizing calls that change the rendering state.

      So there really needs to be a class that sits in between the SceneGraph and DirectX - one specifically written for optimizing the renderer.
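      To make the idea concrete, here is a minimal sketch of such an in-between class. All the names (RenderQueue, DrawRequest, the integer state IDs) are illustrative inventions, not from the book: the scene graph submits draw requests instead of calling DirectX directly, and the queue sorts them so draws sharing expensive state end up adjacent before anything is issued to the hardware.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Hypothetical draw request: the scene graph submits these instead of
// making graphics API calls during traversal.
struct DrawRequest {
    uint32_t shaderId;    // most expensive state change, so it sorts first
    uint32_t textureId;
    uint32_t meshId;
    float    depth;       // could drive back-to-front sorting for alpha
};

class RenderQueue {
public:
    void Submit(const DrawRequest& r) { m_requests.push_back(r); }

    // Sort so draws sharing a shader (then texture, then mesh) are
    // adjacent; a flush loop then only changes state when a key changes.
    void SortForMinimalStateChanges() {
        std::sort(m_requests.begin(), m_requests.end(),
            [](const DrawRequest& a, const DrawRequest& b) {
                if (a.shaderId  != b.shaderId)  return a.shaderId  < b.shaderId;
                if (a.textureId != b.textureId) return a.textureId < b.textureId;
                return a.meshId < b.meshId;
            });
    }

    // Counts the shader switches the current order would cause; a real
    // renderer would issue the API calls in this loop instead.
    size_t CountShaderSwitches() const {
        size_t switches = 0;
        for (size_t i = 1; i < m_requests.size(); ++i)
            if (m_requests[i].shaderId != m_requests[i - 1].shaderId)
                ++switches;
        return switches;
    }

private:
    std::vector<DrawRequest> m_requests;
};
```

      The point is only the shape of the thing: the scene graph stays hardware agnostic, and this class owns the ordering policy.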

      At least, that's one big improvement one could work on.
      Mr.Mike
      Author, Programmer, Brewer, Patriot
    • Hi Mike, thank you for the response!

      Do you mean a class between SceneGraph and DirectX to handle geometry instancing?

      Two questions:
      Just yesterday I was working on a "ShaderNode" which overloads the render-children call so the children are rendered once for each shader pass, but then I realized it could be a problem when a child has alpha: it would suddenly leave the hierarchy and be rendered at the end, without the shader applied. How do I avoid that and keep the render settings?

      I also added a function, "SetShaderPerKidProperties" (taking a pointer to the child), which is called for every child of the shader node. I didn't want the kids to set their own shader properties because those mostly depend on the shader itself. Maybe instead I should add a map or a hash map to the RenderProperties structure, containing all the extra shader settings for rendering? (Animated grass could have amplitude and direction, a fur shader could have an intensity... though who sets the lights then?)

      I did have geometry instancing implemented in a shader before - is that what you meant in your reply? Would you use it for every object, or just for objects designed for it (for example grass, trees, and other highly repeated objects), and draw the rest with a DrawSubset call per appearance?

      Would it really need to sit in between, or could it just be a case of a special Node with special KidNodes?

      The system I had before implementing the SceneGraph from your book acted like this:

      I had a list of entities which were iterated and checked for visibility; if they passed, they were sent for sorting.

      Sorting went like this: sort by shader, then sort by model. In the case of grass, which all share the same model, sort by texture.

      Then, when all done, go over the list, prepare instance buffers, and for each model with an instancing effect add the instance data and batch render using the geometry-instancing instance-vertex-buffer technique.
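      A rough sketch of that sort step, under my own naming (VisibleEntity and the integer IDs are made up for illustration): sort the visible list by shader, then model, then texture, and each run of identical keys becomes one batched draw.

```cpp
#include <algorithm>
#include <cstddef>
#include <tuple>
#include <vector>

// One entry per entity that survived the visibility check.
struct VisibleEntity {
    int shader, model, texture;
    float world[16]; // would feed the instance vertex buffer
};

// Sorts by shader, then model, then texture, and returns how many batched
// draws the sorted list produces (one per run of identical keys).
std::size_t SortAndCountBatches(std::vector<VisibleEntity>& entities) {
    std::sort(entities.begin(), entities.end(),
        [](const VisibleEntity& a, const VisibleEntity& b) {
            return std::tie(a.shader, a.model, a.texture)
                 < std::tie(b.shader, b.model, b.texture);
        });
    std::size_t batches = entities.empty() ? 0 : 1;
    for (std::size_t i = 1; i < entities.size(); ++i) {
        const VisibleEntity& p = entities[i - 1];
        const VisibleEntity& e = entities[i];
        if (std::tie(e.shader, e.model, e.texture)
            != std::tie(p.shader, p.model, p.texture))
            ++batches;
    }
    return batches;
}
```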

      When I read about the SceneGraph, I thought similar sorting could be done by just adding all models using shader X into ShaderNode "X"; then the ShaderNode has the overloaded draw-children call (as I mentioned above). The only problem I found is what to do when a child is suddenly marked as having alpha.

      I am still trying to figure out how to achieve a similar effect with geometry instancing. Perhaps a "MeshInstanceNode" class which holds the per-node data, sorted by model and texture inside its GeometryInstancingNode; in their render call they would add their positions to a list/vector, filtered by model and texture. Then the parent would handle the geometry instancing technique, maybe in the post-render?
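      The gather-then-flush pattern I am describing could look something like this minimal sketch (MeshInstanceNode/GeometryInstancingNode are my hypothetical names, and the "draw" is simulated by returning the instance count): children contribute their positions during the render phase, and the parent flushes one batched draw in post-render.

```cpp
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

class GeometryInstancingNode; // parent that owns the batched draw

// Leaf that does not draw itself; it only reports its position to the parent.
class MeshInstanceNode {
public:
    explicit MeshInstanceNode(Vec3 pos) : m_pos(pos) {}
    void Render(GeometryInstancingNode& parent);
private:
    Vec3 m_pos;
};

class GeometryInstancingNode {
public:
    void AddInstance(const Vec3& p) { m_instances.push_back(p); }
    void AddChild(Vec3 pos) { m_children.emplace_back(pos); }

    // One frame: children contribute instance data, then PostRender "draws".
    std::size_t RenderFrame() {
        m_instances.clear();               // fresh batch every frame
        for (MeshInstanceNode& c : m_children) c.Render(*this);
        return PostRender();
    }
private:
    // Would build the instance vertex buffer and issue one instanced draw;
    // here it just reports how many instances the single call would cover.
    std::size_t PostRender() const { return m_instances.size(); }

    std::vector<MeshInstanceNode> m_children;
    std::vector<Vec3> m_instances;
};

void MeshInstanceNode::Render(GeometryInstancingNode& parent) {
    parent.AddInstance(m_pos);
}
```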

      Or perhaps it needs to be a whole separate system like I had it before?

      Just looking for help on how best to implement those two pieces of functionality, if possible :)

      Edit:
      Another question! Do fully clipped pixels (alpha testing) need to be rendered in the last pass as well, or just blending modes? That is, if I have a texture with pixels that are 100% invisible, can it be rendered with the rest of the models, or does it need to be sorted to the end and rendered front to back?


    • Some addition questions...

      How would post-processing be handled? For example, HDR, which needs to replace the back buffer with a floating-point render target.

      Would it just be a node that replaces the back buffer on pre-render and applies tone mapping etc. on post-render, having all other nodes as its children?
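      That pre-render/post-render bracketing could be sketched like this (PostProcessNode is a made-up name, and the render-target work is simulated with a log so the control flow is visible; real code would bind an fp16 target and run a tone-map pass):

```cpp
#include <functional>
#include <string>
#include <vector>

// Hypothetical HDR node: PreRender swaps in a floating-point target, the
// children render into it, PostRender resolves it with a tone-map pass.
class PostProcessNode {
public:
    using Child = std::function<void(std::vector<std::string>&)>;

    void AddChild(Child child) { m_children.push_back(std::move(child)); }

    void RenderFrame(std::vector<std::string>& log) {
        log.push_back("bind fp16 render target");     // PreRender
        for (Child& c : m_children) c(log);           // RenderChildren
        log.push_back("tonemap fp16 -> back buffer"); // PostRender
    }

private:
    std::vector<Child> m_children;
};
```

      The whole visible scene hangs under this node, so everything it draws lands in the floating-point target before the resolve.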
    • LOL Way too many questions!

      Geometry instancing isn't what I was talking about - rather, I was referring to a way to batch render calls to DirectX that essentially cross scene graph node lines. Right now a single scene node might change render state, shader, etc., which ignores the performance problems this causes at the hardware level.

      But the scene graph itself can be a really convenient way to organize world render data, so to get the best of both worlds you have to have something sit between the scene graph and the hardware layer that eventually sends optimized batches of render data down to the hardware.

      As the scene graph is traversed, instead of making DirectX calls it sends commands into this other class.

      I hope that makes better sense.

      Geometry instancing is something that simply saves memory - which is really important of course - but it is very different from my proposal.

      Ok, so read that answer and if it makes any sense at all we'll go to your next question.
      Mr.Mike
      Author, Programmer, Brewer, Patriot
    • I read it and I have an idea to offer.

      Why is a class needed in between? I will give you an example of how I extended the SceneGraph in order to efficiently sort hierarchical information:

      A node called ShaderNode overloads the render-children call: it owns a shader, calls Begin and End as well as BeginPass, and renders the kids right inside the pass loop. (Usually the loop only runs once anyway, as most of my shaders have just one pass, but it works for both cases, which is the intention.)
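      Roughly like this sketch (the effect interface is simulated with a log; real code would call the D3DX effect's Begin/BeginPass/EndPass/End, and the names here are mine):

```cpp
#include <functional>
#include <string>
#include <vector>

// ShaderNode sketch: RenderChildren is overridden so the children are drawn
// once per shader pass, inside the effect's Begin/End bracket.
class ShaderNode {
public:
    using Child = std::function<void(std::vector<std::string>&)>;

    explicit ShaderNode(unsigned passCount) : m_passes(passCount) {}

    void AddChild(Child child) { m_children.push_back(std::move(child)); }

    void RenderChildren(std::vector<std::string>& log) {
        log.push_back("effect Begin");
        for (unsigned p = 0; p < m_passes; ++p) {
            log.push_back("BeginPass " + std::to_string(p));
            for (Child& c : m_children) c(log); // kids drawn inside the pass
            log.push_back("EndPass");
        }
        log.push_back("effect End");
    }

private:
    unsigned m_passes;
    std::vector<Child> m_children;
};
```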

      Now let us look at the kids, shall we?
      Kids are sorted by Texture and Material. Once I finish some more of the implementation I am going to implement my mesh geometry instancing from the old system to this one.

      This means all models with the same effect are naturally batched together and effect changes are minimized; this is what you said you want to do, isn't it? Well, the SceneGraph seems to provide a very natural answer to this problem by simply holding the "normal meshes" inside a "ShaderNode" (which is not a mesh itself, but just runs the shader passes).

      Now I am trying to decide how to implement my geometry instancing here. Back then (before the SceneGraph) I just had all "Entities" render do the following:
      1. Test frustum, if passed go to 2.
      2. Add an information structure to a "SceneManager" queue; this was essentially a list (std::vector) of shaders, where each shader had a list of meshes, and each mesh had a list of matrices.

      Later the SceneManager was called to render: it looped through the list, went from shader to shader, and within each shader it went over the meshes; for each mesh it created an instance vertex buffer containing all the positions and miscellaneous colors, and rendered them via a geometry-instancing shader. That way 1000 grass objects rendered in one call.
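      The queue described above can be sketched as a nested structure, shader -> mesh -> world matrices, so each (shader, mesh) pair becomes one instanced draw call. This is my paraphrase of the old system, not its actual code:

```cpp
#include <array>
#include <cstddef>
#include <map>
#include <vector>

using Matrix = std::array<float, 16>;

// SceneManager queue sketch: per shader, per mesh, the list of world
// matrices that would fill one instance vertex buffer.
class SceneManager {
public:
    void Queue(int shaderId, int meshId, const Matrix& world) {
        m_queue[shaderId][meshId].push_back(world);
    }

    // One draw call per (shader, mesh) group, however many instances it holds.
    std::size_t DrawCallCount() const {
        std::size_t n = 0;
        for (const auto& shaderEntry : m_queue)
            n += shaderEntry.second.size();
        return n;
    }

    std::size_t InstanceCount(int shaderId, int meshId) const {
        auto s = m_queue.find(shaderId);
        if (s == m_queue.end()) return 0;
        auto m = s->second.find(meshId);
        return m == s->second.end() ? 0 : m->second.size();
    }

private:
    std::map<int, std::map<int, std::vector<Matrix>>> m_queue;
};
```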

      Let us go back to the new system.

      While I really loved that old SceneManager system, I find it silly that now I might just iterate over my SceneNodes exactly like I did before with Entities, send them to another class, filter them and render. Then again, maybe it is OK because this system is more efficient at culling objects?

      I don't know, really.

      So one option is to just have all objects that want to be rendered add themselves to a batch queue and then render them via geometry instancing like before (a lot of iterations, and what is the scene graph there for then? I think that pretty much makes the class irrelevant now?)

      Option number two is add special nodes that can be added positions, colors and whatever other relevant data and then batch them with the same technique - BUT:

      Disadvantages: only one mesh type per node of this kind would work, meaning I'd need one such node for each mesh (which is OK, I guess?). Also: loss of the benefit of child nodes again? I'd much prefer it if the node could be given children instead of modifying its own internal structure, maybe?

      Benefits: I could have one node of this type per region, so I could add the whole radius of its kids to it and instantly make a region appear or disappear without checking every kid/piece of data inside (is that really a benefit over culling each object individually and adding it to a queue?)



      Conclusion: I am not sure how and where to apply geometry instancing in conjunction with the SceneGraph. The most optimal methods seem to make the SceneGraph kind of useless and only in the way, as everything is sent to another class to be iterated again, filtered and sorted. Wasn't this what the SceneGraph was supposed to be all about?

      Yet sending them through a second filtering pass and then batch rendering them seems to be the best way.

      On the bright side, the problem seems to apply to geometry instancing only; as I described before, my way of handling different shaders seems to keep to the style and intent of the SceneGraph's hierarchical sorting concept (sort by shader and not just by matrix).

      Amateur game developer begging for ideas, please :)

      P.S. Am I allowed to show off my boyfriend's latest university project? He is in his final year and made this as a team project:
      youtube.com/watch?v=TFW4JCSCVtE

      *proud* but hey wait until I finish moving all my rendering system to the new classes I am making then I will show work from our real project :) *ha! then again you worked on Thief III, jealous :P (thief fan, ultima [+online] too!)*
      IMO geometry instancing is just a way of saving some memory, so SceneGraph leaf nodes would simply point to shared geometry instances to get common stuff like mesh data. A better graphics programmer than me would likely have some great pointers here - maybe I'll ask someone at the office and see what they would suggest.
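      That "leaf nodes point to shared geometry" idea is just reference sharing; a minimal sketch, with names of my own invention, where a thousand leaves share one copy of the mesh data through a shared_ptr while each keeps its own transform:

```cpp
#include <cstddef>
#include <memory>
#include <vector>

// One copy of the heavy data, however many nodes reference it.
struct MeshData {
    std::vector<float> vertices;
};

struct LeafNode {
    std::shared_ptr<const MeshData> mesh; // shared, not owned per node
    float world[16];                      // per-instance transform stays unique
};

// Build n leaves that all reference the same mesh data.
std::vector<LeafNode> MakeForest(std::shared_ptr<const MeshData> mesh,
                                 std::size_t n) {
    std::vector<LeafNode> leaves(n);
    for (LeafNode& leaf : leaves) leaf.mesh = mesh;
    return leaves;
}
```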

      By "In Between" I really meant that at some point, a data structure is built that eventually gets sent to the hardware, and the thing that builds that data structure is likely not going to be the hardware agnostic SceneGraph.

      Whatever this thing is, it may have extremely different implementations on platforms as different as the PS3 or the Xbox 360, so likely it will be built completely from scratch and talk directly to the low-level graphics interface.

      Of course, like I said above I'm not the best graphics programmer (far from it!) so this idea could be completely wacked. LOL.
      Mr.Mike
      Author, Programmer, Brewer, Patriot