Vertex Declarations


    • Vertex Declarations

      Hiya,

      I'm trying to build a renderer for my small game project, but I'm having a bit of a problem with vertex declarations - I was hoping someone could help?

      For each mesh in a scene, I need to render it several times each frame using different HLSL shaders. Most shaders take different vertex input structures though, which I think means I need to set different vertex declarations. But how can I do this if I can only store a mesh in memory in a single vertex format?

      Could vertex streams be used for this - one stream for positions, one for normals, and so on? I could then bind only the streams a particular shader actually needs. Nvidia seems to recommend using as few streams as possible though, so I'm wondering if this would be misusing them?
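
      Something like this is what I have in mind - just a rough, untested sketch, and the buffer names (posVB, nrmVB, uvVB) are made up:

      #include <d3d9.h>

      // Sketch of the per-attribute stream idea (untested): one vertex buffer per
      // attribute, and a declaration whose elements point at different streams.
      IDirect3DVertexDeclaration9* CreateSplitDecl(IDirect3DDevice9* device)
      {
          static const D3DVERTEXELEMENT9 elems[] =
          {
              { 0, 0, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_POSITION, 0 },
              { 1, 0, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_NORMAL,   0 },
              { 2, 0, D3DDECLTYPE_FLOAT2, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TEXCOORD, 0 },
              D3DDECL_END()
          };
          IDirect3DVertexDeclaration9* decl = NULL;
          device->CreateVertexDeclaration(elems, &decl);
          return decl;   // created once per layout, Release()d at shutdown
      }

      // At draw time, bind the declaration and the per-attribute buffers:
      //   device->SetVertexDeclaration(decl);
      //   device->SetStreamSource(0, posVB, 0, 12);  // float3 positions
      //   device->SetStreamSource(1, nrmVB, 0, 12);  // float3 normals
      //   device->SetStreamSource(2, uvVB,  0, 8);   // float2 texcoords
      // For a shader that doesn't need UVs, the idea would be a second declaration
      // listing only streams 0 and 1, so stream 2 never has to be bound.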

      Otherwise, I could just keep a fully bloated copy of each mesh, set a huge vertex declaration for a single stream, and the shaders can just use the parts they want. This seems a bit messy though...
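
      By a "bloated" copy I mean something like this - every attribute interleaved in a single stream (offsets in bytes, again just a sketch):

      #include <d3d9.h>

      // One fat single-stream layout: position, normal, tangent and UV per vertex.
      const D3DVERTEXELEMENT9 fatElems[] =
      {
          { 0,  0, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_POSITION, 0 },
          { 0, 12, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_NORMAL,   0 },
          { 0, 24, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TANGENT,  0 },
          { 0, 36, D3DDECLTYPE_FLOAT2, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TEXCOORD, 0 },
          D3DDECL_END()
      };
      // The vertex stride would be 44 bytes, even for a shader that only reads POSITION.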

      Can anyone comment, or recommend the best solution?

      Thanks very much for any help! :)


    • RE: Vertex Declarations

      In HLSL you can use semantics and indices to pick out only the vertex data you need in each shader. In the effect code, declare only the inputs that a particular shader is going to use, and check that the mesh actually provides that data - normals, UV coordinates, tangent-space vectors, and so on. The vertex declaration, on the other hand, should describe everything the mesh vertices contain; each shader then reads just the elements it needs.

      For example, a shader variable tagged with the texture coordinate semantic at index 0 (TEXCOORD0) receives the element of the vertex structure that is declared with that same usage and usage index.

      In other words, keep the vertex declaration identical to the vertex structure, and declare only the data you actually need in each shader. It makes no sense to keep two vertex declarations for the same mesh, because you don't want to rearrange the vertex data once it has been uploaded to the graphics card unless it's absolutely necessary.
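
      As a rough illustration (untested, and the struct layout is only an example), the declaration can be kept in sync with the vertex structure like this:

      #include <d3d9.h>
      #include <cstddef>   // offsetof

      // Example vertex structure - whatever the mesh actually stores.
      struct Vertex
      {
          float pos[3];
          float normal[3];
          float uv[2];
      };

      // One declaration describing everything the vertices contain.
      const D3DVERTEXELEMENT9 meshElems[] =
      {
          { 0, (WORD)offsetof(Vertex, pos),    D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_POSITION, 0 },
          { 0, (WORD)offsetof(Vertex, normal), D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_NORMAL,   0 },
          { 0, (WORD)offsetof(Vertex, uv),     D3DDECLTYPE_FLOAT2, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TEXCOORD, 0 },
          D3DDECL_END()
      };

      // Created once per mesh layout and reused with every shader that draws it:
      //   device->CreateVertexDeclaration(meshElems, &decl);
      //   device->SetVertexDeclaration(decl);
      //   device->SetStreamSource(0, vb, 0, sizeof(Vertex));
      // A shader whose input struct only declares POSITION and TEXCOORD0 simply
      // ignores the NORMAL element - the semantics do the matching.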

      I don't know whether vertex streams would let you solve this more efficiently - I'll have to let someone else answer that part.

      Hope I made myself clear enough, and good luck.

      EDIT:
      Great, by the way, that someone is asking about 3D stuff - my favorite subject isn't discussed as much as it should be on these forums, unfortunately... ;)
      "There are of course many problems connected with life, of which some of the most popular are: Why are people born? Why do they die? Why do they want to spend so much of the intervening time wearing digital watches?"
      -Douglas Adams
