Physics Timing


    • Physics Timing

      Hey Guys,

      I have made a timer class wrapping the functionality of SDL's timer. It is correctly returning the delta time to me in milliseconds; I just don't know how to properly pass it to Bullet's step simulation method.

      The book says to pass it in as seconds, and in my case I am getting about a millisecond between frames, so as a float that's 0.001f. This is EXTREMELY slow. My test demo drops a ball onto a sloped platform, and it then rolls off toward the edge. Before I used my timer I simply passed 0.005f into StepSimulation, which looked nice.

      Am I doing this properly? A constant 0.001f is getting passed in, but everything is a slow crawl; even after the ball rolls off the edge it looks like slow motion, so it's not just the speed of the ball on the slope.
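
      Roughly speaking, the call in question looks like this (assuming Bullet's stepSimulation(timeStep, maxSubSteps, fixedTimeStep) overload; the variable names are just placeholders):

      Source Code

      float deltaSeconds = m_Timer.GetDeltaTime();        // milliseconds converted to seconds, e.g. 0.001f
      // Let Bullet chop the frame into fixed 60 Hz internal steps, up to 7 of them.
      m_pDynamicsWorld->stepSimulation(deltaSeconds, 7, 1.0f / 60.0f);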
    • So SDL thinks that every frame takes exactly 1 ms? That seems suspicious to me. Have you tried setting up your own timer and seeing how long each frame takes? I'm guessing there's something wonky going on there.

      When you have very low time values, physics systems can start to break down due to floating point precision. (Mike talks about this in his Gotcha on page 574.)

      -Rez
      SDL has a function called SDL_GetTicks which returns the time in milliseconds since SDL was initialized. I have a wrapper class that basically has a delta-time member and a last-time member; each frame it gets the number of ticks, subtracts the last time from the current time, stores that in the delta time, and then stores the current ticks in the last-time member.

      Source Code

      unsigned long curTime = SDL_GetTicks();
      m_DeltaTime = curTime - m_LastTime;   // elapsed milliseconds since the previous frame
      m_LastTime = curTime;


      I then have a GetDeltaTime method that does this

      Source Code

      return (float)m_DeltaTime * 0.001f;   // convert milliseconds to seconds


      If I set a breakpoint at the GetTicks call, count ten seconds, and then press continue, it shows exactly 10,000 ms has passed. So say I am at 10,000 and my frame takes 10 milliseconds:
      10,010 - 10,000 = 10 ms
      I then multiply this by 0.001f to get my floating point number in seconds.

      At this point I have been plugging that straight into Bullet's StepSimulation, and I have fiddled with it trying to get it right. Should I really be multiplying my delta time until it seems right?
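
      For completeness, the whole wrapper boils down to something like this (a minimal sketch; the class name and members are placeholders, not my exact code):

      Source Code

      #include <SDL.h>

      class GameTimer
      {
      public:
          GameTimer() : m_LastTime(SDL_GetTicks()), m_DeltaTime(0) {}

          // Call once per frame to measure the elapsed milliseconds.
          void Tick()
          {
              unsigned long curTime = SDL_GetTicks();
              m_DeltaTime = curTime - m_LastTime;
              m_LastTime = curTime;
          }

          // Delta time in seconds, ready for the physics step.
          float GetDeltaTime() const { return (float)m_DeltaTime * 0.001f; }

      private:
          unsigned long m_LastTime;
          unsigned long m_DeltaTime;
      };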

      I'm guessing SDL_GetTicks() just wraps GetTickCount() or something similar. If you set a breakpoint for 10 seconds, do you actually get 10,000 ms? As in, exactly? I'm guessing you're hitting some limit in SDL, since your demo is likely running way faster than SDL's timer can deal with.

      Try artificially limiting the framerate of your game to 60 fps. This definitely isn't something you want to ship with, but it'll be an interesting test to see if your physics universe realigns itself. That should give you something around 17ms, which may be better. You can also try 30 fps, which should give you results around 33ms.

      Floating point precision can really screw you over. I think that's what's happening here.

      Incidentally, I'm dealing with something similar at work right now. There's an inconsistency between a timed event (an event that triggers after a certain amount of time has elapsed) and the decay of a statistic; they are often off by about 0.00005. Floating point precision makes me sad.

      -Rez
      OK, I have never limited frame rate before; I assume I just need to delay my program each frame. There is a delay function in SDL as well which will consume time.
      Also, would using a double instead solve the precision issue? Or will Bullet somewhere along the line cast it to a float, truncating it?
    • Yeah, something like:

      Source Code

      const unsigned long FRAME_LIMIT = 17; // ~60 fps
      if (deltaTimeMs < FRAME_LIMIT)
          Sleep(FRAME_LIMIT - deltaTimeMs);


      I think the function itself takes a float, so it would get truncated at that point.

      -Rez
      Thanks for the help, Rez. I got things sorted out; after some good sleep I noticed I had a small error in my code, and the StepSimulation method is now getting delta times of around 12-17 ms, but everything was still horribly slow.

      Do you ever find that you have to multiply this number to make Bullet physics look 'realistic'? I had to multiply my deltaT by a factor of 8 just to get things to move nicely!

      Should I be multiplying this, or should I be looking elsewhere for the culprit? I have not implemented a density or physics material list yet, so density, restitution and friction are all set to 1.0f. I don't imagine that this is the problem, but I could be wrong.

      Also, the StepSimulation method takes a btScalar, which I imagine is just a typedef for float.
      *sigh* I usually don't end up having to have code like this looked over, so please forgive me; I don't know what I am doing wrong. Here is my calculation exactly as it looks. I am having trouble where the framerate jumps way down to 3 fps and then way up to 200 fps; I only caught this once I started showing the FPS in the title bar.

      Source Code

      unsigned long currentTime = SDL_GetTicks();
      m_DeltaTime = currentTime - m_LastTime;
      m_LastTime = currentTime;

      // Frame limiter: if the frame finished early, sleep off the remainder.
      if (m_DeltaTime < m_FrameLimiter)
      {
          unsigned long diff = m_FrameLimiter - m_DeltaTime;
          SDL_Delay(diff);
      }

      // Show the delta and FPS in the window title.
      char* szFPS = new char[32];
      unsigned long fps = (m_DeltaTime != 0) ? 1000 / m_DeltaTime : 0;
      sprintf(szFPS, "DeltaT: %lu FPS: %lu", m_DeltaTime, fps);
      SDL_WM_SetCaption(szFPS, NULL);
      delete [] szFPS;


      Could you point out what it is that's not right? I could really use a second pair of eyes here, thanks.
    • You shouldn't have to lie to the physics simulation. Mike did the Physics integration so he'd be able to tell you if there's something specifically wonky with Bullet, though looking at the code, it appears that we just pass the time in directly:

      Source Code

      if (m_pPhysics && !m_bProxy)
      {
          m_pPhysics->VOnUpdate(elapsedTime);
          m_pPhysics->VSyncVisibleScene();
      }


      Nothing jumps out at me as being wrong with the code. GetTickCount() (which SDL_GetTicks() likely wraps) is notoriously bad for high-resolution timing. If you're trying to time things that are 10ms - 16ms or less, you'll get very erratic results. Check out the remarks on the MSDN page:
      msdn.microsoft.com/en-us/library/windows/desktop/ms724408(v=vs.85).aspx

      You may want to try using a higher resolution timer. Another possibility is to just sleep for 20ms as a hack. The problem with the method I suggested to you is that if the frame only took 5ms, GetTickCount() may return as high as 16ms. If the next frame took 10 ms, you might get 16ms again. As you can see, this will cause erratic behavior.
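
      If you do go the higher-resolution route on Windows, a minimal sketch of a QueryPerformanceCounter-based delta timer would be something like this (class and member names are just placeholders):

      Source Code

      #include <windows.h>

      class HighResTimer
      {
      public:
          HighResTimer()
          {
              QueryPerformanceFrequency(&m_Frequency);  // counts per second
              QueryPerformanceCounter(&m_LastCount);
          }

          // Returns the time elapsed since the last call, in seconds.
          float Tick()
          {
              LARGE_INTEGER now;
              QueryPerformanceCounter(&now);
              float deltaSeconds = (float)(now.QuadPart - m_LastCount.QuadPart) / (float)m_Frequency.QuadPart;
              m_LastCount = now;
              return deltaSeconds;
          }

      private:
          LARGE_INTEGER m_Frequency;
          LARGE_INTEGER m_LastCount;
      };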

      As for some frames dropping to 3 fps, I'm not sure what's happening there, although you are spamming the title bar with framerate info and dynamically allocating and deallocating that string every frame. That's likely to be very slow. Maybe profile it and see what's happening?

      By the way, you don't need to dynamically allocate the string. You're already allocating a fixed amount of memory, so just declare the array on the stack:

      Source Code

      char szFPS[32];


      That'll be MUCH faster than thrashing memory.

      -Rez
    • Rez will probably hate this, but here's my inner loop, which seems to work well enough for getting non-zero deltas.

      while (Engine::Instance()->IsFinished() == false)
      {
          MSG message;
          if (PeekMessage(&message, nullptr, 0, 0, PM_REMOVE) != 0)
          {
              TranslateMessage(&message);
              DispatchMessage(&message);
          }
          else
          {
              static long timer = 0;
              long delta = clock() - timer;
              timer = clock();

              Engine::Instance()->Update(delta);
              Engine::Instance()->Render();
              SwapBuffers(EntropicHDC); // Using OpenGL double buffering.

              // Sleep one millisecond to avoid running faster than the resolution of clock().
              // Plus this isn't remotely resource constrained, so it will still easily do 60 fps.
              Sleep(1);
          }
      }


      James
      Sleeping for 1 ms probably wouldn't affect this much. As far as I know, Sleep() is not bound to the low-resolution timer used by GetTickCount(); it's bound to the system clock, which is higher resolution. Either way, adding an arbitrary Sleep() call is definitely a hack, unless you pass in 0 as the parameter, which just causes the current thread to yield.

      Still, having some kind of false performance load can help simulate what your game will actually do. When I was writing my multi-threaded renderer, my logic thread was literally an empty function, and it turned out that spinning on it was starving the OS of CPU cycles. A simple Sleep(1) call fixed that. I was able to remove it once the engine was rendering and doing something useful in the logic thread.

      Like I said before, I would put in a much higher value just to test. Something around 20ms.

      -Rez
      I have yet to even read up on threads; I have never used them... well, maybe inadvertently.

    • I wouldn't bother at this stage in the game. It'll just confuse the issue further. Threads add a level of complexity that you probably don't need for most simple games. You should only resort to threads if you're trying to solve a specific problem in your game.

      -Rez
      I just wanted to add the solution to this problem so anyone else who comes across it has the answer. After talking to Mr.Mike it quickly became apparent that my objects were HUGE. Reading up here shows that Bullet physics by default treats 1.0f as 1 meter. My ball was 60 meters; after rescaling it, there is no issue with speed and no need to multiply my delta time.

      Thanks for the help guys, here is the article I read:

      Scaling The World
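
      For anyone hitting the same thing, here is a minimal sketch of what the fix amounts to: build the collision shapes at sensible metric sizes instead of whatever huge units the models happen to use. This assumes the stock Bullet API; pDynamicsWorld and the numbers are just placeholder examples.

      Source Code

      #include <btBulletDynamicsCommon.h>

      // The ball is now 0.3 m in radius instead of 60 m.
      btSphereShape* pBallShape = new btSphereShape(0.3f);

      btScalar mass = 1.0f;
      btVector3 localInertia(0, 0, 0);
      pBallShape->calculateLocalInertia(mass, localInertia);

      // Start the ball 10 m above the sloped platform.
      btDefaultMotionState* pMotionState = new btDefaultMotionState(
          btTransform(btQuaternion::getIdentity(), btVector3(0, 10, 0)));

      btRigidBody::btRigidBodyConstructionInfo rbInfo(mass, pMotionState, pBallShape, localInertia);
      pDynamicsWorld->addRigidBody(new btRigidBody(rbInfo));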
      On the subject of timing

      I have been using GetSystemTime for my game timing and for timing my function/process durations. Is this a bad way to do it? What do professional programmers use for high-resolution timers? Has anyone played with QueryUnbiasedInterruptTime? MSDN says it can give inconsistent results on systems with multicore processors.
      You should check out QueryPerformanceCounter. I would have used that, but SDL was already set up, and I imagine it will do fine for me. I think it is Windows-specific, but it is accurate to something like a nanosecond, I think.

      *Edit*

      Apparently gettimeofday is the Linux equivalent.
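
      In case it helps, a minimal sketch of getting a millisecond delta out of gettimeofday (POSIX only; the function name is just a placeholder):

      Source Code

      #include <sys/time.h>

      // Milliseconds elapsed between two gettimeofday() samples.
      unsigned long ElapsedMs(const timeval& start, const timeval& end)
      {
          long long micros = (long long)(end.tv_sec - start.tv_sec) * 1000000LL
                           + (end.tv_usec - start.tv_usec);
          return (unsigned long)(micros / 1000);
      }

      Sample with gettimeofday(&tv, NULL) at the top of each frame and diff against the previous sample, the same way the SDL_GetTicks version works.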