You can also average the time of the last few frames, but (like everything) that can bring its own difficulties.
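For what it's worth, a minimal sketch of that kind of averaging (the class name and the eight-frame window below are made up for illustration):

```cpp
#include <array>
#include <cstddef>

// Keeps the last N frame times in a ring buffer and returns their average.
// An illustrative sketch only; the window size is arbitrary.
class FrameTimeAverager {
public:
    double push(double frameSeconds) {
        samples[next] = frameSeconds;
        next = (next + 1) % samples.size();
        if (count < samples.size()) ++count;
        double sum = 0.0;
        for (std::size_t i = 0; i < count; ++i) sum += samples[i];
        return sum / static_cast<double>(count);   // average of what we have so far
    }
private:
    std::array<double, 8> samples{};   // last 8 frame times
    std::size_t next = 0;
    std::size_t count = 0;
};
```

One of the difficulties shows up immediately: a single long frame gets smeared across the next several frames' movement, so a hiccup turns into a short stretch of slightly wrong speed.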
About choppiness

Even if you have perfect timing and you render perfectly on every single frame, you can still get choppiness. This happens because sound algorithms and other "stuff" run periodically. They're not long enough to cause a missed frame, but they're long enough to mess up the rendering.

Say you're rendering on a 60Hz monitor (and you have DirectX synced to it). Now collect the difference in time between render frames. You'd expect to always see 16 or 17 milliseconds. Here's what you might see:

16
16
16
27  <--- What happened?
5
16

Some high-priority task ran and delayed the start of your render, but DirectX still presented your frames at the proper time because of the double or triple buffering. I've noticed this phenomenon when using Windows Media Player to play sound, but not nearly so much when using BASS.

-Jeremy
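A sketch of how you might collect those deltas yourself, using std::chrono here rather than any particular Windows API; the spike thresholds are arbitrary:

```cpp
#include <chrono>
#include <cstdio>

// Collect the time between render frames and flag anything that strays far
// from the expected 16-17 ms. A diagnostic sketch only.
int main() {
    using clock = std::chrono::steady_clock;
    auto prev = clock::now();
    for (int frame = 0; frame < 600; ++frame) {
        // ... render the frame here ...
        auto now = clock::now();
        double ms = std::chrono::duration<double, std::milli>(now - prev).count();
        prev = now;
        if (ms < 11.0 || ms > 22.0)                 // arbitrary spike threshold
            std::printf("frame %d: %.1f ms  <--- what happened?\n", frame, ms);
    }
}
```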
On Actual Timing In Windows

There are four major APIs I could name in Windows to get the time (and various other APIs that seem to be a layer on top of these). A quick summary:

- TSC, the Time Stamp Counter: a new instruction as of the Pentium architecture. It tells you how many cycles the CPU has executed since it was powered on.
- QueryPerformanceCounter (QPC): the high-resolution timing API in Windows.
- timeGetTime() (TGT): the old-school method for reasonably high-resolution time.
- GetSystemTimeAsFileTime() (GSTAFT): with a return type as big as all outdoors. GetTickCount() and the C functions _ftime() and clock() are implemented using GSTAFT.

Unfortunately these all have their failings. I will save you a lot of headaches and palaver and simply tell you: use QPC.

There are two places where QPC falls down. One is on a specific type of "south bridge" chipset, where a busy PCI bus will cause QPC to leap forward by seconds at a time. This is a well-known and acknowledged bug. The other is on specific kinds of modern hardware, such as a laptop our own princec has: on computers where QPC is implemented using the TSC, and where the CPU supports SpeedStep, the frequency of QPC will change when the CPU frequency changes, and there's no way of noticing this without a lot of heavy-duty wizardry. Both these problems are rare, so my advice is to use QPC and console your users if they encounter one.

For the record:

- TSC is fussy because a) you have to time it yourself, b) on CPUs with SpeedStep the frequency will change, and c) on multi-core CPUs, if you don't specify CPU affinity, you'll get different results from the different cores' TSC counters.
- TGT is fussy because a) it's so low-resolution, only 1ms in the best case (using timeBeginPeriod()) and usually not even that good, b) it drifts a little, which Tom Forsyth blames for Startopia games going out of sync, and c) it is slow to call (taking 1-2 microseconds to execute).
- GSTAFT and its ilk are unusable, as they only have 10ms accuracy on the best of days (and only 55ms accuracy on older machines).
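For reference, bare-bones QPC frame timing looks roughly like this. It's only a sketch and makes no attempt to detect either of the failure cases above:

```cpp
#include <windows.h>
#include <cstdio>

int main() {
    LARGE_INTEGER freq, prev, now;
    QueryPerformanceFrequency(&freq);   // counter ticks per second
    QueryPerformanceCounter(&prev);

    for (int frame = 0; frame < 10; ++frame) {
        // ... update and render the frame here ...
        QueryPerformanceCounter(&now);
        double dt = double(now.QuadPart - prev.QuadPart) / double(freq.QuadPart);
        prev = now;
        std::printf("frame %d: %.3f ms\n", frame, dt * 1000.0);
    }
}
```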
1 msec isn't really "low". All Quake-engine-based games use such a timer and it's really good enough. Surprisingly.

My machine shows that leaping behaviour. It's pretty annoying if a game has a problem with it (as you can see in that bug listing... one should back that timing function up to detect this kind of bad behaviour).

> GSTAFT and its ilk are unusable, as they only have 10ms accuracy on the best of days (and only 55ms accuracy on older machines).

It's actually Win9x (and not old machines per se). With 2k/XP it's 10 msec, and Linux (and Mac?) is 1 msec. If you're writing old Java it's the only type of timer you have, but it sorta works for method #1 (the other ones too... in theory).

I came up with an adaptive yield throttle loop construction which works reasonably well even with a resolution as bad as 250 msec. In this year's 4k competition several games used that method and it worked very well across a broad selection of hardware. The side effects are (obviously) that it can over/under-steer a bit if there are huge performance jumps, but the damping sorts that out for the most part. Needless to say, it's of course a major pain in the rear regardless. A 1 msec resolution is damn awesome in comparison.
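A rough reconstruction of that adaptive yield throttle idea (this is not the original 4k code; the names, constants and the coarseMillis() stand-in are all made up): instead of sleeping for a precise interval, which a coarse timer can't give you, you yield a variable number of times per frame and adapt that count so the frame rate measured over a longer window converges on the target.

```cpp
#include <thread>
#include <chrono>

// Stand-in for a coarse timer such as the old Java currentTimeMillis().
static long long coarseMillis() {
    using namespace std::chrono;
    return duration_cast<milliseconds>(steady_clock::now().time_since_epoch()).count();
}

class YieldThrottle {
public:
    explicit YieldThrottle(double targetFps) : targetFps(targetFps) {}

    void endOfFrame() {
        for (int i = 0; i < yieldsPerFrame; ++i)
            std::this_thread::yield();                 // burn time cheaply

        ++frames;
        long long now = coarseMillis();
        if (now - windowStart >= 1000) {               // adapt about once a second
            double measuredFps = frames * 1000.0 / double(now - windowStart);
            // Damped correction: too fast -> yield more, too slow -> yield less.
            yieldsPerFrame += int((measuredFps - targetFps) * damping);
            if (yieldsPerFrame < 0) yieldsPerFrame = 0;
            frames = 0;
            windowStart = now;
        }
    }
private:
    double targetFps;
    double damping = 5.0;        // tuning constant, made up
    int yieldsPerFrame = 100;    // initial guess
    int frames = 0;
    long long windowStart = coarseMillis();
};
```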
Hey, just for your information, Titan Attacks pauses quite frequently on some of my machines. I think it's because my machine sees a CPU hog and gives it a penalty. Do you add a little sleep between frames? However, when these slowdowns happen the game goes slowly too, which is nice, rather than the action going on whilst you can't do anything.
"There are four major APIs I could name in Windows to get the time.." hmm.. which one of those is the Multimedia Timer?
That'll be because your machine ain't actually fast enough to run it at 60Hz. Try 16-bit desktop colour. That's just the way I deal with slowdowns.

Cas
<devil's advocate> Would that be one implicit vote for not locking the frame rate? </devil's advocate>
In the commercial games I've worked on, we've either had no timing strategy at all, or everyone has had their own little timing strategy for their subsystems. The most used method (up until we moved to middleware) seems to be everyone using their own frame counters to move objects around. We work with consoles, mind you, but it always amazes me how disorganised and hacked to death commercial code can be. It's amazing it even gets released. Currently we're using Unreal3, which uses strategy #3 everywhere (at last, a strategy to adhere to!).
That's probably because the boss wants you to "finish" the work as soon as possible, and thus one doesn't have the time to produce really well-thought-out code. This is one of the reasons why I have never worked under anyone, even when I was making retail games. Someone (certainly not me) said that #3 was more or less unprofessional shite. Ouch.
Well, it would be if my sales were bad, but... they're not, and that, at the end of the day, is all that counts!

Cas
I can certainly see your point, what with Gears of War showing so poorly at E3 this year and practically every game that won an award at the show being an Epic licensee. Method #3 is obviously not viable.
Indeed.. also for the Quake3 engine.. <irony> what a piece of crap smoothness-wise! </irony> It's about time to close this thread IMHO..
Proton decay leads to choppiness, due to the quantum effects involved. You're much better off using General Relativity methods.
I'm wondering how some of these methods are implemented...

1. Simple fixed rate timer
Kind of like waiting for VSync with no concurrency between CPU/GPU? Movements are calculated using an assumed constant frame rate and the code hopefully always runs within a frame? Very noticeable when it skips a frame.

2. High frequency internal timer
What happens if you're CPU-bound and the process loop can barely manage to run within a frame (say at 60fps)? You'd be making the game run even slower. I assume movements are calculated using an assumed constant frame rate and the process loop is called as many times as needed to keep up with the frame rate? E.g. if we skip a frame, we call the process loop twice? This is fine for consoles, but what about PCs with multiple refresh rates? The refresh rate must be a factor in the movement calculations.

3. Delta Value Timing
I'm confused about that quote; surely time is a factor in all animation, regardless of the timing strategy? This is the method I'm most familiar with and I like it. I find it the easiest and most flexible to use. It's the only method (as I understand it) that is frame-rate independent. Please correct me if I'm wrong.

4. Low internal fps with tweening
I've never seen this method used before and don't fully understand it. Perhaps once I do, I'll change my mind from method #3 to #4. Is this method implemented like method #2, but we just render the same frame out again until it's time for an update? Does this mean an assumed constant frame rate?

[Edit - After reading the thread again, most of this has already been covered - sorry. For what it's worth, I think Fabio's arguments are spot on.]
Basically, you run your simulation/physics_engine/game_logic at a fixed rate, and then free-run your renderer, which displays the physics situation at time t - n (n is a fixed delay you can't avoid). This is best done by interpolating the two (or more) physical states that surround the particular time you want to display. Of course, you have to keep the past x blocks of whole game state as computed by your simulation/physics_engine/game_logic, otherwise your renderer will not know where to find the data it needs.

Basically it's a fixed-rate simulation plus a renderer that does sample rate conversion, where a "sample" here means a whole simulation/physics_engine/game_logic state (position of each object, etc.).

This method is approximately as smooth as #3 (approximately, because the smoothness you regain comes from interpolation of objects' positions, etc.), and it adds latency too. Clearly an inferior method to #3, but the best one for networked multiplayer games, for obvious practical reasons to do with game client synchronization.
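A rough sketch of that structure, using the usual accumulator formulation with a one-step interpolation; State, integrate() and render() are placeholders, and the 30Hz simulation rate is just an example:

```cpp
#include <chrono>

struct State { double x, v; };                       // stand-in for a whole game state

static State integrate(State s, double dt) {         // fixed-step physics step
    s.x += s.v * dt;
    return s;
}

static State lerp(const State& a, const State& b, double t) {
    return { a.x + (b.x - a.x) * t, a.v + (b.v - a.v) * t };
}

static void render(const State& /*s*/) { /* draw the interpolated state */ }

int main() {
    using clock = std::chrono::steady_clock;
    const double dt = 1.0 / 30.0;                     // simulation runs at a fixed 30 Hz
    double accumulator = 0.0;
    State previous{0.0, 1.0}, current{0.0, 1.0};
    auto lastTime = clock::now();

    for (int frame = 0; frame < 1000; ++frame) {      // free-running render loop
        auto now = clock::now();
        accumulator += std::chrono::duration<double>(now - lastTime).count();
        lastTime = now;

        while (accumulator >= dt) {                   // catch the simulation up
            previous = current;
            current = integrate(current, dt);
            accumulator -= dt;
        }

        // Render between the two states that surround the display time;
        // this is the fixed one-step latency mentioned above.
        render(lerp(previous, current, accumulator / dt));
    }
}
```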
People already have, I just deleted them. Anyway, time for the thread to be closed. It was an interesting discussion though...