As I see it, there are 4 ways people do timing. Can someone tell me why my opinions are wrong here?

1. Simple fixed rate timer

Basically, the game waits for x ms to expire before processing the game loop.

Pros: The easiest to implement.
Cons: Unless your internal timer matches the refresh rate, it will not be silky smooth.
Examples: Platypus, Demonstar

My Opinion: This is very basic, but it works. Lots of new coders use it.

2. High frequency internal timer, render at refresh rate

This is where the game runs internally at, say, 250 fps and runs its game logic the needed number of times to keep up with the refresh rate.

Pros: Fairly easy to implement, very precise collisions, silky smooth in most cases.
Cons: Some refresh rates are not 100% smooth, depending on the internal frequency. If the internal logic rate exceeds what the machine can actually sustain, the result is a nasty slowdown. Not efficient.
Examples: Bricks of Atlantis, Bugatron, Pocket Tanks

My Opinion: This works surprisingly well. I used it for Cosmo Bots and it always appears smooth. However, in my games the internal logic is always so simple that it can never get to the "Nintendo Slowdown" point, and I cap it to avoid this. Still, in low-fps situations you are burning a lot of cycles on game logic that isn't really necessary.

3. Delta Value Timing

This is where the game runs as fast as it can, the "amount of time passed" is sent to the logic loops, and objects move at speed*time_passed (see the sketch after this post).

Pros: Always silky smooth, works even in very low fps situations, easy to implement, efficient.
Cons: Can be tricky with physics/collisions, can make code messy.
Examples: Heavy Weapon, Lego Builder Bots

My Opinion: Although it's easy to get in there, it means that time is a factor in all movement, basically adding another variable to deal with. This makes the code messier in the long run. Also, it's really tricky to get things like acceleration to work exactly the same on all computers. If you can deal with all that, it is smooth, though.

4. Low internal fps with tweening

This is where the game runs very slowly internally, say 25 fps, and smooths the rendering between the last two frames based on a time variable.

Pros: Efficient, silky smooth, easy physics.
Cons: Tricky to implement, imprecise collisions.
Examples: Best Friends, Quake 2

My Opinion: This is my favorite method. It's efficient and looks nice. The only problem I have is that getting it to work "just right" takes a lot of fiddling, especially when objects become visible for the first time. I recently switched my current project from #2 to #4, and it wasn't fun, but it's working well now. In high-fps situations (GeForce card, etc.) I see no actual difference, though. On the low end, it does seem to run a bit better.
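To make #3 concrete, here's a minimal delta-timing sketch. None of this comes from the games mentioned above; getTimeMs(), render() and the player fields are hypothetical names for illustration.

Code:
// Minimal delta-timing loop (method #3). getTimeMs() is assumed to
// return milliseconds since startup; all names are hypothetical.
unsigned long last = getTimeMs();
while (running)
{
    unsigned long now = getTimeMs();
    float dt = (now - last) / 1000.0f;   // seconds since the previous frame
    last = now;

    // Every movement is scaled by dt, so speeds are in units per second.
    player.x += player.speedX * dt;
    player.y += player.speedY * dt;

    render();                            // runs as fast as the machine allows
}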
I synch to the video and then compute one (or more, to not alias/miss collisions) physics steps. Also, my mouse/keyboard input handlers' events are timestamped, so if necessary I can take that into account in my physics engine too (to extend the usefulness of the multi-step approach). Ideally I run only one physics iteration/step per frame, but if it becomes necessary (especially for collisions) to avoid aliasing, I divide that frame into the n necessary steps (n must be evaluated depending on speed, etc.; some raycasting collision prediction method helps here). A sketch of the idea is below.

IMO the real problem is not so much the timing method you use, but the fact that the framerate is not constant. The method I use (described above) is in my experience better than the 4 you mentioned, but even this will not make the animation smooth as silk if the rendering "randomly" takes 1 frame now, 2 frames then, and so on. The PC is not really an ideal platform for easily achieving a constant frame rate, especially in 3D games (that's what I was thinking about, anyway). For 2D games it's much easier, though (#4 is a good compromise IMHO, because it removes most of the aliasing). I've never seen much physics in 2D games anyway.
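A minimal sketch of that substepping idea, under the assumption that the step size is chosen so the fastest object cannot cross the thinnest obstacle in one step; thinnestObstacle, fastestObjectSpeed() and integrateAndCollide() are hypothetical names.

Code:
// Split one frame's physics into substeps so fast objects can't skip
// over thin obstacles. All names here are hypothetical.
void stepPhysics(float frameDt)
{
    // Largest step that still catches collisions at the current top speed.
    float maxStep = thinnestObstacle / fastestObjectSpeed();

    int n = (int)(frameDt / maxStep) + 1;   // substeps needed this frame
    float dt = frameDt / n;

    for (int i = 0; i < n; ++i)
        integrateAndCollide(dt);            // one small physics step
}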
By the way, one idea I have that I should implement is to use the timestamps to force additional physics steps, so that all inputs are useful to the simulation (a rough sketch follows). Anyway, all of this is going to make a difference only at very low FPS, which is undesirable by itself, so I think that for 3D games the #3 method + the "1 or more physics steps" (as described in my previous post) may be the best compromise. The rules for me are: never compute more than can be seen && never waste inputs. Physics algorithms tend to be unstable, as you said when mentioning acceleration (but then there are also curved movements, etc.), so at the very least inputs should not be discarded, and should always be acted upon.

I've been tempted several times to implement something similar to your #4 though, so I think (even more so for 2D games) that you are making the right choice. When you're ready to draw a frame, compute an additional time (meaning when it will actually be displayed) and then interpolate/extrapolate object positions to eliminate aliasing even if your fps is different from your fixed 25Hz objects "physics" update. It will look smooth, but will have some visual latency. Normally a player doesn't notice 10-30ms of latency between the hands' output and the visual input anyway.
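A rough sketch of stepping the simulation up to each timestamped input, so no input is lost or applied at the wrong moment. Input, applyInput(), stepPhysics() and simTime are hypothetical names, not from any particular engine.

Code:
#include <deque>

// Everything named here is hypothetical.
struct Input { double time; int key; bool down; };

double simTime = 0.0;

void advanceTo(double frameEndTime, std::deque<Input>& pending)
{
    while (!pending.empty() && pending.front().time <= frameEndTime)
    {
        Input in = pending.front();
        pending.pop_front();

        stepPhysics(in.time - simTime);  // extra physics step up to the input
        simTime = in.time;
        applyInput(in);                  // act on the input at its own time
    }
    stepPhysics(frameEndTime - simTime); // finish the remainder of the frame
    simTime = frameEndTime;
}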
I am using #2 in my current project. You posted this method some time ago; hope I've implemented it right. In my first game I used #3 and didn't like it. Strange that I don't get smooth movement with either of these two. Maybe I am doing it wrong?
Yes, but: 1) it will add latency, and 2) no method can save your smooth animation if the rendering takes longer than you foresaw. Imagine the loop... and suddenly, after you've computed the positions of the objects for the next frame, the rendering takes one frame more than you were expecting. The player will unavoidably see, with some ms (1 frame) of delay, objects in a position that is not representative of the real time they appeared on screen. In a simple 2D game it is quite easy to get a constant framerate (or to compute how long the frame will take to render, and thus show the objects in the right position), but in a complex-scene/fx 3D game that has to run on a lot of different configurations it will be hard.
I think I know what you're saying... Let's say I calculate a tween value of 0.655f and have 20 items to render. If the first 10 items take more than 1 ms to render, then the next 10 will actually be "off" by that 1 ms. Is that what you mean? And are you the same OpenGL programmer who wrote something for Ola? I can't imagine there are too many Fabios around.
We use #3 here. Everything pretty much has the following functions:

Initialise()
Update( FP delta )
Render()
Shutdown()

The Update function has the time delta passed through in seconds. A good test is to pass 0.0f as your delta to the update loop: everything should remain stationary, as if the game is paused. If something is still moving then it's not framerate independent and needs to be fixed.
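A minimal sketch of that interface and the zero-delta pause test. FP is taken here to be a float typedef, and the Bullet class is a hypothetical example entity.

Code:
typedef float FP;

class Bullet
{
public:
    void Initialise()     { x = 0.0f; speed = 300.0f; }
    void Update(FP delta) { x += speed * delta; }  // every movement scales by delta
    void Render()         { /* draw at x */ }
    void Shutdown()       {}
private:
    FP x, speed;
};

int main()
{
    Bullet b;
    b.Initialise();
    b.Update(0.0f);   // the pause test: with a zero delta, nothing should move
    b.Render();
    b.Shutdown();
    return 0;
}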
Hi Mike,

Well, English is not my native language, so sometimes it's hard to express myself and also to understand the replies. As I understand it (please correct me if I'm wrong), tweening means interpolating in between two (or more, for e.g. cubic interpolation) frames (meant exclusively as computed object coordinates, not gfx yet) which are generated at a fixed rate (let's say 25Hz). So we have a free-running 25Hz generator of the world's object coordinates, and another independent free-running "renderer" that has to give a visual representation of what the world is and, to avoid time-aliasing, it must be able to do so for any moment, not just those fixed 25Hz steps, because the display is not necessarily in sync with this 25Hz game engine. Here comes interpolation (with all its problems and limitations!). Latency is unavoidably added because, to have an accurate enough interpolation, cubic interpolation should be the very minimum, meaning 3 or 4 frames must be available (thus we have to delay the visual representation by 2 frames, otherwise we don't have enough data to interpolate yet). This method is very good anyway, and not surprisingly (although I admit I only learnt it thanks to you now) John Carmack used it in Quake 2 (maybe even 3? I suspect his choice also had to do with the need for an optimal solution for internet-based multiplayer algorithms).

When I say that no method in the world can save you from a certain problem, I mean that whatever way you're computing the coordinates of the objects (necessarily) before rendering them on screen, that image will have to represent the world (with a constant delay, if we want) as it would be at the exact moment when the image hits the retina and reaches the brain. If you lose control over this (e.g. Windows starts swapping; or doesn't give your process enough CPU that frame; or the scene to be rendered is too complex and takes extra frames to be rendered), then an additional, unpredictable delay will appear between your chosen "time appearing on the image" and the actual display. This will kill smoothness; we can't help it. The possible solutions are:

1) Constant framerate (everything is predictable).

2) While we can't help it if Windows suddenly freezes or slows down our process, we can at least foresee how many frames it will take to render the frame we're about to render (based e.g. on the number of objects that are going to be drawn for that frame, their size and fx). If we evaluate that the next frame will take 1 more vertical refresh (of course I've always assumed VBL sync to avoid tearing, which looks very unprofessional IMHO), we should not draw the objects at the coordinates we were going to use, but at those delayed by the right amount, so that when the image hits the retina, it shows the objects where they would be at THAT time.

To make a (voluntarily exaggerated) example: our PC gfx card has a refresh rate of 100Hz and takes 10 ms (1 frame) to draw up to 10 objects, 20 ms to draw up to 20 objects, and so on. We almost always have <= 10 objects per frame, but at a certain point we have 12. Assuming a 100Hz framerate:

Code:
TrueTime    N.Objs.   SimTimeToRender   TimeThatTheDisplayShows
000ms       8         000ms             ---
010ms       7         010ms             000ms
020ms       9         020ms             010ms
030ms       12        030ms             020ms
(stalled)                               020ms
050ms       8         050ms             030ms
060ms       9         060ms             050ms

At TrueTime 30ms we expected that the player would get his new image 10ms later, so we sent him world coordinates 030ms, but the renderer stalled and the image actually appeared 10ms later than that, thus showing objects in a place wrong by 10ms. What causes jerking is not so much the replication of the 020ms frame; it is instead the 030ms image appearing at TrueTime 050ms (it should have been the 040ms image), and this is a first jerk, and then another one follows immediately after: an image step from 030ms to 050ms in just 10ms. When we were about to draw 12 objects, if we had known that it would take one extra 10ms, we would have cancelled it and sent the next 10ms world coordinates to the renderer:

Code:
TrueTime    N.Objs.   SimTimeToRender   TimeThatTheDisplayShows
000ms       8         000ms             ---
010ms       7         010ms             000ms
020ms       9         020ms             010ms
030ms       12!       040ms             020ms     <- I knew this was going to happen
(stalled)                               020ms
050ms       8         050ms             040ms
060ms       9         060ms             050ms

The player thus would always have seen images coherent with time. The only remaining unpleasant effect is that one frame will not be rendered, but this is not going to cause any real disturbance, and there is really no solution to this latter problem besides upgrading the gfx board to a more capable one, or reducing the complexity of those frames that would take longer to render, thus making rendering time constant (and making the previous system useless).

For this whole timing and smoothness problem, the ideal solution IMHO would be to choose a fixed framerate (not necessarily at design time, but maybe depending on the PC power we're running on) such that the refresh rate is an integer multiple of it, keep our fps absolutely constant, and run the physics/game_logic with it (no time-aliasing, no latency due to #4 interpolation). Cons? Scene complexity must be reduced (number of displayed objects and/or quality of effects) when we evaluate that the renderer is not going to cope with it. Another con is that if the scene complexity is very low, we aren't going to be able to exploit it to get higher fps. I think, though, that this method will become very viable when motion blur does too, and then we'll use a constant 25fps framerate (with an integer-multiple refresh rate, say 100Hz for monitors, or 50Hz for the PAL TV system) and reduce image complexity when necessary. We'll be able to run the physics/game_logic perfectly in sync with the display rendering, thus having it perfectly smooth, without delay/latency, and with the best possible efficiency. With motion blur we won't need more than 25fps anyway to get perfectly smooth images. Non-motion-blurred fps >= 25 look better simply because the added frames somehow compensate for the lack of motion blur, getting merged by the natural persistence of the retina.

Ssssssssssshh!! Yes, it's me. He told me many, many nice things about you over the years, by the way.
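A tiny sketch of the compensation described above, assuming a hypothetical estimateRenderRefreshes() that guesses how many vertical refreshes the next frame will need from its object count; the 10 ms refresh period follows the 100Hz example, and all other names are made up for illustration.

Code:
// Render the world at the time the image is predicted to actually reach
// the screen, not at "now". All names are hypothetical.
const double REFRESH_MS = 10.0;

void presentFrame(double trueTimeMs)
{
    // Guess how many refreshes drawing this frame will take (1 normally,
    // 2 if there are too many objects/effects this frame).
    int refreshes = estimateRenderRefreshes(visibleObjectCount());

    double displayTimeMs = trueTimeMs + refreshes * REFRESH_MS;

    computeWorldAt(displayTimeMs);   // coordinates for the moment of display
    render();                        // VBL-synced, so tearing is avoided
}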
Regarding the latency, yes, that's the major side effect of tweening. I am using a 25 Hz clock currently, which means that the logic loop is only run once per 40 ms... Here's a simplified version of my loop (the one I actually use accounts for alt-tabs and that kind of stuff):

Code:
#define INTERNAL_MS 40

//how much time has passed since we last were here
elapsed = timer.Check(1000) - lastLogicTime;

//ticks is the number of times we need to run our logic. Usually this will
//be 0 or 1, but in slow cases, it could be higher
int ticks = elapsed / INTERNAL_MS;
for(int k=0; k<ticks; ++k)
{
    lastLogicTime = lastLogicTime + INTERNAL_MS;
    doLogic();
}

//so tween is the amount of interpolation to do between the last
//position and current position
float tween = (float)(elapsed % INTERNAL_MS) / (float)INTERNAL_MS;
doRender(tween);

If you changed INTERNAL_MS to, say, 4, then you wouldn't have to calculate a tween at all, and it basically becomes method #2. So in this situation, latency could be as low as 0, or as high as 80 ms. The only thing that would be really easy to notice with this is the mouse cursor, so for it I make an exception and update its position at render time. I think all the id games still use this method, but I was only sure about Quake 2.

Hey, maybe the tween could be calculated for each object individually. I wonder how much overhead a high-performance timer call has. Yup, that would be ideal, but varying the internal update rate changes how acceleration affects objects and collision precision...

Ola said good things about you too, something about writing a really nice/fast OpenGL layer. BTW, for anyone wondering, Ola is the founder of Arcadelab, Daniel's brother.
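For completeness, a sketch of what a doRender(tween) like the one above might do with the tween value; prevX/currX, numObjects and drawAt() are hypothetical names, not Mike's actual code.

Code:
// Linear interpolation between the positions produced by the last two
// logic ticks. All names are hypothetical.
void doRender(float tween)
{
    for (int i = 0; i < numObjects; ++i)
    {
        Object& o = objects[i];
        float x = o.prevX + (o.currX - o.prevX) * tween;
        float y = o.prevY + (o.currY - o.prevY) * tween;
        drawAt(o, x, y);   // draw at the in-between position
    }
}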
I've been using #2 at 125 fps internally in the logic loops. Sometimes, to optimise, I'll trim back certain things by only doing them every "x" logic frames. I've never run into many optimization issues using this method, though. It runs pretty well even on slower machines for quite complex games, logic-wise. It's not always perfectly smooth depending on the frame rate, but it's generally "good enough" for my purposes. Plus, I really like having a dependable constant frame rate to use as a pseudo timer. I find it keeps the code quite simple and nice.
I've used #3 for Fairies; while it's silky smooth, it does make the code very messy, so for Mystic Inn and the new title we've started working on, I've switched to #2, updating logic at 100 fps, except for some logic reconciliation steps that don't contain any movement and only need to be done once per rendered frame. Best regards, Emmanuel
At each (25Hz) clock you may compute each object's coordinates and store them. Keep the last 4 buffers for cubic interpolation, or 2 for linear. Then interpolate the object coordinates (assume a fixed delay, which will of course be greater in the case of cubic interpolation; then add the foreseen/estimated time the renderer will take) to show, with good approximation, where they are at any fractional time due for display.

Multimedia timers don't have much overhead anyway, even on NT-based OSes and with 1ms resolution. A modern machine can cope perfectly even with 1ms timers, although it's not ideal. I've always assumed a constant internal update rate here.

About Ola... we've been very good friends since about 12 years ago, met IRL, always stayed in touch. I have the greatest respect and esteem for him as a person, but I also think that he's not only a great pixel artist (as many know) but also a very talented and experienced coder, although nowadays he mostly enjoys expressing himself through his pixel art and game design (well, when you finish developing your dream game engine, and all the tools (ArtGem for example), you don't have much left to code).
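A small sketch of the "keep the last 4 buffers" idea, using Catmull-Rom as one possible cubic interpolation; the buffering scheme and names are made up for illustration, not Fabio's actual code.

Code:
// Interpolate a coordinate from the last four 25Hz snapshots.
// p0..p3 are the four most recent stored values (p3 = newest);
// t in [0,1] is the fractional time between p1 and p2, so the
// displayed image lags the newest snapshot by roughly two ticks.
float catmullRom(float p0, float p1, float p2, float p3, float t)
{
    return 0.5f * ((2.0f * p1) +
                   (p2 - p0) * t +
                   (2.0f * p0 - 5.0f * p1 + 4.0f * p2 - p3) * t * t +
                   (3.0f * p1 - 3.0f * p2 + p3 - p0) * t * t * t);
}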
@Fabio: I never considered using cubic interpolation; I think it's beyond the scope of my 2D project.

@Emmanuel/Svero: 125Hz and 100Hz? What happens if the monitor is at 85Hz? Some frames the logic would run once, others twice. I would think that would be choppy...
Some frames would have several updates performed, but it ends up not mattering - the tweening should still be fine, depending on how you calculate things for your loop. I use the method from Gaffer's articles without any trouble. http://www.gaffer.org/articles/Timestep.html Note that Ysaneya at GameDev.net has been having trouble with this method - but as far as I can tell it stems from him not being able to guarantee that his physics on average takes less time than his allocated timestep, which is a requirement for fixing your logic timestep. http://www.gamedev.net/community/forums/topic.asp?topic_id=393475
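One common safeguard against that failure mode (not necessarily what Gaffer or Ysaneya do) is to cap how much time the fixed-step loop is allowed to catch up on per frame. A minimal sketch, with illustrative names and constants:

Code:
// Fixed-timestep accumulator with a cap, so one slow frame can't trigger
// an ever-growing backlog of physics steps ("spiral of death").
const float STEP = 1.0f / 100.0f;      // 100Hz logic
const float MAX_ACCUM = 0.25f;         // never try to catch up more than this

accumulator += frameTime;
if (accumulator > MAX_ACCUM)
    accumulator = MAX_ACCUM;           // drop time instead of drowning in steps

while (accumulator >= STEP)
{
    updateLogic(STEP);
    accumulator -= STEP;
}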
Personally I prefer #3. Although it can get messy and some of the maths can be a real mind-bender to figure out sometimes, it has one major advantage: reliability. It doesn't matter if the target computer runs 10x faster than your dev machine or 10x slower, a good delta time algorithm will always produce the same results, just at a different fps. I personally limit my delta time to no faster than 400fps, as a kind of future-proofing (for when computers can play the game so fast that the variables can no longer detect the time between frames). I also find delta time is very good for precision in collisions - e.g. when you want to move an object backwards by its collision depth with another object, you can do so extremely precisely. Plus, once you get used to thinking in terms of time rather than distance, the maths involved becomes simpler and the messy code becomes more readable. At least, to yourself.

Also, when it comes to physics and delta time, you can still use fixed-rate steps; you simply find out how many steps are needed when the object moves delta*velocity in that frame (a rough sketch of the clamp and the substeps is below). But as has been said already, it all depends on the game. For some games, the other timing methods are every bit as good as or better than delta time, and require less work to implement. I mostly use delta time because it was the way I was taught to do timing and it's integrated into my framework so much that it would take me ages to switch to another method.
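Roughly what that clamp plus fixed-rate substepping could look like; the constants and function names are made up for illustration, with MIN_DELTA corresponding to the 400fps cap mentioned above.

Code:
// Clamp the frame delta, then run the physics in fixed-size steps within it.
const float MIN_DELTA = 1.0f / 400.0f;   // never integrate finer than this
const float MAX_DELTA = 1.0f / 10.0f;    // never integrate a huge hitch at once
const float PHYS_STEP = 1.0f / 100.0f;   // fixed physics step size

void update(float delta)
{
    if (delta < MIN_DELTA) delta = MIN_DELTA;
    if (delta > MAX_DELTA) delta = MAX_DELTA;

    // How many fixed steps cover this frame's delta.
    int steps = (int)(delta / PHYS_STEP) + 1;
    float dt = delta / steps;            // spread the delta evenly

    for (int i = 0; i < steps; ++i)
        stepPhysics(dt);                 // hypothetical fixed-size step
}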
Actually #3 is by far the least reliable method, and should not be used, ever. Consider the following:

1. You are running on a very slow computer, or experience a sudden slow-down on a fast computer. As your movement delta is multiplied by a large time value, your system loses numerical stability. Depending on your collision detection system, objects may even move through other objects.

2. You are running on a very fast computer. As delta time tends toward zero, movement vectors lose accuracy, and objects may stop moving altogether.

3. Even under normal circumstances, moving 10 times by 1/10 unit will yield different results than moving 11 times by 1/11 unit.

4. An object, moving at some (constant or variable) speed, fires missiles at a constant rate. To correctly handle this case, you would need to:
- Keep track of the timer value from frame to frame.
- Calculate how many missiles (0 or more) are fired in the current update.
- Calculate where the object was when it fired each of those missiles, and fire the missiles from that location.
- Adjust the position of the missiles so that the missiles fired earlier in the frame have already moved forward by the end of the frame.

Chances are you'll get the calculations wrong somewhere along the line, and it'll still look close enough to correct most of the time that you don't notice that there's an error.
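For reference, a sketch of handling case 4 above under delta timing; the Shooter/Missile types, fireTimer, FIRE_INTERVAL, MISSILE_SPEED and spawnMissile() are all hypothetical names for illustration.

Code:
// Fire missiles at a constant rate from a moving shooter, independent of
// frame length. All names are hypothetical; movement is along +x only.
struct Missile { float x; };
struct Shooter { float x, speed, fireTimer; };

const float FIRE_INTERVAL = 0.25f;    // seconds between shots
const float MISSILE_SPEED = 500.0f;   // units per second

void updateShooter(Shooter& s, float delta)
{
    s.x += s.speed * delta;           // where the shooter ends this frame

    s.fireTimer += delta;
    while (s.fireTimer >= FIRE_INTERVAL)
    {
        s.fireTimer -= FIRE_INTERVAL;

        float late  = s.fireTimer;                 // time since that shot
        float shotX = s.x - s.speed * late;        // shooter's position then

        Missile m;
        m.x = shotX + MISSILE_SPEED * late;        // missile has already flown
        spawnMissile(m);
    }
}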
Being an idiot I stick with #1 (and preferably with Vsync on). My game logic is utterly trivial and uses almost no CPU, and I simply design the games to run at 60fps on the minimum spec. So easy it hurts. And always silky smooth. Cas
What if the screen refresh rate is 100 hz, and vsync is on? In that case it will not be silky smooth.
Rainer Deyke - all of the problems you listed there are fixed by one simple solution: set a minimum and maximum delta value per frame. And point 4 is where the code gets messy and the maths complicated. Yes, there is a chance that you'll make small errors in the code, but as long as those errors produce results that look and act like the user expects, they are not serious issues. It's only when you deal with physics reactions that the precision becomes a concern, and then it's a matter of making sure the physics steps are working with the delta time, not against it.

The other methods listed can be problematic because they can lose sync with the screen, just like Fabio has been pointing out. With delta time, the calculations are not always 100% perfect, but they are in sync with what the user experiences (assuming a correct implementation - there's a lot of room for developer error in delta time programming). Sometimes, especially in visually rich games, appearances are more important than precision.