Discussion in 'Game Development (Technical)' started by Mike Boeh, May 19, 2006.
The physics for these games happen on the server right?
I read it as straight #4...
I think it's interpretable as either 3/4 or 4, since they aren't mutually exclusive.
JC and me
Quake 3 used #3. I gather the game had a pretty simple approach: compute game state, render frame, forever. So if you got 120fps, you were getting 120 game state updates per second too. Some l33t players discovered that there were some otherwise-unmakable jumps in the game that you could make if your computer was running within a specific range of FPS, thus giving an advantage to some players over others.
For that reason, Carmack changed Doom 3 to use #2: a strict 60fps game state clock. If you render Doom 3 at 100fps, 40 of those frames are redundant.
My game also uses #2, at 120fps. I worked very hard to get the game to be 100% repeatable from recordings of just the inputs, and I'm happy to say that I've succeeded. If I'd used #3, I simply wouldn't have been able to achieve it... or at least not without the frame rate itself becoming an input. I would definitely consider #4 if the game state were slow to calculate, but I don't expect to have that problem any time soon.
Edit: I also wanted to mention that I chose 120fps because 120 is higher than anyone's frame rate is likely to be, and it's such a wonderfully composite number. 2 x 2 x 2 x 3 x 5. If you run at 60fps, or 80fps, or 100fps, the game state updates will coincide nicely with screen updates I figure. It probably makes little or no difference, as almost any number > 100 would work fine, but perhaps there is a tiny advantage to a good composite number.
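For readers following along, the fixed-clock approach (#2) is usually written as an accumulator loop. Here's a minimal sketch of the idea (the function names are placeholders of mine, not anyone's actual engine code): the simulation always advances in exact 1/120 s steps no matter how fast or slow rendering is, which is what makes input-only replays repeatable.

```python
# Minimal fixed-timestep loop sketch (method #2).
# The simulation always advances in exact 1/120 s steps, so replaying
# the same input log always reproduces the same game state.

STEP = 1.0 / 120.0  # fixed game-state clock, as in the post above

def run_fixed_timestep(frame_times, update):
    """frame_times: elapsed real seconds per rendered frame.
    update: called once per fixed simulation step.
    Returns the total number of fixed steps taken."""
    accumulator = 0.0
    steps = 0
    for dt in frame_times:
        accumulator += dt
        # Consume whole fixed steps; leftover time carries to next frame.
        while accumulator >= STEP:
            update()
            accumulator -= STEP
            steps += 1
    return steps
```

Rendering at 60fps for one second, for example, still produces 120 game-state updates; at 100fps some frames simply consume one step and some two.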
Quake 3's problem was rounding errors if I remember correctly. They clamped the velocity vectors to an int. A poor implementation.
If the update is limited to 60fps and you're rendering at 120fps won't it look juddery as half of your frames are the same?
Um. You are aware that, with floating point variables, moving an object by 0.1 units 100 times does not yield the same result as moving it by 0.2 units 50 times, right? This is pretty much fundamental to the floating point mathematical model. (A quick check using Python yields 9.9999999999999805 for the former and 9.9999999999999964 for the latter. With single-precision floats, the difference will be much larger.)
Now, you could stick to fixed-point/integer math, and you could regularly round your variables to eliminate rounding artifacts, but it is highly unlikely that you will be able to eliminate all timing artifacts for any but the simplest games.
Actually delta-time is significantly less efficient than method #4 (fixed update rate plus tweening). It requires one physics update per rendered frame, and the actual updates are slower.
That does not address any of the flaws that delta-time is likely to encounter during normal operation. It just keeps one of the most degenerate of those flaws from manifesting.
Small numbers don't have to be zero in order to cause floating point underflow, or in order to disappear entirely when added to a larger number. (Personally, I have gotten actual zeros as time deltas before, but only because I use a millisecond timer with over 1000 fps.)
If I draw something at 9.9999999999999805 and 9.9999999999999964 I bet you couldn't tell the difference on screen.
I never said method 3 is the most efficient, it's just not inefficient. In fact, I never said method 3 is the best at all, I just disagree with you when you say it's flawed and shouldn't be used. There are enough real-world applications of the method that show it does work. It's an incorrect message to send out to the folk reading this who aren't sure. Each method has its pros and cons. You still haven't answered: what happens in the other methods if the bottleneck is in your update?
I'm still not sure how you'd get a zero time delta unless you're using an inaccurate timer. Even if you were drawing nothing at all you'd struggle. And in that case, who cares? I was under the impression high-resolution timers match the clock frequency of the CPU. Therefore your render would have to consist of a NOP and nothing else to maybe cause a problem.
I wish you'd explain what these flaws are - your points have been addressed. I'm interested to know if any retail games have managed to work around them. Give me a real world example of where it will matter. I've yet to see these flaws manifest themselves.
I really wonder why lotsa people here still don't realize that delta-time can be an integer multiple of a base time (preferably the video refresh period). Thus delta-time can indeed reproduce identical results each time it runs, and doesn't necessarily need floating point at all. On the Amiga I wasn't using floating point, trust me, and my games had loooooong demo modes where my previously recorded input was simply coming from memory rather than from the joystick. It never missed one micron, all the times it was replayed.
This thread is really becoming a boring religious thread... with people clinging emotionally to their beliefs instead of wanting to consider simple and proven facts.
On single-player games I use delta-time + one or more steps per frame (as required e.g. by the collisions system) because it is senseless to compute more than the user can see anyway, it's only a waste of processing power, it's not good coding practice, nada-nada-nada. The steps can be linear (floating point is more comfortable/easy), or integer submultiples of a whole frame (e.g. two steps, one 1/8th ahead and the other 1/24th ahead.. that's how I was doing in integer math on the 68000 CPU).
Also, you get the benefit that once you have your physics working with any delta, you can be more sure it's bug free than if it's a tweaked cheat running at a fixed frequency, something that often masks malfunctions that then appear when you least expect them and you've already released your product. If numerical stability is a problem, don't use Euler^H^H^H^H^H^H^H^H^H^H.. then do the n necessary steps per frame and you're done, or improve your algorithm (use e.g. an RK4 integrator or some other solution).
I.e. on singleplayer I use basicly #3.
On multiplayer, networked games, I prefer a server-clients approach: the simulation runs on the server, and the clients are simply more or less stupid terminals sending their input to the server and displaying what the server sends back (i.e. what each client has in view, no more than this, to avoid bandwidth waste), interpolating the world on the client to simulate a higher frame rate. I.e. basicly #4.
This has proven the best compromise for networked games, but it adds undesired latency and other problems, so it is far from ideal as the singleplayer solution and shouldn't be used for that as well.
For why even properly coded delta-time sometimes chops too, check my long post: it's due to unforeseen delays in the renderer, and is common to all methods (which have other added problems as well that increase choppiness. Delta-time is the best of all for singleplaying, and I fail to see where it's supposed to be more complex than the other methods. Because you have to factor in one more variable vs fixed-time? Now come back and tell me if it's better this way or the fixed way once you start to deal with collisions and impulses anyway..).
I think most Delta Time coders started designing their game or engine tools with the idea of using Delta time from the start. Going from frame based or fixed logic speed and then upgrading to delta time based movement can be a bit daunting, but once you code it....correctly, it works flawlessly.
But otherwise, with games....if it works....it works. There are no "correct" ways to code anything, especially on the PC.
When I had to adapt my singleplayer code to multiplayer it was a zero-seconds job. I know it wouldn't have been the same the other way round. And what if you change your mind about the Hz to use? You can use #defines to optimize your code in case you want to fix the rate, but your source should always be as versatile as possible. Anyway, I'm not generally in favour of floating point, especially now that 64bit integer math is quite viable.
Yeah. And if you are using classes then typically you are typing one set of extra '*td' on movement code for example and then forget about it forever. Each object will work from your timer+deltatime algorithms and it doesn't need too much thinking about once those classes are up and running. In my experience it has been the case anyway and it's not too tricky if you designed it from the outset to work like that.
Yes, clearly the problem was rounding errors. Controlling the FPS was what let you influence the rounding error one way or another to your advantage.
No more juddery than playing the game at 60fps.
In point of fact, the Doom 3 rendering engine was capped at 60fps too, because there is no point in rendering redundant frames. Quoting from Carmack on the subject (indirectly through IGN):
Also, the online forum at Beyond3D discussed the high-framerate jumping in Quake 2 and 3.
I think we will always have some tendency to do this; our habits and ways of doing things become entrenched, and we are loath to leave them behind, esp. when we've found they worked in the past. It's often easier to just defend your own personal view / position than it is to be open and consider the merit of what other people are suggesting (when it differs from your own view).
I very much think (and it's been said repeatedly already) that there's no such thing as wrong or right, it's just a matter of which approach is the most appropriate for your game, and also for you personally as a coder.
I favour the non-delta-time approach, but then my background isn't in PC game dev (with all its non-standard frame rates). I may change my view on this further down the line.
I try to consider all the implications of using each approach, and for the type of game I'm making (I'm considering using method #2), I believe running my game logic 10x more often than I render my game would take CPU usage from 0.1% to 1% (if that) for computing my game state. I'm quite happy with this trade-off against not having to complicate my code by factoring in delta time.
If my game were more sophisticated, it might well be a different story. Been a good thread though! (but getting a bit boring now).
Spot on. It's interesting how all the methods have their pros and cons. One thing is for sure: all the methods will work if you implement them correctly, and there doesn't seem to be one way which is better than the rest no matter what. Just choose what's best for you.
Each one has its pros and its cons, true, but some chop more than others though.. this is unfortunately an undeniable fact.
Yeah, I know I could make it work, but I use so many little timers for widdly little animation details that it would be just more of a pain to set up that way. And since I have so many, I prefer the shorter way. I like using timers that trigger a bunch of discrete, single events as they count down.
I do have one animation effect which is wired directly to the computer's clock, though - lip syncing the dialogue.
I was going to mention the replay thing, but I guess everyone has their methods of getting around it. I like being able to just record the inputs and the random number seed and get a painless playback - except it's never quite painless because I have a bad habit of putting random number rolls into my rendering routine, which ruins everything...
I've not found any problems with inefficiency of over-processing logic loops, even though I was worried about it. On my (hardly state of the art) computer, my logic never seems to take up more than 1-2ms per frame, and that's with a fair bit going on. Of course, your results may vary, and I guess if I was concerned that I'd be doing a lot of number crunching I might go with a tweening method.
I guess delta does give a good degree of flexibility with the timing, too. I have a speed slider in my game, but it's limited to running in discrete steps of logic loops to rendered frames.
I'm using a variation on #4, and I'm not seeing any of the CONS you mentioned. The variation is that I have an events timer which allows events to be scheduled at any time (to the nearest millisecond). Just before rendering, the physics engine processes all events up to the current time. Rendering is completely separated from physics (including collision detection), and the game doesn't have a clue when a frame is being rendered.
For hit detection, I have an event which executes exactly* every 20 milliseconds**, and detects collisions very precisely and deterministically***. Game objects follow some kind of path (hand drawn, linear, accelerated, etc.) and take no CPU time, except when it comes time to render or run collision detection. For instance, when the collision event (or render) needs to know the position of an object, it just uses a formula like D = 1/2*A*T*T.
So to summarize: I do not have a master loop which requires all events to happen at a specific time and rendering is completely separated from physics.
* As far as the physics engine is concerned.
** I could just as easily use any timing by changing a single constant. I chose 20 milliseconds because (empirically) it was the longest interval at which all collisions were detected properly.
*** I can re-run the game to any point in time, and get the exact same game state.
My game currently doesn't need to bounce anything, but if it did, I would probably need to change collision detection to run every (oh say) 5 or 10 milliseconds. This would be very easy, and wouldn't affect the render quality at all.
Unfortunately, the standard Windows timer runs at a resolution of 5 milliseconds [edit - or 10 as Larry says] (depending on the version of Windows), which is not good enough to get perfectly smooth frames. When you combine that with 300Hz logic and a 60Hz refresh, I can understand why you're having problems.
The first thing you can try is changing your frame rate to [edit - 200Hz, or something more compatible with Windows]. A better approach would be to use the Windows high performance timer. There are two of them: 1) Multimedia timer (1ms resolution), and 2) Hardware timer (QueryPerformanceCounter).
[edit - removed, possibly incorrect statement]