Doesn't vsync mean the game will get rendered at the refresh rate? Does this mean that with vsync on, #1 has no disadvantages? Edit: oh wait, I get it.
I am beginning to think I wasted two days doing #4, and might go back to #2... Getting two things to bounce off each other properly at 25 Hz is a real pain!
Where I come from, incorrect behavior is unacceptable, and slight variations in behavior are always incorrect. How can you be sure that you won't get serious problems as the game simulation slowly desynchronises? For the record, I don't see any disadvantages to #4 at all, so long as the game simulation is reasonably fast and the internal frame rate isn't too low. I use an internal frame rate of 50 fps, which nicely sidesteps the artifacts caused by 25 fps updates.
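For reference, a minimal C++ sketch of the kind of fixed-timestep loop being described (50 Hz internal updates, rendering as often as the machine allows); updateGame() and renderGame() are hypothetical placeholders, not anyone's actual code:

#include <chrono>

void updateGame(double dt);   // hypothetical: advance the simulation by dt seconds
void renderGame();            // hypothetical: draw the current state

void runLoop()
{
    using clock = std::chrono::steady_clock;
    const double step = 1.0 / 50.0;     // 50 Hz internal frame rate
    double accumulator = 0.0;
    auto previous = clock::now();

    for (;;)
    {
        auto now = clock::now();
        accumulator += std::chrono::duration<double>(now - previous).count();
        previous = now;

        while (accumulator >= step)     // run 0..n fixed updates per rendered frame
        {
            updateGame(step);
            accumulator -= step;
        }
        renderGame();                   // render as fast as the hardware permits
    }
}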
That's always a pain no matter which timing strategy you pick. Well, that is... if you want to do it as accurately as possible. See if there is a collision in this timeslice. If so, move the simulation to that time. Handle the collision. Then move on from there until you run out of normalized time (don't forget to scale the normalized time fraction accordingly before subtracting it). The higher the speed of the moving objects, the lower the framerate will be. It's like that because you'll have more collisions per frame then (and each collision means sweeping everything against everything again). Well, it's pretty complicated and sorta weird, but it looks and feels awesome. In a breakout game, for example, you don't lose displacement as a collision side effect, which makes it a tad faster (especially that tunneling mass bounce stuff) and more realistic. It really feels damn solid. But usually it isn't worth the trouble, methinks.
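A rough sketch of that sweep-to-impact idea, just to make the shape of it concrete; findEarliestImpact(), advanceAll() and resolveCollision() are made-up placeholders, and real code would want an iteration cap:

struct Impact { bool hit; double t; int a, b; };  // t is a 0..1 fraction of the remaining slice

Impact findEarliestImpact(double remaining);      // hypothetical: sweep everything against everything
void   advanceAll(double dt);                     // hypothetical: integrate all objects forward by dt
void   resolveCollision(int a, int b);            // hypothetical: bounce, remove the brick, etc.

void stepWithSweeps(double frameTime)
{
    double remaining = frameTime;
    while (remaining > 1e-9)
    {
        Impact i = findEarliestImpact(remaining);
        if (!i.hit) { advanceAll(remaining); break; }

        double toImpact = i.t * remaining;   // scale the normalized fraction back to seconds
        advanceAll(toImpact);                // move the simulation to the moment of impact
        resolveCollision(i.a, i.b);
        remaining -= toImpact;               // then sweep again with what's left of the slice
    }
}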
I attempt to change the display mode in fullscreen to 60Hz. If that fails, or it's windowed mode, I don't use vsync. Either way with vsync on or off I still use a timer to cap the framerate - there is no actual reliable way of knowing if vsync is indeed on. Cas
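For illustration, a hedged Win32/OpenGL sketch of that approach. The swap-interval part assumes the WGL_EXT_swap_control extension and a current GL context; everything outside the standard Win32/WGL calls is a made-up name:

#include <windows.h>          // link opengl32.lib for wglGetProcAddress

typedef BOOL (WINAPI *SwapIntervalFn)(int interval);

bool tryForce60Hz()
{
    DEVMODE dm = {};
    dm.dmSize = sizeof(dm);
    if (!EnumDisplaySettings(NULL, ENUM_CURRENT_SETTINGS, &dm))
        return false;

    dm.dmDisplayFrequency = 60;
    dm.dmFields |= DM_DISPLAYFREQUENCY;
    return ChangeDisplaySettings(&dm, CDS_FULLSCREEN) == DISP_CHANGE_SUCCESSFUL;
}

void chooseSwapMode(bool fullscreen)     // call with a GL context already current
{
    bool at60 = fullscreen && tryForce60Hz();
    SwapIntervalFn wglSwapIntervalEXT =
        (SwapIntervalFn)wglGetProcAddress("wglSwapIntervalEXT");
    if (wglSwapIntervalEXT)
        wglSwapIntervalEXT(at60 ? 1 : 0);  // request vsync only when we know we're at 60 Hz
    // Either way, keep the timer-based framerate cap in the main loop.
}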
I've been using #2, running at about 300 Hz. The logic doesn't really have much overhead compared to the rendering, and it means that it copes with different refresh rates better. However, I've never been able to get it properly smooth. Part of the problem, I think, is that using Blitz I only have access to a ms timer, and that doesn't seem quite good enough. I'm torn between locking the framerate (which improves things on fast computers) and leaving it floating (for computers that need to frameskip). Maybe I should do both, and have it intelligently select the appropriate one... I've tried using an averaged framerate too, but that has weird side effects. Every time I fiddle with this I end up giving up after a while, with the intention of coming back to it later. I hate timing code. I wish I had a fixed hardware platform to lock down the framerate to.
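To see why a millisecond timer is marginal at 300 Hz: one logic step is 1000/300 ≈ 3.33 ms, so rendered frames alternate between 3 and 4 logic steps and a 1 ms timer error is roughly a third of a step. A small illustrative sketch of counting the steps owed with integer math so the error doesn't accumulate (stepLogic() is a placeholder):

#include <windows.h>
#include <mmsystem.h>              // timeGetTime(); link winmm.lib

void stepLogic();                  // hypothetical fixed 1/300 s update

void pumpLogic(DWORD startMs)      // startMs = timeGetTime() taken once at game start
{
    static long long stepsRun = 0;
    long long elapsedMs = (long long)(DWORD)(timeGetTime() - startMs);
    long long stepsOwed = elapsedMs * 300 / 1000;   // total 300 Hz steps due so far

    while (stepsRun < stepsOwed)   // usually 3 or 4 per rendered frame
    {
        stepLogic();
        ++stepsRun;
    }
}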
I've found there's a little chop, but that it's generally pretty smooth and not that noticeable unless you're really looking for it. I don't think it makes a difference for most, if any, players. I'd agree it's not perfect on the chop side, but I really like what it does internally code-wise. Very clean and easy to work with.
This is basically Fabio's suggestion from above (#3 + 1 or more simulation updates), correct? Not challenging you here, just making sure I understand correctly. Another thing about latency to consider is that if you are rendering to an OpenGL/D3D device, there are inherent latencies in the video driver as it buffers frames. If you are buffering 4 simulation frames before posting a rendering update, it could be up to another 3 rendering frames before the results are seen onscreen. Something to consider if your renderer isn't clearly outpacing your simulation.
Yep. I've started using #3 with my last two games (one still in development). My golf 3D game uses #3 and my physics and collision have worked just fine. And anytime I've had oddities with collision it was due to other reasons, not the timing. Sure it was a bit tricky setting up with my physics code, but I'm very happy with the results. My game runs silky smooth on fast computers and remains very playable on slower computers. For example, if a player on a fast computer putts the ball with the same power as a player on a slow computer, the ball will go the exact same distance and take the exact same time to come to a rest. The only difference is that on a fast computer the ball will move much smoother.

I'm using Shockwave (which has a fairly old 3D engine) and mostly do 3D games that are playable on the web, so using fixed frames is a kiss of death. I used a fixed timer with my first game and I got tons of reports of the game running sluggish even on mid-range systems. If the game couldn't keep up with the target frame rate, the game would basically run in slow motion and be unplayable (or unfun). Nowadays, since I've switched to using a delta timer, the actual game speed is the same on fast and slow computers. I also put maximum value limits on object movements to prevent huge skips if the computer stutters for a brief moment.

Collision has never been a problem because I do distance checks BEFORE I actually move the object. If a wall is 9 units away, I tabulate the move increment of the object, and if the increment >= 9 units I know collision will occur with the current update, so I don't add the increment. Instead, I may place the object right next to the wall. However, if the increment is < 9 units I can add the increment to the object. It doesn't matter if the increment is 5 or 2,343 units, the collision will be detected properly.

If you're doing games with simple logic code and low/no 3D requirements, you can pretty much try any of the options Mike B. brought up and be okay. But I think if you're doing a fairly demanding 3D game and want to run at a fairly consistent game speed on various system specs, then I highly recommend #3 and would avoid #1. I can't really comment on the other timing methods since I don't have much experience with them.
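A tiny sketch of that distance-check-before-move idea with a capped delta; the 100 ms cap, the 1D "wall" and the small epsilon are illustrative stand-ins, not the actual Shockwave code:

float capDelta(float dt)
{
    const float maxDt = 0.1f;              // assumed cap to prevent huge skips after a stutter
    return dt > maxDt ? maxDt : dt;
}

void moveTowardWall(float &position, float speed, float dt, float wallDistance)
{
    float increment = speed * capDelta(dt);

    if (increment >= wallDistance)         // collision this update, whether the increment is 5 or 2,343 units
        position += wallDistance - 0.001f; // park the object right next to the wall instead
    else
        position += increment;             // safe to take the full step
}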
Indeed, and that not only increases latency, but also worsens the jerkiness when that "unexpected" renderer delay I described in my example happens. I don't like to lose control of the number of buffers, which is why in my OGL code (dunno if D3D is the same, I've never got my hands dirty with it), before entering the main loop, I issue many glSwapBuffers() calls in sequence (say ten; in any case more than the maximum number of hidden buffers you can expect), clearing the screen to black, so the buffer queue will be empty when I finally enter the main rendering loop. In more than one case this really *made* the difference in getting perfect smoothness.

Another thing you have to be sensitive about is Windows multitasking. After lots of experimenting with priority classes and so on, I found that the best result (if you still want input and audio working on all PCs) is to set +1 priority on your rendering thread, the normal priority class (yes, live with it), and the multimedia timer resolution to 1 ms. This makes the Windows scheduler work on a 1 ms resolution too, making interference from other processes less critical, even though your process still has a normal priority class. Do not expect everything to work on all PCs if you set a higher priority class for your process or anything above +1 for the thread... and this is coming from someone who would use REALTIME_PRIORITY_CLASS for his games if it were possible.
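For what it's worth, here's roughly what that start-up sequence looks like in Win32/OpenGL terms. A sketch only: SwapBuffers() stands in for whatever the glSwapBuffers() wrapper above maps to, and timeEndPeriod(1) should be called again on shutdown:

#include <windows.h>
#include <mmsystem.h>        // timeBeginPeriod(); link winmm.lib
#include <GL/gl.h>           // link opengl32.lib

void prepareForMainLoop(HDC hdc)   // GL context assumed current on hdc
{
    // Drain any hidden driver frame queue: clear-and-swap more times than there
    // can plausibly be buffered frames.
    for (int i = 0; i < 10; ++i)
    {
        glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT);
        SwapBuffers(hdc);
    }

    // Normal priority class, +1 on the rendering thread, 1 ms scheduler granularity.
    SetPriorityClass(GetCurrentProcess(), NORMAL_PRIORITY_CLASS);
    SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_ABOVE_NORMAL);
    timeBeginPeriod(1);      // remember the matching timeEndPeriod(1) on exit
}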
3. Delta Value Timing. My preferred choice.

"Actually #3 is by far the least reliable method, and should not be used, ever."

With all due respect, this just isn't true.

"1. You are running on a very slow computer, or experience a sudden slow-down on a fast computer. As your movement delta is multiplied by a large time value, your system loses numerical stability. Depending on your collision detection system, objects may even move through other objects."

I've seen this same answer posted over and over. Just cap the delta so it never goes past a pre-determined value. Problem 100% solved.

"2. You are running on a very fast computer. As delta time tends toward zero, movement vectors lose accuracy, and objects may stop moving at all."

Again, simply make sure to cap your delta values so they never go past a min/max setting, and then you GAIN all the benefits of time-based coding: newer computers can render at high frame rates and slower computers at coarser frame rates. No need for tweening, processing the logic in one thread and rendering in another, etc. I also take it one step further by averaging the delta over the last 10 or so rendered frames, which helps smooth out the movement even more.
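A minimal sketch of that clamp-plus-average approach; the 1 ms/100 ms clamp and the 10-frame window are illustrative numbers, not anyone's shipping values:

#include <algorithm>

class DeltaTimer
{
public:
    // Feed in the raw frame time in seconds; get back a clamped, smoothed delta.
    float next(float rawDelta)
    {
        float d = std::min(std::max(rawDelta, 0.001f), 0.1f);  // clamp to 1 ms .. 100 ms
        history[index] = d;
        index = (index + 1) % 10;
        float sum = 0.0f;
        for (float h : history) sum += h;
        return sum / 10.0f;                                    // average of the last 10 frames
    }
private:
    float history[10] = { 0.016f, 0.016f, 0.016f, 0.016f, 0.016f,
                          0.016f, 0.016f, 0.016f, 0.016f, 0.016f };  // seeded near 60 fps
    int index = 0;
};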
#3 here, and always has been, but then I do mostly 3D stuff and wouldn't want to try the others or combinations just yet (until I'm using more advanced physics stuff too, when it may become a problem). #3 isn't perfect, but as DXgame says, you cap the min/max and average it (someone on here told me about averaging last year and it works nicely!) and it seems to work OK. Sometimes you get the odd mini-jerk or tiny stutter, but that's Windows + 3D, I guess. I also echo Anthony's statement: "I hate timing code. I wish I had a fixed hardware platform to lock down the framerate to." Life would be so much simpler without timing issues and things like QPC problems on HT CPUs (laptops etc.).
So make your own! ChangeDisplaySettings to 60Hz, run both logic and render at 60Hz. No more hair pulling, no complex code, and crucially, no difference to the users. I just can't help but wonder why programmers make work for themselves. If the users don't see any difference, why do it any other way? Cas
I'm not getting at you (as if I would dare) but I would *never* alter the refresh rate or rely on a fixed refresh. Sure it works (for you), but it's just something I wouldn't feel right doing. And if your game can't keep up with 60 then you get slowdown anyway, so delta timing would still be needed. That, and the fact that I never play a game at 60 Hz for any length of time because I notice the flicker - I want at least an 85 Hz refresh when I play.
You don't like consoles then? (And if my games can't keep up at 60Hz, they need adjusting until they can!) Cas
(edit) > If my TV had visible flicker like my monitor does at 60 Hz then I wouldn't like playing consoles. (A more succinct way of saying it.) I just got used to having higher/more comfy refresh rates in 3D (so it probably doesn't apply to your 2D stuff as much, if at all, I suppose). LCDs with slow response times are one thing (at their "60 Hz"), but on a large CRT that can go up to 100 Hz or more it seems like a backwards step to force the user to run at the lowest common denominator. I don't like punishing players. Along those lines, it's the same with capping frames at 30 fps or something. Those who have decent systems and can get higher framerates and run at whatever refresh rate they want should be able to. Though I realise of course everyone thinks differently and it's just my take on it.
From the perspective of selling games, it makes not one iota of difference, it would seem. So why punish yourself grokking all those crazy newfangled techniques when the old fashioned way still works and still sells exactly the same as any other technique? Cas
For PC, locking to 60Hz would be okay I guess. Presuming that it can be done reliably, which I guess it can if you've not had any trouble. I wouldn't like to port the game to a PAL system though, so anyone hoping to get their game on Xbox Live Arcade should be careful.
In ye olden days, running on PAL just meant the game was 20% slower. We never noticed over here... You can't reliably get vsync (and you generally can't get it at all in windowed mode), so you're really always relying on a hi-res timer frame rate cap in addition. I use the winmm multimedia timer, accurate to 1 ms, for this purpose (as it is unaffected by SpeedStep and multicores) and it's accurate enough. Cas
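Something like this is what a winmm-based cap tends to look like; a sketch, not Cas's actual code, and it assumes timeBeginPeriod(1) has already been called so Sleep(1) is honoured:

#include <windows.h>
#include <mmsystem.h>     // timeGetTime(); link winmm.lib

void capFrameRate(DWORD frameMs)        // e.g. 16 or 17 for roughly 60 fps
{
    static DWORD nextFrame = timeGetTime();

    nextFrame += frameMs;
    while ((int)(nextFrame - timeGetTime()) > 0)   // wrap-safe signed comparison
        Sleep(1);                       // 1 ms granularity thanks to timeBeginPeriod(1)

    if ((int)(timeGetTime() - nextFrame) > 100)
        nextFrame = timeGetTime();      // fell way behind (alt-tab etc.): resynchronise
}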