Probably not. It might be a problem, though, if your game were too slow to run on any machine of the AGP 2x era or older...
I am wondering how common AGP 2x is. Do you lot come across it often?
I ask because my current game, which is in development, bottlenecks on AGP 2x. I am unable to shift as much (graphical) data as I wish over the old AGP 2x slots.
If I neglected to support this speed of AGP slot, would it affect my sales by much?
Yeah, that would be the problem of running my game on an AGP 2x machine: it would run too slowly to be playable.
AGP 2X is very common, at least going by the evidence I've seen.
Retro64 Computer Games
You must be doing a lot of mesh manipulation or uploading to the card for this to be any kind of issue. I would definitely worry if it bottlenecks on any form of AGP.
Another thing: often you will find AGP 4X and higher running at only 2X (I recall a lot of systems that did this for stability).
Instead of looking at AGP, look at your target audience. For casual, I would put the baseline at 600 MHz and a GeForce2 MX (and that's generous).
Last edited by Robert Cummings; 04-19-2005 at 06:13 AM.
I don't have any stats (who has stats on AGP bus speeds? I didn't think it was collectible via query...?)
But given the length of time that AGP 2X was the only thing available (before AGP 4X came out), it seems logical there would be a lot of them out there.
More importantly, what is your target demographic for your game? Is it a casual-audience game, or more hardcore/mainstream? If it's casual, you may encounter more people with AGP 2X, since casual gamers don't have as much reason to upgrade their systems. If it's hardcore/mainstream, I doubt you'll encounter many AGP 2X systems: AGP 2X is old, and most hardcore gamers will have upgraded several times since the AGP 2X standard was top of the line...
Just my .02c
Xenopi Studios, Inc.
Gaming for a Better World!
At the moment I'm trying to make the game run on the fewest resources possible without loss of gameplay; no harm in that, I say.
Fewer resources = more computers that can run the game = bigger potential market for the game.
I was trying to figure out whether it's worth spending time developing a special way of doing things for AGP 2x (in my game), or concentrating on something else (because not many 2x machines are still in use).
If you're talking about "casual gamers", the hardware is all over the map, and the older the hardware you can support the larger your possible audience is. I don't think anybody has any hard data on the shape of the curve; my arbitrary "minimum supported hardware" requires an nVidia TNT or better, and I know people who shoot both higher and lower than that. My advice is to strike a cautious balance wherever you can between "pretty" and "runs on old hardware", erring on the side of "runs on old hardware" wherever possible.
If you're talking about "hardcore gamers", I can refer you to the current results of Valve's automated survey. This doesn't have direct values on AGP bus speed, but you could probably infer it from the class of video card involved (e.g. if they have a GF4 it probably isn't running at 1xAGP).
Generally speaking, if you are AGP bound that is bad. I would look into static resources that are unintentionally categorized as dynamic, or perhaps revisit those art assets and look for size optimizations.
Of course you may have a unique situation... But in my experience there's very little reason to be bus limited on the PC (consoles are a different story).
vjvj is right. Basically, you are doing something wrong. Are you pushing more graphics and/or frames than, say, Quake 3 or Unreal Tourney? I think you're barking up the wrong tree.
I don't see how AGP speed could be a problem unless you are pushing an insane number of polys per frame. Quake 2 can run on PCI no problem, and Quake 3 runs on AGP 1x without problems. I'm not sure how many polygons they render per frame, but I would imagine it's around 1-10 thousand.
Also, for the casual game audience, you'll be lucky if they even have AGP.
Last edited by dima; 04-19-2005 at 11:48 AM.
AGP speed only comes into play when you are actually requesting data from, or uploading data to, the graphics card. This is common if you are doing deforming meshes, or if you run out of texture memory.
I would presume that people avoid reading from video memory as much as possible. System RAM is probably the best place for dynamic geometry, with dynamic buffers used to upload it to the hardware; that's how most games do it. Static meshes sit in video memory in static vertex buffers, while dynamic geometry goes through dynamic buffers in system RAM. It's also interesting to pair a static vertex buffer with a dynamic index buffer, so you can draw selective pieces of a mesh really fast.
I think I need to fill you lot in a little...
It's a large image held in system memory that I wish to display part of, fast.
I don't wish to hold the image on the graphics card, because it's approx. 12 MB to 32 MB, putting it out of reach of many computers out there (especially those with 8 MB, 16 MB and probably even 32 MB of video memory). Also, as many computers come with loads of system memory compared to graphics memory, system memory looks like the ideal place for this image! On top of that, once all the sprites and effects are added, I could potentially be using a lot more graphics memory than I really need to.
As you can see/imagine, drawing a screen-sized image across the AGP bus every frame is not wise (even on AGP 8x I have only managed 70 fps, on a 9800 Pro with a 2.4 GHz CPU).
However, I do have a solution: use the graphics memory as a cache for the image. By splitting the large image into many smaller tiles, I can copy one tile across per frame to wherever it is needed, depending on the scrolling direction. I can also discard tiles that are no longer needed, freeing up memory and keeping the requirements low (maybe 2-3 MB max).
The question about AGP speed was more about what size of tile I can get away with per frame. Tests indicate that 64x64 is the ideal size for my game on AGP 2x.
Once this is implemented, I can keep the specs low, e.g. graphics memory: 8 MB+.
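A rough back-of-envelope sketch (plain arithmetic, assuming 32-bit colour and a 640x480 screen at 60 fps; these are illustrative figures, not measurements from the game) shows why streaming one small tile per frame is so much cheaper than pushing the whole screen across the bus:

```python
BYTES_PER_PIXEL = 4  # 32-bit RGBA

# Uploading a full 640x480 screen across the bus every frame:
full_screen = 640 * 480 * BYTES_PER_PIXEL      # 1,228,800 bytes (~1.2 MB)
print(full_screen * 60 / 2**20)                # ~70 MB/s sustained at 60 fps

# Streaming a single 64x64 cache tile per frame instead:
tile = 64 * 64 * BYTES_PER_PIXEL               # 16,384 bytes (16 KB)
print(tile * 60 / 2**20)                       # under 1 MB/s at 60 fps
```

That two-orders-of-magnitude gap is the whole point of the cache: the steady-state upload traffic becomes trivial even for AGP 2x.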
Well, that's all said and done - I still wish to know how common AGP 2x is!
Last edited by Shagwana; 04-19-2005 at 01:18 PM.
Hmm, 3D hardware is best used with geometry and small textures, but if you really, absolutely must use huge images, I guess you have to split them up. Don't forget that texture sizes have constraints: for maximum performance and compatibility, textures should be kept at 256x256 or 512x512; you can't just throw an 800x600 image at the card as a texture and expect it to work everywhere. I guess your method of streaming the tiles for the background is the only good option, as uploading big images from system RAM would be slow and keeping everything in video RAM is a waste. So if you have to, go for it and try to get streaming to work. I guess 64x64 tiles will work just fine, maybe 128x128; it also depends on the scrolling speed and direction. What happens if the camera just jumps to a different location? The best way is to use 'reusable' tiles and make up big backgrounds from only a few tiles that always sit in video memory, but that wouldn't be as detailed. There are tricks, though, like layering tiles and adding effects on top, and just having a really good tileset would help a lot in assembling large backgrounds.
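As a rough illustration of the power-of-two point above, here is a minimal sketch (the `is_pow2` and `tile_grid` helpers are hypothetical, not from anyone's engine) that works out the grid of 256x256 tiles needed to cover an arbitrary image, with the edge tiles padded up to the power-of-two size:

```python
def is_pow2(n):
    """True if n is a power of two (a common texture-size requirement)."""
    return n > 0 and (n & (n - 1)) == 0

def tile_grid(width, height, tile=256):
    """How many tile x tile textures cover a width x height image."""
    assert is_pow2(tile), "tile edge should be a power of two"
    cols = -(-width // tile)   # ceiling division
    rows = -(-height // tile)
    return cols, rows

# An 800x600 background needs a 4x3 grid of 256x256 tiles;
# the right and bottom edge tiles are partly padding.
print(tile_grid(800, 600))  # (4, 3)
```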
Definitely consider what dima suggested. If you are using this large image as a large tileset, chances are you have massive unnecessary duplication going on. E.g. if you have a plain brick tile, then a brick tile with a window on it, then a brick tile with some dirt on it, etc. etc. etc. don't store them all as separate tiles. Store the base brick image as one tile and the window/dirt/etc. images as separate tiles (optionally compressed/scaled) and render tiles in multiple passes.
By your numbers, the worst case (32 MB) is about 2,000 64x64 tiles (assuming 32-bit RGBA). I'd be shocked if those were all wildly unique (and if they were, I'd be expecting a lot from the visuals in your game! )
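For the curious, that worst-case figure works out like this (plain arithmetic from the numbers quoted in the thread):

```python
# Worst case from the thread: a 32 MB source image cut into 64x64 RGBA tiles.
image_bytes = 32 * 2**20          # 32 MB
tile_bytes = 64 * 64 * 4          # 16 KB per 32-bit tile
print(image_bytes // tile_bytes)  # 2048 tiles - roughly the 2,000 quoted above
```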
Edit: Sorry, I don't know how common AGP 2x is.
You might also want to consider scaling your big image down to 50% or 25% in medium and low detail modes, and moving around correspondingly smaller blocks (if you started at 256x256, then 128x128 and 64x64) for lower-performing PCs. Having a really crappy PC (AGP 1x, or even still PCI) handy would do wonders for testing.
Using repeating tile blocks is not an option; this large image is a pre-drawn race track. Repetition does not exist in the image (well, not enough to justify repeating tiles to save space).
I would rather not lose the ability to edit the race track in a paint package, either.
I'm afraid I cannot comment on how common AGP 2x is... sorry.
However I can hopefully contribute to the tile concept:
Since repeating blocks are not an option and there is a chance that you'll be unable to store all the tiles in video memory, how about keeping track of tiles in some form of most-recently-used list? You could then keep just the top N tiles in this list resident - as many as fit in video memory - and swap tiles in and out as they get promoted into or demoted out of the top N positions.
The performance of this approach will be affected by the track layout: if it is circular or generally convex, you'll likely be swapping in tiles from the previous lap almost all the time; if it is very winding, the swap-in rate will be lower. In any case, you should generally only need to swap in as many tiles as are required to cover the width or height of the screen - that is, 10 64x64 tiles at worst on a 640x480 screen.
The number of tiles allowed in video ram, N, could be determined at run time. The best case scenario is that N = the total number of tiles, that is, no swap ins/outs required.
P.S. This is just an idea.. I haven't tested it myself, so take it with a slight pinch of salt
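The most-recently-used idea above can be sketched as a small LRU cache (a language-agnostic Python sketch; `upload` stands in for whatever texture-creation call the engine actually uses, and the names are mine, not anyone's real code):

```python
from collections import OrderedDict

class TileCache:
    """Keep at most `capacity` tiles resident; evict the least-recently-used."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.resident = OrderedDict()   # tile_id -> texture handle

    def touch(self, tile_id, upload):
        # Promote if already resident, otherwise swap in (evicting the LRU tile).
        if tile_id in self.resident:
            self.resident.move_to_end(tile_id)
        else:
            if len(self.resident) >= self.capacity:
                self.resident.popitem(last=False)   # demote the oldest tile
            self.resident[tile_id] = upload(tile_id)
        return self.resident[tile_id]

cache = TileCache(capacity=3)
for t in [1, 2, 3, 1, 4]:        # visiting tile 4 evicts tile 2, the LRU entry
    cache.touch(t, upload=lambda tid: f"tex{tid}")
print(list(cache.resident))      # [3, 1, 4]
```

Picking N (the capacity) at runtime from the amount of free video memory, as suggested above, fits this structure naturally.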
After some work, I have managed to get a bigger-than-graphics-memory image scrolling on an old P3-500, 300 MB RAM, AGP 2x, TNT1 setup at 120+ fps in 32-bit, spewed straight from system memory!
Just need to get my mitts on a PCI setup now to see how well it performs there.
OR, you could model the track in 3D using good old polygons and small textures , but I'm glad you got the big images to work.
What was your solution Shag? Splitting the image?
The GeForce2 MX (100/200 chipset) is an AGP 2x card if I am not mistaken, and I think that particular card is quite common, with 32 MB of video memory. The GeForce4 MX is based on the GeForce2 (400/440 chipset) but is AGP 4x with 32 or 64 MB of video memory. This also depends on the motherboard: if the board has an AGP 2x slot, you can fit an AGP 8x card if you want, but it will only run at 2x speed.
IMO, if you are targeting the casual gamer, you should steer clear of being AGP bound.
The source image (the big one in system memory) was split into 64x64 tiles. Then a viewport variable was set up, holding the screen's position (top-left). Now, each frame I flag all tiles in graphics memory for deletion, then re-flag all tiles on screen, plus a two-tile border around the screen, for caching.
Each frame I seek out a tile flagged for deletion (if there is one) and delete it from graphics memory, then seek out a tile flagged for loading (if there is one) and load it.
The graphics card now holds a collection of cached tiles to display to the screen. As long as the screen doesn't scroll too fast, an uncached area of the screen will never be displayed.
Nice, simple and quite elegant!
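Shagwana's flag-and-swap scheme can be sketched roughly like this (a Python sketch assuming a 640x480 screen, 64x64 tiles and a two-tile border, per the posts above; the function names are mine, not from the actual game):

```python
def visible_tiles(viewport_x, viewport_y, screen_w=640, screen_h=480,
                  tile=64, border=2):
    """Tile coordinates covering the screen plus a `border`-tile margin."""
    x0 = viewport_x // tile - border
    y0 = viewport_y // tile - border
    x1 = (viewport_x + screen_w) // tile + border
    y1 = (viewport_y + screen_h) // tile + border
    return {(tx, ty) for tx in range(x0, x1 + 1) for ty in range(y0, y1 + 1)}

def step(cached, viewport_x, viewport_y):
    """One frame: delete at most one stale tile, upload at most one new one."""
    wanted = visible_tiles(viewport_x, viewport_y)
    stale = cached - wanted       # the tiles flagged for deletion
    missing = wanted - cached     # the tiles flagged for caching
    if stale:
        cached.discard(next(iter(stale)))   # free one tile of video memory
    if missing:
        cached.add(next(iter(missing)))     # one small upload across AGP
    return cached

# The cache fills gradually, one 16 KB upload per frame, while scrolling.
resident = set()
for frame in range(300):                    # scroll right at 2 px per frame
    resident = step(resident, frame * 2, 0)
```

Capping the work at one delete and one upload per frame is what keeps the AGP traffic bounded; the two-tile border is the slack that hides the latency, provided the scroll speed stays below one tile per border's worth of frames.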