Discussion in 'Game Development (Technical)' started by sofakng, Feb 5, 2009.
The next GTA for handhelds is, in fact, 2D.
Ahhhh, voxels. I wonder what a modern computer could do with a voxel engine (one that uses the GPU).
I've uploaded a better example:
In the inventory screen, note the smooth rotation of the objects at the top right of the screen (the toy dog, take-out box, etc.). Camtasia's recording unfortunately introduced some hitching, but you should still be able to get the idea.
Yes, you and Knau are right; it's a voxel rendering system accelerated with DirectDraw. The objects are stored as a bunch of horizontal slabs or slices, and DirectDraw just renders the objects as a series of thin, horizontal rectangles. This allows for free rotation in the Y axis without having to pre-render every single viewpoint (which would have had staggering storage requirements; remember kids, this game came out 11 years ago).
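A minimal software sketch of that slab trick, assuming a toy object made up for illustration (the real game would hand each per-slice blit to DirectDraw as a hardware-accelerated rectangle rather than loop over pixels): each horizontal slice is a 2D image, rotating the object about Y is just a 2D rotation of every slice, and stacking the slices with a one-pixel vertical offset gives the 3/4 view.

```python
import math

# A hypothetical voxel object: identical 4x4 slices stacked 8 high.
SLICE = [
    [0, 1, 1, 0],
    [1, 1, 1, 1],
    [1, 1, 1, 1],
    [0, 1, 1, 0],
]
slices = [SLICE] * 8

def rotate_slice(sl, angle):
    """Rotate one slice image about its centre (nearest-neighbour sampling)."""
    h, w = len(sl), len(sl[0])
    out = [[0] * w for _ in range(h)]
    ca, sa = math.cos(angle), math.sin(angle)
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    for y in range(h):
        for x in range(w):
            # Inverse mapping: sample the source at the un-rotated position.
            u = ca * (x - cx) + sa * (y - cy) + cx
            v = -sa * (x - cx) + ca * (y - cy) + cy
            ui, vi = int(round(u)), int(round(v))
            if 0 <= vi < h and 0 <= ui < w:
                out[y][x] = sl[vi][ui]
    return out

def render(slices, angle, screen_h=16):
    """Blit each rotated slice one pixel higher than the last,
    giving a 3/4 view that can spin freely about the Y axis."""
    w = len(slices[0][0])
    screen = [[0] * w for _ in range(screen_h)]
    base = screen_h - len(slices[0])
    for k, sl in enumerate(slices):          # bottom slice first
        rot = rotate_slice(sl, angle)
        for y in range(len(rot)):
            for x in range(w):
                if rot[y][x]:
                    screen[base + y - k][x] = rot[y][x]
    return screen
```

Since every slice goes through the same 2D transform, the per-slice work maps directly onto 2D blitter hardware, which is exactly why the technique was such a good fit for DirectDraw.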
Anyway, just thought you guys might be interested in seeing a clever use of 2D acceleration. IIRC, this was back in the DirectX 3.0 days.
This can certainly be true, and there are lots of 2D games that can attest to this, but it comes with the caveats of rapidly increasing storage requirements and massively limited flexibility. So this limits the kinds of games you can make, and you have to make tough choices like "do we have the space to pre-render every single lighting scenario, or do we give up and go with incorrect lighting, or do we forget lighting altogether and go stylized?". Blade Runner is a perfect example of the kind of compromises that must be made.
That said, it will be interesting to see where 3D tech goes. It's true that 2D took the path of vector graphics->pre-rendered sprites, but I'm not sure 3D will make the same transition. Lighting calculation is still one of the biggest problems in modern 3D real-time rendering, and pre-caching everything kinda runs contrary to that goal. And again, the storage requirements for pre-caching every single animation + animation transition are an order of magnitude higher than what we are doing now.
One of my side projects is a voxel rendering engine in CUDA that mixes static voxel scenes of (theoretically) unlimited detail (using an octree-like format that stores an approximation of the "next detail level" in each node and only loads that level from disk if you're close enough to care) with polygon models rendered on the 3D card. So far I've only got some dots on the screen, though, and it has been a while (more than a month, basically) since I touched that code. The basic idea is actually simple, and I would be surprised if there isn't already some CPU-based implementation of such a thing.
The hard part is making the world editor. My current idea is to use a polygon mesh as a base and use sculpt-like tools to add the extra detail (probably similar to tools like ZBrush) while hovering around in the 3D world.
However, this is just a test and I don't believe I'll extend it much. On the other hand, I don't believe polygons will be around forever; they're too limiting, and with the programmability level of today's GPUs it won't be long before you see other forms of rendering taking place in pixel shaders.
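The node scheme described above might look something like this sketch (the class layout, the colour stand-in for the detail approximation, and the in-memory "disk" are all my own assumptions, not the poster's format): each node carries a cheap approximation of its subtree, and traversal only loads children when the node's projected size says the viewer is close enough to care.

```python
import math

# Hypothetical sparse-octree node: keeps a cheap approximation of its
# subtree (here just a colour) plus the disk offsets of its children,
# which are loaded lazily on first descent.
class OctreeNode:
    def __init__(self, center, size, approx, child_offsets=()):
        self.center = center                  # (x, y, z) cube centre
        self.size = size                      # cube edge length
        self.approx = approx                  # "next detail level" stand-in
        self.child_offsets = list(child_offsets)
        self.children = None                  # not loaded yet

def visible_nodes(node, eye, detail_k, read_node, out):
    """Collect nodes to draw: descend only while the node's projected
    size (size / distance) says the viewer is close enough to care."""
    d = max(math.dist(node.center, eye), 1e-6)
    if node.size / d > detail_k and node.child_offsets:
        if node.children is None:             # lazy "disk" load
            node.children = [read_node(off) for off in node.child_offsets]
        for child in node.children:
            visible_nodes(child, eye, detail_k, read_node, out)
    else:
        out.append(node)                      # draw the approximation

# Toy usage with a dict standing in for the disk: one root, one child.
child = OctreeNode((0.5, 0.5, 0.5), 1.0, "red")
root = OctreeNode((0.0, 0.0, 0.0), 2.0, "grey", child_offsets=[0])
disk = {0: child}
far, near = [], []
visible_nodes(root, (0.0, 0.0, 10.0), 0.5, disk.__getitem__, far)   # far away
visible_nodes(root, (0.0, 0.0, 2.0), 0.5, disk.__getitem__, near)   # close up
```

From far away only the root's coarse approximation is drawn; move closer and the finer child level gets pulled in from "disk".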
I was thinking more that the artist would define the low poly shapes and that the gpu would tesselate this based on procedural information for the detail - see DX11 specs for further reading.
For this kind of stuff you should use curves, not polygons. I'm sure it's totally possible to render curved surfaces using only the GPU, and in fact it's one of the features I'm planning to put in my main 3D engine at some point.
In fact, it's totally possible to do any kind of non-polygon rendering these days. The problem with polygons is that they've been around so long and are so popular that most people know them far better than anything else, and every artist tool in the world works mostly with polygons. Some people can't even think beyond polygons. So, unfortunately, using anything other than polygon rendering is much more work than just adding the feature to your engine. Which is a shame, since many elements are better suited to non-polygon rendering (water surfaces are better done with non-solid voxels, for example; pipes, most parts of a head, some futuristic rooms, etc. are better done with curves; a great deal of architecture can be done with CSG volumes; and so on). But usually these elements are just converted to polygons, with "polygon-y" results (you need a shitload of polygons, for example, to keep someone's head from having corners, and most of them are wasted just avoiding corners in the silhouette).
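To make the curve idea concrete, here is a tiny sketch of the kind of evaluation a shader would do for one common parameterization, a bicubic Bezier patch (the control values are made up; a GPU version would run this per vertex or per pixel rather than in Python):

```python
def bezier3(p0, p1, p2, p3, t):
    """De Casteljau evaluation of a cubic Bezier at parameter t."""
    a = [p0 + (p1 - p0) * t, p1 + (p2 - p1) * t, p2 + (p3 - p2) * t]
    b = [a[0] + (a[1] - a[0]) * t, a[1] + (a[2] - a[1]) * t]
    return b[0] + (b[1] - b[0]) * t

def patch_point(ctrl, u, v):
    """Evaluate one coordinate of a 4x4 bicubic patch at (u, v):
    collapse each row along u, then the resulting column along v."""
    row = [bezier3(*ctrl[i], u) for i in range(4)]
    return bezier3(*row, v)

# 16 control heights fully describe a smooth surface: a curved ridge
# here, versus the "shitload" of flat triangles needed to fake one.
ridge = [[0.0, 1.0, 1.0, 0.0]] * 4
height_at_centre = patch_point(ridge, 0.5, 0.5)
```

The silhouette stays smooth at any zoom level because the surface is evaluated from 16 control points on demand, which is exactly what tessellating to a fixed polygon count can't give you.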
Hey all, I can understand what's being said here... I was talking about creating something like a GTA on consoles; for instance, trying to create, in 2D, the same game that GTA4 is on the 360. That's all I was referring to. And while a GTA on mobile may be 2D, the handheld platforms (DS and PSP) are a completely different story: they're 3D. I'm sure mobile phones will use 2D, though, but it's not the same experience, and perhaps not the same game then.
My point is that you pick and choose your battles wisely in development, and that 2D is alive and well, and will probably re-emerge as 3D apexes. Just my opinion.
2D and 3D gameplay and styles are very much equal and there's no reason to remove 2D.
Hardware and engines, however, should all be moving to 3D hardware-based rendering.
Yes. That plus the fact that hardware is extremely optimized for handling triangles, now. We're in a nice comfort zone now where hardware for calculating stuff like partial derivatives for filtering and such is fast and cheap. I don't think any IHVs are in a hurry to throw that all away and watch their margins plummet just to support curves (and has the industry really decided exactly which parameterization we prefer, yet?).
The idea is to not let them decide. With technologies like CUDA and OpenCL, which bring the raw power of the 3D card's parallel processors into a generic programmable environment, you won't need specialized hardware. All they have to do is focus on one thing: providing general-purpose, fast parallel processors. In fact, I believe a very good (but somewhat long-term) solution would be to dedicate the hardware to parallel processing only and have the whole graphics pipeline implemented in software for these processors. This way compatibility with current and previous software is kept, and people who want raw performance (for graphics or other things, like physics or AI) or who use their own rendering methods can program the hardware via a lower-level API (something like CUDA or OpenCL).
Programmability is a good thing that provides flexibility and fewer boundaries for programmers, and even if it initially performs worse than a hardware solution, the simpler design of hardware that focuses on one thing (parallel processing), instead of everything realtime graphics has brought in the last two decades, lends itself to better optimization. Take pixel shaders as an example: initially the programmable pipeline was slower, but today it's faster, since the focus has shifted there and even the "fixed" pipeline is now a compatibility layer over the programmable one.
Oh, I didn't mean to imply that the IHVs should decide everything, and I agree with you. In fact, I think you'll generally find that both NVIDIA and ATI are very proactive about working with developers to find out what they want in future hardware (if you meet a person at GDC with the title "Developer Technology" on his badge, that's who you want to talk to). Not just because they are geeks, too, but also because it makes them more competitive and opens up new markets for them (like CUDA). The unified architecture was one example where everyone knew they had to eat the costs to take that next step. But it will always be a balance against costs. When stockholders get mad you lose money, and no money means no advancements, period.
I'm sure we'll eventually ditch triangles, but it's gonna be a slow process until all the axes finally line up (output equal to or better than triangle rendering, mature tech that scales across all price points, developer demand, and performance per dollar falling somewhere in the vicinity of existing triangle renderers).
Yes, I think this will be the trend of the near future. Current 3D engines can do a lot of very fancy things with the classical triangle rasterization technique, but it is getting quite complex and a bit messy.
If we have just the raw power, then we can finally change to ray-tracing-based techniques, which are fundamentally simpler (compare how simply shadows and reflections are implemented in ray tracing with how they're done in classical rasterization: environment mapping, shadow mapping), so that developers can concentrate on putting in even more fancy features.
The only reason real-time ray tracing is not the main 3D technique today is that 3D hardware was not optimized for it. On the CPU, real-time ray tracing would be the choice now. (Real-time ray tracing was getting feasible with simple scenes even back in 2000.)
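The simplicity claim above is easy to illustrate. In a ray tracer a hard shadow is just one more intersection query (no shadow maps, no extra render passes); the sketch below uses a toy sphere intersector and made-up scene values, and assumes normalised ray directions:

```python
import math

def hit_sphere(origin, direction, center, radius):
    """Return the nearest positive t where the ray hits the sphere, or None.
    Assumes `direction` is normalised, so the quadratic's a-term is 1."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 1e-6 else None

def in_shadow(point, light, spheres):
    """Shadow test: shoot one ray from the point toward the light and see
    whether anything blocks it before the light. That's the whole algorithm."""
    d = [l - p for l, p in zip(light, point)]
    dist = math.sqrt(sum(x * x for x in d))
    d = [x / dist for x in d]
    return any(
        (t := hit_sphere(point, d, c, r)) is not None and t < dist
        for c, r in spheres
    )
```

Reflections fall out the same way: bounce the ray off the hit point and trace again, versus rasterization's environment-map machinery.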
Who knows, maybe even some kind of real-time photon mapping will be feasible in the near future... That would be very cool!
There may be another factor: competition between 3D card manufacturers and CPU manufacturers. CPUs contain more and more high-performance parallel cores. If 3D card manufacturers don't move in the direction of providing general-purpose parallel hardware, there is a risk that developers will implement 3D rendering on the CPU instead. We are not far from being able to implement very nice real-time ray tracing on the CPU, with reasonable performance and quite complex scenes...
Going back to the original question...
I was at a job interview at a big games studio in 1993 when someone asked me the same question. I stupidly answered, "No, there will always be a place for 2D in the games industry; some games work and look better that way." I didn't get the job. I guess the snotty twat interviewing me wanted a more progressive attitude. 17 years later I'm going to stupidly say the same thing: "No".
As an aside, that studio no longer exists, like so many other high-flying UK studios of the time. But there are still plenty of 2D games about... I don't know if that has any relevance.
Nice summation - for the fifth time.
To many of us, you gave the right answer. You probably wouldn't have been happy working there, anyway. That said, though, 1993 was when mainstream real-time 3D was JUST around the corner... so you have to cut people some slack.
I don't really see that as an additional factor, though, because all of the above concerns apply to CPU manufacturers as well. And I'm not sure how you can say "if 3D card manufacturers don't go more general" when they have a 10+ year history of becoming more generalized. If you think about it, both GPUs and CPUs are headed in the same direction; the only difference is their respective starting points.
I'm pretty sure Intel, NVIDIA, and AMD all have the same goals, here (ultimately, domination of all computing), and are well aware of the fierce competition that lies ahead. It should be interesting to see how it all pans out!!!
When the PS1 launched, it was a requirement for a game to be in 3D. No 2D games were allowed, apart from a couple from favoured developers.
And maybe your answer was correct but you didn't get the job for other reasons?
Yeah, Rayman was the only 2D title around launch time, with Pandemonium being 2D to play and side-on, but using 3D hardware and 3D models.
I've been switching all my graphics to graphic tiles of a fixed size, and slopes are an absolute fucking nightmare for an artist to line up and draw. Previously, my engine was "modern" in that you could have a tile of any shape and size and simply rotate and position it anywhere, so you didn't need to fiddle with a ton of tiny graphic blocks just for a slope.
Once you cross a certain threshold, 3D is far faster with media assets too. To solve the same problem in 3D, I would have needed to just export a mesh or a pre-render of the shape.
Last night I spent six hours just making slopes line up with tiles; that's not faster, that's much slower for media too.
Those saying 2D art is faster are obviously referring to games that are entirely square-shaped, with platforms and no slopes. Try making Sonic the Hedgehog tilesets or something.
Go ambitious, and you can forget about 2D being fast to work with in any way.
I know I repeat myself but this was proof of what I've been saying and it happened to me last night so I thought I'd share.
If I remember correctly, Sonic's sprites were based on the 8 compass directions, and the loops were built from line segments in those same directions.
You could actually either rotate in realtime or just pre-rotate to the 8 compass directions.
I just had a head-slapping moment. Trying to restart my contribution to this thread with how I should've responded in the first place...
There's a game in development right now that's an absolutely 100% amazingly perfect example of what I've been on about all along, and it's being discussed right here on this board. The head-slapping part is that I have an association with this project yet never gave it a second thought.
This thread answers the original question with an affirmative "YES" in my book. You can churn out your 2D games looking flat, or you can make them look like that.