Choice of code design

Discussion in 'Game Development (Technical)' started by lordmetroid, Aug 25, 2007.

  1. MedievalElks

    MedievalElks New Member

    Joined:
    Jul 25, 2006
    Messages:
    82
    Likes Received:
    0
    That same paradigm (XP) also advocates refactoring and design patterns. The idea is to do The Simplest Thing That Could Possibly Work and then refactor out the bad smells.
     
  2. Applewood

    Moderator Original Member Indie Author

    Joined:
    Jul 29, 2004
    Messages:
    3,859
    Likes Received:
    2
    You're right. Seeing as I can't actually seem to leave this thread, I'll admit I have no clue what you're on about.

I kinda feel like I'm being made an idiot of here for recommending people write efficient code, so I'm sticking around for the thrills. One of those naysayers being someone who can't at the same time envision rendering lots and lots of stuff at good fps! :)
     
  3. bvanevery

    bvanevery New Member

    Joined:
    Jun 25, 2007
    Messages:
    369
    Likes Received:
    0
    I don't consider YAGNI to be synonymous with XP, even if it may have arisen from XP.

    YAGNI means first you do what matters now. Refactoring usually doesn't matter now. Sometimes it does, then you do it. I wait until something really proves it needs to be refactored. For instance, translation of Makefile.in and configure.in to CMake code is similar but not identical. Currently I'm just cutting and pasting the code from my Makefile.in translator to create a configure.in translator, and modifying it as necessary. I'm not yet attempting to create any common code because it's not clear that's necessary or beneficial, and the Makefile.in code works. Considering how fragile regexes are, I'd very much like not to touch it unless really essential.
     
  4. bvanevery

    bvanevery New Member

    Joined:
    Jun 25, 2007
    Messages:
    369
    Likes Received:
    0
    Do you know for certain that's what your ASM output is actually doing?
     
  5. bvanevery

    bvanevery New Member

    Joined:
    Jun 25, 2007
    Messages:
    369
    Likes Received:
    0
We have been discussing 5M polys per second. Bandwidth claims of 9GB/sec, 3GB/sec due to shader compression, and less again due to static vertex buffers. What puzzles me is whether you're claiming 5M polys/sec in your pool table game or in more general rendering.
     
  6. princec

    Indie Author

    Joined:
    Jul 27, 2004
    Messages:
    4,873
    Likes Received:
    0
    What I'm on about is making sweeping statements on coding practice without even an inkling of the engineering problem in the first place. Very dangerous thing to do these days what with the forum full of newcomers.

    Cas :)
     
  7. bvanevery

    bvanevery New Member

    Joined:
    Jun 25, 2007
    Messages:
    369
    Likes Received:
    0
Optimization is an expensive specialty skill that provides no commercial value by itself. If I'm getting paid by some 3d graphics card company to optimize the hell out of their driver in ASM code, it is on the backs of legions of engineers who got the driver working in the first place. Looking at the world through the lens of optimization gives a person a very poor perspective on how long it takes to actually get things done.

    I've lived that foible in my own career. Once upon a time I left DEC a superlative 3d optimization jock - and unbeknownst to me at the time, a lousy application developer who didn't know how to quickly get things working from scratch. I went bankrupt on my mistake, learned my lesson, and moved on. I Was An Idiot [TM] or at least ignorant of the cost of pursuing efficiency for efficiency's sake. I've thrown a lot of my own money away on it, so now I'm much more aware of the fully loaded cost of every single thing I might choose to implement. Thus I avoid implementing things that don't contribute commercial value now.
     
  8. bvanevery

    bvanevery New Member

    Joined:
    Jun 25, 2007
    Messages:
    369
    Likes Received:
    0
    Paul's got plenty of inklings. But if he's talking 5M polys/sec for the pool table, that's not anywhere near as impressive as it sounds because it's a special case.
     
  9. princec

    Indie Author

    Joined:
    Jul 27, 2004
    Messages:
    4,873
    Likes Received:
    0
Quite so. There are plenty of eyebrows being raised at the assertion, for example, that a design using a virtual method call on an object is an inefficient method of coding. Slower it may be in some circumstances - but irrelevant. No-one in here need worry about this sort of stuff. The whole thread is a pointless waste of bandwidth, which I am enjoying contributing to because I've had a nasty streak on for a couple of days.

    Cas :)
     
    #89 princec, Aug 30, 2007
    Last edited: Aug 30, 2007
  10. bvanevery

    bvanevery New Member

    Joined:
    Jun 25, 2007
    Messages:
    369
    Likes Received:
    0
    I really can't be bothered to pay attention to the fine details of the argument about virtual method calls. I suspect that what's really being debated is people who write batching architectures vs. people who don't. Which is an orthogonal discussion. And the answer to that one is simple. You want performance, you batch.
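    To make "you batch" concrete, here's a minimal sketch of the idea. The names (Batcher, Vertex, and the draw-call counter standing in for a real submit call) are illustrative, not from any actual engine: you accumulate per-object geometry into one buffer and submit it in a single call, rather than one call per object.

    ```cpp
    #include <vector>

    // Hypothetical vertex type; a real renderer's format will differ.
    struct Vertex { float x, y, z; };

    struct Batcher {
        std::vector<Vertex> buffer;  // geometry accumulated across many objects
        int draw_calls = 0;          // stand-in counter for real API submissions

        void add_object(const std::vector<Vertex>& verts) {
            // Copy this object's vertices into the shared buffer instead of
            // drawing it immediately.
            buffer.insert(buffer.end(), verts.begin(), verts.end());
        }

        void flush() {
            // One real draw/submit API call would go here, covering every
            // object added since the last flush.
            if (buffer.empty()) return;
            ++draw_calls;
            buffer.clear();
        }
    };
    ```

    With 100 objects per frame this issues one draw call instead of 100, which is where the throughput comes from regardless of how the per-object code is structured.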
     
  11. Applewood

    Moderator Original Member Indie Author

    Joined:
    Jul 29, 2004
    Messages:
    3,859
    Likes Received:
    2
Yes and no. It seemed that values occasionally got passed on the stack despite the fact that the PowerPC in the 360 has like five trillion vector registers available and the routines in question were just vector munging. Recoding the functions cleared it up, but I wouldn't have expected to see it in the first place.

Actually I never claimed it was impressive, just highlighting that we're not talking about a match-3 game. What I don't get is why anyone thinks it's even slightly impressive if we're talking about throughput demos. This is 83,000 polys a frame at 60Hz and it is indeed for our pool game. If anyone thinks that *is* impressive, they really need to look at their optimisation skills! :) We could add a pair of zeroes to that figure if we were targeting 360 only, but we're keeping it real for PC too.
     
    #91 Applewood, Aug 30, 2007
    Last edited: Aug 30, 2007
  12. Applewood

    Moderator Original Member Indie Author

    Joined:
    Jul 29, 2004
    Messages:
    3,859
    Likes Received:
    2
    I slightly disagree. On 360 for example, whose memory really is as slow as a fast hard-drive, you spend time optimising or you don't get any work! I take your general point though - don't do it if you don't need it.

    But, and this is why I started shouting. Well, tried to:

I've said it several times already, optimising is not the same thing as writing efficiently in the first place. A totally trivial example is taking static calculations out of loops. The compiler *can* do this, but often doesn't. And doing it yourself into a temp variable will make your code EASIER to read. And it's faster. Where's the drawback here? 10 more keystrokes?
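    A sketch of the kind of hoist being described, with illustrative names (neither function is from the thread): the loop-invariant sub-expression gets pulled out into a named temporary, computed once, which reads better and saves work whether or not the compiler would have caught it.

    ```cpp
    #include <cassert>
    #include <cmath>
    #include <vector>

    // Invariant work redone on every pass through the loop.
    double sum_scaled_naive(const std::vector<double>& xs, double angle) {
        double sum = 0.0;
        for (std::size_t i = 0; i < xs.size(); ++i)
            sum += xs[i] * std::cos(angle) * 2.0;  // cos(angle)*2.0 never changes
        return sum;
    }

    // Same calculation with the invariant hoisted into a named temp.
    double sum_scaled_hoisted(const std::vector<double>& xs, double angle) {
        const double scale = std::cos(angle) * 2.0;  // computed once, self-documenting
        double sum = 0.0;
        for (std::size_t i = 0; i < xs.size(); ++i)
            sum += xs[i] * scale;
        return sum;
    }
    ```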
     
  13. jessechounard

    Original Member

    Joined:
    Apr 4, 2006
    Messages:
    70
    Likes Received:
    0
    If it's easier to read, then sure, it's better. But worrying about details like that could lead a person to writing:

    p = (y << 8) + (y << 6);

    Instead of:

    p = y * 320;

    Which is certainly not easier to read, and has been made completely irrelevant by modern compilers and hardware. Profiling should tell you when to make that sort of optimization. It's architecture and algorithms that a coder should be focusing on, not minor optimizations.

    But I'm going to guess you don't disagree with this, and just picked a poor example.
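    For the record, the two forms really are equivalent, since 320 = 256 + 64 = (1 << 8) + (1 << 6) - a quick check (helper name is mine, and y is kept non-negative because left-shifting a negative int is undefined in older C++):

    ```cpp
    // Verifying the strength-reduction identity from the post above:
    // y * 320 == (y << 8) + (y << 6), because 320 = 256 + 64.
    bool shift_equals_multiply(int y) {
        return ((y << 8) + (y << 6)) == y * 320;
    }
    ```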
     
  14. Applewood

    Moderator Original Member Indie Author

    Joined:
    Jul 29, 2004
    Messages:
    3,859
    Likes Received:
    2
    This is a complete myth, I'm afraid. The compiler gets credited with all sorts of things it could do but actually doesn't. "Sometimes it does" is as far as I would go and unlike what I'm advocating for others, as lead engine guy I *do* spend a lot of time optimising and looking at compiler output.

It was meant as a trivial example though. There was once a time when shifts and lea instructions were faster, but these days imul wins anyway. In fact most code is largely irrelevant as it's running at least 5,000 times faster than your data. I agree that example was overly trivial and more of a throwback.

    Agreed. Algorithms I can't help you with. For architecture, see below:
    ==================

    I just thought of this example. It's a 360 snippet again, but the effect is similar on PC although admittedly nowhere near as extreme. It's also not as trivial as the above and far more meaningful in terms of performant game code.

You have a routine that accesses objects via a linked list and needs to process them once per object. Let's say for a boids "objects near me" AI routine. The random-access nature of the linked list means that you're missing the L2 cache a lot when fetching positions.

You recode that to take a growable array data structure that can now stay within the L2 cache the whole time. Do this as a struct of arrays, not an array of structs, so that all the positions are near each other in memory.

What sort of speed up would you expect for swapping the first data-access method with the second? The answer is roughly 60,000%. That's sixty thousand percent.

The cost of the above? Swapping a nasty, usually buggy linked list for a nice simple array. Easier to read, easier to code, easier to debug, faster to code, faster to debug, fuckloads faster to execute.
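    A sketch of the two layouts being contrasted - field and function names are illustrative, not from the actual game code. In the struct-of-arrays form, a pass that only reads positions streams through tightly packed memory instead of dragging every other field through the cache with it:

    ```cpp
    #include <cstddef>
    #include <vector>

    // Array of structs: each object's fields sit together, so iterating over
    // just the positions also pulls the velocities into the cache.
    struct BoidAoS { float x, y, z, vx, vy, vz; };

    // Struct of arrays: each field gets its own contiguous array.
    struct BoidsSoA {
        std::vector<float> x, y, z;  // velocities etc. would get their own arrays too
    };

    void add_boid(BoidsSoA& b, float x, float y, float z) {
        b.x.push_back(x); b.y.push_back(y); b.z.push_back(z);
    }

    // "Objects near me": count boids within radius r of (px,py,pz), touching
    // only the three position arrays.
    int neighbours_within(const BoidsSoA& b, float px, float py, float pz, float r) {
        int count = 0;
        const float r2 = r * r;
        for (std::size_t i = 0; i < b.x.size(); ++i) {
            const float dx = b.x[i] - px, dy = b.y[i] - py, dz = b.z[i] - pz;
            if (dx * dx + dy * dy + dz * dz <= r2) ++count;
        }
        return count;
    }
    ```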
     
    #94 Applewood, Aug 30, 2007
    Last edited: Aug 30, 2007
  15. princec

    Indie Author

    Joined:
    Jul 27, 2004
    Messages:
    4,873
    Likes Received:
    0
    Called the Flyweight pattern I think. Patterns is where it's at these days.

    Cas :)
     
  16. jessechounard

    Original Member

    Joined:
    Apr 4, 2006
    Messages:
    70
    Likes Received:
    0
    The optimization I mentioned is built into gcc, for sure. And even if a particular compiler doesn't have it, it shouldn't be used until the profiler indicates it. It's extremely ugly, and the next programmer to see it is unlikely to understand what's going on.

    I'm with you on the other example you provided. But that's an architectural change, and I wouldn't call that a premature optimization.
     
  17. Applewood

    Moderator Original Member Indie Author

    Joined:
    Jul 29, 2004
    Messages:
    3,859
    Likes Received:
    2
    If you say so. I was using a linked-list at least 10 years before I knew what it was called, so this comes as no surprise.

If you want to bait me, you need to try much harder than that. ;)
     
  18. Applewood

    Moderator Original Member Indie Author

    Joined:
    Jul 29, 2004
    Messages:
    3,859
    Likes Received:
    2
That's interesting. I've only used gcc on the PS2 and that backend was an utter abortion. The VS2005 one is erratic at best at what we'd class as obvious optimisations, and the 360 backend for it is even more random, though improving. I hear good things about Intel's own compiler but I've never tried it tbh.

    Well yeah, neither would I - I'd start out that way knowing from experience that I'm gonna be blatting the cache with the control code. Apparently this kind of thinking is either pointless or too expensive though so I guess we're screwed! :)
     
    #98 Applewood, Aug 30, 2007
    Last edited: Aug 30, 2007
  19. princec

    Indie Author

    Joined:
    Jul 27, 2004
    Messages:
    4,873
    Likes Received:
    0
    Not baiting - that's just how software engineering is going these days. None of this blanket "virtual methods are slow" stuff. Just the correct design pattern for a particular problem.

    Cas :)
     
  20. Applewood

    Moderator Original Member Indie Author

    Joined:
    Jul 29, 2004
    Messages:
    3,859
    Likes Received:
    2
In that case I'm in agreement. You sounded like you were one of the guys decrying what I've been saying. I don't know all the fancy jargon, I just learn the best way to do things by a combination of experiment and discussion.
     
