I remember, when starting university with no real programming knowledge, making a little game in my spare time (a clone of Pang, if you know it), before I knew what a linked list was. I implemented things like the balls that bounce around the level as an array, and just gave that array a max size. I was very unsatisfied by this, and later finding out about linked lists was an ah-hah moment, very satisfying.
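
For illustration, a minimal linked-list version of that ball container might look something like this in C (the Ball fields are invented for the example; the real game had more state):

    #include <stdlib.h>

    /* One ball in the level; balls form a singly linked list so any
       number can be alive at once, rather than capping an array. */
    typedef struct Ball {
        float x, y;          /* position */
        float vx, vy;        /* velocity */
        struct Ball *next;
    } Ball;

    /* Spawn a ball onto the front of the list and return the new head. */
    Ball *ball_spawn(Ball *head, float x, float y, float vx, float vy) {
        Ball *b = malloc(sizeof *b);
        b->x = x;   b->y = y;
        b->vx = vx; b->vy = vy;
        b->next = head;
        return b;
    }

    /* Unlink and free one ball (e.g. when it is popped), returning the
       possibly-updated head. */
    Ball *ball_remove(Ball *head, Ball *dead) {
        Ball **link = &head;
        while (*link && *link != dead)
            link = &(*link)->next;
        if (*link) {
            *link = dead->next;
            free(dead);
        }
        return head;
    }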

The next project was a "Countdown" game, where you are given 9 pseudo-random letters and have to make the longest word you can. I found a dictionary as a text file, so I could check whether the word you entered existed. The game was on the Game Boy Advance, so there wasn't a huge amount of space or a very fast CPU. As you can imagine, walking the entire dictionary file from start to end looking for a word was far too slow. But the dictionary was already sorted, so there was another ah-hah moment when binary search was introduced.
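
A sketch of what that lookup looks like, assuming the dictionary has already been loaded as a sorted array of strings (the names here are invented for the example):

    #include <string.h>

    /* Binary search over a sorted word list; returns the index of the
       match, or -1 if the word isn't in the dictionary. */
    int dict_find(const char *const *words, int count, const char *word) {
        int lo = 0, hi = count - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;
            int cmp = strcmp(word, words[mid]);
            if (cmp == 0)
                return mid;          /* found it */
            else if (cmp < 0)
                hi = mid - 1;        /* word sorts before the midpoint */
            else
                lo = mid + 1;        /* word sorts after the midpoint */
        }
        return -1;
    }

For a 100,000-word list that is around 17 comparisons per lookup instead of a full scan.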

Next I worked on a rendering engine for a device called the GP32. You basically got a pointer to the screen buffer and could put what you liked in it, so I learned how to write polygon fill routines, back-face culling, etc., but I didn't know what perspective projection was or how to find out about it. I finally found a book, the Game Programmer's Black Book or something like that, which explained perspective projection, at least to some extent; another ah-hah moment (previously I had been dividing my X and Y by Z, as I knew I wanted things smaller in the distance, but that doesn't give a nice result by itself).
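
The missing piece, roughly, is that the divide by Z needs to be scaled by the distance to the projection plane (which controls the field of view) and the result re-centred on the screen. A sketch, assuming a 320x240 screen:

    /* Project a camera-space point (x, y, z) to pixel coordinates.
       'dist' is the eye-to-projection-plane distance, which sets the
       field of view; z must be > 0 (in front of the camera). */
    void project(float x, float y, float z, float dist, int *sx, int *sy) {
        float px = x * dist / z;
        float py = y * dist / z;
        *sx = (int)(px + 320.0f / 2.0f);   /* centre horizontally          */
        *sy = (int)(240.0f / 2.0f - py);   /* flip Y and centre vertically */
    }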

These are just very early examples from when I first started programming, when information was harder to find and a lot of games development was DIY: if you wanted a polygon, you pixel-filled a polygon yourself! Even when the PS2 came out you still had to write ASM render programs to take an array of vertices, transform them and their UVs, etc., and send them to the Graphics Interface.
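
As an illustration of what pixel-filling a polygon into a raw screen buffer involves, here is a simple solid triangle fill using the bounding-box / edge-function approach; it's a sketch rather than the scanline-style routines typically used back then, and the 320x240 8-bit screen is assumed:

    #include <stdint.h>

    #define SCREEN_W 320    /* assumed resolution for the example */
    #define SCREEN_H 240

    /* Signed area test: positive when p is to the left of edge a->b. */
    static int edge(int ax, int ay, int bx, int by, int px, int py) {
        return (bx - ax) * (py - ay) - (by - ay) * (px - ax);
    }

    static int min3(int a, int b, int c) { int m = a < b ? a : b; return m < c ? m : c; }
    static int max3(int a, int b, int c) { int m = a > b ? a : b; return m > c ? m : c; }

    /* Fill a solid triangle into an 8-bit screen buffer. */
    void fill_triangle(uint8_t *screen,
                       int x0, int y0, int x1, int y1, int x2, int y2,
                       uint8_t colour) {
        /* Make the winding consistent so all three edge tests agree. */
        if (edge(x0, y0, x1, y1, x2, y2) < 0) {
            int t;
            t = x1; x1 = x2; x2 = t;
            t = y1; y1 = y2; y2 = t;
        }
        int minx = min3(x0, x1, x2), maxx = max3(x0, x1, x2);
        int miny = min3(y0, y1, y2), maxy = max3(y0, y1, y2);
        if (minx < 0) minx = 0;
        if (miny < 0) miny = 0;
        if (maxx >= SCREEN_W) maxx = SCREEN_W - 1;
        if (maxy >= SCREEN_H) maxy = SCREEN_H - 1;

        /* Test every pixel in the bounding box against the three edges. */
        for (int y = miny; y <= maxy; ++y)
            for (int x = minx; x <= maxx; ++x)
                if (edge(x0, y0, x1, y1, x, y) >= 0 &&
                    edge(x1, y1, x2, y2, x, y) >= 0 &&
                    edge(x2, y2, x0, y0, x, y) >= 0)
                    screen[y * SCREEN_W + x] = colour;
    }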

But I haven't found that later tech developments have removed the need to find and use other algorithms and structures. Just last week I had to diagnose a crash where the target device and debugger showed a nonsensical callstack, so I enabled the profiling options in GCC/Clang that give you a prologue/epilogue hook for every function called, which let me store my own call-graph history and, on a crash, display it nicely with indentation etc. That allowed me to see what happened just before the crash. It turned out to be a system UTF-16 conversion routine stomping past a buffer's bounds: the NULL termination of the string had been done incorrectly, as if it were a normal char*, effectively NULL-terminating only half of a UTF-16 pair, which wasn't treated as a terminator. So the actual bug was bad string termination by the off-the-shelf engine we use. As the profiling code ran twice for every single function call, it had to be pretty efficient, using appropriate data structures, etc.
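
For anyone curious about the mechanism, this is the style of hook you get from GCC/Clang's -finstrument-functions (a guess at the exact option, since I only said "profiling options" above): the compiler inserts calls to user-defined enter/exit functions around every instrumented function, and those hooks can record into a ring buffer that a crash handler later dumps. A minimal single-threaded sketch:

    /* Build the rest of the program with -finstrument-functions; these
       hooks and anything they call must not be instrumented themselves. */
    #include <stdio.h>

    #define NO_INSTRUMENT __attribute__((no_instrument_function))
    #define HISTORY_SIZE 4096          /* power of two, chosen arbitrarily */

    typedef struct {
        void *fn;          /* address of the function                 */
        void *call_site;   /* address it was called from              */
        int   depth;       /* call depth at the time, for indentation */
        int   entering;    /* 1 = enter, 0 = exit                     */
    } CallEvent;

    static CallEvent history[HISTORY_SIZE];
    static unsigned  history_pos;
    static int       call_depth;

    NO_INSTRUMENT static void record(void *fn, void *site, int entering) {
        CallEvent *e = &history[history_pos++ & (HISTORY_SIZE - 1)];
        e->fn = fn;
        e->call_site = site;
        e->depth = call_depth;
        e->entering = entering;
    }

    NO_INSTRUMENT void __cyg_profile_func_enter(void *fn, void *site) {
        record(fn, site, 1);
        ++call_depth;
    }

    NO_INSTRUMENT void __cyg_profile_func_exit(void *fn, void *site) {
        --call_depth;
        record(fn, site, 0);
    }

    /* Called from the crash handler: dump the recorded events, oldest
       first, indented by call depth so the flow before the crash is clear. */
    NO_INSTRUMENT void dump_call_history(void) {
        for (unsigned i = 0; i < HISTORY_SIZE; ++i) {
            const CallEvent *e = &history[(history_pos + i) & (HISTORY_SIZE - 1)];
            if (!e->fn)
                continue;              /* slot never filled */
            printf("%*s%s %p (called from %p)\n", e->depth * 2, "",
                   e->entering ? "enter" : "exit ", e->fn, e->call_site);
        }
    }

The dumped addresses then get mapped back to names (addr2line or the map file), and because the hooks run on every call and return they have to be as cheap as possible, which is where the appropriate data structures come in.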

So I guess the point of this post is that a good knowledge of algorithms and data structures has always seemed beneficial to me. The extent to which some companies push them is too much for my taste, but I don't think that should lead us to conclude it's all pointless. There is a nice balance out there.
