Saturday, April 3, 2010

2 Ways to Develop

There appear to be two major ways to make things. When I say make things, I mean develop. When I say develop, I mean make stuff. Stuff can be games, electronics, bridges, term papers, homework, etc.

Now that we've defined the terms of our agreement, let's get into the nitty gritty.

The two ways I'm referring to are a simulation/statistical approach and a theoretical approach.

It's quite simple, really. In the former you simply try a bunch of different things and see what works. For example, let's say you want to figure out how to price something, say a carton of milk, to maximize profit. What you can do is run several trials (either virtual simulations or tests with real people) to see how many people will buy at each price, and use that information to pick the price.
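To make that concrete, here's a minimal sketch of the "try stuff and see" approach in Python. Everything in it is made up: the cost, the candidate prices, and the shoppers' willingness to pay; the simulated shopper just stands in for whatever real trial or fancier simulator you'd actually use.

    import random

    COST = 1.00                                   # hypothetical cost per carton
    candidate_prices = [1.25, 1.50, 1.75, 2.00, 2.50, 3.00]

    def simulated_shopper_buys(price):
        """Stand-in for a real shopper: draw a willingness to pay and compare."""
        willingness_to_pay = random.gauss(2.00, 0.50)   # assumed preference distribution
        return price <= willingness_to_pay

    def expected_profit(price, shoppers=10_000):
        """Average profit per shopper at this price, estimated by simulation."""
        sales = sum(simulated_shopper_buys(price) for _ in range(shoppers))
        return (price - COST) * sales / shoppers

    best = max(candidate_prices, key=expected_profit)
    print(f"Best simulated price: ${best:.2f}")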

The latter deals with more abstract theory. Going back to our pricing example, we have models and equations for how much people will buy based on certain variables. Perhaps the latest health trend is to eat cereal, or it's cold outside and people want to make hot chocolate. Using these models we can try to predict the price we should sell the milk at to maximize profits.
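For a toy version of the theory side, assume a textbook linear demand curve (again with made-up numbers); the cereal craze or the cold snap would just show up as shifts in the coefficients, and a little calculus hands you the price directly.

    # Toy theory model: linear demand q(p) = a - b*p, with made-up coefficients.
    # External factors (a cereal craze, cold weather) would shift a or b.
    a, b, cost = 500.0, 150.0, 1.00   # assumed demand intercept, slope, and unit cost

    # Profit is (p - cost) * (a - b*p); setting the derivative to zero gives the optimum.
    optimal_price = (a / b + cost) / 2
    print(f"Theory says to charge about ${optimal_price:.2f}")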

Typically we use a mix of the two. We use theory and "common sense" to guess an initial value. We would never price milk at $50 for a gallon in today's economy, and selling it at $0.50/gal wouldn't net us a profit. Then we try it out and make modifications as we go based on how well the system works.
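Put together, the hybrid might look something like this sketch: start from the theory guess, then nudge the price up or down as long as the observed profit keeps improving. The demand numbers are the same made-up ones as above, standing in for whatever real feedback you'd actually collect.

    def observed_profit(price):
        """Placeholder for real feedback: sales figures, a simulator, whatever you have."""
        return (price - 1.00) * max(0.0, 500 - 150 * price)   # same toy demand as above

    price, step = 2.00, 0.10          # theory/common-sense starting guess, dime-sized tweaks
    for _ in range(20):               # keep nudging while a small change still helps
        better = max(price - step, price, price + step, key=observed_profit)
        if better == price:
            break
        price = better
    print(f"Settled on about ${price:.2f}")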

The reason is that these two methods are great at two very different things: time and accuracy. Theoretical models are quick and simple; generally you just punch the numbers into the computer and go. However, you can never account for every single variable in the real world exactly. The whims of people, chance, and random inane what-have-yous will inevitably gunk up the works. So theory will get you close, but more likely than not it won't give you the perfect answer.

Simulations and statistical design, on the other hand, tend to be much more accurate. In a virtual simulation, assuming the simulation correctly models the real-world environment, you will get better results. Running a test in a real supermarket is even more accurate, since people are actually buying your milk. The downside is the time a test takes. To get enough data you often need hundreds if not thousands of samples, so trying one randomly chosen price per day in a real store would take years. Similarly, a computer simulation (inherently not perfect, but much closer to the real world than a simple equation) may take hours if not days to crunch through all the possible variables.

This is why people will take a good first guess and make slight adjustments later. I personally believe that getting a really good first guess is the key to happiness. Perhaps it's my analytical nature. However, there have often been times when I'd rather just set up an hour-long simulation and let it run.


One recent example was a homework assignment. We were told to make the best filter we could. After playing around with the filter, I noticed certain things that helped ensure it worked and reduced its size (the metric we were trying to minimize). I then set up a six-dimensional search that took 4 hours (on the order of 2000 different combinations were tried). And by golly did I find the smallest area possible. Compare that with my friend, who picked only two dimensions to search and got an answer in about 10 minutes. Granted, his was larger than mine by about 25%, but it was good enough.
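The homework details aren't important here (and the actual filter parameters aren't in this post), but the shape of that search was roughly this: enumerate every combination of a handful of values in each dimension, throw out the designs that don't work, and keep the smallest of what's left. The parameter ranges and the two functions below are purely illustrative stand-ins.

    from itertools import product

    # Six dimensions, a handful of illustrative values each.
    param_ranges = [range(1, 5)] * 6

    def filter_works(params):
        """Hypothetical pass/fail check, e.g. 'meets the frequency response spec'."""
        return sum(params) >= 10

    def filter_area(params):
        """Hypothetical cost metric we're trying to minimize."""
        return sum(p * p for p in params)

    best = min((p for p in product(*param_ranges) if filter_works(p)),
               key=filter_area)
    print("Smallest working design:", best, "area:", filter_area(best))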

I was also listening to a podcast by the developers of Natural Selection 2, where they talked about the Game Developers Conference a few weeks ago. They mentioned that one company was taking a very heavily statistical approach to see how much money they could squeeze out of their players: they would make a slight change for about 5% of the players and see how it did. The NS2 developers felt it was wrong, that those people were not developing games anymore but just crunching statistics. While my initial feeling is that they're right, there is no inherent fault in using a statistical model for game development. It's just that some games, like NS2, can't allow that kind of model: if suddenly 5% of the players did more damage, the competitive players would figure it out and hang the developers for giving those players an advantage (or disadvantage) compared to everyone else. In internal testing, however, I assume statistical models are used all the time.
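The podcast didn't say how that company actually split its players, but a common way to run that kind of 5% experiment is to hash each player ID into a bucket so the assignment is stable between sessions. The experiment name and cutoff here are hypothetical.

    import hashlib

    EXPERIMENT, ROLLOUT = "damage_tweak_v1", 0.05   # hypothetical experiment name, 5% of players

    def in_experiment(player_id: str) -> bool:
        """Deterministically put ~5% of players in the test group by hashing their ID."""
        digest = hashlib.sha256(f"{EXPERIMENT}:{player_id}".encode()).hexdigest()
        return int(digest, 16) % 10_000 < ROLLOUT * 10_000

    print(in_experiment("player_12345"))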

Sirlin, of internet and game-balancing fame, claims that his wealth of intuitive (and theoretical) game balance knowledge allowed him to develop the game Yomi. It's an intricate battle game, similar to Pokemon but with more advanced rules. Sirlin compares this intuitive design to the statistical approach in this article near the end. He claims that, because of his fail-safes and good design, a detailed statistical attempt to exploit the game seemed to fail and provided very little extra insight. However, he also advocates in this article (again near the end) playtesting the heck out of your games to find the good, the bad, and the overpowered things that need to be fixed. So, yeah, at the end of the day Sirlin is also a sucker for both methods.


So, whatever you do, make good educated guesses first and then tweak later. Sounds like a solid approach for most anyone, I hope.