Systems Modelling - Abandoned
Updated Jan 29, 2002
This entry has been abandoned, as I no longer study computer science, and hence have no desire to finish the entry. Perhaps some other researcher will turn this entry into a finished product, but that researcher isn't me!
Computers are complicated. Systems that use many computers, with random external influences such as people, are even more complicated. The internet is mind-bogglingly complicated. Computer Systems Modelling is designed to cut through all that complexity by viewing computers as black boxes, or a series of black boxes, and dealing only with how things interact with them.
What's it good for?
Suppose you are planning to buy a new computer product, and you want to know which one performs a particular task well. You could try them all out individually, but that would take a long time. You could read a magazine or newspaper, but they could be biased. You could choose based on the colour of the packaging. Alternatively, you could build a model, and predict which one will perform best.
Sometimes a product, once made, turns out to be far too slow to be usable. Obviously this means it needs improvement - but what should be improved? Ideally the makers would like to do the least work possible to make it lightning fast, so which part should be attacked first?
When designing a system, using a model can help avoid bad design decisions, either in the computers themselves or in the way people reach them. For example, a bank might decide to have a queue for each cashier, thus ensuring maximum irritation and resentment, chaos whenever a window opens or closes, and old ladies with handbags somehow always making it to the front of the queue. Alternatively they could have a single merged queue, which modelling predicts to be both more efficient and fairer.
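The bank example is small enough to model directly. Here is a Python sketch - every detail of it an assumption made for illustration: two cashiers, Poisson arrivals, exponential service times, and customers who pick a line at random when the queues are separate - comparing the average wait under the two layouts.

```python
import heapq
import random

random.seed(1)

NUM_SERVERS = 2        # cashiers (assumed)
ARRIVAL_RATE = 1.6     # customers per minute in total (assumed)
SERVICE_RATE = 1.0     # customers per minute per cashier (assumed)
CUSTOMERS = 100_000

def merged_queue_waits():
    """One shared FIFO queue feeding every cashier."""
    free_at = [0.0] * NUM_SERVERS      # when each cashier next falls idle
    heapq.heapify(free_at)
    t, waits = 0.0, []
    for _ in range(CUSTOMERS):
        t += random.expovariate(ARRIVAL_RATE)     # next customer arrives
        earliest = heapq.heappop(free_at)         # first cashier to free up
        start = max(t, earliest)
        waits.append(start - t)
        heapq.heappush(free_at, start + random.expovariate(SERVICE_RATE))
    return waits

def separate_queue_waits():
    """One queue per cashier; customers pick a line at random and cannot switch."""
    free_at = [0.0] * NUM_SERVERS
    t, waits = 0.0, []
    for _ in range(CUSTOMERS):
        t += random.expovariate(ARRIVAL_RATE)
        q = random.randrange(NUM_SERVERS)         # choose a line blindly
        start = max(t, free_at[q])
        waits.append(start - t)
        free_at[q] = start + random.expovariate(SERVICE_RATE)
    return waits

for name, waits in (("merged", merged_queue_waits()),
                    ("separate", separate_queue_waits())):
    print(f"{name:8} mean wait: {sum(waits) / len(waits):.1f} minutes")
```

With these made-up rates the merged queue cuts the average wait by more than half, simply because a single queue never leaves a cashier idle while somebody stands waiting in the wrong line.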
One wide-ranging result from Systems Modelling is that every 'stable' system in the world - one where, on average, work leaves as fast as it arrives - obeys Little's Result.
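In symbols, Little's Result says L = λW: the average number of jobs in a system equals the average rate at which jobs arrive, multiplied by the average time each job spends inside. So if customers arrive at a bank at ten per hour and each spends half an hour there, the bank contains on average 10 × 0.5 = 5 customers - whatever the queue layout, and whatever the distribution of service times.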
Techniques
There are at least five ways to go about this modelling - the first two are so simple that you've probably already done them without thinking about it.
Firstly, there's measurement. If you already have a system working, you can simply tweak it and measure the results. So, if your car's not going terribly fast, you might try taking the handbrake off, and see if that makes any difference. Naturally, this technique is useless if you don't have a car yet.
Next, there are simple back-of-envelope calculations, and intuition. Depending on how badly your web connection is delayed, you might guess roughly where the problem is, or type a few numbers into a calculator to reassure yourself that it couldn't possibly be your fault. These don't tend to give any detailed insight, though.
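To make that concrete, here is the kind of envelope arithmetic the entry has in mind, written out as a Python sketch; every number in it is invented purely for illustration.

```python
# Back-of-envelope: is a slow page load down to bandwidth or to round trips?
page_size_kb   = 500    # size of the page and its images (invented)
bandwidth_kbps = 512    # downlink speed (invented)
round_trips    = 4      # DNS lookup, TCP setup, request, a redirect (invented)
rtt_ms         = 120    # round-trip time to the server (invented)

transfer_s = page_size_kb * 8 / bandwidth_kbps   # time to push the bits
latency_s  = round_trips * rtt_ms / 1000         # time waiting on the wire

print(f"transfer {transfer_s:.1f}s + round trips {latency_s:.1f}s "
      f"= {transfer_s + latency_s:.1f}s in total")
```

Nearly eight seconds of raw transfer time against half a second of round trips points squarely at the connection rather than the server - which is about as much insight as an envelope ever gives.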
Thirdly, there's Computer Simulation. This has the disadvantage that it can be time-consuming - sometimes almost as slow as actually building the system and measuring it directly - and it can be complicated. On the plus side, it can model systems of arbitrary complexity.
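To give a flavour of how small such a simulation can be, here is a minimal sketch of a single cashier serving a single queue, using Lindley's recursion; the rates, as before, are invented for illustration.

```python
import random

random.seed(2)

ARRIVAL_RATE = 0.8    # customers per minute (assumed)
SERVICE_RATE = 1.0    # customers per minute (assumed)
CUSTOMERS = 200_000

# Lindley's recursion: each customer's wait is the previous customer's wait
# plus the previous service time, minus the gap between their arrivals,
# floored at zero (nobody waits a negative amount of time).
wait, total_wait = 0.0, 0.0
for _ in range(CUSTOMERS):
    total_wait += wait
    service = random.expovariate(SERVICE_RATE)
    gap = random.expovariate(ARRIVAL_RATE)
    wait = max(0.0, wait + service - gap)

print(f"estimated mean wait: {total_wait / CUSTOMERS:.1f} minutes")  # about 4
```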
Less information can be obtained from Operational Analysis, but it's quicker, and it makes very few assumptions about the system, allowing very general results to be shown. Unfortunately, very general results are often very obvious.
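The classic example is the Utilisation Law. Given nothing more than three measured quantities - how long you watched, how many jobs completed, and how long the device was busy - it ties throughput, utilisation and service time together without any assumptions about arrival patterns or service distributions. A sketch with invented figures:

```python
# Operational analysis works from directly observed counts and times.
T = 60.0    # observation window, seconds (invented)
C = 1200    # requests the disk completed in the window (invented)
B = 45.0    # seconds the disk spent busy (invented)

X = C / T   # throughput: 20 requests per second
U = B / T   # utilisation: 75%
S = B / C   # mean service time: 37.5 ms per request

# The Utilisation Law, U = X * S, follows from the definitions alone.
assert abs(U - X * S) < 1e-12
print(f"throughput {X:.0f} req/s, utilisation {U:.0%}, "
      f"service time {S * 1000:.1f} ms")
```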
Finally, Queueing Theory uses heavy maths to solve the problem, but even so it often has to make sweeping assumptions about every part of the model. While critics say that Queueing Theory is so far removed from reality as to be useless, at least it keeps people off the streets. Of course, it only applies to queueing systems, but that class is still pretty wide.
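As a taste of what the heavy maths buys you: for the simplest textbook case - one server, Poisson arrivals, exponential service times, the so-called M/M/1 queue - the answers come out in closed form, with no simulation required. A sketch using the same illustrative rates as the simulation above:

```python
# M/M/1 closed-form results, with the same invented rates as before.
lam, mu = 0.8, 1.0               # arrival and service rates, per minute

rho = lam / mu                   # utilisation: 80%
Wq  = lam / (mu * (mu - lam))    # mean wait in the queue: 4 minutes
L   = lam / (mu - lam)           # mean number in the system: 4
                                 # (Little's Result again: L = lam * W,
                                 # where W = Wq + 1/mu = 5 minutes)

print(f"utilisation {rho:.0%}, mean wait {Wq:.1f} min, "
      f"{L:.1f} customers in the system")
```

The four-minute wait agrees with what the simulation sketch above laboriously estimated - the trade-off in miniature: the formulae are instant, but only because the assumptions are strong.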
Choosing between these methods is a trade-off between the time and resources available and the accuracy required. There is also the question of how close the system is to some mathematical model - if it's a textbook example, using the textbook to solve the problem has got to be a good idea.