
The Mathematics Behind Digital Technology

Nowadays, when you watch TV or listen to a CD, you'd be forgiven for taking the high quality of the audio and video for granted. It seems a long time since we played warped, scratched and dusty vinyl records, or struggled to watch Match of the Day through a blizzard of atmospheric 'snow'. Today's digital technology reproduces with a high degree of fidelity the images captured at the filming location and the sounds recorded in the studio.

This Entry introduces a branch of mathematics which underpins modern communication technology and ensures that, among other things, the television pictures we watch and the recorded sounds we hear are of optimum quality. It's known as coding theory.

Going Digital

These days we use digital technology to record, communicate and replay. The analogue or real-life signals are converted into long strings of binary digits when they are recorded, stored and communicated, and then converted back into an analogue signal when played back to the viewer or listener.

One of the benefits of using binary numbers is that we can use some very clever mathematics to check whether the signals are correct at each stage of the process and, in many cases, to correct them automatically where they aren't. In practice, it's a very complicated business, and the actual methods used are far beyond the scope of this Entry. However, we'll examine one of the principles involved, and, using simple examples, we'll describe the mathematics behind one type of binary code and how errors can be automatically detected and corrected.

Each digital signal consists of a series of numbers. In the case of a CD, for example, each number might indicate the sound level at one particular point in time along one of the stereo channels. The signal is read by a laser which scans the binary information burnt into the microscopic indentations in the track of the disc. If the track is at a particular depth at a given point, it reads 0, whereas if it's indented (known as a pit1), then it reads 1. In the course of playing the entire CD, the laser will detect many millions of these numbers.

Let's say that each number we read can have one of 16 values at any point in time (in fact, standard compact disc technology uses a more complex and entirely different coding system, which we'll mention later). In binary, these values will be represented by the numbers 0000, 0001, 0010, through to 1111, ie the decimal values 0 to 15. 'Binary digit' is abbreviated to 'bit', so we call this a 4-bit message.

Now, consider the binary number 1001, representing a value of 9. This could represent, for example, the sound level on a Nigel Kennedy CD through the left-hand stereo channel at exactly 1 minute and 14 seconds into his rendition of the third movement of 'Spring' from Vivaldi's Four Seasons. What happens if there's an error when we read it? Maybe the CD is slightly warped, or there's a minute manufacturing fault in the track depth, or maybe someone jogs the CD player at that moment. If there's just a small error, then one of the bits of our number will be misread – a 0 will be represented as a 1 or a 1 as a 0 in our binary message.

Depending on which bit was incorrect, our number 1001 could be read by the CD player as any of the following: 0001, 1101, 1011 or 1000. But the trouble is, these are still valid numbers in our set. Instead of 9, they represent 1, 13, 11 or 8 respectively. Our CD player can't know that anything is wrong - it would just process it as if the violinist had played a sound according to level 13, say, rather than level 9 at that point, and so this would corrupt the signal we hear. In practice, a single error wouldn't usually be noticeable to the listener, but in some cases it might be heard as a small 'pop' or crackle on the soundtrack. With lots of errors throughout the recording, however, we would notice a deterioration in sound quality, not unlike the familiar effects of dust or scratches on a vinyl LP.
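To see this concretely, here is a minimal Python sketch (purely illustrative, not anything a CD player actually runs) which enumerates every single-bit corruption of the value 1001:

```python
# Flip each bit of the 4-bit value 1001 (decimal 9) in turn.
value = 0b1001

for position in range(4):                # bit 0 is the rightmost bit
    corrupted = value ^ (1 << position)  # XOR flips exactly one bit
    print(f"{corrupted:04b} (decimal {corrupted})")

# Prints 1000 (8), 1011 (11), 1101 (13) and 0001 (1): all of them are
# perfectly valid 4-bit values, so the player cannot tell anything is wrong.
```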

So, if an error occurs with this code, the CD player can't tell, as any single-bit error simply produces another valid number in our code. In order to tell if there's an error, we need to add some additional information to our message.

Error Detection

The simplest way is to add an extra bit to the end of the code number - a check bit - and choose its value according to a rule. For example, we might say that every word in our code has to have an even number of 1s. So 0000 would have a 0 appended to the end (giving it zero 1s in total), 0001 would have a 1 appended (making two 1s in total), and so on. Our new code looks like this:

Message bits | Check bit | Codeword
0000 | 0 | 00000
0001 | 1 | 00011
0010 | 1 | 00101
0011 | 0 | 00110
0100 | 1 | 01001
0101 | 0 | 01010
0110 | 0 | 01100
0111 | 1 | 01111
1000 | 1 | 10001
1001 | 0 | 10010
1010 | 0 | 10100
1011 | 1 | 10111
1100 | 0 | 11000
1101 | 1 | 11011
1110 | 1 | 11101
1111 | 0 | 11110

The first four digits of each codeword are our original message, and each complete codeword has an even number of 1s, or, as mathematicians would say, has even parity.
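As an aside for the programmatically minded, a few lines of Python (an illustrative sketch; the helper names are our own invention) can rebuild the table above and test whether a received word obeys the even-parity rule:

```python
def check_bit(message: str) -> str:
    """Return the bit which gives the 4-bit message an even number of 1s."""
    return str(message.count("1") % 2)

def has_even_parity(word: str) -> bool:
    """True if a received codeword contains an even number of 1s."""
    return word.count("1") % 2 == 0

# Rebuild the table: every 4-bit message, its check bit and the codeword.
for n in range(16):
    message = f"{n:04b}"
    print(message, check_bit(message), message + check_bit(message))

print(has_even_parity("10010"))   # True  - the valid codeword for 1001
print(has_even_parity("10110"))   # False - one bit misread, error detected
```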

Now, see what happens if an error occurs in one bit. No matter where it occurs, it invalidates our rule for the check bit. It will change a 0 into a 1 or a 1 into a 0, and we end up with an odd number of 1s as a result. So we know if there's an error and we can programme our equipment to check for this condition and take appropriate action. In the case of some communication systems, when we detect an error, we might ask for the signal to be retransmitted.

For the CD player, however, this isn't possible. The music would sound very strange and we'd lose a fair amount of rhythm if we asked the laser to go back and read each error again. If it were a manufacturing fault causing the error, then the laser might never be able to read the correct value in any case. All is not lost, though. All we need is a clever way not only to detect an error but also to make an informed guess as to what the correct value might have been.

In the case of the previous code, we may know that something is wrong, but it's not possible to tell which bit was in error. If we receive the message 10101, say, containing three 1s, then we know there's an error, but, assuming only one bit was in error, the intended message may have been any of five values: 00101, 11101, 10001, 10111 or 10100 (in decimal: 2, 14, 8, 11 or 10). We don't know which one of these is correct.
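We can confirm that ambiguity with a short Python sketch (again, just an illustration): flip each bit of 10101 in turn and keep the results which satisfy the even-parity rule.

```python
received = "10101"   # three 1s, so we already know something is wrong

candidates = []
for i in range(len(received)):
    # Flip the bit at position i.
    flipped = received[:i] + ("1" if received[i] == "0" else "0") + received[i + 1:]
    if flipped.count("1") % 2 == 0:       # still a valid even-parity codeword?
        candidates.append(flipped)

print(candidates)
# ['00101', '11101', '10001', '10111', '10100'] - five equally plausible
# corrections, so a single check bit can detect an error but never locate it.
```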

Error Correction and Hamming Codes

For automatic error correction we need to add more check bits. We'll illustrate this using one common set of codes used for 4-bit messages, where three check bits are appended, making each into a 7-bit codeword. These are known as Hamming codes, named after American mathematician Richard Hamming (1915 - 98).

There are many ways to encode the check bits. In the following example, we'll derive them according to these three rules:

  1. The first, second, fourth and fifth bits have even parity
  2. The first, third, fourth and sixth bits have even parity
  3. The second, third, fourth and seventh bits have even parity

This allows us to construct the following code. The columns show the binary number which is our message, the check bits we add to it, as calculated by the above rules, and the final codeword which we would transmit (and hopefully receive).

Message bits | Check bits | Codeword
0000 | 000 | 0000000
0001 | 111 | 0001111
0010 | 011 | 0010011
0011 | 100 | 0011100
0100 | 101 | 0100101
0101 | 010 | 0101010
0110 | 110 | 0110110
0111 | 001 | 0111001
1000 | 110 | 1000110
1001 | 001 | 1001001
1010 | 101 | 1010101
1011 | 010 | 1011010
1100 | 011 | 1100011
1101 | 100 | 1101100
1110 | 000 | 1110000
1111 | 111 | 1111111
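To show how the three parity rules generate the check bits, here is a small Python sketch (illustrative only; the function name hamming_encode is our own) which reproduces the table above:

```python
def hamming_encode(message: str) -> str:
    """Append three check bits to a 4-bit message, giving a 7-bit codeword.

    Codeword layout: m1 m2 m3 m4 c1 c2 c3.
    """
    m1, m2, m3, m4 = (int(b) for b in message)
    c1 = (m1 + m2 + m4) % 2   # rule 1: bits 1, 2, 4 and 5 have even parity
    c2 = (m1 + m3 + m4) % 2   # rule 2: bits 1, 3, 4 and 6 have even parity
    c3 = (m2 + m3 + m4) % 2   # rule 3: bits 2, 3, 4 and 7 have even parity
    return f"{message}{c1}{c2}{c3}"

# Reproduce the table: message bits, check bits, codeword.
for n in range(16):
    message = f"{n:04b}"
    codeword = hamming_encode(message)
    print(message, codeword[4:], codeword)
```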

The reason why this code is clever is a property known as the minimum distance of the code. Unlike our original 4-bit code, where an error in one bit merely turned one valid codeword into another, each codeword in a Hamming code differs from every other codeword in at least three bit positions. If there is an error in any one of the seven bits, then we can identify the unique nearest codeword which is only one bit different from it.

If we received the code 1010111, say, then this isn't a valid codeword in our list. If we search through the allowable codewords, we can see it's only one bit different from 1010101, so it's reasonable to correct it to this. As the minimum distance of our code is three bits, we know there is no other codeword which is only one bit different.
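A brute-force version of this nearest-codeword search takes only a few lines of Python (assuming the illustrative hamming_encode function from the sketch above):

```python
def hamming_distance(a: str, b: str) -> int:
    """Count the bit positions in which two equal-length words differ."""
    return sum(x != y for x, y in zip(a, b))

codewords = [hamming_encode(f"{n:04b}") for n in range(16)]

received = "1010111"
nearest = min(codewords, key=lambda word: hamming_distance(word, received))
print(nearest)                              # 1010101, just one bit away

# The minimum distance of the whole code really is three bits:
print(min(hamming_distance(a, b)
          for a in codewords for b in codewords if a != b))   # 3
```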

Automating the Error Correction Process

In the case of our Hamming code example, we can automate this process using a neat bit of mathematics. It involves a bit of modulo-2 matrix multiplication. Now, we appreciate that maths doesn't float everyone's boat, so if you really don't want to see the clever bit, look away now.

Remember those rules we used to construct our Hamming Code? We first need to write them in the form of a binary matrix. There is one row for each rule, and a column for each bit of the code. We will therefore have three rows and seven columns. Each cell has a 1 if the rule checks that bit and 0 if it doesn't. In our example, our first rule was that the first, second, fourth and fifth bits would have even parity, so the first row is 1101100, and so on. Our completed parity check matrix looks like this:

1101100
1011010
0111001

This is where the matrix multiplication comes in. If we multiply our received word by this matrix, working modulo 2, it returns a three-bit result. If that result is 000, the word satisfies all three parity rules and we take it to be correct. Otherwise, the result matches one of the columns of the parity check matrix, and the column it matches identifies the bit which is in error. So, using the received code 1010111 once again, we perform the following multiplication:

[ 1 1 0 1 1 0 0 ]     [ 1 ]
[ 1 0 1 1 0 1 0 ]     [ 0 ]       [ 0 ]
[ 0 1 1 1 0 0 1 ]  ×  [ 1 ]   =   [ 1 ]     (all additions carried out modulo 2)
                      [ 0 ]       [ 0 ]
                      [ 1 ]
                      [ 1 ]
                      [ 1 ]
The result 010 is the sixth column of our parity check matrix, indicating that the sixth bit of the received message 1010111 is in error. When we correct this bit, we transform the message into the nearest valid codeword, 1010101.

If we multiply every word we receive by the parity check matrix and then correct the bit indicated by the result, we will automatically convert the received signal into a string of the nearest valid codewords. This corrects every codeword which has no more than one error in it.
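Putting the whole decoding procedure together, here is an illustrative Python sketch (our own simplification, not production decoder code) which computes the three-bit result, matches it against the columns of the parity check matrix and corrects the offending bit:

```python
# The parity check matrix, one string per row (one row per parity rule).
H = ["1101100",
     "1011010",
     "0111001"]

def correct(received: str) -> str:
    """Correct at most one flipped bit in a 7-bit received word."""
    # Multiply the received word by each row of H, modulo 2.
    syndrome = "".join(
        str(sum(int(h) * int(r) for h, r in zip(row, received)) % 2)
        for row in H
    )
    if syndrome == "000":
        return received                  # all three rules hold: assume no error
    # Otherwise the result matches one column of H; that bit is in error.
    columns = ["".join(row[i] for row in H) for i in range(7)]
    i = columns.index(syndrome)
    return received[:i] + ("1" if received[i] == "0" else "0") + received[i + 1:]

print(correct("1010111"))   # 1010101 - the sixth bit is flipped back
print(correct("1010101"))   # 1010101 - a valid codeword is left alone
```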

Multiple Errors

The Hamming (7,4) code isn't capable of correcting more than one error bit, but other coding mechanisms exist which do.

The more errors we wish to detect, the more check bits we have to use and the longer our message becomes as a result. There is a trade-off between the efficiency or rate of a code – ie what proportion of the code is the actual message – and the error-correcting ability of it.

Different codes are suitable for different applications. In the case of a CD, we would wish to make the code as efficient as we can, so that we can store as much real data on the disc as possible. The precision manufacturing process ensures that errors are not widespread, and any clusters of localised errors, perhaps indicating a pressing defect or a scratch, are minimised by a process of interleaving, in which data from the same logical location are spread out across different physical locations on the disc.
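The interleaving idea itself is easy to picture. Here is a toy Python sketch (nothing like the real CD interleaver, which is far more elaborate) showing how a burst of damage on the disc becomes a scattering of isolated errors once the data are read back in logical order:

```python
# Toy block interleaver: write the data row by row, read it out column by column.
data = list("ABCDEFGHIJKL")          # twelve symbols of 'logical' data
rows, cols = 3, 4

interleaved = [data[r * cols + c] for c in range(cols) for r in range(rows)]
print("".join(interleaved))          # AEIBFJCGKDHL - the order stored on the disc

# A scratch wipes out three neighbouring symbols on the disc...
damaged = interleaved.copy()
for i in range(3, 6):
    damaged[i] = "?"

# ...but after de-interleaving, the burst is spread out as isolated errors,
# one per group of four, which an error-correcting code can easily put right.
deinterleaved = [damaged[c * rows + r] for r in range(rows) for c in range(cols)]
print("".join(deinterleaved))        # A?CDE?GHI?KL
```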

On the other hand, consider the example of a space probe sending back photographs from a distant planet. This will transmit a weak signal, one which will be susceptible to many errors caused by background noise. This application lends itself to a coding system which corrects multiple errors, ensuring that the received picture is as high a quality as possible. It will necessarily be an inefficient code, however, with maybe five or six times as many parity check bits as message bits, and it will take far longer to transmit as a result.

Back to those CDs

In practice, compact discs don't use Hamming codes; they use something known as a Reed-Solomon code2. This code is too complex to describe here, but it has the advantage of being applicable to the problem of detecting and correcting long bursts of errors, as may be caused by a scratched CD.

1 Not to be confused with 'bit'. A pit is a physical indentation on a disc, whereas a bit is a binary number (it's short for 'binary digit') which indicates the value of data at a single point.

2 Invented in 1960 by American mathematicians Irving Reed and Gus Solomon.
