CHAPTER 7: INFORMATION
©1982, John E. Miller.
A computer is an information engine: it takes in and exhausts information. The purpose of this chapter is to show how a computer can represent information, that information can be quantified, and that it takes a definite amount of time to transmit a given amount of information. The transmission speed of a computer is one of the factors that determines how fast it can operate, and the memory size of a computer determines its capacity for storing information. The Preface recommended reading this chapter twice: once after the introduction, and again upon arriving back here. It is not necessary to read it in order to understand the first part of the book. However, as you progress it becomes increasingly important that you appreciate the limits of your particular machine, since it is often necessary to quantify how much memory and storage a particular application will require.
A Measure of Information
The basis for measuring information is the 'bit'. A bit has only two possible values: on or off, yes or no, one or zero, black or white, true or false, clockwise or counter-clockwise, etc. A single bit of information can be very useful, but only if its value is paired with its meaning. Someone or something must keep track of which bit means what in order to preserve the information embodied by the bit. If the values of three bits are known but not which questions they answer, then the bits probably would be useless. (If all three bits had the same value, we would not need to know which was which in order to use the information).
While it is true that a single identifiable bit can be useful, few single-bit situations are encountered compared with the variety of uses for larger groups of bits. More bits are not just more in quantity, but more in 'structure'. This section examines what we can do with increasingly larger groups of bits.
TWO OR THREE BITS
Consider two separate bits, each with a value of either 0 or 1. Both bits could be 0, both could be 1, or one could be 0 and the other 1. If we add the two values together, in three of the cases the sum can be represented with just one bit: 0+0=0, 0+1=1, and 1+0=1. In the fourth case, 1+1=2, the value is neither a 0 nor a 1. This situation is analogous to 9+1 in base ten; we have used up all of the one-bit numbers. Just as in base ten, a 'carry' is generated to the next higher position. The 2-bit binary number 10 represents the same number as our familiar base ten digit 2. Figure 7-1 summarizes the preceding in a binary addition table. If we continue to add 1 to the one's place, we generate a unique pattern of bits for each value from 0 to as large as we like:
     10 (two)
     +1
     --
     11 (three)
     +1
     ---
    100 (four)    (the carry cleared the low-order bits, just like 99 + 1 = 100 in base ten)
     +1
     ---
    101 (five)
     +1
     ---
    110 (six)
     +1
     ---
    111 (seven)
Thus, with 3 bits the numbers from 0 to 7 can be represented.
FOUR BITS
A group of four bits is called a 'nibble'. If we add 1 to 111, we get 1000, which is four bits long. Now, the three zeroes in 1000 can go through all the three-bit configurations generating 8 more numbers:
    1000 is  8
    1001 is  9
    1010 is 10
    1011 is 11
    1100 is 12
    1101 is 13
    1110 is 14
    1111 is 15
Note that in all of the above, the leading bit is a 1. Figure 7-2 shows how the value of the bit in each position determines the value of the number being represented.
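If your computer runs BASIC, you can generate these patterns yourself. The following sketch builds the 4-bit pattern for each value from 0 to 15 by repeatedly dividing by two and collecting the remainders; the string handling and IF statements may need minor adjustment for your dialect of BASIC.

    10 REM PRINT THE 4-BIT PATTERN FOR EACH VALUE FROM 0 TO 15
    20 FOR N = 0 TO 15
    30 LET V = N
    40 LET B$ = ""
    50 FOR P = 1 TO 4
    60 REM THE REMAINDER ON DIVISION BY 2 IS THE NEXT BIT
    70 LET R = V - INT(V / 2) * 2
    80 IF R = 0 THEN B$ = "0" + B$
    90 IF R = 1 THEN B$ = "1" + B$
    100 LET V = INT(V / 2)
    110 NEXT P
    120 PRINT B$; " IS"; N
    130 NEXT N
    140 END

Each division by two peels off the rightmost bit, so the remainders are placed into the string from right to left.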
'Hexadecimal notation' is used to represent the values of nibbles rather than 0's and 1's. The hexadecimal system is based on 16, which is naturally suited for nibbles. A system based on 16 needs sixteen symbols just as base ten uses 10 symbols. The symbols used are the ten decimal digits, 0-9, plus the first six letters of the alphabet, A-F. "A" represents ten (nibble 1010), "B" represents eleven, and "F" represents fifteen. Hex notation is not used in this book, nor in standard BASIC, but you may encounter it in some other literature. It can be used to represent an arbitrarily long binary number, e.g. 12345 (hex) represents the 20-bit binary number 0001 0010 0011 0100 0101! (The spaces were inserted for readability.)
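Although hexadecimal notation is not used in the rest of this book, a short experiment shows how naturally one hex symbol names one nibble. This is only a sketch, and it assumes your BASIC has the MID$ string function:

    10 REM PRINT THE HEXADECIMAL SYMBOL FOR EACH NIBBLE VALUE 0 TO 15
    20 LET H$ = "0123456789ABCDEF"
    30 FOR N = 0 TO 15
    40 PRINT N; " IS HEX "; MID$(H$, N + 1, 1)
    50 NEXT N
    60 END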
EIGHT BITS
With eight bits the number of configurations is 256. A group of eight bits is called a 'byte'. A byte can be used to represent a character: each character in a set of characters is assigned one of the codes. The most common coding scheme currently in use is the American Standard Code for Information Interchange (ASCII). ASCII is not used on every computer; it is a standard which the industry follows in order to share in the market of data entry and display devices. ASCII covers the more common characters, but makes no attempt to cover special symbols used in mathematics, medicine, music, etc. It is a 7-bit coding scheme that associates a character with 128 of the 256 patterns, so the bit on the left end of a byte is unused.
The program in Figure 7-3, when run on an ASCII system, shows which characters are associated with each code from 32 to 126. CHR$ is a string function that returns the character corresponding to a given code. The codes from 0 to 31 represent certain control characters which are used to communicate things like: "feed lines till the top of the next page is reached", "return the carriage to the left", "feed one line", etc. Figure 7-4 shows both decimal and hexadecimal equivalents for characters having ASCII codes. The assignment of characters to codes was made somewhat arbitrarily. People met and agreed upon the code. The only remarkable thing about the code is that the characters 0123456789 have codes 48 through 57, the upper case alphabet has codes 65 through 90, and the lower case alphabet occupies codes 97 through 122. The code for the capital of any letter of the alphabet is the code for the small letter, minus 32.
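If you would like to experiment before turning to Figure 7-4, a minimal sketch in the spirit of (but not necessarily identical to) the program in Figure 7-3 follows. It assumes an ASCII machine with the CHR$ function; the last line uses the minus-32 relationship to print the capitals A and Z from the codes for the small letters a (97) and z (122).

    10 REM PRINT EACH ASCII CODE FROM 32 TO 126 WITH ITS CHARACTER
    20 FOR C = 32 TO 126
    30 PRINT C; " "; CHR$(C)
    40 NEXT C
    50 REM A CAPITAL LETTER'S CODE IS THE SMALL LETTER'S CODE MINUS 32
    60 PRINT CHR$(97 - 32); CHR$(122 - 32)
    70 END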
While single characters may be meaningful in certain contexts, intelligent messages may be encoded in a sequence of bytes. This is not to say that the computer has any comprehension of what is stored in its memory by virtue of the meaning we attribute to the characters. In England, a computer's memory is called the "store" to avoid any inference that the computer has memories even vaguely similar to human ones. Computer memory and human memory differ more than they are similar. Current technology is far from making a computer as complex as your brain, even though scientists have estimated the brain's capacity in bits. Figure 7-5 gives an idea of what can be done with lots of bytes. A shorthand notation is used to denote large numbers of bytes. A kilobyte, or 1K, is 1024 bytes. (Kilo means one thousand in the metric system. Since 2 to the 10th power is 1024, we use the prefix KILO for 1024 in the binary system.) Similarly, a megabyte is a little more than one million bytes.
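Since these prefixes appear throughout the rest of the book, it is worth verifying the powers of two for yourself. A short check in BASIC (doubling in a loop rather than using the exponentiation operator, since some BASICs compute powers only approximately):

    10 REM THE POWERS OF TWO BEHIND THE PREFIXES K AND M
    20 LET K = 1
    30 FOR I = 1 TO 10
    40 LET K = K * 2
    50 NEXT I
    60 PRINT "1K ="; K; "BYTES"
    70 PRINT "1M ="; K * K; "BYTES"
    80 END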
16 BITS
A group of 16 bits can take on 65,536 different configurations, representing any value from 0 to 65,535. This is a reasonably large number for counting or indexing purposes. A 16-bit computer is so called because it can work with groups of 16 bits.
32 BITS
A group of 32 bits can represent any value from zero to 4,294,967,295. A computer that can work with 32-bit groups is called a 32-bit computer. It is said to have a 'word size' of 32. Different brands of computers have various word sizes from 4, 12, 16, 24, 32, 36, 48 to 60 bits! Actually, the binary number scheme is modified to represent negative numbers as well, using a form called 'two's complement'. We trade half of the range of a word to allow for equally large negative numbers (-2,147,483,648 to +2,147,483,647).
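To get a feel for two's complement without writing out 32 bits, here is a small illustration using 4-bit words. The negative of a number is formed by inverting every bit and then adding 1:

    0011  is +3
    1100  every bit inverted
    1101  is -3 (after adding 1)

As a check, 0011 + 1101 = 10000; the carry out of the fourth position is discarded, leaving 0000, which is exactly what 3 + (-3) should give. In a 4-bit word this scheme covers -8 through +7; in a 32-bit word it covers the range given above.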
All we have seen thus far is how to represent the integer numbers in binary. How is it that the binary number system can represent fractional numbers or very large numbers like ten to the 30th power? Are more than 32 bits used? Fractional numbers and very large numbers are represented with just 32 bits. Only the most significant bits are retained in the representation at any time. The magnitude of the number is always preserved by the computer.
Every school child would immediately recognize the pattern shown in Figure 7-6 as a common ruler. The binary fraction 0.1 represents one-half, 0.01 is one-fourth, ... 0.0001 is one-sixteenth. The fractions may be added: 0.11 is three-fourths, 0.1111 is fifteen-sixteenths. By adding another level of marks to the ruler, or one more bit to the binary fraction, any of the 32nds can be represented. Each time another bit is added to the representation, the resolution of the ruler is doubled. Turn Figure 7-2 sideways and you will see how this process can continue. Most computers use 24 of the 32 bits to represent the location of the value in the range 0 to 1. Each 24-bit combination would name a particular mark on the ruler. That would make a very finely divided ruler! It could measure an amount as small as about one sixteen-millionth of an inch! These 24 bits are called the 'mantissa' of the number.
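For example, the 24-bit mantissa pattern 1011 0000 0000 0000 0000 0000 (spaces added for readability) names the mark at one-half plus one-eighth plus one-sixteenth, that is, 8/16 + 2/16 + 1/16 = 11/16 of the way along the ruler.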
The remaining 8 bits, called the 'characteristic', are used to represent the magnitude, as large as 2 to the 127th power. The computer automatically uses special machine language instructions to operate on numbers represented in this form. To add two values, one must be 'normalized' to the other. The binary points must be aligned just like decimal points, and then the 24-bit patterns can be added as usual. To multiply two values, the characteristics are added and the mantissas are multiplied. (A review of logarithms and exponents would help you understand the algebra of 'floating point' arithmetic.)
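As a small worked example (written with decimal values so the arithmetic is easy to follow): to multiply 1.5 times 2 to the 3rd power (which is 12) by 1.25 times 2 to the 2nd power (which is 5), the characteristics are added, 3 + 2 = 5, and the mantissas are multiplied, 1.5 x 1.25 = 1.875, giving 1.875 times 2 to the 5th power = 60, exactly 12 times 5.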
As precise as the binary ruler is, there are still infinitely many numbers between the marks, no matter how many bits we use. The most shocking fact is that neither one-tenth nor any of its cousins is on the binary ruler. This is similar to the situation with one-third in base ten. The decimal fraction for 1/3 repeats endlessly. Anything less, say 0.33333333333, is not one-third. The binary fraction for 1/10 also repeats: 0.000110011001100... Therefore, one-tenth cannot be represented exactly in any finite number of bits. As evidence of this, consider the program in Figure 7-7. Try running the program on your computer; you may get slightly different results due to differences between your computer and the computer used here. This demonstrates that the error introduced by the inability to represent one-tenth can accumulate. The field of programming that deals with computing in ways that minimize such errors is called "Numerical Techniques".
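A sketch of such an accumulation experiment (not necessarily the exact program in Figure 7-7) is easy to write yourself. It adds one-tenth to a running sum one thousand times; on a binary machine the printed sum will usually not be exactly 100, and the size of the discrepancy varies from one computer to the next.

    10 REM ADD ONE-TENTH ONE THOUSAND TIMES AND COMPARE WITH 100
    20 LET S = 0
    30 FOR I = 1 TO 1000
    40 LET S = S + 0.1
    50 NEXT I
    60 PRINT "SUM ="; S
    70 PRINT "SUM - 100 ="; S - 100
    80 END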
Various configurations of bits are used appropriately by computer systems to represent different kinds of data. Floating point numbers are a compromise between realistic precision and economy of memory. As we shall see in Chapters 8 and 9, we can combine the above forms to represent a collection of aspects or attributes of some entity. A bit of information is not something like "George Washington was the father of the United States"; storing that sentence would take about 60 bytes.
Hardware Technology
This section will give you some idea of the nature and capacity of devices currently available for storing information. The cost of storing information on different devices can be compared by looking at the number of bits per dollar, or cents per bit, for each device. The cost of storing a bit includes not only operating the storage device but also the original cost of the device spread out over its lifetime of service. If all memory technologies were equally expensive, then we would choose the fastest one and there would be no competition. However, there is a wide range of costs, so we have many choices which involve compromises between speed and capacity. Intuitively it makes sense that the fastest memories generally have the least capacity and that the slowest devices have immense capacities. In the middle we have a range of choices between devices which are of different design but have approximately the same performance. This economy of bits tends to separate devices into two categories of use analogous to short-term memory and long-term storage. In practice, we see three kinds of memory: primary, secondary, and tertiary.
Primary memory was originally built with electromechanical relays, or 'flip-flops'. They were very slow, noisy, and power-consuming. Another relic used sound waves circulating in a large ring of liquid mercury. In the 50's and early 60's, 'core storage' was the fastest and most expensive memory for your money. Iron rings one-fiftieth of an inch in diameter were woven into a mesh of wires that allowed polarization (clockwise or counter-clockwise) of any ring (called a 'core'). The wires could be sensed to read the polarity of a core, which was associated with the value of a bit. The active switching elements and registers of core-storage computers were first made of vacuum tubes, then transistors, and then 'integrated circuits', called "IC's". IC's (Figure 7-8) are entire circuits optically reduced and photographically etched onto a silicon wafer. We now have primary memories consisting of 'chips' of IC's. Personal computers have anywhere from 1K to 64K (K=1024) bytes of memory and can access any particular byte within a millionth of a second. The computer the author is currently using has four megabytes of "16K chips", meaning that the memory is actually divided up into two hundred and fifty-six 16K chips. These chips occupy less than half a cubic foot of space and consume little power, but they cost around $7000 per megabyte! That memory can access any byte within a few nanoseconds (billionths of a second).
Secondary memory technology is characterized by reasonably quick access and relatively large capacity. The secondary memory of a computer system is usually 10-1000 times as large as the primary memory. A disk drive (Figure 7-9) usually serves this purpose. Disks are either hard metal or flexible plastic coated with a magnetic oxide. The disk drive spins the disk so that an 'arm' mechanism may position a read/write 'head' over a particular 'track' of bits. A track is a circular path traced by the head while the arm stays stationary and the disk makes one revolution.
The head does not touch the surface of the disk but floats on a cushion of air. When the electromagnetic head is energized, it aligns the magnetic field of the material below it. As the disk spins, the device may precisely position the head and transmit a series of bits to the write head. The bits are effectively written onto the disk. Later, the head may be positioned over a particular track and the previously written magnetic field may be sensed to read back the series of bits. The bits along one track are divided into fixed-length pieces, known by different names on different systems. Some of the names for divisions of a track are: blocks, grans, and sectors.
Hard disks (Figure 7-10) spin around a few thousand times per minute, allowing access times in the range of milliseconds. A plastic, or 'floppy', disk (Figure 7-11) spins more slowly but still affords good human-scale access times. A hard disk may hold anywhere from 20M to 600M (M=megabyte, or million bytes), whereas a floppy holds 100K to 400K bytes depending on the manufacturer and the 'density' of the organization. Hard disk drives initially cost around $20K and use a fair amount of energy to keep the disk(s) spinning and to move the servomechanism arm. Floppy disk drives cost around $1K and consume about as much energy as an average home appliance.
Tertiary memory technology is characterized by massive 'off-line' storage and access times measured in moments rather than fractions of a second. Magnetic reel-to-reel tapes (Figure 7-12) are used for this purpose, as well as some very bizarre devices. A standard reel of 1/2-inch tape is 2400 feet long. Information can be written on the tape in variable-length blocks at a density of 800 to 1600 bytes per inch. The bytes are written across the width of the tape in separate tracks. Since the tape unit must start and stop between blocks, a gap is created while the tape speeds up and slows down. These 'inter-record gaps' prevent us from using 100% of the tape, allowing only about 7 to 14 million bytes per reel. This is still a relatively cheap storage medium. Cassette tape recorders have even been used for this purpose, but they are very slow, storing the bits serially rather than in 8 parallel tracks. A professional-quality cassette cartridge is shown in Figure 7-13.
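The capacity figure above can be estimated with a little arithmetic. The block length and gap size used in the sketch below are assumptions chosen for illustration, not measurements of any particular tape unit:

    10 REM ESTIMATE THE USABLE CAPACITY OF A 2400-FOOT REEL
    20 REM ASSUMED FOR ILLUSTRATION: 800 BYTES PER INCH,
    30 REM 800-BYTE BLOCKS, AND 0.75-INCH INTER-RECORD GAPS
    40 LET L = 2400 * 12
    50 LET B = 800 / 800 + 0.75
    60 REM B IS THE TAPE LENGTH IN INCHES USED BY ONE BLOCK AND ITS GAP
    70 PRINT "BLOCKS PER REEL ="; INT(L / B)
    80 PRINT "MILLIONS OF BYTES ="; INT(L / B) * 800 / 1000000
    90 END

With these assumptions the answer comes out near thirteen million bytes, consistent with the 7-to-14-million range given above; larger blocks waste less tape on gaps.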
New memory technologies are evolving and will surpass the capacities and speeds given here. We are entering a generation of 'terabit' (trillion bit) devices. Also, a concept called 'virtual memory' allows a mini-computer to use a portion of its secondary memory to create the illusion that it has more primary memory than it really does. Inactive or low-demand 'pages' (usually around 1K bytes) of memory are kept on the secondary level and brought down to the primary level only when needed. The primary memory thus holds only the most active pages, the so-called 'working set'. The same sort of illusion can be maintained between secondary and tertiary memory to help users archive inactive files that would otherwise occupy secondary memory.
The Transmission of Information
The capacity of a line to transmit information is measured in bits per second. A rate of one bit per second is called one baud, after Baudot, a pioneer in telegraphy. Figure 7-14 summarizes the transmission rates of several kinds of systems. At 9600 baud, a video terminal's screen of 24 lines of 80 characters could be completely filled in 1.6 seconds! Transmission rates inside the computer can be almost inconceivable: 13 million bytes per second. Disk drives commonly transmit at rates in the hundreds of thousands of bytes per second. This means that the computer can transmit a handful of bytes very quickly from one point to another inside the boundary of the machine. If we want to transmit between computers in different cities, we are limited by the satellite or whatever channel is used to carry the signal.
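The 1.6-second figure comes from a calculation you can adapt to the exercises at the end of the chapter. The sketch below assumes that 8 bits are sent for each character:

    10 REM TIME TO FILL A 24-LINE BY 80-CHARACTER SCREEN AT 9600 BAUD
    20 REM ASSUMING 8 BITS SENT PER CHARACTER
    30 LET C = 24 * 80
    40 LET T = C * 8 / 9600
    50 PRINT "CHARACTERS ="; C
    60 PRINT "SECONDS ="; T
    70 END

(Many serial lines actually add start and stop bits to each character, which lengthens the time somewhat.)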
Rather than sending the bits of one byte down a single line serially, one after another, the bits may be sent over eight parallel lines. A computer may have many such paths, some as wide as 128 bits. While transmission between devices in the same room is often done in parallel, any longer range transmission is usually done in serial.
Exercises:
- How long would it take to transmit 1 megabyte over a 300 baud line?
- How long would it take to transmit an average telegram of 64 characters over a 300 baud line?
- If you have a cassette tape recorder as a storage device, design and carry out an experiment that will determine the capacity of a 30 minute audio cassette.
- If you are using a time-sharing system, find out how much space you are sharing with other users and how much primary memory you have in your 'partition'. Also find out the total capacity of the secondary memory and what your 'disk block quota' is.
- Compare the binary number system and the English system of weights and measures.
- Compare the binary number system and the time measuring system of music.
- Describe how the diagram of all possible nibbles could be extended; that is, what pattern would appear to its left, to its right, above it, and below it?
- The 'parity' of a group of bits tells us whether there is an odd or even number of bits on or off in the group. If the group of bits is a byte and our parity system is "even", then the parity bit for 00010111 would be 0, indicating that there are an even number of bits on in the byte. If there were an odd number of bits on in the byte, the parity bit would be on, making the total number of bits (including the parity bit) even. Figure out the parity bit for each binary number from 0000 to 1111. Describe how you could extend this pattern indefinitely without having to count the bits in each binary number. Parity schemes are used to check the integrity of data that is transmitted over noisy lines.