Reprinted with corrections from The Bell System Technical Journal, Vol. 27, pp. 379-423, 623-656, July, October, 1948.
INTRODUCTION
The recent development of various methods of modulation such as PCM and PPM which exchange bandwidth for signal-to-noise ratio has intensified the interest in a general theory of communication. A basis for such a theory is contained in the important papers of Nyquist [1] and Hartley [2] on this subject. In the present paper we will extend the theory to include a number of new factors, in particular the effect of noise in the channel, and the savings possible due to the statistical structure of the original message and due to the nature of the final destination of the information.

The fundamental problem of communication is that of reproducing at one point either exactly or approximately a message selected at another point. Frequently the messages have meaning; that is they refer to or are correlated according to some system with certain physical or conceptual entities. These semantic aspects of communication are irrelevant to the engineering problem. The significant aspect is that the actual message is one selected from a set of possible messages. The system must be designed to operate for each possible selection, not just the one which will actually be chosen since this is unknown at the time of design.

If the number of messages in the set is finite then this number or any monotonic function of this number can be regarded as a measure of the information produced when one message is chosen from the set, all choices being equally likely. As was pointed out by Hartley the most natural choice is the logarithmic function. Although this definition must be generalized considerably when we consider the influence of the statistics of the message and when we have a continuous range of messages, we will in all cases use an essentially logarithmic measure.

The logarithmic measure is more convenient for various reasons:

1. It is practically more useful. Parameters of engineering importance such as time, bandwidth, number of relays, etc., tend to vary linearly with the logarithm of the number of possibilities. For example, adding one relay to a group doubles the number of possible states of the relays. It adds 1 to the base 2 logarithm of this number. Doubling the time roughly squares the number of possible messages, or doubles the logarithm, etc.

2. It is nearer to our intuitive feeling as to the proper measure. This is closely related to (1) since we intuitively measure entities by linear comparison with common standards. One feels, for example, that two punched cards should have twice the capacity of one for information storage, and two identical channels twice the capacity of one for transmitting information.

3. It is mathematically more suitable. Many of the limiting operations are simple in terms of the logarithm but would require clumsy restatement in terms of the number of possibilities.

The choice of a logarithmic base corresponds to the choice of a unit for measuring information. If the base 2 is used the resulting units may be called binary digits, or more briefly bits, a word suggested by J. W. Tukey.
A device with two stable positions, such as a relay or a flip-flop circuit, can store one bit of information. N such devices can store N bits, since the total number of possible states is 2^N and log_2 2^N = N. If the base 10 is used the units may be called decimal digits. Since
$$\log_2 M = \frac{\log_{10} M}{\log_{10} 2} = 3.32 \log_{10} M,$$

a decimal digit is about 3 1/3 bits.
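As a quick numerical check of these unit conversions (an illustrative sketch added here, not part of the original paper), the following Python fragment evaluates the storage of N two-state devices and the conversion factor between decimal digits and bits:

```python
import math

# N two-state devices have 2**N possible states, i.e. log2(2**N) = N bits.
N = 16
print(math.log2(2 ** N))              # 16.0

# Converting a count of M possibilities from decimal to binary units:
# log2(M) = log10(M) / log10(2) = 3.32... * log10(M)
M = 1000
print(math.log10(M) / math.log10(2))  # ≈ 9.966
print(3.32 * math.log10(M))           # ≈ 9.96

# One decimal digit (10 possibilities) is therefore about 3 1/3 bits.
print(math.log2(10))                  # ≈ 3.3219
```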
We wish to consider certain general problems involving communication systems. To do this it is first necessary to represent the various elements involved as mathematical entities, suitably idealized from their physical counterparts. We may roughly classify communication systems into three main categories: discrete, continuous and mixed. By a discrete system we will mean one in which both the message and the signal are a sequence of discrete symbols. A typical case is telegraphy where the message is a sequence of letters and the signal a sequence of dots, dashes and spaces. A continuous system is one in which the message and signal are both treated as continuous functions, e.g., radio or television. A mixed system is one in which both discrete and continuous variables appear, e.g., PCM transmission of speech.
We first consider the discrete case. This case has applications not only in communication theory, but also in the theory of computing machines, the design of telephone exchanges and other fields. In addition the discrete case forms a foundation for the continuous and mixed cases which will be treated in the second half of the paper.
PART I: DISCRETE NOISELESS SYSTEMS

1. THE DISCRETE NOISELESS CHANNEL

Teletype and telegraphy are two simple examples of a discrete channel for transmitting information. Generally, a discrete channel will mean a system whereby a sequence of choices from a finite set of elementary symbols S_1, ..., S_n can be transmitted from one point to another. Each of the symbols S_i is assumed to have a certain duration in time t_i seconds (not necessarily the same for different S_i, for example the dots and dashes in telegraphy). It is not required that all possible sequences of the S_i be capable of transmission on the system; certain sequences only may be allowed. These will be possible signals for the channel. Thus in telegraphy suppose the symbols are:

(1) A dot, consisting of line closure for a unit of time and then line open for a unit of time;

(2) A dash, consisting of three time units of closure and one unit open;

(3) A letter space consisting of, say, three units of line open;

(4) A word space of six units of line open.

We might place the restriction on allowable sequences that no spaces follow each other (for if two letter spaces are adjacent, it is identical with a word space). The question we now consider is how one can measure the capacity of such a channel to transmit information.
In the teletype case where all symbols are of the same duration, and any sequence of the 32 symbols is allowed the answer is easy. Each symbol represents five bits of information. If the system transmits n symbols per second it is natural to say that the channel has a capacity of 5n bits per second. This does not mean that the teletype channel will always be transmitting information at this rate - this is the maximum possible rate and whether or not the actual rate reaches this maximum depends on the source of information which feeds the channel, as will appear later.
In the more general case with different lengths of symbols and constraints on the allowed sequences, we make the following definition:
Definition: The capacity C of a discrete channel is given by
$$C = \lim_{T \to \infty} \frac{\log N(T)}{T}$$
where N(T) is the number of allowed signals of duration T.
It is easily seen that in the teletype case this reduces to the previous result. It can be shown that the limit in question will exist as a finite number in most cases of interest. Suppose all sequences of the symbols S_1, ..., S_n are allowed and these symbols have durations t_1, ..., t_n. What is the channel capacity? If N(t) represents the number of sequences of duration t we have
$$N(t) = N(t - t_1) + N(t - t_2) + \cdots + N(t - t_n).$$
The total number is equal to the sum of the numbers of sequences ending in S_1, S_2, ..., S_n and these are N(t - t_1), N(t - t_2), ..., N(t - t_n), respectively. According to a well-known result in finite differences, N(t) is then asymptotic for large t to X_0^t where X_0 is the largest real solution of the characteristic equation:
$$X^{-t_1} + X^{-t_2} + \cdots + X^{-t_n} = 1$$

and therefore C = log X_0.
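This capacity can be evaluated numerically: the left-hand side of the characteristic equation is strictly decreasing in X for X > 1, so the largest real root can be found by bisection. The sketch below (not part of the original paper) checks the teletype case and, purely for illustration, applies the same formula to the dot and dash durations defined earlier while ignoring the space symbols and the constraint on spaces:

```python
import math

def capacity(durations):
    """C = log2(X0) in bits per unit time, where X0 is the largest real root
    of sum_i X**(-t_i) = 1 (all sequences of the symbols allowed)."""
    f = lambda x: sum(x ** (-t) for t in durations) - 1.0
    lo, hi = 1.0 + 1e-9, 2.0
    while f(hi) > 0:                 # expand the bracket until f changes sign
        hi *= 2.0
    for _ in range(200):             # bisection: f(lo) > 0 >= f(hi)
        mid = (lo + hi) / 2.0
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return math.log2((lo + hi) / 2.0)

# Teletype: 32 symbols of equal (unit) duration -> C = log2(32) = 5 bits
# per symbol time, in agreement with the earlier result.
print(capacity([1] * 32))            # 5.0

# Dots (2 time units) and dashes (4 time units) alone, with no space symbols
# and no sequence constraints -- an illustrative simplification only:
print(capacity([2, 4]))              # ≈ 0.347 bits per unit time
```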
2. THE DISCRETE SOURCE OF INFORMATION

We now consider the information source. How is an information source to be described mathematically, and how much information in bits per second is produced in a given source? The main point at issue is the effect of statistical knowledge about the source in reducing the required capacity of the channel, by the use of proper encoding of the information. In telegraphy, for example, the messages to be transmitted consist of sequences of letters. These sequences, however, are not completely random. In general, they form sentences and have the statistical structure of, say, English. The letter E occurs more frequently than Q, the sequence TH more frequently than XP, etc. The existence of this structure allows one to make a saving in time (or channel capacity) by properly encoding the message sequences into signal sequences. This is already done to a limited extent in telegraphy by using the shortest channel symbol, a dot, for the most common English letter E; while the infrequent letters, Q, X, Z are represented by longer sequences of dots and dashes. This idea is carried still further in certain commercial codes where common words and phrases are represented by four- or five-letter code groups with a considerable saving in average time. The standardized greeting and anniversary telegrams now in use extend this to the point of encoding a sentence or two into a relatively short sequence of numbers.
We can think of a discrete source as generating the message, symbol by symbol. It will choose successive symbols according to certain probabilities depending, in general, on preceding choices as well as the particular symbols in question. A physical system, or a mathematical model of a system which produces such a sequence of symbols governed by a set of probabilities, is known as a stochastic process [3]. We may consider a discrete source, therefore, to be represented by a stochastic process. Conversely, any stochastic process which produces a discrete sequence of symbols chosen from a finite set may be considered a discrete source. This will include such cases as:
1. Natural written languages such as English, German, Chinese.
2. Continuous information sources that have been rendered discrete by some quantizing process. For example, the quantized speech from a PCM transmitter, or a quantized television signal.
3. Mathematical cases where we merely define abstractly a stochastic process which generates a sequence of symbols. The following are examples of this last type of source.
(A) Suppose we have five letters A, B, C, D, E which are chosen each with probability .2, successive choices being independent. This would lead to a sequence of which the following is a typical example.
BDCBCECCCADCBDDAAECEEA ABBDAEECACEEBAEECBCEAD.
This was constructed with the use of a table of random numbers [4].

(B) Using the same five letters let the probabilities be .4, .1, .2, .2, .1, respectively, with successive choices independent. A typical message from this source is then:

AAACDCBDCEAADADACEDA EADCABEDADDCECAAAAAD.
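The zero-order sources (A) and (B) are easy to simulate. A minimal sketch (not from the paper; the original sequences were drawn from a printed table of random numbers, so these outputs will differ):

```python
import random

random.seed(0)          # any seed; the paper used a table of random numbers
letters = "ABCDE"

# Source (A): the five letters chosen independently, each with probability .2.
print("".join(random.choice(letters) for _ in range(44)))

# Source (B): independent choices with probabilities .4, .1, .2, .2, .1.
probs = [0.4, 0.1, 0.2, 0.2, 0.1]
print("".join(random.choices(letters, weights=probs, k=44)))
```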
(C) A more complicated structure is obtained if successive symbols are not chosen independently but their probabilities depend on preceding letters. In the simplest case of this type a choice depends only on the preceding letter and not on ones before that. The statistical structure can then be described by a set of transition probabilities p_i(j), the probability that letter i is followed by letter j. The indices i and j range over all the possible symbols. A second equivalent way of specifying the structure is to give the "digram" probabilities p(i, j), i.e., the relative frequency of the digram i j. The letter frequencies p(i) (the probability of letter i), the transition probabilities p_i(j) and the digram probabilities p(i, j) are related by p(i, j) = p(i) p_i(j), with p(i) = Σ_j p(i, j) and Σ_j p_i(j) = Σ_i p(i) = Σ_{i,j} p(i, j) = 1.
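A first-order source of type (C) can be simulated in the same way: each letter is drawn according to the transition probabilities p_i(j) of the preceding letter. The transition table below is only illustrative, since the paper's own example table is not reproduced on this page:

```python
import random

random.seed(1)
letters = "ABC"

# Hypothetical transition probabilities p_i(j); each row lists the
# probabilities of A, B, C following the given letter and must sum to 1.
p = {
    "A": [0.0, 0.8, 0.2],
    "B": [0.5, 0.5, 0.0],
    "C": [0.5, 0.4, 0.1],
}

def generate(n, start="A"):
    """Each choice depends only on the preceding letter (a Markoff source)."""
    out, current = [start], start
    for _ in range(n - 1):
        current = random.choices(letters, weights=p[current])[0]
        out.append(current)
    return "".join(out)

print(generate(40))
```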
[3] See, for example, S. Chandrasekhar, "Stochastic Problems in Physics and Astronomy," Reviews of Modern Physics, v. 15, No. 1, January 1943, p. 1.
[4] Kendall and Smith, Tables of Random Sampling Numbers, Cambridge, 1939.
[The intervening pages of the original paper, covering the remainder of Section 2 through Section 11, are not reproduced here.]
12. EQUIVOCATION AND CHANNEL CAPACITY
If the channel is noisy it is not in general possible to reconstruct the original message or the transmitted signal with certainty by any operation on the received signal E. There are, however, ways of transmitting the information which are optimal in combating noise. This is the problem which we now consider.
Suppose there are two possible symbols 0 and 1, and we are transmitting at a rate of 1000 symbols per second with probabilities p_0 = p_1 = 1/2. Thus our source is producing information at the rate of 1000 bits per second. During transmission the noise introduces errors so that, on the average, 1 in 100 is received incorrectly (a 0 as 1, or 1 as 0). What is the rate of transmission of information? Certainly less than 1000 bits per second since about 1% of the received symbols are incorrect. Our first impulse might be to say the rate is 990 bits per second, merely subtracting the expected number of errors. This is not satisfactory since it fails to take into account the recipient's lack of knowledge of where the errors occur. We may carry it to an extreme case and suppose the noise so great that the received symbols are entirely independent of the transmitted symbols. The probability of receiving 1 is 1/2 whatever was transmitted and similarly for 0. Then about half of the received symbols are correct due to chance alone, and we would be giving the system credit for transmitting 500 bits per second while actually no information is being transmitted at all. Equally "good" transmission would be obtained by dispensing with the channel entirely and flipping a coin at the receiving point.
Evidently the proper correction to apply to the amount of information transmitted is the amount of this information which is missing in the received signal, or alternatively the uncertainty when we have received a signal of what was actually sent. From our previous discussion of entropy as a measure of uncertainty it seems reasonable to use the conditional entropy of the message, knowing the received signal, as a measure of this missing information. This is indeed the proper definition, as we shall see later. Following this idea the rate of actual transmission, R, would be obtained by subtracting from the rate of production (i.e., the entropy of the source) the average rate of conditional entropy.
$$R = H(x) - H_y(x)$$
The conditional entropy H_y(x) will, for convenience, be called the equivocation. It measures the average ambiguity of the received signal.
In the example considered above, if a 0 is received the a posteriori probability that a 0 was transmitted is .99, and that a 1 was transmitted is .01. These figures are reversed if a 1 is received. Hence
$$H_y(x) = -[0.99 \log 0.99 + 0.01 \log 0.01] = 0.081 \text{ bits per symbol}$$
or 81 bits per second. We may say that the system is transmitting at a rate 1000 - 81 = 919 bits per second. In the extreme case where a 0 is equally likely to be received as a 0 or 1 and similarly for 1, the a posteriori probabilities are 1/2, 1/2 and
$$H_y(x) = -\left[\tfrac{1}{2} \log \tfrac{1}{2} + \tfrac{1}{2} \log \tfrac{1}{2}\right] = 1 \text{ bit per symbol}$$
or 1000 bits per second. The rate of transmission is then 0 as it should be.
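The figures in this example are easy to verify numerically. A short check (added here as an illustration, not part of the paper):

```python
import math

def equivocation(p):
    """Conditional entropy H_y(x) in bits per symbol when, given the received
    symbol, the two a posteriori probabilities are p and 1 - p."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

rate = 1000  # symbols (and source bits) per second

# 1% of the symbols received in error:
e = equivocation(0.01)
print(e)                                # ≈ 0.0808 bits per symbol
print(rate * (1 - e))                   # ≈ 919 bits per second transmitted

# Received symbols independent of those sent: equivocation is 1 bit per
# symbol and the rate of transmission drops to zero.
print(rate * (1 - equivocation(0.5)))   # 0.0

# Per Theorem 10 below, a correction channel of capacity rate * e, about
# 81 bits per second, would suffice to correct essentially all the errors.
print(rate * e)                         # ≈ 80.8
```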
The following theorem gives a direct intuitive interpretation of the equivocation and also serves to justify it as the unique appropriate measure. We consider a communication system and an observer (or auxiliary device) who can see both what is sent and what is recovered (with errors due to noise). This observer notes the errors in the recovered message and transmits data to the receiving point over a "correction channel" to enable the receiver to correct the errors. The situation is indicated schematically in Fig. 8.
Theorem 10: If the correction channel has a capacity equal to H_y(x) it is possible to so encode the correction data as to send it over this channel and correct all but an arbitrarily small fraction ε of the errors. This is not possible if the channel capacity is less than H_y(x).