On Wednesday, Feb. 1, the Natural Science and Mathematics Colloquium, “Trusty Transmission Techniques,” was given by Alissa Crans, Associate Professor of Mathematics at Loyola Marymount University. The talk centered on algebraic coding theory, a branch of math concerned with sending and compressing digital information – in this case, using binary code.
Before beginning her presentation, Crans gave the example of text messaging and the errors that can occur when a text is sent. In her scenario, three attempts were made to send a text message answering a dinner request, with 1 meaning “yes” and 0 meaning “no,” and each time an error occurred. In the first attempt, a single 1 was sent, but the 1 was flipped to a 0 in transit, so a “no” was received instead of a “yes”; the error was neither detectable nor correctable. In the second attempt, two 1’s were sent (11), but again one of the 1’s was flipped, and 01 arrived. The “yes” was still lost, but this time the error was detectable – the receiver could tell something had gone wrong – though not correctable. The third attempt finally allowed the “yes” to survive. Three 1’s were sent (111), one of which was flipped (101), yet the answer did not change: since most of the received bits were still 1’s, the error was both detectable and correctable. This, according to Crans, is what is known as “majority rules decoding.”
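Crans’ three-attempt example is a repetition code, and majority rules decoding can be sketched in a few lines of Python. This is an illustrative sketch based on the talk’s description, not code Crans presented; the function names are the author’s own.

```python
def encode(bit):
    """Repetition code: send each bit three times (e.g. 1 -> [1, 1, 1])."""
    return [bit] * 3

def decode(received):
    """Majority rules decoding: output whichever bit appears most often."""
    return 1 if sum(received) >= 2 else 0

# The third attempt from the talk: 111 is sent, one bit is flipped to give
# 101, but the majority of the received bits is still 1, so "yes" survives.
sent = encode(1)         # [1, 1, 1]
received = [1, 0, 1]     # the channel flips the middle bit
print(decode(received))  # 1, i.e. still "yes"
```

With only two copies of the bit, as in the second attempt, a single flip produces a tie (01), which is why that error was detectable but not correctable.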
Crans presented a number of PowerPoint slides containing definitions, methods, and probability results regarding coding theory and the detection of transmission errors. One of Crans’ first slides defined coding theory as “the branch of mathematics concerned with developing methods of reliably transmitting information,” after which she presented the procedure by which information is encoded, transmitted, and decoded. The procedure is as follows: the message begins at the “message source,” then is put into the encoder. Once the message enters the channel that carries it to the decoder, however, transmission errors become possible. By the time the message gets to the “user” (whoever is intended to read the message), it may be incorrect.
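The source-to-user procedure Crans described can be simulated end to end. The sketch below is an assumption-laden illustration (the `channel` function and its flip probability are the author’s invention, not from the talk), combining a noisy channel with the triple-repetition encoder and majority rules decoder from her example.

```python
import random

def channel(bits, flip_prob):
    """Noisy channel: flips each bit independently with probability flip_prob."""
    return [bit ^ 1 if random.random() < flip_prob else bit for bit in bits]

# Message source -> encoder -> channel -> decoder -> user
message = [1, 0, 1]                                  # message source
encoded = [b for bit in message for b in [bit] * 3]  # encoder: triple each bit
received = channel(encoded, flip_prob=0.1)           # channel may corrupt bits
decoded = [1 if sum(received[i:i + 3]) >= 2 else 0   # decoder: majority rules
           for i in range(0, len(received), 3)]      # the user reads `decoded`
```

As long as at most one of each group of three bits is flipped, `decoded` matches `message`; two or more flips in one group, which is less likely, still defeat the decoder.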
Among the tools for detecting errors is the Hamming distance – in short, the number of positions at which two code words differ. In order to detect a transmission error, the distance between any two code words must be at least two; to correct an error, it must be at least three. The smallest distance between any pair of code words is called the minimum distance, and it is desirable that the minimum distance be large: the larger it is, the more errors are not only detectable but correctable.
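Hamming distance is simple to compute, and a small sketch makes the thresholds concrete. The function name below is the author’s choice; the code words come from Crans’ repetition-code example, whose two code words 000 and 111 sit at distance three, which is why a single flipped bit could be corrected.

```python
def hamming_distance(a, b):
    """Number of positions at which two equal-length code words differ."""
    return sum(x != y for x, y in zip(a, b))

# The repetition code has two code words, 000 and 111, at distance 3:
print(hamming_distance("000", "111"))  # 3

# A received word like 101 differs from 111 in one position but from 000
# in two, so it is decoded to the nearer code word, 111.
print(hamming_distance("111", "101"))  # 1
print(hamming_distance("000", "101"))  # 2
```

In general, a code with minimum distance d can detect up to d − 1 errors and correct up to ⌊(d − 1)/2⌋, which matches the thresholds of two and three given in the talk.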
Opinions about the lecture were largely similar. When asked what he thought of it, sophomore Kevin Tennyson said, “Overall it was interesting; I did not realize how this stuff could be done at such a basic level. This is also an area of mathematics that I was unaware existed.”
Senior Todd Newman, when asked the same question, said, “It was pretty interesting, and the kind of field of research that is important, if not altogether useful.” However, he said he thought it was going to be “more about broad practical applications of math instead of one specific application of math.”
According to Tennyson and Newman, this application of math is worth knowing because this branch of mathematics allows problems regarding transmission errors to be solved at a basic level, even if detecting those errors can sometimes be difficult.