Recovering Vanishing Images in Paintings

On Wednesday, Feb. 8, Jennifer Mass, senior scientist and head of the Scientific Research and Analysis Laboratory at Winterthur Museum in Delaware, presented her lecture, “When masterpieces meet x-rays: Recovering hidden and vanishing images in paintings,” as part of this semester’s Natural Science and Mathematics (NS&M) Colloquium. Mass discussed how x-rays can be used to reveal why a painting is degrading and whether it has been altered in any way.

“There are only about a hundred or so chemists in the U.S. who spend their careers on the scientific study of objects of art,” began Mass. This type of scientific study is important because it can prove the authenticity of a piece of art, which is invaluable information to people purchasing expensive paintings.

The scientific study of art can also reveal what art conservators have done to a painting over the years. According to Mass, “20 percent of what you’re looking at [in a painting] is material that’s been added by art restorers and conservators.” Only recently have conservators begun keeping detailed records of what was done during past and present restorations.

Modern science has changed the way conservators work with damaged pieces of art. For example, Mass explained that in the nineteenth century, plaster was used to restore ceramics. However, it is now understood that the sulfates in plaster can cause further damage to a ceramic piece rather than preserving it.

Using x-rays to study how a piece of art interacts with its environment is especially valuable because x-rays are non-destructive: no sample needs to be removed from the artwork, so no part of it is permanently damaged.

One painting that Mass has worked with extensively is Matisse’s “Le Bonheur de Vivre” (The Joy of Life), painted in the winter of 1905–1906. “Matisse had very little money for artist materials,” said Mass. “He used very inexpensive paints.” Because of this low-quality paint, considerable flaking and major color changes have occurred. For instance, the warm yellow of some foliage in the painting now appears a dull tan.

By using different kinds of x-ray analysis, Mass and other scientists were able to determine the composition of the paint in Matisse’s work, which contains high concentrations of cadmium, lead, and zinc. Cadmium is the key element in the yellow pigment that faded. Paint continues to react with its environment even after it has finished drying. “It is continually reacting and undergoing hydrolysis,” said Mass.

One of Mass’s most significant discoveries came through the technique of confocal x-ray fluorescence (XRF) microscopy, which produces a three-dimensional map of paint layers. “It gets information about all the paint layers individually, instead of all the paint layers simultaneously,” said Mass. A traditional x-ray provides data about all the chemicals present in a section of a painting at once, while confocal XRF records how the chemical composition changes as the x-ray beam passes through the paint layers in sequence.

Using confocal XRF microscopy, Mass was able to reproduce a full-color 1918 painting buried beneath a family portrait painted by N.C. Wyeth in the 1920s. “XRF intensity mapping can be used to digitally reproduce a buried painting,” said Mass. “It allowed us to identify the chemistry… of the buried painting.” The technique can also reveal whether an artist painted over anything while creating a work or changed the position of objects or subjects in it. Another technique, infrared reflectography, developed in the 1960s, can show an artist’s sketches underneath the finished surface of a painting.

Next week’s NS&M Colloquium, on Wednesday, Feb. 15 at 4:40 p.m. in Schaefer 106, will feature UMBC’s Bernard Lohr on the physiology of birdsong.

Algebraic Coding Theory, Transmission Techniques

On Wednesday, Feb. 1, the Natural Science and Mathematics Colloquium lecture, “Trusty Transmission Techniques,” was given by Alissa Crans, Associate Professor of Mathematics at Loyola Marymount University. It centered on algebraic coding theory, the branch of mathematics concerned with sending and compressing digital information – in this case, using binary code.

Crans opened her presentation with an example of errors in transmission when sending a text message. In her scenario, 1 meant “yes” and 0 meant “no,” and three attempts were made to answer a dinner invitation. In the first attempt, a “yes” (1) was sent, but an error flipped the 1 to a 0, so a “no” was received; the error was neither detectable nor correctable. In the second attempt, two 1’s (11) were sent, and again an error flipped one of them, giving 01. The “yes” was still lost, but this time the error was detectable – 01 is not a valid code word – though not correctable. The third attempt finally let the “yes” survive: three 1’s (111) were sent, one was flipped (101), but the message could still be recovered, so the error was both detectable and correctable. This, according to Crans, is what is known as “majority rules decoding.”
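
A minimal sketch of the three-bit repetition scheme Crans described, written in Python; the function and variable names here are illustrative, not from the talk:

```python
import random

def encode(bit):
    """Repetition code: transmit three copies of the message bit."""
    return [bit, bit, bit]

def noisy_channel(codeword, flip_prob=0.1):
    """Flip each transmitted bit independently with probability flip_prob."""
    return [b ^ 1 if random.random() < flip_prob else b for b in codeword]

def decode(received):
    """Majority rules decoding: output whichever bit appears most often."""
    return 1 if sum(received) >= 2 else 0

# A "yes" survives a single flipped bit: 111 received as 101 still decodes to 1.
assert decode([1, 0, 1]) == 1
print(decode(noisy_channel(encode(1))))
```

As long as at most one of the three bits is flipped, the majority vote recovers the original message.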

Crans presented a series of PowerPoint slides containing definitions, methods, and probability results related to coding theory and the detection of transmission errors. One of her first slides defined coding theory as “the branch of mathematics concerned with developing methods of reliably transmitting information,” after which she presented the procedure by which information is encoded and transmitted. The message begins at the “message source” and is put through an encoder. Once the message enters the channel that carries it to the decoder, however, transmission errors become possible, so by the time the message reaches the “user” (whoever is intended to read it), it may be incorrect.

Among the tools for detecting errors is the Hamming distance – in short, the number of positions in which two code words differ. To detect a single transmission error, code words must be at least two apart; to correct one, they must be at least three apart. The smallest distance between any two code words in a code is called its minimum distance, and it is desirable for the minimum distance to be large: the larger it is, the more errors can be not only detected but corrected.
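
A short Python sketch of these ideas, again illustrative rather than taken from the lecture; it computes Hamming distances and uses the standard facts that a minimum distance d detects up to d - 1 errors and corrects up to (d - 1)/2, rounded down:

```python
from itertools import combinations

def hamming_distance(a, b):
    """Number of positions at which two equal-length code words differ."""
    return sum(x != y for x, y in zip(a, b))

def minimum_distance(code):
    """Smallest Hamming distance between any two distinct code words."""
    return min(hamming_distance(a, b) for a, b in combinations(code, 2))

# The three-bit repetition code {000, 111} has minimum distance 3,
# so it detects up to 2 errors and corrects 1.
code = ["000", "111"]
d = minimum_distance(code)
print(d, "-> detects", d - 1, "errors, corrects", (d - 1) // 2)
```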

Opinions about the lecture were very similar. When asked what he thought about the lecture, sophomore Kevin Tennyson said, “Overall it was interesting; I did not realize how this stuff could be done at such a basic level. This is also an area of mathematics that I was unaware existed.”

Senior Todd Newman, when asked the same question, said, “It was pretty interesting, and the kind of field of research that is important, if not altogether useful.” However, he said he thought it was going to be “more about broad practical applications of math instead of one specific application of math.”

According to Tennyson and Newman, this application of mathematics is worth knowing because it shows that problems involving transmission errors can be handled at a basic level, even if detecting those errors can sometimes be difficult.

Chemistry Professor Discusses Fight Against Tuberculosis

On Nov. 16, for the last lecture of the semester in the Natural Science and Mathematics (NS&M) Colloquium Series, Cynthia S. Dowd, a researcher and assistant professor of chemistry at George Washington University, discussed the therapeutics now being developed to combat Mycobacterium tuberculosis, the bacterium that causes tuberculosis (TB).

The deadly disease tuberculosis has affected the world for hundreds of years.  “There have been traces of TB found in Egyptian mummies dating back to 3000 or 2400 B.C.,” said Dowd. She explained that TB is a disease of the lungs with symptoms including weight loss and a cough that contains blood. It is a contagious disease that can be passed from the infected person through coughs and sneezes. “Tuberculosis kills two million people per year,” Dowd stated.

TB is a tough disease to fight because it exists in two forms, latent and active. Dowd explained that with latent TB, the person is just a carrier and does not exhibit any symptoms. However, latent TB can become active if a person contracts Human Immunodeficiency Virus (HIV). Dowd remarked, “In the world, there are over eight million active cases of tuberculosis, but two to three billion cases of latent TB.” So far, nothing has been found that will permanently kill latent TB.

The drugs used to fight TB present further challenges. “TB will never be a disease killed by one drug only,” said Dowd. In fact, eliminating TB currently requires at least six months of treatment, administered in two phases and involving six different drugs, and in many cases the patient develops resistance to those drugs. Dowd’s lecture focused on how researchers are developing new drugs against tuberculosis that would shorten the duration of therapy, be effective against resistant strains, and kill both active and latent TB cells.

Designing a drug to combat Mycobacterium tuberculosis is a difficult process. Because TB has a thick cell wall, it is hard to find something that will break through it. But the silver lining is that, “Since TB is old, there is a lot of information known about steps that should be taken in the drug making process,” Dowd stated. This has helped researchers identify an essential enzyme that should be a useful target in both latent and active TB: 1-deoxy-D-xylulose 5-phosphate reductoisomerase (DXR). Dowd’s group designs inhibitors of this enzyme, adjusting the structure of the molecules so they can get past the cell wall. Researchers have conducted tests and are doing all they can to stop Mycobacterium tuberculosis.

Those in attendance were glad to hear about developments in this long battle against TB. Elizabeth Lee, a junior, was excited that Dowd’s talk built on what she was learning in her biochemistry class. “Since we’re talking about inhibitors in class, I enjoyed the real-life application. It was really interesting,” Lee said. Everyone expressed their gratitude to Dowd for sharing her research.

Preservation: The Sciences in Art and Art History

On Feb. 23, United States Library of Congress preservation research scientist Lynn Brostoff presented her studies of ancient artifacts and works of art using scientific methods in her lecture “Using Science to Unlock the Secrets of Art and Historic Artifacts” as part of the Natural Science and Mathematics Colloquium series.

Given in the Schaefer Hall lecture room, Brostoff’s lecture focused on her role in the field of cultural heritage science, which includes elements of biology, chemistry, physics, forensics, and materials science (the analysis of how an object’s properties are linked to its atomic and molecular structure).

“We’re doing a lot of material science,” said Brostoff, “and what the materials present say about the condition of the object.”

Brostoff discussed how the analytical study of museum and library collections is based on technical studies, model studies of degradation mechanisms, and conservation methods development.

Technical study refers to determining a material’s identity, its methods of manufacture, the history of the manufacturer, innovations in certain components of the object, and the context of the object in relation to where it was found.

“A lot of people develop analytical tools specifically for the applications we have,” she said.  “The first thing we want to do is look at things non-invasively.”

Electromagnetic (EM) radiation is the primary method that scientists like Brostoff use to analyze artifacts non-invasively.

The EM waves are scattered, reflected, transmitted, and absorbed by different objects, the results of which are detected and analyzed to understand more about the artifacts in question.

Microscopy, spectral imaging, and Raman spectroscopy are other methods of analyzing these materials.

When more analytical techniques are needed, the next stage of object investigation is the use of minimally invasive techniques, which includes calorimetry and even fold endurance testing on micro-samples of the object.

Brostoff discussed examples of several artifacts analyzed by the preservation staff of the Library of Congress, including a fifteenth century version of the Armenian Gospel from Verin Noravank Monastery in Siwnik (Syunik Province), Armenia.

The book itself was acquired by the Library of Congress in 2008, and has since been under intensive technical study.

The objective of working with the book is to preserve the colors and text of the Gospel of St. Mark, which is inside the book.

Beginning with X-ray fluorescence, or the use of X-rays to excite electrons of atoms on the surface enough to cause a detectable energy release, Brostoff explained how the artifact was analyzed in terms of the colors used on one of the pages.

The X-ray fluorescence, or XRF, detected tin oxide in the work, a rare white pigment that was used to make the white color used on the pages.

XRF also detected smalt (a blue pigment of ground cobalt glass made from arsenic-bearing cobalt ore) used for the blue in the book’s pages.  This finding was especially surprising, as smalt was thought not to have been used until Venetian paintings over a century later.

Further analysis of the blue pigments on the page with Fourier transform infrared spectroscopy (FTIR) indicated the presence of ultramarine, a silicate pigment made from the mineral lapis lazuli, which is found almost exclusively in what is now Afghanistan.

Further cobalt traces indicating smalt presence were verified with elemental analysis.  Doing all of these tests aided in accurate identification of the pigment compounds.

“We could have missed the pigment elements by doing only one analytical technique,” said Brostoff.

The red pigment of the page was analyzed and found to contain mercury and lead, common sources of red in other works at the time.

Further analysis with micro-XRD indicated lead tetroxide for the lead source, and mercury sulfide for the mercury source.

The fact that all of these different compounds could be used for the painting makes more sense given the proximity of Syunik to the Silk Road, the main trade route of the time.

The Persian influence can even be seen in the pages’ artwork.

“There is little known about Armenian painting,” said Brostoff, “so this told us a lot.”

Another method of analysis used by the lab is laser ablation inductively-coupled mass spectrometry (LA-ICP-MS), which was used to analyze trace elements of Ancient Chinese gold.

XRF was also used to analyze the moon dust left on the space suits of astronauts from the Apollo 17 mission.

“You can never see with the naked eye where we’ve been,” said Brostoff.  “And digitizing does not replace study and analysis.”

“I thought the lecture was interesting,” said Kevin Tennyson, a first-year Physics student who attended the talk.

“I have an appreciation for the physics of it, and the materials science that I personally would not have thought of.”

NOAA Officer Presents on Subaquatic Cartography

Lieutenant Commander Ben Evans of the Office of Coast Survey at the National Oceanic and Atmospheric Administration (NOAA) presented his work in underwater charting and navigation at the College’s Natural Science and Mathematics (NS&M) Colloquium lecture last Wednesday, in a presentation titled Hydrography: Science, Art, and Sea Stories of Seafloor Mapping.

A friend of two College professors, Evans was introduced by Physics Assistant Professor Josh Grossman and Mathematics Assistant Professor Alex Meadows, all of whom went to Williams College for their undergraduate degrees. “Two of us actually finished the Physics degree there,” said Grossman, referring to himself and Evans.

Evans began the presentation in Schaefer Hall by giving the audience of students, faculty, and community members an idea of what hydrography entails.

“‘Hydrography’ is not a word,” said Evans. “But, hydrographic relates to the characteristic features (as flow or depth) of bodies of water,” referring to the Webster’s Dictionary definition.

Hydrography is a specific discipline within oceanography, the study of oceans and other bodies of water.

The goal of hydrographers has not changed for thousands of years: to make nautical charts for maritime commerce. “The majority of goods we import come from water,” said Evans.

In 2008, annual cargo tonnage reached 8,720 million tons, over twice as much as its value in 1980 (3,704 million tons).

Ships that used to draw 30 feet of water due to their size and cargo can now draw as much as 51 feet, and shipping miles have increased by 123 percent.

With the contraction of the Arctic ice caps, allowing for more trade routes and a potentially ice-free Arctic by as early as 2013, these routes and cargo sizes are expected to increase even further, placing more importance on safe, accurate navigation.

From 1973 to 1982, the United Nations Third Conference on the Law of the Sea (UNCLOS III) resulted in the Law of the Sea Treaty, which dictated that signatories of the treaty could claim seabed resources of the Extended Continental Shelf (ECS) around the country, beyond its Exclusive Economic Zone (EEZ).

The United States did not sign the treaty’s latest agreement, adopted in 1994, which concerned Part XI of the convention and the establishment of the International Seabed Authority (ISA) to monitor a country’s underwater activities beyond its EEZ. Even so, the U.S. EEZ is worth nearly $1 trillion and covers an area 25 percent larger than the U.S. land mass.

In 2010, President Obama approved the National Ocean Policy on Coastal and Marine Spatial Planning to zone (chart) the EEZ.

The process of coastal surveying, however, dates back to 1807, when President Thomas Jefferson established the Survey of the Coast, the first U.S. scientific agency.

The Survey of the Coast merged with several other nautical agencies to form the NOAA in 1970.

The NOAA has about 1,000 charts that detail the depths of the U.S. coast, including charts from Jefferson’s era that focused on depth measurements.

These early soundings were made with lead lines: a weighted line was swung and cast from the ship, and the length of line let out when the lead weight hit the sea floor gave the depth.

Combined with horizontal positioning techniques, in which two reference points on the coast were used to fix the ship’s position, these measurements produced accurate depth charts that the NOAA still maintains.

The process of lead lining was abandoned in the 1930s with the development of SONAR (or, the Vertical Beam Echosounder).

With this technology, a sound would be emitted below a ship, which would quickly travel through water to the sea floor.

The sound waves would echo to the surface and be detected by a receiver, which would use the echo time delay and known speed of sound in water to calculate depth.
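
The underlying arithmetic is simple. A rough Python sketch follows; the 1,500 m/s figure is a typical nominal speed of sound in seawater, not a number given in the talk:

```python
NOMINAL_SOUND_SPEED = 1500.0  # m/s in seawater; varies with temperature, salinity, and pressure

def depth_from_echo(echo_time_s, sound_speed=NOMINAL_SOUND_SPEED):
    """Depth is half the round-trip distance traveled by the ping."""
    return sound_speed * echo_time_s / 2.0

# An echo returning after 0.2 seconds implies roughly 150 meters of water.
print(depth_from_echo(0.2))
```

In practice, as Evans noted later, the sound speed must be corrected for local water conditions.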

The development of computers in the 1990s that could handle vast depth calculations and data aided in the development of more thorough nautical charts.

“The physics aren’t strenuous,” said Evans, “but suddenly in the 90s, we had computers that could handle this vast array of data.”

While the science may not be difficult, the navigation and depth determination can bring its own challenges to hydrographers.

With changes in tide and vessel motion (heaving, pitching, rolling, or yawing), it can be difficult to maintain frames of reference, to correct measurements for changes in the speed of sound in the water, or simply to survive the water itself. “The sea doesn’t care about you,” said Evans.

The process has been further advanced with the development of Autonomous Underwater Vehicles (AUVs), which can do SONAR sweeps for about a day underwater for mass depth calculations, and FLIP (or Floating Instrument Platform), which is a platform ship resting on two buoyant supports that decrease the buoyant force on the ship to prevent depth data skewing.

Evans concluded the lecture with a discussion of the AUVs’ recent practical application: they were used to survey for underwater debris of Space Shuttle Columbia, which disintegrated upon re-entry on Feb. 1, 2003, over much of Texas and Louisiana.

While the in-flight data recorder was not found, a video of the seven astronauts that ends four minutes before the shuttle began to disintegrate was recovered.

The next NS&M lecture will be on Feb. 16 at 4:40 p.m., presented by Brigham Young University’s Dr. Michael Dorff and titled Shortest Paths, Soap Films, and the Shape of the Universe.

Penn State Professor Discusses Life of Indian Math Genius

On Wednesday, Oct. 20, as part of the Natural Science and Mathematics Colloquium Series, Penn State University Professor George Andrews discussed the history of the mathematics genius Srinivasa Ramanujan in his lecture The Indian Genius, Ramanujan: His Life and the Excitement of His Mathematics.

“This is a beautiful place,” said Andrews, after being introduced by mathematics professor Dr. Alex Meadows and receiving a Math Club keychain for presenting.

Andrews’ presentation was given to a large audience of students, faculty members, and members of the community in Schaefer 106 at 4:40 p.m. that Wednesday.

Andrews began with a history of Ramanujan, who was born in Southern India in 1887 to a poor Brahmin family. He showed mathematical prowess by the age of ten, and received awards in high school by the age of 17 for the development of new theories. “He was a child prodigy of mathematics,” said Andrews.

Ramanujan was awarded a scholarship upon graduation to attend the Government College in Kumbakonam, but failed most of his classes that were not mathematics-based. “He was not [a] well-rounded student,” said Andrews. “[He was] a case of great promise lost to the world.”

After being unable to find steady work for years and marrying in 1909, Ramanujan obtained a clerk’s position in Madras.

Still interested in mathematics, Ramanujan wrote to prominent mathematicians in England, finally drawing the attention of Godfrey Harold Hardy in 1913.

Hardy noticed in the results that accompanied the letter that Ramanujan had not only rediscovered advanced theorems that had already been proven, but had also found new theorems of his own.

In 1914, Hardy arranged for Ramanujan to come to Cambridge, where Hardy was a professor, so that the two could work side by side on difficult mathematical problems.

Hardy and Ramanujan’s joint work on the circle method opened the door to the development of analytic number theory.

“This was a truly exciting time,” said Andrews, “and an exciting period for analytical theory in the 20th century.”

While in England, Ramanujan was diagnosed with what doctors at the time believed to be tuberculosis. In 1919 his health improved enough that he returned to India in the hope of recovering further, but the move proved unsuccessful, and after a period of worsening health Ramanujan died in the spring of 1920.

On his deathbed, Ramanujan recorded in his notebooks and loose notes some of what is regarded by some today as “the most advanced mathematics known”; the papers were eventually collected at Trinity College, Cambridge, but were misplaced in storage. Among this work were the mock theta functions, which proved crucial to later analytic mathematics.

In his efforts to study the mock theta functions as part of his thesis project, Andrews came across Ramanujan’s notes at Trinity College. Recognizing Ramanujan’s handwriting from older textbooks he had used in college, Andrews worked to interpret the notes and theories, coming across third- and fifth-order mock theta functions that had been believed undiscoverable or unsolvable at the time.

What Ramanujan had referred to in his last letter to Hardy (written shortly before his death in 1920) as “several new functions” was detailed in those notes.

Andrews went on to discuss several of Ramanujan’s results, including the practical use of mock theta functions in the heat equation; the expansion of five Taylor series (infinite sums that represent a function one term at a time) whose study led to the discovery of the mock theta functions; and, somewhat separately, Hardy and Ramanujan’s work on p(n), the number of ways to write an integer n as a sum of positive integers.
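
For readers curious about p(n): Hardy and Ramanujan’s celebrated contribution was an asymptotic formula for it, but the quantity itself can be counted directly. A small Python sketch (not from the lecture) using a standard dynamic-programming count:

```python
def partition_count(n):
    """p(n): the number of ways to write n as a sum of positive integers, ignoring order."""
    # counts[m] holds the number of partitions of m using only the parts allowed so far.
    counts = [1] + [0] * n
    for part in range(1, n + 1):
        for m in range(part, n + 1):
            counts[m] += counts[m - part]
    return counts[n]

# Small examples: p(4) = 5 and p(5) = 7.
print(partition_count(4), partition_count(5), partition_count(100))
```

Hardy and Ramanujan’s formula approximates these values remarkably well as n grows, without counting anything.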

Andrews concluded his lecture with a discussion of a potential film production of Ramanujan’s life, and the potential impact of the story on the Indian and mathematics communities, given the dramatic, fictional addition of a love interest for Ramanujan during his time abroad.

There’s “no evidence that this occurred,” said Andrews, who served as full consultant for the movie production.

“I need to sleep more,” said sophomore Josh Kaminsky, at the conclusion of the presentation. “Ramanujan accomplished more in mathematics while sleeping than I ever did while awake.”

“Ramanujan is awesome,” said senior Brian Tennyson, who also attended the lecture. “It is incredible that the work he did during his illness is still applicable to mathematics research today, even at the undergraduate level.”

The next NS&M Colloquium lecture will be held on Nov. 3 and will discuss protein regulators during squid embryonic development, followed by a lecture on the H1N1 virus and potential influenza vaccines on Nov. 10.

Professor Discusses Neuromuscular Disease Research

Baylor College of Medicine professor Thomas Cooper presented his research lab’s work on neuromuscular disease in the fifth lecture of this semester’s Natural Science and Mathematics Colloquium Series, held on March 10.

In front of a large audience of students, professors, and community members in Schaefer Hall, Thomas Cooper, Professor of Pathology at Baylor, discussed developmental mechanisms that can lead to the symptoms of neuromuscular disease, including the more common myotonic dystrophy, in his presentation Developmentally-Regulated Alternative Splicing and Its Disruption in Neuromuscular Disease.
Cooper was not afraid to tell his audience how little is known about the mechanism of alternative splicing and about myotonic dystrophy itself; rather, that uncertainty serves as a major stimulus for his research.

“It’s exciting to find out how little we know,” he said.

Beginning with what is known about the genetics behind alternative splicing, Cooper discussed the central dogma of gene expression. DNA, a double-stranded sequence of nucleotides housed in the nucleus of a cell, undergoes a process known as transcription, in which a single-stranded RNA copy (pre-mRNA) is created. This single-stranded sequence undergoes processing in the nucleus before exiting into the cell’s cytoplasm as mature mRNA, which in turn undergoes a process known as translation to generate a functional protein.

While this sequence of events is generally followed, Cooper’s work focuses on the processing of pre-mRNA in the nucleus of the cell. RNA processing plays a major role in the expression of genes as proteins, simply because of the structure of the RNA itself. Pre-mRNA is composed of, essentially, two types of sequences: introns, which do not code for part of the expressed gene product, and exons, which do.

During pre-mRNA processing, the introns are precisely spliced out, or cut away, so that the exons join to form an exact coding sequence.
If splicing is not exact – if extra sequence remains between the exons or if an exon is cut short – the resulting protein will usually be deformed in some way, because each building block of the final protein (an amino acid) is added based on a set of three nucleotides.

“If it misses a nucleotide or adds a nucleotide by mistake, it puts the sequence out of frame, and it’s not going to make a protein,” said Cooper.
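
A toy illustration of the point (not from the talk): translation reads the sequence in non-overlapping triplets, so a single inserted nucleotide shifts every downstream codon. The sequences below are made up for the example.

```python
def codons(seq):
    """Split an mRNA-like sequence into the triplets the ribosome would read."""
    return [seq[i:i + 3] for i in range(0, len(seq) - len(seq) % 3, 3)]

correct = "ATGGCCGAAGTT"    # reads ATG GCC GAA GTT
shifted = "ATGGCACGAAGTT"   # the same sequence with one extra base after position five

print(codons(correct))  # ['ATG', 'GCC', 'GAA', 'GTT']
print(codons(shifted))  # ['ATG', 'GCA', 'CGA', 'AGT'] -- every codon after the insertion changes
```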

Alternative splicing refers to the removal of certain introns and exons in various ways to make a different mRNA sequence, which in turn leads to a different protein. Given the extremely high number of possibilities of mRNA products, alternative splicing is regulated.

“A lot of proteome diversity changes not because of transcription,” said Cooper, “but what happens after the gene is transcribed.”
Cooper illustrated this point by describing the Dscam gene, which in humans lies on chromosome 21 in the region associated with Down Syndrome. The final mRNA product is built from four main exons, each chosen from a cluster of alternatives.

The first of these exons is chosen from among 12 alternatives, the second from among 48, the third from among 33, and the fourth from among two. Given the possible combinations, roughly 38,000 distinct mRNA sequences – and up to 30,000 possible proteins – could be generated by alternative splicing alone.
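
The figure follows directly from the number of choices in each cluster; the rounding to 38,000 covers the exact product:

\[
12 \times 48 \times 33 \times 2 = 38{,}016 \approx 38{,}000.
\]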

This process can be regulated in multiple ways, including regulation of transcription itself – either increasing (upregulating) or decreasing (downregulating) the activity of the proteins involved – or the use of positive and negative regulators to control which regions the spliceosome (the complex of proteins and small RNAs that binds to splicing sites) attaches to.

Myotonic dystrophy (DM) is the second most common form of muscular dystrophy. It is caused by an expanded stretch of nucleotide repeats in a gene, and this expansion disrupts the alternative splicing machinery.

Having more of these CTG repeats (named after the nucleotide bases that make up the mutation) causes an earlier and more severe onset of symptoms; while 8 to 40 CTG repeats is normal, DM patients can have 80 to 2,000 repeats.

Unlike the mutations behind other splicing disorders, however, the DM repeat expansion sits in a non-coding region of the gene, one that does not even code for a portion of the final protein. Instead, the major problem occurs while the sequence is still in its pre-mRNA form.

The large repeat region is not able to leave the nucleus for further processing and translation, and instead serves as a toxic buildup in the nucleus. This buildup can be visualized using staining techniques, which show condensed matter (foci) in the nuclei of cells.
The CTG repeats – CUG repeats in their pre-mRNA form – sequester (essentially, bind and deactivate) a splicing factor known as MBNL1 (Muscleblind-like), while another splicing factor, CUGBP1, is induced; together these changes lead to the mis-splicing that underlies DM symptoms. CUGBP buildup can also be visualized, and correlates with MBNL buildup at the foci sites in the nuclei.

These abnormalities can lead to many issues, including problems in alternative splicing of other genes, faulty chloride channels in the muscle cells (which lead to the tensed muscles characteristic of DM), and insulin resistance (due to the inability to properly splice the adult form of the receptor).

Using a mouse model, Cooper’s lab studied the effects of altering CUGBP1 levels on the expression of DM-related symptoms. In the lab’s inducible model, DM symptoms can be triggered by injecting the mice with a chemical known as tamoxifen, which leads to the increased levels of CUGBP found in adult patients with DM.

When the mice were also given BIS-IX, an inhibitor that slows production of CUGBP, they showed the same levels of mutated pre-mRNA but without the toxic buildup characteristic of DM; furthermore, the mortality of the mice was significantly reduced.

“PKC inhibitors [like BIS-IX] are being used as therapeutics for individuals with other diseases,” said Cooper.

Cooper concluded his presentation with a discussion of Baylor College itself, and the differences between being a researcher and a medical doctor. “The hardest part of research is all of the decisions you have to make,” he said. “You have to know how to make your best calls and know when to cut things off.”

“I thought Dr. Cooper did a good job at leading the audience into his area of expertise,” said Biochemistry professor Danielle Cass. “It seems that he tries to always keep in mind the patients with DM and how his research could benefit them.”
“I thought it was well-presented, but rather complicated,” said sophomore Steven Sheridan. “Considering the audience, I thought it was too advanced and fast-paced.”

College Professors Discuss Art and Science of Beer

Professor Jeffrey Byrd explains how a combination of ingredients including barley, grain, and water is malted, roasted, and fermented to create beer. (No beer was given out.) (Photo by Dave Chase)

In front of a large audience of St. Mary’s students, professors, and community members, College professors Jeffrey Byrd and Andy Koch presented the complicated science and subtle art behind beer brewing, as part of the Natural Science and Mathematics Colloquium Series, Jan. 17.

“It’s a full house today,” said microbiologist Dr. Byrd, after he and Chemistry department chair Dr. Koch were introduced by Physics department chair Dr. Charles Adler.  “How many of you came to learn how to make beer?”

As expected, but possibly to the disappointment of some of the audience members, Byrd and Koch were not giving out free samples during the lecture; the objective was to learn about the process of making alcoholic beverages at home, an enjoyable and surprisingly academic hobby in which both professors are well-practiced.

“So how can we use science to improve that overall product?” said Byrd.  A microbiologist, he began the lecture with the organismal side of the process: yeast fermentation.  Koch then took over to discuss the chemistry behind fermentation and how the different products formed during the process can affect the finished brew.

The professors continued with a discussion of the process of brewing, which begins with barley malting.    “What you’re actually doing is taking the seeds and allowing them to germinate,” said Byrd.

The resulting barley, grain, and water mixture is dried, crushed, and roasted, a step that can lead to darker or lighter-colored beers depending on the degree of roasting.  Afterwards, in a “mash tun,” an insulated brewing vessel, the barley’s enzymes go to work on the grain’s starches, forming the wort, referred to by Koch as the “sweet liquor goo” that will be processed further in later steps.

After mashing, the wort is boiled in a kettle, which kills off unwanted organisms and creates conditions for further processing. Some of the liquid evaporates during the boil, which can carry off both good and bad flavor compounds, and hops – one of the most important ingredients, adding the beer’s recognizable bitterness – are added to the wort.

The product generated from the kettle is moved to a fermenter (in home-brewing, known as the Alepail, a bucket with a bubbler for carbon dioxide release), where yeast is added to the processed wort to begin the fermentation process.

“You use a particular yeast for the style you’re trying to generate,” said Byrd.  Each yeast culture undergoes fermentation differently, meaning that the same species of yeast can be used for different products and requires different fermentation reactants and conditions. After fermentation, the final product is lagered (stored) in cold temperatures for further flavor development.

Koch and Byrd concluded the presentation with final remarks about the process.  They recommended following traditional guidelines and procedures, knowing what is in the type of hops being used, and keeping a notebook detailing the procedure so that it can be repeated if desired.  “It’s a hobby,” said Koch.  “It’s scientific, but can have a great benefit,” said Byrd.

The Schaefer lecture hall remained packed for questions after the talk, which ranged from why certain kinds of beer are brewed over others to how to avoid “skunking” the final product.

“I thought it was a really good lecture,” said Thomas Montgomery, a senior who attended the lecture.  “I found both sides of the topic really interesting.”

“I thought the lecture was thorough, and it was neat to see all of the aspects of brewing covered from start to finish,” said Elizabeth Bromley, a sophomore at the College.  “They were very entertaining and lively.”

Dr. Byrd began brewing beer with his father in 1976, as a father-son project.  Dr. Koch began in college, and took a stronger interest around ten years ago.  Both professors brew different styles of beer, but both seem to enjoy the art for both its academic benefit and social experience.

This lecture was the third of the NS&M Colloquium Series; the fourth, titled The World of the Future, will be presented as the 2010 Muller Lecture in the Sciences by College professor Charles Adler on Wednesday, Feb. 24, at 7 p.m. in Daugherty-Palmer Commons.

Escher’s Geometry Explained

Mathematics professor Susan Goldstine demonstrates the orientation of spherical triangles on a positively-curved object. (Photo by Rowan Copley)

As part of the Natural Science and Mathematics Colloquia Series, St. Mary’s mathematics professor Susan Goldstine presented The Geometries of Escher on Wednesday, Feb. 3. Her presentation discussed the link between the math and art behind M. C. Escher’s work.

“Professor Goldstine is the reason I am at St. Mary’s College,” said mathematics professor Alex Meadows, who introduced the talk on Wednesday.  “I am especially excited for what she has in store for us today.”

Goldstine, in turn, began the lecture with an introduction of her own, showing the audience a gallery of Escher’s work and including the works of Scott Kim, a letterform-focused artist who follows Escher’s style of reflection and illusion in his work.
As Escher’s work scrolled across the screen, the audience, composed of St. Mary’s college students, professors, and community members, could see the many different kinds of pieces Escher completed, from simple mathematical oddities like the Mobius Strip (a circular band with only one surface) to illusion works like the misplaced ladder of the Belvedere (an impossible cube piece) and the depiction of three-dimensional objects on a flat surface.

From there, Goldstine presented Escher’s works on the regular division of the plane, mainly in forms of animals, including lizards, swans, and fish.  “It’s a tiling of the plane, or if you want to sound all fancy and mathematical, it’s a tessellation,” said Goldstine.  “They were partly mathematical and artistic experiments, and partly tools to base more interesting artwork on.”
Escher’s work with tessellations artistically focused on alternations of foregrounds, but mathematically focused on the reflection of shapes across the plane in such a way that a pattern of polygons, from triangles to hexagons, could be seen around several vertices in the work.

After a mathematical inspiration from a 1924 paper by George Pólya that explained the seventeen possible mathematical structures of divisions of the plane, Escher explored this idea in his works with reflection tiling.  His resulting artwork depicted reflecting triangles with specific angles in a way that would create complex patterns around a center point.
Goldstine continued by discussing Escher’s work with spherical shapes, using spherical triangles (triangles with curved edges and an angle sum greater than the traditional 180 degrees) alternating in similar reflection patterns as his previous work to create complex symmetries on the surface of a sphere.

The patterns created with these triangles were explained with a proof due to the mathematician Leonhard Euler. “There’s this thing where you’re really not supposed to give a math talk without any proofs…and the cool thing about this one is that it uses very simple geometry,” said Goldstine.

In his work, Escher was concerned with showing the concept of infinity in a finite space; in his own mind, he did not feel he had succeeded until he discovered a paper published in 1957 by H. S. M. Coxeter, who worked with the idea of negative curvature (in mathematical terms, hyperbolic space).

Triangles and other shapes placed on this kind of surface follow different mathematical rules that allow for different patterns of reflection and translation, as shown by Escher’s Circle Limit III and Circle Limit IV, the latter depicting tessellated angels and demons that shrink toward the edge of the piece, theoretically decreasing in size to infinity.
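
One compact way to summarize the three settings in the talk – not stated in this form in the lecture, but standard in the mathematics it drew on – is the angle condition for a tiling by reflected triangles with angles π/p, π/q, and π/r:

\[
\frac{1}{p} + \frac{1}{q} + \frac{1}{r} > 1 \ \text{(sphere)}, \qquad
\frac{1}{p} + \frac{1}{q} + \frac{1}{r} = 1 \ \text{(flat plane)}, \qquad
\frac{1}{p} + \frac{1}{q} + \frac{1}{r} < 1 \ \text{(hyperbolic disk)}.
\]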

Goldstine concluded with this reflection style, also mentioning how University of Minnesota Duluth professor Douglas Dunham and others have used computers to analyze and experiment with the patterns in Escher’s work, before taking questions from the audience.

“I was amazed that [Escher’s] images were so mathematical, but now that seems rather obvious,” said Dr. Robert Paul, a St. Mary’s Biology professor who attended the lecture. “Dr. Goldstine made all this quite clear and accessible in her talk…to a general audience of mathphobic nincompoops as well as the mathematicians in the audience.”

Professor Predicts the Future, Discusses Error

For the first NS&M lecture of the semester, A. Bowdoin Van Riper, a professor at Southern Polytechnic State University, discussed why the future frequently turns out differently from the way we would expect. (Photo by Brendan Larrabee)

For the first Natural Science and Mathematics Colloquium of the semester, Southern Polytechnic State University professor A. Bowdoin Van Riper presented on how views of the future have changed over the decades, and why they never seem to be accurate.

“Ladies and Gentlemen, behold the Future!  At least, behold the future as people envisioned it in the 1920s,” began Professor Van Riper, in a presenter’s voice that seemed to capture the audience of College students, faculty, and community members in Schaefer Hall’s lecture room on Jan. 28.  “Really big skyscrapers…heliports outside your door for your own personal auto gyro…robot servants…and jumpsuits.”

As the decades have passed, ideas of how the future would be have evolved from food pills and flying cars to space stations and supersonic airliners, but these ideas never seem to define those futures, Van Riper explained.  “What’s missing in these past visions of the future?  Why are our visions of the technological future so often off the mark?”

Van Riper went on to explain one major reason for the flawed predictions: people believe that big technology will define the future.  While the atomic bomb and the ballistic missile were both remarkable innovations of the mid-20th century, no one expected the most consequential invention to come out of World War II to be penicillin, the beginning of a new wave of antibiotic drugs.  Big technology is certainly impressive, but not always era-defining, and certainly not easy to predict.

Another fallacy people seem to follow is the idea that technological advances will happen without any way of controlling them.  Van Riper disagreed, stating that consumers have the power to endorse or reject any great technology simply by saying yes or no.  From the large kitchen computer with built-in recipes, to food pills that take away the social experience of dining, to the Boeing 2707 that would fly at supersonic speed but cost a fortune to operate, Van Riper suggested one answer that anyone can give:  “Thanks, but no thanks.”

Van Riper presented a third possible reason for this trend, that people believe in linear growth in technology; that is, tomorrow’s technology will be bigger and better than that of today.  In reality, the growth is not so simple, following a more branching or even “hockey-stick curve” pattern due to the unpredictability of success of certain inventions over others.  But, with this curve comes another problem: with new inventions entering the market at a faster rate, the usefulness of previous inventions diminishes, decreasing the value of consumer products over time.

Some technological “masterpieces” truly are magnificent, but have too many negative aspects that make the benefits seem less helpful.

A nuclear-powered car may be more energy-efficient, even faster, but the damage due to a “fender bender” could be enormous.  Flying cars would certainly save airspace, but asking people who have difficulty enough with forward, backward, left, and right to also drive up and down would lead to catastrophic disasters, not to mention how dangerous an empty gas tank could be.

The lecture concluded with Van Riper discussing his own predictions for the future of our generation: wearable computers, smart houses, electric cars, designer genes, nanobots, and (Van Riper said with a smile) “jumpsuits.”

“I really enjoyed it,” said Dr. Charles Adler, an associate professor of physics at the College.  “I’m a long-time science fiction reader, and when you read science fiction from the 1950s, there are all of these predictions about the world in the future: by the year 2000, we’ll have bases on the Moon, settlements by Mars, and have had a nuclear war with the Soviets.  It’s always interesting to think why didn’t we get those things, although I’m glad we didn’t get the war, and I think that Dr. Van Riper’s talk showed why not, or at least some of the reasons why not:  the future isn’t a simple extrapolation of what went on in the past.”

“I thought he was very enthusiastic about his subject, which made me become involved in his presentation,” said Jesse Burke, a sophomore who also attended the lecture.  “I felt that he was correct to say that we always try to predict future technologies, but always expect way too much.”

The next lecture of the Natural Science and Mathematics Colloquia, Waging Chemical Warfare and Hazardous Waste: Green Chemistry at St. Mary’s College of Maryland, will be on Wednesday, Feb. 10, presented by College chemistry professor Leah Eller.