DESERT FORMATION

The deserts, which already occupy approximately a fourth of the Earth’s land surface, have in recent decades been increasing at an alarming pace. The expansion of desertlike conditions into areas where they did not previously exist is called desertification. It has been estimated that an additional one-fourth of the Earth’s land surface is threatened by this process.

Desertification is accomplished primarily through the loss of stabilizing natural vegetation and the subsequent accelerated erosion of the soil by wind and water. In some cases the loose soil is blown completely away, leaving a stony surface. In other cases, the finer particles may be removed, while the sand-sized particles are accumulated to form mobile hills or ridges of sand.

Even in the areas that retain a soil cover, the reduction of vegetation typically results in the loss of the soil’s ability to absorb substantial quantities of water. The impact of raindrops on the loose soil tends to transfer fine clay particles into the tiniest soil spaces, sealing them and producing a surface that allows very little water penetration. Water absorption is greatly reduced; consequently, runoff is increased, resulting in accelerated erosion rates. The gradual drying of the soil caused by its diminished ability to absorb water results in the further loss of vegetation, so that a cycle of progressive surface deterioration is established.

In some regions, the increase in desert areas is occurring largely as the result of a trend toward drier climatic conditions. Continued gradual global warming has produced an increase in aridity for some areas over the past few thousand years. The process may be accelerated in subsequent decades if global warming resulting from air pollution seriously increases.

There is little doubt, however, that desertification in most areas results primarily from human activities rather than natural processes. The semiarid lands bordering the deserts exist in a delicate ecological balance and are limited in their potential to adjust to increased environmental pressures. Expanding populations are subjecting the land to increasing pressures to provide them with food and fuel. In wet periods, the land may be able to respond to these stresses. During the dry periods that are common phenomena along the desert margins, though, the pressure on the land is often far in excess of its diminished capacity, and desertification results.

Four specific activities have been identified as major contributors to the desertification process: overcultivation, overgrazing, firewood gathering, and overirrigation. The cultivation of crops has expanded into progressively drier regions as population densities have grown. These regions are especially likely to have periods of severe dryness, so that crop failures are common. Since the raising of most crops necessitates the prior removal of the natural vegetation, crop failures leave extensive tracts of land devoid of a plant cover and susceptible to wind and water erosion.

The raising of livestock is a major economic activity in semiarid lands, where grasses are generally the dominant type of natural vegetation. The consequences of an excessive number of livestock grazing in an area are the reduction of the vegetation cover and the trampling and pulverization of the soil. This is usually followed by the drying of the soil and accelerated erosion.

Firewood is the chief fuel used for cooking and heating in many countries. The increased pressure of expanding populations has led to the removal of woody plants, so that many cities and towns are surrounded by large areas completely lacking in trees and shrubs. The increasing use of dried animal waste as a substitute fuel has also hurt the soil because this valuable soil conditioner and source of plant nutrients is no longer being returned to the land.

The final major human cause of desertification is soil salinization resulting from overirrigation. Excess water from irrigation sinks down into the water table. If no drainage system exists, the water table rises, bringing dissolved salts to the surface. The water evaporates and the salts are left behind, creating a white crustal layer that prevents air and water from reaching the underlying soil.

The extreme seriousness of desertification results from the vast areas of land and the tremendous numbers of people affected, as well as from the great difficulty of reversing or even slowing down the process. Once the soil has been removed by erosion, only the passage of centuries or millennia will enable new soil to form. In areas where considerable soil still remains, though, a rigorously enforced program of land protection and cover-crop planting may make it possible to reverse the present deterioration of the surface.

THE ORIGINS OF CETACEANS

It should be obvious that cetaceans – whales, porpoises, and dolphins – are mammals. They breathe through lungs, not through gills, and give birth to live young. Their streamlined bodies, the absence of hind legs, and the presence of a fluke and blowhole cannot disguise their affinities with land-dwelling mammals. However, unlike the cases of sea otters and pinnipeds (seals, sea lions, and walruses, whose limbs are functional both on land and at sea), it is not easy to envision what the first whales looked like. Extinct but already fully marine cetaceans are known from the fossil record. How was the gap between a walking mammal and a swimming whale bridged? Missing until recently were fossils clearly intermediate, or transitional, between land mammals and cetaceans.

Very exciting discoveries have finally allowed scientists to reconstruct the most likely origins of cetaceans. In 1979, a team looking for fossils in northern Pakistan found what proved to be the oldest fossil whale. The fossil was officially named Pakicetus in honor of the country where the discovery was made. Pakicetus was found embedded in rocks formed from river deposits that were 52 million years old. The river that formed these deposits was actually not far from an ancient ocean known as the Tethys Sea.

The fossil consists of a complete skull of an archaeocete, a member of an extinct group of ancestors of modern cetaceans. Although limited to a skull, the Pakicetus fossil provides precious details on the origins of cetaceans. The skull is cetacean-like, but its jawbones lack the enlarged space that is filled with fat or oil and used for receiving underwater sound in modern whales. Pakicetus probably detected sound through the ear opening as in land mammals. The skull also lacks a blowhole, another cetacean adaptation for diving. Other features, however, show experts that Pakicetus is a transitional form between a group of extinct flesh-eating mammals, the mesonychids, and cetaceans. It has been suggested that Pakicetus fed on fish in shallow water and was not yet adapted for life in the open ocean. It probably bred and gave birth on land.

Another major discovery was made in Egypt in 1989. Several skeletons of another early whale, Basilosaurus, were found in sediments left by the Tethys Sea and now exposed in the Sahara desert. This whale lived around 40 million years ago, 12 million years after Pakicetus. Many incomplete skeletons were found, but they included, for the first time in an archaeocete, a complete hind leg that features a foot with three tiny toes. Such legs would have been far too small to have supported the 50-foot-long Basilosaurus on land. Basilosaurus was undoubtedly a fully marine whale with possibly nonfunctional, or vestigial, hind legs.

An even more exciting find was reported in 1994, also from Pakistan. The now extinct whale Ambulocetus natans (“the walking whale that swam”) lived in the Tethys Sea 49 million years ago. It lived around 3 million years after Pakicetus but 9 million years before Basilosaurus. The fossil luckily includes a good portion of the hind legs. The legs were strong and ended in long feet very much like those of a modern pinniped. The legs were certainly functional both on land and at sea. The whale retained a tail and lacked a fluke, the major means of locomotion in modern cetaceans. The structure of the backbone shows, however, that Ambulocetus swam like modern whales by moving the rear portion of its body up and down, even though a fluke was missing. The large hind legs were used for propulsion in water. On land, where it probably bred and gave birth, Ambulocetus may have moved around very much like a modern sea lion. It was undoubtedly a whale that linked life on land with life at sea.

INTEGRATED 24

The Type I supernova, the kind of supernova that you read about in the reading passage, is not the only kind of supernova. The other kind of supernova is called, as you might expect, a Type II supernova.

A Type II supernova occurs when a large star, a single star and not a double star, is in the process of dying. A Type II supernova occurs only in a star that is truly massive, a star that is at least ten times as massive as our Sun.

A supernova occurs in this type of massive star only when it is very old. The core of such a massive star in its very late stages of life becomes progressively hotter and hotter until the core collapses and a whole series of thermonuclear reactions occur, causing a supernova.

Probably the most famous and brightest historical Type II supernova occurred in 1054, near the beginning of the last millennium. It was recorded in China, and Chinese records indicate that it was visible to the naked eye even during daylight for twenty-three days and was visible to the naked eye at night for 653 days, or almost two years. The Chinese also recorded two other supernovae, in 1006 and in 1181, though these were not as bright as the 1054 supernova. After that, it was not until 1987 that another Type II supernova was visible to the naked eye. In 1987, a Type II supernova occurred in a galaxy close to the Milky Way, our galaxy. This was the only supernova bright enough and close enough to Earth to be seen without a telescope in over 400 years, since the two Type I supernovae observed in 1572 and 1604. The 1987 supernova was the only Type II supernova to be visible to the naked eye in close to a thousand years.

INTEGRATED 23

In simple terms, a supernova is a star that explodes. During a supernova, a star brightens considerably over a period of about a week and then starts to fade slowly, over a period of a few months or a year or two before it disappears completely.

One kind of supernova is called a Type I supernova. This kind of supernova occurs in a double star system in which one of the stars has become a white dwarf. A double star system, or a binary star, is a pair of stars that are held together by the force of gravity and orbit around each other; a white dwarf is a formerly medium-sized star in the last stages of its life, a star that has run out of fuel and has collapsed into a small, dense star that is smaller than our planet. A Type I supernova occurs only in this very specific situation, when a white dwarf is part of a double star system.

A Type I supernova occurs in a double star system in a situation when the white dwarf’s companion star has grown too big. The companion star continues to grow in size until its proximity to the white dwarf halts its growth. When the companion star can grow no further, material flows from the companion star to the white dwarf. When the white dwarf reaches a certain critical mass, a mass equal to approximately 1.4 times the mass of the Sun, the white dwarf explodes catastrophically in a supernova event.

Only two Type I supernovae have been visible to the naked eye in recorded history, one in 1572 and the other in 1604. Since then, numerous other Type I supernovae have been observed using telescopes, which came into use for astronomy early in the seventeenth century.

INTEGRATED 22

Let me talk a bit about the expression “catch-22.” Do you understand what a catch-22 is? This expression is so well known now that it has entered the American lexicon. Well, a catch-22 is a situation that is unresolvable, one where there is no good choice, no best path to take.

In Heller’s novel, the catch-22 is a very specific catch in a very specific situation. The situation in which the protagonist found himself was that he wanted to get out of combat by declaring himself insane. The catch was that asking to be excused from combat showed a rational concern for his own safety, which proved that he was sane and therefore fit to fly. So you see that in this situation there was a very specific catch. In American culture now, though, this expression is used more generally. It refers to any situation where there’s a catch, where there’s no solution, where there’s no way out.

One more bit of information about the expression “catch-22,” about the number 22 in the expression. This number doesn’t have any real meaning; it just signifies one in a long line of catches. Heller really could have used any number; it didn’t have to be 22. When Heller was first writing the book, he used the number 18; the book was originally titled Catch-18. Other numbers, including 14, were considered during the production process. But there was a problem with the number 18 because another book with 18 in its title had just been published, so Heller’s title became Catch-22.

INTEGRATED 21

Joseph Heller’s Catch-22 (1961) is one of the most acclaimed novels of the twentieth century. It is a black comedy about life in the military during World War II. It features bombardier John Yossarian, who is trying to survive the military’s inexhaustible supply of bureaucracy and who is frantically trying to do anything to avoid killing and being killed. Heller was able to draw on his own experiences as a bombardier in the Army Air Forces during World War II to create this character and the novel.

Even though Catch-22 eventually became known as a great novel, it was not originally considered one. When it was first published in 1961, the reviews were tepid and the sales were lackluster. It was not well received at this point at least in part because it presented such a cowardly protagonist at a time when World War II veterans were being lauded for their selfless courage.

Within a few years of the release of the book, as an unpopular war in Southeast Asia was heating up, Heller’s Catch-22 found a new audience eager to enjoy the exploits of Heller’s war-averse protagonist. It was within the framework of this era that Catch-22 was newly discovered, newly examined, and newly credited as one of the century’s best novels.

INTEGRATED 20

Well, when managers tried out these principles of scientific management in their factories in the early twentieth century, things did not work out as expected. Many factory managers did not find the improved efficiency, lower costs, and higher profits they expected from scientific management. Instead, they often found the exact opposite.

The first problem managers ran into was with the time-and-motion studies. The very thorough time-and-motion studies needed to improve productivity were themselves very costly, so they added to costs rather than improving profits. In addition, these studies were often difficult to conduct because the workers in the factory were so resistant to them.

In addition to the problem with the time-and-motion studies, there was also a problem with the lower-skilled workers. When the principles of scientific management are applied to lower-skilled workers, these lower-skilled workers must work like machines. They must change the way they work so that they work in exactly the same way as other workers, and they must do the same single repetitive motion over and over again, thousands of times a day. The low-skilled workers were not eager to work this way and often took steps to make the process less efficient.

Finally, there was also a serious problem with the high-skilled workers. One of the components of scientific management was to break down the jobs of higher-skilled workers into smaller tasks that lower-skilled workers could do, in order to save money. The result of this for the higher-skilled workers was that they would no longer have high-paying jobs. Thus, the higher-skilled workers were extremely resistant to attempts to institute scientific management.

Overall, managers who tried to employ the principles of scientific management found that they had lower efficiency, higher costs, and lower profits than they had expected.

INTEGRATED 19

Frederick Winslow Taylor, author of The Principles of Scientific Management (1911), was a leading proponent of the scientific management movement in the early twentieth century, a movement dedicated to improving the speed and efficiency of workers on factory floors. In order to institute the principles of scientific management in factories, managers would first conduct thorough time-and-motion studies in which they sent out time-and-motion inspectors to workstations with stopwatches and rulers to time and measure the movements each factory worker was making in doing his or her job. The purpose of these studies was to identify wasted motion and energy in order to improve efficiency and thereby improve productivity and factory profits.

According to Taylor’s principles, scientific managers could use the results of extensive time-and-motion studies to institute changes in their factories in order to make the factories more efficient. One major type of change that could be instituted as a result of time-and-motion studies was that the jobs of lower-skilled workers could be reorganized. Lower-skilled workers could also be instructed in the most efficient way of doing their jobs, instructed in how to stand and where to look, and instructed in how to move their bodies. Another major type of change was that higher-skilled and more highly paid workers could be replaced with lower-skilled and lower-paid workers. If the jobs of the more highly skilled workers could be broken down into more manageable tasks, then lower-skilled workers could more easily be brought in to replace various components of a higher-skilled worker’s job. Factory management hoped that, by instituting these kinds of changes as a result of scientific time-and-motion studies, the factories could achieve greatly improved efficiency, lower costs, and therefore much greater profits.

INTEGRATED 18

You may read all of this information about garlic, about how it was used in the past, and think that this was all just a lot of superstition, like breaking a mirror brings seven years of bad luck or throwing salt over your shoulder protects you from bad luck. But this is different. It’s not all just superstition, though some of it is. There’s actually a lot of scientific evidence that garlic does have certain medicinal benefits.

First of all, garlic does kill bacteria. In 1858, Louis Pasteur conducted some research that showed that garlic does actually kill bacteria. When garlic was used during World War I to prevent infection, there was good reason. There is actually research to back up garlic’s ability to kill bacteria. It’s raw, or uncooked, garlic that has this property. Raw garlic has been shown to kill twenty-three different kinds of bacteria.

Then, when garlic is heated, it’s been shown to have different medicinal properties. When it’s heated, garlic forms a compound that thins the blood. The blood-thinning property can help prevent arteries from clogging and reduce blood pressure, which may have some impact on preventing heart attacks and strokes.

INTEGRATED 17

Garlic, a member of the lily family with its distinctive odor and taste, has been used throughout recorded history because it was considered to have beneficial properties. The earliest known record of its use appears in Sanskrit documents from around 3000 B.C.

It was used as a medicine in Ancient Egypt, where it was used to cure twenty-two different ailments. It was also fed to the slaves who were building the pyramids because the Egyptians believed that, in addition to keeping the slaves healthy so that they could continue to work, garlic would make the slaves stronger so that they could work harder.

The ancient Greeks and Romans found even more uses for garlic than the Egyptians had. In addition to using garlic to cure illnesses, as the Egyptians had, the Greeks and Romans believed that garlic had magical powers, that it could ward off evil spells and curses. Garlic was also fed to soldiers because it was believed to make men more courageous.

Quite a few seafaring cultures have also used garlic because they believed that it was beneficial in helping sailors to endure long voyages. Garlic appears in Homer’s Odyssey, the Vikings always carried garlic on their long voyages in the northern seas, and Marco Polo left records showing that garlic was carried on his voyages to the Orient.

Finally, even as late as early in the twentieth century, it was believed that garlic could fight infections. Because of this belief, garlic juice was applied to soldiers’ wounds in World War I to keep infection at bay and to prevent gangrene.