Welcome to the hub of knowledge. Here you can learn about the details of our developing world.

Wednesday 30 March 2011

NASA Satellites Eyeing 4 Tropical Systems Around the World For Possible Development

There are four low pressure areas in the tropics today that NASA satellites are keeping an eye on for possible development: Systems 90S, 91S and 99S in the Southern Indian Ocean, and System 93B in the Northern Indian Ocean. Although all four have a poor chance of developing, one has triggered warnings in northern Australia because of its proximity to land.

NASA and the Japanese Space Agency jointly manage the Tropical Rainfall Measuring Mission (TRMM) satellite, and TRMM passed over two of those four systems today. TRMM captured light to moderate rainfall in the low pressure area called "System 90S" on March 30 at 01:49 UTC. Rainfall rates were between 5 and 20 millimeters (0.2 and 0.8 inches) per hour within the storm. System 90S is located 500 miles north-northwest of Port Hedland, Australia, near 12.0 South latitude and 116.0 East longitude.

Infrared satellite imagery from the Atmospheric Infrared Sounder (AIRS) instrument aboard NASA's Aqua satellite revealed that the low has consolidated during the morning hours, while the Advanced Microwave Scanning Radiometer-E instrument showed deep convection on the north and south sides of the center of circulation. Despite these developments, atmospheric dynamics are not currently favorable, so the Joint Typhoon Warning Center gives this low a poor chance for development.

The second tropical low pressure area NASA satellites are watching, System 99S, is also 500 miles from land. System 99S is 500 miles north of the Cocos Islands today, near 9.8 South and 99.4 East. The TRMM satellite measured rainfall rates between 5 and 20 millimeters (0.2 and 0.8 inches) per hour within System 99S early today. The AIRS infrared imagery captured from NASA's Aqua satellite shows that areas of deep convection exist on all sides of the low pressure center, but the convection is not uniform. Vertical wind shear is currently light and sea surface temperatures are warm enough to support development; however, the chance that it will develop into a tropical storm in the next 24 hours is poor. As the week progresses, the chance may improve along with the environmental conditions.

The third tropical low pressure area isn't a tropical storm, but it has triggered a watch for Australia's Northern Territory. Because of System 91S' location, about 200 miles northeast of Darwin, Australia (near 10.0 South and 133.1 East), a tropical cyclone watch has been issued for the coastal communities between Cape Hotham and Port Keats, including Darwin and the Tiwi Islands. The Tiwi Islands include Melville and Bathurst Islands and are part of Australia's Northern Territory, 25 miles (40 km) north of Darwin, where the Arafura Sea joins the Timor Sea. In addition, a Strong Wind Warning has been issued from Milingimbi to Troughton Island.

System 91S is expected to move in a southwesterly direction over the next several days and track over Snake Bay on Melville Island and Cape Fourcroy on Bathurst Island.

NASA AIRS infrared imagery revealed today that the convection (rapidly rising air that produces the thunderstorms that power a tropical cyclone) is intensifying and expanding around System 91S' center. The convection, however, appears disorganized, and the low's maximum sustained winds are between 15 and 20 knots (17-23 mph/27-37 km/h). The chance for development into a tropical storm in the next 24 hours remains poor, but the areas under the watch may feel System 91S' rains and some gusty winds.

The fourth area is in the northern hemisphere and in a different ocean. Tropical low 93B is located in the Northern Indian Ocean. Last night it was only 50 miles north of Phuket, Thailand near 9.0 North latitude and 98.7 East longitude. However, today, infrared satellite imagery from NASA's AIRS instrument showed that the low level circulation center has drifted inland. When a low is inland, its center of circulation is cut off from the warm waters that power the tropical cyclone. Because of weak steering winds, however, the low may move back seaward and redevelop in the warm waters offshore.

Currently the system's maximum sustained winds are between 15 and 20 knots (17-23 mph/27-37 km/h). The chance for development into a tropical storm in the next 24 hours remains poor, but coastal Thailand is already experiencing rains and some gusty winds from System 93B.

NASA's TRMM and Aqua satellites continue to provide data to forecasters who are keeping a watchful eye on all of these tropical low pressure areas.

Evolutionary 'Winners' and 'Losers' Revealed in Collaborative Study

March 22, 2011-Houston-
In a study that literally analyzed competing bacteria fighting it out to the death, a University of Houston (UH) researcher and his colleagues identified evolutionary ‘winners’ and ‘losers.’ Continuing research to understand the basis of these fates may become a useful tool in designing roadblocks to antibiotic resistance.

In collaboration with scientists at Michigan State University (MSU), UH evolutionary biologist Timothy Cooper and his graduate student Utpala Shrestha were co-authors on a paper titled “Second-Order Selection for Evolvability in a Large Escherichia coli Population.” The report appeared March 18 in Science, the world’s leading journal of original scientific research, global news and commentary.

“The project found that bacteria growing for thousands of generations in an environment containing glucose as the only food had evolved to be better at getting better,” Cooper said. “We found that two lineages of bacteria arose and competed in a single experimental population. The lineage that initially grew more slowly, yet had the potential to evolve more rapidly, was the evolutionary ‘winner.’ This is surprising because it’s usually thought that competition is decided by what competitors can do now and not what they are capable of in the future.”

As genetic changes occurred, making some individuals better competitors on the glucose food, other individuals that did not quickly get their own beneficial mutations were outcompeted and went extinct. Down the line, understanding the benefits of evolving quickly like this will be a useful tool to predict such things as antibiotic resistance and the evolution of infectious disease. Cooper said this knowledge may one day help scientists design intervention strategies that make the evolution of these traits less likely to occur.

The work done by Cooper and Shrestha at UH established the specific genetic changes occurring during this bacterial evolution experiment that caused the change in their ability to evolve further. They discovered the genetic change that was important for determining which bacteria would prevail and which were destined to become extinct.

“Our collaborators isolated individual bacteria from a population that had evolved for 500 generations and sequenced their entire DNA genome to determine all the changes that had occurred,” Cooper said. “By isolating these changes and adding them in defined combinations back into the original ancestral strain, we were able to determine their individual effects.”

Reminiscent of Aesop’s lesson that ‘slow and steady wins the race,’ Cooper adds that even bacteria can benefit from a long-term view, with their experiment showing that bacteria that adapted, slowly but consistently, outcompeted those that initially grew quickly but then ran out of ways to improve.

With much of his work based on experimental evolution, the lab-based study of evolving populations, Cooper’s motivation for this experiment comes from wanting to understand the factors involved in the evolution of organisms to better fit their environments. Using bacterial and computational experimental systems, he aims to identify and integrate these mechanisms and examine how they depend on genetic and environmental factors.

“Bacteria provide an ideal model system to address these questions, because they evolve so quickly, undergoing thousands of generations in only a few years,” Cooper said. “Additionally, we can now sequence their entire genomes and determine the genetic changes that lead to improvements in their ability to grow.”

Funded by the National Science Foundation and the Defense Advanced Research Projects Agency, this work was a multidisciplinary effort done in collaboration with researchers in zoology, microbiology and molecular genetics at MSU. In addition to UH’s Cooper and Shrestha, the MSU team consisted of Richard Lenski, Jeffrey Barrick, Robert Woods and Mark Kauth. Woods has since moved on to the University of Michigan and Barrick is currently at the University of Texas at Austin.

iMobot Rolls, Crawls and Creeps

An intelligent, reconfigurable modular robot invented by a UC Davis alumnus and a professor of mechanical and aerospace engineering is headed for commercial development with the help of a grant from the National Science Foundation.
Graham Ryland and Professor Harry Cheng hope their “iMobot” will be a useful research and teaching tool. They also say the technology could be used in industrial applications for rapidly prototyping complex robotics — and may eventually form the basis of robots for search-and-rescue operations in difficult terrain. The university has filed a patent on the robot.
Ryland and Cheng developed the iMobot while Ryland was studying for his master's degree in mechanical engineering and conducting research in Cheng’s Integration Engineering Laboratory at UC Davis.
A single iMobot module has four controllable degrees of freedom, with two joints in the center section and two wheels, one on each end. An individual module can drive on its wheels, crawl like an inchworm, or raise one end of its body and pan around as a camera platform.
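To make those four degrees of freedom concrete, here is a hypothetical Python sketch; the class and method names are invented for illustration and are not Barobo's actual programming interface.

```python
# Hypothetical sketch of one module's four controllable degrees of
# freedom (two center joints, two end wheels). Names are invented and
# do not reflect Barobo's real software.
from dataclasses import dataclass

@dataclass
class IMobotModule:
    joint1_deg: float = 0.0       # first center-section joint
    joint2_deg: float = 0.0       # second center-section joint
    left_wheel_deg: float = 0.0   # wheel on one end
    right_wheel_deg: float = 0.0  # wheel on the other end

    def drive(self, deg: float) -> None:
        """Roll forward by turning both end wheels together."""
        self.left_wheel_deg += deg
        self.right_wheel_deg += deg

    def camera_pan(self, deg: float) -> None:
        """Raise one end upright and pan it, as a camera platform."""
        self.joint1_deg = 90.0   # lift one end of the body
        self.joint2_deg += deg   # pan the raised end

m = IMobotModule()
m.drive(360.0)      # one full wheel revolution
m.camera_pan(45.0)  # point a mounted camera 45 degrees to the side
```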
Individual modules could be assembled into larger robots for particular tasks, such as a snakelike robot that could get into confined spaces, or a larger, wheeled robot for smoother terrain.
"We wanted to create a robot that was modular and could be assembled together, but was also mobile and useful by itself. We feel this hardware platform could drastically speed up university and industry research in the field of robotics," Ryland said.
Commercial robots are usually built for a specific application. But there is a lot of interest in modular robots -- machines made up of durable subunits that can function alone or be configured for a specific task.
The iMobot could be used as a testbed tool for engineers studying control systems for individual robots or groups of robots, Cheng said.
"It's very difficult to build the kind of robot with flexibility, modularity, and reconfigurability that people want to use for research and teaching," he said.
By using an off-the-shelf commercial robot like iMobot, researchers can focus on solving problems in areas such as artificial intelligence, robot collaboration, and reconfigurable and adaptive systems, without having to first develop the hardware part of the robot.
Currently, there are no commercial research-grade modular robots available, Ryland said.
Cheng and Ryland have formed a company, Barobo Inc., to develop the robot commercially. Ryland is the company’s president. Barobo recently received a small-business innovation research grant from the National Science Foundation to begin commercial development. The initial grant is for $150,000 over six months, with the opportunity to apply for another $500,000. The inventors hope to have the robot on the market by the end of this year.

Epigenomic Findings Illuminate Veiled Variants

Genes make up only a tiny percentage of the human genome. The rest, which has remained measurable but mysterious, may hold vital clues about the genetic origins of disease. Using a new mapping strategy, a collaborative team led by researchers at the Broad Institute of MIT and Harvard, Massachusetts General Hospital (MGH), and MIT has begun to assign meaning to the regions beyond our genes and has revealed how minute changes in these regions might be connected to common diseases. The researchers’ findings appear in the March 23 advance online issue of Nature.
The results have implications for interpreting genome-wide association studies – large-scale studies of hundreds or thousands of people in which scientists look across the genome for single “letter” changes or SNPs (single nucleotide polymorphisms) that influence the risk of developing a particular disease. The majority of SNPs associated with disease reside outside of genes and until now, very little was known about the functions of most of them.
“Our ultimate goal is to figure out how our genome dictates our biology,” said co-senior author Manolis Kellis, a Broad associate member and associate professor of computer science at MIT. “But 98.5 percent of the genome is non-protein coding, and those non-coding regions are generally devoid of annotation.”
The term “epigenome” refers to a layer of chemical information on top of the genetic code, which helps determine when and where (and in what types of cells) genes will be active. This layer of information consists of chemical modifications, or “chromatin marks,” that appear across the genetic landscape of every cell, and can differ dramatically between cell types.
In a previous study, the authors showed that specific combinations of these chromatin marks (known as “chromatin states”) can be used to annotate parts of the genome – namely to attach biological meaning to the stretches of As, Cs, Ts, and Gs that compose our DNA. However, many questions remained about how these annotations differ between cell types, and what these differences can reveal about human biology.
In the current study, the researchers mapped chromatin marks in nine different kinds of cells, including blood cells, liver cancer cells, skin cells, and embryonic cells. By looking at the chemical marks, the researchers were able to create maps showing the locations of key control elements in each cell type. The researchers then asked how chromatin marks change across cell types, and looked for matching patterns of activity between controlling elements and the expression of neighboring genes.
“We first annotated the elements and figured out which cell types they are active in,” said co-senior author Bradley Bernstein, a Broad senior associate member and Harvard Medical School (HMS) associate professor at Massachusetts General Hospital (MGH). “We could then begin to link the elements and put together a regulatory network.”
Having pieced together these networks connecting non-coding regions of the genome to the genes they control, the researchers could begin to interpret data from disease studies. The team studied a large compendium of genome-wide association studies (GWAS), looking to characterize non-coding SNPs associated with control regions in specific cell types.
“Across 10 association studies of various human diseases, we found a striking overlap between previously uncharacterized SNPs and the control region annotations in specific cell types,” said Kellis. “This suggests that these DNA changes are disrupting important regulatory elements and thus play a role in disease biology.”
The researchers confirmed the reliability of their approach by showing that SNPs were associated with the appropriate cell types. For example, SNPs from autoimmune diseases such as rheumatoid arthritis and lupus sit in regions that are only active in immune cells, and SNPs associated with cholesterol and metabolic disease sit in regions active in liver cells. While more in-depth, follow-up studies will be needed to confirm the biological significance of these connections, the current study can help guide the direction of these investigations.
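The core of that overlap test is easy to picture. Here is a toy Python sketch of the idea; every interval, SNP position, and label below is an invented placeholder, not data from the study.

```python
# Toy sketch: given cell-type-specific enhancer intervals and GWAS SNP
# positions, count how many SNPs land inside each cell type's enhancers.
# All coordinates and names are invented placeholders.
from bisect import bisect_right

enhancers = {
    "immune_cells": [(100_000, 100_600), (250_000, 250_400)],
    "liver_cells":  [(180_000, 180_500)],
}
gwas_snps = {"lupus": [100_250, 400_000], "cholesterol": [180_100]}

def overlaps(pos, intervals):
    """True if pos falls inside any (start, end) interval, sorted by start."""
    i = bisect_right([s for s, _ in intervals], pos) - 1
    return i >= 0 and intervals[i][0] <= pos <= intervals[i][1]

for trait, snps in gwas_snps.items():
    for cell, ivs in enhancers.items():
        hits = sum(overlaps(p, sorted(ivs)) for p in snps)
        print(f"{trait:12s} in {cell:12s}: {hits}/{len(snps)} SNPs in enhancers")
```

In this toy run the lupus SNPs fall in immune-cell enhancers and the cholesterol SNP in a liver enhancer, mirroring the kind of cell-type matching the study reported.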
“GWAS has identified hundreds of non-coding regions of the genome that influence human disease, but a major barrier to progress is that we remain quite ignorant of the functions of these non-coding regions,” said David Altshuler, deputy director at the Broad and an HMS professor at MGH, who was not involved in the study. “This remarkable and much-needed resource is a major step forward in helping researchers address that challenge.”
SNPs in the non-coding regions of the genome may have subtler biological effects than their counterparts that arise in genes because they can influence how much protein is produced. The researchers mainly focused on SNPs in enhancer regions, which help boost a gene’s expression, and their network connections to regulators that control them and genes that they target. Follow-up efforts can then focus on specific pieces of this network that could be targeted with drugs.
The team involved in this study hopes to expand its analysis to include many other cell types and map additional marks to expand their networks beyond enhancer regions. In the meantime, researchers involved in genome-wide association studies will be able to use the maps from this project to analyze non-coding SNPs in a new light.
“These maps can be used to come up with hypotheses about how the variants themselves are working and which ones are causal,” said Bernstein. “This resource now goes back to the GWAS community, which can use the maps to form and test new functional models.”
This research was supported by the National Human Genome Research Institute under an ENCODE grant, the Howard Hughes Medical Institute, the National Science Foundation, and the Sloan Foundation.
About the Broad Institute of Harvard and MIT
The Eli and Edythe L. Broad Institute of Harvard and MIT was launched in 2004 to empower this generation of creative scientists to transform medicine. The Broad Institute seeks to describe all the molecular components of life and their connections; discover the molecular basis of major human diseases; develop effective new approaches to diagnostics and therapeutics; and disseminate discoveries, tools, methods and data openly to the entire scientific community.
Founded by MIT, Harvard and its affiliated hospitals, and the visionary Los Angeles philanthropists Eli and Edythe L. Broad, the Broad Institute includes faculty, professional staff and students from throughout the MIT and Harvard biomedical research communities and beyond, with collaborations spanning over a hundred private and public institutions in more than 40 countries worldwide. For further information about the Broad Institute, go to http://www.broadinstitute.org.

Extra-Cold Winters in Northeastern North America and Northeastern Asia?

If you're sitting on a bench in New York City's Central Park in winter, you're probably freezing.
After all, the average temperature in January is 32 degrees Fahrenheit.
But if you were just across the pond in Porto, Portugal, which shares New York's latitude, you'd be much warmer--the average temperature is a balmy 48 degrees Fahrenheit.
Throughout northern Europe, average winter temperatures are at least 10 degrees Fahrenheit warmer than similar latitudes on the northeastern coast of the United States and the eastern coast of Canada.
The same phenomenon happens over the Pacific, where winters on the northeastern coast of Asia are colder than in the Pacific Northwest.
Researchers at the California Institute of Technology (Caltech) have now found a mechanism that helps explain these chillier winters--and the culprit is warm water off the eastern coasts of these continents.
"These warm ocean waters off the eastern coasts actually make it cold in winter--it's counterintuitive," says Tapio Schneider, a geoscientist at Caltech.
Schneider and Yohai Kaspi, a postdoctoral fellow at Caltech, describe their work in a paper published in the March 31 issue of the journal Nature.
"This research makes a novel contribution to an old problem," says Eric DeWeaver, program director in the National Science Foundation's (NSF) Division of Atmospheric and Geospace Sciences, which funded the research.
"Traditional wisdom has it that western Europe is warm because of the Gulf Stream, but this paper presents evidence that atmospheric circulation plays an important role in maintaining the colder temperatures found on the eastern boundaries of the mid-latitude continents."
Using computer simulations of the atmosphere, the researchers found that the warm water off an eastern coast will heat the air above it and lead to the formation of atmospheric waves, drawing cold air from the northern polar region.
The cold air forms a plume just to the west of the warm water. In the case of the Atlantic Ocean, this means the frigid air ends up right over the northeastern United States and eastern Canada.
For decades, the conventional explanation for the cross-oceanic temperature difference was that the Gulf Stream delivers warm water from the Gulf of Mexico to northern Europe.
But in 2002, research showed that ocean currents aren't capable of transporting that much heat, instead contributing only up to 10 percent of the warming.
Kaspi's and Schneider's work reveals a mechanism that helps create a temperature contrast not by warming Europe, but by cooling the eastern United States.
Surprisingly, it's the Gulf Stream that causes this cooling.
In the northern hemisphere, the subtropical ocean currents circulate in a clockwise direction, bringing an influx of warm water from low latitudes into the western part of the ocean.
These warm waters heat the air above them.
"It's not that the warm Gulf Stream waters substantially heat up Europe," Kaspi says. "But the existence of the Gulf Stream near the U.S. coast is causing the cooling of the northeastern United States."
The researchers' computer model simulates a simplified, ocean-covered Earth with a warm region to mimic the coastal reservoir of warm water in the Gulf Stream.
The simulations show that such a warm spot produces so-called Rossby waves.
Rossby waves are large atmospheric waves--with wavelengths that stretch for more than 1,609 kilometers (1,000 miles).
They form when the path of moving air is deflected due to Earth's rotation, a phenomenon known as the Coriolis effect.
Similar to how gravity produces water waves on the surface of a pond, the Coriolis force is responsible for Rossby waves.
In the simulations, the warm water produces stationary Rossby waves, in which the peaks and valleys of the waves don't move, but the waves still transfer energy.
In the northern hemisphere, the stationary Rossby waves cause air to circulate in a clockwise direction just to the west of the warm region.
To the east of the warm region, the air swirls in the counterclockwise direction. These motions draw in cold air from the north, balancing the heating over the warm ocean waters.
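For a sense of scale, here is a standard textbook estimate sketched in Python, using generic midlatitude values that are assumptions rather than numbers from the paper: a barotropic Rossby wave stands still against a mean westerly wind U when U = β/K², which puts the stationary wavelength in the thousands of kilometers.

```python
# Back-of-envelope scale for stationary Rossby waves (generic textbook
# values, not the paper's model): U = beta/K^2 gives the wavelength at
# which the waves stand still relative to the ground.
import math

U = 10.0                  # assumed mean westerly wind (m/s)
omega = 7.292e-5          # Earth's rotation rate (rad/s)
radius = 6.371e6          # Earth's radius (m)
lat = math.radians(45.0)  # reference midlatitude

beta = 2 * omega * math.cos(lat) / radius   # gradient of the Coriolis parameter
wavelength = 2 * math.pi * math.sqrt(U / beta)

print(f"beta ~ {beta:.2e} per meter-second")
print(f"stationary wavelength ~ {wavelength/1e3:,.0f} km "
      f"({wavelength/1609:,.0f} miles)")
# ~5,000 km, comfortably over the 1,000-mile scale quoted above.
```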
To gain insight into the mechanisms that control the atmospheric dynamics, the researchers sped up Earth's rotation in the simulations.
In those cases, the plume of cold air gets bigger--consistent with it being a stationary Rossby-wave plume.  Most other atmospheric features would get smaller if the planet were to spin faster.
Although it's long been known that a heat source could produce Rossby waves, which can then form plumes, this is the first time scientists have shown how the mechanism causes cooling that extends west of the heat source.
According to the researchers, the cooling effect could account for 30 to 50 percent of the temperature difference across oceans.
It also explains why the cold region is just as big for both North America and Asia, despite the continents' differences in topography and size.
The Rossby-wave induced cooling depends on heating air over warm ocean water. Since the warm currents along western ocean boundaries in both the Pacific and Atlantic are similar, the resulting cold region to their west would be similar as well.
The next step, Schneider says, is to build simulations that more realistically reflect what happens on Earth. Future simulations would incorporate more complex features like continents and cloud feedbacks.

New Research Suggests Wild Birds May Play a Role in the Spread of Bird Flu

A study by the U.S. Geological Survey, the United Nations Food and Agriculture Organization and the Chinese Academy of Sciences used satellites, outbreak data and genetics to uncover a previously unknown link in Tibet among wild birds, poultry and the movement of the often deadly virus.
Researchers attached GPS satellite transmitters to 29 bar-headed geese – a wild species that migrates across most of Asia and that died by the thousands in the 2005 bird flu outbreak at Qinghai Lake, China. GPS data showed that wild geese tagged at Qinghai Lake spend their winters in a region outside Lhasa, the capital of Tibet, near farms where H5N1 outbreaks have occurred in domestic geese and chickens.
This is the first evidence of a mechanism for transmission between domestic farms and wild birds, said Diann Prosser, a USGS biologist at the USGS Patuxent Wildlife Research Center. “Our research suggests initial outbreaks in poultry in winter, followed by outbreaks in wild birds in spring and in the breeding season. The telemetry data also show that during winter, wild geese use agricultural fields and wetlands near captive bar-headed geese and chicken farms where outbreaks have occurred.”
The part that wild birds play in the spread of bird flu has been hotly debated since the 2005 outbreaks in Qinghai Lake. Bird flu that spread beyond Asia and into Europe and Africa was later found to have genetically originated in the Qinghai Lake area. Discovering the Tibet connection adds another significant link in the global transmission of bird flu.
From 2003 through 2009, the Qinghai-Tibet Plateau experienced 16 confirmed outbreaks of the virus in wild and domestic birds, most of them in the newly documented migratory pathway of the wild bar-headed geese. “Every summer, more than 150,000 migratory birds use Qinghai Lake, which sits within the eastern portion of the Central Asian Flyway, which extends from India to Russia,” said John Takekawa, a wildlife biologist at the USGS Western Ecological Research Center. 
The study also uncovered an undocumented migratory link between Qinghai Lake and Mongolia, further suggesting that Qinghai may be a pivotal point of H5N1 transmission. 
Scott Newman, Head of the EMPRES Wildlife Health and Ecology Unit of the FAO, noted that poultry production at the southern end of the Central Asian Flyway is extensive, which has resulted in many more HPAI H5N1 outbreaks there than in the northern end, where poultry production is more limited. “In general,” said Newman, “H5N1 outbreaks along this flyway mirror human and poultry densities, with domestic poultry the primary reservoir for this disease.”
This study is part of a global program to not only better understand the movement of avian influenza viruses and other diseases in the Central Asian Flyway, but also to improve the understanding of the ecological habits of waterbirds internationally, identify important conservation issues, and better define  interactions among wild and domestic birds.
The H5N1 virus continues to reemerge across much of Eurasia and Africa, with high fatality rates in people, and the threat of a possible global pandemic.  Since 2003, H5N1 has killed 300 people, including 18 in 2010, and has led to the culling of more than 250 million domestic poultry throughout Eurasia and Africa.  Sixteen countries reported H5N1 outbreaks in poultry in 2010.  No HPAI H5N1 has been detected in North America, despite extensive efforts to test migratory birds to provide early warnings should birds with the virus arrive in the country.
The study was funded by the USGS, FAO, National Science Foundation and the Chinese Academy of Sciences. It was published in PLoS ONE and authored by Diann J Prosser, Peng Cui, John Y. Takekawa, Scott Newman and others.

A bit of fiber makes for sudsier beer

This St. Patrick’s Day, raise a glass to the beer researchers of Ireland. By figuring out how bubbles form in stout beers, a team of mathematicians has come up with an idea that could better unleash the foamy head of Guinness: beer cans lined with a material similar to a coffee filter.
Stout beers foam less than other beers when poured, thanks to the nitrogen gas that brewers inject into the liquid before packaging. Nitrogen cuts down on acidic tastes and provides a longer-lasting head and a creamier mouthfeel, owing to the small size of its bubbles. But nitrogen doesn’t dissolve as well as other gases or form bubbles as easily. That’s why bars need special devices to force the bubbles out of tapped stout beers and why every can of Guinness contains a hollow ball filled with nitrogen gas that triggers foaming. It’s also why beer aficionados in search of a foamy head spend 30 seconds on the perfect pour.
Carbon dioxide, present only in small quantities in stouts, gives a quick fizz and foam to soft drinks, champagne and paler beers. Pour a bottle of one of these carbonated beverages, and the gas readily comes out of the liquid. Bubbles speedily form and grow larger while rising through the liquid, absorbing dissolved gas and releasing it at the surface.
To see if stout beers could learn a thing or two from lighter beers, mathematician William Lee of the University of Limerick and colleagues investigated the mechanism behind bubble formation.
In a glass of beer, bubbles form only at the sides and bottom — not within the beer itself. Trapped pockets of gas create steady streams of these bubbles much in the same way that dropping sugar crystals into an overly sweet — what chemists call supersaturated — sugar solution creates rock candy. Irregularities in the glass itself were once thought to be the origins of these pockets of gas.
But in 2002 Gérard Liger-Belair, a chemical physicist at the University of Reims Champagne-Ardenne in France, proposed that gas pockets in bits of fibrous material clinging to the sides of containers were actually driving the formation of bubbles in carbonated beverages. Observations of these bits of cellulose under a microscope led to a mathematical explanation of bubble nucleation in champagne.
“The cellulose fiber model does a better job than the previous one dealing with irregularities because it is more ‘stuck’ to reality,” Liger-Belair says. “Most of bubble nucleation sites in a glass are tiny cellulose fibers.” The fibers, he suggests, float into the glass from dust in the air or break off from cloths used to wipe down glasses.
By adapting Liger-Belair’s math, Lee showed that the same mechanism is at work in stout beer, only much more slowly. A cellulose fiber triggers the formation of a bubble in a stout beer in 1.28 seconds, as compared to the 0.079 seconds required in a typical carbonated liquid.
“I was surprised to find that it did work, albeit slowly, and even more surprised when a rough estimate suggested it could potentially be useful,” says Lee, who posted his results online at arXiv.org on March 3.
Adding a coating of dense fibers inside a stout can’s upper lip should trigger the release of nitrogen bubbles during pouring, without the need for the nitrogen gas widget now in use, he suggests. To demonstrate this concept, he poured stout beer onto a coffee filter and watched with satisfaction as it bubbled on the fibrous material — sacrificing a perfectly good beer to the pursuit of the perfect pour.
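For a rough feel for those numbers, here is some toy arithmetic in Python; only the two per-fiber nucleation times come from the article, and the fiber count of a hypothetical coating is an invented placeholder.

```python
# Toy arithmetic on the published nucleation times. Only t_stout and
# t_fizzy come from the article; the fiber count is a made-up placeholder.
t_stout = 1.28    # seconds for one fiber to seed a bubble in stout
t_fizzy = 0.079   # seconds in a typical carbonated drink

print(f"stout bubbles nucleate ~{t_stout / t_fizzy:.0f}x more slowly")

# A lining with many fibers could compensate for the slow per-fiber rate:
fibers = 100_000  # hypothetical density of a fibrous can coating
print(f"{fibers:,} fibers -> ~{fibers / t_stout:,.0f} bubbles per second")
```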

Missing bits of DNA may define humans

In evolution as on reality TV, sometimes the biggest loser is really a winner.
Losing 510 chunks of DNA may have enabled humans to develop bigger brains, spineless penises and other human traits, researchers from Stanford University and their colleagues report in the March 10 Nature.
The research is the latest attempt to find genetic factors that make humans human. Previously researchers have searched for genes that humans have but other species do not, but the new study turns that approach on its head, looking instead for pages redacted from the human genetic instruction book during the course of evolution.
“This is a clever thing to do and as so often with good ideas, seems almost obvious in hindsight,” says Svante Pääbo of the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany.
Looking for missing DNA that could shape human traits was something of a no-brainer for Stanford researchers David Kingsley and Gill Bejerano. Kingsley, a developmental geneticist, had previously discovered that stickleback fish species can shed prominent pelvic spines by losing a bit of DNA involved in growing limbs and other appendages (SN: 1/31/09, p. 26).
The team looked at the genetic blueprints of humans, chimpanzees and macaques to see if humans were missing any pieces found in the other two species.
Indeed, humans lack many chunks of DNA that chimps, macaques and mice all seem to share — at least 510 different bits. Most are also missing from Neandertals, suggesting that the pieces were lost sometime between 500,000 and 6 million years ago.
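The logic of that screen can be sketched in a few lines of Python; the region labels below are invented stand-ins for aligned sequence blocks (the real analysis worked from whole-genome alignments).

```python
# Toy version of the screen: keep regions shared by chimp and macaque
# (so they were presumably present in the common ancestor), then flag
# the ones absent from the human genome. Region names are invented.
chimp   = {"r1", "r2", "r3", "r4"}
macaque = {"r1", "r2", "r3"}
human   = {"r1", "r3"}

ancestral = chimp & macaque           # conserved in both relatives
candidate_losses = ancestral - human  # present in relatives, gone in humans
print(sorted(candidate_losses))       # -> ['r2']: a candidate human-specific loss
```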
Only one of the missing bits contained an actual gene. The rest of the absent genetic instructions may be important switches for turning on genes. Such switches, known as enhancers, can be located far from a gene but still govern when and where the gene is flipped on during development.
Humans and chimps have roughly the same set of genes, and yet have clear physical and behavioral differences. Some scientists have reasoned that changing the way genes are used — by turning a gene on or off in a particular tissue or during a phase of development — may be a way to evolve new characteristics without damaging important genes. Adding or deleting enhancers is one way to regulate gene activity.
“It’s probably the best way to develop new functionality in the short run,” says David Haussler, a Howard Hughes Medical Institute investigator at the University of California, Santa Cruz.
Because most of the missing chunks of DNA don’t contain genes, it is difficult to say exactly what the pieces are supposed to do. The Stanford team used “a very clever computational analysis” to tease out a couple of pieces of DNA that might have clear-cut functions, Haussler says.
One of the bits is near a gene that controls production of an androgen receptor, a protein that detects testosterone. The piece of DNA missing in humans is an enhancer for the gene, which controls the production of facial sensory whiskers and small spines on the penises of both chimps and mice, the researchers found. Since humans don’t have the enhancer, the gene is not turned on, and sensory whiskers and penile spines fail to develop.
In some mammals, penile spines are prominent and may help males achieve ejaculation quickly. “The key to reproductive success is impregnation, and the faster you can achieve that, the better,” says Owen Lovejoy, a biological anthropologist at Kent State University in Ohio. But even though the loss of the spines makes copulation last longer, it hasn’t hurt the reproductive success of humans, he says. Longer copulation times may help cement bonds between mating partners, which Lovejoy says has been a key to humans’ evolutionary success.
Another missing enhancer identified in the new study may help explain the evolution of human brain size. The enhancer lies next to a tumor-suppressing gene called GADD45G, which normally reins in cell growth so that cancer doesn’t develop. In mice and chimps, the enhancer DNA turns the gene on in the brain. Because humans lack the enhancer, the gene is not turned on in the brain and brain cells may proliferate, possibly contributing to bigger brains.
The enhancer probably isn’t the only thing that led to humans’ increased brain power, Kingsley says. “There’s no way changes in a single gene are going to produce all the interesting changes that led to the human brain.”
Removing the enhancer from mice may help researchers learn just how much brain growth the missing piece of DNA is responsible for, he says.

Who felt it not, smelt it not

People who feel no pain due to a particular rare genetic defect also can’t smell anything, a new study finds. The unexpected discovery shows that nerves that detect pain and odors rely on the same protein to transmit information to the brain.
Researchers examined three people who have mutations in the SCN9A gene and can’t feel pain. All of the people had broken multiple bones without feeling any pain, and two had given birth painlessly. But none were aware that they also couldn’t smell a thing.
None of the study participants could distinguish balsamic vinegar, orange, mint, coffee or perfume from plain water, even when researchers poured on so much perfume and vinegar that the scents were unbearable to people with a normal sense of smell, an international team of researchers reports online March 23 in Nature.
It may not be so strange that none of the people realized that they lacked a sense of smell. “If this was a genetic defect from birth they wouldn’t even know what they were missing,” says Graeme Lowe, a neurophysiologist at the Monell Chemical Senses Center in Philadelphia who was not involved in the study.
As oblivious as the patients were to their smell deficit, the scientists had been equally clueless that smell and pain shared a common communication gateway.
Researchers had previously shown that SCN9A controls pain sensitivity in people (SN Online: 3/8/10). The gene makes a protein, called a sodium channel, that lets sodium pass in through a nerve cell’s membrane when the nerve detects something painful. That flood of sodium sends an electrical signal racing down a long tendril, called an axon, toward the brain.
In the new study, the team discovered that odor-detecting nerve cells have the same sodium channel at the tips of their axons. In those cells, the channel controls the release of a chemical called glutamate, which in turn sparks electrical communication with odor-processing centers in the brain.
Because the sodium channel is missing in people with SCN9A mutations, the messages sent by pain- and odor-sensing nerves never actually make it to the brain.
“It was completely surprising that these two sensory systems would use the same sodium channel,” says Frank Zufall, a neurophysiologist at the University of Saarland School of Medicine in Homburg, Germany. “But it’s clearly not needed for all senses.” None of the people with the faulty gene had hearing or vision problems. The researchers plan to test whether those people have a sense of taste, and whether taste cells also use the sodium channel to communicate.
Although the people weren’t bothered by not being able to smell, mice engineered to lack the sodium channel in smell-sensing nerve cells had a tough time. Baby mice are blind and rely on their sense of smell to guide them to their mother’s milk. Without the sodium channel, baby mice were underweight and mother mice couldn’t locate babies that had gotten out of the nest. Adult mice lacking the protein also wandered into territory marked with the scent of foxes — a place normal mice strictly avoided. “These [mutant] mice have no chance in nature,” Zufall says.
Some pharmaceutical companies are already working on painkilling drugs that would block the sodium channel’s activity. But the new study suggests that such drugs could have the side effect of eliminating smell, Lowe says. Because odor is an important component in creating the flavor of a food, people’s ability to taste would also be compromised, he says.

When bacteria are flu fighters

Not all germs are created equal. Some invade the body and cause disease or harmful infections but others live peacefully in your intestines, helping your body run smoothly. Antibiotics can't tell the difference. They knock out good and harmful bacteria alike.
That's usually okay for people fending off bacterial infections, like the painful ones that can grow in your ear, but it might be a problem for fighting the flu. Researchers from Yale University say that when mice on antibiotics were infected with the flu virus, they had a harder time fighting off sickness than did mice that hadn't been taking antibiotics.
Helpful bacteria are called “commensal” bacteria — and you've got a lot of them. More than 1,000 different types of bacteria live in the human body. In the intestines, commensal bacteria help break down food. They can also prevent bad bacteria from launching an infection.
In earlier experiments, scientists had shown that commensal bacteria can help fend off bad bacteria in the intestines. But that's not the only place, say the Yale researchers. The scientists discovered that these friendly gut-dwelling germs can help the immune system find and fight invaders elsewhere in the body. When antibiotics knocked out the good bacteria in the mice intestines, the mice had trouble fighting off viral infections in their lungs.
This long-distance relationship shows that the good germs in the gut can make a difference all over the body, and that antibiotics may jeopardize those benefits. The gut bacteria help the immune system create a protein called interleukin-1 beta, or IL-1 beta. The body uses this molecule to fight off the flu and other viral infections.
When the scientists gave antibiotics to the mice, the antibiotics wiped out the good bacteria that made IL-1 beta. As a result, when those mice were given the flu, they had a harder time fighting the infection than did the mice that hadn't been given antibiotics.
The scientists showed that antibiotics can wipe out important commensal bacteria, but a mystery remains. It’s not clear which bacteria are responsible for the IL-1 beta. But it’s possible to rule out some kinds.
“We know for sure that there are certain bacteria that can’t do it,” Akiko Iwasaki told Science News. Iwasaki, who led the study, is an immunologist at Yale. Immunologists study the body's immune system.
The mice had a hard time with the flu, but their immune systems still worked. Iwasaki and her colleagues found that even while on antibiotics, mice could stop an infection by the herpes virus. That's because the body doesn't need IL-1 beta to fight herpes.
The connection between the gut and the immune system could lead to new types of medicine. If scientists can find which bacteria help make IL-1 beta, for example, they might be able to find a way to boost production of those bacteria ― and thus boost the immune system at the same time.
The experiment is also a reminder: Next time you fight off the flu, thank your germs!
POWER WORDS (adapted from the New Oxford American Dictionary)
gut The intestines.
bacteria Single-celled organisms that can live in soil, water and air, as well as inside and outside the human body. In the intestines, beneficial bacteria called commensal bacteria can help break down food and fight infection.
virus An organism that is made of genetic material surrounded by a protein shell. When a virus invades a host, it begins reproducing rapidly, causing infection.
proteins Large molecules that perform a wide variety of functions and are essential to life.
immunity The ability of a body to resist a particular infection.
immunology The branch of medicine and biology that focuses on immunity.

Club drug tied to out-of-body sensations

A popular “club drug” promises to open a scientific window on the strange world of out-of-body experiences, researchers say.
Recreational users of a substance called ketamine often report having felt like they left their bodies or underwent other bizarre physical transformations, according to an online survey conducted by psychologist Todd Girard of Ryerson University in Toronto and his colleagues.
Ketamine, an anesthetic known to interfere with memory and cause feelings of detachment from one’s self or body, reduces transmission of the brain chemical glutamate through a particular class of molecular gateways. Glutamate generally jacks up brain activity. Ketamine stimulates sensations of illusory movement or leaving one’s body by cutting glutamate’s ability to energize certain brain areas, the researchers propose in a paper published online February 15 in Consciousness and Cognition.
“Ketamine may disrupt patterns of brain activation that coalesce to represent an integrated body and self, leading to out-of-body experiences,” Girard says.
National surveys indicate that 1.6 percent of high school seniors in Canada and the United States have used ketamine at least once. An estimated 70 percent of Toronto rave-goers now report taking ketamine at these all-night parties, Girard notes.
In the new survey, use of marijuana, LSD and MDMA, also known as ecstasy, displayed modest links to volunteers’ reports of illusions of walking or moving rapidly up and down while actually remaining still. But only ketamine use exhibited a strong relationship with having had a range of out-of-body experiences, regardless of any other drugs ingested at the time of those sensations, researchers say.
Neuroscientist Olaf Blanke of the Swiss Federal Institute of Technology in Lausanne calls ketamine “an interesting candidate to further understand some of the brain mechanisms in out-of-body experiences.” Blanke, who like a growing number of scientists studies these phenomena in controlled experiments (SN: 6/5/10, p. 10), says that drugs such as ecstasy and amphetamines also deserve close scrutiny.
Blanke has linked out-of-body experiences to reduced activity in brain areas that integrate diverse sensations into a unified perception of one’s body and self. Ketamine and other recreational drugs act throughout the brain, making it difficult to explain how any one drug might specifically affect sensation-integrating tissue, Blanke says.
Girard’s team administered online surveys about drug use and drug-related experiences to 192 volunteers, ages 14 to 48. Almost half the sample reported having used marijuana, alcohol, ecstasy, ketamine and amphetamines. Roughly two-thirds had taken ketamine, and nearly everyone had used marijuana and alcohol.
Almost three-quarters of all participants reported having had a feeling of temporarily leaving their bodies, usually on several occasions. About 42 percent had experienced seeing their own bodies from an outside vantage point. Feelings of rapidly moving up and down, falling, flying or spinning had affected more than 60 percent of volunteers. Another 41 percent reported illusions of sitting up, moving a limb or walking around a room, only to realize that they had not moved.
Of those reporting feelings of leaving their bodies, 58 percent were under the influence of ketamine at the time. Ketamine use also displayed a close association with other unusual bodily sensations.
Apparent effects of drugs such as ecstasy on out-of-body experiences were largely explained by associated ketamine use, Girard says.

Half of adult males carry HPV

The virus notorious for causing cervical cancer in women also turns up frequently in men and can hang on unnoticed for months or even years, researchers report online March 1 in Lancet. The study solidifies earlier research indicating that human papillomavirus is highly prevalent in men and strengthens the case for vaccinating men and boys against it, the report’s authors say.

There are dozens of types of HPV, including more than 40 that can be transmitted sexually. Some can cause cancer. Two vaccines, Merck’s Gardasil and GlaxoSmithKline’s Cervarix, protect against two types of cancer-causing HPV. Both vaccines are approved and recommended for girls and young women. Gardasil is also recommended for boys up to age 18, since its protection extends to two additional types of HPV that cause genital warts in males and females.

It’s widely assumed that limiting the virus in men or women would diminish its spread in the whole population. But while HPV has been extensively studied in women, its prevalence is less well understood in men, says Joseph Monsonego of the Institute of the Cervix in Paris, writing in the same Lancet issue. For that reason, he says, the new study results “are of substantial interest.”

Starting in 2005, epidemiologist Anna Giuliano of the H. Lee Moffitt Cancer Center & Research Institute in Tampa, Fla., and an international team of researchers recruited more than 4,000 men living in Brazil, Mexico and Florida into a study of HPV. The new study reports on the first 1,159 of these volunteers. Their average age was 32 and none had been vaccinated against HPV. Swabs of the penis and genital area of each man revealed that 50 percent were infected with at least one HPV type upon enrollment.

The researchers repeated these exams every six months, and the men completed personal-history questionnaires. Over a median of 28 months, the group acquired 1,572 new HPV infections.

The human immune system can clear HPV out of the body, and the men wiped out most of their new infections during the study period. But it took a median 7.5 months. Median clearance times didn’t vary substantially among the countries, but did vary between HPV types. Some cases lingered as long as 24 months in the men.

HPV 16 is the type responsible for the most cervical cancers in women and is covered by both vaccines. It took a median of 12 months to clear. “It’s hanging around longer, and it’s completely asymptomatic,” Giuliano says. “You don’t even know you have it.” This silent infection means a person can transmit this HPV type for longer periods and “might help explain why HPV 16 is one of the most common types in both men and women,” she says.

The data also reveal that men who reported having 10 or more sexual partners in their lifetimes had roughly twice as many HPV infections as did men who had had one partner.

Giuliano says many insurance programs cover HPV vaccination in boys up to age 18.

Male circumcision and the use of condoms have shown little protection against HPV infection, Monsonego says. “HPV vaccination in men will protect not only them but will also have implications for their sexual partners,” he says.

Giuliano says she expects to have data on the full group in three years.

Brain chemical influences sexual preference in mice

When courting, male mice lacking the chemical messenger serotonin don’t seem to care whether the object of their affection is female. Mice without the neurotransmitter no longer eschew the smells of other males, wooing them instead with squeaky love songs and attempts to mount them, researchers report online March 23 in Nature.
Serotonin’s surprise role in mouse courtship may lead to a deeper understanding of how brain cells control a complex behavior.
“Nobody thought that serotonin could be involved in this kind of sexual preference,” says study coauthor Zhou-Feng Chen of Washington University School of Medicine in St. Louis.   
Scientists emphasize that the male-male courtship seen in the lab isn’t equivalent to human homosexuality. And what, if anything, serotonin has to do with human sexual behavior is still an open question.
“We have to be cautious because this is work done in mice,” Chen says. “I would be extremely careful to extrapolate these results into humans. We just don’t know much about this.”
In the study, male mice that were genetically engineered to lack serotonin-producing brain cells still courted females. But when given the choice between males or females, these mice no longer reliably chose females over males. In tests where both a male and a female mating partner were present, nearly half of the serotonin-lacking males mounted the male first, report researchers led by Yi Rao of the National Institute of Biological Sciences and Peking University in Beijing.
These mice were also more likely than control mice to emit ultrasonic squeaks — a type of mouse love song — toward other males. And although male mice usually spend more time sniffing the odor of female genitals, these mice spent equal time sniffing male and female odors. Some of these signs of altered sexual behaviors could be reversed by injecting a compound into the mice that restored brain serotonin, Rao and his colleagues found.
Psychiatrist and sexual-research scientist Milton Wainberg of Columbia University says that it’s too simplistic to apply the experimental results to human sexuality. “These mice are not gay,” Wainberg says. “These mice have a disease that makes them do one behavior, which happens to be a behavior that can be thought of as a homosexual behavior, but it’s not homosexuality. It’s not being gay.”
The researchers don’t yet know whether serotonin affects sexual behavior in female mice.
The new result “opens now quite a lot of fascinating avenues,” including questions about where, when and how serotonin exerts its control over mouse sexual behavior, says Catherine Dulac, a molecular neuroscientist and Howard Hughes Medical Institute investigator at Harvard University.

Serotonin performs a wide variety of jobs by carrying messages across brain cell connections. The neurotransmitter has been linked to behaviors including feeding, sleeping and aggression. Serotonin also regulates many cognitive processes, including mood.
In humans, antidepressant drugs that increase the amount of serotonin in the brain do have some sex-related side effects. Selective serotonin reuptake inhibitors, or SSRIs, for example, can decrease libido in people. Yet there’s absolutely no evidence that the neurotransmitter has any influence on sexual orientation, Wainberg says.

Jumping on the bandwagon brings rewards

When minds think alike great things can happen, even if the minds themselves are not so great. Stock day traders who act in sync — no matter the stock, or whether they are buying or selling — make more money at the end of the day than their out-of-sync peers, reports an analysis to appear in the Proceedings of the National Academy of Sciences.

The research adds to a growing body of evidence that a collective wisdom can emerge from the myopic views of individuals. Understanding how and when could lead to ways of tapping that wisdom, enabling trading firms to make more money. Mining collective wisdom could also lead to new approaches for combating terrorism or fighting flu outbreaks, says study leader Brian Uzzi of Northwestern University in Evanston, Ill.

Uzzi and his Northwestern colleagues analyzed a year and a half of trades — more than a million transactions — made by 66 day traders at a single firm. Parsing the trading behavior down to a scale of seconds revealed sweet spots of synchronization — seconds to minutes when many traders were engaged in frenetic activity. On average the traders made money on 55 percent of their trades, but those who were in sync with their peers profited 60 percent of the time.
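To picture that analysis, here is a toy Python sketch with randomly invented trades rather than the study's data; only the 66-trader count comes from the article, while the trade counts, threshold, and win probability are placeholders. It tags each trade by how many traders acted in the same second, then compares win rates for synced and unsynced trades.

```python
# Toy version of the synchronization analysis. All data are randomly
# generated placeholders; only the 66-trader count matches the article.
import random
random.seed(1)

N_TRADERS, N_SECONDS = 66, 10_000
# each trade: (second it happened, trader id, was it profitable?)
trades = [(random.randrange(N_SECONDS), t, random.random() < 0.55)
          for t in range(N_TRADERS) for _ in range(200)]

active = {}  # second -> set of traders who traded in that second
for sec, trader, _ in trades:
    active.setdefault(sec, set()).add(trader)

SYNC_THRESHOLD = 3  # "in sync" = at least this many traders in one second
synced = [won for sec, _, won in trades if len(active[sec]) >= SYNC_THRESHOLD]
unsynced = [won for sec, _, won in trades if len(active[sec]) < SYNC_THRESHOLD]

print(f"win rate when synced:   {sum(synced) / len(synced):.3f}")
print(f"win rate when unsynced: {sum(unsynced) / len(unsynced):.3f}")
# With random data both rates hover near 0.55; in the real trading records,
# the synced trades were the ones that paid off about 60 percent of the time.
```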

“I love the counterintuitive nature of the finding,” says complex-networks expert Albert-László Barabási of Northeastern University in Boston. “The dogma is that the successful investors are the Buffetts — those who swim against the current. Yet this study shows that when it comes to day trading, going with the wave has real benefits.”

A peculiar aspect of the emergent intelligence is its lack of intentional coordination. There wasn’t a preestablished crowd for traders to glom onto, nor a leader to follow. The assessment suggests instead that instant messaging among traders helped couple their behavior. Instant messaging volume went up and down with trading volume, and the flow of instant messaging became less random as traders got more in sync, the researchers found.

Each trader uses whatever information is available — whether about housing market changes or Steve Jobs’ health — to figure out the best time to buy and sell for a particular specialty. “No single one can figure out the market,” says Uzzi. Yet somehow when many traders reach a decision to act at the same time, “It’s really super-special,” says Uzzi. “And by super-special, I mean really lucrative.”

It’s unlikely that an individual trader could take advantage of the phenomenon, given that the golden bandwagon emerges suddenly, as if from a mist. But someone with a global view of what’s happening might be able to jump on, or urge others to do so, says Uzzi. It may prove fruitful for government agencies, for example, to harness the collective wisdom of intelligence officers trying to figure out when to act on information about terrorist activity, or to keep an eye on specialists as they monitor for an infectious disease outbreak.

Of course synchronicity can also be neutral, or worse. And distinguishing which is which is much easier after the fact. “Sometimes synchronicity doesn’t lead down the path of collective wisdom,” says Uzzi, “but down the blind alley of mob madness.”

Two stars caught fusing into one

For stellar astronomers, “the two shall become one flesh” just took on a whole new meaning.
Scientists have directly observed for the first time the merger of two closely orbiting stars. Experts have suggested for decades that such stars — which whirl so close to each other that their outer layers actually touch — should ultimately commingle. The new work, by Romuald Tylenda of the Nicolaus Copernicus Astronomical Center in Toruń, Poland and collaborators, catches the stars in the act.
That claim is “not just plausible; it's compelling,” says Robert Williams of the Space Telescope Science Institute in Baltimore, who was not involved in the study. The results, to appear in an upcoming Astronomy & Astrophysics, add to previous efforts by Williams and colleagues to understand the nature of the star pair, called V1309 Scorpii.
V1309 Sco was discovered in 2008 when it erupted in a bright flare. Astronomers have proposed various explanations for the burst since then without reaching a consensus.
The new work hinges on a piece of good luck: Tylenda realized that the Optical Gravitational Lensing Experiment, a Warsaw University Observatory project hunting for dark matter since the mid-1990s, had been pointing its telescope at V1309 Sco’s region of the sky for years. Trolling through more than 2,000 observations taken from 2002 to 2010, he and his colleagues found light variations that suggest V1309 Sco was originally a contact binary star, a just-touching pair of stars circling each other about every 1.4 days. Over time, this periodic variation shortened as the stars’ outer layers combined to cocoon both orbs in a single envelope.
At that point the object got brighter, its light doubling every 19 days until late August 2008, when it brightened by a factor of 300 over 10 days. V1309 Sco’s final burst occurred that month when the stars’ cores finally merged and energy from their combined spins erupted outward. The object peaked at 10,000 times its original luminosity, more than 30,000 times brighter than the sun, then faded over the course of a few months to roughly its original brightness.
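Those numbers imply a sharply accelerating light curve, which a little arithmetic on the quoted figures makes concrete:

```python
import math

# Slow run-up: brightness doubling every 19 days.
runup_rate = math.log(2) / 19    # ~0.036 per day (e-folding rate)

# Final burst: a factor of 300 in 10 days.
burst_rate = math.log(300) / 10  # ~0.57 per day

print(f"run-up: {runup_rate:.3f} /day")
print(f"burst:  {burst_rate:.3f} /day")
print(f"the burst is ~{burst_rate / runup_rate:.0f}x steeper")
# The final flare brightened the system roughly 16 times faster than
# the months-long run-up that preceded it.
```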
The best explanation for these variations is the merger of a contact binary system, Tylenda and his colleagues assert.
While the resulting object should be a star — albeit one with a weird gut structure and a quick spin — the material thrown off during coalescence still largely blocks V1309 Sco, so astronomers can’t see what it looks like. Astronomers have requested time on the Hubble Space Telescope to observe the object, says Williams, but it may take years for the disk to dissipate, notes Stefan Kimeswenger of the University of Innsbruck in Austria.
Kimeswenger fully agrees with the conclusion that V1309 Sco was a contact binary. But he is skeptical of the authors’ suggestion that a merger scenario could explain flares of a larger class of objects called V838 Mon-type eruptions, named after an object sighted in 2002 (SN: 10/14/06, p. 248). “They all had outbursts of unknown type and they moved quickly to a red cold shell,” he says. But with different burst energies and chemical compositions, “that's all they have in common.”
While Tylenda agrees that V838 Mon itself was “almost certainly not” a contact binary, he does think that stellar mergers of different types could explain these eruptions.
“The objects share the same crucial characteristics, that is they become extremely cool (for stars) at the end of the eruption,” he says. “This indicates that the energy source of the eruption quickly faded during the eruption. This is just what is expected to happen when the violent merger processes are over.”

Laser proposed to deflect space junk

It won’t prevent Armageddon, but a simple ground-based laser system could nudge small pieces of space junk away from satellites to prevent collisions, a new study suggests.
The proposed system uses photons generated by a medium-power laser and aimed into space through a 1.5-meter telescope. The photons exert pressure on space debris in low-Earth orbit, gently pushing the objects aside rather than vaporizing them. Researchers have applied the same idea, using the pressure from sunlight, to propel spacecraft (SN: 8/21/99, p. 120).
James Mason of the Universities Space Research Association and NASA’s Ames Research Center in Mountain View, Calif., and his colleagues describe their system online at arXiv.org on March 10. The proposed device, which would cost a little over $10 million, could be ready for testing next year and fully operational a few years later.
About 500,000 pieces of space debris centimeter-sized and larger reside in low-Earth orbit. Most are fragments of abandoned spacecraft that have broken up or exploded. The number of cataloged space-debris pieces larger than 10 centimeters has risen dramatically in recent years and most satellites don’t have shielding that would protect them from collisions with such debris, says Don Kessler, a retired NASA senior scientist and orbital debris expert.
If a piece of space debris had to be moved by about 200 meters a day to avoid a collision, a medium-power laser of about 5 kilowatts could provide the needed push — provided the debris had a large surface area and was no heavier than 50 to 100 kilograms, Mason calculates.
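The physics behind such an estimate is straightforward: a beam of power P exerts a force of P/c on a target that absorbs it, and up to twice that if the light is reflected. Here is a rough order-of-magnitude check, assuming an absorbing 100-kilogram target and, unrealistically, continuous illumination for a full day (a real system would only paint the debris during brief overhead passes):

```python
P = 5_000   # laser power, watts (the "medium-power" figure above)
c = 3.0e8   # speed of light, m/s
m = 100     # debris mass, kg (upper end of the quoted range)
t = 86_400  # one day, in seconds

F = P / c            # radiation force on an absorbing target, ~1.7e-5 N
a = F / m            # acceleration, ~1.7e-7 m/s^2
dx = 0.5 * a * t**2  # naive straight-line displacement, ~600 m per day

print(f"force        {F:.2e} N")
print(f"acceleration {a:.2e} m/s^2")
print(f"displacement {dx:.0f} m after one day")
```

Even this crude straight-line estimate lands in the neighborhood of the 200-meters-a-day requirement; the real analysis instead tracks how the tiny velocity change shifts the object along its orbit.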
Such a laser couldn’t have prevented a 2009 collision between two satellites (SN: 3/14/09, p. 9), nor could it push aside an asteroid. But the system “could move light debris out of the way of a big object,” says Mason.
Mason’s team suggests that the laser facility be built at a near-polar, high-altitude site, such as the Plateau Observatory in Antarctica, because most debris passes over the polar regions many times a day.
Researchers have suggested using lasers to vaporize space debris for more than two decades, but those systems would require powerful devices that might be mistaken for weapons, notes Mason.
Using a laser to slightly alter the speed of small debris doesn’t take much energy, notes Kessler. And if the medium-power laser missed its target it would be unlikely to do much damage, he adds. Kessler notes, however, that scientists would need precise knowledge of the path of debris in order for the system to be effective.

Go east, ancient tool makers

Finds unearthed in southeastern India offer a cutting-edge revision of hominid migrations out of Africa more than 1 million years ago that spread pivotal tool-making methods.
Makers of a specific style of teardrop-shaped stone hand ax, flat-edged cleavers and other implements that originated in Africa around 1.6 million years ago (SN: 1/31/09, p. 11) reached South Asia not long afterward, between 1.5 and 1 million years ago, say archaeologist Shanti Pappu of the Sharma Center for Heritage Education in Tamil Nadu, India and her colleagues.
Rather than waiting until around 500,000 years ago to head into South Asia, as many researchers have thought, the African hand ax crowd wasted relatively little time before hightailing it to India, Pappu’s team concludes in the March 25 Science.
Archaeologists categorize stone hand axes and related implements as Acheulian tools. Most researchers regard Homo erectus, a species that originated around 2 million years ago, as the original brains behind Acheulian innovations.
“Acheulian tool makers were clearly present in South Asia more than 1 million years ago,” Pappu says. Several previous excavations in different parts of India have also yielded Acheulian tools, but these finds lack firm age estimates, she adds.
No fossils of members of the human evolutionary family, or hominids, turned up among the new tool finds.
H. erectus must have rapidly moved from East Africa to South Asia, proposes archaeologist Robin Dennell of the University of Sheffield in England. Pappu’s new finds raise the possibility that 800,000-year-old hand axes previously discovered in southeastern China (SN: 3/4/00, p. 148) indicate the presence of H. erectus groups that came from South Asia — or at least exposure of Chinese hominids to Acheulian techniques, Dennell suggests in a comment published in the same Science.
Prior finds point to a second migration of Acheulian-savvy hominids out of Africa, he says. Homo heidelbergensis — a species first identified in Europe that some researchers now suspect inhabited East Africa and possibly Asia — trekked northward to the Middle East and then westward into Europe by half a million years ago, Dennell hypothesizes.
Until now, scientific consensus held that Acheulian tool makers, presumably H. erectus, reached the Middle East at least twice, around 1.4 million and 800,000 years ago, but went no further. H. heidelbergensis then took Acheulian implements from Africa to both South Asia and Europe approximately 500,000 years ago in this scenario.
If that was the case, even older Chinese hand axes might represent a tool tradition that developed independently of outside influences.
Any relationship of those Chinese finds to tools unearthed by Pappu’s group remains unclear, comments Harvard anthropologist Philip Rightmire. But it’s not surprising that H. erectus inhabited South Asia sometime around 1.5 million years ago, he says. Other evidence suggests that H. erectus left Africa for several destinations throughout Asia beginning at least 1.8 million years ago, wielding simple chopping tools.
“For now, it’s enough to say that Homo erectus introduced Acheulian tools to India,” Rightmire says.
Pappu’s team excavated and dated stone tools at Attirampakkam, an Indian site discovered in 1863. Work since 1999 has produced more than 3,500 Acheulian artifacts, including 76 hand axes and cleavers.
Artifact-bearing soil contained signs of a reversal in Earth’s magnetic field that places the finds at between 1.07 and 1.77 million years old. Measurements of radioactive isotopes in six quartz tools unearthed at Attirampakkam indicated that these finds had been buried approximately 1.5 million years ago.
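The article doesn’t name the isotopes, but the standard clock for buried quartz is cosmogenic burial dating with aluminum-26 and beryllium-10: both build up in quartz exposed at the surface, and once a tool is buried the shorter-lived aluminum-26 decays away faster, so the ratio of the two falls at a known rate. A schematic version, using commonly cited half-lives rather than anything from the paper itself:

```python
import math

# Commonly cited half-lives, in millions of years. The real analysis also
# involves production rates, depth corrections and measured concentrations.
half_al26, half_be10 = 0.705, 1.387
tau_al = half_al26 / math.log(2)  # mean lives, Myr
tau_be = half_be10 / math.log(2)

# The ratio decays as R(t) = R0 * exp(-t / tau_eff), with an effective
# mean life set by the difference of the two decay rates (~2.1 Myr):
tau_eff = 1 / (1 / tau_al - 1 / tau_be)

def burial_age(ratio_now, ratio_at_burial):
    """Burial age in Myr from the measured and initial Al-26/Be-10 ratios."""
    return tau_eff * math.log(ratio_at_burial / ratio_now)

# Example: a ratio at half its initial value implies ~1.4 Myr of burial.
print(f"{burial_age(0.5, 1.0):.2f} Myr")
```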

In evolution, last really can be first

In an evolutionary equivalent of Revenge of the Nerds, bacteria that once seemed destined for loserdom can eventually use their hidden potential to overtake the competition.
Researchers led by Richard Lenski of Michigan State University in East Lansing have been watching E. coli bacteria evolve since 1988 (over more than 50,000 generations) and are doing experiments that replay that evolution to settle a debate about whether natural selection produces the same outcome every time or if surprises can happen along the way (SN: 1/31/09, p. 26). The experiments are also aimed at understanding the genetic mechanisms that drive evolution. 
In a study published in the March 18 Science, Lenski and his colleagues take on an emerging concept in evolutionary biology: whether some changes in DNA give organisms differing evolutionary potential, or evolvability.
The rise of the nerds seems to indicate that evolvability is a reality. A mutation that gave some bacteria an early advantage turned out to be their downfall, the researchers discovered. But the nerd bacteria carried a different mutation that interacted with later genetic changes, increasing the microbes’ evolutionary fitness.
The findings are some of the first experimental evidence for evolvability, says Massimo Pigliucci, an evolutionary biologist and philosopher of science at the City University of New York. Theorists have devised many models to show that changes in DNA ought to interact with each other to shape a species’ evolution and that some mutations allow for greater adaptability over time. But until now, there has been little hard data to support the claims. “There’s not many examples out there, but this one is a spectacular one,” Pigliucci says. “This is an elegant demonstration of evolvability and its molecular underpinnings.”
Eventual winners in this laboratory experiment didn’t start out that way. In fact, the population of nerdly bacteria that eventually evolved into top dogs was at one point in danger of extinction. Those bacteria carried a mutation in the topA gene, which makes a protein that winds up a bacterium’s circular chromosome like a twisted rubber band. Winding the DNA can change how easy it is to turn genes on and off.
Strangely enough, the eventual winners’ rivals — the eventual losers — also carried a mutation in the topA gene, but one that alters the next link down in the chain of amino acids that makes up the protein. In the first 500 generations of the experiment the eventual losers were riding high. They dominated laboratory flasks early on because their version of topA helped the bacteria make better use of limited nutrients than the competition.
But a few hundred generations later the situation was reversed. By generation 883, the eventual winners were growing 2.1 percent faster than the eventual losers, an indication of fitness. And by generation 1,500, the eventual losers had gone extinct in the flasks.
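A 2.1 percent per-generation edge sounds modest, but it compounds without mercy. A back-of-the-envelope illustration, assuming (simplistically) that the advantage held steady over the stretch in question:

```python
advantage = 1.021         # winners grow 2.1 percent faster per generation
generations = 1500 - 883  # the stretch before the losers vanished

ratio = advantage ** generations
print(f"after {generations} generations the winners outnumber "
      f"the losers by ~{ratio:,.0f} to 1")
# ~370,000 to 1. With the flasks' regular dilution bottlenecks,
# a lineage that rare is effectively doomed.
```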
To find out how the eventual losers went from hero to zero, the researchers replayed the evolution experiment over and over using frozen samples taken from generation 500. In general, the eventual winners managed to overtake the eventual losers, but not every time. That finding indicates that chance is at play in evolution, Pigliucci says.
The researchers looked more closely at the DNA of the eventual losers and winners and found that the eventual winners’ version of topA combined with changes in other genes, such as one called spoT, to increase fitness. But although the eventual losers’ topA gave them a huge advantage early in the game, it didn’t interact as favorably with later mutations, leading the eventual losers down an evolutionary cul-de-sac, if not exactly a dead end.
“Evolution is a mindless process. It has no foresight,” says Jeffrey Barrick, an evolutionary genomicist at the University of Texas at Austin and a coauthor of the study. “So it’s not immune from making these short-term gains, but losing in the end.”

Noise is what ails beaked whales

Navy sonar unquestionably disturbs beaked whales, concludes a new analysis investigating how underwater sound affects these elusive deep-divers. The results, published online March 14 in PLoS ONE, suggest that the current noise levels deemed risky for beaked whales need to be lowered.
During sonar exercises at the U.S. Navy’s underwater test range in the Bahamas, beaked whales stopped their chirpy echolocations and fled the area, experiments employing a huge array of underwater microphones revealed. Other experiments that exposed tagged whales to increasing levels of sound found that at exposures of around 140 decibels, the animals stopped hunting for food and slowly swam toward the surface, heading north toward the only exit of the deepwater basin known as the Tongue of the Ocean. Current regulations rate underwater exposures of about 160 decibels as disturbing.
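That 20-decibel gap is larger than it sounds, because decibels are logarithmic: every 10 dB corresponds to a tenfold change in acoustic intensity. A quick conversion (the ratio doesn’t depend on the 1-micropascal reference used for underwater sound):

```python
current_limit = 160  # dB, exposure currently rated as disturbing
observed = 140       # dB, exposure at which tagged whales reacted

gap = current_limit - observed
intensity_ratio = 10 ** (gap / 10)  # 100x in acoustic intensity
pressure_ratio = 10 ** (gap / 20)   # 10x in sound pressure

print(f"whales reacted at 1/{intensity_ratio:.0f} the intensity "
      f"(1/{pressure_ratio:.0f} the pressure) of the current criterion")
```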
“It seems beaked whales may be more sensitive than other species to sound,” says study leader Peter Tyack of the Woods Hole Oceanographic Institution in Massachusetts. “At the very least we may need a special rule for these whales,” he says. “If the criteria are changed they will be more protected.”
Until a few different species of beaked whales started showing up in unusual mass strandings, the animals were understudied and rarely seen. Because the strandings often coincided with nearby naval sonar exercises, scientists suspected sonar was somehow driving these whales to the beach. And strange bubbles in the bodies of some of the whales suggested that sonar might trigger behavior that gave the whales the equivalent of the bends.
But designing experiments that might untangle cause and effect has been extremely challenging, both in tracking the whales and trying not to cause them harm. The new results arose from a major, concerted effort by scientists from the Navy, the National Marine Fisheries Service and several academic institutions.
As it turns out, sonar does seem to spur a behavioral response — to flee. Because such get-out-of-Dodge behavior is unusual for these whales, it may mess with them physiologically as well, says marine conservation biologist Tara Cox, who was not involved in the new work. In the current study, sonar activity was stopped as soon as it elicited a response, so its full effects are not clear.
Researchers had also speculated that the whales might confuse sonar with killer whale calls and frantically flee this mortal enemy in some way that might be dangerous. But the study found that beaked whales respond to playbacks of killer whale calls at decibel levels much lower than those of the sonar.
Other marine mammals, such as harbor porpoises, are particularly sensitive to sound and react strongly while other species seem unbothered. The new results suggest beaked whales also may be especially sensitive.
“We treat porpoises differently, and now there’s evidence that beaked whales respond differently as well,” says Cox, of Savannah State University in Georgia. Current sound levels deemed safe for beaked whales are not low enough, she says.
The National Oceanic and Atmospheric Administration is reviewing how it assesses the impact of man-made sounds, says Jason Gedamke, who manages the agency’s ocean acoustics program. “This paper … is groundbreaking work,” he says. Additional experiments assessing how several species, including blue and fin whales, respond to sound are being conducted off Southern California and could be used to craft regulations that minimize harm to marine life.

Japan nuke accident seen from Seattle

Radioactive particles wafting eastward over the Pacific Ocean from Japan have been spotted in Seattle and used by a forensic team of physicists as a window into recent events inside the crippled Fukushima Daiichi nuclear plant 5,000 miles away. Working backward from these nuclear byproducts, the physicists have confirmed that contaminated steam is the source of this radiation, not spent fuel rods or material ejected directly from the reactor core.
“We haven’t seen any of the heavier stuff that would come right from the core, which people saw 30 years ago during the Chernobyl accident,” says Andreas Knecht, a nuclear and particle physicist at the University of Washington in Seattle who published the new data online March 24 at arXiv.org.
Starting March 16, Knecht and his colleagues saved and analyzed the air filters that clean 100 million liters of air every day in the ventilation system of the University of Washington’s physics and astronomy building. Using a detector originally designed to spot neutrinos coming from outer space, the researchers searched for gamma rays originating in the by-products of nuclear fission. On March 18, the first nuclear isotopes arrived from Japan.
The mixture of elements found in the Seattle filters drives home the differences between Chernobyl and Fukushima. The total meltdown of the Chernobyl reactor in 1986, which exposed the core, belched tons of radioactive material from fuel rods directly into the atmosphere. At the time, scientists in Paris detected 20 different isotopes. The partial meltdown of the Fukushima plant, in contrast, released only five isotopes measurable by the Seattle team’s equipment: iodine-131, iodine-132, tellurium-132, cesium-134 and cesium-137.
The complete absence of iodine-133, an ephemeral isotope that breaks down in days, confirmed that the radiation spotted by Knecht had been traveling for at least a week or so.
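The timing argument is plain exponential decay. A quick check using commonly cited half-lives (about 0.87 days for iodine-133 and 3.2 days for tellurium-132; the values come from standard references, not from the paper):

```python
def fraction_left(half_life_days, elapsed_days):
    """Fraction of a radioactive isotope surviving after the elapsed time."""
    return 0.5 ** (elapsed_days / half_life_days)

travel = 7  # about a week in transit across the Pacific
print(f"I-133  (t1/2 ~0.87 d): {fraction_left(0.87, travel):.4f}")
print(f"Te-132 (t1/2 ~3.2 d):  {fraction_left(3.2, travel):.2f}")
# After a week, less than half a percent of any iodine-133 survives,
# consistent with its complete absence, while roughly a fifth of the
# tellurium-132 remains, consistent with its detection.
```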
The presence of tellurium-132, a by-product of nuclear fission that degrades over weeks, suggests that the wind-blown radiation came from a material that had recently seen fission inside a nuclear reactor. This rules out older, spent fuel rods kept on the premises of the power plant and points to the fuel rods that were generating power until the earthquake struck.
The lack of heavier elements ruled out the possibility that the material in these fuel rods was tossed directly into the atmosphere after the earthquake. Instead, radioactive cesium and iodine — which dissolve easily in water as the compound cesium iodide — likely contaminated steam released by Japanese utility company TEPCO to control pressure inside the damaged reactors.
“This is what we expected to see,” says Knecht. “But obviously it doesn’t hurt to check.”
These particles tell the same story repeated by groups of scientists at the Environmental Protection Agency, the University of California, Berkeley, and other institutions monitoring the West Coast: Only minute amounts of radiation are reaching the United States. In Seattle, levels of radioactive iodine, a cause for concern in Japan itself, were a mere hundredth of the safety level set by the EPA.
“We’d like to confirm that what’s coming over here is at a level which is tolerable,” says Ed Morse, a nuclear engineer at the University of California, Berkeley. “So far that’s consistent with what we’re seeing.”
The scientists have analyzed only the first five days of data so far but will continue to monitor the air above Seattle. They’ve seen some evidence of day-to-day fluctuation in radiation levels and hope to explain these changes by looking to changes in local weather patterns or events in Japan.

Silicene: It could be the new graphene

The hottest celebrity in the world of nanomaterials may soon face a new rival. Inspired by the Nobel Prize-winning creation of the carbon material known as graphene, physicists have now created atom-thin sheets of carbon’s big brother, silicon.
Silicon shares many properties with carbon, which sits just above silicon on the periodic table. In 2007 Lok Lew Yan Voon and then-graduate student Gian Guzmán-Verri of Wright State University in Dayton, Ohio, proposed that silicon could exist in flat sheets similar to graphene, even though silicon doesn’t naturally form the kind of atomic bonds needed to accomplish this.
They coined the new term for this material: silicene.
“Silicon has the advantage of being more integratable in today’s electronics,” said Antoine Fleurence, a physicist at the Japan Advanced Institute of Science and Technology in Ishikawa. The semiconductor industry has spent decades building the infrastructure needed to manipulate silicon to create the chips that run modern electronics.
Speaking March 24 at a meeting in Dallas of the American Physical Society, Fleurence described a new recipe for making silicene. He and his Japanese colleagues grew a thin layer of silicon on top of the ceramic material zirconium diboride. X-rays shined on this thin layer of silicon revealed a honeycomb of hexagons similar to the structure of graphene.
This structure looks familiar to Guy Le Lay, a physicist at the University of Provence in Marseille, France. Last year, he created the first-ever silicene ribbons. Le Lay described these 1.6-nanometer wide stripes of honeycombed atoms, grown on top of silver, in the June 28 Applied Physics Letters.
“These ribbons can be more than a hundred nanometers long, perhaps micrometers,” Le Lay says.
New data from Le Lay’s group, also presented at the physical society meeting, suggests that silicene and graphene share not only a similar structure, but possibly similar electronic properties. Spectroscopic techniques provided evidence that silicene contains a Dirac cone — the entity that intrigues scientists because it allows electrons to move very quickly through graphene, which makes graphene a promising material for flexible electronics.
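What a Dirac cone means can be seen in the textbook tight-binding model of a honeycomb lattice, the same toy model used for graphene (a generic illustration, not the silicene measurements): the electron energy vanishes at special corner points of the Brillouin zone and grows linearly, rather than quadratically, with distance from those points, which is why the electrons behave as if they were massless.

```python
import numpy as np

# Nearest-neighbor tight-binding band of a honeycomb lattice:
# E(k) = +/- t * |1 + exp(i k.a1) + exp(i k.a2)|, with lattice vectors a1, a2.
a = 1.0  # nearest-neighbor spacing, arbitrary units
a1 = np.array([1.5 * a,  np.sqrt(3) / 2 * a])
a2 = np.array([1.5 * a, -np.sqrt(3) / 2 * a])
K = np.array([2 * np.pi / (3 * a), 2 * np.pi / (3 * np.sqrt(3) * a)])

def energy(k, t=1.0):
    f = 1 + np.exp(1j * (k @ a1)) + np.exp(1j * (k @ a2))
    return t * abs(f)

# The energy is exactly zero at the Dirac point K and rises linearly nearby:
# doubling the offset from K doubles the energy.
for dk in (0.0, 0.01, 0.02, 0.04):
    print(f"|k - K| = {dk:.2f}  ->  E = {energy(K + np.array([dk, 0.0])):.4f}")
```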
To prove silicene’s worth, though, Le Lay will need to grow it not on silver (which, as an electrical conductor, can interfere with the movement of electrons in the single-layer silicon) but on an insulating material. On an insulating platform, physicists will be able to test the material’s electronic properties directly and determine whether the same quantum effects that make graphene so remarkable are at work.
For silicene to compete with graphene in the long run, however, the process of creating it must be comparably simple, says Sankar Das Sarma, a physicist who studies graphene at the University of Maryland in College Park. “Graphene really took off in 2004 because it was so easy to make,” he says.
Competing with graphene in this regard won’t be easy: The Russian-born scientists who first made graphene in 2004, earning the 2010 Nobel Prize in Physics for their efforts, did it using only a piece of Scotch tape and a chunk of graphite similar to pencil lead.

Planets take shape in embryonic gas clouds

A radical new theory that planets are born within a massive veil of gas may help explain how recently discovered extrasolar planets developed their stunning diversity of sizes and locations.
In the theory, planets are born under wraps, hidden at the centers of giant gas clouds far from their parent stars. A star’s gravity then reels in the planetary cloud, stripping away some or all of the gas to reveal the planet inside.
Depending on how much of the gas is removed in the process, the unveiled planet would resemble a gas giant like Jupiter, a solid core with a layer of gas like Neptune or a solid body like Earth. Sergei Nayakshin of the University of Leicester in England describes the process in an upcoming Monthly Notices of the Royal Astronomical Society as well as in several papers posted online at arXiv.org.
Such a beginning might explain the abundance of small-to-middling extrasolar planets — including many Neptune-sized planets — recently spotted by NASA’s Kepler spacecraft orbiting within roasting distance of their stars (SN: 2/26/11, p. 18). Standard planet-formation models are facing unprecedented challenges because they can’t easily account for the many types of extrasolar planets described since 1995, notes theorist Aaron Boley of the University of Florida in Gainesville. “At the end of the day, we need to explain this diversity of planetary systems,” he says.
Nayakshin’s theory, along with a similar one by Boley and his collaborators, borrows ideas from two more traditional models. In a scenario known as core accretion, bits of solid particles coalesce within the disk of gas and dust surrounding a young star and form a solid core that resembles Mercury or Earth. The core may then snare enough gas to form a Jupiter. In the other model, known as gravitational instability, gas within the planet-forming disk suddenly fragments into a giant blob, creating a Jupiter in one fell swoop.
In contrast, Nayakshin begins with the giant blob of gas generated by gravitational instability. Then, he suggests, dust settles to the core of the blob, ultimately forming a solid body that snares some of the gas around it, as in the core accretion model. The gassy envelope around the core is initially fluffy and easily removed. As the clump migrates inward toward its star and reaches a distance similar to that separating Mars from the sun, some of the gas may be stripped away by the star’s gravitational tidal forces. Removing the outer layers of gas produces rocky planets similar to those in the inner solar system.
In other cases, when migration is slower, the fluffy gas envelope has enough time to contract and become denser, resisting stripping. In that case, only if the blob moves very close to the star, within half the average separation of Mercury from the sun, can the star’s tidal forces remove part or all of the dense gas envelope. Such a downsized planet sits within roasting distance of its star and could be a hot version of Jupiter, Neptune or Earth, as recently observed by Kepler and other telescopes searching for extrasolar planets.
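The stripping criterion can be caricatured with the Hill radius, the distance out to which a clump’s own gravity beats the star’s tide: r_H = a (m / 3M)^(1/3). As the clump migrates inward, that boundary shrinks, and any gas puffed out beyond it is peeled away. A schematic check, not Nayakshin’s full calculation:

```python
AU = 1.496e11     # meters
M_sun = 1.989e30  # kg
M_jup = 1.898e27  # kg, standing in for a Jupiter-mass gas clump

def hill_radius(a_m, m=M_jup, M=M_sun):
    """Distance inside which the clump's gravity dominates the star's tide."""
    return a_m * (m / (3 * M)) ** (1 / 3)

for a_au in (5.0, 1.5, 0.2):  # Jupiter-like, Mars-like and roasting orbits
    print(f"a = {a_au:4.1f} AU -> Hill radius = {hill_radius(a_au * AU):.2e} m")
# The tidal boundary shrinks 25-fold between 5 AU and 0.2 AU, so an
# envelope that fit comfortably far from the star overflows it close in.
```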
Kepler has shown that Neptune-sized exoplanets are very common at distances closer to their parent stars than Earth is (and even at half the Earth-sun distance), rather than being confined to the chilly outer parts of a planetary system as previously thought, says Boley. “Determining whether tidal downsizing can explain a sizable fraction of these planets … is a direction worth exploring,” he says.
Jack Lissauer of the NASA Ames Research Center in Mountain View, Calif., however, says that he considers “the hypothesis to be quite weak” and “the idea of forming planetary cores in this manner is far from demonstrated.” In addition, he says, the theory would have trouble explaining some types of extrasolar planets that have been observed, such as a gas-rich one with a relatively small rocky core that closely orbits its parent star.
Alan Boss, who developed the gravitational instability model and is based at the Carnegie Institution for Science in Washington, D.C., takes a longer view. “I am all for having a thousand flowers bloom in the world of exoplanet formation theory — at this time it is hard to pick the eventual winners and losers.”

Global gale warning

As salty sea captains can attest, winds and waves can appear out of nowhere on the open sea.
Now, an analysis of satellite readings suggests that winds may be picking up speed over oceans worldwide. The seven seas aren’t getting gustier on the whole, but severe winds are getting stronger, researchers from Swinburne University of Technology in Melbourne, Australia, report online March 24 in Science. That’s bad news for sea captains, perhaps, but good news for surfers, since waves may also be getting higher, the team says.
The windy trend isn't necessarily surprising, says Peter Challenor, who tracks ocean conditions in the North Atlantic. Global climate change could theoretically make oceans windier or wavier, but weather often goes through natural cycles too, says Challenor, a researcher at the National Oceanography Centre, Southampton in England. Wind speeds in the North Atlantic, for instance, vary based on multidecade shifts in the atmospheric pressures over Spain and Iceland. What’s surprising here is that the Australian researchers appear to have detected changes on a worldwide scale.
“We always assumed that each region was separate,” Challenor says. “They’ve shown that everywhere seems to be going up, which suggests that there may be a global pattern.”
The group analyzed 23 years of wave-height recordings and 17 years of wind data from seven different satellites. The fastest 1 percent of winds spiked the most over the period studied. These gales sped up in most locales by about 0.75 percent or more per year.
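Compounded over the 17-year wind record, a 0.75 percent annual rise is no small change:

```python
annual = 0.0075  # ~0.75 percent per year for the fastest 1 percent of winds
years = 17       # length of the satellite wind record

growth = (1 + annual) ** years
print(f"{(growth - 1) * 100:.0f}% faster over the record")  # ~14%
```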
Waves didn’t surge as much, although at high latitudes the biggest swells got about half a percent bigger per year. That mismatch isn’t too shocking, says study coauthor Ian Young, a physical oceanographer and engineer now at the Australian National University in Canberra. A storm in one corner of the world can often make waves in another, he says. “A lot of the ocean wave conditions are swells which generate somewhere completely different.”
Wave heights are relatively easy to record from space with radar, but wind speeds are a bit trickier, Young says. Winds don’t just make waves, they also change water texture — blow softly into your morning cup of coffee and you’ll see tiny ripples appear. Those ripples scatter radar pulses coming from satellites, making for a fuzzier picture of the ocean. With a bit of math, fuzzy can be translated into windy.
“The sheer volume of trends is impressive,” says Philip Callahan, an engineer at NASA’s Jet Propulsion Laboratory in Pasadena, Calif. He’d like to see data from other satellites with different recording tools to confirm the observations. And it’s too early to say whether the faster wind speeds are a true trend or just the uphill portion of a natural cycle, like an El Niño event on a global scale.
Regardless, if winds keep surging and waves keep climbing, sea captains the world over could be in for rough sailing, Challenor says. Looking at the trends on a computer is one thing, seeing them from a boat another: “If you’re a mariner, they are very practical things.”

Major earthquakes not linked

Big earthquakes like the Sendai quake that devastated Japan in March don’t cause similar disasters on the other side of the globe, a new study suggests.
Like ranks of falling dominoes, tremors on the scale of the Sendai quake can trigger other earthquakes, say geophysicists at the U.S. Geological Survey in Menlo Park, Calif. But, based on analyses of about 30 years of seismic data, those shocks are all very small or sit close to the original fault break, the group reports online March 27 in Nature Geoscience.
“If California is ready to go, it’s because California is ready to go,” says Jian Lin, a geophysicist at the Woods Hole Oceanographic Institution in Massachusetts. “Not because an earthquake in California would be triggered by Japan.”
There was previously some room for doubt, says study coauthor Tom Parsons. Fault lines aren’t islands. Like tossing a rock into a pond, breaking chunks of earth send out waves that can circle the globe several times over. In a 2008 study, Parsons and his colleagues discovered that as those waves pass, far-flung corners of the planet start to buzz with lots of tiny temblors. That worldwide rumbling raised some eyebrows, he says: “If the whole world becomes an aftershock zone, should we worry about big ones?”
In the current study, the team narrowed its focus to just that: the big ones. Parsons and his colleagues traced the seismic wave aftermath of every magnitude 7 or larger earthquake — over 200 in all — during the past 30 years. Following the initial earthquake, faults within 1,000 kilometers frequently ruptured domino-style. On the far scale, however — think Japan to California — the group couldn’t find a second domino larger than magnitude 5.
“It might be a surprising result to the public,” Lin says. “But it’s not a surprising result to an earthquake scientist like myself.”
How shaking could trigger tiny faults but not big ones remains unclear, Lin says. But he likes to think of faults as picket fences: Your neighbor can knock down your fence by pushing on it directly, just as earthquakes often shift the forces on nearby faults. Your neighbor could also knock down your rickety old fence with vibrations, say by turning up the stereo way too high. No matter how loud that music gets, however, it won’t knock down a sturdy fence in the next town over, Lin says.
The study does, however, include a number of uncertainties, says Emily Brodsky, a geophysicist at the University of California, Santa Cruz. It can be difficult to determine whether a large quake was triggered independently or by a previous smaller quake elsewhere, she says, and the aftershock record is far from complete. The scarcity of seismograms in, say, the open ocean leaves holes in the data. “It’s actually a little embarrassing how bad our ability to detect earthquakes under certain circumstances are.”
For Parsons, resolving the physics behind these questions is goal number one. Knowing how earthquakes work makes a difference when drawing quake-hazard maps for places like Japan and California. “Ultimately, we’d like to understand the physics behind it,” he says. “That makes us a lot more comfortable when we’re forecasting things.”

DNA flaws can stack up as cancer grows

VANCOUVER — As a man’s cells grew cancerous, a wide range of mutations gradually emerged too, a new gene sequencing study finds. The results provide a deep understanding of the genetic changes that allowed an aggressive form of leukemia to set in and take hold in one patient, Elaine Mardis of Washington University in St. Louis said in a March 28 presentation at the annual conference on Research in Computational Molecular Biology.
“Cancers’ origins lie in the genome,” Mardis said. “These genetic approaches are really addressing the underlying questions of cancer biology.”
In the new study, Mardis and her colleagues collected cells from a 65-year-old man with a blood disease called myelodysplastic syndrome. About one in four people with this disorder go on to develop a fast-moving and dangerous type of cancer called acute myeloid leukemia. Two years after those samples were taken the man was diagnosed with full-blown acute myeloid leukemia, and the researchers harvested a second batch of cells. By reading the letter-by-letter DNA sequences in the two samples of cells, the team could pinpoint genetic changes as the cells turned cancerous.
Over time, the cells accumulated new genetic mutations, the team found. The early mutations neither took over the entire cancer-cell population nor disappeared; instead, they turned up in a smaller and smaller share of the cells.
Watching the same cancer in the same person over time lets the team see just how the cancer evolves, says computational biologist Cenk Sahinalp of Simon Fraser University in Canada. “Maybe they don’t die out, but in time, their proportion gets lower and lower.”
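Conceptually, the comparison behind that statement is simple: for each mutation, measure the fraction of sequenced DNA carrying it at each time point and watch how that fraction shifts. A toy illustration with invented numbers (the study’s real pipeline also has to separate true mutations from sequencing errors):

```python
# Hypothetical variant allele fractions at the two sampling times.
mutations = {
    "early_mut_A": (0.45, 0.12),  # present early, dwindling later
    "early_mut_B": (0.40, 0.09),
    "late_mut_C":  (0.00, 0.38),  # arises between the two samples
}

for name, (before, after) in mutations.items():
    trend = "shrinking" if after < before else "expanding"
    print(f"{name}: {before:.0%} -> {after:.0%} ({trend})")
```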
The finding helps answer a long-standing question: In leukemia, does a group of cells with a particular mutation rapidly spread, or do additional mutations pop up along the way and result in a cancer that’s more genetically mixed? In colon cancer, for instance, most cells in the final tumors carry the same mutations found early on, suggesting that those early mutations spread quickly. In this patient’s leukemia, by contrast, new mutations kept arising along the way.
This steady accumulation of multiple mutations — and the genetically complex mishmash of cells that results — may confound treatment options, which are typically aimed at one select problem, Mardis says. But it’s important for doctors to be aware of the likely genetic complexities of leukemia when they’re deciding on a treatment. “Physicians need to know what they’re up against,” she says. 
The researchers now have cell samples from 25 patients who developed myelodysplastic syndrome and then AML, and preliminary tests show similar accumulation of new mutations. Comparing how these individual cancers change genetically over time may allow the researchers to identify common mutations.
Knowing what genes change, and when, may lead to more specific and effective treatments for leukemia, Mardis says. “At the end of the day, we want to inform how patients are treated — and better treated — for this disease.”