Showing posts with label biotechnology. Show all posts

Sunday, May 13, 2012

Bird Flu, Bioterror, and Bioerror


This week the prestigious science journal Nature published the methods and results of a groundbreaking new experiment in biotechnology, reigniting a firestorm that has been raging on and off for nearly a year. The reason for the controversy is the ghastly topic of the research paper: how to genetically engineer the avian influenza virus (H5N1) to make it more communicable.

Although H5N1 (i.e. “bird flu”) has existed in bird populations for decades, it entered the public consciousness in 2004, when human cases began surfacing in China and Southeast Asia. The cases were quickly linked to contact with poultry – mostly slaughterhouse workers or chicken farmers who worked directly with chickens in unsanitary conditions. H5N1 was far more virulent than the seasonal strains of influenza that have been circulating since 1918; over 60% of people who have contracted H5N1 in the past eight years have died from it. Fortunately, there has been no pandemic. Only about 600 people worldwide are known to have had avian influenza. Although it appears that humans can acquire the disease directly from birds, there have been no known cases of sustained human-to-human transmission – a prerequisite for a global pandemic. Although influenza viruses mutate very quickly, the lack of human-to-human transmission caused some complacency. Some epidemiologists even went so far as to state that human-to-human transmission of H5N1 might be impossible.

The new research blows that theory out of the water. Scientists at the University of Wisconsin and Erasmus Medical Center succeeded in taking the H5N1 virus and combining it with the H1N1 (“swine flu”) virus. Swine flu is known to be easily communicable between humans but relatively mild; bird flu is known to be extremely deadly but difficult to transmit. By combining the two into a hybrid and making other modifications to the virus's genes, scientists developed a “super-strain” of flu. They tested the virus on ferrets, whose susceptibility to influenza closely resembles that of humans. Not only did many of the ferrets die, but the disease was easily transmitted to other ferrets that were not directly exposed to the virus themselves.

The research has horrified many scientists. Governments remain gravely concerned about its publication in Nature. In the United States, the National Science Advisory Board for Biosecurity requested that Nature not publish the findings in the interest of national security. Although academia typically does not view censorship kindly, many scientists found themselves agreeing with the government. Biologist and Nobel laureate Sir Richard Roberts said, “Someone is trying to make the most dangerous virus we can think of. I don't understand how one can justify that, unless there is no other way of getting the data that you're interested in.” The risks are huge: Nature published the methodology that the scientists used to create their super-strain of flu, potentially providing a blueprint for terrorists to replicate their efforts. Additionally, there is the concern that if research like this isn't shunned, it will continue apace and may one day escape the laboratory through simple error.

Other scientists believe that publishing the research is necessary in order to prevent future outbreaks. They argue that if we can learn more about how influenza mutates and infects new people, we will be better prepared to deal with a future pandemic. They acknowledge the dangers of the research, but argue that there is no avoiding the fact that it will soon be possible to create bioengineered diseases, and it is better to be prepared for them when they do occur. Additionally, there is the possibility that H5N1 may eventually evolve into a more communicable form on its own, for which epidemiologists should prepare. Last month, the US government finally relented. The National Science Advisory Board for Biosecurity reversed itself, voting 12-6 to allow the publication of the research to proceed.

As it stands, I find myself on the side of those urging extreme caution with this type of research. Bioterrorism will be the greatest security threat of the early 21st century; unlike nuclear weapons, biological weapons will soon be available to many people. Futurist Michio Kaku warns that in the not-too-distant future, creating new viruses may be as simple as typing base letters into a piece of software and having a computer assemble the DNA strand. When that happens, we may have no choice but to fund research to prevent diseases that do not yet exist in nature. But until then, it seems that the risks greatly outweigh the rewards.

Saturday, April 21, 2012

Book review - "Abundance" by Peter Diamandis

Peter Diamandis, the founder of the X Prize Foundation and the co-founder of Singularity University, is one of the foremost futurists today. He is well known for both popularizing emerging technologies and driving their development. His new book, Abundance, chronicles the ways in which technology is rapidly transforming life for people all around the world, and will soon usher in a “post-scarcity economy.”

Diamandis starts out by identifying humanity's biggest needs today – water, food, energy, education, information, communication, transportation, health care, and freedom/democracy – before going on to explain how technology can solve or is already solving these problems. Many of these same topics have already been covered in this blog.

Technologies like Dean Kamen's Slingshot will soon transform the way water is distributed and solve humanity's single greatest problem. Bioengineered crops, in vitro meat production, and vertical farming will soon enable us to grow food in places where it was not previously possible, under conditions that are much safer, more environmentally friendly, and less volatile. New online education technologies will soon give far more people access to high-quality K-12 education at a greatly reduced price, and Moore's Law is driving down the cost of computing to the point where nearly anyone in the world can afford it (case in point: the proliferation of cell phones throughout even the poorest parts of Africa and India). Solar energy will become cost-competitive with fossil fuels by the 2020s, offering a virtually unlimited source of environmentally friendly energy.

But the part of the book that I found the most intriguing wasn't simply the range of technological solutions to humanity's greatest challenges; although Diamandis writes about these emerging technologies with an insider's knowledge, they have all been discussed elsewhere for years. The most intriguing part was Diamandis' idea of billions of new minds “coming online.” Sadly, people grinding out an existence in poverty are usually not able to contribute their ideas and talents to the world, and we are all worse off for it. But as we solve the problems of poverty and move toward a post-scarcity economy, billions of people will be freed from the task of eking out a subsistence living and will be able to contribute far more to humanity's wellbeing.

In my opinion, this reserve of squandered brainpower is the biggest overlooked resource of exponential growth that humanity has. Even the futurist most known for the concept of exponential growth, Ray Kurzweil, rarely talks about this untapped human potential. I find Diamandis' idea of exponential growth due to human intelligence far more plausible than Kurzweil's idea of exponential growth due to artificial intelligence...at least for the next few decades.

For most of human history, progress crawled along at an incredibly slow pace, because nearly everyone was dirt poor, focused on staying alive rather than making the world a better place. Progress accelerated dramatically in the 19th and 20th centuries, as more and more people gained access to the basic necessities of life and were able to build careers in areas in which they were talented and interested. But even today, a small fraction of humanity drives the vast majority of the technological, social, political, and economic change around the world. This small fraction is disproportionately comprised of those who have already benefited from abundance. Far too many people still do not have access to the basics of life, which are a prerequisite to leaving a lasting mark on humanity.

As more and more people gain access to these things and we enter a post-scarcity economy, the world will begin to “wake up.” What happens when 8 billion people, rather than 1-2 billion, have everything they need to pursue their dreams? What will they do? How much more rapidly will our world progress when we have so many more people working for the betterment of the world? What kind of ideas, dreams, and talents already exist in the world today, lying dormant and waiting to be unlocked by the technological drivers of abundance?

Book rating: 5/5 stars

Monday, October 24, 2011

Welcome to Earth, Population: 7 Billion

This month, the global population will surpass 7 billion people for the first time in history. Predictably, this milestone has sparked a discussion of overpopulation and sustainability. Pessimists are quick to point to the historical trend. Two thousand years ago, the global population was a mere 200 million people. Even by 1900, there were only 1.6 billion of us. But during the 20th century, population growth exploded as modern medicine dramatically increased life expectancies all across the world. Even people in the shortest-lived countries today (Afghanistan and Zimbabwe) live longer than those in the longest-lived countries of 1800 (the United Kingdom and the Netherlands).

The unfortunate side effect of that amazing record of progress has been an ever-increasing population, consuming more and more resources from our planet. Many have questioned how long this can continue. As early as 1798, Thomas Malthus argued that growth in population was outstripping growth in the food supply, and this would inevitably result in widespread famine. In 1968, Paul Ehrlich wrote The Population Bomb, which predicted that overpopulation would soon cause mass starvation, destroy the environment, and lead to widespread war.

So far these predictions have failed to materialize, and it is unlikely that they ever will. Concerns of global overpopulation are overblown. There is plenty of room for everyone; we could fit the entire world’s population into the state of Texas, and the population density would be no greater than that of New York City. On the question of food security, Malthusian predictions have always failed to take technological development into account. When Malthus made his original prediction, he completely overlooked the Industrial Revolution that was beginning to unfold around him, which made the production of food much more efficient. Ehrlich, in turn, overlooked both the Green Revolution, which tripled food production, and the invention of new birth control technologies, which reduced the rate of population growth.
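The Texas comparison is easy to sanity-check with rough figures. The areas and populations below are approximations I am supplying for illustration, not numbers from the post:

```python
# Rough figures, supplied for illustration (not from the post)
world_pop = 7_000_000_000
texas_km2 = 695_662                 # approximate total area of Texas
nyc_pop, nyc_km2 = 8_175_000, 784   # New York City, circa 2010

texas_density = world_pop / texas_km2   # roughly 10,000 people per km^2
nyc_density = nyc_pop / nyc_km2         # slightly higher than the Texas figure
print(round(texas_density), round(nyc_density))
```

With these rough inputs, the whole world squeezed into Texas would indeed be a touch less dense than present-day New York City.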

In fact, the population growth rate peaked in 1968, just as The Population Bomb was hitting bookshelves. Although the population has not yet stabilized, the rate of growth continues to decline. Some parts of the world are already losing people; fertility rates in Europe, Russia, and East Asia are well below the replacement rate. The United Nations projects that the global population will plateau by the end of this century at between 10 and 15 billion people.

Even the upper end of the UN’s projection is not unsustainable; if we have 15 billion people by the end of this century, we will easily be able to feed them all, with plenty of food to spare. The population doomsayers will continue to be wrong for the same reason that Malthus and Ehrlich were wrong: they ignore technological developments that allow the production of much more food. The coming Genomic Revolution will play a huge role in this process. Not only will it soon be possible to genetically engineer crops that produce a much higher yield in a given amount of space, it will also be possible to breed crops that can grow in much harsher climates than is currently possible, allowing for the widespread production of crops in parts of the world that were previously considered off-limits. The drought-ravaged Sahel Zone of Africa could one day be a breadbasket. Cold climates like Canada and Russia may one day be able to grow coffee.

Additionally, the development of in vitro meat production will soon mean that it is no longer necessary to raise large herds of livestock when meat can be more efficiently grown in a laboratory. This will free up a huge amount of the earth’s farmland and fresh water, which would otherwise be required to feed the animals. These technological developments – as well as others which may not even be anticipated yet – will undoubtedly allow us to continue feeding our growing global society.

Although global overpopulation is not a problem, the problems of specific regions of the world should not be minimized. South Asia is the hardest hit by overpopulation. India has 1.2 billion people crammed into an area one-third the size of the United States. Bangladesh has 150 million (half the population of the US) packed into an area smaller than Florida. In both of these cases, the vast majority of people live in crushing poverty, with some of the highest rates of malnourishment anywhere in the world. But even here, there is some good news. Both India and Bangladesh have taken steps to get their fertility rates under control, and both are now only slightly above the replacement rate of 2.1 children per woman. The era of rapid population growth is drawing to a close for South Asia, although it will take several more decades before the population levels off completely.
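The regional disparity is stark when put in numbers. Using rough 2011 figures (my own approximations, for illustration only), the densities work out as follows:

```python
# Rough 2011 figures, supplied for illustration (not from the post)
countries = [
    ("India", 1_210_000_000, 3_287_000),        # population, area in km^2
    ("Bangladesh", 150_000_000, 147_600),
    ("United States", 312_000_000, 9_834_000),
]

for name, pop, area in countries:
    # Bangladesh comes out roughly 30x as dense as the United States
    print(f"{name}: ~{pop / area:.0f} people per km^2")
```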

Many Sub-Saharan African countries have a demographic problem as well, although the nature of the problem is different from South Asia’s. With a few exceptions, Africa is not particularly densely populated. Some UN officials and development economists have even gone so far as to argue that many parts of Sub-Saharan Africa are underpopulated. I think that’s a bit of an exaggeration for a region of the world that cannot currently feed itself, but there is no denying that Africa is nowhere near as densely populated as South Asia. India alone has more people than the entire African continent. Africa’s demographic problem is the rapid rate of growth of its population. Unlike India and Bangladesh, which have finally gotten their growth rates under control, many African nations still have shockingly high fertility rates of 4 or more children per woman. In Niger, the fastest-growing country in the world, women have 7.6 children on average. In other words, the problem facing many Sub-Saharan African nations is that their population is growing faster than their ability to provide for their people.
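To see why high growth rates matter so much, a short compound-growth sketch gives the doubling time at a constant annual rate. The roughly 3.5% annual growth rate used for Niger below is my own illustrative assumption, not a figure from the post:

```python
import math

def doubling_time(r):
    """Years for a population to double at constant annual growth rate r."""
    return math.log(2) / math.log(1 + r)

# ~3.5%/yr is an assumed illustrative figure for Niger
print(round(doubling_time(0.035), 1))  # roughly 20 years
print(round(doubling_time(0.010), 1))  # roughly 70 years at a modest 1%/yr
```

At rates like these, a country's population doubles every generation, which is exactly the "growing faster than the ability to provide" problem described above.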

Can Africa’s demographic problem be solved? Of course. High fertility rates are closely linked to poverty. Families tend to have more children in societies where the opportunity cost of having kids is low, where children can assist with subsistence farming, and where infant mortality rates are high enough that having extra children provides a hedge against the tragic risk that some of them will not survive to adulthood. As Sub-Saharan African countries continue to develop their economies, these underlying problems will fade away just as they did in South Asia and Latin America, and fertility rates will drop. This issue has been widely studied by development economists, who have concluded that the best ways to reduce fertility rates are to ensure access to cheap birth control, improve education (especially for girls), and improve health care and nutrition for infants. Many African nations are making tremendous progress toward these goals, and are already seeing the dividends in reduced fertility rates.

So when our global population officially hits the 7 billion mark this week, remember to take the doomsayers with a grain of salt. Our planet is not overpopulated. Overpopulation is, at present, a localized problem for only a couple of regions of the globe, and soon it will not be a problem at all. Our economic and technological development will see to that. There is plenty of room on our pale blue dot for more people.

Saturday, December 11, 2010

The Future of Health Care: Regenerative Medicine and Stem Cells

When an octopus is injured and loses one of its limbs, it will grow back after several months. When a starfish loses an appendage, not only will the starfish grow a new arm, but the severed arm will grow a new starfish! Even among vertebrates, regeneration is not unknown – salamanders can regrow lost body parts. Yet when a human loses an appendage, it is forever lost. What do these animals do that we don’t? Many scientists believe that the capacity for regeneration is lying dormant within our biology, and we may soon be able to activate it.

Most complex organisms, including humans, contain a huge number of different types of cells, each performing a specific function within the body. For the most part, these cells cannot do anything else; a brain cell can never become a white blood cell, or vice versa. But in addition to these specialized cells, we have stem cells – “wild card” cells that have no specific function of their own, but are able to become whatever type of cell the body needs. Stem cells show great promise in treating a wide range of diseases, rejuvenating our organs and tissues, and replacing entire body parts.

For several decades, the organ transplant process has been horrendously inefficient. The standard procedure has been for patients to beg their friends and family to donate an organ…if they can even find a compatible donor. If not, they add their name to a hopelessly long waiting list, where they may die before finding a suitable replacement. If they are lucky enough to receive a transplant, patients will spend the rest of their lives on a strict regimen of drugs to prevent their body from “rejecting” the organ (i.e. viewing it as a hostile invader to be eliminated).

Regenerative medicine will soon transform this process. People will be able to grow their own replacement organs in a lab, and since the new organ is their own, there will be no worries about their body rejecting it. Substantial progress has already been made in many areas. In 2006, doctors first created a human bladder from scratch. They extracted a few bladder cells from a patient and seeded them onto a three-dimensional mold shaped like a bladder. To their delight, the cells quickly grew into a new, fully functional bladder, which was then transplanted into the patient. In 2010, doctors first performed a similar procedure using stem cells instead of bladder cells. Regenerative medicine is quickly becoming the standard for treating serious bladder diseases. Clinical trials are underway for similar procedures on other organs, including the heart, although these are at least a decade away from routine use in hospitals. And in June 2010, scientists successfully grew a liver in the laboratory for the first time.

But replacing entire organs is not the only promising use for regenerative medicine. There is no fundamental reason why tissues and organs that have been badly damaged – by disease, injury, or natural wear and tear – cannot gradually be rejuvenated by replacing the damaged cells with healthy stem cells, allowing our body parts to remain in excellent condition throughout our lives. This has ramifications for slowing the human aging process, and possibly even reversing it. When people are able to replace their organs with newly grown versions, “old age” need not be regarded as a time of enfeeblement and illness.

Our stem cells are essentially blank slates that can become whatever type of cell we want them to become. Their potential applications in regenerative medicine are practically limitless, as nearly every major non-infectious, non-genetic disease results in some form of cellular damage. Regenerative medicine will be a relatively slow and non-disruptive transformation – we will gradually see more and more of these therapies over the next few decades – and it is not a cure-all by any means. However, it is one of the most promising new treatments (along with genomics) that will eventually radically extend the human lifespan.

Monday, November 8, 2010

Political Issues on the Horizon, Part 1

Americans went to the polls last Tuesday and, for the third time in as many election cycles, delivered a sharp rebuke to the incumbent party. No doubt concerned that the economic recovery seems to be stagnating and that unemployment remains high, the voters gave the House of Representatives back to the Republican Party. In the wake of the midterm elections in the United States, this is a good time to consider political issues on the horizon.

I generally shy away from making specific predictions about politics or the economy. Voters are fickle and economies are unpredictable, especially compared to the relatively simple trends that scientific and technological developments usually follow. However, I think we can at least speculate on the types of issues that are likely to become important, if not the precise way that they will be resolved by the voters and the government. In my next few blog posts, I’m going to explore some of the political issues that I think will grow in importance over the next decade, as well as a couple of oft-cited (and perhaps overblown) issues which may soon fade from the American political landscape.

Privacy. For the past couple decades, whenever a political gasbag has asked a judicial appointee about his or her views on “privacy,” it has typically been a code word for abortion. However, I believe privacy will soon become a political issue in its own right, spurred on by technological advances which encroach more and more on our privacy and demand access to sensitive information. Already, there have been court cases to determine if police can tag an automobile with a GPS tracker without a warrant, but this is just the tip of the iceberg of what is to come. RFID chips, which will soon replace bar codes on products, will be embedded in nearly everything we buy, allowing for constant surveillance and tracking of products (and by extension, of customers) from their point of manufacture to their point of disposal.

Additionally, we are probably no more than a decade from the point where sensors and face-recognition technology are commonplace in many public establishments, as in Minority Report, making it virtually impossible to step out of our own homes without appearing in a database somewhere. In the slightly more distant future (probably 10-20 years), insect-sized robots are on the horizon. DARPA is already designing them for use in military and spying applications, but eventually their spread to the general public is a virtual certainty as the cost of computing drops, allowing for practically anyone to monitor practically anyone.

In light of all of these emerging technologies, some erosion of our privacy seems almost inevitable. The extent of it remains open to debate. Will our governments pass privacy laws regulating how all of this information can be obtained and used? Or will our governments be part of the problem? Only time will tell.

Bioethics. The first decade of this century saw two important bioethical debates in the United States and Europe. In the United States, stem cell research was hotly debated in the first few years of the Bush presidency, but now seems to have decisively concluded in favor of scientific progress, as the huge benefits of stem cells become more obvious and the moral objections have fallen by the wayside. In Europe, the main bioethical debate of the past decade – genetically modified foods – is still ongoing. Many Europeans are concerned about the possibility of genetically modified foods wreaking unintentional havoc on the environment and public health. Although these fears do not have much scientific support, the controversy has nevertheless succeeded in quashing the industry in Europe, at least temporarily.

These are merely the first of many bioethical debates we will face in the 21st century. Some will be relatively trivial. For example, concerns about athletes on steroids may soon give way to concerns about professional athletes with enhanced body parts. A few years ago, Tiger Woods opted to get superhuman 20/15 vision through LASIK surgery, and the range of upgrades available to those who can afford them will soon be much wider. If athletes are able to buy improved bodies, it will be difficult for “natural” athletes to compete. Will we have separate leagues for enhanced athletes and natural athletes? Will we ban these superhuman enhancements entirely, and if so, what qualifies as a superhuman enhancement?

Other bioethical concerns will be much more profound, and the government will have to take a stand. For example, if the technology exists and is widely available to screen for genetic abnormalities, would it be child abuse to not tinker with a fetus’ genome to prevent birth defects? And if preventing birth defects is morally acceptable (indeed if it is the ONLY morally acceptable option), why not prevent other undesirable traits like ugliness, propensity to violence, or low intelligence? Where does one draw the line? Eugenics, long discredited due to its ties to Nazism, may make a comeback in a world of easy access to genetic therapy.

Many of the questions related to human augmentation and genetic engineering have no easy answer, and any government decision is bound to leave many people feeling morally queasy. Look for political parties to become increasingly divided along the lines of these bioethical questions, with conservatives preferring a more restrictive approach to avoid creating ghastly new moral quandaries, and liberals favoring a more open approach to improving humanity through reengineering our own biology.

To be continued in another blog post…

Sunday, September 26, 2010

The Future of Sleep: Entirely Optional

Why do we sleep? This seemingly simple question is one of the longest-standing unsolved mysteries in biology. As bizarre as it sounds, there is no obvious reason why we are inactive for a third of our lives. National Geographic and TIME have both tackled this subject to explain some of the most prominent theories, but the bottom line is that no one really knows. Some scientists believe that sleep restores some as-yet-undiscovered substance in our brains, which is depleted while we are awake…or that it removes some as-yet-undiscovered substance that accumulates while we are awake. Some observers have noted that REM sleep is necessary for learning new tasks. Although this is true, it fails to explain how sleep helps us learn or why it is necessary for us to be dormant during this process. Others have theorized that the reason mammals sleep so much compared to other animals is an artifact from the age of the dinosaurs: at the time, most mammals were nocturnal. Perhaps sleep forced mammals to keep quiet during the day, to avoid becoming dinner for a hungry Tyrannosaurus.

Last year, biologists identified a rare gene which allows people to feel rejuvenated after only a few hours of sleep, indicating that it should be possible – at least in theory – to modify the amount of sleep we need in the near future. As the Genomic Revolution picks up in the next year or two, there will undoubtedly be many other genes identified that govern our sleep processes. Understanding them may unlock the key to understanding sleep.

Meanwhile, neurologists are making progress in understanding how sleep affects our brain. As newer imaging techniques supplant the electroencephalogram, scientists are gaining new ways to monitor our brain activity. Improved neural scans should allow scientists to pinpoint which areas of the brain are most affected by sleep, and how.

Perhaps soon we will understand the nature of sleep. Is it vital for our survival in a way that we don’t understand? Or is it simply a relic of our evolutionary past, with no useful purpose? Understanding sleep is the first step to conquering it. Is it possible that we could develop safe medication to mimic the benefits of sleep, allowing us to remain conscious for 24 hours per day, without the nasty side effects of caffeine or other drugs? This would change our world profoundly. We would effectively gain 50% more waking life by squeezing an extra eight hours out of each day. People could earn much more money by working more hours without sacrificing their leisure time, or alternatively, they could have much more leisure time without sacrificing their career.
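The 50 percent figure is straightforward arithmetic, assuming a standard eight hours of sleep per night:

```python
hours_sleep = 8
waking_now = 24 - hours_sleep      # 16 waking hours per day today
gain = hours_sleep / waking_now    # extra waking time as a fraction of current
print(f"{gain:.0%} more waking hours per day")  # prints "50% more waking hours per day"
```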

Although some people claim to enjoy sleep, I think most people would prefer to do without it, if we had the option. I have difficulty believing that anyone could truly enjoy something that they aren’t even aware they are doing. For most of us, our “love of sleep” is really “dislike of waking up.” If we could invent a safe way to remain constantly awake while still reaping whatever benefits we get from sleep, most of us would jump at the opportunity.

PREDICTIONS:
By 2030 – Scientists have a basic understanding of the reasons (if any) that we sleep, as well as why it evolved in the first place.
By 2050 – Medication exists that makes sleeping optional, providing people with any benefits of sleep without the need to actually do so, and without any nasty side effects.

Wednesday, September 15, 2010

Black Swan Events: Bioterrorism

The Genomic Revolution is a double-edged sword. As I mentioned previously, the benefits will be enormous. Genomics will allow us to have personalized, preventative health care, instead of mass-market sick care. However, there is also a ghastly dark side to the Genomic Revolution. We will soon face the truly horrifying prospect of bioterror (or bioerror). When any college student has access to pathogens and the capability to modify them to make them even more virulent or transmissible, someone almost certainly will abuse that capability.

Within a few years, the genomes of nearly all human pathogens will be publicly available. This will be necessary in order to better understand these diseases and develop cures. However, those who wish to use this information to commit acts of mass murder will have access to it as well. Some diseases may not require very much modification to become even more deadly. The 1918 Spanish flu pandemic, which killed more people than World War I, is very genetically similar to many of the strains of flu that are still circulating to this day. Soon it will be possible for an individual to create a virulent flu strain like the Spanish flu by genetically modifying other strains.

Since genomics is essentially an information technology, it is possible to swap genes from one species into another. This is what allows agronomists to copy the cold-resistance genes of Arctic fish and paste them into tomatoes (in theory). However, the same principle could be used by bioterrorists to create a Frankenstein’s monster, combining the worst traits of many diseases. Imagine an illness with the virulence of Ebola, the transmissibility of the common cold, and the evolutionary adaptability of HIV. Such a disease is the stuff that nightmares (or B-movies) are made of. Yet it will eventually be possible for malevolent individuals or groups to create one.

If a manmade disease were sufficiently different from anything found in nature, it could prove devastating. We humans have had a chance to evolve alongside influenza, the plague, malaria, and other naturally occurring afflictions. People alive today are mostly descended from the hardy individuals who survived earlier strains of these diseases. But we would have no such evolved immunities to manmade diseases. Just as the vast majority of Native Americans were decimated in the 16th century by European diseases to which they had no immunity, we could face the same prospect with manmade superplagues.

Fortunately, we have a defensive weapon in our arsenal that the 16th-century Native Americans did not. Just as genomics can create such frightening diseases, it holds the potential to cure them. Within a few years, it will be possible to sequence a genome in a couple of hours. As our understanding of how genes work continues to grow, it will take less and less time to interpret the genomes we sequence. Assuming that bureaucratic procedures were waived to combat a public health emergency, a cure for a manmade disease could be on the market almost as quickly as antivirus software is patched when new threats are discovered. Naturally-occurring diseases could soon become a minor annoyance. The real public health danger could shift to the arms race between bioterrorists and the scientists racing to cure their latest concoctions.

BLACK SWAN EVENTS:

By 2040 – A disease created or modified by humans has been released into the public. Probability: 90%

By 2040 – A disease created or modified by humans has killed at least 100,000 people. Probability: 75%

(I hesitated to even call bioterrorism a “Black Swan,” since that implies that the event is at least somewhat unlikely to occur. In my opinion, the danger of manmade diseases being released onto the public is not a question of if, but when and where. Since we cannot forecast the when and where, it is unpredictable enough to be considered a Black Swan Event.)

Thursday, August 19, 2010

Blissful Genetic Ignorance - Will We Want to Know Our Genomes?

A couple of readers have questioned me about the Genomic Revolution, wondering whether people will truly want to know their genomes even when they are able to. As I mentioned in a previous post, people prefer to avoid thinking about things that seem both horrifying and inevitable. This is understandable. Would a person truly want to know that they are doomed to suffer from, say, Alzheimer’s disease or some other affliction that is commonly regarded as a fate worse than death?

While I can’t speak for anyone other than myself, I think that most people will ultimately prefer to know. As personal genomics becomes more commonplace, the mindset of blissful genetic ignorance will probably fade away. This wouldn’t be the first time that a new medical paradigm has changed public opinion about how much patients should know. In a 1961 poll, 90% of US physicians surveyed said that they wouldn’t tell their patients if they had cancer. At the time, most doctors believed that patients were better off not knowing, since little could be done. But as cancer screening and treatment became more common in the subsequent decades, this mindset vanished almost entirely. Today it is hardly even imaginable that a doctor would not tell a patient that they had cancer.

There are many advantages to knowing which conditions we are most at risk for. Ultimately, I think that knowing which of our unhealthy behaviors we most need to change (and which we can indulge in), and which prescriptions are most likely to be effective for our personal genome, is simply more important than the unpleasant knowledge that we will eventually develop a certain condition. Practically everyone is at risk for something, and everyone accepts this. Would it really be so much worse for our psyches to know our specific risks, instead of just having a vague sense that we will develop something?

Please share your opinion. Would you want to know if you would eventually develop a disease?

Friday, August 6, 2010

The Future of Health Care - The End of Aging

What disease kills 100,000 people every day (usually after a prolonged period of pain and illness), affects nearly everyone, and causes about 90% of deaths in the industrialized world? The answer, of course, is aging.

Aubrey de Grey, a renowned gerontologist, is on a quest to eliminate aging. The search for the fountain of youth has confounded humanity for millennia, but de Grey is on more solid scientific ground than most of his predecessors in this field. He has identified what he believes are the seven causes of biological aging – a list which has remained unchanged for the past 30 years – as well as the solutions for dealing with each cause. These solutions are not merely theoretical; they have all been demonstrated in labs, although most of them are many years away from being generally available.

Some casual observers may conclude that it is physically impossible to prevent aging since people have been trying and failing to do so for millennia. But the fact is that there are naturally-occurring examples of cells that do not age. Unfortunately, they’re called cancer cells, and tend to have the nasty side effect of killing people. Nevertheless, they do demonstrate the reality of cells that do not age.

Each cell in our body normally has an hourglass in it: the cell replicates as many times as it can, then commits suicide when the hourglass runs out of sand. Structures called telomeres cap the ends of our DNA strands; each time our cells divide, the telomeres fray and shorten, until eventually the strands are too unstable and the cell self-destructs. But scientists have discovered how to add more sand to the hourglass: an enzyme called telomerase, which rebuilds the telomeres. For the discovery of telomerase in 1984 and the subsequent analysis of how it relates to aging, three scientists were awarded the 2009 Nobel Prize in Medicine.
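The hourglass metaphor can be made concrete with a toy simulation: each division drains a bit of the telomere "sand," and the cell stops dividing when the sand runs out, while telomerase pours some back in each time. The numbers here are illustrative units, not real base-pair counts.

```python
def divisions_until_senescence(telomere_length, loss_per_division=1,
                               telomerase_gain=0):
    """Count how many times a cell can divide before its telomere
    'hourglass' runs out. telomerase_gain models the enzyme adding
    sand back each division (illustrative units, not base pairs)."""
    divisions = 0
    while telomere_length > 0:
        telomere_length -= loss_per_division
        telomere_length += telomerase_gain
        divisions += 1
        if divisions > 10_000:        # cap: effectively immortal
            return divisions
    return divisions

print(divisions_until_senescence(50))                      # 50 divisions, then death
print(divisions_until_senescence(50, telomerase_gain=1))   # never runs out
```

With no telomerase the cell hits its division limit on schedule; with telomerase fully offsetting the loss, the hourglass never empties, which is precisely how cancer cells manage to be "immortal."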

There is still a lot of research that needs to be done before it is possible to halt or reverse the aging process in humans. De Grey’s organizations, the SENS Foundation and the Methuselah Foundation, are currently testing life-extension therapies on mice. The Methuselah Foundation offers the MPrize: a reward of up to $4 million to anyone who can extend the lifespan of mice to record-breaking lengths. The goal is to eventually apply this knowledge to increase the human lifespan.

De Grey is not interested in extending the portion of life in which people are old, frail, and sick. His goal is to extend the healthy portion of life, and ultimately to prevent people from ever growing old at all…and to reverse the aging process for those who are already elderly. This is not pie-in-the-sky immortality, as it won’t eliminate all causes of death. It would, however, offer the possibility of lifespans of indefinite length. De Grey has explained the concept as “Longevity Escape Velocity.”

Over the past century, medicine has done an excellent job preventing people from dying at young ages, but very little to prevent aging or increase the maximum human lifespan. At present, medicine is progressing relatively slowly, adding a few weeks to our lifespans every year. When the Genomic Revolution picks up pace within the next few years, this is likely to increase to a few months every year. De Grey hopes that eventually we can attack the root causes of aging itself and add more than one year to the human lifespan every year. He believes that the first person to reach age 1,000 is alive today…and is probably only about ten years younger than the first person to live to age 150.
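The "Longevity Escape Velocity" idea boils down to simple arithmetic: you use up one year of remaining life per calendar year, so if medicine hands back at least one year of life expectancy per year, your expected remaining years never shrink. A minimal sketch, with hypothetical growth rates:

```python
def remaining_years(start_remaining, annual_gain, years):
    """Track expected remaining lifespan: each calendar year consumes
    one year, but medical progress adds back `annual_gain` years."""
    remaining = start_remaining
    history = []
    for _ in range(years):
        remaining = remaining - 1 + annual_gain
        history.append(remaining)
    return history

# Below escape velocity: gaining 3 months per year, the clock still runs down.
print(remaining_years(40, 0.25, 5))   # drifts down toward zero
# At escape velocity (>= 1 year gained per year), remaining years never drop.
print(remaining_years(40, 1.1, 5))    # actually climbs over time
```

The threshold at `annual_gain = 1` is the "velocity" in question: below it, everyone eventually dies of old age; above it, remaining lifespan grows without bound even though nobody is literally immortal.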

The concepts of aging and old age are so ingrained in our mindset that we tend to not even think about them. Like anything that is both horrifying and seemingly inevitable, we have a remarkable ability to push aging out of our minds, or even to go through mental contortions to rationalize it as a good thing. Virtually all major life decisions we make – what career to pursue, how much of our money to save, how much risk to take, who to marry, how many children to have, when to retire, what our religious beliefs are, if or when we should go to college – are ultimately premised on the assumption that we will grow old and die, probably between ages 70 and 100. But what if this ceases to be the case? There is almost nothing that would alter our lifestyles, worldviews, beliefs, and culture as profoundly as the end of aging and the mindset that accompanies it.

Modern biology has already discovered theoretical solutions to all of the causes of aging; it is now a matter of applying them and developing solutions that work for human beings.

(The SENS Foundation and the Methuselah Foundation are non-profit organizations under US law, and all donations are tax-deductible. If you have some money to donate, these organizations are working to defeat the single deadliest disease threatening humanity.)

PREDICTIONS:
By 2045 – The aging process has been halted, for all intents and purposes. People no longer grow old beyond their peak healthy age, between 18 and 25.
By 2060 – It is possible to reverse existing damage from the aging process. It is no longer possible to estimate an adult’s chronological age merely by looking at them. Diseases of old age have, for the most part, ceased to be a problem.

Monday, July 26, 2010

Genome sequencing? There's an app for that.



Richard Dawkins talking about the Genomic Revolution.

I think we'll look back on 2010-2011 as the tipping point for the Genomic Revolution, just as we look back on 1993-1994 as the tipping point for the World Wide Web…and the Genomic Revolution will be every bit as transformational as the World Wide Web, if not more so.

Saturday, July 24, 2010

The Future of Health Care - The Genomic Revolution

For the first time in decades, we are due for a completely transformational change in health care. We are on the cusp of the Genomic Revolution, and we will start seeing the earliest results in the immediate future. Personal genomics – the practice of tailoring prescriptions, treatments, and lifestyle choices to individuals based on their genes – will soon displace the old paradigm of medicine. No longer will doctors merely give patients the drugs with the highest average chance of success; they will be able to predict whether a drug will be effective for a specific person. No longer will patients base their diet and exercise habits on generic recommendations of what is healthy and what is not; instead, they will be able to determine the healthiest lifestyle for their specific genetic makeup. Health care will become mostly preventative, rather than reactive.

Why now? What is the driving force behind this paradigm shift? For the first time in human history, we have enough computing power to cheaply and quickly sequence a human genome. In the very near future, nearly everyone will have access to their entire DNA code, which they can carry on their smartphones. When the Human Genome Project produced the first complete human genome sequence in 2003, the effort cost about $3 billion and took thirteen years. When James Watson had his personal genome sequenced in 2007, it cost $2 million and took two months. Today, sequencing a human genome costs about $6,000 and takes a couple of weeks. Within the next year, it is very likely that companies will offer genome sequencing for less than $1,000. Some observers view the $1,000 mark as a tipping point: the point at which average people can afford the service, and at which health insurers may start covering it. And after we have $1,000 genomes, $1 genomes won’t be far behind. Let’s not forget that the cost has dropped more than three-hundredfold in the last three years alone. Fast-forward a few more years, and it is conceivable that the cost of genome sequencing will be essentially nothing. I envision a day in the not-too-distant future when Walgreens and CVS have self-service genome sequencing machines as quick, cheap, and user-friendly as self-service photo machines.
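Those cost figures imply a staggering rate of decline. A quick back-of-the-envelope extrapolation, naively assuming the 2007-2010 trend simply continues, shows how close the $1,000 genome is:

```python
import math

cost_2007 = 2_000_000   # Watson's genome
cost_2010 = 6_000       # typical price at the time of writing

fold_drop = cost_2007 / cost_2010                 # roughly 333x in three years
annual_factor = (cost_2010 / cost_2007) ** (1 / 3)  # cost multiplier per year

def years_until(target, start=cost_2010):
    """Years for the cost to fall from `start` to `target`,
    assuming the same exponential rate of decline."""
    return math.log(target / start) / math.log(annual_factor)

print(round(fold_drop))              # fold-drop over 2007-2010
print(round(years_until(1_000), 1))  # under a year to the $1,000 genome
print(round(years_until(1), 1))      # a few more years to the $1 genome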

Of course, merely knowing one's genetic code is worthless without knowing how to interpret it. While biologists have identified thousands of disease markers, there is vastly more that we don’t know about our genetic code. Some services available now, such as the Google-backed 23andMe, can test DNA for predisposition to a limited range of diseases, but this is only the tip of the iceberg of what is possible. As the cost of genome sequencing approaches zero, nearly everyone will have it done. As the total number of sequenced genomes grows from thousands to millions to billions, scientists will have a treasure trove of data for analyzing diseases and patient responses to medication. A device called a DNA microarray already allows scientists to compare DNA sequences and search for correlations. As more human genomes become available for analysis, patterns will become more evident, and it will become much easier to unearth the specific genes associated with particular diseases. Patients who know which diseases they are at risk for will be able to modify their lifestyles to prevent those diseases from arising.
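The pattern-hunting described above can be sketched as a toy association scan: given many genomes and a disease label for each, score every position by how strongly its variant co-occurs with the disease. The data below is entirely synthetic (position 2 secretly carries the risk variant), and real genome-wide association studies use far more sophisticated statistics.

```python
# Six tiny synthetic "genomes" with a disease label for each person.
genomes = ["AATGC", "ACTGC", "AAAGC", "ACAGC", "AATGC", "ACAGC"]
sick    = [True,    True,    False,   False,   True,    False]

def association_scores(genomes, sick):
    """For each position, score the most disease-enriched base there:
    fraction of sick people carrying it minus fraction of healthy
    people carrying it. High score = likely disease-linked site."""
    n_sick = sum(sick)
    n_well = len(sick) - n_sick
    scores = []
    for pos in range(len(genomes[0])):
        best = 0.0
        for base in "ACGT":
            in_sick = sum(1 for g, s in zip(genomes, sick) if s and g[pos] == base)
            in_well = sum(1 for g, s in zip(genomes, sick) if not s and g[pos] == base)
            best = max(best, in_sick / n_sick - in_well / n_well)
        scores.append(best)
    return scores

scores = association_scores(genomes, sick)
print(scores.index(max(scores)))   # position 2 stands out
```

With millions of genomes instead of six, the same logic (with proper statistics to rule out chance correlations) is what lets researchers pinpoint disease genes.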

Those who are unlucky enough to get a disease in spite of (or because of) their lifestyle will have access to much more robust treatments than those currently available. By pinpointing the genetic origin of a particular disease, scientists will be able to understand what caused it and how it can be reversed. Think of our genetic code as a computer program: understanding the cause and location of the bugs will enable us to fix them. In the slightly more distant future, it will be possible to directly repair defective genes, such as those that cause cancer, through gene therapy.

The next ten years will be the most transformative decade in human history for medicine, as we finally unlock the secrets of our genetic code which have been a mystery since the dawn of humanity. The things I have described here are by no means a comprehensive description of the benefits of the Genomic Revolution, and the new paradigm will not be without problems of its own. To be continued in another blog post…

PREDICTIONS:
By 2011 – At least one company offers genome sequencing for $1,000 or less
By 2014 – At least one company offers genome sequencing for $100 or less
By 2019 – Over half of all Americans have had their genomes sequenced
By 2021 – U.S. sales of personalized medicine (i.e. drugs tailored to the patient’s specific genetic profile) are greater than sales of non-personalized, mass-market medicine

Sunday, July 4, 2010

The Future of Agriculture - In Vitro Meat

With the speed at which biotechnology is progressing, it seems very likely that by the end of the decade, we'll be able to grow meat in laboratories at a price competitive with meat from livestock. It is already possible to produce lab-grown meat, but as of now it is horrendously expensive and has the texture of runny eggs. Not exactly appetizing. Scientists have learned that they can manually "stretch" the cells in a laboratory to mimic the muscle movements of a live animal. By the end of the decade, it is likely that scientists will be able to produce lab-grown versions of meats like hamburgers and hot dogs, for which texture is not as important. It will probably take several years longer before we get to taste any lab-grown steaks.

New Harvest is a non-profit dedicated to the research and development of in vitro meat. PETA has offered a $1 million reward for the first team that can develop lab-grown chicken with the taste and texture of real chicken (although their 2012 deadline makes it highly unlikely that anyone will claim the prize). How would the world change if we switched from farm-grown meat to lab-grown meat? The benefits of this are hard to overstate.

The environmental impact would be enormous. By some estimates, every pound of beef requires 30 or more pounds of crops to feed the cow. Pork and chicken aren't quite as crop-intensive as beef, but they nevertheless consume a very large amount of resources. This is a huge drain on our water supplies and farmland. If our meat were grown in a lab, these problems could be eliminated entirely, freeing up our land and water for other productive uses or returning them to nature. Along with solar energy, this is the emerging environmental technology that I am most excited about.

The health impacts of lab-grown meat could be very large too. As it stands now, red meat has been linked to heart disease, diabetes, obesity, and cancer. Growing our meat in the laboratory would enable us to tinker with its genes to make it more nutritious, and to control how much fat is in the meat. Imagine eating something that tastes like beef but has the nutritional content of fish. We would be able to eat some of our favorite foods as often as we wanted, without guilt or negative health consequences.

Furthermore, those with moral or religious qualms about eating meat could sleep easily at night, knowing that no animal was killed just so that they could eat dinner.

I think that right now, the "yuck" factor might dissuade people from trying it. But this is really just a matter of how lab-grown meat is marketed. If it had the same taste and texture as actual meat, I could definitely see it becoming very popular. And after it became commonplace, the "yuck" factor would disappear on its own. What do you think? Would you eat lab-grown meat, assuming it had the same taste and texture as regular meat, at a reasonable price? I certainly would. It could save the world.

(Donations to New Harvest are tax deductible under US law, and are spent on university research on in vitro meat. It's a great cause with enormous potential to transform the world.)

PREDICTIONS:
By 2022 - Lab-grown hamburger (with the taste and texture of real hamburger) is sold commercially, for the same price or less.
By 2029 - Lab-grown steak (with the taste and texture of real steak) is sold commercially, for the same price or less.

Thursday, May 20, 2010

Scientists create artificial life


Today was a milestone for biotechnology. J. Craig Venter and his team created, for the first time in human history, a living cell controlled by a chemically synthesized genome – in effect, an artificial life form. I think this is the most important development in biotechnology since the Human Genome Project was completed in 2003. The ramifications are enormous.

Eventually, we could create microbes that eat pollution, microbes that we ingest to keep us healthy, microbes to destroy insects and other pests, and microbes that we use to terraform other worlds by developing a biosphere. The downsides are pretty steep as well; I think it's only a matter of time before someone intentionally or inadvertently creates nasty new diseases.

These things aren't going to happen immediately. The life form that was created today is very crude. While it (mostly) does what its DNA was "programmed" to do, scientists are still many years from being able to create organisms capable of doing our bidding.