Sunday, December 26, 2010

Nanotechnology

Industrial manufacturing has, for the past two hundred years, mostly been concerned with macro-scale objects. The specifications for our cars, bridges, homes, and widgets are almost always in familiar dimensions that we can see or feel. Partly this is out of necessity: until recently, we simply lacked the tools to build to more precise specifications. Millimeters are the smallest units of length we deal with in our everyday lives, so making our products accurate to that level was typically good enough. This is quickly changing.

Nanotechnology is the science of manipulating objects on nanometer (one-billionth of a meter) scales. At this level, it is possible to position individual atoms and molecules where we want them. One of the best-known early demonstrations of this concept came in 1989, when IBM scientists used a scanning tunneling microscope as a kind of atomic tweezer to pick up 35 xenon atoms, one at a time, and arrange them to spell “IBM.” Since then, our ability to work with tiny objects has improved every year, and shows no sign of slowing down. One of the most useful applications to date is in computing: Nearly all modern computer chips now use transistors that are only a few dozen nanometers across.
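To make those scales concrete, here is a quick back-of-the-envelope calculation in Python. The numbers are rough assumptions of mine (a human hair about 0.1 mm wide, leading-edge chip features around 32 nanometers in 2010, a xenon atom roughly 0.4 nanometers across), not figures from this post:

```python
# Back-of-the-envelope scale comparison; the hair width, transistor feature
# size, and atom diameter below are rough, illustrative assumptions.
NANOMETERS_PER_METER = 1_000_000_000   # one billion nanometers per meter

hair_width_nm = 100_000       # a human hair is very roughly 0.1 mm across
transistor_feature_nm = 32    # a leading-edge chip feature, circa 2010
xenon_atom_nm = 0.4           # approximate diameter of a single xenon atom

print(f"A hair is {hair_width_nm / NANOMETERS_PER_METER} meters across")
print(f"Transistor features spanning one hair: {hair_width_nm // transistor_feature_nm}")
print(f"Xenon atoms spanning one hair: {hair_width_nm / xenon_atom_nm:,.0f}")
```

Under these assumptions, thousands of transistor features, and hundreds of thousands of atoms, fit across the width of a single hair.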

Other applications of nanotechnology are starting to reach the market. Stain-resistant and water-resistant clothes have been available for a few years now; the cotton fibers are coated with nano-scale whiskers that cause water and other liquids to bead up and roll off rather than soak in. Nanoparticles of zinc oxide and titanium dioxide have found their way into commercial sunscreens as well, blocking ultraviolet light effectively while remaining nearly invisible on the skin.

But today’s applications are just the tip of the iceberg of nanotechnology’s potential. As our ability to manipulate tiny structures improves, so will the range of possibilities available to us. Mature nanotechnology will grant us access to a veritable cornucopia of goods. Graphene – a one-atom-thick sheet of carbon first isolated in 2004 – is roughly 100 times stronger than steel, by some measures harder than diamond, yet as flexible as plastic. It is one of the thinnest, lightest, strongest substances ever discovered, and may find its way into many common products in the coming decades, making computers faster, batteries better, food fresher, solar cells more efficient, and vehicles and bridges lighter. Two scientists took home the 2010 Nobel Prize in Physics for their work with graphene.

Another recently discovered nanoparticle, the gold nanosphere, may one day prove to be an effective cancer treatment. Its talent lies in its tiny size and its ability to absorb infrared light and convert it into intense heat. When a targeting protein is attached to a gold nanosphere, the particle seeks out cancer cells and binds to them. Once it has attached itself, a doctor can flash a burst of infrared light, heating the nanosphere to extreme temperatures and killing the cell to which it is attached. If the treatment withstands FDA trials, it could become a standard treatment for cancer, since it results in far less collateral damage than chemotherapy.

But new substances and chemicals are not the only benefit of nanotechnology. In the more distant future, there is no physical barrier preventing the development of robots at the nano-scale. These nanobots could radically transform our world – patrolling the environment to clean up pollution one molecule at a time, keeping intruding pathogens or harmful mutations out of our bodies, assembling anything we want from a hamburger to a piece of jewelry in front of our eyes, or (if proper precautions aren’t taken) consuming the entire world and reducing it to gray goo. In 1995, the late Richard Smalley, one of the grandfathers of nanotechnology, wrote, “The list of things you could do with nanotechnology reads like much of the Christmas Wish List of our civilization.” As we master the ability to manipulate the world at the atomic level, we must also master the ability to prevent the technology from destroying us.

PREDICTIONS:
By 2026 – At least one treatment employing nanoparticles is routinely used in the United States to treat cancer.
By 2035 – Graphene is routinely used in structures (e.g. bridges and buildings) that need to be strong and light.
By 2050 – Nanobots can patrol the cells of our bodies, looking for any unwelcome intruders or mutations.
By 2055 – Molecular assemblers are able to produce nearly any macro-scale product we need, provided that they have the raw materials.

Saturday, December 11, 2010

The Future of Health Care: Regenerative Medicine and Stem Cells

When an octopus is injured and loses one of its arms, the arm will grow back after several months. When a starfish loses an appendage, not only will the starfish grow a new arm, but the severed arm can grow into a new starfish! Even among vertebrates, regeneration is not unknown – salamanders can regrow lost body parts. Yet when a human loses an appendage, it is gone forever. What do these animals do that we don’t? Many scientists believe that the capacity for regeneration is lying dormant within our biology, and we may soon be able to activate it.

Most complex organisms, including humans, contain a huge number of different types of cells, each performing a specific function within the body. For the most part, these cells cannot do anything else; a brain cell can never become a white blood cell, or vice versa. But in addition to these specialized cells, we have stem cells – “wild card” cells that have no specific function of their own, but are able to become whatever type of cell the body needs. Stem cells show great promise in treating a wide range of diseases, rejuvenating our organs and tissues, and replacing entire body parts.

For several decades, the organ transplant process has been horrendously inefficient. The standard procedure has been for patients to beg their friends and family to donate an organ…if they can even find a compatible donor. If not, they enter their name onto a hopelessly long organ wait list, where they may die before finding a suitable replacement. If they are lucky enough to receive a transplant, patients will spend the rest of their lives taking a strict regimen of drugs to prevent their body from “rejecting” the organ (i.e. viewing it as a hostile invader to be eliminated).

Regenerative medicine will soon transform this process. People will be able to grow their own replacement organs in a lab, and since the new organ is their own, there will be no worries about their body rejecting it. Substantial progress has already been made in many areas. In 2006, doctors reported the first human bladders grown from scratch. They extracted a few bladder cells from each patient and seeded them onto a three-dimensional, bladder-shaped scaffold. To their delight, the cells quickly grew into new, fully functional bladders, which were then transplanted into the patients. In 2010, doctors first performed a similar procedure using stem cells instead of bladder cells, and regenerative medicine is quickly becoming a standard option for treating serious bladder diseases. Clinical trials are underway for similar procedures for other organs, including the heart, although those procedures are at least a decade away from routine use in hospitals. In June 2010, scientists successfully grew a liver in the laboratory for the first time.

But replacing entire organs is not the only promising use for regenerative medicine. There is no fundamental reason why tissues and organs that have been badly damaged – by disease, injury, or natural wear and tear – cannot gradually be rejuvenated by replacing the damaged cells with healthy new cells derived from stem cells, allowing our body parts to remain in excellent condition throughout our lives. This has ramifications for slowing the human aging process, and possibly even reversing it. When people are able to replace aging organs with new versions grown from their own cells, “old age” need no longer be regarded as a time of enfeeblement and illness.

Our stem cells are essentially blank slates that can become whatever type of cell we want them to become. Their potential applications to regenerative medicine are practically limitless, since nearly every major non-infectious, non-genetic disease results in some form of cellular damage. The shift to regenerative medicine will be relatively slow and non-disruptive – we will gradually see more and more of these therapies over the next few decades – and it is not a cure-all by any means. However, it is one of the most promising new fields of treatment (along with genomics) that will eventually radically extend the human lifespan.

Saturday, December 4, 2010

Arsenic-Based Life

This week, NASA geobiologist Felisa Wolfe-Simon announced the discovery of arsenic-based microbes in Mono Lake, a salty lake in eastern California just outside Yosemite National Park. This is a major scientific bombshell that is causing biologists to re-examine much of the conventional wisdom about what life is. All life, from the smallest bacterium to the largest redwood tree, was thought to be based on five elements: Carbon, hydrogen, oxygen, nitrogen, and phosphorus. These elements are crucial ingredients in life’s software – DNA and RNA molecules – as well as life’s fuel – ATP molecules. But now scientists have discovered organisms that use arsenic instead of phosphorus in their biochemistry, which was thought to be impossible. Most organisms are poisoned by arsenic because it mimics phosphorus chemically, wrecking the DNA and ATP found in living cells. But this new species of extremophile not only thrives in a lake with an arsenic concentration 700 times what the EPA deems safe, but actually manages to incorporate arsenic into its DNA and ATP. This is more than just a strange new species with an interesting quirk; many of those are discovered every year. This is a radical redefinition of what “life” is.

Although these microbes appear to have evolved from more traditional forms of life, their mere existence opens up the possibility of a "shadow biosphere" on earth. All known living things have descended from a common ancestor, but what if organisms with other biochemistries are living right under our noses, undetected? Might they provide evidence of a second genesis on earth? If life arose twice, with two different biochemistries, it would show that the evolution of life on earth was not just a one-in-a-trillion fluke, and would greatly increase the likelihood of life developing on any suitable world.

The discovery of arsenic-based life has important applications in the search for extraterrestrial life, which is where NASA comes in. For ages, scientists have pondered the possibility of completely new types of life. A handful of astrobiologists (and a slew of science fiction writers) have speculated that extraterrestrial life might be too alien for us to even recognize as life. In fact, this is one possible explanation for Fermi's Paradox, which asks why we haven't already found life if it is commonplace in the universe. In light of this week's discovery, that explanation has become a lot more plausible. Perhaps DNA, reliance on water, and cells are just peculiarities of life on earth, and we are barking up the wrong tree if we focus solely on finding them elsewhere.

Others have believed that “life as we know it” is the only type of life possible. This has been the dominant mindset at NASA for several decades, and it is the basis of NASA’s search for life. NASA has concentrated its efforts on locating worlds similar to our own, where the conditions exist to permit the development of life as we know it. This generally means finding worlds with water on them, located in the “Goldilocks Zone” of their solar systems, where they are neither too hot nor too cold to sustain life. This approach, of course, has been premised on the assumption that any extraterrestrial life is probably not too different from the life we know.

The discovery of arsenic-based life has cast doubt on this approach. If a new form of life can turn up in a California lake, we can scarcely imagine how different extraterrestrial life might be. NASA’s obsessive search for earth-like planets may be overlooking a huge number of worlds where life may exist, in forms unknown to us earthlings. If it is possible for life to exist with a completely different biochemistry from our own, then it is equally possible that it could thrive on worlds far different from our own, under conditions that have traditionally been regarded as hostile to life.

NASA will need to do a lot of soul-searching in light of this week’s discovery, and reevaluate how it determines if a world is potentially suitable for life. For the first time in history, a long-standing astrobiological question has been answered: Is “life as we know it” the only type of life possible? We now have our answer: It is not.

Thursday, November 25, 2010

Giving Thanks

“The arc of the moral universe is long, but it bends towards justice.” – Martin Luther King

With the constant barrage of stories in the media about war, recession, terrorism, swine flu, natural disasters, and various other tragedies, one could easily conclude that the world is a horrifying place. But instead of fixating on the latest headlines, we should contemplate the broad trends of history. The world is a far better place to live today than it ever has been before, and all indications are that the quality of life will continue to improve. This is not just Panglossian optimism; by almost any metric, the world is empirically a better place to live than in almost any other historical era. The arc of human development is long, but it bends towards a better quality of life. And for that, we should be thankful.

For most of human history, life was “nasty, brutish, and short,” as Thomas Hobbes described it. The average life expectancy in most major civilizations – including ancient Greece, ancient Rome, and medieval Britain – hovered around 30 years. And these were among the leading civilizations of their eras. Today, even the least developed countries on earth typically have life expectancies far higher than that. The advent of the Germ Theory of Disease revolutionized the way we think about infectious diseases and helped push global life expectancy to 69 years. In many developed countries, life expectancy at birth now exceeds 80 years.

Education is now ubiquitous in a way that it never has been before in human history. Prior to the middle of the 19th century, universal education was virtually unknown. Education was the province of the elite, designed solely to groom young people to be the future leaders of the world. Although the American education system is rightly the target of much criticism today, we should not lose sight of the fact that it is a crowning achievement of our history. Universal education has enabled people from all walks of life to apply their talents to making the world a better place in a way that would not have been possible otherwise. Chris Anderson, the curator of TED, proposes this thought experiment: “Pick your favorite scientist, mathematician, or cultural hero. Now imagine that instead of being born when and where they were, they had instead been born with the same abilities in a typical poverty-stricken village in, say, the France of 1200 or the Ethiopia of 1980. Would they have made the same contribution they did make? Of course not.” The ubiquity of education in our society is something for which we should be truly thankful, since it allows many more people to work on solving societal problems than were previously able to do so.

But even many of those who readily acknowledge that life today is much better than in the distant past may wonder if we have run out of steam in recent decades. Stories about the decline of Western civilization are not hard to find in the media, nor are dire warnings about how abrupt climate change could cause widespread famines and wars. If only we could return to the good old days, many wish. But the fact is that the world is much better now than in recent decades as well. Although many people might like to turn back the clock to 2007, before the recession, in the grand scheme of things the recession will be a small blip on the radar screen. Turn the clock back much farther than a few years, and it becomes obvious that the world is a much better place today than in the past. Over a billion people have been lifted out of grinding poverty in China and India in the last 20 years. The World Wide Web – a strong contender for the single greatest invention of mankind – is less than 20 years old. In the United States, women and minorities have had equal rights under the law for less than 50 years. The Cold War no longer enslaves half the world, and no longer carries the credible threat of a nuclear apocalypse. The per-capita rate of deaths from warfare is at its lowest level since at least World War II, and possibly in all of human history. The “good old days” were never that good. The quality of life today is staggeringly better than even the recent past.

But the most important thing for which we should be thankful is the hope for an even better future. The 20th century was by far the most disruptive century since the dawn of civilization, and there is no reason the 21st century can’t be just as important in radically altering the way humans live. Some of the transformative technologies now on the horizon include self-driving vehicles, stem cell therapy, lab-grown meat, ubiquitous computing, genomics, 3D printing, solar energy, and mature nanotechnology. These all offer the potential to dramatically improve our quality of life, just as sanitation and education did in the 19th century, and as plumbing and electricity did in the 20th.

Let us give thanks for the fact that we live in the best epoch of human history – relative to both the distant past and the recent past. Furthermore, let us be grateful that the technological revolution of the last 150 years shows no signs of slowing down, and will continue to unlock the true potential of human beings by freeing us from menial tasks and unpleasant maladies. We live in interesting times. Let’s treat that as a blessing, not a curse.

Wednesday, November 17, 2010

Political Issues on the Horizon, Part 3

In my last two posts, I explored four political issues that are likely to be important in the next decade. To wrap up this political trilogy, I’m going to talk about two issues that are often discussed as vitally important for our future…which I nevertheless think will soon fade from the political landscape.

Cap and Trade. For the past decade, the climate change debate in the United States has focused on the wrong issues. One side of the debate has tended to overhype the worst-case climate change scenario, arguing for immediate, draconian carbon cuts that would harm the economy while doing almost nothing to avert climate change. The other side has flatly denied that climate change even exists, despite a huge amount of scientific evidence indicating otherwise.

Yet technological advancement, not political decrees, will determine the fate of our climate. As it becomes clear that none of the common prescriptions for climate change are both politically feasible and scientifically plausible, we can and must stop focusing on policies like cap-and-trade and instead turn to the real solution: technology.

Fossil fuel combustion is responsible for 96.5% of all man-made carbon dioxide emissions, so reducing our reliance on it is of paramount importance if we are to stop clogging our atmosphere with carbon dioxide. Fortunately, solar power will soon supplant fossil fuels as our primary source of energy. Any political policies to fix climate change should instead focus on making this happen as soon as possible, through subsidies and tax credits for solar energy pioneers.

Furthermore, research into geoengineering must be heavily funded to determine if there are any plausible ways to counteract the harmful effects of carbon dioxide, without unleashing even worse environmental damage in the process. Some of the more promising ideas include stationing powerful underwater turbines in the ocean to spray water high into the atmosphere, injecting chemicals underneath large glaciers to prevent them from sliding into the ocean, seeding the ocean with iron flakes to encourage carbon-eating plankton to grow, and spraying sulfur dioxide into the stratosphere to mimic the cooling effects of volcanoes. Any of these solutions would cool the earth, but they carry environmental risks of their own that may make them unpalatable. In many ways, we would be picking our poison: do the risks of geoengineering outweigh the risks of climate change, or vice versa? Governments should commit to researching these geoengineering techniques in depth to determine the answer.

Ultimately, political debates over cap-and-trade or possible successors to the Kyoto Protocol are a dead-end. They are unproductive and unlikely to succeed. I expect them to disappear from the political landscape in the very near future, as more voters realize that a more practical approach is necessary to truly combat climate change.

National Debt. In 2006, the Democratic Party gained control of Congress, in part, by campaigning on deficit reduction. In 2010, the Republican Party did the same. Yet there is no evidence that either party in the United States has any actual interest in reducing the debt. Both parties value their other priorities – cutting taxes in the case of Republicans, and increasing spending in the case of Democrats – much more highly than deficit reduction.

And that is not entirely a bad thing. For all the hype that surrounds our debt, it is still not at dangerous levels. Even after the worst recession in recent history, our debt stands at about 94% of GDP. This is high by American standards, but still much lower than in many other developed nations that are at no risk of defaulting. The interest rate on Treasury bonds is at an all-time low, meaning that the prospect of the US government defaulting on its debt is barely even on investors’ minds.

To be sure, we cannot continue running large deficits forever. We need a medium-term plan to get our deficits under control, but even so, we don’t need to balance the budget entirely. Running a small annual deficit is fine. Between the end of World War II and 1980, the United States cut its debt-to-GDP ratio from roughly 120% to 35%, despite running an annual deficit for most of the intervening years. There is no reason we cannot do this again in the next few decades. As long as our economy grows faster than our debt, our debt-to-GDP ratio will fall, eventually returning to more typical levels.
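To see the arithmetic behind that claim, here is a minimal sketch in Python. The growth and deficit figures are illustrative assumptions of mine, not numbers from this post: 5% annual nominal GDP growth and a deficit of 2% of GDP, starting from today’s ratio of roughly 94%.

```python
# Minimal sketch: debt-to-GDP dynamics under assumed, illustrative numbers.
# Each year the debt grows by the deficit, while GDP grows by the nominal
# growth rate, so the ratio is divided by (1 + growth) after adding the deficit.

def debt_ratio_path(initial_ratio, nominal_growth, deficit_share, years):
    """Project the debt-to-GDP ratio under constant growth and deficits."""
    ratio = initial_ratio
    path = [ratio]
    for _ in range(years):
        ratio = (ratio + deficit_share) / (1 + nominal_growth)
        path.append(ratio)
    return path

if __name__ == "__main__":
    path = debt_ratio_path(initial_ratio=0.94, nominal_growth=0.05,
                           deficit_share=0.02, years=30)
    for year in (0, 10, 20, 30):
        print(f"Year {year:2d}: debt-to-GDP = {path[year]:.0%}")
```

Under these assumed numbers, the ratio drifts down from 94% toward a long-run level of deficit divided by growth (here 40% of GDP) even though the government never balances its budget – the same basic dynamic that shrank the post-war debt burden.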

Although the national debt will probably continue to be a complaint of the out-party, I am skeptical that it will be more than a political tactic anytime soon. Neither party has shown any interest in seriously addressing the debt, and as long as we can get our deficits down to a more reasonable level once the economy picks up, neither party will need to.

Sunday, November 14, 2010

Political Issues on the Horizon, Part 2

In my last entry, I explored two political issues that I expect to grow in importance in the United States over the next decade: privacy and bioethics. Today the focus is on two issues which already have a firm hold on the political landscape, but will nevertheless continue to evolve and grow in importance.

Terrorism. Ever since 9/11, American political discussions about terrorism have tended to boil down to two main components: Airport security and the wars in Afghanistan and Iraq. Unfortunately, there is little evidence that either component has actually done anything to prevent terrorism. Changes in airport security have been less about improving safety than about creating the illusion of heightened security for travelers. And the grossly mismanaged wars have succeeded only in pushing terrorism across an arbitrary national border (in the case of Afghanistan) or in actively creating terrorists where they were not previously a problem (in the case of Iraq). With these issues so completely dominating the discussion of how to fight terrorism, little attention has been given to more important questions, such as how to keep weapons of mass destruction out of the hands of terrorists, and how to respond quickly to a WMD terrorist attack to minimize the devastation.

In the future, the United States will not have the luxury of being able to ignore these questions. Today’s terrorists typically have access to only the crudest weapons: bombs capable of killing, at most, a few hundred people. Spectacular attacks like 9/11 are vanishingly rare, making the current level of funding, military commitment, and political capital spent on terrorism vastly disproportionate to the actual problem. But it remains to be seen how long that will be the case. Weapons of mass destruction may soon be within terrorists’ reach, and they would pose a far more serious danger than traditional terrorist attacks. The most worrying threat in the near future is nuclear proliferation, as a growing number of unstable regimes acquire nuclear weapons. In the slightly more distant future, biological weapons may pose an even greater danger to the world, as the necessary ingredients and know-how will be available to nearly any university student. Governments need to begin developing serious plans for minimizing the spread of weapons of mass destruction, or, failing that, rapid-response plans for the aftermath of a massive terror attack. To date, the United States has done neither.

As a political issue, I worry that debates over terrorism will continue to be dominated by those seeking to eliminate "terrorism" wholesale, with conservatives favoring an aggressive foreign policy to combat known terrorist havens, and liberals preferring more targeted nation-building efforts to eliminate the conditions in which terrorists typically arise (e.g. poverty and lawlessness). While this debate is not entirely unproductive, it is of secondary importance, since terrorist attacks are relatively rare in any event. My hope is that the debate will shift from how to prevent "terrorism" as a whole to how to prevent the spread of weapons of mass destruction – a much more focused and achievable goal. But regardless of how the debate evolves, terrorism does not seem likely to disappear from the political landscape anytime soon.

Globalization. Twenty years after the end of the Cold War and 15 years after the birth of e-commerce, it is almost a cliché to say that the world is becoming more interconnected. The nations of the world depend on one another more than ever. Wars between national governments are in terminal decline, as the cost of waging them (in terms of being cut off from neighboring markets) continues to grow relative to the benefits. Many developing countries have found that globalization is the quickest path to economic development, with many hundreds of millions of Indians and Chinese escaping poverty in the last 20 years.

Many nations are seeing strong political backlashes against globalization. In the United States, this has manifested itself in debates over immigration and outsourcing. There does not seem to be any clear-cut ideological division on globalization. Traditionally, the Democratic Party has been friendlier toward immigration, and the Republican Party has been more receptive toward free trade, despite the fact that these policies are two sides of the same coin, pitting globalists against nationalists. However, even within these policies, the partisan lines are blurry: some Democratic politicians have been staunch supporters of free trade (including President Bill Clinton) and some Republican politicians have been ardent defenders of open immigration (including President George W. Bush).

As developing countries open up their markets and continue to grow richer, and rich countries become more dependent on economic rivals like China, the debate over globalization will continue to grow louder. The objections will vary depending on the policy: In some cases the opposition will be fueled by concerns over the environment, at other times by fears of rising income inequality. In some cases, nations may simply not like the fact that their economic well-being is so dependent upon their trading partners' policies, with economic problems in one nation spilling over into others. In 1999, labor activists in Seattle successfully disrupted a meeting of the World Trade Organization, largely motivated by fears that their jobs would be outsourced.

As globalization grows in importance as a political issue, it is likely that the advocates and opponents of globalization will firmly drag the political parties toward opposing viewpoints. One party will probably come to represent open immigration and trade, while the other becomes more nationalistic and inward-looking. At the present time, it is difficult to determine which party will be which.

In my next entry, I will look at two political issues which I think will fade in importance over the next decade.

Monday, November 8, 2010

Political Issues on the Horizon, Part 1

Americans went to the polls last Tuesday and, for the third time in as many election cycles, delivered a sharp rebuke to the incumbent party. No doubt concerned that the economic recovery seems to be stagnating and that unemployment remains high, the voters gave the House of Representatives back to the Republican Party. In the wake of the midterm elections in the United States, this is a good time to consider political issues on the horizon.

I generally shy away from making specific predictions about politics or the economy. Voters are fickle and economies are unpredictable, especially compared to the relatively simple trends that scientific and technological developments usually follow. However, I think we can at least speculate on the types of issues that are likely to become important, if not the precise way that they will be resolved by the voters and the government. In my next few blog posts, I’m going to explore some of the political issues that I think will grow in importance over the next decade, as well as a couple of oft-cited (and perhaps overblown) issues which may soon fade from the American political landscape.

Privacy. For the past couple of decades, whenever a political gasbag has asked a judicial appointee about his or her views on “privacy,” it has typically been a code word for abortion. However, I believe privacy will soon become a political issue in its own right, spurred on by technological advances that encroach more and more on our privacy and demand access to sensitive information. Already, there have been court cases to determine whether police can tag an automobile with a GPS tracker without a warrant, but this is just the tip of the iceberg of what is to come. RFID chips, which will soon replace bar codes on products, will be embedded in nearly everything we buy, allowing for constant surveillance and tracking of products (and by extension, of customers) from their point of manufacture to their point of disposal.

Additionally, we are probably no more than a decade from the point where sensors and face-recognition technology are commonplace in many public establishments, as in Minority Report, making it virtually impossible to step out of our own homes without appearing in a database somewhere. In the slightly more distant future (probably 10-20 years), insect-sized robots are on the horizon. DARPA is already developing them for military and spying applications, but their eventual spread to the general public is a virtual certainty as the cost of computing drops, allowing practically anyone to monitor practically anyone else.

In light of all of these emerging technologies, some erosion of our privacy seems almost inevitable. The extent of it remains open to debate. Will our governments pass privacy laws regulating how all of this information can be obtained and used? Or will our governments be part of the problem? Only time will tell.

Bioethics. The first decade of this century saw two important bioethical debates in the United States and Europe. In the United States, stem cell research was hotly debated in the first few years of the Bush presidency, but the debate now seems to have been decided in favor of scientific progress, as the huge benefits of stem cells have become more obvious and the moral objections have fallen by the wayside. In Europe, the main bioethical debate of the past decade – genetically modified foods – is still ongoing. Many Europeans are concerned about the possibility of genetically modified foods wreaking unintentional havoc on the environment and public health. Although these fears do not have much scientific support, the controversy has nevertheless succeeded in quashing the industry in Europe, at least temporarily.

These are merely the first of many bioethical debates we will face in the 21st century. Some will be relatively trivial. For example, concerns about athletes on steroids may soon give way to concerns about professional athletes with enhanced body parts. A few years ago, Tiger Woods opted to get superhuman 20/15 vision through LASIK surgery, and the range of upgrades available to those who can afford them will soon be much wider. If athletes are able to buy improved bodies, “natural” athletes will find it difficult to compete. Will we have separate leagues for enhanced athletes and natural athletes? Will we ban these superhuman enhancements entirely, and if so, what qualifies as a superhuman enhancement?

Other bioethical concerns will be much more profound, and the government will have to take a stand. For example, if the technology exists and is widely available to screen for genetic abnormalities, would it be child abuse not to tinker with a fetus’s genome to prevent birth defects? And if preventing birth defects is morally acceptable (indeed, if it is the ONLY morally acceptable option), why not prevent other undesirable traits like ugliness, a propensity to violence, or low intelligence? Where does one draw the line? Eugenics, long discredited due to its ties to Nazism, may make a comeback in a world of easy access to genetic therapy.

Many of the questions related to human augmentation and genetic engineering have no easy answer, and any government decision is bound to leave many people feeling morally queasy. Look for political parties to become increasingly divided along the lines of these bioethical questions, with conservatives preferring a more restrictive approach to avoid creating ghastly new moral quandaries, and liberals favoring a more open approach to improving humanity through reengineering our own biology.

To be continued in another blog post…

Sunday, October 31, 2010

The Official List of Predictions

I thought it would be fun to compile a list of all the predictions I’ve made on this blog, and discuss which ones are likely to be right, and which ones I may have gotten wrong.

In my predictions, I’ve consistently tried to err on the side of pessimism as much as I thought was reasonable. I think that predictions of the future often have a tendency to be overenthusiastic, based on wishful thinking rather than what the evidence actually suggests. I’m certainly not exempting myself from that, but I’ve tried to minimize the problem by giving myself a few years’ leeway whenever reasonable. That’s why all of my predictions are in the “By 2020” format. For most of my predictions, I think that the most likely date is earlier than the date listed.

Furthermore, I’ve tried to focus most of my predictions on the foreseeable future – the next 20 years. The farther ahead we try to gaze into the crystal ball, the more difficult it becomes to make predictions with much confidence. Since my predictions tend to be based on examining the state of emerging technologies today, the farther ahead I look, the more likely it becomes that those technologies will hit some unforeseen roadblock, or that I will completely miss a transformative technology. Imagine trying to explain an iPhone to someone in 1980 who had probably never seen a personal computer, much less the internet or a cell phone!

I suppose that one of the benefits of not being a famous prognosticator is that no one is likely to hold me accountable if I’m wrong. This website may not even exist for more than a few years into the future, but I’m hoping to at least maintain the list of predictions as a sort of time capsule. If nothing else, it provides an interesting snapshot of “the future” as seen from 2010.

Without further ado, here is my list of predictions. This list will be continuously updated as I make new ones, so check back for the latest updates. This blog entry is now linked from the menu on the right.

PREDICTIONS:

By 2011 – Near-field communication is built into many new smartphones. Mobile payments become even more popular in the developing world, and make inroads in Europe and the United States. (Jan. 2011)

By 2011 – Internet-equipped televisions or add-ons will become popular. (Jan. 2011)

By 2011 – At least 75% of countries improve their score on the Human Development Index compared to 2010, with the biggest improvements in developing countries. (Jan. 2011)

By 2011 – There will be a major shakeup (or a total implosion) in the top leadership of North Korea and/or Iran. (Jan. 2011)

By 2011 – Tablet computers account for at least 13% of the US personal computer market. The market will become competitive with several new tablets seriously challenging Apple’s iPad. (Jan. 2011)

By 2011 – Gaming (led by Microsoft’s Xbox Kinect) will begin to become gesture-based, rather than controller-based. (Jan. 2011)

By 2011 – The migration of computer files from the hard drive to the cloud will begin in earnest, as people become more willing to allow third-parties to store all of the content on their computers via the internet. (Jan. 2011)

By 2011 – Voice Over IP services, such as Skype, become popular on smartphones, thus portending the eventual demise of traditional voice-telephone services. (Jan. 2011)

By 2011 – At least one company offers genome sequencing for $1,000 or less. (Jul. 2010)

By 2013 – Useful augmented reality applications exist on PCs and/or tablets to allow shoppers to virtually try on clothing before purchasing it online. (Aug. 2010)

By 2013 – Cuba has made substantial progress toward democracy relative to where it stood at the beginning of 2011. (Feb. 2011)

By 2014 – At least one company offers genome sequencing for $100 or less. (Jul. 2010)

By 2015 – Effective smartphone applications exist which can turn lights on and off, and start or stop home appliances. (Sep. 2010)

By 2016 - Personal health monitors, which are ingested or worn, can automatically call 911 whenever a person's vital signs indicate an emergency. (Sep. 2010)

By 2016 – Tunisia, Egypt, and Jordan have made substantial progress toward democracy relative to where they stood at the beginning of 2011. (Feb. 2011)

By 2017 – The average American carries at least ten computing devices on (or inside) his or her person. (Sep. 2010)

By 2017 – At least one-third of all smartphone and/or tablet users have an augmented reality application to project virtual images over the real world, as seen through their screen. (Aug. 2010)

By 2018 – Smart walls are becoming popular, which can display any image the user wants at any given moment, or can cycle through a series of posters. (Sep. 2010)

By 2018 – China has made substantial progress toward democracy relative to where it stood at the beginning of 2011. (Feb. 2011)

By 2019 – Over half of all Americans have had their genomes sequenced. (Jul. 2010)

By 2020 - Driverless cars are commercially-available and street-legal somewhere in the United States. (May 2010)

By 2020 – There are fewer than 5 million cases of malaria annually, and fewer than 15,000 deaths. (May 2011)

By 2021 – U.S. sales of personalized medicine (i.e. drugs tailored to the patient’s specific genetic profile) are greater than sales of non-personalized, mass-market medicine. (Jul. 2010)

By 2021 – Augmented reality is routinely used to train people how to perform process-based tasks such as cooking, dentistry, surgery, furniture assembly, factory work, and/or auto repair. (Aug. 2010)

By 2022 - Lab-grown hamburger (with the taste and texture of real hamburger) is sold commercially, for the same price or less. (Jul. 2010)

By 2022 – Silicon computer chips are no longer flat. They are now three-dimensional because it is impossible to shrink transistors any further. (Sep. 2010)

By 2023 – At least half of all new, non-driverless automobiles in the US have augmented reality technology in the windshield for safety and/or navigational purposes. (Aug. 2010)

By 2025 - Youth literacy rates exceed 90% in both Sub-Saharan Africa (up from 72% in 2008) and South Asia (up from 79% in 2008). Gender disparities in literacy have mostly disappeared; the global female youth literacy rate is no less than 98% of the male youth literacy rate (up from 94% in 2008). (Oct. 2010)

By 2025 - Fewer than 75% of students in the United States physically attend a school on a daily basis. (Oct. 2010)

By 2025 – In the United States, solar energy is cheaper than oil on average, on a per kilowatt-hour basis. (Oct. 2010)

By 2026 – At least one treatment employing nanoparticles is routinely used in the United States to treat cancer. (Dec. 2010)

By 2027 - New driverless cars outnumber new cars requiring at least some human control, in the US market. (May 2010)

By 2028 – Augmented reality contact lenses exist which can place virtual overlays of the world directly onto the wearer’s eye, or block out the real world altogether if the wearer desires. (Aug. 2010)

By 2029 - Lab-grown steak (with the taste and texture of real steak) is sold commercially, for the same price or less. (Jul. 2010)

By 2030 – Scientists have a basic understanding of the reasons (if any) that we sleep, as well as why it evolved in the first place. (Sep. 2010)

By 2035 - Driverless cars are widely perceived as safer than human drivers. Somewhere in the United States, it is illegal for humans to drive. (May 2010)

By 2035 – The global oil trade is less than 25% the size that it is in 2010 (approximately $2.1 trillion), adjusted for inflation. (Oct. 2010)

By 2035 – Graphene is routinely used in structures (e.g. bridges and buildings) that need to be strong and light. (Dec. 2010)

By 2040 - An "invisibility suit" exists which renders the wearer almost completely invisible to those who aren't actively looking for him or her. (Aug. 2010)

By 2045 – The aging process has been halted, for all intents and purposes. People no longer grow old beyond their peak healthy age, between 18 and 25. (Aug. 2010)

By 2050 – Medication exists that makes sleeping optional, providing people with any benefits of sleep without the need to actually do so, and without any nasty side effects. (Sep. 2010)

By 2050 – Nanobots can patrol the cells of our bodies, looking for any unwelcome intruders or mutations. (Dec. 2010)

By 2055 – Molecular assemblers are able to produce nearly any macro-scale product we need, provided that they have the raw materials. (Dec. 2010)

By 2060 – It is possible to reverse existing damage from the aging process. It is no longer possible to estimate an adult’s chronological age merely by looking at them. Diseases of old age have, for the most part, ceased to be a problem. (Aug. 2010)