Monday, August 30, 2010

Augmented Reality Will Change How We See The World

Augmented reality is the technology of superimposing virtual images over one's view of the real world. One of the earliest examples is the yellow first-down line in televised American football games, which has been in use since 1998. The technology typically works by recognizing a certain object in the real world, which triggers a computer to insert a virtual image into the viewer's line of sight. In the case of the first-down line, it operates through a television screen, but any type of screen will do. Augmented reality applications already exist for computers, smartphones, specially designed glasses, and airplane windshields.

General Electric has a very cool interactive demonstration of augmented reality with which you can experiment on your computer. In this demonstration, an ordinary computer webcam scans the room. GE's program detects a certain symbol (on a printed piece of paper) and displays a virtual image on top of it. You can hold the paper at almost any angle. If you move the paper or change the angle, the virtual image will follow your movements so that it always appears to sit on top of the symbol.

Although this is certainly a cool trick, does augmented reality have any useful applications beyond keeping us mildly amused? Any good augmented reality application has at least two components: 1) recognizing a real-world object to "trigger" the application to do something, and 2) superimposing a virtual image into the user's line of sight. Imagine you are walking the streets of an unfamiliar city looking for a place to eat lunch. As you walk by restaurants with your augmented reality glasses on, you see each restaurant's average Yelp rating (and its most recent reviews) hovering just above the building. Based on this data, you select the restaurant you want to eat at. This would certainly be more convenient than looking up each restaurant individually as you walked by.
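To make that two-step structure concrete, here is a minimal Python/OpenCV sketch of a recognize-then-overlay loop. This is an illustration only, not how GE's demo actually works: the file names, the 0.8 confidence threshold, and the use of simple template matching are placeholder assumptions, and real AR systems rely on far more robust recognition (fiducial markers, feature matching, or machine learning) plus pose tracking.

```python
# Minimal sketch of the two-step AR loop: (1) recognize a trigger object in
# the camera image, (2) superimpose a virtual image on top of it.
# "trigger.png" and "overlay.png" are hypothetical placeholder files.
import cv2

trigger = cv2.imread("trigger.png", cv2.IMREAD_GRAYSCALE)  # printed symbol to look for
overlay = cv2.imread("overlay.png")                         # virtual image to draw
assert trigger is not None and overlay is not None, "placeholder images not found"
cap = cv2.VideoCapture(0)                                   # default webcam

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Step 1: look for the trigger symbol in the current frame.
    result = cv2.matchTemplate(gray, trigger, cv2.TM_CCOEFF_NORMED)
    _, score, _, (x, y) = cv2.minMaxLoc(result)

    # Step 2: if the symbol is found, draw the virtual image over it.
    if score > 0.8:                                          # confidence threshold
        h, w = overlay.shape[:2]
        fh, fw = frame.shape[:2]
        h, w = min(h, fh - y), min(w, fw - x)                # clip to frame bounds
        frame[y:y + h, x:x + w] = overlay[:h, :w]

    cv2.imshow("augmented view", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):                    # press q to quit
        break

cap.release()
cv2.destroyAllWindows()
```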

Augmented reality software could be installed in the windshields of automobiles (as it already is in airplanes) to dramatically improve safety, according to General Motors. The technology could scan the car's surroundings for possible hazards. Any object deemed dangerous, such as a deer on the side of the road, could flash on the windshield to draw the driver’s attention to it. In conditions of poor visibility, an augmented reality windshield could help point out the edges of the road, the lines on the road, and any important road signs.

BMW has another example of how augmented reality might work in automobiles in the near future. Suppose that you wanted to repair something in your car but knew very little about how to go about it. Rather than take it to a mechanic, you could put on your augmented reality glasses and get step-by-step instructions for how to repair it yourself. The software in your glasses would recognize the parts of your car as you looked at them, and create a detailed illustration showing you exactly what you needed to do in real time.

The tutorial applications of augmented reality are endless. In addition to repairing a car, augmented reality could help teach people how to perform a wide variety of tasks, from processes as simple as cooking to those as complex as open-heart surgery. Eventually, even augmented reality glasses will become obsolete, as computing hardware becomes small and cheap enough to fit inside regular eyeglasses or contact lenses that project virtual images directly onto the user's eye.

Futurist Ray Kurzweil envisions a future in which contact lenses come with several different viewing modes, depending on what the user wants to see at any given moment. There would be a “normal mode,” which simply displays the real world as we currently observe it. There would be an “augmented mode,” in which the user's augmented reality applications superimpose virtual images over the user's regular view of the world. Finally, there would be a “blocking mode,” in which the real world is not displayed at all, allowing the user to become fully visually immersed in a website, book, movie, or computer game.

Some of the earliest applications of augmented reality are already being rolled out on the iPhone and Android. This technology seems poised to explode into mainstream use over the next 10-15 years, and will quite literally change how we see the world.

PREDICTIONS:

By 2013 – Useful augmented reality applications exist on PCs and/or tablets to allow shoppers to virtually try on clothing before purchasing it online.

By 2017 – At least one-third of all smartphone and/or tablet users have an augmented reality application to project virtual images over the real world, as seen through their screen.

By 2021 – Augmented reality is routinely used to train people how to perform process-based tasks such as cooking, dentistry, surgery, furniture assembly, factory work, and/or auto repair.

By 2023 – At least half of all new, non-driverless automobiles in the US have augmented reality technology in the windshield for safety and/or navigational purposes.

By 2028 – Augmented reality contact lenses exist which can place virtual overlays of the world directly onto the wearer’s eye, or block out the real world altogether if the wearer desires.

Saturday, August 28, 2010

Harry Potter's Invisibility Cloak May Soon Be A Reality

Acclaimed theoretical physicist and futurist Dr. Michio Kaku is designing a garment to make the wearer almost completely invisible, like Harry Potter. This is possible by bending light around an object rather than letting it strike the object. Until a few years ago, most physicists thought that invisibility was an impossible violation of the laws of optics, but it turns out they were wrong...and it's much easier than anyone thought. In 2006, a team at Duke University demonstrated how substances called metamaterials, with refractive indices different from anything found in nature, could deflect beams of light.

The catch? If light bends around an object such as an invisibility suit, it is impossible to see out of it. Complete invisibility would render the invisible man completely blind. Kaku's solution is to use a beamsplitter to redirect a very small percentage (perhaps 4%) of the light into the invisible man's eye. Careful observers would be able to see the faint image of two eyes floating in the air, but that would be the only visual clue of the invisible person's presence.

Dr. Kaku believes that something resembling Harry Potter's invisibility cloak will be a reality in a few decades. I was skeptical that progress could be made that quickly, until I saw the current state of progress in this field. Skip to 3:40 in the first clip to be amazed.

While an invisibility cloak would certainly be impressive from a technical standpoint, I'm not sure it would actually be a good development for society. I can think of a lot of pernicious uses for an invisibility cloak, but very few positive applications. This is an invention that may end up being banned, and/or limited to military/intelligence agencies.

PREDICTIONS:
By 2040 - An "invisibility suit" exists which renders the wearer almost completely invisible to those who aren't actively looking for him or her.

Wednesday, August 25, 2010

Black Swan Events: Nuclear War and Nuclear Terrorism

In my new Black Swan series of blog posts, I will be looking at a few of the potential surprises that history could have in store for us. Nassim Nicholas Taleb defines a Black Swan as an event that is very difficult to predict in advance, but radically changes the course of history. Some examples in the last 50 years include 9/11, the sudden collapse of communism and the end of the Cold War, the AIDS pandemic, and the invention of the birth control pill. It would have been very difficult for a futurist to predict any of those events in advance based on trends, and yet they have had a very large impact on the world. A Black Swan Event could throw a wrench into predictions for the future, which tend to be based on analyzing trends rather than anticipating surprises.

All of the predictions I have made about the future of technology, especially those in the more distant future, should have the following disclaimer: “Provided that we do not destroy ourselves.” Today I’ll be examining a Black Swan event that has been hanging over humanity like a Sword of Damocles for 65 years. A nuclear war has been widely viewed as the ultimate catastrophe. Very few things could set back the progress we have made in improving the quality of life in the past two centuries more than a nuclear war could. Where is the greatest threat of a nuclear war, and where is the greatest potential for destruction? Could a sovereign nation launch a nuclear first-strike against its foes, or are nuclear-armed terrorists the greater threat?

Let's first examine the potential sources of an international nuclear war. While such a war is less likely than a nuclear terrorist attack, it has far more destructive potential. I think there are three main global flash points to consider: India-Pakistan, North Korea-United States, and Israel-Iran.

As with any Black Swan Event, we need to evaluate scenarios based upon both their likelihood and their destructive potential. The India-Pakistan flash point is very high on both measures, making it of supreme importance. In my opinion, this region has the greatest potential for an international nuclear war. The two foes have 60-80 nuclear weapons each, and neither seems to have many qualms about nuclear brinkmanship. The extreme population density in this part of the world means that even a single nuclear volley could have enormous destructive potential. Despite (or perhaps because of) the instability of the Pakistani government, the two nations remain as suspicious of each other as ever.

North Korea is another potential source of a nuclear war. The nation makes a habit of antagonizing nearly every other country in the world and projecting an irrational image, and it seems to be perpetually on the verge of either a power transfer or total collapse. In my opinion, the greater danger is an accidental nuclear launch against South Korea or Japan, rather than a directive from North Korea's leadership. Little is known about North Korea's nuclear weaponry, but it is unlikely that such an impoverished nation has adequate safeguards in place to prevent a Dr. Strangelove-style strike ordered by a rogue commander. If a North Korean weapon were used on South Korea or Japan, the United States would almost certainly respond with nuclear force of its own.

Another potential source of international nuclear conflict is the rivalry between Israel and Iran. If Iran eventually gains nuclear weapons (as I think it will), either Iran or Israel may be tempted to strike the other first to solidify its position as the preeminent nuclear power in the Middle East. However, in my opinion a conflict is unlikely, because neither Israel nor Iran would truly gain much from a nuclear exchange. It is far more likely that the two nations would come to an uneasy truce than fight a nuclear war.

The greater danger, however, comes from nuclear terrorism rather than nuclear war. While far less destructive, it is also far more likely. Warren Buffett stated in 2002 that he believed an eventual nuclear attack on US soil was a “virtual certainty.” While I am not quite that pessimistic, it is hardly unthinkable.

Where could terrorists obtain nuclear weapons? Three nuclear powers – Russia, Pakistan, and North Korea – have had difficulty maintaining control of their stockpiles, raising the possibility of a nuclear black market. When the Soviet Union collapsed, its nuclear material was spread among four nations (Russia, Kazakhstan, Ukraine, and Belarus). A small amount of it has never been accounted for, and it may still be changing hands on the black market. North Korea and Pakistan pose even greater threats. North Korea may be on the verge of total state collapse. If the nation collapses and the neighboring powers do not take immediate action to secure its nuclear supplies, it is plausible that terrorists could eventually gain access to this material. The Pakistani government is increasingly unstable, and many elements of the military are sympathetic to terrorist groups. Many parts of the nation are not controlled by the central government. Furthermore, the Pakistani nuclear network run by A.Q. Khan has a history of selling nuclear technology and expertise on the black market.

While the end of the Cold War greatly reduced the threat of a nuclear conflict, the trend has now reversed as nuclear weapons continue to proliferate, especially in unstable regimes. The use of nuclear weapons is becoming more likely with each passing year. While I doubt that any terrorist groups have access to nuclear weapons yet, I think the day is coming in the near future unless the world takes immediate action to reverse the trend.

BLACK SWAN EVENTS:
A nuclear weapon is detonated in a major world city by 2030 – Probability: 50%
A nuclear war (defined as a nuclear attack and counterattack) occurs by 2030 – Probability: 30%
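As a rough way to read these numbers (my own back-of-the-envelope calculation, not anything claimed above), a cumulative "by 2030" probability can be translated into the constant annual probability it would imply over the roughly 20 years between 2010 and 2030, assuming independent years:

```python
# Illustrative only: convert a cumulative "by 2030" probability into the
# constant annual probability it would imply over ~20 years (2010-2030),
# assuming each year is independent. The post makes no claim of a constant rate.

def implied_annual_probability(cumulative_p: float, years: int = 20) -> float:
    """Annual probability p such that 1 - (1 - p)**years equals cumulative_p."""
    return 1 - (1 - cumulative_p) ** (1 / years)

for label, p in [("nuclear detonation in a major city", 0.50),
                 ("nuclear war (attack and counterattack)", 0.30)]:
    print(f"{label}: ~{implied_annual_probability(p):.1%} per year")
# nuclear detonation in a major city: ~3.4% per year
# nuclear war (attack and counterattack): ~1.8% per year
```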

Thursday, August 19, 2010

Blissful Genetic Ignorance - Will We Want to Know Our Genomes?

A couple of readers have questioned me about the Genomic Revolution, wondering whether people will truly want to know their genomes even when they are able to. As I mentioned in a previous post, people prefer to avoid thinking about things that seem both horrifying and inevitable. This is understandable. Would a person truly want to know that they are doomed to suffer from, say, Alzheimer's disease or some other affliction that is commonly regarded as a fate worse than death?

While I can't speak personally for anyone other than myself, I think that most people will ultimately prefer to know. As personal genomics becomes more commonplace, the mindset of blissful genetic ignorance will probably fade away. This wouldn't be the first time that a new medical paradigm has changed public opinion about how much patients should know. In a 1961 poll, 90% of US physicians surveyed said that they wouldn't tell their patients if they had cancer. At the time, most doctors believed that patients would be better off not knowing, since little could be done. But as cancer screening and treatment became more common in the subsequent decades, this mindset vanished almost entirely. Today it is hardly even imaginable that a doctor would not tell a patient that they had cancer.

There are many advantages to knowing which conditions we are most at risk for. Ultimately, I think the knowledge of which of our unhealthy behaviors we most need to change (and which we can indulge in), and of which prescriptions are most likely to be effective for our personal genomes, is simply more important than the unpleasant knowledge that we will eventually develop a certain condition. Practically everyone is at risk for something, and everyone accepts this. Would it really be so much worse for our psyches to know our specific risks instead of having only a vague sense that we will eventually develop something?

Please share your opinion. Would you want to know if you would eventually develop a disease?

Tuesday, August 10, 2010

Book Review - "You Are Not a Gadget: A Manifesto" by Jaron Lanier

Jaron Lanier's new book, You Are Not a Gadget: A Manifesto, is a criticism of many emerging technologies and beliefs which Lanier finds dehumanizing. In this long, rambling critique, Lanier's targets include open-source projects, crowd-sourcing mechanisms such as wikis and prediction markets, and “cybernetic totalism.” If Lanier were not one of the founding fathers of virtual reality, it would be easy to label him a Luddite, but he insists that he merely wants to help steer the course of technological development rather than inhibit it. Compared to the vast majority of third culture futurists, however, Lanier comes across as extremely conservative and reactionary. While I knew before I picked up the book that I would disagree with almost everything Lanier had to say, I think it is important to give a fair hearing to contrary ideas from intelligent people to avoid the echo chamber effect. The world always needs gadflies.

Lanier believes that crowd-sourcing tools are causing people to rely too much on the hive mind. He rejects the notion of the “wisdom of the crowds,” viewing wikis as error-prone and bias-prone compared to more scholarly texts. This may or may not be true (the empirical evidence is mixed), but ultimately it is irrelevant. While Lanier laments the fact that Wikipedia has become the central repository for human knowledge, he offers no convincing solution. “Stop relying on Wikipedia” is a poor excuse for a solution. If it were that simple, Wikipedia would never have become so popular in the first place. It's no accident that people prefer Wikipedia to the Encyclopaedia Britannica.

He criticizes “cybernetic totalism,” which he defines as the belief that human brains are nothing more than complex computer programs and will one day merge with computer technology. This seems like a thinly veiled swipe at transhumanism in general, and specifically at the Singularity (the belief that one day soon we will create a computer smarter than we are, which will create an even smarter computer, which will endow us all with godlike powers), a mindset common among many computer scientists and technologists. I have my own criticisms of the Singularity, so I really wanted to root for Lanier in this section of the book. But ultimately, I think that Lanier's conclusions are just as irrational as some of the wilder claims of Singularity enthusiasts. Rather than question the plausibility of this worldview, Lanier attacks the desirability of a man-machine merger. I think he is barking up the wrong tree here. If such a merger is plausible and most people view it as beneficial, it will almost certainly occur eventually, regardless of whether or not Lanier thinks it dehumanizes us. Once again, Lanier offers no solution for averting this technological outcome, nor any suggestions for how we could steer technological progress toward a goal he views as more desirable.

On the rare occasions when Lanier does suggest an alternative solution, his recommendations are so laughably impractical that they are difficult to take seriously. For example, he considers the evolution of music from a physical product (e.g., a record, tape, or CD) to a digital file a small tragedy. He believes that this shift encourages piracy and destroys the incentive to create songs, and that as a consequence we have entered a musical dark age. That's a perfectly valid opinion, but what is his solution? For us all to go back to physical music products! He suggests “songles” – little trinkets like bracelets, necklaces, or coffee mugs that could unlock our music when they are physically near a computer. Aside from the sheer ridiculousness of this, Lanier seems oblivious to the fact that we moved away from physical music products because people didn't WANT physical music products. They simply want to be able to listen to their music whenever they want, with as little hassle as possible.

Ultimately, my biggest problem with this book is Lanier's presumption that we can simply choose not to walk down a certain path of technological development if most of us agree it is detrimental. He clearly believes that society can deliberately steer the course of technology, whereas I'm more of a technological fatalist. In my view, anything that CAN be developed WILL be developed, provided that enough people view it as beneficial, the technology is diffused enough to make it impossible to ban, and the economic incentives exist for its development. Ultimately, Lanier falls victim to what my good friend Nassim Nicholas Taleb calls “the illusion of control.” While it may be comforting to Lanier to believe that we can guide technological development so directly, this seems to be nothing more than wishful thinking. Don't get me wrong; we absolutely need to have ethical debates about technological progress. But ultimately the naysayers will need to propose actual solutions instead of merely lamenting the undesirable consequences of technology.

You Are Not a Gadget: A Manifesto is only 192 pages but feels much longer, given the disjointed, rambling nature of the chapters. It’s worth a read for anyone interested in futurism, for the simple reason that Lanier is one of the few contrarian voices in the third culture. But don’t expect to be convinced by many of his arguments.

3/5 stars

Sunday, August 8, 2010

The Future of Privacy - Radical Openness

Generally, my visions of tomorrow are quite optimistic. But there is one area where I am decidedly a pessimist. The future of privacy seems bleak. Many of us who are relatively tech-savvy have already given up much of our privacy voluntarily for the sake of convenience, fun, or money. I am a member of a social network which tracks my every movement by letting me check in with my smartphone at nearly every location I visit. Another of my social networks allows me to publish my every thought in real time, as long as it doesn't exceed 140 characters. A popular financial website requires that users turn over the passwords to their bank accounts, then analyzes the users' financial habits. Nearly all young people belong to a social network which makes no secret of its desire to collect and mine our personal information.

Ten years ago, most people would have been shocked by these technologies. Who would have thought that the biggest threat to our privacy would come from us voluntarily giving up our information? Facebook CEO Mark Zuckerberg rightly received a lot of criticism for forcing Facebook users to publicly reveal their personal information by default, then claiming that Facebook was merely adapting to users' declining expectations of privacy. Zuckerberg's claim was certainly untrue – Facebook is one of the driving forces behind declining expectations of privacy, rather than merely a response to them – but his instinct is probably correct that users will tolerate more intrusions on their privacy once they are accustomed to them.

My fellow futurist blogger David Houle notes, “As technology advances, privacy declines.” I think this is unfortunately correct, and very little can be done to change it. Tech-savvy young people in developed countries certainly have less privacy than they did a decade ago, and are mostly OK with that. In the coming decades, society will have to radically redefine its notions of privacy. I imagine that there may come a time when it is no longer practical to expect to be able to travel anywhere without your visit ending up in an online database, perhaps publicly available to anyone who cares. In the not too distant future, there may be vast government or corporate databases of genomes and other biometric indicators from nearly everyone in the nation.

Does this mean that an Orwellian dictatorship is likely? I don’t think so. Chances are, people will be willing to adjust their privacy expectations downward for the sake of convenience, just as they do now for Facebook. It remains to be seen if lowered expectations of privacy will actually help dictatorships thrive. Repressive states are increasingly blocking the very same tools that are responsible for the declining expectations of privacy – Facebook, Twitter, YouTube, and even Foursquare. It seems that these governments view them more as tools for subversion than as useful ways to snoop on political opponents.

This past week, the US Senate voted to confirm Elena Kagan as the next Supreme Court Justice. Much of her testimony dealt with her constitutional views on the right to privacy. In the United States at the present time, this is mostly just a code phrase for a nominee’s views on abortion, but I think that the right to privacy will become a more important issue in its own right for future confirmation hearings. The Supreme Court will probably eventually need to define what the right to privacy entails, who it protects, and from whom it protects them.

In the legislature, stronger safeguards for privacy can help prevent the emergence of corporate Big Brothers. Google recently decided to withdraw from China instead of obeying China's censorship laws or facilitating its eavesdropping on its citizens. As commendable as this is, most large corporations are not as civic-minded. Yahoo! has been known to turn over personal emails from political dissidents to dictatorships. A law modeled on the Foreign Corrupt Practices Act, preventing US companies from infringing on their users' privacy wherever in the world those users are located, would be a major step toward keeping Big Brother at bay. American companies would no longer have the excuse that they will be singled out for persecution if they refuse to participate in invasions of privacy by the governments of the countries in which they operate. Very few nations could feasibly ban every American company merely for obeying US laws.

Ultimately, government action and corporate good deeds can only slow the inevitable shift toward less privacy, and perhaps prevent some of its worst excesses. In the future, governments, corporations, and individuals can and will gain access to far more information about us than is currently available in the public domain. The totalitarianism of Big Brother probably isn’t going to happen…but hundreds of Little Brothers may be watching you soon, if they aren’t already.

Friday, August 6, 2010

The Future of Health Care - The End of Aging

What disease kills 100,000 people every day (usually after a prolonged period of pain and illness), affects nearly everyone, and kills about 90% of people in the industrialized world?

Aubrey de Grey, a renowned gerontologist, is on a quest to eliminate aging. The search for the fountain of youth has confounded humanity for millennia, but de Grey is on more solid scientific ground than most of his predecessors in this field. He has identified what he believes are the seven causes of biological aging – a list which has remained unchanged for the past 30 years – as well as the solutions for dealing with each cause. These solutions are not merely theoretical; they have all been demonstrated in labs, although most of them are many years away from being generally available.

Some casual observers may conclude that it is physically impossible to prevent aging since people have been trying and failing to do so for millennia. But the fact is that there are naturally-occurring examples of cells that do not age. Unfortunately, they’re called cancer cells, and tend to have the nasty side effect of killing people. Nevertheless, they do demonstrate the reality of cells that do not age.

Each cell in our body normally has an hourglass in it: the cell replicates as many times as it can, then commits suicide when the hourglass runs out of sand. The sand is the telomere, a stretch of repetitive DNA at the end of each chromosome. Each time our cells divide, the telomeres become shorter, until eventually the DNA strands are too unstable and the cell self-destructs. But scientists have discovered how to add more sand to the hourglass: an enzyme called telomerase can rebuild the telomeres. For the discovery of telomerase in 1984 and the subsequent analysis of how it relates to aging, three scientists were awarded the 2009 Nobel Prize in Medicine.
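To make the hourglass analogy concrete, here is a toy simulation of the idea. The numbers are purely illustrative and not taken from real cell biology: each division removes a little telomere, and once the telomere falls below a critical length the cell stops dividing, unless something telomerase-like keeps adding length back.

```python
# A toy "hourglass" model of the telomere mechanism described above.
# All lengths and rates are made-up illustrative numbers.

def divisions_until_senescence(telomere_length: int, loss_per_division: int,
                               gain_per_division: int = 0, limit: int = 10_000) -> int:
    """Count divisions before the telomere drops below a critical length."""
    critical_length, divisions = 500, 0
    while telomere_length > critical_length and divisions < limit:
        telomere_length += gain_per_division - loss_per_division
        divisions += 1
    return divisions

print(divisions_until_senescence(10_000, 100))                          # 95 divisions, then senescence
print(divisions_until_senescence(10_000, 100, gain_per_division=100))   # hits the 10,000-division cap: never senesces in this toy model
```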

There is still a lot of research that needs to be done before it is possible to halt or reverse the aging process in humans. De Grey’s organizations, the SENS Foundation and the Methuselah Foundation, are currently testing life-extension therapies on mice. The Methuselah Foundation offers the MPrize: a reward of up to $4 million to anyone who can extend the lifespan of mice to record-breaking lengths. The goal is to eventually apply this knowledge to increase the human lifespan.

De Grey is not interested in extending the portion of life in which people are old, frail, and sick. His goal is to extend the healthy portion of life, and ultimately to prevent people from ever growing old at all…and to reverse the aging process for those who are already elderly. This is not pie-in-the-sky immortality, as it won't eliminate all causes of death. It would, however, offer the possibility of lifespans of indefinite length. De Grey has explained the concept as the “Longevity Escape Velocity.”

Over the past century, medicine has done an excellent job preventing people from dying at young ages, but very little to prevent aging or increase the maximum human lifespan. At present, medicine is progressing relatively slowly, adding a few weeks to our lifespan every year. When the Genomic Revolution picks up pace within the next few years, it is likely that this will increase to a few months every year. De Grey hopes that eventually we can attack the root causes of aging itself to add more than one year to the human lifespan every year. He believes that the first person to reach age 1,000 is alive today…and is probably only about ten years younger than the first person to live to age 150.
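The arithmetic behind "Longevity Escape Velocity" can be sketched in a few lines of code. This is my own toy model with made-up numbers, not de Grey's: every calendar year consumes one year of remaining life expectancy, while medical progress hands some back, and if the annual gain ever exceeds one year per year, remaining life expectancy stops shrinking.

```python
# Toy model of "Longevity Escape Velocity". Each year you use up one year of
# remaining life expectancy and medical progress adds some back; a gain above
# one year per year means the clock never runs out. Numbers are illustrative.

def years_remaining(start_expectancy: float, annual_gain: float, horizon: int) -> list[float]:
    """Remaining life expectancy over time, under a constant annual gain."""
    remaining, path = start_expectancy, []
    for _ in range(horizon):
        remaining = remaining - 1 + annual_gain   # one year lived, some years gained
        path.append(remaining)
        if remaining <= 0:                        # life expectancy exhausted
            break
    return path

slow = years_remaining(40, 0.1, 60)   # ~5 weeks gained per year
fast = years_remaining(40, 1.2, 60)   # more than 1 year gained per year
print(len(slow), round(slow[-1], 1))  # 45 -0.5  -> expectancy exhausted in ~45 years
print(len(fast), round(fast[-1], 1))  # 60 52.0  -> expectancy still growing at the horizon
```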

The concepts of aging and old age are so ingrained in our mindset that we tend to not even think about them. Like anything that is both horrifying and seemingly inevitable, we have a remarkable ability to push aging out of our minds, or even to go through mental contortions to rationalize it as a good thing. Virtually all major life decisions we make – what career to pursue, how much of our money to save, how much risk to take, who to marry, how many children to have, when to retire, what our religious beliefs are, if or when we should go to college – are ultimately premised on the assumption that we will grow old and die, probably between ages 70 and 100. But what if this ceases to be the case? There is almost nothing that would alter our lifestyles, worldviews, beliefs, and culture as profoundly as the end of aging and the mindset that accompanies it.

Modern biology has already discovered theoretical solutions to all of the causes of aging; it is now a matter of applying them and developing solutions that work for human beings.

(The SENS Foundation and the Methuselah Foundation are non-profit organizations under US law. All donations are tax-deductible. If you have some money to donate, these organizations are helping to solve the single worst disease threatening humanity.)

PREDICTIONS:
By 2045 – The aging process has been halted, for all intents and purposes. People no longer grow old beyond their peak healthy age, between 18 and 25.
By 2060 – It is possible to reverse existing damage from the aging process. It is no longer possible to estimate an adult’s chronological age merely by looking at them. Diseases of old age have, for the most part, ceased to be a problem.

Monday, August 2, 2010

Inception and the Ethics of Virtual Reality

***SPOILER ALERT***

I watched Inception last weekend - the first movie I've seen in the theater all year. It's a great movie, and a vivid illustration of the simulation hypothesis. Along with movies like The Matrix and Vanilla Sky, it explores the question of whether our reality may be a simulation, and whether there is any way to tell the difference.

The reason I often blog about the simulation hypothesis is not merely because it’s an interesting philosophical question, although it certainly is. As Inception illustrates, there are some profound ethical dilemmas we will need to face in the future, when we have both the raw computing power and the understanding of our own neurology to escape to convincing, fictitious worlds.

In Inception, Cobb and Mal dream for decades (in dream-time) in a world of their own creation. Mal goes so far as to hide any evidence that she is dreaming, preferring to forget that their world is not real. When we have the technology to create simulated realities, there will almost certainly be people who want to do this. Even today, many people choose to spend a huge portion of their lives escaping into the crude virtual worlds that our technology allows, such as World of Warcraft. As long as this lifestyle is limited to computer geeks, most people will view it as an unhealthy activity. But when simulated worlds become truly convincing, it is probable that many others will want to join in.

How will society view people who want to spend years or decades in a simulation, living a better life than they have in this world? Will people scorn them like drug addicts? Will organized religions extol simulations as a way to have profound spiritual experiences, or will they be fearful of the threat that simulations pose to their monopoly on heaven? Might there be a mass exodus of people from this world, if nearly everyone prefers to live in a simulation? What would be the economic impact if a large portion of the population suddenly decided to stop working and live in a simulation? Will governments simply accept their choice and allow people to sleep the years away, or will they wake them up? Will those who remain behind envy those who have escaped to a better, simulated life? Perhaps we will empathize with them and respect their decision. (EDIT: I realized I used the pronouns “we” and “them” here to describe, respectively, those who remain behind and those who choose to live in a simulation…but honestly I have no idea which camp I would be in.) After all, how would we feel if we suddenly “woke up” to a higher plane of reality today, only to find that our “real” life was much worse than the one we experience in this world? I imagine that many of us would feel cheated out of our lives like Mal did, and be unable to accept the sudden decline in our standard of living. It seems very human to want to live the best life we possibly can. Since most of us wouldn't want to wake up to a worse world, I think that eventually society will empathize with the dreamers and accept those who wish to remain lost in their subconscious.

This is but one of many new ethical questions we may have to confront by the middle of this century. Near the end of Inception Mal tells Cobb (who knows full well that he is dreaming), “You no longer believe in a single reality. So choose. Choose to be here with me.” Cobb rejects her offer, choosing to live in “actual reality” instead. While I’m pretty sure that the audience of 2010 is expected to applaud his decision, ultimately Mal is right. In a world where convincing simulations are possible, there is really no way to know if one is in a simulation or not. So why not just accept this and let people live in whatever world makes them happy? Could we really pass judgment on those who want to live permanently in a simulation, when we have no idea if we have chosen to do the same? Waking them up could be tantamount to destroying their lives. The audience of 2060 may react very differently to the ending of this movie.

What are your thoughts on the ethics of simulated reality? Do you think it would be more ethical (or practical) to let the dreamers dream, or to wake them up to "reality"?