Showing posts with label ethics. Show all posts

Sunday, May 13, 2012

Bird Flu, Bioterror, and Bioerror


This week the prestigious science journal Nature published the methods and results of a groundbreaking experiment in biotechnology, reigniting a firestorm that has raged on and off for nearly a year. The reason for the controversy is the ghastly topic of the research paper: how to genetically engineer the avian influenza virus (H5N1) to make it more communicable.

Although H5N1 (i.e. “bird flu”) has existed in bird populations for decades, it entered the public consciousness in 2004, when human cases began surfacing in China and Southeast Asia. The cases were quickly linked to contact with poultry – mostly slaughterhouse workers or chicken farmers who worked directly with chickens in unsanitary conditions. H5N1 is far more virulent than the seasonal strains of influenza that have been circulating since 1918; over 60% of the people who have contracted H5N1 in the past eight years have died from it. Fortunately, there has been no pandemic. Only about 600 people worldwide are known to have had avian influenza. Although it appears that humans can acquire the disease directly from birds, there have been no known cases of human-to-human transmission – a prerequisite for a global pandemic. Although influenza viruses mutate very quickly, the lack of human-to-human transmission bred some complacency. Some epidemiologists even went so far as to state that human-to-human transmission of H5N1 might be impossible.

The new research blows that theory out of the water. Scientists at the University of Wisconsin and Erasmus Medical Center succeeded in combining the H5N1 virus with the H1N1 (“swine flu”) virus. Swine flu is known to be easily communicable between humans but relatively mild; bird flu is extremely deadly but difficult to transmit. By combining the two into a hybrid and making other modifications to the virus's genes, the scientists developed a “super-strain” of flu. They tested the virus on ferrets, whose immune systems are very similar to our own. Not only did many of the ferrets die, but the disease was easily transmitted to other ferrets that were not directly exposed to the virus themselves.

The research has horrified many scientists. Governments remain gravely concerned about its publication in Nature. In the United States, the National Science Advisory Board for Biosecurity requested that Nature not publish the findings in the interest of national security. Although academia typically does not view censorship kindly, many scientists found themselves agreeing with the government. Biologist and Nobel laureate Sir Richard Roberts said, “Someone is trying to make the most dangerous virus we can think of. I don't understand how one can justify that, unless there is no other way of getting the data that you're interested in.” The risks are huge: Nature published the methodology that the scientists used to create their super-strain of flu, potentially providing a blueprint for terrorists to replicate their efforts. Additionally, there is the concern that if research like this isn't shunned, it will continue apace and may one day escape the laboratory through simple error.

Other scientists believe that publishing the research is necessary, in order to prevent future outbreaks. They argue that if we can learn more about how influenza mutates and infects new people, we will be better prepared to deal with a future pandemic. They acknowledge the dangers of the research, but argue that there is no avoiding the fact that it will soon be possible to create bioengineered diseases, and it is better to be prepared for them when they do occur. Additionally, there is the possibility that H5N1 may eventually evolve into a more communicable form on its own, for which epidemiologists should prepare. Last month, the US government finally relented. The National Science Advisory Board for Biosecurity reversed itself, voting 12-6 to allow the publication of the research to proceed.

As it stands, I find myself on the side of those urging extreme caution with this type of research. Bioterrorism will be the greatest security threat of the early 21st century; unlike nuclear weapons, biological weapons will soon be available to many people. Futurist Michio Kaku warns that in the not-too-distant future, creating new viruses may be as simple as typing base letters into a piece of software and having a computer assemble the DNA strand. When that happens, we may have no choice but to fund research to prevent diseases that do not yet exist in nature. But until then, it seems that the risks greatly outweigh the rewards.

Monday, November 8, 2010

Political Issues on the Horizon, Part 1

Americans went to the polls last Tuesday and, for the third time in as many election cycles, delivered a sharp rebuke to the incumbent party. No doubt concerned that the economic recovery seems to be stagnating and that unemployment remains high, the voters gave the House of Representatives back to the Republican Party. In the wake of the midterm elections in the United States, this is a good time to consider political issues on the horizon.

I generally shy away from making specific predictions about politics or the economy. Voters are fickle and economies are unpredictable, especially compared to the relatively simple trends that scientific and technological developments usually follow. However, I think we can at least speculate on the types of issues that are likely to become important, if not the precise way that they will be resolved by the voters and the government. In my next few blog posts, I’m going to explore some of the political issues that I think will grow in importance over the next decade, as well as a couple of oft-cited (and perhaps overblown) issues which may soon fade from the American political landscape.

Privacy. For the past couple decades, whenever a political gasbag has asked a judicial appointee about his or her views on “privacy,” it has typically been a code word for abortion. However, I believe privacy will soon become a political issue in its own right, spurred on by technological advances which encroach more and more on our privacy and demand access to sensitive information. Already, there have been court cases to determine if police can tag an automobile with a GPS tracker without a warrant, but this is just the tip of the iceberg of what is to come. RFID chips, which will soon replace bar codes on products, will be embedded in nearly everything we buy, allowing for constant surveillance and tracking of products (and by extension, of customers) from their point of manufacture to their point of disposal.

Additionally, we are probably no more than a decade from the point where sensors and face-recognition technology are commonplace in many public establishments, as in Minority Report, making it virtually impossible to step out of our own homes without appearing in a database somewhere. In the slightly more distant future (probably 10-20 years), insect-sized robots will arrive. DARPA is already designing them for military and spying applications, but their eventual spread to the general public is a virtual certainty as the cost of computing drops, allowing practically anyone to monitor practically anyone.

In light of all of these emerging technologies, some erosion of our privacy seems almost inevitable. The extent of it remains open to debate. Will our governments pass privacy laws regulating how all of this information can be obtained and used? Or will our governments be part of the problem? Only time will tell.

Bioethics. The first decade of this century saw two important bioethical debates in the United States and Europe. In the United States, stem cell research was hotly debated in the first few years of the Bush presidency, but now seems to have decisively concluded in favor of scientific progress, as the huge benefits of stem cells become more obvious and the moral objections have fallen by the wayside. In Europe, the main bioethical debate of the past decade – genetically modified foods – is still ongoing. Many Europeans are concerned about the possibility of genetically modified foods wreaking unintentional havoc on the environment and public health. Although these fears do not have much scientific support, the controversy has nevertheless succeeded in quashing the industry in Europe, at least temporarily.

These are merely the first of many bioethical debates we will face in the 21st century. Some will be relatively trivial. For example, concerns about athletes on steroids may soon give way to concerns about professional athletes with enhanced body parts. A few years ago, Tiger Woods opted to get superhuman 20/15 vision through LASIK surgery, and the range of upgrades available to those who can afford them will soon be much wider. If athletes are able to buy improved bodies, it will be difficult for “natural” athletes to compete. Will we have separate leagues for enhanced athletes and natural athletes? Will we ban these superhuman enhancements entirely, and if so, what qualifies as a superhuman enhancement?

Other bioethical concerns will be much more profound, and the government will have to take a stand. For example, if the technology exists and is widely available to screen for genetic abnormalities, would it be child abuse to not tinker with a fetus’ genome to prevent birth defects? And if preventing birth defects is morally acceptable (indeed if it is the ONLY morally acceptable option), why not preventing other undesirable traits like ugliness, propensity to violence, or low intelligence? Where does one draw the line? Eugenics, long discredited due to its ties to Nazism, may make a comeback in a world of easy access to genetic therapy.

Many of the questions related to human augmentation and genetic engineering have no easy answer, and any government decision is bound to leave many people feeling morally queasy. Look for political parties to become increasingly divided along the lines of these bioethical questions, with conservatives preferring a more restrictive approach to avoid creating ghastly new moral quandaries, and liberals favoring a more open approach to improving humanity through reengineering our own biology.

To be continued in another blog post…

Sunday, September 5, 2010

The Turing Test and Artificial Consciousness

In a party game dating back to the 1940s or earlier, a man and a woman were put in separate rooms and allowed to communicate with a judge through typed messages. One of the two would try to deceive the judge about his or her gender; the judge’s task was to determine the gender of each participant through the typed conversations. In 1950, computer scientist Alan Turing adapted the game to the context of artificial intelligence. In the Turing Test, the participants are a human and a computer rather than a man and a woman. Both attempt to convince a judge that they are human via a text conversation. If the judge is unable to identify the human more often than chance would dictate, the computer is said to have passed the Turing Test. As of now, no computer has come close to passing a Turing Test.
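Turing’s pass/fail criterion is ultimately statistical: over many trials, the judges’ rate of correctly identifying the human should be indistinguishable from coin-flipping. A minimal sketch of that criterion (the function names and the 5% significance threshold are my own illustrative choices, not part of Turing’s formulation):

```python
from math import comb

def p_value_above_chance(correct: int, trials: int) -> float:
    """Probability of getting at least `correct` identifications right
    out of `trials` by pure guessing (p = 0.5): a one-sided binomial tail."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

def passes_turing_test(correct: int, trials: int, alpha: float = 0.05) -> bool:
    """The machine 'passes' if the judges' success rate could plausibly
    have come from chance alone (tail probability above alpha)."""
    return p_value_above_chance(correct, trials) > alpha

# A judge who spots the human 15 times out of 20 is doing far better
# than chance, so the machine fails; 11 out of 20 is within chance.
print(passes_turing_test(15, 20))  # False
print(passes_turing_test(11, 20))  # True
```

The interesting design question is the threshold: with few trials, even a weak chatbot can "pass" by luck, which is one reason small-scale Turing Test contests are treated with skepticism.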

The Turing Test is commonly viewed as the holy grail of artificial intelligence. A computer that is capable of convincing humans of its humanity would have to be as richly programmed as a human brain. But would it truly be conscious, or would it merely be mimicking intelligence? Most computer scientists assert that the computer would actually be conscious in the same sense that we are. Since a brain, after all, is merely a pattern of information, it is not fundamentally different from a computer program. Both a brain and a computer program merely respond to external inputs and produce an output. There is no empirical test that we can conduct to determine if an entity is “conscious.” The only way to gauge that is by our interactions with the entity in question. When we interact with other humans, we typically take them at their word that they are conscious entities, because we are aware of our own consciousness and we observe that other humans generally behave like we do. Therefore, I think that any computer capable of passing the Turing Test would have just as much claim to consciousness as any human.

The mindset that computers, no matter how well-programmed, can only mimic consciousness will probably fall by the wayside in the 21st century, as the distinction between natural and artificial becomes much less clear. For all of their merits, silicon computer chips have a lot of drawbacks, such as the amount of heat they emit and the amount of power they consume. In the coming decades, we will probably see more organic, carbon-based computers. At the same time, we will probably see a lot more “natural” humans with artificial additions to their brains. To some extent, brain implants already exist to help people cope with brain damage or to mitigate certain mental conditions. Eventually, they may be used in perfectly healthy individuals to enhance their mental capacity. These kinds of developments will likely blur the line between human and computer. When complex forms of intelligence can no longer be so neatly classified as “human” or “computer,” but instead represent a diverse spectrum ranging from 100% organic to 100% machine, will it still make any sense to assert that computers are able to mimic intelligence without being intelligent? I think not.

I think the reason that some people believe a computer would only be mimicking intelligence is that intelligent computers are not yet commonplace. While we have grown accustomed to computers that can crunch numbers and play chess much better than we can, we have not yet encountered any computers that can recognize patterns or respond with emotions as well as we can. As computers become more and more powerful, this day will come. Many decades from now, we may have computers that are truly capable of passing the Turing Test. They will probably lobby for basic rights under the law. When this happens, I think we will expand the definition of human rights to include non-human forms of intelligence, as there would be no moral basis for doing otherwise. And will we believe their claims that they are truly conscious beings? I think we will. They’ll get mad if we don’t.

Monday, August 2, 2010

Inception and the Ethics of Virtual Reality

***SPOILER ALERT***

I watched Inception last weekend - the first movie I've seen in the theater all year. It’s a great movie and a vivid illustration of the simulation hypothesis. Along with movies like The Matrix and Vanilla Sky, it explores the question of whether our reality may be a simulation, and whether there’s any way to tell the difference.

The reason I often blog about the simulation hypothesis is not merely because it’s an interesting philosophical question, although it certainly is. As Inception illustrates, there are some profound ethical dilemmas we will need to face in the future, when we have both the raw computing power and the understanding of our own neurology to escape to convincing, fictitious worlds.

In Inception, Cobb and Mal dream for decades (in dream-time) in a world of their own creation. Mal goes so far as to hide any evidence that she is dreaming, preferring to forget that their world is not real. When we have the technology to create simulated realities, there will almost certainly be people who want to do this. Even today, many people choose to spend a huge portion of their lives escaping into the crude virtual worlds that our technology allows, such as World of Warcraft. As long as this lifestyle is limited to computer geeks, most people will view it as an unhealthy activity. But when simulated worlds become truly convincing, it is probable that many others will want to join in.

How will society view people who want to spend years or decades in a simulation, living a better life than they have in this world? Will people scorn them like drug addicts? Will organized religions extol simulations as a way to have profound spiritual experiences, or will they be fearful of the threat that simulations pose to their monopoly on heaven? Might there be a mass exodus of people from this world, if nearly everyone prefers to live in a simulation? What would be the economic impact if a large portion of the population suddenly decided to stop working and live in a simulation? Will governments simply accept their choice and allow people to sleep the years away, or will they wake them up? Will those who remain behind envy those who have escaped to a better, simulated life? Perhaps we will empathize with them and respect their decision. (EDIT: I realized I used the pronouns “we” and “them” here to describe, respectively, those who remain behind and those who choose to live in a simulation…but honestly I have no idea which camp I would be in.) After all, how would we feel if we suddenly “woke up” to a higher plane of reality today, only to find that our “real” life was much worse than the one we experience in this world? I imagine that many of us would feel cheated out of our lives like Mal did, and be unable to accept the sudden decline in our standard of living. It seems very human to want to live the best life we possibly can. Since most of us wouldn’t want to wake up to a worse world, I think that eventually society will empathize with the dreamers and accept those who wish to remain lost in their subconscious.

This is but one of many new ethical questions we may have to confront by the middle of this century. Near the end of Inception Mal tells Cobb (who knows full well that he is dreaming), “You no longer believe in a single reality. So choose. Choose to be here with me.” Cobb rejects her offer, choosing to live in “actual reality” instead. While I’m pretty sure that the audience of 2010 is expected to applaud his decision, ultimately Mal is right. In a world where convincing simulations are possible, there is really no way to know if one is in a simulation or not. So why not just accept this and let people live in whatever world makes them happy? Could we really pass judgment on those who want to live permanently in a simulation, when we have no idea if we have chosen to do the same? Waking them up could be tantamount to destroying their lives. The audience of 2060 may react very differently to the ending of this movie.

What are your thoughts on the ethics of simulated reality? Do you think it would be more ethical (or practical) to let the dreamers dream, or to wake them up to "reality?"