Sunday, September 26, 2010

The Future of Sleep: Entirely Optional

Why do we sleep? This seemingly simple question is one of the longest-standing unsolved mysteries in biology. As bizarre as it sounds, there is no obvious reason why we are inactive for a third of our lives. National Geographic and TIME have both tackled this subject to explain some of the most prominent theories, but the bottom line is that no one really knows. Some scientists believe that sleep restores some as-yet-undiscovered substance in our brains, which is depleted while we are awake…or that it removes some as-yet-undiscovered substance that accumulates while we are awake. Others have noted that REM sleep is necessary for learning new tasks. Although this is true, it fails to explain how sleep helps us learn or why we must be dormant during the process. Still others have theorized that the reason mammals sleep so much compared to other animals is an artifact from the age of the dinosaurs: at the time, most mammals were nocturnal, and perhaps sleep forced them to keep quiet during the day to avoid becoming dinner for a hungry Tyrannosaurus.

Last year, biologists identified a rare gene that allows people to feel rejuvenated after only a few hours of sleep, suggesting that, in the not-too-distant future, it should be possible – at least in theory – to modify the amount of sleep we need. As the Genomic Revolution picks up over the next year or two, there will undoubtedly be many other genes identified that govern our sleep processes. Understanding them may hold the key to understanding sleep itself.

Meanwhile, neurologists are making progress in understanding how sleep affects our brains. As newer imaging technologies render the electroencephalogram increasingly obsolete, scientists are gaining new ways to monitor brain activity. Improved neural scans should allow scientists to pinpoint which areas of the brain are most affected by sleep, and how.

Perhaps soon we will understand the nature of sleep. Is it vital for our survival in a way that we don’t understand? Or is it simply a relic of our evolutionary past, with no useful purpose? Understanding sleep is the first step to conquering it. Is it possible that we could develop safe medication to mimic the benefits of sleep, allowing us to remain conscious for 24 hours per day, without the nasty side effects of caffeine or other drugs? This would change our world profoundly. We would essentially be living 50% longer by squeezing an extra eight hours out of our days. People could earn much more money by working more hours without sacrificing their leisure time, or alternatively, they could have much more leisure time without sacrificing their career.

Although many people claim to enjoy sleep, I think most would prefer to do without it if they had the option. I have difficulty believing that anyone could truly enjoy something that they aren’t even aware they are doing. For most of us, our “love of sleep” is really a “dislike of waking up.” If we could invent a safe way to remain constantly awake while still reaping whatever benefits we get from sleep, most of us would jump at the opportunity.

PREDICTIONS:
By 2030 – Scientists have a basic understanding of the reasons (if any) that we sleep, as well as why it evolved in the first place.
By 2050 – Medication exists that makes sleeping optional, providing people with any benefits of sleep without the need to actually do so, and without any nasty side effects.

Wednesday, September 15, 2010

Black Swan Events: Bioterrorism

The Genomic Revolution is a double-edged sword. As I mentioned previously, the benefits will be enormous. Genomics will allow us to have personalized, preventative health care, instead of mass-market sick care. However, there is also a ghastly dark side to the Genomic Revolution. We will soon face the truly horrifying prospect of bioterror (or bioerror). When any college student has access to pathogens and the ability to make them even more virulent or transmissible, someone, somewhere, almost certainly will do so.

Within a few years, the genomes of nearly all human pathogens will be publicly available. This will be necessary in order to better understand these diseases and develop cures. However, those who wish to use this information to commit acts of mass murder will have access to it as well. Some diseases may not require very much modification to become even more deadly. The 1918 Spanish flu pandemic, which killed more people than World War I, is very genetically similar to many of the strains of flu that are still circulating to this day. Soon it will be possible for an individual to create a virulent flu strain like the Spanish flu by genetically modifying other strains.

Since genomics is essentially an information technology, it is possible to swap genes from one species into another. This is what allows agronomists to copy the cold-resistance genes of Arctic fish and paste them into tomatoes (in theory). However, the same principle could be used by bioterrorists to create a Frankenstein’s monster, combining the worst traits of many diseases. Imagine an illness with the virulence of ebola, the transmissibility of the common cold, and the evolutionary adaptability of HIV. Such a disease is the stuff that nightmares (or B-movies) are made of. Yet it will eventually be possible for malevolent individuals or groups to create such diseases.

If a manmade disease were sufficiently different from anything found in nature, it could prove devastating. We humans have had a chance to evolve alongside influenza, the plague, malaria, and other naturally-occurring afflictions. People alive today are mostly descended from the hardy individuals who survived earlier strains of these diseases. But we would have no such evolved immunity to manmade diseases. Just as the vast majority of Native Americans were wiped out in the 16th century by European diseases to which they had no immunity, we could face the same prospect with manmade superplagues.

Fortunately, we have a defensive weapon in our arsenal that the 16th century Native Americans did not have. Just as genomics can create such frightening diseases, it holds the potential to cure them. Within a few years, it will be possible to sequence a genome in a couple of hours. As our understanding of how genes work continues to grow, it will take less and less time to understand the genomes we sequence. Assuming that bureaucratic procedures were waived to combat a public health emergency, a cure for a manmade disease could be on the market almost as quickly as antivirus software is patched when new threats are discovered. Soon, naturally-occurring diseases could be a minor annoyance. The real public health danger could shift to the arms race between bioterrorists and the scientists racing to cure their latest concoctions.

BLACK SWAN EVENTS:

By 2040 – A disease created or modified by humans has been released into the public. Probability: 90%

By 2040 – A disease created or modified by humans has killed at least 100,000 people. Probability: 75%

(I hesitated to even call bioterrorism a “Black Swan,” since that implies the event is at least somewhat unlikely to occur. In my opinion, the danger of manmade diseases being released onto the public is not a question of if, but of when and where. Since we cannot forecast the when and where, the threat is unpredictable enough to be considered a Black Swan Event.)

Monday, September 13, 2010

Moore's Law and Ubiquitous Computing

The amount of computing power in a single iPhone is greater than that of the guidance computer that flew the Apollo 11 mission to the moon. It is also greater than the total amount of computing power used by all the militaries of all the nations in World War II. It is no exaggeration to say that a single iPhone dropped into 1940 could have dramatically altered the outcome of the war.

The co-founder of Intel, Gordon Moore, made a stunningly accurate prediction in 1965. He noted that the number of transistors per integrated circuit had been doubling every year, and expected that trend to continue for at least ten more years. Moore’s Law, as it is now known, is still going strong 45 years later. This exponential growth in computer hardware has proven so consistent that we have grown to expect it. Moore’s Law has continued unhindered through booms and busts, war and peace. Approximately every 12-18 months, the amount of computing power that a person can buy for any given amount of money doubles. This has been accomplished by making transistors smaller and smaller, so that more of them fit on a single integrated circuit. And, echoing Moore’s original prediction, we can reasonably expect the trend to continue for at least another ten years.

But after about 2019, we will hit a wall. By that time, transistors will be just a handful of atoms across, and quantum effects will make it impossible to shrink them effectively any further. Fortunately, computer engineers have already found a way to keep our computing power growing beyond that. At present, most computer chips are flat, but there is no reason they have to be. Once we can’t cram any more transistors onto a two-dimensional integrated circuit, we will still be able to stack chips upward into the third dimension. However, this introduces another problem: three-dimensional chips produce far more heat than flat chips do. If engineers cram too many tiny transistors on top of one another, they could fry the chip. Although there are some clever solutions in the works to address this problem, there are many skeptics, including Gordon Moore himself. So while we can expect the raw power of computer chips to keep increasing beyond 2019 as they expand into the third dimension, it remains to be seen whether we will still be able to double that power every 12-18 months in accordance with Moore’s Law.

With our computing power doubling every 12-18 months for at least the next decade, we can expect the computers of 2020 to be 100 to 1,000 times more powerful than equally-priced computers today, just as today’s computers are about 1,000 times more powerful than computers of ten years ago. This will profoundly transform the world. A thousandfold increase in computing power means far more than search engines that run a thousand times faster. It opens up a wide array of new applications that no one would have even attempted before.
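
The arithmetic behind that range is just repeated doubling. Here is a quick back-of-the-envelope sketch (my own numbers, assuming only a 12- to 18-month doubling period, not any particular industry roadmap):

```python
def growth_factor(years: float, doubling_period_years: float) -> float:
    """How many times more powerful hardware becomes after repeated doubling."""
    return 2 ** (years / doubling_period_years)

# Doubling every 18 months over a decade:
print(growth_factor(10, 1.5))   # ~101.6, i.e. roughly 100x
# Doubling every 12 months over a decade:
print(growth_factor(10, 1.0))   # 1024.0, i.e. roughly 1,000x
```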

The Information Age can be roughly divided into three epochs: mainframe computing, personal computing, and ubiquitous computing. The era of mainframe computing lasted from roughly 1946 to 1977. This era was dominated by enormous computers staffed by many people. The era of personal computing was the second epoch, lasting from roughly 1977 until the present. In this era, individuals were finally able to afford their own computers. We are now entering the third epoch: the era of ubiquitous computing. In this epoch, there will be many computers for each person. In addition to our PCs, many of us already have smartphones, portable music players, and e-readers. Computers will soon be woven into the fabric of our world so much that we will rarely even notice them. Virtually every machine, every wall, and every article of clothing will contain computers.

Although this constant connectedness will certainly have a negative impact on our privacy, it will also have many benefits. Computers that constantly monitor our health will be able to automatically alert 911 whenever we are having an emergency, possibly before we are even aware of it ourselves. Ubiquitous computing will finally enable driverless cars, which offer the potential to save thousands of lives per year, reduce traffic and pollution, and reduce the need to personally own a car. If we would like a change of scenery, interactive displays on our walls will be able to cycle through a preselected assortment of posters. Ubiquitous computing will also enable truly smart homes, in which all appliances are connected to one another and to the internet, and can alert you when it is time to repair or replace them. Just as we have come to expect any building we enter to have electricity and plumbing, we will soon expect any building we enter to have internet access and to be connected to the outside world via computers woven unnoticeably into the walls, ceilings, and floorboards.

The exponential growth of computer hardware associated with Moore’s Law has been the single most important driving force in technology for the last half-century, and it still has at least another decade to go. As computers continue to become more and more powerful, things that seemed virtually impossible just a decade ago are beginning to look mundane. It raises the question: What seems virtually impossible today that will look mundane in 2020?

PREDICTIONS:

By 2015 – Effective smartphone applications exist which can turn lights on and off, and start or stop home appliances.

By 2016 - Personal health monitors, which are ingested or worn, can automatically call 911 whenever a person's vital signs indicate an emergency.

By 2017 – The average American carries at least ten computing devices on (or inside) his or her person.

By 2018 – Smart walls, which can display any image the user wants at any given moment or cycle through a series of posters, are becoming popular.

By 2022 – Silicon computer chips are no longer flat. They are now three-dimensional because it is impossible to shrink transistors any further.

Thursday, September 9, 2010

Sixth Sense Technology

Check out this demonstration of Sixth Sense Technology, developed by Pranav Mistry at the MIT Media Lab. It’s a great demonstration of an augmented reality-like application. Sixth Sense consists of a camera and a projector, both worn by the user. The camera recognizes objects, and the projector displays information on top of them. For some applications, the user also wears colored markers on the thumb and index finger of each hand, so that the system can be triggered merely by gesturing. For example, it can snap a photograph if you frame a scene by making a rectangle with your fingers, and if you trace a circle on your wrist, it displays a watch showing the time. You can display your photographs and documents on any wall and move them around, as in Minority Report.
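
Published descriptions of Sixth Sense explain the fingertip tracking only at a high level, but the core idea (a camera tracking colored fingertip markers and firing an action when they form a particular shape) is easy to sketch. Below is my own toy illustration in Python with OpenCV; it is not Mistry's actual code, and the color thresholds and the crude "framing gesture" test are placeholder heuristics that would need tuning for real markers and lighting.

```python
import cv2
import numpy as np

# Placeholder HSV range for blue-ish fingertip markers; real values need tuning.
LOWER_MARKER = np.array([100, 150, 50])
UPPER_MARKER = np.array([130, 255, 255])

def find_marker_centers(frame):
    """Return the (x, y) centers of colored fingertip markers in a frame."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_MARKER, UPPER_MARKER)
    # OpenCV 4.x return signature assumed here.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    centers = []
    for c in contours:
        if cv2.contourArea(c) > 100:                 # ignore specks of noise
            x, y, w, h = cv2.boundingRect(c)
            centers.append((x + w // 2, y + h // 2))
    return centers

def looks_like_framing_gesture(centers):
    """Crude heuristic: four markers spread out like the corners of a rectangle."""
    if len(centers) != 4:
        return False
    xs = [x for x, _ in centers]
    ys = [y for _, y in centers]
    return (max(xs) - min(xs) > 100) and (max(ys) - min(ys) > 100)

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if looks_like_framing_gesture(find_marker_centers(frame)):
        # "Snap a photo" when the fingers frame a shot (a real system would debounce this).
        cv2.imwrite("snapshot.jpg", frame)
    cv2.imshow("camera", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```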

While Sixth Sense Technology doesn’t quite meet the technical definition of augmented reality – the image is literally projected on top of the object in “actual” reality for anyone to see, rather than superimposed via a screen that only the user can see – it is the most impressive augmented reality-like application I have seen thus far, and it offers a proof of concept for some of the applications that will be possible in the near future.

Wednesday, September 8, 2010

Book Review - "The Demon-Haunted World" by Carl Sagan

Astronomer Carl Sagan’s 1996 book, The Demon-Haunted World, is the ultimate guide to critical thinking. Sagan asks, “How can we make intelligent decisions about our increasingly technology-driven lives if we don’t understand the difference between the myths of pseudoscience, New Age thinking, and fundamentalist zealotry…and the testable hypotheses of science?”

Sagan offers a lengthy explanation of how the scientific method works, and how it is demonstrably more successful than any of its pseudoscientific imitators. He teaches the importance of having a skeptical worldview, especially toward extraordinary claims. As he stated in Cosmos, “Extraordinary claims require extraordinary evidence.” Sagan also explains why Ockham’s Razor – the rule of thumb that we should not assume anything the evidence does not require us to assume – is essential to critical thought. He systematically exposes the absurdity behind many of the most popular modern-day forms of pseudoscience, including astrology, homeopathy, psychics, alien abductions, Freudian psychoanalysis, doomsday predictions, graphology, and numerology.

While some have criticized the book as being anti-religious, Sagan himself disputed this characterization. He was clearly an agnostic, but had a deep appreciation for the sense of wonder that religion could inspire and recognized that it had much in common with his own appreciation for science. He does take certain religious views to task – such as the ideas of young-earth creationism, faith healing, and divine intervention. However, he notes that although “religions are often the state-protected nurseries of pseudoscience…there is no reason why religions have to play that role.” Sagan was a strong advocate of finding common ground between the religious and scientific communities: a central theme of his novel Contact.

Fourteen years after its original publication, The Demon-Haunted World seems more topical than ever. The applications of critical thinking extend far beyond the realm of science. By being skeptical of unusual claims, we can better gauge the veracity of some of the odd statements we routinely hear from authorities of every kind, including politicians, educators, corporate gurus, and religious leaders. In my opinion, there is no more important skill for our schools to teach than how to think critically. Unfortunately, most high schools and many universities don’t offer any classes in critical thought, even as electives. Carl Sagan’s The Demon-Haunted World can guide us where our schools do not.

The Demon-Haunted World was Sagan's last book before his death in 1996. Although it never reached the same level of popularity as some of his other works such as Cosmos and Contact, I think it is his most important work for the layperson. The Demon-Haunted World has shaped my view of the world more than any other book.

5/5 stars

Sunday, September 5, 2010

The Turing Test and Artificial Consciousness

In a party game dating back to the 1940s or earlier, a man and a woman were put in separate rooms and allowed to communicate with a judge only through typed messages. One of the two would try to deceive the judge about his or her gender; the judge’s task was to determine which participant was which from the typed conversation alone. In 1950, computer scientist Alan Turing adapted the game to the context of artificial intelligence. In the Turing Test, the participants are a human and a computer rather than a man and a woman, and both attempt to convince the judge that they are human via a text conversation. If the judge cannot identify the human more often than chance would dictate, the computer is said to have passed the Turing Test. So far, no computer has even come close to passing it.
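
To make the setup concrete, here is a minimal sketch of a single test session in Python. It is only a toy harness of my own devising (in a real test the human participant and the judge would sit in separate rooms, and the trial would be repeated with many judges), and machine_respondent is a placeholder for whatever program is actually being evaluated.

```python
import random

def human_respondent(prompt: str) -> str:
    # Stand-in for the hidden human participant, who would really be in another room.
    return input(f"[hidden human] {prompt}\n> ")

def machine_respondent(prompt: str) -> str:
    # Placeholder for the program under test; a real chat program goes here.
    return "That's an interesting question. What makes you ask?"

def run_session(num_questions: int = 5) -> bool:
    """Run one session and return True if the judge correctly spots the human."""
    # Randomly assign the two hidden participants to the labels A and B.
    respondents = [human_respondent, machine_respondent]
    random.shuffle(respondents)
    hidden = dict(zip("AB", respondents))

    for _ in range(num_questions):
        question = input("Judge, ask your question: ")
        for label, respond in hidden.items():
            print(f"{label}: {respond(question)}")

    guess = input("Which participant is human, A or B? ").strip().upper()
    actual = next(label for label, fn in hidden.items() if fn is human_respondent)
    return guess == actual

# Over many sessions with many judges, the program "passes" if the judges
# identify the human no more often than the roughly 50% that random guessing
# would achieve.
if __name__ == "__main__":
    print("Judge guessed correctly:", run_session())
```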

The Turing Test is commonly viewed as the holy grail of artificial intelligence. A computer capable of convincing humans of its humanity would have to be as richly programmed as a human brain. But would it truly be conscious, or would it merely be mimicking intelligence? Most computer scientists assert that the computer would actually be conscious in the same sense that we are. A brain, after all, is merely a pattern of information, and in that respect it is not fundamentally different from a computer program: both respond to external inputs and produce an output. There is no empirical test that we can conduct to determine whether an entity is “conscious.” The only way to gauge that is by our interactions with the entity in question. When we interact with other humans, we typically take them at their word that they are conscious, because we are aware of our own consciousness and we observe that other humans generally behave as we do. Therefore, I think any computer capable of passing the Turing Test would have just as much claim to consciousness as any human.

The mindset that computers, no matter how well-programmed, can only mimic consciousness will probably fall by the wayside in the 21st century, as the distinction between natural and artificial becomes much less clear. For all of their merits, silicon computer chips have a lot of drawbacks, such as the amount of heat they emit and the amount of power they consume. In the coming decades, we will probably see more organic, carbon-based computers. At the same time, we will probably see a lot more “natural” humans with artificial additions to their brains. To some extent, brain implants already exist to help people cope with brain damage or to mitigate certain mental conditions. Eventually, they may be used in perfectly healthy individuals to enhance their mental capacity. These kinds of developments will likely blur the line between human and computer. When complex forms of intelligence can no longer be so neatly classified as “human” or “computer,” but instead represent a diverse spectrum ranging from 100% organic to 100% machine, will it still make any sense to assert that computers are able to mimic intelligence without being intelligent? I think not.

I think the reason some people believe a computer could only mimic intelligence is that intelligent computers are not yet commonplace. While we have grown accustomed to computers that can crunch numbers and play chess much better than we can, we have not yet encountered any computers that can recognize patterns or respond with emotions as well as we can. As computers become more and more powerful, that day will come. Many decades from now, we may have computers that are truly capable of passing the Turing Test. They will probably lobby for basic rights under the law. When that happens, I think we will expand the definition of human rights to include non-human forms of intelligence, as there would be no moral basis for doing otherwise. And will we believe their claims that they are truly conscious beings? I think we will. They’ll get mad if we don’t.