Thinking as a Hobby

IEEE on The Singularity

The Institute of Electrical and Electronics Engineers (IEEE) publishes a periodical called IEEE Spectrum, and this month's issue (both online and in print) features a number of stories about The Singularity, which Glenn Zorpette's introductory article explains this way:

The singularity is supposed to begin shortly after engineers build the first computer with greater-than-human intelligence. That achievement will trigger a series of cycles in which superintelligent machines beget even smarter machine progeny, going from generation to generation in weeks or days rather than decades or years. The availability of all that cheap, mass-­produced brilliance will spark explosive economic growth, an unending, hypersonic, tech­no­industrial rampage that by comparison will make the Industrial Revolution look like a bingo game.

Singularity proponents envision a number of different scenarios, including the fusion of humans and inorganic machinery into a new hybrid species, accelerated genetic engineering that keeps us organic but fundamentally changes us, and a purely inorganic future in which computers either replace us or host our uploaded consciousnesses in a virtual utopia.

There's a lot of interesting material to talk about with respect to The Singularity, so I'll probably end up using it for a series of blog posts over the next week or so. The IEEE feature gathered views from a number of heavy hitters in the fields of neuroscience, philosophy, and computer science, and they say a lot of interesting stuff. But today I'll just focus on the lead article, "The Consciousness Conundrum," by John Horgan.

He starts out this way:

I'm 54, with all that entails. Gray hair, trick knee, trickier memory. I still play a mean game of hockey, and my love life requires no pharmaceutical enhancement. But entropy looms ever larger. Suffice it to say, I would love to believe that we are rapidly approaching “the singularity.” Like paradise, technological singularity comes in many versions, but most involve bionic brain boosting. At first, we'll become cyborgs, as stupendously powerful brain chips soup up our perception, memory, and intelligence and maybe even eliminate the need for annoying TV remotes. Eventually, we will abandon our flesh-and-blood selves entirely and upload our digitized psyches into computers. We will then dwell happily forever in cyberspace where, to paraphrase Woody Allen, we'll never need to look for a parking space. Sounds good to me!

If you haven't figured it out by now, Horgan is not exactly a Singularity optimist. He spends the rest of the article mostly discussing how little we know about how the brain works and gives rise to consciousness; it's actually a decent high-level overview, so it's worth a read.

This section discussing rate and temporal encoding in neurons is pretty good:

The first neural code was discovered more than 70 years ago by the British electrophysiologist Edgar Adrian, who found that when he increased the pressure on neurons involved in the sense of touch, they fired at an increased rate. That so-called rate code has now been demonstrated in many different animals, including Homo sapiens. But a rate code is a crude, inefficient way to convey information; imagine trying to communicate solely by humming at different pitches.

Neuroscientists have long suspected that the brain employs subtler codes. One of them might be a temporal code, in which information is represented not just in a cell's rate of firing but also in the precise timing between spikes. For example, a rate code would treat the spike sequences 010101 and 100011 as identical because they have the same number of 0 and 1 bits. But a temporal code would assign different meanings to the two strings because the bit sequences are different. That's a vital distinction: the biophysicist William Bialek of Princeton University calculates that temporal coding would boost the brain's information-processing capacity close to the Shannon limit, the theoretical maximum that information theory allows for a given physical system.

Some neuroscientists suspect that temporal codes predominate in the prefrontal cortex and other brain structures associated with "higher" cognitive functions, such as decision making. In these regions, neurons tend to fire on average only one or two times per second, compared with the 100 or more times of sensory and motor neurons.
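
Horgan's rate-versus-temporal distinction is easy to make concrete. Here's a toy sketch (my own illustration, not from the article) that counts how many distinguishable messages each code yields for a six-bin spike train, and replays his example of two trains with the same spike count but different timing:

```python
from itertools import product

WINDOW = 6  # spike-train length in time bins (illustrative)

def rate_code(spikes):
    # A rate code collapses a train to its spike count.
    return sum(spikes)

def temporal_code(spikes):
    # A temporal code keeps the exact pattern of spikes and silences.
    return tuple(spikes)

trains = list(product((0, 1), repeat=WINDOW))

# Distinguishable messages under each code:
rate_messages = {rate_code(t) for t in trains}          # counts 0..6 -> 7 messages
temporal_messages = {temporal_code(t) for t in trains}  # 2**6 = 64 messages
print(len(rate_messages), len(temporal_messages))       # 7 64

# Horgan's example: same count, different timing.
a, b = (0, 1, 0, 1, 0, 1), (1, 0, 0, 0, 1, 1)
print(rate_code(a) == rate_code(b))          # True  -> identical to a rate code
print(temporal_code(a) == temporal_code(b))  # False -> distinct to a temporal code
```

The gap between 7 and 64 messages per window is the extra capacity Horgan is gesturing at with the Shannon limit.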

This is a good explanation of rate and temporal encoding, except for the final paragraph, which is a bit misleading. It makes it sound as though spike-timing information is confined to the brain's more "high-level" processing. However, there are famous, common examples of spike-timing processing in well-understood low-level contexts. For example, the barn owl's auditory system uses the timing of spikes to locate sounds in space. Signals reaching the ear closer to the sound source cause neurons to fire slightly earlier than signals reaching the farther ear, and the owl's auditory system uses this tiny difference to compute the source's location.
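
The owl's computation can be sketched as a toy model (my own illustration; the ear separation and geometry here are illustrative assumptions, not measurements from the owl literature): the interaural time difference (ITD) between spike arrivals at the two ears maps to an azimuth angle.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air (assumed)
EAR_SEPARATION = 0.05   # m, illustrative value

def azimuth_from_itd(left_spike_s, right_spike_s):
    """Estimate sound azimuth (degrees) from the interaural time difference.

    A positive angle means the source is toward the right ear
    (the right ear's neurons fired first).
    """
    itd = left_spike_s - right_spike_s         # seconds
    max_itd = EAR_SEPARATION / SPEED_OF_SOUND  # ITD for a source 90 degrees off-axis
    # Invert the path-length geometry itd = max_itd * sin(angle), clamping
    # so rounding error can't push the ratio outside asin's domain.
    ratio = max(-1.0, min(1.0, itd / max_itd))
    return math.degrees(math.asin(ratio))

# Source slightly to the right: the right ear's spike arrives ~50 microseconds earlier.
print(azimuth_from_itd(0.000050, 0.0))  # ~20 degrees toward the right ear
```

The point of the sketch is the scale: a timing difference of mere tens of microseconds, far shorter than any single spike, carries usable spatial information.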

As to whether or not the neural codes that the brain uses will be understandable, he quotes Christof Koch:

Evidence from research on neural prostheses suggests that brains even devise entirely new codes in response to new experiences. "There may be no universal principle" governing neural-information processing, Koch says, "above and beyond the insight that brains are amazingly adaptive and can extract every bit of information possible, inventing new codes as necessary."

Whether or not there is a universal principle, there are most certainly regular principles that underlie the function of the brain. The alternative is to believe that the principles underlying brain function are either random or so bewilderingly complex that we cannot hope to understand them. The first view is nonsensical, and the second is just pessimistic.

He talks about progress being made in understanding how the brain works:

At Caltech and elsewhere, engineers have designed hollow electrodes that can inject fluids into the surrounding tissue. The fluids could consist of nerve-growth factors, neurotransmitters, and other substances. The nerve-growth factors encourage cells to grow around electrodes, while the neurotransmitters enhance or supplement electrical-stimulation treatment. Neuroscientists are also testing optical devices that can monitor and stimulate neurons, as well as genetic switches that turn neurons on or off.

To be sure, it's promising work. Terry Sejnowski, a neuroscientist at the Salk Institute for Biological Studies, in San Diego, says the new technologies will make it possible "to selectively activate and inactivate specific types of neurons and synapses as well as record from all the neurons in a volume of tissue." That, in turn, might make it possible to build more effective and reliable neural prostheses.

I guess the implication is that as we keep chipping away at understanding the brain, progress will be limited to improving prostheses, rather than allowing us to build working, stand-alone entities that embody the qualities of thinking things.

He concludes:

Let's face it. The singularity is a religious rather than a scientific vision. The science-fiction writer Ken MacLeod has dubbed it "the rapture for nerds," an allusion to the end-time, when Jesus whisks the faithful to heaven and leaves us sinners behind.

Such yearning for transcendence, whether spiritual or technological, is all too understandable. Both as individuals and as a species, we face deadly serious problems, including terrorism, nuclear proliferation, overpopulation, poverty, famine, environmental degradation, climate change, resource depletion, and AIDS. Engineers and scientists should be helping us face the world's problems and find solutions to them, rather than indulging in escapist, pseudoscientific fantasies like the singularity.

That's about the ultimate slight to a technophile: comparing The Singularity to a religion. It's also insulting to suggest that trying to understand the brain and build thinking machines somehow distracts us from the "real" problems in the world.

Do I think The Singularity will occur? I think the only rational response is: maybe. Catastrophic events could intervene that render such a future impossible or unlikely. The pessimists may be right about the impenetrability of the brain's workings. Or the future may unfold very much like one of the scenarios posited by Singularity enthusiasts.

I don't know.

I think it's a bit silly to come down too strongly on either side, since there are so many unknown variables. People who categorically state that a particular thing will never happen (e.g., that humans will never fly) often come out looking pretty damn silly in retrospect. On the other hand, wild-eyed optimists proclaiming the advent of superintelligent machines within 20 years are being just as irresponsible.

It's fun to think about, and worth speculating about. But extreme predictions in either direction are simply foolhardy.
