Headlined "Tampa's Super Computer Show," today's Tampa Tribune reports in its Business Section that the Tampa Convention Center will host SC06, short for SuperComputing 2006, on Nov 11-17. This year the conference takes its inspiration from a quote the SC06 website attributes to Albert Einstein: "Computers are incredibly fast, accurate, and stupid; humans are incredibly slow, inaccurate and brilliant; together they are powerful beyond imagination." Hence the conference byline, Powerful Beyond Imagination.
More than 9,000 computing experts from all over the world will attend the conference, with the highlight being the release of the TOP500 list of the world's most powerful computers. And with that coveted crown go the bragging rights of being the world's fastest computing machine for the next year.
To whet your appetite, last year's clear winner was the BlueGene/L System, a joint development of IBM and DOE's National Nuclear Security Administration (NNSA), installed at DOE's Lawrence Livermore National Laboratory (LLNL) in Livermore, CA. The pinnacle is familiar ground for BlueGene/L, which has held the top spot on the last three TOP500 lists.
How fast is it? 280.6 teraFLOPS (a teraFLOPS being a trillion floating-point calculations per second), making it the only system ever to exceed the 100-teraFLOPS level. In fact, of the top ten, six are from the US (occupying the top four spots), the remaining four being from France (1), Japan (2), and Germany (1). So the US still enjoys a pre-eminent position in the rarefied realm of high-performance computing, even though there was a momentary scare when Japan announced its Earth Simulator some years back. Incidentally, Japan's Earth Simulator is now placed 10th.
Some of the mind-boggling statistics from the Tampa Tribune news report include:
- A combined network capacity of about 100 gigabits per second (at least 20,000 times the capacity of the fastest home broadband Internet link, to put it in perspective; a quick arithmetic check follows this list).
- If the trade show were a country, it could rank as 4th or 5th in the world for computing horsepower.
- At the end of it all, much of the networking hardware (e.g., the fiber optic connections) will stay behind, making the Tampa Convention Center one of the best-connected digital hubs in the world.
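As a quick sanity check on that first figure, here is a minimal back-of-the-envelope sketch; the home broadband speed is my own assumption of a fast 2006-era link, not a number from the report:

```python
# Quick arithmetic check on the "20,000 times home broadband" claim.
# Assumption (not from the report): a fast home broadband link of ~5 Mbps.
show_capacity_gbps = 100      # combined show network capacity, gigabits per second
home_broadband_mbps = 5       # assumed home broadband speed, megabits per second

ratio = (show_capacity_gbps * 1000) / home_broadband_mbps
print(f"The show network is roughly {ratio:,.0f}x a {home_broadband_mbps} Mbps home link")
# -> roughly 20,000x, consistent with the Tribune's comparison
```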
In this highly charged atmosphere of tera-scale computing and petaFLOPS, it is inevitable that people will try to pit the performance of high-performance computers (artificial intelligence in this corner) against the ultimate thinking machine (the human brain in the other corner).
In 1997, we were enthralled by the hype accompanying the defeat of the then reigning world chess champion, Garry Kasparov of Russia, by the IBM computer system dubbed "Deep Blue" (actually an upgraded version nicknamed "Deeper Blue"), as reported here, raising the specter of machine supremacy over man.
In this regard, the Tampa Tribune news report of the day has the following to say:
“The brain operates at an estimated 1 to 2 petaflops [1,000 to 2,000 teraFLOPS], or a thousand trillion calculations a second, many times faster than IBM’s fastest supercomputer.
“And that happens in the space of about one liter in your head at a temperature of 98.6 degrees, with 30, 40, 80 years of training,” Dart [Eli Dart, a network engineer for the Energy Sciences Network, a project of the Department of Energy, quoted earlier in the news report] said. “We’ve got a long way to go before we get that.”
Now, that’s comforting.
According to Wikipedia, the human brain has a processing capacity of about 100 trillion instructions per second [100 teraFLOPS, with the understanding that the three terms calculations, operations, and instructions are used interchangeably here]. Now, earlier in the news report, it was stated that at present “that machine [BlueGene/L] can operate at 360 trillion calculations a second [360 teraFLOPS].”
For further corroboration, I did an online search. According to the online article entitled Nations in Race to Produce World's Fastest, Most Powerful Computer at Red Orbit, “The current supercomputing speed champion, at 280 trillion calculations a second [280 teraFLOPS], is the IBM BlueGene/L, housed at the Lawrence Livermore National Laboratory in California.”
Then from the online article entitled Fastest Supercomputer in the World at the LLNL website:
"BlueGene/L—first on the Linpack TOP500 list of supercomputers with a sustained world-record speed of 280.6 teraFLOPS—is a revolutionary, low-cost machine delivering extraordinary computing power for the nation's Stockpile Stewardship Program.
Located in the Terascale Simulation Facility at Lawrence Livermore National Laboratory, BlueGene/L is used by scientists at Livermore, Los Alamos, and Sandia National Laboratories. The 360-teraFLOPS machine handles many challenging scientific simulations, including ab initio molecular dynamics; three-dimensional (3D) dislocation dynamics; and turbulence, shock, and instability phenomena in hydrodynamics. It is also a computational science research machine for evaluating advanced computer architectures."
So while rated at a peak of 360 teraFLOPS, BlueGene/L has a sustained speed of 280.6 teraFLOPS. We are thus comparing a human brain, with an estimated computational speed of 100 to 2,000 teraFLOPS and typical of an average Joe on the street, against a top-notch, one-of-a-kind supercomputer with a computational speed of 280 to 360 teraFLOPS.
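To make that comparison concrete, here is a minimal sketch of the arithmetic, using only the figures quoted above; the brain numbers are rough estimates, not measurements:

```python
# The figures quoted above, all in teraFLOPS. The brain numbers are the loose
# estimates from Wikipedia and the Tampa Tribune, not measurements.
brain_low, brain_high = 100, 2000
bluegene_sustained, bluegene_peak = 280.6, 360

print(f"brain (low)  vs BlueGene/L sustained: {brain_low / bluegene_sustained:.2f}x")
print(f"brain (high) vs BlueGene/L sustained: {brain_high / bluegene_sustained:.1f}x")
print(f"brain (high) vs BlueGene/L peak:      {brain_high / bluegene_peak:.1f}x")
# -> about 0.36x, 7.1x, and 5.6x respectively: the low estimate already trails
#    the machine, while the high estimate is still several times faster.
```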
Hmmm, we are still pretty good, but the darn machine is getting too close for comfort.
Fortunately, if you are so inclined to believe, human intelligence is not measured in terms of raw speed alone. We also have other mental faculties: emotions, a grasp of deep meaning, and yes, nuance and semantics.
For reassurance, we can turn to the Chinese Room argument, a thought experiment devised by John Searle and published in his 1980 paper "Minds, Brains, and Programs." In a nutshell, Searle contended that “syntax (grammar) is not tantamount to semantics (meaning).”
Since its introduction, the Chinese Room argument “has been a mainstay of the debate over the possibility of what Searle called strong artificial intelligence (AI)", which posits that “an appropriately programmed computer actually counts as a mind...That is, it understands, has cognitive states, and can think.”
Because of space constraints, I am paraphrasing the Chinese Room argument here from the Wikipedia article:
Suppose that we are able to construct a computer that takes Chinese characters as input and, following a set of rules, correlates them with other Chinese characters, which it presents as output. Suppose that this computer performs the task so convincingly that a human Chinese speaker is persuaded that he or she is conversing with another Chinese speaker. The proponents of strong AI would conclude that the computer understands Chinese, just as a person does.
Now, by analogy, Searle imagines himself [the computer program] in a small room [the computer hardware], in which he receives Chinese characters, consults a rule book, and returns the Chinese characters that the rules dictate, exactly as the computer would do (or, rather, is instructed to do). He then argues that the computer does not understand Chinese any more than he does, because it is in the same situation as he is: “They are mindless manipulators of symbols, just as he is — and they don't understand what they're 'saying', just as he doesn't.”
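To see just how mindless such symbol manipulation can be, here is a toy sketch in Python of a "rule book" responder; the phrases and rules are invented for illustration and merely stand in for Searle's rule book:

```python
# A toy "Chinese Room": the program maps input symbols to output symbols
# purely by lookup, with no representation of meaning anywhere.
# The entries below are invented for illustration only.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I am fine, thanks."
    "今天天气如何？": "今天天气很好。",  # "How is the weather?" -> "The weather is nice."
}

def room_reply(characters: str) -> str:
    """Return whatever the rule book dictates; understanding never enters the picture."""
    return RULE_BOOK.get(characters, "请再说一遍。")  # default: "Please say that again."

print(room_reply("你好吗？"))  # a fluent-looking reply, produced by pure symbol lookup
```

The replies can look perfectly fluent to an outside observer, yet nothing in the program knows what any of the characters mean, which is precisely Searle's point.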
I like the Chinese Room argument, partly because I am ethnic Chinese and I know Chinese. So it is no exaggeration when I say that Chinese is a highly nuanced language that defies mimicry by purely algorithmic manipulation. But I must admit that I would have been less certain had it been an English Parlor Argument put forth by a non-English speaker.
2 comments:
I have always been of the opinion that AI is limited to closed-domain systems where the rules can be defined or patterns derived. In the real world, it is simply inconceivable to me how it can work, given all the limitations.
Japan started its Fifth Generation computer project back in the 80s. Given the country's economic performance at that time, many actually believed that the project would succeed. I was ridiculed when I said it was a pipe dream. They would have needed more than a few major breakthroughs that were simply not forthcoming. Not much has really changed in this area since then.
KTM ordered an AI-based system for train scheduling back in the early 90s. The AI system was claimed to be effective for traffic control.
It so happened that I had an interest in this problem at that time.
Trains, unlike cars, are not random. The train configuration can change drastically, as can the track conditions. This information is deterministic, and I cannot imagine how the AI system could determine the train travel time over the entire route without the necessary engineering inputs.
Needless to say, the system never ran.
KK Aw
Thanks, KK. At last I managed to write something that tickled you enough to share your thoughts, which are gems in their own way.
I think what KTM ordered was some kind of expert system that uses the raw speed of computers to augment human thinking, in the same way motors are used to augment human muscle power.
The other aspect of AI is understanding how humans think. See here. At a very rudimentary level, these are rules (heuristics and fuzzy logic) that are incorporated into the machine logic, as you mentioned at the outset. This is what I read more than 15 years ago, and I must admit that I have not been keeping abreast of AI developments since, nor do I have the capacity to do so.
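As a loose illustration of what such rule-based machine logic looks like, here is a toy sketch with made-up rules and thresholds; it is not drawn from the KTM system or any real product:

```python
# A toy rule-based (expert-system-style) controller: hand-written heuristics
# mapping observed conditions to an action. Thresholds are invented for illustration.
def signal_advice(track_wet: bool, visibility_m: float, speed_kmh: float) -> str:
    """Return a driving recommendation from a few if-then rules."""
    if track_wet and speed_kmh > 60:
        return "reduce speed to 60 km/h"
    if visibility_m < 200:
        return "reduce speed and sound horn"
    if speed_kmh > 90:
        return "reduce speed to 90 km/h"
    return "maintain speed"

print(signal_advice(track_wet=True, visibility_m=500, speed_kmh=80))
# -> "reduce speed to 60 km/h": the rule fires mechanically, with no judgment involved
```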
Then there are genetic algorithms and artificial neural networks that are purportedly able to learn through training. I will grant that AI will be able to approximate human thinking in some ways, but it will never replicate it, as consciousness simply does not exist there.
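For the curious, here is a minimal sketch of what "learning through training" amounts to in the simplest artificial neural network: a single perceptron nudged toward the logical AND function. The data and learning rate are my own toy choices.

```python
# A minimal perceptron "learning through training": weights are nudged until the
# network reproduces the logical AND function. Data and learning rate are toy choices.
inputs  = [(0, 0), (0, 1), (1, 0), (1, 1)]
targets = [0, 0, 0, 1]

w1 = w2 = bias = 0.0
rate = 0.1

for _ in range(20):                          # a few passes over the training data
    for (x1, x2), t in zip(inputs, targets):
        out = 1 if (w1 * x1 + w2 * x2 + bias) > 0 else 0
        err = t - out                        # the error drives the weight update
        w1 += rate * err * x1
        w2 += rate * err * x2
        bias += rate * err

print([1 if (w1 * x + w2 * y + bias) > 0 else 0 for x, y in inputs])
# -> [0, 0, 0, 1]: the network has "learned" AND, with no understanding of logic
```

The weights end up encoding the pattern, but, as with the Chinese Room, there is nothing in there that knows what "and" means.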