"True thoughts are those alone which do not understand themselves." ~ Theodor Adorno
Before Deep Blue defeated Garry Kasparov in their six-game match in May 1997, Kasparov had actually won the previous six-game match against Deep Blue in February 1996. My generation of amateur chess enthusiasts around the world had grown up idolizing Kasparov. His popularity was not only for his mastery of the game -- the true sophistication and finesse of which could hardly be accessible to anyone outside a small group of experts with a profound grasp of the theory, even though the rest of us could still marvel at the thrilling variations and improvisations he regularly came up with. He was also outspoken and witty, with exciting and often visionary opinions about things outside chess, including the rapidly changing geopolitical landscape of that time. A creative, articulate genius of extraordinary cognitive abilities -- the very pinnacle of human consciousness.
Before and even after the Deep Blue victory, most people -- scientists and laymen alike -- did not agree that this muscle-flexing of brute computing power had any bearing on the human-machine equation, on anything that even remotely resembles what we humans cherish as our consciousness. (Kasparov initially muddied the waters further by claiming that he saw signs of human intuition and creativity in Deep Blue's moves, signs that could not have been generated by massively parallel alpha-beta pruning executed on IBM's custom VLSI chips, though in later years he withdrew that accusation.)
Except for reminding us that our neurological wiring can get faulty and that even the best among us can make mistakes in the act of rational thinking, the events of May 1997 didn't say anything more relevant about intelligence, real (human) or artificial (machine). That was the consensus.
The victories of AlphaGo in 2016 and 2017 against the reigning Go champions Lee Sedol and Ke Jie -- and of AlphaGo Zero, which in 2017 defeated AlphaGo itself -- were a different kettle of fish. These programs played extraordinarily creative moves in the process of defeating the human champions, moves that had never been seen in the history of this ancient game, moves that defy all traditional wisdom about the game.
The complexity of Go (a 19x19 grid of 361 intersections, compared to the 64 squares of chess) dictates that a brute-force search tree of the kind Deep Blue used will not really work. Whereas the upper bound on the total number of possible chess games is around 10^120 -- first estimated mathematically by the great Claude Shannon, and though the number has since been refined it is not too different -- in the case of Go the upper bound is as high as 10^1023 (roughly 361^400), assuming the game lasts around 400 moves, as sometimes happens in professional games (source: Wikipedia).
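The Go figure is a quick back-of-the-envelope bound, easy to verify: with at most 361 choices per move and a 400-move game, the number of move sequences is at most 361^400, whose order of magnitude a few lines of Python can check.

```python
import math

# Upper bound on Go move sequences: at most 361 choices per move,
# over a hypothetical 400-move professional game.
go_log10 = 400 * math.log10(361)   # log10 of 361^400

print(round(go_log10))  # -> 1023, i.e. 361^400 is about 10^1023
```

Compare that with Shannon's ~10^120 for chess: the gap of roughly 900 orders of magnitude is why exhaustive search, barely feasible for chess, is hopeless for Go.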
AlphaGo uses reinforcement learning to build its repertoire of moves, combining a heuristic search algorithm (Monte Carlo tree search) with multi-layered neural networks (hence "deep" learning, as the term now establishes itself as a powerful piece of technological jargon in the human lexicon), starting the learning process with a large number of high-quality human games and then playing against itself. AlphaGo Zero, the latest member of the family, outwits even AlphaGo, but it starts its learning from a blank slate: nothing except the rules, no previous games, building its capability entirely by playing against itself. Unlike in the case of Deep Blue, we have no problem accepting that this is real intelligence, artificial though it might be, and we almost nonchalantly accept the creative aspects of this intelligence. AlphaGo and AlphaGo Zero make moves that have never existed in the human world, moves that, by human wisdom, should not even exist. They create things that we cherish highly as signs of superior intelligence, that we marvel at and learn from.
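The learn-by-self-play loop can be illustrated at toy scale. The sketch below trains a tabular agent on the game of Nim (21 stones, players alternately take 1-3, whoever takes the last stone wins) purely by playing against itself. Everything here -- the game, the Monte Carlo value updates, the parameters -- is my illustrative stand-in, loosely analogous in spirit only to AlphaGo Zero's actual architecture of deep networks plus Monte Carlo tree search:

```python
import random

random.seed(0)

N, MOVES = 21, (1, 2, 3)
Q = {}  # (stones_remaining, move) -> estimated value for the player to move

def best_move(stones, eps=0.0):
    """Pick the highest-valued legal move, exploring with probability eps."""
    moves = [m for m in MOVES if m <= stones]
    if random.random() < eps:
        return random.choice(moves)
    return max(moves, key=lambda m: Q.get((stones, m), 0.0))

def self_play_train(episodes=20000, alpha=0.5, eps=0.2):
    for _ in range(episodes):
        stones, history = N, []
        while stones > 0:
            m = best_move(stones, eps)
            history.append((stones, m))
            stones -= m
        # The player who took the last stone wins (+1). Walking the
        # history backwards, credit alternates between the two "selves".
        reward = 1.0
        for stones_before, m in reversed(history):
            key = (stones_before, m)
            Q[key] = Q.get(key, 0.0) + alpha * (reward - Q.get(key, 0.0))
            reward = -reward

self_play_train()
print(best_move(3))  # -> 3: the trained agent takes all three stones and wins
```

The agent is given nothing but the rules, yet after a few thousand self-play games it rediscovers the winning endgame moves on its own -- a blank-slate dynamic that AlphaGo Zero realizes at an incomparably grander scale.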
So the question shifts further: does it understand the sophistication of its own intelligence, does it derive intellectual pleasure and satisfaction from it? Does it intend to play and win?
Beyond the immense possibilities such intelligence has to influence human lives, for better (to our delight) or worse (to our concern), the metaphysical question now intrigues us profoundly: what is this thing we are dealing with here? What ontological characterization should we use?
Is it a being? (If we were to challenge Wittgenstein's spirit with this conundrum, would he have accepted that this is a valid philosophical question?)
Speaking of the mysteries of being, an (unlikely) anecdote goes something like this: Jean-Paul Sartre is busy working through the proofs of his seminal L'Être et le néant (Being and Nothingness) in a café. He asks the waitress for "coffee without milk", and the waitress, eager to impress the great philosopher, replies, "We have run out of milk, monsieur -- may I serve your coffee without cream?"
Our ability to comprehend a turn of phrase like that (and then laugh at the joke) -- even outside any context of Sartre or his philosophical commentary -- is the kind of experience that convinces us of the uniqueness of our intelligence. (But we would do well to remember that convictions have always turned out to be untrustworthy.)
Comprehension, intent, and generalization seem to be clear preconditions for an intelligence to also be called -- by us, human beings -- a conscious being. What we have with AlphaGo Zero -- and with all the extraordinarily successful, specialized, narrow artificial intelligences that now coexist with us in our daily lives -- is evidently not a being, simply because it does not demonstrate a general intelligence like ours.
When we say generalization -- meaning us, beings with multiple capabilities (playing Go, listening to music, cracking jokes), often proudly demonstrating them simultaneously, though at a neurological level it is becoming clearer that multitasking is one more of our self-serving myths about ourselves -- we mean this multifacetedness complemented by a comprehension that ascribes to an autonomous entity called "I" the intent behind executing those faculties.
The spatial aspect of generalization -- that is, a closely knit physical site of multiple capabilities, like the human "brain" -- is purely a problem of scale. Sometime in the near future there will be a home device that does all the things our current favorite home devices do, but in addition plays Go like AlphaGo Zero, analyzes our health from a few measurements and predicts potential illness, translates literary texts, maybe even composes music. It may appear insensitive and careless to suggest that music, even great music, can also be created by AI -- created, and not merely machine-generated using statistical algorithms -- but this amateur musician sees no scientific reason to deny that very possibility. Projects like DeepJazz more than hint at such a world. At the heart of DeepJazz is Long Short-Term Memory (LSTM), a form of recurrent neural network that can capture temporal dynamics and has found a wide variety of uses, from speech recognition to machine translation.
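It is the LSTM's gating structure that lets it decide, step by step, what to remember and what to forget across a melody or a sentence. As a toy illustration -- a single scalar cell in plain Python, where the weight layout and gate names are mine, not DeepJazz's actual Keras model -- one step looks like this:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w):
    """One step of a single scalar LSTM cell.

    w maps each gate name to an (input-weight, hidden-weight, bias)
    triple -- an illustrative layout, not any library's real API.
    """
    def gate(name, squash):
        wi, wh, b = w[name]
        return squash(wi * x + wh * h_prev + b)

    f = gate("forget", sigmoid)   # how much of the old cell state to keep
    i = gate("input", sigmoid)    # how much of the candidate to write
    g = gate("cand", math.tanh)   # candidate new content
    o = gate("output", sigmoid)   # how much of the state to expose
    c = f * c_prev + i * g        # updated cell state: the "memory"
    h = o * math.tanh(c)          # hidden state emitted at this time step
    return h, c

# With all-zero weights every sigmoid gate opens halfway and the
# candidate content is zero, so the cell simply halves its old state:
zero_w = {name: (0.0, 0.0, 0.0) for name in ("forget", "input", "cand", "output")}
h, c = lstm_step(1.0, 0.0, 2.0, zero_w)
print(c)  # -> 1.0
```

Stack many such cells, train the gates on a corpus of jazz, and you have the kernel of the idea behind generative music models.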
So, leaving aside the engineering problems of spatial localization and scaling -- problems that will find their inevitable solutions -- let us assume an Artificial General Intelligence that performs a wide variety of intelligent and creative actions (always with greater efficiency, and increasingly with greater innovation and creativity). Will we then accept that AGI as a real being with consciousness? Of course not a human consciousness -- and hence we can put aside the requirement to pass the Turing test -- but a consciousness nonetheless.
The answer most human beings today hold to be true is "no". Such an AGI will still not comprehend what it is doing, and will not do it intentionally -- that, again, is the common consensus.
A music-making, chess-playing, language-translating, haiku-reciting, calculus-problem-solving AGI with a mastery of puns and wordplay, one that learns from its interactions with the world as it exists, is still a case of competence without comprehension. But don't our own cognitive capacities fall into that same category? It is true that we have learnt to study our mental faculties as objects of examination at multiple levels of abstraction -- from neurochemistry to behavioral psychology -- but those are codified human knowledge (i.e., a collection of scientific and/or philosophical observations) of the mind from an outside-looking-in perspective. We don't really comprehend how we arrive at a chain of thought at the moment of thinking; only after the fact do we look back and try to rationalize it in terms (scientific, psychological, emotional) that we understand. And even that post-event human cogitation about its own origin lacks lucidity, because it assumes the autonomous intent of some immutable "I".
The structural cohesiveness and consistency of animals -- of which we are but one type -- is a result of thermodynamically viable macroscopic processes that convert energy into order. The awareness of an "I" is an outcome of such constantly interacting processes, as is the very existence of things like intelligence and awareness at the macroscopic level. Marcus Steinweg, a philosopher of our time, has described our world as "architecture suspended over the abyss of its own contingency." Our consciousness, including its self-awareness, too is "architecture suspended over the abyss of its own contingency."
But what about all the facets of our inner lives that make those lives worth living -- pleasure, love, compassion, joy, ecstasy -- and their inseparable counterparts in anger, jealousy, hatred, fear? Will our AGI feel the creative pleasure of coming up with a great piece of music, will it love another AGI that compliments it on the quality of that music, and will it feel seething anger when an AGI critic in the media trashes the wonderful piece as "too mechanical"?
Does it matter? What are pleasure, love, compassion, anger, hatred outside our own narratives of various neurochemical events in the contingently arising web of processes that we call our consciousness? The outcomes of those events, as they leave behind material changes -- architectural splendors or scorched earth, for example -- are evident proof that those inner states are expressible (in our examples, creative spirit and anger). If an AGI comes up with a wonderful melody on its own, and then tries to sabotage the circuitry of the AGI whose task is to review it, will we call it a conscious being?