Said to be the most complex game ever designed, with a number of possible moves too vast to compute exhaustively, Go requires human-like "intuition" to prevail.
"If the machine wins, it will be an important symbolic moment," AI expert Jean-Gabriel Ganascia of the Pierre and Marie Curie University in Paris told AFP.
"Until now, the game of Go has been problematic for computers as there are too many possible moves to develop an all-encompassing database of possibilities, as for chess."
Go reputedly has more possible board configurations than there are atoms in the Universe.
This fiendish complexity meant that mastery of the game by a computer was at least a decade away - or so it was thought.
The assumption began to crack when, last October, Google's AlphaGo programme beat Europe's human champion, Fan Hui.
Google has now upped the stakes, and will put its machine through the ultimate wringer in a marathon match kicking off Wednesday against South Korean Lee Sedol, who has held the world crown for a decade.
Intelligence of simplicity
Game-playing is a crucial measure of AI progress - it shows that a machine can execute a certain "intellectual" task better than the humans who created it.
Key moments included IBM's Deep Blue defeating chess Grandmaster Garry Kasparov in 1997 and the Watson supercomputer outwitting humans in the TV quiz show Jeopardy in 2011.
But AlphaGo is different.
It is partly self-taught: after its initial programming, it honed its tactics by playing millions of games against itself, learning through trial and error.
"AlphaGo is really more interesting than either Deep Blue or Watson, because the algorithms it uses are potentially more general-purpose," said Nick Bostrom of Oxford University's Future of Humanity Institute.
Creating "general", multi-purpose intelligence, rather than "narrow", task-specific intelligence, is the ultimate goal in AI - something resembling human reasoning based on a variety of inputs.
"General intelligence is about being good at achieving one's goals when solving problems that are new, and perhaps not well-defined," Bostrom's colleague Anders Sandberg told AFP.
"So if the machine can do new things when needed, then it has 'true' intelligence."
In the case of Go, Google's developers realised that a more "human-like" approach would win out over brute computing power.
To this end, AlphaGo uses two sets of "deep neural networks" containing millions of connections similar to neurons in the brain.
It is able to predict the likely winner from a given position, thus reducing the search space to manageable levels - something co-creator David Silver has described as "more akin to imagination".
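The idea of using a learned evaluation to shrink the search can be sketched in a few lines of Python. This is a toy illustration, not AlphaGo's actual code: here `value_estimate` is a random stub standing in for the value network, and positions are simply tuples of moves played so far.

```python
import random

def value_estimate(position):
    """Stand-in for a value network: returns an estimated probability
    that the current player wins from this position. A random stub
    here, purely for illustration."""
    return random.random()

def prune_moves(position, legal_moves, apply_move, keep=5):
    """Score each candidate move with the value estimate and keep only
    the most promising ones, shrinking the search space."""
    scored = [(value_estimate(apply_move(position, m)), m) for m in legal_moves]
    scored.sort(reverse=True, key=lambda pair: pair[0])
    return [m for _, m in scored[:keep]]

# Toy usage: 361 legal moves on an empty 19x19 board, pruned to 5.
moves = prune_moves((), range(19 * 19), lambda pos, m: pos + (m,), keep=5)
print(len(moves))  # 5 candidates instead of 361
```

With only a handful of candidates surviving at each step, a deep look-ahead becomes feasible where exhaustive search is not - which is the advantage the passage above describes.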
Master or servant?
What if we manage to build a truly smart machine?
For some, it means a world in which robots take care of our sick, fly and drive us around safely, stock our fridges, plan our holidays, and do hazardous jobs humans should not or will not do.
For others, it evokes apocalyptic images in which hostile machines are in charge.
Physicist Stephen Hawking is among the leading voices of caution.
"Computers are likely to overtake humans in intelligence at some point in the next 100 years," he told a conference of global thinkers last May.
"One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders and potentially subduing us with weapons we cannot even understand," he warned in a video posted online.
For Sandberg, it will be up to us to teach intelligent computers certain "values".
"When AI becomes comparable to human intelligence in important areas, then not only human intentions matter but also what values are built into the system," he warned.
There are more than 10 million robots in the world today, according to Bostrom - everything from rescuers, surgical assistants and home-cleaners to route-finders, lawn-mowers and factory workers, even pets.
And while machines may beat us at checkers or maths, some experts think robots may never rival humans in certain aspects of "true" intelligence.
Things like "common sense" or humour may never be reproducible, said Ganascia.
"We can imagine that in the future, ever more tasks will be executed by machines better than by humans," he said.
"But that does not mean that machines will be able to automate everything that our cognitive faculties allow us to do. In my view, this is a limitation that keeps the scientific discipline of AI in check."