Computers perform much worse than the average human at verbal reasoning questions, which are based on analogies, classifications, synonyms and antonyms.
Huazheng Wang and colleagues at the University of Science and Technology of China and Bin Gao's team at Microsoft Research in Beijing have built a deep learning machine that, for the first time, outperforms the average human at answering verbal reasoning questions.
Until now, the furthest programmers had gotten was to build machines that were capable of analysing millions of texts to figure out which words are often associated with each other, essentially turning words into vectors that could be compared, added and subtracted.
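The word-vector idea above can be illustrated with a toy sketch. The hand-made three-dimensional vectors and the word list below are invented for illustration only; real systems learn vectors with hundreds of dimensions from millions of texts.

```python
# Toy illustration of word vectors: comparing them with cosine similarity
# and answering an analogy by adding and subtracting vectors.
import math

# Hand-made embeddings, invented purely for this example.
vectors = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.1, 0.8],
    "man":   [0.1, 0.9, 0.1],
    "woman": [0.1, 0.1, 0.9],
}

def cosine(a, b):
    """Cosine similarity: 1.0 means the vectors point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def analogy(a, b, c):
    """Solve 'a is to b as c is to ?' via the vector b - a + c."""
    target = [vb - va + vc for va, vb, vc in
              zip(vectors[a], vectors[b], vectors[c])]
    candidates = set(vectors) - {a, b, c}
    return max(candidates, key=lambda w: cosine(vectors[w], target))

print(analogy("man", "king", "woman"))  # -> queen
```

With these toy numbers, king - man + woman lands closest to queen, the classic example of how vector arithmetic can capture an analogy.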
But this approach has a well-known shortcoming: it assumes that each word has a single meaning represented by a single vector. Not only is that often not the case, but verbal tests tend to focus on words with more than one meaning as a way of making questions harder, according to 'MIT Technology Review'.
Wang and colleagues tackle this by taking each word and looking for other words that often appear nearby in a large corpus of text. They then use an algorithm to see how these words are clustered.
The final step is to look up the different meanings of a word in a dictionary and then to match the clusters to each meaning.
This can help a machine recognise the multiple different senses that some words can have.
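A minimal sketch of this sense-splitting idea follows. It is not the researchers' algorithm: the toy contexts, the greedy clustering rule, and the invented dictionary glosses are all stand-ins, and the gloss-matching step is a simple word-overlap heuristic.

```python
# Sketch: separate the senses of "bank" by clustering the words that
# appear near it, then match each cluster to a dictionary gloss by
# counting overlapping words.

# Toy context sets: words seen near "bank" in four sentences.
contexts = [
    {"deposit", "money", "loan"},
    {"loan", "interest", "money"},
    {"river", "water", "fish"},
    {"water", "shore", "river"},
]

# Invented dictionary glosses for the two senses.
glosses = {
    "financial institution": {"money", "loan", "deposit", "interest"},
    "side of a river":       {"river", "water", "shore", "land"},
}

def cluster(contexts):
    """Greedy clustering: merge a context into the first cluster it
    shares a word with; otherwise start a new cluster."""
    clusters = []
    for ctx in contexts:
        for cl in clusters:
            if cl & ctx:
                cl |= ctx
                break
        else:
            clusters.append(set(ctx))
    return clusters

def label(cluster_words):
    """Assign the gloss whose words overlap the cluster the most."""
    return max(glosses, key=lambda g: len(glosses[g] & cluster_words))

for cl in cluster(contexts):
    print(sorted(cl), "->", label(cl))
```

Here the four contexts collapse into two clusters, one labelled with each sense, which is the effect the dictionary-matching step is after.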
The researchers further enhanced their technique by identifying the category of each question so that the computer then knows which answering strategy it should employ.
Spotting each type of question is relatively straightforward for a machine learning algorithm, given enough examples to learn from.
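The dispatch-by-question-type idea can be sketched as follows. The researchers use a learned classifier; this rule-based stand-in, with made-up keyword rules and strategy descriptions, only illustrates routing each question type to its own answering strategy.

```python
# Sketch: classify a verbal-reasoning question by keywords, then look up
# the answering strategy for that type. A real system would learn the
# classifier from labelled examples.

def classify(question):
    q = question.lower()
    if "is to" in q:
        return "analogy"
    if "opposite" in q:
        return "antonym"
    if "same meaning" in q or "closest in meaning" in q:
        return "synonym"
    return "classification"  # e.g. "which word does not belong?"

strategies = {
    "analogy":        "solve with vector arithmetic (b - a + c)",
    "antonym":        "pick the candidate least similar to the target",
    "synonym":        "pick the candidate most similar to the target",
    "classification": "pick the candidate furthest from the group",
}

q = "Hot is to cold as day is to ?"
kind = classify(q)
print(kind, "->", strategies[kind])  # -> analogy -> solve with ...
```

Once the type is known, the system can apply the matching strategy, for instance the vector-arithmetic trick for analogies.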
The researchers then tested the deep learning technique against 200 human participants of various ages and educational backgrounds.
"To our surprise, the average performance of human beings is a little lower than that of our proposed method," researchers said.