A new object-recognition algorithm is accurate and 10 times as fast as its predecessor, making it much more practical for real-time deployment with household robots.
For household robots to be practical, they need to be able to recognise the objects they are supposed to manipulate.
But while object recognition is one of the most widely studied topics in artificial intelligence, even the best object detectors still fail much of the time.
"If you just took the output of looking at it from one viewpoint, there's a lot of stuff that might be missing, or it might be the angle of illumination or something blocking the object that causes a systematic error in the detector," said lead author Lawson Wong, a graduate student in electrical engineering and computer science at MIT's Computer Science and Artificial Intelligence Laboratory.
Wong and his team considered scenarios in which they had 20 to 30 different images of household objects clustered together on a table.
In several of the scenarios, the clusters included multiple instances of the same object, closely packed together, which makes the task of matching different perspectives more difficult.
The researchers show that a system using an off-the-shelf algorithm to aggregate different perspectives can recognise four times as many objects as one that uses a single perspective.
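The paper describes a more sophisticated hypothesis-matching method, but the basic intuition of pooling noisy single-view detections can be illustrated with a simple, hypothetical majority-vote sketch (the function name and data format here are illustrative, not from the paper):

```python
from collections import Counter

def aggregate_detections(view_labels):
    """Combine per-viewpoint detector outputs by majority vote.

    view_labels: one label per viewpoint for what the detector
    believes a given object is; None means no detection (e.g. the
    object was occluded or poorly lit from that angle).
    Returns the most common non-None label, or None if never seen.
    """
    votes = Counter(label for label in view_labels if label is not None)
    if not votes:
        return None
    return votes.most_common(1)[0][0]

# A mug viewed from five angles; one view is occluded and one is
# misdetected, yet the aggregate still recovers the right label.
print(aggregate_detections(["mug", "mug", None, "bowl", "mug"]))  # mug
```

Even this crude scheme shows why multiple viewpoints help: a systematic error from one angle is unlikely to recur from every angle, so the pooled estimate is far more robust than any single view.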
The paper is forthcoming in the International Journal of Robotics Research.