Twitter last week announced that it is using machine learning to automatically crop image previews on its social networking platform. Twitter will use neural networks to identify the salient features of an image and crop the preview around them. The feature is in the process of being rolled out and will be available on desktop (twitter.com), iOS, and Android in the coming weeks.
In an official blog post, Twitter Researcher Lucas Theis and Software Engineer Zehan Wang explained that the company's previous face-detection method had several limitations when it came to producing well-framed image previews. To address this, Twitter has developed a new tool that focuses on the "salient" regions of an image. "In general, people tend to pay more attention to faces, text, animals, but also other objects and regions of high contrast. This data can be used to train neural networks and other algorithms to predict what people might want to look at," said the blog post.
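Once a network has produced a per-pixel saliency map, choosing the crop reduces to finding the window with the highest total saliency. The blog post does not publish Twitter's cropping code, so the following is only a minimal illustrative sketch: `best_crop` is a hypothetical helper that brute-force scans every window position, using an integral image so each window sum costs O(1).

```python
import numpy as np

def best_crop(saliency, crop_h, crop_w):
    """Return the (row, col) top-left corner of the crop_h x crop_w window
    with the highest total saliency.

    `saliency` is a 2-D array of per-pixel saliency scores, standing in for
    the output of a saliency-prediction network (hypothetical example)."""
    H, W = saliency.shape
    # Integral image: integral[y, x] = sum of saliency[:y, :x].
    integral = np.zeros((H + 1, W + 1))
    integral[1:, 1:] = saliency.cumsum(axis=0).cumsum(axis=1)
    best, best_pos = -np.inf, (0, 0)
    for y in range(H - crop_h + 1):
        for x in range(W - crop_w + 1):
            # Window sum in O(1) from the integral image.
            s = (integral[y + crop_h, x + crop_w]
                 - integral[y, x + crop_w]
                 - integral[y + crop_h, x]
                 + integral[y, x])
            if s > best:
                best, best_pos = s, (y, x)
    return best_pos
```

For example, on a 6×6 map whose only salient pixel sits at (4, 4), a 3×3 crop is placed so that the window covers that pixel.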
In examples included in the post, Twitter's machine learning team compares before-and-after shots to show the improvement the new technology is expected to bring to image cropping. The images show crops centred on faces, food items, signboards, and other vibrant, high-contrast objects.
The team also notes that the task is fairly lightweight: it does not require high-resolution predictions, since only a rough estimate of the salient regions is needed to produce a good crop. Neural networks typically used to predict saliency are too slow to run in production, but a combination of knowledge distillation and pruning is said to have allowed Twitter to crop images 10 times faster than it could without these optimisations, letting the team perform saliency detection and crop images in real time.
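The core idea of knowledge distillation is to train a small, fast "student" model to reproduce the outputs of a large, slow "teacher", rather than training it on ground-truth labels directly. Twitter's actual models are not published in the post, so the sketch below is purely illustrative: the "teacher" is a stand-in linear scorer, and the student is a smaller linear model on pooled features, fitted by gradient descent to the teacher's soft targets.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "teacher": a fixed large linear map standing in for a slow,
# accurate saliency network operating on 64-dim inputs.
W_teacher = rng.normal(size=(64, 1))

def teacher(x):
    return x @ W_teacher

def pool(x):
    # Cheap 4x feature reduction: the student sees 16 dims, not 64.
    return x.reshape(x.shape[0], 16, 4).mean(axis=2)

# Distillation data: unlabeled inputs scored by the teacher ("soft targets").
X = rng.normal(size=(512, 64))
y_soft = teacher(X)

# Train the small student to mimic the teacher via MSE gradient descent.
Xs = pool(X)
w = np.zeros((16, 1))
lr = 0.1
for _ in range(500):
    grad = Xs.T @ (Xs @ w - y_soft) / len(Xs)
    w -= lr * grad

mse = float(np.mean((Xs @ w - y_soft) ** 2))
```

The student cannot match the teacher exactly (it has a quarter of the features), but it recovers much of the teacher's behaviour at a fraction of the cost; pruning pushes in the same direction by removing low-importance weights from the trained network.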