OpenAI has unveiled DALL·E and CLIP, two new AI models: DALL·E generates images from text, while CLIP classifies images into categories described in natural language. DALL·E is a neural network that can generate images from even the wildest text descriptions fed to it, such as “an armchair in the shape of an avocado” or “the exact same cat on the top as a sketch on the bottom”. CLIP uses a new training method for image classification, meant to be more accurate, efficient, and flexible across a range of image types.
Both models build on Generative Pre-trained Transformer 3 (GPT-3), the US-based AI company's deep learning model family, which can create images and human-like text. You can let your imagination run wild, as DALL·E is trained to create diverse, and sometimes surreal, images depending on the text input. But the model has also raised copyright questions, since DALL·E is trained on images sourced from the Web to create its own.
The name DALL·E, as you might have already guessed, is a portmanteau of surrealist artist Salvador Dalí and Pixar's WALL·E. DALL·E can use text and image inputs to create quirky images. For example, it can create “an illustration of a baby daikon radish in a tutu walking a dog” or a “snail made of harp”. DALL·E is trained not only to generate images from scratch but also to regenerate any existing image in a way that is consistent with the text or image prompt.
GPT-3 by OpenAI is a deep learning language model that can perform a variety of text-generation tasks from language input; it can write a story much like a human would. For DALL·E, the San Francisco-based AI lab created an image version of GPT-3 by swapping text for images and training the AI to complete half-finished images.
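The idea of completing a half-finished image one token at a time can be illustrated with a toy sketch. Here an "image" is just a flattened sequence of discrete tokens, and the "model" is a simple bigram frequency table standing in for the real transformer; none of this is OpenAI's actual architecture, only the autoregressive completion idea.

```python
# Toy sketch of autoregressive completion: a flattened sequence of
# "pixel tokens" is extended one token at a time, each prediction based
# on what has been generated so far. A bigram frequency table is used
# as a stand-in for the real transformer model.
from collections import Counter, defaultdict

def train_bigram(sequences):
    """Count which token tends to follow which in the training data."""
    counts = defaultdict(Counter)
    for seq in sequences:
        for prev, nxt in zip(seq, seq[1:]):
            counts[prev][nxt] += 1
    return counts

def complete(prefix, counts, length):
    """Greedily extend a half-finished sequence to the target length."""
    seq = list(prefix)
    while len(seq) < length:
        followers = counts.get(seq[-1])
        if not followers:
            break  # no continuation learned for this token
        seq.append(followers.most_common(1)[0][0])  # most likely next token
    return seq

corpus = [[0, 1, 2, 3, 0, 1, 2, 3]]  # tiny made-up "image" corpus
model = train_bigram(corpus)
print(complete([0, 1], model, 8))  # continues the learned pattern
```

The real model predicts from the entire context with a transformer rather than just the previous token, but the generate-one-token-then-condition-on-it loop is the same.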
DALL·E can draw images of animals or things with human characteristics and combine unrelated items sensibly to produce a single image. How successful the images are depends on how well the text is phrased. DALL·E is often able to “fill in the blanks” when the caption implies that the image must contain a certain detail that is not explicitly stated. For example, the text “a giraffe made of turtle” or “an armchair in the shape of an avocado” will give you a satisfactory output.
CLIP (Contrastive Language-Image Pre-training) is a neural network that can perform accurate image classification based on natural language. It helps classify images into distinct categories more accurately and efficiently from "unfiltered, highly varied, and highly noisy data". What makes CLIP different is that it does not recognise images from a curated data set, as most of the existing models for visual classification do. CLIP has been trained on a wide variety of natural-language supervision available on the Internet. Thus, CLIP learns what is in a picture from a detailed description rather than from a single labelled word in a data set.
CLIP can be applied to any visual classification benchmark simply by providing the names of the visual categories to be recognised. According to the OpenAI blog, this is similar to the “zero-shot” capabilities of GPT-2 and GPT-3.
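In rough terms, zero-shot classification of this kind works by embedding the image and each candidate caption into a shared vector space and picking the caption whose embedding is closest to the image's. The sketch below assumes such embeddings already exist (in CLIP they come from contrastively trained encoders); the vectors and captions here are made up purely for illustration.

```python
# Minimal sketch of CLIP-style zero-shot classification: score each
# candidate caption by cosine similarity to the image embedding and
# return the best match. The embeddings below are hypothetical.
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def classify(image_emb, text_embs):
    """Pick the caption whose embedding is closest to the image's."""
    return max(text_embs, key=lambda label: cosine(image_emb, text_embs[label]))

captions = {
    "a photo of a dog": [0.9, 0.1, 0.0],
    "a photo of a cat": [0.1, 0.9, 0.0],
    "a diagram":        [0.0, 0.1, 0.9],
}
image = [0.8, 0.2, 0.1]  # hypothetical image embedding
print(classify(image, captions))  # → "a photo of a dog"
```

Because the categories are just text, swapping in a new benchmark only means writing new captions; no retraining is needed, which is what makes the approach flexible.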
Models like DALL·E and CLIP have the potential for significant societal impact. The OpenAI team says it will analyse how these models relate to societal issues like the economic impact on certain professions, the potential for bias in the model outputs, and the longer-term ethical challenges implied by this technology.
A generative AI model like DALL·E, trained on images taken directly from the Internet, could pave the way to several copyright infringements. DALL·E can regenerate any rectangular region of an existing image, and people have been tweeting about attribution and copyright of the resulting images.
I, for one, am looking forward to the copyright lawsuits over who holds the copyright for these images (in many cases the answer should be "no one, they're public domain"). https://t.co/ML4Hwz7z8m
— Mike Masnick (@mmasnick) January 5, 2021