
Since June 6, Hugging Face, a company that hosts open-source AI projects, has seen a surge in traffic for an AI-powered image-generating tool called DALL-E Mini.
The simple-looking app, which generates nine images in response to any typed text prompt, was launched nearly a year ago by an independent developer. But after some recent improvements and a few viral tweets, its ability to sketch out all sorts of surreal, hilarious, and even nightmarish visions suddenly turned it into meme magic. See its takes on “Thanos searching for his mom at Walmart,” “drunk shirtless man wandering in Mordor,” “CCTV footage of Darth Vader breakdancing,” and “Hamster Godzilla in a straw hat attacks Tokyo.”
As more people created and shared DALL-E Mini images on Twitter and Reddit, drawing in still more new users, Hugging Face's servers were flooded with traffic. “Our engineers didn’t sleep the first night,” Hugging Face CEO Clément Delangue said during a video call from his home in Miami. “Serving these models at scale is really difficult; they had to solve all kinds of problems.” In recent weeks, DALL-E Mini has been serving about 50,000 images a day.
Illustration: Wired Staff / Hugging Face
DALL-E Mini’s viral moment doesn’t just herald a new way to make memes. It also offers an early look at what happens when AI tools for generating images on demand become widely available, and serves as a reminder of the uncertainty about their likely impact. Algorithms that generate custom photography and artwork may transform art and help businesses with marketing, but they could also be used to manipulate and mislead. A disclaimer on DALL-E Mini’s webpage warns that it could “reinforce or exacerbate social biases” or “generate images that contain stereotypes against minority groups.”
DALL-E Mini was inspired by DALL-E (a portmanteau of Salvador Dalí and WALL-E), a more powerful AI image-making tool revealed in January 2021 by the AI research company OpenAI. DALL-E produces higher-quality images but has not been made publicly available, for fear of abuse.
Breakthroughs in AI research are often replicated elsewhere within months, and DALL-E was no exception. Boris Dayma, a machine learning consultant in Houston, Texas, says he was fascinated by the original DALL-E research paper. Although OpenAI had not released any code, he was able to piece together the first version of DALL-E Mini at a July 2021 hackathon organized by Hugging Face and Google. That first version produced low-quality images that were often difficult to recognize, but Dayma has kept improving it since. Last week, he renamed his project Craiyon after OpenAI asked him to change the name to avoid confusion with the original DALL-E project. The new website displays ads, and Dayma also plans to launch a premium version of his image generator.
DALL-E Mini's images have a distinctively alien look. Objects are often distorted and smeared, and people appear with parts of their faces or bodies missing or mangled. But what an image is trying to depict is usually recognizable, and comparing the AI’s sometimes unhinged output with the original prompt is often part of the fun.