DALL-E can now use AI to extend images as a human artist might

Since it was announced in April, the text-to-image AI tool DALL-E 2 has been wowing artists, researchers, and media types with its high-quality images. Now, four months later, developer OpenAI is giving DALL-E 2 a new trick: the ability to extend the images it creates beyond their original borders in logical and creative ways.

The new feature, which OpenAI calls “outpainting,” could be useful to graphic designers who need to create multiple sizes and shapes of a particular image to present in different contexts. A movie promo image, for instance, might require a perfectly square shape in one context, and a tall rectangular shape in another. For the latter, new art is required to fill in the extra space.

The artist Paul Trillo used outpainting to extend this image of a UFO downward to include the pool. [Image: courtesy of OpenAI]
DALL-E 2 creates original 1024 × 1024-pixel images based on keyword descriptions entered by the user. It can also make images based on objects and styles it sees in other images. For example, it might be given a street art image of a mouse alongside an art deco version, then combine elements of the two styles into an original picture of the rodent. It also has editing capabilities, meaning a user can erase a section of a generated image and then tell DALL-E to add a specific object or style in that area. For instance, if a designer doesn't like the expressionist red roses in the foreground of an image, they can erase them and ask DALL-E to put photorealistic white orchids there instead.
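For readers who want to experiment programmatically, the same two capabilities, generation from a prompt and mask-based editing, are exposed through OpenAI's public Images API. The beta described in this article is a web interface, so the sketch below is only an illustration, assuming the openai Python library's images endpoints; the file names and prompts are stand-ins.

```python
# A minimal sketch, assuming the openai Python library's public Images API;
# this is not the web interface shown in the article. File names are placeholders.
from openai import OpenAI

client = OpenAI()

# Text-to-image: generate an original 1024 x 1024 picture from a description.
generated = client.images.generate(
    model="dall-e-2",
    prompt="two teddy bears mixing sparkling chemicals inside of a laboratory",
    size="1024x1024",
    n=1,
)
print(generated.data[0].url)

# Mask-based edit: transparent pixels in the mask mark the erased region,
# and the prompt describes what DALL-E should paint there instead.
edited = client.images.edit(
    model="dall-e-2",
    image=open("scene.png", "rb"),
    mask=open("scene_with_roses_erased.png", "rb"),
    prompt="photorealistic white orchids in the foreground",
    size="1024x1024",
    n=1,
)
print(edited.data[0].url)
```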

Now, the editing interface is getting some new buttons to control the expansion of images. In a demo Tuesday, I watched OpenAI engineer David Schnurr extend an image DALL-E had created earlier based on the keywords “two teddy bears mixing sparkling chemicals inside of a laboratory.” I saw a kind of steampunk-style image of two cute teddy bears wearing goggles standing at a lab table in the foreground. Schnurr wanted to extend the image to show more area above the teddy bears. So he positioned the bottom half of a blue square over the top left section of the image, telling the AI to use the storybook laboratory context and vibe in the lower half of the square as the basis for extending the image into the top half.

“We’re adding more sort of laboratory concepts into the image, and then we can also expand upwards and really just make an image that’s as big as we would like,” Schnurr says. 

Say Schnurr had wanted DALL-E to include something specific in the extended area of the image, like a cuckoo clock hanging on the wall above the bears. He could have done that by giving DALL-E some additional keywords.

Actually, Schnurr tells me, DALL-E creates four different versions of the extended area, from which the user can choose. If they don’t like any of the four, they can try the extension function again, perhaps with different keywords.
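One way to approximate this outpainting workflow outside the web interface is to build a new, partially transparent frame that overlaps the existing picture and ask the edit endpoint to fill in the empty area. The sketch below rests on the assumption that the Images API treats a partially transparent PNG's alpha channel as the mask; the file names and prompt are hypothetical, and stitching the result back onto the original image is left out.

```python
# A rough sketch of the "extend upward" step from the demo, assuming the
# Images API uses transparent pixels in the uploaded PNG as the region to
# fill. File names and the prompt are placeholders, not OpenAI's workflow.
from openai import OpenAI
from PIL import Image

client = OpenAI()

scene = Image.open("teddy_bears_lab.png").convert("RGBA")  # 1024 x 1024 original

# Build a new 1024 x 1024 frame whose lower half is the top half of the
# original image (the context to continue) and whose upper half stays empty.
frame = Image.new("RGBA", (1024, 1024), (0, 0, 0, 0))
frame.paste(scene.crop((0, 0, 1024, 512)), (0, 512))
frame.save("frame.png")

result = client.images.edit(
    model="dall-e-2",
    image=open("frame.png", "rb"),  # transparency doubles as the mask
    prompt="the upper walls and ceiling of a storybook chemistry laboratory",
    n=4,                            # four candidate extensions, as in the demo
    size="1024x1024",
)
for i, candidate in enumerate(result.data):
    print(i, candidate.url)
```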

DALL-E product manager Joanne Jang says the new feature was driven directly by feedback from DALL-E users. Filmmakers are using DALL-E to cut storyboarding time in half, Jang says. They might want to experiment with closer or wider shots during the creative process. Game designers have been using DALL-E to reduce the time it normally takes to create new scenes and actions with concept artists.

The outpainting feature isn’t a free add-on. Every DALL-E beta user gets 50 free credits during their first month of use, and 15 free credits every subsequent month. Every time a user generates an additional section of an image, it costs them a credit. Users can purchase additional credits in 115-generation packs for $15, OpenAI says.

Jang says more than a million users have been invited into the DALL-E beta program, including more than 3,000 working artists. As a result, OpenAI has been fielding a lot of different kinds of feedback on how to improve DALL-E’s tools.

But one ask seemed to cut across user types. “I think amongst all those feedback points, one thing that was pretty commonly requested was a flexibility in aspect ratios,” Jang says.

https://www.fastcompany.com/90783798/dall-e-image-generator-now-goes-beyond-the-frame?partner=rss&utm_source=rss&utm_medium=feed&utm_campaign=rss+fastcompany&utm_content=rss

Created 3y ago | Aug. 31, 2022, 17:21:18

