Will DALL-E the AI Artist Take My Job?


I was unnerved by how well DALL-E 2 mimics a human photo editor. So I put my AI competition to the test.

As someone working in a creative field, I’ve never been concerned about a computer taking my job. I always felt confident that the tasks required of me as a photo editor for New York Magazine are too complex and messy — too human — for an artificial intelligence to perform. That is, until DALL-E 2, a sophisticated AI that generates original artwork based only on text input, opened to public beta last July.

It’s easy to lose hours on the r/dalle2 subreddit, where beta testers have been posting their work. More often than not, the only way to differentiate a DALL-E creation from a human-generated image is the five colorful squares tucked into the bottom-right corner of each composition — DALL-E’s signature. As I scrolled through images of Super Mario getting his citizenship at Ellis Island and the Mona Lisa painting a portrait of da Vinci, I couldn’t shake the question that town criers and elevator operators of yore must have confronted: Was my obsolescence on the horizon?

DALL-E, named after surrealist artist Salvador Dalí and Pixar’s lovable garbage robot WALL-E, was released by San Francisco-based research lab OpenAI in January 2021. The first iteration felt like a curious novelty, but nothing more. The compositions were remarkable only because they were generated by AI. In contrast, DALL-E 2, which launched in April 2022, is light-years ahead in image complexity and natural semantics; it’s easily one of the most advanced image generators in development, and it’s evolving at an astonishing speed. Last week, OpenAI launched a new Outpainting feature, which allows users to extend their canvas beyond its original borders, revealing, for example, the cluttered kitchen surrounding Johannes Vermeer’s Girl With a Pearl Earring.

Like other forms of artificial intelligence, DALL-E stirs up deep existential and ethical questions about imagery, art, and reality. Who is the artist behind these creations: DALL-E or its human user? What happens when fake photorealistic images are unleashed on a public that already struggles with deciphering fact from fiction? Will DALL-E ever be self-aware? How would we know?

These are all important ideas that I’m not interested in exploring. I just wanted to see if I need to add “robot takes my job” to the long list of things that make me anxious about the future. So I decided to put my AI competition to the test.

I have one of those ambiguous job titles that no one understands, like “marketing consultant” or “vice-president.” In the most basic terms, my job as photo editor is to find or produce the visual elements that accompany New York Magazine articles. The mechanics of how DALL-E and I do our work are pretty similar. We both receive textual “prompts” — DALL-E from its users, mine from editors. We then synthesize that information to produce visuals that are (hopefully) compelling and accurate to the ideas in play. My toolkit includes a corporate Getty subscription, countless hours of Photoshop experience, and an art degree that cost me an offensive amount of money. DALL-E’s toolkit is the millions of visual data points that it’s been trained on, and the algorithms that allow it to link those concepts together to create images.

For our competition, I set simple rules. If DALL-E was able to produce an image that was pretty close to my original artwork without too much hand-holding, the AI won the round. If it needed my stylistic guidance or wasn’t able to produce anything satisfying at all, I’d award myself (and humanity) a point.

Continue reading at New York Magazine

Image by Chen on Pixabay
