All these images were generated by Google's latest text-to-image AI


SOURCE: https://www.theverge.com/2022/5/24/23139297/google-imagen-text-to-image-ai-system-examples-paper
Summary

Often, images generated by text-to-image models look unfinished, smeared, or blurry — problems we've seen with pictures generated by OpenAI's DALL-E program. DrawBench isn't a particularly complex metric: it's essentially a list of some 200 text prompts that Google's team fed into Imagen and other text-to-image generators, with the output from each program then judged by human raters.

A deeper problem is the data these systems are trained on. As Google's researchers summarize it in their paper: "[T]he large scale data requirements of text-to-image models [...] have led researchers to rely heavily on large, mostly uncurated, web-scraped datasets [...] Dataset audits have revealed these datasets tend to reflect social stereotypes, oppressive viewpoints, and derogatory, or otherwise harmful, associations to marginalized identity groups." In other words, the well-worn adage of computer scientists still applies in the whizzy world of AI: garbage in, garbage out.

Google doesn't go into too much detail about the troubling content generated by Imagen, but notes that the model "encodes several social biases and stereotypes, including an overall bias towards generating images of people with lighter skin tones and a tendency for images portraying different professions to align with Western gender stereotypes." This is something researchers have also found while evaluating DALL-E.
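To make the DrawBench comparison described above concrete, here is a minimal sketch of that kind of pairwise human-preference evaluation in Python. The stub functions, model names, and example prompts are illustrative assumptions, not Google's actual benchmark code; DrawBench itself is just the prompt list.

```python
import random
from collections import Counter

# Hypothetical stubs: DrawBench ships a prompt list, not model or rater
# code, so these functions only simulate the two halves of the protocol.
def generate_image(model, prompt):
    """Stub for calling a text-to-image model; a real version would
    return the generated image rather than a description of it."""
    return f"<image from {model} for: {prompt}>"

def rater_prefers(image_a, image_b, prompt):
    """Stub for the human judgment step; a real version would show a
    rater both images alongside the prompt. Random choice here just
    exercises the tallying logic."""
    return random.choice(["A", "B", "tie"])

def pairwise_eval(prompts, model_a, model_b):
    """DrawBench-style comparison: each prompt goes to both models and
    a rater picks whichever output better matches it."""
    votes = Counter()
    for prompt in prompts:
        a = generate_image(model_a, prompt)
        b = generate_image(model_b, prompt)
        votes[rater_prefers(a, b, prompt)] += 1
    total = sum(votes.values())
    return {choice: count / total for choice, count in votes.items()}

if __name__ == "__main__":
    prompts = [
        "A blue jay standing on a large basket of rainbow macarons.",
        "A robot couple fine dining with Eiffel Tower in the background.",
    ]
    print(pairwise_eval(prompts, "Imagen", "DALL-E 2"))
```

The appeal of this design is that it sidesteps automated image-quality metrics entirely: with a fixed, fairly small prompt list, human preference percentages become the benchmark score.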

As reported here by James Vincent for The Verge.