
DALL-E Mini’s Obsession with Women in Saris Looks More like a New AI Trouble

No one understands why DALL-E Mini is so obsessed with women wearing saris!

The only true restrictions on DALL-E Mini are the prompts you give it and its smeary brushwork. The freely accessible AI image generator can create hazy, contorted, melting approximations of nearly any scene you can imagine. Seinfeld nightmares? Nailed it. Courtroom sketches of variously arranged animals, cars, and celebrities? Easy. Never-before-seen horrors from the minds of the mindless. But when given nothing at all, DALL-E Mini quickly reveals the limits of its own “imagination.” Left without any instruction or direction, the model appears to get stuck: given an empty prompt, the program will reliably return an image of a woman wearing a sari (a garment commonly worn across South Asia).

According to reporting from Rest of World, even the tool’s creator, Boris Dayma, is unsure of the precise reason. “It’s extremely interesting and I’m not sure why it happens,” he told the publication.

 

Describing DALL-E Mini

DALL-E Mini was modeled on DALL-E 2, a powerful image generator from OpenAI. DALL-E 2 produces images that are significantly more realistic than those of its “mini” counterpart, but as a trade-off it demands too much computing power to be served to the average internet user; access is capacity-limited and queued. Dayma, who is not affiliated with OpenAI, decided to develop his own, less exclusive version, which was released in July 2021. It has exploded in popularity over the past few weeks: according to Dayma, the software has been handling around 5 million requests a day. At OpenAI’s request, DALL-E Mini was renamed Craiyon as of Monday and moved to a new domain.

Like any other AI model, DALL-E Mini/Craiyon produces results based on its training data. Mini’s training material was a diet of 15 million image-caption pairs, a further 14 million photos, and the pandemonium of the open internet. And the sari phenomenon is almost certainly related to that underlying data. The sari state of affairs, if you will.

 

But why does DALL-E Mini get stuck on saris?

Dayma hypothesized that the original photosets feeding DALL-E Mini may have contained a disproportionate number of pictures of South Asian women wearing saris. The oddity may also have something to do with caption length: the AI might associate zero-character prompts with images that carried very short descriptions. Michael Cook, an AI researcher at Queen Mary University of London, told Rest of World that he wasn’t quite convinced by the overrepresentation idea. “Often, machine-learning systems have the opposite issue: they really don’t include enough photographs of non-white people.”

Cook instead suspects that a language bias in the data-filtering process may be the cause. Reading through related material, he noticed that many of these datasets strip out non-English text. Captions written in Hindi, for instance, can be deleted, leaving the associated images floating in the vast AI soup without any labels or supporting text. Neither Cook’s nor Dayma’s theory has been tested yet, but both are good examples of the kinds of issues that crop up constantly in AI. Artificial intelligence is only as reliable as the people who program and train it. Feed an image generator cookies and it will produce cookies. And because its training data comes from the internet, which is a place similar to hell, AI also carries an awful load of human preconceptions and stereotypes.
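Cook’s hypothesis can be sketched as a toy filter. The snippet below is purely illustrative and is not the actual DALL-E Mini pipeline: it uses a crude mostly-ASCII heuristic (real dataset pipelines use proper language detection) to drop non-English captions while keeping the images, which is exactly how pictures end up in the training set with no supporting text.

```python
# Illustrative sketch of Cook's hypothesis (NOT the real DALL-E Mini
# pipeline): an English-only caption filter that keeps every image but
# blanks out captions it can't read, leaving those images unlabeled.

def looks_english(caption: str) -> bool:
    """Crude heuristic: treat a caption as English if it is mostly ASCII."""
    if not caption:
        return False
    ascii_chars = sum(1 for ch in caption if ord(ch) < 128)
    return ascii_chars / len(caption) > 0.9

def filter_captions(pairs):
    """Keep every (image, caption) pair, blanking captions that fail the filter."""
    return [(img, cap if looks_english(cap) else "") for img, cap in pairs]

pairs = [
    ("img_001.jpg", "a woman wearing a red sari"),
    ("img_002.jpg", "लाल साड़ी में एक महिला"),  # Hindi caption, stripped by the filter
    ("img_003.jpg", "a dog on a beach"),
]

filtered = filter_captions(pairs)
# img_002.jpg survives with an empty caption. At generation time, an
# empty prompt could then land closest to images like this one.
```

Under this (untested) theory, the empty prompt matches the “caption” of exactly those images whose labels were filtered away, which would disproportionately be photos with non-English captions.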

It might be entertaining to believe that the “woman in a sari” image is some primordial message rising from the depths of the unfiltered internet, but the truth is that it is probably the result of a data quirk or plain bias. The identity of the woman in the sari is unknown, but the problems with AI today are not.