Apple’s Image Playground app is said to have some bias issues. A machine learning scientist recently shared several outputs generated using the artificial intelligence (AI) app and claimed that they contained incorrect skin tones and hair textures. These inaccuracies were also said to be paired with specific racial stereotypes, adding to the problem. It is difficult to say whether the alleged issue is a one-off incident or a widespread problem. Notably, the Cupertino-based tech giant first introduced the app as part of the Apple Intelligence suite with the iOS 18.2 update.
Apple’s Image Playground App Might Have Bias Issues
Jochem Gietema, the Machine Learning Science Lead at Onfido, shared a blog post highlighting his experiences using Apple’s Image Playground app. In the post, he shared several sets of outputs generated using the app and highlighted instances of racial bias by the large language model powering it. Notably, Gadgets 360 staff members did not notice any such biases while testing out the app.
“While experimenting, I noticed that the app alters skin tone and hair depending on the prompt. Professions like investment banker vs. farmer produce images with very different skin tones. The same goes for skiing vs. basketball, streetwear vs. suit, and, most problematically, affluent vs. poor,” Gietema said in a LinkedIn post.
Alleged biased outputs generated using the Image Playground app
Photo Credit: Jochem Gietema
Such inaccuracies and biases are not unusual with LLMs, which are trained on large datasets that might contain similar stereotypes. Last year, Google’s Gemini AI model faced backlash for similar biases. However, companies are not completely helpless to prevent such generations and often implement various layers of security to prevent them.
Apple’s Image Playground app also comes with certain restrictions to prevent issues associated with AI-generated images. For instance, the Apple Intelligence app only supports cartoon and illustration styles to avoid instances of deepfakes. Additionally, the generated images have a narrow field of vision that usually only captures the face along with a small amount of additional detail. This is also done to limit any such instances of bias and inaccuracy.
The tech giant also does not allow any prompts that contain negative words, names of celebrities or public figures, and more, to stop users from abusing the tool for unintended use cases, as sketched below. However, if the allegations are true, the iPhone maker will need to add additional layers of safety to ensure users do not feel discriminated against by the app.
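As a rough illustration, prompt restrictions of this kind are often implemented as a blocklist check that runs before the prompt ever reaches the image model. The following is a minimal sketch in Swift; the `PromptFilter` type, the example blocklist entries, and the plain substring matching are hypothetical assumptions for illustration, not Apple’s actual (non-public) implementation, which likely relies on far larger curated lists and ML-based classifiers.

```swift
import Foundation

// A minimal, hypothetical sketch of a keyword-based prompt filter.
// The type name, blocklist entries, and matching logic are illustrative
// assumptions, not Apple's actual implementation.
struct PromptFilter {
    // Real systems would use much larger, curated lists and
    // ML-based classifiers rather than plain substring matching.
    let blockedTerms: Set<String> = ["violent", "hateful"]
    let publicFigures: Set<String> = ["taylor swift", "elon musk"]

    func isAllowed(_ prompt: String) -> Bool {
        let lowered = prompt.lowercased()
        // Reject the prompt if it mentions any blocked term
        // or the name of a known public figure.
        for term in blockedTerms where lowered.contains(term) {
            return false
        }
        for name in publicFigures where lowered.contains(name) {
            return false
        }
        return true
    }
}

let filter = PromptFilter()
print(filter.isAllowed("a farmer skiing"))        // true
print(filter.isAllowed("taylor swift in a suit")) // false
```

In practice, a keyword filter like this is only a first line of defence: it can block obviously problematic prompts, but it cannot catch the subtler statistical biases Gietema describes, which arise from the model’s training data rather than from the wording of any single prompt.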