Apple strongly supports user privacy. It has firm rules to keep the data on its devices safe and has also acted to prevent sexual exploitation on its platform.
With the rise of generative AI apps, a new risk has emerged: malicious users are finding ways to create lifelike fake images and videos that damage others’ reputations on social media.
404 Media identified three generative AI-based iOS apps advertised on social media sites like Instagram, with ads linking directly to the Apple App Store. These apps were marketed as tools to create non-consensual nude images of real people.
One app suggested placing a victim’s face on a naked adult actor’s body. The other two advertised features to strip a person’s clothes off digitally.
The publication notified Apple about the unlawful apps on the Apple App Store. Apple has removed these offensive apps from its platform.
Apple’s rules clearly tell app developers not to create apps that “offend people”. If they do, they could face penalties and might be banned from the platform without warning.
“Apps should not include content that is offensive, insensitive, upsetting, intended to disgust, in exceptionally poor taste, or just plain creepy. Apps with user-generated content or services that end up being used primarily for pornographic content, Chatroulette-style experiences, objectification of real people (e.g. “hot-or-not” voting), making physical threats, or bullying do not belong on the App Store and may be removed without notice,” read Apple’s App Review Guidelines.
Recently, there has been a rise in the misuse of generative AI-based deepfake technology. This is mainly used to harm the reputations of well-known figures in entertainment and politics.
The goal is to make the victim less popular right before elections or a film release, potentially costing them future opportunities.
Given the volume of deepfake content in circulation, Apple, Google, and Meta need to work hard to keep their platforms free of such harmful applications and content.
Governments worldwide, including India’s, have introduced new information technology rules to curb the spread of explicit content on social media, requiring platforms to take such content down within strict deadlines once a complaint is filed.
According to Rule 3(2)(b) of the Information Technology Rules, social media platforms are required to remove any harmful or defamatory content within 36 hours, or as soon as possible, after a complaint is made.
Under Section 66D of the IT Act, anyone who uses a computer resource to cheat by impersonation, which covers deepfakes, can face up to three years in jail and a fine of up to Rs 1 lakh.
What we think
I think Apple’s decision to remove these apps is a win for user privacy. With AI becoming more capable, the prospect of fake images and videos that look completely real is genuinely unsettling.
By removing these apps, Apple helps stop bad uses of AI. Other companies like Google and Meta should do the same to keep everyone safe. It’s also good that laws are in place to fight this problem.