Artificial intelligence imaging can be used to create art, try on clothes in virtual fitting rooms or help design advertising campaigns.
But experts fear the darker side of these easily accessible tools could worsen something that primarily harms women: non-consensual deepfake pornography.
Deepfakes are videos and images that have been digitally created or altered with artificial intelligence or machine learning. Porn created using the technology began circulating on the internet several years ago, when a Reddit user shared clips that placed the faces of female celebrities onto the bodies of porn actors.
Since then, deepfake creators have spread similar videos and images targeting online influencers, journalists and others with public profiles. Thousands of videos exist across numerous websites. Some sites have been offering users the chance to create their own images — essentially allowing anyone to turn whomever they wish into a sexual fantasy without their consent, or to use the technology to harm a former partner.
Experts say the problem is growing as it becomes easier to create sophisticated and visually convincing deepfakes. And it could get worse, they say, with the development of generative AI tools that are trained on billions of images from the internet and spit out new content using existing data.
“The reality is that the technology will continue to proliferate, will continue to develop and will continue to become sort of as easy as pushing the button,” said Adam Dodge, the founder of EndTAB, a group that provides trainings on technology-enabled abuse. “And as long as that happens, people will undoubtedly … continue to misuse that technology to harm others, primarily through online sexual violence, deepfake pornography and fake nude images.”
Noelle Martin, of Perth, Australia, has lived through that reality. The 28-year-old found deepfake porn of herself 10 years ago when, out of curiosity, she used Google to search for images of herself one day. To this day, Martin says she does not know who created the fake images or the videos of her engaging in sexual intercourse that she would later find. She suspects someone likely took photos posted on her social media pages or elsewhere and doctored them into porn.
Horrified, Martin contacted different websites over the years in an effort to get the images taken down. Some did not respond. Others took them down, but she soon found them again.
“You can’t win,” Martin said. “It’s something that’s always there. Like it ruins you forever.”
The more she spoke out, the more the problem escalated, she said. Some people even told her the way she dressed and posted images on social media contributed to the harassment — essentially blaming her for the images instead of the creators.
Ultimately, Martin turned her attention toward legislation, advocating for a national law in Australia that would fine companies A$555,000 ($370,706) if they fail to comply with removal notices for such content from the online safety watchdog.
But governing the internet is nearly impossible when countries have their own laws for content that is sometimes made halfway around the world. Martin, currently a lawyer and legal researcher at the University of Western Australia, says she believes the problem has to be controlled through some sort of global solution.
In the meantime, some AI companies say they are already curbing access to explicit images.
OpenAI said it removed explicit content from the data used to train its image-generation tool, DALL-E, which limits users' ability to create those types of images. The company also filters requests and says it blocks users from creating AI images of celebrities and prominent politicians. Midjourney, another model, blocks the use of certain keywords and encourages users to flag problematic images to moderators.
Meanwhile, the startup Stability AI rolled out an update in November that removed the ability to create explicit images using its image generator, Stable Diffusion. The changes followed reports that some users were using the technology to create celebrity-inspired nude pictures.
Motez Bishara, a spokesperson for Stability AI, said the filter uses a combination of keywords and other techniques such as image recognition to detect nudity and returns a blurred image. But because the company released its code to the public, it is possible for users to manipulate the software and generate whatever they want. Bishara said Stability AI's license “extends to third-party applications built on Stable Diffusion” and strictly prohibits “any misuse for illegal or immoral purposes.”
Some social media companies have also been tightening their rules to better shield their platforms from harmful material.
TikTok said last month that all deepfakes or manipulated content showing realistic scenes must be labeled to indicate they are fake or altered in some way, and that deepfakes of private figures and young people are no longer allowed. Previously, the company had barred sexually explicit content and deepfakes that mislead viewers about real-world events and cause harm.
The gaming platform Twitch also recently updated its policies around explicit deepfake images after a popular streamer named Atrioc was discovered to have a deepfake porn website open in his browser during a livestream in late January. The site featured phony images of fellow Twitch streamers.
Twitch already prohibited explicit deepfakes, but now showing a glimpse of such content — even if it is intended to express outrage — “will be removed and will result in an enforcement,” the company wrote in a blog post. And intentionally promoting, creating or sharing the material is grounds for an instant ban.
Other companies have also tried to ban deepfakes from their platforms, but keeping them off requires diligence.
Apple and Google said recently they removed an app from their app stores that was running sexually suggestive deepfake videos of an actress to promote the product. Research into deepfake porn is not widespread, but a 2019 report by the AI firm DeepTrace Labs found it was almost entirely weaponized against women, with Western actresses the most targeted, followed by South Korean K-pop singers.
The same app removed by Google and Apple had run ads on Meta's platforms, which include Facebook, Instagram and Messenger. Meta spokesperson Dani Lever said in a statement that the company's policy restricts both AI-generated and non-AI adult content, and that it has restricted the app's page from advertising on its platforms.
In February, Meta, as well as adult sites such as OnlyFans and Pornhub, began participating in an online tool called Take It Down, which allows teens to report explicit images and videos of themselves from the internet. The reporting site works for regular images and AI-generated content — which has become a growing concern for child safety groups.
“When people ask our senior leadership what are the boulders coming down the hill that we’re worried about? The first is end-to-end encryption and what that means for child protection. And then second is AI and specifically deepfakes,” said Gavin Portnoy, a spokesperson for the National Center for Missing & Exploited Children, which operates the Take It Down tool.
“We haven’t … been able to respond directly to that,” Portnoy said.
Haleluya Hadero, Associated Press