Deepfake porn: the need to regulate AI image and video generators
- Deepfake image and video generators are being used to create pornographic content.
- Most of the content involves celebrities, but images of private individuals with public social media profiles are increasingly being targeted as well.
- The rise of deepfake porn signals an urgent need for regulations and laws around AI-generated content.
Whenever a new technology is unveiled, two prominent groups of users emerge. The first is made up of those who want to use the technology for good, whether by enhancing their productivity or addressing society’s needs. The second, more concerning group consists of people who use the technology for the wrong reasons.
Generative AI in particular has been used for both purposes. While there are concerns about the ethical use of generative AI in some applications, governments and tech leaders are coming together to address these issues.
Cybercriminals have also been leveraging AI to launch cyberattacks, but one problem with generative AI is affecting society everywhere: AI-powered software is being used to craft deepfakes, seamlessly substituting one individual’s face in an existing video with another person’s while preserving the original facial expressions.
Today, deepfake image and video generators are freely available on the web, and there are even apps that generate such content at no cost. While they may sound harmless at first, deepfake generators have been causing problems worldwide.
Politicians, for example, are frequent targets of deepfake generators. Not only has there been an increase in fake images, but there have also been deepfake videos of politicians convincing enough to seem real. In the US, President Joe Biden and former President Donald Trump remain two of the figures most commonly depicted by deepfake generators.
Deepfake porn generators
Deepfake porn is another major problem created by AI. Using the same technique, the AI generates compromising pictures and videos of individuals. A report from Bloomberg highlighted that big tech companies, including Google, Amazon, X, and Microsoft, own tools and platforms that enable the surge in deepfake porn.
For instance, the report stated that Google is the leading traffic driver to widely used deepfake sites, while X is known to circulate deepfake content regularly. For hosting, deepfake generators rely on Amazon, Cloudflare, and Microsoft’s GitHub.
In another report, NBC reviewed two of the largest websites hosting sexually explicit deepfake videos and found that they were easily accessible through Google, and that creators also used the online chat platform Discord to advertise videos for sale and offer custom videos.
Although some light-hearted deepfake videos of celebrities have gained widespread attention, the prevalent application is producing sexually explicit content. NBC highlighted a report by Sensity, an Amsterdam-based company specializing in detecting and monitoring AI-generated synthetic media across sectors such as banking and fintech. According to the report, approximately 96% of deepfakes are sexually explicit, and the videos predominantly feature women who did not consent to the content being generated.
Although a significant portion of deepfake videos focuses on female celebrities, content creators have now expanded their offerings to videos of anyone. For instance, on Discord, one creator offered to produce a 5-minute deepfake of a “personal girl,” referring to individuals with under 2 million followers on Instagram, for a fee of just US$65.
Social media posts being turned into porn.
The need for regulations
While tech companies continue to work on improving AI, some regulation of image and video generators may be required. Currently, some generators, such as the Canva AI image generator, do not allow users to create pornographic deepfakes; the tool also refuses to generate images based on famous people.
However, most deepfake generators work by having a user upload a picture or video and ask the AI to generate the content. The AI, for all its sophistication, cannot judge whether what it is being asked to do is ethical. This is why there are continuous calls for the technology to be regulated.
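To illustrate the kind of guardrail such regulation could mandate, here is a minimal, hypothetical sketch in Python of a pre-generation safety gate. It does not reflect how any particular generator actually works: the request fields, keyword list, and classifier placeholders are assumptions made purely for illustration, standing in for the NSFW and consent checks a real service would need.

```python
# Illustrative sketch only: a hypothetical safety gate a generation service
# could run before fulfilling a request. All names and checks here are
# assumptions for illustration, not a real product's or library's API.
from dataclasses import dataclass


@dataclass
class GenerationRequest:
    source_image: bytes       # photo or video frame uploaded by the user
    prompt: str               # text describing the requested output
    subject_consented: bool   # documented consent from the person depicted


# Simplified keyword screen; a production system would use an NSFW classifier.
BLOCKED_TERMS = {"nude", "explicit", "undress"}


def looks_explicit(prompt: str) -> bool:
    """Return True if the prompt appears to request sexual content."""
    return any(term in prompt.lower() for term in BLOCKED_TERMS)


def depicts_real_person(image: bytes) -> bool:
    """Placeholder for a face-detection / identity-matching step."""
    return True  # assume every upload contains a real person's face


def allow_generation(req: GenerationRequest) -> bool:
    """Block explicit requests, and require consent when a real person is shown."""
    if looks_explicit(req.prompt):
        return False
    if depicts_real_person(req.source_image) and not req.subject_consented:
        return False
    return True


if __name__ == "__main__":
    req = GenerationRequest(b"...", "swap this face into a beach photo", False)
    print("allowed" if allow_generation(req) else "blocked")
```

Even a crude gate like this shows where the responsibility currently sits: unless the service operator chooses to add such checks, nothing in the generation pipeline itself stops non-consensual content from being produced.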
For now, most tech companies are focused on improving how AI produces its results, including avoiding biases and correctly attributing sources. As such, it may be some time before regulations and laws on deepfake porn are in place. In the meantime, demand for such content is soaring, and users will have to be careful about what images and videos they upload to social media.