Sexual Deepfake Volume Increases by 550% Annually: eSafety Commissioner

eSafety Commissioner Julie Inman Grant has revealed that the prevalence of sexual deepfakes has increased significantly in recent years. Deepfakes are videos and images manipulated with software or artificial intelligence to make a person appear to say or do things they never did, often by placing their likeness onto someone else's body. According to Ms. Inman Grant, explicit deepfakes have grown by an average of over five times each year since 2019.

During an inquiry hearing on July 23, Ms. Inman Grant shared concerning data about the rise of deepfakes on the internet. She stated that pornographic videos account for 98 percent of the deepfake material currently online, with 99 percent of that imagery depicting women and girls.

The Commissioner also highlighted the growing prevalence of image-based abuse involving deepfakes and the distress it causes victims. She explained that law enforcement struggles to keep pace with these cases because such content can be produced and shared so rapidly.

In response to this issue, the federal Labor government is pushing for new legislation targeting non-consensual sharing of sexual deepfake materials. Under this proposed bill, individuals who share such content without consent could face up to six years in prison, while those who both create and share it could face up to seven years.

It is important to note that creating sexual deepfakes without sharing them would not be penalized under this legislation. The bill has already passed through the lower house and will need Senate approval before becoming law.

During the hearing, Ms. Inman Grant expressed her support for this legislation as it would enhance her agency’s efforts to combat abusive materials online. She believes criminalization serves as a deterrent while expressing society’s collective disapproval towards such conduct.

Under current laws, eSafety has authority over tech companies regarding takedown requests for abusive materials online. With advancements in AI technology, eSafety aims to ensure that synthetic materials and deepfakes are addressed through all of its complaint schemes, which cover child sexual abuse material, pro-terror content, image-based abuse, adult cyber abuse, and cyberbullying targeting young people.

When asked whether companies promoting sexual deepfake apps could be pursued if these offences were criminalised, Ms. Inman Grant said she was unsure exactly how that would work, but suggested that criminal provisions may be more effective as a deterrent and punishment for perpetrators.
