eSafety Commissioner Reports Numerous Sexual AI Apps on Smartphones

The proliferation of sexual AI (artificial intelligence) applications on smartphones has raised concerns about the ease with which perpetrators can commit offenses, according to testimony given to a parliamentary committee. During an inquiry hearing on a new sexual deepfakes bill, eSafety Commissioner Julie Inman Grant highlighted the availability of numerous apps in app stores that are designed for nefarious purposes. She specifically mentioned apps that openly promote their ability to modify pictures of girls using AI. Because these apps are widely accessible and free, perpetrators can exploit them easily and at no cost while causing immeasurable harm to their targets.

eSafety is particularly concerned about open-source sexual AI apps that use sophisticated monetization tactics and are gaining popularity on mainstream social media platforms, especially among younger audiences. Ms. Inman Grant cited a recent study showing a 2,408 percent increase in referral links to non-consensual pornographic deepfake websites across Reddit and X (formerly Twitter) in 2023 alone. The risks associated with multimodal forms of generative AI were also highlighted, including the creation of hyper-realistic synthetic child sexual abuse material via text-prompt-to-video tools, highly accurate voice cloning, and manipulated chatbots that could facilitate grooming and sextortion.

To address these risks, eSafety has submitted mandatory standards to parliament aimed at strengthening regulations in this area. However, Ms. Inman Grant believes that tech companies should also take responsibility for reducing the risks on their platforms. She emphasized the need for robust safety standards enforced by the app libraries hosting these apps, along with clear reporting mechanisms to prevent their weaponization against children.

While authorities are taking action against AI risks, law enforcement faces significant challenges due to the rapid advancement of technology. Deepfake detection tools lag behind the creation tools, which are freely available online, and deepfakes have become so realistic that they are difficult to discern with the naked eye.

Ms. Inman Grant also revealed that eSafety often takes an informal approach when dealing with sexual abuse materials, despite other formal means being available under current laws. The online content regulator can informally request that online service providers remove illegal or restricted content, a method chosen for its speed in removing harmful content from overseas sites, where it is predominantly hosted.

Since the introduction of the Online Safety Act 2021, eSafety has issued 10 formal warnings, 13 remedial directions, and 34 removal notices within Australia as part of its efforts against online harms related to sexual abuse materials.