Think Tank Warns of AI’s Potential for Creating Viruses and Bioweapons

Artificial intelligence (AI) has the potential not only to bring about technological advancements but also to create viruses that could pose a threat to humanity, according to a recent Senate inquiry hearing on AI. Greg Sadler, CEO of Good Ancestors Policy, warned that AI could assist in manufacturing dangerous viruses, a task typically limited to leading laboratories. He cited an example from March 2022 in which an AI, intended to find new drugs, instead designed 40,000 lethal molecules in less than six hours. Another study, in 2023, demonstrated how students used ChatGPT to suggest potential pandemic pathogens and obtain information on how they could be made using DNA ordered online.

Sadler emphasized that these studies revealed the failure of ChatGPT’s safeguards to prevent the application from offering dangerous assistance. He expressed concern that AIs could aid individuals in building bioweapons during the next term of government. In response to this threat, the U.S. government introduced an executive order in October 2023.

While Sadler has raised this issue with various Australian government departments, he has not observed any evidence of risk management measures similar to those implemented by the United States. He highlighted a significant gap between investment in safety and investment in capability in AI development.

To address biosecurity risks and other potential threats posed by AI, Soroush Pour, CEO of Harmony Intelligence, an AI safety research company, proposed establishing an AI safety institute in Australia. This institute would focus on developing the technical capabilities needed to respond effectively to such threats and would require strong regulation enforcing mandatory policies such as third-party testing and safety incident reporting.

Regarding regulatory frameworks for AI safety, Sadler suggested Australia consider adopting California’s SB-1047 bill as a practical way forward. This bill places an obligation on developers to ensure their AI models are safe and do not pose risks to public safety; failure to comply may result in liability for any catastrophic harms caused by their models.

Furthermore, Sadler noted that under the Californian framework, developers must have the ability to deactivate their AIs if they become dangerous. He argued that Australia’s approach to regulating AI safety research and development should align with these principles.

The SB-1047 bill specifically targets high-powered AI models developed at significant cost, requiring them to undergo safeguarding processes and allowing third-party verification if necessary. However, concerns have been raised about its potential impact on innovation within the tech industry.

As the legislation progresses through California’s lower house, having cleared a key hurdle on August 15, it remains crucial for governments worldwide, including Australia, to address the emerging challenges of advancing artificial intelligence technologies responsibly.
