Generative artificial intelligence (AI), including ChatGPT and the generative AI products of Meta and Google, should be designated "high-risk" under dedicated AI laws that could strictly regulate, or even ban, many AI technologies, according to a bipartisan parliamentary recommendation.
During a special parliamentary inquiry into the fast-growing technology, the tech giants were accused of committing "unprecedented theft" from creative workers in Australia.
The inquiry recommended that work begin urgently on mechanisms to pay creators when their work is used to train commercial AI models.
This sets the stage for the Federal Government to introduce legislation that could explicitly prohibit certain uses of AI and establish a comprehensive framework for how it is used in many parts of society, including in the home, the office and healthcare.
The committee was initially convened to examine whether the Government should respond with "whole-of-economy" legislation, tweaks to existing laws, or a light-touch approach with regulations developed in collaboration with the industry.
Ultimately, the committee opted for the strongest response.
The inquiry chair, Labor senator Tony Sheldon, said that while AI has incredible potential to improve productivity, it also comes with a host of new risks that could exploit Australians.
"We need new standalone AI laws to rein in big tech and put strong protections in place for high-risk AI uses while existing laws should be amended as necessary,” he said.
"General-purpose AI models must be treated as high-risk by default, with mandated transparency, testing, and accountability requirements.
“If these companies want to operate AI products in Australia, those products should create value, rather than just strip mine us of data and revenue."
The committee said developers of AI products should be required to be transparent about their use of copyrighted works in training, and that the original owners of those works should be fairly compensated.
One specific recommendation from the committee is that tools like OpenAI's ChatGPT, known as large language models, should be explicitly included on the list of high-risk AI uses.
Senators also said their interactions with AI developers “only intensified” their concerns about how the models were operating.