Artificial intelligence (AI) must be harnessed for the public good of science over the next decade, former Australian prime minister Julia Gillard has said.
Ms Gillard, who now chairs London-based health research charity the Wellcome Trust, said getting AI tools into the hands of scientists who need them would be a “public policy challenge for the next 10 years”.
The former politician, who served as prime minister of Australia between 2010 and 2013, was speaking in Westminster at the invitation of the House of Lords Speaker, Lord McFall of Alcluith.
Asked by her predecessor at the helm of the Wellcome Trust, crossbench peer Baroness Manningham-Buller, where science must be in 10 years’ time, Ms Gillard replied: “I think using AI for good.”
She added: “Using AI for good because it is publicly available, and it needs to be available to scientists, and that won’t happen by accident.”
Ms Gillard gave the example of Google-owned DeepMind, which made its “AI-driven protein folding technology” open access so scientists around the world could use it free of charge.
She added: “They did that out the goodness of their heart. They could have made that proprietary, they could have charged people for it, they could have charged people a lot of money for it, they could have limited access to it.
“Getting the AI tools into the hands of scientists who need them is a public policy challenge for the next 10 years.”
She also called for other ways of making technology and research accessible to scientists around the world, including better use of common platforms.
“We are getting better and better platforms for further discoveries, but they have got to be collaborative, they have got to be shared,” Ms Gillard said.
The former world leader’s call for greater access comes only a week after the UK hosted a global summit on AI safety.
Rishi Sunak hailed a series of “landmark” steps agreed by governments and businesses around the world following the gathering.
This included agreement that the UK’s new AI Safety Institute would be allowed to test new AI models developed by major firms in the sector before they are released.