The UK’s AI oversight will include the chance to study some companies’ technology directly. In a speech at London Tech Week, Prime Minister Rishi Sunak revealed that Google DeepMind, OpenAI and Anthropic have pledged to provide “early or priority access” to AI models for the sake of research and safety. This will ideally improve inspections of those models and help the government recognize the “opportunities and risks,” Sunak says.

It’s not clear just what data the tech firms will share with the UK government. We’ve asked Google, OpenAI and Anthropic for comment.

The announcement comes weeks after officials said they would conduct an initial assessment of AI model accountability, safety, transparency and other ethical concerns. The country’s Competition and Markets Authority is expected to play a key role. The UK has also committed to spending an initial £100 million (about $125.5 million) to create a Foundation Model Taskforce that will develop “sovereign” AI meant to grow the British economy while minimizing ethical and technical problems.

Industry leaders and experts have called for a temporary halt to AI development over worries that creators are pressing forward without enough consideration for safety. Generative AI models like OpenAI’s GPT-4 and Anthropic’s Claude have been praised for their potential, but have also raised concerns about inaccuracies, misinformation and abuses like cheating. The UK’s move could, in theory, limit these issues and catch problematic models before they do much damage.

This doesn’t necessarily give the UK complete access to these models and their underlying code, and there are no guarantees the government will catch every major issue. The access may still provide relevant insights, though. If nothing else, the effort promises increased transparency for AI at a time when the long-term impact of these systems isn’t entirely clear.

Source: www.engadget.com