The US and UK have signed a landmark agreement on artificial intelligence safety, the first formal bilateral deal on assessing risks from emerging AI models. The agreement commits both countries to pooling technical knowledge and talent to address potential existential risks from AI technology. Under the partnership, the UK's AI Safety Institute and its US counterpart will exchange expertise and work together on independently evaluating private AI models.

The deal sets a global precedent for cooperation on AI safety regulation. It builds on the UK's existing commitments, including the establishment of its AI Safety Institute and voluntary pledges from major tech companies to open up their latest AI models for review. The joint initiative is intended to accelerate the institutes' work on risks to national security and broader society, giving them a better understanding of AI systems and enabling them to conduct robust evaluations and issue rigorous guidance.

The bilateral collaboration contrasts with broader regulatory efforts such as the EU's AI Act and President Joe Biden's executive order targeting AI models that threaten national security. The UK's proactive approach aligns with Prime Minister Rishi Sunak's ambition for the country to play a central role in the field, and the commitment extends to shared challenges such as the impact of AI on upcoming elections and the need for computing infrastructure to support AI. The agreement marks a major step towards ensuring the safe and responsible use of AI technology.