Responsible AI Regulations: When Governments Drive the Future of AI
The US stands at the forefront of AI research and development, home to some of the world’s most sophisticated and influential tech corporations and institutions. In a significant event earlier in May, the White House convened a meeting with the chief executives of tech giants such as Google, Microsoft, OpenAI, and Anthropic. Their discussion centered on the potential risks and benefits of AI technology.
The gathering was perceived as a constructive stride toward fostering trust and collaboration between the government and the tech sector on AI issues. The four firms, along with several others, voluntarily pledged to promote the safe, secure, and reliable development of AI technology.
In contrast, on June 14, 2023, the European Parliament approved its version of the draft EU Artificial Intelligence Act, the first legislation of its kind. The law would regulate AI applications across four risk tiers and ban those posing an unacceptable risk to safety and privacy. Non-compliance can result in fines of up to 6% of annual turnover or €30 million.
The US has faced criticism for not having a clear and all-encompassing plan for the governance of AI, particularly when compared to other regions such as the European Union. The primary…