Brad Smith, President of tech giant Microsoft, has urged governments and corporations to move faster in regulating the rapid development of artificial intelligence (AI). Speaking at a panel in Washington, D.C., Smith proposed various measures to mitigate the potential risks associated with AI technology. His call comes amid concerns about privacy, job displacement, and the proliferation of deepfake videos spreading misinformation. While Microsoft is actively involved in AI development, Smith stressed the importance of shared responsibility and affirmed the company’s commitment to safeguarding AI technology.
Urging Safety Measures and Regulatory Frameworks:
Smith called for implementing “safety brakes” in AI systems controlling critical infrastructure and establishing a comprehensive legal and regulatory framework for AI. He highlighted the need for companies to take an active role in mitigating the risks of uncontrolled AI development rather than solely relying on government intervention.
Support for Licensing and Oversight:
Smith endorsed the idea, proposed by OpenAI CEO Sam Altman, of licensing AI developers. He suggested that high-risk AI services and development should only be conducted within licensed AI data centers. Such oversight would ensure accountability and enable effective monitoring of AI technologies.
Growing Concerns and Calls for Action:
Concerns surrounding AI have gained momentum in recent months. On May 16, Altman testified before Congress, advocating the establishment of a federal oversight agency that would grant licenses to AI companies. Additionally, the Future of Life Institute published an open letter on March 22, signed by influential tech leaders including Elon Musk and Steve Wozniak, calling for a temporary halt in AI development. These calls underscore the need for stricter oversight and regulations.
Despite Microsoft’s involvement in AI development, Smith stressed that the company is not shirking its responsibility. Microsoft has committed to implementing its own AI-related safeguards, irrespective of government requirements. The company is also developing specialized chips to power OpenAI’s viral chatbot, ChatGPT.
Brad Smith’s call for accelerated regulation and shared responsibility in the AI industry highlights the need for proactive measures. Governments and corporations must collaborate to address the potential risks associated with AI, including privacy breaches and misinformation spread through deepfake videos. By endorsing licensing and oversight and committing to internal safeguards, Microsoft exemplifies the proactive approach necessary to ensure the responsible and ethical development of AI technologies.