Technology governance is not a new notion. But AI and machine learning present unique governance complexities that, as a society, we have largely never been forced to consider before.
The last time a technological revolution had so seismic an impact, some argue, was when the steam engine transformed industrial settings in the 1700s.
Back then, however, the policy and legal landscape was significantly less complicated — the modern U.S. patent system had barely been invented — and technological development raised far fewer moral and ethical quandaries than does an advance as profound and far-reaching as machine intelligence.
This has left policymakers with a lot of questions about how best to proceed. During the Obama Administration, the White House Office of Science and Technology Policy led a significant policy process on AI with the goal of setting a foundation for domestic policymaking on issues related to machine intelligence in the United States.
Other countries have begun investigating similar questions — last year, the UK Parliament released a report on robotics and artificial intelligence policy, and the EU Commission drew significant public attention earlier this year with calls for EU-wide ethical and liability standards for robotics.
Given the inevitable acceleration of AI development and distribution, and the increasingly complicated legal and policy landscape, where does this leave us?
It may seem obvious, but one of the most salient points in AI governance is that technology is not destiny. As the Obama Administration noted in its work on policy responses to advanced automation, economic incentives and public policy can play a significant role in shaping the direction and effects of technological change.
The scope of this responsibility is also global; to be effective, our approach to AI governance will ultimately have to cross borders and leverage international governance bodies.
Governments have a significant obligation to their citizens to think proactively about this, and to develop and maintain a policy environment that emphasizes safety and risk mitigation alongside a prioritization of innovation and accelerated research and development in areas where AI can improve the human condition.
Former Advisor, White House Office of Science and Technology Policy