The inaugural MIT AI Policy Congress recently convened in Massachusetts, bringing together leading researchers, industry insiders, and policy experts for a series of discussions about the future of artificial intelligence. Hosted by MIT’s Internet Policy Research Initiative (IPRI) and the Organisation for Economic Co-operation and Development (OECD), the discussions stressed the importance of collaboration and ethical behavior in the developing field.
The key takeaway is that businesses, governments, and private citizens will need to work together to craft sensible regulations around the use of AI.
“The right interaction between computer science, government, and society at large will help shape the development of new technology to address society’s needs,” said Daniel Weitzner, the founding director of IPRI.
“There is simply too much at stake for all of us not to have a say,” added R. David Edelman, IPRI’s Director of the Project on Technology, the Economy, and National Security.
The trouble, of course, is that different fields will incorporate AI in different ways, and each will require different policy solutions. Privacy concerns are paramount when dealing with an industry like healthcare or a biometric recognition platform like Amazon’s Rekognition, while safety is the bigger concern in transportation, where self-driving cars could soon become a reality. As a result, legislators may need to make decisions about AI on a case-by-case basis.
“Let’s ask ourselves if ‘AI governance’ is the right frame at all — it might just be that in the near future, all governance deals with AI issues, one way or another,” said Edelman.
The Policy Congress was an effort to facilitate that dialogue and to ensure that AI remains a positive influence on society.
(Originally posted on Mobile ID World)