An AI research group based at New York University has issued a new paper calling for government regulation of artificial intelligence technologies.
Called AI Now, the group is headed by employees from major tech companies including Google and Microsoft. Its latest publication, simply titled “AI Now Report 2018”, highlights the growing use of AI technologies such as automated facial recognition in everyday life, and points to ethical issues concerning civil rights and demographic bias.
It also lists a range of solutions to the problems that AI technologies pose, with two approaches being key: One is the implementation of “stringent” regulation establishing clear limits on how AI can be used, along with transparency requirements and mechanisms of oversight. The other is a call urging AI companies to “waive trade secrecy” in the interest of opening their algorithms to external auditing.
Noting that public institutions are increasingly turning to AI-driven technologies for governance, the report argued that such organizations “must be able to understand and explain how and why decisions are made, particularly when people’s access to healthcare, housing, welfare, and employment is on the line.”
The report’s publication this week coincided with yet another speech from Microsoft President Brad Smith urging government authorities to regulate the use of facial recognition technology. Smith has now made this argument at multiple speaking events this year, even repeating a similar line about ensuring “that the year 2024 doesn’t look like a page from the novel 1984.” Meanwhile, Google’s CEO has laid out ethical guidelines for his company’s development of AI technology, and facial recognition specialist Trueface has sought to make its own ethical principles a selling point for its brand.
There is an increasingly clear call for regulation of some kind concerning AI and facial recognition, and the call is coming from inside the industry.
December 7, 2018 – by Alex Perala