Welcome to the latest edition of the FindBiometrics AI Update. Here's the big news on the shifting landscape of AI and identity technology:
The White House is mulling tightened restrictions on the export of AI processors as the government seeks to constrain China’s access to the technology. Citing unnamed officials familiar with the matter, the FT reports that such extended controls could be implemented sometime this summer, and would likely impact the chipmakers AMD and Nvidia in particular. President Biden is also reportedly preparing an executive order that would implement a screening system for investment in Chinese companies.
(Fun fact: a single Nvidia H100 GPU costs $40,000.)
Business software maker Progress has revealed that a Russian-speaking group called "Clop" breached its defenses and gained access to sensitive information from hundreds of companies including British Airways, PwC, and Shell, as well as dozens of American government agencies. Information stolen in the attack includes photographs, addresses, and Social Security numbers, and experts affiliated with LexisNexis Risk Solutions, Sumsub, and AU10TIX warn that this information could be used to create synthetic digital identities, which can in turn be leveraged to fraudulently apply for loans and food stamps. Haywood Talcove, the head of LexisNexis Risk Solutions' government division, estimates that each stolen identity could be used to steal as much as $2 million from government programs alone.
Amazon Web Services and Nvidia have teamed up on a fintech-focused accelerator program for seed-stage startups, with a focus on those working in AI and machine learning technologies. The six-week AWS Global Fintech Accelerator will welcome a cohort of 150 startups, evenly split across the regions of North America/Europe, Middle East/Africa, and Latin America. The program will conclude with an opportunity for 15 startups to pitch to investors and customers. Applications are now open, with a deadline of August 14. The program will run from September 18 to October 23.
Canada's University of Waterloo will use $5 million from its $800 million endowment to establish a VC investment arm focused on early-stage startups. Velocity Fund II (VFII) will be led by general partners Ross Robinson and Akash Vaswani, who hope to raise $25 million in total by Q1 of 2024. The fund will target companies at the pre-seed and early seed stages, with leadership expecting individual investments ranging from $25,000 to $500,000.
Silicon Valley-based Sensory has integrated ChatGPT into the latest version of its VoiceHub web portal for its voice interface platform, offering a streamlined user interface for engineers. Given a simple prompt about the kind of project being pursued, the new ‘Task Explorer’ will generate a variety of potentially useful intents, commands, or phrases, together with their relevant categories, displayed either in a mapping tool or in the language builder interface. Sensory’s core platforms for voice commands and natural language recognition have also been updated. “With VoiceHub 2.0, we’re providing an even more powerful, flexible, and intuitive tool that harnesses the power of generative AI to make short work of creating high-performance speech recognition models,” says CEO Todd Mozer.
Daon has launched a new package of Presentation Attack Detection (PAD) solutions aimed at countering the threat of deepfakes. Offered as a complement to Daon's IdentityX and TrustX platforms, "AI.X" features newly patented algorithms that look for signs of synthetic faces and voices. Daon says it has been working on PAD technology for facial recognition for eight years, and that its new set of high-performing algorithms can detect voice deepfakes by identifying patterns unique to this form of synthetic biometrics, even when the voices are indistinguishable from genuine ones to human ears.
ElevenLabs has raised $19 million in a Series A funding round co-led by Nat Friedman and Daniel Gross (whose new fund was discussed in last week's AI Update) along with Andreessen Horowitz, and featuring the participation of a number of other figures from the tech community. ElevenLabs develops synthetic voice technology, and attained some notoriety earlier this year when a Vice journalist used it to spoof his bank's biometric authentication system. Such uses of its technology prompted the startup to develop a deepfake detection tool, dubbed "AI Speech Classifier", which was made available to certain ElevenLabs partners last week. ElevenLabs CEO Mati Staniszewski says the startup's latest funding will be used to build additional voice products.
The National Institute of Standards and Technology (NIST) has announced plans to launch a new "Public Working Group on Generative AI" aimed at assessing the opportunities and risks of such technologies, and at advising NIST on guidance to help organizations address generative AI's risks. The White House says the working group will build on the NIST AI Risk Management Framework released earlier this year. In putting the group together, NIST will draw on volunteers from across the public and private sectors; interested individuals are asked to apply by July 9.
The Chatbot’s Take:
We asked ChatGPT what it makes of all this news. Here is its answer:
“The news highlights various developments and concerns related to AI. The White House’s consideration of tightened restrictions on AI processors export aims to limit China’s access to the technology. The data breach involving Progress and the Clop group underscores the importance of cybersecurity, as stolen information could be used for fraudulent activities. The partnership between Amazon Web Services and Nvidia in the fintech accelerator program reflects the growing focus on AI and machine learning in the financial industry. The University of Waterloo’s establishment of a VC investment arm demonstrates the recognition of AI startups’ potential. Daon’s launch of Presentation Attack Detection solutions addresses the threat of deepfakes. ElevenLabs securing funding for synthetic voice technology and developing a deepfake detection tool highlights the ongoing efforts to address the challenges associated with synthetic media. Lastly, the formation of the Public Working Group on Generative AI by NIST emphasizes the need to assess the opportunities and risks of generative AI and provide guidance for organizations. Overall, these developments showcase the multifaceted landscape of AI, encompassing technology, security, investment, and regulation.”
June 29, 2023 – by Alex Perala