Chooch AI has released a new computer vision service that allows users to detect and categorize events in any kind of local video stream. Dubbed Chooch Edge AI, the technology was unveiled at this week’s TechCrunch Disrupt startup event in San Francisco and leverages Chooch AI’s object recognition capabilities.
“Our clients requested a fast, easy to install, complete embedded AI solution many times over,” said Chooch AI CEO Emrah Gultekin. “There are locations where connectivity is an issue, and now we can offer detailed inferencing on the edge, with less stress on the overall network.”
To that end, Chooch Edge AI can be installed on any camera device that meets the minimum hardware requirements: a Linux operating system, at least 1 GB of RAM, and an ARM 32-bit, ARM 64-bit, or Intel x86-64 processor. While the device must be connected to the internet during the initial setup, Edge AI can upload data to the cloud asynchronously once it is up and running, which allows it to function in remote locations with poor connectivity.
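That asynchronous behavior is essentially a store-and-forward pattern: inference results are buffered on the device while offline and drained to the cloud when a connection is available. The sketch below illustrates the idea in Python; the class and method names are illustrative assumptions, not Chooch's actual API.

```python
from collections import deque

# Hypothetical sketch of store-and-forward uploading (names are
# assumptions, not Chooch's API): events are buffered locally and only
# flushed to the cloud once connectivity returns.
class AsyncUploader:
    def __init__(self, send):
        self.send = send        # callable that pushes one event to the cloud
        self.buffer = deque()   # local store used while offline

    def record(self, event):
        """Inference results are always written locally first."""
        self.buffer.append(event)

    def flush(self, connected):
        """Drain the backlog in order, but only when a link is available."""
        uploaded = 0
        while connected and self.buffer:
            self.send(self.buffer.popleft())
            uploaded += 1
        return uploaded

sent = []
up = AsyncUploader(sent.append)
up.record({"type": "person", "ts": 1})
up.record({"type": "vehicle", "ts": 2})
up.flush(connected=False)   # offline: nothing leaves the device
up.flush(connected=True)    # back online: backlog drains in order
```

Because detection runs entirely on the device, only compact event records cross the network, which is what reduces the "stress on the overall network" Gultekin describes.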
The system includes a customizable dashboard that lets users specify which kinds of events they want to identify. The options range from common targets like faces and specific objects to words, movements, and other more complex triggers. When Edge AI detects a specified event, it makes a short recording and stores it in a running log.
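The trigger-then-record flow described above can be sketched as a simple loop: scan incoming frames, and when a frame matches a user-selected event, save a short clip to a running log. Everything here (function names, the clip length, the labeled-frame format) is a hypothetical illustration, not Chooch's implementation.

```python
# Hypothetical sketch of the trigger-then-record flow (all names and the
# clip length are assumptions, not Chooch's API).
CLIP_FRAMES = 3  # assumed length of the "short recording"

def watch(frames, wanted, log):
    """Scan a labeled frame stream; on a matching event, log a short clip."""
    for i, (label, frame) in enumerate(frames):
        if label in wanted:
            clip = [f for _, f in frames[i:i + CLIP_FRAMES]]
            log.append({"event": label, "clip": clip})

# Toy stream of (detected_label, frame) pairs; the user has selected
# "face" as the event of interest via the dashboard.
stream = [("none", 0), ("face", 1), ("none", 2), ("none", 3), ("car", 4)]
log = []
watch(stream, wanted={"face"}, log=log)
```

After the run, `log` holds one entry: the "face" event with the three frames starting at the detection, which is the running-log behavior the article describes.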
According to Chooch AI, the platform is well suited to a range of applications, including access control, industrial operations, and robotics. The Edge AI solution will add to a Chooch portfolio that already includes an object recognition SDK that allows customers to train their applications to identify specific items. The SDK debuted in a beta format earlier this year.
October 2, 2019 – by Eric Weiss