
Author: Ed Wright, Marketing Director of Xilinx Data Center Division
Video analysis is increasingly used in many aspects of our lives: controlling security checkpoints in smart buildings through facial recognition, which offers more reliable security than access cards; monitoring mask wearing and social distancing to meet the anti-epidemic requirements of the COVID-19 pandemic; or monitoring traffic congestion and detecting crime in smart city deployments. Video analysis applications are ubiquitous, and they make our businesses and lives more intelligent, safe, and convenient.
In this era of the Internet of Things, billions of cameras are deployed in buildings and in every corner of our cities. These cameras may see everything, but how much of what they capture can actually be analyzed?
In fact, as information continues to accumulate and grow more complex, extracting insights from it has become harder than ever. Companies that develop and deploy video analytics solutions for smart building and smart city management face one or more of the following major challenges:
1. Cameras passively collect massive amounts of data, but because computing power at the edge is lacking, recordings are often deleted before they are ever analyzed.
2. To extract insights, data is streamed around the clock from hundreds of edge cameras to high-performance servers in the data center or the cloud, incurring millions of dollars in bandwidth costs.
3. Processing the incoming video requires not only the flexibility to handle a dazzling variety of video sources and encoding types, but also the computing power for frame extraction, color-space conversion, and scaling so that frames match what the AI model expects (see the preprocessing sketch after this list).
4. Existing cameras need to be made intelligent immediately, without incurring significant cost and effort and without changing the cameras' existing configuration.
5. Robust real-time video analysis platforms that deliver highly accurate AI models for edge computing are scarce on the market.
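To make challenge 3 concrete, here is a minimal sketch of the per-frame preprocessing a video analytics pipeline typically performs before inference. It assumes OpenCV and NumPy are available; the RTSP URL and the 224x224 RGB normalized input format are hypothetical examples, not Aupera specifications.

```python
# Illustrative sketch only: typical preprocessing before AI inference.
import cv2
import numpy as np

def preprocess_frame(frame_bgr, size=(224, 224)):
    """Convert a decoded BGR frame to the layout a model expects."""
    # Most decoders output BGR or YUV; many models expect RGB.
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    # Scale to the model's fixed input resolution.
    resized = cv2.resize(rgb, size, interpolation=cv2.INTER_LINEAR)
    # Normalize pixel values to [0, 1] as float32.
    return resized.astype(np.float32) / 255.0

cap = cv2.VideoCapture("rtsp://camera.example/stream")  # hypothetical URL
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    tensor = preprocess_frame(frame)
    # ... hand `tensor` to the inference engine ...
cap.release()
```

Even this simple loop consumes real CPU cycles per camera per frame, which is why doing it for hundreds of streams in software alone becomes a bottleneck.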
Aupera video AI analysis for critical infrastructure applications
As an important Xilinx partner, Aupera offers an effective way to solve these problems. Aupera provides highly intelligent video processing solutions from the cloud to the edge, and has long been committed to delivering efficient, highly agile, immediately deployable AI applications based on Xilinx’s Zynq® UltraScale+™ MPSoC and Alveo accelerator platforms. Aupera’s video analysis solution converts passive camera data into actionable intelligence while lowering total cost of ownership, improving energy efficiency, and saving bandwidth. Most importantly, it reduces deployment complexity.
High-performance, low-latency video analysis platform
AI models and deep learning are the key technologies used to gain insight from video-enabled applications. Extracting insights from video accurately demands intensive computation and complex algorithms, and often requires multiple neural networks running in parallel to achieve high accuracy with deterministic latency. With video streams increasing exponentially and ever more cameras being deployed, general-purpose CPUs that rely entirely on software for all processing have become a serious bottleneck. To remove this bottleneck, Aupera developed an innovative distributed micro-node architecture based on the Xilinx Zynq® UltraScale+™ MPSoC, providing a flexible computing platform for video transcoding and real-time analysis.
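As a rough illustration of the micro-node idea (not Aupera’s actual scheduler), the sketch below spreads incoming camera streams across several small processing nodes round-robin, so no single CPU has to decode and analyze everything. The node names and stream URLs are invented for the example.

```python
# Minimal sketch: distribute camera streams across micro-nodes.
from itertools import cycle

micro_nodes = ["node-0", "node-1", "node-2", "node-3"]  # hypothetical nodes
cameras = [f"rtsp://cam-{i}.example/stream" for i in range(16)]

assignment = {}
for cam, node in zip(cameras, cycle(micro_nodes)):
    assignment.setdefault(node, []).append(cam)

for node, cams in assignment.items():
    # Each micro-node decodes and analyzes only its own subset of streams.
    print(f"{node} handles {len(cams)} streams")
```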
The Zynq MPSoC includes a hardened video codec unit that performs low-latency encoding and decoding simultaneously and can process 4K-resolution video at up to 60 frames per second. The programmable logic in Zynq devices provides unique flexibility to run different video AI algorithms efficiently in parallel, delivering accurate results with deterministic, low latency. Compared with GPU-based video analysis solutions, the Zynq MPSoC offers an industry-leading total cost of ownership (TCO) advantage.
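To illustrate the “multiple neural networks in parallel” pattern described above, here is a hedged Python sketch of the fan-out/fan-in structure. The `detect_faces` and `count_people` functions are hypothetical placeholders for compiled accelerator models, and the budget simply mirrors one frame period at 30 fps.

```python
# Sketch: run several models on the same decoded frame in parallel.
from concurrent.futures import ThreadPoolExecutor

def detect_faces(frame):
    # Placeholder for a compiled face-recognition model (hypothetical).
    return {"faces": []}

def count_people(frame):
    # Placeholder for a compiled crowd-counting model (hypothetical).
    return {"people": 0}

MODELS = [detect_faces, count_people]
pool = ThreadPoolExecutor(max_workers=len(MODELS))

def analyze(frame, budget_s=1 / 30):
    # Fan out: every model sees the same decoded frame at once.
    futures = [pool.submit(model, frame) for model in MODELS]
    # Fan in: gather results within roughly one frame period (~33 ms at
    # 30 fps) so end-to-end latency stays bounded and predictable.
    return [f.result(timeout=budget_s) for f in futures]

print(analyze(frame=None))  # demo call with a dummy frame
```

On a CPU these threads compete for the same cores; the point of the programmable logic is that each model can run on its own dedicated hardware, which is what makes the latency deterministic rather than merely bounded on average.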
Flexible framework that can be widely deployed at the edge, in the cloud, and in the data center
Aupera offers a simple deployment model that covers everything from connecting multiple cameras to producing intelligent output through pre-built and customizable neural network models. Aupera’s AI video solution includes a complete built-in software stack: a video gateway, immediately deployable AI models for accelerated inference, and a set of standard APIs that simplify integration with enterprise applications or third-party software platforms.
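The exact API surface is Aupera’s to document; purely as an illustration of what integration through “a set of standard APIs” could look like, the sketch below registers a camera with a gateway over REST and polls for analysis events. The base URL, endpoint paths, and JSON fields are all hypothetical.

```python
# Hypothetical REST integration sketch, not Aupera's documented API.
import requests

BASE = "http://aupera-gateway.local/api/v1"  # hypothetical address

# Register an RTSP camera with the video gateway.
cam = requests.post(f"{BASE}/cameras", json={
    "name": "lobby-entrance",
    "uri": "rtsp://10.0.0.15/stream1",
    "pipeline": "face_recognition",
}).json()

# Poll for inference events produced for that camera.
events = requests.get(f"{BASE}/cameras/{cam['id']}/events").json()
for event in events:
    print(event)
```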
Aupera has created an immediately deployable video analysis appliance that addresses the lack of computing power at the edge, moving computation closer to the camera so that valuable insights are extracted where the video originates. Aupera’s appliances help users improve performance, reduce latency, and cut bandwidth costs.
The Aupera V205 Edge portable device supports 8 channels of 1080p30 video AI, and the Aupera 2601 Edge server supports 64 channels of 1080p30 video AI. For data center and cloud deployments, customers can use the Boston Stream AI appliance, a 2RU server based on the Alveo U30 data center accelerator that supports 112 channels of 1080p30 video AI.
Industry-leading high-accuracy AI model
For developers of smart city and smart building applications, creating an AI model from scratch is time-consuming and costly. Aupera has developed a range of high-accuracy, production-ready pre-trained models that support a variety of video AI workloads, including face recognition, crowd counting, people tracking, virtual fences, vehicle tracking, license plate recognition, and video anomaly detection.
For developers accustomed to using their own customized AI models, Aupera provides a smooth, seamless integration flow for model development and deployment with mainstream frameworks such as Caffe, PyTorch, and TensorFlow.
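As one hedged example of what the first step of such an integration flow often looks like (the actual Aupera toolchain steps may differ), the sketch below exports a PyTorch model to ONNX, a common hand-off format before vendor-specific quantization and compilation. The ResNet-18 model here is a stand-in, not an Aupera example.

```python
# Sketch: export a custom PyTorch model for downstream deployment.
import torch
import torchvision

model = torchvision.models.resnet18(weights=None).eval()
dummy = torch.randn(1, 3, 224, 224)  # one example input for tracing

torch.onnx.export(model, dummy, "model.onnx", opset_version=13)
print("Exported model.onnx for quantization and compilation")
```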
To learn more about the solution, join our online seminar, where you can gain a deeper understanding of video AI analysis market trends and how to deploy Aupera solutions across industries spanning smart cities, smart retail, smart buildings, and smart healthcare. Register for “Real-Time AI Video Analysis for Smart Cities” (xilinx.com).