Sponsored content from Congatec AG

Combining embedded computing, vision technologies and artificial intelligence

Fusion offers more than the sum of its parts

To develop smart embedded systems with AI-based situational awareness, OEMs from a wide range of industries need a set of preconfigured functional building blocks so they do not have to spend time and effort ensuring the interoperability of the individual components.
With more and more applications requiring smart vision technologies, there is a growing need to integrate embedded camera and AI technologies at the embedded level. The effort required is comparable to integrating other peripheral components and would not constitute a big challenge were it not for the added need to integrate AI technologies along with time-synchronized networking, for instance to maintain the real-time capability of distributed robotics control. What is more, building application-ready platforms based on ARM technologies invariably calls for additional effort, since these must be adapted to the specific application requirements. Regardless of which processor technology they use, OEMs always have to bring all the individual components together as smoothly as possible before series production. Ideally, they will find a supplier who can provide a solution platform tailored to their specific needs that offers more than the sum of the individual parts. They can then concentrate fully on the development of new applications.

Heterogeneous solutions offered by processor manufacturers

Caption 1: Smart embedded vision platforms with AI-based situational awareness are composed of many small function blocks whose interoperability must be validated.

The challenges begin with the integration of MIPI-CSI based camera technologies, for example. While these are standard for ARM-based platforms, x86 platforms require special integration effort. AMD and Intel also pursue quite different software support strategies for AI technologies. As with OpenCX/CV, AMD relies on open-source solutions such as ROCm and TensorFlow to support the heterogeneous use of the embedded computing resources needed for deep learning inference algorithms.
Intel, on the other hand, offers customers a distribution of the OpenVINO toolkit that not only optimizes deep-learning inference but also supports many calls to traditional computer vision algorithms implemented in OpenCV – in other words, a fully integrated package. Ultimately, by supporting FPGAs and the Intel® Movidius™ Neural Compute Stick, Intel aims to present in-house alternatives to the expensive GPUs from AMD or Nvidia for inference systems.1

NXP offers answers for the use of AI as well, with its eIQ Machine Learning Software Development Environment. This targets not only the automotive segment but also the industrial environment. It includes inference engines, neural network compilers, vision and sensor solutions, and hardware abstraction layers – all the key components required to deploy a wide range of machine learning algorithms.2 Based on popular open-source frameworks that are also integrated into the NXP development environments for MCUXpresso and Yocto, eIQ is available as an early access release for i.MX RT and i.MX.

Caption 2: The vision-based retail deep learning platform from congatec, NXP and Basler automatically recognizes goods and can fully automate the checkout process in the retail sector.

Embedded computing platforms must match the solution

As these three different AI approaches from the semiconductor manufacturers clearly indicate, OEMs face different implementation requirements depending on the chosen solution path. In any case, the embedded computing hardware must be prepared for whichever software solution is used. This requires careful selection of the individual hardware components, which is why cooperation between semiconductor manufacturers and embedded computing providers is so crucial.
If OEMs work with companies such as congatec – one of the leading providers in this field, which has already presented application-ready bundles based on solutions developed in collaboration with semiconductor manufacturers – they can rest assured that the vital homework has already been done. However, AI implementations are only valuable to the degree that they interoperate with the appropriate embedded vision technologies. For this reason, congatec has also entered into a cooperation with Basler that aims to offer customers perfectly matched components for embedded vision applications. Two very similar application platforms have already emerged from this cooperation: one with NXP technology and the other based on Intel processors.

Three different solutions from one source

The smart embedded image recognition platform based on Intel technology recognizes faces and can analyze them according to age and mood. It is based on Basler's USB 3.0 dart camera module and conga-PA5 Pico-ITX boards with 5th generation Intel® Atom®, Celeron® and Pentium® processors. congatec will also integrate the pylon Camera Software Suite as standard software into suitable kits.

The NXP solution platform – which will be available from Basler in the summer of 2019 – targets deep learning applications in retail to fully automate the checkout process. It recognizes packaging via an AI inference system and is based on a Basler Embedded Vision Kit featuring an NXP i.MX 8QuadMax SoC on a conga-SMX8 SMARC 2.0 Computer-on-Module from congatec, a SMARC 2.0 carrier board, and Basler's dart BCON for MIPI 13 MP camera module.

While the two applications are rather similar, they use highly heterogeneous components whose interoperability must be validated to ready the OEM solution for series production as easily as possible. The same applies to the additional integration of real-time controls, as required for robotic systems and autonomous logistics vehicles.
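Keeping distributed robot controls real-time capable rests on the time-synchronized networking mentioned earlier, typically along the lines of IEEE 1588 (Precision Time Protocol). As a purely illustrative sketch – not part of any congatec platform described here – the classic PTP offset and delay computation from four exchanged timestamps looks like this:

```python
# Illustrative sketch of the IEEE 1588 (PTP) offset/delay calculation that
# underlies time-synchronized networking. The timestamp values below are
# hypothetical (nanoseconds); real deployments use hardware timestamping.

def ptp_offset_and_delay(t1, t2, t3, t4):
    """t1: master sends Sync, t2: slave receives it,
    t3: slave sends Delay_Req, t4: master receives it.
    Assumes a symmetric network path in both directions."""
    offset = ((t2 - t1) - (t4 - t3)) / 2   # slave clock minus master clock
    delay = ((t2 - t1) + (t4 - t3)) / 2    # one-way path delay
    return offset, delay

# Example: slave clock runs 150 ns ahead, one-way path delay is 500 ns.
offset, delay = ptp_offset_and_delay(t1=1_000, t2=1_650, t3=2_000, t4=2_350)
print(offset, delay)  # 150.0 500.0
```

With hardware timestamping in the network controller, this exchange lets each node steer its clock toward the grandmaster, which is what keeps distributed control loops aligned.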
For these applications, congatec has also developed a solution platform in cooperation with Intel and Real-Time Systems. It is based on COM Express Type 6 modules with Intel Xeon E2 processors and integrates four application-ready, preconfigured virtual machines on the RTS hypervisor.

Caption 3: The embedded vision platform for real-time robotics from congatec, Intel and Real-Time Systems fuses heterogeneous building block solutions into a homogeneous solution platform, which helps to consolidate workloads.

Two independent real-time partitions running real-time Linux host an application that balances an inverted pendulum in real time. This installation is a placeholder for any distributed manufacturing robot control. Another Linux partition operates an onboard secure gateway. Such secure gateway integration brings significant cost and space savings compared with an external gateway. At the same time, users are safe from the dangerous backdoors often associated with external gateways, so the vision control can be designed directly as an edge device. To demonstrate application independence and real-time behavior on a single server platform with multiple virtual machines, the Linux partition running the demo vision system can be rebooted without any impact on the virtualized real-time system. In principle, all other operating systems can of course also be rebooted independently on their respective virtual machines, with watchdogs verifying that they are operating correctly.

Partners are key

OEMs using such application-ready solution platforms benefit from significantly reduced development effort, since many functionalities have already been tested and the interoperability of the individual components has been validated.
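The watchdog checks mentioned above can be pictured with a minimal software watchdog: each partition periodically resets its deadline, and a supervisor treats a missed deadline as a failed or rebooting partition. This is a generic, hypothetical sketch of the pattern, not the RTS hypervisor API:

```python
# Minimal sketch of software watchdog supervision, as used to verify that
# independently rebootable partitions are still alive. Hypothetical code;
# the names and structure are not taken from the Real-Time Systems hypervisor.
import time

class Watchdog:
    def __init__(self, timeout_s):
        self.timeout_s = timeout_s
        self.last_pet = time.monotonic()

    def pet(self):
        # The supervised partition calls this periodically (its "heartbeat").
        self.last_pet = time.monotonic()

    def healthy(self):
        # The supervisor checks whether the heartbeat deadline was met.
        return (time.monotonic() - self.last_pet) <= self.timeout_s

wd = Watchdog(timeout_s=0.05)
wd.pet()
print(wd.healthy())   # True: the deadline was just reset
time.sleep(0.1)       # simulate a hung or rebooting partition
print(wd.healthy())   # False: the deadline has passed
```

In a hypervisor setup the check runs in a privileged supervisor partition, which can then restart only the unresponsive virtual machine while the real-time partitions keep running.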
If required, congatec also offers these individual, custom components as a fully developed, series-ready solution platform, including all certifications required for series delivery to the end customer – whether AMD, Intel or NXP based. Customers thus benefit from simplified handling and accelerated design-in of the embedded vision computing components, as well as optimized service and support conditions. At congatec, such projects are often based on Computer-on-Modules, because they make it particularly easy to scale performance in line with requirements and to implement closed-loop engineering strategies. However, it is always possible to fuse module and carrier board into a customized OEM solution, including the development of a customer-specific solution platform with housing and IoT connectivity. In short: a true solution platform portfolio for OEMs.

Author: Zeljko Loncaric is Marketing Engineer at congatec

The author reserves the right to publish this text on the company website or in other, non-competing publications, or in other languages. However, parallel placement in a direct competitive environment is excluded. Alternative agreements can be made at any time if required.

For more information about the real-time hypervisor please visit: https://www.congatec.com/en/technologies/real-time-hypervisor.html
For more information on congatec COM Express modules please visit: https://www.congatec.com/en/products/com-express-type-6.html

________________________________________________________________________________________
1 See also https://www.learnopencv.com/using-openvino-with-opencv/
2 https://www.nxp.com/support/developer-resources/software-center/eiq-ml-development-environment:EIQ
2019-06-17 21:26