Embedded Vision – A New Concept with New Applications
How is an embedded vision system designed, and what are its key features? These questions and more are answered in our white paper.
Read the White Paper
Discover Embedded Vision
Embedded vision is a hot topic right now, and Steve wants to get in on the fun too. Join him for his first embedded vision project.
Just press play!
What is Embedded Vision?
In recent years, a trend toward miniaturization has taken hold in many areas of electronics. ICs have become ever more highly integrated, and circuit boards have become smaller yet more powerful. As a result, PCs, mobile phones and cameras have grown steadily more compact and more capable. The same trend can be observed in the world of vision technology.
A classic machine vision system consists of an industrial camera and a PC:
Both were significantly larger only a few years ago. Within a short time, however, ever smaller PCs became feasible, and the industry saw the introduction of single-board computers (SBCs), i.e. computers built on a single board. At the same time, camera electronics became more compact and cameras successively smaller. Pushing integration even further, small cameras are now offered without housings, which makes them easy to integrate into compact systems.
Thanks to these two developments, the shrinking of both the PC and the camera, it is now possible to design highly compact vision systems for new applications. These systems are called embedded (vision) systems.
Design and use of an embedded vision system
An embedded vision system consists, for example, of a camera (a so-called board-level camera) connected to a processing board. The processing board takes over the tasks of the PC in the classic machine vision setup. Because processing boards are much cheaper than classic industrial PCs, vision systems can become smaller and also more cost-effective. The primary interfaces for embedded vision systems are USB and Basler's BCON for MIPI or BCON for LVDS.
Embedded vision systems are used in a wide range of applications and devices, such as in medical technology, in vehicles, in industry and in consumer electronics. Embedded systems enable new products to be created and thereby create innovative possibilities in several areas.
Which Embedded Systems Are Available?
A so-called SoC (system on chip) lies at the heart of every embedded processing solution. This is a single chip that integrates the CPU (possibly with multiple cores), the graphics processor, controllers, other special-purpose processors (DSP, ISP) and further components.
It is thanks to these efficient SoC components that embedded vision systems have only recently become available at such a small size and low cost.
Popular embedded systems include single-board computers (SBCs) such as the Raspberry Pi® or DragonBoard®. These are mini-computers with established interfaces (USB, Ethernet, HDMI, etc.) and a feature set similar to a traditional PC or laptop, although their CPUs are of course less powerful.
Embedded vision solutions can also be designed with a so-called SoM (system on module, also called computer on module or CoM). In principle, an SoM is a circuit board which contains the core elements of an embedded processing platform, such as the SoC, storage, power management, etc. An individual carrier board is required for the customization of the SoM to each application (e.g. with the appropriate interfaces). This is connected to the SoM via specific connectors and can be designed and manufactured relatively simply. The SoMs (or the entire system) are cost-effective on the one hand since they are available off-the-shelf, while on the other hand they can also be individually customized through the carrier board.
Completely individual processing boards in the form of a full custom design may also be a sensible choice for high quantities.
Characteristics of Embedded Vision Systems versus Standard Vision Systems
Most of the above-mentioned single board computers and SoMs do not include the x86 family processors common in standard PCs. Rather, the CPUs are often based on the ARM architecture.
In the world of ARM processors, the open-source operating system Linux is widely used. A large number of open-source applications and freely available program libraries exist for Linux. Increasingly, however, x86-based single-board computers are also gaining ground. An important selection criterion is always the space available in the embedded system.
For the software developer, program development for an embedded system is considerably more complex than for a standard PC. In standard software development, the PC used for development is also the main target platform (i.e. the type of computer the program is later intended to run on). With embedded software this is different: the target system generally cannot be used for development because of its limited resources (CPU performance, storage). Embedded software is therefore also developed on a standard PC, where the program is coded and cross-compiled with sometimes complex tool chains. The compiled program must then be copied to the embedded system and subsequently debugged remotely.
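A typical cross-development round trip might look like the following shell sketch. The toolchain prefix, host name and paths are invented for illustration; the actual tools depend on the target platform:

```shell
# Cross-compile on the development PC for an ARM Linux target
# (toolchain prefix and paths are examples, not fixed names)
arm-linux-gnueabihf-gcc -O2 -o myapp main.c

# Copy the binary to the embedded board over the network
scp myapp user@embedded-board:/home/user/

# Start it under gdbserver on the target for remote debugging ...
ssh user@embedded-board 'gdbserver :2345 /home/user/myapp'

# ... and attach from the host with the matching cross-debugger
arm-linux-gnueabihf-gdb myapp -ex 'target remote embedded-board:2345'
```

The key point is that compilation and debugging both happen on the host PC; only the compiled binary runs on the embedded system.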
When developing the software, it should be noted that the hardware concept of the embedded system is oriented to a specific application and thus differs significantly from the universally usable PC.
However, the boundary between embedded and desktop computer systems is sometimes difficult to define. Just think of the popular Raspberry Pi, which on the one hand has many features of an embedded system (ARM-based, single-board construction), but on the other hand can cope with very different tasks and, with the connection of a monitor, mouse and keyboard, is therefore a universal computer.
What Are the Benefits of Embedded Vision Systems?
Much depends on how the embedded vision system is designed. An SBC (single-board computer) is often a good choice because it is a standard product: a small, compact computer that is easy to use. This solution is also attractive for developers who have had little exposure to embedded vision.
On the other hand, however, the single-board computer is a system which contains unused components and thus generally does not allow the leanest system configuration. For that reason, this approach is not very economical in terms of manufacturing costs and is more suitable for small unit numbers, where the development costs must be kept low while the manufacturing costs are of secondary importance.
The leanest setup is obtained with a full-custom design, a system that is highly optimized for individual applications. But this involves high integration costs and the associated high development expenditures. This solution is therefore suitable for large unit numbers.
An approach with a commercially available system on module (SoM) and an appropriately customized carrier board is a compromise between an SBC and a full custom design (see also above: “Which Embedded Systems Are Available?”). The manufacturing costs are not as optimized as in a full custom design (after all, a setup with a carrier board plus a more or less generic SoM is somewhat more complex), but the hardware development costs are considerably lower, since the major part of the hardware development is already completed in the SoM. A module-based approach is therefore a very good choice for medium unit numbers, where manufacturing and development costs must be well balanced.
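The trade-off between development and manufacturing costs can be made concrete with a small break-even calculation. All cost figures below are invented for illustration; only the qualitative shape matters (low development cost but high unit cost for the SBC, the reverse for the full custom design, the SoM in between):

```python
# Illustrative total-cost comparison of the three approaches described
# above. All figures are made up; only the relative ordering matters.
APPROACHES = {
    "SBC":         {"dev": 5_000,   "unit": 120},
    "SoM+carrier": {"dev": 30_000,  "unit": 80},
    "full custom": {"dev": 150_000, "unit": 50},
}

def cheapest(quantity):
    """Approach with the lowest total cost at a given production volume."""
    return min(APPROACHES,
               key=lambda a: APPROACHES[a]["dev"] + quantity * APPROACHES[a]["unit"])

for qty in (100, 2_000, 10_000):
    print(qty, "units ->", cheapest(qty))
```

With these example numbers, the SBC wins at small volumes, the SoM plus carrier board at medium volumes, and the full custom design at large volumes, exactly the pattern described above.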
The benefits of embedded vision systems at a glance:
- Leaner system design
- Light weight
- Cost-effective, because there is no unnecessary hardware
- Lower manufacturing costs
- Low energy consumption
- Small footprint
Which Interfaces Are Suitable for An Embedded Vision Application?
Embedded vision is the technology of choice for many applications. Accordingly, the design requirements are widely diversified. Depending on the specification, Basler offers a variety of cameras with different sensors, resolutions and interfaces.
The three interface technologies that Basler offers for embedded vision systems are:
- USB 3.0 for plug-and-play integration into Windows-based or Linux-based systems (x86 or ARM)
- Basler BCON for MIPI for simple integration and a lean system design via the MIPI CSI-2 interface on Linux ARM-based systems
- Basler BCON for LVDS for a direct camera connection to FPGAs or FPGA SoCs with LVDS inputs
USB 3.0 is the right interface for a simple plug-and-play camera connection and ideal for connecting cameras to single-board computers. The Basler pylon SDK gives you access to the camera within seconds (to images and settings, for example), since USB 3.0 cameras are standard-compliant and GenICam-compatible.
- Easy connection to single-board computers with a USB 2.0 or USB 3.0 port
- Field-tested solutions with Raspberry Pi®, NVIDIA Jetson TK1 and many other systems
- Cost-effective solutions for SoMs with associated carrier boards
- Stable data transfer with a bandwidth of up to 350 MB/s
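As a rough plausibility check, the 350 MB/s bandwidth translates directly into an upper bound on the frame rate of an uncompressed stream. The resolution below is a made-up example, not a product specification:

```python
# Rough frame-rate estimate for a USB 3.0 camera link.
# Assumes an uncompressed stream; the 5 MP resolution is an example.
def max_fps(width, height, bytes_per_pixel, bandwidth_bytes_per_s):
    """Upper bound on frames per second that the link bandwidth allows."""
    frame_size = width * height * bytes_per_pixel
    return bandwidth_bytes_per_s / frame_size

# 5 MP (2592 x 1944) 8-bit mono over a 350 MB/s USB 3.0 link:
fps = max_fps(2592, 1944, 1, 350e6)
print(round(fps, 1), "fps")
```

Real throughput also depends on protocol overhead and the host controller, so the actual achievable frame rate is somewhat lower than this theoretical bound.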
BCON stands for Basler Connectivity: the addition of reliable, highly productive machine vision features to established data transmission standards from the embedded world (such as LVDS or MIPI CSI-2). Thanks to the integration into the world of machine vision standards (GenICam) and the pylon SDK, these base technologies are easier to handle than ever before.
Basler cameras with BCON technology (BCON for LVDS and BCON for MIPI) are equipped with the same 28-pin ZIF connector for flat flex cables. The entire camera functionality is provided through this single connection:
- Image transfer via D-PHY (BCON for MIPI) or LVDS (BCON for LVDS)
- Camera configuration via I²C (BCON for LVDS) or CCI (BCON for MIPI)
- 5 V power supply for the camera module
- I/O functions, e.g. to trigger the camera externally or control a light source
BCON for MIPI
BCON for MIPI enables a direct connection to an embedded processor with a MIPI CSI-2 interface. MIPI CSI-2 is a camera interface standardized by the Mobile Industry Processor Interface Alliance (MIPI); CSI-2 stands for Camera Serial Interface, 2nd generation. CSI-2 is currently the most important camera interface for mobile applications, used for example to connect cell phone camera modules to cell phone processors. Since nearly all processors (SoCs, systems on chip) used in the embedded field provide CSI-2 interfaces, MIPI is an ideal high-bandwidth (up to 750 MB/s) and economical solution for connecting dart camera modules directly to embedded SoCs, as no additional hardware is required.
Thanks to BCON, this technology, originally designed for consumer cell phone modules, is now enhanced with important machine vision features (such as individual image capture and finely differentiated camera configuration options) and integrated into the GenICam standard: MIPI becomes BCON for MIPI.
In combination with Basler's drivers for the supported platforms and the pylon Camera Software Suite, dart camera modules can be operated virtually plug-and-play, without any additional integration effort.
- Simple integration of the dart module into an embedded application, thanks to the easily installed Basler Driver Package.
- Full GenICam compatibility
- pylon as standard API – the dart with BCON for MIPI interface, just like all other Basler cameras, is supported by the unified pylon SDK with exactly the same API. This means that the dart module can be integrated into an application with just a few lines of code. It is also possible to reuse existing code or to port it from another camera interface technology or operating system to BCON for MIPI under Linux/ARM.
- Development kit is available
- Very lean and economical setup: using an inexpensive flat flex cable, the dart module with the BCON for MIPI interface can be directly connected to the CSI-2 input of the target SoC without requiring any other hardware.
- Stable, reliable image data transfer with a bandwidth of up to 750 MB/s
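Whether a given video stream fits into this 750 MB/s budget can be checked with a little arithmetic. The frame geometry and frame rate below are illustrative assumptions, not product specifications:

```python
# Does a raw video stream fit within a 750 MB/s interface budget?
# Frame geometry and frame rate below are illustrative examples.
def required_bandwidth(width, height, bytes_per_pixel, fps):
    """Bytes per second needed for an uncompressed video stream."""
    return width * height * bytes_per_pixel * fps

# Full HD (1920 x 1080), 8-bit mono, 120 fps:
needed = required_bandwidth(1920, 1080, 1, 120)
print(needed / 1e6, "MB/s")  # well under the 750 MB/s limit
```

The same calculation shows quickly when a higher resolution, bit depth or frame rate would exceed the interface bandwidth.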
BCON for LVDS
BCON for LVDS – The LVDS-based interface developed by Basler enables a direct camera connection to processing platforms with LVDS inputs. In particular, this includes logic elements such as FPGAs (Field Programmable Gate Arrays) or FPGA SoCs (FPGAs with integrated CPU units). The option of a direct camera-to-FPGA connection allows for overall designs that are extremely lean and economical.
Basler's pylon SDK is also tailored to work with the BCON for LVDS interface. This makes it very easy to change camera settings, such as the exposure time, gain or pixel formats, from a user application with the help of the pylon API. The image acquisition of the application must be implemented individually as it depends on the hardware used.
- Direct connection via LVDS-based image data exchange to FPGA
- Full compatibility with the GenICam standard
- The data protocols are openly and comprehensively documented
- Development kit with reference implementation available
- Flat flex cable and small connector for applications with maximum space limitations
- Image processing directly on the camera, for the highest image quality without taxing the very limited resources of the downstream processing board
- Stable, reliable data transfer with a bandwidth of up to 252 MB/s
How can an Embedded Vision System be Developed and How can the Camera be Integrated?
Developing an embedded vision system may seem daunting for developers who have had little exposure to embedded vision, but there are many ways to approach it. In particular, the switch from a standard machine vision system to an embedded vision system can be made easy. In addition to its embedded product portfolio, Basler offers many tools that simplify integration.
Find out how you can develop an embedded vision system and how easy it is to integrate a camera in our simpleshow video.
Machine Learning in Embedded Vision Applications
Embedded vision systems often have the task of classifying images captured by the camera: on a conveyor belt, for example, sorting biscuits into round and square ones. In the past, software developers spent a lot of time and energy developing intelligent algorithms designed to classify a biscuit as type A (round) or type B (square) based on its characteristics (features). In this example that may sound relatively simple, but the more complex an object's features are, the more difficult classification becomes.
Machine learning algorithms (e.g. Convolutional Neural Networks, CNNs), by contrast, do not require hand-crafted features as input. If the algorithm is presented with a large number of images of round and square biscuits, together with the information about which image shows which variety, it automatically learns how to distinguish the two types. If it is then shown a new, unknown image, it decides on one of the two varieties based on its "experience" of the images it has already seen. These algorithms run particularly fast on graphics processing units (GPUs) and FPGAs.
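The core idea, that the decision rule is learned from labeled examples rather than hand-coded, can be illustrated with a deliberately tiny stand-in for a CNN: a nearest-centroid classifier over two made-up shape features. The feature values and class names below are invented for illustration:

```python
# Minimal illustration of learning from labeled examples: instead of
# hand-coding rules for "round" vs. "square", we compute a prototype
# (mean feature vector) per class from training data and classify new
# samples by the nearest prototype. A CNN learns far richer features,
# but the principle -- labels in, decision rule out -- is the same.

def train(samples):
    """samples: list of (features, label); returns per-class mean vectors."""
    sums, counts = {}, {}
    for features, label in samples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def classify(centroids, features):
    """Return the label whose centroid is closest (squared distance)."""
    def dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(centroid, features))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Two invented features per biscuit: (circularity, corner_count)
training = [
    ([0.95, 0], "round"), ([0.92, 0], "round"),
    ([0.60, 4], "square"), ([0.55, 4], "square"),
]
centroids = train(training)
print(classify(centroids, [0.90, 1]))  # a new, fairly circular biscuit
```

The classifier never sees an explicit rule such as "round means no corners"; it derives its decision boundary entirely from the labeled training samples.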
What products does Basler provide for embedded vision?
Basler dart cameras with BCON for MIPI, BCON for LVDS and USB 3.0 interfaces
What camera is suitable for my embedded vision application?
Embedded Vision Kits
Are you looking for the right camera for integration into your embedded project? Basler simplifies this process for you with an evaluation kit and a development kit for Basler dart cameras.
Basler pylon Software for embedded vision applications
Basler's tried-and-true pylon Camera Software Suite provides a user-friendly SDK suitable for use in embedded vision products.
Components for embedded vision
An embedded vision system comprises more than just a camera and a processing board. A stable solution requires components tailored to work perfectly with the camera and the application.