How does the YUV color coding work?
A CCD or a CMOS sensor alone is not able to detect the color of incident light. In reality, each cavity in the pixel array simply detects the intensity of the incident light for as long as the exposure is active; it cannot distinguish how much of each particular color the light contains. But when a color filter pattern is applied to the sensor, each pixel becomes sensitive to only one color - red, green or blue. Because the human eye is more sensitive to green light than to red or blue light, it is beneficial that the array has twice as many green sensors as red or blue ones. The following image shows the color distribution and arrangement of a “Bayer pattern” filter on a sensor with a size of x * y (with x and y being multiples of 2).
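The twice-as-many-green arrangement can be illustrated with a short sketch. The following Python snippet is an illustration only: it assumes the common RGGB phase of the Bayer pattern, while an actual sensor may start with a different color in its top-left corner.

```python
def bayer_pattern(height, width):
    """Color label ('R', 'G' or 'B') for each pixel of an RGGB Bayer mosaic.

    RGGB is one common phase of the Bayer pattern; a given sensor may
    place a different color in its top-left corner.
    """
    row_even = ['R', 'G']  # even rows alternate red and green
    row_odd = ['G', 'B']   # odd rows alternate green and blue
    return [[(row_even if y % 2 == 0 else row_odd)[x % 2]
             for x in range(width)]
            for y in range(height)]

mosaic = bayer_pattern(4, 4)
for row in mosaic:
    print(' '.join(row))

# Green pixels occur twice as often as red or blue pixels:
flat = [c for row in mosaic for c in row]
print(flat.count('G'), flat.count('R'), flat.count('B'))  # 8 4 4
```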

Since the arrangement of the colors in the Bayer pattern filter is known, an application can use the transmitted raw pixel information to interpolate full RGB color information for each pixel on the camera sensor. Instead of transmitting the raw pixel information, it is also common to use a color coding scheme known as YUV. The block diagram below illustrates the conversion process inside a Basler color camera that supports this feature. To keep things simple, we assume that the sensor collects pixel data at an 8 bit depth.

As a first step, an algorithm calculates the complete RGB values for each and every pixel. This means, for example, that even if a pixel is sensitive to green light only, the camera obtains full RGB information for that pixel by interpolating the intensity information from the adjacent red and blue pixels. This is, of course, just an approximation of the real world. There are many algorithms for performing this RGB interpolation, and the complexity and calculation time of each algorithm determine the quality of the approximation. Basler color cameras have an effective built-in algorithm for this RGB conversion.
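A minimal sketch of such an interpolation, assuming a simple neighbor-averaging (bilinear) scheme rather than the actual algorithm built into any Basler camera:

```python
def interpolate_color(raw, pattern, y, x, color):
    """Estimate one color component at pixel (y, x) by averaging the
    nearest pixels of that color in a 3x3 window - a simple bilinear
    scheme; real cameras use more elaborate, edge-aware algorithms."""
    if pattern[y][x] == color:
        return raw[y][x]          # the pixel measured this color itself
    h, w = len(raw), len(raw[0])
    vals = [raw[ny][nx]
            for ny in range(max(0, y - 1), min(h, y + 2))
            for nx in range(max(0, x - 1), min(w, x + 2))
            if pattern[ny][nx] == color]
    return sum(vals) // len(vals)

# 4x4 raw sensor values (8 bit) with an RGGB Bayer pattern:
pattern = [['R', 'G', 'R', 'G'],
           ['G', 'B', 'G', 'B'],
           ['R', 'G', 'R', 'G'],
           ['G', 'B', 'G', 'B']]
raw = [[200,  60, 200,  60],
       [ 60,  30,  60,  30],
       [200,  60, 200,  60],
       [ 60,  30,  60,  30]]

# Full RGB for the pixel at (1, 1), which measured only blue:
rgb = tuple(interpolate_color(raw, pattern, 1, 1, c) for c in 'RGB')
print(rgb)  # (200, 60, 30)
```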
A disadvantage of RGB conversion is that it triples the amount of data for each pixel. If a single pixel normally has a depth of 8 bits, after conversion it will have a depth of 8 bits per color (red, green and blue) and will thus have a total depth of 24 bits.
YUV coding converts the RGB signal to an intensity component (Y) that ranges from black to white plus two other components (U and V) which code the color. The conversion from RGB to YUV is linear, occurs without loss of information and does not depend on a particular piece of hardware such as the camera. The standard equations for accomplishing the conversion from RGB to YUV are:
Y = 0.299 * R + 0.587 * G + 0.114 * B
U = 0.493 * (B - Y)
V = 0.877 * (R - Y)
In practice, the coefficients in the equations may deviate a bit due to the dynamics of the sensor used in a particular camera. If you want to know how the RGB to YUV conversion is accomplished in a particular Basler color camera, please refer to the camera’s user manual for the correct coefficients. This information is particularly useful if you want to convert the output from a Basler color camera from YUV back to RGB.
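Using the standard coefficients above, the conversion and its inverse can be written directly. This is a sketch based on the standard equations, not on the coefficients of any specific camera model:

```python
def rgb_to_yuv(r, g, b):
    """Convert one RGB pixel to YUV using the standard coefficients.
    (A particular camera may use slightly different coefficients;
    see its user manual.)"""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = 0.493 * (b - y)
    v = 0.877 * (r - y)
    return y, u, v

def yuv_to_rgb(y, u, v):
    """Invert the conversion above. Because the transform is linear
    and lossless, the original RGB values are recovered exactly
    (up to floating-point rounding)."""
    b = y + u / 0.493
    r = y + v / 0.877
    g = (y - 0.299 * r - 0.114 * b) / 0.587
    return r, g, b

y, u, v = rgb_to_yuv(255, 0, 0)            # pure red
print(round(y, 1), round(u, 1), round(v, 1))    # 76.2 -37.6 156.8
print([round(c) for c in yuv_to_rgb(y, u, v)])  # [255, 0, 0]
```

Note that U and V can be negative; digital pixel formats typically shift them by a fixed offset so that they fit into unsigned bytes.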
The diagram below illustrates how color can be coded with the U and V components and how the Y component codes the intensity of the signal.

This type of conversion is also known as YUV 4:4:4 sampling. With YUV 4:4:4, each pixel gets brightness and color information and the “4:4:4” indicates the proportion of the Y, U and V components in the signal.
To reduce the average amount of data transmitted per pixel from 24 bits to 16 bits, it is more common to include the color information for only every other pixel. This type of sampling is also known as YUV 4:2:2 sampling. Since the human eye is much more sensitive to intensity than it is to color, this reduction is almost invisible even though the conversion represents a real loss of information. YUV 4:2:2 digital output from a Basler color camera has a depth that alternates between 24 bits per pixel and 8 bits per pixel (for an average bit depth of 16 bits per pixel).
As shown in the table below, when a Basler camera is set for YUV 4:2:2 output, each quadlet of image data transmitted by the camera will contain data for two pixels. K represents the number of a pixel in a frame and one row in the table represents a quadlet of data transmitted by the camera.

For every other pixel, both the intensity information and the color information are transmitted and this results in a 24 bit depth for those pixels. For the remaining pixels, only the intensity information is preserved and this results in an 8 bit depth for them. As you can see, the average depth per pixel is 16 bits.
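The pairing scheme can be sketched as follows. The byte order used here (U, Y, V, Y per pixel pair, i.e. "UYVY") is one common layout, and the U and V values are assumed to be already offset into the unsigned 0-255 range; the actual byte order and offset depend on the camera's pixel format.

```python
def pack_yuv422(yuv_pixels):
    """Pack full (Y, U, V) pixels into a YUV 4:2:2 byte stream.

    For each pair of pixels, both Y values are kept but the U and V
    values of only the first pixel are transmitted. The "UYVY" byte
    order used here is one common layout; the actual order depends
    on the camera's pixel format. All values are assumed to be
    unsigned 8 bit (U and V already offset into the 0-255 range).
    """
    assert len(yuv_pixels) % 2 == 0, "needs an even number of pixels"
    stream = bytearray()
    for k in range(0, len(yuv_pixels), 2):
        y0, u0, v0 = yuv_pixels[k]
        y1, _, _ = yuv_pixels[k + 1]       # color of pixel k+1 is dropped
        stream += bytes([u0, y0, v0, y1])  # one quadlet per pixel pair
    return bytes(stream)

pixels = [(100, 110, 120), (105, 111, 119), (90, 100, 130), (95, 99, 131)]
data = pack_yuv422(pixels)
print(len(data))                    # 8 bytes for 4 pixels
print(8 * len(data) / len(pixels))  # 16.0 bits per pixel on average
```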
On all Basler color cameras, you are free to choose between an output mode that provides the raw sensor output for each pixel and a high-quality YUV 4:2:2 signal. Some cameras provide RGB/BGR data as well.