KR101714896B1 - Robust Stereo Matching Method and Apparatus Under Radiometric Change for Advanced Driver Assistance System - Google Patents

Robust Stereo Matching Method and Apparatus Under Radiometric Change for Advanced Driver Assistance System

Info

Publication number
KR101714896B1
Authority
KR
South Korea
Prior art keywords
matching error
matching
image
frequency component
processor
Prior art date
Application number
KR1020150127729A
Other languages
Korean (ko)
Inventor
이상근
김용호
구자민
Original Assignee
중앙대학교 산학협력단
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 중앙대학교 산학협력단 filed Critical 중앙대학교 산학협력단
Priority to KR1020150127729A priority Critical patent/KR101714896B1/en
Application granted granted Critical
Publication of KR101714896B1 publication Critical patent/KR101714896B1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06K9/32
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • G06T2207/10021Stereoscopic video; Stereoscopic image sequence

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Measurement Of Optical Distance (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The present invention discloses a stereo matching apparatus and method robust to light-quantity variations for an intelligent driver assistance system. According to an embodiment of the present invention, there is provided a stereo matching apparatus robust to changes in the amount of light for an intelligent driver assistance system, comprising: a processor; and a memory coupled to the processor, wherein the memory stores program instructions executable by the processor to perform luminance mapping on a target image in consideration of the correlation with a reference block of a reference image to correct the brightness information of each RGB channel of each pixel, calculate a brightness-based first matching error using the corrected brightness information of each RGB channel, calculate a gradient-based second matching error for the reference image and the target image, and apply adaptive weights to the first matching error and the second matching error to determine a parallax value.

Description

Technical Field [0001] The present invention relates to a robust stereo matching apparatus and method for an intelligent driver assistance system.

More specifically, the present invention relates to a stereo matching apparatus and method that are robust to changes in the amount of light, for an intelligent driver assistance system.

The automobile industry has provided many conveniences to human beings and continues to develop.

The automobile industry formerly focused mainly on improving the car itself. Recently, research on Advanced Driver Assistance Systems (ADAS) has emerged.

In ADAS, safety and convenience of driver are improved through advanced technologies such as various sensors and cameras, and it is being expanded to research on autonomous vehicles.

Camera-based ADAS can be divided into monocular systems and stereo systems. The monocular system has a simple hardware implementation and supports functions such as lane departure warning (LDW), Adaptive High-Beam Assist (AHB), and Traffic Sign Recognition (TSR).

However, the monocular system cannot directly estimate distance information for the object ahead. Distance information can be estimated with a stereo vision system using multiple cameras.

Recently, ADAS research based on stereo vision system has been actively carried out.

Stereo vision systems can replace radar sensors in Adaptive Cruise Control (ACC). Furthermore, they can be applied to lane departure warning, blind spot detection (BSD), and around view monitoring (AVM) in camera-based systems.

Stereo vision systems go through processes such as image acquisition, camera modeling, corresponding point extraction, and 3D depth information extraction.

In particular, finding stereo correspondence points and estimating three-dimensional information is the most difficult and important process.

However, because a plurality of cameras are actually used, the stereo vision system has some problems in the image acquisition process.

One important problem arises from changes in the amount of light, which are caused by mismatched camera parameters, differing paths from the light source, and geometric inconsistencies that depend on the object and on the differing positions of the cameras.

Conventional stereo matching techniques do not consider this problem and therefore cannot find exact correspondence points between images distorted by light-quantity changes.

In order to solve the problems of the prior art described above, a stereo matching apparatus and method robust to changes in the amount of light for an intelligent driver assistance system, capable of extracting and matching robust correspondence points even when the amount of light in an image changes, is proposed.

According to an embodiment of the present invention, there is provided a stereo matching apparatus robust to changes in the amount of light for an intelligent driver assistance system, comprising: a processor; and a memory coupled to the processor, wherein the memory stores program instructions executable by the processor to perform luminance mapping on a target image in consideration of the correlation with a reference block of the reference image to correct the brightness information of each RGB channel of each pixel, calculate a brightness-based first matching error using the corrected brightness information of each RGB channel, calculate a gradient-based second matching error for the reference image and the target image, and apply an adaptive weight to the first matching error and the second matching error to determine a parallax value.

The luminance mapping may be performed by determining, within the search region of the target image, a candidate region having a correlation with the reference block higher than a preset threshold value.

The second matching error may be calculated using gradient values, gradient magnitudes, and edge directions for each color channel in the horizontal and vertical directions.

The memory may store program instructions executable by the processor to separate the reference image into a high-frequency component and a low-frequency component using a Difference of Gaussians (DoG), and to apply adaptive Gaussian weights of the separated low-frequency and high-frequency components to the first matching error and the second matching error.

The memory may store program instructions executable by the processor to apply an adaptive support weight to the first matching error and the second matching error, to which the adaptive Gaussian weight has been applied, to determine a final matching cost.

The adaptive support weight may be determined according to the color similarity and the geometric proximity between the reference pixel and the surrounding pixels.

According to another aspect of the present invention, there is provided a stereo image matching method comprising the steps of: (a) receiving a reference image and a target image; (b) performing luminance mapping on the target image in consideration of the correlation with a reference block of the reference image to correct the brightness information of each RGB channel of each pixel; (c) calculating a brightness-based first matching error using the corrected brightness information of each RGB channel; (d) calculating a gradient-based second matching error for the reference image and the target image; and (e) applying an adaptive weight to the first matching error and the second matching error to determine a parallax value.

According to yet another aspect of the present invention, there is provided a computer-readable recording medium having recorded thereon a program for performing the method.

According to the present invention, there is an advantage that robust stereo matching can be performed even in an environment where a light amount change occurs by using brightness and gradient information.

FIG. 1 is a flowchart of a stereo matching process according to a preferred embodiment of the present invention.
FIG. 2 is a diagram for explaining the process of finding, within the search region, a candidate region whose correlation exceeds a threshold according to the present embodiment.
FIG. 3 illustrates the process of applying the adaptive Gaussian weight according to the present embodiment.
FIG. 4 is a diagram illustrating the configuration of a stereo image matching apparatus according to an exemplary embodiment of the present invention.

While the invention is susceptible to various modifications and alternative forms, specific embodiments thereof are shown by way of example in the drawings and will herein be described in detail.

It is to be understood, however, that the invention is not to be limited to the specific embodiments, but includes all modifications, equivalents, and alternatives falling within the spirit and scope of the invention. Like reference numerals are used for like elements in describing each drawing.

Hereinafter, embodiments according to the present invention will be described in detail with reference to the accompanying drawings.

The color-consistency assumption is no longer valid for a stereo image altered by light-quantity variations. Brightness-based stereo matching therefore produces inaccurate correspondences when the input images are captured under varying amounts of light.

On the other hand, gradient-based stereo matching maintains good performance even when the amount of light changes, but it has difficulty finding correspondence points in regions of constant brightness, where gradient values are low. This is because it does not reflect the brightness information.

Accordingly, the present invention proposes a method and apparatus capable of performing robust stereo matching even in an environment where a light amount change occurs using brightness and gradient information.

FIG. 1 is a flowchart of a stereo matching process according to a preferred embodiment of the present invention.

The process of FIG. 1 may be performed in an image processing apparatus that is connected to a plurality of cameras and includes a processor and a memory.

Referring to FIG. 1, a stereo image (a reference image and a target image) is input (step 100).

According to the present embodiment, the luminance of the target image is corrected using luminance remapping before the matching cost is calculated (step 102).
Here, color correction of the target image means correcting the color of each pixel by correcting the brightness of each RGB channel at that pixel of the target image.

However, correcting the color of all areas can degrade accuracy.

As shown in FIG. 2, a region of the target image's search area whose correlation with the reference block of the reference image is higher than a preset threshold value is determined as a candidate region, and luminance mapping is applied only to the candidate region.

The search for the candidate region is performed by Equation 1.

[Equation 1]

Here, the image and a color channel c (an RGB channel of a pixel) are given; one operand denotes the reference block and another denotes a pixel of the target block within the search area of the target image, and the reference area and the target area are compared over the search range.

[Equation 2]

Here, the candidate region is the block of the search area having the highest correlation with the reference block.
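The candidate-region search described above can be sketched as follows. Since Equations 1 and 2 appear only as images in the original publication, this sketch assumes a standard normalized cross-correlation (NCC) score; the flattened-block representation and the threshold value are illustrative.

```python
import math

def ncc(block_a, block_b):
    """Normalized cross-correlation between two equal-length intensity lists."""
    n = len(block_a)
    mean_a = sum(block_a) / n
    mean_b = sum(block_b) / n
    num = sum((a - mean_a) * (b - mean_b) for a, b in zip(block_a, block_b))
    den = math.sqrt(sum((a - mean_a) ** 2 for a in block_a) *
                    sum((b - mean_b) ** 2 for b in block_b))
    return num / den if den else 0.0

def candidate_regions(ref_block, search_blocks, threshold=0.9):
    """Keep only target blocks whose correlation with the reference block
    exceeds the preset threshold (the step illustrated in FIG. 2)."""
    return [i for i, blk in enumerate(search_blocks)
            if ncc(ref_block, blk) > threshold]

ref = [10, 20, 30, 40]
targets = [[12, 22, 32, 42],   # same structure, brightness offset -> high NCC
           [40, 30, 20, 10]]   # reversed structure -> low NCC
print(candidate_regions(ref, targets, threshold=0.9))  # [0]
```

Note that NCC is invariant to a brightness offset, which is exactly why a radiometrically shifted but structurally matching block still qualifies as a candidate.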

The color is corrected by applying luminance remapping to the candidate block, as in Equation 3.

[Equation 3]

Here, the quantities involved are the luminance component, the mean, and the standard deviation of each candidate region.
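The luminance remapping of Equation 3 (also an image in the original) can be sketched under the common assumption that it is a mean/standard-deviation transfer: the candidate block's luminance is shifted and scaled to match the reference block's statistics. The exact form used in the patent may differ.

```python
import math

def remap_luminance(target_block, ref_block):
    """Map the target candidate block so its mean and standard deviation
    match those of the reference block (assumed form of Equation 3)."""
    def stats(vals):
        m = sum(vals) / len(vals)
        s = math.sqrt(sum((v - m) ** 2 for v in vals) / len(vals))
        return m, s
    m_t, s_t = stats(target_block)
    m_r, s_r = stats(ref_block)
    scale = s_r / s_t if s_t else 1.0
    return [(v - m_t) * scale + m_r for v in target_block]

# A block that is darker and flatter than the reference is brightened and stretched.
corrected = remap_luminance([10, 20, 30], [110, 130, 150])
print(corrected)  # ≈ [110.0, 130.0, 150.0]
```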

Based on the corrected brightness information of each RGB channel, a brightness-based matching error is calculated as shown in Equation 4 (step 104).

[Equation 4]

Here, the brightness value of each RGB channel c (c ∈ {r, g, b}) is compared between the pixels of the reference block and of the target block; d and T1 are, respectively, a value that bounds the search area and a truncation value that bounds the matching cost.
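A minimal sketch of the brightness-based matching error, assuming Equation 4 is a truncated sum of absolute differences over the RGB channels (its exact form is an image in the original); the truncation value `t1` is illustrative.

```python
def brightness_cost(ref_pixel, tgt_pixel, t1=30):
    """Truncated absolute-difference cost summed over c in {r, g, b}
    (assumed form of the brightness-based matching error, Equation 4)."""
    return sum(min(abs(r - t), t1) for r, t in zip(ref_pixel, tgt_pixel))

# Per-channel differences 5, 2, and 100 (truncated to 30) sum to 37.
print(brightness_cost((100, 120, 140), (105, 118, 240)))  # 37
```

The truncation caps the influence of any single badly mismatched channel, which keeps occlusions and specular outliers from dominating the cost.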

In an acquired image, the brightness of a reference pixel and that of its neighboring pixels are very likely to vary together. Therefore, even when a light-quantity change occurs, the brightness values of the reference pixel and the surrounding pixels are, with high probability, biased in the same way.

Considering this point, using the gradient information extracted from the difference between the reference pixel and the surrounding pixels can provide a robust matching cost even for stereo images altered by changes in illumination or in aperture and exposure time.

Gradients have the advantage of being robust because they use the difference between the reference pixel and surrounding pixels even if the image is changed by the external environment.

For the input images, the gradient information of each RGB channel in the horizontal and vertical directions is calculated using Equations 5 to 8 (step 106).

[Equation 5]

[Equation 6]

[Equation 7]

[Equation 8]

Here, the operand is the brightness value of each RGB channel, and the four results are the horizontal gradient, the vertical gradient, the gradient magnitude, and the edge direction, respectively.
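Equations 5 to 8 can be sketched with simple central differences; the exact difference operator in the original is not recoverable from the text, so the operator here is an assumption, but the magnitude and edge-direction definitions follow the standard formulas.

```python
import math

def gradients(img, x, y):
    """Central-difference gradient at (x, y) of a single-channel image
    given as a list of rows; returns (gx, gy, magnitude, direction)."""
    gx = (img[y][x + 1] - img[y][x - 1]) / 2.0   # horizontal gradient (Eq. 5)
    gy = (img[y + 1][x] - img[y - 1][x]) / 2.0   # vertical gradient (Eq. 6)
    mag = math.hypot(gx, gy)                     # gradient magnitude (Eq. 7)
    direction = math.atan2(gy, gx)               # edge direction in radians (Eq. 8)
    return gx, gy, mag, direction

img = [[0, 0, 0],
       [10, 20, 30],
       [40, 40, 40]]
gx, gy, mag, ang = gradients(img, 1, 1)
print(gx, gy)  # 10.0 20.0
```

For a color image this would be evaluated once per RGB channel, as the text describes.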

The gradient-based matching error is obtained through Equation 9 (step 108).

[Equation 9]

Here, m and the accompanying angle are the magnitude and orientation of the gradient, a weight is applied to m, and f is the period of the orientation term.
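A hedged sketch of the gradient-based matching error of Equation 9: the magnitude difference and a periodic orientation difference are combined with a weight, as the surrounding text describes. The weight `w` and the period `period` are illustrative stand-ins for the patent's parameters.

```python
import math

def gradient_cost(ref_grad, tgt_grad, w=0.5, period=math.pi):
    """Assumed form of the gradient-based matching error (Equation 9):
    weighted magnitude difference plus a direction difference wrapped to
    the period f of the edge-direction term."""
    m_r, a_r = ref_grad   # (magnitude, direction) of the reference pixel
    m_t, a_t = tgt_grad
    d_ang = abs(a_r - a_t) % period
    d_ang = min(d_ang, period - d_ang)  # wrap-around angular distance
    return w * abs(m_r - m_t) + (1 - w) * d_ang

# Equal directions, magnitudes differing by 2: cost is w * 2 = 1.0.
print(round(gradient_cost((10.0, 0.1), (12.0, 0.1)), 3))  # 1.0
```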

An adaptive Gaussian weight is applied to adaptively use the brightness-based matching error and the gradient-based matching error (step 110).

FIG. 3 illustrates a process of applying the adaptive Gaussian weight according to the present embodiment.

First, the reference image is separated into a high-frequency component and a low-frequency component using a Difference of Gaussians (DoG) (step 300).

In each of the separated components, the brightness value at the position of the reference pixel is used to calculate a Gaussian function (step 302) and to adjust the width of the Gaussian kernel (step 304), as shown in Equation 10.

[Equation 10]

Here, p is the reference pixel and t is a surrounding pixel of p.

Next, in order to adaptively adjust the matching error within the Gaussian kernel, an inverse adaptive Gaussian weight is applied as shown in Equation 11.

[Equation 11]

An adaptive Gaussian weight of the low-frequency component and the high-frequency component is applied to the brightness-based and gradient-based matching errors, respectively (step 306).
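Steps 300 to 306 can be sketched on a 1-D signal standing in for the reference image: a Gaussian blur yields the low-frequency component and the residual is the high-frequency component, the simplest Difference-of-Gaussians split. The kernel width is illustrative, and the adaptive Gaussian weighting of Equations 10 and 11 is omitted because its exact form is an image in the original.

```python
import math

def gaussian_blur_1d(signal, sigma):
    """Blur a 1-D signal with a sampled, normalized Gaussian kernel."""
    radius = max(1, int(3 * sigma))
    kernel = [math.exp(-(i * i) / (2 * sigma * sigma))
              for i in range(-radius, radius + 1)]
    norm = sum(kernel)
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for k, w in enumerate(kernel):
            j = min(max(i + k - radius, 0), len(signal) - 1)  # clamp at borders
            acc += w * signal[j]
        out.append(acc / norm)
    return out

def dog_split(signal, sigma=1.0):
    """Separate into low-frequency (blurred) and high-frequency (residual)
    components, the simplest Difference-of-Gaussians split (step 300)."""
    low = gaussian_blur_1d(signal, sigma)
    high = [s - l for s, l in zip(signal, low)]
    return low, high

signal = [0, 0, 10, 0, 0]
low, high = dog_split(signal)
# The residual construction makes reconstruction exact: low + high == signal.
print(all(abs(l + h - s) < 1e-9 for l, h, s in zip(low, high, signal)))  # True
```

The low-frequency component would then weight the brightness-based error and the high-frequency component the gradient-based error, per step 306.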

Referring again to FIG. 1, an adaptive support weight (ASW) is then applied (step 112) to determine the brightness-based and gradient-based final matching costs, respectively.

In a stereo image in which a light-quantity change has occurred, the brightness of each RGB channel differs between corresponding pixels, so it is difficult to obtain an accurate parallax map from a simple similarity measure alone.

To solve this problem, the final matching cost is calculated by applying an adaptive support weight according to the color similarity and the geometric proximity, as in Equations 12 and 13.

[Equation 12]

[Equation 13]

Here, the operands are the final matching costs of the brightness and the gradient, and the support windows of the reference area and the target area, respectively.

The support window is defined by Equations 14 to 16.

[Equation 14]

[Equation 15]

[Equation 16]

Here, the two terms are the color similarity and the geometric proximity between the reference pixel p and the surrounding pixel q, measured in the CIELab color space and in spatial coordinates, respectively. In CIELab, L denotes luminance, and a and b denote chrominance: positive a indicates red and negative a indicates green, while positive b indicates yellow and negative b indicates blue.

The variables of the color-similarity and proximity terms are parameters that control the photometric importance and the geometric importance, respectively, that is, the relative importance of the color similarity and the geometric proximity.
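The adaptive support weight of Equations 14 to 16 is, per the text, a product of a color-similarity term and a geometric-proximity term with parameters controlling their relative importance. This sketch uses exponentials of Euclidean distances with illustrative parameters `gamma_c` and `gamma_g`; it takes the color distance directly on the given triples, whereas the patent specifies the CIELab color space.

```python
import math

def support_weight(color_p, color_q, pos_p, pos_q, gamma_c=7.0, gamma_g=36.0):
    """Adaptive support weight: exp(-dc/gamma_c) * exp(-dg/gamma_g), where
    dc is the color distance and dg the spatial distance between p and q."""
    dc = math.dist(color_p, color_q)   # color-similarity term
    dg = math.dist(pos_p, pos_q)       # geometric-proximity term
    return math.exp(-dc / gamma_c) * math.exp(-dg / gamma_g)

# A nearby, similarly colored pixel supports p more than a distant, different one.
w_near = support_weight((50, 0, 0), (52, 0, 0), (10, 10), (11, 10))
w_far = support_weight((50, 0, 0), (90, 20, 0), (10, 10), (25, 25))
print(w_near > w_far)  # True
```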

For the reference pixel p, the optimal parallax is determined using Equation 17, by applying a winner-takes-all (WTA) strategy to the final brightness and gradient matching costs over the same search range.

[Equation 17]

Here, S_d is the search range over which the parallax d is searched.
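The WTA selection of Equation 17 can be sketched as an argmin over the search range S_d; combining the brightness-based and gradient-based final costs with equal weights is an assumption, since the exact combination is rendered as an image in the original.

```python
def wta_disparity(brightness_costs, gradient_costs, alpha=0.5):
    """Winner-takes-all over the search range: return the disparity index d
    minimizing the combined final matching cost (assumed equal weighting)."""
    combined = [alpha * b + (1 - alpha) * g
                for b, g in zip(brightness_costs, gradient_costs)]
    return min(range(len(combined)), key=combined.__getitem__)

# Costs indexed by candidate disparity d in S_d = {0, 1, 2, 3}.
print(wta_disparity([9, 4, 7, 8], [6, 3, 9, 5]))  # 1
```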

FIG. 4 is a diagram illustrating the configuration of a stereo image matching apparatus according to an exemplary embodiment of the present invention.

As shown in FIG. 4, the stereo image matching apparatus according to the present embodiment may include a processor 400 and a memory 402.

The processor 400 may include a central processing unit (CPU) or another device capable of executing computer programs.

The memory 402 may include a non-volatile storage device such as a fixed hard drive or a removable storage device. The removable storage device may include a compact flash unit, a USB memory stick, and the like. The memory 402 may also include volatile memory, such as various random access memories.

Such memory 402 stores program instructions that are executable by the processor 400.

According to a preferred embodiment of the present invention, the memory 402 stores program instructions that perform luminance mapping on the target image in consideration of the correlation with a reference block of the reference image to correct the brightness information of each RGB channel of each pixel, and that calculate a brightness-based first matching error using the corrected brightness information of each RGB channel.

Meanwhile, the memory 402 may store program instructions executable by the processor 400 to calculate a gradient-based second matching error for the reference image and the target image, and to apply an adaptive weight to the first matching error and the second matching error to determine a parallax value.

In addition, the memory 402 may store program instructions executable by the processor 400 to separate the reference image into a high-frequency component and a low-frequency component using a Difference of Gaussians (DoG), and to apply the adaptive Gaussian weights of the separated low-frequency and high-frequency components to the first and second matching errors.

Further, the memory 402 may store program instructions executable by the processor to apply an adaptive support weight to the first matching error and the second matching error, to which the adaptive Gaussian weight has been applied, to determine a final matching cost.

As described above, the present invention has been described with reference to particular embodiments, specific elements, and drawings, which are provided only to aid the overall understanding of the invention. The present invention is not limited to the embodiments described above, and various modifications and changes may be made by those skilled in the art to which the present invention pertains. Accordingly, the spirit of the present invention should not be construed as being limited to the described embodiments, and the following claims as well as all equivalents thereof fall within the scope of the present invention.

Claims (10)

A stereo matching apparatus robust to changes in the amount of light for an intelligent driver assistance system, the apparatus comprising:
a processor; and
a memory coupled to the processor,
wherein the memory stores program instructions executable by the processor to:
perform luminance mapping on the target image in consideration of the correlation with the reference block of the reference image to correct the brightness information of each RGB channel of each pixel;
calculate a brightness-based first matching error using the corrected brightness information of each RGB channel;
calculate a gradient-based second matching error for the reference image and the target image; and
apply an adaptive weight to the first matching error and the second matching error to determine a parallax value.
The apparatus according to claim 1,
wherein the luminance mapping is performed by determining, as the search region, a candidate region in the target image having a correlation with the reference block higher than a predetermined threshold value.
The apparatus according to claim 1,
wherein the second matching error is calculated using a gradient value, a gradient magnitude, and an edge direction for each color channel in the horizontal and vertical directions.
The apparatus according to claim 1,
wherein the memory stores program instructions executable by the processor to:
separate the reference image into a high-frequency component and a low-frequency component using a Difference of Gaussians (DoG); and
apply the adaptive Gaussian weights of the separated low-frequency and high-frequency components to the first matching error and the second matching error.
The apparatus of claim 4,
wherein the memory stores program instructions executable by the processor to apply an adaptive support weight to the first matching error and the second matching error, to which the adaptive Gaussian weight has been applied, to determine a final matching cost.
The apparatus of claim 5,
wherein the adaptive support weight is determined according to the color similarity and the geometric proximity between the reference pixel and the surrounding pixels.
A stereo image matching method comprising:
(a) receiving a reference image and a target image;
(b) performing luminance mapping on a target image in consideration of correlation with a reference block of the reference image to correct brightness information of each of the RGB channels of each pixel;
(c) calculating a brightness-based first matching error using the brightness information of each of the corrected RGB channels;
(d) calculating a gradient-based second matching error for the reference image and the target image; And
(e) applying an adaptive weighting to the first matching error and the second matching error to determine a parallax value.
The method of claim 7,
wherein the luminance mapping is performed by determining, as the search region, a candidate region in the target image having a correlation higher than a predetermined threshold value.
The method of claim 7,
wherein the step (e) comprises:
(e1) separating the reference image into a high-frequency component and a low-frequency component using a Difference of Gaussians (DoG); and
(e2) applying the adaptive Gaussian weights of the separated low-frequency and high-frequency components to the first matching error and the second matching error.
A computer-readable recording medium having recorded thereon a program for performing the method according to claim 7.
KR1020150127729A 2015-09-09 2015-09-09 Robust Stereo Matching Method and Apparatus Under Radiometric Change for Advanced Driver Assistance System KR101714896B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020150127729A KR101714896B1 (en) 2015-09-09 2015-09-09 Robust Stereo Matching Method and Apparatus Under Radiometric Change for Advanced Driver Assistance System

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020150127729A KR101714896B1 (en) 2015-09-09 2015-09-09 Robust Stereo Matching Method and Apparatus Under Radiometric Change for Advanced Driver Assistance System

Publications (1)

Publication Number Publication Date
KR101714896B1 true KR101714896B1 (en) 2017-03-23

Family

ID=58496123

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020150127729A KR101714896B1 (en) 2015-09-09 2015-09-09 Robust Stereo Matching Method and Apparatus Under Radiometric Change for Advanced Driver Assistance System

Country Status (1)

Country Link
KR (1) KR101714896B1 (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002271818A (en) * 2001-03-06 2002-09-20 Olympus Optical Co Ltd Parallax amount measurement device
JP2009176087A (en) * 2008-01-25 2009-08-06 Fuji Heavy Ind Ltd Vehicle environment recognizing system
JP2009239664A (en) * 2008-03-27 2009-10-15 Fuji Heavy Ind Ltd Vehicle environment recognition apparatus and preceding-vehicle follow-up control system


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
스테레오 정합을 위한 고속화 방법에 대한 연구 (A Study on a Speed-Up Method for Stereo Matching) *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107403448A (en) * 2017-07-26 2017-11-28 海信集团有限公司 Cost function generation method and cost function generating means
CN107452028A (en) * 2017-07-28 2017-12-08 浙江华睿科技有限公司 A kind of method and device for determining target image positional information
CN112348871A (en) * 2020-11-16 2021-02-09 长安大学 Local stereo matching method
CN112348871B (en) * 2020-11-16 2023-02-10 长安大学 Local stereo matching method

Similar Documents

Publication Publication Date Title
US10970566B2 (en) Lane line detection method and apparatus
US9536155B2 (en) Marking line detection system and marking line detection method of a distant road surface area
JP5880703B2 (en) Lane marking indicator, driving support system
CN110493488B (en) Video image stabilization method, video image stabilization device and computer readable storage medium
US11042966B2 (en) Method, electronic device, and storage medium for obtaining depth image
JP6158779B2 (en) Image processing device
EP1403615B1 (en) Apparatus and method for processing stereoscopic images
US20150367781A1 (en) Lane boundary estimation device and lane boundary estimation method
EP2887315B1 (en) Camera calibration device, method for implementing calibration, program and camera for movable body
US11055542B2 (en) Crosswalk marking estimating device
US20120263386A1 (en) Apparatus and method for refining a value of a similarity measure
JP6569280B2 (en) Road marking detection device and road marking detection method
US9747507B2 (en) Ground plane detection
US11164012B2 (en) Advanced driver assistance system and method
WO2018211930A1 (en) Object detection device, object detection method, and computer-readable recording medium
KR101714896B1 (en) Robust Stereo Matching Method and Apparatus Under Radiometric Change for Advanced Driver Assistance System
JP6278790B2 (en) Vehicle position detection device, vehicle position detection method, vehicle position detection computer program, and vehicle position detection system
WO2019167238A1 (en) Image processing device and image processing method
US20200193184A1 (en) Image processing device and image processing method
JP7236857B2 (en) Image processing device and image processing method
CN111191482B (en) Brake lamp identification method and device and electronic equipment
EP3649571A1 (en) Advanced driver assistance system and method
JP7229032B2 (en) External object detection device
KR102003387B1 (en) Method for detecting and locating traffic participants using bird's-eye view image, computer-readerble recording medium storing traffic participants detecting and locating program
WO2014054124A1 (en) Road surface markings detection device and road surface markings detection method

Legal Events

Date Code Title Description
AMND Amendment
AMND Amendment
X701 Decision to grant (after re-examination)
GRNT Written decision to grant