KR20160126254A - System for detecting road area - Google Patents

System for detecting road area

Info

Publication number
KR20160126254A
KR20160126254A (application number KR1020150057116A)
Authority
KR
South Korea
Prior art keywords
road area
road
target
detecting
vehicle
Prior art date
Application number
KR1020150057116A
Other languages
Korean (ko)
Inventor
김병규
남재현
백장운
양승훈
엄태정
권기구
박미룡
Original Assignee
선문대학교 산학협력단
한국전자통신연구원
Priority date
Filing date
Publication date
Application filed by 선문대학교 산학협력단, 한국전자통신연구원
Priority to KR1020150057116A
Publication of KR20160126254A


Classifications

    • G06K9/00798
    • G06T7/0079
    • G06T7/0085

Abstract

The present invention provides a road area detection system. The road area detection system comprises: a target detection unit for detecting, from image information captured by a video input unit mounted on a vehicle, a target road area in which a road area ahead of the vehicle is to be detected, using previously stored color feature information of a road; and a controller for detecting a contour line by analyzing changes in color brightness in the target road area, classifying the target road area into a road area and a non-road area using the detected contour line, tracking the contour corresponding to the edge of the classified road area, and matching the tracked contour with the image information.

Description

SYSTEM FOR DETECTING ROAD AREA

BACKGROUND OF THE INVENTION

1. Field of the Invention

The present invention relates to a technology for detecting a road area in the running direction of a vehicle and, more particularly, to a system for detecting a road area in image information photographed in front of the vehicle in order to support safe driving.

An Advanced Driver Assistance System (ADAS) is an auxiliary system that assists the vehicle driver in driving safely. Examples of such advanced driver assistance systems include the lane departure warning system and the driver drowsiness prevention system.

In particular, the lane departure warning system detects when the vehicle leaves its lane without the turn signal lamp being turned on and warns the driver of the detected event by means of sound, text, vibration, or the like. To detect lane departure, a process of recognizing the lanes is essential.

However, conventional lane recognition technology has the problem that part of a lane is not detected, or is erroneously detected, when an obstacle is present on the road or the road condition is poor.

Therefore, there is a need for a system that detects the road area in which the vehicle is actually likely to travel and uses that road area to help the driver drive safely, rather than relying on lane information, which has a high possibility of false detection.

In order to solve the above-described problems, the present invention provides a system that detects a contour line by analyzing brightness changes in the image information photographed by a video input unit mounted on a vehicle, divides the image into a road area and a non-road area based on the detected contour line, and detects and provides the road area ahead of the vehicle.

According to an aspect of the present invention, there is provided a road area detection system comprising: a target detection unit for detecting, from image information captured by a video input unit mounted on a vehicle, a target road area in which the road area ahead of the vehicle is to be detected; and a controller for detecting a contour line by analyzing changes in color brightness in the detected target road area, classifying the target road area into a road area and a non-road area using the detected contour line, tracking the contour corresponding to the edge of the classified road area, and matching the tracked contour with the image information.

According to the present invention, information on the road area in which the vehicle is actually likely to travel is provided in real time using the contour information of the image information photographed by the video input unit, which has the advantage of providing the driver with information for safe driving.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a block diagram showing the configuration of a road area detection system according to an embodiment of the present invention.
FIGS. 2 and 3 are diagrams showing the geometric relationship between the image input unit shown in FIG. 1 and the road.
FIG. 4 is a diagram showing the Sobel masks used by the contour detection unit shown in FIG. 1 to detect contours from the input image information.
FIG. 5 is a diagram showing the Bezier curve used to correct the contour detected by the road area detection system shown in FIG. 1.
FIG. 6 is a flowchart illustrating the operation of a road area detection system according to an exemplary embodiment of the present invention.

The above and other objects, advantages, and features of the present invention, and methods of achieving them, will become apparent from the following detailed description of embodiments taken in conjunction with the accompanying drawings.

The present invention may, however, be embodied in many different forms and should not be construed as being limited to the exemplary embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and the scope of the present invention is set forth in the appended claims.

It is to be understood that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit the invention. The singular forms used herein include plural forms unless the context clearly dictates otherwise. The terms "comprises" and/or "comprising," as used herein, specify the presence of stated components, steps, and/or operations, but do not preclude the presence or addition of one or more other components, steps, and/or operations.

Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings.

FIG. 1 is a block diagram showing the configuration of a road area detection system according to an embodiment of the present invention.

Referring to FIG. 1, a road area detection system according to an embodiment of the present invention includes an image input unit 100, a target detection unit 110, a control unit 120, a display unit 130, a feature classification unit 140, and a database 150.

The image input unit 100 is mounted on the vehicle and photographs an area including the front of the vehicle. The image input unit 100 may be an image sensor, a depth camera, or the like.

The target detection unit 110 detects, from the image information photographed by the image input unit 100, a target road area in which the road area ahead of the vehicle is to be detected, using the color feature information of the road previously stored in the database 150.

The control unit 120 includes a contour detection unit 121 for detecting a contour from the color of the target road area detected by the target detection unit 110, a road area classification unit 122 for classifying the target road area into a road area and a non-road area using the contour detected by the contour detection unit 121, and an edge tracing unit 123 for tracing the contour corresponding to the edge of the road area classified by the road area classification unit 122.

The display unit 130 matches the road area updated by the controller 120 with the image information captured by the image input unit 100 and displays the image on the screen.

The feature classification unit 140 analyzes and classifies a plurality of arbitrary road images to generate image models that are insensitive to brightness, and stores them in the database 150.

At this time, the feature classification unit 140 can generate a brightness-robust image model by analyzing the plurality of road images using a histogram analysis method.
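As an illustrative sketch only — the patent does not disclose the exact histogram procedure — one common way to obtain a brightness-robust color model is to accumulate a hue–saturation histogram (discarding the brightness channel) over sample road images. The function name build_road_color_model and the bin counts below are assumptions, not the patent's specification.

```python
import cv2
import numpy as np

def build_road_color_model(road_images):
    """Accumulate a hue-saturation histogram over sample road images (BGR).

    Ignoring the V (brightness) channel of HSV makes the resulting color
    model less sensitive to illumination changes.
    """
    hist = np.zeros((30, 32), dtype=np.float32)  # 30 hue bins x 32 saturation bins
    for bgr in road_images:
        hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
        hist += cv2.calcHist([hsv], [0, 1], None, [30, 32], [0, 180, 0, 256])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    return hist  # stored in the database and reused for back-projection
```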

Hereinafter, the specific operation of the above-described units will be described with reference to FIGS. 2 to 5.

FIGS. 2 and 3 are diagrams showing the geometric relationship between the image input unit 100 and the road shown in FIG. 1. FIG. 4 is a diagram showing the Sobel masks used by the contour detection unit 121 shown in FIG. 1 to detect contours from the input image information. FIG. 5 is a diagram showing the Bezier curve used to correct the contour detected by the road area detection system shown in FIG. 1.

The target detection unit 110 generates back-projected image information from the image captured by the image input unit 100, using the geometric relationship between the image input unit 100 and the road shown in FIGS. 2 and 3.

Specifically, the target detection unit 110 calculates the back-projected image information value by performing an inverse projection transformation according to Equations (1) to (5). (Equations (1) to (5) are given as images in the original publication and are not reproduced here.)

Here, the parameters of Equations (1) to (5) are the angle at which the image input unit 100 is tilted relative to the vehicle, the angle at which the vehicle is tilted relative to the road, the distance from the vehicle to the left road edge, the distance from the vehicle to the right road edge, the height of the image input unit 100 above the ground, and the focal length of the image input unit 100.

In addition, the target detection unit 110 may calculate the back-projected image information value using the average value of the tilt angle of the image input unit 100 rather than the per-frame tilt angle.

This is because using the average tilt angle for every frame, instead of the instantaneous angle, yields more stable back-projected image information.
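Since Equations (1) to (5) are not reproduced here, the following is only a sketch of an equivalent inverse perspective (bird's-eye-view) warp implemented with a four-point homography; the source trapezoid coordinates and output size are illustrative assumptions rather than the patent's parameterization by camera tilt, vehicle tilt, road-edge distances, height, and focal length.

```python
import cv2
import numpy as np

def back_project(frame):
    """Warp a forward-facing camera frame to a top-down (back-projected) view.

    The source trapezoid below is an assumed region of the road surface; the
    patent instead derives the transform from the camera and road geometry.
    """
    h, w = frame.shape[:2]
    src = np.float32([[0.42 * w, 0.62 * h], [0.58 * w, 0.62 * h],
                      [0.95 * w, 0.95 * h], [0.05 * w, 0.95 * h]])
    dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    H = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(frame, H, (w, h))
```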

When the back-projected image information is generated, the target detection unit 110 detects, from the generated back-projected image information, the target road area in which the road area ahead of the vehicle is to be detected, using the color feature information of the road previously stored in the database 150.

At this time, the target detection unit 110 continuously tracks the target road area using the CAMShift algorithm and updates it every frame, thereby coping effectively with brightness or state changes in the image. Since the CAMShift algorithm is a known technique, a detailed description thereof will be omitted.
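A minimal sketch of the CAMShift update, assuming the stored road color histogram (for example, the one from the sketch above) is back-projected onto the current frame; the search window and termination criteria are illustrative.

```python
import cv2

def track_target_road_area(frame, road_hist, window):
    """One CAMShift iteration; `window` is the (x, y, w, h) target road area."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Back-project the stored road color histogram onto the current frame.
    prob = cv2.calcBackProject([hsv], [0, 1], road_hist, [0, 180, 0, 256], 1)
    term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)
    rot_rect, window = cv2.CamShift(prob, window, term)
    return rot_rect, window  # updated every frame to follow the road area
```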

The contour detection unit 121 of the control unit 120 detects a contour, which forms the boundary between the road area and the non-road area, using the brightness changes in the target road area detected by the target detection unit 110.

At this time, the contour detection unit 121 can detect contours in the target road area using the 3×3 Sobel masks shown in FIG. 4 and Equation (6), in which the vertical and horizontal Sobel convolution kernels are convolved with the input image to produce the filtered edge image. (Equation (6) is given as an image in the original publication and is not reproduced here.)

Since the Sobel edge technology is a known technology, a more detailed description will be omitted.
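A minimal sketch of the 3×3 Sobel step, assuming the standard Sobel kernels (the patent shows its masks only in FIG. 4) combined into a gradient magnitude:

```python
import cv2
import numpy as np

def sobel_edges(gray):
    """Convolve the standard 3x3 Sobel kernels and return the edge magnitude."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float32)  # horizontal gradient
    ky = kx.T                                                              # vertical gradient
    g = gray.astype(np.float32)
    gx = cv2.filter2D(g, -1, kx)
    gy = cv2.filter2D(g, -1, ky)
    return cv2.convertScaleAbs(cv2.magnitude(gx, gy))  # 8-bit edge image
```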

The road area classification unit 122 of the control unit 120 classifies the target road area into the road area and the non-road area based on the contour detected by the contour detection unit 121.

For example, the road area classification unit 122 scans the Sobel edge result image calculated by the contour detection unit 121 from bottom to top, compares each pixel with the two pixels on each of its left, right, bottom, and top, and classifies pixels that are not less than the color average value of the target road area detected through the CAMShift algorithm as the non-road area.
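The following sketch illustrates the bottom-to-top scan under the stated rule (a pixel not less than the mean color of the tracked target road area is marked as non-road); the two-pixel neighbor comparison is only partially specified in the text, so it is simplified here to a per-pixel threshold.

```python
import numpy as np

def classify_road(sobel_img, road_mean):
    """Scan the Sobel result bottom-up; True in the returned mask means 'road'."""
    h, w = sobel_img.shape[:2]
    road_mask = np.ones((h, w), dtype=bool)
    for y in range(h - 1, -1, -1):            # bottom to top
        for x in range(w):
            if sobel_img[y, x] >= road_mean:  # at or above the road color average
                road_mask[y, x] = False       # classified as non-road
    return road_mask
```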

The edge tracing unit 123 of the control unit 120 continuously traces, among the contours of the road area classified by the road area classification unit 122, the contour corresponding to the edge of the road. Here, the edge of the road means the contour corresponding to the boundary between the road area and the non-road area.

For example, the edge tracing unit 123 can smoothly detect the edge contour of the road area using the Bezier curve algorithm shown in FIG. 5 and Equation (7).

(Equation (7), the Bezier curve formula, is given as an image in the original publication and is not reproduced here.)
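As an illustration, a cubic Bezier curve (the patent does not state the curve degree) evaluated from four control points sampled along the traced road edge:

```python
import numpy as np

def cubic_bezier(p0, p1, p2, p3, num=50):
    """Evaluate B(t) = (1-t)^3 p0 + 3(1-t)^2 t p1 + 3(1-t) t^2 p2 + t^3 p3.

    Each control point is an (x, y) pair; returns `num` points on the curve.
    """
    t = np.linspace(0.0, 1.0, num)[:, None]
    p0, p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p0, p1, p2, p3))
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)
```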

In addition, the edge tracing unit 123 can use the Kalman filter algorithm to continuously track the points of the curve detected using the Bezier curve algorithm. Equation (8) is the formula of the Kalman filter algorithm.

x̂_k = x̂_(k−1) + K_k (z_k − x̂_(k−1))    (8)

Here, x̂_k is the current estimate, K_k is the Kalman gain, z_k is the measurement value, and x̂_(k−1) is the previous estimate.

The Kalman filter is a recursive filter that estimates the current state by feeding the previous output value back in as the current input value. By using the Kalman filter as described above, an accurate result can be expected with a low amount of computation, which has the advantage that the road area can be detected in real time.

The Bezier curve and the Kalman filter technology are well known in the art, and thus a detailed description thereof will be omitted.
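A minimal scalar sketch of the recursive update in Equation (8), applied to one coordinate of a tracked curve point; the gain update and the noise variances are illustrative assumptions, since the patent gives only the simplified correction step.

```python
def kalman_track(measurements, process_var=1e-3, meas_var=1e-1):
    """Scalar Kalman filter: x_k = x_(k-1) + K_k * (z_k - x_(k-1)).

    `measurements` holds the observed positions of one curve point over
    successive frames; the returned list holds the filtered estimates.
    """
    x, p = float(measurements[0]), 1.0        # initial estimate and error covariance
    estimates = []
    for z in measurements:
        p += process_var                      # predict (constant-position model)
        k = p / (p + meas_var)                # Kalman gain K_k
        x = x + k * (z - x)                   # correct: Equation (8)
        p *= (1.0 - k)                        # update error covariance
        estimates.append(x)
    return estimates
```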

The display unit 130 matches the contour of the road area edge traced by the edge tracing unit 123 with the image photographed by the image input unit 100 and displays the result on the screen.

FIG. 6 is a flowchart illustrating the operation of a road area detection system according to an exemplary embodiment of the present invention.

As shown in FIG. 6, the image input unit photographs the front of the vehicle (S600), and an inverse projection transformation is performed on the captured image (S610).

When the backprojected image information is generated, a target road area for detecting a road area in front of the vehicle is detected from the generated backprojected image information (S620).

At this time, the target road area is continuously tracked using the CAMShift algorithm and updated every frame, thereby coping effectively with brightness or state changes in the image. Since the CAMShift algorithm is a known technique, a detailed description thereof will be omitted.

When the target road area is detected, a contour that forms the boundary between the road area and the non-road area is detected using the brightness changes in the detected target road area (S630).

At this time, the contour line can be detected from the input image by using the Sobel edge extraction technique. Since the Sobel edge extraction technique is a known technique, a detailed description thereof will be omitted.

When a contour line is detected in the input image, the target road area is classified into a road area and a non-road area based on the detected contour (S640).

For example, the Sobel edge result image described above is scanned from bottom to top, each pixel is compared with the two pixels on each of its left, right, bottom, and top, and pixels that are not less than the color average value of the target road area detected through the CAMShift algorithm are classified as the non-road area.

When the road area and the non-road area have been classified, the contour corresponding to the edge of the road is continuously traced among the contours of the classified road area (S650), and the traced contour of the road area is matched with the captured image and displayed on the screen (S660).

At this time, the edge contour of the road area can be detected smoothly using the Bezier curve algorithm. In addition, the Kalman filter algorithm can be used to continuously track the points of the detected curve.

The Bezier curve and the Kalman filter technology are well known in the art, and thus a detailed description thereof will be omitted.

While the present invention has been particularly shown and described with reference to exemplary embodiments thereof, it is to be understood that the invention is not limited to the disclosed exemplary embodiments, but, on the contrary, is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the invention.

Therefore, the embodiments disclosed herein are illustrative and are not intended to limit the scope of the present invention. It is intended that the present invention cover the modifications and variations of this invention provided they come within the scope of the appended claims and their equivalents.

100: image input unit          110: target detection unit
120: control unit              121: contour detection unit
122: road area classification unit    123: edge tracing unit
130: display unit              140: feature classification unit
150: database

Claims (1)

A road area detection system comprising:
a target detection unit for detecting, from image information captured by a video input unit mounted on a vehicle, a target road area in which a road area ahead of the vehicle is to be detected, using previously stored color feature information of a road; and
a controller for detecting a contour line by analyzing changes in color brightness in the detected target road area, classifying the target road area into a road area and a non-road area using the detected contour line, tracking the contour corresponding to the edge of the classified road area, and matching the tracked contour with the image information.
KR1020150057116A 2015-04-23 2015-04-23 System for detecting road area KR20160126254A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020150057116A KR20160126254A (en) 2015-04-23 2015-04-23 System for detecting road area

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020150057116A KR20160126254A (en) 2015-04-23 2015-04-23 System for detecting road area

Publications (1)

Publication Number Publication Date
KR20160126254A true KR20160126254A (en) 2016-11-02

Family

ID=57518150

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020150057116A KR20160126254A (en) 2015-04-23 2015-04-23 System for detecting road area

Country Status (1)

Country Link
KR (1) KR20160126254A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20180082089A (en) * 2017-01-10 2018-07-18 (주)베라시스 Method for Detecting Border of Grassland Using Image-Based Color Information
US10140530B1 (en) 2017-08-09 2018-11-27 Wipro Limited Method and device for identifying path boundary for vehicle navigation
KR20190026481A (en) * 2017-09-05 2019-03-13 전자부품연구원 Vision based Adaptive Cruise Control System and Method
KR20190103508A (en) * 2018-02-12 2019-09-05 경북대학교 산학협력단 Method for extracting driving lane, device and computer readable medium for performing the method

