WO2018097595A1 - Method and device for providing driving information by using camera image - Google Patents

Method and device for providing driving information by using camera image Download PDF

Info

Publication number
WO2018097595A1
Authority
WO
WIPO (PCT)
Prior art keywords
vehicle
obstacle
driving
camera
image data
Prior art date
Application number
PCT/KR2017/013347
Other languages
French (fr)
Korean (ko)
Inventor
기석철
이신재
윤형석
박태형
Original Assignee
충북대학교 산학협력단
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 충북대학교 산학협력단 filed Critical 충북대학교 산학협력단
Priority to CN201780025422.6A priority Critical patent/CN109070882B/en
Publication of WO2018097595A1 publication Critical patent/WO2018097595A1/en

Links

Images

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60RVEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R1/00Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/02Rear-view mirror arrangements
    • B60R1/08Rear-view mirror arrangements involving special optical features, e.g. avoiding blind spots, e.g. convex mirrors; Side-by-side associations of rear-view and other mirrors
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W30/00Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units, or advanced driver assistance systems for ensuring comfort, stability and safety or drive control systems for propelling or retarding the vehicle
    • B60W30/08Active safety systems predicting or avoiding probable or impending collision or attempting to minimise its consequences
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/02Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to ambient conditions
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08Interaction between the driver and the control system
    • B60W50/14Means for informing the driver, warning the driver or prompting a driver intervention
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60RVEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R2300/00Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R2300/10Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of camera system used
    • B60R2300/105Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of camera system used using multiple cameras
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60RVEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R2300/00Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R2300/30Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of image processing
    • B60R2300/301Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of image processing combining image information with other obstacle sensor information, e.g. using RADAR/LIDAR/SONAR sensors for estimating risk of collision
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W2050/0001Details of the control system
    • B60W2050/0043Signal treatments, identification of variables or parameters, parameter estimation or state estimation
    • B60W2050/0059Signal noise suppression
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08Interaction between the driver and the control system
    • B60W50/14Means for informing the driver, warning the driver or prompting a driver intervention
    • B60W2050/143Alarm means
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08Interaction between the driver and the control system
    • B60W50/14Means for informing the driver, warning the driver or prompting a driver intervention
    • B60W2050/146Display means
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W2420/00Indexing codes relating to the type of sensors based on the principle of their operation
    • B60W2420/40Photo or light sensitive means, e.g. infrared sensors
    • B60W2420/403Image sensing, e.g. optical camera
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W2552/00Input parameters relating to infrastructure
    • B60W2552/50Barriers
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W2554/00Input parameters relating to objects
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W2554/00Input parameters relating to objects
    • B60W2554/80Spatial relation or speed relative to objects
    • B60W2554/802Longitudinal distance
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60YINDEXING SCHEME RELATING TO ASPECTS CROSS-CUTTING VEHICLE TECHNOLOGY
    • B60Y2300/00Purposes or special features of road vehicle drive control systems
    • B60Y2300/08Predicting or avoiding probable or impending collision

Definitions

  • The present invention relates to a technology for providing driving information in a vehicle, and more particularly, to a driving information providing technology that recognizes obstacles and detects regions where obstacles may be present using images from a camera mounted on the vehicle.
  • In general, a vehicle-approach obstacle detection apparatus or system detects obstacles very close to the vehicle through sensors such as ultrasonic or laser sensors and notifies the driver.
  • Camera-based obstacle detection, unlike ultrasonic or laser sensors, can provide detailed information about an obstacle in addition to detecting its presence.
  • AVM around view monitoring
  • In a wide-angle camera, radial distortion, whose degree is determined by the refractive index of the convex lens, inevitably occurs; this can cause serious errors in image recognition by the image processing apparatus as well as visual distortion of the displayed image.
  • An around-view monitoring system typically consists of four cameras: one at the front of the vehicle, one at the rear, and one under each of the left and right side mirrors.
  • The 4-channel AVM (Around View Monitoring) system used for parking and narrow-road driving merely shows the driver a synthesized image; however, by analyzing the video from each input channel and recognizing obstacles likely to be collided with while driving, it can provide a new safety function.
  • ADAS advanced driver assistance systems
  • The existing MOD development method detects motion regions in the image, but has difficulty recognizing single-color or simple-pattern walls and pillars, which are hard to detect as motion regions, as obstacles.
  • The present invention has been made to solve the above problems, and an object of the present invention is to provide a method and apparatus for providing driving information using camera images that can recognize single-color or simple-pattern walls and pillars, which are difficult to classify as obstacles, as obstacles, and that mitigate recognition errors in which objects such as lane markings and manhole covers in the drivable area are misinterpreted as moving objects.
  • The present invention provides a driving information providing method for a vehicle equipped with one or more cameras, the method comprising: detecting an obstacle by processing external image data of the vehicle received from the camera in real time; predicting a collision risk based on the position of the detected obstacle and its distance from the vehicle; detecting a drivable area by processing the external image data received from the camera using a Free Space Detection (FSD) algorithm; and providing drivability information based on the collision risk and the drivable area.
  • FSD Free Space Detection
  • As a result of performing the FSD algorithm on the external image data, an object that poses no collision risk within the obtained free space is detected using an image recognition method and included in the drivable area.
  • In this case, the image recognition method may detect objects including lane markings and manhole covers within the free space as objects without collision risk.
  • A motion region may be detected from the external image data of the vehicle received from the camera based on a Motion History Image (MHI) algorithm, and the obstacle may be detected by processing the detected motion region in real time.
  • MHI motion history image
  • the position of the detected obstacle and the coordinates of the vehicle itself may be obtained using a distance transformation matrix, and the distance between the obstacle and the vehicle itself may be calculated using the obtained coordinate values.
  • the vehicle may be a vehicle to which an around view monitoring system or a camera mirror system is applied.
  • The present invention also provides a driving information providing apparatus for a vehicle equipped with one or more cameras, comprising: a controller that detects obstacles by processing external image data of the vehicle received from the camera in real time, predicts a collision risk based on the position of the detected obstacle and its distance from the vehicle, detects a drivable area by processing the external image data using a Free Space Detection (FSD) algorithm, and provides drivability information; a display unit that displays the drivability information under the control of the controller; and an alarm unit that warns of the collision risk under the control of the controller.
  • FSD free space detection
  • As a result of performing the FSD algorithm on the external image data, the controller may detect objects without collision risk within the obtained free space using an image recognition method and include them in the drivable area.
  • The controller may detect objects including lane markings and manhole covers within the free space as objects without collision risk by using the image recognition method.
  • The controller may detect a motion region from the external image data of the vehicle received from the camera based on a Motion History Image (MHI) algorithm, and detect obstacles by processing the detected motion region in real time.
  • MHI motion history image
  • the controller may acquire the position of the detected obstacle and the coordinate value of the vehicle itself using the distance transformation matrix, and calculate the distance between the obstacle and the vehicle itself using the obtained coordinate value.
  • the vehicle may be a vehicle to which an around view monitoring system or a camera mirror system is applied.
  • an image processing speed is improved based on a motion history image (MHI) algorithm, thereby enabling real-time processing of 30 frames per second (FPS) or more.
  • MHI motion history image
  • the obstacle detection function can be implemented in a camera mirror system that is expected to be commercialized in the future.
  • FIG. 1 is a block diagram illustrating an internal configuration of a driving information providing apparatus in a vehicle according to an embodiment of the present invention.
  • FIG. 2 is a flowchart illustrating a driving information providing method in a vehicle according to an exemplary embodiment of the present invention.
  • FIG. 3 is a diagram illustrating an image for explaining an obstacle detection process based on an MHI algorithm according to an embodiment of the present invention.
  • FIG. 4 is a view showing an image for explaining a collision risk according to an embodiment of the present invention.
  • FIG. 5 is a diagram illustrating an image for explaining a process of detecting a driving possible region using an FSD algorithm according to an embodiment of the present invention.
  • A driving information providing method for a vehicle equipped with one or more cameras comprises: detecting an obstacle by processing external image data of the vehicle received from the camera in real time; predicting a collision risk based on the position of the detected obstacle and its distance from the vehicle; detecting a drivable area by processing the external image data received from the camera using a Free Space Detection (FSD) algorithm; and providing drivability information based on the collision risk and the drivable area.
  • FSD free space detection
  • the present invention relates to an apparatus and method for providing driving information in a vehicle equipped with one or more cameras.
  • the present invention may be implemented for a vehicle to which an around view monitoring system or a camera mirror system is applied.
  • FIG. 1 is a block diagram illustrating an internal configuration of a driving information providing apparatus in a vehicle according to an embodiment of the present invention.
  • a driving information providing apparatus in a vehicle includes one or more cameras 110, a control unit 120, a display unit 130, and an alarm unit 140.
  • The controller 120 detects obstacles by processing external image data of the vehicle received from the camera 110 in real time. It then predicts the collision risk based on the position of the detected obstacle and its distance from the vehicle. In addition, using the Free Space Detection (FSD) algorithm, it processes the external image data received from the camera 110 to detect the drivable area, and provides drivability information based on the collision risk and the drivable area.
  • FSD Free Space Detection
  • the display unit 130 serves to display driving availability information under the control of the controller 120.
  • The alarm unit 140 warns of the collision risk under the control of the controller 120.
  • As a result of performing the FSD algorithm on the external image data, the controller 120 detects objects without collision risk within the obtained free space using an image recognition method and includes them in the drivable area.
  • For example, the controller 120 may detect objects including lane markings and manhole covers within the free space as objects without collision risk using the image recognition method.
  • The controller 120 detects a motion region from the external image data of the vehicle received from the camera 110 based on a Motion History Image (MHI) algorithm, and detects obstacles by processing the detected motion region in real time.
  • MHI motion history image
  • The controller 120 may obtain the position of the detected obstacle and the coordinates of the vehicle itself using the distance transformation matrix, and calculate the distance between the obstacle and the vehicle using the obtained coordinate values.
  • FIG. 2 is a flowchart illustrating a driving information providing method in a vehicle according to an exemplary embodiment of the present invention.
  • The controller 120 detects an obstacle by processing the external image data of the vehicle received from the camera 110 in real time (S210).
  • Then, the collision risk is predicted based on the position of the detected obstacle and its distance from the vehicle (S220).
  • The controller 120 also detects the drivable area by processing the external image data of the vehicle received from the camera 110 using a Free Space Detection (FSD) algorithm (S230).
  • FSD free space detection
  • As a result of performing the FSD algorithm on the external image data, objects without collision risk within the obtained free space are detected using an image recognition method and included in the drivable area.
  • For example, the image recognition method may detect objects including lane markings and manhole covers within the free space as objects without collision risk.
  • In the obstacle detection step, a motion region is detected from the external image data of the vehicle received from the camera 110 based on a Motion History Image (MHI) algorithm, and the obstacle is detected by processing the detected motion region in real time.
  • MHI motion history image
  • In the collision risk prediction step, the position of the detected obstacle and the coordinates of the vehicle itself may be obtained using the distance transformation matrix, and the distance between the obstacle and the vehicle may be calculated using the obtained coordinate values.
  • FIG. 3 is a diagram illustrating an image for explaining an obstacle detection process based on an MHI algorithm according to an embodiment of the present invention.
  • In FIG. 3, the region marked in red is the motion region in which motion was detected.
  • Nearby obstacles such as moving vehicles in a parking lot are detected by high-speed image processing in real time.
  • FIG. 4 is a view showing an image for explaining a collision risk according to an embodiment of the present invention.
  • A distance transformation matrix is illustrated in FIG. 4 (a); in the present invention, the distance transformation matrix is used to obtain coordinate values for the distance between the red region shown in FIG. 4 (b) and the vehicle itself.
  • FIG. 5 is a diagram illustrating an image for explaining a process of detecting a driving possible region using an FSD algorithm according to an embodiment of the present invention.
  • The MeanShift algorithm is applied to group regions with similar color pixel values in the original image obtained from the camera 110.
  • In the case of a fisheye lens, since the vehicle body is visible, a patch region is set in the area immediately in front of the vehicle body, and this region is assumed to include the road.
  • The Flood Fill algorithm is then applied to cluster color values similar to those of the patch region. At this time, the Flood Fill algorithm is set to mark only the patch region.
  • The existing problems can be solved by performing the FSD algorithm and, through an image recognition method, detecting objects without collision risk, such as lane markings and manhole covers, within the FSD region.
  • Manhole cover detection can be implemented by applying a learning-based algorithm.
  • Single-color or simple-pattern walls that were not previously recognized as collision-risk obstacles can be recognized as obstacles.
  • When obstacle recognition based on the MHI algorithm and the drivable area obtained with the FSD algorithm are used together, drivability can be determined by processing the video received from the front camera in real time while driving forward.

Abstract

According to the present invention, a method for providing driving information in a vehicle equipped with one or more cameras comprises the steps of: detecting an obstacle by processing external image data of the vehicle received from the cameras in real time; predicting a collision risk based on the position of the detected obstacle and its distance from the vehicle; detecting a drivable area by processing the external image data received from the cameras using a free space detection (FSD) algorithm; and providing drivability information based on the collision risk and the drivable area. According to the present invention, single-color or simple-pattern walls and pillars, which are difficult to classify as obstacles, can be recognized as obstacles.

Description

[Amendment under Rule 26, 30.11.2017] Method and device for providing driving information using camera images
The present invention relates to a technology for providing driving information in a vehicle, and more particularly, to a driving information providing technology that recognizes obstacles and detects regions where obstacles may be present using images from a camera mounted on the vehicle.
Various types of obstacles exist around a vehicle, and obstacle detection is attempted in order to protect the vehicle from such potential collision hazards.
In general, a vehicle-approach obstacle detection apparatus or system detects obstacles very close to the vehicle through sensors such as ultrasonic or laser sensors and notifies the driver.
However, detecting approaching obstacles from images captured by a camera installed on the vehicle, rather than from ultrasonic or laser sensors, requires an approach different from that used with such sensors.
Camera-based obstacle detection, unlike ultrasonic or laser sensors, can provide detailed information about an obstacle in addition to detecting its presence. In particular, when the camera belongs to an Around View Monitoring (AVM) system, a wide-angle camera is typically used to secure a wide field of view. In a wide-angle camera, however, radial distortion, whose degree is determined by the refractive index of the convex lens, inevitably occurs; this can cause serious errors in image recognition by the image processing apparatus as well as visual distortion of the displayed image.
In general, an around view monitoring system consists of four cameras: one at the front of the vehicle, one at the rear, and one under each of the left and right side mirrors. The 4-channel AVM (Around View Monitoring) system used for parking and narrow-road driving merely shows the driver a synthesized image; however, by analyzing the video from each input channel and recognizing obstacles likely to be collided with while driving, it can provide a new safety function.
Recently, automakers developing advanced driver assistance systems (ADAS) have been trying to commercialize AVM systems with a Moving Object Detection (MOD) function that recognizes and warns of obstacles posing a collision risk.
The existing MOD development method detects motion regions in the image, but has difficulty recognizing single-color or simple-pattern walls and pillars, which are hard to detect as motion regions, as obstacles.
Conversely, there is also the problem that objects such as lane markings or manhole covers located in the drivable area (free space) are mistaken for moving objects, causing recognition errors.
The present invention has been made to solve the above problems, and an object of the present invention is to provide a method and apparatus for providing driving information using camera images that can recognize single-color or simple-pattern walls and pillars, which are difficult to classify as obstacles, as obstacles, and that mitigate recognition errors in which objects such as lane markings and manhole covers in the drivable area are misinterpreted as moving objects.
The objects of the present invention are not limited to those mentioned above, and other objects not mentioned will be clearly understood by those skilled in the art from the following description.
To achieve the above objects, the present invention provides a driving information providing method for a vehicle equipped with one or more cameras, comprising: detecting an obstacle by processing external image data of the vehicle received from the camera in real time; predicting a collision risk based on the position of the detected obstacle and its distance from the vehicle; detecting a drivable area by processing the external image data received from the camera using a Free Space Detection (FSD) algorithm; and providing drivability information based on the collision risk and the drivable area.
In the detecting of the drivable area, as a result of performing the FSD algorithm on the external image data, an object without collision risk within the obtained free space may be detected using an image recognition method and included in the drivable area. In this case, the image recognition method may detect objects including lane markings and manhole covers within the free space as objects without collision risk.
In the detecting of the obstacle, a motion region may be detected from the external image data of the vehicle received from the camera based on a Motion History Image (MHI) algorithm, and the obstacle may be detected by processing the detected motion region in real time.
In the predicting of the collision risk, the position of the detected obstacle and the coordinates of the vehicle itself may be obtained using a distance transformation matrix, and the distance between the obstacle and the vehicle may be calculated using the obtained coordinate values.
The vehicle may be a vehicle to which an Around View Monitoring system or a Camera Mirror System is applied.
The present invention also provides a driving information providing apparatus for a vehicle equipped with one or more cameras, comprising: a controller that detects an obstacle by processing external image data of the vehicle received from the camera in real time, predicts a collision risk based on the position of the detected obstacle and its distance from the vehicle, detects a drivable area by processing the external image data received from the camera using a Free Space Detection (FSD) algorithm, and provides drivability information based on the collision risk and the drivable area; a display unit that displays the drivability information under the control of the controller; and an alarm unit that warns of the collision risk under the control of the controller.
As a result of performing the FSD algorithm on the external image data, the controller may detect objects without collision risk within the obtained free space using an image recognition method and include them in the drivable area. In this case, the controller may detect objects including lane markings and manhole covers within the free space as objects without collision risk using the image recognition method.
The controller may detect a motion region from the external image data of the vehicle received from the camera based on a Motion History Image (MHI) algorithm, and detect the obstacle by processing the detected motion region in real time.
The controller may obtain the position of the detected obstacle and the coordinates of the vehicle itself using a distance transformation matrix, and calculate the distance between the obstacle and the vehicle using the obtained coordinate values.
The vehicle may be a vehicle to which an Around View Monitoring system or a Camera Mirror System is applied.
According to the present invention, single-color or simple-pattern walls and pillars that are difficult to classify as obstacles can be recognized as obstacles.
In addition, according to the present invention, the problem of recognition errors caused by objects such as lane markings or manhole covers in the drivable area being mistaken for moving objects is improved.
Furthermore, according to the present invention, the image processing speed is improved based on the Motion History Image (MHI) algorithm, enabling real-time processing at 30 frames per second (FPS) or more.
When the present invention is applied, it is expected that the obstacle detection function can also be implemented in Camera Mirror Systems, which are expected to be commercialized in the future.
FIG. 1 is a block diagram showing the internal configuration of a driving information providing apparatus in a vehicle according to an embodiment of the present invention.
FIG. 2 is a flowchart showing a driving information providing method in a vehicle according to an embodiment of the present invention.
FIG. 3 shows images for explaining the MHI-algorithm-based obstacle detection process according to an embodiment of the present invention.
FIG. 4 shows images for explaining the collision risk according to an embodiment of the present invention.
FIG. 5 shows images for explaining the process of detecting the drivable area using the FSD algorithm according to an embodiment of the present invention.
The present invention provides a driving information providing method for a vehicle equipped with one or more cameras, comprising: detecting an obstacle by processing external image data of the vehicle received from the camera in real time; predicting a collision risk based on the position of the detected obstacle and its distance from the vehicle; detecting a drivable area by processing the external image data received from the camera using a Free Space Detection (FSD) algorithm; and providing drivability information based on the collision risk and the drivable area.
As the present invention allows for various changes and numerous embodiments, particular embodiments are illustrated in the drawings and described in detail in the written description. However, this is not intended to limit the present invention to specific embodiments, and it should be understood to include all modifications, equivalents, and substitutes falling within the spirit and technical scope of the present invention.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit the present invention. Singular expressions include plural expressions unless the context clearly indicates otherwise. In this application, terms such as "comprise" or "have" are intended to indicate the presence of the features, numbers, steps, operations, components, parts, or combinations thereof described in the specification, and should not be understood to preclude the presence or possible addition of one or more other features, numbers, steps, operations, components, parts, or combinations thereof.
Unless defined otherwise, all terms used herein, including technical or scientific terms, have the same meaning as commonly understood by one of ordinary skill in the art to which the present invention belongs. Terms such as those defined in commonly used dictionaries should be construed to have meanings consistent with their meanings in the context of the related art, and shall not be construed in an idealized or overly formal sense unless expressly so defined in this application.
In the description with reference to the accompanying drawings, the same components are given the same reference numerals regardless of the figure, and duplicate descriptions thereof are omitted. In describing the present invention, detailed descriptions of related known technologies are omitted when they are judged to unnecessarily obscure the subject matter of the present invention.
The present invention relates to an apparatus and method for providing driving information in a vehicle equipped with one or more cameras. The present invention may be implemented for a vehicle to which an Around View Monitoring system or a Camera Mirror System is applied.
FIG. 1 is a block diagram showing the internal configuration of a driving information providing apparatus in a vehicle according to an embodiment of the present invention.
Referring to FIG. 1, a driving information providing apparatus in a vehicle according to an embodiment of the present invention includes one or more cameras 110, a controller 120, a display unit 130, and an alarm unit 140.
The controller 120 detects an obstacle by processing external image data of the vehicle received from the camera 110 in real time. It then predicts the collision risk based on the position of the detected obstacle and its distance from the vehicle. In addition, using the Free Space Detection (FSD) algorithm, it processes the external image data received from the camera 110 to detect the drivable area, and provides drivability information based on the collision risk and the drivable area.
The display unit 130 displays the drivability information under the control of the controller 120.
The alarm unit 140 warns of the collision risk under the control of the controller 120.
In the present invention, as a result of performing the FSD algorithm on the external image data, the controller 120 detects objects without collision risk within the obtained free space using an image recognition method and includes them in the drivable area. For example, the controller 120 may detect objects including lane markings and manhole covers within the free space as objects without collision risk using the image recognition method.
The controller 120 detects a motion region from the external image data of the vehicle received from the camera 110 based on a Motion History Image (MHI) algorithm, and detects obstacles by processing the detected motion region in real time.
In the present invention, the controller 120 may obtain the position of the detected obstacle and the coordinates of the vehicle itself using a distance transformation matrix, and calculate the distance between the obstacle and the vehicle using the obtained coordinate values.
FIG. 2 is a flowchart showing a driving information providing method in a vehicle according to an embodiment of the present invention.
Referring to FIG. 2, the controller 120 detects an obstacle by processing the external image data of the vehicle received from the camera 110 in real time (S210).
Then, the collision risk is predicted based on the position of the detected obstacle and its distance from the vehicle (S220).
The controller 120 also detects the drivable area by processing the external image data of the vehicle received from the camera 110 using the Free Space Detection (FSD) algorithm (S230).
Then, drivability information is provided based on the collision risk and the drivable area (S240).
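The four steps S210 to S240 can be pictured as a single processing loop per camera frame. The following Python/OpenCV sketch shows one way such a loop could be wired together; it uses deliberately simplified stand-ins for each step (plain frame differencing for S210, a fixed ego-vehicle strip for S220, a lower-half placeholder for S230), and all function names, parameter values, and the pixel-to-metre scale are assumptions for illustration rather than details taken from the patent. More faithful sketches of the individual techniques follow the descriptions of FIGS. 3 to 5 below.

```python
# Simplified end-to-end sketch of the S210-S240 loop; every numeric value here is an assumption.
import cv2
import numpy as np

def run(camera_index=0, metres_per_pixel=0.02, warn_at_m=1.0):
    cap = cv2.VideoCapture(camera_index)               # stands in for camera 110
    ok, frame = cap.read()
    if not ok:
        return
    prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    h, w = prev.shape
    ego = np.full((h, w), 255, np.uint8)
    ego[h - 40:, :] = 0                                # assumed ego-vehicle strip at the bottom
    dist_matrix = cv2.distanceTransform(ego, cv2.DIST_L2, 5)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

        # S210: obstacle candidates from simple frame differencing (an MHI sketch appears later)
        _, motion = cv2.threshold(cv2.absdiff(gray, prev), 30, 255, cv2.THRESH_BINARY)

        # S220: collision risk from the closest motion pixel to the ego-vehicle strip
        ys, xs = np.nonzero(motion)
        clearance = dist_matrix[ys, xs].min() * metres_per_pixel if len(xs) else float("inf")
        risk = clearance < warn_at_m

        # S230: placeholder drivable area = lower half of the frame minus motion pixels
        free = np.zeros((h, w), np.uint8)
        free[h // 2:, :] = 255
        free[motion > 0] = 0

        # S240: drivability information (printed here; a real system would drive a display/alarm)
        print(f"risk={risk}, clearance={clearance:.2f} m, free_pixels={int(np.count_nonzero(free))}")
        prev = gray

    cap.release()
```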
In the detecting of the drivable area (S230), as a result of performing the FSD algorithm on the external image data, objects without collision risk within the obtained free space are detected using an image recognition method and included in the drivable area. For example, the image recognition method may detect objects including lane markings and manhole covers within the free space as objects without collision risk.
In the detecting of the obstacle (S210), a motion region may be detected from the external image data of the vehicle received from the camera 110 based on a Motion History Image (MHI) algorithm, and the obstacle may be detected by processing the detected motion region in real time.
In the predicting of the collision risk (S220), the position of the detected obstacle and the coordinates of the vehicle itself may be obtained using a distance transformation matrix, and the distance between the obstacle and the vehicle may be calculated using the obtained coordinate values.
FIG. 3 shows images for explaining the MHI-algorithm-based obstacle detection process according to an embodiment of the present invention.
In FIG. 3, the region marked in red is the motion region in which motion was detected. In the present invention, nearby obstacles such as moving vehicles in a parking lot are detected by high-speed image processing in real time.
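As one illustration of how an MHI-style motion region could be maintained frame by frame, the sketch below keeps a per-pixel timestamp map and treats recently changed pixels as the motion region. It is a minimal NumPy/OpenCV sketch, not the patent's implementation; the duration, difference threshold, minimum contour area, and assumed 30 FPS frame rate are all illustrative values.

```python
# Illustrative MHI-style motion-region sketch; parameter values are assumptions, not from the patent.
import cv2
import numpy as np

MHI_DURATION = 0.5      # seconds a motion trace persists (assumed)
DIFF_THRESHOLD = 30     # grey-level change treated as motion (assumed)

def update_mhi(mhi, prev_gray, gray, timestamp):
    """Update a motion history image and return it with the current motion mask."""
    silhouette = cv2.absdiff(gray, prev_gray)
    _, silhouette = cv2.threshold(silhouette, DIFF_THRESHOLD, 1, cv2.THRESH_BINARY)
    # Pixels moving now get the current timestamp; entries older than MHI_DURATION are cleared.
    mhi = np.where(silhouette > 0, timestamp, mhi)
    mhi = np.where(mhi < timestamp - MHI_DURATION, 0.0, mhi)
    motion_mask = (mhi > 0).astype(np.uint8) * 255
    return mhi, motion_mask

def motion_regions(video_path):
    """Yield each frame together with bounding boxes of its motion regions (candidate obstacles)."""
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    if not ok:
        cap.release()
        return
    prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    mhi = np.zeros(prev_gray.shape, np.float32)
    t = 0.0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        t += 1.0 / 30.0                       # assume a 30 FPS stream
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        mhi, mask = update_mhi(mhi, prev_gray, gray, t)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        boxes = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 500]
        prev_gray = gray
        yield frame, boxes
    cap.release()
```

OpenCV builds that include the optional motempl contrib module also expose a ready-made motion-history update (cv2.motempl.updateMotionHistory), which could replace the manual update above.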
FIG. 4 shows images for explaining the collision risk according to an embodiment of the present invention.
In the present invention, the position of an obstacle and its distance from the vehicle are calculated using a forward-facing camera mounted on the vehicle, and the collision risk is predicted from this.
A distance transformation matrix is illustrated in FIG. 4 (a); in the present invention, the distance transformation matrix is used to obtain coordinate values for the distance between the red region shown in FIG. 4 (b) and the vehicle itself.
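A distance-transform lookup of this kind can be sketched as follows: a per-pixel matrix holding each pixel's distance to an assumed ego-vehicle region is precomputed once, and the detected obstacle (red) pixels are then looked up in it. The ego-vehicle strip, the pixel-to-metre calibration, and the warning threshold are assumptions for illustration; the patent itself does not specify how the matrix is built.

```python
# Sketch of a distance-transform-based proximity check; scale and mask shape are assumed.
import cv2
import numpy as np

def build_distance_matrix(height, width, ego_rows=40):
    """Per-pixel distance (in pixels) to the ego-vehicle region assumed at the bottom of the frame."""
    mask = np.full((height, width), 255, np.uint8)
    mask[height - ego_rows:, :] = 0               # 0 where the vehicle body is assumed to be
    # distanceTransform gives each non-zero pixel its distance to the nearest zero pixel
    return cv2.distanceTransform(mask, cv2.DIST_L2, 5)

def collision_risk(distance_matrix, obstacle_mask, metres_per_pixel=0.02, warn_at_m=1.0):
    """Return (minimum distance in metres, risk flag) for the detected obstacle pixels."""
    ys, xs = np.nonzero(obstacle_mask)
    if len(xs) == 0:
        return None, False
    d_px = distance_matrix[ys, xs].min()
    d_m = d_px * metres_per_pixel                  # assumed calibration, e.g. from a bird's-eye view
    return d_m, d_m < warn_at_m

# Example: a synthetic 480x640 frame with an obstacle blob near the bottom centre
dist = build_distance_matrix(480, 640)
obstacle = np.zeros((480, 640), np.uint8)
cv2.circle(obstacle, (320, 400), 15, 255, -1)
print(collision_risk(dist, obstacle))
```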
FIG. 5 shows images for explaining the process of detecting the drivable area using the FSD algorithm according to an embodiment of the present invention.
Referring to FIG. 5, the MeanShift algorithm is applied to group regions with similar color pixel values in the original image obtained from the camera 110.
In the case of a fisheye lens, since the vehicle body is visible, a patch region is set in the area immediately in front of the vehicle body, and this region is assumed to include the road.
After applying the MeanShift algorithm, the Flood Fill algorithm is applied to cluster color values similar to those of the patch region. At this time, the Flood Fill algorithm is set to mark only the patch region.
Since noise remains when only the Flood Fill algorithm is applied, the result is binarized using the difference image from the original image, and the internal noise is then removed using morphological operations.
As shown in FIG. 5, once the noise is removed, only the free space remains; this region is labeled to obtain and display information such as the size of the region and its center point (blue).
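The FIG. 5 pipeline (MeanShift grouping, a road patch in front of the vehicle body, flood filling, difference-image binarization, morphological clean-up, and labeling) could be sketched in OpenCV as below. This is a rough sketch only: the MeanShift radii, flood-fill tolerances, kernel size, and the placement of the seed patch are assumed values, and the difference image is taken against the MeanShift-filtered image rather than whatever reference the original system used.

```python
# Illustrative free-space-detection pipeline in the spirit of FIG. 5; all numbers are assumptions.
import cv2
import numpy as np

def detect_free_space(frame_bgr):
    h, w = frame_bgr.shape[:2]

    # 1) Group similar colour pixels (MeanShift segmentation)
    segmented = cv2.pyrMeanShiftFiltering(frame_bgr, sp=15, sr=30)

    # 2) Patch region immediately in front of the (assumed) vehicle body at the bottom of the frame
    seed = (w // 2, h - 20)

    # 3) Flood fill from the patch to cluster colours similar to the road surface
    filled = segmented.copy()
    ff_mask = np.zeros((h + 2, w + 2), np.uint8)   # floodFill requires a 2-pixel border mask
    cv2.floodFill(filled, ff_mask, seed, (255, 255, 255),
                  loDiff=(8, 8, 8), upDiff=(8, 8, 8))

    # 4) Binarise using the difference from the segmented image, then clean up with morphology
    diff = cv2.absdiff(filled, segmented)
    changed = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    _, free = cv2.threshold(changed, 0, 255, cv2.THRESH_BINARY)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    free = cv2.morphologyEx(free, cv2.MORPH_OPEN, kernel)
    free = cv2.morphologyEx(free, cv2.MORPH_CLOSE, kernel)

    # 5) Label the remaining free-space region and report its size and centre point
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(free)
    if n <= 1:
        return free, None
    largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
    return (labels == largest).astype(np.uint8) * 255, tuple(centroids[largest])
```

The returned centroid corresponds to the blue center point mentioned above; in practice the seed point would be placed according to where the vehicle body actually appears in the fisheye image.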
In the present invention, the existing problems can be solved by performing the FSD algorithm in this way and, through an image recognition method, detecting objects without collision risk, such as lane markings and manhole covers, that exist within the FSD region and including them in the FSD region. For example, manhole cover detection can be implemented by applying a learning-based algorithm.
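The patent leaves the recognition method open (suggesting a learning-based approach for manhole covers). Purely as a stand-in, the sketch below uses classical Hough transforms instead: line segments are treated as lane-marking candidates and roughly circular blobs as manhole-cover candidates, and both are kept inside the drivable area. Every threshold and radius here is an assumed value, and a production system would likely replace this with a trained detector as the text suggests.

```python
# Hedged stand-in for the "image recognition method": Hough-based heuristics replace the
# unspecified (possibly learning-based) recogniser described in the patent.
import cv2
import numpy as np

def non_collision_mask(frame_bgr, free_space_mask):
    """Mark lane-marking-like lines and manhole-cover-like circles inside the free space."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.bitwise_and(gray, gray, mask=free_space_mask)   # only look inside the free space
    keep = np.zeros(gray.shape, np.uint8)

    # Lane markings: elongated structures -> Hough line transform on edges
    edges = cv2.Canny(gray, 60, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                            minLineLength=40, maxLineGap=10)
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            cv2.line(keep, (x1, y1), (x2, y2), 255, 7)

    # Manhole covers: roughly circular discs -> Hough circle transform
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=60,
                               param1=100, param2=40, minRadius=10, maxRadius=60)
    if circles is not None:
        for x, y, r in np.around(circles[0]).astype(int):
            cv2.circle(keep, (x, y), r, 255, -1)

    # Anything in `keep` is treated as a non-collision object and left inside the drivable area
    return keep
```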
In the present invention, by combining the obstacle detection method using the MHI algorithm with the drivable-area detection method using the FSD algorithm, single-color or simple-pattern walls that were not previously recognized as collision-risk obstacles can be recognized as obstacles.
In addition, in the present invention, when obstacle recognition based on the MHI algorithm and the drivable area obtained with the FSD algorithm are used together, drivability can be determined by processing the video received from the front camera in real time while driving forward.
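One simple way to combine the two cues into a single drivability decision is sketched below: the path ahead is reported as drivable only if the free-space region is large enough and no motion-region pixel is closer to the vehicle than a clearance threshold. The thresholds and the pixel-to-metre scale are assumptions, not values from the patent.

```python
# Minimal sketch of combining the FSD free space with the MHI motion regions; thresholds are assumed.
import numpy as np

def drivable(free_space_mask, motion_mask, distance_matrix,
             min_free_pixels=20000, metres_per_pixel=0.02, min_clearance_m=1.0):
    """Return (drivable flag, free-space area in pixels, clearance to nearest obstacle in metres)."""
    free_area = int(np.count_nonzero(free_space_mask))
    ys, xs = np.nonzero(motion_mask)
    if len(xs) == 0:
        clearance_m = float("inf")                 # no moving obstacle detected
    else:
        clearance_m = float(distance_matrix[ys, xs].min()) * metres_per_pixel
    ok = free_area >= min_free_pixels and clearance_m >= min_clearance_m
    return ok, free_area, clearance_m
```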
While the present invention has been described using several preferred embodiments, these embodiments are illustrative and not restrictive. Those skilled in the art will appreciate that various changes and modifications can be made without departing from the spirit of the invention and the scope of the rights set forth in the appended claims.

Claims (12)

  1. A driving information providing method in a vehicle equipped with one or more cameras, the method comprising:
    detecting an obstacle by processing external image data of the vehicle received from the camera in real time;
    predicting a collision risk based on the position of the detected obstacle and its distance from the vehicle;
    detecting a drivable area by processing the external image data of the vehicle received from the camera using a Free Space Detection (FSD) algorithm; and
    providing drivability information based on the collision risk and the drivable area.
  2. The method according to claim 1,
    wherein, in the detecting of the drivable area,
    as a result of performing the FSD algorithm on the external image data, an object without collision risk within the obtained free space is detected using an image recognition method and included in the drivable area.
  3. The method according to claim 2,
    wherein, in the detecting of the drivable area,
    objects including lane markings and manhole covers within the free space are detected as objects without collision risk by the image recognition method.
  4. The method according to claim 1,
    wherein, in the detecting of the obstacle,
    a motion region is detected from the external image data of the vehicle received from the camera based on a Motion History Image (MHI) algorithm, and the obstacle is detected by processing the detected motion region in real time.
  5. The method according to claim 1,
    wherein, in the predicting of the collision risk,
    the position of the detected obstacle and the coordinates of the vehicle itself are obtained using a distance transformation matrix, and the distance between the obstacle and the vehicle is calculated using the obtained coordinate values.
  6. The method according to claim 1,
    wherein the vehicle is a vehicle to which an Around View Monitoring system or a Camera Mirror System is applied.
  7. A driving information providing apparatus in a vehicle equipped with one or more cameras, the apparatus comprising:
    a controller configured to detect an obstacle by processing external image data of the vehicle received from the camera in real time, predict a collision risk based on the position of the detected obstacle and its distance from the vehicle, detect a drivable area by processing the external image data of the vehicle received from the camera using a Free Space Detection (FSD) algorithm, and provide drivability information based on the collision risk and the drivable area;
    a display unit configured to display the drivability information under the control of the controller; and
    an alarm unit configured to warn of the collision risk under the control of the controller.
  8. The apparatus according to claim 7,
    wherein, as a result of performing the FSD algorithm on the external image data, the controller detects an object without collision risk within the obtained free space using an image recognition method and includes it in the drivable area.
  9. The apparatus according to claim 8,
    wherein the controller detects objects including lane markings and manhole covers within the free space as objects without collision risk by the image recognition method.
  10. The apparatus according to claim 7,
    wherein the controller detects a motion region from the external image data of the vehicle received from the camera based on a Motion History Image (MHI) algorithm, and detects the obstacle by processing the detected motion region in real time.
  11. The apparatus according to claim 7,
    wherein the controller obtains the position of the detected obstacle and the coordinates of the vehicle itself using a distance transformation matrix, and calculates the distance between the obstacle and the vehicle using the obtained coordinate values.
  12. The apparatus according to claim 7,
    wherein the vehicle is a vehicle to which an Around View Monitoring system or a Camera Mirror System is applied.
PCT/KR2017/013347 2016-11-28 2017-11-22 Method and device for providing driving information by using camera image WO2018097595A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201780025422.6A CN109070882B (en) 2016-11-28 2017-11-22 Utilize the driving information providing method and device of camera image

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2016-0159235 2016-11-28
KR1020160159235A KR101749873B1 (en) 2016-11-28 2016-11-28 Method and apparatus for providing driving information using image of camera

Publications (1)

Publication Number Publication Date
WO2018097595A1 true WO2018097595A1 (en) 2018-05-31

Family

ID=59283186

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2017/013347 WO2018097595A1 (en) 2016-11-28 2017-11-22 Method and device for providing driving information by using camera image

Country Status (3)

Country Link
KR (1) KR101749873B1 (en)
CN (1) CN109070882B (en)
WO (1) WO2018097595A1 (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112149460A (en) * 2019-06-27 2020-12-29 华为技术有限公司 Obstacle detection method and device
CN110834645B (en) * 2019-10-30 2021-06-29 中国第一汽车股份有限公司 Free space determination method and device for vehicle, storage medium and vehicle
US11281915B2 (en) * 2019-12-06 2022-03-22 Black Sesame Technologies Inc. Partial frame perception
KR102323483B1 (en) 2020-04-13 2021-11-10 주식회사 만도모빌리티솔루션즈 Smart cruise control system and method thereof
KR102241116B1 (en) * 2020-09-08 2021-04-16 포티투닷 주식회사 A method and apparatus for determining the lane of a driving vehicle using an artificial neural network, and a navigation device including the same

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006298041A (en) * 2005-04-18 2006-11-02 Nikon Corp Operation support system for vehicle
JP2011150633A (en) * 2010-01-25 2011-08-04 Toyota Central R&D Labs Inc Object detection device and program
KR20140104516A (en) * 2013-02-18 2014-08-29 주식회사 만도 Lane detection method and apparatus
KR20150019182A (en) * 2013-08-13 2015-02-25 현대모비스 주식회사 Image displaying Method and Apparatus therefor
KR101503473B1 (en) * 2014-01-10 2015-03-18 한양대학교 산학협력단 System and method for deciding driving situation of vehicle

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104881955B (en) * 2015-06-16 2017-07-18 华中科技大学 A kind of driver tired driving detection method and system

Also Published As

Publication number Publication date
CN109070882B (en) 2019-09-20
KR101749873B1 (en) 2017-06-22
CN109070882A (en) 2018-12-21

Similar Documents

Publication Publication Date Title
WO2018097595A1 (en) Method and device for providing driving information by using camera image
EP2352136B1 (en) System for monitoring the area around a vehicle
WO2018070655A1 (en) Moving object collision warning device and method for large vehicle
JP3739693B2 (en) Image recognition device
WO2010134680A1 (en) Lane departure sensing method and apparatus using images that surround a vehicle
CN110889351B (en) Video detection method, device, terminal equipment and readable storage medium
KR101611261B1 (en) Stereo camera, driver assistance apparatus and Vehicle including the same
JP2002319091A (en) Device for recognizing following vehicle
CN103950410A (en) Panoramic auxiliary driving method and system
JPH06281455A (en) Vehicle environment monitoring device
CN112172663A (en) Danger alarm method based on door opening and related equipment
WO2016052837A1 (en) Traffic monitoring system
WO2018101603A1 (en) Road object recognition method and device using stereo camera
WO2017195965A1 (en) Apparatus and method for image processing according to vehicle speed
WO2021107171A1 (en) Deep learning processing apparatus and method for multiple sensors for vehicle
CN113593301A (en) Method for pre-judging vehicle clogging, vehicle, and computer-readable storage medium
CN104380341A (en) Object detection device for area around vehicle
JP4848644B2 (en) Obstacle recognition system
US20220024452A1 (en) Image processing apparatus, imaging apparatus, moveable body, and image processing method
WO2017095116A1 (en) Navigation device and path guidance method thereof
JP2002321579A (en) Warning information generating method and vehicle side image generating device
US11697380B2 (en) System and method for displaying vehicle driving information
WO2020218716A1 (en) Automatic parking device and automatic parking method
KR20130094558A (en) Apparatus and method for detecting movement of vehicle
CN113173160B (en) Anti-collision device for vehicle and anti-collision method for vehicle

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17874451

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17874451

Country of ref document: EP

Kind code of ref document: A1