CN113228130B - image processing device - Google Patents
Image processing device
- Publication number
- CN113228130B (application CN201980084842.0A)
- Authority
- CN
- China
- Prior art keywords
- area
- recognition
- correction
- pedestrian
- unit
- Prior art date: 2018-12-18
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/16—Anti-collision systems
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Theoretical Computer Science (AREA)
- Traffic Control Systems (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
Abstract
The present invention realizes an image processing device capable of correctly determining whether an object is a pedestrian even when the whole body of a pedestrian present in the surroundings cannot be detected. In a daytime camera image (1101) in which the pedestrian's feet are cut off, the recognition region (11020) is corrected so that the upper body including the pedestrian's head becomes the recognition region, giving the recognition region (1102). For a nighttime camera image (1103) taken with the host vehicle's headlights on low beam, in which it can be determined that the pedestrian's head is not included in the detection region, the recognition region (11030) is corrected so that the upper body including the head becomes the recognition region, giving the recognition region (1104). In a nighttime camera image (1105) taken with the host vehicle's headlights on high beam, the image (11060) in which the pedestrian's feet are cut off is corrected so that the upper body including the head becomes the recognition region, giving the recognition region (1106).
Description
Technical Field
The present invention relates to an image processing device for recognizing pedestrians with a vehicle-mounted camera.
Background Art
In recent years, with the spread of in-vehicle sensing devices, the development of driver assistance systems and automated driving systems has been progressing. To develop more advanced systems, it is necessary to identify the category of objects present around the vehicle, such as whether an object is a vehicle or a pedestrian.
For identifying the category of objects present around a vehicle, a method is used, as described in Patent Document 1, in which objects detected by an on-board camera, radar, or the like are recognized using a classifier generated in advance by machine learning.
Prior Art Literature
Patent Documents
Patent Document 1: Japanese Patent Laid-Open No. 2013-232080
Summary of the Invention
Problem to Be Solved by the Invention
Pedestrian recognition in the prior art assumes that the whole body of the pedestrian appears within the detected region, and recognition performance is maintained by performing recognition processing on the whole body.
Therefore, pedestrian recognition performance degrades when a pedestrian is so close that the lower body falls outside the field of view, when the lower body is occluded by a hedge, guardrail, or the like, or when the whole body of the pedestrian cannot be detected because of the headlight illumination conditions at night.
An object of the present invention is to realize an image processing device capable of correctly determining whether an object is a pedestrian even when the whole body of a pedestrian present in the surroundings cannot be detected.
Technical Means for Solving the Problem
To achieve the above object, the present invention is configured as follows.
The image processing device includes: a detection unit that detects image information of the external environment; a recognition region correction amount calculation unit that calculates, from the image information detected by the detection unit, a correction amount for a recognition region used to recognize a three-dimensional object; a recognition region correction unit that corrects, based on the correction amount calculated by the recognition region correction amount calculation unit, the recognition region of the image information used to recognize the three-dimensional object; and a recognition processing unit that performs recognition processing on the three-dimensional object within the recognition region corrected by the recognition region correction unit.
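As a rough illustration of how these four units relate, the control flow can be pictured as a short sequential pipeline: detect, compute a correction amount, correct the recognition region, then run recognition on the corrected region. The following Python sketch only illustrates that flow under assumed interfaces; the names (Region, process_frame, and the detector/calculator/corrector/recognizer objects) do not appear in the patent.

```python
from dataclasses import dataclass

@dataclass
class Region:
    """Axis-aligned rectangle on the image, in pixels."""
    x: int
    y: int
    width: int
    height: int

def process_frame(image, disparity, detector, correction_calculator,
                  region_corrector, recognizer):
    """Detect -> compute correction amount -> correct region -> recognize."""
    # Detection unit: detection regions plus external-environment information
    detections, environment = detector.detect(image, disparity)

    results = []
    for detection in detections:
        # Recognition region correction amount calculation unit
        correction = correction_calculator.compute(detection, environment, disparity)
        # Recognition region correction unit
        recognition_region = region_corrector.apply(detection.region, correction)
        # Recognition processing unit (e.g. a pedestrian classifier)
        results.append(recognizer.classify(image, recognition_region))
    return results
```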
Effects of the Invention
According to the present invention, it is possible to realize an image processing device capable of correctly determining whether an object is a pedestrian even when the whole body of a pedestrian present in the surroundings cannot be detected.
Brief Description of the Drawings
FIG. 1 is a block diagram showing the overall configuration of an image processing device according to an embodiment of the present invention.
FIG. 2 is a diagram showing the processing flow in the stereo camera device.
FIG. 3 is a diagram illustrating the result of three-dimensional object detection processing on a camera image.
FIG. 4 is a diagram showing the processing configuration of an embodiment.
FIG. 5 is a diagram showing details of the processing configuration of an embodiment.
FIG. 6 is a diagram showing an example of a nighttime camera image and the result of three-dimensional object detection processing.
FIG. 7 is a diagram showing a three-dimensional object detection result when an occluding object is included in the detection region in the three-dimensional object detection processing.
FIG. 8 is a diagram showing an image in which the feet of a three-dimensional object are cut off.
FIG. 9 is a diagram showing the relationship between the camera's angle of view and the distance to a pedestrian.
FIG. 10 is a diagram showing the relationship between the headlight irradiation range and the distance to a pedestrian.
FIG. 11 is a diagram showing examples of recognition region correction results.
Detailed Description
Embodiments of the present invention will be described with reference to the drawings.
Embodiment
FIG. 1 is a block diagram showing the overall configuration of an image processing device according to an embodiment of the present invention.
In FIG. 1, an in-vehicle stereo camera device 100 according to an embodiment of the present invention is mounted on a vehicle and recognizes the environment outside the vehicle based on image information of the external environment in the imaging target area ahead of the vehicle. The in-vehicle stereo camera device 100 recognizes, for example, white lines on the road, pedestrians, vehicles, other three-dimensional objects, traffic signals, signs, and lamps, and adjusts the braking, steering, and the like of the vehicle (host vehicle) on which the stereo camera device 100 is mounted.
The in-vehicle stereo camera device 100 includes two cameras arranged on the left and right for acquiring image information (a left camera 101 and a right camera 102, imaging units that capture the external environment to obtain images), and an image input interface 103 for controlling the imaging of the cameras 101 and 102 and capturing the images they take.
Images captured through the image input interface 103 are sent over a bus 109 and processed by an image processing unit 104 and an arithmetic processing unit 105, and the image data constituting intermediate and final results is stored in a storage unit 106.
The image processing unit 104 compares the first image obtained from the imaging element of the camera 101 with the second image obtained from the imaging element of the camera 102, performs on each image corrections such as correction of device-specific deviations caused by the imaging elements of the cameras 101 and 102 and noise interpolation, and stores the results in the storage unit 106.
In addition, mutually corresponding positions between the first image from the camera 101 and the second image from the camera 102 are calculated to obtain parallax information, which is stored in the storage unit 106 in the same manner as above.
The arithmetic processing unit 105 uses the images and parallax information (distance information for each point on the image) stored in the storage unit 106 to recognize the various objects necessary for perceiving the environment around the vehicle.
The various objects include people, vehicles, other obstacles, traffic signals, signs, and the tail lamps and headlights of vehicles. Some of these recognition results and intermediate calculation results are recorded in the storage unit 106 in the same manner as above. After the various objects are recognized in the captured images, the recognition results are used to calculate command values and the like required for vehicle control.
The vehicle control policy obtained as a calculation result and part of the object recognition results are transmitted to the in-vehicle network CAN 110 through a CAN interface 107, whereby braking of the vehicle is performed.
Regarding these operations, the control processing unit 108 monitors whether any processing unit is operating abnormally and whether errors occur during data transmission, thereby preventing abnormal operation.
The image processing unit 104 is connected via the internal bus 109 to the control processing unit 108, the storage unit 106, the arithmetic processing unit 105, the input/output unit 103 for the imaging elements of the left camera 101 and the right camera 102, and the input/output unit 107 for the external in-vehicle network CAN 110.
The image input interface 103, the image processing unit 104, the storage unit 106, the arithmetic processing unit 105, the input/output unit 107, and the control processing unit 108 are composed of one or more computer units.
The storage unit 106 is composed of, for example, a memory that stores image information obtained by the image processing unit 104 and image information created from the results of scanning by the arithmetic processing unit 105.
The input/output unit 107 for the external in-vehicle network CAN 110 outputs information output from the in-vehicle stereo camera device 100 to the control system of the host vehicle (not shown) via the external in-vehicle network CAN 110.
FIG. 2 is a diagram showing the processing flow in the stereo camera device 100.
First, in the in-vehicle stereo camera device 100, images are captured by the left and right cameras 101 and 102, and the image processing unit 104 performs, on the respective captured image data 203 and 204, image processing 205 such as corrections for absorbing characteristics unique to the imaging elements. The results are stored in an image buffer 206. The image buffer 206 is provided in the storage unit 106 of FIG. 1.
The two corrected images are then compared with each other to obtain parallax information for the images from the left and right cameras. From the parallax of the left and right images, it becomes clear where a given point of interest on the target object is located in each of the images from the left and right cameras 101 and 102, and the distance to the object is obtained by the principle of triangulation.
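As a concrete illustration of the triangulation step, the distance Z to a point follows from the stereo baseline B, the focal length f (in pixels), and the measured disparity d as Z = f·B / d. The sketch below is a minimal example with assumed camera parameters, not values taken from the patent.

```python
def disparity_to_distance(disparity_px: float,
                          focal_length_px: float = 1400.0,   # assumed focal length
                          baseline_m: float = 0.35) -> float:  # assumed baseline
    """Distance (m) to a point from its stereo disparity: Z = f * B / d."""
    if disparity_px <= 0:
        return float("inf")  # no valid correspondence between left and right image
    return focal_length_px * baseline_m / disparity_px

# Example: with the assumed parameters, a 10-pixel disparity is about 49 m away.
print(disparity_to_distance(10.0))
```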
This is performed by parallax processing 207. The image processing 205 and the parallax processing 207 are performed by the image processing unit 104 in FIG. 1, and the resulting images and parallax information are stored in the storage unit 106.
Using the parallax images obtained as described above, detection processing 208 for detecting three-dimensional objects in three-dimensional space is performed. Furthermore, various object recognition processing 209 is performed using the images and parallax information stored in the storage unit 106. Objects to be recognized include people, vehicles, other three-dimensional objects, signs, traffic signals, and tail lamps; the details of the recognition processing are determined by constraints such as the characteristics of the objects and the processing time available in the system.
Furthermore, taking into account the object recognition results and the state of the host vehicle (speed, steering angle, etc.), vehicle control processing 210 determines a policy, for example a braking policy of warning the occupants and braking or adjusting the steering angle of the host vehicle, or a policy of avoiding the object accordingly, and the result is output through the CAN interface 107.
The three-dimensional object detection processing 208, the various object recognition processing 209, and the vehicle control processing 210 are performed by the arithmetic processing unit 105 in FIG. 1. Each of these processes is implemented, for example, by one or more computer units configured so that they can exchange data with one another.
The parallax or distance of each pixel of the left and right images is obtained by the parallax processing 207 described above, the three-dimensional object detection processing 208 combines them into three-dimensional objects in three-dimensional space, and the various object recognition processing 209 is carried out based on their positions and regions on the image. At this time, for the various object recognition processing 209 to recognize objects stably, the three-dimensional object region on the image must coincide with the image of the object to be recognized.
However, with a stereo camera, the object region on the image to be recognized sometimes cannot be made to coincide completely, due to the brightness of the external environment, variations in imaging performance between the cameras, occlusion caused by foreign matter on the glass surface, and the like.
The same applies when a radar, such as a millimeter-wave radar, is combined with an image sensor such as a camera. Therefore, an embodiment of the present invention is applied to the various recognition processes.
The configuration will be described below on the premise of a stereo camera device.
FIG. 3 is a diagram illustrating the result of the three-dimensional object detection processing 208 on a camera image.
In FIG. 3, a three-dimensional object region 301 is obtained as a result of the three-dimensional object detection processing 208 for each object having a height above the road surface, such as the pedestrians, vehicles, trees, and street lamps present in the three-dimensional space, and is projected as a region on the image.
The three-dimensional object region 301 may be a rectangle as shown in FIG. 3, or may be an irregular region obtained from parallax or distance. In subsequent processing, it is generally handled as a rectangle to simplify computer processing. In this embodiment, the region is hereinafter handled as a rectangle, and the details of each process are described using a pedestrian as an example of a three-dimensional object.
FIG. 4 is a diagram showing the processing configuration of this embodiment. As described above, the various object recognition processing 209 is performed by the arithmetic processing unit 105, so the configuration shown in FIG. 4 is the internal configuration of the arithmetic processing unit 105.
The configuration below is likewise described on the premise of a stereo camera.
In FIG. 4, the detection unit 401 detects, from the image information of the external environment captured by the cameras 101 and 102 serving as imaging units, the distance from the imaging units to a three-dimensional object and the like, and, based on the conditions of the external environment such as brightness and weather and on the distance distribution of the detection target object region obtained from the three-dimensional object detection processing 208, detects whether an occluding object or the like is included in the detection target object region.
A region correction amount calculation unit (recognition region correction amount calculation unit) 402 calculates a correction amount for the recognition region used to recognize the three-dimensional object, based on the image information detected by the detection unit 401. That is, the correction amount for the recognition region of the three-dimensional object is calculated from at least one of the detected brightness of the external environment, the distance to the three-dimensional object, the headlight irradiation range, and the position or size of an occluding object.
The recognition region correction unit 403 corrects the recognition region based on the correction amount calculated by the region correction amount calculation unit 402.
The recognition processing unit 404 performs recognition processing on the three-dimensional object within the set (corrected) recognition region. Examples of recognition processing include the following techniques.
There is template matching, which compares the recognition region with a template having the features of the recognition target prepared in advance, and there are classifier (software) methods that combine feature quantities such as luminance images, HOG, or Haar-like features with machine learning methods such as support vector machines, AdaBoost, or deep learning.
Alternatively, edge shapes and the like may be recognized by manually determined threshold judgments.
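As one concrete realization of the classifier approach mentioned above, HOG features of the recognition region can be fed to a linear support vector machine. The snippet below is a generic illustration using scikit-image and scikit-learn rather than code from the patent; the window size and the training data are placeholders.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

WINDOW = (64, 32)  # assumed (height, width) of the normalized recognition region

def hog_features(patch: np.ndarray) -> np.ndarray:
    """HOG descriptor of a grayscale patch already resized to WINDOW."""
    return hog(patch, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), feature_vector=True)

def train_classifier(patches, labels):
    """patches: list of grayscale arrays of shape WINDOW; labels: 1 = pedestrian."""
    features = np.stack([hog_features(p) for p in patches])
    classifier = LinearSVC()
    classifier.fit(features, labels)
    return classifier

def is_pedestrian(classifier, patch) -> bool:
    return bool(classifier.predict(hog_features(patch)[None, :])[0] == 1)
```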
FIG. 5 is a diagram showing the details of the processing configuration of this embodiment (details of each unit shown in FIG. 4).
In FIG. 5, the external environment determination processing 501 of the detection unit 401 determines the brightness of the external environment at the time the three-dimensional object detection processing 208 is performed. FIG. 6 is a diagram showing an example of the external environment: a camera image 601 showing an example of a nighttime camera image and the result of the three-dimensional object detection processing 208.
As shown in FIG. 3, if the external environment is daytime, the whole body of a pedestrian captured in the image is detected and included in the three-dimensional object region (detection region) 301. However, when it is dark, such as at night, the pedestrian cannot be separated from the background, as shown in FIG. 6, and the detection region 602 produced by the three-dimensional object detection processing 208 is assumed to include the lower half of the pedestrian but not the upper half, so it does not include the whole pedestrian.
The brightness of the external environment may be determined by obtaining the brightness distribution in the image through image processing, or by directly acquiring the time of day or the like from the vehicle's internal information. Alternatively, it may be determined to be daytime if the shutter speed of an exposure-adjusted camera is at or below a threshold, and nighttime if it is above the threshold.
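A minimal sketch of the shutter-speed criterion described above, with an assumed threshold value (the patent does not specify one):

```python
def classify_ambient_light(exposure_time_s: float,
                           day_threshold_s: float = 1 / 250) -> str:  # assumed threshold
    """Coarse day/night decision from the auto-exposure shutter time.

    A short exposure implies a bright scene (daytime); a long exposure
    implies a dark scene (nighttime).
    """
    return "day" if exposure_time_s <= day_threshold_s else "night"
```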
In addition to the shutter speed, the brightness distribution in the image, brightness changes between successive frames, and the like may also be observed to make more detailed determinations of the external environment, such as inside a tunnel or at dusk.
In FIG. 5, the occluding object detection processing 502 of the detection unit 401 detects whether the detection region includes an object other than the pedestrian when the three-dimensional object detection processing 208 is performed, and determines whether the pedestrian is occluded.
FIG. 7 is a diagram showing the three-dimensional object detection result when an occluding object is included in the detection region 701 in the three-dimensional object detection processing 208. In FIG. 7, the parallax distribution 702 of the detection region (recognition region) 701 is examined; when it contains many regions with different parallaxes, the distance of each region is calculated and it is determined that the pedestrian (three-dimensional object) is hidden by an occluding object 703, and within the detection region the occluding object 703 and the pedestrian region are distinguished as the regions lying on the near side and on the far side. Then, the correction amount for the recognition region is calculated so as to exclude the occluding object 703 from the detection region (recognition region) 701, and the three-dimensional object within the corrected recognition region is recognized.
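One way to picture this near/far separation is to split the valid disparities of the detection region into two groups by distance and flag the region as occluded when the groups are far apart. The sketch below is illustrative only; the median split, the gap threshold, and the camera parameters are assumptions.

```python
import numpy as np

def split_occluder(disparities: np.ndarray, gap_m: float = 1.5,
                   focal_px: float = 1400.0, baseline_m: float = 0.35):
    """Split a detection region's disparities into a near group (candidate
    occluder) and a far group (candidate pedestrian)."""
    valid = disparities[disparities > 0]
    if valid.size == 0:
        return {"occluded": False}
    distances = focal_px * baseline_m / valid       # per-pixel distances (m)
    median = np.median(distances)
    near = distances[distances < median]
    far = distances[distances >= median]
    if near.size == 0 or far.size == 0:
        return {"occluded": False}
    return {"occluded": float(far.mean() - near.mean()) > gap_m,
            "near_mean_m": float(near.mean()),
            "far_mean_m": float(far.mean())}
```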
In FIG. 5, in the distance determination correction processing 503 of the region correction amount calculation unit 402, when the detected three-dimensional object is a pedestrian, it is determined whether the pedestrian is at a distance at which the feet are cut off, and the correction amount for the recognition region is calculated. FIG. 8 is a diagram showing an image in which the feet of a three-dimensional object (a pedestrian in FIG. 8) are cut off. As shown in FIG. 8, whether the feet of the three-dimensional object are cut off can be determined by whether the lower edge of the detection region 801 on the image touches the lower edge of the captured image.
FIG. 9 is a diagram showing the relationship between the camera's angle of view and the distance to a pedestrian. As shown in FIG. 9, when the three-dimensional object is a pedestrian, the distance at which the feet are cut off from the image can be calculated, for example, from the detected distance between the three-dimensional object and the vehicle, the camera's imaging angle of view 901, the camera mounting height, and the camera orientation.
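The distance below which the feet leave the bottom of the frame can be approximated from the camera geometry alone: the nearest visible ground point is where the lowest ray of the vertical field of view meets the road. The sketch below uses assumed mounting values and ignores lens distortion and vehicle pitching.

```python
import math

def foot_cutoff_distance(camera_height_m: float = 1.3,      # assumed mounting height
                         pitch_down_deg: float = 2.0,       # assumed camera pitch
                         vertical_fov_deg: float = 40.0) -> float:  # assumed FOV
    """Ground distance where the bottom edge of the image meets the road.

    Pedestrians closer than this distance have their feet cut off.
    """
    bottom_ray_deg = pitch_down_deg + vertical_fov_deg / 2.0
    return camera_height_m / math.tan(math.radians(bottom_ray_deg))

# Example: roughly 3.2 m with the assumed values above.
print(round(foot_cutoff_distance(), 1))
```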
The distance at which the feet are cut off from the image is set in advance as a distance setting value and stored in the storage unit 106, and the condition that the feet are cut off from the image is judged by whether the distance detected by the imaging units (camera 101 and camera 102) is equal to or less than the distance setting value.
When the detected distance is equal to or less than the distance setting value and it can be determined that the upper part (head) of the detected three-dimensional object is not within the recognition region, the recognition region correction unit 403 performs a correction that enlarges (extends) the upper part of the recognition region. In this case, within the range in which the detected distance is equal to or less than the distance setting value, the recognition region correction unit 403 performs the correction such that the larger the detected distance, the smaller the correction amount for the upper end of the recognition region.
In FIG. 5, the headlight irradiation range determination correction processing 504 of the region correction amount calculation unit 402 calculates the correction amount for the recognition region so that the pedestrian's head is included in the recognition region, according to the irradiation state of the vehicle headlights (the headlights that illuminate the area ahead in the external environment). FIG. 10 is a diagram showing the relationship between the headlight irradiation range and the distance to a pedestrian. As shown in FIG. 10(A), when the headlights are on low beam (when the headlight irradiation range is set low), the optical axis of the headlights points toward the road surface, so the closer the pedestrian is to the vehicle, the more the illuminated range 1001 shifts toward the feet, and the area around the head is no longer illuminated.
Therefore, as shown in FIG. 10(B), when the pedestrian is observed in the camera image 1002, the head is dark and blends into the background, so the result of the three-dimensional object detection processing 208 is a detection region 1003 that does not include the pedestrian's head.
As shown in FIG. 10(C), in the case of high beam (when the headlight irradiation range is set high), the optical axis points upward, so even when the pedestrian is nearby, the head is included in the irradiation range 1004. When the pedestrian is observed in the camera image 1005 at this time, as shown in FIG. 10(D), light also reaches the head, and the result of the three-dimensional object detection processing 208 is a detection region 1006 that includes the pedestrian's head.
Correction amounts corresponding to low beam and high beam are therefore calculated. The correction amount is determined, for example, as follows: in the case of low beam, if the pedestrian is at a position where the head is unlikely to be included given the irradiation range and distance, an amount of about one head length in real-space dimensions, for example 25 cm, is set as the correction amount toward the upper end of the recognition region (a correction that enlarges the upper part of the recognition region is performed). In the case of high beam, since the head is easily included, the correction amount is smaller, about a quarter of a head length, for example 6 cm.
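To apply such a real-space correction amount to the image, the metric value has to be converted to pixels at the pedestrian's distance. The following sketch uses the example values above (about 25 cm for low beam, about 6 cm for high beam) and an assumed focal length; it is an illustration, not the patent's own formula.

```python
def headlight_correction_px(beam: str, distance_m: float,
                            focal_length_px: float = 1400.0) -> float:  # assumed
    """Upward extension of the recognition region, in pixels.

    Real-space amounts follow the example values in the description:
    about 0.25 m for low beam and about 0.06 m for high beam.
    """
    correction_m = 0.25 if beam == "low" else 0.06
    # Pinhole projection: pixel height of a vertical length at distance Z.
    return correction_m * focal_length_px / distance_m

# Example: at 10 m, the low-beam case extends the region top by about 35 px.
print(round(headlight_correction_px("low", 10.0)))
```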
Low beam or high beam may be determined by directly referring to the host vehicle's internal information, or by performing image processing on the captured image and calculating the brightness distribution. The value of the correction amount may also be calculated taking into account the light diffusion characteristics of the type of headlight the vehicle is equipped with (LED, halogen, etc.). The correction amount may also be changed using the result of the external environment determination processing 501. For example, when the external environment determination processing 501 determines that it is only about as dark as dusk, it is judged that the pedestrian's head can be detected sufficiently even with the headlights on low beam, and an adjustment may be made to reduce the correction amount relative to the low-beam case at night.
In the detection result correction processing 505 of the region correction amount calculation unit 402, when the detection unit 401 determines that there is an occluding object in the detection region, the result of the occluding object detection processing 502 is used to calculate the correction amount for the recognition region so that the occluding object is not included in the recognition region. One example is a method of grouping the parallax distribution 702 of FIG. 7 into near-side objects and far-side objects, distinguishing the occluding object 703 from the pedestrian region, and calculating the correction amount for the recognition region so that the region of the occluding object 703 is not included.
In the detection result correction processing 505, processing may also be performed in which not only parallax information but also color information, feature extraction, and the like are used to determine in detail the positional relationship between the occluding object 703 and the pedestrian, and the correction amount is calculated so that the occluding object 703 is not included.
The recognition region correction processing 506 of the recognition region correction unit 403 adjusts the size of the recognition region based on the correction amounts calculated by the region correction amount calculation unit 402. With the three-dimensional object detection region as the basis of the recognition region, the correction amounts calculated by the distance determination correction processing 503, the headlight irradiation range determination correction processing 504, and the detection result correction processing 505 are combined, and the region is corrected into the final recognition region.
Based on the above determination results, the recognition region correction processing 506 performs correction such that, for example, a region corresponding to half the height from the top of the three-dimensional object's detection frame to the ground is reset as the recognition region.
When the object is a pedestrian, the recognition region is corrected so that the upper body including the pedestrian's head is contained in the recognition region, and whether the object within the corrected recognition region is a pedestrian is then determined. This improves the accuracy of determining whether the object is a pedestrian.
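A compact way to picture the recognition region correction processing 506 is to take the detection rectangle, extend its top by the combined correction amount, and, when the lower body is unreliable, keep only the upper half. The sketch below is an assumed combination rule for illustration, not the patent's exact procedure.

```python
def correct_recognition_region(x, y, w, h, top_extension_px, keep_upper_half):
    """Apply the combined correction to a detection rectangle (pixels).

    Image y grows downward, so extending the region upward means decreasing y.
    """
    new_y = max(0, y - top_extension_px)
    new_h = h + (y - new_y)
    if keep_upper_half:        # e.g. the feet are cut off or the lower body is occluded
        new_h //= 2
    return x, new_y, w, new_h

# Example: extend a 100x200 px detection upward by 35 px and keep the upper half.
print(correct_recognition_region(300, 150, 100, 200, 35, True))
```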
FIG. 11 is a diagram showing examples of recognition region correction results.
In FIG. 11(A), for example, in a camera image 1101 in which the feet are cut off during the daytime, the recognition region 11020 is corrected so that the upper body including the pedestrian's head becomes the recognition region, resulting in the recognition region 1102.
In FIG. 11(B), for the camera image 1103 taken at night with the host vehicle's headlights on low beam, in which it can be determined that the pedestrian's head is not included in the detection region, the recognition region 11030 is corrected so that the upper body including the pedestrian's head becomes the recognition region, resulting in the recognition region 1104.
In FIG. 11(C), in the camera image 1105 taken at night with the host vehicle's headlights on high beam, the image 11060 in which the pedestrian's feet are cut off is corrected so that the upper body including the pedestrian's head becomes the recognition region, resulting in the recognition region 1106.
In the case of low beam (when the irradiation range of the headlights illuminating the area ahead in the external environment is set low), the recognition region correction amount calculation unit 402 performs the correction such that the correction amount for the upper part of the recognition region is larger than in the case of high beam (when the headlight irradiation range is set high).
Depending on the calculated correction amount for the recognition region, the corrected recognition region may also coincide with the detection region before correction.
As described above, according to an embodiment of the present invention, when it is determined that part of a pedestrian is hidden by an occluding object, the distance of each region from the host vehicle is calculated, the object in the near-side region is identified as the occluding object and removed, and whether the remaining image shows a pedestrian is determined.
In addition, when it is determined that the feet of the object are cut off, the recognition region is corrected so that the upper body portion including the head becomes the recognition region; when it is determined that the head is not illuminated, the recognition region is corrected so that a region including the head becomes the recognition region; and the recognition region containing the upper body including the head is used to determine whether the object is a pedestrian.
Therefore, it is possible to realize an image processing device that can correctly determine whether an object is a pedestrian even when the whole body of a pedestrian present around the host vehicle cannot be detected.
In the recognition processing 404, a plurality of classifiers may be prepared, and the classifier may be switched according to the result of correcting the recognition region.
For example, when the upper body of the pedestrian becomes the recognition region as a result of the correction, recognition is performed not with a whole-body classifier but with a classifier created from training data of pedestrians' upper bodies.
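Classifier switching of this kind can be driven by how much of the expected full-body height the corrected region actually covers. A minimal sketch with an assumed coverage threshold:

```python
def select_classifier(region_height_px, full_body_height_px,
                      full_body_model, upper_body_model,
                      coverage_threshold: float = 0.6):  # assumed threshold
    """Pick the classifier trained on the part of the body that is visible."""
    coverage = region_height_px / max(1, full_body_height_px)
    return full_body_model if coverage >= coverage_threshold else upper_body_model
```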
The above example describes a case in which the present invention is applied to an image processing device mounted on a vehicle, but the present invention can also be applied to an image processing device mounted on a moving body other than a vehicle (an article transport vehicle or the like).
The above example is also an example of an image processing device that recognizes pedestrians (people), but the invention can likewise be applied to an image processing device that recognizes moving objects other than people, for example other vehicles, small animals, and the like.
The detection region described above is a region for detecting an object, and the recognition region is a region for recognizing the detected object within the detection region. The recognition region is therefore usually a corrected version of the detection region, but as a result of the correction the two may also be identical.
Description of Reference Numerals
100: stereo camera device, 101: left camera, 102: right camera, 103: image input interface, 104: image processing unit, 105: arithmetic processing unit, 106: storage unit, 107: CAN interface, 108: control processing unit, 109: internal bus, 110: external in-vehicle network CAN, 203, 204: image data, 205: image processing, 206: image buffer, 207: parallax processing, 208: three-dimensional object detection processing, 209: various object recognition processing, 210: vehicle control processing, 301: three-dimensional object region, 401: detection unit, 402: region correction amount calculation unit, 403: recognition region correction unit, 404: recognition processing unit, 501: external environment determination processing, 502: occluding object detection processing, 503: distance determination correction processing, 504: headlight irradiation range determination correction processing, 505: detection result correction processing, 506: recognition region correction processing, 601, 1002, 1005, 1101, 1103, 1105: camera images, 701, 801, 1003, 1006: detection regions, 702: parallax distribution, 703: occluding object, 901: camera imaging angle of view, 1001: range of light irradiation, 1102, 1104, 1106, 11020, 11030, 11060: recognition regions.
Claims (5)
Applications Claiming Priority (3)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2018236756 | 2018-12-18 | | |
| JP2018-236756 | 2018-12-18 | | |
| PCT/JP2019/045181 WO2020129517A1 (en) | 2018-12-18 | 2019-11-19 | Image processing device |
Publications (2)

| Publication Number | Publication Date |
|---|---|
| CN113228130A (en) | 2021-08-06 |
| CN113228130B (en) | 2023-01-17 |
Family ID: 71101241
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201980084842.0A Active CN113228130B (en) | 2018-12-18 | 2019-11-19 | image processing device |
Country Status (3)

| Country | Link |
|---|---|
| JP (1) | JP7201706B2 (en) |
| CN (1) | CN113228130B (en) |
| WO (1) | WO2020129517A1 (en) |
Families Citing this family (1)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2023107532A (en) * | 2022-01-24 | 2023-08-03 | 株式会社豊田自動織機 | Image processing device for human detection system |
Citations (9)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2007058805A (en) * | 2005-08-26 | 2007-03-08 | Fuji Heavy Ind Ltd | Front environment recognition device |
| JP2010188970A (en) * | 2009-02-20 | 2010-09-02 | Honda Motor Co Ltd | Periphery monitoring device for vehicle |
| JP2011086097A (en) * | 2009-10-15 | 2011-04-28 | Daihatsu Motor Co Ltd | Obstacle detection device |
| JP2011165170A (en) * | 2010-01-15 | 2011-08-25 | Toyota Central R&D Labs Inc | Object detection device and program |
| CN103136509A (en) * | 2011-11-24 | 2013-06-05 | 富士重工业株式会社 | Exterior environment recognition device and exterior environment recognition method |
| CN104680124A (en) * | 2013-11-28 | 2015-06-03 | 现代摩比斯株式会社 | Device and method for detecting pedestrians |
| WO2018008461A1 (en) * | 2016-07-05 | 2018-01-11 | 日立オートモティブシステムズ株式会社 | Image processing device |
| CN108241852A (en) * | 2016-12-26 | 2018-07-03 | 佳能株式会社 | Human body detecting device and method, information processing equipment and method and storage medium |
| JP2018109824A (en) * | 2016-12-28 | 2018-07-12 | 株式会社デンソーテン | Electronic control device, electronic control system, and electronic control method |
Family Cites Families (3)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP4425642B2 (en) * | 2004-01-08 | 2010-03-03 | 富士重工業株式会社 | Pedestrian extraction device |
| JP5712968B2 (en) * | 2012-05-31 | 2015-05-07 | 株式会社デンソー | Human detection device |
| JP6091586B1 (en) * | 2015-11-12 | 2017-03-08 | 三菱電機株式会社 | VEHICLE IMAGE PROCESSING DEVICE AND VEHICLE IMAGE PROCESSING SYSTEM |

2019
- 2019-11-19 CN CN201980084842.0A patent/CN113228130B/en active Active
- 2019-11-19 JP JP2020561232A patent/JP7201706B2/en active Active
- 2019-11-19 WO PCT/JP2019/045181 patent/WO2020129517A1/en active Application Filing
Also Published As

| Publication number | Publication date |
|---|---|
| WO2020129517A1 (en) | 2020-06-25 |
| JPWO2020129517A1 (en) | 2021-10-07 |
| JP7201706B2 (en) | 2023-01-10 |
| CN113228130A (en) | 2021-08-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11461595B2 (en) | Image processing apparatus and external environment recognition apparatus | |
CN103358981B (en) | The headlight setting device of vehicle | |
CN102779430B (en) | Collision-warning system, controller and method of operating thereof after the night of view-based access control model | |
CN103213540B (en) | Vehicle driving environment recognition apparatus | |
CN104185588B (en) | Vehicle-mounted imaging system and method for determining road width | |
CN108496176B (en) | Method for identifying objects in the surrounding area of a motor vehicle, driver assistance system and motor vehicle | |
US9224055B2 (en) | Exterior environment recognition device | |
JP6459659B2 (en) | Image processing apparatus, image processing method, driving support system, program | |
KR20170014168A (en) | Camera device for vehicle | |
KR101840974B1 (en) | Lane identification system for autonomous drive | |
KR101268282B1 (en) | Lane departure warning system in navigation for vehicle and method thereof | |
KR101276073B1 (en) | System and method for detecting distance between forward vehicle using image in navigation for vehicle | |
US20120128211A1 (en) | Distance calculation device for vehicle | |
JP4007578B2 (en) | Headlamp irradiation range control method and headlamp apparatus | |
CN113228130B (en) | image processing device | |
WO2020054260A1 (en) | Image recognition device | |
JP6174884B2 (en) | Outside environment recognition device and outside environment recognition method | |
KR101511586B1 (en) | Apparatus and method for controlling vehicle by detection of tunnel | |
JP2018163530A (en) | Object detection device, object detection method, and object detection program | |
KR101180676B1 (en) | A method for controlling high beam automatically based on image recognition of a vehicle | |
JP2020038551A (en) | Face orientation detection device, face orientation detection method, and control program | |
JP4791086B2 (en) | Collision avoidance system | |
KR101982091B1 (en) | Surround view monitoring system | |
JP6582891B2 (en) | Empty vehicle frame identification system, method and program | |
WO2023112127A1 (en) | Image recognition device and image recognition method |
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | GR01 | Patent grant | |