WO2022009537A1 - Image processing device - Google Patents

Image processing device

Info

Publication number
WO2022009537A1
WO2022009537A1 (PCT/JP2021/019335)
Authority
WO
WIPO (PCT)
Prior art keywords
obstacle
region
image
images
obstacle candidate
Prior art date
Application number
PCT/JP2021/019335
Other languages
French (fr)
Japanese (ja)
Inventor
琢馬 大里
フェリペ ゴメズカバレロ
雅幸 竹村
健 永崎
Original Assignee
Hitachi Astemo, Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hitachi Astemo, Ltd.
Priority to DE112021002598.8T (published as DE112021002598T5)
Publication of WO2022009537A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/215Motion-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/254Analysis of motion involving subtraction of images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30248Vehicle exterior or interior
    • G06T2207/30252Vehicle exterior; Vicinity of vehicle
    • G06T2207/30261Obstacle

Definitions

  • The present invention relates to an image processing device.
  • Background art in this technical field includes Japanese Patent Application Laid-Open No. 2007-235642 (Patent Document 1).
  • That publication states the problem as "eliminating false detection of obstacles caused by detection errors of vehicle speed sensors and steering angle sensors, and performing three-dimensional object detection with a monocular camera with higher accuracy."
  • As the solution, it describes creating a top view of a first image, including the road surface, captured by a camera mounted on the vehicle and a top view of a second image captured at a different timing, bringing the two top views into correspondence based on characteristic shapes on the road surface, and recognizing a region where a difference occurs in the overlapping portion of the two top views as an obstacle.
  • An object of the present invention is to provide an image processing device that can calculate an appropriate correspondence for each region, even when the parameter that brings two images into correspondence changes from region to region, and thereby correctly detect and recognize an obstacle (hereinafter also referred to as an object).
  • To achieve this object, the present invention comprises an image conversion unit that converts two images captured at two different times into bird's-eye view images; an obstacle candidate region extraction unit that uses the two bird's-eye view images or the two images to extract, as an obstacle candidate region, the portion of a region indicating an obstacle that lies within a predetermined height from the road surface; and an obstacle detection unit that aligns the obstacle candidate regions using image features within them and determines that a region where a difference remains after alignment is a region where an obstacle exists.
  • According to the present invention, it is possible to provide an image processing device that correctly detects and recognizes an obstacle (object) by calculating an appropriate correspondence for each region, even when the parameter that brings the two images into correspondence changes from region to region.
  • Figure captions (excerpt): explanatory diagram of the object detection method by the bird's-eye-view difference method; explanatory diagram and flowchart of a specific example of obstacle candidate region extraction using a classifier; flowchart of the obstacle detection unit; explanatory diagram of the means for aligning obstacle candidate regions; flowchart and explanatory diagram of obstacle candidate region extraction using map information and GPS information in Example 2; flowchart and explanatory diagram of obstacle candidate region extraction for handling pop-outs in Example 3.
  • The camera 101 is mounted on the vehicle 100.
  • The camera 101 acquires images of the surroundings of the vehicle 100, and may be a monocular camera or a stereo camera composed of two or more cameras, as long as a plurality of images captured at different times can be acquired.
  • An object detection device 102 is mounted on the camera 101; it measures, for example, the distance and relative speed to an object ahead and transmits them to the vehicle control unit 103.
  • The vehicle control unit 103 controls the brake/accelerator 105 and the steering 104 based on the distance and relative speed received from the object detection device 102.
  • The camera 101 includes the object detection device 102 shown in FIG. 2.
  • The object detection device 102 includes an image pickup element 201, a memory 202, a CPU 203, an image processing unit (image processing device) 204, an external output unit 205, and the like. The components of the object detection device 102 are communicably connected via a communication line 206.
  • The CPU 203 executes the arithmetic processing described below according to the instructions of a program stored in the memory 202.
  • The image captured by the image pickup element 201 is transmitted to the image processing unit 204, which detects obstacles appearing in the image. Specific detection means are described later.
  • The detection result is transmitted from the external output unit 205 to the outside of the object detection device 102 and, in the above-mentioned in-vehicle camera system, is used by the vehicle control unit 103 to determine vehicle control such as the brake/accelerator 105 and the steering 104.
  • The image processing unit 204 detects and tracks objects using images.
  • As the means for detecting objects from images, for example, a method using the difference between bird's-eye view images (also called the bird's-eye-view difference method) is used.
  • As shown in FIG. 3, this method performs detection using two images captured in time series: image 301 captured at the past time T-1 and image 302 captured at the current time T.
  • Of the two images 301 and 302, the previously captured image 301 is converted into a bird's-eye view image, and at the same time the change in appearance due to the vehicle's motion is calculated from information such as the vehicle speed, producing a predicted image 304 of what should be captured in the current frame.
  • A projective transformation (homography) is generally used for the conversion to a bird's-eye view image. A projective transformation maps one plane onto another plane, so the road plane in image 301 and the road plane in image 302 can each be converted into a bird's-eye view image looking straight down from above.
  • The predicted image 304 is compared with image 305, obtained by converting the image 302 actually captured in the current frame into a bird's-eye view image, to create a difference image 306.
  • The difference image 306 holds a difference value for each pixel; regions without a difference are shown in black and regions with a difference in white.
  • If there is no error in the projective transformation, the road plane becomes identical in both bird's-eye views and no difference occurs, but a difference does occur in regions off the road plane, that is, in the region where the obstacle 303 exists.
  • By detecting this difference, the object detection result 307 is obtained.
  • Because this method detects objects that are not on the plane used for the projective transformation, it is important to select the road plane on which the obstacle stands as the plane to be transformed.
  • As shown in FIG. 4, in a scene where the road surface gradient changes, a single image contains several different road planes. Therefore, if the image conversion is performed for the road plane in the region where the own vehicle is in contact with the ground, a different plane may end up being projected.
  • In that case, in the bird's-eye view images 404 and 405 generated from the two time-series images 401 and 402, unintended differences appear in the texture of the road plane, in this example the pedestrian crossing, so that a difference image 406 with differences outside the obstacle 303 is obtained.
  • In the object detection result 407 obtained from those differences, problems occur such as the pedestrian crossing being falsely detected as an obstacle and the region of the obstacle to be detected being estimated incorrectly.
  • FIG. 5 illustrates the case where the generation of the bird's-eye view image contains an error and the road surface region near the foot of an obstacle is mistaken for part of the obstacle.
  • Based on the false detection result 501 that includes the road surface region, the lower edge of the detection frame (here, a rectangular detection frame) in the image is projected onto the road plane to estimate the distance to the obstacle 303; if the lower edge position of the obstacle 303 is estimated incorrectly, the obstacle is erroneously placed at the position of the erroneous estimation result 502. Since an erroneous distance estimate leads to erroneous vehicle control, it is desirable to estimate the lower edge position of the detection frame as accurately as possible.
  • As shown in FIG. 2, the image processing unit 204 of this embodiment has an image conversion unit 241, an obstacle candidate region extraction unit 242, and an obstacle detection unit 243.
  • The image conversion unit 241 converts the two images captured at two different times into bird's-eye view images using a first parameter that projectively transforms (also called bird's-eye-view conversion) the plane that is dominant in each image.
  • The first parameter is the same parameter over the whole of each image.
  • In the in-vehicle camera system described above, the dominant plane can be, for example, the road surface under the assumption that the own vehicle is on a flat road.
  • If a distance sensor such as a LiDAR or a stereo camera is available, the result of estimating the road surface from the distance measurements may be used instead.
  • In a scene where the road surface gradient changes, as shown in FIG. 6, the entire road surface is represented by one (the same) plane 602 based on the posture of the own vehicle 601, so a discrepancy may occur between the represented plane and the actual road surface.
  • The obstacle candidate region extraction unit 242 extracts, from the two images captured at two different times, the regions whose alignment must be performed with high accuracy so that the obstacle region can be calculated accurately (hereinafter also called obstacle candidate regions). Because the whole scene contains multiple planes, it cannot be fully represented by a single (uniform) projective transformation (see FIG. 6). Therefore, small regions that can each be represented by one projective transformation are extracted, and in the subsequent processing the image conversion and superposition by projective transformation are performed again for each region, eliminating the differences on the road surface so that obstacles are detected only from the differences on the obstacles themselves. Since it is the lower edge whose position must be estimated accurately in obstacle detection, regions close to the ground contact position (that is, the portion of a region indicating an obstacle that lies within a predetermined height from the road surface) are selected.
  • As an example of the obstacle candidate region extraction means, a flowchart of a specific example using the bird's-eye view images is shown in FIG. 8 and the actual regions in FIG. 7.
  • In step S801, the two images converted to bird's-eye views by the image conversion unit 241 are given as inputs, and the two bird's-eye view images are superimposed using the first parameter, which is the same over the whole image.
  • Object detection is then performed using the regions where a difference (image difference) occurs (difference regions).
  • Because the bird's-eye-view conversion contains errors, this yields detection frames such as frame 701, which includes a large road surface area below the feet of the obstacle in the difference image 406, and false detection frame 702 in an area containing only road surface. In step S802, the portion of each detection frame (that is, each obstacle detection region) close to the road surface (within a predetermined height from the road surface) 703 is extracted, and the regions 704 and 705 of the two captured images 401 and 402 corresponding to the extracted regions 703 are extracted as obstacle candidate regions. By extracting a region individually for each object, processing regions are obtained for removing the bird's-eye-view conversion errors caused by the plane differences described above.
  • As another example of the obstacle candidate region extraction means, a flowchart of a specific example using a classifier is shown in FIG. 10 and the actual regions in FIG. 9.
  • The classifier here has a pedestrian leg detection function, and it detects the peripheral region of a pedestrian leg candidate (a pedestrian leg candidate region in which a pedestrian leg is likely to be imaged).
  • The two captured images are given as inputs, and in step S1001 the regions in which a pedestrian leg is likely to be imaged are searched for while the processing window 901 of the classifier is shifted little by little over the image.
  • The classifier is an evaluation function designed so that its score becomes high when a pedestrian leg is imaged in the target area; by searching for areas where this score is at or above a threshold, regions 902 in which a pedestrian leg is likely to be imaged can be extracted as obstacle candidate regions.
  • By this processing, only the areas near the feet (that is, within a predetermined height from the road surface) are extracted from the regions of the two captured images 401 and 402 that may contain obstacles (obstacle detection regions).
  • However, because boundary portions such as the lower edge cannot be estimated accurately by the classifier alone, the regions extracted by this processing are aligned in the subsequent processing so that the obstacle region and the road surface region are separated accurately.
  • The obstacle detection unit 243 performs alignment within the obstacle candidate regions extracted by the obstacle candidate region extraction unit 242 to calculate a second parameter that projectively transforms the plane dominant in each region, superimposes the images converted with the second parameter (in other words, performs alignment), and detects a region where a difference occurs as a region where an obstacle exists, that is, as an obstacle.
  • The flowchart of the obstacle detection unit 243 is shown in FIG. 11. The obstacle candidate regions of the two images described above are given as inputs.
  • In step S1101, feature points are detected in the given regions. Known feature point detectors include FAST (Features from Accelerated Segment Test) and Harris corner detection, which detect corner points that stand out from their surroundings (see FIG. 12).
  • In step S1102, a feature descriptor is computed for each detected feature point. A known descriptor is, for example, SIFT (Scale-Invariant Feature Transform), a robust feature that changes little under image changes such as illumination change, rotation, and scaling, so that the same point can be recognized as the same even when its appearance changes due to camera motion or the passage of time.
  • In step S1103, the feature points at the two times are matched (see FIG. 12). The feature points of the obstacle candidate region at the past time T-1 are compared with those at the current time T, points with similar descriptors computed in step S1102 are matched with each other, and feature point pairs (1201, 1202) pointing to the same location at the two times are obtained.
  • In step S1104, a projective transformation matrix is estimated using the information of the matched points: the matrix that minimizes the displacement of the points when all points are transformed by the same projective transformation and superimposed. By this processing, for the small region (obstacle candidate region) extracted by the obstacle candidate region extraction unit 242, consisting of the lower edge of the obstacle and the road surface at its feet, a projective transformation matrix based on a second parameter matched to the road surface at the feet is obtained.
  • In step S1105, the images converted using the projective transformation matrix of the second parameter (that is, the bird's-eye view images of the obstacle candidate region) are superimposed, and the difference of their luminance values is calculated.
  • In step S1106, a region where an image difference occurs after the superposition (alignment), i.e. a difference region, is taken to be an obstacle (a region where an obstacle exists), yielding the obstacle detection result.
  • Because each obstacle candidate region is a small region containing only an obstacle and the road surface at its feet, even in a scene that cannot be represented by a single projective transformation matrix because of a changing road gradient, a step, or a height difference, an appropriate projective transformation matrix (one based on the second parameter matched to the road surface at the feet) can be estimated for each obstacle, which prevents unnecessary differences from arising in the imaged road surface region.
  • As described above, the image processing unit (image processing device) 204 of this embodiment comprises the image conversion unit 241 that converts two images at two different times into bird's-eye view images; the obstacle candidate region extraction unit 242 that uses the two bird's-eye view images or the two images to extract, as obstacle candidate regions, the portions of regions indicating obstacles that lie within a predetermined height from the road surface; and the obstacle detection unit 243 that aligns the obstacle candidate regions using image features within them and determines that a region where a difference remains is a region where an obstacle exists.
  • As one example, the obstacle candidate region extraction unit 242 extracts the obstacle candidate regions by extracting the regions where an image difference occurs when the two bird's-eye view images are superimposed with the first parameter, which is the same over the whole image, and the obstacle detection unit 243 aligns the obstacle candidate regions with the second parameter using the image features within them and determines that a region where a difference remains is a region where an obstacle exists.
  • In this case, the obstacle candidate region extraction unit 242 extracts the regions where an image difference occurs when the two bird's-eye view images are superimposed with the same first parameter over the whole image, extracts from them the portions within a predetermined height from the road surface, and extracts the regions of the two images corresponding to the extracted portions as the obstacle candidate regions.
  • As another example, the obstacle candidate region extraction unit 242 extracts, as the obstacle candidate regions, the peripheral regions of pedestrian leg candidates detected in the two images by a pedestrian leg detection function (classifier).
  • This makes it possible to provide an image processing device that correctly detects and recognizes obstacles (objects) by calculating an appropriate correspondence for each region, even when the parameter that brings the two images into correspondence changes from region to region.
  • Example 2: Method of linking with map information and GPS information
  • Example 2 is a modification of Example 1 and shows a case in which the obstacle candidate region extraction unit 242 uses map information and vehicle GPS information to extract regions with a high possibility of false detection.
  • According to this example, scenes with texture on the road surface, which are likely to cause false detection due to projective transformation errors, are extracted as obstacle candidate regions without being missed and are aligned with high accuracy, which prevents the road surface texture from being falsely detected as an obstacle.
  • FIG. 13 shows a flowchart of the obstacle candidate region extraction unit 242 in this example.
  • The inputs are the image 13A, the road-paint positions 13B included in the map information, and GPS information 13C including the position of the own vehicle.
  • Road paint here means textures drawn on the road surface, such as pedestrian crossings and U-turn prohibition markings, whose positions are stored in navigation systems and the like.
  • Since any road surface texture can likewise cause false detection, the input is not limited to road paint; if information on textures occurring over the road surface, such as low curbs, gravel roads, or fallen leaves, can be obtained, that information may also be received as input.
  • In step S1301, the road-paint imaging region is calculated from the image: the GPS information is compared with the map information on road-paint positions to determine whether road paint is included in the currently imaged area, and if so, the image region in which it appears (that is, the road-paint imaging region) is computed and extracted.
  • FIG. 14 shows an explanatory diagram. Ahead of the own vehicle 1401, region 1402 is known from the map information and GPS information to be a region without road paint. Without road texture, no difference arises in the road surface region even if the projective transformation is wrong, so high-accuracy alignment is unnecessary there.
  • Therefore, the obstacle candidate region extraction unit 242 extracts the region 1404 on the image corresponding to the painted region (road-paint imaging region) 1403 as an obstacle candidate region, and the obstacle detection unit 243 performs accurate alignment of this small region, preventing false detection.
  • In this way, the obstacle candidate region extraction unit 242 acquires map information and the position information of the own vehicle and extracts the regions of the two images in which road paint is imaged as the obstacle candidate regions.
  • Example 3: Extraction of image regions with a high risk of pop-outs
  • Example 3 is a modification of Example 1 and shows a case in which the obstacle candidate region extraction unit 242 extracts not only regions where an obstacle may currently exist but also regions from which an obstacle (an adult or child pedestrian, bicycle, motorcycle, automobile, or the like) may pop out in the future.
  • According to this example, the distance to an obstacle can be estimated without error even when a sudden pop-out occurs.
  • The flowchart of the obstacle candidate region extraction unit 242 in this example is shown in FIG. 15 and an explanatory diagram in FIG. 16.
  • First, object detection S801 using difference regions is performed with a projective transformation that contains errors, as in Example 1; for the object 1601, an object detection result containing an error in the depth direction is obtained, as in detection result 1602.
  • In step S1501, regions of a predetermined size on the own-vehicle side of the left and right ends of the detection result 1602 are extracted as possible pop-out regions 1603.
  • Region 1603 is a region from behind which the object 1601 may pop out; if a pop-out occurs while the distance is wrong, erroneous braking may result.
  • Therefore, the obstacle candidate region extraction unit 242 extracts, based on each region where an obstacle may exist, the regions at risk of pop-outs (possible pop-out regions) 1603 adjacent to its left and right ends as obstacle candidate regions, and the obstacle detection unit 243 preferentially performs high-accuracy image alignment there so that the obstacle position is estimated accurately.
  • In this example, the left and right ends of object detection results that may include false detections are the extraction targets, but any extraction method may be used as long as it yields regions with a high possibility of pop-outs; for example, the shadows of buildings and the like may be extracted from the map information, or the results may be fused with object detection results from another sensor.
  • In this way, the obstacle candidate region extraction unit 242 extracts, as the obstacle candidate regions, the regions at risk of pop-outs (possible pop-out regions) adjacent to the left and right ends of the regions of the two images where an obstacle may exist.
  • The present invention is not limited to the embodiments described above and includes various modifications.
  • The above embodiments have been described in detail in order to explain the present invention in an easily understandable way, and the invention is not necessarily limited to configurations having all of the described elements.
  • It is also possible to replace part of the configuration of one embodiment with the configuration of another embodiment, and to add the configuration of another embodiment to the configuration of one embodiment.
  • Each of the above configurations may be realized wholly or partly in hardware, or by a program executed on a processor.
  • Control lines and information lines are shown where they are considered necessary for the explanation and do not necessarily represent all control lines and information lines in a product; in practice, almost all components may be considered to be interconnected.
  • Reference signs: 100 vehicle, 101 camera, 102 object detection device, 103 vehicle control unit, 201 image pickup element, 202 memory, 203 CPU, 204 image processing unit (image processing device), 205 external output unit, 206 communication line, 241 image conversion unit, 242 obstacle candidate region extraction unit, 243 obstacle detection unit.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Traffic Control Systems (AREA)

Abstract

Provided is an image processing device which, even when the parameter for associating two images with each other changes from one region to another, can calculate an appropriate correspondence for each region and correctly detect and recognize an obstacle (object). This image processing device comprises: an image conversion unit 241 that converts two images obtained at two different time points into bird's-eye-view images; an obstacle candidate region extraction unit 242 that uses the two bird's-eye-view images or the two images to extract, as obstacle candidate regions, regions at a prescribed height or less from the road surface from among regions indicating obstacles; and an obstacle detection unit 243 that aligns the obstacle candidate regions using image features in the obstacle candidate regions and determines a region where a difference appears after alignment to be a region where an obstacle is present.

Description

Image processing device
The present invention relates to an image processing device.
As background art in this technical field, there is Japanese Patent Application Laid-Open No. 2007-235642 (Patent Document 1). That publication states the problem as "eliminating false detection of obstacles caused by detection errors of vehicle speed sensors and steering angle sensors, and performing three-dimensional object detection with a monocular camera with higher accuracy," and describes the solution as follows: "a top view of a first image, including the road surface, captured by a camera mounted on the vehicle and a top view of a second image captured at a different timing from the first image are created, the two top views are brought into correspondence based on characteristic shapes on the road surface, and a region where a difference occurs in the overlapping portion of the two top views is recognized as an obstacle."
Japanese Unexamined Patent Publication No. 2007-235642
However, in an image that includes a road surface whose gradient changes or that has steps or height differences, the correspondence based on characteristic shapes on the road surface varies from region to region of the image. If a single correspondence is applied to the entire screen as in Patent Document 1, problems therefore arise, such as the obstacle recognition accuracy changing depending on the region.
An object of the present invention is to provide an image processing device that can calculate an appropriate correspondence for each region, even when the parameter that brings two images into correspondence changes from region to region, and thereby correctly detect and recognize an obstacle (hereinafter also referred to as an object).
To achieve the above object, the present invention comprises an image conversion unit that converts two images captured at two different times into bird's-eye view images; an obstacle candidate region extraction unit that uses the two bird's-eye view images or the two images to extract, as an obstacle candidate region, the portion of a region indicating an obstacle that lies within a predetermined height from the road surface; and an obstacle detection unit that aligns the obstacle candidate regions using image features within them and determines that a region where a difference remains after alignment is a region where an obstacle exists.
According to the present invention, it is possible to provide an image processing device that correctly detects and recognizes an obstacle (object) by calculating an appropriate correspondence for each region, even when the parameter that brings the two images into correspondence changes from region to region.
Problems, configurations, and effects other than those described above will become clear from the following description of the embodiments.
FIG. 1 is an explanatory diagram showing the schematic configuration of the in-vehicle camera system in Example 1.
FIG. 2 is an explanatory diagram showing the configuration of the object detection device and the image processing unit in Example 1.
FIG. 3 is an explanatory diagram of the object detection method based on the bird's-eye-view difference method.
FIG. 4 is an explanatory diagram of the problem with the bird's-eye-view difference method in a scene with a road surface gradient.
FIG. 5 is an explanatory diagram of the method of estimating the distance to a detected obstacle assuming a flat road surface.
FIG. 6 shows an example of a scene in which a uniform projective transformation produces errors.
FIG. 7 is an explanatory diagram of a specific example of obstacle candidate region extraction using bird's-eye view images.
FIG. 8 is a flowchart of the specific example of obstacle candidate region extraction using bird's-eye view images.
FIG. 9 is an explanatory diagram of a specific example of obstacle candidate region extraction using a classifier.
FIG. 10 is a flowchart of the specific example of obstacle candidate region extraction using a classifier.
FIG. 11 is a flowchart of the obstacle detection unit.
FIG. 12 is an explanatory diagram of the means for aligning obstacle candidate regions.
FIG. 13 is a flowchart of obstacle candidate region extraction using map information and GPS information in Example 2.
FIG. 14 is an explanatory diagram of obstacle candidate region extraction using map information and GPS information in Example 2.
FIG. 15 is a flowchart of obstacle candidate region extraction for handling pop-outs in Example 3.
FIG. 16 is an explanatory diagram of obstacle candidate region extraction for handling pop-outs in Example 3.
Hereinafter, embodiments of the present invention will be described with reference to the drawings.
[Example 1]
An overview of the in-vehicle camera system equipped with the object detection device of Example 1 will be described with reference to FIG. 1. A camera 101 is mounted on a vehicle 100. The camera 101 acquires images of the surroundings of the vehicle 100, and may be a monocular camera or a stereo camera composed of two or more cameras, as long as a plurality of images captured at different times can be acquired. An object detection device 102 is mounted on the camera 101; it measures, for example, the distance and relative speed to an object ahead and transmits them to a vehicle control unit 103. The vehicle control unit 103 controls a brake/accelerator 105 and a steering 104 based on the distance and relative speed received from the object detection device 102.
The camera 101 includes the object detection device 102 shown in FIG. 2. The object detection device 102 includes an image pickup element 201, a memory 202, a CPU 203, an image processing unit (image processing device) 204, an external output unit 205, and the like. The components of the object detection device 102 are communicably connected via a communication line 206. The CPU 203 executes the arithmetic processing described below according to the instructions of a program stored in the memory 202.
The image captured by the image pickup element 201 is transmitted to the image processing unit 204, which detects obstacles appearing in the image. Specific detection means are described later. The detection result is transmitted from the external output unit 205 to the outside of the object detection device 102 and, in the in-vehicle camera system described above, is used by the vehicle control unit 103 to determine vehicle control such as the brake/accelerator 105 and the steering 104.
The components of the image processing unit 204 are described below.
The image processing unit 204 detects and tracks objects using images. As the means for detecting objects from images, for example, a method using the difference between bird's-eye view images (also called the bird's-eye-view difference method) is used. As shown in FIG. 3, this method performs detection using two images captured in time series: image 301 captured at the past time T-1 and image 302 captured at the current time T. Of the two images 301 and 302, the previously captured image 301 is converted into a bird's-eye view image, and at the same time the change in appearance due to the vehicle's motion is calculated from information such as the vehicle speed, producing a predicted image 304 of what should be captured in the current frame. A projective transformation (homography) is generally used for the conversion to a bird's-eye view image. A projective transformation maps one plane onto another plane, so the road plane in image 301 and the road plane in image 302 can each be converted into a bird's-eye view image looking straight down from above. The predicted image 304 is compared with image 305, obtained by converting the image 302 actually captured in the current frame into a bird's-eye view image, to create a difference image 306. The difference image 306 holds a difference value for each pixel; regions without a difference are shown in black and regions with a difference in white. If there is no error in the projective transformation, the road plane becomes identical in both bird's-eye views and no difference occurs, but a difference does occur in regions off the road plane, that is, in the region where the obstacle 303 exists. By detecting this difference, the object detection result 307 is obtained.
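As a hedged illustration of the bird's-eye-view difference method described above, the following Python/OpenCV sketch warps the two frames to top views, predicts the previous top view forward according to the ego-motion, and thresholds the difference. The homographies H_bev and H_motion, the output size, and the thresholds are assumed inputs chosen for illustration; this is a sketch of the general technique, not the patented implementation itself.

```python
import cv2
import numpy as np

def bev_difference_detection(img_prev, img_curr, H_bev, H_motion,
                             bev_size=(400, 600), diff_thresh=30):
    """Detect obstacle candidates by differencing two bird's-eye views.

    img_prev / img_curr : grayscale frames at times T-1 and T (images 301, 302).
    H_bev               : 3x3 homography mapping the road plane to the top view.
    H_motion            : 3x3 homography predicting how the T-1 top view moves
                          to time T (derived from vehicle speed / yaw rate).
    """
    # Convert both frames to bird's-eye views (predicted image 304 and image 305).
    bev_prev = cv2.warpPerspective(img_prev, H_motion @ H_bev, bev_size)
    bev_curr = cv2.warpPerspective(img_curr, H_bev, bev_size)

    # Difference image 306: road-plane pixels cancel, obstacle pixels remain.
    diff = cv2.absdiff(bev_prev, bev_curr)
    _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)

    # Group difference pixels into detection boxes (object detection result 307).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 50]
```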
Because this method detects objects that are not on the plane used for the projective transformation, it is important to select the road plane on which the obstacle stands as the plane to be transformed. As shown in FIG. 4, in a scene where the road surface gradient changes, a single image contains several different road planes. Therefore, if the image conversion is performed for the road plane in the region where the own vehicle is in contact with the ground, a different plane may end up being projected. In that case, in the bird's-eye view images 404 and 405 generated from the two time-series images 401 and 402, unintended differences appear in the texture of the road plane, in this example the pedestrian crossing, so that a difference image 406 with differences outside the obstacle 303 is obtained; in the object detection result 407 obtained from those differences, problems occur such as the pedestrian crossing being falsely detected as an obstacle and the region of the obstacle to be detected being estimated incorrectly. To detect obstacles correctly, it is therefore necessary to perform an appropriate projective transformation for each region in scenes containing multiple road planes, so that no differences between the images arise on the road planes.
A known method of calculating the distance to a detected obstacle assumes that the foot portion of the detected obstacle region is in contact with the road surface. FIG. 5 illustrates the case where the generation of the bird's-eye view image contains an error and the road surface region near the foot of an obstacle is mistaken for part of the obstacle. Based on the false detection result 501 that includes the road surface region, the lower edge of the detection frame (here, a rectangular detection frame) in the image is projected onto the road plane to estimate the distance to the obstacle 303. If the lower edge position of the obstacle 303 is estimated incorrectly, the obstacle is erroneously placed at the position of the erroneous estimation result 502. Since an erroneous estimate of the distance to a detected obstacle leads to erroneous vehicle control, it is desirable to estimate the lower edge position of the detection frame as accurately as possible.
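As a small worked illustration of this flat-ground distance estimate (a sketch assuming a level road and zero camera pitch, with illustrative pinhole parameters rather than values from the patent):

```python
def distance_from_box_bottom(v_bottom, fy, cy, cam_h):
    """Estimate the distance to an obstacle from the bottom row of its detection
    frame, assuming the frame bottom touches a flat, level road (cf. FIG. 5).
    v_bottom : image row of the detection-frame lower edge (pixels)
    fy, cy   : vertical focal length and principal point row (pixels)
    cam_h    : camera mounting height above the road (metres)
    """
    if v_bottom <= cy:          # at or above the horizon: no ground intersection
        return float("inf")
    return cam_h * fy / (v_bottom - cy)

# Even a small error in the lower-edge row (erroneous result 502) changes the
# estimated distance substantially, e.g. with fy=1000, cy=360, cam_h=1.2:
#   distance_from_box_bottom(400, 1000, 360, 1.2) -> 30.0 m
#   distance_from_box_bottom(410, 1000, 360, 1.2) -> 24.0 m
```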
The image processing unit 204 is described in detail below. As shown in FIG. 2, the image processing unit 204 of this embodiment has an image conversion unit 241, an obstacle candidate region extraction unit 242, and an obstacle detection unit 243.
(Image conversion unit 241)
The image conversion unit 241 converts the two images captured at two different times into bird's-eye view images using a first parameter that projectively transforms (also called bird's-eye-view conversion) the plane that is dominant in each image. The first parameter is the same parameter over the whole of each image. In the in-vehicle camera system described above, the dominant plane can be, for example, the road surface under the assumption that the own vehicle is on a flat road. If a distance sensor such as a LiDAR or a stereo camera is available, the result of estimating the road surface from the distance measurements may be used instead. In a scene where the road surface gradient changes, as shown in FIG. 6, the entire road surface is represented by one (the same) plane 602 based on the posture of the own vehicle 601, so a discrepancy may occur between the represented plane and the actual road surface.
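One possible way to realize such a first parameter, sketched below under the flat-road assumption, is to map four image points known to lie on the road plane to their positions in a metric top-view grid; the point coordinates here are purely illustrative and are not taken from the patent.

```python
import cv2
import numpy as np

def first_parameter_homography(road_pts_image, road_pts_topview):
    """Compute a single bird's-eye homography for the whole image (first parameter).

    road_pts_image   : 4 pixel positions assumed to lie on the flat road plane.
    road_pts_topview : the same 4 points in top-view (bird's-eye) pixel coordinates,
                       e.g. metric road coordinates scaled to the output image.
    """
    src = np.float32(road_pts_image)
    dst = np.float32(road_pts_topview)
    return cv2.getPerspectiveTransform(src, dst)

# Example (illustrative numbers only): a trapezoid on the road ahead of the
# vehicle is mapped to a rectangle in a 400x600 top-view image.
H_bev = first_parameter_homography(
    road_pts_image=[(300, 700), (980, 700), (560, 420), (720, 420)],
    road_pts_topview=[(100, 600), (300, 600), (100, 0), (300, 0)],
)
```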
(Obstacle candidate region extraction unit 242)
The obstacle candidate region extraction unit 242 extracts, from the two images captured at two different times, the regions whose alignment must be performed with high accuracy so that the obstacle region can be calculated accurately (hereinafter also called obstacle candidate regions). Because the whole scene contains multiple planes, it cannot be fully represented by a single (uniform) projective transformation (see FIG. 6). Therefore, small regions that can each be represented by one projective transformation are extracted, and in the subsequent processing the image conversion and superposition by projective transformation are performed again for each region, eliminating the differences on the road surface so that obstacles are detected only from the differences on the obstacles themselves. Since it is the lower edge whose position must be estimated accurately in obstacle detection, regions close to the ground contact position (that is, the portion of a region indicating an obstacle that lies within a predetermined height from the road surface) are selected.
As an example of the obstacle candidate region extraction means, a flowchart of a specific example using the bird's-eye view images is shown in FIG. 8 and the actual regions in FIG. 7. As shown in FIG. 8, in step S801 the two images converted to bird's-eye views by the image conversion unit 241 are given as inputs, the two bird's-eye view images are superimposed using the first parameter, which is the same over the whole image, and object detection is performed using the regions where a difference (image difference) occurs (difference regions). Because the bird's-eye-view conversion contains errors, this yields detection frames such as frame 701, which includes a large road surface area below the feet of the obstacle in the difference image 406, and false detection frame 702 in an area containing only road surface. In step S802, the portion of each detection frame (that is, each obstacle detection region) close to the road surface (within a predetermined height from the road surface) 703 is extracted, and the regions 704 and 705 of the two captured images 401 and 402 corresponding to the extracted regions 703 are extracted as obstacle candidate regions. By this processing, only the areas near the feet (that is, within a predetermined height from the road surface) are extracted from the regions of the two captured images 401 and 402 that may contain obstacles (obstacle detection regions). By extracting a region individually for each object, processing regions are obtained for removing the bird's-eye-view conversion errors caused by the plane differences described above.
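A minimal sketch of steps S801-S802 follows. It assumes the coarse detection boxes from the whole-image difference have already been mapped back to the coordinates of the two original images, and treats the kept fraction of each box as the "predetermined height"; both are illustrative assumptions rather than details specified in the text.

```python
def extract_obstacle_candidate_regions(img_prev, img_curr, detection_boxes,
                                       foot_fraction=0.25):
    """For each coarse detection box (701/702), keep only the band nearest the
    road surface (703) and cut the corresponding patches (704, 705) from the
    two captured images as obstacle candidate regions."""
    candidates = []
    for (x, y, w, h) in detection_boxes:
        band_h = max(1, int(h * foot_fraction))   # stands in for the "predetermined height"
        y0 = y + h - band_h                       # bottom band of the box
        candidates.append((img_prev[y0:y + h, x:x + w],
                           img_curr[y0:y + h, x:x + w],
                           (x, y0, w, band_h)))
    return candidates
```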
As another example of the obstacle candidate region extraction means, a flowchart of a specific example using a classifier is shown in FIG. 10 and the actual regions in FIG. 9. The classifier here has a pedestrian leg detection function, and it detects the peripheral region of a pedestrian leg candidate (a pedestrian leg candidate region in which a pedestrian leg is likely to be imaged). As shown in FIG. 10, the two captured images are given as inputs, and in step S1001 the regions in which a pedestrian leg is likely to be imaged are searched for while the processing window 901 of the classifier is shifted little by little over the image. The classifier is an evaluation function designed so that its score becomes high when a pedestrian leg is imaged in the target area; by searching for areas where this score is at or above a threshold, regions 902 in which a pedestrian leg is likely to be imaged can be extracted as obstacle candidate regions. By this processing, only the areas near the feet (that is, within a predetermined height from the road surface) are extracted from the regions of the two captured images 401 and 402 that may contain obstacles (obstacle detection regions). However, because boundary portions such as the lower edge cannot be estimated accurately by the classifier alone, the regions extracted by this processing are aligned in the subsequent processing so that the obstacle region and the road surface region are separated accurately.
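The sliding-window search of step S1001 can be sketched as below. The scoring function leg_score stands in for whatever evaluation function is actually used (the text does not specify a particular detector), and the window size, stride, and threshold are illustrative assumptions.

```python
def extract_leg_candidate_regions(image, leg_score, win=(48, 48),
                                  stride=16, score_thresh=0.5):
    """Slide a processing window (901) over the image and keep windows whose
    classifier score is at or above the threshold as obstacle candidate
    regions (902).

    leg_score : callable(patch) -> float, higher when a pedestrian leg is imaged
                (a placeholder for the evaluation function in the text).
    """
    h, w = image.shape[:2]
    candidates = []
    for y in range(0, h - win[1] + 1, stride):
        for x in range(0, w - win[0] + 1, stride):
            patch = image[y:y + win[1], x:x + win[0]]
            if leg_score(patch) >= score_thresh:
                candidates.append((x, y, win[0], win[1]))
    return candidates
```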
(Obstacle detection unit 243)
The obstacle detection unit 243 performs alignment within the obstacle candidate regions extracted by the obstacle candidate region extraction unit 242 to calculate a second parameter that projectively transforms the plane dominant in each region, superimposes the images converted with the second parameter (in other words, performs alignment), and detects a region where a difference occurs as a region where an obstacle exists, that is, as an obstacle. The flowchart of the obstacle detection unit 243 is shown in FIG. 11. As shown in FIG. 11, the obstacle candidate regions of the two images described above are given as inputs. In step S1101, feature points are detected in the given regions. Known feature point detectors include FAST (Features from Accelerated Segment Test) and Harris corner detection; these detect corner points in the image (see FIG. 12) and are known as methods for finding points that stand out from their surroundings. Next, in step S1102, a feature descriptor is computed for each detected feature point. A known descriptor is, for example, SIFT (Scale-Invariant Feature Transform), a robust feature that changes little under image changes such as illumination change, rotation, and scaling, so that the same point can be recognized as the same even when its appearance changes due to camera motion or the passage of time. Next, in step S1103, the feature points at the two times are matched (see FIG. 12). The feature points of the obstacle candidate region at the past time T-1 are compared with those at the current time T, points with similar descriptors computed in step S1102 are matched with each other, and feature point pairs (1201, 1202) pointing to the same location at the two times are obtained, as shown in FIG. 12. In step S1104, a projective transformation matrix is estimated using the information of the matched points: the matrix that minimizes the displacement of the points when all points are transformed by the same projective transformation and superimposed. By this processing, for the small region (obstacle candidate region) extracted by the obstacle candidate region extraction unit 242, consisting of the lower edge of the obstacle and the road surface at its feet, a projective transformation matrix based on a second parameter matched to the road surface at the feet is obtained. In step S1105, the images converted using the projective transformation matrix of the second parameter (that is, the bird's-eye view images of the obstacle candidate region) are superimposed, and the difference of their luminance values is calculated. In step S1106, a region where an image difference occurs after the superposition (alignment), i.e. a difference region, is taken to be an obstacle (a region where an obstacle exists), yielding the obstacle detection result. Because each obstacle candidate region is a small region containing only an obstacle and the road surface at its feet, even in a scene that cannot be represented by a single projective transformation matrix because the road gradient changes across the scene or because of steps or height differences, this processing can estimate an appropriate projective transformation matrix (one based on the second parameter matched to the road surface at the feet) for each obstacle, which prevents unnecessary differences from arising in the imaged road surface region.
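The following Python/OpenCV sketch corresponds roughly to steps S1101-S1106 for a single obstacle candidate region: FAST keypoints, SIFT descriptors, matching, RANSAC homography estimation (the second parameter), and warping plus differencing. For simplicity the previous patch is warped directly onto the current one instead of both patches being re-projected into a top view, and the parameter values are illustrative; it is a sketch of the described steps, not the patent's implementation.

```python
import cv2
import numpy as np

def detect_obstacle_in_candidate(patch_prev, patch_curr, diff_thresh=30):
    """Align one obstacle candidate region (grayscale patches at T-1 and T) with
    its own homography (second parameter) and return a mask of remaining
    differences, interpreted as the region where an obstacle exists."""
    fast = cv2.FastFeatureDetector_create()          # S1101: feature points
    sift = cv2.SIFT_create()                         # S1102: descriptors
    kp1 = fast.detect(patch_prev, None)
    kp2 = fast.detect(patch_curr, None)
    kp1, des1 = sift.compute(patch_prev, kp1)
    kp2, des2 = sift.compute(patch_curr, kp2)
    if des1 is None or des2 is None:
        return None

    matcher = cv2.BFMatcher(cv2.NORM_L2)             # S1103: match the two times
    matches = matcher.match(des1, des2)
    if len(matches) < 4:
        return None
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # S1104: per-region projective transformation (second parameter), via RANSAC
    H, _ = cv2.findHomography(pts1, pts2, cv2.RANSAC, 3.0)
    if H is None:
        return None

    # S1105: superimpose the aligned patches and take the luminance difference
    h, w = patch_curr.shape[:2]
    warped_prev = cv2.warpPerspective(patch_prev, H, (w, h))
    diff = cv2.absdiff(warped_prev, patch_curr)

    # S1106: remaining difference = region where an obstacle exists
    _, obstacle_mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    return obstacle_mask
```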
As described above, the image processing unit (image processing device) 204 of this embodiment comprises the image conversion unit 241 that converts two images at two different times into bird's-eye view images; the obstacle candidate region extraction unit 242 that uses the two bird's-eye view images or the two images to extract, as obstacle candidate regions, the portions of regions indicating obstacles that lie within a predetermined height from the road surface; and the obstacle detection unit 243 that aligns the obstacle candidate regions using image features within them and determines that a region where a difference remains is a region where an obstacle exists.
As one example, the obstacle candidate region extraction unit 242 extracts the obstacle candidate regions by extracting the regions where an image difference occurs when the two bird's-eye view images are superimposed with the first parameter, which is the same over the whole image, and the obstacle detection unit 243 aligns the obstacle candidate regions with the second parameter using the image features within them and determines that a region where a difference remains is a region where an obstacle exists.
In this case, the obstacle candidate region extraction unit 242 extracts the regions where an image difference occurs when the two bird's-eye view images are superimposed with the same first parameter over the whole image, extracts from them the portions within a predetermined height from the road surface, and extracts the regions of the two images corresponding to the extracted portions as the obstacle candidate regions.
 As another example, the obstacle candidate area extraction unit 242 extracts, as the obstacle candidate area, the peripheral area of a pedestrian leg candidate detected in the two images by a pedestrian leg detection function (discriminator).
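 Purely as an illustration, the sketch below stands in for the pedestrian leg detection function with OpenCV's generic HOG person detector and expands each detection by a margin to obtain a candidate region that includes the feet and the surrounding road surface. The detector and the margin factor are stand-in assumptions; the patent only requires some leg-detecting discriminator.

 # Hedged sketch: derive candidate regions from pedestrian detections.
 import cv2

 def leg_candidate_regions(image, margin=0.3):
     """Return expanded boxes around detected pedestrian candidates to use as
     obstacle candidate regions (legs plus surrounding road surface)."""
     hog = cv2.HOGDescriptor()
     hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
     boxes, _ = hog.detectMultiScale(image, winStride=(8, 8))

     h_img, w_img = image.shape[:2]
     regions = []
     for (x, y, w, h) in boxes:
         dx, dy = int(w * margin), int(h * margin)
         x0, y0 = max(0, x - dx), max(0, y - dy)
         x1, y1 = min(w_img, x + w + dx), min(h_img, y + h + dy)
         regions.append((x0, y0, x1 - x0, y1 - y0))
     return regions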
 When object detection is performed using the difference between bird's-eye view images with the configuration of this embodiment, the projective transformation is applied to a small area (the obstacle candidate area) that can be regarded as free of gradient change, so false detection can be prevented even in scenes with gradient changes, where a difference would normally appear on the road surface and lead to false detection. Because a projective transformation matrix is estimated for each obstacle, the influence of changes caused not only by gradient changes but also by vibration and tilt of the own vehicle is minimized, and obstacles can be detected with high accuracy.
 This makes it possible to provide an image processing device that calculates an appropriate correspondence for each area and correctly detects and recognizes obstacles (objects) even when the parameter that maps the two images to each other changes from area to area.
[Example 2: Method of cooperating with map information and GPS information]
 The second embodiment is a modification of the first embodiment, and shows a case where the obstacle candidate area extraction unit 242 uses map information and vehicle GPS information to extract areas with a high possibility of false detection. According to this embodiment, scenes containing texture on the road surface, which are likely to cause false detection due to errors in the projective transformation, are extracted as obstacle candidate areas without being missed and are aligned with high accuracy, preventing the road surface texture from being falsely detected as an obstacle.
 FIG. 13 shows a flowchart of the obstacle candidate area extraction unit 242 in this embodiment. As shown in FIG. 13, an image 13A, road surface paint positions 13B included in the map information, and GPS information 13C including the own-vehicle position are received as inputs. Here, road surface paint means texture drawn on the road surface, such as a pedestrian crossing or a no-U-turn marking; navigation systems and the like store the positions of this kind of texture. Since any road surface texture can likewise cause false detection, the input is not limited to road surface paint: information on textures appearing over the road surface in general, such as low curbs, gravel roads, or fallen leaves, may also be received as input if it is available. In step S1301, the road surface paint imaging area is calculated from the image. The GPS information is compared with the map information on road surface paint positions to determine whether road surface paint is included in the currently captured image area. If road surface paint is included, the area of the image in which it appears is calculated, and that image area (that is, the road surface paint imaging area) is extracted. FIG. 14 shows an explanatory diagram. In front of the own vehicle 1401, area 1402 is known from the map information and GPS information to contain no road surface paint. If there is no road surface texture, no difference arises in the road surface area even if the projective transformation is in error, so high-precision alignment is unnecessary there. Area 1403, on the other hand, is known to have a pedestrian crossing painted on the road surface. If the projective transformation is wrong, the pedestrian crossing texture will not be superimposed properly, a difference will arise, and it is highly likely to be falsely detected as an obstacle. Therefore, the obstacle candidate area extraction unit 242 extracts the area 1404 corresponding to the area (road surface paint imaging area) 1403 on the image as the obstacle candidate area, and the obstacle detection unit 243 performs accurate alignment of this small area, thereby preventing false detection.
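 The following sketch illustrates step S1301 under simplifying assumptions: road surface paint positions are given as ground-plane points in a map frame, the vehicle pose comes from GPS, and a fixed ground-plane-to-image homography obtained from camera calibration is available. All names, the pose representation, and the 30 m range are assumptions for the example, not values defined by the patent.

 # Project map road-paint points near the vehicle into the image (sketch).
 import numpy as np

 def road_paint_image_regions(paint_points_world, ego_pose, H_ground2img,
                              image_size, max_range=30.0):
     """paint_points_world: (x, y) map-frame ground points of road paint.
     ego_pose: (x, y, yaw) from GPS / localisation.
     image_size: (width, height). Returns pixel positions of visible paint,
     which would then seed the obstacle candidate regions."""
     x0, y0, yaw = ego_pose
     c, s = np.cos(-yaw), np.sin(-yaw)
     points = []
     for px, py in paint_points_world:
         # Transform the map point into the vehicle (ground) frame, x forward.
         dx, dy = px - x0, py - y0
         vx, vy = c * dx - s * dy, s * dx + c * dy
         if vx <= 0 or np.hypot(vx, vy) > max_range:
             continue  # behind the vehicle or too far to be imaged
         # Project the ground-plane point into the image via the calibration homography.
         u, v, w = H_ground2img @ np.array([vx, vy, 1.0])
         u, v = u / w, v / w
         if 0 <= u < image_size[0] and 0 <= v < image_size[1]:
             points.append((u, v))
     return points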
 As described above, in the image processing unit (image processing device) 204 of this embodiment, the obstacle candidate area extraction unit 242 acquires map information and position information of the own vehicle, and extracts the area of the two images in which road surface paint is imaged as the obstacle candidate area.
 According to this embodiment, the projective transformation matrix is calculated without missing any area in which road surface paint, which carries a high risk of false detection, is imaged, and differences arising in the road surface area can be prevented.
[Example 3: Extraction of image areas with a high risk of pop-out]
 The third embodiment is a modification of the first embodiment, and shows a case where the obstacle candidate area extraction unit 242 extracts not only areas where an obstacle may currently exist but also areas where an obstacle (an adult or child pedestrian, a bicycle, a motorcycle, an automobile, and so on) may pop out in the future. According to this embodiment, the distance to an obstacle can be estimated without error even when a sudden pop-out occurs.
 The flowchart of the obstacle candidate area extraction unit 242 in this embodiment is shown in FIG. 15, and an explanatory diagram in FIG. 16. As shown in FIG. 15, object detection S801 using the difference area obtained with a projective transformation containing an error is performed, as in the first embodiment. At this point an object detection result containing an error in the depth direction, such as detection result 1602, is obtained for object 1601. Next, in step S1501, of the left and right ends of detection result 1602, an area of a predetermined size on the own-vehicle side is extracted as a pop-out possibility area 1603. Area 1603 is an area where something may pop out from behind object 1601, and if a pop-out occurs while the distance is wrong, it may lead to erroneous braking. Therefore, the obstacle candidate area extraction unit 242 extracts, based on the area where an obstacle may exist, the area with a risk of pop-out (pop-out possibility area) 1603 adjacent to its left and right ends as an obstacle candidate area, and the obstacle detection unit 243 preferentially performs high-precision image alignment there, so that the obstacle position is estimated with high accuracy.
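 As an illustration of step S1501, the sketch below takes a detection box and returns strips of an assumed fixed width adjacent to its left and right ends as pop-out possibility regions. The strip width and the simplified handling of the own-vehicle side are assumptions made only for this example.

 # Illustrative sketch of extracting pop-out possibility regions.
 def popout_candidate_regions(det_box, image_width, strip_width=60):
     """det_box = (x, y, w, h) in image coordinates. Returns regions adjacent
     to the left and right ends of the detection, to be aligned with priority."""
     x, y, w, h = det_box
     regions = []
     left = (max(0, x - strip_width), y, min(strip_width, x), h)
     right = (x + w, y, min(strip_width, image_width - (x + w)), h)
     for rx, ry, rw, rh in (left, right):
         if rw > 0:  # skip strips that fall outside the image
             regions.append((rx, ry, rw, rh))
     return regions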
 In this embodiment, the left and right ends of an object detection result that may contain false detections were taken as the extraction targets, but any extraction method may be used as long as the area has a high possibility of pop-out. For example, the shadow of a building or the like may be extracted from map information, or the result may be fused with object detection results from other sensors.
 As described above, in the image processing unit (image processing device) 204 of this embodiment, the obstacle candidate area extraction unit 242 extracts, as the obstacle candidate area, the areas with a risk of pop-out (pop-out possibility areas) adjacent to the left and right ends of the possible obstacle area in the two images.
 According to this embodiment, areas where an obstacle is highly likely to pop out are preferentially extracted, and false detection and erroneous distance estimation when a pop-out occurs can be prevented.
 The present invention is not limited to the embodiments described above and includes various modifications. For example, the above embodiments have been described in detail to explain the present invention in an easy-to-understand manner, and the invention is not necessarily limited to configurations having all of the described elements. Part of the configuration of one embodiment can be replaced with the configuration of another embodiment, and the configuration of another embodiment can be added to the configuration of one embodiment. Part of the configuration of each embodiment can also have other configurations added to it, deleted from it, or substituted for it.
 Each of the above configurations may be implemented partly or entirely in hardware, or may be realized by a processor executing a program. Control lines and information lines are shown where they are considered necessary for the explanation, and not all control lines and information lines of a product are necessarily shown. In practice, almost all configurations may be considered to be interconnected.
100 vehicle, 101 camera, 102 object detection device, 103 vehicle control unit, 201 image pickup element, 202 memory, 203 CPU, 204 image processing unit (image processing device), 205 external output unit, 206 communication line, 241 image conversion unit, 242 obstacle candidate area extraction unit, 243 obstacle detection unit

Claims (7)

  1.  An image processing device comprising:
     an image conversion unit that converts two images taken at two different times into bird's-eye view images;
     an obstacle candidate area extraction unit that, using the two bird's-eye view images or the two images, extracts, as an obstacle candidate area, an area within a predetermined height from the road surface among areas indicating an obstacle; and
     an obstacle detection unit that aligns the obstacle candidate areas using image features in the obstacle candidate area and determines an area where a difference occurs to be an area where an obstacle exists.
  2.  The image processing device according to claim 1, wherein
     the obstacle candidate area extraction unit extracts the obstacle candidate area by extracting an area where an image difference occurs when the two bird's-eye view images are superimposed using a first parameter that is the same over the entire image, and
     the obstacle detection unit aligns the obstacle candidate area with a second parameter using image features in the obstacle candidate area and determines the area where a difference occurs to be the area where the obstacle exists.
  3.  The image processing device according to claim 2, wherein
     the obstacle candidate area extraction unit extracts an area where an image difference occurs when the two bird's-eye view images are superimposed using the first parameter that is the same over the entire image, extracts, from the extracted area, an area within a predetermined height from the road surface, and extracts the areas of the two images corresponding to the extracted area as the obstacle candidate areas.
  4.  The image processing device according to claim 1, wherein
     the obstacle candidate area extraction unit extracts, as the obstacle candidate area, the peripheral area of a pedestrian leg candidate detected in the two images by a pedestrian leg detection function.
  5.  The image processing device according to claim 1, wherein
     the obstacle candidate area extraction unit acquires map information and position information of the own vehicle, and extracts the area of the two images in which road surface paint is imaged as the obstacle candidate area.
  6.  The image processing device according to claim 1, wherein
     the obstacle candidate area extraction unit extracts, as the obstacle candidate area, areas with a risk of pop-out adjacent to the left and right ends of the possible obstacle area in the two images.
  7.  An image processing device comprising:
     an image conversion unit that converts two images taken at two different times into bird's-eye view images;
     an obstacle candidate area extraction unit that, using the two bird's-eye view images or the two images, extracts from the two images, as an obstacle candidate area, an area within a predetermined height from the road surface among areas indicating an obstacle; and
     an obstacle detection unit that generates bird's-eye view images of the obstacle candidate area using image features in the obstacle candidate area, aligns the bird's-eye view images of the obstacle candidate area, and determines an area where a difference occurs to be an area where an obstacle exists.
PCT/JP2021/019335 2020-07-07 2021-05-21 Image processing device WO2022009537A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
DE112021002598.8T DE112021002598T5 (en) 2020-07-07 2021-05-21 IMAGE PROCESSING DEVICE

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2020-117152 2020-07-07
JP2020117152A JP7404173B2 (en) 2020-07-07 2020-07-07 Image processing device

Publications (1)

Publication Number Publication Date
WO2022009537A1 true WO2022009537A1 (en) 2022-01-13

Family

ID=79552895

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/019335 WO2022009537A1 (en) 2020-07-07 2021-05-21 Image processing device

Country Status (3)

Country Link
JP (1) JP7404173B2 (en)
DE (1) DE112021002598T5 (en)
WO (1) WO2022009537A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7418481B2 (en) 2022-02-08 2024-01-19 本田技研工業株式会社 Learning method, learning device, mobile control device, mobile control method, and program

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008034981A (en) * 2006-07-26 2008-02-14 Fujitsu Ten Ltd Image recognition device and method, pedestrian recognition device and vehicle controller
WO2010044127A1 (en) * 2008-10-16 2010-04-22 三菱電機株式会社 Device for detecting height of obstacle outside vehicle
JP2010256995A (en) * 2009-04-21 2010-11-11 Daihatsu Motor Co Ltd Object recognition apparatus
JP2019139420A (en) * 2018-02-08 2019-08-22 株式会社リコー Three-dimensional object recognition device, imaging device, and vehicle

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007235642A (en) 2006-03-02 2007-09-13 Hitachi Ltd Obstruction detecting system

Also Published As

Publication number Publication date
JP7404173B2 (en) 2023-12-25
DE112021002598T5 (en) 2023-03-09
JP2022014673A (en) 2022-01-20


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21837713

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 21837713

Country of ref document: EP

Kind code of ref document: A1