WO2021106510A1 - Obstacle Recognition Device (障害物認識装置) - Google Patents

Obstacle Recognition Device (障害物認識装置)

Info

Publication number
WO2021106510A1
Authority
WO
WIPO (PCT)
Prior art keywords
camera
difference
image
parameter
recognition device
Prior art date
Application number
PCT/JP2020/041143
Other languages
English (en)
French (fr)
Japanese (ja)
Inventor
フェリペ ゴメズカバレロ
琢馬 大里
小林 正幸
雅幸 竹村
健 志磨
Original Assignee
日立Astemo株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 日立Astemo株式会社 filed Critical 日立Astemo株式会社
Priority to DE112020004944.2T (DE112020004944T5)
Publication of WO2021106510A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/62Extraction of image or video features relating to a temporal dimension, e.g. time-based feature extraction; Pattern tracking
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/01Detecting movement of traffic to be counted or controlled
    • G08G1/04Detecting movement of traffic to be counted or controlled using optical or ultrasonic detectors
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/16Anti-collision systems
    • G08G1/166Anti-collision systems for active traffic, e.g. moving vehicles, pedestrians, bikes
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/90Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • G06T2207/10021Stereoscopic video; Stereoscopic image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30244Camera pose
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30248Vehicle exterior or interior
    • G06T2207/30252Vehicle exterior; Vicinity of vehicle
    • G06T2207/30261Obstacle

Definitions

  • The present invention relates to an obstacle recognition device, for example an in-vehicle obstacle recognition device for image-based detection of obstacles in the environment around the own vehicle.
  • Such devices are configured to display the surrounding environment to the driver and/or to detect moving or static objects (obstacles) around the vehicle, to notify the driver of potential collision risks between the vehicle and an obstacle, and to automatically stop the vehicle based on a decision system in order to avoid a collision between the vehicle and an obstacle.
  • The device disclosed in Patent Document 1 is configured to provide the driver with a camera image showing the surroundings of the vehicle, which helps to ensure safe driving.
  • The device disclosed in Patent Document 2 is configured to detect an object, and the distance to the detected object, from distance data and images acquired by a stereo camera mounted on a vehicle.
  • However, the device of Patent Document 1 only adjusts the camera attitude parameters for calibration based on the current environmental conditions; when the provided image is used for object detection with an image-difference-based method, the noise caused by fluctuations in the camera attitude makes it difficult to pinpoint the location of obstacles.
  • The device described in Patent Document 2 can detect a moving object and calculate the distance to the detected object by using distance data separated into the road and other objects.
  • The device described in Patent Document 2 separates the road from objects based on a road surface estimate obtained using the camera attitude parameters, and in this case the camera attitude parameters are corrected using only the camera attitude parameters (absolute parameters) of the current time. Therefore, when the system is used in scenarios where the vehicle speed and the current road conditions affect the relationship between the in-vehicle camera and the road, and variations in the initially set camera attitude parameters eventually generate incorrect correction values, the system cannot verify those correction values; the accuracy of the calculated distance to the detected object decreases, incorrect object detection results are produced, and the reliability of the system is reduced.
  • The present invention has been made in view of the above circumstances, and an object of the present invention is to provide an obstacle recognition device that verifies the camera attitude parameters used for image conversion and for defining the relationship between the device camera and the current environment, and that can thereby improve the reliability of obstacle recognition by simultaneously improving the accuracy of obstacle detection and the accuracy of the distance calculation to the detected object even when the driving environmental conditions change.
  • The obstacle recognition device of the present invention is a device that recognizes an obstacle reflected in an image based on the difference between a plurality of images captured by a camera at different times.
  • A first difference regarding the posture of the camera based on the road surface shape at different times is acquired, a second difference regarding the amount of movement of feature points in the images captured at the different times is acquired, and the first difference and the second difference are used to verify the camera attitude parameters used for calculating the difference image.
  • The camera attitude parameters are used for image conversion and for describing the relationship between the device camera and the current environment.
  • The reliability of obstacle recognition can be improved by reducing the false detection rate of objects and simultaneously improving the accuracy of object detection and the accuracy of the distance calculation to the detected object, which improves driving safety.
  • The obstacle recognition device 110 has a configuration in which a CPU, RAM, ROM, and the like are connected via a bus, and the CPU executes various control programs stored in the ROM to control the operation of the entire system.
  • Two camera sensors (hereinafter sometimes simply referred to as cameras) are used as the sensing unit 111; alternatively, a single monocular camera may be used as the sensing unit 111.
  • FIG. 1 is a block diagram showing a schematic configuration of an obstacle recognition device according to an embodiment of the present invention.
  • The obstacle recognition device 110 is mounted on, for example, a vehicle (own vehicle) V and, as shown in FIG. 1, includes a sensing unit 111 consisting of two camera sensors located at the same height, an image acquisition unit 121, a camera absolute attitude parameter calculation unit 131, a camera relative attitude parameter calculation unit 132, a camera attitude parameter verification unit 141, an image difference calculation unit 151, an obstacle detection unit 161, an obstacle distance measurement unit 171, and a control application processing unit 181.
  • "Camera attitude parameter" refers to a set of values and parameters used to represent the position and orientation (direction) of a camera in three-dimensional space with respect to a particular position or plane.
  • These parameters can include the translational parameters X, Y, and Z, which describe the position of the camera with respect to the road, and the rotational parameters pitch, yaw, and roll, which describe the orientation of the camera in three-dimensional space.
  • "Camera absolute attitude parameter" refers to a camera attitude parameter that describes the position and orientation of the camera (that is, the attitude of the camera) with respect to the road (road surface shape). Because it is calculated using current information, it represents the position and orientation of the camera in the current period; in other words, it is the current position and orientation, not a past one, and it is not relative to other data.
  • "Camera relative attitude parameter" refers to a camera attitude parameter that describes the change from one state (usually the previous position or orientation) to a state at a different time (usually the current position or orientation), and it represents the change or movement between these states (corresponding to the amount of movement of feature points in images captured at different times). Rather than representing the position and orientation of the camera in three-dimensional space with respect to the road, such parameters represent the difference in position and orientation between the two periods.
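  • For illustration only (this sketch is not part of the patent disclosure, and all names are hypothetical), the two parameter sets described above could be represented as follows:

```python
from dataclasses import dataclass

@dataclass
class CameraAbsoluteAttitude:
    """Pose of the camera with respect to the road surface at the current time."""
    x: float      # lateral translation with respect to the road [m]
    y: float      # height above the road [m]
    z: float      # longitudinal translation [m]
    pitch: float  # rotation about the horizontal (lateral) axis [rad]
    yaw: float    # rotation about the vertical axis [rad]
    roll: float   # rotation about the front-back axis [rad]

@dataclass
class CameraRelativeAttitude:
    """Change of the camera pose between two capture times."""
    d_pitch: float  # change in pitch between the two times [rad]
    d_yaw: float    # change in yaw [rad]
    d_roll: float   # change in roll [rad]
```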
  • The image acquisition unit 121 processes the images acquired by one or both of the two camera sensors of the sensing unit 111 (images showing the surroundings of the vehicle) in order to adjust the image characteristics for further processing. This processing may include, but is not limited to, image resolution adjustment, which reduces or enlarges the input image to change the resulting image size, and target image area selection, which crops (trims) a specified area of the input image from the original input image for further processing.
  • The parameters used for image resolution adjustment and target image area selection can be controlled based on the current operating environment and conditions (speed, turning speed, etc.), as sketched below.
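  • A rough sketch of such control follows; the thresholds, scale factors, and region names are invented for illustration and are not taken from the patent:

```python
def select_image_params(speed_kmh: float, turn_rate_dps: float) -> dict:
    """Choose a resolution scale and a crop region from the driving state."""
    if speed_kmh > 60.0:
        # High speed: keep full resolution and crop to the far field ahead.
        return {"scale": 1.0, "roi": "far_center"}
    if abs(turn_rate_dps) > 10.0:
        # Turning: widen the region of interest toward the turn direction.
        return {"scale": 0.75, "roi": "wide"}
    # Low speed, driving straight: downscale to save computation.
    return {"scale": 0.5, "roi": "near_full"}
```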
  • the image acquired by the image acquisition unit 121 is input to the camera absolute attitude parameter calculation unit 131 and the camera relative attitude parameter calculation unit 132.
  • The camera absolute attitude parameter calculation unit 131 has the ability to calculate any or all of the camera absolute attitude parameters with respect to flat ground, as defined by the camera pitch angle (rotation about the horizontal axis), the camera roll angle (rotation about the front-back axis), and the camera yaw angle (rotation about the vertical axis).
  • the camera absolute attitude parameters can be calculated using current environmental information, such as road shape.
  • Road shape information can be obtained from several sources, such as the device, external sensors, or a combination of multiple sources.
  • Here, the road shape can be understood as the road geometry defined from the viewpoint of the device, and it describes the relationship between the device camera and the road.
  • The calculation process for the camera absolute attitude parameters differs depending on the source of the road shape information. For example, when the device includes a process that calculates the slope of the road ahead of the vehicle from distance data acquired by stereo matching, and the information is obtained from the obstacle recognition device itself, the resulting road slope is used to calculate the pitch and roll angles of the device with respect to the road ahead of the vehicle. In another example, if the pitch, yaw, and roll angles can be obtained directly from an external sensor in real time, the camera absolute attitude parameters can be adjusted based on the offset from the installation parameters. Other methods of calculating the above parameters are also possible.
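  • As a minimal sketch of the first example (our illustration; the patent does not give this formula), assuming the road ahead has already been fitted as a plane from stereo distance data, the pitch and roll of the camera with respect to the road can be read off the road-plane normal expressed in the camera frame:

```python
import numpy as np

def attitude_from_road_plane(normal: np.ndarray) -> tuple[float, float]:
    """Derive camera pitch and roll [rad] from the road-plane normal
    in the camera frame (x right, y down, z forward)."""
    n = normal / np.linalg.norm(normal)
    # For a level camera over flat ground the normal is (0, -1, 0);
    # deviations of the normal encode the camera tilt.
    pitch = np.arctan2(n[2], -n[1])  # fore-aft tilt relative to the road
    roll = np.arctan2(n[0], -n[1])   # sideways tilt relative to the road
    return float(pitch), float(roll)
```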
  • In the drawings, 11 indicates a camera sensor serving as an in-vehicle sensor corresponding to the sensing unit 111, OA1 indicates the sensor optical axis, and R1 indicates a flat road (ground).
  • The camera relative attitude parameter calculation unit 132 has the function of calculating any or all of the camera relative attitude parameters, which describe the relationship between the camera attitude parameters at two different times and are defined by the camera pitch angle (rotation about the horizontal axis), the camera roll angle (rotation about the front-back axis), and the camera yaw angle (rotation about the vertical axis).
  • The camera relative attitude parameters are calculated from the difference (that is, the amount of movement) between the positions of feature points in two images acquired at different periods (times), as shown in FIGS. 3(a) and 3(b).
  • Feature points can be selected according to the scenario; for example, they can be pixels (or pixel patches) that represent corner points in the image, such as visible road lane paint or the edges of static objects such as curbs. Other types of calculation can also be used.
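  • One common way to measure this feature-point movement is sparse optical flow (the patent does not prescribe a specific method; this is a sketch using OpenCV):

```python
import cv2
import numpy as np

def feature_point_motion(prev_gray: np.ndarray, curr_gray: np.ndarray) -> np.ndarray:
    """Return displacement vectors of feature points tracked between two frames."""
    # Corner-like feature points (e.g. lane-paint corners or curb edges).
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                  qualityLevel=0.01, minDistance=10)
    if pts is None:
        return np.empty((0, 2), np.float32)
    # Track them into the current frame with pyramidal Lucas-Kanade flow.
    nxt, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    ok = status.ravel() == 1
    return (nxt[ok] - pts[ok]).reshape(-1, 2)  # per-point movement in pixels
```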
  • The parameters calculated by the camera absolute attitude parameter calculation unit 131 and the camera relative attitude parameter calculation unit 132 are input to the camera attitude parameter verification unit 141.
  • The camera attitude parameter verification unit 141 has the function of verifying, using the parameters calculated by the camera absolute attitude parameter calculation unit 131 and the camera relative attitude parameter calculation unit 132, the camera attitude parameters used for geometric conversion and calculation.
  • The verification process may involve one or more of the following tasks and their respective outputs: parameter averaging, parameter selection based on transient value fluctuations, and decisions on further processing (abandoning processing, continuing processing, or other types of processing). A specific example of the verification processing is described later.
  • The camera attitude parameters resulting from the verification by the camera attitude parameter verification unit 141 are input to the image difference calculation unit 151.
  • The image difference calculation unit 151 has the function of performing geometric image conversion on the images acquired and processed by the image acquisition unit 121, based on the camera attitude parameters verified by the camera attitude parameter verification unit 141.
  • Image conversion includes, but is not limited to, affine transformations such as rotation, scaling, and shearing, and bird's-eye view image conversion based on flat ground. The unit also has a function to calculate a difference image showing the discrepancy between at least two converted images.
  • Known methods can be applied to the difference calculation, including but not limited to simple pixel-to-pixel difference calculation and filter-based image difference calculation.
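  • A minimal sketch of this step, assuming homographies H_prev and H_curr have been built from the verified camera attitude parameters (the homography construction itself is omitted), with a simple pixel-to-pixel difference:

```python
import cv2
import numpy as np

def difference_image(img_prev: np.ndarray, img_curr: np.ndarray,
                     H_prev: np.ndarray, H_curr: np.ndarray,
                     size: tuple = (400, 600)) -> np.ndarray:
    """Warp two frames into a common bird's-eye view and take their
    pixel-to-pixel absolute difference."""
    bev_prev = cv2.warpPerspective(img_prev, H_prev, size)
    bev_curr = cv2.warpPerspective(img_curr, H_curr, size)
    # Flat ground aligns and cancels out; three-dimensional objects
    # (obstacles) leave residual differences.
    return cv2.absdiff(bev_prev, bev_curr)
```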
  • The difference image (the difference between a plurality of images captured at different times) calculated by the image difference calculation unit 151 is input to the obstacle detection unit 161.
  • The obstacle detection unit 161 has the function of detecting a three-dimensional object reflected in the image, and of calculating its position, using the image acquired by the image acquisition unit 121 and the difference image calculated by the image difference calculation unit 151.
  • The obstacle distance measurement unit 171 has the function of measuring the distance from the own vehicle (equipped with the device) to one or more obstacles detected by the obstacle detection unit 161, using geometric calculation with the camera attitude parameters verified by the camera attitude parameter verification unit 141.
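  • A standard flat-ground geometry for such a calculation (our illustration; the patent does not spell out the formula) uses the verified pitch angle and the camera mounting height:

```python
import math

def ground_distance(v_pixel: float, v0: float, focal_px: float,
                    cam_height_m: float, pitch_rad: float) -> float:
    """Distance along flat ground to a point whose contact with the road
    projects to image row v_pixel (simple pinhole camera model)."""
    # Angle of the viewing ray below the optical axis for this image row.
    alpha = math.atan((v_pixel - v0) / focal_px)
    # The ray meets the ground at height / tan(total depression angle).
    return cam_height_m / math.tan(pitch_rad + alpha)
```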
  • A case where the obstacle recognition device 110 is applied as a system for monitoring the surroundings of the vehicle V will be described with reference to FIGS. 4A to 4C.
  • As the camera attitude parameters, the pitch angle (θPitch), the yaw angle (θYaw), and the roll angle (θRoll) can be considered (see FIGS. 2(a)-(d)).
  • FIG. 4A shows a situation in which a vehicle V equipped with an in-vehicle sensor 11 (corresponding to the sensing unit 111) is traveling on a flat road (ground) R1.
  • The pitch angle (θPitch) of the in-vehicle sensor installation parameters for the sensor optical axis OA1 is set to an orientation of 0 (zero) degrees with respect to the flat ground R1 described by the reference axis RA1.
  • This configuration / relationship between the sensor optical axis OA1 and the reference axis RA1 on a flat road can be regarded as the default configuration / relationship.
  • The situations shown in FIGS. 4B and 4C may occur due to braking and/or acceleration of the own vehicle V, or due to the slope of the road.
  • FIG. 5 is a flowchart showing exemplary processing executed by the camera attitude parameter verification unit 141, which adjusts the camera attitude parameters using the outputs of the camera absolute attitude parameter calculation unit 131 and the camera relative attitude parameter calculation unit 132.
  • In step S1, the camera absolute attitude parameters calculated in the current and previous processing periods are acquired, and the temporal difference (ΔA) of specific or all camera absolute attitude parameters (for example, pitch angle, roll angle, and yaw angle) is calculated.
  • In step S2, which redefines the relative attitude parameters as a temporal difference, the camera relative attitude parameters calculated in the current processing period are acquired and, based on the method used to calculate them, converted into a temporal difference value (ΔR). If the method used to calculate the camera relative attitude parameters already outputs a temporal difference, step S2 only retrieves the parameters for further processing.
  • In step S3, the camera attitude parameters are verified using the acquired camera absolute attitude parameters and camera relative attitude parameters. Only a basic example method is described here; other, more elaborate methods can be used instead of or in combination with it within the same step S3.
  • In step S3, the temporal difference (ΔA) of the camera absolute attitude parameters and the temporal difference (ΔR) of the camera relative attitude parameters are compared, and the difference (Δc) between the two is calculated. If the resulting difference (Δc) is greater than a predetermined threshold, the camera absolute attitude parameters, rather than the camera relative attitude parameters, are verified and used as the camera attitude parameters until new camera absolute attitude parameters and camera relative attitude parameters can be calculated in the next processing period.
  • Alternatively, in situations where the camera relative attitude parameters are considered to have less error than the camera absolute attitude parameters, the camera relative attitude parameters may be used as the camera attitude parameters when (Δc) is greater than the predetermined threshold.
  • In other cases, both the camera absolute attitude parameters and the camera relative attitude parameters are used to calculate the output corresponding to the camera attitude parameters.
  • For example, a correction value corresponding to the camera relative attitude parameters is calculated, and the calculated correction value is combined with the absolute parameters (for example, by addition or integration) to calculate the output corresponding to the camera attitude parameters.
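  • The basic comparison of steps S1 to S3 could look like the following sketch for a single parameter; the threshold value and the combination rule are our assumptions:

```python
def verify_attitude(delta_a: float, delta_r: float,
                    absolute: float, threshold: float = 0.01) -> float:
    """Verify one attitude parameter (e.g. pitch, in rad).

    delta_a  : temporal difference of the absolute parameter (ΔA)
    delta_r  : temporal difference of the relative parameter (ΔR)
    absolute : current camera absolute attitude parameter
    """
    delta_c = abs(delta_a - delta_r)
    if delta_c > threshold:
        # Disagreement: fall back on the absolute parameter until new
        # values become available in the next processing period.
        return absolute
    # Agreement: refine the absolute parameter with the relative correction.
    return absolute + delta_r
```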
  • In step S4, when multiple options can be selected based on the result of step S3, the parameters to be used for each subsequent task are selected, for example the parameters for the image conversion task and for the object distance measurement process (obstacle distance measurement unit 171).
  • For example, the camera absolute attitude parameters can be used for object distance measurement, and the camera relative attitude parameters can be used for image conversion.
  • In this way, the difference in pitch angle (θPitch) between the sensor optical axis OA1 described in FIGS. 4A to 4C and the reference axis RA1 on flat ground can be obtained in real time. Therefore, the exact relationship between the sensor optical axis OA1 and the reference axis RA1, which directly affects the geometric image conversion result, can be maintained.
  • As described above, the obstacle recognition device 110 includes:
    • a sensing unit 111 that can acquire an image of the scene in front of the vehicle to which the device is attached;
    • an image acquisition unit 121 that processes the image acquired by the sensing unit 111 and adjusts its characteristics (including, but not limited to, image size, image resolution, and target image area);
    • a camera absolute attitude parameter calculation unit 131 that calculates the camera absolute attitude parameters, defined by the camera pitch angle (horizontal axis rotation), camera roll angle (front-back axis rotation), and camera yaw angle (vertical axis rotation), which describe the current relationship between the sensing unit 111 and the road;
    • a camera relative attitude parameter calculation unit 132 that calculates the camera relative attitude parameters describing the change in camera attitude between two different times;
    • a camera attitude parameter verification unit 141 that verifies the camera attitude parameters using the outputs of the calculation units 131 and 132;
    • an image difference calculation unit 151 that performs the desired geometric image conversion on the images acquired and processed by the image acquisition unit 121 and calculates a difference image indicating the discrepancy between at least two converted images;
    • an obstacle detection unit 161 that detects objects using the image acquired by the image acquisition unit 121 and the difference image calculated by the image difference calculation unit 151;
    • an obstacle distance measurement unit 171 that measures the three-dimensional distance from the own vehicle to an obstacle detected by the obstacle detection unit 161 based on geometric calculation using the camera attitude parameters verified by the camera attitude parameter verification unit 141; and
    • a control application processing unit 181 that determines the control application to be executed by the vehicle equipped with the obstacle recognition device 110 based on the current situation, which may include at least the outputs from the obstacle detection unit 161 and the obstacle distance measurement unit 171.
  • In summary, the obstacle recognition device 110 acquires the first difference (ΔA) regarding the posture of the camera based on the road surface shape at different times, acquires the second difference (ΔR) regarding the amount of movement of feature points in the images captured at the different times, and uses the first difference (ΔA) and the second difference (ΔR) to verify the camera attitude parameters used for the image conversion that calculates the difference image.
  • Specifically, the camera attitude parameters are verified by comparing (Δc) the first difference (ΔA) with the second difference (ΔR). Further, the calculation method of the difference image is changed based on the verification of the camera attitude parameters.
  • By verifying the camera attitude parameters used for image conversion and three-dimensional distance measurement through the verification executed by the camera attitude parameter verification unit 141, the obstacle recognition device 110 can reduce the noise introduced into the difference image used for object detection and can reduce the fluctuation of the camera attitude parameters used for the geometric calculation, improving the accuracy of both the object detection and the three-dimensional distance measurement performed by the obstacle distance measurement unit 171. Therefore, even if the environmental conditions in which the vehicle is traveling change, the false detection rate can be reduced at the same time.
  • The configuration and operation of the obstacle recognition device 110 according to the present embodiment have been described above.
  • According to the obstacle recognition device 110 of the present embodiment, even when the environmental conditions in which the vehicle is traveling change, the camera attitude parameters used for image conversion and for describing the relationship between the device camera and the current environment are verified; that is, by calculating the camera attitude parameters and adjusting the geometric image conversion parameters, false detections of objects are reduced and the reliability of obstacle recognition can be improved.
  • Each of the above configurations, functions, processing units, processing means, and the like may be realized in hardware by designing some or all of them as, for example, an integrated circuit. Each of the above configurations, functions, and the like may also be realized in software by a processor interpreting and executing a program that realizes each function. Information such as the programs, tables, and files that realize each function can be stored in a memory, a hard disk, a storage device such as an SSD (Solid State Drive), or a recording medium such as an IC card, an SD card, or a DVD.
PCT/JP2020/041143 2019-11-29 2020-11-04 Obstacle recognition device WO2021106510A1 (ja)

Priority Applications (1)

Application Number Priority Date Filing Date Title
DE112020004944.2T DE112020004944T5 (de) 2019-11-29 2020-11-04 Hinderniserkennungsvorrichtung

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2019-216129 2019-11-29
JP2019216129A JP7225079B2 (ja) Obstacle recognition device

Publications (1)

Publication Number Publication Date
WO2021106510A1 true WO2021106510A1 (ja) 2021-06-03

Family

ID=76087820

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2020/041143 WO2021106510A1 (ja) Obstacle recognition device

Country Status (3)

Country Link
JP (1) JP7225079B2 (de)
DE (1) DE112020004944T5 (de)
WO (1) WO2021106510A1 (de)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007240422A * 2006-03-10 2007-09-20 Fujitsu Ten Ltd Depression angle calculation device, depression angle calculation method, depression angle calculation program, and image processing device
JP2012052884A * 2010-08-31 2012-03-15 Honda Motor Co Ltd Distance measuring device using an in-vehicle camera
JP2012226556A * 2011-04-20 2012-11-15 Nissan Motor Co Ltd Driving support device

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3995846B2 (ja) 1999-09-24 2007-10-24 本田技研工業株式会社 Object recognition device
JP6232994B2 (ja) 2013-12-16 2017-11-22 ソニー株式会社 Image processing device, image processing method, and program

Also Published As

Publication number Publication date
DE112020004944T5 (de) 2022-08-11
JP2021086477A (ja) 2021-06-03
JP7225079B2 (ja) 2023-02-20

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20893699

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 20893699

Country of ref document: EP

Kind code of ref document: A1