WO2017090097A1 - 車両用外界認識装置 - Google Patents
- Publication number
- WO2017090097A1 (PCT/JP2015/082967)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- obstacle
- vehicle
- recognition device
- camera
- external environment
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
- H04N7/183—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a single remote source
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/24—Aligning, centring, orientation detection or correction of the image
- G06V10/242—Aligning, centring, orientation detection or correction of the image by image rotation, e.g. by 90 degrees
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/16—Anti-collision systems
- G08G1/166—Anti-collision systems for active traffic, e.g. moving vehicles, pedestrians, bikes
Definitions
- The present invention relates to a vehicle external environment recognition device.
- A known vehicle periphery monitoring device monitors the periphery of a vehicle from an image acquired via an image capturing unit mounted on the vehicle. It is described as comprising:
- object extraction means for extracting objects existing around the vehicle from the acquired image;
- pedestrian extraction means for extracting pedestrians from the objects extracted by the object extraction means;
- posture determination means for determining the posture of a pedestrian extracted by the pedestrian extraction means, and avoidance object determination means that, by executing a determination algorithm including at least a first determination process related to the posture determined by the posture determination means, determines whether or not an object extracted by the object extraction means is an avoidance object whose contact with the vehicle should be avoided; and
- vehicle equipment control means for controlling equipment of the vehicle according to at least the determination result of the avoidance object determination means.
- An object of the present invention is to provide a vehicle external environment recognition device that can correctly estimate the direction of an obstacle.
- A vehicle external environment recognition device according to the invention includes an obstacle detection unit that detects an obstacle in an image, a direction estimation unit that estimates the direction of the obstacle detected by the obstacle detection unit, and a direction correction unit that corrects the direction estimated by the direction estimation unit according to the positional relationship between the obstacle and the camera.
- According to the present invention, the direction of an obstacle can be correctly estimated.
- FIG. 2 is a functional block diagram of the program executed in the vehicle external environment recognition device 100.
- FIG. 3 is a flowchart showing the processing executed in the direction estimation unit 103; FIG. 4 shows an example of shooting with a wide-angle camera.
- FIG. 5 is a flowchart showing the processing executed in the direction correction unit 104; FIG. 6 shows an example of the processing of the direction correction unit 104.
- FIG. 7 is a functional block diagram of the program executed in the vehicle external environment recognition device 100a.
- FIG. 1 is a diagram showing a configuration of a vehicle external environment recognition device 100 built in a vehicle 300.
- the vehicle 300 includes a CAN bus 20, and the vehicle external environment recognition device 100 is connected to the CAN bus 20.
- Other devices (not shown) are also connected to the CAN bus 20; in particular, a device that controls the vehicle based on the obstacle information output by the vehicle external environment recognition device 100 is connected to the CAN bus 20.
- the vehicle external environment recognition apparatus 100 includes a camera 101, a CPU 10, a ROM 11, a RAM 12, and a CAN interface 13.
- the camera 101 is attached to the vehicle 300 and photographs the surroundings of the vehicle 300.
- the CPU 10 calculates obstacle information from images obtained by the camera 101 using a program described later at predetermined intervals, for example, every 0.1 second.
- the calculated obstacle information is output to the CAN bus 20 via the CAN interface 13.
- the above-described predetermined cycle is referred to as a “processing cycle”.
- the ROM 11 stores a program and camera parameters.
- the program is developed from the ROM 11 to the RAM 12 by the CPU 10 and executed.
- the camera parameters are internal parameters such as lens distortion and external parameters such as the attachment position and angle of the camera 101 with respect to the vehicle 300.
- The RAM 12 temporarily stores the obstacle information area 12a and other information necessary for program execution.
- the obstacle information area 12a is a predetermined area of the RAM 12 in which information related to obstacles estimated by an obstacle detection unit 102, a direction estimation unit 103, and a direction correction unit 104, which will be described later, is stored.
- the information regarding the obstacle is the position and orientation of the obstacle.
- The CAN interface 13 is the communication interface between the vehicle external environment recognition device 100 and the CAN bus 20.
- the vehicle external environment recognition device 100 outputs the calculated obstacle information to the CAN bus 20 via the CAN interface 13.
- FIG. 2 shows, as functional blocks, the functions of the program executed by the CPU 10 of the vehicle external environment recognition device 100. That is, according to the program executed by the CPU 10, the vehicle external environment recognition device 100 provides an obstacle detection function by the obstacle detection unit 102, a direction estimation function by the direction estimation unit 103, a direction correction function by the direction correction unit 104, and an output function by the output unit 105.
- The vehicle external environment recognition device 100 causes the camera 101 to take a picture at the predetermined cycle, that is, every processing cycle, and runs each functional block when an image is obtained. Specifically, when the camera captures an image, the obstacle detection unit 102 starts processing; when the obstacle detection unit 102 completes its processing, the direction estimation unit 103 starts; and when the direction estimation unit 103 completes, the direction correction unit 104 starts. That is, each functional block operates once per processing cycle.
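The per-cycle pipeline described above (detect, estimate direction, correct direction, store in the obstacle information area) can be sketched as follows. This is a minimal illustration, not the patent's implementation: every function body is a hypothetical stand-in for units 102-104.

```python
def detect_obstacles(image):
    # Stand-in for obstacle detection unit 102: return bounding boxes.
    return [{"bbox": (40, 10, 52, 34)}]  # x0, y0, x1, y1

def estimate_direction(image, det):
    # Stand-in for direction estimation unit 103: a fixed estimate (degrees).
    return 180.0  # "front" as it appears in the image

def correct_direction(det, ray_angle_deg):
    # Stand-in for direction correction unit 104: shift the estimated
    # direction by the angle of the ray from the camera to the obstacle.
    return (det["direction"] + ray_angle_deg) % 360.0

def run_one_cycle(image, ray_angle_deg=45.0):
    obstacle_info = []  # plays the role of obstacle information area 12a
    for det in detect_obstacles(image):
        det["direction"] = estimate_direction(image, det)
        det["direction"] = correct_direction(det, ray_angle_deg)
        obstacle_info.append(det)
    return obstacle_info

info = run_one_cycle(image=None)
# one obstacle; front (180 degrees) shifted by a 45-degree ray -> 225 degrees
```

In the actual device this loop would be triggered once per processing cycle (e.g. every 0.1 s) and the result written to the CAN bus via the output unit 105.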
- the obstacle detection unit 102 detects an obstacle existing around the host vehicle 300 from the image obtained by the camera 101 and adds information about the detected obstacle to the obstacle information area 12a.
- Obstacle detection means, for example, detecting a pedestrian or a vehicle in the image and treating it as an obstacle.
- For example, the pattern matching method described in JP-A-2015-132879 can be used.
- The direction estimation unit 103 estimates the direction of the obstacle using the information about the obstacle output by the obstacle detection unit 102 and stored in the RAM 12, the image information (including luminance) of the image obtained by the camera 101, and the per-direction discriminator parameters stored in the ROM 11, and adds the estimated direction to the obstacle information area 12a. Details of the processing will be described later.
- The direction correction unit 104 corrects the direction of the obstacle according to the positional relationship between the obstacle and the camera 101, using the information about the obstacle output by the obstacle detection unit 102 and the direction estimation unit 103 and stored in the RAM 12, together with the camera parameters stored in the ROM 11, and adds the corrected direction to the obstacle information area 12a. Details of the processing will be described later.
- FIG. 3 is a flowchart showing processing executed in the direction estimation unit 103.
- the execution subject of each step described below is the CPU 10.
- In step S500, lens distortion correction is performed on the obstacle area in the image obtained by the camera 101 using the camera parameters stored in the ROM 11, and the process proceeds to step S501. Since lens distortion correction is a known technique, a detailed description is omitted. By compensating for the change in appearance patterns caused by lens distortion, the accuracy of the pattern-based direction estimation in step S503, described later, is improved.
- In this embodiment the lens distortion correction is performed in the direction estimation unit 103, but the processing is not limited to this.
- When the obstacle detection unit 102 uses, for example, the pattern matching method described in Japanese Patent Application Laid-Open No. 2015-132879, the obstacle detection unit 102 may perform lens distortion correction on the entire image, and the corrected image may be stored in the RAM 12 and used by the direction estimation unit 103.
- In step S501, the obstacle region in the lens-distortion-corrected image is enlarged or reduced to generate an image of the size used for calculating the preset feature amount, and the process proceeds to step S502.
- For example, the size used for calculating the feature amount is 12 pixels wide and 24 pixels high.
- In step S502, a feature amount used for direction estimation is calculated from the image generated in step S501, and the process proceeds to step S503.
- For example, HOG (N. Dalal and B. Triggs, "Histograms of Oriented Gradients for Human Detection," Proc. IEEE Int. Conf. on Computer Vision and Pattern Recognition, pp. 886-893, 2005) or the improved HOG methods described in JP-A-2015-132879 can be used.
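To make the feature-amount step concrete, here is a minimal sketch of the HOG idea: per-cell histograms of gradient orientation weighted by gradient magnitude. It deliberately omits the block normalization and vote interpolation of Dalal and Triggs' full method, and the cell/bin sizes are illustrative, not the patent's.

```python
import math

def hog_cells(gray, cell=4, bins=9):
    """Tiny HOG sketch: per-cell orientation histograms weighted by
    gradient magnitude (no block normalization, no interpolation)."""
    h, w = len(gray), len(gray[0])
    ch, cw = h // cell, w // cell
    hist = [[[0.0] * bins for _ in range(cw)] for _ in range(ch)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = gray[y][x + 1] - gray[y][x - 1]   # central differences
            gy = gray[y + 1][x] - gray[y - 1][x]
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180.0  # unsigned
            b = int(ang / (180.0 / bins)) % bins
            hist[y // cell][x // cell][b] += mag
    # flatten the cell histograms into one feature vector
    return [v for row in hist for c in row for v in c]
```

For an 8x8 image with a single vertical edge, all gradient energy lands in the 0-degree bin of each cell, giving a 2x2x9 = 36-dimensional vector.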
- In this embodiment the calculation of the feature amount is performed in the direction estimation unit 103, but it is not limited to this.
- When the obstacle detection unit 102 uses, for example, the pattern matching method described in Japanese Patent Application Laid-Open No. 2015-132879, the feature amount used for obstacle detection may be stored in the RAM 12 and reused by the direction estimation unit 103. As a result, the number of feature-amount calculations, and hence the processing time, can be reduced.
- On the other hand, when the feature-amount calculation is performed by the direction estimation unit 103 itself, a feature amount different from that of the obstacle detection unit 102 can be used.
- For example, the obstacle detection unit 102, which processes the entire image, can use HOG, which requires a relatively small amount of calculation, while the direction estimation unit 103, which processes only the obstacle region, can use the improved HOG technique, which can express fine patterns at the cost of more calculation. In this way, the accuracy of direction estimation can be improved while suppressing the increase in the amount of calculation.
- In step S503, the feature amount calculated in step S502 is input to the discriminator to obtain the direction. Steps S500 to S503 are performed for all obstacles, after which the processing of the direction estimation unit 103 ends.
- As the discriminator, for example, an SVM (C. Cortes and V. Vapnik, "Support-vector Networks," Machine Learning, Vol. 20, Issue 3, pp. 273-297, 1995), which is known to achieve high accuracy even when the number of feature dimensions is large and the number of training images is small, can be used.
- The SVM is trained using feature values calculated from image data of obstacles facing backward (0 degrees), right (90 degrees), front (180 degrees), and left (270 degrees), and the parameters of the training result are saved in the ROM 11.
- In this embodiment the obstacle directions handled by the SVM are backward (0 degrees), right (90 degrees), front (180 degrees), and left (270 degrees), but the directions are not limited to these.
- The number of directions can be set according to the requirements of the method that uses the obstacle direction, such as vehicle control: for example, two directions of right (90 degrees) and left (270 degrees), or 360 degrees divided into eight directions.
- Moreover, the discriminator is not limited to an SVM; a classifier such as a neural network or a decision tree may be used.
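At inference time, a linear multi-class SVM of the kind described above reduces to scoring the feature vector against one weight vector and bias per direction class (one-vs-rest) and taking the highest score. The sketch below shows only that inference form; the 3-dimensional features and all weight values are made-up toy numbers, not trained parameters from the patent.

```python
DIRECTIONS = [0, 90, 180, 270]  # backward, right, front, left (degrees)

# Hypothetical per-class (weights, bias) pairs, as would be loaded from ROM.
PARAMS = {
    0:   ([ 1.0, -0.5,  0.0], -0.2),
    90:  ([-0.5,  1.0,  0.0],  0.0),
    180: ([ 0.0,  0.0,  1.0],  0.1),
    270: ([-1.0, -1.0, -1.0],  0.0),
}

def classify_direction(feature):
    """Return the direction whose linear score w.x + b is highest."""
    def score(direction):
        w, b = PARAMS[direction]
        return sum(wi * xi for wi, xi in zip(w, feature)) + b
    return max(DIRECTIONS, key=score)
```

Changing the number of direction classes (two, eight, ...) only changes the number of (weights, bias) entries, which is why the candidate set can be chosen to match the needs of the vehicle control method.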
- However, the direction estimation unit 103 does not take into account that the appearance of an obstacle changes greatly depending on the positional relationship between the obstacle and the camera, even when the obstacle faces the same direction, particularly in images from a wide-angle camera.
- Therefore, the direction of an obstacle may be erroneously estimated at the edge of the image.
- FIG. 4 is a diagram illustrating an example of shooting with a wide-angle camera.
- the pedestrians 301 and 302 are photographed by the wide-angle camera 101 attached to the host vehicle.
- Here, a fish-eye camera in which the distance from the image center is proportional to the angle of the incident light ray is assumed as the wide-angle camera 101, and the image plane 303 of the camera is represented by an arc.
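The fish-eye model assumed here is the equidistant projection, r = f * theta: the distance r from the image center is proportional to the angle theta between the incoming ray and the optical axis. A minimal sketch, with a made-up focal constant f:

```python
def equidistant_project(theta_rad, f=100.0):
    """Equidistant fisheye model: image distance from the image center
    is proportional to the ray angle (r = f * theta). f is a
    hypothetical focal constant in pixels per radian."""
    return f * theta_rad

def equidistant_unproject(r_px, f=100.0):
    """Inverse: recover the ray angle from the distance to the center."""
    return r_px / f
```

Under this model a pedestrian imaged near the edge of the picture (large r) corresponds to a ray far from the optical axis, which is exactly the situation where the appearance-based direction estimate needs correction.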
- the pedestrians 301 and 302 are detected as obstacles by the obstacle detection unit 102.
- the pedestrian 301 is imaged in the area 311 of the image plane 303 of the camera, and the pedestrian 302 is imaged in the area 312 of the image plane 303 of the camera.
- In both cases, the direction estimation unit 103 estimates the front direction (180 degrees) from the appearance in the image.
- FIG. 5 is a flowchart showing processing executed in the direction correction unit 104. The execution subject of each step described below is the CPU 10.
- In step S600, a ray vector, that is, a direction vector from the camera 101 to the obstacle, is obtained using the position of the obstacle in the image detected by the obstacle detection unit 102 and the camera parameters stored in the ROM 11, and the process proceeds to step S601.
- Specifically, a ray vector in the three-dimensional coordinate system referenced to the camera 101 (camera coordinate system) is calculated from the position of the obstacle in the image and the camera internal parameters.
- Then, using the camera external parameters representing the attachment position and angle of the camera 101 with respect to the vehicle 300, the ray vector in the camera coordinate system is converted into the two-dimensional coordinate system parallel to the ground (vehicle coordinate system).
- In step S601, the obstacle direction estimated by the direction estimation unit 103 is corrected using the ray vector in the vehicle coordinate system calculated in step S600, to obtain the obstacle direction in the vehicle coordinate system. Specifically, by adding the angle of the ray vector in the vehicle coordinate system to the angle estimated by the direction estimation unit 103, the estimated direction is converted into a direction referenced to the ray vector, yielding the direction in the vehicle coordinate system.
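The angle-addition correction of steps S600 to S601 can be sketched as follows, assuming for simplicity a pinhole model restricted to the ground plane. The patent works with the real internal and external camera parameters; here fx, cx, and cam_yaw_deg are made-up illustrative values.

```python
import math

def ray_angle_vehicle(u, fx=400.0, cx=320.0, cam_yaw_deg=0.0):
    """S600: angle (degrees, in the ground plane) of the ray from the
    camera to an obstacle imaged at horizontal pixel coordinate u."""
    ray_cam = math.degrees(math.atan2(u - cx, fx))  # camera coordinate system
    return ray_cam + cam_yaw_deg                    # to vehicle coordinate system

def correct_direction(estimated_deg, u, **cam):
    """S601: convert the image-based estimate into the vehicle
    coordinate system by adding the ray angle."""
    return (estimated_deg + ray_angle_vehicle(u, **cam)) % 360.0
```

An obstacle at the image center (u = cx) needs no correction, while one imaged 45 degrees off-axis has its estimated direction rotated by 45 degrees, matching the example of FIG. 6.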
- FIG. 6 is a diagram illustrating an example of processing of the direction correction unit 104.
- the pedestrian 302 is imaged on the image plane 303 by the wide-angle camera 101 attached to the own vehicle 300.
- the direction of the pedestrian 302 is estimated to be leftward (270 degrees).
- In step S600, a ray vector 332 from the wide-angle camera 101 to the pedestrian 302 is calculated.
- Here, the center 322 of the pedestrian area is used as the pedestrian's position in the image.
- In step S601, the direction estimated by the direction estimation unit 103 is converted into a direction referenced to the ray vector 332, yielding the corrected direction vector 342.
- steps S600 to S601 are processed for all obstacles, and the processing of the direction correction unit 104 is terminated.
- the following operational effects can be obtained.
- (1) The vehicle external environment recognition device 100 includes the camera 101, which is mounted on the vehicle 300 and captures the surroundings of the vehicle, the obstacle detection unit 102, which detects an obstacle in the image obtained by the camera 101, the direction estimation unit 103, which estimates the direction of the obstacle detected by the obstacle detection unit 102, and the direction correction unit 104, which corrects the direction estimated by the direction estimation unit 103 according to the positional relationship between the obstacle and the camera 101.
- That is, the direction correction unit 104 corrects the direction obtained by the direction estimation unit 103 based on the positional relationship between the obstacle detected by the obstacle detection unit 102 and the camera 101. Therefore, the estimation accuracy of the obstacle direction can be improved, particularly at the image edge of a wide-angle camera. As a result, the direction of an object can be estimated over a wide range by using a wide-angle camera.
- (2) The direction correction unit 104 calculates a ray vector from the camera 101 to the obstacle based on the position of the obstacle in the image detected by the obstacle detection unit 102 and the camera parameters, and converts the direction estimated by the direction estimation unit 103 into a direction referenced to that ray vector (FIG. 5, steps S600 to S601). Therefore, by taking into account the relationship between the image and the vehicle coordinate system, that is, the imaging process, the accuracy of the obstacle direction in the vehicle coordinate system is improved.
- <Modification 1> In the embodiment described above, the direction correction unit 104 corrected the direction by converting the direction estimated by the direction estimation unit 103 with the ray vector from the camera 101 to the obstacle as reference (FIG. 5, steps S600 to S601).
- However, the direction correction method is not limited to this.
- The direction correction unit 104 may correct the direction by using a table, prepared in advance, that maps the direction estimated by the direction estimation unit 103 and the position of the obstacle in the image to a corrected direction.
- In this modification, the direction correction unit 104 corrects the direction without using camera parameters. Therefore, the direction estimated by the direction estimation unit 103 can be corrected even when the accuracy of the camera parameters is low or the camera parameters cannot be obtained.
- Furthermore, since the direction correction unit 104 corrects the direction according to an arbitrarily set table, the direction can be corrected freely according to the positional relationship between the obstacle and the camera 101.
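The table-based variant can be sketched as a lookup keyed on the estimated direction and a coarse image region. The region boundaries and table entries below are illustrative values, not the patent's; note that no camera parameters appear anywhere.

```python
import bisect

REGION_BOUNDS = [213, 426]  # split a 640-px-wide image into 3 regions

# Hypothetical corrected direction for (estimated direction, region index),
# prepared in advance (e.g. from calibration measurements).
CORRECTION_TABLE = {
    (270, 0): 225, (270, 1): 270, (270, 2): 315,
    (180, 0): 135, (180, 1): 180, (180, 2): 225,
}

def correct_by_table(estimated_deg, u_px):
    """Look up the corrected direction from the estimated direction and
    the horizontal image position of the obstacle."""
    region = bisect.bisect(REGION_BOUNDS, u_px)
    return CORRECTION_TABLE[(estimated_deg, region)]
```

A finer region grid trades table size for correction accuracy; the middle region, facing the camera, leaves the estimate unchanged.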
- <Modification 2> In the embodiment described above, the obstacle detection unit 102 detected obstacles from the image captured by the camera 101 attached to the vehicle.
- the obstacle detection method is not limited to this.
- the vehicle external environment recognition apparatus 100 may include a sensor for measuring a distance, such as a radar, a laser scanner, or a stereo camera.
- a stereo camera may be configured in combination with the camera 101 by additionally providing one or more cameras.
- An obstacle may also be detected based on the distance information measured by a distance measuring sensor. For example, an object whose height from the road surface is equal to or higher than a preset threshold is detected as an obstacle. Alternatively, a pedestrian or a vehicle may be detected from the shape or movement of an object obtained from the distance information and treated as an obstacle.
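The height-threshold idea can be sketched directly: keep any measured 3-D point whose height above the road exceeds a preset threshold. The points and the threshold below are illustrative values.

```python
def detect_by_height(points, min_height=0.3):
    """Keep measured points whose height above the road exceeds the
    threshold; each point is (x, y, z) in meters, z = height."""
    return [p for p in points if p[2] >= min_height]

points = [
    (5.0, 0.0, 0.05),   # road-surface return -> ignored
    (6.0, 1.0, 1.60),   # pedestrian-height return -> obstacle
    (8.0, -2.0, 0.90),  # vehicle-body return -> obstacle
]
```

In practice neighbouring points would be clustered into objects before thresholding, and shape or motion cues could then separate pedestrians from vehicles, as the modification describes.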
- The obstacle position detected by the distance measuring sensor is converted into an obstacle position in the image by a projection calculation using the mounting position and angle of the distance measuring sensor on the vehicle and the camera parameters of the camera 101, and is then used by the direction estimation unit 103.
- Alternatively, the detection result of the distance measuring sensor may be verified by applying an image-based obstacle detection method, such as pattern matching, only to the vicinity of the position of the obstacle detected by the distance measuring sensor, and the verified result may be used by the direction estimation unit 103.
- To implement this modification, the following functions need to be added to the vehicle external environment recognition device 100:
- a distance measuring sensor that measures the distance to objects around the vehicle;
- a function that detects obstacles from the distances measured by the distance measuring sensor;
- a function that calculates the imaging position, on the image, of an obstacle detected by the distance sensor.
- <Modification 3> In the embodiment described above, the vehicle external environment recognition device 100 was connected to other devices via the CAN bus 20 of the vehicle 300.
- However, the connection between the vehicle external environment recognition device 100 and other devices is not limited to this.
- the vehicle external environment recognition device 100 may be connected to other devices via a communication bus other than CAN, or may be directly connected to other devices without using a communication bus. Furthermore, the vehicle external environment recognition device 100 may be incorporated in a camera device or an integrated controller.
- a vehicle external environment recognition apparatus according to Embodiment 2 will be described.
- the same components as those in the first embodiment are denoted by the same reference numerals, and different points will be mainly described. Points that are not particularly described are the same as those in the first embodiment.
- The processing of the direction estimation unit mainly differs from that of the first embodiment. (Configuration)
- The configuration of the vehicle external environment recognition device 100a is the same as that of the first embodiment, except for the program stored in the ROM 11 and the discriminator parameters used for direction estimation.
- FIG. 7 shows, as functional blocks, the functions of the program executed in the vehicle external environment recognition device 100a. The difference between the second embodiment and the first embodiment is as follows:
- a parameter selection unit 106 is further provided. (Operation of the parameter selection unit) The contents of the processing in the parameter selection unit 106 will be described with reference to FIG. 8.
- the parameter selection unit 106 selects, for each obstacle detected by the obstacle detection unit 102, the parameter of the discriminator used by the direction estimation unit 103 according to the position of the obstacle.
- The direction candidates in the vehicle coordinate system output by the direction correction unit 104 differ depending on the positions of the obstacle and the camera 101. Therefore, for example, the parameters are selected so that the difference, due to the obstacle position, among the direction candidates in the vehicle coordinate system output by the direction correction unit 104 becomes small.
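One way to realize this selection criterion is sketched below: for each candidate parameter set, rotate its output directions by the ray angle to the obstacle (the correction of unit 104) and pick the set whose corrected candidates best match a common target set in vehicle coordinates. The two candidate sets and the target set are hypothetical examples, not the patent's parameters A and B.

```python
CANDIDATES = {
    "A": [0, 90, 180, 270],    # directions one discriminator can output
    "B": [45, 135, 225, 315],  # a second, rotated candidate set
}
TARGET = [0, 90, 180, 270]     # desired candidates in vehicle coordinates

def angular_gap(a, b):
    d = abs(a - b) % 360
    return min(d, 360 - d)

def select_parameter(ray_angle_deg):
    """Choose the parameter set whose candidates, after adding the ray
    angle, stay closest to the common TARGET set."""
    def mismatch(name):
        corrected = [(c + ray_angle_deg) % 360 for c in CANDIDATES[name]]
        return sum(min(angular_gap(c, t) for t in TARGET) for c in corrected)
    return min(CANDIDATES, key=mismatch)
```

An obstacle straight ahead (ray angle 0) keeps set "A", while one 45 degrees off-axis switches to the rotated set, so the corrected candidates coincide for both obstacles.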
- FIG. 8 is a diagram illustrating an example of processing of the parameter selection unit 106.
- The parameter selection unit 106 selects, for example, between the parameter A of a discriminator that identifies the obstacle direction in the four directions backward (0 degrees), right (90 degrees), front (180 degrees), and left (270 degrees), and the parameter B of another discriminator.
- For one pedestrian, the direction candidates after correction by the direction correction unit 104 are 353a when the parameter A is used and 353b when the parameter B is used.
- For the pedestrian 304, the direction candidates after correction by the direction correction unit 104 are 354a when the parameter A is used and 354b when the parameter B is used.
- Through the selection by the parameter selection unit 106, the differences among the corrected direction candidates caused by the obstacle position are reduced. (Effects) According to the second embodiment described above, the following effects are obtained.
- (1) The parameter selection unit 106 selects the parameters used by the direction estimation unit 103 according to the positional relationship between the obstacle and the camera 101. Therefore, the difference among direction candidates in the vehicle coordinate system due to the obstacle position is reduced, and the pedestrian behavior prediction and vehicle control methods that use the obstacle direction output by the vehicle external environment recognition device 100a can be simplified.
- <Modification 1> In the above, the parameter selection unit 106 selected the parameters so that the difference, due to the obstacle position, among the direction candidates in the vehicle coordinate system output by the direction correction unit 104 becomes small.
- However, the method of selecting the parameters used by the direction estimation unit 103 is not limited to this.
- The parameter selection unit 106 may select the parameters of discriminators trained using different image groups according to the positional relationship between the obstacle and the camera 101. For example, for an obstacle near the front of the wide-angle camera, the parameters of a discriminator trained using obstacle images taken near the center of the image are selected; on the other hand, for an obstacle imaged near the edge, the parameters of a discriminator trained using obstacle images taken near the edge of the image may be selected.
- <Modification 2> In the above, the vehicle external environment recognition device 100a selected, in the parameter selection unit 106, the parameters used by the direction estimation unit 103 according to the positional relationship between the obstacle and the camera 101.
- However, the target of the parameter selection processing according to the positional relationship between the obstacle and the camera 101 is not limited to the parameters used by the direction estimation unit 103.
- The vehicle external environment recognition device 100a may select the parameters used for obstacle detection in the obstacle detection unit 102 according to the positional relationship between the obstacle and the camera 101. For example, when the obstacle detection unit 102 uses the pattern matching method described in Japanese Patent Application Laid-Open No. 2015-132879, the parameters of the discriminator that distinguishes pedestrians from the background may be changed according to the position in the image during the raster scan of the image.
- <Modification 3> In the above, the direction correction unit 104 corrected the direction by converting the direction estimated by the direction estimation unit 103 with the ray vector from the camera 101 to the obstacle as reference (FIG. 5, steps S600 to S601).
- the direction correction method is not limited to this.
- The direction correction unit 104 may correct the direction by further converting the direction that was converted based on the ray vector.
- For example, the closest of the four fixed directions in the vehicle coordinate system, backward (0 degrees), right (90 degrees), front (180 degrees), and left (270 degrees), may be calculated for the converted direction, and that closest direction may be used as the corrected obstacle direction.
- In this modification, the direction candidates output by the direction correction unit 104 are the same for all obstacles regardless of the positional relationship between the obstacle and the camera 101. Therefore, the pedestrian behavior prediction and vehicle control methods that use the obstacle direction output by the vehicle external environment recognition device 100a can be simplified further.
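The nearest-direction step of this modification is a simple circular quantization; a minimal sketch, with the four-direction candidate set from the description:

```python
def snap_to_four(direction_deg):
    """Return the closest of 0, 90, 180, 270 degrees, measuring the
    gap circularly (so 350 degrees snaps to 0, not 270)."""
    def gap(a, b):
        d = abs(a - b) % 360
        return min(d, 360 - d)
    return min((0, 90, 180, 270), key=lambda c: gap(direction_deg, c))
```

Because the output set is fixed, downstream consumers see the same four candidates for every obstacle, whatever its position relative to the camera.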
- the present invention is not limited to the above-described embodiments, and includes various modifications.
- the above-described embodiments have been described in detail for easy understanding of the present invention, and are not necessarily limited to those having all the configurations described.
- Other embodiments conceivable within the scope of the technical idea of the present invention are also included in the scope of the present invention.
- a part of the configuration of one embodiment can be replaced with the configuration of another embodiment, and the configuration of another embodiment can be added to the configuration of one embodiment.
- Each of the above-described configurations, functions, processing units, processing means, and the like may be realized by hardware by designing a part or all of them with, for example, an integrated circuit.
- Each of the above-described configurations, functions, and the like may be realized by software by interpreting and executing a program that realizes each function by the processor.
- Information such as programs, tables, and files that realize each function can be stored in a memory, a hard disk, a recording device such as an SSD (Solid State Drive), or a recording medium such as an IC card, an SD card, or a DVD.
Description
(構成)
図1は車両300に内蔵される車両用外界認識装置100の構成を示す図である。車両300はCANバス20を備えており、車両用外界認識装置100はCANバス20に接続される。CANバス20には不図示の他の機器も接続されている。すなわち、CANバス20に車両用外界認識装置100が出力する障害物情報に基づき車両を制御する装置が接続される。
(機能ブロック)
図2は、車両用外界認識装置100のCPU10において実行されるプログラムが有する機能を機能ブロックとして表した図である。すなわち、車両用外界認識装置100は、CPU10において実行されるプログラムにより、障害物検知部102による障害物検知機能と、向き推定部103による向き推定機能と、向き補正部104による向き補正機能と、出力部105による出力機能とを備える。車両用外界認識装置100は、予め定められた周期、すなわち処理周期ごとにカメラ101に撮影を行わせ、撮影して画像が得られると各機能ブロックによる処理を行う。具体的には、カメラが撮影を行い画像が得られると障害物検知部102が処理を開始し、障害物検知部102が処理を完了すると向き推定部103が処理を開始し、向き推定部103が処理を完了すると向き補正部104が処理を開始する。すなわち、各機能ブロックは処理周期ごとに動作する。
(向き推定部の動作)
次に図3~図4を用いて、向き推定部103における処理の内容について説明する。図3は向き推定部103において実行される処理を示すフローチャートである。以下に説明する各ステップの実行主体は、CPU10である。
以下、ステップS500からステップS503を全ての障害物に対して処理を行い、向き推定部103の処理を終了する。
(向き補正部の動作)
次に図5~図6を用いて、向き補正部104における処理の内容について説明する。向き補正部104は、向き推定部103が推定した障害物の向きを、障害物とカメラ101の位置関係に応じて補正する。図5は向き補正部104において実行される処理を示すフローチャートである。以下に説明する各ステップの実行主体は、CPU10である。
(作用効果)
本実施例の車両用外界認識装置によれば、次の作用効果が得られる。
<変形例1>
向き補正部104は、カメラ101から障害物への光線ベクトルを基準とし、向き推定部103が推定した向きを変換することで、向きを補正した(図5、ステップS600~ステップS601)。しかし向きの補正方法はこれに限定されない。
<変形例2>
上述した実施例1では、障害物検知部102は、車両に取り付けられたカメラ101により撮影された画像から障害物を検知した。しかし、障害物の検知方法はこれに限定されない。車両用外界認識装置100は、カメラ101に加えて、レーダやレーザスキャナ、ステレオカメラといった距離を計測するセンサを備えてもよい。また、1台以上のカメラを追加で備えることで、カメラ101と組み合わせて、ステレオカメラを構成してもよい。
<変形例3>
上述した実施例1では、車両用外界認識装置100は車両300のCANバス20を介して他の機器と接続された。しかし、車両用外界認識装置100の他の機器との接続関係はこれに限定されない。
(構成)
車両用外界認識装置100aの構成は、ROM11に保存されているプログラム、向き推定に用いる識別機のパラメータを除いて実施例1と同様である。
(Operation of the Parameter Selection Unit)
The processing performed by the parameter selection unit 106 is described with reference to FIG. 8.
(Effects)
The second embodiment described above provides the following effects.
<Variation 1>
The parameter selection unit 106 selected parameters so as to reduce the variation, with obstacle position, of the direction candidates in the vehicle coordinate system output by the direction correction unit 104. However, the method of selecting the parameters used by the direction estimation unit 103 is not limited to this.
<Variation 2>
In the external environment recognition device 100a, the parameter selection unit 106 selected the parameters used by the direction estimation unit 103 according to the positional relationship between the obstacle and the camera 101. However, the target of this position-dependent parameter selection is not limited to the parameters used by the direction estimation unit 103.
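The position-dependent parameter selection could be realized as a lookup keyed on the obstacle's bearing from the camera, with one parameter set trained per positional relationship (cf. the per-relationship image groups of claim 7). The bin boundaries and parameter-set names below are assumptions made purely for illustration.

```python
import math

# Hypothetical parameter sets, e.g. classifier weights trained on a
# different group of images for each camera-obstacle positional relationship.
PARAMS_BY_REGION = {"center": "params_center", "left": "params_left",
                    "right": "params_right"}


def select_parameters(obstacle_pos, camera_pos=(0.0, 0.0), half_width_deg=15.0):
    """Pick the parameter set whose training geometry best matches the
    bearing of the obstacle as seen from the camera (y = forward)."""
    dx = obstacle_pos[0] - camera_pos[0]
    dy = obstacle_pos[1] - camera_pos[1]
    bearing = math.degrees(math.atan2(dx, dy))  # 0 deg = straight ahead
    if abs(bearing) < half_width_deg:
        return PARAMS_BY_REGION["center"]
    return PARAMS_BY_REGION["left"] if bearing < 0 else PARAMS_BY_REGION["right"]
```

The same lookup could equally drive the detection parameters of claim 8, with the obstacle candidate's position as the key.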
<Variation 3>
The direction correction unit 104 corrected the direction by transforming the direction estimated by the direction estimation unit 103 with reference to the ray vector from the camera 101 to the obstacle (FIG. 5, steps S600 to S601). However, the direction correction method is not limited to this.
Claims (12)
- A vehicle external environment recognition device comprising: a camera mounted on a vehicle that captures the surroundings of the vehicle;
an obstacle detection unit that detects the position of an obstacle in an image captured by the camera;
a direction estimation unit that estimates the direction of the obstacle detected by the obstacle detection unit; and
a direction correction unit that corrects the direction of the obstacle estimated by the direction estimation unit according to the positional relationship between the obstacle and the camera. - The vehicle external environment recognition device according to claim 1, wherein
the direction correction unit transforms the direction of the obstacle estimated by the direction estimation unit based on a ray vector from the camera to the obstacle. - The vehicle external environment recognition device according to claim 1, wherein
the direction correction unit obtains a corrected direction by referencing a table with the position of the obstacle detected by the obstacle detection unit and the direction of the obstacle estimated by the direction estimation unit. - The vehicle external environment recognition device according to any one of claims 1 to 3, further comprising
a distance sensor that measures the distance to the obstacle, wherein
the obstacle detection unit detects the position of the obstacle in the image captured by the camera based on the distance measured by the distance sensor. - The vehicle external environment recognition device according to any one of claims 1 to 4, further comprising
a parameter selection unit that selects a parameter used by the direction estimation unit according to the positional relationship between the camera and the obstacle detected by the obstacle detection unit. - The vehicle external environment recognition device according to claim 5, wherein
the parameter selection unit selects a parameter such that the variation, with obstacle position, of the direction candidates output by the direction correction unit is reduced. - The vehicle external environment recognition device according to claim 5, wherein
the parameter selection unit selects a parameter from a plurality of parameters obtained by training a classifier with a different group of images for each positional relationship between an obstacle and the camera. - The vehicle external environment recognition device according to any one of claims 1 to 7, wherein
the obstacle detection unit selects a parameter used for obstacle detection according to the positional relationship between an obstacle candidate and the camera. - The vehicle external environment recognition device according to any one of claims 1 to 8, wherein
the direction correction unit selects and outputs the closest direction from among direction candidates set independently of the position of the obstacle. - The vehicle external environment recognition device according to any one of claims 1 to 9, wherein
the obstacle detection unit detects a pedestrian as an obstacle,
the direction estimation unit estimates the direction of the pedestrian detected by the obstacle detection unit, and
the direction correction unit corrects the direction of the pedestrian estimated by the direction estimation unit. - The vehicle external environment recognition device according to any one of claims 1 to 9, wherein
the obstacle detection unit detects a vehicle as an obstacle,
the direction estimation unit estimates the direction of the vehicle detected by the obstacle detection unit, and
the direction correction unit corrects the direction of the vehicle estimated by the direction estimation unit. - The vehicle external environment recognition device according to any one of claims 1 to 11, wherein
the camera is a wide-angle camera.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2017552566A JP6577595B2 (ja) | 2015-11-25 | 2015-11-25 | 車両用外界認識装置 |
US15/769,153 US10572753B2 (en) | 2015-11-25 | 2015-11-25 | Outside recognition device for vehicle |
PCT/JP2015/082967 WO2017090097A1 (ja) | 2015-11-25 | 2015-11-25 | 車両用外界認識装置 |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2015/082967 WO2017090097A1 (ja) | 2015-11-25 | 2015-11-25 | 車両用外界認識装置 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2017090097A1 true WO2017090097A1 (ja) | 2017-06-01 |
Family
ID=58764051
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2015/082967 WO2017090097A1 (ja) | 2015-11-25 | 2015-11-25 | 車両用外界認識装置 |
Country Status (3)
Country | Link |
---|---|
US (1) | US10572753B2 (ja) |
JP (1) | JP6577595B2 (ja) |
WO (1) | WO2017090097A1 (ja) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2005173899A (ja) * | 2003-12-10 | 2005-06-30 | Nissan Motor Co Ltd | 周囲状況表示装置 |
JP2014050100A (ja) * | 2012-09-01 | 2014-03-17 | Honda Motor Co Ltd | 車両周辺監視装置 |
JP2014142241A (ja) * | 2013-01-23 | 2014-08-07 | Denso Corp | 3次元位置推定装置、車両制御装置、および3次元位置推定方法 |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH05265547A (ja) * | 1992-03-23 | 1993-10-15 | Fuji Heavy Ind Ltd | 車輌用車外監視装置 |
US7671725B2 (en) | 2006-03-24 | 2010-03-02 | Honda Motor Co., Ltd. | Vehicle surroundings monitoring apparatus, vehicle surroundings monitoring method, and vehicle surroundings monitoring program |
JP4173896B2 (ja) | 2006-04-03 | 2008-10-29 | 本田技研工業株式会社 | 車両周辺監視装置 |
JP2008230561A (ja) * | 2007-03-23 | 2008-10-02 | Alpine Electronics Inc | 撮像制御装置および測光領域調整方法 |
JP4840472B2 (ja) * | 2009-04-15 | 2011-12-21 | トヨタ自動車株式会社 | 物体検出装置 |
JP5603835B2 (ja) * | 2011-06-27 | 2014-10-08 | クラリオン株式会社 | 車両周囲監視装置 |
US10095935B2 (en) * | 2013-12-20 | 2018-10-09 | Magna Electronics Inc. | Vehicle vision system with enhanced pedestrian detection |
2015
- 2015-11-25 JP JP2017552566A patent/JP6577595B2/ja active Active
- 2015-11-25 WO PCT/JP2015/082967 patent/WO2017090097A1/ja active Application Filing
- 2015-11-25 US US15/769,153 patent/US10572753B2/en active Active
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2020534197A (ja) * | 2017-09-22 | 2020-11-26 | コンティ テミック マイクロエレクトロニック ゲゼルシャフト ミット ベシュレンクテル ハフツングConti Temic microelectronic GmbH | 動力車両用の表示手段の形状に基づいた画像処理を適合させるための装置と方法 |
JP2020044266A (ja) * | 2018-09-21 | 2020-03-26 | キヤノンメディカルシステムズ株式会社 | 医用情報処理装置、x線診断装置及び医用情報処理プログラム |
JP7233874B2 (ja) | 2018-09-21 | 2023-03-07 | キヤノンメディカルシステムズ株式会社 | 医用情報処理装置、x線診断装置及び医用情報処理プログラム |
JP2021054388A (ja) * | 2019-09-30 | 2021-04-08 | ベイジン バイドゥ ネットコム サイエンス テクノロジー カンパニー リミテッドBeijing Baidu Netcom Science Technology Co., Ltd. | 自動運転の制御方法、装置、電子機器及び記憶媒体 |
US11529971B2 (en) | 2019-09-30 | 2022-12-20 | Apollo Intelligent Driving Technology (Beijing) Co., Ltd. | Method and apparatus for autonomous driving control, electronic device, and storage medium |
JP7271467B2 (ja) | 2019-09-30 | 2023-05-11 | アポロ インテリジェント ドライビング テクノロジー(ペキン)カンパニー リミテッド | 自動運転の制御方法、装置、電子機器及び記憶媒体 |
Also Published As
Publication number | Publication date |
---|---|
JPWO2017090097A1 (ja) | 2018-08-30 |
US10572753B2 (en) | 2020-02-25 |
US20180307932A1 (en) | 2018-10-25 |
JP6577595B2 (ja) | 2019-09-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10242576B2 (en) | Obstacle detection device | |
JP5926228B2 (ja) | 自律車両用の奥行き検知方法及びシステム | |
WO2013132947A1 (ja) | 距離算出装置及び距離算出方法 | |
US10776946B2 (en) | Image processing device, object recognizing device, device control system, moving object, image processing method, and computer-readable medium | |
WO2014073322A1 (ja) | 物体検出装置及び物体検出方法 | |
JP6561512B2 (ja) | 視差値導出装置、移動体、ロボット、視差値導出方法、視差値生産方法及びプログラム | |
US11233983B2 (en) | Camera-parameter-set calculation apparatus, camera-parameter-set calculation method, and recording medium | |
JP2006252473A (ja) | 障害物検出装置、キャリブレーション装置、キャリブレーション方法およびキャリブレーションプログラム | |
JP6138861B2 (ja) | 距離算出装置 | |
US10719949B2 (en) | Method and apparatus for monitoring region around vehicle | |
WO2013035612A1 (ja) | 障害物検知装置、障害物検知方法及び障害物検知プログラム | |
JP6577595B2 (ja) | 車両用外界認識装置 | |
JP5107154B2 (ja) | 運動推定装置 | |
JP7169689B2 (ja) | 計測システム、計測方法、及び計測プログラム | |
CN112400094B (zh) | 物体探测装置 | |
US20190156512A1 (en) | Estimation method, estimation apparatus, and non-transitory computer-readable storage medium | |
JP2017117038A (ja) | 道路面推定装置 | |
US20180268228A1 (en) | Obstacle detection device | |
JP2015215235A (ja) | 物体検出装置及び物体検出方法 | |
KR102565603B1 (ko) | 긴급 제동 시스템의 성능평가 장치 및 방법 | |
WO2020246202A1 (ja) | 計測システム、計測方法、及び計測プログラム | |
US20210404843A1 (en) | Information processing apparatus, control method for information processing apparatus, and storage medium | |
JP6064648B2 (ja) | 画像処理装置、画像処理方法、画像処理プログラム、画像処理システム及び移動装置 | |
JP6111732B2 (ja) | 画像処理装置、画像処理方法、画像処理システム、画像処理プログラム及び移動装置 | |
KR20240032547A (ko) | 카메라에 오염이 발생했는지 여부를 실시간으로 판단하는 방법 및 이를 이용한 오염도 추정 장치 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 15909216 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2017552566 Country of ref document: JP Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 15769153 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 15909216 Country of ref document: EP Kind code of ref document: A1 |