JP2023045830A - Object detection device and object detection method - Google Patents

Object detection device and object detection method Download PDF

Info

Publication number
JP2023045830A
JP2023045830A (application JP2021154418A)
Authority
JP
Japan
Prior art keywords
object detection
detection range
point cloud
existence probability
probability
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
JP2021154418A
Other languages
Japanese (ja)
Inventor
椋 影山
Ryo Kageyama
望 長峯
Nozomi Nagamine
宏記 向嶋
Hiroki Mukojima
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Railway Technical Research Institute
Original Assignee
Railway Technical Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Railway Technical Research Institute filed Critical Railway Technical Research Institute
Priority to JP2021154418A priority Critical patent/JP2023045830A/en
Publication of JP2023045830A publication Critical patent/JP2023045830A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

PROBLEM TO BE SOLVED: To provide an object detection method capable of detecting an object even when low ambient illuminance makes it difficult to distinguish the object from the background.

SOLUTION: An object detection device comprises: an object detection unit that calculates an object existence probability and an object classification probability for each object detection range from an image captured by an imaging means and, when the object existence probability, or the product of the object existence probability and the object classification probability, is equal to or greater than a prescribed value, determines that an object of the classification with the highest object classification probability exists in that object detection range; and a three-dimensional sensor whose detection range includes the imaging range of the imaging means and which generates point cloud data containing at least depth-direction distance information for objects existing in the detection range. The object detection unit assigns point clouds based on the point cloud data to the object detection ranges, compares the object existence probability of each object detection range with the point cloud occupancy of the points assigned to that range, and, when the point cloud occupancy is greater than the object existence probability, adopts the point cloud occupancy as the object existence probability of that object detection range.

SELECTED DRAWING: Figure 1

Description

本発明は、物体検出装置及び物体検出方法に関する。 The present invention relates to an object detection device and an object detection method.

画像から物体検出を行うための手法として、非特許文献1や非特許文献2に示す手法がある。非特許文献1に示す手法では、可視光ビデオカメラの画像から深層学習により物体領域や分類の予測が行われる。また、非特許文献2に示す手法では、遠赤外線ビデオカメラの画像を用いて、夜間の人物や車などが検出されている。 Non-Patent Documents 1 and 2 disclose methods for detecting objects from images. In the method of Non-Patent Document 1, object regions and classifications are predicted by deep learning from visible-light video camera images. In the method of Non-Patent Document 2, people, cars, and the like are detected at night using far-infrared video camera images.

A. Bochkovskiy, C. Y. Wang and H. Y. Mark Liao: YOLOv4: Optimal Speed and Accuracy of Object Detection, arXiv:2004.10934, 2020
Masayoshi Aoki: Application of Infrared Images to Human Detection, IPSJ SIG Technical Report, Computer Vision and Image Media, 2004-CVIM-147, pp. 105-113, 2005

可視光ビデオカメラの画像のみから物体を検出する手法では、画像中における物体の特徴をもとに検出を行うため、周辺の照度が低く背景と物体の見分けがつきにくい場合は検出精度が低下する。一方、赤外線ビデオカメラの画像から物体を検出する手法の場合、温度情報を画像化するため照度には依存しないが、検出対象物は熱を持つ物体に限られる。 Methods that detect objects only from visible-light video camera images perform detection based on the features of the object in the image, so detection accuracy drops when the ambient illuminance is low and the object is hard to distinguish from the background. Methods that detect objects from infrared video camera images, on the other hand, do not depend on illuminance because they image temperature information, but the detectable objects are limited to those that emit heat.

そこで、本発明は上記問題点に鑑み、周辺の照度が低く背景と物体の見分けがつきにくい場合であっても物体を検出することができる物体検出方法を提供することを目的とする。 SUMMARY OF THE INVENTION Accordingly, it is an object of the present invention to provide an object detection method capable of detecting an object even when the ambient illumination is low and it is difficult to distinguish between the background and the object.

この技術的課題を解決するための本発明の技術的手段は、以下に示す点を特徴とする。
物体検出装置は、撮像手段の撮像結果得られた撮像画像から、物体検出範囲毎に、物体存在確率と物体分類確率を算出し、物体存在確率が所定の値以上である場合、物体検出範囲には、物体分類確率が最も高い分類の物体が存在するとする物体検出部と、前記撮像手段の撮像範囲を含む検出範囲とし、前記検出範囲に存在する物体の少なくとも奥行き方向の距離の情報を含む点群データを生成する3次元センサと、備え、前記物体検出部は、前記物体検出範囲に、点群データに基づく点群を割り当て、前記物体検出範囲の物体存在確率と、割り当てられた点群の物体検出範囲の点群占有率を比較し、点群占有率が前記物体存在確率データより大きい場合は、点群占有率を前記物体検出範囲の前記物体存在確率とする。
The technical means of the present invention for solving this technical problem is characterized by the following points.
The object detection device comprises: an object detection unit that calculates an object existence probability and an object classification probability for each object detection range from an image captured by an imaging means and, when the object existence probability is equal to or greater than a predetermined value, determines that an object of the classification with the highest object classification probability exists in the object detection range; and a three-dimensional sensor whose detection range includes the imaging range of the imaging means and which generates point cloud data containing at least depth-direction distance information for objects existing in the detection range. The object detection unit assigns a point cloud based on the point cloud data to the object detection range, compares the object existence probability of the object detection range with the point cloud occupancy of the assigned point cloud in that range, and, when the point cloud occupancy is greater than the object existence probability, adopts the point cloud occupancy as the object existence probability of the object detection range.

本発明によれば、周辺の照度が低い場合であっても物体を検出することができる。 According to the present invention, an object can be detected even when the surrounding illuminance is low.

図1は本発明の一実施例である物体検出システム1の機能構成を示す機能ブロック図である。 FIG. 1 is a functional block diagram showing the functional configuration of an object detection system 1 according to an embodiment of the present invention.
図2は低照度時物体検出処理を説明するフローチャートである。 FIG. 2 is a flowchart explaining the low-illuminance object detection process.
図3は、物体存在確率および物体分類確率の算出処理のイメージを示す図である。 FIG. 3 is a diagram showing an image of the process of calculating object existence probabilities and object classification probabilities.
図4は、図2のステップS2の詳細を示すフローチャートである。 FIG. 4 is a flowchart showing the details of step S2 in FIG. 2.
図5は、物体検出範囲の分類結果のイメージを示す図である。 FIG. 5 is a diagram showing an image of the classification results of object detection ranges.
図6は、物体検出範囲への点群割り当て処理のイメージを示す図である。 FIG. 6 is a diagram showing an image of the process of assigning point clouds to object detection ranges.

図1は本実施形態に係る物体検出システム1の構成を示す図である。 FIG. 1 is a diagram showing the configuration of an object detection system 1 according to this embodiment.

物体検出システム1は、鉄道車両の走行において障害物となり得る物体を検出するシステムである。例えば、走行する線路上およびその周辺に存在する人、車両などの移動物体、線路上に落下した木々などの静止物体である。 The object detection system 1 is a system that detects objects that can become obstacles to a running railway vehicle: for example, moving objects such as people and vehicles on or around the track being traveled, and stationary objects such as trees that have fallen onto the track.

物体検出システム1は、ビデオカメラ11、3次元センサ12、および物体検出装置13を備える。 The object detection system 1 includes a video camera 11 , a three-dimensional sensor 12 and an object detection device 13 .

ビデオカメラ11は、鉄道車両の前方方向の所定の範囲を撮像できるように設置され、撮影によって得られた撮影画像を物体検出装置13に出力する。 The video camera 11 is installed so as to capture a predetermined range ahead of the railway vehicle, and outputs the captured images to the object detection device 13.

3次元センサ12は、例えばLiDAR(Laser Imaging Detection and Ranging)センサ、3次元ミリ波センサなど、物体の3次元空間における位置を検出するセンサである。3次元空間を規定する空間座標系は、鉄道車両の範囲内の所定位置(例えば3次元センサ12の設置位置)を原点とする直交3次元座標系、又は極座標系である。3次元センサ12は、ビデオカメラ11の撮像範囲を含む検出範囲を走査可能となるように鉄道車両に設置され、検出範囲の走査によって得られた検出信号を物体検出装置13に出力する。 The three-dimensional sensor 12 is a sensor that detects the position of an object in a three-dimensional space, such as a LiDAR (Laser Imaging Detection and Ranging) sensor or a three-dimensional millimeter wave sensor. A spatial coordinate system that defines the three-dimensional space is an orthogonal three-dimensional coordinate system or a polar coordinate system with a predetermined position (for example, the installation position of the three-dimensional sensor 12) within the range of the railway vehicle as the origin. The three-dimensional sensor 12 is installed on the railway vehicle so as to be able to scan the detection range including the imaging range of the video camera 11 , and outputs a detection signal obtained by scanning the detection range to the object detection device 13 .

3次元センサ12は、レーザ光源、走査ユニット、受光素子、および検出回路を備える。
レーザ光源は、例えば一方向に列状に並んだ複数のレーザ発光部を備え、それぞれのレーザ発光部からパルス状のレーザ光を出射する。
The three-dimensional sensor 12 includes a laser light source, scanning unit, light receiving element, and detection circuit.
The laser light source includes, for example, a plurality of laser light emitting portions arranged in a row in one direction, and emits pulsed laser light from each laser light emitting portion.

走査ユニットは、各レーザ光を走査するユニットである。走査ユニットは、光学素子と、走査アクチュエータと、を備える。光学素子は、所定の照射方向に各レーザ光を向ける反射鏡などの素子である。走査アクチュエータは、各レーザ光の配列方向と直交する方向に光学素子を所定の走査速度で正逆回転させる装置である。3次元センサ12は、各レーザ光の並び方向、及び走査方向を、鉄道車両の高さ方向、及び当該高さ方向に直交する方向(車幅方向)にそれぞれ合わせた姿勢で鉄道車両に設置される。 The scanning unit scans each laser beam. It comprises an optical element and a scanning actuator. The optical element is an element, such as a reflecting mirror, that directs each laser beam in a predetermined irradiation direction. The scanning actuator is a device that rotates the optical element forward and backward at a predetermined scanning speed in a direction perpendicular to the arrangement direction of the laser beams. The three-dimensional sensor 12 is installed on the railway vehicle in a posture in which the alignment direction and the scanning direction of the laser beams are aligned with the height direction of the railway vehicle and the direction perpendicular to that height direction (the vehicle-width direction), respectively.

受光素子は、各レーザ光のパルスが物体によって反射されて返ってくる反射光を受光する素子である。検出回路は、受光素子による反射光の受光結果に基づいて、各レーザ光が反射された物体の表面上の反射点の位置を検出し、点群データとして物体検出装置13に出力する。 The light-receiving element is an element that receives the reflected light that is the pulse of each laser beam reflected by an object. The detection circuit detects the position of the reflection point on the surface of the object on which each laser beam is reflected based on the result of the reflected light received by the light receiving element, and outputs it to the object detection device 13 as point cloud data.

点群データは、鉄道車両を基準とした空間座標における各反射点の方向、及び、各反射点の距離に係る情報を含むデータである。検出回路は、反射点の方向を、当該反射光の受光方向(すなわち当該反射光の元となったレーザ光の照射方向)に基づいて求める。また検出回路は、反射点の距離を、レーザ光の照射から反射光を受光するまでの時間に基づいて求める。これら反射点の方向、及び距離によって、空間座標における反射点の位置(座標点)が一意に特定される。 The point cloud data includes information on the direction of, and distance to, each reflection point in spatial coordinates referenced to the railway vehicle. The detection circuit obtains the direction of a reflection point based on the direction in which the reflected light is received (that is, the irradiation direction of the laser beam that produced the reflection). The detection circuit obtains the distance to the reflection point based on the time from irradiation of the laser beam until reception of the reflected light. The direction and distance of each reflection point uniquely determine its position (coordinate point) in spatial coordinates.
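As a minimal illustrative sketch (not part of the patent; the function names, coordinate conventions, and use of Python are assumptions), the direction-plus-time-of-flight measurement described above can be turned into a coordinate point as follows:

```python
import math

SPEED_OF_LIGHT = 299_792_458.0  # m/s

def reflection_distance(round_trip_time_s: float) -> float:
    """Distance to the reflection point from the laser pulse round-trip time."""
    return SPEED_OF_LIGHT * round_trip_time_s / 2.0

def to_cartesian(azimuth_rad: float, elevation_rad: float, distance_m: float):
    """Direction (azimuth, elevation) plus distance -> (x, y, z) in sensor coordinates."""
    x = distance_m * math.cos(elevation_rad) * math.cos(azimuth_rad)  # depth
    y = distance_m * math.cos(elevation_rad) * math.sin(azimuth_rad)  # lateral
    z = distance_m * math.sin(elevation_rad)                          # height
    return (x, y, z)
```

A polar representation (direction and distance) could equally be kept as-is; the description allows either an orthogonal or a polar coordinate system.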

物体検出装置13は、CPU(Central Processing Unit)やMPU(Micro-Processing Unit)などのプロセッサ、ROM(Read Only Memory)やRAM(Random Access Memory)などのメモリデバイス(主記憶装置とも呼ばれる)、HDD(hard disk drive)やSSD(Solid State Drive)などのストレージ装置(補助記憶装置とも呼ばれる)、及びセンサ類や周辺機器などを接続するためのインターフェース回路を備えたコンピュータを備える。物体検出装置13において、メモリデバイス又はストレージ装置に記憶されているコンピュータプログラムをプロセッサが実行することで、物体を検出するための各種の機能的構成が実現される。 The object detection device 13 comprises a computer equipped with a processor such as a CPU (Central Processing Unit) or MPU (Micro-Processing Unit), memory devices (also called main storage) such as ROM (Read Only Memory) and RAM (Random Access Memory), a storage device (also called auxiliary storage) such as an HDD (Hard Disk Drive) or SSD (Solid State Drive), and interface circuits for connecting sensors, peripheral devices, and the like. In the object detection device 13, the processor executes a computer program stored in the memory device or the storage device, thereby realizing the various functional components for detecting objects.

物体検出装置13は、画像入力部21、点群データ入力部22、物体検出部23、及び出力制御部24を備える。画像入力部21は、ビデオカメラ11から撮影画像を取得し、物体検出部23に出力する。点群データ入力部22は、3次元センサ12から点群データを取得し、物体検出部23に出力する。 The object detection device 13 includes an image input section 21 , a point cloud data input section 22 , an object detection section 23 and an output control section 24 . The image input unit 21 acquires a captured image from the video camera 11 and outputs it to the object detection unit 23 . The point cloud data input unit 22 acquires point cloud data from the three-dimensional sensor 12 and outputs it to the object detection unit 23 .

物体検出部23は、ビデオカメラ11からの撮影画像、及び、3次元センサ12からの点群データに基づいて物体を検出し、検出結果を、出力制御部24を介して外部の装置に出力する。具体的には、物体検出部23は、記憶部31に記憶されている、予め物体を機械学習によって学習したデータ(以下、学習済データと称する)を用いて、物体検出範囲に物体が存在する確率(以下、物体存在確率と称する)とその物体の所定の分類である確率(以下、物体分類確率と称する)を算出する。物体検出部23は、物体存在確率、あるいは物体存在確率と物体分類確率の積が通常時判定基準値より大きい場合、その検出範囲には物体が存在し、その物体は物体分類確率の最も高い分類(例えば、人、車、信号機)であるとする。以下、この処理を、AI(artificial intelligence)物体検出処理と称する。物体検出部23は、鉄道車両が夜間走行する際など、周辺の照度が低く、AI物体検出処理では背景と物体の見分けがつきにくい場合、後述するように、物体検出範囲の点群の占有率(以下、点群占有率と称する)を物体存在確率として利用し、物体検出を行う。以下、この処理を、低照度時物体検出処理と称する。 The object detection unit 23 detects objects based on the captured image from the video camera 11 and the point cloud data from the three-dimensional sensor 12, and outputs the detection results to external devices via the output control unit 24. Specifically, using data obtained by machine learning of objects in advance (hereinafter, learned data) stored in the storage unit 31, the object detection unit 23 calculates the probability that an object exists in an object detection range (hereinafter, object existence probability) and the probability that the object belongs to a given classification (hereinafter, object classification probability). When the object existence probability, or the product of the object existence probability and the object classification probability, is greater than a normal-time criterion value, the object detection unit 23 determines that an object exists in that detection range and that the object belongs to the classification with the highest object classification probability (for example, person, car, or traffic light). This process is hereinafter referred to as AI (artificial intelligence) object detection processing. When the ambient illuminance is low, such as when the railway vehicle runs at night, and the AI object detection process has difficulty distinguishing objects from the background, the object detection unit 23 uses the point cloud occupancy of the object detection range (hereinafter, point cloud occupancy) as the object existence probability, as described later. This process is hereinafter referred to as low-illuminance object detection processing.
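The AI object detection decision described above can be sketched as follows (an illustrative, non-authoritative sketch: the threshold value 0.5, the `use_product` switch, and all names are assumptions; the patent only speaks of a "normal-time criterion value"):

```python
def ai_detect(class_probs, existence_prob, threshold=0.5, use_product=False):
    """Return the detected classification for one detection range, or None.

    class_probs: dict mapping classification name -> classification probability.
    existence_prob: object existence probability for the range.
    The decision uses either the existence probability alone or its product
    with the best classification probability, per the description.
    """
    best = max(class_probs, key=class_probs.get)
    score = existence_prob * class_probs[best] if use_product else existence_prob
    return best if score >= threshold else None
```

Low-illuminance operation (described later) only changes how `existence_prob` is obtained; the decision rule itself is unchanged.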

なお、物体検出部23は、機械学習を用いずに障害物を検出してもよい。例えば物体検出部23は、公知、又は周知の画像認識手法を用いて撮影画像に基づいて物体存在確率と物体分類確率を算出することもできる。 Note that the object detection unit 23 may detect obstacles without using machine learning. For example, the object detection unit 23 can also calculate the object existence probability and the object classification probability based on the captured image using a well-known image recognition technique.

物体検出部23の低照度時物体検出処理を、図2のフローチャートを参照して説明する。 The low-illuminance object detection processing of the object detection unit 23 will be described with reference to the flowchart of FIG.

ステップS1において、物体検出装置13の物体検出部23は、画像入力部21を介して入力されたビデオカメラ11の撮影画像に基づいて物体存在確率および物体分類確率を算出する。図3は、物体存在確率および物体分類確率の算出処理のイメージを示す図である。図3(A)に示す画像が入力されると、物体検出部23は、図3(B)に示すように、画像中に様々なサイズの物体検出範囲(図中、四角形の枠)を均等あるいはランダムに設定し、各物体検出範囲に対して物体存在確率と物体分類確率を算出する。 In step S1, the object detection unit 23 of the object detection device 13 calculates the object existence probability and the object classification probability based on the image captured by the video camera 11 and input via the image input unit 21. FIG. 3 shows an image of this calculation process. When the image shown in FIG. 3(A) is input, the object detection unit 23 sets object detection ranges of various sizes (the rectangular frames in the figure) evenly or randomly in the image, as shown in FIG. 3(B), and calculates the object existence probability and the object classification probability for each object detection range.

次にステップS2において、物体検出部23は、点群データ入力部22を介して入力された点群データを利用して、算出した物体存在確率の置き換えを行う。この処理の詳細は後述する。 Next, in step S2, the object detection unit 23 uses the point cloud data input via the point cloud data input unit 22 to replace the calculated object existence probabilities. The details of this process are described later.

ステップS3において、物体検出部23は、各物体検出範囲に対して物体存在確率と物体分類確率に基づく物体検出結果を出力する。通常時判定基準値以上の物体存在確率、あるいは物体存在確率と物体分類確率の積を有する物体検出範囲が存在する場合、物体検出部23は、その物体検出範囲に物体が存在し、その物体は、物体検出範囲の物体分類確率が最も高い分類(例えば、人)の物体が存在する旨を、検出結果として出力する。 In step S3, the object detection unit 23 outputs, for each object detection range, an object detection result based on the object existence probability and the object classification probability. If there is an object detection range whose object existence probability, or whose product of object existence probability and object classification probability, is equal to or greater than the normal-time criterion value, the object detection unit 23 outputs as the detection result that an object exists in that object detection range and that it is an object of the classification (for example, person) with the highest object classification probability in that range.

図4は、図2のステップS2の詳細を示すフローチャートである。物体検出範囲毎に以下の処理が行われる。 FIG. 4 is a flow chart showing details of step S2 in FIG. The following processing is performed for each object detection range.

ステップS11において、物体検出部23は、物体分類確率に基づいて、物体検出範囲に存在すると推定される物体を分類する。物体検出範囲において、複数のカテゴリにその確率が付与される(例えば、人=0.99、信号機=0.01、犬=0等)ことから、その中で最もその確率が高い分類が、物体検出範囲に存在すると推定される物体の分類として特定される。 In step S11, the object detection unit 23 classifies the object presumed to exist in each object detection range based on the object classification probabilities. Since a probability is assigned to each of a plurality of categories for an object detection range (for example, person = 0.99, traffic light = 0.01, dog = 0), the category with the highest probability is identified as the classification of the object presumed to exist in that object detection range.
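The classification in step S11 reduces to taking the highest-probability category; a minimal sketch (function name is an assumption, the probabilities repeat the example in the text):

```python
def classify_object(category_probs):
    """Step S11: pick the category with the highest classification probability
    as the classification of the object presumed to exist in the range."""
    return max(category_probs, key=category_probs.get)
```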

ステップS12において、物体検出部23は、ステップS11での物体の分類に応じて、物体検出範囲の大きさで物体検出範囲を分類する。例えば、物体の分類が「人」である場合、大きさが50px×100px以上の物体検出範囲を物体検出範囲(大)に、大きさが30px×60px~50px×100pxの物体検出範囲を、物体検出範囲(中)に、そして大きさが30px×60pxより小さい物体検出範囲を物体検出範囲(小)にそれぞれ分類される。また物体の分類が「トラック」である場合、大きさが260px×120px以上の物体検出範囲を物体検出範囲(大)に、大きさが160px×75px~260px×120pxの物体検出範囲を、物体検出範囲(中)に、そして大きさが160px×75pxより小さい物体検出範囲を物体検出範囲(小)にそれぞれ分類される。 In step S12, the object detection unit 23 classifies each object detection range by its size, according to the object classification determined in step S11. For example, when the object classification is "person", object detection ranges of 50 px x 100 px or larger are classified as object detection range (large), those from 30 px x 60 px up to 50 px x 100 px as object detection range (medium), and those smaller than 30 px x 60 px as object detection range (small). When the object classification is "truck", object detection ranges of 260 px x 120 px or larger are classified as object detection range (large), those from 160 px x 75 px up to 260 px x 120 px as object detection range (medium), and those smaller than 160 px x 75 px as object detection range (small).
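The size classification of step S12 can be sketched with the pixel thresholds given above (the table structure and names are assumptions; the thresholds themselves come from the text):

```python
# class: ((large min width, large min height), (small bound width, small bound height))
SIZE_THRESHOLDS = {
    "person": ((50, 100), (30, 60)),
    "truck": ((260, 120), (160, 75)),
}

def classify_range_size(object_class, width_px, height_px):
    """Step S12: classify an object detection range as large/medium/small."""
    (large_w, large_h), (small_w, small_h) = SIZE_THRESHOLDS[object_class]
    if width_px >= large_w and height_px >= large_h:
        return "large"
    if width_px < small_w and height_px < small_h:
        return "small"
    return "medium"
```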

図5は、物体検出範囲の分類結果のイメージを示す図である。図中、二点鎖線の物体検出範囲は物体検出範囲(大)に、一点鎖線の物体検出範囲は物体検出範囲(中)に、そして実線の物体検出範囲は物体検出範囲(小)に、それぞれ分類される。なお図5における二点鎖線、一点鎖線等は、物体検出範囲の分類を示すために図示したものである。 FIG. 5 shows an image of the classification results of the object detection ranges. In the figure, object detection ranges drawn with two-dot chain lines are classified as object detection range (large), those with one-dot chain lines as object detection range (medium), and those with solid lines as object detection range (small). The two-dot chain lines, one-dot chain lines, and so on in FIG. 5 are drawn only to indicate the classification of the object detection ranges.

図4に戻りステップS13において、物体検出部23は、ステップS12での物体検出範囲の分類に応じて、物体検出範囲に点群を割り当てる。物体の分類が「人」である場合、例えば、物体検出範囲(大)と分類された物体検出範囲には3次元センサ12からの距離(以下、奥行と称する)が100m未満の点群が、物体検出範囲(中)と分類された物体検出範囲には奥行が100m以上200m未満の点群が、物体検出範囲(小)と分類された物体検出範囲には奥行200m以上の点群がそれぞれ割り当てられる。 Returning to FIG. 4, in step S13 the object detection unit 23 assigns point clouds to the object detection ranges according to the classification of the object detection ranges in step S12. When the object classification is "person", for example, point clouds whose distance from the three-dimensional sensor 12 (hereinafter, depth) is less than 100 m are assigned to object detection ranges classified as object detection range (large), point clouds with a depth of 100 m or more and less than 200 m to those classified as object detection range (medium), and point clouds with a depth of 200 m or more to those classified as object detection range (small).
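The depth-bin assignment of step S13 can be sketched as follows (a hypothetical sketch: the bins repeat the "person" example in the text; function names, data structures, and the overlap-resolution loop are assumptions):

```python
def size_class_for_depth(depth_m):
    """Depth bins for the "person" class: large < 100 m,
    medium 100-200 m, small >= 200 m."""
    if depth_m < 100.0:
        return "large"
    if depth_m < 200.0:
        return "medium"
    return "small"

def assign_point_clouds(range_sizes, point_clouds):
    """Step S13: assign each point cloud to the overlapping range whose size
    class matches the point cloud's depth (also resolves the FIG. 6 overlap).

    range_sizes: dict range id -> "large" | "medium" | "small"
    point_clouds: dict point cloud id -> (depth_m, [overlapping range ids])
    """
    assignment = {}
    for pc_id, (depth_m, candidates) in point_clouds.items():
        wanted = size_class_for_depth(depth_m)
        for range_id in candidates:
            if range_sizes[range_id] == wanted:
                assignment[pc_id] = range_id
                break
    return assignment
```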

図6は、物体検出範囲への点群割り当て処理のイメージを示す図である。図6は、物体の分類が「人」である場合の例であり、物体検出範囲(大)に分類された物体検出範囲W1に、奥行が50mの点群G1が割り当てられ、物体検出範囲(中)に分類された物体検出範囲W2に、奥行が150mの点群G2が割り当てられ、そして物体検出範囲(小)に分類された物体検出範囲W3に、奥行が250mの点群G3が割り当てられている。 FIG. 6 shows an image of the process of assigning point clouds to object detection ranges, for the case where the object classification is "person": a point cloud G1 with a depth of 50 m is assigned to object detection range W1, classified as object detection range (large); a point cloud G2 with a depth of 150 m to object detection range W2, classified as object detection range (medium); and a point cloud G3 with a depth of 250 m to object detection range W3, classified as object detection range (small).

なお点群G2の一部は、物体検出範囲W2と物体検出範囲W4の両方の領域に存在する。この例の場合、点群G2の奥行が150mであるので、点群G2は物体検出範囲(中)に分類される物体検出範囲W2に割り当てられる。すなわち点群の一部が複数の物体検出範囲に存在する場合、点群の距離に対応した物体検出範囲に割り当てられる。 Part of the point cloud G2 lies in both object detection range W2 and object detection range W4. In this example, since the depth of the point cloud G2 is 150 m, G2 is assigned to object detection range W2, which is classified as object detection range (medium). That is, when part of a point cloud lies in multiple object detection ranges, it is assigned to the object detection range corresponding to the distance of the point cloud.

図4に戻りステップS14において、物体検出部23は、物体検出範囲に割り当てた点群の物体検出範囲の点群占有率を算出する。具体的には、点群のドット数/物体検出範囲内の画素数が、点群占有率として算出される。 Returning to FIG. 4, in step S14, the object detection unit 23 calculates the point cloud occupancy of the object detection range of the point cloud assigned to the object detection range. Specifically, the number of dots in the point cloud divided by the number of pixels within the object detection range is calculated as the point cloud occupancy rate.
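The occupancy calculation of step S14 is a simple ratio; a minimal sketch (function name is an assumption):

```python
def point_cloud_occupancy(num_point_dots, num_range_pixels):
    """Step S14: number of dots of the assigned point cloud divided by
    the number of pixels in the object detection range."""
    return num_point_dots / num_range_pixels
```

Because it is a dots-to-pixels ratio, the occupancy is directly comparable to a probability in [0, 1] as long as the point cloud projection does not exceed the range area.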

次に、ステップS15において、物体検出部23は、物体検出範囲の物体存在確率と、割り当てられた点群の点群占有率を比較し、点群占有率が物体存在確率より大きいか否かを判定し、点群占有率が物体存在確率より大きいと判定した場合、ステップS16において、点群占有率を物体検出範囲の物体存在確率に置き換える。 Next, in step S15, the object detection unit 23 compares the object existence probability of the object detection range with the point cloud occupancy of the assigned point cloud to determine whether the point cloud occupancy is greater than the object existence probability. If it determines that the point cloud occupancy is greater, then in step S16 it replaces the object existence probability of the object detection range with the point cloud occupancy.

ステップS16で、点群占有率が物体検出範囲の物体存在確率に置き換えられた場合、またはステップS15で、点群占有率が物体存在確率より大きくないと判定された場合、処理は、図2のステップS3に戻る。ステップS3において、物体検出部23は、通常時判定基準値以上の物体存在確率を有する物体検出範囲が存在する場合、“その物体検出範囲に物体が存在し、その物体は、物体検出範囲の物体分類確率が最も高い分類(例えば、人)の物体が存在する旨”を、検出結果として出力する。 When the object existence probability of the object detection range has been replaced with the point cloud occupancy in step S16, or when it is determined in step S15 that the point cloud occupancy is not greater than the object existence probability, the process returns to step S3 in FIG. 2. In step S3, if there is an object detection range with an object existence probability equal to or greater than the normal-time criterion value, the object detection unit 23 outputs as the detection result that an object exists in that object detection range and that it is an object of the classification (for example, person) with the highest object classification probability in that range.
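Steps S15 and S16 together amount to taking the larger of the two values; a minimal sketch (function name is an assumption):

```python
def corrected_existence_probability(ai_probability, occupancy):
    """Steps S15-S16: adopt the point cloud occupancy when it exceeds the
    AI-derived object existence probability; otherwise keep the AI value."""
    return occupancy if occupancy > ai_probability else ai_probability
```

The corrected value is then compared against the normal-time criterion value in step S3 exactly as in the normal AI object detection process.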

(効果)
ビデオカメラ11の撮像結果得られた撮像画像から、物体検出範囲毎に、物体存在確率と物体分類確率を算出し、物体存在確率あるいは物体存在確率と物体分類確率の積が所定の値以上である場合、物体検出範囲には、物体分類確率が最も高い分類の物体が存在するとする物体検出部23と、ビデオカメラ11の撮像範囲を含む検出範囲とし、検出範囲に存在する物体の少なくとも奥行き方向の距離の情報を含む点群データを生成する3次元センサ12とを備え、物体検出部23は、物体検出範囲に、点群データに基づく点群を割り当て(図4のステップS13、図6)、物体検出範囲の物体存在確率と、割り当てられた点群の物体検出範囲の点群占有率を比較し、点群占有率が物体存在確率より大きい場合は、点群占有率を物体検出範囲の物体存在確率とする(ステップS16)。
したがって例えば夕方になり照度が低くなると、AI物体検出処理では物体が存在するにもかかわらず物体存在確率が通常時判定基準値より小さくなり、物体が検出されないことがある。しかしながら、まわりの照度に影響されない3次元センサ12による点群を物体検出範囲に割り当て、点群占有率が物体存在確率より大きい場合は、点群占有率を物体検出範囲の物体存在確率とするようにしたので、より正確な物体存在確率を設定することができ、その結果物体を適切に検出することができる。
(Effects)
An object existence probability and an object classification probability are calculated for each object detection range from the image captured by the video camera 11, and when the object existence probability, or the product of the object existence probability and the object classification probability, is equal to or greater than a predetermined value, the object detection unit 23 determines that an object of the classification with the highest object classification probability exists in the object detection range. The three-dimensional sensor 12, whose detection range includes the imaging range of the video camera 11, generates point cloud data containing at least depth-direction distance information for objects existing in the detection range. The object detection unit 23 assigns point clouds based on the point cloud data to the object detection ranges (step S13 in FIG. 4, FIG. 6), compares the object existence probability of each object detection range with the point cloud occupancy of the assigned point cloud, and, when the point cloud occupancy is greater than the object existence probability, adopts the point cloud occupancy as the object existence probability of the object detection range (step S16).
Therefore, for example, when the illuminance drops in the evening, the AI object detection process may fail to detect an object because the object existence probability falls below the normal-time criterion value even though the object is present. However, because point clouds from the three-dimensional sensor 12, which is unaffected by ambient illuminance, are assigned to the object detection ranges, and the point cloud occupancy is adopted as the object existence probability when it is greater, a more accurate object existence probability can be set, and as a result objects can be detected appropriately.

物体検出部23は、物体の分類に応じて、物体検出範囲の大きさで物体検出範囲を分類するとともに(ステップS12)、大きい物体検出範囲には3次元センサ12からの距離が近い点群を割り当て、小さい物体検出範囲には3次元センサ12からの距離が遠い点群を割り当てる(ステップS13)。
すなわち、3次元センサ12からの距離の大きさに対応した物体検出範囲への点群の割り当てがなされる。したがって、点群の奥行に応じて物体検出範囲に点群を割り当てることができる。誤検知、過検知を防止できる効果が見込まれる。
The object detection unit 23 classifies the object detection ranges by size according to the object classification (step S12), assigns point clouds close to the three-dimensional sensor 12 to large object detection ranges, and assigns point clouds far from the three-dimensional sensor 12 to small object detection ranges (step S13).
That is, point clouds are assigned to object detection ranges corresponding to their distance from the three-dimensional sensor 12, so they can be assigned according to their depth. This is expected to be effective in preventing false detection and over-detection.

また近くの小さい物体(例えば人)と遠くの大きい物体(例えばトラック)の物体検出範囲の大きさは同程度になるため、物体検出範囲に対して予測された物体の分類をもとに行うようにしたので、より正確に物体検出範囲への点群の割り当てを行うことができる。 Also, since a small nearby object (for example, a person) and a large distant object (for example, a truck) produce object detection ranges of similar size, the assignment is performed based on the object classification predicted for the object detection range, so point clouds can be assigned to the object detection ranges more accurately.

11・・・ビデオカメラ
12・・・3次元センサ
13・・・物体検出装置
21・・・画像入力部
22・・・点群データ入力部
23・・・物体検出部
REFERENCE SIGNS LIST
11: video camera
12: three-dimensional sensor
13: object detection device
21: image input unit
22: point cloud data input unit
23: object detection unit

Claims (4)

1. An object detection device comprising:
an object detection unit that calculates, for each object detection range, an object existence probability and an object classification probability from a captured image obtained by imaging means, and determines, when the object existence probability alone or the product of the object existence probability and the object classification probability is equal to or greater than a predetermined value, that an object of the classification with the highest object classification probability exists in the object detection range; and
a three-dimensional sensor whose detection range includes the imaging range of the imaging means and which generates point cloud data including at least distance information in the depth direction of objects existing in the detection range,
wherein the object detection unit assigns a point cloud based on the point cloud data to the object detection range, compares the object existence probability of the object detection range with the point cloud occupancy of the assigned point cloud over the object detection range, and, when the point cloud occupancy is greater than the object existence probability, adopts the point cloud occupancy as the object existence probability of the object detection range.
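A minimal sketch of the probability fusion in this claim, for illustration only: the function name and the definition of occupancy as the fraction of occupied grid cells are assumptions not specified by the claim.

```python
# Hypothetical sketch: if the point cloud occupancy of a detection range
# exceeds the image-based existence probability, the occupancy replaces
# the existence probability (useful when low illuminance weakens the
# image-based score).

def fuse_existence_probability(p_exist, n_occupied_cells, n_cells):
    """p_exist: image-based existence probability in [0, 1];
    n_occupied_cells: grid cells of the detection range containing points;
    n_cells: total grid cells in the detection range (assumed subdivision)."""
    occupancy = min(n_occupied_cells / n_cells, 1.0)
    return max(p_exist, occupancy)

# In darkness the image score may be low, but a dense point cloud rescues it:
p_dark = fuse_existence_probability(0.2, 45, 50)   # occupancy 0.9 wins
# In daylight the image score dominates a sparse point cloud:
p_day = fuse_existence_probability(0.8, 10, 50)    # p_exist 0.8 wins
```

Taking the larger of the two scores is what lets the device keep detecting when the background and the object are hard to distinguish in the image alone.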
2. The object detection device according to claim 1, wherein the object detection unit classifies the object detection ranges by size according to the classification of the objects, assigns point clouds close to the three-dimensional sensor to large object detection ranges, and assigns point clouds far from the three-dimensional sensor to small object detection ranges.
3. An object detection method comprising:
an object detection step of calculating, for each object detection range, an object existence probability and an object classification probability from a captured image obtained by imaging means, and determining, when the object existence probability is equal to or greater than a predetermined value, that an object of the classification with the highest object classification probability exists in the object detection range; and
an input step of inputting point cloud data from a three-dimensional sensor whose detection range includes the imaging range of the imaging means and which generates the point cloud data including at least distance information in the depth direction of objects existing in the detection range,
wherein, in the object detection step, a point cloud based on the point cloud data is assigned to the object detection range, the object existence probability of the object detection range is compared with the point cloud occupancy of the assigned point cloud over the object detection range, and, when the point cloud occupancy is greater than the object existence probability, the point cloud occupancy is adopted as the object existence probability of the object detection range.
4. The object detection method according to claim 3, wherein the object detection step classifies the object detection ranges by size according to the classification of the objects, assigns point clouds close to the three-dimensional sensor to large object detection ranges, and assigns point clouds far from the three-dimensional sensor to small object detection ranges.
JP2021154418A 2021-09-22 2021-09-22 Object detection device and object detection method Pending JP2023045830A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2021154418A JP2023045830A (en) 2021-09-22 2021-09-22 Object detection device and object detection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2021154418A JP2023045830A (en) 2021-09-22 2021-09-22 Object detection device and object detection method

Publications (1)

Publication Number Publication Date
JP2023045830A true JP2023045830A (en) 2023-04-03

Family

ID=85777107

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2021154418A Pending JP2023045830A (en) 2021-09-22 2021-09-22 Object detection device and object detection method

Country Status (1)

Country Link
JP (1) JP2023045830A (en)

Similar Documents

Publication Publication Date Title
US6956469B2 (en) Method and apparatus for pedestrian detection
JP4691701B2 (en) Number detection device and method
US7103213B2 (en) Method and apparatus for classifying an object
JP6459659B2 (en) Image processing apparatus, image processing method, driving support system, program
US20050232466A1 (en) Method of recognizing and/or tracking objects
US11423663B2 (en) Object detection device, object detection method, and non-transitory computer readable medium comprising computer program for object detection-use
WO2006026688A2 (en) Method and apparatus for classifying an object
US11450103B2 (en) Vision based light detection and ranging system using dynamic vision sensor
KR20190011497A (en) Hybrid LiDAR scanner
JP6782433B2 (en) Image recognition device
JP2007114831A (en) Object detection device
CN110954912B (en) Method and apparatus for optical distance measurement
CN112193208A (en) Vehicle sensor enhancement
US11619725B1 (en) Method and device for the recognition of blooming in a lidar measurement
KR101868293B1 (en) Apparatus for Providing Vehicle LIDAR
US20220373660A1 (en) Filtering measurement data of an active optical sensor system
JP2023045830A (en) Object detection device and object detection method
KR101793790B1 (en) Apparatus and method for detecting entity in pen
JP2010060371A (en) Object detection apparatus
JP7036464B2 (en) Object identification device, object identification method, and control program
JP6533244B2 (en) Object detection device, object detection method, and object detection program
US20220214434A1 (en) Gating camera
JP2006258507A (en) Apparatus for recognizing object in front
JP2022013536A (en) Obstacle detection device, obstacle detection method and obstacle detection program
Ikeoka et al. Depth estimation from tilted optics blur by using neural network

Legal Events

Date Code Title Description
A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20230908