WO2022190531A1 - Object detection device, object detection method, and program - Google Patents
- Publication number
- WO2022190531A1 (PCT/JP2021/047100)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- detection
- candidate
- area
- target area
- detected
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/22—Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/761—Proximity, similarity or dissimilarity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
Definitions
- the present invention relates to technology for detecting objects.
- one known technique uses a two-stage detector divided into a front stage and a rear stage: a detector in the front stage extracts candidate areas of a detection target (for example, a face) from the image, and a detector in the subsequent stage detects the target from the plurality of candidate areas. This enables highly accurate object detection.
- the purpose of the present invention is to provide technology capable of high-speed and high-precision object detection.
- the present invention adopts the following configuration.
- a first aspect of the present invention is an object detection apparatus for detecting a predetermined object from an image, comprising: first detection means for detecting, from the image, a candidate area in which the object exists; determination means for determining a target area from the one or more detected candidate areas; second detection means for detecting the object in the target area by a detection algorithm different from that of the first detection means; and storage means for storing detection information representing the detection result of the second detection means for the target area, wherein the determination means determines the target area from the one or more candidate areas based on the detection information for one or more preceding frames.
- Objects to be detected are not particularly limited, but include, for example, human bodies, faces, specific animals, automobiles, and specific products.
- the candidate area is an area judged by the first detection means to have a high probability of containing the object to be detected, and the area to be processed by the second detection means (the target area) is determined based on this candidate area. Any algorithms may be used for the first detection means and the second detection means, but it is desirable that the detection algorithm of the second detection means be capable of more accurate detection, and require a larger amount of calculation, than the detection algorithm of the first detection means.
- the detection information is information obtained by the object detection processing performed by the second detection means, and includes, for example, the position and size of the target area, an image corresponding to the target area, and a score representing the probability that the object to be detected is included in the target area.
- the detection information may include information about target areas in which no object was detected by the second detection means.
- the determining means preferably determines, as the target area, candidate areas other than those whose similarity to a target area in which no object was detected in the previous frame is equal to or greater than a predetermined value.
- the first detection means may also output a first detection reliability representing the probability that the object is included in the candidate area. In that case, for candidate areas whose similarity to a target area in which the object was not detected in the previous frame is equal to or greater than a predetermined value, the determination means may determine the target area based on a value obtained by subtracting a predetermined value from the first detection reliability, and for other candidate areas, based on the first detection reliability itself. According to this configuration, the number of candidate areas passed to the second detection means is reduced, so the processing time of the two-stage detection processing can be reduced while maintaining detection performance.
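The subtraction scheme above can be sketched as follows; the function and parameter names, the score range, and the concrete penalty and threshold values are illustrative assumptions, not part of the claimed configuration:

```python
def decide_target_areas(candidates, penalized_ids, penalty=0.2, threshold=0.5):
    """Keep candidate areas whose (possibly penalized) first detection
    reliability clears the threshold.

    candidates    -- list of (area_id, first_detection_reliability)
    penalized_ids -- ids of candidates similar to a target area in which
                     the second stage found no object in a previous frame
    """
    targets = []
    for area_id, score in candidates:
        if area_id in penalized_ids:
            score -= penalty  # suppress candidates resembling past misses
        if score >= threshold:
            targets.append(area_id)
    return targets

# Two candidates with the same raw score: the one resembling a past
# false detection is filtered out, the other is kept as a target area.
print(decide_target_areas([("a", 0.6), ("b", 0.6)], penalized_ids={"b"}))
```

Only the candidates that survive this step would be handed to the slower second-stage detector, which is where the processing-time saving comes from.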
- the predetermined value to be subtracted from the first detection reliability may be a value corresponding to the number of consecutive frames in which no object is detected by the second detection means.
- the predetermined value may be increased as the number of consecutive frames increases, or the predetermined value to be subtracted from the first detection reliability may be subtracted only when the number of consecutive frames is equal to or greater than a certain number.
- the predetermined value to be subtracted from the first detection reliability may be a fixed value.
- the first detection means may also output a first detection reliability representing the probability that the object is included in the candidate area, and the detection information may include a second detection reliability representing the certainty of the detection result of the second detection means for the target area.
- in that case, for candidate areas similar to the target area indicated by the detection information, the determination means may determine the target area based on the value obtained by subtracting a value corresponding to the second detection reliability from the first detection reliability, and for the other candidate areas, based on the first detection reliability. For example, the higher the second detection reliability, the larger the value to be subtracted from the first detection reliability.
- the detection information includes the position and/or size of the target area, and the determining means preferably obtains the similarity based on the position and/or size of the candidate area and the position and/or size of the target area.
- in object detection, the same object in the input image may be erroneously detected many times; with the above configuration, repeated erroneous detection of objects at the same position and size can be effectively reduced. As a result, the number of candidate areas passed to the second detection means is reduced, so the two-stage detection processing can reduce processing time while maintaining detection performance.
- the detection information may include an image corresponding to the target area, and the determining means preferably obtains the degree of similarity based on the image included in the detection information and the image corresponding to the candidate area. This enables highly accurate object detection even when the position and size of the area indicated by the erroneous detection information match or are similar to those of the candidate area, but the images corresponding to the two areas are completely different.
- a second aspect of the present invention is an object detection method for detecting a predetermined object from an image, comprising: a first detection step of detecting, from the image, a candidate area in which the object exists; a determination step of determining a target area from the one or more detected candidate areas; a second detection step of detecting the object in the target area by a detection algorithm different from that of the first detection step; and a storage step of storing detection information representing the detection result of the second detection step for the target area, wherein the determination step determines the target area from the one or more candidate areas based on the detection information for one or more preceding frames.
- the present invention may be regarded as an object detection device having at least part of the above means, or as a device for recognizing or tracking an object to be detected, or as an image processing device or monitoring system. Further, the present invention may be regarded as an object detection method, an object recognition method, an object tracking method, an image processing method, or a monitoring method including at least part of the above processing. Further, the present invention can also be regarded as a program for realizing such a method and a recording medium on which the program is non-transitorily recorded. Each of the above means and processes can be combined with one another to the extent possible to constitute the present invention.
- object detection can be performed at high speed and with high accuracy.
- FIG. 1 is a diagram showing an application example of object detection.
- FIG. 2 is a diagram showing the configuration of the object detection device.
- FIG. 3 is a flowchart of object detection processing.
- FIG. 4 is a flowchart of determination processing.
- FIG. 5 is a flowchart of determination processing.
- An object detection device detects an object (for example, a human body) from an image acquired by a fixed camera mounted above the detection target area (for example, on a ceiling). The object detection device uses a two-stage detector divided into a front stage and a rear stage. The objects 101 and 102 are objects to be detected (for example, human bodies) and are moving objects that move within the imaging range of the fixed camera 1. The object 103 is an object (for example, a flower) located within the imaging range of the fixed camera 1.
- the object detection apparatus detects candidate areas 111 to 113 in which the object exists in the input image using the above-described detector in the preceding stage.
- Candidate areas 111-113 are areas corresponding to the objects 101-103.
- although the object 103 is not a human body to be detected, the candidate region 113 is generated because the features of the object 103 are similar to those of a human body.
- the object detection device performs object detection using the latter detector described above, and records the detection result in the storage device.
- the subsequent detector basically targets the target regions 121-123 corresponding to the candidate regions 111-113.
- here, it is assumed that the front-stage detector erroneously detects the object (flower) 103 as a target object, while the rear-stage detector can determine that it is not the target object. In this case, the front-stage detector may continue to erroneously detect the object 103. If all of the candidate areas were set as target areas for the rear-stage detector, then in the situation of FIG. 1 the rear-stage detector would perform detection processing on that area every frame even though no target object exists there, resulting in wasted processing.
- therefore, the area in which an object is to be detected by the latter detector (the target area) is determined from among the areas in which the preceding detector detected an object (the candidate areas), based on detection information for one or more previous frames.
- for example, the target area is determined based on the detection score (reliability) output by the preceding detector for each candidate area, but for candidate areas similar to an area in which the latter detector did not detect the object in one or more previous frames, the target area may be determined based on a value obtained by subtracting a predetermined value from the detection score.
- the value to be subtracted may be a fixed value, or may be a value corresponding to the number of consecutive frames in which the object is not detected. In this way, a region in which the preceding detector has detected an object, but which is similar to a region in which the latter detector did not detect the object, is excluded from the processing targets of the latter detector. This makes it possible to speed up the processing while maintaining the accuracy of object detection.
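The overall loop described above might be organized as in the following minimal sketch; the detector interfaces and the similarity predicate are hypothetical stand-ins for the front-stage and rear-stage detectors, whose concrete algorithms the embodiment leaves open:

```python
class TwoStageDetector:
    """First stage proposes candidate areas; second stage verifies them.
    Areas the second stage rejected are remembered and used to filter
    similar candidates in later frames."""

    def __init__(self, first_stage, second_stage, is_similar):
        self.first_stage = first_stage      # fast, coarse detector
        self.second_stage = second_stage    # slow, accurate detector
        self.is_similar = is_similar        # similarity predicate
        self.false_areas = []               # rejected areas from past frames

    def detect(self, frame):
        candidates = self.first_stage(frame)
        # Skip candidates resembling areas the second stage already rejected.
        targets = [c for c in candidates
                   if not any(self.is_similar(c, f) for f in self.false_areas)]
        results = []
        for area in targets:
            if self.second_stage(frame, area):
                results.append(area)
            else:
                self.false_areas.append(area)  # remember for later frames
        return results

# Dummy detectors: the second stage accepts only the first proposal.
det = TwoStageDetector(
    first_stage=lambda frame: [(0, 0, 10, 10), (50, 50, 10, 10)],
    second_stage=lambda frame, area: area == (0, 0, 10, 10),
    is_similar=lambda a, b: a == b)
print(det.detect("frame1"))  # the rejected area is remembered
print(det.detect("frame2"))  # ...and skipped in the next frame
```

In the second frame, the area rejected in the first frame never reaches the expensive second stage, which is the processing-time saving the embodiment targets.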
- FIG. 2 is a functional block diagram of the object detection device 10 according to this embodiment.
- the object detection device 10 is an information processing device (computer) including an arithmetic device (CPU; processor), a memory, a storage device (storage unit 16), an input/output device, and the like.
- the functions of the image input unit 11, the first detection unit 12, the determination unit 13, the second detection unit 14, the output unit 15, and the like are provided by the object detection device 10 executing a program stored in the storage device. Some or all of these functions may be implemented by dedicated logic circuits such as ASICs and FPGAs.
- the image input unit 11 has a function of capturing image data from the camera 20.
- the captured image data is handed over to the first detection unit 12.
- This image data may be stored in the storage unit 16 .
- although the image data is received directly from the camera 20 in this embodiment, it may be received via a communication device or the like, or via a recording medium.
- the input image is not particularly limited, and may be an RGB image, a gray image, or an image representing distance, temperature, or the like.
- the first detection unit 12 detects candidate areas (areas where the object to be detected is likely to exist) from the input image.
- in this embodiment, the first detection unit 12 detects candidate regions using a detector based on Haar-like features and AdaBoost.
- the detection result is handed over to the determination unit 13.
- the detection result includes the detected candidate area, and may further include the likelihood that the object to be detected exists in the candidate area (first detection reliability, detection score).
- the feature amount used for detection and the learning algorithm of the detector are not particularly limited. For example, any feature amount such as a HOG (Histogram of Oriented Gradients) feature, a SIFT feature, a SURF feature, or a sparse feature can be used.
- the learning algorithm can be any learning method, such as a boosting method other than AdaBoost, an SVM (Support Vector Machine), a neural network, or decision tree learning.
- the determination unit 13 determines a region (target region) to be processed by the second detection unit 14 from among the candidate regions detected by the first detection unit 12.
- the determination unit 13 uses the detection information of the previous frame stored in the storage unit 16 to determine the target area from among the candidate areas.
- the detection information includes information about target areas (erroneous detection areas) in which no object was detected by the second detection unit 14 (to be described later) in one or more previous frames.
- the determination unit 13 determines, among the candidate regions, those other than the candidate regions having a degree of similarity with the erroneously detected region equal to or greater than a predetermined value as target regions, and outputs them to the second detection unit 14 in the subsequent stage.
- alternatively, the determination unit 13 may determine, as the target regions, the regions obtained by excluding candidate regions similar to an erroneously detected region from among the candidate regions whose first detection reliability is equal to or higher than a predetermined value.
- the second detection unit 14 performs object detection on the target area determined by the determination unit 13 .
- the detection result includes information indicating whether or not the object to be detected exists in the target area, and may further include the likelihood that the object exists there (second detection reliability, detection score).
- as a result of the object detection, the second detection unit 14 records in the storage unit 16, as detection information, the position and/or size of each target area in which it is determined that the object to be detected does not exist.
- the second detection unit 14 may record detection information (positions and/or sizes) of all the target regions determined by the determination unit 13 in the storage unit 16 .
- the second detection unit 14 detects an object using a detector using deep learning.
- the method of deep learning is not particularly limited; for example, the detector may be based on any method such as a CNN (Convolutional Neural Network), an RNN (Recurrent Neural Network), an SAE (Stacked Auto Encoder), or a DBN (Deep Belief Network).
- the second detection unit 14 need not use deep learning. However, it is desirable that the detection algorithm of the second detection unit 14 be capable of more accurate detection, and require a larger amount of calculation, than the detection algorithm of the first detection unit 12.
- the output unit 15 outputs detection results for the objects detected by the second detection unit 14 .
- the output unit 15 outputs result information indicating that an object has been detected for a candidate region in which the reliability of the detection result by the second detection unit 14 is equal to or greater than a threshold.
- candidate regions whose reliability is less than the threshold need not be included in the result information.
- the detection result information is not particularly limited, but in the case of face detection, for example, information such as face area, reliability, face direction, age, sex, race, facial expression, etc. can be mentioned.
- FIG. 3 is a flowchart showing the overall flow of object detection processing by the object detection device 10. The details of this processing will be described below with reference to the flowchart of FIG. 3.
- in step S31, the object detection device 10 acquires an image (input image).
- the input image may be obtained from the camera 20 via the image input unit 11, from another computer via the communication device 104, or from the storage unit 16.
- in step S32, the first detection unit 12 detects candidate regions (regions in which the detection target object is estimated to exist) from the input image (first detection processing).
- in this embodiment, the first detection unit 12 is configured to use the Haar-like feature amount as the image feature amount and AdaBoost as the learning algorithm.
- the detection result of the first detection process may include, in addition to the candidate areas, the probability that the object to be detected exists in each candidate area (first detection reliability, detection score).
- in step S33, the determination unit 13 determines, as target areas, those candidate areas detected in step S32 other than the candidate areas whose degree of similarity to an erroneously detected area is equal to or greater than a predetermined value.
- An erroneously detected area is a target area in which no object has been detected in the second detection process described later in one or more previous frames.
- the determination unit 13 outputs, as a target area, an area obtained by excluding areas similar to the erroneously detected area from among the candidate areas detected in step S32.
- the determination unit 13 acquires detection information (the position and size of the erroneously detected area) from the storage unit 16 (S41).
- the determination unit 13 may acquire only erroneous detection information for the immediately preceding frame, or may acquire erroneous detection information for a predetermined number of recent frames.
- the determining unit 13 calculates the degree of similarity between each of the one or more candidate regions and the falsely detected region (S42).
- in this embodiment, IoU (Intersection over Union) is used as the degree of similarity.
- IoU is the area of the intersection of two regions divided by the area of their union. It takes a value between 0 and 1: 1 when the two regions completely overlap, and 0 when they do not overlap at all.
- the position and size of the candidate area and the position and size of the erroneously detected area may be used to calculate the IoU.
- the determining unit 13 determines whether or not the IoU is equal to or greater than a predetermined threshold T1 (S43); if the IoU is equal to or greater than T1, the corresponding candidate area is excluded, and the remaining areas are output as the target areas (S44).
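Steps S41-S44 can be sketched with a plain IoU computation over axis-aligned boxes; the `(x, y, width, height)` box format and the threshold value are assumptions, since the embodiment only specifies that position and size are used:

```python
def iou(box_a, box_b):
    """Intersection over Union of two (x, y, w, h) boxes; 0.0 to 1.0."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))  # overlap width
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))  # overlap height
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

def select_targets(candidates, false_areas, t1=0.5):
    """S42-S44: drop candidates overlapping a past false detection."""
    return [c for c in candidates
            if all(iou(c, f) < t1 for f in false_areas)]

print(iou((0, 0, 10, 10), (0, 0, 10, 10)))  # identical boxes -> 1.0
# The first candidate overlaps the recorded false detection; only the
# second survives as a target area.
print(select_targets([(0, 0, 10, 10), (100, 0, 10, 10)],
                     [(1, 1, 10, 10)]))      # -> [(100, 0, 10, 10)]
```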
- in step S34, the second detection unit 14 determines whether or not each of the one or more target regions output in step S33 contains the object to be detected (second detection processing).
- in this embodiment, the second detection unit 14 performs object detection using a classifier trained with a multilayer neural network called a convolutional neural network (CNN).
- in step S35, the second detection unit 14 determines whether or not there is a target area that was determined in step S34 not to contain the object to be detected.
- in step S36, the second detection unit 14 records in the storage unit 16, as detection information, information about the target areas determined not to contain the detection target object.
- in this embodiment, the position and size of each target area determined not to include the object to be detected are recorded in the storage unit 16 as the detection information.
- in step S37, the output unit 15 outputs the detection result for the areas where the object was detected in step S34.
- the output unit 15 outputs result information indicating that a detection target object has been detected for target areas whose detection-result reliability (second detection reliability) is equal to or greater than the threshold.
- a detection target area whose reliability is less than the threshold need not be included in the result information.
- <Advantageous effects of the present embodiment> In object detection, the same object in an input image may be erroneously detected many times; according to this embodiment, repeated erroneous detection of objects at the same position and size can be effectively reduced. As a result, the number of candidate regions (target regions) passed to the second detection unit is reduced, so the two-step detection processing can reduce processing time while maintaining detection performance.
- in the first embodiment described above, an example was described in which the degree of similarity in step S33 is determined based on the positions and sizes of the candidate area and the erroneously detected area.
- in this embodiment, an example will be described in which the degree of similarity is determined by pattern matching between the image corresponding to a candidate area and the image corresponding to an erroneously detected area.
- the description of processing identical to the first embodiment is omitted, and only the determination processing (S33), which differs, will be described.
- FIG. 5 is a flow chart of the determination process performed in step S33 in this embodiment.
- the determination unit 13 acquires detection information from the storage unit 16 (S51).
- the detection information includes an image corresponding to the erroneously detected area.
- the determining unit 13 performs pattern matching processing using the image corresponding to the erroneously detected area for each of the images corresponding to one or more candidate areas (S52).
- the determination unit 13 determines whether or not the degree of similarity between the images obtained by pattern matching is equal to or greater than a predetermined threshold T2 (S53); if the similarity is equal to or greater than T2, the corresponding candidate area is excluded, and the remaining areas are output as target areas (S54).
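The pattern-matching similarity of S52-S53 could, for instance, use zero-mean normalized cross-correlation between equally sized grayscale patches. This is only one possible matching score; the embodiment does not prescribe a particular pattern matching method, and the names here are illustrative:

```python
import math

def ncc(patch_a, patch_b):
    """Zero-mean normalized cross-correlation of two equal-size
    grayscale patches given as flat lists; result lies in [-1, 1]."""
    n = len(patch_a)
    ma = sum(patch_a) / n
    mb = sum(patch_b) / n
    da = [p - ma for p in patch_a]
    db = [p - mb for p in patch_b]
    num = sum(x * y for x, y in zip(da, db))
    den = math.sqrt(sum(x * x for x in da) * sum(y * y for y in db))
    return num / den if den else 0.0

def matches_false_area(candidate_img, false_img, t2=0.9):
    """S53: similarity at or above T2 means the candidate looks like a
    past false detection and should be excluded from the target areas."""
    return ncc(candidate_img, false_img) >= t2
```

Because the comparison is image-based, a genuine object that happens to occupy the same position as a past false detection still produces a low score and is kept as a target area, which is the effect described below.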
- <Advantageous effects of the present embodiment> Even when the position and size of an erroneously detected area and a candidate area match or are similar, object detection can be performed with high accuracy if the images corresponding to the two areas are completely different. For example, even if the object to be detected overlaps the position of the object 103 shown in FIG. 1, the similarity is calculated based on the image, so the area corresponding to that position can still be used as a target area.
- in the first and second embodiments described above, the determination unit 13 determines, as target areas, the candidate areas excluding those similar to an erroneously detected area; however, the present invention is not limited to this.
- for example, when the first detection unit 12 outputs the first detection reliability described above, the determination unit 13 determines candidate areas whose first detection reliability is equal to or greater than a predetermined threshold T3 as target areas. In this case, for candidate areas whose degree of similarity to the erroneously detected area is equal to or greater than a predetermined threshold T4, the determination unit 13 may determine as target areas those candidate areas for which the value obtained by subtracting a predetermined value from the first detection reliability is still equal to or greater than the threshold T3.
- the method of determining the predetermined value to be subtracted from the first detection reliability is not particularly limited.
- the predetermined value to be subtracted from reliability may be a fixed value.
- the predetermined value to be subtracted from the reliability may be determined according to the number of consecutive frames in which the target object is not detected by the second detection unit 14.
- the predetermined value may be increased as the number of consecutive frames increases, or the predetermined value to be subtracted from the first detection reliability may be subtracted only when the number of consecutive frames is equal to or greater than a certain number.
- the predetermined value to be subtracted from the reliability may be determined based on the second detection reliability.
- in this case, the determination unit 13 determines candidate areas whose first detection reliability is equal to or greater than the predetermined threshold T3 as target areas. At this time, for candidate areas whose similarity to the erroneously detected area is equal to or greater than the predetermined threshold T4, the determination unit 13 subtracts a value based on the second detection reliability from the first detection reliability, and determines as target areas those candidate areas for which the resulting value is still equal to or greater than the threshold T3. For example, the higher the second detection reliability, the larger the value to be subtracted.
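This modification can be sketched as follows; the linear weighting and the names are assumptions — the embodiment only requires that a higher second detection reliability lead to a larger subtracted value:

```python
def adjusted_score(first_reliability, similar_to_false_area,
                   second_reliability=0.0, weight=0.5):
    """Score compared against threshold T3 when deciding target areas.

    For candidates resembling a past erroneously detected area, subtract
    a penalty that grows with the second detection reliability recorded
    for that area: the surer the second stage was, the larger the cut.
    """
    if similar_to_false_area:
        return first_reliability - weight * second_reliability
    return first_reliability

# Same first-stage score; the area rejected with high confidence by the
# second stage receives the larger penalty.
print(adjusted_score(0.8, True, second_reliability=0.9))  # 0.8 - 0.45
print(adjusted_score(0.8, True, second_reliability=0.2))  # 0.8 - 0.10
```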
- a ratio or difference in size between regions, a difference in position (for example, central coordinate values) between regions, or a combination thereof may be used as a similarity index.
- 10: Object detection device, 11: Image input unit, 12: First detection unit, 13: Determination unit, 14: Second detection unit, 15: Output unit, 16: Storage unit, 1, 20: Cameras, 101, 102, 103: Objects, 111, 112, 113: Candidate regions, 121, 122, 123: Target regions
Abstract
Description
図1を参照して、本発明に係る物体検出装置の適用例を説明する。物体検出装置は、検出対象エリアの上方(例えば、天井)に取り付けられた固定カメラによって取得される画像から対象物(例えば、人体)を検出する。また、物体検出装置は、前段と後段に分かれた二段構成の検出器を用いる。物体101および物体102は、検出物(例えば、人体)であって、固定カメラ1の撮像範囲を移動する動体である。物体103は、固定カメラ1の撮像範囲内に設けられる物体(例えば、花)である。物体検出装置は、入力画像に対して上述の前段の検出器を用いて対象物が存在する候補領域111~113を検出する。候補領域111~113は、物体101~103に対応する領域である。物体103は検出対象の人体ではないが、物体103の特徴が人体に類似している場合に候補領域113が発生する。そして、物体検出装置は、上述の後段の検出器を用いて物体検出を行い、検出結果を記憶装置に記録する。後段の検出器は、基本的に候補領域111~113に対応する対象領域121~123を対象として行う。ここで、前段の検出器は、物体(花)103を対象物であると誤検出するが、後段の検出器は対象物ではないと検出できるものとする。この場合、前段の検出器は物体103の誤検出し続けることが考えられる。候補領域の全てを後段の検出器の対象領域とすると、図1の状況において、後段の検出器は対象物が存在しないにもかかわらず、毎フレーム検出処理を行うことになり無駄な処理が発生する。
<構成>
図2は、本実施形態に係る物体検出装置10における機能ブロック図である。物体検出装置10は、演算装置(CPU;プロセッサ)、メモリ、記憶装置(記憶部16)、入出力装置等を含む情報処理装置(コンピュータ)である。記憶装置に格納されたプログラムを物体検出装置10が実行することで、画像入力部11、第1の検出部12、判定部13、第2の検出部14、出力部15等の機能が提供される。これらの機能の一部または全部は、ASICやFPGAなどの専用の論理回路により実装されてもよい。
図3は、物体検出装置10による物体検出処理の全体の流れを示すフローチャートである。以下、図3のフローチャートにしたがって、物体検出装置100の詳細について説明する。
ステップS31において、物体検出装置10は、画像(入力画像)を取得する。入力画像は、画像入力部11を介してカメラ20から取得されてもよいし、通信装置104を介して他のコンピュータから取得されてもよいし、記憶部16から取得されてもよい。
ステップS32において、第1の検出部12は、入力画像から候補領域(検出対象の物体が存在すると推定される領域)を検出する(第1の検出処理)。本実施形態では、第1の検出部12は、画像特徴量としてHaar-like特徴量を用い、学習アルゴリズムとしてAdaBoostを用いるように構成される。第1の検出処理の検出結果として、上述の候補領域の他に、当該候補領域に検出対象の物体が存在する確からしさ(第1の検出信頼度、検出スコア)が含まれてもよい。
ステップS33において、判定部13は、ステップS32で検出された候補領域のうち、誤検出領域との類似度が所定値以上の候補領域以外を、対象領域として決定する。誤検出領域は、1つ以上前のフレームにおける後述する第2の検出処理において、物体が検出されなかった対象領域である。判定部13は、ステップS32で検出された候補領域の中から誤検出領域に類似するものを除いた領域を対象領域として出力する。
ステップS34において、第2の検出部14は、ステップS33で出力された1つ以上の対象領域に対して、検出対象の物体が含まれるか否かを判定する(第2の検出処理)。本実施形態では、第2の検出部14は、たたみ込みニューラルネットワーク(CNN)と呼ばれる多層ニューラルネットワークを用いて学習した識別器を用いて物体検出を行う。
ステップS37において、出力部15は、ステップS34で物体が検出された領域について検出結果を出力する。出力部15は、物体検出領域による検出結果の信頼度(第2の検出信頼度)が閾値以上である検出対象領域について、検出対象の物体が検出されたことを示す結果情報を出力する。信頼度が閾値未満の検出対象領域については、結果情報に含めなくてよい。
In object detection, the same thing in the input image can be erroneously detected again and again; according to the present embodiment, repeated false detections of something with the same position and size can be effectively reduced. Since this reduces the number of candidate areas (target areas) passed to the second detection unit, the two-stage detection processing can cut processing time while maintaining detection performance.
In the first embodiment described above, an example was described in which the similarity in step S33 is determined based on the positions and sizes of the candidate areas and the false-detection areas. In the present embodiment, an example will be described in which the similarity in step S33 is determined by pattern matching between the image corresponding to a candidate area and the image corresponding to a false-detection area. Description of processing identical to the first embodiment is omitted, and only the differing determination processing (S33) is described.
FIG. 5 is a flowchart of the determination processing performed in step S33 in the present embodiment. First, the determination unit 13 acquires detection information from the storage unit 16 (S51). In the present embodiment, the detection information includes the images corresponding to false-detection areas. The determination unit 13 then performs pattern matching on each of the images corresponding to the one or more candidate areas, using the images corresponding to the false-detection areas (S52). The determination unit 13 then judges whether the image-to-image similarity obtained by the pattern matching is at or above a predetermined threshold T2 (S53), and, when the similarity is at or above the threshold T2, outputs as target areas the areas excluding the corresponding candidate area (S54).
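The judgment of steps S52 to S54 can be sketched with normalized cross-correlation, one common pattern-matching similarity (the patent does not name a specific matching method); the function names and the default T2 are assumptions.

```python
import math

def ncc(a, b):
    # Normalized cross-correlation between two equal-size gray patches,
    # in [-1, 1]; 1.0 means an exact match up to brightness and contrast.
    fa = [v for row in a for v in row]
    fb = [v for row in b for v in row]
    ma, mb = sum(fa) / len(fa), sum(fb) / len(fb)
    num = sum((x - ma) * (y - mb) for x, y in zip(fa, fb))
    da = math.sqrt(sum((x - ma) ** 2 for x in fa))
    db = math.sqrt(sum((y - mb) ** 2 for y in fb))
    return num / (da * db) if da and db else 0.0

def keep_after_pattern_match(cand_patches, false_patches, t2=0.9):
    # Steps S52-S54: a candidate survives only if its image matches no
    # recorded false-detection image with similarity >= T2.
    # Returns the indices of the surviving candidates.
    return [i for i, c in enumerate(cand_patches)
            if all(ncc(c, f) < t2 for f in false_patches)]
```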
According to the present embodiment, object detection can be performed with high accuracy even when the position and size of a false-detection area and a candidate area match or are similar but the images corresponding to the two areas are entirely different. For example, even when an object to be detected overlaps the position of object 103 shown in FIG. 1, the similarity is calculated from the images, so the area corresponding to that position can still be made a target area.
In the first and second embodiments described above, examples were described in which the determination unit 13 determines as target areas the candidate areas excluding those resembling a false-detection area; however, the invention is not limited to this. For example, when the first detection unit 12 outputs the first detection confidence described above, the determination unit 13 determines as target areas the candidate areas whose first detection confidence is at or above a predetermined threshold T3. In this case, for a candidate area whose similarity to a false-detection area is at or above a predetermined threshold T4, the determination unit 13 may determine it to be a target area only when the value obtained by subtracting a predetermined value from its first detection confidence is at or above the threshold T3.
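This confidence-adjustment variant can be sketched as follows, with the similarity function, the thresholds T3 and T4, and the fixed penalty all left as illustrative parameters.

```python
def decide_targets(candidates, false_areas, sim, t3=0.5, t4=0.5, penalty=0.2):
    # A candidate similar (>= T4) to a past false-detection area must clear
    # threshold T3 even after a predetermined penalty is subtracted from its
    # first detection confidence; other candidates are judged on the raw
    # confidence. `sim` is any similarity in [0, 1]; the fixed penalty could
    # instead grow with the number of consecutively rejected frames, or be
    # derived from the second detection confidence.
    targets = []
    for area, confidence in candidates:
        score = confidence
        if any(sim(area, f) >= t4 for f in false_areas):
            score -= penalty
        if score >= t3:
            targets.append(area)
    return targets
```

Compared with outright exclusion, a persistently similar candidate is not dropped forever: a sufficiently confident first-stage detection can still overcome the penalty and reach the second stage.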
11: image input unit
12: first detection unit
13: determination unit
14: second detection unit
15: output unit
16: storage unit
1, 20: camera
101, 102, 103: objects
111, 112, 113: candidate areas
121, 122, 123: target areas
Claims (11)
- An object detection device that detects a predetermined object from an image, comprising:
a first detection means for detecting, from the image, a candidate area in which the object exists;
a determination means for determining a target area from the one or more candidate areas detected by the first detection means;
a second detection means for detecting the object, targeting the target area, by a detection algorithm different from that of the first detection means; and
a storage means for storing detection information representing a detection result of the second detection means for the target area,
wherein the determination means determines the target area from the one or more candidate areas based on the detection information for one or more preceding frames.
- The object detection device according to claim 1, wherein the detection information includes information on a target area in which the object was not detected by the second detection means.
- The object detection device according to claim 2, wherein the determination means determines, as the target areas, the candidate areas other than candidate areas whose similarity to a target area in which the object was not detected in a preceding frame is at or above a predetermined value.
- The object detection device according to claim 2, wherein the first detection means also outputs a first detection confidence representing a likelihood that the object is included in the candidate area, and
the determination means determines the target areas based on a value obtained by subtracting a predetermined value from the first detection confidence for candidate areas whose similarity to a target area in which the object was not detected in a preceding frame is at or above a predetermined value, and based on the first detection confidence for the other candidate areas.
- The object detection device according to claim 4, wherein the predetermined value is a value corresponding to the number of consecutive frames in which the object was not detected by the second detection means.
- The object detection device according to claim 4, wherein the predetermined value is a fixed value.
- The object detection device according to claim 1, wherein the first detection means also outputs a first detection confidence representing a likelihood that the object is included in the candidate area,
the detection information includes a second detection confidence, determined by the second detection means, representing a likelihood that the object is included in the target area, and
the determination means determines the target areas based on a value obtained by subtracting a value corresponding to the second detection confidence from the first detection confidence for candidate areas whose similarity to a target area indicated in the detection information is at or above a predetermined value, and based on the first detection confidence for the other candidate areas.
- The object detection device according to any one of claims 3 to 7, wherein the detection information includes a position and/or size of the target area, and
the determination means obtains the similarity based on the position and/or size of the candidate area and the position and/or size of the target area.
- The object detection device according to any one of claims 3 to 7, wherein the detection information includes an image corresponding to the target area, and
the determination means obtains the similarity based on the image included in the detection information and an image corresponding to the candidate area.
- An object detection method for detecting a predetermined object from an image, comprising:
a first detection step of detecting, from the image, a candidate area in which the object exists;
a determination step of determining a target area from the one or more candidate areas detected in the first detection step;
a second detection step of detecting the object, targeting the target area, by a detection algorithm different from that of the first detection step; and
a storage step of storing detection information representing a detection result of the second detection step for the target area,
wherein, in the determination step, the target area is determined from the one or more candidate areas based on the detection information for one or more preceding frames.
- A program for causing a computer to execute each step of the object detection method according to claim 10.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202180093783.0A CN116868227A (zh) | 2021-03-08 | 2021-12-20 | 物体检测装置、物体检测方法以及程序 |
DE112021007212.9T DE112021007212T5 (de) | 2021-03-08 | 2021-12-20 | Objekterfassungseinrichtung, Objekterfassungsverfahren und Programm |
US18/547,793 US20240144631A1 (en) | 2021-03-08 | 2021-12-20 | Object detection device, object detection method, and program |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2021036637A JP2022136840A (ja) | 2021-03-08 | 2021-03-08 | 物体検出装置、物体検出方法、およびプログラム |
JP2021-036637 | 2021-03-08 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022190531A1 true WO2022190531A1 (ja) | 2022-09-15 |
Family
ID=83227546
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2021/047100 WO2022190531A1 (ja) | 2021-03-08 | 2021-12-20 | 物体検出装置、物体検出方法、およびプログラム |
Country Status (5)
Country | Link |
---|---|
US (1) | US20240144631A1 (ja) |
JP (1) | JP2022136840A (ja) |
CN (1) | CN116868227A (ja) |
DE (1) | DE112021007212T5 (ja) |
WO (1) | WO2022190531A1 (ja) |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2019159391A (ja) * | 2018-03-07 | 2019-09-19 | オムロン株式会社 | 物体検出装置、物体検出方法、およびプログラム |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4708835B2 (ja) | 2005-04-12 | 2011-06-22 | 日本電信電話株式会社 | 顔検出装置、顔検出方法、及び顔検出プログラム |
JP6907774B2 (ja) | 2017-07-14 | 2021-07-21 | オムロン株式会社 | 物体検出装置、物体検出方法、およびプログラム |
2021
- 2021-03-08 JP JP2021036637A patent/JP2022136840A/ja active Pending
- 2021-12-20 CN CN202180093783.0A patent/CN116868227A/zh active Pending
- 2021-12-20 DE DE112021007212.9T patent/DE112021007212T5/de active Pending
- 2021-12-20 WO PCT/JP2021/047100 patent/WO2022190531A1/ja active Application Filing
- 2021-12-20 US US18/547,793 patent/US20240144631A1/en active Pending
Also Published As
Publication number | Publication date |
---|---|
US20240144631A1 (en) | 2024-05-02 |
JP2022136840A (ja) | 2022-09-21 |
DE112021007212T5 (de) | 2024-01-04 |
CN116868227A (zh) | 2023-10-10 |
Legal Events

Code | Title | Description
---|---|---
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 21930376; Country of ref document: EP; Kind code of ref document: A1
WWE | Wipo information: entry into national phase | Ref document number: 202180093783.0; Country of ref document: CN
WWE | Wipo information: entry into national phase | Ref document number: 18547793; Country of ref document: US
WWE | Wipo information: entry into national phase | Ref document number: 112021007212; Country of ref document: DE
122 | Ep: pct application non-entry in european phase | Ref document number: 21930376; Country of ref document: EP; Kind code of ref document: A1