WO2015170706A1 - Traveling body detection method and program - Google Patents

Traveling body detection method and program

Info

Publication number
WO2015170706A1
Authority
WO
WIPO (PCT)
Prior art keywords: feature points, target image, image, feature, target
Application number
PCT/JP2015/063213
Other languages
French (fr)
Japanese (ja)
Inventor
貴裕 東
Original Assignee
日本電産エレシス株式会社
Application filed by 日本電産エレシス株式会社
Publication of WO2015170706A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/20: Analysis of motion
    • G08: SIGNALLING
    • G08G: TRAFFIC CONTROL SYSTEMS
    • G08G 1/00: Traffic control systems for road vehicles
    • G08G 1/16: Anti-collision systems
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00: Television systems
    • H04N 7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast


Abstract

A device (10) having: an imaging unit (110) that captures, over a plurality of frames, an image including information on the surroundings of a host vehicle; a feature point extraction unit (120) that extracts, from the captured image frames, a plurality of feature points included in each of a first and a second target image for processing that were captured at two different times; a hypothesis generation unit (140) that selects two first feature points and two second feature points from the extracted feature points of the first target image and the second target image respectively, generates a first line segment connecting the two feature points selected in the first target image and a second line segment connecting the two feature points in the second target image, and, if the first line segment and the second line segment are parallel, generates a selection condition, being a condition to be fulfilled by the movement vector of one target object including the two feature points, on the basis of the movement vectors of the two feature points between the first target image and the second target image; and a detection unit (160) that detects a traveling body on the basis of the feature points satisfying the selection condition.

Description

Moving object detection method and program

The present invention relates to a moving object detection method and a program.

In recent years, techniques for recognizing obstacles around a moving body, using a camera mounted in the cabin of the moving body such as a vehicle, have become widespread. As a conventional technique for recognizing obstacles around a moving body, there is a method, such as that of Patent Document 1, that recognizes the surrounding state by clustering feature amounts extracted from surrounding information.

Patent Document 1: JP 2011-14037 A

However, such a conventional method requires a large amount of calculation. An object of the present invention, made in view of the above problem, is to provide a technique capable of detecting a moving object with a small amount of calculation.
According to one embodiment of the present invention, there is provided a method for detecting a moving body different from the host vehicle based on target images including information on the surroundings of the host vehicle, the method including:
a preparation step of capturing the target image over a plurality of frames, storing the image information, and extracting from the image information two frame images captured at different times as a first target image and a second target image;
a feature point extraction step of extracting a plurality of first feature points included in the first target image, selecting two first feature points from among them, extracting a plurality of second feature points included in the second target image, and associating the two first feature points with the two second feature points of the second target image that indicate the same parts on the target image;
a detection step of comparing a first line segment connecting the two first feature points with a second line segment connecting the two second feature points, and detecting their parallelism and enlargement ratio; and
an estimation step of estimating the range of a moving body different from the host vehicle based on the parallelism or the enlargement ratio and on the two first feature points or the two second feature points.
According to one embodiment of the present invention, there is provided a moving object detection method including recognizing that, when the parallelism and/or the enlargement ratio of the first line segment and the second line segment match, the image portion including the first feature points or the second feature points is part of a single moving body different from the host vehicle.

According to one embodiment of the present invention, there is provided a moving body detection method in which the parallelism is estimated to match even when the angle formed between the first line segment and the second line segment is within 5°.
According to one embodiment of the present invention, there is provided a moving body detection method in which, in the feature point extraction step, a target region is set in the first target image, and different pairs of two first feature points included in the target region, together with the two second feature points associated with them and indicating the same parts on the target image, are sequentially extracted; and in the detection step, the pairs whose parallelism and enlargement ratio match each other are stored, and the range of a moving body different from the host vehicle is estimated based on information including the number of matching pairs of these feature points (the number of supports).

According to one embodiment of the present invention, there is provided a moving body detection method in which, for different pairs of feature points whose enlargement ratios are the same, the image portions including these different feature points are estimated to both be included in the same moving body.

According to one embodiment of the present invention, there is provided a moving body detection method in which the enlargement ratios of the respective feature point pairs are estimated to match when one is within 5% of the other.

According to one embodiment of the present invention, there is provided a program recorded on a non-volatile storage medium and executed by a computer, for detecting a moving body different from the host vehicle based on target images including information on the surroundings of the host vehicle, the program including:
a preparation step of capturing the target image over a plurality of frames, storing the image information, and extracting from the image information two frame images captured at different times as a first target image and a second target image;
a feature point extraction step of extracting a plurality of first feature points included in the first target image, selecting two first feature points from among them, extracting a plurality of second feature points included in the second target image, and associating the two first feature points with the two second feature points of the second target image that indicate the same parts on the target image;
a detection step of comparing a first line segment connecting the two first feature points with a second line segment connecting the two second feature points, and detecting their parallelism and enlargement ratio; and
an estimation step of estimating the range of a moving body different from the host vehicle based on the parallelism or the enlargement ratio and on the two first feature points or the two second feature points,
wherein, when the parallelism and/or the enlargement ratio of the first line segment and the second line segment match, the image portion including the first feature points or the second feature points is recognized as part of a single moving body different from the host vehicle.
The program is provided recorded on a computer-readable storage medium.
According to the present invention, a moving object located around the host vehicle can be detected with a small amount of calculation.
FIG. 1 is a block diagram conceptually showing the processing configuration of an apparatus implementing the moving object detection method in the first embodiment.
FIG. 2 is a flowchart showing the flow of processing of the apparatus implementing the moving object detection method in the first embodiment.
FIG. 3 is a flowchart showing the flow of processing of the apparatus implementing the moving object detection method in the first embodiment.
FIG. 4 is a diagram showing an example of the first target image.
FIG. 5 is a diagram showing an example of the second target image.
FIG. 6 is a diagram showing the movement vectors of the feature points obtained based on the first target image and the second target image.
FIG. 7 is a diagram for explaining the flow in which the hypothesis generation unit sets the target region ROI.
FIG. 8 is a diagram showing the points p and q of FIG. 4 and the points p′ and q′ of FIG. 5 plotted on the same coordinate axes.
Hereinafter, embodiments of the present invention will be described with reference to the drawings. In all the drawings, similar components are given the same reference numerals, and duplicate description is omitted as appropriate.
[Processing configuration]
FIG. 1 is a block diagram conceptually showing the processing configuration of an apparatus implementing the moving object detection method in the first embodiment. The moving object detection apparatus 10 includes an imaging unit 110, a feature point extraction unit 120, a stationary object identification unit 130, a hypothesis generation unit 140, a hypothesis selection unit 150, and a detection unit 160.
The imaging unit 110 captures images including the scene around the host vehicle over a plurality of frames. Here, "capturing an image over a plurality of frames" may mean capturing a moving image composed of a plurality of still images, or simply capturing a plurality of still images at arbitrary time intervals.
The feature point extraction unit 120 extracts, from the plurality of image frames captured by the imaging unit 110, a plurality of feature points included in each of the two target images to be processed (a first target image and a second target image) captured at two different times. The feature points extracted here may include feature points related to stationary objects and feature points related to moving objects. Each of the feature points included in the first target image is then associated with the feature point on the second target image that indicates the same part on the target image, for example the edge of a tire. This association is performed by detecting movement vectors or the like. As a result, feature points on images captured at different times indicate the same part of an actual moving body. In this way, the above association is performed mutually on the plurality of feature points extracted from the first target image and the second target image. These associations may be performed for each feature point extraction step described later, or all feature points may be associated in advance in a single batch.
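For illustration, the extraction and association of feature points between the two frames can be sketched as follows, assuming OpenCV. The patent leaves the algorithms open ("an arbitrary algorithm"); Shi-Tomasi corners and pyramidal Lucas-Kanade optical flow, as well as all names and parameter values below, are assumptions for this sketch.

```python
# Minimal sketch of the feature point extraction and association step.
import cv2
import numpy as np

def extract_and_associate(first_img, second_img):
    """Extract feature points from the first target image and associate each
    with the point showing the same part in the second target image."""
    g1 = cv2.cvtColor(first_img, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(second_img, cv2.COLOR_BGR2GRAY)

    # First feature points: corner features of the first target image.
    p1 = cv2.goodFeaturesToTrack(g1, maxCorners=500,
                                 qualityLevel=0.01, minDistance=7)
    if p1 is None:
        return np.empty((0, 2)), np.empty((0, 2)), np.empty((0, 2))

    # Optical flow yields, per first feature point, the associated second
    # feature point; their difference is the movement vector.
    p2, status, _err = cv2.calcOpticalFlowPyrLK(g1, g2, p1, None)
    ok = status.ravel() == 1
    first_pts = p1.reshape(-1, 2)[ok]
    second_pts = p2.reshape(-1, 2)[ok]
    return first_pts, second_pts, second_pts - first_pts
```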
The stationary object identification unit 130 identifies, among the feature points included in each image, the feature points related to stationary objects. The determination of whether an object is stationary will be described later. This makes it possible to discriminate, among the feature points extracted by the feature point extraction unit 120, the feature points related to moving objects.
The hypothesis generation unit 140 selects two feature points each, the "two first feature points" and the "two second feature points", from the feature points of the first target image and the second target image extracted by the feature point extraction unit 120. It then generates a first line segment connecting the two selected feature points of the first target image and a second line segment connecting the two selected feature points of the second target image. Next, when the first line segment and the second line segment are parallel, it generates a selection condition, that is, a condition to be satisfied by the movement vector of an object containing the two feature points, based on the respective movement vectors of the two feature points between the first target image and the second target image. This will be described later.
When the hypothesis generation unit 140 has generated a plurality of selection conditions, the hypothesis selection unit 150 selects the selection condition with the highest accuracy among them.
The detection unit 160 selects the feature points whose movement vectors between the two images captured by the imaging unit 110 (the first target image and the second target image) satisfy the selection condition, and detects a moving object based on the selected feature points.
Note that each component of the moving object detection apparatus 10 shown in FIG. 1 represents a functional block, not a hardware unit. Each component of the moving object detection apparatus 10 is realized by an arbitrary combination of hardware and software, centered on the CPU and memory of an arbitrary computer, a program loaded into the memory that realizes the components shown in the figure, a storage medium such as a hard disk storing the program, and a network connection interface. There are various modifications of the method and apparatus for realizing them.
[Operation example]
An operation example of the moving object detection apparatus 10 in this embodiment will be described below with reference to FIGS. 2 and 3. FIGS. 2 and 3 are flowcharts showing the flow of processing of the moving object detection apparatus in the first embodiment.
The feature point extraction unit 120 acquires a first image (the first target image) and a second image (the second target image) from the images captured by the imaging unit 110 (S101). The first target image and the second target image are captured at different timings. The feature point extraction unit 120 processes each image using an arbitrary algorithm and extracts the feature points included in the image (S102). The stationary object identification unit 130 calculates the movement vector of each feature point based on the correspondence between the feature points extracted from each image (S103). Furthermore, the stationary object identification unit 130 detects the vanishing point of the first target image using a vanishing point detection algorithm (S104). The stationary object identification unit 130 then selects one of the feature points extracted in S102 (S105) and checks it against a condition for determining whether the feature point relates to a stationary object (the stationary object condition) (S106). Specifically, the stationary object condition is whether, when the movement component due to camera rotation is cancelled, the feature point has a movement vector that coincides with the direction extending radially from the vanishing point of the image; this condition discriminates the feature points related to stationary objects. When a feature point satisfies the stationary object condition (S106: YES), the stationary object identification unit 130 excludes it from the subsequent processing targets (S107). When it does not (S106: NO), the processing of S107 is skipped. The stationary object identification unit 130 then determines whether the stationary object condition has been checked for all feature points (S108). When unchecked feature points remain (S108: NO), it repeats S105 to S107. When all the feature points have been checked (S108: YES), the process proceeds to S109.
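For illustration, the stationary object condition of S106 can be sketched as follows, assuming the movement component due to camera rotation has already been cancelled; the angular tolerance and all names are assumptions.

```python
import numpy as np

def is_stationary(point, motion, vanishing_point, tol_deg=5.0):
    """Stationary object condition (S106): after cancelling camera rotation,
    a stationary point's movement vector points radially away from the
    vanishing point v. tol_deg is an assumed angular tolerance."""
    radial = np.asarray(point) - np.asarray(vanishing_point)
    m = np.asarray(motion)
    denom = np.linalg.norm(radial) * np.linalg.norm(m)
    if denom == 0.0:
        return False  # no motion, or point at the vanishing point: undecidable
    cos_angle = float(np.dot(radial, m)) / denom
    return cos_angle > np.cos(np.radians(tol_deg))
```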
Here, the operation of the stationary object identification unit 130 will be described with reference to FIGS. 4 to 6, which are diagrams for explaining its operation. FIG. 4 shows an example of the first target image. FIG. 5 shows an example of the second target image. FIG. 6 shows the movement vector of each feature point obtained based on the first target image and the second target image.
In FIG. 4, the × mark is the vanishing point v detected in S104, and the points a, b, p, and q are feature points included in the first target image. The points a, b, p, and q exist in the second target image shown in FIG. 5 as the points a′, b′, p′, and q′, respectively. The first target image and the second target image include feature points other than these, but they are not shown for convenience of explanation. The stationary object identification unit 130 identifies the correspondence between the points a, b, p, q and the points a′, b′, p′, q′ using an arbitrary algorithm that identifies the correspondence of feature points between images. Then, based on the correspondence between the feature points and their coordinate positions in each image, the stationary object identification unit 130 calculates a movement vector for each feature point, as shown in FIG. 6. FIGS. 4 and 5 illustrate the case where the host vehicle carrying the camera travels straight ahead. Here, as shown in FIG. 6, the movement vector aa′ of the point a and the movement vector bb′ of the point b coincide with the direction extending radially from the vanishing point v. On the other hand, the directions of the movement vector pp′ of the point p and the movement vector qq′ of the point q do not coincide with the direction extending radially from the vanishing point v. That is, the points a and b satisfy the stationary object condition, and the points p and q do not. The stationary object identification unit 130 therefore determines that the points a and b are feature points related to stationary objects and excludes them from the subsequent processing.
The hypothesis generation unit 140 sets a target region ROI (Region of Interest) based on the feature points that remain after excluding the feature points related to stationary objects (S109). The flow in which the hypothesis generation unit 140 sets the target region ROI will be described with reference to FIG. 7, which is a diagram for explaining this flow. As shown in FIG. 7, the hypothesis generation unit 140 recognizes the area below the horizon h, which can be estimated from the vanishing point v, as the ground area, and calculates the ground projection point of each feature point. Based on the calculated ground projection points, the hypothesis generation unit 140 then calculates the distance from a reference position on the host vehicle (for example, the installation position of the camera) to each feature point. The hypothesis generation unit 140 then sets a region of a predetermined extent around the feature point with the shortest distance as the target region ROI.
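For illustration, under a flat-ground assumption the distance to a feature point's ground projection can be estimated from its image row as sketched below; the pinhole ground-plane model and the parameter names are assumptions, since the text only states that distances are computed from the ground projection points.

```python
def ground_distance(y_pixel, horizon_y, focal_px, camera_height_m):
    """Distance to the ground projection of a feature point, assuming a
    pinhole camera above a flat ground plane. y_pixel must lie below the
    horizon row horizon_y (larger y means lower in the image)."""
    dy = y_pixel - horizon_y
    if dy <= 0:
        raise ValueError("point is not below the horizon")
    return focal_px * camera_height_m / dy
```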
However, this is only one example; various other ways for the hypothesis generation unit to set the target region ROI are conceivable, for example:
(1) A rectangle surrounding a vehicle pattern found by image pattern recognition is set as the ROI.
(2) A rectangle of about 2 m square, at the direction and distance of a radar detection target, is set as the ROI.
(3) For the host vehicle's own lane and the adjacent lanes, a vehicle existence hypothesis is set every 1 m in depth and every 0.5 m in the width direction up to a distance of 50 m, and a 2 m square rectangle is set as the ROI for each hypothesis.
(4) The ROI is set to the entire field of view, and the moving object having the maximum support (described later) is detected. The feature points belonging to that support are removed, and the moving object having the next largest support is detected. This is repeated, so that different moving bodies can be detected one after another (a sketch of this variant follows the list).
The method for setting the target region ROI is not limited to these methods.
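For illustration, variant (4) can be sketched as a greedy loop; detect_best is assumed to be a function returning the inliers and parameters of the largest-support hypothesis (for example, the RANSAC-style loop sketched later in this section), and all names are assumptions.

```python
import numpy as np

def detect_all_moving_objects(first_pts, second_pts, detect_best):
    """Variant (4): set the ROI to the whole field of view, repeatedly detect
    the moving object with the largest support, remove its feature points,
    and continue until no further object is found."""
    objects = []
    remaining = np.ones(len(first_pts), dtype=bool)
    while remaining.any():
        result = detect_best(first_pts[remaining], second_pts[remaining])
        if result is None:
            break
        inliers, hypothesis = result
        # Map the inlier mask back to the full index set.
        idx = np.flatnonzero(remaining)[inliers]
        objects.append((idx, hypothesis))
        remaining[idx] = False
    return objects
```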
In this way, by setting the ROI in conjunction with an image recognition function or radar detection processing, efficient detection processing becomes possible while further reducing the calculation time.
The hypothesis generation unit 140 arbitrarily selects a pair of feature points from the feature points included in the target region ROI (S110). The hypothesis generation unit 140 then determines whether the feature point pair selected in S110 satisfies the trapezoid condition (S111).
Here, the trapezoid condition is explained. As a premise, for a moving object common to the two images, if the unevenness of the rear face of the object is sufficiently small relative to the distance to the object, the transformation from the coordinates of each feature point belonging to the moving object in one image to the coordinates of the corresponding feature point in the other image can be expressed by a translation and a scaling. Hereinafter, deriving the feature points of one image from the feature points of the other image by translation and scaling is called a similarity translation. When a pair of feature points belonging to the same moving object is selected in each of the two images, the four selected feature points form a trapezoid. Specifically, as shown in FIG. 8, the points p and q of FIG. 4 and the points p′ and q′ of FIG. 5 form the trapezoid TRP. FIG. 8 is a diagram showing the points p and q of FIG. 4 and the points p′ and q′ of FIG. 5 plotted on the same coordinate axes. The condition that "the corresponding pairs of feature points in the two images form a trapezoid", in other words that the angle formed by the straight line pq and the straight line p′q′ can be regarded as parallel, is called the trapezoid condition. Here, the straight lines pq and p′q′ need not be exactly parallel; the case where they are inclined within a certain range (for example, ±5%) is also included. The trapezoid condition is proved as follows.
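For illustration, the similarity translation model and the resulting parallelism can be checked numerically as below; the values of α, t, p, and q are assumptions chosen for the example.

```python
import numpy as np

# The similarity translation model: a point x on the moving object in the
# first image maps to alpha * x + t in the second image. For two object
# points p, q and their images p2, q2, it follows that
# p2 - q2 = alpha * (p - q): segment pq and segment p2q2 are parallel,
# which is exactly the trapezoid condition.
alpha, t = 1.2, np.array([15.0, -4.0])            # assumed example values
p, q = np.array([100.0, 80.0]), np.array([140.0, 60.0])
p2, q2 = alpha * p + t, alpha * q + t

u, v = p - q, p2 - q2
cross = u[0] * v[1] - u[1] * v[0]                 # 0.0 when parallel
print(v, alpha * u, cross)                        # v equals alpha*u; cross is 0.0
```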
Assume that the feature points belonging to a certain moving object change between the two images by a translation vector t = (tx, ty) and an enlargement ratio α; then the following Equations 1 and 2 hold. The choice of coordinate origin is arbitrary, but if, for example, the center of the target region ROI is taken as the origin, the meaning of the enlargement and the translation can be understood more intuitively.
p′ = α·p + t    (Equation 1)

q′ = α·q + t    (Equation 2)

(Here p and q are the coordinates of two feature points of the moving object in the first target image, and p′ and q′ are the coordinates of the associated feature points in the second target image.)
 
According to Equations 1 and 2 above, p′ − q′ = α·(p − q), so in the quadrilateral (p p′ q′ q) the two sides pq and p′q′ are parallel. That is, the quadrilateral (p p′ q′ q) forms a trapezoid. Based on the four points forming this trapezoid, the enlargement ratio α and the translation vector t can be expressed by the following Equations 3 and 4:

α = |p′ − q′| / |p − q|    (Equation 3)

t = p′ − α·p ( = q′ − α·q)    (Equation 4)
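For illustration, Equations 3 and 4 as code, estimating the hypothesis parameters from one associated feature point pair (the function name is an assumption):

```python
import numpy as np

def similarity_from_pair(p, q, p2, q2):
    """Equations 3 and 4: recover the enlargement ratio alpha and the
    translation vector t from one feature point pair (p, q) in the first
    image and its associated pair (p2, q2) in the second image."""
    p, q, p2, q2 = map(np.asarray, (p, q, p2, q2))
    alpha = np.linalg.norm(p2 - q2) / np.linalg.norm(p - q)  # Equation 3
    t = p2 - alpha * p                                       # Equation 4
    return alpha, t
```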
 
Here, whether the trapezoid condition holds can also be confirmed by checking whether the following simple equation is satisfied within a predetermined tolerance. Whereas the coordinate transformation calculation of Equations 1 and 2 requires 4 multiplications and 8 additions/subtractions, using Equation 5 below requires only 2 multiplications and 5 additions/subtractions.
(p′x − q′x)(py − qy) − (p′y − q′y)(px − qx) = 0    (Equation 5)

(The left-hand side is the cross product of the direction vectors of the segments pq and p′q′; it vanishes when the two segments are parallel, and it takes 2 multiplications and 5 subtractions to evaluate.)
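For illustration, the cheap Equation 5 test, with an assumed absolute tolerance:

```python
def trapezoid_condition(p, q, p2, q2, tol=1e-3):
    """Equation 5: segments pq and p2q2 are parallel when the cross product
    of their direction vectors vanishes (2 multiplications, 5 subtractions).
    tol is an assumed tolerance; in practice it should scale with the
    segment lengths."""
    return abs((p2[0] - q2[0]) * (p[1] - q[1])
               - (p2[1] - q2[1]) * (p[0] - q[0])) <= tol
```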
Returning to FIG. 3: when the feature point pair selected in S110 does not satisfy the trapezoid condition (S111: NO), the hypothesis generation unit 140 reselects a feature point pair (S110). When the feature point pair selected in S110 satisfies the trapezoid condition (S111: YES), the hypothesis generation unit 140 generates a similarity translation hypothesis from the selected feature point pair (S112). The similarity translation hypothesis can also be called a selection condition for selecting the feature points belonging to a moving object. Here, the similarity translation hypothesis is that all feature points constituting the same object have the same enlargement ratio and translation vector. Specifically, the hypothesis generation unit 140 calculates the enlargement ratio α and the translation vector t based on the movement vectors of the two feature points. The hypothesis generation unit 140 then examines the movement vector of each feature point included in the target region ROI and counts the feature points (supports) that satisfy the similarity translation hypothesis generated in S112 (S113). A feature point (support) satisfying the similarity translation hypothesis is, specifically, a feature point whose enlargement ratio and translation vector match the enlargement ratio α and translation vector t generated in S112. The parts corresponding to the feature points satisfying the same selection condition are recognized as parts of one moving body. Note that "match" here does not mean exact equality; a certain amount of variation is tolerated (for example, a mutual angle within 5° for the parallelism, or one value within ±5% of the other for the enlargement ratio). The hypothesis generation unit 140 then determines whether a predetermined number of similarity translation hypotheses have been generated (S114). When the predetermined number has not been generated (S114: NO), the hypothesis generation unit 140 selects a new feature point pair (S110) and repeats S111 to S113 a predetermined number of times. When the predetermined number of similarity translation hypotheses has been generated (S114: YES), the hypothesis selection unit 150 selects, from the selection conditions extracted from the generated hypotheses (i.e. the conditions defined by the enlargement ratio α and the translation vector t), the feature point group of the selection condition with the largest number of supports (S115). The detection unit 160 recognizes the region of the moving object based on the feature point group satisfying the similarity translation hypothesis selected in S115 (S116). The hypothesis generation unit 140 then excludes the feature points included in the moving object region recognized in S116 from the processing targets (S117) and repeats the above processing. This is repeated until no feature points to be processed remain. By this iteration, a plurality of moving bodies different from the host vehicle contained in one pair of target images (the first and second target images described above) can be extracted and estimated one after another. Note that since a moving object can also be recognized based on a single similarity translation hypothesis generated in S112, the processing of S114 and S115 may be omitted; including S114 and S115 can be expected to improve the accuracy of the similarity translation hypothesis.
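For illustration, the S110-S115 loop reads like a RANSAC procedure; the sketch below assumes that reading, uses assumed tolerances, and counts supports by the prediction residual rather than by separately checking α and t.

```python
import numpy as np

def detect_best_moving_object(first_pts, second_pts, n_hypotheses=100,
                              angle_tol_deg=5.0, residual_tol=2.0):
    """Sketch of S110-S115: sample feature point pairs, keep those satisfying
    the trapezoid condition, form a similarity translation hypothesis
    (alpha, t) via Equations 3 and 4, and keep the hypothesis with the
    largest support. Tolerances are assumed values."""
    n = len(first_pts)
    if n < 2:
        return None
    rng = np.random.default_rng()
    best = None
    for _ in range(n_hypotheses):
        i, j = rng.choice(n, size=2, replace=False)
        p, q = first_pts[i], first_pts[j]
        p2, q2 = second_pts[i], second_pts[j]
        u, v = q - p, q2 - p2
        denom = np.linalg.norm(u) * np.linalg.norm(v)
        if denom == 0:
            continue
        # Trapezoid condition (S111): pq and p2q2 roughly parallel.
        cross = u[0] * v[1] - u[1] * v[0]
        if abs(cross) / denom > np.sin(np.radians(angle_tol_deg)):
            continue
        alpha = np.linalg.norm(v) / np.linalg.norm(u)  # Equation 3
        t = p2 - alpha * p                             # Equation 4
        # Support count (S113): points whose observed position in the second
        # image matches the hypothesis prediction alpha * x + t.
        pred = alpha * first_pts + t
        inliers = np.linalg.norm(second_pts - pred, axis=1) < residual_tol
        if best is None or inliers.sum() > best[0].sum():
            best = (inliers, (alpha, t))
    return best
```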
 As described above, according to the present embodiment, each feature point belonging to a moving object can be identified with simple calculations based on a similarity-translation hypothesis computed from at least two of the feature points belonging to that object, and the region of the moving object can thereby be specified. The present embodiment can therefore detect moving objects with a small amount of computation. Moreover, because more hypotheses can be generated and tested within a limited computation time, the probability of selecting the optimal similarity-translation transform increases.
 In the present embodiment, the similarity-translation hypothesis also makes it possible to identify groups of feature points that move in the same way between images. Because the feature points belonging to a single object follow that object's motion, they can be expected to move in nearly the same way from one image to the next. According to the present embodiment, therefore, the region of each moving object can be specified simply by checking the motion of each feature point between images against the hypothesis. This makes it possible to supplement approaches such as pattern recognition by detecting exceptional targets they tend to miss (for example, targets that are included in the positive-sample distribution used to train a pattern classifier but occur less often than similar nearby negative samples, and are therefore hard to recognize) and unknown targets (for example, targets for which no similar positive samples have been prepared in advance).
 Furthermore, in the present embodiment only two feature points need to be selected by random sampling, which raises the probability of generating a correct hypothesis. For example, when eight feature points are drawn from a sample set in which 50% of the points are noise, the probability that none of the selected points is noise is about 1/256. In the present embodiment, by contrast, as few as two points need to be selected, so the probability that neither selected point is noise can be as high as about 1/4. As a result, the number of sampling iterations needed to obtain a correct hypothesis is reduced.
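 These figures follow from the standard RANSAC sampling bound; a worked version (the 99% confidence level is our illustrative choice, not from the text): with inlier ratio w and k points per minimal sample, one sample is all-inlier with probability w^k, so the number of samples N needed to draw at least one all-inlier sample with confidence p satisfies

\[
  N \;\ge\; \frac{\log(1-p)}{\log\bigl(1 - w^{k}\bigr)},
  \qquad w = 0.5 \;\Rightarrow\; w^{8} = \tfrac{1}{256},\quad w^{2} = \tfrac{1}{4}.
\]

For p = 0.99 this gives N ≈ 1177 samples at k = 8, but only N ≈ 16 at k = 2.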
 Although the embodiment above uses a pair of feature points, a third feature point that satisfies the trapezoid condition with the pair may be extracted, and a single similarity-translation hypothesis may be generated from the pair and the third point. Specifically, three trapezoids can be formed across the two images from the three point pairs made up of the original pair and the third point. The enlargement ratio α and the translation vector t are computed for each of the three trapezoids, and one hypothesis is obtained from their mean, median, or the like. This improves the accuracy of the similarity-translation hypothesis, so a correct similarity-translation transform can be expected to be detected with fewer hypothesis-generation iterations. A sketch of this refinement appears below.
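 As a sketch of the three-trapezoid refinement, assuming (as one plausible reading of "mean, median, or the like") that the three (α, t) estimates are combined by the median; the function name is ours:

import numpy as np

def hypothesis_from_triplet(pts1, pts2):
    # pts1, pts2: (3, 2) arrays holding the feature point pair plus the third
    # point, matched across the first and second target images. Each of the
    # three point pairs yields one (alpha, t) estimate; the median of the
    # three is returned as the single refined hypothesis.
    alphas, ts = [], []
    for i, j in [(0, 1), (1, 2), (0, 2)]:
        alpha = np.linalg.norm(pts2[j] - pts2[i]) / np.linalg.norm(pts1[j] - pts1[i])
        alphas.append(alpha)
        ts.append(pts2[i] - alpha * pts1[i])
    return float(np.median(alphas)), np.median(np.stack(ts), axis=0)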
DESCRIPTION OF REFERENCE SIGNS

10 moving object detection device
110 imaging unit
120 feature point extraction unit
130 stationary object identification unit
140 hypothesis generation unit
150 hypothesis selection unit
160 detection unit

Claims (7)

  1. A method of detecting a moving body different from a host vehicle based on target images containing information about the surroundings of the host vehicle, the method comprising:
    a preparation step of capturing the target images over a plurality of frames, storing the image information, and extracting from the stored image information two frame images captured at different times as a first target image and a second target image;
    a feature point extraction step of extracting a plurality of first feature points contained in the first target image, selecting two first feature points from among them, extracting a plurality of second feature points contained in the second target image, and associating the two first feature points with two second feature points of the second target image that indicate the same parts in the target images;
    a detection step of comparing a first line segment connecting the two first feature points with a second line segment connecting the two second feature points, and detecting their parallelism and enlargement ratio; and
    an estimation step of estimating a range of a moving body different from the host vehicle based on the parallelism or the enlargement ratio and on the two first feature points or the two second feature points.
  2. The method of detecting a moving body according to claim 1, wherein, when the parallelism and/or the enlargement ratio of the first line segment and the second line segment match, an image portion containing the first feature points or the second feature points is recognized as part of one moving body different from the host vehicle.
  3. The method of detecting a moving body according to claim 1 or 2, wherein the parallelism is presumed to match as long as the angle formed between the first line segment and the second line segment is within 5°.
  4. The method of detecting a moving body according to any one of claims 1 to 3, wherein, in the feature point extraction step, a target region is set in the first target image, and different sets of two first feature points contained in the target region, together with the two associated second feature points indicating the same parts in the target images, are extracted in sequence; and
    in the detection step, the sets of feature points whose parallelism and enlargement ratio match one another are further stored, and the range of a moving body different from the host vehicle is estimated based on information including the number of matching sets (the number of supports).
  5. The method of detecting a moving body according to claim 4, wherein, when different sets of the feature points have mutually identical enlargement ratios, the image portions containing those different feature points are presumed to belong to the same moving body.
  6. The method of detecting a moving body according to claim 4 or 5, wherein the enlargement ratios of the respective sets of feature points are presumed to match when one is within 5% of the other.
  7. A program, recorded on a non-volatile storage medium and executed by a computer, for detecting a moving body different from a host vehicle based on target images containing information about the surroundings of the host vehicle, the program comprising:
    a preparation step of capturing the target images over a plurality of frames, storing the image information, and extracting from the stored image information two frame images captured at different times as a first target image and a second target image;
    a feature point extraction step of extracting a plurality of first feature points contained in the first target image, selecting two first feature points from among them, extracting a plurality of second feature points contained in the second target image, and associating the two first feature points with two second feature points of the second target image that indicate the same parts in the target images;
    a detection step of comparing a first line segment connecting the two first feature points with a second line segment connecting the two second feature points, and detecting their parallelism and enlargement ratio; and
    an estimation step of estimating a range of a moving body different from the host vehicle based on the parallelism or the enlargement ratio and on the two first feature points or the two second feature points,
    wherein, when the parallelism and/or the enlargement ratio of the first line segment and the second line segment match, an image portion containing the first feature points or the second feature points is recognized as part of one moving body different from the host vehicle,
    the program being recorded on a computer-readable storage medium.

PCT/JP2015/063213 2014-05-07 2015-05-07 Traveling body detection method and program WO2015170706A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2014096262 2014-05-07
JP2014-096262 2014-05-07

Publications (1)

Publication Number Publication Date
WO2015170706A1 true WO2015170706A1 (en) 2015-11-12

Family

ID=54392556

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2015/063213 WO2015170706A1 (en) 2014-05-07 2015-05-07 Traveling body detection method and program

Country Status (1)

Country Link
WO (1) WO2015170706A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113063484A (en) * 2021-03-31 2021-07-02 中煤科工集团重庆研究院有限公司 Vibration identification amplification method

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008044592A1 (en) * 2006-10-06 2008-04-17 Aisin Seiki Kabushiki Kaisha Mobile object recognizing device, mobile object recognizing method, and computer program
JP2012027595A (en) * 2010-07-21 2012-02-09 Aisin Seiki Co Ltd Start notification device, start notification method and program
JP2012146146A (en) * 2011-01-12 2012-08-02 Denso It Laboratory Inc Moving object detection apparatus

Similar Documents

Publication Publication Date Title
JP5484184B2 (en) Image processing apparatus, image processing method, and program
US20120155745A1 (en) Apparatus and method for extracting correspondences between aerial images
WO2015014111A1 (en) Optical flow tracking method and apparatus
KR20090098167A (en) Method and system for detecting lane by using distance sensor
US9117135B2 (en) Corresponding point searching apparatus
CN109671098B (en) Target tracking method and system applicable to multiple tracking
JP5743935B2 (en) Object detection apparatus and object detection method
WO2014002813A1 (en) Image processing device, image processing method, and image processing program
WO2015170706A1 (en) Traveling body detection method and program
CN107993233B (en) Pit area positioning method and device
JP5867167B2 (en) Shadow detection device
JP5953166B2 (en) Image processing apparatus and program
JP2015001804A (en) Hand gesture tracking system
JP2003317106A (en) Travel path recognition device
KR101284252B1 (en) Curvature Field-based Corner Detection
JP2017091202A (en) Object recognition method and object recognition device
KR101781158B1 (en) Apparatus for image matting using multi camera, and method for generating alpha map
US20180268228A1 (en) Obstacle detection device
JP2010244301A (en) Moving direction detection device, moving direction detection method, and program
WO2014192061A1 (en) Image processing device, image processing method, and image processing program
KR101781172B1 (en) Apparatus and method for matching images
JP6688091B2 (en) Vehicle distance deriving device and vehicle distance deriving method
JP2017508285A5 (en)
US9665938B2 (en) Image processing apparatus and specific figure detecting method
JP5878090B2 (en) Traveling line detector

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15789770

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15789770

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP