WO2015025897A1 - Object estimation apparatus and object estimation method - Google Patents
Object estimation apparatus and object estimation method (対象物推定装置および対象物推定方法)
- Publication number
- WO2015025897A1 (PCT/JP2014/071800)
- Authority
- WO
- WIPO (PCT)
Classifications
- G08G1/166—Anti-collision systems for active traffic, e.g. moving vehicles, pedestrians, bikes
- B60W30/06—Automatic manoeuvring for parking
- B60W30/08—Active safety systems predicting or avoiding probable or impending collision or attempting to minimise its consequences
- G06F18/22—Matching criteria, e.g. proximity measures
- G06T7/269—Analysis of motion using gradient-based methods
- G06T7/292—Multi-camera tracking
- G06T7/593—Depth or shape recovery from multiple images from stereo images
- G06T7/70—Determining position or orientation of objects or cameras
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
- G06V20/64—Three-dimensional objects
- G06V40/20—Movements or behaviour, e.g. gesture recognition
- G06T2200/04—Indexing scheme for image data processing or generation, in general, involving 3D image data
- G06T2207/30196—Human being; Person
- G06T2207/30252—Vehicle exterior; Vicinity of vehicle
Definitions
- The present invention relates to an object estimation apparatus and an object estimation method for estimating the position and speed of an object from images obtained by photographing the object.
- Patent Document 1 and Non-Patent Document 1 each disclose a detection device in which an object is detected from captured images, and the distance from the vehicle (camera) to the object and the movement (speed) of the object are measured (estimated).
- Non-Patent Document 1 discloses a method for estimating the three-dimensional position and velocity of an object from images captured by two cameras. Specifically, in the method of Non-Patent Document 1, parallax is obtained for each pixel from the two images captured by the cameras. Further, a transition (optical flow) is calculated for each pixel from consecutive frame images acquired by one of the cameras. Then, using the parallax and optical flow as inputs, the current position and velocity of the object are estimated with a Kalman filter.
- However, parallax and optical flow acquired from video captured by a camera inevitably include errors due to the camera resolution and errors caused by erroneous tracking of objects (tracking errors).
- Consequently, the estimated values calculated for the object also include errors.
- One embodiment therefore provides an object estimation device and an object estimation method that can appropriately correct the estimated value when an abnormal value is detected in the disparity information or the transition information, and thereby accurately estimate the position and speed of the object.
- The object estimation apparatus estimates the position and speed of an object in a plurality of images taken from different positions by an image acquisition unit.
- The object estimation device includes: a transition information acquisition unit that acquires, in a reference image serving as a reference among the plurality of images, temporal transition information of the positions of corresponding pixels from temporally preceding and succeeding frames; a parallax information acquisition unit that acquires the parallax information of each corresponding pixel from the plurality of images with the reference image as a reference; an estimated value acquisition unit that estimates, using a filter, estimated values of the position and velocity of the object in three-dimensional space based on the transition information acquired by the transition information acquisition unit and the parallax information acquired by the parallax information acquisition unit; a determination unit that determines whether the transition information and the parallax information are each abnormal values; and a correction unit that corrects the estimated value acquired by the estimated value acquisition unit based on the determination result of the determination unit.
- The correction unit corrects the estimated value by different methods in the case where the determination unit determines that the transition information acquired by the transition information acquisition unit is an abnormal value and in the case where it determines that the parallax information acquired by the parallax information acquisition unit is an abnormal value.
- FIG. 1 is a block diagram showing the system configuration of an automobile on which the object estimation device according to the embodiment is mounted.
- FIG. 2 is a conceptual diagram showing the object estimation device according to the embodiment estimating an object.
- FIG. 3(a) shows an image in which another vehicle is captured as the object, FIG. 3(b) shows parallax information, and FIG. 3(c) shows transition information.
- FIG. 4 is a diagram explaining the case where an abnormal value arises in the transition information and the estimated value is corrected.
- FIG. 6 is a flowchart showing the control flow when the state of an object is estimated by the object estimation method according to the embodiment.
- The embodiment describes a case where the object estimation device is mounted in an automobile (ordinary passenger car) as the vehicle.
- FIG. 1 is a block diagram showing a system configuration of an automobile on which an object estimation device 10 according to this embodiment is mounted.
- the object estimation device 10 is electrically connected to an image acquisition unit 12 (image acquisition unit) and a vehicle information acquisition unit 14.
- Based on inputs from the image acquisition unit 12 and the vehicle information acquisition unit 14, the object estimation device 10 estimates the state (position and speed) of an object (for example, another vehicle or a pedestrian) photographed by the image acquisition unit 12.
- the object estimation device 10 is electrically connected to a driving support device (not shown), and the estimated value of the object obtained by the object estimation device 10 is output to the driving support device.
- The driving support device uses the input estimated value to, for example, warn the driver of the presence of the object or automatically control the brakes, thereby assisting the driver.
- The image acquisition unit 12 is attached, for example, to the vehicle-compartment side of the windshield (windshield glass, not shown) of the automobile and photographs the area ahead in the traveling direction of the automobile.
- The image acquisition unit 12 of the present embodiment is a stereo camera including a pair of cameras 16 and 18 spaced apart left and right (in the horizontal direction perpendicular to the traveling direction of the automobile).
- the left camera is referred to as the first camera 16 (first photographing means)
- the right camera is referred to as the second camera 18 (second photographing means).
- the first camera 16 and the second camera 18 are synchronized in shooting timing so that frames are shot at the same time.
- the first camera 16 and the second camera 18 capture the front in the traveling direction of the automobile from different positions to obtain a first image (reference image) and a second image, respectively.
- the vehicle information acquisition unit 14 detects the running state of the automobile, and includes a yaw rate sensor 20 that detects the yaw rate of the automobile and a vehicle speed sensor 22 that detects the vehicle speed (traveling speed) of the automobile.
- the yaw rate and vehicle speed acquired by the vehicle information acquisition unit 14 are output to an estimated value acquisition unit 26 described later of the target object estimation device 10.
- the object estimation device 10 has an input value acquisition unit 24 and an estimated value acquisition unit 26 as a basic configuration.
- the input value acquisition unit 24 includes a transition information calculation unit 28 (transition information acquisition unit) and a parallax information calculation unit 30 (parallax information acquisition unit).
- the input value acquisition unit 24 acquires input values (disparity information and transition information described later) to be input to the estimated value acquisition unit 26 based on the first image and the second image acquired by the image acquisition unit 12.
- the transition information calculation unit 28 is electrically connected to the first camera 16 and receives the first image acquired by the first camera 16.
- the transition information calculation unit 28 includes a rewritable storage unit (not shown) such as a RAM, and stores and updates the first image one frame before in the storage unit as needed.
- The transition information calculation unit 28 acquires temporal transition information (optical flow) of the positions of corresponding pixels from the first image of the current frame and the first image of one frame before stored in the storage unit.
- The transition information is defined by a horizontal (x-direction) component u and a vertical (y-direction) component v of the pixel position, with the optical axis center of the first camera 16 as the origin.
- The transition information calculation unit 28 calculates the transition information (u, v) using, for example, the intensity gradient method or the Horn-Schunck method (B. K. P. Horn and B. G. Schunck, Determining Optical Flow, Artificial Intelligence, Vol. 17, No. 1-3, 1981, pp. 185-203).
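- The embodiment names the intensity gradient method and the Horn-Schunck method only as examples. As a non-authoritative sketch of this step, the snippet below uses OpenCV's dense Farneback flow, another gradient-based method, to obtain a per-pixel (u, v) between two consecutive first-camera frames; the file names and parameter values are illustrative assumptions, not taken from the patent.

```python
import cv2

# Hypothetical consecutive grayscale frames from the first camera.
prev_gray = cv2.imread("first_image_t_minus_1.png", cv2.IMREAD_GRAYSCALE)
curr_gray = cv2.imread("first_image_t.png", cv2.IMREAD_GRAYSCALE)

# Dense gradient-based optical flow: flow[y, x] = (u, v) in pixels,
# the temporal transition of each pixel position between the frames.
flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                    0.5,  # pyr_scale
                                    3,    # levels
                                    15,   # winsize
                                    3,    # iterations
                                    5,    # poly_n
                                    1.2,  # poly_sigma
                                    0)    # flags

u = flow[..., 0]  # horizontal component (x direction)
v = flow[..., 1]  # vertical component (y direction)
```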
- the parallax information calculation unit 30 is electrically connected to both the first camera 16 and the second camera 18, and the first image and the second image acquired by both the cameras 16 and 18 are respectively input thereto.
- the parallax information calculation unit 30 calculates the parallax (parallax information d) for each pixel based on the input first image and second image.
- the parallax information d is calculated by, for example, image processing by Semi-Global Matching.
- The parallax information calculation unit 30 is set to calculate the parallax information d with the first image as the reference. That is, the parallax information d is uniquely associated with the depth position of the pixel (the distance from the first camera 16 to the object), with the optical axis center of the first camera 16 as the origin. Note that tracking of each pixel between frames is performed based on the transition information (u, v) (that is, the optical flow) calculated by the transition information calculation unit 28.
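- Semi-Global Matching is available in common libraries; as a sketch only, OpenCV's StereoSGBM below computes a per-pixel disparity d with the first image as the reference, as described above. The matcher parameters and file names are illustrative assumptions.

```python
import cv2

# Hypothetical rectified stereo pair; the first image is the reference.
first = cv2.imread("first_image.png", cv2.IMREAD_GRAYSCALE)    # camera 16
second = cv2.imread("second_image.png", cv2.IMREAD_GRAYSCALE)  # camera 18

# Semi-global matcher; parameter values are illustrative only.
sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128,
                             blockSize=5, P1=8 * 5 * 5, P2=32 * 5 * 5)

# Disparity d for each pixel of the reference image. OpenCV returns
# fixed-point values with 4 fractional bits, hence the division by 16.
d = sgbm.compute(first, second).astype("float32") / 16.0
```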
- transition information (u, v) calculated by the transition information calculation unit 28 and the parallax information d calculated by the parallax information calculation unit 30 are output to the estimated value acquisition unit 26.
- the estimated value acquiring unit 26 includes a calculating unit 32 (estimated value acquiring unit), a determining unit 34 (determining unit), and a correcting unit 36 (correcting unit).
- The calculation unit 32 calculates estimated values of the position and speed of the object by using the transition information (u, v) and the parallax information d as input values and executing a Kalman filter having a predetermined process model and observation model. In the present embodiment, the calculation unit 32 calculates the estimated values with the first image as the reference.
- To suppress the amount of calculation, the calculation unit 32 does not calculate estimated values for all pixels of the first image; for example, it is set to calculate estimated values for pixels in a predetermined range excluding the peripheral portion of the first image.
- FIG. 2 is a conceptual diagram of the object estimation device 10 estimating the state of an object.
- The coordinate system is set as shown in FIG. 2, with the optical axis center of the first camera 16 as the origin, and the calculation unit 32 estimates the position (X, Y, Z) and velocity (Vx, Vy, Vz) of the object.
- In the calculation unit 32, a process model that models the position and speed of the object as constant-velocity linear motion is set, as shown in Equation (1).
- v follows a multivariate normal distribution with mean zero and covariance matrix Q.
- the vehicle speed is a value detected by the vehicle speed sensor 22, and the yaw rate is a value detected by the yaw rate sensor 20.
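- Equation (1) itself is not reproduced in this text. A plausible reconstruction of a constant-velocity process model over the state (X, Y, Z, Vx, Vy, Vz), consistent with the description above but omitting the compensation for the host vehicle's own motion (vehicle speed and yaw rate) that the embodiment applies, is:

```latex
x_{t+1} = F x_t + v, \qquad
x_t = (X, Y, Z, V_X, V_Y, V_Z)^{\top}, \qquad
F = \begin{pmatrix} I_3 & \Delta t\, I_3 \\ 0 & I_3 \end{pmatrix}, \qquad
v \sim \mathcal{N}(0, Q)
```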
- In the calculation unit 32, an observation model that defines the relationship between the disparity information d and transition information (u, v) and the position (X, Y, Z) and speed (Vx, Vy, Vz) of the object is also set.
- H is a 3D-to-2D projective transformation matrix.
- Where the focal length in the horizontal direction of the first image is fx [pix], the focal length in the vertical direction is fy [pix], and the base line length (distance between optical axes) of the first camera 16 and the second camera 18 is B, H is expressed by the following Equation (5).
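- With fx, fy, and B defined as above, the standard pinhole stereo relations that such an observation model typically encodes are, as a reconstruction rather than a quotation of Equations (5) and (6):

```latex
u = f_x \frac{X}{Z}, \qquad
v = f_y \frac{Y}{Z}, \qquad
d = f_x \frac{B}{Z}
```

Each relation divides by the depth Z, which is what makes the observation model non-linear and motivates the Taylor linearization described next.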
- Since Equation 6 is a non-linear function, the Kalman filter cannot be applied to it as it is. Therefore, to linearly approximate Equation (6), it is Taylor-expanded around the predicted value (calculated from the process model) of the position of the object at time t (frame t), and the terms up to the first order are adopted, as shown in Equation (7).
- w has zero mean and follows a multivariate normal distribution with covariance matrix R.
- The calculation unit 32 executes the Kalman filter based on the process model and the observation model. That is, as shown below, the calculation unit 32 repeats, for each frame and with the first image as the reference, prediction based on the process model (Equations 12 to 14) and update based on the observation model (Equations 15 to 20). The calculation unit 32 thereby estimates the state (position and velocity) of the object.
- Equations 10 and 11 denote the estimated value of the Kalman filter at frame t and the covariance matrix of the error of the estimated value, respectively, the error being defined with respect to the true value of the state (position and velocity) of the object. [Prediction]
- The calculation unit 32 is preset with an initial value of the estimated value (Equation 21) and an initial value of the error covariance matrix of the estimated value (Equation 22).
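- Equations 12 to 20 correspond to the standard extended Kalman filter recursion. The sketch below shows one predict/update cycle under that reading; F, Q, R, h, and jacobian_h are hypothetical stand-ins for the embodiment's process model, noise covariances, observation function, and its Jacobian H_t linearized at the predicted state.

```python
import numpy as np

def ekf_step(x_est, P_est, z, F, Q, R, h, jacobian_h):
    """One predict/update cycle for the state x = (X, Y, Z, Vx, Vy, Vz)
    with observation z = (u, v, d)."""
    # Prediction from the process model (Equations 12 to 14).
    x_pred = F @ x_est
    P_pred = F @ P_est @ F.T + Q

    # Update from the observation model (Equations 15 to 20).
    H = jacobian_h(x_pred)                 # linearization at the prediction
    residual = z - h(x_pred)               # innovation (Equation 16)
    S = H @ P_pred @ H.T + R               # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x_pred + K @ residual
    P_new = (np.eye(len(x_est)) - K @ H) @ P_pred
    return x_new, P_new, x_pred, P_pred, residual, S
```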
- the determination unit 34 determines whether or not the transition information (u, v) calculated by the transition information calculation unit 28 and the parallax information d calculated by the parallax information calculation unit 30 are abnormal values.
- Cases in which abnormal values occur in the transition information (u, v) and the parallax information d will now be described with specific examples.
- As shown in FIG. 3(a), assume that a vehicle traveling ahead of the vehicle on which the object estimation device 10 is mounted (hereinafter referred to as the other vehicle 38) is the object and state estimation by the object estimation device 10 is performed.
- FIG. 3(b) shows the parallax information d at frames t-1, t, and t+1 calculated by the parallax information calculation unit 30.
- The encircling line in FIG. 3(b) indicates a pixel (tracking point) that is tracked normally (that is, its transition information is calculated properly).
- Although the encircled pixel corresponds to the same location on the object, its disparity information d changes sharply from frame t-1 to frame t.
- The parallax information d then returns to a proper value at frame t+1. That is, the parallax information d of the encircled pixel calculated by the parallax information calculation unit 30 at frame t is an abnormal value (outlier) deviating from the true value.
- Such an abnormal value is caused by an influence of noise included in the first image and the second image, an error generated when each image is partitioned by finite pixels, or the like.
- The parallax information d returns to a proper value at frame t+1 because the parallax information calculation unit 30 calculates the parallax information d independently for each frame. That is, since the disparity information calculation unit 30 calculates the disparity information d without depending on disparity information calculated in the past, even if an abnormal value occurs once, appropriate disparity information d is calculated again at frame t+1.
- FIG. 3(c) shows the transition information (u, v) at frames t'-1, t', and t'+1 calculated by the transition information calculation unit 28.
- At the location indicated by the encircling line in the t' frame, the tracking point has moved from the other vehicle 38 to the scenery (a side wall) other than the other vehicle 38, or to another vehicle 40. That is, the transition information (u, v) calculated by the transition information calculation unit 28 for the encircled pixels of the t' frame is an abnormal value (outlier).
- Such abnormal values of the transition information (u, v) are mainly caused by pattern matching errors.
- At frame t'+1, the tracking point that has left the other vehicle 38 continues to be tracked as it is. That is, once an abnormal value occurs in the transition information (u, v), the tracking point is not corrected (it does not return to a tracking point on the other vehicle 38). This is because the transition information (u, v) (optical flow) is calculated across temporally successive frames and therefore depends on the transition information (u, v) calculated in the past.
- The determination unit 34 determines whether the transition information (u, v) and the parallax information d are abnormal values based on their differences from the predicted value (Equation 13) calculated by the calculation unit 32.
- Specifically, the determination unit 34 determines whether the transition information (u, v) is an abnormal value based on the difference between the transition information (u, v) calculated by the transition information calculation unit 28 and the predicted transition information corresponding to it, obtained from the predicted value calculated by the calculation unit 32.
- Similarly, the determination unit 34 determines whether the parallax information d is an abnormal value based on the difference between the parallax information d calculated by the parallax information calculation unit 30 and the predicted parallax information corresponding to it, obtained from the predicted value calculated by the calculation unit 32.
- For this purpose, the determination unit 34 uses the residual between the observed value and the predicted value (Equation 16) calculated when the calculation unit 32 updates the Kalman filter.
- When the residual exceeds a predetermined threshold, the determination unit 34 determines that the corresponding observed value (parallax information d or transition information (u, v)) is an abnormal value.
- That is, when Equation 24 or Equation 25 is satisfied, the determination unit 34 determines that the transition information (u, v) is an abnormal value.
- When Equation 26 is satisfied, the determination unit 34 determines that the parallax information d is an abnormal value.
- For the thresholds Th_u, Th_v, and Th_d, fixed values obtained through simulations or experiments are adopted.
- However, the thresholds do not necessarily have to be fixed values.
- For example, the disparity information d of a distant object is strongly influenced by noise and the like in the captured images, and the error of the calculated disparity information d tends to increase.
- Therefore, for objects beyond a predetermined distance, the determination unit 34 may always determine that the parallax information d is an abnormal value.
- Alternatively, in order to carry out the determination in consideration of the variation of the observed values and the predicted values, thresholds that are variables may be used instead of fixed thresholds.
- For example, the thresholds may be adjusted according to the variance (variation) of the observed and predicted values.
- In that case, when the corresponding condition is satisfied, the determination unit 34 determines that the observed value is an abnormal value.
- That is, when Equation 28 or Equation 29 is satisfied, the determination unit 34 determines that the transition information (u, v) is an abnormal value.
- When Equation 30 is satisfied, the determination unit 34 determines that the parallax information d is an abnormal value.
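- As a sketch of the tests above, the function below applies either the fixed thresholds (Equations 24 to 26) or, when the innovation covariance S from the filter update is supplied, a variance-scaled test standing in for Equations 28 to 30. All threshold values are illustrative assumptions.

```python
import numpy as np

# Illustrative stand-ins for Th_u, Th_v, Th_d obtained by simulation.
TH_U, TH_V, TH_D = 2.0, 2.0, 1.5

def classify(residual, S=None, k=3.0):
    """Return (flow_abnormal, disparity_abnormal) for residual = (ru, rv, rd)."""
    ru, rv, rd = residual
    if S is None:
        # Fixed-threshold test (Equations 24 to 26).
        return (abs(ru) > TH_U or abs(rv) > TH_V), abs(rd) > TH_D
    # Variance-scaled test (in the spirit of Equations 28 to 30):
    # flag residuals larger than k standard deviations.
    sig = np.sqrt(np.diag(S))
    return (abs(ru) > k * sig[0] or abs(rv) > k * sig[1]), abs(rd) > k * sig[2]
```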
- the correction unit 36 corrects the estimated value calculated by the calculation unit 32 when the determination unit 34 determines that any of the observed values is an abnormal value. On the other hand, when the observed value is not an abnormal value, the correction unit 36 does not correct the estimated value calculated by the calculation unit 32.
- The correction unit 36 corrects the estimated value by different methods depending on whether the transition information (u, v) or the parallax information d is determined to be the abnormal value.
- Until frame t'-1, tracking (transition information) of the portion shown by the encircling line in the figure (tracking point A) is performed normally. Therefore, the estimated value calculated by the calculation unit 32 is a value estimated by observing tracking point A on the other vehicle 38.
- At frame t', the tracking point changes from A on the other vehicle 38 to B on the background (side wall). That is, the transition information (u, v) calculated by the transition information calculation unit 28 at the t' frame is an abnormal value satisfying Equation 24 or Equation 25 (or Equation 28 or Equation 29).
- Accordingly, the estimated value for tracking point B at frame t', calculated using the estimated value for tracking point A at frame t'-1, is an erroneous value obtained by tracking a different object.
- Therefore, the correction unit 36 cancels (invalidates) the estimated value at the t' frame calculated by the calculation unit 32 and resets the Kalman filter to its initial state.
- Specifically, the correction unit 36 resets the estimated value to the initial value of Equation 21.
- Further, since the transition information (u, v) is an abnormal value, tracking for the pixel is invalid, so the correction unit 36 sets an invalid value as the output of the object estimation device 10. That is, an invalid value is output as the estimated value for the pixel whose transition information (u, v) is determined to be an abnormal value, and the estimation for that pixel ends.
- From the next frame, the calculation unit 32 calculates the estimated value for tracking point B using the above-described initial values, and in subsequent frames estimated values for tracking point B are calculated.
- When it is determined that the transition information (u, v) is not an abnormal value, the correction unit 36 sets the estimated value calculated by the calculation unit 32 as the output of the object estimation device 10 without correcting it.
- The left image in FIG. 5 shows the parallax information d calculated by the parallax information calculation unit 30 and is the same as that described for FIG. 3(b). The right image in FIG. 5 shows the tracking (transition information) in the corresponding frame of the left image. The circular encircling line in the figure indicates the same location on the object (tracking point C); the parallax information d is an abnormal value at frame t. The transition information (u, v) is calculated properly in all frames; that is, tracking of tracking point C is performed accurately.
- Up to frame t-1, since neither observed value is an abnormal value, the correction unit 36 sets the estimated value calculated by the calculation unit 32 as the output.
- At frame t, the disparity information d calculated by the disparity information calculation unit 30 is an abnormal value satisfying Equation 26 (or Equation 30).
- In this case, the predicted value obtained by Equation 13 is considered to represent the state of the object more accurately (to be closer to the true value) than the estimated value (Equation 19) calculated using the parallax information d that is an abnormal value. Therefore, when the parallax information d is determined to be an abnormal value, the correction unit 36 cancels the estimated value calculated by Equation 19 and adopts the predicted value obtained by Equation 13 as the estimated value. The correction unit 36 then sets this value as the output of the object estimation device 10.
- At frame t+1, the correction unit 36 adopts the estimated value calculated by the calculation unit 32 without correcting it. That is, at frame t+1, the estimated value calculated by the calculation unit 32 using the estimated value (the predicted value) obtained at frame t is used as it is, and the correction unit 36 sets it as the output.
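- The two correction paths can be summarized in a small dispatch, sketched below with the hypothetical names from the earlier snippets; None stands for the invalid output value.

```python
def correct(flow_abnormal, disparity_abnormal,
            x_new, P_new, x_pred, P_pred, x_init, P_init):
    """Return (output_for_pixel, state_for_next_frame, its_covariance)."""
    if flow_abnormal:
        # Tracking error: output an invalid value and reset the Kalman
        # filter to its initial state (Equations 21 and 22).
        return None, x_init, P_init
    if disparity_abnormal:
        # Disparity outliers are frame-local: discard the updated estimate,
        # adopt the predicted value instead, and do not reset the filter.
        return x_pred, x_pred, P_pred
    # Both inputs normal: adopt the updated estimate unchanged.
    return x_new, x_new, P_new
```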
- The flowchart of FIG. 6 shows the control flow when the object estimation device 10 estimates the state of an object in an arbitrary frame (frame t).
- the first camera 16 and the second camera 18 capture a landscape in front of the traveling direction of the automobile and acquire a first image and a second image, respectively.
- the first image and the second image acquired by the image acquisition unit 12 are input to the object estimation device 10 (input value acquisition unit 24) (step S10).
- the transition information calculation unit 28 stores the input first image in the storage unit (step S12).
- the first image of the t frame stored in the storage unit is used when the transition information calculating unit 28 calculates the transition information (u, v) in the next frame (t + 1 frame).
- Next, the yaw rate and vehicle speed of the automobile acquired by the vehicle information acquisition unit 14 are input to the object estimation device 10 (estimated value acquisition unit 26) (step S14). That is, the yaw rate detected by the yaw rate sensor 20 and the vehicle speed detected by the vehicle speed sensor 22 are input to the object estimation device 10.
- Next, the transition information calculation unit 28 reads the first image of one frame before (frame t-1) from the storage unit (step S16). Then, based on the first image of the current frame (frame t) input from the image acquisition unit 12 and the first image of one frame before, the transition information calculation unit 28 calculates the transition information (u, v) (step S18).
- the parallax information calculation unit 30 calculates the parallax information d based on the first image and the second image input from the image acquisition unit 12 (step S20). At this time, the parallax information calculation unit 30 calculates the parallax information d for each pixel of the first image with reference to the first image. The parallax information d and the transition information (u, v) acquired by the input value acquisition unit 24 are output to the estimated value acquisition unit 26.
- The estimated value acquisition unit 26 acquires an estimated value of the state of the object for each pixel in the predetermined range of the first image (steps S22 to S38). That is, based on the disparity information d and the transition information (u, v) input from the input value acquisition unit 24, the calculation unit 32 executes the Kalman filter with prediction (Equations 12 to 14) and update (Equations 15 to 20) (step S24).
- Next, the determination unit 34 determines whether or not the transition information (u, v) calculated by the transition information calculation unit 28 is an abnormal value (step S26). That is, the determination unit 34 determines whether each component of the transition information (u, v) satisfies Equation 24 or Equation 25 (or Equation 28 or Equation 29).
- When each component of the transition information (u, v) satisfies neither Equation 24 nor Equation 25 (nor Equation 28 nor Equation 29), the determination unit 34 determines that the transition information (u, v) calculated by the transition information calculation unit 28 is not an abnormal value (No in step S26).
- Next, the determination unit 34 determines whether or not the parallax information d calculated by the parallax information calculation unit 30 in step S20 is an abnormal value (step S28). That is, the determination unit 34 determines whether or not the parallax information d satisfies Equation 26 (or Equation 30).
- When the parallax information d does not satisfy Equation 26 (or Equation 30), the determination unit 34 determines that the parallax information d calculated by the parallax information calculation unit 30 is not an abnormal value (No in step S28).
- In that case, the correction unit 36 adopts the estimated value calculated by the calculation unit 32 in step S24 without correcting it. That is, the correction unit 36 sets the estimated value calculated in step S24, as it is, as the estimated value output for the pixel (step S30).
- On the other hand, when each component of the transition information (u, v) satisfies Equation 24 or Equation 25 (or Equation 28 or Equation 29), the determination unit 34 determines that the transition information (u, v) is an abnormal value (Yes in step S26).
- When it is determined that the transition information (u, v) is an abnormal value, the correction unit 36 resets the Kalman filter to the initial state without adopting the estimated value calculated in step S24 (step S32). That is, the correction unit 36 sets the preset initial value (Equation 21) in the Kalman filter. Then, the correction unit 36 sets an invalid value as the estimated value output for the pixel (step S34).
- In other words, when the transition information (u, v) is an abnormal value, the correction unit 36 cancels the estimated value calculated based on that transition information (u, v), and the estimated value for the pixel is invalidated. This prevents an estimation result for a pixel that has not been properly tracked from being output as an estimated value containing a large error.
- When the parallax information d satisfies Equation 26 (or Equation 30), the determination unit 34 determines that the parallax information d is an abnormal value (Yes in step S28). In this case, the correction unit 36 cancels the estimated value calculated by the calculation unit 32 in step S24.
- The correction unit 36 cancels the estimated value because the estimated value calculated by the calculation unit 32 includes an error. The correction unit 36 then sets the predicted value, which is considered more accurate than the estimated value calculated by the calculation unit 32, as the estimated value for the pixel. Thus, even when the parallax information d includes an error, a more appropriate value is adopted as the estimated value of the object, and the reliability of the object estimation device 10 can be improved.
- Note that in this case the correction unit 36 does not reset the Kalman filter to the initial state as it does when the transition information (u, v) is an abnormal value, and the subsequent estimation for the pixel is continued.
- The correction unit 36 sets the predicted value as the estimated value and sets it as the output value for the pixel (step S36).
- the estimated value acquisition unit 26 executes steps S26 to S36 for the next pixel.
- When estimated values have been acquired for all target pixels, the estimated value acquisition unit 26 outputs the estimated values set as outputs to the driving support device (step S40). The processing for the current frame then ends.
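- Combining the sketches above, one frame of the flowchart (steps S22 to S40) can be expressed as the loop below; the pixel objects and their attributes are hypothetical stand-ins for the tracked points and their filter states.

```python
import numpy as np

def process_frame(pixels, F, Q, R, h, jacobian_h, x_init, P_init):
    """Run the filter, the abnormal-value checks, and the correction for
    every tracked pixel of the frame, and collect the outputs (step S40)."""
    outputs = {}
    for p in pixels:                                          # steps S22 to S38
        z = np.array([p.u, p.v, p.d])                         # input values
        (x_new, P_new, x_pred, P_pred,
         residual, S) = ekf_step(p.x_est, p.P_est, z,
                                 F, Q, R, h, jacobian_h)      # step S24
        flow_bad, disp_bad = classify(residual, S)            # steps S26, S28
        out, p.x_est, p.P_est = correct(flow_bad, disp_bad,   # steps S30 to S36
                                        x_new, P_new, x_pred, P_pred,
                                        x_init, P_init)
        outputs[p.id] = out
    return outputs
```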
- In the experiment, the process model of the Kalman filter shown in Equation 1 was expressed as follows.
- For the covariance matrix R of the observation model shown in Equation 9, a suitable value obtained in advance through simulation was adopted.
- The following values were set as the initial values of the Kalman filter.
- The initial position was calculated from the initial input values (u0, v0, d0), and for the initial speed a suitable value (0.00) obtained in advance through simulation was used.
- Equations 28 to 30 were used for the input value determination by the determination unit 34.
- The state of the object was estimated by the object estimation device 10 from frame 0 to frame 20.
- the estimation result of the distance (Z) to the other vehicle 44 is shown in the graph of FIG.
- a result estimated without using a Kalman filter (Comparative Example 1) and a result estimated under the condition that correction by the correction unit 36 is not performed even when the parallax information d is an abnormal value (Comparative Example 2) are also shown.
- the true value in the graph indicates the actual distance to the other vehicle 44.
- In Comparative Example 2, in which the estimated value is not corrected, the same locus as the experimental example is drawn up to frame 8. However, at frame 9, where the parallax information d is an abnormal value, the estimated value calculated from that parallax information d is used as it is, and the estimation result deviates from the true value. Furthermore, from frame 10 onward, the estimation result remains deviated from the true value owing to the influence of the erroneous estimated value at frame 9.
- In the experimental example, the estimated value gradually approaches the true value from the start of estimation, and the estimation result is close to the true value even at frame 9, where the parallax information d is an abnormal value. This is because, at frame 9, the predicted value is used as the estimated value instead of the estimated value calculated by the calculation unit 32. From frame 10 onward as well, the estimation result remains close to the true value. Thus, the experimental example estimates the distance of the object most accurately.
- the correction unit 36 corrects the estimated value calculated by the calculation unit 32 based on the determination result of the determination unit 34. Therefore, the accuracy of the estimated value is improved, and the state of the object can be accurately estimated.
- When the transition information (u, v) is determined to be an abnormal value, the correction unit 36 resets the Kalman filter to the initial state (sets the estimated value to the initial value) and sets the estimated value calculated by the calculation unit 32 to an invalid value. This prevents an estimated value calculated using transition information (u, v) that is an abnormal value from being output as it is. That is, the reliability of the object estimation device can be increased by invalidating the estimated value obtained from a pixel in which a tracking error has occurred.
- Moreover, since the Kalman filter is reset to the initial state, the tracking point can be tracked anew after it has changed due to a tracking error. That is, the estimation can be continued with a new tracking point different from the tracking point where the abnormal value occurred.
- On the other hand, the disparity information d is calculated independently for each frame, and abnormal values of the disparity information d also occur independently for each frame.
- Therefore, when the parallax information d is determined to be an abnormal value, the correction unit 36 sets the predicted value, which is considered more accurate than the estimated value calculated by the calculation unit 32, as the estimated value. The error of the estimated value of the object can thereby be reduced when the parallax information d is an abnormal value.
- In this case, the correction unit 36 does not reset the Kalman filter to the initial state, unlike when an abnormal value occurs in the transition information (u, v). Therefore, in the next frame, the estimation for the pixel can be continued using the estimated value set to the predicted value.
- The determination unit 34 determines whether the transition information (u, v) is an abnormal value based on the difference between the transition information (u, v) calculated by the transition information calculation unit 28 and the predicted transition information calculated from the predicted value (see Equations 24 and 25, or Equations 28 and 29). The determination unit 34 can therefore appropriately determine whether the transition information (u, v) calculated by the transition information calculation unit 28 is an abnormal value.
- Similarly, the determination unit 34 determines whether the parallax information d is an abnormal value based on the difference between the parallax information d calculated by the parallax information calculation unit 30 and the predicted parallax information calculated from the predicted value (see Equation 26 or Equation 30). The determination unit 34 can therefore appropriately determine whether the parallax information d calculated by the parallax information calculation unit 30 is an abnormal value.
- In the embodiment, a Kalman filter that estimates the state of the object by repeating prediction (Equations 12 to 14) and update (Equations 15 to 20) is used as the filter.
- The Kalman filter is not limited to that of the embodiment; for example, an unscented Kalman filter or a particle filter may be employed.
- Nor is the filter limited to a Kalman filter as in the embodiment; other filters, such as an H-infinity filter, can also be employed.
- the image acquisition unit including the first camera and the second camera is exemplified as the image acquisition unit.
- the image acquisition means only needs to acquire a plurality of images from different positions, and may include three or more cameras.
- In the embodiment, the parallax information is calculated from the first image and the second image acquired by the first camera and the second camera, but the parallax information may instead be calculated from three or more images captured by three or more cameras.
- the state of the object is estimated using the image acquired by the first camera as the reference image, but the estimated value may be calculated using the image acquired by the second camera as the reference image.
- In that case, the second camera serves as the first photographing means, the first camera serves as the second photographing means, the image photographed by the second camera is the first image, and the image photographed by the first camera is the second image.
- In the embodiment, the transition information is calculated from consecutive frames of the first image, but the transition information need not always be calculated from consecutive frames as long as the frames precede and follow each other in time.
- For example, the transition information may be calculated from frames separated by one frame (that is, frame t and frame t+2).
- In the embodiment, the first camera and the second camera photograph the area ahead in the traveling direction of the automobile and an object in front of the automobile is estimated.
- However, the first camera and the second camera may instead capture the sides (left-right direction) or the rear of the automobile. This makes it possible to estimate the state of objects other than those in front of the automobile (for example, vehicles behind it).
- In the embodiment, the object estimation device is mounted on an ordinary passenger car, but it can be mounted on any vehicle, such as a large vehicle (a bus or a truck) or a motorcycle. The object estimation device may also be used in an experimental facility such as a simulation device.
- In the embodiment, the translational motion is obtained from the vehicle speed sensor, but the translational motion of the automobile may instead be calculated from the first image acquired by the first camera or the second image acquired by the second camera.
- Likewise, the rotational motion of the automobile may be calculated from the first image or the second image without using the yaw rate sensor.
- the Kalman filter is executed for pixels in a predetermined range of the first image and the estimated value is calculated, but the estimated values may be calculated for all the pixels of the first image.
- As described above, the object estimation apparatus estimates the position and speed of an object in a plurality of images taken from different positions by the image acquisition means.
- The object estimation device includes: transition information acquisition means for acquiring, in a reference image serving as a reference among the plurality of images, temporal transition information of the positions of corresponding pixels from temporally preceding and succeeding frames; parallax information acquisition means for acquiring the parallax information of each corresponding pixel from the plurality of images with the reference image as a reference; estimated value acquisition means for estimating, using a filter, estimated values of the position and velocity of the object in three-dimensional space based on the transition information and the parallax information; determination means for determining whether the transition information and the parallax information are each abnormal values; and correction means for correcting the estimated value acquired by the estimated value acquisition means based on the determination result of the determination means.
- The correction means corrects the estimated value acquired by the estimated value acquisition means by different methods in the case where the determination means determines that the transition information is an abnormal value and in the case where it determines that the parallax information is an abnormal value.
- the determination means determines whether the transition information and the parallax information are abnormal values.
- the correcting unit corrects the estimated value acquired by the estimated value acquiring unit based on the determination result of the determining unit. Therefore, when the transition information or the parallax information is an abnormal value, the estimated value including the error acquired by the estimated value acquisition unit is corrected, and the position and speed of the object can be estimated with high accuracy.
- the correcting means changes the method of correcting the estimated value when the determining means determines that the transition information is an abnormal value and when the parallax information is determined to be an abnormal value. That is, the state of the object can be estimated more accurately by appropriately correcting the estimated value according to the type of input value (transition information, parallax information) in which an abnormal value has occurred.
- The correction means invalidates the estimated value acquired by the estimated value acquisition means and resets the filter to the initial state when the determination means determines that the transition information acquired by the transition information acquisition means is an abnormal value.
- The transition information acquisition means acquires the transition information from temporally preceding and succeeding frames of the reference image. That is, the transition information is acquired depending on transition information acquired in the past. Therefore, once the transition information becomes an abnormal value (tracking error), the transition information acquired in subsequent frames includes a large error due to the influence of the abnormal transition information.
- the correcting unit sets the estimated value acquired by the estimated value acquiring unit as an invalid value and resets the filter to the initial state. That is, for pixels for which transition information is determined to be an abnormal value, estimation is stopped in the frame. Accordingly, it is possible to prevent the pixel in which the tracking error has occurred from being estimated as the same object, and to improve the reliability of the object estimation device.
- Moreover, since the filter is reset to the initial state, tracking of the pixel in which the abnormal value occurred is newly started, and an estimated value for that pixel can again be obtained.
- The estimated value acquisition means acquires predicted values of the position and speed of the object from the estimated values acquired in the past, assuming that the object follows a preset process model.
- When the determination means determines that the parallax information is an abnormal value, the correction means corrects the estimated value acquired by the estimated value acquisition means to the predicted value acquired by the estimated value acquisition means.
- the disparity information acquisition unit acquires disparity information from a plurality of images in the same frame. That is, since the disparity information is calculated independently in time, the past disparity information does not affect the disparity information calculated thereafter. Accordingly, the abnormal value of the parallax information is generated independently for each frame.
- the correcting unit corrects the estimated value acquired by the estimated value acquiring unit to a predicted value. That is, by changing an estimated value including an error acquired using disparity information that is an abnormal value to a predicted value, the error of the estimated value in the frame can be reduced.
- Further, the correction means continues the estimation for the pixel in subsequent frames without resetting the filter to the initial state. That is, when an abnormal value occurs in the disparity information, the estimated value in that frame can be set to the predicted value, and the estimation for the pixel can be continued in subsequent frames.
Description
FIG. 1 is a block diagram showing the system configuration of an automobile on which the object estimation device 10 according to this embodiment is mounted. The object estimation device 10 is electrically connected to an image acquisition unit 12 (image acquisition means) and a vehicle information acquisition unit 14. Based on inputs from the image acquisition unit 12 and the vehicle information acquisition unit 14, the object estimation device 10 estimates the state (position and speed) of an object (for example, another vehicle or a pedestrian) photographed by the image acquisition unit 12.
The image acquisition unit 12 is attached, for example, to the vehicle-compartment side of the windshield (windshield glass, not shown) of the automobile and photographs the area ahead in the traveling direction of the automobile. The image acquisition unit 12 of this embodiment is a stereo camera including a pair of cameras 16 and 18 spaced apart left and right (in the horizontal direction perpendicular to the traveling direction of the automobile). In the following description, the left camera is referred to as the first camera 16 (first photographing means) and the right camera as the second camera 18 (second photographing means).
The vehicle information acquisition unit 14 detects the running state of the automobile and includes a yaw rate sensor 20 that detects the yaw rate of the automobile and a vehicle speed sensor 22 that detects the vehicle speed (traveling speed) of the automobile. The yaw rate and vehicle speed acquired by the vehicle information acquisition unit 14 are output to the estimated value acquisition unit 26 (described later) of the object estimation device 10.
As shown in FIG. 1, the object estimation device 10 has an input value acquisition unit 24 and an estimated value acquisition unit 26 as its basic configuration.
The input value acquisition unit 24 includes a transition information calculation unit 28 (transition information acquisition means) and a parallax information calculation unit 30 (parallax information acquisition means). Based on the first image and the second image acquired by the image acquisition unit 12, the input value acquisition unit 24 acquires the input values (parallax information and transition information, described later) to be input to the estimated value acquisition unit 26.
First, the method by which the correction unit 36 corrects the estimated value when the transition information (u, v) is determined to be an abnormal value will be described with reference to FIG. 4. The image shown in FIG. 4 shows the transition information (u, v) calculated by the transition information calculation unit 28 and is the same as that described for FIG. 3(c).
Next, the method by which the correction unit 36 corrects the estimated value when the parallax information d is determined to be an abnormal value will be described with reference to FIG. 5. The left image in FIG. 5 shows the parallax information d calculated by the parallax information calculation unit 30 and is the same as that described for FIG. 3(b). The right image in FIG. 5 shows the tracking (transition information) in the corresponding frame of the left image. The circular encircling line in the figure indicates the same location on the object (tracking point C); at frame t, the parallax information d is an abnormal value. The transition information (u, v) is assumed to be calculated properly in all frames; that is, tracking of tracking point C is performed accurately.
Next, to confirm the effect of the object estimation device 10 described above, a verification experiment (experimental example) was conducted for the case where an abnormal value occurs in the parallax information d. As shown in FIG. 7, an automobile 42 equipped with the object estimation device 10 travels straight along a road at a vehicle speed of 25 km/h. A vehicle traveling ahead of the automobile 42 in its traveling direction (hereinafter referred to as the other vehicle 44) was taken as the object, and the position and speed of the other vehicle 44 were estimated. The coordinate system is as shown in FIG. 7, with the center of the lens of the first camera 16 as the origin.
The object estimation device according to the embodiment described above can be modified as follows.
Claims (10)
- 1. An object estimation device that estimates the position and speed of an object in a plurality of images based on the plurality of images taken from different positions by an image acquisition means (12), comprising: transition information acquisition means (28) for acquiring, in a reference image serving as a reference among the plurality of images, temporal transition information of the positions of corresponding pixels from temporally preceding and succeeding frames; parallax information acquisition means (30) for acquiring parallax information of each corresponding pixel from the plurality of images with the reference image as a reference; estimated value acquisition means (32) for estimating, using a filter, estimated values of the position and velocity of the object in three-dimensional space based on the transition information acquired by the transition information acquisition means and the parallax information acquired by the parallax information acquisition means; determination means (34) for determining whether the transition information acquired by the transition information acquisition means and the parallax information acquired by the parallax information acquisition means are each abnormal values; and correction means (36) for correcting the estimated value acquired by the estimated value acquisition means based on the determination result of the determination means, wherein the correction means corrects the estimated value acquired by the estimated value acquisition means by different methods in the case where the determination means determines that the transition information acquired by the transition information acquisition means is an abnormal value and in the case where the determination means determines that the parallax information acquired by the parallax information acquisition means is an abnormal value.
- 2. The object estimation device according to claim 1, wherein, when the determination means determines that the transition information acquired by the transition information acquisition means is an abnormal value, the correction means invalidates the estimated value acquired by the estimated value acquisition means and resets the filter to the initial state.
- 3. The object estimation device according to claim 1 or 2, wherein the estimated value acquisition means acquires predicted values of the position and speed of the object from the estimated values acquired in the past, assuming that the object follows a preset process model, and wherein, when the determination means determines that the parallax information acquired by the parallax information acquisition means is an abnormal value, the correction means corrects the estimated value acquired by the estimated value acquisition means to the predicted value acquired by the estimated value acquisition means.
- 4. The object estimation device according to claim 3, wherein the determination means determines whether the transition information is an abnormal value based on the difference between the transition information acquired by the transition information acquisition means and predicted transition information that is calculated from the predicted value acquired by the estimated value acquisition means and corresponds to the transition information.
- 5. The object estimation device according to claim 3 or 4, wherein the determination means determines whether the parallax information is an abnormal value based on the difference between the parallax information acquired by the parallax information acquisition means and predicted parallax information that is calculated from the predicted value acquired by the estimated value acquisition means and corresponds to the parallax information.
- 6. An object estimation method for estimating the position and speed of an object in a plurality of images based on the plurality of images taken from different positions by an image acquisition means (12), comprising: a step (S18) in which transition information acquisition means (28) acquires, in a reference image serving as a reference among the plurality of images, temporal transition information of the positions of corresponding pixels from temporally preceding and succeeding frames; a step (S20) in which parallax information acquisition means (30) acquires parallax information of each corresponding pixel from the plurality of images with the reference image as a reference; a step (S24) in which estimated value acquisition means (32) estimates, using a filter, estimated values of the position and velocity of the object in three-dimensional space based on the transition information acquired by the transition information acquisition means and the parallax information acquired by the parallax information acquisition means; steps (S26, S28) in which determination means (34) determines whether the transition information acquired by the transition information acquisition means and the parallax information acquired by the parallax information acquisition means are each abnormal values; and steps (S32, S34, S36) in which correction means (36) corrects the estimated value acquired by the estimated value acquisition means based on the determination result of the determination means, wherein the correction means corrects the estimated value acquired by the estimated value acquisition means by different methods in the case where the determination means determines that the transition information acquired by the transition information acquisition means is an abnormal value and in the case where the determination means determines that the parallax information acquired by the parallax information acquisition means is an abnormal value.
- 7. The object estimation method according to claim 6, wherein, when the determination means determines that the transition information acquired by the transition information acquisition means is an abnormal value, the correction means invalidates the estimated value acquired by the estimated value acquisition means (S34) and resets the filter to the initial state (S32).
- 8. The object estimation method according to claim 6 or 7, wherein the estimated value acquisition means acquires predicted values of the position and speed of the object from the estimated values acquired in the past, assuming that the object follows a preset process model, and wherein, when the determination means determines that the parallax information acquired by the parallax information acquisition means is an abnormal value, the correction means corrects the estimated value acquired by the estimated value acquisition means to the predicted value acquired by the estimated value acquisition means (S36).
- 9. The object estimation method according to claim 8, wherein the determination means determines whether the transition information is an abnormal value based on the difference between the transition information acquired by the transition information acquisition means and predicted transition information that is calculated from the predicted value acquired by the estimated value acquisition means and corresponds to the transition information.
- 10. The object estimation method according to claim 8 or 9, wherein the determination means determines whether the parallax information is an abnormal value based on the difference between the parallax information acquired by the parallax information acquisition means and predicted parallax information that is calculated from the predicted value acquired by the estimated value acquisition means and corresponds to the parallax information.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/912,885 US9767367B2 (en) | 2013-08-21 | 2014-08-20 | Object estimation apparatus and object estimation method |
DE112014003818.0T DE112014003818T5 (de) | 2013-08-21 | 2014-08-20 | Objektschätzvorrichtung und Objektschätzverfahren |
CN201480046274.2A CN105474265B (zh) | 2013-08-21 | 2014-08-20 | 对象物推断装置以及对象物推断方法 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2013-171774 | 2013-08-21 | ||
JP2013171774A JP6110256B2 (ja) | 2013-08-21 | 2013-08-21 | 対象物推定装置および対象物推定方法 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2015025897A1 true WO2015025897A1 (ja) | 2015-02-26 |
Family
ID=52483668
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2014/071800 WO2015025897A1 (ja) | 2013-08-21 | 2014-08-20 | 対象物推定装置および対象物推定方法 |
Country Status (5)
Country | Link |
---|---|
US (1) | US9767367B2 (ja) |
JP (1) | JP6110256B2 (ja) |
CN (1) | CN105474265B (ja) |
DE (1) | DE112014003818T5 (ja) |
WO (1) | WO2015025897A1 (ja) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9767367B2 (en) | 2013-08-21 | 2017-09-19 | Denso Corporation | Object estimation apparatus and object estimation method |
US11407411B2 (en) * | 2016-09-29 | 2022-08-09 | Denso Corporation | Other lane monitoring device |
Families Citing this family (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101832189B1 (ko) * | 2015-07-29 | 2018-02-26 | 야마하하쓰도키 가부시키가이샤 | Abnormal-image detection device, image processing system including the abnormal-image detection device, and vehicle equipped with the image processing system |
JP6815856B2 (ja) * | 2016-12-14 | 2021-01-20 | 日立オートモティブシステムズ株式会社 | Travel trajectory prediction device for a preceding vehicle and vehicle equipped with the same |
US10146225B2 (en) * | 2017-03-02 | 2018-12-04 | GM Global Technology Operations LLC | Systems and methods for vehicle dimension prediction |
EP3407273A1 (de) * | 2017-05-22 | 2018-11-28 | Siemens Aktiengesellschaft | Method and arrangement for determining an anomalous state of a system |
JP6849569B2 (ja) * | 2017-09-29 | 2021-03-24 | トヨタ自動車株式会社 | Road surface detection device |
JP6936098B2 (ja) * | 2017-09-29 | 2021-09-15 | トヨタ自動車株式会社 | Object estimation device |
JP7142468B2 (ja) * | 2018-05-30 | 2022-09-27 | スズキ株式会社 | Moving object tracking device |
JP7223587B2 (ja) * | 2019-02-04 | 2023-02-16 | 日産自動車株式会社 | Object motion estimation method and object motion estimation device |
WO2020170462A1 (ja) * | 2019-02-22 | 2020-08-27 | 公立大学法人会津大学 | Moving-image distance calculation device and computer-readable recording medium storing a moving-image distance calculation program |
JP7196789B2 (ja) * | 2019-07-02 | 2022-12-27 | 株式会社デンソー | In-vehicle sensor device |
DE102020201505A1 | 2020-02-07 | 2021-08-12 | Robert Bosch Gesellschaft mit beschränkter Haftung | Method and device for camera-based determination of the speed of a means of locomotion |
CN112435475B (zh) * | 2020-11-23 | 2022-04-29 | 北京软通智慧科技有限公司 | Traffic state detection method, apparatus, device, and storage medium |
Family Cites Families (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5109425A (en) * | 1988-09-30 | 1992-04-28 | The United States Of America As Represented By The United States National Aeronautics And Space Administration | Method and apparatus for predicting the direction of movement in machine vision |
ATE203844T1 (de) * | 1992-03-20 | 2001-08-15 | Commw Scient Ind Res Org | Object monitoring system |
JPH0894320A (ja) | 1994-09-28 | 1996-04-12 | Nec Corp | Moving object measuring device |
US7321386B2 (en) * | 2002-08-01 | 2008-01-22 | Siemens Corporate Research, Inc. | Robust stereo-driven video-based surveillance |
US7450735B1 (en) * | 2003-10-16 | 2008-11-11 | University Of Central Florida Research Foundation, Inc. | Tracking across multiple cameras with disjoint views |
JP4151631B2 (ja) | 2004-09-08 | 2008-09-17 | 日産自動車株式会社 | Object detection device |
DE102005008131A1 (de) | 2005-01-31 | 2006-08-03 | Daimlerchrysler Ag | Object detection at pixel level in digital image sequences |
JP5063023B2 (ja) * | 2006-03-31 | 2012-10-31 | キヤノン株式会社 | Position and orientation correction device and position and orientation correction method |
KR100927096B1 (ko) * | 2008-02-27 | 2009-11-13 | 아주대학교산학협력단 | Method for measuring object position using visual images on reference coordinates |
JP4985516B2 (ja) | 2008-03-27 | 2012-07-25 | ソニー株式会社 | Information processing device, information processing method, and computer program |
WO2010097943A1 (ja) * | 2009-02-27 | 2010-09-02 | トヨタ自動車株式会社 | Vehicle relative position estimation device and vehicle relative position estimation method |
JP4733756B2 (ja) * | 2009-04-28 | 2011-07-27 | 本田技研工業株式会社 | Vehicle periphery monitoring device |
JP5528151B2 (ja) * | 2010-02-19 | 2014-06-25 | パナソニック株式会社 | Object tracking device, object tracking method, and object tracking program |
JP5389002B2 (ja) * | 2010-12-07 | 2014-01-15 | 日立オートモティブシステムズ株式会社 | Driving environment recognition device |
US20130197736A1 (en) * | 2012-01-30 | 2013-08-01 | Google Inc. | Vehicle control based on perception uncertainty |
GB2506338A (en) * | 2012-07-30 | 2014-04-02 | Sony Comp Entertainment Europe | A method of localisation and mapping |
US9641763B2 (en) * | 2012-08-29 | 2017-05-02 | Conduent Business Services, Llc | System and method for object tracking and timing across multiple camera views |
JP6014440B2 (ja) * | 2012-09-26 | 2016-10-25 | 日立オートモティブシステムズ株式会社 | Moving object recognition device |
JP6110256B2 (ja) | 2013-08-21 | 2017-04-05 | 株式会社日本自動車部品総合研究所 | Object estimation device and object estimation method |
JP6225889B2 (ja) * | 2014-11-19 | 2017-11-08 | 株式会社豊田中央研究所 | Vehicle position estimation device and program |
US9710712B2 (en) * | 2015-01-16 | 2017-07-18 | Avigilon Fortress Corporation | System and method for detecting, tracking, and classifying objects |
2013
- 2013-08-21 JP JP2013171774A patent/JP6110256B2/ja active Active

2014
- 2014-08-20 DE DE112014003818.0T patent/DE112014003818T5/de not_active Ceased
- 2014-08-20 WO PCT/JP2014/071800 patent/WO2015025897A1/ja active Application Filing
- 2014-08-20 US US14/912,885 patent/US9767367B2/en active Active
- 2014-08-20 CN CN201480046274.2A patent/CN105474265B/zh active Active
Non-Patent Citations (3)
Title |
---|
CLEMENS RABE ET AL.: "Fast detection of moving objects in complex scenarios", 2007 IEEE INTELLIGENT VEHICLES SYMPOSIUM, 13 June 2007 (2007-06-13), pages 398 - 403, XP031126977 * |
SHIGEAKI HARADA ET AL.: "A method of detecting network anomalies and determining their termination", IEICE TECHNICAL REPORT, IN2006-114 TO 137, INFORMATION NETWORKS, vol. 106, no. 420, 7 December 2006 (2006-12-07), pages 115 - 120 *
TOBI VAUDREY ET AL.: "Integrating Disparity Images by Incorporating Disparity Rate", 2ND WORKSHOP ROBOT VISION, 2008 * |
Also Published As
Publication number | Publication date |
---|---|
CN105474265B (zh) | 2017-09-29 |
US9767367B2 (en) | 2017-09-19 |
US20160203376A1 (en) | 2016-07-14 |
JP6110256B2 (ja) | 2017-04-05 |
CN105474265A (zh) | 2016-04-06 |
DE112014003818T5 (de) | 2016-05-04 |
JP2015041238A (ja) | 2015-03-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6110256B2 (ja) | Object estimation device and object estimation method | |
US9729858B2 (en) | Stereo auto-calibration from structure-from-motion | |
KR101961001B1 (ko) | Single-camera distance estimation | |
US20180308364A1 (en) | Obstacle detection device | |
JP6743171B2 (ja) | Method for detecting an object near the road of a motor vehicle, computing device, driver assistance system, and motor vehicle | |
JP6565188B2 (ja) | Disparity value deriving device, device control system, movable body, robot, disparity value deriving method, and program | |
KR102507248B1 (ko) | Ego-motion estimation system and method | |
JP7212486B2 (ja) | Position estimation device | |
JP2007263669A (ja) | Three-dimensional coordinate acquisition device | |
JP2016148512A (ja) | Monocular motion-stereo distance estimation method and monocular motion-stereo distance estimation device | |
JP6455164B2 (ja) | Disparity value deriving device, device control system, movable body, robot, disparity value deriving method, and program | |
JP2019066302A (ja) | Object estimation device | |
JP6543935B2 (ja) | Disparity value deriving device, device control system, movable body, robot, disparity value deriving method, and program | |
JP2009139324A (ja) | Road surface detection device for vehicles | |
JP2011209070A (ja) | Image processing device | |
EP3435286B1 (en) | Imaging control device and imaging control method | |
JP6747176B2 (ja) | Image processing device, imaging device, program, device control system, and device | |
JP7457574B2 (ja) | Image processing device | |
JP7223587B2 (ja) | Object motion estimation method and object motion estimation device | |
JP3951734B2 (ja) | External environment recognition device for vehicles | |
KR102002228B1 (ko) | Apparatus and method for detecting dynamic objects | |
JP5682266B2 (ja) | Movement amount estimation device and movement amount estimation method | |
JP2021043486A (ja) | Position estimation device | |
KR102479253B1 (ko) | Tolerance correction method based on vehicle camera images | |
JP2020087210A (ja) | Calibration device and calibration method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 201480046274.2 Country of ref document: CN |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 14837572 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 14912885 Country of ref document: US |
|
WWE | Wipo information: entry into national phase |
Ref document number: 112014003818 Country of ref document: DE Ref document number: 1120140038180 Country of ref document: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 14837572 Country of ref document: EP Kind code of ref document: A1 |