US10650535B2 - Measurement device and measurement method - Google Patents

Measurement device and measurement method Download PDF

Info

Publication number
US10650535B2
US10650535B2 (Application US15/257,221; US201615257221A)
Authority
US
United States
Prior art keywords
moving object
information
image capturing
capturing unit
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US15/257,221
Other versions
US20170084048A1 (en)
Inventor
Tsuyoshi Tasaki
Manabu Nishiyama
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Toshiba Corp
Original Assignee
Toshiba Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Toshiba Corp filed Critical Toshiba Corp
Assigned to KABUSHIKI KAISHA TOSHIBA reassignment KABUSHIKI KAISHA TOSHIBA ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: TASAKI, TSUYOSHI, NISHIYAMA, MANABU
Publication of US20170084048A1 publication Critical patent/US20170084048A1/en
Application granted granted Critical
Publication of US10650535B2 publication Critical patent/US10650535B2/en
Active legal-status Critical Current
Adjusted expiration legal-status Critical

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30248 Vehicle exterior or interior
    • G06T2207/30252 Vehicle exterior; Vicinity of vehicle
    • G06T2207/30261 Obstacle

Definitions

  • An embodiment described herein relates generally to a measurement device and a measurement method.
  • the conventional techniques cause accuracy of three-dimensional measurement to deteriorate because the position of the other moving object differs among the images as a result of the movement of the other moving object.
  • FIG. 1 is a schematic diagram illustrating exemplary structures of a measurement device and a moving object in an embodiment
  • FIG. 2 is a schematic diagram illustrating an example of the moving object in the embodiment
  • FIG. 3 is a schematic diagram illustrating an example of model information in the embodiment
  • FIG. 4 is a schematic diagram illustrating an exemplary image in the embodiment
  • FIG. 5 is an explanatory view illustrating an example of identifying moving object regions in the embodiment
  • FIG. 6 is an explanatory view illustrating another example of identifying the moving object regions in the embodiment.
  • FIG. 7 is an explanatory view illustrating still another example of identifying the moving object regions in the embodiment.
  • FIG. 8 is an explanatory view illustrating an exemplary technique for tracking feature points in the embodiment
  • FIG. 9 is an explanatory view illustrating the exemplary technique for tracking the feature points in the embodiment.
  • FIG. 10 is an explanatory view illustrating an exemplary technique for searching for corresponding points in the embodiment
  • FIG. 11 is an explanatory view illustrating an exemplary technique for extracting three-dimensional points in the embodiment
  • FIG. 12 is an explanatory view illustrating an exemplary technique for dividing a space in the embodiment
  • FIG. 13 is an explanatory view illustrating an exemplary technique for selecting a representative point in the embodiment
  • FIG. 14 is an explanatory view illustrating an exemplary technique for estimating a movement plane in the embodiment
  • FIG. 15 is an explanatory view illustrating an exemplary technique for detecting an obstacle in the embodiment
  • FIG. 16 is a flowchart illustrating exemplary processing in the embodiment
  • FIG. 17 is an explanatory view illustrating a comparative example of the embodiment.
  • FIG. 18 is an explanatory view illustrating an example of the advantages of the embodiment.
  • FIG. 19 is a schematic diagram illustrating an exemplary hardware structure of the measurement device in the embodiment.
  • a measurement device includes a processing circuitry.
  • the processing circuitry acquires a plurality of images captured in time series by an image capturing unit installed in a moving object.
  • the processing circuitry acquires first position information that indicates a position of the moving object and first direction information that indicates a direction of the moving object.
  • the processing circuitry acquires moving object information that includes second position information indicating a position of other moving object moving in surroundings of the moving object.
  • the processing circuitry identifies a moving object region in which the other moving object is present for each of the images, based on the first position information, the first direction information, and the moving object information.
  • the processing circuitry estimates a position and a posture of the image capturing unit based on the images.
  • the processing circuitry searches for a plurality of sets of corresponding points among non-moving object regions other than the moving object regions in the respective images.
  • the processing circuitry performs three-dimensional measurement based on the position and the posture of the image capturing unit and the sets of the corresponding points.
  • FIG. 1 is a schematic diagram illustrating an exemplary structure of a measurement device 10 according to the embodiment and an exemplary structure of a moving object 1 provided with the measurement device 10 .
  • FIG. 2 is a schematic diagram illustrating an example of the moving object 1 in the embodiment.
  • the measurement device 10 is installed in the moving object 1 that includes an image capturing unit 5 and an azimuth sensor 6 .
  • the measurement device 10 includes a first acquirer 11 , a second acquirer 13 , a third acquirer 15 , an identifier 17 , a first estimator 18 , a searcher 19 , a measurer 21 , a second estimator 23 , a detector 25 , and an output unit 27 .
  • the moving object 1 is a vehicle such as a motor vehicle that moves on a road surface serving as a movement plane, for example.
  • the moving object 1 is, however, not limited to this example.
  • the moving object 1 may be any object that can move on the movement plane.
  • the moving object 1 may be a ship that moves on a water surface serving as the movement plane or a robot that moves on a floor surface serving as the movement plane.
  • the image capturing unit 5 can be achieved by an image sensor or a camera, for example.
  • the image capturing unit 5 captures images of the surroundings (e.g., in a traveling direction of the moving object 1 ) of the moving object 1 in time series and outputs a plurality of captured images to the measurement device 10 .
  • the azimuth sensor 6 detects a direction of the moving object 1 and outputs, to the measurement device 10 , first direction information that indicates the detected direction of the moving object 1 .
  • the first acquirer 11 , the second acquirer 13 , the third acquirer 15 , the identifier 17 , the first estimator 18 , the searcher 19 , the measurer 21 , the second estimator 23 , the detector 25 , and the output unit 27 may be achieved by a processing unit such as a central processing unit (CPU) executing a program, i.e., by software, by hardware such as an integrated circuit (IC), or by both of the software and the hardware.
  • the measurement device 10 may be achieved by a chip (integrated circuit) or a typical computer.
  • the first acquirer 11 acquires, from the image capturing unit 5 , the multiple images captured in time series.
  • the second acquirer 13 sequentially acquires the first position information that indicates the position of the moving object 1 and first direction information that indicates the direction of the moving object 1 .
  • the second acquirer 13 acquires the first position information from the global positioning system (GPS) satellites and the first direction information from the azimuth sensor 6 , for example.
  • the acquisition manner of the first position information and the first direction information is not limited to the example.
  • the third acquirer 15 acquires information about other moving object (hereinafter referred to as moving object information).
  • the moving object information includes second position information indicating the position of other moving object moving in the surroundings of the moving object 1 .
  • the moving object information may further include at least one of speed information that indicates a speed of the other moving object, model information that indicates a model obtained by abstracting a shape of the other moving object, texture information that indicates at least one of a color and a pattern of the other moving object, and second direction information that indicates a direction of the other moving object.
  • the model information may indicate at least one of the width, the height, and the depth (the length) of the other moving object as illustrated in FIG. 3 .
  • the model information indicates a model obtained by abstracting the shape of the other moving object to a rectangular parallelepiped.
  • the model information may indicate a model of a cube obtained by multiplying any length of the width, the height, and the depth of the other moving object by a constant, or a model of a rectangular parallelepiped obtained by estimating a length of the other side from any length of the width, the height, and the depth of the other moving object using a general aspect ratio of a vehicle.
  • the third acquirer 15 sequentially acquires the moving object information from other moving object by performing inter-vehicle communication (e.g., wireless communication according to IEEE 802.11p) with the other moving object running in the surroundings of the moving object 1 , for example.
  • the other moving object acquires the second position information from the GPS satellites, puts the acquired second position information in the moving object information, and transmits the moving object information to the moving object 1 (the measurement device 10 ), for example.
  • the other moving object may detect a speed thereof, produce the speed information, and put the produced speed information in the moving object information. When preliminarily holding the model information and the texture information thereof, the other moving object may put them in the moving object information.
  • the other moving object may acquire the second direction information from an azimuth sensor included therein, and put the acquired second direction information in the moving object information.
  • the third acquirer 15 may sequentially acquire the moving object information about the other moving object moving in the surroundings of the moving object 1 from a monitoring device by performing road-vehicle communication (e.g., wireless communication according to IEEE 802.11p) with the monitoring device present on a road shoulder in the surroundings of the moving object 1 .
  • the monitoring device is a monitoring camera, for example.
  • the monitoring device is, however, not limited to the example.
  • the monitoring device captures an image of the other moving object moving in the surroundings of the moving object 1 , calculates the second position information using the captured image, puts the calculated second position information in the moving object information, and transmits the moving object information to the moving object 1 (the measurement device 10 ).
  • the monitoring device may use the position information about the monitoring device acquired from the GPS satellites for calculating the second position information.
  • the monitoring device may calculate the model information, the texture information, and the second direction information using the captured image, and put the calculated information in the moving object information.
  • the identifier 17 identifies a moving object region in which the other moving object is present for each of the multiple images acquired by the first acquirer 11 on the basis of the first position information and the first direction information that are acquired by the second acquirer 13 and the moving object information acquired by the third acquirer 15 .
  • the identifier 17 determines that the other moving object is moving when a speed indicated by the speed information included in the moving object information is equal to or larger than a first threshold.
  • when no speed information is included in the moving object information, the identifier 17 may calculate a speed to be included in the moving object information from a difference between the position indicated by the second position information included in the moving object information at this time and the position indicated by the second position information included in the moving object information at previous time, and an acquisition interval of the moving object information.
  • when determining that the other moving object is moving, the identifier 17 identifies the position of the other moving object on an image acquired by the first acquirer 11 on the basis of the first position information and the first direction information that are acquired by the second acquirer 13 and the second position information included in the moving object information acquired by the third acquirer 15 .
  • the identifier 17 can identify a positional relation between the moving object 1 and the other moving object on an image captured by the image capturing unit 5 of the moving object 1 because the position and the direction of the moving object 1 and the position of the other moving object are known, thereby making it possible to identify the position of the other moving object on the image.
  • the first position information, the first direction information, the moving object information, and the image are captured at substantially the same time.
  • the identifier 17 further identifies the moving object region in which the other moving object is present on the image on the basis of the identified position of the other moving object on the image.
  • the identifier 17 identifies, as the moving object region, a region having a predetermined size based on the identified position of the other moving object. For example, the identifier 17 identifies, as the moving object region, a region that has the identified position of the other moving object at its center and contains a predetermined number of pixels.
  • the moving object region may have any shape such as rectangular or circular.
  • the identifier 17 may identify, as the moving object region, a region that has a size according to a distance between the moving object 1 and the other moving object based on the identified position of the other moving object. For example, when identifying the positions (center positions) of moving bodies 31 and 32 serving as the other moving bodies as illustrated in FIG. 4 , the identifier 17 may identify a moving object region 41 having the position of the moving object 31 as the center thereof and a moving object region 42 having the position of the moving object 32 as the center thereof as illustrated in FIG. 5 .
  • in the examples illustrated in FIGS. 4 and 5 , the size (the number of pixels) of the moving object region 42 is smaller than that of the moving object region 41 because the distance from the moving object 1 to the moving object 32 is larger (farther) than that from the moving object 1 to the moving object 31 .
  • the distance between the moving object 1 and the other moving object is obtained on the basis of the first position information and the corresponding second position information.
  • the identifier 17 may identify, as the moving object region, a region according to the model of the other moving object based on the identified position of the other moving object.
  • the identifier 17 may identify, as the moving object region, a region according to the direction of the other moving object.
  • the identifier 17 may identify, as the moving object region, a region according to the distance between the other moving object and the moving object 1 .
  • the identifier 17 may identify a moving object region 51 having the position of the moving object 31 as the center thereof and a moving object region 52 having the position of the moving object 32 as the center thereof as illustrated in FIG. 6 .
  • the moving object region 51 is identified according to the model and the direction of the moving object 31 and the distance between the moving object 31 and the moving object 1 .
  • the moving object region 51 is identified in such a manner that the model and the direction of the moving object 31 are set in a three-dimensional space and the model is projected such that the moving object 31 is positioned at the center of the moving object region 51 .
  • the moving object region 52 is identified according to the model and the direction of the moving object 32 and the distance between the moving object 32 and the moving object 1 .
  • the moving object region 52 is identified in such a manner that the model and the direction of the moving object 32 are set in a three-dimensional space and the model is projected such that the moving object 32 is positioned at the center of the moving object region 52 .
  • the model of the other moving object can be obtained from the model information while the direction of the other moving object can be obtained from the second direction information.
  • the projection may be performed such that the depth of the model projected on the image gives the lateral length of a rectangle while the height of the model projected on the image gives the vertical length of the rectangle.
  • the identifier 17 may identify, as the moving object region, a region according to the texture information about the other moving object based on the identified position of the other moving object. For example, when identifying the positions (center positions) of the moving bodies 31 and 32 serving as the other moving bodies as illustrated in FIG. 4 , the identifier 17 may identify a moving object region 61 of the moving object 31 and a moving object region 62 of the moving object 32 as illustrated in FIG. 7 .
  • the moving object region 61 is identified according to the texture information about the moving object 31 . Specifically, the moving object region 61 has a color the same as or similar to that indicated by the texture information around the position of the moving object 31 . Likewise, the moving object region 62 is identified according to the texture information about the moving object 32 . Specifically, the moving object region 62 has a color the same as or similar to that indicated by the texture information around the position of the moving object 32 .
  • the first estimator 18 estimates the position and the posture of the image capturing unit 5 on the basis of the multiple images acquired by the first acquirer 11 . Specifically, the first estimator 18 extracts feature points from the respective images, tracks the feature points among the images, and estimates the position and the posture of the image capturing unit 5 . In the embodiment, the first estimator 18 extracts the feature points from non-moving object regions other than the moving object regions in the respective images, tracks the feature points among the images, and estimates the position and the posture of the image capturing unit 5 .
  • FIG. 8 illustrates the image captured at time t−1.
  • FIG. 9 illustrates the image captured at time t.
  • the first estimator 18 extracts the feature points from the non-moving object region other than a moving object region 74 as illustrated in FIG. 8 and tracks the extracted feature points on the image illustrated in FIG. 9 .
  • the first estimator 18 extracts feature points 71 , 72 , and 73 from the image illustrated in FIG. 8 , tracks the respective feature points 71 , 72 , and 73 on the image illustrated in FIG. 9 , and extracts feature points 81 , 82 , and 83 , respectively, from the image illustrated in FIG. 9 .
  • the first estimator 18 obtains a correspondence in time series between the corresponding feature points on the corresponding images for each feature point.
  • the first estimator 18 obtains, by epipolar geometry, the position and the posture (the current position and posture, expressed as a posture rotation R and a parallel movement T) of the image capturing unit 5 relative to the position and the posture of the image capturing unit 5 at previous time, on the basis of the obtained correspondences of the respective feature points in time series.
  • a point having a large luminance difference in an image can be chosen as the feature point.
  • the feature points can be extracted by a known extraction technique such as the Harris corner detector.
  • a technique that associates the feature points with each other is described as an example of tracking the feature points.
  • tracking the feature points also includes a technique in which a region surrounding a certain pixel is considered as the feature point, and the pixels are associated with each other between the regions, or the pixels are associated with each other using the region and the feature point.
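  • as an illustration of the estimation described above, the following is a minimal sketch of the first estimator's processing, assuming a calibrated pinhole camera and using OpenCV; the function names, parameter values, and the use of Lucas-Kanade tracking are assumptions for illustration, not the patent's prescribed implementation.

```python
import numpy as np
import cv2

def estimate_pose_excluding_moving_objects(img_prev, img_curr, moving_mask_prev, K):
    """Sketch of the first estimator: extract feature points only in the
    non-moving object regions of the previous grayscale image, track them
    into the current image, and recover the relative camera pose (R, T) by
    epipolar geometry. Names and parameters are illustrative.
    """
    # Feature points (corners) only where no other moving object was identified.
    mask = (moving_mask_prev == 0).astype(np.uint8)  # 1 = non-moving object region
    pts_prev = cv2.goodFeaturesToTrack(img_prev, maxCorners=500,
                                       qualityLevel=0.01, minDistance=7,
                                       mask=mask, useHarrisDetector=True)

    # Track the feature points into the current image (Lucas-Kanade optical flow).
    pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(img_prev, img_curr, pts_prev, None)
    good_prev = pts_prev[status.ravel() == 1].reshape(-1, 2)
    good_curr = pts_curr[status.ravel() == 1].reshape(-1, 2)

    # Relative pose from the essential matrix (translation is up to scale).
    E, inliers = cv2.findEssentialMat(good_prev, good_curr, K,
                                      method=cv2.RANSAC, prob=0.999, threshold=1.0)
    _, R, T, _ = cv2.recoverPose(E, good_prev, good_curr, K, mask=inliers)
    return R, T
```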
  • the searcher 19 searches for a plurality of sets of the corresponding points among the non-moving object regions in the respective images acquired by the first acquirer 11 . Specifically, the searcher 19 arranges search points in the non-moving object region on the image captured at previous time, and searches for corresponding points, each corresponding to one of the search points, on the image captured at this time to determine the sets of the corresponding points.
  • the search points may be arranged in the non-moving object region on the image captured at previous time such that the search points are arranged entirely on the periphery of the non-moving object region or evenly and entirely in the non-moving object region.
  • the searcher 19 can identify a range in which the search points are capable of being observed on the image captured at this time from the position and the posture, which are estimated by the first estimator 18 , of the image capturing unit 5 , thereby searching the identified range for the corresponding points of the search points. Specifically, the searcher 19 searches for the corresponding points corresponding to the search points on epipolar lines (indicated with the arrows in FIG. 10 ) on which the search points can be observed, as illustrated in FIG. 10 .
  • the search points are arranged on the image captured at previous time and the corresponding points corresponding to the search points are searched for on the image captured at this time.
  • the search points may be arranged on the image captured at this time and the corresponding points corresponding to the search points may be searched for on the image captured at previous time.
  • the measurer 21 performs three-dimensional measurement on the basis of the position and the posture, which are estimated by the first estimator 18 , of the image capturing unit 5 and the multiple sets of the corresponding points searched for by the searcher 19 to obtain three-dimensional points. Specifically, the measurer 21 performs three-dimensional measurement based on a principle of triangulation using the position and the posture, which are estimated by the first estimator 18 , of the image capturing unit 5 and the multiple sets of the corresponding points searched for by the searcher 19 . As a result, the measurer 21 restores the depths of the respective corresponding points, thereby obtaining the three-dimensional points.
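  • a minimal sketch of this triangulation step is given below, assuming a calibrated camera with intrinsic matrix K and using OpenCV; the coordinate conventions and names are assumptions for illustration, not the patent's implementation.

```python
import numpy as np
import cv2

def triangulate_corresponding_points(pts_prev, pts_curr, R, T, K):
    """Sketch of the measurer: restore the depth of each set of
    corresponding points by triangulation, given the relative pose (R, T)
    of the image capturing unit estimated from the images.
    """
    # Projection matrices of the previous and current camera positions.
    P_prev = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P_curr = K @ np.hstack([R, T.reshape(3, 1)])

    # cv2.triangulatePoints expects 2xN arrays of pixel coordinates.
    pts4d = cv2.triangulatePoints(P_prev, P_curr,
                                  np.asarray(pts_prev, dtype=np.float64).T,
                                  np.asarray(pts_curr, dtype=np.float64).T)

    # Convert from homogeneous to Euclidean coordinates (one 3D point per set).
    pts3d = (pts4d[:3] / pts4d[3]).T
    return pts3d  # (N, 3) three-dimensional points
```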
  • the second estimator 23 estimates a movement plane on which the moving object 1 moves on the basis of the three-dimensional points obtained by the measurer 21 .
  • a plane detection technique using random sample consensus (RANSAC), which is a known technique, may be used, for example.
  • a set of three points having a height equal to or smaller than a second threshold is randomly sampled several times from the group of the three-dimensional points obtained by the measurer 21 , and out of the planes each formed by the three points, the plane that includes the largest number of three-dimensional points within a third threshold distance from the plane may be estimated as the movement plane.
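  • the following sketch illustrates such a RANSAC-style plane estimation under assumed conventions (a supplied per-point height value and an illustrative iteration count); it is not the patent's prescribed implementation.

```python
import numpy as np

def ransac_movement_plane(points, heights, height_thresh, dist_thresh,
                          iters=100, seed=0):
    """Repeatedly sample three low-lying 3D points, fit a plane, and keep
    the plane supported by the most points within dist_thresh (the third
    threshold). points: (N, 3); heights: (N,) height of each point above
    some reference (how height is measured is an assumption of this sketch).
    """
    rng = np.random.default_rng(seed)
    low = points[heights <= height_thresh]  # candidates below the second threshold
    best_plane, best_count = None, -1

    for _ in range(iters):
        sample = low[rng.choice(len(low), size=3, replace=False)]
        # Plane normal from the three sampled points.
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue  # degenerate (collinear) sample
        n = n / norm
        d = -n @ sample[0]
        # Count points within dist_thresh of the candidate plane.
        count = int(np.sum(np.abs(points @ n + d) <= dist_thresh))
        if count > best_count:
            best_plane, best_count = (n, d), count
    return best_plane  # (normal, offset) of the estimated movement plane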
  • the movement plane can be estimated highly accurately using the following technique.
  • the second estimator 23 sets a fourth threshold on the basis of a time series change in the posture, which is estimated by the first estimator 18 , of the image capturing unit 5 (the moving object 1 ). Specifically, the second estimator 23 sets the fourth threshold such that with an increase in the time series change in the posture of the image capturing unit 5 , the value of the fourth threshold is reduced. Specifically, the second estimator 23 sets, to the value of the fourth threshold, a value obtained by applying a value indicating the time series change in the posture of the image capturing unit 5 to a monotonically decreasing function.
  • the second estimator 23 calculates an absolute value of a difference between a value indicating the posture of the image capturing unit 5 of the moving object 1 at a present time t and a value indicating the posture of the image capturing unit 5 of the moving object 1 at a calculation time t−P, which is a time period P before the present time t.
  • the second estimator 23 sets, to the fourth threshold, the value of y obtained by substituting the calculated absolute value of the difference for x in the monotonically decreasing function.
  • the calculation time t−P is preferably the calculation time of the posture of the image capturing unit 5 at previous time, but is not limited thereto.
  • the calculation time is the calculation time of the posture of the image capturing unit 5 at or before the previous time.
  • the absolute value of the difference is the sum of the absolute values of the differences in roll, pitch, and yaw, for example, but is not limited thereto.
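  • a minimal sketch of setting the fourth threshold is shown below; the particular monotonically decreasing function y = a / (1 + b*x) and its constants are assumptions for illustration only.

```python
def fourth_threshold(posture_prev, posture_curr, a=10.0, b=5.0):
    """Set the fourth threshold from the time-series change in posture:
    the larger the change, the smaller the threshold.
    posture_* are (roll, pitch, yaw); a and b are illustrative constants
    of an assumed monotonically decreasing function y = a / (1 + b * x).
    """
    change = sum(abs(c - p) for p, c in zip(posture_prev, posture_curr))
    return a / (1.0 + b * change)
```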
  • the second estimator 23 extracts a plurality of three-dimensional points each having a distance from the moving object 1 equal to or smaller than the set fourth threshold in the movement direction of the moving object 1 out of the three-dimensional point group obtained by the measurer 21 .
  • FIG. 11 is an explanatory view illustrating an exemplary technique for extracting the three-dimensional points in the embodiment, and illustrates a three-dimensional point group 101 obtained by the measurer 21 on the yz plane.
  • the movement direction of the moving object 1 is the z-axis direction (specifically, +z direction).
  • the second estimator 23 extracts a plurality of three-dimensional points 102 each having a z coordinate value equal to or smaller than the set fourth threshold T out of the three-dimensional point group 101 .
  • the second estimator 23 divides a space in which the extracted three-dimensional points are positioned into a plurality of divided spaces in the movement direction of the moving object 1 .
  • FIG. 12 is an explanatory view illustrating an exemplary technique for dividing the space in the embodiment, and illustrates the three-dimensional point group 101 and the three-dimensional points 102 on the yz plane.
  • the second estimator 23 identifies a minimum value L and a maximum value U out of the z coordinate values of the extracted three-dimensional points 102 , and divides the space having a z coordinate value equal to or larger than L and equal to or smaller than U equally into k (k≥2) parts in the z-axis direction to obtain k block spaces.
  • the second estimator 23 selects, for each divided space, a representative point out of the three-dimensional points included in the divided space. Specifically, the second estimator 23 selects, as the representative point, the three-dimensional point at the lowest position in the vertical direction out of the three-dimensional points included in the divided space.
  • FIG. 13 is an explanatory view illustrating an exemplary technique for selecting the representative point in the embodiment, and illustrates the three-dimensional points 102 and the block spaces after the division on the yz plane.
  • the second estimator 23 selects, as the representative point, the three-dimensional point having a maximum y coordinate value in each block space.
  • the second estimator 23 selects three-dimensional point 103 - 1 as the representative point.
  • the second estimator 23 selects three-dimensional point 103 - k as the representative point.
  • the second estimator 23 estimates a plane approximated with the selected representative points as the movement plane on which the moving object 1 moves.
  • FIG. 14 is an explanatory view illustrating an exemplary technique for estimating the movement plane in the embodiment, and illustrates the block spaces after the division and k selected representative points 103 on the yz plane.
  • the second estimator 23 estimates a plane 104 (illustrated as the straight line on the yz plane in FIG. 14 ) approximated with the k selected representative points 103 as the movement plane on which the moving object 1 moves.
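  • the following sketch puts the extraction, division, representative-point selection, and plane approximation steps together under assumed conventions (z as the movement direction, a larger y meaning a lower position, and least-squares plane fitting); names and constants are illustrative.

```python
import numpy as np

def estimate_movement_plane(points, fourth_threshold, k=10):
    """Sketch of the second estimator: keep 3D points whose z coordinate
    (movement direction) is within the fourth threshold, split them into
    k block spaces along z, take the lowest point (maximum y, as in the
    text) of each block as its representative point, and approximate a
    plane with the representative points by least squares.
    points: (N, 3) array of (x, y, z) points from the measurer.
    """
    near = points[points[:, 2] <= fourth_threshold]
    z_min, z_max = near[:, 2].min(), near[:, 2].max()
    edges = np.linspace(z_min, z_max, k + 1)

    reps = []
    for i in range(k):
        in_block = near[(near[:, 2] >= edges[i]) & (near[:, 2] <= edges[i + 1])]
        if len(in_block) == 0:
            continue
        reps.append(in_block[np.argmax(in_block[:, 1])])  # lowest point in the block
    reps = np.array(reps)

    # Least-squares plane y = a*x + b*z + c approximated with the representatives.
    A = np.column_stack([reps[:, 0], reps[:, 2], np.ones(len(reps))])
    coeffs, *_ = np.linalg.lstsq(A, reps[:, 1], rcond=None)
    return coeffs  # (a, b, c) of the estimated movement plane
```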
  • the detector 25 detects an obstacle on the basis of the three-dimensional point group obtained by the measurer 21 and the movement plane estimated by the second estimator 23 . Specifically, the detector 25 detects, as an obstacle, the three-dimensional points that are not present on the movement plane out of the three-dimensional point group obtained by the measurer 21 .
  • FIG. 15 is an explanatory view illustrating an exemplary technique for detecting the obstacle in the embodiment, and illustrates the three-dimensional point group 101 obtained by the measurer 21 and the movement plane 104 estimated by the second estimator 23 on the yz plane.
  • the detector 25 calculates a distance d from the movement plane 104 in the y-axis direction for each of the three-dimensional points included in the three-dimensional point group 101 , and detects the three-dimensional point having a distance d equal to or larger than an error as the three-dimensional point included in the obstacle.
  • the error is a measurement error in the three-dimensional measurement performed by the measurer 21 .
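  • a minimal sketch of this obstacle detection step, including the optional fifth threshold of the modification described later, is shown below; the plane parameterization and names are assumptions for illustration.

```python
import numpy as np

def detect_obstacle_points(points, plane_coeffs, measurement_error,
                           fifth_threshold=None):
    """Sketch of the detector: a 3D point whose distance from the movement
    plane in the height (y) direction is at least the measurement error is
    treated as belonging to an obstacle; optionally, points farther from the
    plane than a fifth threshold (e.g. traffic signals) are excluded.
    plane_coeffs: (a, b, c) of the plane y = a*x + b*z + c.
    """
    a, b, c = plane_coeffs
    plane_y = a * points[:, 0] + b * points[:, 2] + c
    d = np.abs(points[:, 1] - plane_y)       # distance from the plane along y
    is_obstacle = d >= measurement_error
    if fifth_threshold is not None:
        is_obstacle &= d <= fifth_threshold  # ignore overhead structures
    return points[is_obstacle]
```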
  • the output unit 27 performs output on the basis of the detection result of the detector 25 .
  • the output unit 27 causes a speaker (not illustrated) installed in the moving object 1 to output the position of the detected obstacle by voice, or causes a display installed in the moving object 1 to display the position of the detected obstacle on an image acquired by the first acquirer 11 .
  • FIG. 16 is a flowchart illustrating an exemplary flow of a procedure of the processing in the embodiment.
  • the first acquirer 11 sequentially acquires images captured in time series from the image capturing unit 5 while the second acquirer 13 sequentially acquires the first position information and the first direction information (step S 101 ).
  • the third acquirer 15 acquires the moving object information about other moving object that moves in the surroundings of the moving object 1 (step S 103 ).
  • the identifier 17 identifies the moving object region in which the other moving object is present for each of the images acquired by the first acquirer 11 on the basis of the first position information and the first direction information that are acquired by the second acquirer 13 and the moving object information acquired by the third acquirer 15 (step S 105 ).
  • the first estimator 18 extracts the feature points from the non-moving object regions in the respective images acquired by the first acquirer 11 in order to estimate the position and the posture of the image capturing unit 5 . If feature points sufficient for the estimation are extracted (Yes at step S 107 ), the first estimator 18 tracks the feature points among the images (step S 109 ) and estimates the position and the posture of the image capturing unit 5 (step S 111 ).
  • the searcher 19 searches for a plurality of sets of the corresponding points among the non-moving object regions in the respective images acquired by the first acquirer 11 (step S 113 ).
  • the measurer 21 performs three-dimensional measurement on the basis of the position and the posture, which are estimated by the first estimator 18 , of the image capturing unit 5 and the multiple sets of the corresponding points searched for by the searcher 19 to obtain the three-dimensional points (step S 115 ).
  • the second estimator 23 estimates the movement plane on which the moving object 1 moves on the basis of the three-dimensional point group obtained by the measurer 21 (step S 117 ).
  • the detector 25 detects an obstacle on the basis of the three-dimensional point group obtained by the measurer 21 and the movement plane estimated by the second estimator 23 (step S 119 ).
  • the output unit 27 performs output on the basis of the detection result of the detector 25 (step S 121 ).
  • in the comparative example illustrated in FIG. 17 , the position of a search point 201 of the moving object 200 at previous image capturing time and the position of a corresponding point 202 of the moving object 200 at this image capturing time differ from each other. If the three-dimensional measurement is performed using the set of the search point 201 and the corresponding point 202 as the set of the corresponding points, the three-dimensional point (specifically, the depth) cannot be accurately obtained due to the principle of triangulation.
  • the embodiment searches for the sets of the corresponding points among a plurality of images on the basis of the non-moving object regions other than the moving object regions in the respective images, thereby making it possible to search for the sets of the corresponding points excluding the moving object (other moving object). As a result, deterioration of accuracy of three-dimensional measurement can be reduced.
  • the embodiment can reduce the deterioration of accuracy of three-dimensional measurement as described above.
  • detection accuracy of the obstacle can be increased. For example, a tiny obstacle having a height of about 10 cm can be accurately detected.
  • with an increase in the time series change in the posture of the moving object, the fourth threshold is reduced, so that the accuracy of three-dimensional measurement hardly deteriorates due to the influence of the change in posture even when the change increases. Furthermore, the movement plane of the moving object is estimated using the three-dimensional points near the moving object, thereby making it possible to increase the accuracy of estimating the movement plane.
  • the embodiment can increase the accuracy of estimating the movement plane as described above.
  • the detection accuracy of the obstacle can be further increased.
  • the feature points are extracted from the non-moving object regions when the position and the posture of the image capturing unit 5 are estimated.
  • the positions of the feature points to be tracked hardly shift. Consequently, the deterioration of accuracy of estimating the position and the posture of the image capturing unit 5 can also be reduced.
  • the three-dimensional points each having a distance (distance in the height direction) equal to or larger than the error from the movement plane are detected as the three-dimensional points included in the obstacle.
  • the three-dimensional points each having a distance (distance in the height direction) equal to or larger than the error from the movement plane and equal to or smaller than a fifth threshold may be detected as the three-dimensional points included in the obstacle.
  • when the fifth threshold is set to a height (distance) nearly the same as the height of the moving object 1 , objects such as a traffic signal and a pedestrian bridge are prevented from being mistakenly detected as obstacles.
  • FIG. 19 is a schematic diagram illustrating an exemplary hardware structure of the measurement device of the embodiment and the modification.
  • the measurement device of the embodiment and the modification has a hardware structure utilizing a normal computer.
  • the measurement device includes a control device 901 such as a CPU, a main storage device 902 such as a read only memory (ROM) or a random access memory (RAM), an auxiliary storage device 903 such as a hard disk drive (HDD) or a solid state drive (SSD), a display device 904 such as a display, an input device 905 such as a keyboard or a mouse, and a communication device 906 such as a communication interface.
  • a program executed by the measurement device in the embodiment and the modification is stored and provided in a computer-readable storage medium, which may be provided as a computer program product, such as a compact disc read only memory (CD-ROM), a compact disc recordable (CD-R), a memory card, a digital versatile disc (DVD), or a flexible disk (FD), as an installable or executable file.
  • the program executed by the measurement device in the embodiment and the modification may be stored in a computer connected to a network such as the Internet and provided by being downloaded via the network. Furthermore, the program executed by the measurement device in the embodiment and the modification may be provided or distributed via a network such as the Internet. The program executed by measurement device in the embodiment and the modification may be embedded and provided in a ROM, for example.
  • the program executed by the measurement device in the embodiment and the modification has a module structure that achieves the respective units described above in a computer.
  • the CPU reads out the program from the ROM or the HDD into the RAM and executes it, so that the respective units described above are achieved in the computer.
  • the present invention is not directly limited to the embodiment and the modification.
  • the invention can be embodied by changing components without departing from the spirit and scope of the invention when practiced.
  • various aspects of the invention can be made by properly combining the components of the embodiment and the modification. For example, some components may be eliminated from all of the components of the embodiment and the modification.
  • the components of the different embodiments and modifications may be properly combined.
  • the steps in the flowchart of the embodiment may be changed in execution order, some steps may be executed simultaneously, or the steps may be executed in a different order in each implementation without departing from their roles.
  • the embodiment and modification can prevent the deterioration of accuracy of three-dimensional measurement even when other moving object is present in a plurality of images captured in time series.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)
  • Geometry (AREA)

Abstract

According to an embodiment, a measurement device includes a processing circuitry. A plurality of images are captured in time series by an image capturing unit installed in a moving object. The processing circuitry identifies a region in which other moving object moving in surroundings of the moving object is present for each of the images, based on position and direction information of the moving object, and moving object information of the other moving object. The processing circuitry estimates position and posture of the image capturing unit based on the images. The processing circuitry searches for sets of corresponding points among non-moving object regions in the respective images. The processing circuitry performs 3D measurement based on the position and posture of the image capturing unit and the sets of the corresponding points.

Description

CROSS-REFERENCE TO RELATED APPLICATION
This application is based upon and claims the benefit of priority from Japanese Patent Application No. 2015-183874, filed on Sep. 17, 2015; the entire contents of which are incorporated herein by reference.
FIELD
An embodiment described herein relates generally to a measurement device and a measurement method.
BACKGROUND
Techniques have been known that implement three-dimensional measurement using a plurality of images captured in time series by a camera installed in a moving object, such as a vehicle that is running, and a movement amount of the moving object.
When other moving object is present in captured images, however, the conventional techniques cause accuracy of three-dimensional measurement to deteriorate because the position of the other moving object differs among the images as a result of the movement of the other moving object.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a schematic diagram illustrating exemplary structures of a measurement device and a moving object in an embodiment;
FIG. 2 is a schematic diagram illustrating an example of the moving object in the embodiment;
FIG. 3 is a schematic diagram illustrating an example of model information in the embodiment;
FIG. 4 is a schematic diagram illustrating an exemplary image in the embodiment;
FIG. 5 is an explanatory view illustrating an example of identifying moving object regions in the embodiment;
FIG. 6 is an explanatory view illustrating another example of identifying the moving object regions in the embodiment;
FIG. 7 is an explanatory view illustrating still another example of identifying the moving object regions in the embodiment;
FIG. 8 is an explanatory view illustrating an exemplary technique for tracking feature points in the embodiment;
FIG. 9 is an explanatory view illustrating the exemplary technique for tracking the feature points in the embodiment;
FIG. 10 is an explanatory view illustrating an exemplary technique for searching for corresponding points in the embodiment;
FIG. 11 is an explanatory view illustrating an exemplary technique for extracting three-dimensional points in the embodiment;
FIG. 12 is an explanatory view illustrating an exemplary technique for dividing a space in the embodiment;
FIG. 13 is an explanatory view illustrating an exemplary technique for selecting a representative point in the embodiment;
FIG. 14 is an explanatory view illustrating an exemplary technique for estimating a movement plane in the embodiment;
FIG. 15 is an explanatory view illustrating an exemplary technique for detecting an obstacle in the embodiment;
FIG. 16 is a flowchart illustrating exemplary processing in the embodiment;
FIG. 17 is an explanatory view illustrating a comparative example of the embodiment;
FIG. 18 is an explanatory view illustrating an example of the advantages of the embodiment; and
FIG. 19 is a schematic diagram illustrating an exemplary hardware structure of the measurement device in the embodiment.
DETAILED DESCRIPTION
According to an embodiment, a measurement device includes a processing circuitry. The processing circuitry acquires a plurality of images captured in time series by an image capturing unit installed in a moving object. The processing circuitry acquires first position information that indicates a position of the moving object and first direction information that indicates a direction of the moving object. The processing circuitry acquires moving object information that includes second position information indicating a position of other moving object moving in surroundings of the moving object. The processing circuitry identifies a moving object region in which the other moving object is present for each of the images, based on the first position information, the first direction information, and the moving object information. The processing circuitry estimates a position and a posture of the image capturing unit based on the images. The processing circuitry searches for a plurality of sets of corresponding points among non-moving object regions other than the moving object regions in the respective images. The processing circuitry performs three-dimensional measurement based on the position and the posture of the image capturing unit and the sets of the corresponding points.
The following describes an embodiment in detail with reference to the accompanying drawings.
FIG. 1 is a schematic diagram illustrating an exemplary structure of a measurement device 10 according to the embodiment and an exemplary structure of a moving object 1 provided with the measurement device 10. FIG. 2 is a schematic diagram illustrating an example of the moving object 1 in the embodiment. As illustrated in FIG. 1, the measurement device 10 is installed in the moving object 1 that includes an image capturing unit 5 and an azimuth sensor 6. The measurement device 10 includes a first acquirer 11, a second acquirer 13, a third acquirer 15, an identifier 17, a first estimator 18, a searcher 19, a measurer 21, a second estimator 23, a detector 25, and an output unit 27.
In the embodiment, the moving object 1 is a vehicle such as a motor vehicle that moves on a road surface serving as a movement plane, for example. The moving object 1 is, however, not limited to this example. The moving object 1 may be any object that can move on the movement plane. For example, the moving object 1 may be a ship that moves on a water surface serving as the movement plane or a robot that moves on a floor surface serving as the movement plane.
The image capturing unit 5 can be achieved by an image sensor or a camera, for example. The image capturing unit 5 captures images of the surroundings (e.g., in a traveling direction of the moving object 1) of the moving object 1 in time series and outputs a plurality of captured images to the measurement device 10.
The azimuth sensor 6 detects a direction of the moving object 1 and outputs, to the measurement device 10, first direction information that indicates the detected direction of the moving object 1.
The first acquirer 11, the second acquirer 13, the third acquirer 15, the identifier 17, the first estimator 18, the searcher 19, the measurer 21, the second estimator 23, the detector 25, and the output unit 27 may be achieved by a processing unit such as a central processing unit (CPU) executing a program, i.e., by software, by hardware such as an integrated circuit (IC), or by both of the software and the hardware. The measurement device 10 may be achieved by a chip (integrated circuit) or a typical computer.
The first acquirer 11 acquires, from the image capturing unit 5, the multiple images captured in time series.
The second acquirer 13 sequentially acquires the first position information that indicates the position of the moving object 1 and first direction information that indicates the direction of the moving object 1. The second acquirer 13 acquires the first position information from the global positioning system (GPS) satellites and the first direction information from the azimuth sensor 6, for example. The acquisition manner of the first position information and the first direction information is not limited to the example.
The third acquirer 15 acquires information about other moving object (hereinafter referred to as moving object information). The moving object information includes second position information indicating the position of other moving object moving in the surroundings of the moving object 1. The moving object information may further include at least one of speed information that indicates a speed of the other moving object, model information that indicates a model obtained by abstracting a shape of the other moving object, texture information that indicates at least one of a color and a pattern of the other moving object, and second direction information that indicates a direction of the other moving object. The model information may indicate at least one of the width, the height, and the depth (the length) of the other moving object as illustrated in FIG. 3. In this case, the model information indicates a model obtained by abstracting the shape of the other moving object to a rectangular parallelepiped. The model information may indicate a model of a cube obtained by multiplying any length of the width, the height, and the depth of the other moving object by a constant, or a model of a rectangular parallelepiped obtained by estimating a length of the other side from any length of the width, the height, and the depth of the other moving object using a general aspect ratio of a vehicle.
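As an illustration, a possible in-memory representation of the moving object information is sketched below; the field names, types, and units are assumptions for illustration and are not defined by the patent.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class MovingObjectInfo:
    """Sketch of the moving object information described above; only the
    position is mandatory, the other fields are optional.
    """
    position: Tuple[float, float]                              # second position information (e.g. latitude, longitude)
    speed: Optional[float] = None                              # speed information [m/s]
    model_whd: Optional[Tuple[float, float, float]] = None     # width, height, depth of the abstracted box [m]
    texture_color: Optional[Tuple[int, int, int]] = None       # representative color (R, G, B)
    direction: Optional[float] = None                          # second direction information (heading) [rad]
```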
The third acquirer 15 sequentially acquires the moving object information from other moving object by performing inter-vehicle communication (e.g., wireless communication according to IEEE 802.11p) with the other moving object running in the surroundings of the moving object 1, for example. In this case, the other moving object acquires the second position information from the GPS satellites, puts the acquired second position information in the moving object information, and transmits the moving object information to the moving object 1 (the measurement device 10), for example. The other moving object may detect a speed thereof, produce the speed information, and put the produced speed information in the moving object information. When preliminarily holding the model information and the texture information thereof, the other moving object may put them in the moving object information. The other moving object may acquire the second direction information from an azimuth sensor included therein, and put the acquired second direction information in the moving object information.
The third acquirer 15 may sequentially acquire the moving object information about the other moving object moving in the surroundings of the moving object 1 from a monitoring device by performing road-vehicle communication (e.g., wireless communication according to IEEE 802.11p) with the monitoring device present on a road shoulder in the surroundings of the moving object 1. The monitoring device is a monitoring camera, for example. The monitoring device is, however, not limited to the example. In this case, the monitoring device captures an image of the other moving object moving in the surroundings of the moving object 1, calculates the second position information using the captured image, puts the calculated second position information in the moving object information, and transmits the moving object information to the moving object 1 (the measurement device 10). The monitoring device may use the position information about the monitoring device acquired from the GPS satellites for calculating the second position information. The monitoring device may calculate the model information, the texture information, and the second direction information using the captured image, and put the calculated information in the moving object information.
The identifier 17 identifies a moving object region in which the other moving object is present for each of the multiple images acquired by the first acquirer 11 on the basis of the first position information and the first direction information that are acquired by the second acquirer 13 and the moving object information acquired by the third acquirer 15.
Specifically, the identifier 17 determines that the other moving object is moving when a speed indicated by the speed information included in the moving object information is equal to or larger than a first threshold. When no speed information is included in the moving object information, the identifier 17 may calculate a speed to be included in the moving object information from a difference between the position indicated by the second position information included in the moving object information at this time and the position indicated by the second position information included in the moving object information at previous time, and an acquisition interval of the moving object information.
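A minimal sketch of this speed derivation is shown below, assuming planar positions in metres; the names are illustrative.

```python
import math

def speed_from_positions(pos_prev, pos_curr, interval_s):
    """Sketch of deriving a speed when no speed information is received:
    the distance between the previous and current second positions divided
    by the acquisition interval of the moving object information.
    Positions are assumed to be planar (x, y) coordinates in metres.
    """
    dx = pos_curr[0] - pos_prev[0]
    dy = pos_curr[1] - pos_prev[1]
    return math.hypot(dx, dy) / interval_s  # [m/s]

# The other moving object is then treated as moving when this speed is
# equal to or larger than the first threshold.
```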
When determining that the other moving object is moving, the identifier 17 identifies the position of the other moving object on an image acquired by the first acquirer 11 on the basis of the first position information and the first direction information that are acquired by the second acquirer 13 and the second position information included in the moving object information acquired by the third acquirer 15. The identifier 17 can identify a positional relation between the moving object 1 and the other moving object on an image captured by the image capturing unit 5 of the moving object 1 because the position and the direction of the moving object 1 and the position of the other moving object are known, thereby making it possible to identify the position of the other moving object on the image. The first position information, the first direction information, the moving object information, and the image are captured at substantially the same time.
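The following sketch illustrates one way such a projection could be carried out, assuming a planar world frame, a known vehicle-to-camera transform, and a pinhole intrinsic matrix; all of these are assumptions of the sketch, since the patent only requires that the positional relation between the moving object 1 and the other moving object be known.

```python
import numpy as np

def project_other_moving_object(own_pos, own_yaw, cam_T, other_pos, K):
    """Identify where the other moving object appears on the image:
    convert its world position into the frame of the image capturing unit
    using the moving object 1's position and direction, then project with
    a pinhole model. cam_T is an assumed 4x4 vehicle-to-camera transform.
    """
    # World -> vehicle frame of moving object 1 (2D rotation by its heading).
    dx, dy = other_pos[0] - own_pos[0], other_pos[1] - own_pos[1]
    c, s = np.cos(own_yaw), np.sin(own_yaw)
    p_vehicle = np.array([c * dx + s * dy, -s * dx + c * dy, 0.0])

    # Vehicle frame -> camera frame (homogeneous coordinates).
    p_cam = (cam_T @ np.append(p_vehicle, 1.0))[:3]

    # Pinhole projection onto the image plane.
    uvw = K @ p_cam
    return uvw[0] / uvw[2], uvw[1] / uvw[2]  # pixel coordinates (u, v)
```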
The identifier 17 further identifies the moving object region in which the other moving object is present on the image on the basis of the identified position of the other moving object on the image.
Specifically, the identifier 17 identifies, as the moving object region, a region having a predetermined size based on the identified position of the other moving object. For example, the identifier 17 identifies, as the moving object region, a region that has the identified position of the other moving object at its center and contains a predetermined number of pixels. The moving object region may have any shape such as rectangular or circular.
The identifier 17 may identify, as the moving object region, a region that has a size according to a distance between the moving object 1 and the other moving object based on the identified position of the other moving object. For example, when identifying the positions (center positions) of moving bodies 31 and 32 serving as the other moving bodies as illustrated in FIG. 4, the identifier 17 may identify a moving object region 41 having the position of the moving object 31 as the center thereof and a moving object region 42 having the position of the moving object 32 as the center thereof as illustrated in FIG. 5. In the examples illustrated in FIGS. 4 and 5, the size (the number of pixels) of the moving object region 42 is smaller than that of the moving object region 41 because the distance from the moving object 1 to the moving object 32 is larger (farther) than that from the moving object 1 to the moving object 31. The distance between the moving object 1 and the other moving object is obtained on the basis of the first position information and the corresponding second position information.
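A minimal sketch of choosing the region size from the distance is shown below; the inverse-distance scaling and the pixel constants are illustrative assumptions.

```python
def region_size_by_distance(distance_m, base_pixels=400, min_pixels=40):
    """Choose the moving object region size from the distance between the
    moving object 1 and the other moving object: the farther the other
    moving object, the smaller the region (fewer pixels).
    """
    side = max(min_pixels, int(base_pixels / max(distance_m, 1.0)))
    return side, side  # width and height of a square region centred on the object
```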
The identifier 17 may identify, as the moving object region, a region according to the model of the other moving object based on the identified position of the other moving object. The identifier 17 may identify, as the moving object region, a region according to the direction of the other moving object. The identifier 17 may identify, as the moving object region, a region according to the distance between the other moving object and the moving object 1. For example, when identifying the positions (center positions) of the moving objects 31 and 32 serving as the other moving objects as illustrated in FIG. 4, the identifier 17 may identify a moving object region 51 having the position of the moving object 31 as the center thereof and a moving object region 52 having the position of the moving object 32 as the center thereof as illustrated in FIG. 6.
In the example illustrated in FIG. 6, the moving object region 51 is identified according to the model and the direction of the moving object 31 and the distance between the moving object 31 and the moving object 1. Specifically, the moving object region 51 is identified in such a manner that the model and the direction of the moving object 31 are set in a three-dimensional space and the model is projected such that the moving object 31 is positioned at the center of the moving object region 51. Likewise, the moving object region 52 is identified according to the model and the direction of the moving object 32 and the distance between the moving object 32 and the moving object 1. Specifically, the moving object region 52 is identified in such a manner that the model and the direction of the moving object 32 are set in a three-dimensional space and the model is projected such that the moving object 32 is positioned at the center of the moving object region 52. The model of the other moving object can be obtained from the model information, while the direction of the other moving object can be obtained from the second direction information.
When the region according to the model of the moving object 31 and the distance between the moving object 31 and the moving object 1 is identified as the moving object region, the projection may be performed such that the depth of the model projected on the image gives the length of the rectangle in the lateral direction, while the height of the model projected on the image gives the length of the rectangle in the height direction.
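As a rough sketch of this projection (not the method prescribed by the embodiment), the rectangle can be scaled by the pinhole relation between the model dimensions taken from the model information and the distance between the two moving objects; the function name and parameter values below are hypothetical.

```python
def model_based_region(center_uv, model_size_m, distance_m, focal_px):
    """Return a rectangular moving object region (u0, v0, u1, v1).

    center_uv    : projected position of the other moving object on the image
    model_size_m : (depth, height) of the abstracted model in meters
    distance_m   : distance between the moving object and the other moving object
    focal_px     : focal length of the image capturing unit in pixels
    """
    depth_m, height_m = model_size_m
    # Pinhole relation: the projected size shrinks in proportion to the distance.
    width_px = focal_px * depth_m / distance_m    # depth of the model -> lateral length
    height_px = focal_px * height_m / distance_m  # height of the model -> vertical length
    u, v = center_uv
    return (u - width_px / 2, v - height_px / 2,
            u + width_px / 2, v + height_px / 2)

# Hypothetical: a 4.5 m long, 1.5 m tall vehicle seen 15 m ahead.
print(model_based_region((640, 360), (4.5, 1.5), 15.0, 700.0))
```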
The identifier 17 may identify, as the moving object region, a region according to the texture information about the other moving object based on the identified position of the other moving object. For example, when identifying the positions (center positions) of the moving objects 31 and 32 serving as the other moving objects as illustrated in FIG. 4, the identifier 17 may identify a moving object region 61 of the moving object 31 and a moving object region 62 of the moving object 32 as illustrated in FIG. 7.
In the example illustrated in FIG. 7, the moving object region 61 is identified according to the texture information about the moving object 31. Specifically, the moving object region 61 is a region around the position of the moving object 31 that has a color the same as or similar to the color indicated by the texture information. Likewise, the moving object region 62 is identified according to the texture information about the moving object 32. Specifically, the moving object region 62 is a region around the position of the moving object 32 that has a color the same as or similar to the color indicated by the texture information.
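A minimal sketch of such a color-similarity region, assuming the texture information reduces to a single representative BGR color and the image is a color array; the tolerance and search window sizes below are illustrative values, not part of the embodiment.

```python
import numpy as np

def texture_based_region(image, center_uv, texture_bgr, tol=30, search_half=100):
    """Return a boolean mask of pixels near center_uv whose color is the same as
    or similar to the color indicated by the texture information.

    image       : (H, W, 3) BGR color image
    center_uv   : identified position of the other moving object on the image
    texture_bgr : representative color taken from the texture information
    """
    h, w = image.shape[:2]
    u, v = int(center_uv[0]), int(center_uv[1])
    mask = np.zeros((h, w), dtype=bool)
    u0, u1 = max(0, u - search_half), min(w, u + search_half)
    v0, v1 = max(0, v - search_half), min(h, v + search_half)
    window = image[v0:v1, u0:u1].astype(np.int32)
    # A pixel is "similar" if every channel is within the tolerance.
    similar = np.all(np.abs(window - np.array(texture_bgr)) <= tol, axis=2)
    mask[v0:v1, u0:u1] = similar
    return mask
```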
The first estimator 18 estimates the position and the posture of the image capturing unit 5 on the basis of the multiple images acquired by the first acquirer 11. Specifically, the first estimator 18 extracts feature points from the respective images, tracks the feature points among the images, and estimates the position and the posture of the image capturing unit 5. In the embodiment, the first estimator 18 extracts the feature points from non-moving object regions other than the moving object regions in the respective images, tracks the feature points among the images, and estimates the position and the posture of the image capturing unit 5.
FIG. 8 illustrates the image captured at time t−1. FIG. 9 illustrates the image captured at time t. In the examples illustrated in FIGS. 8 and 9, the first estimator 18 extracts the feature points from the non-moving object region other than a moving object region 74 as illustrated in FIG. 8 and tracks the extracted feature points on the image illustrated in FIG. 9. In the examples, the first estimator 18 extracts feature points 71, 72, and 73 from the image illustrated in FIG. 8, tracks the respective feature points 71, 72, and 73 on the image illustrated in FIG. 9, and extracts feature points 81, 82, and 83, respectively, from the image illustrated in FIG. 9. The first estimator 18 thus obtains a time-series correspondence between the corresponding feature points on the two images for each feature point. On the basis of the obtained time-series correspondences of the respective feature points, the first estimator 18 obtains, by epipolar geometry, the current position and posture of the image capturing unit 5 relative to the position and the posture of the image capturing unit 5 at the previous time (a rotation R of the posture and a parallel movement T).
A point having a large luminance difference in an image (e.g., a point having a large luminance difference in both the lateral and longitudinal directions) can be chosen as a feature point. The feature points can be extracted by a known extraction technique such as the Harris corner detector.
In the embodiment, a technique that associates the feature points with each other is described as an example of tracking the feature points. Tracking the feature points also includes techniques in which a region surrounding a certain pixel is treated as the feature and pixels are associated with each other between such regions, or between a region and a feature point.
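The processing of the first estimator 18 described above can be sketched with common computer-vision primitives; the following is only an illustration, assuming OpenCV is available and that the moving object regions are supplied as a mask. It extracts feature points from the non-moving object regions, tracks them into the next image, and recovers the relative rotation R and translation T by epipolar geometry.

```python
import cv2
import numpy as np

def estimate_pose(prev_gray, curr_gray, nonmoving_mask_prev, K):
    """Estimate the relative rotation R and translation direction T of the camera.

    nonmoving_mask_prev : uint8 mask of the previous image, 0 inside the moving
                          object regions and 255 in the non-moving object regions.
    K                   : (3, 3) intrinsic matrix of the image capturing unit.
    """
    # Extract corner-like feature points (large luminance differences)
    # only from the non-moving object regions.
    pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                       qualityLevel=0.01, minDistance=10,
                                       mask=nonmoving_mask_prev)
    if pts_prev is None or len(pts_prev) < 8:
        return None  # not enough feature points for the estimation

    # Track the feature points into the current image (Lucas-Kanade optical flow).
    pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts_prev, None)
    good_prev = pts_prev[status.ravel() == 1]
    good_curr = pts_curr[status.ravel() == 1]
    if len(good_prev) < 8:
        return None

    # Epipolar geometry: essential matrix, then relative pose (R, T; T is up to scale).
    E, inliers = cv2.findEssentialMat(good_prev, good_curr, K,
                                      method=cv2.RANSAC, threshold=1.0)
    _, R, T, _ = cv2.recoverPose(E, good_prev, good_curr, K, mask=inliers)
    return R, T
```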
The searcher 19 searches for a plurality of sets of corresponding points among the non-moving object regions in the respective images acquired by the first acquirer 11. Specifically, the searcher 19 arranges search points in the non-moving object region on the image captured at the previous time, and searches for corresponding points, each corresponding to one of the search points, on the image captured at this time to determine the sets of the corresponding points.
The search points may be arranged in the non-moving object region on the image captured at the previous time such that they are placed entirely along the periphery of the non-moving object region or evenly over the entire non-moving object region. From the position and the posture of the image capturing unit 5 estimated by the first estimator 18, the searcher 19 can identify a range in which the search points are capable of being observed on the image captured at this time, and searches the identified range for the points corresponding to the search points. Specifically, the searcher 19 searches for the corresponding points on the epipolar lines (indicated with the arrows in FIG. 10) on which the search points can be observed, as illustrated in FIG. 10.
In the embodiment, the search points are arranged on the image captured at previous time and the corresponding points corresponding to the search points are searched for on the image captured at this time. The search points may be arranged on the image captured at this time and the corresponding points corresponding to the search points may be searched for on the image captured at previous time.
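One plausible realization of this epipolar search, given only as a sketch: a fundamental matrix F (derivable from the estimated pose and the intrinsics) gives each search point's epipolar line in the other image, and a small template around the search point is compared at candidate positions along that line. The matching score, patch size, and step are illustrative choices, not part of the embodiment.

```python
import cv2
import numpy as np

def search_corresponding_point(prev_img, curr_img, search_pt, F, patch=7, step=1):
    """Search along the epipolar line in the current image for the point that best
    matches a small patch around search_pt in the previous (grayscale uint8) image.

    F : (3, 3) fundamental matrix mapping points of the previous image to
        epipolar lines of the current image.
    """
    h, w = curr_img.shape[:2]
    u, v = int(search_pt[0]), int(search_pt[1])
    half = patch // 2
    tmpl = prev_img[v - half:v + half + 1, u - half:u + half + 1]
    if tmpl.shape != (patch, patch):
        return None  # too close to the image border

    # Epipolar line a*x + b*y + c = 0 of the search point in the current image.
    a, b, c = (F @ np.array([u, v, 1.0])).ravel()
    best, best_score = None, -1.0
    for x in range(half, w - half, step):
        if abs(b) < 1e-9:
            continue                      # (near-)vertical line; skipped in this sketch
        y = int(round(-(a * x + c) / b))
        if y < half or y >= h - half:
            continue
        cand = curr_img[y - half:y + half + 1, x - half:x + half + 1]
        score = cv2.matchTemplate(cand, tmpl, cv2.TM_CCOEFF_NORMED)[0, 0]
        if score > best_score:
            best, best_score = (x, y), score
    return best
```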
The measurer 21 performs three-dimensional measurement on the basis of the position and the posture, which are estimated by the first estimator 18, of the image capturing unit 5 and the multiple sets of the corresponding points searched for by the searcher 19 to obtain three-dimensional points. Specifically, the measurer 21 performs three-dimensional measurement based on a principle of triangulation using the position and the posture, which are estimated by the first estimator 18, of the image capturing unit 5 and the multiple sets of the corresponding points searched for by the searcher 19. As a result, the measurer 21 restores the depths of the respective corresponding points, thereby obtaining the three-dimensional points.
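A sketch of this triangulation step, assuming OpenCV and taking the camera frame at the previous time as the reference; the projection matrices are formed from the intrinsic matrix and the pose estimated by the first estimator 18.

```python
import cv2
import numpy as np

def triangulate(K, R, T, pts_prev, pts_curr):
    """Restore the depth of each set of corresponding points by triangulation.

    K        : (3, 3) intrinsic matrix of the image capturing unit
    R, T     : pose of the current camera relative to the previous one
    pts_prev : (N, 2) search points on the image captured at the previous time
    pts_curr : (N, 2) corresponding points on the image captured at this time
    Returns (N, 3) three-dimensional points in the previous camera frame.
    """
    P0 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])  # previous camera
    P1 = K @ np.hstack([R, T.reshape(3, 1)])           # current camera
    pts4d = cv2.triangulatePoints(P0, P1,
                                  pts_prev.T.astype(np.float64),
                                  pts_curr.T.astype(np.float64))
    return (pts4d[:3] / pts4d[3]).T   # homogeneous -> Euclidean
```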
The second estimator 23 estimates a movement plane on which the moving object 1 moves on the basis of the three-dimensional points obtained by the measurer 21. For estimating the movement plane, a plane detection technique using random sample consensus (RANSAC), which is a known technique, may be used, for example. Specifically, a set of three points having a height equal to or smaller than a second threshold is randomly sampled several times from the group of three-dimensional points obtained by the measurer 21, and out of the planes each formed by such three points, the plane that includes the largest number of three-dimensional points within a third threshold distance from the plane may be estimated as the movement plane.
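The RANSAC-style plane detection described above might be sketched as follows; the height-axis convention (the y coordinate taken as the height of a point) and the iteration count are assumptions for illustration.

```python
import numpy as np

def ransac_movement_plane(points, height_thresh, dist_thresh, iters=200, rng=None):
    """Estimate the movement plane by RANSAC over the three-dimensional point group.

    points        : (N, 3) three-dimensional points
    height_thresh : second threshold; sampled points must have a height at or below it
    dist_thresh   : third threshold; inlier distance to a candidate plane
    Returns (normal, d) of the plane n.x + d = 0 with the most inliers.
    """
    rng = rng or np.random.default_rng(0)
    low = points[points[:, 1] <= height_thresh]   # candidate points near the ground
    best, best_inliers = None, -1
    for _ in range(iters):
        sample = low[rng.choice(len(low), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue                               # degenerate (collinear) sample
        n = n / norm
        d = -n @ sample[0]
        inliers = np.sum(np.abs(points @ n + d) < dist_thresh)
        if inliers > best_inliers:
            best, best_inliers = (n, d), inliers
    return best
```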
The movement plane can be estimated highly accurately using the following technique.
The second estimator 23 sets a fourth threshold on the basis of a time-series change in the posture of the image capturing unit 5 (the moving object 1) estimated by the first estimator 18. Specifically, the second estimator 23 sets the fourth threshold such that the value of the fourth threshold is reduced with an increase in the time-series change in the posture of the image capturing unit 5. More specifically, the second estimator 23 sets, as the value of the fourth threshold, a value obtained by applying a value indicating the time-series change in the posture of the image capturing unit 5 to a monotonically decreasing function.
For example, let the monotonically decreasing function be y=−ax+b, where a and b are any desired variables. In this case, the second estimator 23 calculates the absolute value of the difference between a value indicating the posture of the image capturing unit 5 of the moving object 1 at the present time t and a value indicating the posture of the image capturing unit 5 of the moving object 1 at a calculation time t−P, which is a time P before the present time t. The second estimator 23 sets, as the fourth threshold, the value of y obtained by substituting the calculated absolute value of the difference into x of the monotonically decreasing function.
The calculation time t−P is preferably the calculation time of the posture of the image capturing unit 5 at the previous time, but is not limited thereto; it is sufficient that it be a calculation time of the posture of the image capturing unit 5 at or before the previous time. The absolute value of the difference is, for example, the sum of the absolute values of the differences in roll, pitch, and yaw, but is not limited thereto.
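A small sketch of this threshold setting, using the example function y = −ax + b; clamping the result to a positive minimum is an added safeguard for the sketch, not part of the embodiment, and the parameter values are illustrative.

```python
def fourth_threshold(posture_now, posture_prev, a=0.5, b=20.0, t_min=2.0):
    """Set the fourth threshold from the time-series change in posture.

    posture_now/prev : (roll, pitch, yaw) of the image capturing unit at the
                       present time and at the earlier calculation time
    a, b, t_min      : illustrative parameters of the monotonically decreasing
                       function y = -a * x + b, clamped to a small positive value
    """
    # Sum of the absolute differences in roll, pitch, and yaw.
    x = sum(abs(n - p) for n, p in zip(posture_now, posture_prev))
    return max(t_min, -a * x + b)  # larger posture change -> smaller threshold

print(fourth_threshold((0.0, 0.1, 1.2), (0.0, 0.0, 1.0)))  # small change -> near b
```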
The second estimator 23 extracts a plurality of three-dimensional points each having a distance from the moving object 1 equal to or smaller than the set fourth threshold in the movement direction of the moving object 1 out of the three-dimensional point group obtained by the measurer 21.
FIG. 11 is an explanatory view illustrating an exemplary technique for extracting the three-dimensional points in the embodiment, and illustrates a three-dimensional point group 101 obtained by the measurer 21 on the yz plane. The movement direction of the moving object 1 is the z-axis direction (specifically, the +z direction). In the example illustrated in FIG. 11, the second estimator 23 extracts a plurality of three-dimensional points 102 each having a z coordinate value equal to or smaller than the set fourth threshold T out of the three-dimensional point group 101.
The second estimator 23 divides a space in which the extracted three-dimensional points are positioned into a plurality of divided spaces in the movement direction of the moving object 1.
FIG. 12 is an explanatory view illustrating an exemplary technique for dividing the space in the embodiment, and illustrates the three-dimensional point group 101 and the three-dimensional points 102 on the yz plane. In the example illustrated in FIG. 12, the second estimator 23 identifies a minimum value L and a maximum value U out of the z coordinate values of the extracted three-dimensional points 102, and divides the space having a z coordinate value equal to or larger than L and equal to or smaller than U equally into k (k≥2) parts in the z-axis direction to obtain k block spaces. While FIG. 12 exemplifies the case of U=T, it is sufficient if U satisfies the relation U≤T.
The second estimator 23 selects, for each divided space, a representative point out of the three-dimensional points included in the divided space. Specifically, the second estimator 23 selects, as the representative point, the three-dimensional point at the lowest position in the vertical direction out of the three-dimensional points included in the divided space.
FIG. 13 is an explanatory view illustrating an exemplary technique for selecting the representative point in the embodiment, and illustrates the three-dimensional points 102 and the block spaces after the division on the yz plane. In the example illustrated in FIG. 13, the second estimator 23 selects, as the representative point, the three-dimensional point having the maximum y coordinate value in each block space. In the first block space, the second estimator 23 selects the three-dimensional point 103-1 as the representative point; in the kth block space, it selects the three-dimensional point 103-k as the representative point.
The second estimator 23 estimates a plane approximated with the selected representative points as the movement plane on which the moving object 1 moves.
FIG. 14 is an explanatory view illustrating an exemplary technique for estimating the movement plane in the embodiment, and illustrates the block spaces after the division and the k selected representative points 103 on the yz plane. In the example illustrated in FIG. 14, the second estimator 23 estimates a plane 104 (illustrated as the straight line on the yz plane in FIG. 14) approximated with the k selected representative points 103 as the movement plane on which the moving object 1 moves.
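Putting the extraction, division, representative selection, and approximation together, a sketch of this movement plane estimation might look as follows; the axis conventions (z as the movement direction, larger y as lower in the vertical direction, matching FIGS. 11 to 14) and the least-squares plane form are assumptions for illustration.

```python
import numpy as np

def estimate_movement_plane(points, fourth_threshold, k=10):
    """Estimate the movement plane from representative points, as described above.

    points : (N, 3) three-dimensional points with axes (x, y, z); z is the movement
             direction and larger y is taken as lower in the vertical direction.
    Returns coefficients (a, b, c) of the plane y = a*x + b*z + c fitted to the
    representative points by least squares.
    """
    # Extract points whose distance in the movement direction is within the threshold.
    near = points[points[:, 2] <= fourth_threshold]

    # Divide the z range [L, U] equally into k block spaces.
    zmin, zmax = near[:, 2].min(), near[:, 2].max()
    edges = np.linspace(zmin, zmax, k + 1)

    reps = []
    for i in range(k):
        sel = near[(near[:, 2] >= edges[i]) & (near[:, 2] <= edges[i + 1])]
        if len(sel):
            reps.append(sel[np.argmax(sel[:, 1])])  # lowest point = maximum y
    reps = np.array(reps)

    # Plane approximated with the representative points (least squares).
    A = np.column_stack([reps[:, 0], reps[:, 2], np.ones(len(reps))])
    coeffs, *_ = np.linalg.lstsq(A, reps[:, 1], rcond=None)
    return coeffs  # y = a*x + b*z + c
```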
The detector 25 detects an obstacle on the basis of the three-dimensional point group obtained by the measurer 21 and the movement plane estimated by the second estimator 23. Specifically, the detector 25 detects, as an obstacle, the three-dimensional points that are not present on the movement plane out of the three-dimensional point group obtained by the measurer 21.
FIG. 15 is an explanatory view illustrating an exemplary technique for detecting the obstacle in the embodiment, and illustrates the three-dimensional point group 101 obtained by the measurer 21 and the movement plane 104 estimated by the second estimator 23 on the yz plane. In the example illustrated in FIG. 15, the detector 25 calculates a distance d from the movement plane 104 in the y-axis direction for each of the three-dimensional points included in the three-dimensional point group 101, and detects the three-dimensional point having a distance d equal to or larger than an error as the three-dimensional point included in the obstacle. The error is a measurement error in the three-dimensional measurement performed by the measurer 21.
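A sketch of this detection, reusing the plane coefficients from the sketch above; the plane form y = a*x + b*z + c and the axis conventions are assumptions carried over from that sketch.

```python
import numpy as np

def detect_obstacle_points(points, plane_coeffs, measurement_error):
    """Detect the three-dimensional points that are not on the movement plane.

    plane_coeffs      : (a, b, c) of the plane y = a*x + b*z + c estimated above
    measurement_error : error of the three-dimensional measurement; points whose
                        distance d from the plane in the y direction is at least
                        this value are taken as obstacle points.
    """
    a, b, c = plane_coeffs
    y_plane = a * points[:, 0] + b * points[:, 2] + c
    d = np.abs(points[:, 1] - y_plane)   # distance from the plane in the y direction
    return points[d >= measurement_error]
```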
The output unit 27 performs output on the basis of the detection result of the detector 25. For example, the output unit 27 causes a speaker (not illustrated) installed in the moving object 1 to output the position of the detected obstacle as a voice, or a display installed in the moving object 1 to display the position of the detected obstacle on an image acquired by the first acquirer 11.
FIG. 16 is a flowchart illustrating an exemplary flow of a procedure of the processing in the embodiment.
The first acquirer 11 sequentially acquires images captured in time series from the image capturing unit 5 while the second acquirer 13 sequentially acquires the first position information and the first direction information (step S101).
The third acquirer 15 acquires the moving object information about the other moving object that moves in the surroundings of the moving object 1 (step S103).
The identifier 17 identifies the moving object region in which the other moving object is present for each of the images acquired by the first acquirer 11 on the basis of the first position information and the first direction information that are acquired by the second acquirer 13 and the moving object information acquired by the third acquirer 15 (step S105).
The first estimator 18 extracts the feature points from the non-moving object regions in the respective images acquired by the first acquirer 11 in order to estimate the position and the posture of the image capturing unit 5. If feature points sufficient for the estimation are extracted (Yes at step S107), the first estimator 18 tracks the feature points among the images (step S109) and estimates the position and the posture of the image capturing unit 5 (step S111).
If the feature points sufficient for estimation are not extracted (No at step S107) or if the estimation of the position and the posture of the image capturing unit 5 fails (No at step S113), the processing ends.
If the estimation of the position and the posture of the image capturing unit 5 is successful (Yes at step S113), the searcher 19 searches for a plurality of sets of the corresponding points among the non-moving object regions in the respective images acquired by the first acquirer 11 (step S114).
The measurer 21 performs three-dimensional measurement on the basis of the position and the posture, which are estimated by the first estimator 18, of the image capturing unit 5 and the multiple sets of the corresponding points searched for by the searcher 19 to obtain the three-dimensional points (step S115).
The second estimator 23 estimates the movement plane on which the moving object 1 moves on the basis of the three-dimensional point group obtained by the measurer 21 (step S117).
The detector 25 detects an obstacle on the basis of the three-dimensional point group obtained by the measurer 21 and the movement plane estimated by the second estimator 23 (step S119).
The output unit 27 performs output on the basis of the detection result of the detector 25 (step S121).
When a set of corresponding points is searched for on the image on the basis of a moving object 200 that is moving and serves as the other moving object as illustrated in FIG. 17, the position of a search point 201 of the moving object 200 at the previous image capturing time and the position of a corresponding point 202 of the moving object 200 at this image capturing time differ from each other. If the three-dimensional measurement is performed using the set of the search point 201 and the corresponding point 202 as a set of corresponding points, the three-dimensional point (specifically, the depth) cannot be accurately obtained by the principle of triangulation.
In contrast, when a set of corresponding points is searched for on the image on the basis of a non-moving object 210 that does not move, as illustrated in FIG. 18, a difference hardly occurs between the position of a search point 211 on the non-moving object 210 at the previous image capturing time and the position of the corresponding point 211 on the non-moving object 210 at this image capturing time. As a result, when the three-dimensional measurement is performed using the set of the search point 211 and the corresponding point 211 as a set of corresponding points, the three-dimensional point (specifically, the depth) can be accurately obtained by the principle of triangulation.
The embodiment searches for the sets of the corresponding points among a plurality of images on the basis of the non-moving object regions other than the moving object regions in the respective images, thereby making it possible to search for the sets of the corresponding points excluding the moving object (other moving object). As a result, deterioration of accuracy of three-dimensional measurement can be reduced.
The embodiment can reduce the deterioration of accuracy of three-dimensional measurement as described above. When an obstacle is detected using the three-dimensional points thus obtained, detection accuracy of the obstacle can be increased. For example, a tiny obstacle having a height about 10 cm can be accurately detected.
In the embodiment, the threshold is reduced as the change in the posture of the moving object increases, so that the accuracy of three-dimensional measurement hardly deteriorates due to the change in the posture of the moving object even when the change is large. Furthermore, the movement plane of the moving object is estimated using the three-dimensional points near the moving object, thereby making it possible to increase the accuracy of estimating the movement plane.
The embodiment can increase the accuracy of estimating the movement plane as described above. When an obstacle is detected using the movement plane thus estimated, the detection accuracy of the obstacle can be further increased.
In the embodiment, the feature points are extracted from the non-moving object regions when the position and the posture of the image capturing unit 5 are estimated. As a result, the positions of the feature points to be tracked hardly shift. Consequently, the deterioration of accuracy of estimating the position and the posture of the image capturing unit 5 can also be reduced.
Modification
In the embodiment, the three-dimensional points each having a distance (distance in the height direction) equal to or larger than the error from the movement plane are detected as the three-dimensional points included in the obstacle. The three-dimensional points each having a distance (distance in the height direction) equal to or larger than the error from the movement plane and equal to or smaller than a fifth threshold may be detected as the three-dimensional points included in the obstacle. In this case, when the fifth threshold is set to the height (distance) nearly the same as the height of the moving object 1, obstacles such as a traffic signal and a pedestrian bridge are prevented from being mistakenly detected as the obstacles.
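A sketch of this modified detection, again reusing the illustrative plane form from the earlier sketches; the fifth threshold simply bounds the accepted distance band from above so that tall structures are not reported.

```python
import numpy as np

def detect_obstacle_points_banded(points, plane_coeffs, measurement_error, fifth_threshold):
    """Variant of the detection above: keep only points whose distance from the
    movement plane lies between the measurement error and the fifth threshold,
    so that high structures such as a traffic signal or a pedestrian bridge are
    not mistakenly detected as obstacles."""
    a, b, c = plane_coeffs
    d = np.abs(points[:, 1] - (a * points[:, 0] + b * points[:, 2] + c))
    return points[(d >= measurement_error) & (d <= fifth_threshold)]
```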
Hardware Structure
FIG. 19 is a schematic diagram illustrating an exemplary hardware structure of the measurement device of the embodiment and the modification. As illustrated in FIG. 19, the measurement device of the embodiment and the modification has a hardware structure utilizing a normal computer. Specifically, the measurement device includes a control device 901 such as a CPU, a main storage device 902 such as a read only memory (ROM) or a random access memory (RAM), an auxiliary storage device 903 such as a hard disk drive (HDD) or a solid state drive (SSD), a display device 904 such as a display, an input device 905 such as a keyboard or a mouse, and a communication device 906 such as a communication interface.
A program executed by the measurement device in the embodiment and the modification is stored and provided in a computer-readable storage medium, which may be provided as a computer program product, such as a compact disc read only memory (CD-ROM), a compact disc recordable (CD-R), a memory card, a digital versatile disc (DVD), or a flexible disk (FD), as an installable or executable file.
The program executed by the measurement device in the embodiment and the modification may be stored in a computer connected to a network such as the Internet and provided by being downloaded via the network. Furthermore, the program executed by the measurement device in the embodiment and the modification may be provided or distributed via a network such as the Internet. The program executed by measurement device in the embodiment and the modification may be embedded and provided in a ROM, for example.
The program executed by the measurement device in the embodiment and the modification has a module structure that achieves the respective units described above in a computer. In practical hardware, the CPU reads out the program from the ROM or the HDD to the RAM so as to execute the program, so that the respective units described above are achieved in the computer.
The present invention is not directly limited to the embodiment and the modification. The invention can be embodied at the implementation stage by modifying the components without departing from the spirit and scope of the invention. In addition, various aspects of the invention can be made by properly combining the components of the embodiment and the modification. For example, some components may be eliminated from all of the components of the embodiment and the modification. Furthermore, the components of the different embodiments and modifications may be properly combined.
For example, the steps in the flowchart of the embodiment may be changed in execution order, some steps may be executed simultaneously, or the steps may be executed in different order every implementation without departing from their roles.
The embodiment and the modification can prevent the deterioration of accuracy of three-dimensional measurement even when another moving object is present in a plurality of images captured in time series.
While certain embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the inventions. Indeed, the novel embodiments described herein may be embodied in a variety of other forms; furthermore, various omissions, substitutions and changes in the form of the embodiments described herein may be made without departing from the spirit of the inventions. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the scope and spirit of the inventions.

Claims (13)

What is claimed is:
1. A measurement device comprising:
processing circuitry configured to:
acquire a plurality of images captured in time series by an image capturing unit installed in a moving object;
acquire first position information that indicates a position of the moving object and first direction information that indicates a direction of the moving object;
acquire, through a communication device separated from the moving object by performing road-vehicle communication, moving object information that includes second position information indicating a position of other moving object moving in surroundings of the moving object;
identify a moving object region in which the other moving object is present for each of the images, based on the first position information, the first direction information, and the moving object information;
estimate a position and a posture of the image capturing unit based on the images;
search for a plurality of sets of corresponding points among non-moving object regions other than the moving object regions in the respective images; and
perform three-dimensional measurement based on the position and the posture of the image capturing unit and the sets of the corresponding points;
estimate a movement plane on which the moving object moves based on a three-dimensional point group that is obtained as a result of performing the three-dimensional measurement; and
detect an obstacle based on the three-dimensional point group and the movement plane,
wherein the three-dimensional point group is obtained by extracting three-dimensional points each having a distance from the moving object equal to or smaller than a threshold in a movement direction of the moving object, the threshold being set such that with an increase in a time series change in the posture of the image capturing unit, the threshold is reduced.
2. The device according to claim 1, wherein
in estimating, the processing circuitry is configured to extract feature points from the respective images, tracks the feature points among the images, and estimates the position and the posture of the image capturing unit.
3. The device according to claim 2, wherein
in estimating, the processing circuitry is configured to extract the feature points from the non-moving object regions of the respective images, tracks the feature points among the images, and estimates the position and the posture of the image capturing unit.
4. The device according to claim 1, wherein the non-moving object regions each have a predetermined size based on the position of the other moving object.
5. The device according to claim 1, wherein the non-moving object regions each have a size according to a distance between the moving object and the other moving object based on the position of the other moving object.
6. The device according to claim 1, wherein
the moving object information further includes model information that indicates a model obtained by abstracting a shape of the other moving object, and
the non-moving object regions each correspond to a region calculated based at least in part on the model of the other moving object and the position of the other moving object.
7. The device according to claim 6, wherein
the moving object information further includes second direction information that indicates a direction of the other moving object, and
the non-moving object regions each further correspond to the region calculated based at least in part on the direction of the other moving object and the position of the other moving object.
8. The device according to claim 1, wherein
the moving object information further includes texture information that indicates at least one of a color and a pattern of the other moving object, and
the non-moving object regions each correspond to a region calculated based at least in part on the texture information about the other moving object and the position of the other moving object.
9. The device according to claim 1, wherein
in detecting, the processing circuitry is configured to detect, as the obstacle, a three-dimensional point that is not present on the movement plane out of the three-dimensional point group.
10. The device according to claim 1, wherein
in acquiring the moving object information, the processing circuitry is configured to acquire the moving object information from the other moving object.
11. The device according to claim 1, wherein
in acquiring the moving object information, the processing circuitry is configured to acquire the moving object information from a monitoring device that monitors the moving object and the other moving object.
12. A measurement method comprising:
acquiring a plurality of images captured in time series by an image capturing unit installed in a moving object;
acquiring first position information that indicates a position of the moving object and first direction information that indicates a direction of the moving object;
acquiring, through a communication device separated from the moving object by performing road-vehicle communication, moving object information that includes second position information indicating a position of other moving object moving in surroundings of the moving object;
identifying a moving object region in which the other moving object is present for each of the images based on the first position information, the first direction information, and the moving object information;
estimating a position and a posture of the image capturing unit based on the images;
searching for a plurality of sets of corresponding points among non-moving object regions other than the moving object regions in the respective images; and
performing three-dimensional measurement based on the position and the posture of the image capturing unit and the sets of the corresponding points;
estimating a movement plane on which the moving object moves based on a three-dimensional point group that is obtained as a result of performing the three-dimensional measurement; and
detecting an obstacle based on the three-dimensional point group and the movement plane,
wherein the three-dimensional point group is obtained by extracting three-dimensional points each having a distance from the moving object equal to or smaller than a threshold in a movement direction of the moving object, the threshold being set such that with an increase in a time series change in the posture of the image capturing unit, the threshold is reduced.
13. A measurement device comprising:
a processor; and
a memory that stores processor-executable instructions that, when executed by the processor, cause the processor to:
acquire a plurality of images captured in time series by an image capturing unit installed in a moving object;
acquire first position information that indicates a position of the moving object and first direction information that indicates a direction of the moving object;
acquire, through a communication device separated from the moving object by performing road-vehicle communication, moving object information that includes second position information indicating a position of other moving object moving in surroundings of the moving object;
identify a moving object region in which the other moving object is present for each of the images, based on the first position information, the first direction information, and the moving object information;
estimate a position and a posture of the image capturing unit based on the images;
search for a plurality of sets of corresponding points among non-moving object regions other than the moving object regions in the respective images; and
perform three-dimensional measurement based on the position and the posture of the image capturing unit and the sets of the corresponding points;
estimate a movement plane on which the moving object moves based on a three-dimensional point group that is obtained as a result of performing the three-dimensional measurement; and
detect an obstacle based on the three-dimensional point group and the movement plane,
wherein the three-dimensional point group is obtained by extracting three-dimensional points each having a distance from the moving object equal to or smaller than a threshold in a movement direction of the moving object, the threshold being set such that with an increase in a time series change in the posture of the image capturing unit, the threshold is reduced.
US15/257,221 2015-09-17 2016-09-06 Measurement device and measurement method Active 2037-03-11 US10650535B2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2015183874A JP6499047B2 (en) 2015-09-17 2015-09-17 Measuring device, method and program
JP2015-183874 2015-09-17

Publications (2)

Publication Number Publication Date
US20170084048A1 US20170084048A1 (en) 2017-03-23
US10650535B2 true US10650535B2 (en) 2020-05-12

Family

ID=58282757

Family Applications (1)

Application Number Title Priority Date Filing Date
US15/257,221 Active 2037-03-11 US10650535B2 (en) 2015-09-17 2016-09-06 Measurement device and measurement method

Country Status (2)

Country Link
US (1) US10650535B2 (en)
JP (1) JP6499047B2 (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3267421A4 (en) * 2015-03-03 2018-11-14 Pioneer Corporation Route searching device, control method, program, and storage medium
JP6757690B2 (en) * 2017-03-31 2020-09-23 鹿島建設株式会社 Inspection support equipment, inspection support methods and programs
JP6815935B2 (en) * 2017-06-05 2021-01-20 日立オートモティブシステムズ株式会社 Position estimator
US10453150B2 (en) * 2017-06-16 2019-10-22 Nauto, Inc. System and method for adverse vehicle event determination
WO2019098318A1 (en) * 2017-11-20 2019-05-23 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ Three-dimensional point group data generation method, position estimation method, three-dimensional point group data generation device, and position estimation device
GB201813740D0 (en) * 2018-08-23 2018-10-10 Ethersec Ind Ltd Method of apparatus for volumetric video analytics
JP6802944B1 (en) * 2020-08-31 2020-12-23 鹿島建設株式会社 Inspection support equipment, inspection support methods and programs
CN114308972B (en) * 2021-12-23 2022-12-20 临沂矿业集团有限责任公司 Intelligent mining movable dust removal device and control method

Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6535114B1 (en) * 2000-03-22 2003-03-18 Toyota Jidosha Kabushiki Kaisha Method and apparatus for environment recognition
JP2004203068A (en) 2002-12-24 2004-07-22 Aisin Seiki Co Ltd Mobile body periphery monitor device
US20050244034A1 (en) * 2004-04-30 2005-11-03 Visteon Global Technologies, Inc. Single camera system and method for range and lateral position measurement of a preceding vehicle
US20070236561A1 (en) * 2006-04-06 2007-10-11 Topcon Corporation Image processing device and method
JP2009186353A (en) 2008-02-07 2009-08-20 Fujitsu Ten Ltd Object detecting device and object detecting method
US20100226544A1 (en) * 2007-12-25 2010-09-09 Toyota Jidosha Kabushiki Kaisha Moving state estimating device
US20110069951A1 (en) 2009-09-19 2011-03-24 Samsung Electronics Co., Ltd. Apparatus and method for supporting mobility of a mobile terminal that performs visible light communication
US8456527B2 (en) * 2007-07-27 2013-06-04 Sportvision, Inc. Detecting an object in an image using templates indexed to location or camera sensors
JP2013134609A (en) 2011-12-26 2013-07-08 Toyota Central R&D Labs Inc Curbstone detection device and curbstone detection program
US20130236107A1 (en) * 2012-03-09 2013-09-12 Kabushiki Kaisha Topcon Moving image processing device, moving image processing method, and recording medium having moving image processing program
JP2014142241A (en) 2013-01-23 2014-08-07 Denso Corp Three-dimensional position estimation device, vehicle controller, and three-dimensional position estimation method
US20140309841A1 (en) * 2011-11-22 2014-10-16 Hitachi, Ltd. Autonomous Mobile System
US20150057871A1 (en) * 2012-04-05 2015-02-26 Yukihiko ONO Map data creation device, autonomous movement system and autonomous movement control device
US20150324651A1 (en) * 2012-07-27 2015-11-12 Nissan Motor Co., Ltd. Three-dimensional object detection device and three-dimensional object detection method
US20150334269A1 (en) * 2014-05-19 2015-11-19 Soichiro Yokota Processing apparatus, processing system, and processing method
US20160133128A1 (en) * 2014-11-11 2016-05-12 Hyundai Mobis Co., Ltd System and method for correcting position information of surrounding vehicle
US9430850B1 (en) * 2015-04-02 2016-08-30 Politechnika Poznanska System and method for object dimension estimation using 3D models
US20160292905A1 (en) * 2015-04-01 2016-10-06 Vayavision, Ltd. Generating 3-dimensional maps of a scene using passive and active measurements

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4192680B2 (en) * 2003-05-28 2008-12-10 アイシン精機株式会社 Moving object periphery monitoring device
JP4232167B1 (en) * 2007-08-27 2009-03-04 三菱電機株式会社 Object identification device, object identification method, and object identification program
JP2011253241A (en) * 2010-05-31 2011-12-15 Toyota Motor Corp Object detector
FR3014553A1 (en) * 2013-12-11 2015-06-12 Parrot METHOD FOR ANGULAR CALIBRATION OF THE POSITION OF AN ON-BOARD VIDEO CAMERA IN A MOTOR VEHICLE
JP6398218B2 (en) * 2014-02-24 2018-10-03 日産自動車株式会社 Self-position calculation device and self-position calculation method

Patent Citations (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6535114B1 (en) * 2000-03-22 2003-03-18 Toyota Jidosha Kabushiki Kaisha Method and apparatus for environment recognition
JP2004203068A (en) 2002-12-24 2004-07-22 Aisin Seiki Co Ltd Mobile body periphery monitor device
US20050244034A1 (en) * 2004-04-30 2005-11-03 Visteon Global Technologies, Inc. Single camera system and method for range and lateral position measurement of a preceding vehicle
US7561720B2 (en) * 2004-04-30 2009-07-14 Visteon Global Technologies, Inc. Single camera system and method for range and lateral position measurement of a preceding vehicle
US20070236561A1 (en) * 2006-04-06 2007-10-11 Topcon Corporation Image processing device and method
US8456527B2 (en) * 2007-07-27 2013-06-04 Sportvision, Inc. Detecting an object in an image using templates indexed to location or camera sensors
US20100226544A1 (en) * 2007-12-25 2010-09-09 Toyota Jidosha Kabushiki Kaisha Moving state estimating device
US8559674B2 (en) * 2007-12-25 2013-10-15 Toyota Jidosha Kabushiki Kaisha Moving state estimating device
JP2009186353A (en) 2008-02-07 2009-08-20 Fujitsu Ten Ltd Object detecting device and object detecting method
US20110069951A1 (en) 2009-09-19 2011-03-24 Samsung Electronics Co., Ltd. Apparatus and method for supporting mobility of a mobile terminal that performs visible light communication
WO2011034390A2 (en) 2009-09-19 2011-03-24 Samsung Electronics Co., Ltd. Apparatus and method for supporting mobility of a mobile terminal that performs visible light communication
JP2013504971A (en) 2009-09-19 2013-02-07 サムスン エレクトロニクス カンパニー リミテッド Apparatus and method for supporting mobility of a mobile terminal performing visible light communication
US20130279917A1 (en) 2009-09-19 2013-10-24 Samsung Electronics Co., Ltd. Apparatus and method for supporting mobilility of a mobile terminal that performs visible light communication
US20140309841A1 (en) * 2011-11-22 2014-10-16 Hitachi, Ltd. Autonomous Mobile System
JP2013134609A (en) 2011-12-26 2013-07-08 Toyota Central R&D Labs Inc Curbstone detection device and curbstone detection program
US20130236107A1 (en) * 2012-03-09 2013-09-12 Kabushiki Kaisha Topcon Moving image processing device, moving image processing method, and recording medium having moving image processing program
US20150057871A1 (en) * 2012-04-05 2015-02-26 Yukihiko ONO Map data creation device, autonomous movement system and autonomous movement control device
US20150324651A1 (en) * 2012-07-27 2015-11-12 Nissan Motor Co., Ltd. Three-dimensional object detection device and three-dimensional object detection method
US9589193B2 (en) * 2012-07-27 2017-03-07 Nissan Motor Co., Ltd Three-dimensional object detection device and three-dimensional object detection method
JP2014142241A (en) 2013-01-23 2014-08-07 Denso Corp Three-dimensional position estimation device, vehicle controller, and three-dimensional position estimation method
US20150334269A1 (en) * 2014-05-19 2015-11-19 Soichiro Yokota Processing apparatus, processing system, and processing method
US20160133128A1 (en) * 2014-11-11 2016-05-12 Hyundai Mobis Co., Ltd System and method for correcting position information of surrounding vehicle
US20160292905A1 (en) * 2015-04-01 2016-10-06 Vayavision, Ltd. Generating 3-dimensional maps of a scene using passive and active measurements
US10024965B2 (en) * 2015-04-01 2018-07-17 Vayavision, Ltd. Generating 3-dimensional maps of a scene using passive and active measurements
US9430850B1 (en) * 2015-04-02 2016-08-30 Politechnika Poznanska System and method for object dimension estimation using 3D models

Also Published As

Publication number Publication date
JP2017058274A (en) 2017-03-23
JP6499047B2 (en) 2019-04-10
US20170084048A1 (en) 2017-03-23

Similar Documents

Publication Publication Date Title
US10650535B2 (en) Measurement device and measurement method
US10762643B2 (en) Method for evaluating image data of a vehicle camera
KR101725060B1 (en) Apparatus for recognizing location mobile robot using key point based on gradient and method thereof
US9589326B2 (en) Depth image processing apparatus and method based on camera pose conversion
JP5587930B2 (en) Distance calculation device and distance calculation method
KR101776621B1 (en) Apparatus for recognizing location mobile robot using edge based refinement and method thereof
JP4702569B2 (en) Image processing apparatus for vehicle
KR101776620B1 (en) Apparatus for recognizing location mobile robot using search based correlative matching and method thereof
US9001222B2 (en) Image processing device, image processing method, and program for image processing for correcting displacement between pictures obtained by temporally-continuous capturing
EP2824425B1 (en) Moving-object position/attitude estimation apparatus and method for estimating position/attitude of moving object
US20170337701A1 (en) Method and system for 3d capture based on structure from motion with simplified pose detection
US20110205338A1 (en) Apparatus for estimating position of mobile robot and method thereof
JP2017526082A (en) Non-transitory computer-readable medium encoded with computer program code for causing a motion estimation method, a moving body, and a processor to execute the motion estimation method
KR101551026B1 (en) Method of tracking vehicle
US10521915B2 (en) Distance measurement device and distance measurement method
EP3324359B1 (en) Image processing device and image processing method
CN108369739B (en) Object detection device and object detection method
RU2725561C2 (en) Method and device for detection of lanes
JP7343054B2 (en) Location estimation method, location estimation device, and location estimation program
JP2014526736A (en) Resolving ambiguity of homography decomposition based on orientation sensor
US10789727B2 (en) Information processing apparatus and non-transitory recording medium storing thereon a computer program
KR102425270B1 (en) Method for estimating position in three-dimensional space
JP6699323B2 (en) Three-dimensional measuring device and three-dimensional measuring method for train equipment
JP2014238409A (en) Distance calculation device and distance calculation method
CA2994645C (en) Step detection device and step detection method

Legal Events

Date Code Title Description
AS Assignment

Owner name: KABUSHIKI KAISHA TOSHIBA, JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:TASAKI, TSUYOSHI;NISHIYAMA, MANABU;SIGNING DATES FROM 20161214 TO 20161215;REEL/FRAME:040870/0729

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STCF Information on status: patent grant

Free format text: PATENTED CASE

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4