WO2019233728A1 - Method and computer based location determination system for determining the location of at least one structure by means of an imaging sensor


Info

Publication number
WO2019233728A1
WO2019233728A1 (PCT/EP2019/062642; EP2019062642W)
Authority
WO
WIPO (PCT)
Prior art keywords
location
imaging sensor
sensor
images
determining
Prior art date
Application number
PCT/EP2019/062642
Other languages
French (fr)
Inventor
Ciaran Hughes
Anto MICHAEL
Jean Francois Bariant
Ahmed Ali
Jonathan Horgan
Original Assignee
Connaught Electronics Ltd.
Priority date
2018-06-05
Filing date
2019-05-16
Publication date
Application filed by Connaught Electronics Ltd.
Publication of WO2019233728A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/55 Depth or shape recovery from multiple images
    • G06T7/579 Depth or shape recovery from multiple images from motion
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30248 Vehicle exterior or interior
    • G06T2207/30252 Vehicle exterior; Vicinity of vehicle


Abstract

The invention relates to a method for determining the location of at least one structure (26) by means of an imaging sensor (18), in particular an imaging sensor (18) mounted on a vehicle (10), wherein the location of the individual structure (26) is determined from images (24) created by the imaging sensor (18) from at least two different sensor positions (P1, P2), which sensor positions (P1, P2) have a distance (D) larger than or equal to a minimum distance (Dth) for determining the location of said structure (26). The imaging sensor (18) generates a series of consecutive images (24) while moving, wherein the distance (D) of the sensor positions (P1, P2) is verified by recognizing the structure (26) in the images (24) and determining the location of the structure (26) relative to the sensor position (P1, P2) and an orientation of the imaging sensor (18) for each of the images (24). The invention further relates to a corresponding computer program product and a corresponding computer based location determination system (14).

Description

Method and computer based location determination system for determining the location of at least one structure by means of an imaging sensor
The invention relates to a method for determining the location of at least one structure by means of an imaging sensor, in particular an imaging sensor mounted on a vehicle, wherein the location of the individual structure is determined from images created by the imaging sensor from at least two different sensor positions, which sensor positions have a distance larger than or equal to a minimum distance for determining the location of said structure.
The invention further relates to a corresponding computer program product and a corresponding computer based location determination system.
The determination of the location of structures from motion and/or 3D reconstruction by use of an imaging sensor like a camera requires motion of said imaging sensor before an estimation of the environmental structure is possible. Thus, in an application where structure from motion or 3D reconstruction is needed, the application must allow sufficient motion of the imaging sensor before commencing 3D reconstruction. Typically, this is done by enforcing a minimum baseline (minimum distance Dth) between the two sensor positions before triggering the 3D reconstruction, i.e. by requiring that the imaging sensor undergoes a minimum amount of translational movement before performing the reconstruction. This approach, connected with the term "degeneracy", is discussed in more detail in the textbook: Hartley, R.; Zisserman, A.: "Multiple View Geometry in Computer Vision, Second Edition", Cambridge University Press, March 25, 2004.
It is an object of the invention to provide an accordingly improved method for determining the location of at least one structure by means of a moving imaging sensor, a corresponding improved computer program product, and a corresponding improved computer based location determination system.
This object is achieved by a method, a computer program product, as well as a corresponding computer based location determination system having the features according to the respective independent claims. Advantageous implementations of the invention are the subject matter of the dependent claims, of the description and of the figures.
According to the inventive method for determining the location of at least one structure by means of an imaging sensor, wherein the location is determined from images created by the imaging sensor from at least two different sensor positions having a distance larger than or equal to a minimum distance for determining the location of said structure, the moving imaging sensor generates a series of consecutive images, wherein the distance of the sensor positions is verified by recognizing the structure in the images and determining the location of the structure relative to the position and an orientation of the imaging sensor for each of the images. The orientation of the imaging sensor is, e.g., given by a viewing direction of the sensor. In a preferred application the imaging sensor is mounted in/on a vehicle and the structures are structures in a surrounding of said vehicle. The method according to the invention has the advantage that a single moving imaging sensor is sufficient to determine the location of the structure(s), wherein the provision of an adequate distance between the sensor positions at which the images were generated can be determined/verified from the images themselves.
The imaging sensor preferably is a camera, an ultrasonic imaging sensor, a radar imaging sensor or a lidar imaging sensor. All these kinds of imaging sensors are well known from various automotive applications.
According to a preferred embodiment of the invention, a reference vector indicates the respective orientation of the imaging sensor, a direction vector indicates the location of the structure relative to the position of the imaging sensor and the presence of a sufficient distance between the sensor positions results from the fact that an angular difference of the respective direction vectors s, s' related to the reference vector n is greater than a threshold value θth. The reference vector n indicating the respective orientation of the imaging sensor is, e.g., given by a viewing direction of the sensor.
In many cases the orientation of the imaging sensor does not change significantly between the two sensor positions. In these cases the distance of the sensor positions can be determined by use of the direction vectors only. According to another preferred embodiment of the invention, the images are frames of a video sequence recorded by the moving imaging sensor, especially a (video) camera. Preferably a pair of two immediately consecutive frames is used for determining the distance of the sensor positions.
According to another preferred embodiment of the invention, the angular difference θ is determined by calculating the scalar product of the direction vectors s, s'. Each of the direction vectors s, s' preferably is a unit vector with length 1. This gives the following condition:
s · s' < cos(θth)
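For illustration only (this sketch is not part of the original disclosure), the condition can be evaluated on two unit direction vectors as in the following Python sketch; the function name and the example threshold of 1 degree are assumptions:

```python
# Minimal sketch of the convergence check on two unit direction vectors s and s'.
import numpy as np

def baseline_sufficient(s: np.ndarray, s_prev: np.ndarray, theta_th_deg: float = 1.0) -> bool:
    """Return True if the angle between the rays exceeds the threshold theta_th.

    Both s and s_prev are assumed to be unit vectors expressed in the same
    coordinate system, so their dot product equals cos(theta).
    """
    cos_theta = float(np.dot(s, s_prev))
    # A small angle gives a cosine close to 1, a large angle a cosine close to 0,
    # hence the inverted inequality: s . s' < cos(theta_th) means theta > theta_th.
    return cos_theta < np.cos(np.radians(theta_th_deg))
```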
Preferably the location is determined by means of triangulation. Triangulation is the process of determining the location of a point by measuring only angles to it from known points at either end of a fixed baseline, rather than measuring distances to the point directly as in trilateration. The point can then be fixed as the third point of a triangle with one known side and two known angles.
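As a hedged illustration of such a triangulation (a generic midpoint method, not necessarily the exact procedure used by the invention), the following sketch estimates the 3D point from two rays defined by the sensor positions and the direction vectors:

```python
# Midpoint triangulation of a 3D point from two rays.
import numpy as np

def triangulate_midpoint(p1, d1, p2, d2):
    """Triangulate the point seen along ray (p1, d1) and ray (p2, d2).

    p1, p2: ray origins (the sensor positions); d1, d2: unit ray directions.
    Returns the midpoint of the shortest segment between the two rays.
    """
    w = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w, d2 @ w
    denom = a * c - b * b          # close to zero for (near-)parallel rays
    if abs(denom) < 1e-9:
        raise ValueError("rays are (almost) parallel: baseline too small")
    t = (b * e - c * d) / denom
    u = (a * e - b * d) / denom
    return 0.5 * ((p1 + t * d1) + (p2 + u * d2))
```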
According to yet another preferred embodiment of the invention, the location determination is part of a 3D-reconstruction of a plurality of structures. In other words: the method is a method for 3D-reconstruction. 3D reconstruction is the process of capturing the shape and appearance of real objects. This process can be accomplished either by active or passive methods. If the model is allowed to change its shape in time, this is referred to as non-rigid or spatio-temporal reconstruction.
The computer program product according to the invention comprises computer-executable program code portions having program code instructions configured to execute the aforementioned method, e.g. when loaded into a processor of a computer based location determination system.
The computer based location determination system according to the invention is a system for determining the location of at least one structure by means of an imaging sensor for generating a series of consecutive images, the system being arranged for determining the location of the individual structure from images created by the imaging sensor from at least two different sensor positions having a distance larger than or equal to a minimum distance for determining the location of said structure, wherein the system is arranged for verifying the distance of the sensor positions by recognizing the structure in the images and determining the location of the structure relative to the position and an orientation of the imaging sensor for each of the images.
According to a preferred embodiment of the computer based location determination system according to the invention, the location determination system is arranged for executing the aforementioned method by means of at least one imaging sensor.
Generally speaking, the computer based location determination system comprises a computer unit with a processor and a memory. Preferably the location determination system further comprises the imaging sensor.
Further features of the invention are apparent from the claims, the figure and the description of the figure. All of the features and feature combinations mentioned above in the description as well as the features and feature combinations mentioned below in the description of the figure and/or shown in the figure alone are usable not only in the respectively specified combination, but also in other combinations or else alone.
Now, the invention is explained in more detail based on a preferred embodiment as well as with reference to the attached drawings.
In the drawings:
Fig. 1 shows a top view of a vehicle on a road detecting structures of an object by use of an imaging sensor.
Fig. 1 shows a top view of a scene with a vehicle 10 driving on a road 12. This vehicle 10 is a motor vehicle, to be more specific: a passenger car. The vehicle 10 comprises a computer based location determination system 14 with a computer unit 16 and one (or more) imaging sensor(s) 18. The imaging sensor 18 used in the example is a camera 20 located at the front area of the vehicle 10. Other possible imaging sensors 18 for this purpose are based on ultrasonic, radar, lidar, etc. The detection area of the imaging sensor 18 is a corresponding surrounding region 22 in front of the vehicle 10. The vehicle 10 is driving on the road 12 with the velocity v, and the mounted imaging sensor 18 generates a series of consecutive images 24 while moving. Two images 24 are shown on the left side of Fig. 1. The lower first image 24 is an image of the imaging sensor 18 generated at position P1; the upper second image 24 is an image of the imaging sensor 18 generated at position P2. The two positions P1, P2 have a distance D. At the roadside of the road 12 in the detection area of the imaging sensor 18 (in the surrounding region 22) there is a visible structure 26, e.g. of an object like a road sign, a parked car, etc.
Thus this structure 26 is visible (beside the road 12) in both images 24. The distance D of the sensor positions P1, P2 is verified by recognizing the structure 26 in the images 24 and by determining the location of the structure 26 relative to the sensor position P1, P2 and the orientation of the imaging sensor 18 for each of the images 24. A reference vector n indicates the respective orientation of the imaging sensor 18. A direction vector s, s' indicates the location of the structure 26 relative to the sensor position P1, P2, and the presence of a sufficiently large distance D between the sensor positions P1, P2 results from the fact that the angular difference θ of the respective direction vectors s, s' related to the reference vector n is greater than a threshold value θth (not shown in Fig. 1). The orientation of the imaging sensor 18 does not change significantly between the two sensor positions P1, P2, so that only one reference vector is used. In these cases the distance D of the sensor positions P1, P2 can be determined by use of the direction vectors s, s' only.
By examining each pair of vectors s, s' corresponding to a matched point in two adjacent frames 24 of video and observing whether the angle θ between them is "large enough", there is no need for a global camera baseline. This has the advantage that structures 26 of closer objects are detected sooner, and the accuracy of detections of the structures 26 of objects further away is greater.
Additionally, we have the opportunity to mark a certain set of features as "unsharp", thus indicating that an object with the structure 26 exists, but with low precision. This is very useful for a fusion system (e.g. with ultrasonic), where we wish the vehicle 10 to react based on more than one source (to avoid false reactions) but also wish to be able to react quickly before accurate detections are built. Between two processed frames of video, features are associated, for example using optical flow or feature matching. Each feature can be converted to a ray from the imaging sensor 18 (like a camera 20) using calibration. For each feature, to reconstruct the position of that feature in 3D space, the rays must be able to be triangulated. That is, the rays must have some convergence. This is not necessarily how a feature will be reconstructed, as directly intersecting rays in R³ can be prone to error. Other methods include using reprojection error to estimate the depth.
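As an illustration of the "converted to a ray using calibration" step, the following sketch assumes a simple pinhole model; the intrinsic parameters (fx, fy, cx, cy) and the pinhole assumption are not taken from the original text, which leaves the camera model open:

```python
# Back-projecting an image feature to a viewing ray with a pinhole model.
import numpy as np

def pixel_to_ray(u: float, v: float, fx: float, fy: float, cx: float, cy: float) -> np.ndarray:
    """Convert pixel (u, v) to a unit ray in the camera coordinate system."""
    ray = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
    return ray / np.linalg.norm(ray)
```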
To ensure this convergence, existing solutions impose the restriction that the camera 20 must move through some minimum baseline distance before the processing to estimate the 3D point is done. However, we propose to ensure convergence by examining the projected rays and ensuring that the angle formed by the two rays is above a threshold. That is, for a feature to be added to the internal buffer for processing, the following must be true:
s · s' < cos(θth)
Where s is the vector corresponding to a ray projected from the current frame, s' is the equivalent from the previous frame, and θth is the angle threshold ("·" refers to the dot product of two vectors in R³). Note that s and s' must be in the same coordinate system, and therefore the odometry of the camera 20 (which can be represented as a rotation matrix R and a translation vector t) must be taken into account when formulating s and s'. Note also that the cosine of a small angle is close to one, and the cosine of a large angle is close to zero. Hence the inversion of the inequality sign.
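A minimal sketch of this alignment step follows, assuming R is the rotation from the previous camera pose into the current one (only the rotational part of the odometry affects the ray direction); the function and parameter names are illustrative, not taken from the original text:

```python
# Express both rays in the coordinate system of the current frame before the test.
import numpy as np

def rays_converge(s_curr, s_prev, R, theta_th_deg=1.0):
    """Rotate the previous ray into the current frame and apply s . s' < cos(theta_th)."""
    s_prev_in_curr = R @ s_prev                      # translation t does not change a direction
    s_prev_in_curr /= np.linalg.norm(s_prev_in_curr)
    return float(s_curr @ s_prev_in_curr) < np.cos(np.radians(theta_th_deg))
```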
Sometimes it is useful to track a feature over more than one frame, for example if we wish to use three or more samples to create the 3D point in reconstruction. In such a case, the internal buffer for the features will only be added to if each new feature meets the criteria outlined above.
Note that the reason for having a minimum convergence angle threshold is that rays that converge with zero or a very small angle will have high numerical error and be highly prone to noise in the signal, and as such the estimated 3D point for that ray tuple will have high error. However, for some applications a higher error is acceptable, and so we introduce the concept of an "unsharp" feature. An "unsharp" feature is one that does not necessarily meet the criteria above. For example, for a ray to be classed as unsharp, the following criteria can be applied:
cos(θunsharp) > s · s' > cos(θth)
That is, the angle between the rays must be greater than some threshold θunsharp to be considered unsharp (note again that the cosines lead to an inverted inequality sign). Below this threshold the features are completely ignored; they are not considered at all. Above this threshold but below the minimum angle for a "normal" feature, the features are considered unsharp and can be used in places where the application allows it (i.e. where an accurate 3D position is not strictly required). As usual, features with an angle greater than the threshold already discussed above are classed as "normal" features for reconstruction.
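The resulting three-way classification could look as follows; this is an illustrative sketch, and the default threshold values and function name are assumptions rather than values given in the original text:

```python
# Classify a feature by ray convergence angle: ignored / unsharp / normal.
import numpy as np

def classify_feature(s, s_prev, theta_unsharp_deg=0.2, theta_th_deg=1.0) -> str:
    cos_angle = float(np.dot(s, s_prev))              # both assumed unit vectors, same frame
    if cos_angle < np.cos(np.radians(theta_th_deg)):
        return "normal"    # angle > theta_th: accurate 3D reconstruction possible
    if cos_angle < np.cos(np.radians(theta_unsharp_deg)):
        return "unsharp"   # theta_unsharp < angle <= theta_th: object exists, low precision
    return "ignored"       # angle <= theta_unsharp: too little convergence, discard
```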
A similar approach can be used in bundle adjustment. Bundle adjustment (BA) is a numerical method to refine the position of the camera 20 and the location of the points in R³ that are reconstructed, typically by using the reprojection error as an error function. As with many numerical methods, individual samples can be weighted according to some criteria. Given a point in R³ that is reconstructed with rays that are parallel or close to parallel, we can add a weight to the bundle adjustment such that the influence of this feature is minimal on the overall bundle adjustment result. The reason for this is that, if the rays for said feature are close to parallel (or parallel), then the reprojection error will always be small, even if sometimes there is a very large error in R³.
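One way such a down-weighting could be realized is sketched below; the particular weight function (a linear ramp on the largest pairwise convergence angle) is an assumption, since the original text only requires that the influence of near-parallel ray tuples be minimal:

```python
# Down-weight near-parallel ray tuples in the bundle adjustment.
import numpy as np

def ba_weight(ray_dirs, theta_th_deg=1.0):
    """Weight in [0, 1] based on the largest pairwise convergence angle of the unit rays."""
    max_angle = 0.0
    for i in range(len(ray_dirs)):
        for j in range(i + 1, len(ray_dirs)):
            cos_a = np.clip(ray_dirs[i] @ ray_dirs[j], -1.0, 1.0)
            max_angle = max(max_angle, np.degrees(np.arccos(cos_a)))
    # Rays that are (close to) parallel get weight ~0, well-converging rays weight ~1.
    return float(np.clip(max_angle / theta_th_deg, 0.0, 1.0))
```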
As mentioned already, bundle adjustment typically uses the reprojection error as an error function. The reprojection error is a common mechanism by which the estimated point in R³ is re-projected into the image space, and the distance to the image sample point is measured as an error. In addition, we can construct the bundle adjustment error function such that such parallel rays always return a high error, and in this way features that are constructed with high error can be given low confidence.
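For completeness, a small sketch of how a reprojection error can be computed for a single observation; it again assumes a pinhole projection, and the parameter names are illustrative:

```python
# Reprojection error: project the estimated 3D point X back into the image and
# measure the pixel distance to the observed feature location.
import numpy as np

def reprojection_error(X, R, t, fx, fy, cx, cy, measured_uv):
    """Euclidean pixel distance between the projection of X and the measured feature."""
    Xc = R @ X + t                       # point in camera coordinates
    u = fx * Xc[0] / Xc[2] + cx
    v = fy * Xc[1] / Xc[2] + cy
    return float(np.hypot(u - measured_uv[0], v - measured_uv[1]))
```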
List of Reference signs
10 vehicle
12 road
14 location determination system
16 computer unit
18 imaging sensor
20 camera
22 detection area
24 image
26 structure
n reference vector
s direction vector
s' direction vector
v velocity
D distance
P1 first position
P2 second position

Claims

Claims
1. A method for determining the location of at least one structure (26) by means of an imaging sensor (18), in particular an imaging sensor (18) mounted on a vehicle (10), wherein the location of the individual structure (26) is determined from images (24) created by the imaging sensor (18) from at least two different sensor positions (P1, P2), which sensor positions (P1, P2) have a distance (D) larger than or equal to a minimum distance (Dth) for determining the location of said structure (26),
characterized in that,
the imaging sensor (18) generates a series of consecutive images (24) while moving, wherein the distance (D) of the sensor positions (P1, P2) is verified by recognizing the structure (26) in the images (24) and determining the location of the structure (26) relative to the sensor position (P1, P2) and an orientation of the imaging sensor (18) for each of the images (24),
while a reference vector (n) indicates the respective orientation of the imaging sensor (18), a direction vector (s, s') indicates the location of the structure (26) relative to the sensor position (P1, P2) and the presence of a sufficiently large distance (D) between the sensor positions (P1, P2) arises from the fact that an angular difference (θ) of the respective direction vectors (s, s') related to the reference vector (n) is greater than a threshold value (θth).
2. The method according to claim 1 ,
characterized in that the images (24) are frames of a video sequence recorded by the moving imaging sensor (18).
3. The method according to claim 1 or 2,
characterized in that the angular difference (θ) is determined by calculating the scalar product of the direction vectors (s, s').
4. The method according to any one of claims 1 to 3,
characterized in that the location of the structure (26) is determined by means of triangulation.
5. The method according to claim 3 or 4,
characterized in that the location determination is part of a 3D-reconstruction of a plurality of structures (26).
6. A computer program product comprising computer-executable program code
portions having program code instructions configured to execute the method according to one of claims 1 to 5.
7. A computer based location determination system (14) for determining the location of at least one structure (26) by means of an imaging sensor (18) for generating a series of consecutive images (24), in particular an imaging sensor (18) mounted on a vehicle (10), wherein the system (14) is arranged for determining the location of the individual structure (26) from images (24) created by the imaging sensor (18) from at least two different sensor positions (P1, P2), which sensor positions (P1, P2) have a distance (D) larger than or equal to a minimum distance (Dth) for determining the location of said structure (26), wherein the system (14) is arranged for verifying the distance of the sensor positions (P1, P2) by recognizing the structure (26) in the images (24) and determining the location of the structure (26) relative to the sensor position (P1, P2) and an orientation of the imaging sensor (18) for each of the images (24).
8. The system according to claim 7, characterized in that the system (14) is arranged for executing the method according to one of claims 1 to 5.
9. The system according to claim 7 or 8,
characterized by a computer unit (16) with a processor and a memory.
PCT/EP2019/062642 2018-06-05 2019-05-16 Method and computer based location determination system for determining the location of at least one structure by means of an imaging sensor WO2019233728A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102018113325.1A DE102018113325A1 (en) 2018-06-05 2018-06-05 A method and computerized location system for determining the location of at least one structure by means of an imaging sensor
DE102018113325.1 2018-06-05

Publications (1)

Publication Number Publication Date
WO2019233728A1 (en)

Family

ID=66597593

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2019/062642 WO2019233728A1 (en) 2018-06-05 2019-05-16 Method and computer based location determination system for determining the location of at least one structure by means of an imaging sensor

Country Status (2)

Country Link
DE (1) DE102018113325A1 (en)
WO (1) WO2019233728A1 (en)

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE10336638A1 (en) * 2003-07-25 2005-02-10 Robert Bosch Gmbh Apparatus for classifying at least one object in a vehicle environment
DE102007008543A1 (en) * 2007-02-21 2008-08-28 Hella Kgaa Hueck & Co. Method and device for determining the state of motion of objects
DE102007021576A1 (en) * 2007-05-08 2008-11-13 Hella Kgaa Hueck & Co. Method and device for determining the position of a traffic sign

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8831290B2 (en) * 2012-08-01 2014-09-09 Mitsubishi Electric Research Laboratories, Inc. Method and system for determining poses of vehicle-mounted cameras for in-road obstacle detection
EP3051494A1 (en) * 2015-01-28 2016-08-03 Connaught Electronics Ltd. Method for determining an image depth value depending on an image region, camera system and motor vehicle
US20180107883A1 (en) * 2016-10-19 2018-04-19 Texas Instruments Incorporated Estimation of Time to Collision in a Computer Vision System

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
HARTLEY, R.; ZISSERMAN, A.: "Multiple View Geometry in Computer Vision, Second Edition", Cambridge University Press, 25 March 2004
JOHN FIELDS ET AL: "Monocular structure from motion for near to long ranges", 2009 IEEE 12TH INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS, ICCV WORKSHOPS : KYOTO, JAPAN, 27 SEPTEMBER - 4 OCTOBER 2009, INSTITUTE OF ELECTRICAL AND ELECTRONICS ENGINEERS, PISCATAWAY, NJ, 27 September 2009 (2009-09-27), pages 1702 - 1709, XP031664518, ISBN: 978-1-4244-4442-7 *

Also Published As

Publication number Publication date
DE102018113325A1 (en) 2019-12-05

Similar Documents

Publication Publication Date Title
US11763568B2 (en) Ground plane estimation in a computer vision system
JP5876165B2 (en) Method for operating automobile driver assistance device, driver assistance device, and automobile
JP4899424B2 (en) Object detection device
JP2019526781A (en) Improved object detection and motion state estimation for vehicle environment detection systems
JP6392693B2 (en) Vehicle periphery monitoring device, vehicle periphery monitoring method, and program
KR20170132860A (en) Generate three-dimensional map of a scene using manual and active measurements
CA3010997C (en) Passenger counting device, system, method and program, and vehicle movement amount calculation device, method and program
JP6830140B2 (en) Motion vector field determination method, motion vector field determination device, equipment, computer readable storage medium and vehicle
JP6906567B2 (en) Obstacle detection methods, systems, computer devices, computer storage media
US20180218509A1 (en) Alternating Frequency Captures for Time of Flight Depth Sensing
US20200393246A1 (en) System and method for measuring a displacement of a mobile platform
JP2004531424A5 (en)
JP7107931B2 (en) Method and apparatus for estimating range of moving objects
US20180075609A1 (en) Method of Estimating Relative Motion Using a Visual-Inertial Sensor
US11769267B2 (en) Object distance measurement apparatus and method
CN110007313A (en) Obstacle detection method and device based on unmanned plane
JP2008249555A (en) Position-specifying device, position-specifying method, and position-specifying program
CN108844538B (en) Unmanned aerial vehicle obstacle avoidance waypoint generation method based on vision/inertial navigation
WO2022135594A1 (en) Method and apparatus for detecting target object, fusion processing unit, and medium
KR20200071960A (en) Method and Apparatus for Vehicle Detection Using Lidar Sensor and Camera Convergence
CN105793909B (en) The method and apparatus for generating warning for two images acquired by video camera by vehicle-periphery
KR101491305B1 (en) Apparatus and method for detecting obstacle
KR101549165B1 (en) Apparatus and method for estimating pose of vehicle
WO2019233728A1 (en) Method and computer based location determination system for determining the location of at least one structure by means of an imaging sensor
Teutsch et al. 3d-segmentation of traffic environments with u/v-disparity supported by radar-given masterpoints

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19725122

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19725122

Country of ref document: EP

Kind code of ref document: A1