WO2019012004A1 - Method for determining spatial uncertainty in images of an environmental region of a motor vehicle, driver assistance system and motor vehicle

Method for determining spatial uncertainty in images of an environmental region of a motor vehicle, driver assistance system and motor vehicle

Info

Publication number
WO2019012004A1
WO2019012004A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
coordinate system
distribution
motor vehicle
determined
Application number
PCT/EP2018/068827
Other languages
English (en)
Inventor
George SIOGKAS
Robert Voros
Michael Starr
Original Assignee
Connaught Electronics Ltd.
Application filed by Connaught Electronics Ltd.
Publication of WO2019012004A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/98Detection or correction of errors, e.g. by rescanning the pattern or by human intervention; Evaluation of the quality of the acquired patterns
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/02Measuring arrangements characterised by the use of optical techniques for measuring length, width or thickness
    • G01B11/03Measuring arrangements characterised by the use of optical techniques for measuring length, width or thickness by measuring coordinates of points
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30204Marker
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30248Vehicle exterior or interior
    • G06T2207/30252Vehicle exterior; Vicinity of vehicle
    • G06T2207/30256Lane; Road marking
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road

Definitions

  • the invention relates to a method for determining a spatial uncertainty between object points in an environmental area of a motor vehicle and image points representing those object points in at least one image of the environmental area, the image being generated by means of image data of at least one camera of the motor vehicle.
  • the invention also relates to a driver assistance system as well as a motor vehicle.
  • a driver assistance system can, for example, be an automatic line marking detection system, wherein roadway markings in the form of lines on the roadway surface are recognized as the objects and their positions relative to the motor vehicle are determined. Based on the spatial position or location of the road markings with respect to the motor vehicle, it is then possible, for example, to recognize or calculate the current lane of the motor vehicle as well as the adjacent lanes. The driver can, for example, be supported during lane-changing maneuvers.
  • original images from the surroundings of the motor vehicle are projected onto an arbitrary image plane.
  • Such original images could, for example, be raw data of a fish-eye camera which distorts the environmental area due to a fish-eye lens.
  • the images projected onto the image plane can, for example, be top views of the environmental area.
  • a relation between the original image and the projected image can be described via intrinsic and extrinsic camera parameters of the camera capturing the raw data. These intrinsic and extrinsic camera parameters influence a precision as well as a quality of the projection of the original image on the image plane. Moreover, the accuracy of the projection itself is limited by a resolution of the original image and the projected image.
  • a set of reference world coordinates of at least one predetermined reference object point in the environmental area can be determined by means of a coordinate transformation dependent on the sample set, wherein predetermined reference world coordinates of the predetermined reference object point are converted from a world coordinate system of the environmental area into an image coordinate system of the image data and are re-projected to the world coordinate system.
  • the spatial uncertainty is determined as a function of the set of projected reference world coordinates of the reference object point.
  • a distribution for at least one extrinsic camera parameter of the vehicle-side camera is predetermined and a sample set with random values of the at least one extrinsic camera parameter is determined based on the at least one distribution.
  • a set of reference world coordinates of at least one predetermined reference object point in the environmental area is determined by means of a coordinate transformation dependent on the sample set, wherein predetermined reference world coordinates of the predetermined reference object point are converted from a world coordinate system of the environmental area into an image coordinate system of the image data and are re-projected to the world coordinate system. Moreover, the spatial uncertainty is determined as a function of the set of projected reference world coordinates of the reference object point.
  • the image data from the environmental area of the motor vehicle can be detected by the at least one vehicle-side camera and, for example, provided to an image processing device of the driver assistance system.
  • the at least one camera is, in particular, a fish-eye camera which has a fish-eye lens for enlarging its detection area.
  • the image data or raw image data detected by the camera are distorted due to the fish-eye lens.
  • the distorted image data can be projected onto an image plane to produce the image.
  • image data from at least four cameras forming a surround view camera system are acquired, wherein the distorted image data can be combined and projected onto the image plane in the form of a top view image of the environmental area of the motor vehicle.
  • the world coordinate system is a three-dimensional coordinate system, for example a vehicle coordinate system, wherein positions of object points in the environmental area can be described by means of an x-coordinate in the vehicle longitudinal direction, a y-coordinate in the vehicle transverse direction and a z-coordinate in the vehicle vertical direction.
  • the object points described in the world coordinate system are mapped into a two-dimensional image coordinate system.
  • Positions of image points are described in the two-dimensional image coordinate system, in particular by means of a u-coordinate in the horizontal image direction and a v-coordinate in the vertical image direction.
  • a relation between the world coordinate system and the image coordinate system can be described via extrinsic (external) camera parameters and intrinsic (internal) camera parameters.
  • the camera parameters thus enable a coordinate transformation between the world coordinate system or an object space and the image coordinate system or an image space.
  • the extrinsic camera parameters describe, in particular, a position and an orientation of the camera, i.e. a pose of the camera, in the object space and establish a connection between the world coordinate system and a camera coordinate system.
  • the intrinsic camera parameters establish the relationship between the camera coordinate system and the image coordinate system.
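  • purely as an illustration in our own notation (not reproduced from the disclosure), this chain from the world coordinate system via the camera coordinate system into the image coordinate system can be written for an undistorted pinhole model as $\mathbf{x}_c = R\,\mathbf{x}_w + \mathbf{t}$ and $u = c_u + f_u\,x_c/z_c$, $v = c_v + f_v\,y_c/z_c$, where the rotation $R$ and the translation $\mathbf{t}$ are the extrinsic camera parameters and the focal lengths $f_u$, $f_v$ and the image center $(c_u, c_v)$ are intrinsic camera parameters; a fish-eye camera additionally requires a lens distortion model between the two steps.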
  • the at least one distribution of the values of the at least one extrinsic camera parameter is predetermined.
  • as the at least one extrinsic camera parameter, at least one rotation and/or at least one translation is determined, wherein a distribution as well as a sample set with random values are determined for each extrinsic camera parameter.
  • in particular, as a first extrinsic camera parameter a translation along the z-axis, as a second extrinsic camera parameter a rotation around the x-axis, as a third extrinsic camera parameter a rotation around the y-axis, and as a fourth extrinsic camera parameter a rotation around the z-axis is determined.
  • the translation value describes a displacement of the camera in the z-direction to an origin of the world coordinate system.
  • the rotation values describe a rotation of the camera around the three Euler angles.
  • the sample set of random values of the extrinsic camera parameter is generated, in particular simulated. This means that, based on the distribution, a plurality of values of the extrinsic camera parameter is artificially generated whose frequencies follow the predetermined distribution.
  • a first sample set with random values for the translation in the z-direction is generated on the basis of a first distribution of translation values in the z-direction.
  • based on a second distribution of rotational values around the x-axis, a second sample set with random values for the rotation around the x-axis is determined; based on a third distribution of rotational values around the y-axis, a third sample set with random values for the rotation around the y-axis is determined; and based on a fourth distribution of rotational values around the z-axis, a fourth sample set with random values for the rotation around the z-axis is determined.
  • the respective sample set represents, in particular, all possible fluctuations which the associated extrinsic camera parameter can have. In other words, the sample set represents all possible values which the associated extrinsic camera parameter can assume.
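  • a minimal sketch of this sampling step in Python (all distribution parameters below are invented placeholders, not values from the disclosure):

      import numpy as np

      rng = np.random.default_rng(seed=0)
      n_samples = 10_000  # size of each sample set

      # Hypothetical (mean, standard deviation) per extrinsic camera parameter,
      # e.g. as fitted to calibration measurements -- placeholder values only.
      distributions = {
          "Tz": (0.95, 0.01),   # translation along the z-axis
          "Rx": (0.0, 0.2),     # rotation around the x-axis
          "Ry": (0.0, 0.2),     # rotation around the y-axis
          "Rz": (90.0, 0.3),    # rotation around the z-axis
      }

      # One sample set of random values per parameter, drawn from its distribution.
      sample_sets = {name: rng.normal(mu, sigma, n_samples)
                     for name, (mu, sigma) in distributions.items()}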
  • a coordinate transformation with two coordinate transformation steps is performed depending on the determined sample set.
  • the at least one predetermined reference object point whose position in the environmental area is known, is projected from the world coordinate system into the image coordinate system in a first transformation step, and the projected reference object point is re-projected into the world coordinate system from the image coordinate system in a second transformation step. Since the coordinate transformation, in particular the first transformation step or the second transformation step, is carried out as a function of the sample set with random values of the extrinsic camera parameters, a plurality of reference world coordinates, i.e. the set of reference world coordinates, is formed during the transformation.
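  • the following Python sketch illustrates the two transformation steps for a plain pinhole model under the first variant described below (sampled extrinsics in the forward projection, ideal extrinsics in the re-projection); the helper names and the Euler convention are our assumptions, and the fish-eye distortion of the disclosure is omitted:

      import numpy as np

      def euler_to_R(rx, ry, rz):
          """Rotation matrix from Euler angles in radians (Rz @ Ry @ Rx, an assumed convention)."""
          cx, sx = np.cos(rx), np.sin(rx)
          cy, sy = np.cos(ry), np.sin(ry)
          cz, sz = np.cos(rz), np.sin(rz)
          Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
          Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
          Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
          return Rz @ Ry @ Rx

      def world_to_image(Xw, R, t, K):
          """First step: project a world point into pixel coordinates (pinhole model)."""
          Xc = R @ Xw + t
          uvw = K @ Xc
          return uvw[:2] / uvw[2]

      def image_to_ground(uv, R, t, K):
          """Second step: re-project a pixel onto the ground plane z = 0."""
          ray_c = np.linalg.inv(K) @ np.array([uv[0], uv[1], 1.0])
          ray_w = R.T @ ray_c          # viewing ray in world coordinates
          cam_w = -R.T @ t             # camera centre in world coordinates
          s = -cam_w[2] / ray_w[2]     # scale so that the ray hits z = 0
          return cam_w + s * ray_w

      # Round trip for one ground point: forward with a perturbed pose, back with
      # the ideal pose -- the result deviates from the true point (0.5, 0.2, 0).
      K = np.array([[800.0, 0.0, 640.0], [0.0, 800.0, 480.0], [0.0, 0.0, 1.0]])
      R_ideal = euler_to_R(np.pi, 0.0, 0.0)              # camera looking straight down
      t_ideal = np.array([0.0, 0.0, 1.0])                # camera 1 m above the road
      R_pert = euler_to_R(np.pi + 0.002, -0.001, 0.003)  # slightly wrong rotation
      uv = world_to_image(np.array([0.5, 0.2, 0.0]), R_pert, t_ideal, K)
      print(image_to_ground(uv, R_ideal, t_ideal, K))

  • repeating such a round trip for every random value in the sample set yields the set of reference world coordinates whose scatter quantifies the spatial uncertainty.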
  • the coordinate transformation is additionally performed as a function of at least one predetermined intrinsic camera parameter.
  • the at least one intrinsic camera parameter serves in particular to restore a connection between the three-dimensional camera coordinate system and the two-dimensional image coordinate system.
  • as the intrinsic camera parameters, a focal length of the camera, coordinates of an image center point and pixel scalings in both image coordinate directions can be specified.
  • the at least one intrinsic camera parameter is assumed to be constant.
  • the translations or displacements of the camera in the x-direction and in the y-direction are also assumed to be constant and are predetermined for the coordinate transformation.
  • the set of reference world coordinates resulting from the coordinate transformation represents the possible reference world coordinates of the object point as a function of the different random values of the extrinsic camera parameter.
  • the set of reference world coordinates represents the spatial uncertainty in the position of the reference object point.
  • the spatial uncertainty is determined for each pixel within the projected image.
  • computing power of the image processing device is thus used in order to artificially generate the extensive sample set with random values of the at least one extrinsic camera parameter. Since the sample set in particular maps all fluctuations or changes of the extrinsic camera parameters, the spatial uncertainty can be determined particularly reliably on the basis of the sample set. In addition, there is the advantage that no complex analytical solution for determining the spatial uncertainty has to be found. Based on the spatial uncertainty, the integration of the object recognition into a Bayesian network, in particular into a Kalman filter or a particle filter, can also be advantageously facilitated, as sketched below.
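  • as a minimal illustration of such an integration (our own sketch, not part of the disclosure), the determined standard deviations could enter a Kalman filter measurement update as the measurement noise covariance; all numbers below are invented:

      import numpy as np

      x_est = np.array([5.0, 1.2])        # estimated marking end point (x, y) in metres
      P = np.diag([0.5, 0.5])             # covariance of the estimate

      z = np.array([5.1, 1.25])           # end point measured in the projected image
      R = np.diag([0.04**2, 0.02**2])     # measurement noise from the spatial uncertainty

      H = np.eye(2)                       # the measurement observes the state directly
      S = H @ P @ H.T + R                 # innovation covariance
      K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
      x_est = x_est + K @ (z - H @ x_est) # corrected estimate
      P = (np.eye(2) - K @ H) @ P         # corrected covariance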
  • a systematic error is determined between at least one end point of a line marking on a roadway of the motor vehicle and the representation of the end point in the at least one image of the roadway generated by the image data of the at least one camera.
  • line markings are recognized on a road surface of the motor vehicle.
  • a position of the line marking end points can be determined particularly precisely.
  • a driver assistance system can be implemented in the form of a line marking detection system which can support a driver, for example, when changing lanes to an adjacent lane or when tracking on the current lane.
  • a type of the distribution and/or at least one measure describing the distribution, in particular a mean value and/or a standard deviation of the values of the at least one extrinsic camera parameter, is determined, and the sample set is determined as a function of the type of the distribution and/or the at least one measure.
  • a frequency of occurring values of the at least one extrinsic camera parameter can be defined by the type of the distribution and/or by the at least one measure describing the distribution. Knowing the type of distribution and/or the at least one distribution-describing measure, the sample set with random values of the extrinsic camera parameter, which represent the real values of the at least one extrinsic camera parameter particularly realistically, can be simulated.
  • the sample set with random values of the at least one extrinsic camera parameter is simulated by means of a Monte Carlo method as a function of the at least one distribution.
  • the random values generated by means of a deterministic random number generator as a function of the at least one distribution can be described as a vector, wherein the components of the vector are determined by means of the deterministic random number generator or pseudo-random number generator with the aid of the specified distribution.
  • the generated sample set can be described as a two-dimensional matrix comprising the vectors with the random values of the extrinsic camera parameters.
  • the matrix has, for example, four vectors which correspond to four extrinsic camera parameters.
  • by means of the Monte Carlo method, a particularly large sample set or sample volume can thus be determined numerically, by means of which the spatial uncertainty for each image point of the generated image of the environmental area can then be determined with high reliability.
  • the at least one distribution is predetermined by approximating measurement values of the at least one extrinsic camera parameter by means of a predetermined distribution function, in particular a Gaussian distribution.
  • the measured values of the at least one extrinsic camera parameter are measured during at least one camera calibration of the at least one camera.
  • the measured values are thereby approximated by the distribution function, for example the Gaussian distribution, whose mean value and standard deviation depend on the measured values.
  • the measured values represent the basis on which the distribution is generated.
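  • a short sketch of this approximation step (the measured values are invented for illustration):

      import numpy as np

      # Hypothetical measured values of one extrinsic camera parameter, e.g. Tz,
      # collected over a small number of camera calibrations.
      measured = np.array([0.948, 0.951, 0.950, 0.953, 0.949, 0.952, 0.950])

      mu = measured.mean()           # mean value of the fitted Gaussian
      sigma = measured.std(ddof=1)   # sample standard deviation

      # A much larger sample set simulated from the fitted distribution.
      rng = np.random.default_rng(0)
      simulated = rng.normal(mu, sigma, 10_000)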
  • based on a first number of measured values, a second number of random values, which is significantly greater in comparison to the first number, is generated. Based on the second number of random values, the spatial uncertainty can then be determined particularly accurately and reliably. Thus, it can advantageously be prevented that a large number of camera calibrations or calibration cycles has to be performed for measuring the extrinsic camera parameters. Rather, the extrinsic camera parameters necessary for determining the spatial uncertainty are simulated with the aid of computing power of the image processing device.
  • a two-dimensional reference subarea with reference object points is predefined in the environmental area and the coordinate transformation is performed on the reference object points of the reference subarea.
  • the two-dimensional reference subarea in the environmental area is a so-called region of interest (ROI) which is located in particular on a roadway of the motor vehicle.
  • a z-coordinate of the reference object points within the reference subarea is equal to 0.
  • the coordinate transformation for determining the set of reference world coordinates is thus performed on the basis of reference ground points. By selecting the z-coordinate with the value 0, the coordinate transformation can be carried out particularly quickly and simply.
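  • such a reference subarea could, purely for illustration, be generated as a grid of ground points (the extent and resolution are assumptions):

      import numpy as np

      # Ground-plane region of interest: x in [2 m, 10 m], y in [-4 m, 4 m], z = 0.
      xs = np.linspace(2.0, 10.0, 40)
      ys = np.linspace(-4.0, 4.0, 40)
      gx, gy = np.meshgrid(xs, ys)
      reference_points = np.stack([gx.ravel(), gy.ravel(), np.zeros(gx.size)], axis=1)
      # shape (1600, 3): reference object points with z-coordinate 0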
  • the conversion of the at least one reference object point from the world coordinate system into the image coordinate system is carried out as a function of the sample set.
  • the re-projection of the reference object point transferred into the image coordinate system into the world coordinate system is performed as a function of at least one predetermined extrinsic ideal camera parameter.
  • the simulated sample set with random values of the extrinsic camera parameter is therefore used during the first coordinate transformation step. Therefore, a set of reference image coordinates of a reference image point corresponding to the reference object point can be determined from the reference world coordinates as a function of the sample set. This means that when projecting the reference object point located at a reference world position into the image coordinate system, a plurality of reference image positions arise. Based on the sample set with the plurality of random values for the at least one extrinsic camera parameter, a reference image point is thus formed whose reference image position has a spatial uncertainty. This spatial uncertainty of the reference image position is represented by the set of reference image coordinates.
  • the generated set of reference image coordinates is transformed back into the world coordinate system in the second coordinate transformation step, wherein the at least one extrinsic ideal camera parameter is used in the back-transformation.
  • the set of reference world coordinates from the set of reference image coordinates can be determined as a function of the at least one predetermined extrinsic ideal camera parameter.
  • the extrinsic ideal camera parameter describes an ideal calibration of the camera.
  • a constant value for each extrinsic camera parameter can be predetermined, for example, and the second transformation step can be performed based on the constant values of the extrinsic camera parameters.
  • the same, constant intrinsic camera parameters are assumed in both coordinate transformation steps. The intrinsic camera parameters are thus kept stable.
  • the set of reference world coordinates which is then used to determine the spatial uncertainty between the object space and the image space, is produced by the re-transformation of the set of reference image coordinates from the image coordinate system into the world coordinate system.
  • the transformation of the at least one reference object point from the world coordinate system into the image coordinate system is carried out as a function of at least one predetermined extrinsic ideal camera parameter.
  • the re-projection of the reference object point transferred into the image coordinate system back into the world coordinate system is carried out as a function of the sample set.
  • the sample set with random values of the at least one extrinsic camera parameter is used only in the second transformation step.
  • in this case, the reference image coordinates of the reference image point corresponding to the reference object point can be determined from the reference world coordinates as a function of the at least one predetermined extrinsic ideal camera parameter.
  • in the first transformation step, in which the ideal calibration of the camera is assumed and therefore the at least one predetermined extrinsic ideal camera parameter is used, only one reference image point is generated from the reference object point at a particular reference image position.
  • the set of world reference coordinates can be determined from the reference image coordinates as a function of the sample set.
  • the individual reference image point determined at the first transformation step at the reference image position, which is described by means of the reference image coordinates, is thus transformed back into the world coordinate system as a function of the simulated sample set.
  • the set of reference world coordinates which describes the spatial uncertainty between the image space and the object space, arises only in the second transformation step.
  • a mean value and a standard deviation are determined for the set of reference world coordinates, which is dependent on the sample set.
  • the spatial uncertainty is determined as a function of the mean value and the standard deviation. Since the random values of the extrinsic camera parameter can be represented or characterized by means of a distribution, the values of the reference world coordinates can also be represented or characterized by means of a distribution. The spatial uncertainty can then be determined from this distribution.
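  • a minimal sketch of this evaluation (the re-projected set below is synthetic stand-in data; in practice it would come from the round trip described above):

      import numpy as np

      # Set of reference world coordinates of one reference object point after the
      # round trip with the sample set -- here simulated as a synthetic point cloud.
      reprojected = np.random.default_rng(1).normal([5.0, 1.2], [0.04, 0.02], (10_000, 2))

      mean_xy = reprojected.mean(axis=0)  # mean value of the set
      std_xy = reprojected.std(axis=0)    # standard deviation = spatial uncertainty
      print(mean_xy, std_xy)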
  • the spatial uncertainty which describes the systematic error within the at least one image generated by means of the image data of the camera, can, for example, be fed to the Bayesian network.
  • the invention also relates to a driver assistance system for a motor vehicle comprising at least one camera for capturing image data from an environmental area of the motor vehicle and an image processing device which is designed to carry out a method according to the invention or an advantageous embodiment thereof.
  • the driver assistance system is particularly designed as an automatic line marking detection system.
  • roadway markings, for example lines on a roadway surface of the motor vehicle, can thus be recognized.
  • the driver can be supported, for example, during tracking or during lane change maneuvers.
  • a motor vehicle according to the invention comprises a driver assistance system according to the invention.
  • the motor vehicle is designed in particular as a passenger car.
  • the motor vehicle can, for example, comprise a surround view camera system with at least four cameras which can capture image data from the environmental area around the motor vehicle.
  • the cameras have, in particular, fish-eye lenses.
  • Fig. 1 a schematic representation of an embodiment of a motor vehicle according to the invention
  • Fig. 2a to 2d schematic representations of measured values of extrinsic camera parameters as well as distributions of the measured values of the extrinsic camera parameters
  • Fig. 3a, 3b schematic representations of sample sets with random values of the extrinsic camera parameters
  • Fig. 4 a schematic representation of a coordinate transformation
  • Fig. 5 a schematic representation of distorted image data of a camera of the motor vehicle
  • Fig. 6 a schematic representation of a top view image generated from image data of a camera of the motor vehicle with a spatial uncertainty
  • Fig. 7 a further schematic representation of a coordinate transformation between a world coordinate system and an image coordinate system.
  • Fig. 1 shows a motor vehicle 1 according to the present invention.
  • the motor vehicle 1 is designed as a passenger car.
  • the motor vehicle 1 comprises a driver assistance system 2, which can assist a driver of the motor vehicle 1 during his driving task.
  • the driver assistance system 2 in the present case has four cameras 3, which form a surround view camera system 4.
  • the cameras 3 are designed to detect image data from an environmental area 5 of the motor vehicle 1.
  • the image data acquired by the cameras 3 can be fed to an image processing device 6 of the driver assistance system 2, which is designed to generate images of the environmental area 5, for example, top-view images, from the image data of the cameras 3.
  • the image processing device 6 can be designed to recognize objects 7, in particular roadway markings 8 on a roadway of the motor vehicle 1, in the image data as well as to determine their positions in a world coordinate system W.
  • the world coordinate system W is a three-dimensional coordinate system with an x-axis oriented along a vehicle longitudinal direction, a y-axis oriented along a vehicle transverse direction, and a z-axis oriented along a vehicle vertical direction.
  • the image data acquired by the cameras 3 are described in a two-dimensional image coordinate system I which comprises a u-axis in the horizontal image direction and a v-axis in the vertical image direction.
  • a relation between the world coordinate system W and the image coordinate system I is established via extrinsic and intrinsic camera parameters of the cameras 3.
  • via the extrinsic camera parameters, which describe a position and an orientation of a camera 3 in the world coordinate system W, a relation between the world coordinate system W and a camera coordinate system is described.
  • via the intrinsic camera parameters, a relation between the camera coordinate system and the image coordinate system I is described.
  • the extrinsic camera parameters can introduce a systematic error during the generation of images from the image data of the cameras 3 which describes a spatial uncertainty between object points in the world coordinate system W of the environmental area 5 and image points in the image coordinate system I of the generated image.
  • Such an image 24 in the form of a top view image comprising the systematic error visualized as a distribution 25 is exemplarily shown in Fig. 6.
  • Fig. 2a to 2d show measured values 9 of extrinsic camera parameters Tz, Rx, Ry, Rz, which, for example, can be measured during a camera calibration of the cameras 3.
  • Fig. 2a shows a histogram of measured values 9 for a first extrinsic camera parameter in the form of a translation Tz in the z-direction. A position d is indicated on the abscissa and a number A on the ordinate.
  • the translation Tz in the z-direction describes a displacement of the camera 3 with respect to an origin of the world coordinate system W.
  • Fig. 2b shows a histogram of measured values 9 for a second extrinsic camera parameter in the form of a rotation Rx around the x-axis.
  • Fig. 2c shows a histogram of measured values 9 for a third extrinsic camera parameter in the form of a rotation Ry around the y-axis.
  • in Fig. 2d, a histogram of measured values 9 for a fourth extrinsic camera parameter in the form of a rotation Rz around the z-axis is shown.
  • in each of Fig. 2b to 2d, an angle a is indicated on the abscissa and a number A on the ordinate.
  • the rotations Rx, Ry, Rz around the respective axes x, y, z describe rotations of the camera 3 around the Euler angles.
  • in this way, a pose of the camera 3 can be specified.
  • These measured values 9 of the extrinsic camera parameters Tz, Rx, Ry, Rz, which are mostly available only in a limited, small number, are approximated by respective distributions D1, D2, D3, D4, in particular Gaussian distributions.
  • for each distribution D1, D2, D3, D4, at least one measure describing the distribution, for example a mean value and a standard deviation, is determined.
  • a first distribution D1 according to Fig. 2a describes the distribution of the measured values 9 of the translation Tz, a second distribution D2 according to Fig. 2b describes the distribution of the measured values 9 of the rotation Rx, a third distribution D3 according to Fig. 2c describes the distribution of the measured values 9 of the rotation Ry, and a fourth distribution D4 according to Fig. 2d describes the distribution of the measured values 9 of the rotation Rz.
  • Fig. 3a schematically shows a distribution D, which is generated based on measured values 9 of an extrinsic camera parameter.
  • from the distribution D, a distribution parameter 10 is determined.
  • the distribution parameter 10 describes in particular the type of the distribution D, for example a Gaussian distribution, as well as the at least one measure describing the distribution D, for example the mean value and the standard deviation.
  • a sample set 11 with random values 12 for the extrinsic camera parameter described by the distribution D can be determined by means of Monte Carlo simulation.
  • the random values 12 can be generated by means of a deterministic random number generator.
  • in this way, a sample set 11 with random values 12 for the translation Tz, a sample set 11 with random values 12 for the rotation Rx, a sample set 11 with random values 12 for the rotation Ry, and a sample set 11 with random values 12 for the rotation Rz can be determined.
  • Fig. 3b shows that the sample sets 11, of which only one sample set 11 is shown schematically, can be represented in the form of a matrix M with rows m and columns n. Rows m of the matrix M are formed by vectors, each vector comprising the concrete values x1, x2, ..., xn of the random values 12 of one extrinsic camera parameter. For the four extrinsic camera parameters Tz, Rx, Ry, Rz, the matrix M comprises four rows m.
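  • such a matrix could be assembled as follows (a sketch with invented distribution parameters):

      import numpy as np

      rng = np.random.default_rng(0)
      n = 10_000
      # One row per extrinsic camera parameter Tz, Rx, Ry, Rz, each holding
      # n random values x1 ... xn drawn from its (placeholder) distribution.
      M = np.vstack([
          rng.normal(0.95, 0.01, n),   # Tz
          rng.normal(0.0, 0.2, n),     # Rx
          rng.normal(0.0, 0.2, n),     # Ry
          rng.normal(90.0, 0.3, n),    # Rz
      ])
      assert M.shape == (4, n)         # m = 4 rows, n columns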
  • the coordinate transformation 13 has a first transformation step 14 between the world coordinate system W and the image coordinate system I and a second transformation step 19 between the image coordinate system I and the world coordinate system W.
  • the first transformation step 14 is carried out based on the sample sets 11 by transferring a reference subarea 15 with reference object points, which are described in reference world coordinates, from the world coordinate system W into the image coordinate system I.
  • the reference subarea 15, a so-called region of interest, is described in world coordinates x, y, z with a height of 0. This means that the reference subarea 15 is a two-dimensional region whose z coordinates are 0.
  • in the first transformation step 14, constant intrinsic camera parameters 16 as well as constant translation parameters 17 in the x-direction and y-direction are assumed.
  • as a result, a raw image is produced with a set 18 of reference image coordinates, the set 18 of reference image coordinates being dependent on the sample sets 11 with the random values 12 of the extrinsic camera parameters Tz, Rx, Ry, Rz.
  • a plurality of sets 18 of reference image coordinates are shown in Fig. 5 in a raw image 22 distorted by fish-eye lenses of the capturing camera 3.
  • the sets 18 of reference image coordinates can be represented by a distribution 23 which describes a fish-eye error when projecting the reference world coordinates of the reference subarea 15 into the image coordinate system I.
  • in the second transformation step 19, the set 18 of reference image coordinates is projected back from the image coordinate system I into the world coordinate system W.
  • the constant intrinsic camera parameters 16 as well as the constant translation parameters 17 in the x-direction and in the y-direction from the first transformation step 14 are again used.
  • in addition, a set 20 of extrinsic ideal camera parameters is used which describes an ideal camera calibration of the camera 3.
  • as a result, a set 21 of reference world coordinates is formed, by means of which the systematic error can be determined.
  • a plurality of sets 21 of reference world coordinates are shown in the projected top view image 24 in Fig. 6, wherein the systematic error describing the spatial uncertainty can be represented by a distribution 25.
  • the spatial uncertainty can be described or quantified by means of a mean value and a standard deviation of the distribution 25 at each image position in the top view image 24.
  • Fig. 7 shows the coordinate transformation 13 with the first transformation step 14 and the second transformation step 19, wherein, in contrast to the coordinate transformation 13 according to Fig. 4, the sample sets 11 are used during the second transformation step 19.
  • the reference subarea 15 with the reference object points 26 is transformed from the world coordinate system W into the image coordinate system I by means of the set 20 of the extrinsic ideal camera parameters, the constant intrinsic camera parameters 16 and the constant translation parameters 17 in the x-direction and in the y-direction, which are not shown here.
  • a raw image 27 with reference image points 28 is produced.
  • the reference image coordinates of the reference image points 28 have, in particular, no error 23 due to the assumption of the ideal calibration of the camera 3.
  • for determining the spatial uncertainty, the coordinate transformation 13 described with reference to Fig. 4 or the coordinate transformation 13 described with reference to Fig. 7 can be used.
  • the coordinate transformations 13 shown in Fig. 4 and 7 produce identical results, i.e. identical systematic errors 25, if the first transformation step 14 is the ideal inverse of the second transformation step 19.
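  • in our own notation: writing the forward projection with parameters $\theta$ as $P_\theta$ and the ground-plane re-projection as $P_\theta^{-1}$, the variant of Fig. 4 yields the set $\{P_{\theta_0}^{-1}(P_{\theta_i}(\mathbf{x}))\}_i$ and the variant of Fig. 7 the set $\{P_{\theta_i}^{-1}(P_{\theta_0}(\mathbf{x}))\}_i$, with $\theta_0$ the ideal and $\theta_i$ the sampled extrinsic camera parameters; the statement above corresponds to these sets coinciding when each transformation step exactly inverts the other.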

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Quality & Reliability (AREA)
  • Multimedia (AREA)
  • Traffic Control Systems (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to a method for determining a spatial uncertainty (25) between object points in an environmental area (5) of a motor vehicle (1) and image points representing the object points in at least one image (24) of the environmental area (5) which is generated by means of image data (22) of at least one camera (3) of the motor vehicle (1), wherein a distribution (D, D1, D2, D3, D4) for at least one extrinsic camera parameter (Tz, Rx, Ry, Rz) of the vehicle-side camera (3) is predetermined, a sample set (11) with random values (12) of the at least one extrinsic camera parameter (Tz, Rx, Ry, Rz) is determined based on the at least one distribution (D, D1, D2, D3, D4), a set (21) of reference world coordinates of at least one predetermined reference object point (26) in the environmental area (5) is determined by means of a coordinate transformation (13) dependent on the sample set (11), and the spatial uncertainty (25) is determined as a function of the set (21) of projected reference world coordinates of the reference object point (26). The invention further relates to a driver assistance system (2) as well as a motor vehicle (1).
PCT/EP2018/068827 2017-07-12 2018-07-11 Method for determining spatial uncertainty in images of an environmental region of a motor vehicle, driver assistance system and motor vehicle WO2019012004A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102017115587.2A DE102017115587A1 (de) 2017-07-12 2017-07-12 Method for determining a spatial uncertainty in images of a surrounding area of a motor vehicle, driver assistance system and motor vehicle
DE102017115587.2 2017-07-12

Publications (1)

Publication Number Publication Date
WO2019012004A1 (fr)

Family

ID=62904476

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2018/068827 WO2019012004A1 (fr) Method for determining spatial uncertainty in images of an environmental region of a motor vehicle, driver assistance system and motor vehicle

Country Status (2)

Country Link
DE (1) DE102017115587A1 (fr)
WO (1) WO2019012004A1 (fr)


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11699247B2 (en) * 2009-12-24 2023-07-11 Cognex Corporation System and method for runtime determination of camera miscalibration

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012139636A1 (fr) * 2011-04-13 2012-10-18 Connaught Electronics Limited Étalonnage de caméra de véhicule en ligne sur la base d'un suivi de texture de surface routière et de propriétés géométriques
WO2012139660A1 (fr) * 2011-04-15 2012-10-18 Connaught Electronics Limited Étalonnage de caméra de véhicule en ligne sur la base d'extractions de marquage routier

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
SHIGANG LI ET AL: "Easy Calibration of a Blind-Spot-Free Fisheye Camera System Using a Scene of a Parking Space", IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, IEEE, PISCATAWAY, NJ, USA, vol. 12, no. 1, 1 March 2011 (2011-03-01), pages 232 - 242, XP011348841, ISSN: 1524-9050, DOI: 10.1109/TITS.2010.2085435 *
SUNDARESWARA R ET AL: "Bayesian Modelling of Camera Calibration and Reconstruction", 3-D DIGITAL IMAGING AND MODELING, 2005. 3DIM 2005. FIFTH INTERNATIONAL CONFERENCE ON OTTAWA, ON, CANADA 13-16 JUNE 2005, PISCATAWAY, NJ, USA,IEEE, 13 June 2005 (2005-06-13), pages 394 - 401, XP010811021, ISBN: 978-0-7695-2327-9, DOI: 10.1109/3DIM.2005.24 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109961522A (zh) * 2019-04-02 2019-07-02 百度国际科技(深圳)有限公司 图像投射方法、装置、设备和存储介质
CN111751824A (zh) * 2020-06-24 2020-10-09 杭州海康汽车软件有限公司 车辆周围的障碍物检测方法、装置及设备
CN111751824B (zh) * 2020-06-24 2023-08-04 杭州海康汽车软件有限公司 车辆周围的障碍物检测方法、装置及设备
CN113223076A (zh) * 2021-04-07 2021-08-06 东软睿驰汽车技术(沈阳)有限公司 车辆与车载摄像头的坐标系标定方法、设备及存储介质
CN113223076B (zh) * 2021-04-07 2024-02-27 东软睿驰汽车技术(沈阳)有限公司 车辆与车载摄像头的坐标系标定方法、设备及存储介质

Also Published As

Publication number Publication date
DE102017115587A1 (de) 2019-01-17

Similar Documents

Publication Publication Date Title
JP7054803B2 (ja) カメラパラメタセット算出装置、カメラパラメタセット算出方法及びプログラム
US11455746B2 (en) System and methods for extrinsic calibration of cameras and diffractive optical elements
JP7016058B2 (ja) カメラパラメタセット算出方法、カメラパラメタセット算出プログラム及びカメラパラメタセット算出装置
CN108805934B (zh) 一种车载摄像机的外部参数标定方法及装置
CN111383279B (zh) 外参标定方法、装置及电子设备
US7768527B2 (en) Hardware-in-the-loop simulation system and method for computer vision
JP7002007B2 (ja) カメラパラメタセット算出装置、カメラパラメタセット算出方法及びプログラム
JP5455124B2 (ja) カメラ姿勢パラメータ推定装置
CN109690622A (zh) 多相机系统中的相机登记
CN105551020A (zh) 一种检测目标物尺寸的方法及装置
WO2019012004A1 (fr) Method for determining spatial uncertainty in images of an environmental region of a motor vehicle, driver assistance system and motor vehicle
JP2023505891A (ja) 環境のトポグラフィを測定するための方法
CN114067001B (zh) 车载相机角度标定方法、终端及存储介质
CN112270719A (zh) 相机标定方法、装置及系统
Ding et al. A robust detection method of control points for calibration and measurement with defocused images
CN113658262A (zh) 相机外参标定方法、装置、系统及存储介质
CN112017236A (zh) 一种基于单目相机计算目标物位置的方法及装置
CN113034605B (zh) 目标对象的位置确定方法、装置、电子设备及存储介质
CN111382591A (zh) 一种双目相机测距校正方法及车载设备
WO2016146559A1 (fr) Procédé permettant de déterminer une position d'un objet dans un système de coordonnées tridimensionnelles, produit-programme informatique, système de caméra et véhicule à moteur
CN114119682A (zh) 一种激光点云和图像配准方法及配准系统
CA3122865A1 (fr) Procede de detection et de modelisation d'un objet sur la surface d'une route
TWI424259B (zh) 相機擺置角度校正法
CN113052974A (zh) 物体三维表面的重建方法和装置
CN113450415B (zh) 一种成像设备标定方法以及装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18740199

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18740199

Country of ref document: EP

Kind code of ref document: A1