WO2020212148A1 - Method for determining at least one correction value for correcting an orientation error of a camera for a motor vehicle, electronic computing device as well as driver assistance system - Google Patents

Method for determining at least one correction value for correcting an orientation error of a camera for a motor vehicle, electronic computing device as well as driver assistance system

Info

Publication number
WO2020212148A1
WO2020212148A1 (PCT/EP2020/059336)
Authority
WO
WIPO (PCT)
Prior art keywords
calibration
camera
motor vehicle
image
determining
Prior art date
Application number
PCT/EP2020/059336
Other languages
French (fr)
Inventor
Naveen KURUBA
Ehsan CHAH
Ahmed Fathy
Original Assignee
Connaught Electronics Ltd.
Priority date
Filing date
Publication date
Application filed by Connaught Electronics Ltd. filed Critical Connaught Electronics Ltd.
Publication of WO2020212148A1 publication Critical patent/WO2020212148A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20048 Transform domain processing
    • G06T2207/20061 Hough transform
    • G06T2207/20081 Training; Learning
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30248 Vehicle exterior or interior

Definitions

  • the invention relates to a method for determining at least one correction value for correcting an orientation error of a camera for a motor vehicle by means of an electronic computing device of a driver assistance system. Further, the invention relates to an electronic computing device as well as to a driver assistance system.
  • in the following, the interest is in particular directed to motor vehicles, which comprise a camera in the rearwards directed area of the motor vehicle.
  • the camera is formed for observing a trailer.
  • the camera can for example be disposed at an upper end of the roof of the motor vehicle.
  • orientation errors can occur in mounting the camera such that errors can in particular occur in evaluating the images. If the camera should for example be used for determining a trajectory of a trailer, this can in particular result in misinterpretations within the image.
  • US 7 949 486 B2 discloses a camera, which is mounted on a mirror housing of a motor vehicle and captures a ground plane image of the ground on a side of the motor vehicle for display on a display unit within the vehicle.
  • the camera is calibrated to correct the offset of the camera from the ideal position.
  • an image of a reference point at the vehicle is captured by pivoting the mirror housing from a rest position into a working position.
  • the offset of the actual position of the image of the reference point in the captured image from its ideal position is calculated, and a look-up table is created, which indicates the position, at which the pixels of the subsequently captured image frames should be located to generate image frames with offset correction.
  • This object is solved by a method, by an electronic computing device as well as by a driver assistance system according to the independent claims.
  • One aspect of the invention relates to a method for determining at least one correction value for correcting an orientation error of a camera for a motor vehicle by means of an electronic computing device of a driver assistance system.
  • An image of an environment of the motor vehicle with a calibration object of the motor vehicle is captured by means of the camera.
  • the calibration object is recognized in the captured image by means of the electronic computing device.
  • At least a first calibration point, a second calibration point, a third calibration point and a fourth calibration point of the calibration object are determined depending on an exterior shape of the calibration object.
  • a first calibration line, which extends through the first and the second calibration point, a second calibration line, which extends through the second and the third calibration point, a third calibration line, which extends through the third and the fourth calibration point, and a fourth calibration line, which extends through the fourth and the first calibration point, are determined.
  • a virtual first position of a first vanishing point of the image which is formed as a virtual first point of intersection of the first and the third calibration line, is determined.
  • a virtual second position of a second vanishing point of the image, which is formed as a virtual second point of intersection of the second and the fourth calibration line is determined.
  • the at least one correction value is determined depending on the virtual first position and the virtual second position.
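The intersection construction above can be sketched in homogeneous coordinates, where the line through two points and the intersection of two lines are both cross products; the corner coordinates below are purely illustrative, not values from the patent.

```python
import numpy as np

def line_through(p, q):
    """Homogeneous line through two 2D points (cross product of lifted points)."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def intersect(l1, l2):
    """Intersection of two homogeneous lines, returned as (x, y)."""
    v = np.cross(l1, l2)
    return v[:2] / v[2]

# hypothetical image coordinates of the four calibration points (platform corners)
p1, p2, p3, p4 = (100, 420), (560, 400), (480, 260), (160, 250)

k1 = line_through(p1, p2)  # first calibration line, through points 1 and 2
k2 = line_through(p2, p3)  # second calibration line, through points 2 and 3
k3 = line_through(p3, p4)  # third calibration line, through points 3 and 4
k4 = line_through(p4, p1)  # fourth calibration line, through points 4 and 1

V_y = intersect(k1, k3)    # vanishing point of the pair K1/K3 defined as parallel
V_x = intersect(k2, k4)    # vanishing point of the pair K2/K4 defined as parallel
print(V_x, V_y)
```

If a pair of image lines is nearly parallel, the third homogeneous component of the intersection approaches zero and the vanishing point moves toward infinity, so a practical implementation would check that component before dividing.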
  • determination of the correction value allows high flexibility of the camera position since this determination of the correction value is independent of the position of the camera at the motor vehicle. Furthermore, a low computing effort is in particular required since templates for matching the calibration object are not required.
  • the orientation error is in particular an error of the camera setup.
  • the camera can have been incorrectly set up/mounted in particular with respect to the three motor vehicle axes (longitudinal axis, transverse axis, vertical axis) for example in mounting the camera.
  • the camera can in particular have an actual pose, which deviates from a desired pose, in particular of a reference camera. This deviation is referred to as orientation error.
  • the camera can be disposed at a rearward directed stoplight/brake light, which can also be referred to as third stoplight/brake light.
  • the camera can be formed as a separate component to the stoplight or as an integral constituent of the stoplight.
  • a corresponding evaluation of the image, for example by means of an image processing program, is performed.
  • the calibration object is a part of the motor vehicle and thus not a separate component.
  • the determination of the correction value can be performed reduced in effort and without additional component.
  • the determination of the correction value can be performed by means of the method within an image, in other words, without having to capture a further image.
  • the vanishing points V_x, V_y can be determined with the aid of the projection relation s · V = K · (r_x r_y r_z t), in other words s · V_x = K · r_x and s · V_y = K · r_y for the two directions of the calibration lines.
  • s can in particular correspond to an unknown scaling factor.
  • K corresponds to the intrinsic matrix of the camera.
  • the factors r_x, r_y and r_z are the corresponding columns of the rotation matrix from the environmental and motor vehicle coordinates to the camera coordinates.
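Assuming the relation has the usual vanishing-point form s · V = K · r per rotation column, the rotation can be recovered as sketched below; the intrinsic matrix and the vanishing-point coordinates are hypothetical, and the sign of each column would in practice be fixed by knowing on which side of the camera the loading platform lies.

```python
import numpy as np

# hypothetical intrinsic matrix K (focal lengths and principal point)
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

def rotation_from_vanishing_points(V_x, V_y, K):
    """Recover the rotation columns from the two vanishing points.

    From s * V = K * r, each column r is K^-1 * V up to the unknown
    scale s, which normalizing to a unit vector removes. The third
    column completes the right-handed frame. With noisy vanishing
    points r_x and r_y are only approximately orthogonal; this sketch
    skips the re-orthogonalization a real implementation would add.
    """
    Kinv = np.linalg.inv(K)
    r_x = Kinv @ np.array([V_x[0], V_x[1], 1.0])
    r_x /= np.linalg.norm(r_x)
    r_y = Kinv @ np.array([V_y[0], V_y[1], 1.0])
    r_y /= np.linalg.norm(r_y)
    r_z = np.cross(r_x, r_y)
    return np.column_stack([r_x, r_y, r_z])

# hypothetical vanishing-point positions in pixels
R = rotation_from_vanishing_points((1500.0, 260.0), (-900.0, 250.0), K)
```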
  • the motor vehicle is provided as a pickup vehicle and the at least one correction value is determined by capturing a loading platform of the pickup vehicle as a calibration object.
  • the at least one correction value can be determined in particular without additional component due to the utilization of the loading platform of the pickup vehicle.
  • the pickup vehicle has a cockpit or a driver's cab and the loading platform is formed separately therefrom.
  • the driver's cab and the loading platform are commonly disposed on a chassis of the motor vehicle. In particular in a pickup vehicle, it can for example be possible that a trailer is attached.
  • an orientation and a trajectory of the trailer relative to the motor vehicle can then be reliably determined by means of the correction of the orientation error.
  • it can thereby be allowed that critical situations in the road traffic are prevented, which could for example arise due to veering movements of the trailer relative to the motor vehicle.
  • a hitch angle between the motor vehicle and the trailer can thereby be reliably determined.
  • the driver assistance system can then in particular be formed as a hitch angle assistance system, which can also be referred to as hitch angle detection system.
  • a correction value for a pitch angle of the camera relative to the motor vehicle is determined and/or a correction value of the roll angle of the camera relative to the motor vehicle and/or a correction value for a yaw angle of the camera relative to the motor vehicle are determined.
  • the correction of the orientation error which is in particular characterized by the pitch angle, the roll angle and the yaw angle, can be determined.
  • every possible relative position of the trailer to the motor vehicle in the environment in all of the spatial directions can in particular be reliably determined.
  • a single correction with respect to the pitch angle, the roll angle and the yaw angle is performed.
  • the orientation error of the camera at the motor vehicle can be reliably determined such that it can in particular be corrected.
  • the image can be captured as a fish-eye image by means of the camera and the captured fish-eye image with the captured calibration object can be adapted in perspective for determining the at least one correction value.
  • since reversing cameras are often provided as fish-eye cameras with a fish-eye lens, in particular in order to capture a large capturing range of the environment of the motor vehicle, the adaptation of the fish-eye image allows an improved evaluation of the correction value.
  • the correction value of the orientation error can nevertheless be reliably determined with a large capturing range by means of the fish-eye camera.
  • this can be performed by means of an image processing program of the camera and/or the electronic computing device.
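As an illustration of such an adaptation, the sketch below undistorts a single pixel under a simple equidistant fish-eye model (r = f · θ); the actual lens model and parameters of the camera are not given in the text, so the focal length and principal point here are assumptions.

```python
import numpy as np

def undistort_point(u, v, f, cx, cy):
    """Map a pixel from an equidistant fish-eye image to a pinhole image.

    Equidistant model: image radius r_d = f * theta, where theta is the
    angle between the viewing ray and the optical axis. A pinhole camera
    with the same focal length maps that ray to radius r_u = f * tan(theta).
    """
    dx, dy = u - cx, v - cy
    r_d = np.hypot(dx, dy)
    if r_d == 0.0:
        return float(u), float(v)       # the principal point is unchanged
    theta = r_d / f                     # invert the equidistant model
    r_u = f * np.tan(theta)             # pinhole radius for the same ray
    scale = r_u / r_d
    return float(cx + dx * scale), float(cy + dy * scale)

# hypothetical focal length and principal point
print(undistort_point(420.0, 240.0, f=300.0, cx=320.0, cy=240.0))
```

Off-centre pixels move outward under this mapping, which is why the straight edges of the loading platform appear curved in the raw fish-eye image and straight after the adaptation.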
  • a display image of the environment corrected depending on the at least one correction value is displayed by means of a display device of the motor vehicle.
  • the corrected display image can in particular be provided for example for a driver of the motor vehicle or for a user of the driver assistance system.
  • the driver assistance system can be provided as a hitch angle assistance system and thereby the hitch angle between a trailer of the motor vehicle and the motor vehicle can be reliably displayed.
  • a trajectory of the trailer relative to the motor vehicle can for example also be determined and displayed by means of the hitch angle assistance system.
  • the fish-eye image is first captured and the fish-eye image is corrected with respect to the distortion.
  • the corrected fish-eye image is then corrected with respect to the orientation error and can be displayed on the display device as the display image.
  • the hitch angle can be regarded as a combination of the roll angle, the pitch angle and the yaw angle of the trailer relative to the motor vehicle.
  • the hitch angle is also dependent on further factors such as for example an exterior shape of the motor vehicle and the trailer.
  • a critical angle between the motor vehicle and the trailer can then be determined by the hitch angle assistance system, wherein the motor vehicle and the trailer would contact each other at the critical angle.
  • the first and the third calibration line are defined as parallel to each other by means of the electronic computing device, and the second and the fourth calibration line are defined as parallel to each other by means of the electronic computing device.
  • parallel lines, in other words in particular the first and the third calibration line as well as the second and the fourth calibration line, meet each other in a finite point of the projected image, which results in the corresponding vanishing points.
  • the vertical edges of the loading platform in vehicle longitudinal direction and the vertical edges of the loading platform in vehicle transverse direction can then for example be correspondingly defined as respectively parallel.
  • presetting or evaluating the calibration lines can be regarded as defining. In other words, even if the real calibration object does not have parallel edges, at least the determined calibration lines are assumed as respectively parallel to each other.
  • the corresponding vanishing points in vehicle longitudinal direction and in vehicle transverse direction can then be determined.
  • the real calibration object at the motor vehicle is formed with a substantially rectangular base surface and the determined calibration points are selected as a respective corner of the rectangular base surface.
  • a real calibration object at the motor vehicle with a cornered base surface is preset as the calibration object in particular for capturing with the camera.
  • the calibration object can also be polygonally formed.
  • the electronic computing device is formed to determine as calibration points corners respectively located one after the other viewed in the longitudinal direction of the motor vehicle, in other words in alignment.
  • the calibration points can then for example be captured as corners of the loading platform by means of different image processing techniques.
  • the calibration points can be captured by means of a Hough transformation or by means of an edge pixel calculation mask or by means of a histogram of oriented gradients (HOG).
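A minimal Hough transform over (ρ, θ) bins, as one of the options named above, could look like this; the synthetic edge pixels stand in for real edge pixels of the loading platform.

```python
import numpy as np

def hough_lines(edge_points, rho_max, n_theta=180):
    """Minimal Hough transform: each edge pixel votes for all (rho, theta)
    bins it could lie on; the strongest bin is returned as the line."""
    thetas = np.deg2rad(np.arange(n_theta))
    acc = np.zeros((2 * rho_max + 1, n_theta), dtype=int)
    for x, y in edge_points:
        # normal form of a line: rho = x*cos(theta) + y*sin(theta)
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + rho_max, np.arange(n_theta)] += 1
    rho_idx, theta_idx = np.unravel_index(np.argmax(acc), acc.shape)
    return rho_idx - rho_max, np.rad2deg(thetas[theta_idx])

# synthetic horizontal edge y = 50, e.g. the rear edge of the loading platform
points = [(x, 50) for x in range(100)]
rho, theta_deg = hough_lines(points, rho_max=200)
print(rho, theta_deg)
```

Real detections would first reduce the camera image to edge pixels (for example with a gradient-based edge mask) and then keep the four strongest peaks as the calibration lines.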
  • a display image of the environment can be displayed by means of a display device of the motor vehicle, wherein the display image is displayed in a bird's eye perspective.
  • a loading platform which is captured as the calibration object
  • a top view can then be displayed as the bird's eye perspective.
  • if the driver assistance system is for example provided as a hitch angle assistance system, the hitch angle and the corresponding trajectories of the trailer can thereby be reliably displayed in the bird's eye perspective. This allows a display intuitively perceptible for the user.
  • a height of the camera relative to the motor vehicle is determined by means of the electronic computing device for generating the display image in the bird's eye perspective.
  • orientation errors due to the height or representation errors due to the height of the camera relative to the motor vehicle can then for example be taken into account.
  • this contributes to the fact that for example with an attached trailer, the driver of the motor vehicle can better estimate the situation of the trailer relative to the motor vehicle. Thereby, a hitch angle between the motor vehicle and the trailer can in particular be displayed in improved manner.
  • the captured calibration object in the image is corrected with respect to the orientation error of the camera for determining the height of the camera.
  • the captured calibration object is corrected to the corrected calibration object. For example, if it is recognized that the loading platform as the calibration object is not displayed rectangularly within the bird's eye perspective, a corresponding adaptation with respect to the orientation error can be performed. In particular, this can be performed by adapting the captured calibration object to a rectangular shape.
  • the orientation error is corrected by machine learning of the electronic computing device. In particular, an adaptation to a rectangular shape can then be simply performed by machine learning. In particular, this can be performed in addition to the determination of the correction value.
  • a stored reference calibration object is compared to the corrected calibration object with respect to the respective size for determining the relative height of the camera and the relative height is determined depending on the comparison.
  • the reference calibration object can in particular for example be preset by an actual size of a loading platform of the motor vehicle. It can for example be stored on a storage device of the electronic computing device.
  • a deviation of the height of the camera can then for example be determined by the formulas:
  • dH corresponds to the deviation of the camera height.
  • L and W correspond to the expected length L and to the expected width W of the calibration object.
  • l and w correspond to the corrected length l and the corrected width w in the corrected display image.
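The text does not reproduce the formulas themselves; one plausible sketch, under the assumption that the platform's apparent size in the bird's-eye view scales inversely with the actual camera height, is:

```python
def height_deviation(H_ref, L, W, l, w):
    """Estimate the camera height deviation dH from expected (L, W) and
    measured (l, w) platform sizes in the bird's-eye view.

    Assumption (not the patent's exact equations): the view is rendered
    for a reference height H_ref, so a camera that actually sits higher
    renders the platform smaller, with measured/expected size equal to
    H_ref / H_actual. Length and width estimates are averaged.
    """
    H_from_length = H_ref * L / l
    H_from_width = H_ref * W / w
    H_actual = 0.5 * (H_from_length + H_from_width)
    return H_actual - H_ref

# illustrative numbers: platform appears 5 % smaller than expected
print(height_deviation(H_ref=1.8, L=1.6, W=1.4, l=1.52, w=1.33))
```

Under this assumption, a platform that appears smaller than expected (l < L) means the camera sits higher than the reference height, so dH comes out positive.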
  • the determined relative height of the camera is taken into account in the display of the display image as a bird's eye perspective.
  • the corresponding determined relative height is taken into account in the display such that the display image can be reliably displayed.
  • a further aspect of the invention relates to an electronic computing device, which is formed for performing the method according to the previous aspect.
  • the electronic computing device comprises a computer program product with program code means.
  • the computer program product can be stored on a computer-readable medium to perform the method according to the preceding aspect when the computer program product is run on a processor of the electronic computing device.
  • a still further aspect of the invention relates to a driver assistance system with a camera and with an electronic computing device according to the preceding aspect.
  • the driver assistance system can be formed as a hitch angle assistance system.
  • the invention relates to a motor vehicle with a driver assistance system.
  • the motor vehicle is in particular formed as a passenger car, in particular as a pickup vehicle.
  • An independent aspect of the invention relates to a method for determining at least a real height of a camera relative to a reference height of a reference camera for correcting a height error for a motor vehicle by means of an electronic computing device of a driver assistance system.
  • An image of an environment of the motor vehicle with a calibration object of the motor vehicle is captured by means of the camera.
  • the calibration object in the captured image is recognized by means of the electronic computing device.
  • the image is generated as a bird's eye perspective.
  • the captured calibration object in the bird's eye perspective is corrected with respect to an orientation error of the calibration object in the captured image to a corrected calibration object.
  • a size of the corrected calibration object is compared to a size of a reference calibration object and the real height of the camera is determined depending on the comparison.
  • a determination of a hitch angle between the motor vehicle and the trailer can for example be performed in improved manner by the correction and an image processing based on the correction.
  • trajectories, for example of a trailer can for example be displayed in improved manner.
  • a deviation of the height of the camera can then for example be determined by the formulas:
  • dH corresponds to the deviation of the camera height.
  • L and W correspond to the expected length L and the expected width W of the calibration object.
  • l and w correspond to the corrected length l and the corrected width w in the corrected display image.
  • the determined relative height of the camera is taken into account in the display of the display image as a bird's eye perspective.
  • the corresponding determined relative height is taken into account in the display such that the display image can be reliably displayed.
  • the orientation error is corrected by machine learning of the electronic computing device.
  • an adaptation to a rectangular shape can then be simply performed by machine learning. In particular, this can be performed in addition to the determination of the correction value.
  • a region of interest in which the calibration object is located is specified in the captured image to capture the calibration object.
  • the region of interest is a partial section of the image. Therefore, it is possible that only the region of interest is evaluated during the evaluation of the image, whereby computing capacity of the electronic computing device can be saved, since the entire captured image does not have to be evaluated to capture the calibration object.
  • the region of interest can be specified in such a way that at least the loading platform of the motor vehicle and in particular the calibration points are captured.
  • a central part of the calibration object is not part of the region of interest, whereby, for example, the calibration points are still present, but an area which is essentially located between the calibration points is not part of the region of interest. This can save even more computing capacity as a smaller portion of the image needs to be evaluated to capture the calibration object.
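Such a region of interest with an excluded central part can be realized as a boolean mask; the image size and rectangle coordinates below are hypothetical.

```python
import numpy as np

def platform_roi_mask(shape, outer, inner):
    """Boolean mask that is True inside the outer ROI rectangle but False
    inside the inner (central) rectangle, so only the border band that
    contains the calibration points is evaluated.

    Rectangles are given as (row_start, row_end, col_start, col_end).
    """
    mask = np.zeros(shape, dtype=bool)
    y0, y1, x0, x1 = outer
    mask[y0:y1, x0:x1] = True
    iy0, iy1, ix0, ix1 = inner
    mask[iy0:iy1, ix0:ix1] = False      # drop the central part of the platform
    return mask

# hypothetical 480x640 image: ROI around the platform, centre excluded
mask = platform_roi_mask((480, 640),
                         outer=(200, 460, 80, 560),
                         inner=(250, 410, 140, 500))
print(mask.sum(), "pixels evaluated instead of", 480 * 640)  # → 67200 pixels evaluated instead of 307200
```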
  • the region of interest is specified by stored parameters of the calibration object and/or the region of interest is generated by a tolerance range for the calibration object.
  • the intrinsic parameters of the camera can be used as stored parameters.
  • CAD computer-aided design
  • these can, for example, be specified by computer-aided design, a so-called CAD, and then retrieved as stored parameters.
  • it can be provided that a corresponding tolerance range is generated and specified, so that the calibration object can still be reliably detected even if the camera is misaligned.
  • the tolerance range can be specified depending on the stored parameter and/or the CAD.
  • the at least one correction value is additionally determined by homographic matrix decomposition.
  • the correction value can thus be determined by decomposing the homography between the plane of the calibration object and the image plane.
  • the loading platform of the motor vehicle can also be used here as a calibration object.
  • the camera no longer has to be arranged directly along a longitudinal axis of the motor vehicle, but that the corresponding arrangement positions of the camera can be taken into account by the method.
  • the first calibration line and the third calibration line as well as the second calibration line and the fourth calibration line are parallel to each other in reality, whereby an angle θ between the first calibration line and the fourth calibration line within the image is also assumed to be known.
  • line segments of the calibration lines are captured within the image. From a respective line segment (p1, p2) within the image, a respective pair of rays (r1, r2) can be defined. By means of these rays, a plane can be generated, the normal of which is defined by the cross product n = r1 × r2.
  • the first normal vector n 1 is assigned to the first calibration line
  • the second normal vector n 2 is assigned to the second calibration line
  • the third normal vector n 3 is assigned to the third calibration line
  • the fourth normal vector n 4 is assigned to the fourth calibration line.
  • the first calibration line and the third calibration line lie in corresponding planes, which are aligned along a common direction v_y.
  • the direction v_y can be determined as the cross product of the associated normal vectors, and its sign depends on the order of n1 and n3. Since in particular the loading platform must lie in the half-space corresponding to the negative z coordinate, it can be assumed that the dot product c_x · v_y is negative, whereby the sign of v_y can be determined. Then, two unit vectors x1 and x2 can be searched for, which are perpendicular to n2 and n4, respectively. Furthermore, for symmetry reasons, it can be assumed that x2 − x1 is perpendicular to v_y. The angle θ is the angle between x1 and x2.
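The ray-and-normal construction can be sketched as follows, assuming a pinhole model with hypothetical intrinsics and segment endpoints: each image line segment back-projects to two rays, whose cross product gives the normal of the plane through the camera centre, and the common direction of a parallel pair is the cross product of the two normals.

```python
import numpy as np

# hypothetical intrinsic matrix
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

def ray(pixel, K):
    """Back-project an image point to a unit viewing ray through the camera centre."""
    r = np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])
    return r / np.linalg.norm(r)

def segment_normal(p1, p2, K):
    """Unit normal of the plane spanned by the rays through both segment endpoints."""
    n = np.cross(ray(p1, K), ray(p2, K))
    return n / np.linalg.norm(n)

# hypothetical segments of the first and third calibration line
n1 = segment_normal((100, 420), (560, 400), K)
n3 = segment_normal((160, 250), (480, 260), K)

v_y = np.cross(n1, n3)                  # common direction of the parallel pair
v_y /= np.linalg.norm(v_y)
```

The sign of v_y is still ambiguous at this point; as described above, it would be fixed by requiring the platform to lie in the expected half-space.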
  • the camera matrix with the axes c_x, c_y, c_z can be generated by transforming the rotation matrix.
  • the backward transformation R⁻¹ = Rᵀ can be used for this.
  • the Euler angles can be extracted from the rotation matrix accordingly.
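A sketch of this final step: since a rotation matrix satisfies R⁻¹ = Rᵀ, the Euler angles can be read off directly; the ZYX (yaw-pitch-roll) convention used here is an assumption, as the text does not fix one.

```python
import math
import numpy as np

def euler_zyx_from_rotation(R):
    """Extract (yaw, pitch, roll) from a rotation matrix, ZYX convention.

    Assumes R = Rz(yaw) @ Ry(pitch) @ Rx(roll); valid away from the
    gimbal-lock case |pitch| = 90 degrees.
    """
    pitch = math.asin(-R[2, 0])
    roll = math.atan2(R[2, 1], R[2, 2])
    yaw = math.atan2(R[1, 0], R[0, 0])
    return yaw, pitch, roll

def rot_zyx(yaw, pitch, roll):
    """Build a rotation in the same convention (used here for a round trip)."""
    cz, sz = math.cos(yaw), math.sin(yaw)
    cy, sy = math.cos(pitch), math.sin(pitch)
    cx, sx = math.cos(roll), math.sin(roll)
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    return Rz @ Ry @ Rx

R = rot_zyx(0.10, -0.05, 0.20)
assert np.allclose(R.T @ R, np.eye(3))   # R^-1 equals R^T for rotations
print(euler_zyx_from_rotation(R))
```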
  • a further aspect of the invention relates to an electronic computing device, which is designed to perform the method according to the preceding aspect.
  • the method is carried out by means of the electronic computing device.
  • the driver assistance system is designed for a motor vehicle, in particular for a pickup vehicle.
  • the driver assistance system can also be described as an electronic vehicle guidance system.
  • Advantageous forms of configuration of the method are to be regarded as advantageous forms of configuration of the electronic computing device as well as of the driver assistance system.
  • the electronic computing device as well as the driver assistance system comprise concrete features, which allow performing the method or an advantageous form of configuration thereof.
  • Fig. 1 a schematic top view of a vehicle/trailer combination with an embodiment of a driver assistance system
  • Fig. 2 a schematic view of the method for determining a correction value
  • Fig. 3 a schematic view of the method for determining a height of an embodiment of the camera
  • Fig. 4 a schematic view of an embodiment of a calibration object
  • Fig. 5 a schematic block diagram for determining the correction value
  • Fig. 6 a schematic perspective view to determine a homographic matrix
  • Fig. 7 a schematic perspective view of an image with a region of interest.
  • identical or functionally identical elements are provided with the same reference characters.
  • Fig. 1 schematically shows a motor vehicle 1 with an attached trailer 2, which constitute a vehicle/trailer combination 3 in the state coupled together, in a top view.
  • the motor vehicle 1 is in particular formed as a passenger car.
  • the motor vehicle 1 is provided as a pickup vehicle and comprises a loading platform 4.
  • the motor vehicle 1 has a longitudinal axis L1 and the trailer 2 has a longitudinal axis L2.
  • the trailer 2 is in particular coupled to the motor vehicle 1 via a tow coupling 5.
  • the motor vehicle 1 comprises a camera 6, by means of which an environment 7 of the motor vehicle 1 with a calibration object, which corresponds to the loading platform 4 in the present embodiment, can be captured.
  • the camera 6 can capture at least the loading platform 4 and the environment 7 by means of an image processing program.
  • the orientation error is in particular an error of the camera setup.
  • the camera 6 can have been incorrectly set up/mounted, in particular with respect to the three motor vehicle axes (longitudinal axis L1, transverse axis Q, vertical axis), for example in mounting the camera 6.
  • the camera 6 can in particular have an actual pose, which deviates from a desired pose, in particular of a reference camera. This deviation is referred to as orientation error.
  • the camera 6 can be disposed at a rearwards directed stoplight/brake light, which can also be referred to as third stoplight/brake light.
  • the camera 6 can be formed as a separate component to the stoplight or as an integral constituent of the stoplight.
  • the motor vehicle 1 comprises a driver assistance system 8, which in particular comprises an electronic computing device 9.
  • an image B (Fig. 2) of the environment 7 of the motor vehicle 1 with the calibration object, in other words with the loading platform 4 in this embodiment, of the motor vehicle 1 is captured by means of the camera 6.
  • the calibration object in the captured image B is detected by means of the electronic computing device 9.
  • At least a first calibration point 10, a second calibration point 11, a third calibration point 12 and a fourth calibration point 13 of the calibration object are captured.
  • a first calibration line K1 (Fig. 2), which extends through the first and the second calibration point 10, 11,
  • a second calibration line K2, which extends through the second and the third calibration point 11, 12,
  • a third calibration line K3, which extends through the third and the fourth calibration point 12, 13, and a fourth calibration line K4, which extends through the fourth and the first calibration point 13, 10, are determined.
  • a virtual first position 14 of a first vanishing point V y of the image B, which is formed as a virtual first point of intersection of the first and the third calibration line K1, K3, is determined.
  • a virtual second position 15 of a second vanishing point V x of the image B, which is formed as a virtual second point of intersection of the second and the fourth calibration line K2, K4, is determined.
  • At least one correction value K (Fig. 2) is determined depending on the virtual first position 14 and the virtual second position 15.
  • a correction of the orientation error of the camera 6 relative to the motor vehicle 1 can be performed by means of the electronic computing device 9 by means of the correction value K.
  • a hitch angle a can be reliably and accurately determined after the correction.
  • the hitch angle a can be regarded as a combination of the roll angle, the pitch angle and the yaw angle of the trailer 2 relative to the motor vehicle 1.
  • the hitch angle a is also dependent on further factors, such as for example the exterior shape of the motor vehicle 1 and the trailer 2.
  • a critical angle between the motor vehicle 1 and the trailer 2 can then be determined by the hitch angle assistance system, wherein the motor vehicle 1 and the trailer 2 would contact each other at the critical angle.
  • correction values K are determined, wherein a correction value K for a pitch angle of the camera 6 relative to the motor vehicle 1 and/or a correction value K for a roll angle of the camera 6 relative to the motor vehicle 1 and/or a correction value K for a yaw angle of the camera 6 relative to the motor vehicle 1 are determined.
  • a corrected display image of the environment 7 depending on the at least one correction value K is displayed by means of a display device 16 of the motor vehicle 1.
  • at least the hitch angle a can then be displayed on the display device 16.
  • the calibration object is provided with a substantially rectangular base surface.
  • the loading platform 4 is in particular substantially rectangular and the corners of the loading platform 4 are selected as the calibration points 10, 11, 12, 13.
  • a real calibration object at the motor vehicle 1 with a cornered base surface can be preset as the calibration object in particular for capturing by the camera 6.
  • the calibration object can also be polygonally formed.
  • the electronic computing device 9 is formed to determine, as the calibration points 10, 11, 12, 13, corners respectively located one after the other viewed along the longitudinal axis L1 of the motor vehicle 1, in other words in alignment.
  • Fig. 2 schematically shows an embodiment of the method.
  • the image B is captured as a fish-eye image by means of the camera 6, and the captured fish-eye image B with the calibration object is adapted in perspective for determining the at least one correction value K in step S1.
  • intrinsic parameter values IP are used as an input for processing in step S1 for the adaptation.
  • the corrected image B k is generated from the fish-eye image B.
  • In step S2, the calibration points 10, 11, 12, 13 are determined from the corrected image B k, for example by Hough transformation or by a histogram of oriented gradients.
  • the first and the third calibration line K1 , K3 are defined as parallel to each other.
  • the second calibration line K2 and the fourth calibration line K4 are defined as parallel to each other by the electronic computing device 9.
•   An image Bv shows the corrected image Bk, wherein the calibration lines K1, K2, K3, K4 are extended such that the vanishing points Vx and Vy can be determined.
  • the vanishing point V y is situated along a vehicle transverse axis Q in a vehicle transverse direction.
•   the vanishing point Vx is situated in the direction of the longitudinal axis L1 of the motor vehicle 1.
•   in step S4, the at least one correction value K, in particular the three correction values K, is then determined depending on the vanishing points Vx, Vy. These correction values K can then in turn be output and, for example, used in processing for the display device 16.
•   the vanishing points Vx, Vy can be determined with the aid of the formula s · V = K · [rx ry rz t] · X, where V is the homogeneous vanishing point and X the corresponding homogeneous point at infinity.
  • s can in particular correspond to an unknown scaling factor.
  • K corresponds to the intrinsic matrix of the camera 6.
•   the factors rx, ry and rz are the corresponding columns of the rotation matrix from the environmental and motor vehicle coordinates to the camera coordinates.
  • the factor t is a translation vector from the environmental/motor vehicle coordinates to the camera coordinates.
•   the scaling factor s can be neglected in the determination by normalizing to unit vectors in the directions rx, ry and rz.
  • Fig. 3 shows the method for determining the height H of the camera 6 in a schematic view.
  • a display image of the environment 7 is displayed by means of the display device 16 of the motor vehicle 1 , wherein the display image is displayed in a bird's eye perspective.
  • the height H of the camera 6 relative to the motor vehicle 1 is determined by means of the electronic computing device 9.
•   the image B, which is in particular represented as a fish-eye image, is correspondingly corrected in step S1 as in Fig. 2.
•   in step S5, the generation of the display image in the bird's eye perspective is effected depending on the corrected image Bk.
  • corresponding bird's eye perspective parameters VP can in particular be used for determining the bird's eye perspective.
  • the captured calibration object in the image B is corrected with respect to the orientation error of the camera 6 for determining the height H of the camera 6. This is in particular effected in step S6.
  • this correction is performed by machine learning of the electronic computing device 9.
•   the loading platform 4 is not rectangularly formed in the image B due to the incorrect orientation of the camera 6.
•   in step S6, the corresponding errors can then be minimized and in particular the correction values K can be taken into account.
  • reference calibration object sizes 4P are taken into account in the determination of the height H.
  • the reference calibration object 17 (Fig. 4) is an expected object, in particular an expected object size.
•   in step S7, the height H relative to the motor vehicle 1 is then determined and output, in particular depending on the reference calibration object 17 and the reference calibration parameters 4P.
  • Fig. 4 schematically shows the determination of the height H.
  • the reference calibration object 17 is compared to the calibration object 18 corrected from Fig. 3.
  • the deviation of the camera 6 in the height H results by:
  • dH in particular corresponds to the deviation of the camera 6 in the height H.
•   L corresponds to the length L of the reference calibration object 17.
  • W corresponds to the width W of the reference calibration object 17.
•   l corresponds to the length l of the corrected calibration object 18 and w corresponds to the width w of the corrected calibration object 18.
  • the height H can then respectively be determined.
•   a stored reference calibration object 17 can be compared to the corrected calibration object 18 with respect to the respective size for determining the relative height H of the camera 6, and the relative height can be determined depending on the comparison.
  • the determined relative height H of the camera 6 is taken into account in the display of the display image as a bird's eye perspective.
  • Fig. 5 shows a schematic block diagram of a design of the method. According to this Fig. 5, the correction value K is formed by means of a homographic matrix decomposition 23.
•   the image B and the intrinsic parameters of the camera 6 are fed to step S1.
  • An overhead view 19 from step S1 is displayed.
  • a region-of-interest extraction 20 can be performed.
  • step S2 in particular can be executed.
  • a rotation solver 21 or a homographic estimation can be performed after step S2.
  • mechanical data 22 for example of the loading platform 4 can be used.
  • a matrix decomposition 23 is performed with a Z-position solver 24.
  • Estimated extrinsic parameters 25, a rotation deviation from the rotation solver 21 and a position deviation from the Z-position solver are fed to a calibrator 26 for generating an extrinsic correction.
•   the intrinsic matrix K can be generated from the focal lengths fx, fy and the principal point (cx, cy) as K = [[fx, 0, cx], [0, fy, cy], [0, 0, 1]].
  • Fig. 6 shows schematically in a perspective view the generation of the correction value K by means of the homographic matrix decomposition 23.
  • the loading platform 4 of the motor vehicle 1 can also be used here as the calibration object 18.
•   the camera 6 no longer has to be arranged directly along a longitudinal axis L1 of the motor vehicle 1, but the corresponding arrangement positions of the camera 6 can be taken into account accordingly.
•   the first calibration line K1 and the third calibration line K3 as well as the second calibration line K2 and the fourth calibration line K4 are parallel to each other in reality, whereby an angle Q between the first calibration line K1 and the fourth calibration line K4 within the image B is also assumed to be known.
  • line segments of the calibration lines K1 , K2, K3, K4 are recorded within the image B.
•   from a respective line segment (p1, p2) within the image B, a respective ray (r1, r2) can be defined.
•   the first calibration line K1 and the third calibration line K3 lie in corresponding planes which are aligned along an axis vy.
•   the direction vy depends on the order of n1 and n3. Since in particular the loading platform 4 must lie in a known half-space relative to the camera 6, the sign of vy can be resolved accordingly.
•   the camera matrix with cx, cy, cz can be generated by transforming the rotation matrix.
•   the backward transformation R^-1 = R^T can be used for this.
  • the Euler angles can be extracted from the rotation matrix accordingly.
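The extraction of the Euler angles from the rotation matrix can be sketched as follows; the Z-Y-X (yaw-pitch-roll) convention is an assumption, since the text does not name a particular convention:

```python
import math

def euler_from_rotation(R):
    """Extract (roll, pitch, yaw) from a 3x3 rotation matrix given as
    nested lists, using the common Z-Y-X (yaw-pitch-roll) convention.
    The convention is an assumption; the source does not name one."""
    # Clamp the argument of asin against numerical noise.
    pitch = math.asin(max(-1.0, min(1.0, -R[2][0])))
    roll = math.atan2(R[2][1], R[2][2])
    yaw = math.atan2(R[1][0], R[0][0])
    return roll, pitch, yaw
```

The three returned angles correspond to the correction values K for the roll, pitch and yaw deviations of the camera.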
  • Fig. 7 shows in a schematic perspective view an image B.
  • a region of interest ROI can be specified in which the calibration object 18 is located.
•   the region of interest ROI is a partial section of the image B. Thus, only the region of interest ROI has to be evaluated when the image B is evaluated, whereby computing capacity of the electronic computing device 9 can be saved, since the entire captured image B does not have to be evaluated to capture the calibration object 18.
•   the region of interest ROI can be specified in such a way that at least the loading platform 4 of the motor vehicle 1 and in particular the calibration points 10, 11, 12, 13 can be captured.
•   a central part of the calibration object 18 is not part of the region of interest ROI, whereby the calibration points 10, 11, 12, 13, for example, are still present, but an area located essentially between the calibration points 10, 11, 12, 13 is not part of the region of interest ROI.
  • the region of interest ROI is specified by stored parameters of the calibration object 18 and/or the region of interest ROI is generated by a tolerance range T for the calibration object 18.
  • the intrinsic parameters of camera 6 can be used as stored parameters, for example. These can, for example, be specified by computer-aided design (CAD) and then retrieved as stored parameters.
•   the tolerance range T can be specified in particular depending on the stored parameters and/or the CAD. This allows a computation-saving acquisition and evaluation of the calibration object 18.

Abstract

The invention relates to a method for determining at least one correction value (K) for correcting an orientation error of a camera (6) for a motor vehicle (1) by means of an electronic computing device (9) of a driver assistance system (8), including the steps of: - capturing an image (B) of an environment (7) of the motor vehicle (1) with a calibration object (18) of the motor vehicle (1) by means of the camera (6); - determining at least four calibration points (10, 11, 12, 13) depending on an exterior shape of the calibration object (18); - determining four calibration lines (K1, K2, K3, K4), which extend through the respective calibration points (10, 11, 12, 13); - determining a virtual first position (14) of a first vanishing point (Vy) of the image (B), and determining a virtual second position (15) of a second vanishing point (Vx); and - determining the at least one correction value (K) depending on the virtual first position (14) and the virtual second position (15). Further, the invention relates to an electronic computing device (9) as well as to a driver assistance system (8).

Description

Method for determining at least one correction value for correcting an orientation error of a camera for a motor vehicle, electronic computing device as well as driver assistance system
The invention relates to a method for determining at least one correction value for correcting an orientation error of a camera for a motor vehicle by means of an electronic computing device of a driver assistance system. Further, the invention relates to an electronic computing device as well as to a driver assistance system.
The following interest is in particular directed to motor vehicles, which comprise a camera in the rearwards directed area of the motor vehicle. In particular, the camera is formed for observing a trailer. The camera can for example be disposed at an upper end of the roof of the motor vehicle. In particular, orientation errors can occur in mounting the camera such that errors can in particular occur in evaluating the images. If the camera should for example be used for determining a trajectory of a trailer, this can in particular result in misinterpretations within the image.
US 7 949 486 B2 discloses a camera, which is mounted on a reversing camera housing of a motor vehicle, which captures a ground plane image of the ground on a side of the motor vehicle for display on a display unit within the vehicle. The camera is calibrated to correct the offset of the camera from the ideal position. During the calibration, an image of a reference point at the vehicle is captured by pivoting the mirror housing from a rest position into a working position. The offset of the actual position of the image of the reference point in the captured image from its ideal position is calculated, and a look-up table is created, which indicates the position, at which the pixels of the subsequently captured image frames should be located to generate image frames with offset correction.
It is the object of the present invention to provide a method, an electronic computing device as well as a driver assistance system, by means of which a correction value for correcting an orientation error of a camera for a motor vehicle can be reliably determined. This object is solved by a method, by an electronic computing device as well as by a driver assistance system according to the independent claims.

One aspect of the invention relates to a method for determining at least one correction value for correcting an orientation error of a camera for a motor vehicle by means of an electronic computing device of a driver assistance system. An image of an environment of the motor vehicle with a calibration object of the motor vehicle is captured by means of the camera. The calibration object is recognized in the captured image by means of the electronic computing device. At least a first calibration point, a second calibration point, a third calibration point and a fourth calibration point of the calibration object are determined depending on an exterior shape of the calibration object. A first calibration line, which extends through the first and the second calibration point, a second calibration line, which extends through the second and the third calibration point, a third calibration line, which extends through the third and the fourth calibration point, and a fourth calibration line, which extends through the fourth and the first calibration point, are determined. A virtual first position of a first vanishing point of the image, which is formed as a virtual first point of intersection of the first and the third calibration line, is determined. A virtual second position of a second vanishing point of the image, which is formed as a virtual second point of intersection of the second and the fourth calibration line, is determined. The at least one correction value is determined depending on the virtual first position and the virtual second position.
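The vanishing points above are defined as intersections of the extended calibration lines. In homogeneous image coordinates this intersection reduces to two cross products; the following pure-Python sketch uses illustrative function names that are not taken from the source:

```python
def cross(a, b):
    """Cross product of two 3-vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def line_through(p, q):
    """Homogeneous line through two image points (x, y)."""
    return cross((p[0], p[1], 1.0), (q[0], q[1], 1.0))

def vanishing_point(l1, l2):
    """Intersection of two homogeneous lines, dehomogenized."""
    x, y, w = cross(l1, l2)
    return (x / w, y / w)

# Example: the line through (0, 0) and (1, 1) meets the line
# through (0, 2) and (2, 0) at (1, 1).
vp = vanishing_point(line_through((0.0, 0.0), (1.0, 1.0)),
                     line_through((0.0, 2.0), (2.0, 0.0)))
```

In practice l1 and l2 would be the first and third (or second and fourth) calibration lines fitted through the detected calibration points.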
In particular, it is thereby allowed that a determination of the correction value can be performed without knowing the exact dimensions of the calibration object. The determination of the correction value allows high flexibility of the camera position since this determination is independent of the position of the camera at the motor vehicle. Furthermore, a low computing effort is in particular required since templates for matching the calibration object are not required.
The orientation error is in particular an error of the camera setup. In other words, the camera can have been incorrectly set up/mounted in particular with respect to the three motor vehicle axes (longitudinal axis, transverse axis, vertical axis) for example in mounting the camera. Thus, the camera can in particular have an actual pose, which deviates from a desired pose, in particular of a reference camera. This deviation is referred to as orientation error.
In particular, the camera can be disposed at a rearward directed stoplight/brake light, which can also be referred to as third stoplight/brake light. In particular, the camera can be formed as a separate component to the stoplight or as an integral constituent of the stoplight. Preferably, it can be provided that for recognizing the calibration object within the captured image, a corresponding evaluation of the image, for example by means of an image processing program, is performed.
In particular, it can be provided that the calibration object is a part of the motor vehicle and thus not a separate component. Thus, the determination of the correction value can be performed reduced in effort and without additional component. Furthermore, the determination of the correction value can be performed by means of the method within an image, in other words, without having to capture a further image.
In particular, the vanishing points (Vx, Vy) can be determined with the aid of the formula:

s · V = K · [rx ry rz t] · X
Therein, s can in particular correspond to an unknown scaling factor. Therein, K corresponds to the intrinsic matrix of the camera. The factors rx, ry and rz are the corresponding columns of the rotation matrix from the environmental and motor vehicle coordinates to the camera coordinates. The factor t is a translation vector from the environmental/motor vehicle coordinates to the camera coordinates. By means of the assumptions X∞ = [1 0 0 0]T and Y∞ = [0 1 0 0]T, the corresponding equations for the vanishing points can then be determined. In particular, it can be provided that the following three formulas are used for generating the columns of the rotation matrix:
rx = K^-1 · Vx / ||K^-1 · Vx||,   ry = K^-1 · Vy / ||K^-1 · Vy||,   rz = rx × ry
The scaling factor s can be neglected in the determination by normalizing to unit vectors in the directions rx, ry and rz.

According to an advantageous form of configuration of the method, the motor vehicle is provided as a pickup vehicle and the at least one correction value is determined by capturing a loading platform of the pickup vehicle as a calibration object. Thereby, the at least one correction value can be determined in particular without an additional component due to the utilization of the loading platform of the pickup vehicle. In particular, the pickup vehicle has a cockpit or a driver's cab and the loading platform is formed separately therefrom. The driver's cab and the loading platform are commonly disposed on a chassis of the motor vehicle. In particular in a pickup vehicle, it can for example be possible that a trailer is attached. In particular, an orientation and a trajectory of the trailer relative to the motor vehicle can then be reliably determined by means of the correction of the orientation error. In particular, it can thereby be allowed that critical situations in the road traffic are prevented, which could for example arise due to swerving movements of the trailer relative to the motor vehicle. In particular, a hitch angle between the motor vehicle and the trailer can thereby be reliably determined. The driver assistance system can then in particular be formed as a hitch angle assistance system, which can also be referred to as hitch angle detection system.
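The construction of the rotation columns rx, ry, rz from the two vanishing points described above can be sketched in pure Python, assuming a simple pinhole intrinsic matrix with focal length f and principal point (cx, cy); function names are illustrative:

```python
import math

def normalize(v):
    """Scale a 3-vector to unit length."""
    n = math.sqrt(v[0] ** 2 + v[1] ** 2 + v[2] ** 2)
    return (v[0] / n, v[1] / n, v[2] / n)

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def rotation_from_vanishing_points(vx, vy, f, cx, cy):
    """Columns rx, ry, rz of the rotation matrix from the two
    vanishing points. Applying K^-1 to the homogeneous vanishing
    point (v[0], v[1], 1) and normalizing makes the unknown scale
    factor s drop out, as described in the text."""
    rx = normalize(((vx[0] - cx) / f, (vx[1] - cy) / f, 1.0))
    ry = normalize(((vy[0] - cx) / f, (vy[1] - cy) / f, 1.0))
    rz = cross(rx, ry)
    return rx, ry, rz
```

For a full intrinsic matrix with skew or distinct focal lengths, K^-1 would be applied as a general 3x3 inverse instead.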
It has further proven advantageous if at least two, in particular three correction values are determined, wherein a correction value for a pitch angle of the camera relative to the motor vehicle is determined and/or a correction value of the roll angle of the camera relative to the motor vehicle and/or a correction value for a yaw angle of the camera relative to the motor vehicle are determined. Thereby, the correction of the orientation error, which is in particular characterized by the pitch angle, the roll angle and the yaw angle, can be determined. Thereby, every possible relative position of the trailer to the motor vehicle in the environment in all of the spatial directions can in particular be reliably determined. In particular, it can then be provided that a single correction with respect to the pitch angle, the roll angle and the yaw angle is performed. Thereby, the orientation error of the camera at the motor vehicle can be reliably determined such that it can in particular be corrected.
In a further advantageous form of configuration, the image can be captured as a fish-eye image by means of the camera and the captured fish-eye image with the captured calibration object can be adapted in perspective for determining the at least one correction value. Since in particular the reversing cameras are often provided as fish-eye cameras with a fish-eye lens to in particular be able to capture a large capturing range of the environment of the motor vehicle, an improvement of the evaluation of the correction value is allowed by the adaptation of the fish-eye image. Thus, the correction value of the orientation error can nevertheless be reliably determined with a large capturing range by means of the fish-eye camera. In particular, it can be provided that this can be performed by means of an image processing program of the camera and/or the electronic computing device.
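The perspective adaptation of the fish-eye image can be sketched as follows, assuming an equidistant fish-eye model (an assumption; the actual lens model of the camera is not specified in the text):

```python
import math

def undistort_point(x, y, f, cx, cy):
    """Map a fish-eye image point to its rectilinear (pinhole)
    position. Assumes an equidistant fish-eye model r_d = f * theta;
    the actual lens model of the camera is not specified here."""
    dx, dy = x - cx, y - cy
    r_d = math.hypot(dx, dy)
    if r_d == 0.0:
        return x, y
    theta = r_d / f            # incidence angle under the equidistant model
    r_u = f * math.tan(theta)  # radius of the same ray in a pinhole image
    scale = r_u / r_d
    return cx + dx * scale, cy + dy * scale
```

Applying this mapping to the detected calibration points (rather than to every pixel) would suffice for determining the calibration lines.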
It is also advantageous if a display image of the environment corrected depending on the at least one correction value is displayed by means of a display device of the motor vehicle. Thereby, the corrected display image can in particular be provided for example for a driver of the motor vehicle or for a user of the driver assistance system. Thus, a more reliable display of the environment of the motor vehicle is allowed. In particular, it can be provided that the driver assistance system can be provided as a hitch angle assistance system and thereby the hitch angle between a trailer of the motor vehicle and the motor vehicle can be reliably displayed. Further, a trajectory of the trailer relative to the motor vehicle can for example also be determined and displayed by means of the hitch angle assistance system. By the display of the corrected display image, a critical situation in the road traffic can thus be reliably prevented since both an improved determination of the correction value and an improved display of the image for a driver of the motor vehicle can be performed.
Preferably, it is provided that the fish-eye image is first captured and the fish-eye image is corrected with respect to the distortion. The corrected fish-eye image is then corrected with respect to the orientation error and can be displayed on the display device as the display image.
In particular, it can be provided that the hitch angle can be regarded as a total of the roll angle and the pitch angle and the yaw angle of the trailer relative to the motor vehicle. In particular, the hitch angle is also dependent on further factors such as for example an exterior shape of the motor vehicle and the trailer. In particular, a critical angle between the motor vehicle and the trailer can then be determined by the hitch angle assistance system, wherein the motor vehicle and the trailer would contact each other at the critical angle.
It has further proven advantageous if the first and the third calibration line are defined as parallel to each other by means of the electronic computing device and the second and the fourth calibration line are defined as parallel to each other by means of the electronic computing device. In particular in a Euclidean space, parallel lines, in other words in particular the first and the third calibration line and the second and the fourth calibration line, meet each other in a finite point of a projected world, which then result in the corresponding vanishing points. In particular, the vertical edges of the loading platform in vehicle longitudinal direction and the vertical edges of the loading platform in vehicle transverse direction can then for example be correspondingly defined as respectively parallel. In particular, presetting or evaluating the calibration lines can be regarded as defining. In other words, even if the real calibration object does not have parallel edges, at least the determined calibration lines are assumed as respectively parallel to each other.
In particular, the corresponding vanishing points in vehicle longitudinal direction and in vehicle transverse direction can then be determined.
Further, it has proven advantageous if the real calibration object at the motor vehicle is formed with a substantially rectangular base surface and the determined calibration points are selected as a respective corner of the rectangular base surface. In other words, a real calibration object at the motor vehicle with a cornered base surface is preset as the calibration object in particular for capturing with the camera. The calibration object can also be polygonally formed. The electronic computing device is formed to determine as the calibration points corners respectively located one after the other viewed along the longitudinal axis of the motor vehicle, in other words in alignment. In particular, the calibration points can then for example be captured as corners of the loading platform by means of different image processing techniques. For example, the calibration points can be captured by means of a Hough transformation or by means of an edge pixel calculation mask or by means of a histogram of oriented gradients (HOG). Similarly, it can be provided that the intrinsic and extrinsic parameters of the loading platform, in particular the size dimensions, are for example known.
In a further advantageous form of configuration, a display image of the environment can be displayed by means of a display device of the motor vehicle, wherein the display image is displayed in a bird's eye perspective. Thereby, it is allowed that in particular a loading platform, which is captured as the calibration object, can be advantageously displayed for a driver. In particular, a top view can then be displayed as the bird's eye perspective. In particular, if the driver assistance system should for example be provided as a hitch angle assistance system, thus, the hitch angle and the corresponding trajectories of the trailer can thereby be reliably displayed in the bird's eye perspective. This allows a display intuitively perceptible for the user.
Further, it has proven advantageous if a height of the camera relative to the motor vehicle is determined by means of the electronic computing device for generating the display image in the bird's eye perspective. In particular, orientation errors due to the height or representation errors due to the height of the camera relative to the motor vehicle can then for example be taken into account. In particular, this contributes to the fact that for example with an attached trailer, the driver of the motor vehicle can better estimate the situation of the trailer relative to the motor vehicle. Thereby, a hitch angle between the motor vehicle and the trailer can in particular be displayed in improved manner.
Further, it can advantageously be provided that the captured calibration object in the image is corrected with respect to the orientation error of the camera for determining the height of the camera. In particular, the calibration object is corrected to the corrected calibration object. For example, if it should be recognized that the loading platform as the calibration object is not displayed rectangularly within the bird's eye perspective, thus, a corresponding adaptation with respect to the orientation error can be performed. In particular, this can be performed by adapting the captured calibration object to a rectangular shape. Further, it can be provided that the orientation error is corrected by machine learning of the electronic computing device. In particular, an adaptation to a rectangular shape can then be simply performed by machine learning. In particular, this can be performed in addition to the determination of the correction value.

It has further proven advantageous if a stored reference calibration object is compared to the corrected calibration object with respect to the respective size for determining the relative height of the camera and the relative height is determined depending on the comparison. Thereto, the reference calibration object can in particular for example be preset by an actual size of a loading platform of the motor vehicle. It can for example be stored on a storage device of the electronic computing device. In particular, a deviation of the height of the camera can then for example be determined by the formulas:
Figure imgf000009_0001
Therein, dH corresponds to the deviation of the camera height. L and W correspond to the expected length L and to the expected width W of the calibration object. l and w correspond to the corrected length l and the corrected width w in the corrected display image. Thus, the relative height of the camera relative to the motor vehicle is reliably determinable. In particular, this can be performed without further image capture or complicated computing methods. This results in a reliable determination of the hitch angle and the trajectory of a trailer attached to the motor vehicle, in particular with a form of configuration of the driver assistance system as a hitch angle assistance system.
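A hedged sketch of the height determination from the size comparison (the exact formulas are not reproduced above; the sketch assumes that the bird's-eye view is rendered for a reference height h_ref, so that the apparent size of a ground-plane object scales with h_ref divided by the real height):

```python
def estimate_camera_height(h_ref, length_ref, width_ref, length_corr, width_corr):
    """Estimate the real camera height from the size of the corrected
    calibration object in the bird's-eye display image.
    Assumption: apparent ground-plane size scales with h_ref / h_real,
    so h_real = h_ref * L / l; the length- and width-based estimates
    are averaged to reduce measurement noise."""
    h_from_length = h_ref * length_ref / length_corr
    h_from_width = h_ref * width_ref / width_corr
    return 0.5 * (h_from_length + h_from_width)

def height_deviation(h_ref, length_ref, width_ref, length_corr, width_corr):
    """Deviation dH between the estimated real height and h_ref."""
    return estimate_camera_height(h_ref, length_ref, width_ref,
                                  length_corr, width_corr) - h_ref
```

For example, if the corrected loading platform appears at half its expected length and width, the camera would sit at twice the reference height under this model.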
In a further advantageous form of configuration, it is provided that the determined relative height of the camera is taken into account in the display of the display image as a bird's eye perspective. In other words, the corresponding determined relative height is taken into account in the display such that the display image can be reliably displayed.
A further aspect of the invention relates to an electronic computing device, which is formed for performing the method according to the previous aspect. In particular, it can be provided that the electronic computing device comprises a computer program product with program code means. Therein, the computer program product can be stored on a computer-readable medium to perform the method according to the preceding aspect when the computer program product is run on a processor of the electronic computing device.
A still further aspect of the invention relates to a driver assistance system with a camera and with an electronic computing device according to the preceding aspect. In particular, the driver assistance system can be formed as a hitch angle assistance system.
Similarly, the invention relates to a motor vehicle with a driver assistance system. The motor vehicle is in particular formed as a passenger car, in particular as a pickup vehicle.
An independent aspect of the invention relates to a method for determining at least a real height of a camera relative to a reference height of a reference camera for correcting a height error for a motor vehicle by means of an electronic computing device of a driver assistance system. An image of an environment of the motor vehicle with a calibration object of the motor vehicle is captured by means of the camera. The calibration object in the captured image is recognized by means of the electronic computing device. The image is generated as a bird's eye perspective. The captured calibration object in the bird's eye perspective is corrected with respect to an orientation error of the calibration object in the captured image to a corrected calibration object. A size of the corrected calibration object is compared to a size of a reference calibration object and the real height of the camera is determined depending on the comparison. Thereby, an improved display of the image is allowed for example on a display device of the motor vehicle. Furthermore, a hitch angle between the motor vehicle and the trailer can for example be performed in improved manner by the correction and an image processing based on the correction. Furthermore, trajectories, for example of a trailer, can for example be displayed in improved manner.
In particular, a deviation of the height of the camera can then for example be determined by the formulas:
Figure imgf000011_0001
Therein, dH corresponds to the deviation of the camera height. L and W correspond to the expected length L and the expected width W of the calibration object. l and w correspond to the corrected length l and the corrected width w in the corrected display image. Thus, the relative height of the camera relative to the motor vehicle is reliably determinable. In particular, this can be performed without further image capture or complicated computing methods. This results in a reliable determination of the hitch angle and the trajectory of a trailer attached to the motor vehicle, in particular with a form of configuration of the driver assistance system as a hitch angle assistance system.
In a further advantageous form of configuration, it is provided that the determined relative height of the camera is taken into account in the display of the display image as a bird's eye perspective. In other words, the corresponding determined relative height is taken into account in the display such that the display image can be reliably displayed.
Further, it can be provided that the orientation error is corrected by machine learning of the electronic computing device. In particular, an adaptation to a rectangular shape can then be simply performed by machine learning. In particular, this can be performed in addition to the determination of the correction value.
Preferably, a region of interest in which the calibration object is located is specified in the captured image to capture the calibration object. In particular, the region of interest is a partial section of the image. Therefore it is possible that only the region of interest is evaluated during the evaluation of the image, whereby the computing capacity of the electronic computing device can be saved during the evaluation, since the entire captured image does not have to be evaluated to capture the calibration object. For example, the region of interest can be specified in such a way that at least the loading platform of the motor vehicle and in particular the calibration points are captured. In addition, it can be provided that a central part of the calibration object is not part of the region of interest, whereby, for example, the calibration points are still present, but an area which is essentially located between the calibration points is not part of the region of interest. This can save even more computing capacity as a smaller portion of the image needs to be evaluated to capture the calibration object.
Preferably the region of interest is specified by stored parameters of the calibration object and/or the region of interest is generated by a tolerance range for the calibration object. In particular, the intrinsic parameters of the camera can be used as stored parameters.
These can, for example, be specified by computer-aided design (CAD) and then retrieved as stored parameters. In particular, it can be provided that a corresponding tolerance range is generated and specified, so that the calibration object can still be reliably detected even if the camera is misaligned. In particular, the tolerance range can be specified depending on the stored parameters and/or the CAD model. Thus, a computation-saving acquisition and evaluation of the calibration object can be carried out.
In an independent aspect, the correction value is additionally determined by homographic matrix decomposition. Alternatively, the correction value can be determined by homographic matrix decomposition alone. In particular, the loading platform of the motor vehicle can also be used here as a calibration object. Using this method, the camera no longer has to be arranged directly along a longitudinal axis of the motor vehicle; instead, the actual arrangement position of the camera can be taken into account by the method. In particular, it is assumed that the first calibration line and the third calibration line as well as the second calibration line and the fourth calibration line are parallel to each other in reality, wherein an angle θ between the first calibration line and the fourth calibration line within the image is also assumed to be known. In particular, line segments of the calibration lines are captured within the image. From a respective line segment (p1, p2) within the image, a respective ray (r1, r2) can be defined. By means of these rays, a plane can be generated, whose normal is then defined by
n = (r1 × r2) / |r1 × r2|
Four normal vectors n1, n2, n3 and n4 can be generated by the four calibration lines. The first normal vector n1 is assigned to the first calibration line, the second normal vector n2 is assigned to the second calibration line, the third normal vector n3 is assigned to the third calibration line and the fourth normal vector n4 is assigned to the fourth calibration line.
In reality, i.e. not in the image, the first calibration line and the third calibration line lie in a corresponding plane which is aligned along an axis by. This means that the axis by must be orthogonal to the first calibration line and to the third calibration line. This can be done by using the vector product
vy = n1 × n3
Since the vector product is an antisymmetric operator, the direction vy depends on the order of n1 and n3. Since in particular the loading platform must lie in a half-space corresponding to the negative z coordinate, it can be assumed that the dot product cx · uy is negative, whereby
Figure imgf000013_0001
can be determined. Then, two unit vectors x1 and x2 can be sought which are perpendicular to n2 and n4, respectively. Furthermore, for symmetry reasons it can be assumed that x2 − x1 is perpendicular to n1. The angle θ is the angle between x1 and x2, where
Figure imgf000013_0004
The following formulas then result:
Figure imgf000013_0005
As well as the formulas:
Figure imgf000013_0006
Furthermore it is known:
Figure imgf000013_0002
Figure imgf000014_0001
From this, the result is:
Figure imgf000014_0006
Figure imgf000014_0002
The following formulas follow:
Figure imgf000014_0003
Under this assumption, it follows:
Figure imgf000014_0007
Figure imgf000014_0004
With:
Figure imgf000014_0005
Under the conditions
Figure imgf000014_0008
one arrives at a quadratic equation in the unknown x1z. There are two solutions, one with a positive x1z and one with a negative x1z; in particular, the negative x1z is chosen.
Therefore it can be determined:
Figure imgf000014_0009
and
Figure imgf000015_0001
Expressed in camera coordinates as the rotation matrix R:
Figure imgf000015_0002
The camera matrix with cx, cy, cz can be generated by transforming the rotation matrix. In particular, the backward transformation R^-1 = R^T can be used for this. The Euler angles can be extracted from the rotation matrix accordingly.
A further aspect of the invention relates to an electronic computing device, which is designed to perform the method according to the preceding aspect. In particular, the method is carried out by means of the electronic computing device.
Another aspect of the invention relates to a driver assistance system. In particular, the driver assistance system is designed for a motor vehicle, in particular for a pickup vehicle. In the case of an at least partially autonomous, in particular fully autonomous, driving operation of the motor vehicle, the driver assistance system can also be described as an electronic vehicle guidance system.
Advantageous forms of configuration of the method are to be regarded as advantageous forms of configuration of the electronic computing device as well as of the driver assistance system. Thereto, the electronic computing device as well as the driver assistance system comprise concrete features, which allow performing the method or an advantageous form of configuration thereof.
Further features of the invention are apparent from the claims, the figures and the description of figures. The features and feature combinations mentioned above in the description as well as the features and feature combinations mentioned below in the description of figures and/or shown in the figures alone are usable not only in the respectively specified combination, but also in other combinations or alone without departing from the scope of the invention. Thus, implementations are also to be considered as encompassed and disclosed by the invention, which are not explicitly shown in the figures and explained, but arise from and can be generated by the separated feature combinations from the explained implementations. Implementations and feature combinations are also to be considered as disclosed, which thus do not have all of the features of an originally formulated independent claim. Moreover, implementations and feature combinations are to be considered as disclosed, in particular by the implementations set out above, which extend beyond or deviate from the feature combinations set out in the relations of the claims.
Now, the invention is explained in more detail based on preferred embodiments as well as with reference to the attached drawings.
There show:
Fig. 1 a schematic top view of a vehicle/trailer combination with an embodiment of a driver assistance system;

Fig. 2 a schematic view of the method for determining a correction value;

Fig. 3 a schematic view of the method for determining a height of an embodiment of the camera;

Fig. 4 a schematic view of an embodiment of a calibration object;

Fig. 5 a schematic block diagram for determining the correction value;

Fig. 6 a schematic perspective view for determining a homographic matrix decomposition; and

Fig. 7 a schematic perspective view of an image with a region of interest.

In the figures, identical or functionally identical elements are provided with the same reference characters.
Fig. 1 schematically shows a motor vehicle 1 with an attached trailer 2, which constitute a vehicle/trailer combination 3 in the state coupled together, in a top view. The motor vehicle 1 is in particular formed as a passenger car. In particular, the motor vehicle 1 is provided as a pickup vehicle and comprises a loading platform 4. The motor vehicle 1 has a longitudinal axis L1 and the trailer 2 has a longitudinal axis L2. The trailer 2 is in particular coupled to the motor vehicle 1 via a tow coupling 5. Further, the motor vehicle 1 comprises a camera 6, by means of which an environment 7 of the motor vehicle 1 with a calibration object, which corresponds to the loading platform 4 in the present embodiment, can be captured. In particular, it can be provided that the camera 6 can capture at least the loading platform 4 and the environment 7 by means of an image processing program. The orientation error is in particular an error of the camera setup. In other words, the camera 6 can have been incorrectly set up/mounted, in particular with respect to the three motor vehicle axes (longitudinal axis L1 , transverse axis Q, vertical axis) for example in mounting the camera 6. Thus, the camera 6 can in particular have an actual pose, which deviates from a desired pose, in particular of a reference camera. This deviation is referred to as orientation error.
In particular, the camera 6 can be disposed at a rearwards directed stoplight/brake light, which can also be referred to as third stoplight/brake light. In particular, the camera 6 can be formed as a separate component to the stoplight or as an integral constituent of the stoplight.
Further, the motor vehicle 1 comprises a driver assistance system 8, which in particular comprises an electronic computing device 9. It is provided that an image B (Fig. 2) of the environment 7 of the motor vehicle 1 with the calibration object, in other words with the loading platform 4 in this embodiment, of the motor vehicle 1 is captured by means of the camera 6. The calibration object in the captured image B is detected by means of the electronic computing device 9. At least a first calibration point 10, a second calibration point 11, a third calibration point 12 and a fourth calibration point 13 of the calibration object are captured. A first calibration line K1 (Fig. 2), which extends through the first and the second calibration point 10, 11, a second calibration line K2, which extends through the second and the third calibration point 11, 12, a third calibration line K3, which extends through the third and the fourth calibration point 12, 13, and a fourth calibration line K4, which extends through the fourth and the first calibration point 13, 10, are determined. A virtual first position 14 of a first vanishing point Vy of the image B, which is formed as a virtual first point of intersection of the first and the third calibration line K1, K3, is determined. A virtual second position 15 of a second vanishing point Vx of the image B, which is formed as a virtual second point of intersection of the second calibration line K2 and the fourth calibration line K4, is determined. At least one correction value K (Fig. 2) is determined depending on the virtual first position 14 and the virtual second position 15.
It is provided that a correction of the orientation error of the camera 6 relative to the motor vehicle 1 can be performed by means of the electronic computing device 9 by means of the correction value K. In particular, a hitch angle a can be reliably and accurately determined after the correction.
In particular, it can be provided that the hitch angle a can be regarded as a combination of the roll angle, the pitch angle and the yaw angle of the trailer 2 relative to the motor vehicle 1. In particular, the hitch angle a is also dependent on further factors, such as for example the exterior shape of the motor vehicle 1 and the trailer 2. In particular, a critical angle between the motor vehicle 1 and the trailer 2 can then be determined by the hitch angle assistance system, wherein the motor vehicle 1 and the trailer 2 would contact each other at the critical angle.
Further, it is in particular provided that at least two, in particular three, correction values K are determined, wherein a correction value K for a pitch angle of the camera 6 relative to the motor vehicle 1 and/or a correction value K for a roll angle of the camera 6 relative to the motor vehicle 1 and/or a correction value K for a yaw angle of the camera 6 relative to the motor vehicle 1 are determined.
In particular, it can be provided that a corrected display image of the environment 7 depending on the at least one correction value K is displayed by means of a display device 16 of the motor vehicle 1. In particular, it can be provided that at least the hitch angle a can then be displayed on the display device 16.
Further, it is in particular provided that the calibration object is provided with a substantially rectangular base surface and the calibration points 10, 11, 12, 13 are captured as a respective corner of the rectangular base surface. In the present example, the loading platform 4 is in particular substantially rectangular and the corners of the loading platform 4 are selected as the calibration points 10, 11, 12, 13. A real calibration object at the motor vehicle 1 with a cornered base surface can be preset as the calibration object, in particular for capturing by the camera 6. The calibration object can also be polygonally formed. The electronic computing device 9 is formed to determine, as the calibration points 10, 11, 12, 13, corners respectively located one after the other viewed along the longitudinal axis L1 of the motor vehicle 1, in other words in alignment.
Fig. 2 schematically shows an embodiment of the method. In particular, the image B is captured as a fish-eye image by means of the camera 6 and the captured fish-eye image B with the calibration object is adapted in perspective for determining the at least one correction value K in step S1 . In particular, intrinsic parameter values IP are used as an input for processing in step S1 for the adaptation. The corrected image Bk is generated from the fish-eye image B.
In step S2, the calibration points 10, 11, 12, 13 are determined from the corrected image Bk, for example by a Hough transformation or by a histogram of oriented gradients.
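The line detection named for step S2 can be illustrated by a textbook Hough transform. The sketch below is a minimal stand-in, not the implementation described here: hypothetical edge points vote in a (rho, theta) accumulator and the best-supported line is returned.

```python
import numpy as np

def hough_lines(points, shape, n_theta=180, n_rho=200):
    """Vote edge points into a (rho, theta) accumulator; return the best line.

    Plain textbook Hough transform for straight lines, used here only to
    illustrate how the border of the loading platform could be found."""
    h, w = shape
    diag = np.hypot(h, w)                 # maximum possible |rho|
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_rho, n_theta), dtype=int)
    for x, y in points:
        # rho = x*cos(theta) + y*sin(theta) for every candidate angle
        rhos = x * np.cos(thetas) + y * np.sin(thetas)
        idx = np.round((rhos + diag) / (2 * diag) * (n_rho - 1)).astype(int)
        acc[idx, np.arange(n_theta)] += 1
    r, t = np.unravel_index(np.argmax(acc), acc.shape)
    rho = r / (n_rho - 1) * 2 * diag - diag
    return rho, thetas[t]

# Synthetic edge of a platform border: the horizontal line y = 50
pts = [(x, 50) for x in range(10, 90)]
rho, theta = hough_lines(pts, shape=(100, 100))
```

For the synthetic horizontal edge, the accumulator peak lies at theta near pi/2 and rho near 50, i.e. the line y = 50 is recovered up to the bin quantization.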
By means of the electronic computing device 9, the first and the third calibration line K1 , K3 are defined as parallel to each other. Similarly, the second calibration line K2 and the fourth calibration line K4 are defined as parallel to each other by the electronic computing device 9.
An image Bv shows the corrected image Bk, wherein the calibration lines K1 , K2, K3, K4 are extended such that the vanishing points Vx and Vy can be determined. In particular, the vanishing point Vy is situated along a vehicle transverse axis Q in a vehicle transverse direction. The vanishing point Vx is situated in the direction of the longitudinal axis L1 of the motor vehicle 1 .
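The extension of the calibration lines and the intersection at the vanishing points Vx, Vy can be sketched with homogeneous coordinates, where the line through two points and the intersection of two lines are both cross products. The corner coordinates below are hypothetical example values.

```python
import numpy as np

def line_through(p, q):
    """Homogeneous line through two image points (cross product of the points)."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def intersection(l1, l2):
    """Intersection of two homogeneous lines (cross product of the lines)."""
    v = np.cross(l1, l2)
    return v[:2] / v[2]          # assumes the lines are not parallel in the image

# Hypothetical image coordinates of the four calibration points 10, 11, 12, 13
p10, p11, p12, p13 = (100.0, 400.0), (300.0, 390.0), (320.0, 200.0), (80.0, 210.0)

K1 = line_through(p10, p11)      # first calibration line
K2 = line_through(p11, p12)      # second calibration line
K3 = line_through(p12, p13)      # third calibration line
K4 = line_through(p13, p10)      # fourth calibration line

Vy = intersection(K1, K3)        # first vanishing point
Vx = intersection(K2, K4)        # second vanishing point
```

Working in homogeneous coordinates keeps the computation a pair of cross products; only the final division assumes the lines actually meet at a finite image point.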
In step S4, the at least one correction value K, in particular the three correction values K, is then determined depending on the vanishing points Vx, Vy. These correction values K can then in turn be output and for example be used for processing for the display device 16.
In particular, the vanishing points Vx, Vy can be determined with the aid of the formula:
s · V = K · [rx ry rz t] · X
Therein, s can in particular correspond to an unknown scaling factor. Therein, K corresponds to the intrinsic matrix of the camera 6. The factors rx, ry and rz are the corresponding columns of the rotation matrix from the environmental and motor vehicle coordinates to the camera coordinates. The factor t is a translation vector from the environmental/motor vehicle coordinates to the camera coordinates. By means of the assumptions X = [1 0 0 0]^T and Y = [0 1 0 0]^T, the corresponding equations for the vanishing points Vx, Vy can then be determined.
In particular, it can be provided that the following three formulas are used for generating the columns of the rotation matrix:
rx = K^-1 · Vx / |K^-1 · Vx|
ry = K^-1 · Vy / |K^-1 · Vy|
rz = rx × ry
The scaling factor s can be neglected by determining unit vectors in the directions rx, ry and rz.
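The derivation above can be condensed into a short sketch: each vanishing point is back-projected with K^-1 and normalised, which removes the scaling factor s, and the third column follows from the cross product. The intrinsic matrix and the vanishing-point coordinates are hypothetical; an error-free camera is emulated by vanishing points lying numerically far out along the image axes, so the recovered rotation is close to the identity.

```python
import numpy as np

# Hypothetical intrinsic matrix (focal lengths and principal point)
Kmat = np.array([[800.0, 0.0, 640.0],
                 [0.0, 800.0, 480.0],
                 [0.0,   0.0,   1.0]])

def unit(v):
    return v / np.linalg.norm(v)

def rotation_from_vanishing_points(vx, vy, K):
    """Columns rx, ry, rz of the camera rotation from the two vanishing points.

    Normalising each back-projected direction K^-1 * v removes the unknown
    scaling factor s; the third column follows from the cross product."""
    Kinv = np.linalg.inv(K)
    rx = unit(Kinv @ np.array([vx[0], vx[1], 1.0]))
    ry = unit(Kinv @ np.array([vy[0], vy[1], 1.0]))
    rz = np.cross(rx, ry)
    return np.column_stack([rx, ry, rz])

# An error-free camera puts the vanishing points (numerically) at infinity
# along the image axes, so the recovered rotation is close to the identity.
R = rotation_from_vanishing_points((1e9, 480.0), (640.0, 1e9), Kmat)
```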
Fig. 3 shows the method for determining the height H of the camera 6 in a schematic view. In particular, it is provided that a display image of the environment 7 is displayed by means of the display device 16 of the motor vehicle 1 , wherein the display image is displayed in a bird's eye perspective. For generating the display image in the bird's eye perspective, the height H of the camera 6 relative to the motor vehicle 1 is determined by means of the electronic computing device 9.
The image B, which is in particular represented as a fish-eye image, is correspondingly corrected in step S1 as in Fig. 2. In step S5, the generation of the display image in the bird's eye perspective is effected depending on the corrected image Bk. Thereto, corresponding bird's eye perspective parameters VP can in particular be used for determining the bird's eye perspective. In particular, the captured calibration object in the image B is corrected with respect to the orientation error of the camera 6 for determining the height H of the camera 6. This is in particular effected in step S6. In particular, it can be provided that this correction is performed by machine learning of the electronic computing device 9. Due to the incorrect orientation of the camera 6, the loading platform 4 does not appear rectangular in the image B. These deviations can be defined in an error vector with the formula:
Figure imgf000020_0002
By means of the machine learning, the corresponding errors can then be minimized and in particular the correction values K can be taken into account in step S6. Further, reference calibration object sizes 4P are taken into account in the determination of the height H. In particular, the reference calibration object 17 (Fig. 4) is an expected object, in particular an expected object size. In step S7, the height H relative to the motor vehicle 1 is then determined and output in particular depending on the reference calibration object 17 and the reference calibration parameters 4P.
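The error vector itself is reproduced only as an image in the source; one natural choice, shown below as an assumption, penalises unequal opposite sides and unequal diagonals of the detected quadrilateral and vanishes for a perfect rectangle.

```python
import numpy as np

def rectangle_error(corners):
    """Deviation of a quadrilateral from a rectangle: mismatch of opposite
    side lengths plus mismatch of the two diagonals (all zero for a perfect
    rectangle). Corners are ordered around the quadrilateral.

    This is an assumed form of the error vector, not the one shown in the
    source's equation image."""
    p = np.asarray(corners, dtype=float)
    def d(a, b):
        return np.linalg.norm(p[a] - p[b])
    return np.array([d(0, 1) - d(3, 2),    # first vs third side
                     d(1, 2) - d(0, 3),    # second vs fourth side
                     d(0, 2) - d(1, 3)])   # diagonal mismatch

# A perfect rectangle yields a zero error vector
err = rectangle_error([(0, 0), (4, 0), (4, 2), (0, 2)])
```

An optimizer (or the machine learning mentioned in the text) would then adjust the correction values until such an error vector is minimized.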
Fig. 4 schematically shows the determination of the height H. The reference calibration object 17 is compared to the calibration object 18 corrected from Fig. 3. By the law of the equal triangles, the deviation of the camera 6 in the height H results by:
Figure imgf000021_0001
Therein, dH in particular corresponds to the deviation of the camera 6 in the height H. Therein, L corresponds to the length L of the reference calibration object 17 and W corresponds to the width W of the reference calibration object 17. l corresponds to the length l of the corrected calibration object 18 and w corresponds to the width w of the corrected calibration object 18. Depending on this data, the height H can then respectively be determined. In particular, a stored reference calibration object 17 can be compared to the corrected calibration object 18 with respect to the respective size for determining the relative height H of the camera 6 and the relative height can be determined depending on the comparison.
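Since the exact formula is likewise reproduced only as an image, the sketch below assumes the plain pinhole relation suggested by the equal-triangles argument: the apparent size of the platform scales inversely with the camera height, so the ratio of expected size (L, W) to corrected size (l, w) scales the nominal height. All numeric values are hypothetical.

```python
def height_deviation(h_nominal, L, W, l, w):
    """Height deviation dH under the assumed pinhole relation: the measured
    platform size is inversely proportional to the camera height, so the
    ratio of expected to corrected size scales the nominal height.
    Averaging the length and width ratios is an additional assumption."""
    scale = 0.5 * (L / l + W / w)
    return h_nominal * (scale - 1.0)   # dH = estimated height - nominal height

# Measured size equal to the expected size -> no height deviation
dH = height_deviation(h_nominal=1.5, L=2.0, W=1.0, l=2.0, w=1.0)
```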
Further, it can be provided that the determined relative height H of the camera 6 is taken into account in the display of the display image as a bird's eye perspective.
Fig. 5 shows a schematic block diagram of a design of the method. According to this Fig. 5, the correction value K is formed by means of a homographic matrix decomposition 23.
The image B and the intrinsic parameters of camera 6 are fed to step S1. An overhead view 19 from step S1 is displayed. A region-of-interest extraction 20 can be performed. From this in turn, step S2 in particular can be executed. A rotation solver 21 or a homographic estimation can be performed after step S2. For this purpose, mechanical data 22 for example of the loading platform 4 can be used. A matrix decomposition 23 is performed with a Z-position solver 24. Estimated extrinsic parameters 25, a rotation deviation from the rotation solver 21 and a position deviation from the Z-position solver are fed to a calibrator 26 for generating an extrinsic correction. In the step S1 the intrinsic matrix can be generated:
K = [[fx, 0, u0], [0, fy, v0], [0, 0, 1]]

with the focal lengths fx, fy and the principal point (u0, v0).
Fig. 6 schematically shows, in a perspective view, the generation of the correction value K by means of the homographic matrix decomposition 23. In particular, the loading platform 4 of the motor vehicle 1 can also be used here as the calibration object 18. With this method, the camera 6 no longer has to be arranged directly along the longitudinal axis L1 of the motor vehicle 1; instead, the actual arrangement position of the camera 6 can be taken into account accordingly. In particular, it is assumed that the first calibration line K1 and the third calibration line K3 as well as the second calibration line K2 and the fourth calibration line K4 are parallel to each other in reality, wherein an angle θ between the first calibration line K1 and the fourth calibration line K4 within the image B is also assumed to be known. In particular, line segments of the calibration lines K1, K2, K3, K4 are recorded within the image B. From a respective line segment (p1, p2) within the image B, a respective ray (r1, r2) can be defined. By means of these rays, a plane 27, 28, 29, 30 can be generated, whose normal is then defined by:
n = (r1 × r2) / |r1 × r2|
Four normal vectors n1, n2, n3 and n4 can be generated by the four calibration lines K1, K2, K3 and K4. The first normal vector n1 is assigned to the first calibration line K1, the second normal vector n2 is assigned to the second calibration line K2, the third normal vector n3 is assigned to the third calibration line K3 and the fourth normal vector n4 is assigned to the fourth calibration line K4.
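The back-projection of a line segment into a plane normal can be sketched as follows; the intrinsic matrix and pixel coordinates are hypothetical, and each normal is the normalised cross product of the viewing rays through the segment end points.

```python
import numpy as np

Kmat = np.array([[800.0, 0.0, 640.0],   # hypothetical intrinsic matrix
                 [0.0, 800.0, 480.0],
                 [0.0,   0.0,   1.0]])
Kinv = np.linalg.inv(Kmat)

def ray(pixel):
    """Viewing ray through the camera centre for an image point (x, y)."""
    return Kinv @ np.array([pixel[0], pixel[1], 1.0])

def plane_normal(p1, p2):
    """Unit normal of the plane spanned by the rays of the segment (p1, p2)."""
    n = np.cross(ray(p1), ray(p2))
    return n / np.linalg.norm(n)

# Hypothetical end points of a segment of the first calibration line K1
n1 = plane_normal((100.0, 400.0), (300.0, 390.0))
```

By construction, the resulting unit normal is perpendicular to both viewing rays, which is exactly the property the derivation relies on.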
In reality, i.e. not in the image B, the first calibration line K1 and the third calibration line K3 lie in a corresponding plane which is aligned along an axis by. This means that the axis by must be orthogonal to the first calibration line K1 and to the third calibration line K3. This can be done by using the vector product
vy = n1 × n3
Since the vector product is an antisymmetric operator, the direction vy depends on the order of n1 and n3. Since in particular the loading platform 4 must lie in a half-space corresponding to the negative z coordinate, it can be assumed that the dot product cx · uy is negative, whereby
Figure imgf000023_0001
can be determined. Then, two unit vectors x1 and x2 can be sought which are perpendicular to n2 and n4, respectively. Furthermore, for symmetry reasons it can be assumed that x2 − x1 is perpendicular to n1. The angle θ is the angle between x1 and x2, where
Figure imgf000023_0003
The following formulas then result:
Figure imgf000023_0004
As well as the formulas:
Figure imgf000023_0005
Furthermore it is known:
Figure imgf000023_0006
From this it follows:
Figure imgf000023_0007
Figure imgf000024_0005
The following formulas result:
Figure imgf000024_0001
Under this condition, it follows:
Figure imgf000024_0006
Figure imgf000024_0002
With:
Figure imgf000024_0007
Under the condition
Figure imgf000024_0003
one arrives at a quadratic equation in the unknown x1z. There are two solutions, one with a positive x1z and one with a negative x1z; in particular, the negative x1z is chosen.
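Choosing the negative solution of the quadratic can be sketched with the standard closed form; the coefficients below are hypothetical and merely produce one positive and one negative root.

```python
import math

def negative_root(a, b, c):
    """Solve a*x**2 + b*x + c = 0 and return the negative solution,
    mirroring the sign choice made for x1z in the text."""
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        raise ValueError("no real solution")
    roots = [(-b + math.sqrt(disc)) / (2.0 * a),
             (-b - math.sqrt(disc)) / (2.0 * a)]
    negatives = [r for r in roots if r < 0.0]
    if len(negatives) != 1:
        raise ValueError("expected exactly one negative root")
    return negatives[0]

# Hypothetical coefficients with roots 3 and -2; the negative one is chosen
x1z = negative_root(1.0, -1.0, -6.0)
```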
Therefore it can be determined:
and
Figure imgf000024_0008
In the camera coordinates expressed as rotation matrix R:
Figure imgf000024_0004
The camera matrix with cx, cy, cz can be generated by transforming the rotation matrix. In particular, the backward transformation R^-1 = R^T can be used for this. The Euler angles can be extracted from the rotation matrix accordingly.
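The transpose-as-inverse property and the extraction of Euler angles can be sketched as follows; the source does not state which Euler convention is used, so the Z-Y-X convention here is an assumption, demonstrated on a pure yaw rotation.

```python
import numpy as np

def euler_zyx(R):
    """Yaw, pitch, roll from a rotation matrix in the Z-Y-X convention
    (one plausible choice; the source does not name a convention)."""
    pitch = -np.arcsin(R[2, 0])
    roll = np.arctan2(R[2, 1], R[2, 2])
    yaw = np.arctan2(R[1, 0], R[0, 0])
    return yaw, pitch, roll

def rotz(angle):
    """Rotation about the z axis, used to build a test matrix."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

R = rotz(0.3)
yaw, pitch, roll = euler_zyx(R)
# For a rotation matrix, the backward transformation is simply the transpose
Rinv = R.T
```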
Fig. 7 shows, in a schematic perspective view, an image B. To capture the calibration object 18 in the captured image B, a region of interest ROI can be specified in which the calibration object 18 is located. In particular, the region of interest ROI is a partial section of the image B. This makes it possible that only the region of interest ROI is evaluated when the image B is evaluated, whereby computing capacity of the electronic computing device 9 can be saved, since the entire captured image B does not have to be evaluated to capture the calibration object 18. For example, the region of interest ROI can be specified in such a way that at least the loading platform 4 of the motor vehicle 1 and in particular the calibration points 10, 11, 12, 13 can be captured. Furthermore, it may additionally be provided that a central part of the calibration object 18 is not part of the region of interest ROI, whereby the calibration points 10, 11, 12, 13, for example, are still present, but an area located essentially between the calibration points 10, 11, 12, 13 is not part of the region of interest ROI. This can save even more computing capacity, as a smaller part of the image B needs to be evaluated to capture the calibration object 18.
It may also be provided that the region of interest ROI is specified by stored parameters of the calibration object 18 and/or the region of interest ROI is generated by a tolerance range T for the calibration object 18. In particular, the intrinsic parameters of the camera 6 can be used as stored parameters, for example. These can, for example, be specified by computer-aided design (CAD) and then retrieved as stored parameters. In particular, it can be provided that a corresponding tolerance range T is generated and specified, so that the calibration object 18 can still be reliably detected even if the camera 6 is misaligned. The tolerance range T can be specified in particular depending on the stored parameters and/or the CAD model. This allows a computation-saving acquisition and evaluation of the calibration object 18.
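The tolerance-enlarged region of interest can be sketched as a simple axis-aligned crop; the expected corner positions and the tolerance value are hypothetical.

```python
import numpy as np

def extract_roi(image, expected_corners, tolerance):
    """Axis-aligned crop around the expected calibration points, enlarged by
    a tolerance margin so a misaligned camera still sees the object."""
    xs = [c[0] for c in expected_corners]
    ys = [c[1] for c in expected_corners]
    h, w = image.shape[:2]
    x0, x1 = max(0, min(xs) - tolerance), min(w, max(xs) + tolerance)
    y0, y1 = max(0, min(ys) - tolerance), min(h, max(ys) + tolerance)
    return image[y0:y1, x0:x1]

# Hypothetical expected corner positions of the loading platform in the image
img = np.zeros((480, 640), dtype=np.uint8)
roi = extract_roi(img, [(100, 400), (300, 390), (320, 200), (80, 210)], tolerance=20)
```

Only this crop would then be passed to the corner and line detection, which is where the computing-capacity saving described above comes from.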

Claims
1. Method for determining at least one correction value (K) for correcting an orientation error of a camera (6) for a motor vehicle (1) by means of an electronic computing device (9) of a driver assistance system (8), including the steps of:
- capturing an image (B) of an environment (7) of the motor vehicle (1) with a calibration object (18) of the motor vehicle (1) by means of the camera (6);
- recognizing the calibration object (18) in the captured image (B) by means of the electronic computing device (9);
- determining at least a first calibration point (10), a second calibration point (11), a third calibration point (12) and a fourth calibration point (13) of the calibration object (18) depending on an exterior shape of the calibration object (18);
- determining a first calibration line (K1), which extends through the first and the second calibration point (10, 11), a second calibration line (K2), which extends through the second and the third calibration point (11, 12), a third calibration line (K3), which extends through the third and the fourth calibration point (12, 13), and a fourth calibration line (K4), which extends through the fourth and the first calibration point (10, 13);
- determining a virtual first position (14) of a first vanishing point (Vy) of the image (B), which is formed as a virtual first point of intersection of the first and the third calibration line (K1, K3);
- determining a virtual second position (15) of a second vanishing point (Vx) of the image (B), which is formed as a virtual second point of intersection of the second and the fourth calibration line (K2, K4); and
- determining the at least one correction value (K) depending on the virtual first position (14) and the virtual second position (15).
2. Method according to claim 1,
characterized in that
the motor vehicle (1) is provided as a pickup vehicle and the at least one correction value (K) is determined by capturing a loading platform (4) of the pickup vehicle as the calibration object (18).

3. Method according to claim 1 or 2,
characterized in that
at least two, in particular three, correction values (K) are determined, wherein a correction value (K) for a pitch angle of the camera (6) relative to the motor vehicle (1) is determined and/or a correction value (K) for a roll angle of the camera (6) relative to the motor vehicle (1) and/or a correction value (K) for a yaw angle of the camera (6) relative to the motor vehicle (1) are determined.
4. Method according to any one of the preceding claims,
characterized in that
the image (B) is captured as a fish-eye image by means of the camera (6) and the captured fish-eye image with the captured calibration object (18) is adapted in perspective for determining the at least one correction value (K). (S1)
5. Method according to any one of the preceding claims,
characterized in that
a corrected display image of the environment (7) depending on the at least one correction value (K) is displayed by means of a display device (16) of the motor vehicle (1).

6. Method according to any one of the preceding claims,
characterized in that
the first and the third calibration line (K1, K3) are defined as parallel to each other by means of the electronic computing device (9) and the second and the fourth calibration line (K2, K4) are defined as parallel to each other by means of the electronic computing device (9).
7. Method according to any one of the preceding claims,
characterized in that the real calibration object (18) at the motor vehicle (1) is formed with a substantially rectangular base surface and the calibration points (10, 11, 12, 13) are selected as a respective corner of the rectangular base surface.

8. Method according to any one of the preceding claims,
characterized in that
a display image of the environment (7) is displayed by means of a display device (16) of the motor vehicle (1), wherein the display image is displayed in a bird's eye perspective.
9. Method according to claim 8,
characterized in that
for generating the display image in the bird's eye perspective, a height (H) of the camera (6) relative to the motor vehicle (1), in particular to the calibration object (18), is determined by means of the electronic computing device (9).
10. Method according to claim 9,
characterized in that
for determining the height (H) of the camera (6), the captured calibration object (18) in the image (B) is corrected with respect to the orientation error of the camera (6).
11. Method according to claim 10,
characterized in that
the orientation error is corrected by machine learning of the electronic computing device (9).
12. Method according to any one of claims 10 to 11,
characterized in that
for determining the relative height (H) of the camera (6), a stored reference calibration object (17) is compared to the corrected calibration object (18) with respect to the respective size and the relative height (H) is determined depending on the comparison.
13. Method according to any one of claims 9 to 12,
characterized in that the determined relative height (H) of the camera (6) is taken into account in the display of the display image as a bird's eye perspective.
14. Electronic computing device (9), which is formed for performing the method according to any one of claims 1 to 13.
15. Driver assistance system (8) with a camera (6) and with an electronic computing device (9) according to claim 14.
PCT/EP2020/059336 2019-04-15 2020-04-02 Method for determining at least one correction value for correcting an orientation error of a camera for a motor vehicle, electronic computing device as well as driver assistance system WO2020212148A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102019109881.5 2019-04-15
DE102019109881.5A DE102019109881A1 (en) 2019-04-15 2019-04-15 Method for determining at least one correction value for correcting an orientation error of a camera for a motor vehicle, electronic computing device and driver assistance system

Publications (1)

Publication Number Publication Date
WO2020212148A1 true WO2020212148A1 (en) 2020-10-22

Family

ID=70295088

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2020/059336 WO2020212148A1 (en) 2019-04-15 2020-04-02 Method for determining at least one correction value for correcting an orientation error of a camera for a motor vehicle, electronic computing device as well as driver assistance system

Country Status (2)

Country Link
DE (1) DE102019109881A1 (en)
WO (1) WO2020212148A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115063488A (en) * 2022-05-18 2022-09-16 东风汽车集团股份有限公司 Digital outside rearview mirror system marking calibration method and system

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070299596A1 (en) * 2004-10-01 2007-12-27 Robert Bosch Gmbh Method for Detecting an Optical Structure
US7949486B2 (en) 2005-10-28 2011-05-24 Hi-Key Limited Method and apparatus for calibrating an image capturing device, and a method and apparatus for outputting image frames from sequentially captured image frames with compensation for image capture device offset
EP3174007A1 (en) * 2015-11-30 2017-05-31 Delphi Technologies, Inc. Method for calibrating the orientation of a camera mounted to a vehicle

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SHIGANG LI ET AL: "Easy Calibration of a Blind-Spot-Free Fisheye Camera System Using a Scene of a Parking Space", IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, IEEE, PISCATAWAY, NJ, USA, vol. 12, no. 1, 1 March 2011 (2011-03-01), pages 232 - 242, XP011348841, ISSN: 1524-9050, DOI: 10.1109/TITS.2010.2085435 *

Also Published As

Publication number Publication date
DE102019109881A1 (en) 2020-10-15

Similar Documents

Publication Publication Date Title
EP2009590B1 (en) Drive assistance device
EP2530647A1 (en) Method of calibrating a vehicle vision system and vehicle vision system
EP3678096B1 (en) Method for calculating a tow hitch position
CN108367714B (en) Filling in areas of peripheral vision obscured by mirrors or other vehicle components
JP4803449B2 (en) On-vehicle camera calibration device, calibration method, and vehicle production method using this calibration method
US7006667B2 (en) Apparatus and method for detecting road white line for automotive vehicle
EP2541498A1 (en) Method of determining extrinsic parameters of a vehicle vision system and vehicle vision system
US11288833B2 (en) Distance estimation apparatus and operating method thereof
EP3671643A1 (en) Method and apparatus for calibrating the extrinsic parameter of an image sensor
JP7270499B2 (en) Abnormality detection device, abnormality detection method, posture estimation device, and mobile body control system
US20170259830A1 (en) Moving amount derivation apparatus
US11880993B2 (en) Image processing device, driving assistance system, image processing method, and program
US9892519B2 (en) Method for detecting an object in an environmental region of a motor vehicle, driver assistance system and motor vehicle
WO2020212148A1 (en) Method for determining at least one correction value for correcting an orientation error of a camera for a motor vehicle, electronic computing device as well as driver assistance system
KR102528004B1 (en) Apparatus and method for generating around view
CN113296516B (en) Robot control method for automatically lifting automobile
CN111881878A (en) Lane line identification method for look-around multiplexing
US20230237809A1 (en) Image processing device of person detection system
CN111738035A (en) Method, device and equipment for calculating yaw angle of vehicle
CN114299466A (en) Monocular camera-based vehicle attitude determination method and device and electronic equipment
JP7380443B2 (en) Partial image generation device and computer program for partial image generation
CN114078090A (en) Tractor aerial view splicing method and system based on imu pose correction
JP7311407B2 (en) Posture estimation device and posture estimation method
CN116630429B (en) Visual guiding and positioning method and device for docking of vehicle and box and electronic equipment
JP7311406B2 (en) Image processing device and image processing method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20719948

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20719948

Country of ref document: EP

Kind code of ref document: A1