WO2020239457A1 - Image acquisition system - Google Patents

Image acquisition system Download PDF

Info

Publication number
WO2020239457A1
Authority
WO
WIPO (PCT)
Prior art keywords
vehicle
camera
image
camera rotation
height
Prior art date
Application number
PCT/EP2020/063447
Other languages
French (fr)
Inventor
Paul Moran
Ciaran Hughes
Pantelis ERMILIOS
Leroy-Francisco PEREIRA
Original Assignee
Connaught Electronics Ltd.
Priority date
Filing date
Publication date
Application filed by Connaught Electronics Ltd. filed Critical Connaught Electronics Ltd.
Publication of WO2020239457A1 publication Critical patent/WO2020239457A1/en

Links

Classifications

    • G06T 3/14
    • G: PHYSICS
        • G06: COMPUTING; CALCULATING OR COUNTING
            • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
                • G06T 7/00: Image analysis
                    • G06T 7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
                • G06T 2207/00: Indexing scheme for image analysis or image enhancement
                    • G06T 2207/10: Image acquisition modality
                        • G06T 2207/10016: Video; Image sequence
                    • G06T 2207/30: Subject of image; Context of image processing
                        • G06T 2207/30244: Camera pose
                        • G06T 2207/30248: Vehicle exterior or interior
                            • G06T 2207/30252: Vehicle exterior; Vicinity of vehicle

Definitions

  • the present invention relates to an image acquisition system and a method performed in such a system.
  • vehicles are provided with driver assistance systems which rely on the use of an image acquisition system comprising one or more cameras that capture images of a portion of the environment surrounding the vehicle.
  • images from more than one camera can be stitched together to provide a top plan or bird's eye view image of the environment surrounding the vehicle.
  • These images may be displayed on a driver display or processed for feature extraction and/or recognition to assist in the autonomous or semi-autonomous control of the vehicle.
  • WO 2012/139636 (Ref: SIE0025) relates to a method for online calibration of a vehicle video system using image frames captured by a camera containing features on a road surface. At least two different features within an image frame are chosen as respective reference points.
  • a sequence of at least one further image frame is acquired by the camera, while locating within each new frame the features chosen as reference points.
  • a geometrical shape is obtained by joining the located reference points in each image and considering the respective trajectories covered by the reference points during a driving interval between images of the sequence.
  • a deviation of the resulting geometrical object from a parallelogram with corners defined by the reference points from at least two subsequent images is calculated with any measured deviation being used to define an offset correction of the camera.
  • WO 2012/139660 (Ref: SIE0028) relates to a method for online calibration of a vehicle video system using image frames captured by a camera containing longitudinal road features on a road surface. An identification of longitudinal road features within the image frame is performed. Points are extracted along the identified longitudinal road features to be transformed to a virtual road plan view via a perspective mapping taking into account prior known parameters of the camera. The transformed extracted points are analysed with respect to the vehicle by determining a possible deviation of the points from a line parallel to the vehicle with any measured deviation being used to define an offset correction of the camera.
  • WO 2012/143036 (Ref: SIE0029) relates to a method for online calibration of a vehicle video system using image frames containing longitudinal road features.
  • Adjacent portions of a road surface are captured by at least two cameras of the vehicle video system. Longitudinal road features are identified within the image frames. Identified longitudinal road features are selected within at least two image frames respectively taken by two cameras as matching a single line between the two image frames. An analysis of the single line is performed by determining an offset of the line between the two image frames. The consequently determined offset of the line is applied for a calibration of the cameras of the vehicle video system.
  • Figure 1 illustrates a side view of a vehicle comprising an image acquisition system according to an embodiment of the invention
  • Figure 2 illustrates a perspective view of a camera component of the system of Figure 1;
  • Figure 3 illustrates planes indicative of a stance of the vehicle of Figure 1;
  • Figure 4 is a block diagram illustrating an exemplary configuration of the image acquisition system of Figure 1; and
  • Figure 5 is a block diagram illustrating an image processing method according to an embodiment of the invention.
  • FIG. 1 illustrates a vehicle 102 including an image acquisition system 100 according to an embodiment of the invention.
  • the image acquisition system 100 comprises one or more cameras 110, only one shown, and a processor 120 for processing images captured by the camera 110.
  • the processor 120 can comprise a dedicated processor for processing images provided by the camera 110; or indeed the processor 120 could comprise any processor of a multiple processor core within a vehicle control system and which may be available as required for processing acquired images.
  • the camera 110 may be any type of image capture device.
  • the camera 110 can be of the type commonly referred to as a digital camera, such as complementary metal-oxide- semiconductor (CMOS) camera, charged coupled device (CCD) camera or the like.
  • the camera 110 can capture images in the visible and/or not visible spectrum and provide these to the processor 120.
  • the processor 120 processes images produced by the camera using any combination of chromatic or intensity image plane information provided by camera 110.
  • in vehicle image acquisition systems such as described below, cameras tend to have a wide field of view (FOV) and so capture a highly distorted image; typically, only a portion of the captured image, referred to as a region of interest (ROI), is employed to provide a rectilinear, distortion-corrected image.
  • to compensate for variable alignment of the camera 110 relative to a road surface, the ROI does not extend to the boundaries of the camera image sensor, so allowing the ROI to be aligned as required within any image acquired from the camera 110.
  • the camera 110 shown is a front facing camera mounted in or on the grille of the vehicle 102.
  • the camera 110 is configured to capture a downward looking plan view of the ground towards the front of the vehicle 102.
  • the camera 110 may be mounted in any desired location, within the cabin or on the exterior of the vehicle 102, for capturing an image looking in a generally forward direction from the vehicle 102. So, for example, the camera 110 could also be located within an overhead console unit (not shown) within the cabin of the vehicle 102; on a forward-facing surface of a rear-view mirror (not shown) of the vehicle 102; or on a wing mirror (not shown) of the vehicle 102. Other cameras may be mounted in any other desired location of the vehicle 102 for capturing other portions of the environment surrounding the vehicle 102.
  • cameras may be mounted in any desired location of the vehicle 102 for capturing a downward looking plan view of the ground towards the rear of the vehicle 102, or for capturing an image looking in a generally backward direction from the vehicle 102. So, a camera may be mounted at a rear panel of the vehicle, inside a rear windscreen or on a rear-facing surface of a wing mirror. In other implementations, cameras could also be mounted in any desired location of the vehicle 102 for capturing a downward looking plan view of the ground on the left/right side of the vehicle 102. So, cameras may be mounted at the side middle panels of the vehicle 102, on wing mirrors or on side windows.
  • the processor 120 is tasked with aligning a ROI within images acquired by the camera 110 based on measurements received from the suspension system, simply in order to display a consistent image from a given camera on the driver display; in other implementations, the fields of view of front, rear and side cameras may overlap to an extent that they can be stitched together to provide a single bird's eye view of the environment surrounding the vehicle. Additionally, or alternatively, the processor 120 may perform a parking assist function and/or autonomous or semi-autonomous driving functions such as active cruise control, or indeed any other vehicle function, based on a ROI aligned based on measurements received from the suspension system.
  • Figure 2 shows a perspective view of the camera 110 and a three-dimensional coordinate system Xc, Yc, Zc of the camera 110.
  • the Xc-axis extends along a longitudinal direction corresponding to the optical axis of the camera and the Yc-axis extends in a transverse direction.
  • the Zc-axis is perpendicular to Xc and Yc.
  • the Zc and Xc axes are also shown in Figure 1, however, the Yc-axis is not shown in Figure 1 as this is perpendicular to the side view image of the vehicle.
  • a roll is a rotation about the Xc-axis
  • a pitch is a rotation about the Yc-axis
  • a yaw is a rotation about the Zc-axis, where RotXc, RotYc and RotZc are the angles of rotation.
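The three rotations above can be sketched as standard basic rotation matrices. The following is a minimal illustration; the angle-sign and handedness conventions are assumptions, not taken from the patent:

```python
import math

def rot_x(a):
    # roll: rotation about the Xc-axis by angle a (radians)
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(a):
    # pitch: rotation about the Yc-axis
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rot_z(a):
    # yaw: rotation about the Zc-axis
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def apply(m, v):
    # multiply a 3x3 matrix by a 3-vector
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]
```

Applying rot_z(pi/2) to the unit Xc vector yields the unit Yc vector, for example.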
  • the vehicle 102 further comprises a suspension system (not shown) comprising any combination of: springs, shock absorbers and linkages that connect the body of the vehicle 102 to respective wheels of the vehicle.
  • the suspension system may be of the dependent, independent, or semi-independent type depending on how each wheel moves with respect to each other wheel of the vehicle 102.
  • the suspension system may be active or passive and may be controlled in response to the driver settings, in response to changes to the vehicle loading, or in response to body roll of the vehicle.
  • the suspension system may react to signals from an electronic controller (not shown) to adjust the stance of the vehicle or adjust the stance of the vehicle in response to changes to the vehicle loading.
  • the suspension system may include a pneumatically controlled mechanism or an anti-roll bar that adjusts the vehicle stance such that the loading forces or roll forces are substantially cancelled.
  • the suspension system is operable to control the body position of the vehicle 102 with respect to its nominal (unloaded) position and so also the ground, and thus to actively or reactively change an initial stance of the vehicle to an adjusted vehicle stance.
  • the vehicle 102 is equipped with one or more sensors (not shown) that directly or indirectly monitor changes of the suspension system from their nominal settings.
  • the one or more sensors determine suspension data for the vehicle 102 including one or more of: vehicle pitch around a horizontal transverse axis of the vehicle; roll around a horizontal longitudinal axis of the vehicle; and height of the vehicle.
  • These sensor measurements may be a set of rotation angles and/or distances.
  • the one or more sensors are configured to monitor the heights of the wheel arches 106 with respect to their nominal settings thus providing four height measurements E-H indicative of the vehicle stance.
  • point E corresponds to the height measurement of the rear right wheel arch of the vehicle
  • point F corresponds to the height measurement of the rear left wheel arch of the vehicle
  • point G corresponds to the height measurement of the front right wheel arch of the vehicle
  • point H corresponds to the height measurement of the front left wheel arch of the vehicle.
  • Points E-H are vertically displaced from the corresponding reference points A-D. (In the example, the displacements are all positive, but this need not be the case.)
  • Reference points A-D correspond to the known nominal height settings for the wheel arches and define a reference plane 310 of the vehicle 102 (dashed line).
  • the XR-axis extends along the longitudinal direction and the YR-axis extends along the transverse direction of the reference plane 310.
  • the ZR-axis is perpendicular to both the XR and YR axes and extends vertically from the reference plane 310 of the vehicle 102.
  • points A-D may be points where the wheels of the vehicle 102 touch the ground, and in this case the displacements of points E-H would all be positive.
  • the reference plane 310 remains fixed when the vehicle stance changes.
  • any such suspension measurements can be converted to a set of rotation angles, comprising rotations RotXv and RotYv around the XR and YR axes respectively and collectively referred to as Rsusp, and an overall height offset (hsusp) from a nominal height of the vehicle 102.
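The conversion from the four wheel-arch height offsets to RotXv, RotYv and hsusp can be sketched as below. The track width, wheelbase and sign conventions are illustrative assumptions only; real values would come from the vehicle geometry data:

```python
import math

TRACK = 1.6      # assumed lateral distance between wheel arches (m)
WHEELBASE = 2.7  # assumed longitudinal distance between wheel arches (m)

def suspension_offsets(e, f, g, h):
    """e = rear-right, f = rear-left, g = front-right, h = front-left
    height offsets (m) of points E-H from reference points A-D.
    Returns (RotXv, RotYv, hsusp)."""
    h_susp = (e + f + g + h) / 4.0  # overall height offset
    # roll: left side mean minus right side mean, across the track
    rot_x_v = math.atan2((f + h) / 2.0 - (e + g) / 2.0, TRACK)
    # pitch: front mean minus rear mean, across the wheelbase
    rot_y_v = math.atan2((g + h) / 2.0 - (e + f) / 2.0, WHEELBASE)
    return rot_x_v, rot_y_v, h_susp
```

With all four offsets equal, the body has simply risen: both angles are zero and hsusp equals the common offset.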
  • the one or more sensor measurements may be provided at any frequency and/or basis. For example, such measurements may be performed continuously or periodically (asynchronously or synchronously). The measurements may be performed at a regular time interval within a predetermined time period or in response to a change of the road surface condition, the stance of the vehicle, the status of the suspension system, the loading of the vehicle 102, or in response to the acquisition of an image from the camera 110.
  • these measurements may be run as a background service in a vehicle processor and may be rendered available to the processor 120 via a dedicated memory or via a vehicle network, such as a system bus 104.
  • the system bus 104 can comprise any suitable mechanism for communicating vehicle information from one part of a vehicle to another, for example, a CAN, FlexRay, Ethernet or the like.
  • the processor 120 includes a Motion Tracking Calibration (MTC) module 122 and an intermediate module 124.
  • the MTC module 122 is configured to determine extrinsically, based on at least one or more images acquired from a camera, a set of X, Y and possibly Z camera offsets from nominal rotation angles (ΔRotXc, ΔRotYc and ΔRotZc), referred to collectively as Rextr, and a height parameter hextr along the vertical ZR-axis extending from a reference plane of the vehicle 102. (This could be the same reference plane as the plane 310 in Figure 3, or the plane could be a ground plane.)
  • speed information or odometry is available to the processor 120 across the vehicle system bus 104. This may include wheel rotations per minute (rpm), yaw rate (deg/s), steering angle or the like and, in certain embodiments, this information can be used by the MTC module 122 to determine the above extrinsic parameters Rextr, hextr.
  • the processor 120 also has available, for example, the nominal height of each of the wheel arches, the nominal rotation angles of each camera, as well as their nominal X, Y, Z coordinates within the vehicle, in particular relative to the wheels.
  • the intermediate module 124 acquires the output of the MTC module 122 as well as the suspension information across the system bus 104 and in response generates one or more adjusted camera rotation parameter values (Rpose) and/or an adjusted height parameter value (hpose).
  • referring to Figure 5, there is shown a block diagram illustrating a method according to an embodiment of the invention operable in the system of Figure 4.
  • in step 1, the processor 120 obtains an image N in a succession of images from the camera 110 covering a FOV 112, extending around a ROI, Figure 2.
  • the FOV 112 may change in accordance with vehicle stance.
  • the load weight may cause the suspension system to raise the front of the vehicle 102 and lower the rear of the vehicle 102 towards the ground.
  • one part of the suspension may compress relative to another so causing the stance of the body of the vehicle to change relative to the road surface. Consequently, the FOV 112 of the camera 110 changes from an initial orientation to a different orientation according to the movement of the vehicle body.
  • the processor 120 determines the one or more camera rotation parameters (Rextr) and a height parameter (hextr) along the vertical ZR-axis extending from a reference plane of the vehicle 102.
  • the processor determines the Rextr and hextr parameters extrinsically using the MTC module 122 disclosed in WO 2012/139636 (Ref: SIE0025).
  • the MTC module 122 can use the currently acquired image N, as well as up to m previous (or possibly future relative to N) images, as well as knowledge of the nominal vehicle geometry and speed, to determine the one or more camera rotation parameters (Rextr) and height parameter (hextr).
  • images from the same camera which are to be aligned as described below are also used to determine the extrinsic values
  • these images need not necessarily be the same images as the images whose ROI is to be aligned.
  • one or more of the extrinsic values may be calculated more or less frequently than the ROI in any given image is adjusted.
  • the processor 120 uses reference points extracted from images of the road surface to determine an offset rotation about the XR-axis with respect to a measured deviation based on parallelism between lines obtained from subsequent image frames by joining the tracked reference points within each frame.
  • the processor 120 applies a similar approach for an offset rotation about the YR-axis, although in this case the trajectory of tracked reference points in a plurality of up to m successive image frames is considered.
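The parallelism check underlying the offset estimate can be sketched as the angle between the line joining two tracked features in one frame and the line joining the same features in a later frame. Point formats and frame conventions here are assumptions for illustration:

```python
import math

def parallelism_deviation(p_a1, p_b1, p_a2, p_b2):
    """Signed angle (radians) between the line a-b in frame N and the
    line a-b in frame N+1; zero means the lines stayed parallel."""
    ang1 = math.atan2(p_b1[1] - p_a1[1], p_b1[0] - p_a1[0])
    ang2 = math.atan2(p_b2[1] - p_a2[1], p_b2[0] - p_a2[0])
    d = ang2 - ang1
    return math.atan2(math.sin(d), math.cos(d))  # wrap into (-pi, pi]
```

A persistent non-zero deviation across frames would feed the offset-rotation correction.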
  • the MTC module 122 can also provide an offset rotation about the vehicle ZR-axis; however, as this rotation is not significantly affected by changes in vehicle stance, in preferred embodiments this parameter is not adjusted by the intermediate module 124.
  • the actual physical distance between the tracked features on the road surface can be determined and thus the height parameter (hextr) of the camera can be determined using triangulation.
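The height-by-triangulation step can be illustrated with a flat-road pinhole model: a ground feature at image coordinate u maps to ground distance x = h·u/f, so two observations separated by a known travel distance give the camera height. The model and parameter names are assumptions for illustration:

```python
def camera_height(u1, u2, travel, focal_px):
    """Camera height above a flat road from two observations (image
    coordinates u1, u2, in pixels) of the same road feature, with the
    vehicle having moved 'travel' metres between frames:
        x_i = h * u_i / f  =>  h = f * travel / (u1 - u2)."""
    return focal_px * travel / (u1 - u2)
```

The travel distance between frames would come from the odometry (wheel rpm, speed) available on the system bus 104.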
  • the processor 120 may determine the one or more camera rotation parameters (Rextr) and the height parameter (hextr) using methods disclosed in WO 2012/139660 (Ref: SIE0028) or in WO 2012/143036 (Ref: SIE0029). Irrespective of the method used, such one or more camera rotation parameters (Rextr) and the height parameter (hextr) may be rendered available within the processor 120 or may already be available to the processor 120 via the dedicated memory or via the vehicle network.
  • the intermediate module 124 determines, from suspension data of the vehicle 102 at a time of acquisition of the image, one or more changes in such one or more camera rotation parameter values and the height parameter value (Rsusp, hsusp).
  • the processor 120 acquires suspension data for the vehicle 102 via the dedicated memory or via the vehicle network.
  • the processor 120 extracts a current plane 320 of the vehicle from the suspension data.
  • the processor 120 determines one or more changes in such one or more camera rotation parameter values and the height parameter value (Rsusp, hsusp) by comparing the current plane 320 with the reference plane 310.
  • the height offset hsusp may need to be translated to account for the X, Y location of the camera within the vehicle, i.e. positive offset values E or F at the rear of the vehicle, along with negative offset values G or H at the front of the vehicle, will mean that the offset for any given camera location in between may or may not be the same.
  • the processor extracts a current plane 320 of the vehicle 102 from the data points E to H (solid line) and compares such plane with the plane defined by the reference points A-D to determine two rotations about the X and Y axes.
  • the processor 120 extracts the current plane 320 of the vehicle from the data points E to H using a linear least squares planar fit.
  • the least squares method can advantageously identify an outlier in the four data points E to H as a data point that is displaced at a distance from the plane of fit.
  • the outlier may correspond to a displacement of a wheel with respect to the other wheels if the distance is above a threshold.
  • the outlier may indicate that a wheel is parked on a speed bump or sidewalk.
  • any other method can be used to fit a current plane 320 of the vehicle to data points E to H.
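The linear least squares planar fit has a particularly simple closed form if the four wheel positions are assumed symmetric about the vehicle origin (at ±x, ±y), since the normal equations then decouple. A minimal sketch, with the point layout as an assumption:

```python
def fit_stance_plane(points):
    """Least-squares fit of z = a*x + b*y + c to four (x, y, z) wheel-arch
    points, assuming a symmetric +-x, +-y wheel layout so that the x, y
    and constant columns are mutually orthogonal. Returns the plane
    coefficients and the per-point residuals."""
    c = sum(z for _, _, z in points) / len(points)
    a = sum(x * z for x, _, z in points) / sum(x * x for x, _, _ in points)
    b = sum(y * z for _, y, z in points) / sum(y * y for _, y, _ in points)
    residuals = [z - (a * x + b * y + c) for x, y, z in points]
    return (a, b, c), residuals
```

A residual magnitude above a threshold then flags a likely outlier, e.g. one wheel on a speed bump. Note that with only four points and three plane parameters, a single bad measurement inflates all four residual magnitudes equally, so in this sketch the flag indicates that an outlier exists rather than pinpointing the wheel.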
  • the processor determines the position of the current plane 320 with respect to the reference plane 310. Specifically, the processor 120 determines the current plane 320 as a rotation about the XR and YR axes (ΔRotXv and ΔRotYv) of the reference plane 310 and the height parameter value (hsusp). Such rotations correspond respectively to roll and pitch, which can be described by corresponding basic rotation matrices Rx-v and Ry-v. As will be appreciated, the suspension changes would not typically lead to any significant yaw change and so, as provided in this embodiment, the third rotation parameter (ΔRotZv) is ignored. In an alternative, the third rotation parameter (ΔRotZv) can be considered.
  • any significant yaw change may be indicative of a response of the braking system that causes a net centring steering force substantially greater than zero at a zero steer-angle.
  • the third rotation parameter (ΔRotZv) may be used as an indicator of an optimal or non-optimal response of the braking system, such as when the braking system is unbalanced.
  • the processor applies the one or more changes (Rsusp, hsusp) to the one or more camera rotation parameter values (Rextr) and height parameter value (hextr) to generate one or more adjusted camera rotation parameter values (Rpose) or an adjusted height parameter value (hpose).
  • Rpose can thus be a 3x3 rotation matrix that better estimates the camera pose relative to the road surface.
  • the processor 120 determines the adjusted height parameter value (hpose) from the adjusted position of the camera 110 in response to the change in the vehicle stance.
  • this can simply involve adding the offset hsusp for the camera derived from the suspension data to the extrinsically calculated value hextr to provide hpose.
  • Ppose, a new x, y, z position of the camera taking into account the changed suspension, can be given by:
  • Ppose = Rsusp (Pextr + [0, 0, hsusp]T)
  • where Pextr is the x, y, z position of the camera relative to a vehicle origin based on extrinsic calibration.
  • the vector [0, 0, hsusp]T is used, as in the embodiment there is only a height change measure derived from the suspension data. The zero parameters could be measured, but they will typically be so much smaller in magnitude than hsusp that they can be ignored.
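The position update can be sketched directly. Here Rsusp is composed from the roll and pitch angles as Ry·Rx; the composition order and sign conventions are assumptions:

```python
import math

def camera_position_pose(p_extr, h_susp, rot_x_v, rot_y_v):
    """Ppose = Rsusp * (Pextr + [0, 0, hsusp]^T), with Rsusp built from
    the suspension roll (RotXv) and pitch (RotYv) angles."""
    cx, sx = math.cos(rot_x_v), math.sin(rot_x_v)
    cy, sy = math.cos(rot_y_v), math.sin(rot_y_v)
    # Rsusp = Ry(pitch) @ Rx(roll), written out as a 3x3 matrix
    r = [[cy, sy * sx, sy * cx],
         [0.0, cx, -sx],
         [-sy, cy * sx, cy * cx]]
    p = [p_extr[0], p_extr[1], p_extr[2] + h_susp]  # apply the height offset
    return [sum(r[i][j] * p[j] for j in range(3)) for i in range(3)]
```

With zero roll and pitch this reduces to adding hsusp to the camera's z coordinate.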
  • the processor 120 aligns the ROI within the acquired image based on one or more of the adjusted camera rotation parameter values or the adjusted height parameter value.
  • the processor 120 shifts and/or rotates the ROI within the FOV 112 of the camera to compensate for variable alignment of the camera 110 relative to a road surface.
  • the shift and/or rotation of the ROI is determined from the one or more of the adjusted camera rotation parameter values or the adjusted height parameter value.
  • the processor 120 provides an image of a given portion of the environment surrounding the vehicle from the ROI.
  • the processor 120 acquires an image that corresponds with the adjusted ROI and transforms such image into a flattened rectangular image representative of a portion of the environment surrounding the vehicle 102.
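Shifting the ROI within the sensor image can be sketched with a small-angle pixel offset (shift ≈ f·tan Δangle), clamped to the sensor bounds. The sensor size and focal length below are illustrative assumptions; roll would additionally rotate the ROI about its centre, which is omitted here for brevity:

```python
import math

SENSOR_W, SENSOR_H = 1280, 800  # assumed sensor size in pixels
FOCAL_PX = 320.0                # assumed focal length in pixels

def align_roi(roi, d_pitch):
    """Shift a ROI (x, y, w, h) vertically within the sensor image to
    compensate a small pitch change of the camera (radians)."""
    x, y, w, h = roi
    dy = FOCAL_PX * math.tan(d_pitch)  # pitch moves the scene vertically
    y = min(max(0, round(y + dy)), SENSOR_H - h)  # keep the ROI on-sensor
    return (x, y, w, h)
```

The clamping is what requires the ROI not to extend to the sensor boundaries in the first place: the margin is the headroom available for re-alignment.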
  • the processing method described hereinabove may be operable in the image acquisition system 100 to capture successive images and process them. These images can be displayed by an imaging system (not shown) as top plan views of the ground surrounding the vehicle 102. Thus, if a plurality of vehicle cameras is provided, each having a field of view different from or at least partially overlapping one another, the images of each camera displayed juxtapositioned along with the top plan view image of the vehicle provide an accurate representation of the top plan view of both the vehicle and the adjacent ground.
  • adjusting each camera rotational and height offset based on suspension data means that image information at the adjacent boundaries of images will tend to be better aligned, and this reduces the need for extensive image analysis, transformation and stitching to produce a composite top plan view image.
  • the rate at which the extrinsic parameters (Rextr, hextr) are determined by the MTC module 122 need not be the same as the rate at which the suspension parameters (Rsusp, hsusp), and so the adjusted parameters (Rpose, hpose), are determined by the intermediate module 124.
  • the output of the MTC module 122 could nonetheless be updated quickly, i.e. at almost the update cycle of the suspension signal data from the CAN bus, by the intermediate module 124.
  • the module 124 can adjust camera extrinsic parameters after power up (ignition), in response to an increase in the loading of the vehicle, when cornering on an embankment (roll-angle data), and/or whilst driving on a motorway/highway when the suspension settings are actively changed by the vehicle controller.
  • each camera orientation will change by the same pitch, roll and yaw angle in response to the change in the vehicle stance. Therefore, the processor 120 does not need to repeat determining the one or more changes in the camera rotation parameter values for each camera, as the rotation matrix adjustment (Rpose) derived from the suspension data (Rsusp, hsusp) is the same for each camera.
  • however, the vertical height offset derived from the suspension data may need to be translated in accordance with each camera's X, Y displacement within the vehicle before this offset is applied.
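Translating the common height offset to a specific camera can be done by evaluating the fitted stance plane at that camera's X, Y location within the vehicle; a small sketch, with the plane representation z = a·x + b·y + c assumed from the plane-fitting step:

```python
def camera_height_offset(plane, cam_x, cam_y):
    """Vertical suspension offset at a camera's X, Y position within the
    vehicle, given a stance plane z = a*x + b*y + c fitted to the
    wheel-arch offsets."""
    a, b, c = plane
    return a * cam_x + b * cam_y + c
```

A front-grille camera and a rear camera thus receive different height corrections from the same stance plane, while sharing the same rotational adjustment.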

Abstract

An image processing method comprises: determining camera rotation parameters (Rextr) around respective horizontal axes of a vehicle (102) and a height parameter (hextr) along a vertical axis extending from a reference plane of the vehicle; acquiring an image from the camera; and determining, from suspension data of the vehicle at a time of acquisition of the image, one or more changes in the camera rotation parameter values and the height parameter value. The changes are applied to the camera rotation parameter values (Rextr) and the height parameter value (hextr) to generate one or more adjusted camera rotation parameter values (Rpose) or an adjusted height parameter value (hpose). A region of interest (ROI) within the acquired image is aligned based on the adjusted camera rotation parameter values or the adjusted height parameter value to provide an image of a given portion of the environment surrounding the vehicle from said ROI.

Description

Title
Image acquisition system
Field
The present invention relates to an image acquisition system and a method performed in such a system.
Summary
According to a first aspect, there is provided a method operable in an image acquisition system according to claim 1.
In a second aspect there is provided a computer program product which when executed on a computing device is arranged to perform the method of claim 1. In still a further aspect, there is provided an image acquisition system as detailed in claim 13.
Advantageous embodiments are provided in the dependent claims.
Brief Description Of The Drawings
An embodiment of the invention will now be described, by way of example, with reference to the accompanying drawings, in which: Figure 1 illustrates a side view of a vehicle comprising an image acquisition system according to an embodiment of the invention;
Figure 2 illustrates a perspective view of a camera component of the system of Figure 1;
Figure 3 illustrates planes indicative of a stance of the vehicle of Figure 1;
Figure 4 is a block diagram illustrating an exemplary configuration of the image acquisition system of Figure 1 ; and
Figure 5 is a block diagram illustrating an image processing method according to an embodiment of the invention.
Detailed Description of the Drawings

Figure 1 illustrates a vehicle 102 including an image acquisition system 100 according to an embodiment of the invention. The image acquisition system 100 comprises one or more cameras 110, only one of which is shown, and a processor 120 for processing images captured by the camera 110.
The processor 120 can comprise a dedicated processor for processing images provided by the camera 110; or indeed the processor 120 could comprise any processor of a multi-processor core within a vehicle control system which may be available as required for processing acquired images.
The camera 110 may be any type of image capture device. For example, the camera 110 can be of the type commonly referred to as a digital camera, such as a complementary metal-oxide-semiconductor (CMOS) camera, a charge-coupled device (CCD) camera or the like. The camera 110 can capture images in the visible and/or non-visible spectrum and provide these to the processor 120. The processor 120 processes images produced by the camera using any combination of chromatic or intensity image plane information provided by the camera 110.
Notably, in vehicle image acquisition systems such as those described below, cameras tend to have a wide field of view (FOV) and so capture a highly distorted image. As such, typically only a portion of the rectangular image captured by the camera image sensor is employed to provide a rectilinear, distortion-corrected image used by the remainder of a driver assistance system. This distorted portion of the captured image is referred to herein as a region of interest (ROI). Also, in order to compensate for variable alignment of the camera 110 relative to a road surface and possibly relative to the vehicle, the ROI does not extend to the boundaries of the camera image sensor, so allowing the ROI to be aligned as required within any image acquired from the camera 110.
The camera 110 shown is a front facing camera mounted in or on the grille of the vehicle 102. The camera 110 is configured to capture a downward looking plan view of the ground towards the front of the vehicle 102.
In alternative embodiments, the camera 110 may be mounted in any desired location, within the cabin or on the exterior of the vehicle 102, for capturing an image looking in a generally forward direction from the vehicle 102. So, for example, the camera 110 could also be located within an overhead console unit (not shown) within the cabin of the vehicle 102; on a forward-facing surface of a rear-view mirror (not shown) of the vehicle 102; or on a wing mirror (not shown) of the vehicle 102. Other cameras may be mounted in any other desired location of the vehicle 102 for capturing other portions of the environment surrounding the vehicle 102. For example, cameras may be mounted in any desired location of the vehicle 102 for capturing a downward looking plan view of the ground towards the rear of the vehicle 102, or for capturing an image looking in a generally backward direction from the vehicle 102. So, a camera may be mounted at a rear panel of the vehicle, inside a rear windscreen or on a rear-facing surface of a wing mirror. In other implementations, cameras could also be mounted in any desired location of the vehicle 102 for capturing a downward looking plan view of the ground on the left/right side of the vehicle 102. So, cameras may be mounted at the side middle panels of the vehicle 102, wing mirrors or on side windscreens.
So, while in some implementations of the present invention, the processor 120 is tasked with aligning a ROI within images acquired by the camera 110 based on measurements received from the suspension system simply in order to display a consistent image from a given camera on the driver display; in other implementations the fields of view of front, rear and side cameras may overlap to an extent that they can be stitched together to provide a single birds eye view of the environment surrounding the vehicle. Additionally, or alternatively, the processor 120 may perform a parking assist function and/or autonomous or semi-autonomous driving functions such as active cruise control, or indeed any other vehicle function, based on a ROI aligned using measurements received from the suspension system.
Figure 2 shows a perspective view of the camera 110 and a three-dimensional coordinate system Xc, Yc, Zc of the camera 110. In this system, the Xc-axis extends along a longitudinal direction corresponding to the optical axis of the camera and the Yc-axis extends in a transverse direction. The Zc-axis is perpendicular to Xc and Yc. The Zc and Xc axes are also shown in Figure 1, however, the Yc-axis is not shown in Figure 1 as this is perpendicular to the side view image of the vehicle. A roll (RotXc) is a rotation about the Xc-axis, a pitch (RotYc) is a rotation about the Yc-axis and a yaw (RotZc) is a rotation about the Zc-axis, where RotXc, RotYc and RotZc are the angles of rotation.
Note that while in the example below, the embodiment is described in relation to one camera, the invention is equally applicable to multiple cameras, although it will be appreciated that each of these will have nominal camera axes, in particular their X and Y axes, rotated relative to one another in accordance with the direction they face.

In the present embodiment, the vehicle 102 further comprises a suspension system (not shown) comprising any combination of: springs, shock absorbers and linkages that connect the body of the vehicle 102 to respective wheels of the vehicle. The suspension system may be of the dependent, independent, or semi-independent type depending on how each wheel moves with respect to each other wheel of the vehicle 102.
The suspension system may be active or passive and may be controlled in response to the driver settings, in response to changes to the vehicle loading, or in response to body roll of the vehicle. The suspension system may react to signals from an electronic controller (not shown) to adjust the stance of the vehicle or adjust the stance of the vehicle in response to changes to the vehicle loading. For example, the suspension system may include a pneumatically controlled mechanism or an anti-roll bar that adjusts the vehicle stance such that the loading forces or roll forces are substantially cancelled.
In any case, the suspension system is operable to control the body position of the vehicle 102 with respect to its nominal (unloaded) position, and so also relative to the ground, and thus to actively or reactively change an initial stance of the vehicle to an adjusted vehicle stance.
In the present embodiment, the vehicle 102 is equipped with one or more sensors (not shown) that directly or indirectly monitor changes of the suspension system from its nominal settings. The one or more sensors determine suspension data for the vehicle 102 including one or more of: vehicle pitch around a horizontal transverse axis of the vehicle; roll around a horizontal longitudinal axis of the vehicle; and height of the vehicle. These sensor measurements may be a set of rotation angles and/or distances.
For example, with reference to Figure 3, in one embodiment, the one or more sensors are configured to monitor the heights of the wheel arches 106 with respect to their nominal settings, thus providing four height measurements E-H indicative of the vehicle stance. In this example, point E corresponds to the height measurement of the rear right wheel arch of the vehicle, point F corresponds to the height measurement of the rear left wheel arch of the vehicle, point G corresponds to the height measurement of the front right wheel arch of the vehicle, and point H corresponds to the height measurement of the front left wheel arch of the vehicle. Points E-H are vertically displaced from the corresponding reference points A-D. (In the example, the displacements are all positive, but this need not be the case.) Reference points A-D correspond to the known nominal height settings for the wheel arches and define a reference plane 310 of the vehicle 102 (dashed line). The XR-axis extends along the longitudinal direction and the YR-axis extends along the transverse direction of the reference plane 310. The ZR-axis is perpendicular to both the XR and YR axes and extends vertically from the reference plane 310 of the vehicle 102. In alternative implementations, points A-D may be points where the wheels of the vehicle 102 touch the ground and in this case the displacements of points E-H would all be positive. In any case, the reference plane 310 remains fixed irrespective of changes to the vehicle stance.
It will be appreciated that any such suspension measurements can be converted to a set of rotation angles comprising rotations RotXv, RotYv around the XR and YR axes respectively, collectively referred to as Rsusp, and an overall height offset (hsusp) from a nominal height of the vehicle 102. Note that when, as in the present embodiment, only changes in the height of the suspension are measured, there should be no effect on the rotation angle RotZv around the ZR-axis caused by changes in the suspension height.
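The conversion from the four wheel-arch height measurements to the rotation angles Rsusp and the height offset hsusp can be sketched as follows. This is an illustrative sketch only, not the claimed method: the wheelbase and track values, the point ordering and the sign conventions for roll and pitch are assumptions made for the example.

```python
import math

# Hypothetical vehicle geometry (metres); in practice these come from the
# vehicle geometry data already available to the processor.
WHEELBASE = 2.7   # longitudinal distance between front and rear axles
TRACK = 1.6       # transverse distance between left and right wheels

def suspension_to_pose(e, f, g, h):
    """Convert wheel-arch height offsets (metres, relative to nominal)
    into (roll, pitch, h_susp).

    e: rear right, f: rear left, g: front right, h: front left,
    matching points E-H of Figure 3.
    """
    # Overall height offset: mean body height change.
    h_susp = (e + f + g + h) / 4.0
    # Pitch (RotYv): the front rising relative to the rear tilts the body
    # about the transverse YR-axis.
    front = (g + h) / 2.0
    rear = (e + f) / 2.0
    pitch = math.atan2(front - rear, WHEELBASE)
    # Roll (RotXv): the left side rising relative to the right tilts the
    # body about the longitudinal XR-axis.
    left = (f + h) / 2.0
    right = (e + g) / 2.0
    roll = math.atan2(left - right, TRACK)
    return roll, pitch, h_susp
```

Note that a uniform rise of all four wheel arches yields zero roll and pitch but a non-zero hsusp, as the text describes.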
These sensor measurements may be provided at any frequency and/or on any basis. For example, such measurements may be performed continuously or periodically (asynchronously or synchronously). The measurements may be performed at a regular time interval within a predetermined time period, or in response to a change of the road surface condition, the stance of the vehicle, the status of the suspension system, the loading of the vehicle 102, or in response to the acquisition of an image from the camera 110.
It will be appreciated that these measurements may be run as a background service in a vehicle processor and may be rendered available to the processor 120 via a dedicated memory or via a vehicle network, such as a system bus 104. The system bus 104 can comprise any suitable mechanism for communicating vehicle information from one part of a vehicle to another, for example, a CAN, FlexRay, Ethernet or the like.
Referring in more detail to Figure 4, there is shown a block diagram of a system according to an embodiment of the invention showing the processor 120 in more detail. In this system, the processor 120 includes a Motion Tracking Calibration (MTC) module 122 and an intermediate module 124.
In the embodiment, the MTC module 122 is configured to determine extrinsically, based on at least one or more images acquired from a camera, a set of X, Y and possibly Z camera offsets from nominal rotation angles (ΔRotXc, ΔRotYc, and ΔRotZc), referred to collectively as Rextr, and a height parameter hextr along the vertical ZR-axis extending from a reference plane of the vehicle 102. (This could be the same reference plane as the plane 310 in Figure 3, or the plane could be a ground plane.)
Note that speed information or odometry is available to the processor 120 across the vehicle system bus 104. This may include wheel rotations per minute (rpm), yaw rate (deg/s), steering angle or the like and, in certain embodiments, this information can be used by the MTC module 122 to determine the above extrinsic parameters Rextr, hextr.
Similarly, details of the vehicle geometry are also available to the processor 120, including for example, the nominal height of each of the wheel arches, the nominal rotation angles of each camera as well as their nominal X,Y, Z coordinates within the vehicle - in particular relative to the wheels.
The intermediate module 124 acquires the output of the MTC module 122 as well as the suspension information across the system bus 104 and in response generates one or more adjusted camera rotation parameter values (Rpose) and/or an adjusted height parameter value (hpose).
Referring now to Figure 5 there is shown a block diagram illustrating a method according to an embodiment of the invention operable in the system of Figure 4.
In step 1, the processor 120 obtains an image N in a succession of images from the camera 110 covering a FOV 112, extending around a ROI, Figure 2.
As will be understood, the FOV 112 may change in accordance with vehicle stance. For example, as shown in Figure 1, if the vehicle 102 carries a load weight (not shown) which is unevenly distributed within the vehicle 102, this causes a change in the vehicle stance. In particular, the load weight may cause the suspension system to raise the front of the vehicle 102 and lower the rear of the vehicle 102 towards the ground. Alternatively, as the vehicle 102 corners, one part of the suspension may compress relative to another, so causing the stance of the body of the vehicle to change relative to the road surface. Consequently, the FOV 112 of the camera 110 changes from an initial orientation to a different orientation according to the movement of the vehicle body.
In step 2, the processor 120 determines the one or more camera rotation parameters (Rextr) and a height parameter (hextr) along the vertical ZR-axis extending from a reference plane of the vehicle 102. In one embodiment, the processor determines the Rextr and hextr parameters extrinsically using the MTC module 122 disclosed in WO 2012/139636 (Ref: SIE0025). As outlined above, the MTC module 122 can use the currently acquired image N as well as up to m previous (or possibly future relative to N) images, as well as knowledge of the nominal vehicle geometry and speed, to determine the one or more camera rotation parameters (Rextr) and height parameter (hextr).
Note that where images from the same camera which are to be aligned as described below are also used to determine the extrinsic values, these images need not necessarily be the same images as the images whose ROI is to be aligned. For example, one or more of the extrinsic values may be calculated more or less frequently than the ROI in any given image is adjusted.
In any case, using the MTC module 122, the processor 120 uses reference points extracted from images of the road surface to determine an offset rotation about the XR-axis with respect to a measured deviation based on parallelism between lines obtained from subsequent image frames by joining the tracked reference points within each frame. The processor 120 applies a similar approach for an offset rotation about the YR-axis, although in this case the trajectory of tracked reference points in a plurality of up to m successive image frames is considered. The MTC module 122 can also provide an offset rotation about the vehicle ZR-axis, however, as this rotation is not significantly affected by changes in vehicle stance, in preferred embodiments, this parameter is not adjusted by the intermediate module 124.
Furthermore, where the speed of the vehicle 102 is known, the actual physical distance between the tracked features on the road surface can be determined and thus the height parameter (hextr) of the camera can be determined using triangulation.
Alternatively or in addition, the processor 120 may determine the one or more camera rotation parameters (Rextr) and the height parameter (hextr) using methods disclosed in WO 2012/139660 (Ref: SIE0028) or in WO 2012/143036 (Ref: SIE0029). Irrespective of the method used, such one or more camera rotation parameters (Rextr) and the height parameter (hextr) may be rendered available within the processor 120 or may be already available to the processor 120 via the dedicated memory or via the vehicle network.
In step 3, the intermediate module 124 determines, from suspension data of the vehicle 102 at a time of acquisition of the image, one or more changes in such one or more camera rotation parameter values and the height parameter value (Rsusp, hsusp). In particular, the processor 120 acquires suspension data for the vehicle 102 via the dedicated memory or via the vehicle network. The processor 120 extracts a current plane 320 of the vehicle from the suspension data. Subsequently, the processor 120 determines one or more changes in such one or more camera rotation parameter values and the height parameter value (Rsusp, hsusp) by comparing the current plane 320 with the reference plane 310. Note that the height offset hsusp may need to be translated to account for the X, Y location of the camera within the vehicle, i.e. with positive offset values E or F at the rear of the vehicle and negative offset values G or H at the front of the vehicle, the offset for any given camera location in between will generally differ from both.
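Translating the height offset to a camera's X, Y location can be sketched as a bilinear interpolation between the four wheel-arch offsets. This is one plausible realisation, not the only one the text admits; the coordinate conventions (rear axle at x = 0, right side at y = 0) and the function name are assumptions of the example.

```python
def camera_height_offset(offsets, cam_x, cam_y, wheelbase=2.7, track=1.6):
    """Interpolate the body height change at a camera's X, Y position
    from the four wheel-arch offsets.

    offsets: dict with keys 'E' (rear right), 'F' (rear left),
             'G' (front right), 'H' (front left), in metres.
    cam_x: longitudinal position, 0.0 at the rear axle up to `wheelbase`
           at the front axle.
    cam_y: transverse position, 0.0 at the right up to `track` at the left.
    """
    tx = cam_x / wheelbase  # 0 = rear, 1 = front
    ty = cam_y / track      # 0 = right, 1 = left
    # Interpolate along the rear and front axles first, then between them.
    rear = offsets['E'] * (1 - ty) + offsets['F'] * ty
    front = offsets['G'] * (1 - ty) + offsets['H'] * ty
    return rear * (1 - tx) + front * tx
```

With the rear raised and the front lowered, a camera midway along the vehicle sees no net height change, while a front grille camera sees the full front offset, matching the remark above.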
With reference to Figure 3, the processor extracts a current plane 320 of the vehicle 102 from the data points E to H (solid line) and compares this plane with the plane defined by the reference points A-D to determine two rotations about the X and Y axes. Specifically, the processor 120 extracts the current plane 320 of the vehicle from the data points E to H using a linear least squares planar fit. The least squares method can advantageously identify an outlier in the four data points E to H as the data point that is displaced at a distance from the plane of fit. The outlier may correspond to a displacement of a wheel with respect to the other wheels if the distance is above a threshold. For example, the outlier may indicate that a wheel is parked on a speed bump or sidewalk. Alternatively, any other method can be used to fit a current plane 320 of the vehicle to data points E to H.
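A minimal sketch of the least squares planar fit and the residual-based outlier check might look as follows; the plane model z = ax + by + c, the residual threshold and the function names are assumptions for the example, not details fixed by the text.

```python
import numpy as np

def fit_plane(points):
    """Least-squares fit of z = a*x + b*y + c to the wheel-arch points.

    points: iterable of (x, y, z) tuples, e.g. the four points E-H.
    Returns the plane coefficients (a, b, c) and per-point residuals.
    """
    pts = np.asarray(points, dtype=float)
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    coeffs, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    residuals = pts[:, 2] - A @ coeffs
    return coeffs, residuals

def has_outlier(residuals, threshold=0.05):
    """True if some wheel is displaced from the plane of fit by more
    than the threshold (e.g. parked on a speed bump or sidewalk)."""
    return bool(np.max(np.abs(residuals)) > threshold)
```

Note that with only four points and a three-parameter plane, the residuals share a single degree of freedom, so a large residual flags that an outlier exists rather than isolating one wheel unambiguously.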
In response to the determination of a current plane 320, the processor determines the position of the current plane 320 with respect to the reference plane 310. Specifically, the processor 120 determines the current plane 320 as a rotation about the XR axis and YR axis (ΔRotXv and ΔRotYv) of the reference plane 310 together with the height parameter value (hsusp). Such rotations correspond to roll and pitch respectively, which can be described by corresponding basic rotation matrices Rx-v and Ry-v. As will be appreciated, suspension changes would not typically lead to any significant yaw change, and so, as provided in this embodiment, the third rotation parameter (ΔRotZv) is ignored. In an alternative, the third rotation parameter (ΔRotZv) can be considered. In this case, any significant yaw change may be indicative of a response of the braking system that causes a net centring steering force substantially greater than zero at a zero steer-angle. Thus, the third rotation parameter (ΔRotZv) may be used as an indicator of an optimal or non-optimal response of the braking system, such as when the braking system is unbalanced.
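For reference, the basic rotation matrices mentioned above take the standard form (writing φ for the roll change ΔRotXv and θ for the pitch change ΔRotYv):

```latex
R_{x\text{-}v}(\varphi) =
\begin{pmatrix}
1 & 0 & 0\\
0 & \cos\varphi & -\sin\varphi\\
0 & \sin\varphi & \cos\varphi
\end{pmatrix},
\qquad
R_{y\text{-}v}(\theta) =
\begin{pmatrix}
\cos\theta & 0 & \sin\theta\\
0 & 1 & 0\\
-\sin\theta & 0 & \cos\theta
\end{pmatrix},
\qquad
R_{susp} = R_{x\text{-}v}(\varphi)\, R_{y\text{-}v}(\theta)
```

The product order shown for Rsusp is a convention assumed here; the text does not fix it, and for the small roll and pitch angles involved the two orderings agree to first order.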
In step 4, the processor applies the one or more changes (Rsusp, hsusp) to the one or more camera rotation parameter values (Rextr) and height parameter value (hextr) to generate one or more adjusted camera rotation parameter values (Rpose) or an adjusted height parameter value (hpose). In one embodiment, the processor applies the one or more changes (Rsusp, hsusp) by multiplying the one or more camera rotation parameter values (Rextr) with the one or more changes in the camera rotation parameter values (Rsusp) as follows: Rpose = Rextr · Rsusp. Rpose can thus be a 3x3 rotation matrix that better estimates the camera pose relative to the road surface.
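The matrix composition of step 4 can be sketched as follows, building Rsusp from the suspension-derived roll and pitch changes and composing it with the extrinsic rotation. The helper names and the Rx·Ry composition order are assumptions of the example.

```python
import numpy as np

def rot_x(angle):
    """Basic rotation matrix about the X axis (roll)."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def rot_y(angle):
    """Basic rotation matrix about the Y axis (pitch)."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def adjust_rotation(R_extr, d_roll, d_pitch):
    """R_pose = R_extr @ R_susp, with R_susp built from the suspension
    roll/pitch changes; yaw is ignored, as in the described embodiment."""
    R_susp = rot_x(d_roll) @ rot_y(d_pitch)
    return R_extr @ R_susp
```

Since both factors are rotation matrices, the result remains orthonormal with determinant one, i.e. a valid 3x3 camera rotation.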
In the present embodiment, the processor 120 determines the adjusted height parameter value (hpose) from the adjusted position of the camera 110 in response to the change in the vehicle stance.
At its simplest, this can simply involve adding the offset hsusp for the camera derived from the suspension data to the extrinsically calculated value hextr to provide hpose.
Alternatively or in addition to the adjustment of the angular orientation of the camera due to changes in vehicle stance, Ppose, a new x, y, z position of the camera taking into account the changed suspension, can be given by:

Ppose = Rsusp (Pextr + [0, 0, hsusp]^T)

where Pextr is the x, y, z position of the camera relative to a vehicle origin based on extrinsic calibration. The vector [0, 0, hsusp]^T is used because, in the embodiment, only a height change measure is derived from the suspension data. The zero parameters could be measured, but they will typically be so much smaller in magnitude than hsusp that they can be ignored.
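The position update above is a one-liner in practice: translate the extrinsic camera position by the suspension height change, then rotate it into the adjusted body orientation. The function name and the example numbers below are illustrative only.

```python
import numpy as np

def adjust_position(R_susp, P_extr, h_susp):
    """P_pose = R_susp @ (P_extr + [0, 0, h_susp]^T).

    R_susp: 3x3 rotation from the suspension roll/pitch changes.
    P_extr: extrinsically calibrated camera x, y, z relative to the
            vehicle origin (metres).
    h_susp: suspension-derived height change at the camera (metres).
    """
    return R_susp @ (np.asarray(P_extr, dtype=float)
                     + np.array([0.0, 0.0, h_susp]))
```

For an unchanged orientation (Rsusp = I), the update reduces to adding hsusp to the camera's z coordinate, which is exactly the simple case described above for hpose.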
In step 5, the processor 120 aligns the ROI within the acquired image based on one or more of the adjusted camera rotation parameter values or the adjusted height parameter value. In particular, the processor 120 shifts and/or rotates the ROI within the FOV 112 of the camera to compensate for variable alignment of the camera 110 relative to a road surface. The shift and/or rotation of the ROI is determined from the one or more of the adjusted camera rotation parameter values or the adjusted height parameter value.
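One simple way such a ROI shift could be realised is sketched below, using the pinhole approximation that a small pitch change of a radians moves the image vertically by roughly focal_px · tan(a) pixels. This is a heavily simplified sketch: the approximation ignores lens distortion, the roll-induced rotation of the ROI is omitted, and the parameter names are assumptions of the example.

```python
import math

def shift_roi(roi, d_pitch, focal_px, sensor_h):
    """Vertically shift a ROI rectangle to compensate a small pitch change,
    clamped so the ROI stays within the sensor bounds.

    roi: (x, y, w, h) in pixels, y increasing downwards.
    d_pitch: pitch change in radians (camera tilting down moves the
             ROI down in this sign convention).
    focal_px: focal length in pixels.
    sensor_h: sensor height in pixels.
    """
    x, y, w, h = roi
    y = y + focal_px * math.tan(d_pitch)   # pinhole small-angle shift
    y = min(max(y, 0.0), sensor_h - h)     # keep the ROI on the sensor
    return (x, round(y), w, h)
```

The clamping step reflects the earlier observation that the ROI does not extend to the sensor boundaries, leaving margin for exactly this kind of realignment.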
In step 6, the processor 120 provides an image of a given portion of the environment surrounding the vehicle from the ROI. In particular, the processor 120 acquires an image that corresponds with the adjusted ROI and transforms such image into a flattened rectangular image representative of a portion of the environment surrounding the vehicle 102.
The processing method described hereinabove may be operable in the image acquisition system 100 to capture successive images and process them. These images can be displayed by an imaging system (not shown) as top plan views of the ground surrounding the vehicle 102. Thus, if a plurality of vehicle cameras is provided, each having a field of view different from or at least partially overlapping one another, the images of each camera, displayed juxtaposed along with the top plan view image of the vehicle, provide an accurate representation of the top plan view of both the vehicle and the adjacent ground.
The improved measurement of each camera’s rotational and height offset based on suspension data means that image information at the adjacent boundaries of images will tend to be more aligned and this reduces the need for extensive image analysis, transformation and stitching to produce a composite top plan view image.
Note that the rate at which the extrinsic parameters (Rextr, hextr) are determined by the MTC module 122 need not be the same as the rate at which the suspension parameters (Rsusp, hsusp), and so the adjusted parameters (Rpose, hpose), are determined by the intermediate module 124. So, for example, if a vehicle speed were too high, for example > 50 kph, for the MTC module 122 to generate new extrinsic parameters (Rextr, hextr), or if the MTC module 122 were only run at vehicle start-up, the extrinsic parameters (Rextr, hextr) could nonetheless be updated quickly, i.e. at almost the update cycle of the suspension signal data from the CAN bus, by the intermediate module 124. Thus, the module 124 can adjust camera extrinsic parameters after power up (ignition) in response to increased loading of the vehicle, when cornering on an embankment (roll-angle data), and/or whilst driving on a motorway/highway, when the suspension settings are actively changed by the vehicle controller.
As will be appreciated, where a plurality of vehicle cameras are provided, each camera orientation will change by the same pitch, roll and yaw angle in response to the change in the vehicle stance. Therefore, the processor 120 does not need to repeat determining the one or more changes in the camera rotation parameter values for each camera, as the rotation matrix adjustment (Rsusp) derived from the suspension data is the same for each camera. However, the vertical height offset derived from the suspension data may need to be translated in accordance with each camera's X, Y displacement within the vehicle before this offset is applied.

Claims:
1. A method operable in an image acquisition system (100) for a vehicle (102) comprising a camera (110) arranged to capture successive images having respective fields of view of a portion of an environment surrounding a vehicle (102), said fields of view changing in accordance with vehicle stance, the method comprising:
a) determining one or more camera rotation parameters (Rextr) around a respective horizontal axis of said vehicle (102) and a height parameter (hextr) along a vertical axis extending from a reference plane of the vehicle;
b) acquiring an image from said camera;
c) determining, from suspension data of the vehicle at a time of acquisition of said image, one or more changes in said one or more camera rotation parameter values and said height parameter value (Rsusp, hsusp);
d) applying said one or more changes to said one or more of said camera rotation parameter values (Rextr) and said height parameter value (hextr) to generate one or more adjusted camera rotation parameter values (Rpose) or an adjusted height parameter value (hpose); and
e) aligning a ROI within said acquired image based on one or more of the adjusted camera rotation parameter values or the adjusted height parameter value;
f) providing an image of a given portion of the environment surrounding the vehicle from said ROI; and
g) repeating steps b) to f) for successive acquired images.
2. The method of claim 1 wherein suspension data for the vehicle includes one or more of vehicle pitch around a horizontal transverse axis of the vehicle and roll around a horizontal longitudinal axis of the vehicle.
3. The method of claim 2 wherein the reference plane is defined by the respective points of contact of the vehicle wheels with a surface of the ground.
4. The method of claim 3 wherein said suspension data comprises a vertical displacement of a wheel arch for each wheel relative to a nominal position.
5. The method of claim 4 further comprising converting each of said vertical displacements into each of said vehicle pitch and said vehicle roll.
6. The method of claim 1 wherein said step of determining said one or more camera rotation parameters (Rextr) and said height parameter (hextr) is based on extrinsic information extracted from at least said acquired image.
7. The method of claim 6 wherein said step of determining said one or more camera rotation parameters (Rextr) and said height parameter (hextr) is based on extrinsic information extracted from at least one further image acquired before said acquired image.
8. The method of claim 1 further comprising reading said suspension data from a vehicle system bus.
9. The method of claim 1 further comprising defining a plane relative to said reference plane based on a least squares analysis of said vertical displacements.
10. The method of claim 1 wherein applying said changes to said camera rotation parameter values (Rextr) comprises multiplying a rotation matrix comprising said camera rotation parameter values (Rextr) by a matrix comprising said changes in said camera rotation parameter values (Rsusp).
11. The method of claim 1 wherein applying said changes to said height parameter value (hextr) comprises transforming a matrix Pextr comprising an extrinsically calculated x, y, z position of the camera according to the formula:

Ppose = Rsusp (Pextr + [0, 0, hsusp]^T)

where Rsusp is a matrix comprising said one or more changes in said camera rotation parameter values; hsusp is said height parameter value for said camera determined from said suspension data; and Ppose comprises a matrix including an adjusted height value for the camera.
12. A computer program product comprising computer readable instructions stored on a computer readable medium which when executed on a processor are arranged to carry out the method according to any one of the preceding claims.
13. An image acquisition system comprising one or more cameras, a plurality of suspension sensors and a processor operable to perform the method of any one of claims 1 to 11.
14. A vehicle comprising the image acquisition system of claim 13.
PCT/EP2020/063447 2019-05-29 2020-05-14 Image acquisition system WO2020239457A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102019114404.3A DE102019114404A1 (en) 2019-05-29 2019-05-29 Image acquisition system
DE102019114404.3 2019-05-29


Patent Citations:
WO2012139636A1 (en) 2011-04-13 2012-10-18 Connaught Electronics Limited Online vehicle camera calibration based on road surface texture tracking and geometric properties
WO2012139660A1 (en) 2011-04-15 2012-10-18 Connaught Electronics Limited Online vehicle camera calibration based on road marking extractions
WO2012143036A1 (en) 2011-04-18 2012-10-26 Connaught Electronics Limited Online vehicle camera calibration based on continuity of features
EP2942951A1 (en) * 2014-05-06 2015-11-11 Application Solutions (Electronics and Vision) Limited Image calibration

