WO2015014883A1 - Method for generating a conversion table in the operation of a camera system, camera system and motor vehicle - Google Patents

Method for generating a conversion table in the operation of a camera system, camera system and motor vehicle


Publication number
WO2015014883A1
WO2015014883A1 (application PCT/EP2014/066354)
Authority
WO
WIPO (PCT)
Prior art keywords
image
motor vehicle
transformation data
camera
lut
Prior art date
Application number
PCT/EP2014/066354
Other languages
English (en)
Inventor
Patrick Eoghan Denny
Mark Patrick GRIFFIN
Original Assignee
Connaught Electronics Ltd.
Priority date
Filing date
Publication date
Application filed by Connaught Electronics Ltd. filed Critical Connaught Electronics Ltd.
Publication of WO2015014883A1


Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60R VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R1/00 Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/20 Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/22 Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle
    • B60R1/28 Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with an adjustable field of view
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/698 Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60R VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R1/00 Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/20 Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/22 Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle
    • B60R1/23 Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with a predetermined field of view
    • B60R1/27 Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with a predetermined field of view providing all-round vision, e.g. using omnidirectional cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/90 Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60R VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R2300/00 Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R2300/30 Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of image processing
    • B60R2300/302 Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of image processing combining image information with GPS information or vehicle data, e.g. vehicle speed, gyro, steering angle data

Definitions

  • the invention relates to a method for operating a camera system of a motor vehicle, in which an image of an environmental region of the motor vehicle is provided by means of a camera of the camera system. The image is then transformed to an image presentation using transformation data by means of an image processing device, wherein camera parameters of the camera are taken into account in transforming the image. The image presentation is then displayed on a display of the camera system.
  • the invention relates to a camera system formed for performing such a method as well as to a motor vehicle with such a camera system.
  • Camera systems for motor vehicles are already known from the prior art.
  • several cameras can be employed in a motor vehicle; it is now increasingly common to use a camera assembly with at least two cameras for a camera system of a vehicle, each of which captures an environmental region of the motor vehicle.
  • four cameras can be employed, which capture the entire environment around the motor vehicle.
  • an overall image presentation can be provided from the images of all of the cameras, such as for example the so-called “bird's eye view”.
  • This image presentation represents a plan view of the motor vehicle as well as its environment from a bird's eye view and thus for example from a reference point of view directly above the motor vehicle.
  • the provision of such an environmental representation from the images of several cameras is for example known from the document US 2011/0156887.
  • the display can usually be switched between different operating modes, which differ from each other with respect to the displayed image presentation and thus with respect to the view.
  • the driver of the motor vehicle can select between different views, which are conceived and optimized for different road situations.
  • the display can for example also be switched into a cross traffic operating mode, in which the so-called junction view is displayed, i.e. a view, which shows the cross traffic.
  • Such a junction view can for example be provided based on images of a camera, which is disposed on the front - for example on the front bumper - or else in the rear region - for example on the rear bumper or on a tailgate - and has a relatively wide opening angle of 160° to 200°.
  • an image presentation can be displayed on the display, which is generated from the images of a rear view camera and presents the environmental region behind the motor vehicle.
  • the raw images of the camera have to be transformed or mapped into the coordinate system of the display.
  • the raw images are communicated from the camera to a central electronic image processing device digitally processing the images. If multiple cameras are employed, thus, this central image processing device receives the digital image data of all of the cameras.
  • the images are transformed from the coordinate system of the respective camera into the coordinate system of the display.
  • transformation data is used, which defines the transformation of the raw image.
  • This transformation data is for example in the form of a so-called look-up table and also considers the camera parameters, which in particular include the mounting position of the camera on the vehicle, the orientation of the camera on the vehicle, and the characteristics of the lens used.
  • the position on the vehicle is defined by three coordinate values (x, y, z), which specify the unique position of the camera with respect to the vehicle body.
  • the orientation of the camera on the vehicle in turn is preset by three angular values, which specify the angles of orientation of the camera around the three vehicle axes x, y, z.
  • the characteristics of the lens for example define a distortion of the images caused by the lens and therefore should be taken into account because so-called fish-eye lenses are usually employed, which cause a relatively great distortion of the images. This distortion is corrected within the scope of the mentioned transformation.
  • the so-called “viewport” is also defined by the transformation data (look-up table), i.e. a partial region of the image, which is used for generating the image presentation for the display.
  • This viewport depends on the currently activated operating mode of the display and thus on the current view displayed on the display.
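The role of such a look-up table can be sketched in a few lines: each display pixel stores the raw-image coordinate it should be filled from, so the whole transformation (lens correction, mapping into display coordinates, viewport selection) collapses into one indexing operation per frame. The function and array names below are illustrative assumptions, not taken from the patent.

```python
import numpy as np

# Minimal LUT sketch: two integer index maps (map_y, map_x), one entry per
# display pixel.  Display pixel (v, u) is filled from raw-image pixel
# (map_y[v, u], map_x[v, u]).

def apply_lut(raw_image: np.ndarray, map_y: np.ndarray, map_x: np.ndarray) -> np.ndarray:
    """Transform a raw camera image into display coordinates via a LUT."""
    return raw_image[map_y, map_x]

# Tiny example: a 2x2 display that mirrors a 2x2 raw image horizontally.
raw = np.array([[1, 2],
                [3, 4]])
map_y = np.array([[0, 0],
                  [1, 1]])
map_x = np.array([[1, 0],
                  [1, 0]])
out = apply_lut(raw, map_y, map_x)
```

Because the maps are precomputed, per-frame cost is a single gather; regenerating the LUT (e.g. when the vehicle level changes) does not touch this per-frame path.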
  • One object of the invention is to provide a method, a camera system as well as a motor vehicle improved with respect to the prior art. According to the invention, this object is achieved by a method, by a camera system as well as by a motor vehicle having the features of the respective independent claims.
  • a method according to the invention serves for operating a camera system of a motor vehicle.
  • At least one camera of the camera system provides an image of an environmental region of the motor vehicle.
  • the image is transformed to an image presentation displayed on a display of the motor vehicle by means of a digital image processing device.
  • the transformation of the image is effected using transformation data, wherein preset camera parameters of the camera are taken into account in transforming the image.
  • the current vehicle level and thus the current chassis height are acquired by means of at least one sensor of the motor vehicle, and the transformation data is generated depending on the measured vehicle level in the operation of the camera system.
  • the invention is based on the realization that the position and the orientation of the camera relative to the vehicle body are fixedly preset and thus known, but the position and the orientation of the camera relative to the ground or to the road, on which the motor vehicle is located, can vary over time.
  • the level of the camera above the ground as well as the orientation of the camera can be affected by a plurality of factors, such as for example by the loading of the trunk of the vehicle, by the number of vehicle occupants, by uneven distribution of weight in the vehicle, by coupling a trailer, or during off-road driving.
  • the invention is based on the realization that this variation of the vehicle level also causes variation of the current view on the display.
  • the variation of the vehicle level results in errors in the composition of the images of different cameras.
  • the invention is based on the realization that the disadvantages of the prior art can be avoided in that the current vehicle level is measured by means of at least one sensor and taken into account in generating the transformation data (in particular the so-called lookup table). In this manner, all of the variations of the vehicle level can be compensated for, and the desired image presentation can always be displayed on the display, i.e. always the same view with respect to the ground.
  • the "ride height” or “ground clearance” is understood by the term “vehicle level”. Then, at least the current level of the camera above the ground can be inferred from the current value of the vehicle level, and the current level of the camera above the ground can be taken into account in generating the transformation data.
  • the vehicle level can for example be measured in a damper of the motor vehicle, i.e. a component, which causes the oscillations of the sprung masses to decay.
  • the relative position of the piston with respect to the cylinder can for example be measured by the sensor, which then allows conclusions about the actual vehicle level and the level of the camera above the ground.
  • the invention is not restricted to the arrangement of the sensor in the damper; basically, the at least one sensor can be disposed in any position, which allows the acquisition of the vehicle level.
  • At least two, in particular at least three, preferably four such sensors are used, which each acquire the vehicle level in the respective corner regions of the motor vehicle.
  • uneven distributions of the loading in the motor vehicle can be detected such that the current orientation of the camera around all of the vehicle axes can also be determined and taken into account in the transformation data.
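One way such corner measurements could be combined is a least-squares plane fit: the four level values define the current attitude of the vehicle body, from which pitch, roll and the level change at any camera mounting point follow. This is a sketch under assumed sensor positions and axis conventions, not the patent's method.

```python
import math
import numpy as np

# Fit the body plane z = a*x + b*y + c through the four corner level
# sensors, then read off pitch, roll and the level at a camera mount.
# Sensor positions and readings below are illustrative.

def body_attitude(sensor_xy: np.ndarray, levels: np.ndarray):
    """Least-squares plane fit through the corner level readings."""
    A = np.column_stack([sensor_xy[:, 0], sensor_xy[:, 1], np.ones(len(levels))])
    (a, b, c), *_ = np.linalg.lstsq(A, levels, rcond=None)
    pitch = math.atan(a)   # rotation about the vehicle transverse (y) axis
    roll = math.atan(b)    # rotation about the vehicle longitudinal (x) axis
    return a, b, c, pitch, roll

def camera_level(a: float, b: float, c: float, cam_x: float, cam_y: float) -> float:
    """Level change of the body at the camera mounting point (x, y)."""
    return a * cam_x + b * cam_y + c

# Corner sensors at (x, y) in metres: FL, FR, RL, RR.
xy = np.array([[1.4, 0.8], [1.4, -0.8], [-1.4, 0.8], [-1.4, -0.8]])
levels = np.array([0.02, 0.02, -0.02, -0.02])   # nose 2 cm up, tail 2 cm down
a, b, c, pitch, roll = body_attitude(xy, levels)
front_cam = camera_level(a, b, c, 1.4, 0.0)
```

With two or three sensors the same fit degenerates gracefully (fewer attitude degrees of freedom are observable), which matches the graded "at least two, preferably four" formulation above.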
  • the transformation data is preferably in the form of a look-up table representing a transformation map, which is applied to the raw image of the camera in order to alter the pixels of the image and map them into the coordinate system of the display.
  • the transformation data thus represents a projection function or mapping function, by means of which the image is transformed into the coordinate system of the display.
  • this transformation data consider the camera parameters, which are known and for example can be stored in a memory of the image processing device.
  • the camera parameters include the characteristics of the lens of the camera as well as the position of the camera relative to the vehicle body; this position can be determined by three coordinate values x, y, z.
  • the orientation is then defined by three angular values, namely an angle around the vehicle longitudinal axis, an angle around the vehicle transverse axis as well as an angle around the vehicle vertical axis.
  • the camera parameters thus can describe the characteristics of the lens and therefore the optical characteristics of the camera; on the other hand, the camera parameters also include the fixed installation position of the camera on the vehicle.
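The six installation parameters can be turned into a camera extrinsic transform in the usual way. The sketch below assumes a Z-Y-X (yaw-pitch-roll) rotation convention, which the patent does not specify; it only illustrates how the three angles and three coordinates act together.

```python
import numpy as np

# Build camera extrinsics from the fixed installation parameters:
# position (x, y, z) and three orientation angles about the vehicle axes.

def rotation(roll: float, pitch: float, yaw: float) -> np.ndarray:
    """R = Rz(yaw) @ Ry(pitch) @ Rx(roll) (assumed convention)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def vehicle_to_camera(point, cam_pos, roll, pitch, yaw) -> np.ndarray:
    """Map a vehicle-frame point into the camera frame: R^T (p - t)."""
    R = rotation(roll, pitch, yaw)
    return R.T @ (np.asarray(point, dtype=float) - np.asarray(cam_pos, dtype=float))
```

A level change measured by the sensors would enter here as a correction to `cam_pos[2]` and to the pitch/roll angles, which is exactly the part of the transformation data that needs regenerating.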
  • the image processing device is preferably a component separate from the camera.
  • the image processing device can be constituted by a controller, which may include a digital signal processor.
  • the signal processor then serves for performing the transformation of the image and for generating the image presentation.
  • the "generation" of the transformation data presently in particular implies that template data stored in the image processing device is used and adapted or completed depending on the measured vehicle level in the operation of the camera system.
  • a lookup table can be stored, which is then updated or completed depending on the measured vehicle level.
  • thus, not the entire look-up table has to be generated, but only that portion which depends on the level of the camera above the ground and on the orientation of the camera.
  • a partial region (in particular exclusively a partial region) of the image is used for generating the image presentation and the partial region is defined by the transformation data.
  • the generation of the transformation data can then include that the partial region of the image is determined depending on the measured vehicle level.
  • the “partial region” is understood to mean the viewport, i.e. an image section used for generating the image presentation for the display. In this embodiment, this viewport is determined depending on the measured vehicle level.
  • This embodiment has the advantage that the same environmental region of the motor vehicle can always be displayed on the display, independently of the current vehicle level.
  • this embodiment allows correct and leap-free composition of the images of different cameras, which proves particularly advantageous in the above mentioned “bird's eye view”.
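Under a deliberately simple model, the level-dependent part of the viewport determination could look as follows. The pixels-per-metre factor and the purely vertical shift are illustrative assumptions; a real system would derive the shift from the full camera geometry.

```python
# Shift the viewport (the partial image region used for the display) so
# that the same ground area stays visible when the vehicle level changes.

def viewport(nominal_top: int, nominal_left: int, height: int, width: int,
             level_change_m: float, pixels_per_metre: float = 250.0):
    """Return (top, left, height, width) of the level-corrected viewport."""
    shift = int(round(level_change_m * pixels_per_metre))
    return (nominal_top + shift, nominal_left, height, width)

# Vehicle sits 4 cm lower than nominal: under this linear model the
# viewport moves up in the raw image by 10 pixels.
vp = viewport(120, 0, 400, 800, level_change_m=-0.04)
```

Only the viewport origin changes; its size, and hence the display layout, stays fixed, so the same view with respect to the ground is preserved.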
  • the camera system can also include multiple cameras: in an embodiment, at least two cameras can be employed, which each provide an image of an environmental region of the motor vehicle. For generating the image presentation (for example the “bird's eye view”), respective partial regions (viewports) of the images can then be combined with each other such that the partial regions mutually overlap in an overlapping region.
  • the overlapping region of the respective partial regions can be defined by the transformation data, and the generation of the transformation data can include that the overlapping region of the respective partial regions is determined depending on the vehicle level.
  • the transition regions between the images of different cameras can thus be compensated for depending on the measured vehicle level, and an image presentation can be provided on the display, which is based on the images of different cameras and does not have any leaps or double image structures in the transition regions.
  • a suspension system of the motor vehicle is switched between at least two predetermined suspension modes, which can for example be selected by the driver himself.
  • the following suspension modes can be provided, in which the motor vehicle has different, factory-preset levels: a standard mode with an intermediate vehicle level; a sports mode with a low vehicle level; and an off-road mode with a greater vehicle level.
  • the transformation data for the transformation of the image can be generated separately for each suspension mode.
  • the transformation of the image to the image presentation can therefore be particularly precisely performed in each suspension mode of the motor vehicle.
  • the basic vehicle level is fixedly preset in each suspension mode, but the vehicle level is also influenced by a plurality of factors (if level regulation is not present), such as in particular by the loading of the motor vehicle and the like.
  • the currently activated suspension mode can be acquired by the image processing device.
  • the transformation data can then be generated depending on the measured vehicle level. The transformation data generated once can then be reused for the same suspension mode if the suspension system is switched into another mode and then back into the original mode.
  • transformation data for at least one other (non-activated) suspension mode of the suspension system is also generated separately depending on the transformation data of the current suspension mode and/or depending on the measured vehicle level.
  • the separate transformation data can be virtually simultaneously generated for all of the suspension modes. For example, this can be performed upon activating the ignition of the motor vehicle such that the transformation data for all of the suspension modes is provided already at this time. If the suspension system is then switched into another mode, thus, the already generated transformation data can be directly accessed such that a correct image presentation can be displayed directly after switching the suspension system.
  • the generation of the transformation data for the other, currently not activated suspension modes of the suspension system is allowed in that the vehicle level is basically factory- predefined for all of the suspension modes and thus the difference of the vehicle level between the respective suspension modes is known. If the current vehicle level for the currently activated suspension mode is measured, thus, the vehicle level in the other suspension modes can also be inferred based on these measured values.
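This inference can be sketched directly: the measured deviation from the factory level in the active mode is carried over to the factory levels of all other modes. The mode names and offset values are illustrative, not from the patent.

```python
# Factory-preset vehicle levels per suspension mode (illustrative, metres
# relative to the standard level).
FACTORY_LEVEL_M = {"standard": 0.00, "sport": -0.02, "offroad": 0.04}

def levels_for_all_modes(active_mode: str, measured_level: float) -> dict:
    """Infer the level in every mode from the measurement in the active one.

    The deviation from the factory level (e.g. caused by loading) is
    assumed to apply equally in all modes.
    """
    deviation = measured_level - FACTORY_LEVEL_M[active_mode]
    return {mode: base + deviation for mode, base in FACTORY_LEVEL_M.items()}

# Loaded vehicle measured 1 cm above the factory standard level:
levels = levels_for_all_modes("standard", 0.01)
```

With these inferred levels, the transformation data for the non-active modes can be prepared in advance, e.g. at ignition, as described above.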
  • the transformation data can be generated with the startup of a prime mover, i.e. a source of motive power, or with the activation of the ignition, in particular with each startup of the prime mover or each time the ignition is activated.
  • transformation data for all of the suspension modes of the motor vehicle is generated.
  • the generation of the transformation data at the time of the startup of the prime mover or the ignition has the advantage that the transformation data is therefore available for the entire period of time of the current operation of the motor vehicle, in particular for all of the suspension modes of the suspension system.
  • the transformation data then for example does not have to be generated anymore, such that even upon switching between different suspension modes, the transformation data for the respective suspension mode is already available.
  • the current vehicle level is - in particular continuously - acquired already before the startup of the prime mover or before the activation of the ignition and the acquired measured values of the vehicle level are stored in the image processing device for the subsequent generation of the transformation data.
  • the vehicle level is continuously acquired during travel, for example at predetermined time intervals, in a predetermined suspension mode, in particular in the off-road mode, and the transformation data is also respectively newly generated continuously during travel, for example likewise at predetermined time intervals, based on the current measured values of the vehicle level.
  • the frequent variations of the vehicle level can be quickly compensated for by acquiring the current vehicle level, and thus also the current level and/or the orientation of the camera, during travel and generating the transformation data accordingly.
  • the display or the camera system can be switched between at least two operating modes, which differ from each other with respect to the image presentation and thus with respect to the view displayed on the display.
  • the operating mode of the display and thus the view on the display is altered by the driver himself.
  • the transformation data for the transformation of the image can be generated separately for each operating mode of the display.
  • the transformation can therefore be provided individually and specifically for each one of the operating modes of the display. If the camera system includes multiple cameras, thus, it can also be provided that the transformation data is generated separately for each camera or for each operating mode of the display.
  • switching of the display from a previous operating mode into another, current operating mode is acquired by the image processing device.
  • the transformation data for this current operating mode can then be generated directly upon switching and thus due to the switching of the display into the current operating mode. This for example means that upon the startup of the prime mover or upon the activation of the ignition, the transformation data is generated exclusively for the currently activated operating mode of the display and in particular for all of the suspension modes of the suspension system. If the display is then switched into another operating mode in the operation of the motor vehicle, the transformation data is also generated for this new operating mode, in particular also for all of the suspension modes.
  • the measured values of the vehicle level can be used, which have been acquired either previously before the startup of the prime mover or before the activation of the ignition, or else are currently acquired by means of the at least one sensor.
  • the generation of the transformation data upon switching the display has the advantage that the computational power of the image processing device can be optimally utilized, because the transformation data does not have to be generated at the same time for all of the operating modes of the display and all of the suspension modes of the vehicle; the generation of the transformation data can thus be distributed over time.
  • in addition, unnecessary generation of transformation data for an operating mode of the display, which is not activated at all in the current operation of the motor vehicle, is prevented, and computational power can be saved.
  • multiple sensors can also be employed, which are disposed distributed on the motor vehicle (for example in the respective dampers) and each acquire the vehicle level at the respective installation location. If multiple sensors are present, thus, the exact current position and/or the current orientation of the camera relative to the ground or floor, on which the motor vehicle is located, can also be determined based on measured values of the sensors. The transformation data can then be generated based on the thus determined position and/or orientation of the camera. If the exact position and/or the orientation of the camera are known, thus, the viewport of the image can for example be optimally defined such that the desired view can be generated on the display.
  • measured values of a tilt sensor of at least one wheel of the motor vehicle can also be taken into account, which further improves the accuracy.
  • the behavior of the suspension system with respect to the vehicle level can be predicted by the image processing device. For example, if the suspension reaches the vehicle minimum height level (due to damper behavior), the image processing device can predict that the suspension system will start moving back up, increasing the vehicle height level. This prediction can be based on a suspension behavior model. Depending on this prediction, the transformation data can be adapted.
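A minimal suspension behavior model of the kind alluded to here is an under-damped second-order response: after a disturbance pushes the body below its rest level, the model predicts the rebound toward rest. All parameters below are illustrative, not taken from the patent.

```python
import math

# One-degree-of-freedom damped-oscillator model of the suspension,
# predicting the body level relative to rest after a disturbance.
# omega (natural frequency) and zeta (damping ratio) are assumed values.

def predicted_level(t: float, amplitude: float,
                    omega: float = 8.0, zeta: float = 0.3,
                    rest_level: float = 0.0) -> float:
    """Level relative to rest for an under-damped suspension response."""
    omega_d = omega * math.sqrt(1.0 - zeta * zeta)   # damped frequency
    return rest_level + amplitude * math.exp(-zeta * omega * t) * math.cos(omega_d * t)

# At t = 0 the body is 3 cm below rest; the model predicts the rebound.
now = predicted_level(0.0, amplitude=-0.03)
later = predicted_level(0.2, amplitude=-0.03)
```

The image processing device could evaluate such a model a fraction of a second ahead and pre-adapt the transformation data instead of reacting only after the level sensors report the change.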
  • the invention relates to a camera system for a motor vehicle, including at least one camera for providing an image of an environmental region of the motor vehicle, as well as including an image processing device for transforming the image to an image presentation using transformation data as well as considering camera parameters of the camera, wherein the image presentation is provided for displaying on a display.
  • the image processing device is adapted to generate the transformation data depending on a measured vehicle level in the operation of the camera system.
  • a motor vehicle according to the invention, in particular a passenger car, includes a camera system according to the invention.
  • Fig. 1 a schematic illustration of a motor vehicle with a camera system according to an embodiment of the invention;
  • Fig. 2 a block diagram for explaining an image transformation;
  • Fig. 3 a further block diagram;
  • Fig. 4 and 5 an exemplary raw image as well as an exemplary image presentation provided by means of image transformation of the raw image;
  • Fig. 6 a flow diagram of a method according to an embodiment of the invention;
  • Fig. 7 and 8 schematic illustrations for explaining the problem in composing images of different cameras.
  • the motor vehicle 1 illustrated in Fig. 1 is for example a passenger car.
  • the motor vehicle 1 includes a camera system 2, which has a plurality of cameras 3, 4, 5, 6 in the embodiment.
  • a first camera 3 is for example disposed on the front bumper of the motor vehicle 1.
  • a second camera 4 is for example disposed in the rear area, for instance on the rear bumper or on a tailgate.
  • the two lateral cameras 5, 6 can for example be integrated in the respective exterior mirrors.
  • the cameras 3, 4, 5, 6 are electrically coupled to a central image processing device 7, which in turn is coupled to a display 8.
  • the display 8 is any display device, for example an LCD display.
  • the cameras 3, 4, 5, 6 are video cameras, which are each able to capture a sequence of images per time unit and communicate it to the image processing device 7.
  • the cameras 3, 4, 5, 6 can for example be CCD cameras or CMOS cameras.
  • the camera 3 captures an environmental region 9 in front of the motor vehicle 1.
  • the camera 4 captures an environmental region 10 behind the motor vehicle 1.
  • the camera 5 captures a lateral environmental region 11 to the left besides the motor vehicle 1, while the camera 6 captures an environmental region 12 on the right side of the motor vehicle 1.
  • the cameras 3, 4, 5, 6 provide images of the respective environmental regions 9, 10, 11, 12 and communicate these images to the image processing device 7. As is apparent from Fig. 1, the imaged environmental regions 9, 10, 11, 12 can also mutually overlap in pairs.
  • a sensor 13 can be provided for each wheel of the motor vehicle 1, by means of which the vehicle level and thus the ground clearance of the motor vehicle 1 at the respective installation location of the sensor 13 is acquired.
  • the sensors 13 can be disposed in the respective corner regions of the motor vehicle 1 .
  • the sensors 13 are integrated in the respective dampers and thus measure a relative position of the piston relative to the cylinder.
  • the image processing device 7 can pick up the respective measured values of the sensors 13.
  • the motor vehicle 1 has a suspension system not illustrated in more detail, which is operated in at least two suspension modes, which differ from each other with respect to the vehicle level.
  • three suspension modes are provided, such as for example: a standard mode with an intermediate vehicle level, a sporting suspension mode with a low vehicle level as well as an off-road suspension mode with a relatively high vehicle level.
  • the current suspension mode can be selected by the driver of the motor vehicle 1 .
  • the vehicle levels for the respective suspension modes are factory-preset such that the differences between the vehicle levels of the different suspension modes are also known. These differences are stored in the image processing device 7.
  • the display 8 and the camera system 2, respectively, can be switched between different operating modes, wherein the switching between the different operating modes is for example effected by the driver himself using a corresponding operating device.
  • This operating device can for example be integrated in the display 8, which can be configured as a touch display.
  • different image presentations are generated, which are displayed on the display 8.
  • the operating modes differ in the view, which is presented on the display 8.
  • an image presentation 14 can for example be generated, which is based on the images I3, I4, I5, I6 of all of the cameras 3, 4, 5, 6.
  • the image processing device 7 receives the images I3, I4, I5, I6 of all of the cameras 3, 4, 5, 6 and generates the image presentation 14 from the images I3, I4, I5, I6.
  • This image presentation 14 shows the motor vehicle 1 and the environment 9, 10, 11, 12 for example from a bird's eye view and thus from a point of view above the motor vehicle 1.
  • the images I3, I4, I5, I6 are each subjected to a transformation and then composed. For the respective transformation, transformation data is used, which is provided in the form of a look-up table LUT. Therein, a separate look-up table LUT is provided for each camera 3, 4, 5, 6.
  • a partial region I3', I4', I5', I6' of the respective image I3, I4, I5, I6 is determined.
  • the partial regions I3', I4', I5', I6' are so-called viewports, which are defined in the respective look-up tables LUT3, LUT4, LUT5, LUT6.
  • partial regions I3', I4', I5', I6' can be used, which show the respective environmental region 9, 10, 11, 12 of the motor vehicle 1 up to a predetermined distance from the motor vehicle 1.
  • the respective partial region I3', I4', I5', I6' is transformed into the coordinate system of the display 8.
  • the look-up table LUT3, LUT4, LUT5, LUT6 represents a transformation map, by means of which the pixels of the respective image I3, I4, I5, I6 are correspondingly altered and mapped to the display 8.
  • a correction of the distortion of the respective image I3, I4, I5, I6, which is caused by the above-mentioned fish-eye lens, can also be performed.
  • the partial regions I3', I4', I5', I6' mutually overlap in pairs in the image presentation 14 (bird's eye view) in overlapping regions 15. These overlapping regions 15, and thus the composition of the partial regions I3', I4', I5', I6', are also preset by the transformation data LUT3, LUT4, LUT5, LUT6.
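The per-camera look-up tables described above can be pictured as pure pixel-remapping tables: for every pixel of the image presentation they store which pixel of the raw camera image to sample. The following sketch is illustrative only and not from the patent; the function and array names are assumptions:

```python
import numpy as np

def apply_lut(raw_image, lut):
    """Apply a per-camera look-up table to a raw camera image.

    lut has shape (H_out, W_out, 2) and holds, for every pixel of the
    output presentation, the (row, col) of the raw camera pixel to
    sample. Viewport selection, the perspective change and the
    fish-eye correction are all assumed to be baked into these indices.
    """
    return raw_image[lut[..., 0], lut[..., 1]]

# A trivial LUT that flips a 2x2 image upside down:
raw = np.array([[1, 2],
                [3, 4]])
lut = np.array([[[1, 0], [1, 1]],
                [[0, 0], [0, 1]]])
flipped = apply_lut(raw, lut)  # -> [[3, 4], [1, 2]]
```

Because everything is precomputed into the table, applying it at runtime is a single gather operation per output pixel, which is what makes the LUT approach attractive for embedded image processing devices.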
  • A further operating mode of the display 8 is explained in more detail with reference to Fig. 3.
  • an image presentation 14 is displayed on the display 8, which is based exclusively on the images of a single camera, namely for example the rear view camera 4.
  • This situation also corresponds to a camera system 2, in which only a single camera is employed.
  • the camera 4 provides the images I4 to the image processing device 7, which performs the image transformation of the images I4 to the image presentation 14.
  • transformation data is used, which is provided in the form of a look-up table LUT4'.
  • In this image transformation, a partial region I4' (viewport) is selected from the image I4, and this partial region I4' is then transformed into the coordinate system of the display 8.
  • the above mentioned distortion correction is also performed.
  • An exemplary image transformation of the image I4 of the camera 4 is illustrated in Fig. 4 and 5.
  • an exemplary raw image I4 of the camera 4 is shown in Fig. 4.
  • a fish-eye lens is used, which causes a relatively great distortion of the image I4, in particular in the edge regions.
  • an image presentation 14 arises, as exemplarily shown in Fig. 5.
  • only a partial region of the image I4 is used for the image presentation 14, which is then also corrected with respect to the distortion and adapted to the display 8.
  • camera parameters of the respective cameras 3, 4, 5, 6 also have to be taken into account.
  • These camera parameters in particular include the respective installation location of the cameras 3, 4, 5, 6 - i.e. the position of the cameras 3, 4, 5, 6 in a coordinate system x, y, z defined relative to the vehicle body (see Fig. 1) - as well as the orientation of the cameras 3, 4, 5, 6, which is defined by three angular values: Rx - orientation angle around the x axis, Ry - orientation angle around the y axis, and Rz - orientation angle around the z axis of the motor vehicle 1.
  • the camera parameters can also include the characteristics of the used lens, such as for example information about the caused distortion.
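The position and the three orientation angles above fully determine the camera pose. A minimal sketch of how Rx, Ry, Rz could be combined into a rotation matrix is given below; the rotation order (Rz·Ry·Rx) and the function name are assumptions, since the patent does not specify a convention:

```python
import math

def rotation_matrix(rx, ry, rz):
    """Camera orientation from the angles Rx, Ry, Rz (radians).

    Assumed convention: rotate about the vehicle x axis, then y, then z.
    """
    cx, sx = math.cos(rx), math.sin(rx)
    cy, sy = math.cos(ry), math.sin(ry)
    cz, sz = math.cos(rz), math.sin(rz)
    Rx = [[1, 0, 0], [0, cx, -sx], [0, sx, cx]]
    Ry = [[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]]
    Rz = [[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]]

    def matmul(a, b):
        return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
                for i in range(3)]

    return matmul(Rz, matmul(Ry, Rx))
```

Together with the installation position (x, y, z), this matrix gives the extrinsic part of the camera parameters that the image processing device 7 needs when generating the transformation data.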
  • the vehicle level in the respective suspension modes is basically factory-preset, but it has turned out that the vehicle level is also influenced by other factors in all of the suspension modes, such as for example by the loading of the motor vehicle 1 or by a coupled trailer. Thereby, the position and the orientation of all of the cameras 3, 4, 5, 6 also vary relative to the ground or to the road. If the transformation data LUT (in particular the respective viewports) remained constant, the respective image presentation 14 would vary depending on the current vehicle level. This problem is exemplified in Fig. 7 and 8 using the "bird's eye view":
  • the partial regions I4', I6' of the images I4, I6 of the cameras 4, 6 are composed and mutually overlap in the overlapping region 15.
  • a correct image presentation 14 is generated, in which the partial regions I4', I6' are correctly composed with each other without discontinuities or double image structures arising in the image presentation 14.
  • In Fig. 7, road markings 16 depicted both in the partial region I4' and in the partial region I6' cover each other and thus are correctly depicted overall in the image presentation 14.
  • the level of the cameras 3, 4, 5, 6 above the ground also changes.
  • an image presentation 14 according to Fig. 8 is generated, in which the road marking 16' depicted in the partial region I4' is no longer in the same position as the road marking 16 of the partial region I6'.
  • double image structures arise in the image presentation 14, which may confuse the driver.
  • the transformation data LUT is generated depending on the measured values of the sensors 13 and depending on the stored camera parameters by means of the image processing device 7 in the operation of the camera system 2.
  • the method starts in a step S1, in which the ignition of the motor vehicle 1 and the prime mover (internal combustion engine or electric motor) are turned off. With the ignition turned off, the measured values of the sensors 13 are continuously received and stored by the image processing device 7 according to step S2. In step S2, thus, the vehicle level and thus the current chassis level of the motor vehicle 1 in the z direction are measured. This is the optimum point in time for acquiring the measured values, since the motor vehicle 1 is usually loaded before the ignition is turned on and thus the final static vehicle level is established.
  • In a further step S3, the driver activates the ignition of the motor vehicle 1 or the prime mover such that the motor vehicle 1 is "started".
  • In step S3, at least the on-board network of the motor vehicle 1 is activated.
  • This is acquired by the image processing device 7.
  • it is checked by the image processing device 7 which one of the suspension modes of the suspension system is currently activated. This information can for example be picked up on the mentioned communication bus.
  • the image processing device 7 generates the transformation data LUT at least for the currently activated operating mode of the display 8. It is first generated for the current suspension mode of the suspension system according to step S4. Therein, the transformation data LUT is generated separately for each camera 3, 4, 5, 6. For generating the transformation data LUT, the above-mentioned camera parameters as well as the previously stored measured values of the sensors 13 are used. If multiple sensors 13 are present, the current position and orientation of the cameras 3, 4, 5, 6 relative to the ground can be calculated based on these measured values and taken into account in generating the transformation data LUT. Optionally, a preset look-up table can be used for generating the transformation data LUT, which represents a template and is factory-stored in the image processing device 7.
  • This preset look-up table can already include the position and orientation of the respective camera 3, 4, 5, 6. If it is then determined by the image processing device 7 that the current position and/or orientation deviate from the stored position and orientation, respectively, thus, the look-up table can be correspondingly corrected depending on the measured values.
  • the generation of the transformation data LUT can therefore include that a look-up table already stored in the image processing device 7 is corrected and/or completed.
  • the transformation data LUT can also be generated for the other, currently not activated suspension modes of the motor vehicle 1. This is possible since the differences in the vehicle level between the different suspension modes are known.
  • it is checked by the image processing device 7 whether or not the driver alters the suspension mode. If this is the case, new transformation data LUT, which has already been generated for the new suspension mode, is applied to the images I3, I4, I5, I6.
  • In a step S6, the image processing device 7 can also check if the driver or another user changes the operating mode of the display 8 and thus the image presentation 14.
  • If this is detected, the image processing device 7 generates new transformation data for the current view (for the current operating mode of the display 8) based on the stored camera parameters and the current measured values of the sensors 13.
  • This new transformation data LUT can be generated for the new view for all of the three suspension modes.
  • the measured values of the sensors 13 can also be continuously acquired during travel.
  • the measured values are acquired and evaluated in predetermined time intervals by the image processing device 7.
  • the transformation data LUT can also be continuously updated and thus dynamically adapted in the off-road suspension mode during travel based on the respectively current measured values of the vehicle level.
  • This is in particular advantageous in uneven terrain because frequent variations of the vehicle level can be quickly compensated for and thus an optimum view can always be displayed on the display 8. This is made possible by acquiring the current vehicle level and the current position and/or orientation of the cameras 3, 4, 5, 6 during travel and dynamically adapting the transformation data LUT.
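The continuous adaptation during off-road travel could be organized as a simple polling loop that rebuilds the look-up tables only when the level has changed noticeably; the threshold value, the polling interval and the function names below are assumptions for illustration, not from the patent:

```python
import time

LEVEL_THRESHOLD_MM = 5.0  # assumed tolerance before a LUT rebuild

def update_loop(read_levels, rebuild_luts, interval_s=0.1, cycles=None):
    """Poll the level sensors and regenerate the LUTs on significant change.

    read_levels: callable returning the current per-wheel levels (mm).
    rebuild_luts: callable invoked with the new levels to regenerate
    the transformation data. cycles=None runs indefinitely.
    """
    last = read_levels()
    n = 0
    while cycles is None or n < cycles:
        time.sleep(interval_s)
        current = read_levels()
        # Rebuild only if any wheel level moved beyond the tolerance.
        if any(abs(c, ) > 0 and abs(c - l) > LEVEL_THRESHOLD_MM
               for c, l in zip(current, last)) or \
           any(abs(c - l) > LEVEL_THRESHOLD_MM for c, l in zip(current, last)):
            rebuild_luts(current)
            last = current
        n += 1
```

Rebuilding only on significant changes keeps the display responsive on rough ground while avoiding a full LUT regeneration on every frame.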

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Mechanical Engineering (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The invention relates to a method for operating a camera system of a motor vehicle by providing an image (I3, I4, I5, I6) of an environmental region of the motor vehicle by means of a camera (3, 4, 5, 6) of the camera system, transforming the image (I3, I4, I5, I6) into an image presentation (14) using transformation data (LUT) by means of an image processing device (7), wherein camera parameters of the camera (3, 4, 5, 6) are taken into account in transforming the image (I3, I4, I5, I6), and displaying the image presentation (14) on a display (8) of the camera system, wherein a current vehicle level of the motor vehicle is acquired by means of at least one sensor of the motor vehicle and the transformation data (LUT) is generated depending on the vehicle level during operation of the camera system.
PCT/EP2014/066354 2013-08-01 2014-07-30 Procédé de génération d'une table de conversion dans l'exploitation d'un système de caméra, système de caméra et véhicule à moteur WO2015014883A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102013012808.0 2013-08-01
DE102013012808.0A DE102013012808B4 (de) 2013-08-01 2013-08-01 Verfahren zum Erzeugen einer Look-Up-Tabelle im Betrieb eines Kamerasystems, Kamerasystem und Kraftfahrzeug

Publications (1)

Publication Number Publication Date
WO2015014883A1 true WO2015014883A1 (fr) 2015-02-05

Family

ID=51300712

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2014/066354 WO2015014883A1 (fr) 2013-08-01 2014-07-30 Procédé de génération d'une table de conversion dans l'exploitation d'un système de caméra, système de caméra et véhicule à moteur

Country Status (2)

Country Link
DE (1) DE102013012808B4 (fr)
WO (1) WO2015014883A1 (fr)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102019000701A1 (de) 2019-01-31 2019-06-13 Daimler Ag Verfahren zur Steuerung eines Kraftfahrzeugs sowie ein Kraftfahrzeug

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1157890A1 (fr) * 2000-05-26 2001-11-28 Matsushita Electric Industrial Co., Ltd. Traitement d'image et système de surveillance
JP2002293196A (ja) * 2001-03-29 2002-10-09 Matsushita Electric Ind Co Ltd 車載カメラの画像表示方法及びその装置
JP2006182108A (ja) * 2004-12-27 2006-07-13 Nissan Motor Co Ltd 車両周辺監視装置
US20090160940A1 (en) * 2007-12-20 2009-06-25 Alpine Electronics, Inc. Image display method and image display apparatus
JP2009253571A (ja) * 2008-04-04 2009-10-29 Clarion Co Ltd 車両用モニタ映像生成装置
EP2348279A1 (fr) * 2008-10-28 2011-07-27 PASCO Corporation Dispositif de mesure de route et procédé de mesure de route

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102009035422B4 (de) 2009-07-31 2021-06-17 Bayerische Motoren Werke Aktiengesellschaft Verfahren zur geometrischen Bildtransformation
TWI417639B (zh) 2009-12-30 2013-12-01 Ind Tech Res Inst 全周鳥瞰影像無縫接合方法與系統
DE102010048143A1 (de) 2010-10-11 2011-07-28 Daimler AG, 70327 Verfahren zur Kalibrierung zumindest einer an einem Fahrzeug angeordneten Kamera
DE102010062589A1 (de) 2010-12-08 2012-06-14 Robert Bosch Gmbh Kamerabasiertes Verfahren zur Abstandsbestimmung bei einem stehenden Fahrzeug

Also Published As

Publication number Publication date
DE102013012808B4 (de) 2023-11-23
DE102013012808A1 (de) 2015-02-05

Similar Documents

Publication Publication Date Title
US8421865B2 (en) Method for calibrating a vehicular camera system
US9516277B2 (en) Full speed lane sensing with a surrounding view system
JP4903194B2 (ja) 車載用カメラユニット、車両外部ディスプレイ方法及びドライビングコリドーマーカー生成システム
KR101579100B1 (ko) 차량용 어라운드뷰 제공 장치 및 이를 구비한 차량
US10609339B2 (en) System for and method of dynamically displaying images on a vehicle electronic display
US9895974B2 (en) Vehicle control apparatus
WO2012145822A1 (fr) Procédé et système pour étalonner de façon dynamique des caméras de véhicule
JP2004354236A (ja) ステレオカメラ支持装置およびステレオカメラ支持方法ならびにステレオカメラシステム
US11917294B2 (en) Techniques to compensate for movement of sensors in a vehicle
US20170297491A1 (en) Image generation device and image generation method
CN107249934B (zh) 无失真显示车辆周边环境的方法和装置
JPWO2015045568A1 (ja) 予測進路提示装置及び予測進路提示方法
US11295704B2 (en) Display control device, display control method, and storage medium capable of performing appropriate luminance adjustment in case where abnormality of illuminance sensor is detected
JP5729110B2 (ja) 画像処理装置及び画像処理方法
CN104842872A (zh) 车载拍摄装置
JP2021013072A (ja) 画像処理装置および画像処理方法
JP5195776B2 (ja) 車両周辺監視装置
WO2015014883A1 (fr) Procédé de génération d'une table de conversion dans l'exploitation d'un système de caméra, système de caméra et véhicule à moteur
JP2020127171A (ja) 周辺監視装置
CN110316066B (zh) 基于车载显示终端的防倒影方法和装置及车辆
JP2021002790A (ja) カメラパラメータ設定装置、カメラパラメータ設定方法、及びカメラパラメータ設定プログラム
JP6855254B2 (ja) 画像処理装置、画像処理システム、及び、画像処理方法
KR101729473B1 (ko) 카메라 공차 보정 장치 및 방법
US11770495B2 (en) Generating virtual images based on captured image data
EP3761262B1 (fr) Dispositif de traitement d'images et procédé de traitement d'images

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14749747

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 14749747

Country of ref document: EP

Kind code of ref document: A1