WO2011082716A1 - Image forming device for a vehicle as well as driver assistance facility with such an image forming device as well as method for forming an overall image - Google Patents

Image forming device for a vehicle as well as driver assistance facility with such an image forming device as well as method for forming an overall image Download PDF

Info

Publication number
WO2011082716A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
signal processor
forming device
camera
brightness
Prior art date
Application number
PCT/EP2010/000049
Other languages
French (fr)
Inventor
Patrick Eoghan Denny
Christopher Gideon Reade
Derek Savage
Myles Friel
Original Assignee
Valeo Schalter Und Sensoren Gmbh
Application Solutions (Electronics And Vision) Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Valeo Schalter Und Sensoren Gmbh, Application Solutions (Electronics And Vision) Ltd filed Critical Valeo Schalter Und Sensoren Gmbh
Priority to EP10707812A priority Critical patent/EP2522126A1/en
Priority to PCT/EP2010/000049 priority patent/WO2011082716A1/en
Publication of WO2011082716A1 publication Critical patent/WO2011082716A1/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/272 Means for inserting a foreground image in a background image, i.e. inlay, outlay
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70 Circuitry for compensating brightness variation in the scene
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70 Circuitry for compensating brightness variation in the scene
    • H04N23/76 Circuitry for compensating brightness variation in the scene by influencing the image signals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/90 Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums

Definitions

  • the invention relates to an image forming device for a vehicle, which has a first and at least a second camera.
  • the cameras are formed for image acquisition of a vehicle environment.
  • the image forming device includes an image storage unit, in which the acquired images of the cameras are stored.
  • the images of the cameras can be combined to an overall image formed as a top view or bird view.
  • the invention relates to a driver assistance facility including a corresponding image forming device, as well as to a vehicle with an image forming device and/or a driver assistance facility.
  • the invention relates to a method for forming an overall image, which is formed from image data of at least two cameras of an image forming device for a vehicle.
  • image forming devices are known, in which images are acquired by means of plural cameras, which can detect the environment of the vehicle to the front and sideward. Then, they are processed in the image forming device such that an overall image is formed, which shows a top view of the vehicle with the direct environment of the vehicle. This is particularly helpful to be able to identify obstacles or the like in the close range of the vehicle and to display them to the driver. In particular for assisting a parking procedure or the like, such information can be very helpful for a driver.
  • an individual image is acquired for each camera and it is then adjusted in its brightness before formation of an overall image. This is effected in the respective image adjusting unit of the respective camera itself and before storage in the image storage inside the camera. Only after this brightness adjustment of the individual images, then, the combination to an overall image is effected.
  • This is disadvantageous in that brightness differences occurring in the overall image are present or can only be adjusted with difficulty, such that the presentation to the observer can only be effected in an insufficient manner. Therefore, optionally, the image presentation of the overall image cannot present required information to the driver, or only in a poorly identifiable manner, such that misinterpretations by the driver can occur, which may result in safety-critical driving maneuvers.
  • an image forming device having the features according to claim 1 , a driver assistance facility having the features according to claim 12, a vehicle having the features according to claim 14 and a method having the features according to claim 15.
  • the two cameras are formed for image acquisition in a vehicle environment of a vehicle.
  • the image forming device includes an image storage unit, in which an acquired image of a camera is stored.
  • the images of the cameras can be combined, wherein the image forming device is designed for forming an overall image from these individual images of the cameras, which presents a complete view of the vehicle and the direct environment of the vehicle.
  • a central signal processor external to the camera is arranged after the image storage unit external to the camera. This signal processor is designed for adjusting the brightness of at least two partial regions or fields of interest of the already combined overall image.
  • a complete view could be top view or bird view for example. With a top view, an image perspective is shown which demonstrates the scene viewed from above. However, a complete view could equally be any projection, such as a subset of a top view, or a composite of a plurality of cameras making, for example, a horizontal 250 degree view behind and to the sides of a vehicle (a "panoramic" view).
  • the image forming device is structured such that each individual camera does not have, and does not require, a brightness adjusting unit by means of which the respective image of the camera would be adjusted in brightness already before formation of the overall image; instead, the combined overall image is adjustable in brightness as a whole.
  • This is preferably completely effected in a specific unit of the image forming device, namely the signal processor following the image storage unit.
  • thus, only one component, namely the signal processor, is required in order to be able to perform these functionalities and procedures. Therefore, compared to the prior art, it is no longer required to provide a plurality of separate hardware components and to perform specific partial image processing steps in the individual components.
  • the entire procedure of the image formation and processing can also be improved such that an overall image can be achieved, which is greatly improved with regard to the image quality.
  • substantial improvement can be achieved thereby.
  • the cameras of the image forming device are usually disposed on different sides of the vehicle and have their detection regions in different directions.
  • with cameras detecting to the side of the vehicle, which can for example be disposed on the side-view mirrors of the vehicle and are oriented with their detection region inclined downwards to the road, often darker images are formed than by the camera detecting to the front and/or the camera detecting rearwards.
  • a relatively uniform brightness of the detected vehicle environment can be represented in the overall image as a top view in particularly advantageous manner.
  • the central signal processor is disposed external to the at least one camera and in particular in an ECU (Electronic Control Unit).
  • the signal processor is designed for internally performing an iterative adjustment loop for adjusting the brightness.
  • the signal processor is designed for adjusting at least one further parameter describing the image quality.
  • an image equalization and/or mapping and/or color balance is mentioned here. They are only exemplary parameters for modifying the image quality, and this is not to be understood as conclusive.
  • a combined overall image is output at the output of the signal processor, which is adjusted in its brightness as a whole.
  • the signal processor is designed for extracting specific pixel data from the overall image data obtained from the image storage unit and representing the overall image before brightness adjustment.
  • By the specific pixel data, a pixel-reduced overall image is formed, which is provided for the brightness adjustment.
  • the signal processor is designed to the effect that it can select, from the entire region detected in the image, a region of primary interest, and thus can make a corresponding specific pixel data selection.
  • the amount of data is reduced on the one hand, and only the pixel data essential for the brightness adjustment are picked.
  • apart from the reduction of the amount of data faster image processing is also allowed, and in particular with regard to the formation of the overall image with adjusted brightness, a faster and more precise process is ensured.
  • since this can also be performed in the signal processor itself, it can be effected in a manner highly reduced in components, and in this connection too, a faster approach can be achieved.
  • the signal processor is designed for comparing the overall brightness of the overall image to the individual images formed by the cameras, and the brightness adjustment can be performed in the signal processor itself depending on the comparison.
  • the individual input images of the cameras are taken into account in order to be able to determine corresponding differences and to perform adjustments with regard to the brightness representation of the overall image.
  • a particularly adjusted approach considering the brightness of the individual images can be allowed in the signal processor.
  • the brightness adjustment is a linear adjustment. This means that pixel data to be modified in its brightness is modified by a specific factor and this factor is taken for all of the pixel data to be modified.
  • the signal processor is designed for identifying the vehicle in the overall image. Moreover, it is formed for suppressing a brightness adjustment of those partial regions of the overall image in which the vehicle is shown. Since it is of minor importance which brightness the vehicle has in the overall image, it is particularly advantageous with regard to a fast brightness adjustment of the overall image and fast data processing that this region is excluded and thus computational effort for an image quality adjustment is not required. Since in such an overall image formed as a complete view, the nearby vehicle environment is of interest in order to be able to identify obstacles therein, the image region with the vehicle is of minor importance with regard to its brightness.
  • the signal processor is designed such that, as a first step in forming the suitable overall image in the entire procedure, a determination of a contrast distribution of the image data, in particular by adjustment of a gamma value, can be performed before the brightness adjustment and after receiving the image data from the image data storage unit.
  • a first input is connected to a first signal path of the camera. It is electrically connected to an image chip of the camera.
  • a microprocessor inside the camera is connected into the first signal path.
  • a transmitting/receiving unit inside the camera can also be connected into this first signal path.
  • the first signal path has a bidirectional bus, in particular an I2C bus, between the microprocessor and the image chip.
  • a camera has two separate inputs.
  • a second input and/or output is connected to a second signal path directly leading to an image chip of the camera.
  • the encoded image data is transmitted through it as video data stream to the central signal processor external to the camera.
  • Verifications of the image data in the registers of the image chip are performed through the data bus between the microprocessor and the image chip. For example, the color components red, green and blue of the acquired images are acquired, manipulated and verified. Furthermore, a white balance of the image chip can be performed.
  • both signal paths of the camera are formed for bidirectional signal transmission, the inputs of the camera are also to be considered as outputs.
  • the ECU having the central signal processor also has a first signal path.
  • a transmitting/receiving unit is also connected into it.
  • the first signal path of the ECU is connected to a host controller, which in turn is connected to the central signal processor.
  • the signal connections are preferably formed for bidirectional signal transmission.
  • the ECU also has a second signal path, which is connected to the second signal path of the camera. It is formed without connection to the host controller and directly connected to the signal processor.
  • the output of the central signal processor is connected to an encoder, in particular an NTSC encoder or a digital interface, like Ethernet, which then provides a correspondingly adjusted overall image at the output of the ECU.
  • a unit is connected into the first signal path of the ECU, which is designed for converting the serial image data stream received from the camera into a parallel image data stream.
  • the central signal processor has a storage, in which the image data is stored. After the stored image data has been combined to an overall image, a required brightness adjustment is then performed in the signal processor. Such a brightness adjustment is thus effected after the image data storage and after the formation of an overall image in the signal processor.
  • the signal processor has a single input storage for the image data. It can also be provided that it has a separate storage for each camera. The storage or storages can also be arranged externally to the signal processor in the ECU.
  • the microprocessor inside the camera is also formed in the first signal path for brightness adjustment of image data. However, such an adjustment is then effected in the camera itself. However, with such an approach, it is not the image data sent through the second signal path to the central signal processor that is modified.
  • rather, by control signals of the central signal processor, the microprocessor inside the camera is acted upon through the first signal path, and a modification of the image data still to be recorded is then performed.
  • an operational parameter of the camera influencing the image quality, for example the exposure time, is modified.
  • In the second signal path of the camera, a unit can be connected, which is designed for converting the parallel image data stream of the image chip of the camera into a serial image data stream.
  • signals for camera control and for diagnostic purposes can be transmitted through the first signal path.
  • Such an architecture is also provided in the at least second camera, wherein here too, a further corresponding architecture with first and second signal paths is then formed in the ECU.
  • the ECU only has one central signal video processor.
  • the signal processor is connected to at least one camera, in particular all of the cameras of the image forming device, by a signal line.
  • the signal processor is designed for controlling a basic brightness adjustment of the respective camera through this signal line.
  • the exposure time of the respective camera can be controlled and individually set through this signal connection. This is particularly advantageous, since the functionality of the signal processor is thus once again extended and the operational parameters of the camera can therefore be directly acted upon under specific environmental conditions, and thus the image data generated by this camera can already be affected in advance. Since the image data generated by the camera can thereby already be provided in improved quality with regard to a desired brightness, the effort for the further brightness adjustment of an overall image generated from it can then be reduced in the signal processor. Thereby, the processing effort in the signal processor is reduced and therefore, the period of time for forming an overall image adjusted with the desired brightness can be reduced.
  • the signal processor is preferably designed for controlling an AEC (Automatic Exposure Control) adjustment. If a part of an image does not have content in it, e.g., if a scene presents input data to a camera that is either below the lowest numerical level or above the highest numerical level possible for the input data, then no amount of brightness changes will recover that data. However, changing the exposure control of the camera can cause data to be recovered. Details in parts of a scene that are too dark can be recovered by increasing the exposure time, and ones that are in a region that is too bright can be recovered by reducing the exposure time. In this way, brightness and exposure at the level of a camera are different.
  • the invention relates to a driver assistance facility including an image forming device according to the invention and an advantageous development thereof.
  • the driver assistance facility can include a parking assistance system.
  • the overall images generated by the image forming device as a complete view, like a top view or panoramic view of the vehicle with the environment of the vehicle, can then be displayed to a driver, and the driver and/or electronic components of an assistance system of the vehicle can simply and securely identify obstacles or the like in the near periphery of the vehicle.
  • Driving the vehicle for example performing a parking procedure, can thereby substantially be assisted.
  • the driver assistance system is designed such that the image forming device includes at least four cameras. They are disposed on the vehicle such that they are formed for complete periphery detection around the vehicle.
  • a camera detects in the front region of the vehicle
  • a further camera detects in the rear region of the vehicle
  • two further cameras each detect to one side of the vehicle. From these arrangements and positions of the cameras, the entire periphery extending around the vehicle can be detected, and from these cameras directed to the front, rearwards and to the sides, an overall image can be formed, which presents a top view. This can be effected with corresponding image processing algorithms.
  • the invention relates to a vehicle with an image forming device according to the invention or an advantageous development thereof, and/or a driver assistance facility according to the invention or an advantageous development thereof.
  • the invention relates to a method for forming an overall image, which is formed from images of at least two cameras of an image forming device in the vehicle.
  • the overall image for example and preferably a top view, is formed of the environment of the vehicle, wherein an image acquired by a camera is stored in a storage unit.
  • a central signal processor is arranged after the image storing unit, by which the brightness of at least partial regions of the combined overall image is adjusted.
  • an iterative adjustment loop is performed inside the signal processor, and thus several passes of the brightness adjustment are performed until a desired overall brightness is achieved.
  • a brightness adjustment of the image data of the overall image is exclusively performed in the signal processor.
  • a comparison of the overall brightness of the overall image to the individual images formed by the cameras is performed in the signal processor, and depending on the comparison, the brightness adjustment of the combined overall image is performed in the signal processor.
  • three different approaches for brightness adjustment of the overall image are performed in the signal processor and combined.
  • these three possibilities of adjustment include the adjustment of the contrast distribution, the adjustment of the brightness and of the color balance as well as the adjustment in the exposure time of at least one camera of the image forming device.
  • the cameras of the image forming device can also be arranged on a vehicle with different orientations; this is understood with regard to the orientation of their detection region relative to the horizontal plane. Some cameras, for example the cameras detecting sideward, can be slightly inclined to the bottom with respect to the horizontal such that they increasingly detect towards the roadway.
  • the camera detecting to the front and/or the camera detecting rearwards are substantially oriented horizontally with their main detection direction.
  • Luma refers to the amount of black-white content in an image and refers to what conventionally is described as brightness.
  • Chroma refers to the hue, the degree to which a color looks like red or green or blue. The invention does not just allow luma values (conventional brightness) to be converged for different camera outputs, but chroma values as well.
  • this is performed in the signal processor by modifying the color content for the interesting partial regions of the overall image, which are also seen and viewed by the user.
  • this can be effected in those partial regions of the overall image, which are contributed by the individual images of the respective cameras, such that corresponding overall partial regions representing an individual image in the overall image are correspondingly modified in this respect.
  • Fig. 1 a top view of an embodiment of a vehicle according to the invention with a driver assistance facility according to the invention
  • Fig. 2 a simplified schematic representation of an overall image formed by an image forming device according to the invention, and
  • Fig. 3 a block diagram representation of partial components of an embodiment of an image forming device according to the invention.
  • in a top view representation, a vehicle 1 is shown, which is a passenger car.
  • the vehicle 1 includes a driver assistance facility 2 having an image forming device 3.
  • the image forming device 3 includes a first camera 4, a second camera 5, a third camera 6 and a fourth camera 7.
  • the first camera 4 is a front camera and formed for detecting the front region in the environment of the vehicle 1. Schematically, the edge regions or lateral boundaries of the detection region are partially drawn.
  • the second camera 5 is formed for detecting the rear region of the vehicle 1 , wherein here too, the boundaries of the detection region are schematically illustrated.
  • the cameras 6 and 7 are for example disposed in the side-view mirrors of the vehicle 1 and formed for detecting the lateral environment of the vehicle 1.
  • the exemplary edges of the detection regions are schematically drawn.
  • by the four cameras 4 to 7, the entire periphery around the vehicle 1 can be detected.
  • the cameras 6 and 7 detecting sideward are disposed inclined to the bottom with their main detection direction with respect to the horizontal plane, which corresponds to the figure plane and a plane parallel thereto. Therefore, they are oriented towards the roadway.
  • the two cameras 4 and 5 are substantially horizontally oriented in their main detection direction.
  • the cameras 4 to 7 are each formed for acquiring images of the corresponding environmental regions and generate image data using an image chip in this connection.
  • This video data or the video data signal is appropriately processed by a data encoder of the image chip such that pixel data are formed.
  • a central, digital signal processor external to the camera is arranged, which is provided with these pixel data of the individual images.
  • in this signal processor, an overall image is then formed from the individual images of the cameras 4 to 7, and further image processing is performed. For this, a brightness consideration of the overall image is performed and the overall image is adjusted in brightness at least in partial regions as needed. Due to the different positions and optionally the different orientations of the cameras 4 to 7, the respectively formed individual images are also formed with different brightnesses.
  • the overall image can thus turn out to exhibit different degrees of brightness, which is optionally disadvantageous for an observer.
  • plural approaches are then combined.
  • first, an adjustment of the contrast distribution in the image data of the overall image is effected, and in this connection, a specific adjustment of a gamma value is effected.
  • BSDF: Bright Sky Darkening Foreground
  • the actual brightness adjustment is performed, wherein a comparison of the individual images formed by the cameras 4 to 7 to the overall image is then preferably made for this, and in case of undesired brightness differences, the overall image is appropriately modified.
  • an adjustment of at least one further parameter describing the image quality is also performed in the signal processor.
  • Luma refers to the amount of black-white content in an image and refers to what conventionally is described as brightness.
  • Chroma refers to the hue, the degree to which a color looks like red or green or blue. The invention does not just allow luma values (conventional brightness) to be converged for different camera outputs, but chroma values as well.
  • the signal processor is also formed such that the overall image quality adjustment of the overall image is performed inside the signal processor.
  • the digital signal processor is connected to each of the cameras 4 to 7, in particular to a respective control unit of the cameras 4 to 7, by a separate direct signal line through a first signal path.
  • the signal processor can control the cameras 4 to 7 and in particular individually adjust their exposure times through these signal lines.
  • the subsequent required processing of the brightness adjustment of the overall image in the signal processor can thereby at least be reduced.
  • an overall image 8 as a top view or bird view formed in the signal processor from the individual images formed by the cameras 4 to 7 is shown. From the image data information of the cameras 4 to 7 detecting sideward and to the front as well as rearwards, an overall image 8 is formed by the image forming device 3, which shows a top view of the vehicle and the near environment of the vehicle 1.
  • the environment 9 in the overall image 8 is illustrated completely around the vehicle 1.
  • obstacles in the close region around the vehicle 1 and thus in the environment 9 of the vehicle 1 can be identified in the overall image 8.
  • This overall image 8 can be displayed to a driver on a display unit in the vehicle. This is a particularly helpful information presentation in specific driving situations, for example in parking into or out of a parking space.
  • the camera 4 includes an image chip, which in turn includes an imager or pixel array 41 and an image data encoder 42. Moreover, the image chip has a signal processing device 43.
  • This image chip is connected to a first input 4c of the camera 4 through a first signal path 4'.
  • into the first signal path 4', a microprocessor 44 inside the camera is connected. It is electrically connected to the signal processing device 43 through a signal connection, which is a bidirectional data bus 4a.
  • furthermore, a transmitting/receiving unit 45 is connected into the first signal path 4', which is a LIN transmitting/receiving unit in the embodiment. Other types of communication buses are possible too. Between the microprocessor 44 and the transmitting/receiving unit 45, there is also provided a bidirectional signal transmission.
  • the camera 4 includes a second signal path 4". It represents a connection between a second input 4d of the camera 4 and the image data encoder 42.
  • the image data of the image chip is transmitted to an electronic control unit (ECU) 10 external to the camera through this second signal path 4" including the signal line 4b.
  • with regard to the connection to the first camera 4, this ECU 10 also has a first signal path 13' electrically connected to the first signal path 4' of the camera 4.
  • a further transmitting/receiving unit 13, which is also a LIN transmitting/receiving unit, is connected into this first signal path 13' of the ECU 10.
  • This transmitting/receiving unit 13 is connected to a host controller 11 through a bidirectional data connection. It is a component of the ECU 10.
  • a second signal path 14' is also formed in the ECU 10, which is electrically connected to the second signal path 4" of the camera 4.
  • a unit 14 is connected in the second signal path 14' of the ECU 10. It is formed for converting the image data of the camera 4, which has been converted into a serial data stream, back into a parallel digital image data stream and for providing it to the central signal processor 12 of the ECU 10.
  • the image forming device 3 only has one such central signal processor 12.
  • a further unit is connected into the second signal path 4" in the camera 4, which is not drawn, which is formed for converting the parallel digital image data stream generated by the image chip into a serial image data stream.
  • the cameras 5 to 7 are formed analogously to the camera 4.
  • image chips are respectively provided, which have imagers or pixel arrays 51, 61 and 71, respectively, image data encoders 52, 62 and 72, and signal processing devices 53, 63 and 73.
  • first signal paths 5', 6' and 7' as well as second signal paths 5", 6" and 7" are provided.
  • Microprocessors 54, 64 and 74 are connected into the first signal paths 5', 6' and 7', which are connected to the signal processing devices 53, 63 and 73 through data busses 5a, 6a and 7a.
  • the first signal paths 5', 6' and 7' also have transmitting/receiving units 55, 65 and 75.
  • the second signal paths 5", 6" and 7" have the signal lines 5b, 6b and 7b, which are electrically connected to the second inputs 5d, 6d and 7d.
  • the first signal paths 5', 6' and 7' are electrically connected to the first inputs 5c, 6c and 7c of the cameras 5 to 7.
  • first signal paths 15', 17' and 19' are also formed in the ECU 10, which are connected to the host controller 11 and are formed for bidirectional communication in this respect.
  • ECU-side transmitting/receiving units 15, 17 and 19 are connected into the first signal paths 15', 17' and 19', which are also formed as LIN transmitting/receiving units.
  • an ECU-side second signal path 16', 18' and 20' is formed associated with each further camera 5 to 7, which is each connected to the central signal processor 12.
  • units 16, 18 or 20 are each connected into these second ECU-side signal paths 16', 18' and 20', which are formed for converting the serial image data stream transmitted from the cameras 5 to 7 back into a parallel image data stream.
  • the operation of the image forming device 3 with regard to the representation in fig. 3 has already been explained above. It is exemplarily briefly summarized once again by way of the camera 4.
  • the camera 4 acquires the data of the environment via the imager 41, which is processed in the data encoder unit 43 and is provided as pixel data in the form of video data. It is directly transmitted into a storage 12a in the ECU 10 through the signal connection 4b.
  • the storage 12a can be disposed in the signal processor 12.
  • the corresponding is effected with the image data of the cameras 5 to 7.
  • an overall image representing a top view of the vehicle 1 and the environment 9 is then formed from the image data of the individual images in the signal processor 12.
  • a contrast distribution is adjusted by adjusting a gamma value of the received image data or pixel data. This is in particular effected if the image is a so called BSDF shot and has a very bright background and a very dark foreground. This is in particular identified based on threshold values for the number of bright and dark pixels in the image.
  • the further image processing is effected with regard to the mapping as well as the brightness adjustment.
  • the formed overall image is compared to the image information of the individual images in the signal processor 12, and depending on the comparison, a brightness adjustment of at least partial regions of the overall image is performed. The corresponding is effected with the color balance.
  • Such an adjustment of the image quality of at least the brightness and preferably also an equalization and/or a color balance is performed in the signal processor 12 in iterative manner until the desired image quality of the overall image is achieved.
  • the result of the image quality adjustment in the signal processor 12 in this respect is then the overall image according to fig. 2.
  • brightness differences which may occur upon combination of the individual images from the cameras 4 to 7 to a corresponding overall image, are then balanced.
  • the entire process of adjustment of the image quality is performed in the signal processor 12.
  • iterative quality adjustment loops, and thus a modification several times, for example of the brightness, are also performed in the signal processor 12 itself. Further components with regard to this image quality adjustment, or even a loop external to the signal processor, are not required.
  • the signal processor 12 can have an internal further image data storage, in which then the final overall image 8 schematically shown in fig. 2 and adjusted in image quality is stored.
  • a further encoder unit 21 is provided, which follows the signal processor 12 and which then preferably provides the overall image at the one output 10a of the ECU 10. This is then further transmitted for the further presentation on a display unit. It can also be provided that this image data storage for storing the finally completed overall image 8 is arranged externally to the signal processor 12 and is arranged subsequent to the signal processor 12 in the signal processing chain.
  • in the signal processor 12, after the primary basic formation of the overall image from the individual images and before performing the brightness adjustment, fields of interest in the form of specific pixel data are selected, which are then adjusted in brightness afterwards.
  • these specific fields of interest represent a pixel-reduced overall image, which is then processed with regard to the image quality afterwards.
  • those fields of interest in the individual images of the cameras 4 to 7, which show the vehicle 1 are identified by the signal processor 12 and are not taken into account with regard to the further brightness adjustment of the overall image. This means that the image information concerning the vehicle 1 in the overall image is not modified in brightness.
  • the specific pixel data extracted from the entire amount of the pixel data is modified in its brightness by a linear adjustment. This means, in a preferred embodiment, that they are modified by a fixed factor, whereas the resulting output value depends on the input value.
  • the linear adjustment is subject to the limits and the discretization of the video output range. For example, if the numerical range is integers from 0 to 255 and an input value of 200 should be multiplied by a factor of 1.3, then it is limited to an output value of 255 instead of 260. Similarly, if an input value is 21 and this value should be multiplied by a factor of 1.3, then the output will not be 27.3 but will be rounded to the nearest integer, 27. The values are multiplied by a linear factor in order to preserve relative contrast, but the outputs may be clipped or rounded as described above. For that reason, it is probably best to describe it as a quasi-linear operation. To avoid clipping, some changes could be done in the gamma region, for example.

Abstract

The invention relates to an image forming device for a vehicle (1), including a first (4 to 7) and at least a second camera (4 to 7), which are each designed for image acquisition of a vehicle environment, and an image storage unit (12a), in which an acquired image of a camera (4 to 7) is stored, wherein the images of the cameras (4 to 7) can be combined to an overall image (8) formed as a top view, wherein in the signal processing chain, after the image storage unit (12a) a signal processor (10) is arranged, which is designed for adjusting the brightness of at least partial regions of the combined overall image (8). The invention also relates to a driver assistance facility (2) with an image forming device (3), as well as a vehicle (1) with corresponding facilities and/or devices. Furthermore, the invention relates to a method for forming an overall image (8) from individual images of cameras (4 to 7), which are disposed on a vehicle (1).

Description

Image forming device for a vehicle as well as driver assistance facility with such an image forming device as well as method for forming an overall image
The invention relates to an image forming device for a vehicle, which has a first and at least a second camera. The cameras are formed for image acquisition of a vehicle environment. Moreover, the image forming device includes an image storage unit, in which the acquired images of the cameras are stored. The images of the cameras can be combined to an overall image formed as a top view or bird view. Furthermore, the invention relates to a driver assistance facility including a corresponding image forming device, as well as to a vehicle with an image forming device and/or a driver assistance facility. Furthermore, the invention relates to a method for forming an overall image, which is formed from image data of at least two cameras of an image forming device for a vehicle.
For modern vehicles, image forming devices are known, in which images are acquired by means of plural cameras, which can detect the environment of the vehicle to the front and sideward. Then, they are processed in the image forming device such that an overall image is formed, which shows a top view of the vehicle with the direct environment of the vehicle. This is particularly helpful to be able to identify obstacles or the like in the close range of the vehicle and to display them to the driver. In particular for assisting a parking procedure or the like, such information can be very helpful for a driver.
From EP 1 718 064 B1, such an image forming device is known. Therein, a signal processing chain is provided, in which, starting with the image acquisition unit of the camera, a data encoder and subsequently a brightness adjusting device are arranged. Following this brightness adjusting device, an input frame is then connected into the signal processing chain, wherein a rearranging device is subsequently arranged. Moreover, a signal processing loop is generated, which is directed from this rearranging device to the brightness adjusting device and includes a control device. In this known image forming device, thus, a plurality of components are required in order to be able to perform a modification of the image data. Thus, in this known device, first, an individual image is acquired for each camera and is then adjusted in its brightness before formation of an overall image. This is effected in the respective image adjusting unit of the respective camera itself and before storage in the image storage inside the camera. Only after this brightness adjustment of the individual images is the combination to an overall image effected. This is disadvantageous in that brightness differences occurring in the overall image are present or can only be adjusted with difficulty, such that the presentation to the observer can only be effected in an insufficient manner. Therefore, optionally, the image presentation of the overall image cannot present required information to the driver, or only in a poorly identifiable manner, such that misinterpretations by the driver can occur, which may result in safety-critical driving maneuvers.
Moreover, from US 7,139,412 B2, an image forming device is known, in which, however, the overall images composed of individual images of individual cameras can only be modified in the overlapping regions of the individual images and thus at the interfaces of the adjoining partial images. This is also only a very insufficient possibility of adjusting an overall image and results in image quality disadvantages with regard to a brightness adjustment that cannot be corrected at other locations. The difficulty resulting from it is analogous to that described above.
It is the object of the present invention to provide an image forming device for a vehicle as well as a driver assistance facility with such an image forming device as well as a vehicle with a corresponding device as well as a method for forming an overall image of a vehicle environment, in which the overall image composed of plural partial images has a higher image quality.
This object is solved by an image forming device having the features according to claim 1 , a driver assistance facility having the features according to claim 12, a vehicle having the features according to claim 14 and a method having the features according to claim 15.
An image forming device according to the invention for a vehicle includes a first camera and at least a second camera. The two cameras are formed for image acquisition in a vehicle environment of a vehicle. Moreover, the image forming device includes an image storage unit, in which an acquired image of a camera is stored. The images of the cameras can be combined, wherein the image forming device is designed for forming an overall image from these individual images of the cameras, which presents a complete view of the vehicle and the direct environment of the vehicle. In the signal processing chain formed by the individual components of the image forming device, a central signal processor external to the camera is arranged after the image storage unit external to the camera. This signal processor is designed for adjusting the brightness of at least two partial regions or fields of interest of the already combined overall image.
A complete view could be top view or bird view for example. With a top view, an image perspective is shown which demonstrates the scene viewed from above. However, a complete view could equally be any projection, such as a subset of a top view, or a composite of a plurality of cameras making, for example, a horizontal 250 degree view behind and to the sides of a vehicle (a "panoramic" view).
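By way of illustration only, the following minimal Python sketch (not taken from the patent) composes such an overall top-view image by pasting four camera views, assumed to be already warped into the common ground plane, around a placeholder region for the vehicle; all array sizes, region bounds and function names are illustrative assumptions, and a real system would use calibrated projections instead.

    # Minimal sketch: paste four already-warped camera views into one overall top view.
    import numpy as np

    H, W = 480, 480                                  # assumed size of the overall image
    VEHICLE = (slice(180, 300), slice(200, 280))     # assumed region showing the vehicle itself

    def compose_top_view(front, rear, left, right):
        overall = np.zeros((H, W, 3), dtype=np.uint8)
        overall[:180, :, :] = front                  # front camera: upper band
        overall[300:, :, :] = rear                   # rear camera: lower band
        overall[180:300, :200, :] = left             # left side camera
        overall[180:300, 280:, :] = right            # right side camera
        overall[VEHICLE] = 128                       # grey placeholder where the vehicle is shown
        return overall

    # usage with dummy data of the expected shapes
    front = np.random.randint(0, 256, (180, W, 3), dtype=np.uint8)
    rear  = np.random.randint(0, 256, (180, W, 3), dtype=np.uint8)
    left  = np.random.randint(0, 256, (120, 200, 3), dtype=np.uint8)
    right = np.random.randint(0, 256, (120, 200, 3), dtype=np.uint8)
    overall = compose_top_view(front, rear, left, right)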
Thus, the image forming device is structured such that each individual camera does not have, and does not require, a brightness adjusting unit by means of which the respective image of the camera would be adjusted in brightness already before formation of the overall image; instead, the combined overall image is adjustable in brightness as a whole. This is preferably completely effected in a specific unit of the image forming device, namely the signal processor following the image storage unit. Thus, in this connection, only one component, namely the signal processor, is required in order to be able to perform these functionalities and procedures. Therefore, compared to the prior art, it is no longer required to provide a plurality of separate hardware components and to perform specific partial image processing steps in the individual components.
It is possible, for example by using known knowledge or by other means, to change the brightness or some characteristics of the image being captured by a camera in order to maximize the information or contrast in a particular subregion of interest.
It is possible to choose the brightness differently from camera to camera, but statically so, i.e., the respective cameras have a specific brightness bias programmed into them. Alternatively, all cameras could be assigned the same brightness.
Apart from economy of components, the entire procedure of the image formation and processing can thus also be improved such that an overall image can be achieved, which is greatly improved with regard to the image quality. In particular with regard to disturbing brightness differences in the overall image due to the combination of individual images of different cameras, substantial improvement can be achieved thereby. This is particularly advantageous in use in a vehicle, since there, the cameras of the image forming device are usually disposed on different sides of the vehicle and have their detection regions in different directions. Thus, with cameras detecting to the side of the vehicle, which can for example be disposed on the side-view mirrors of the vehicle and are oriented with their detection region inclined downwards to the road, often darker images are formed than by the camera detecting to the front and/or the camera detecting rearwards.
Thus, a relatively uniform brightness of the detected vehicle environment can be represented in the overall image as a top view in particularly advantageous manner.
Thereby, nearby obstacles in the environment of the vehicle can be reliably, securely and correctly identified by a driver in the overall image. Thereby, the user friendliness is increased and the secure operation of the vehicle is supported. Preferably, the central signal processor is disposed external to the at least one camera and in particular in an ECU (Electronic Control Unit).
Preferably, the signal processor is designed for internally performing an iterative adjustment loop for adjusting the brightness. Thus, if a brightness adjustment of the overall image is required several times, this is performed in the signal processor itself. Therefore, external loops are no longer required here, as is the case in the prior art. Thereby, too, components are saved. Not least, in this connection it is also pointed out that these iterative multiple brightness adjustment passes are each made on the overall image.
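A minimal sketch of such an internal iterative loop, assuming the overall image is an 8-bit array and that "brightness" is approximated by the mean pixel value; the target value, tolerance and function name are illustrative assumptions, not taken from the patent.

    import numpy as np

    def adjust_brightness_iteratively(overall, target=120.0, tol=2.0, max_passes=8):
        # Repeat brightness passes on the combined overall image until the mean
        # brightness is within `tol` of `target` or `max_passes` is reached.
        img = overall.astype(np.float32)
        for _ in range(max_passes):
            mean = img.mean()
            if abs(mean - target) <= tol:
                break
            factor = target / max(mean, 1e-6)          # one linear brightness pass
            img = np.clip(img * factor, 0, 255)        # stay within the output range
        return img.round().astype(np.uint8)

    overall = np.random.randint(0, 256, (480, 480, 3), dtype=np.uint8)
    adjusted = adjust_brightness_iteratively(overall)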
Preferably, the signal processor is designed for adjusting at least one further parameter describing the image quality. Preferably, an image equalization and/or mapping and/or color balance is mentioned here. They are only exemplary parameters for modifying the image quality, and this is not to be understood as conclusive.
Preferably, a combined overall image is output at the output of the signal processor, which is adjusted in its brightness as a whole.
Preferably, the signal processor is designed for extracting specific pixel data from the overall image data obtained from the image storage unit and representing the overall image before brightness adjustment. By the specific pixel data, a pixel-reduced overall image is formed, which is provided for the brightness adjustment. Thus, the signal processor is designed to the effect that it can select, from the entire region detected in the image, a region of primary interest, and thus can make a corresponding specific pixel data selection. By such a configuration, the amount of data is reduced on the one hand, and only the pixel data essential for the brightness adjustment are picked. By such a configuration, apart from the reduction of the amount of data, faster image processing is also allowed, and in particular with regard to the formation of the overall image with adjusted brightness, a faster and more precise process is ensured.
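A minimal sketch of forming such a pixel-reduced overall image from selected fields of interest; the chosen rectangles and function names are placeholders for whatever regions of primary interest the signal processor would actually select.

    import numpy as np

    def extract_fields_of_interest(overall, fields):
        # Return only the pixel data of the given fields (pairs of row/column slices).
        return [overall[rs, cs] for rs, cs in fields]

    def mean_brightness_of_fields(fields_data):
        # Brightness statistic computed on the reduced pixel data only.
        return float(np.mean([f.mean() for f in fields_data]))

    overall = np.random.randint(0, 256, (480, 480, 3), dtype=np.uint8)
    fields = [(slice(0, 180), slice(0, 480)),      # e.g. region ahead of the vehicle
              (slice(300, 480), slice(0, 480))]    # e.g. region behind the vehicle
    reduced = extract_fields_of_interest(overall, fields)
    print(mean_brightness_of_fields(reduced))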
Since this can also be performed in the signal processor itself, it can be effected in a manner highly reduced in components, and in this connection too, a faster approach can be achieved.
Preferably, the signal processor is designed for comparing the overall brightness of the overall image to the individual images formed by the cameras, and the brightness adjustment can be performed in the signal processor itself depending on the comparison. In this advantageous implementation, the individual input images of the cameras are thus taken into account in order to be able to determine corresponding differences and to perform adjustments with regard to the brightness representation of the overall image. Thereby, a particularly adjusted approach considering the brightness of the individual images can be allowed in the signal processor.
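A minimal sketch of such a comparison-based adjustment: the mean brightness of each camera's contribution to the overall image is compared to the mean of the whole image, and that camera's region is rescaled accordingly; the region bounds and names are illustrative assumptions.

    import numpy as np

    def equalize_camera_regions(overall, regions):
        # regions: dict mapping a camera name to the (row_slice, col_slice) it contributes.
        img = overall.astype(np.float32)
        global_mean = img.mean()
        for rs, cs in regions.values():
            region_mean = img[rs, cs].mean()
            factor = global_mean / max(region_mean, 1e-6)
            img[rs, cs] = np.clip(img[rs, cs] * factor, 0, 255)
        return img.round().astype(np.uint8)

    overall = np.random.randint(0, 256, (480, 480, 3), dtype=np.uint8)
    regions = {"front": (slice(0, 180), slice(0, 480)),
               "rear":  (slice(300, 480), slice(0, 480)),
               "left":  (slice(180, 300), slice(0, 200)),
               "right": (slice(180, 300), slice(280, 480))}
    balanced = equalize_camera_regions(overall, regions)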
Preferably, the brightness adjustment is a linear adjustment. This means that pixel data to be modified in its brightness is modified by a specific factor and this factor is taken for all of the pixel data to be modified.
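A minimal sketch of this adjustment, following the quasi-linear behaviour described for the embodiment further below (multiplication by one factor for all selected pixel data, with clipping to the 0 to 255 output range and rounding to the nearest integer); the function name is an assumption.

    import numpy as np

    def quasi_linear_adjust(pixels, factor):
        # Multiply all selected pixel values by the same factor, then clip and round.
        scaled = pixels.astype(np.float32) * factor
        return np.clip(np.round(scaled), 0, 255).astype(np.uint8)

    values = np.array([200, 21], dtype=np.uint8)
    print(quasi_linear_adjust(values, 1.3))   # -> [255 27], matching the worked example in the text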
Particularly preferably, it is provided that the signal processor is designed for identifying the vehicle in the overall image. Moreover, it is formed for suppressing a brightness adjustment of those partial regions of the overall image in which the vehicle is shown. Since it is of minor importance which brightness the vehicle has in the overall image, it is particularly advantageous with regard to a fast brightness adjustment of the overall image and fast data processing that this region is excluded and thus computational effort for an image quality adjustment is not required. Since in such an overall image formed as a complete view, the nearby vehicle environment is of interest in order to be able to identify obstacles therein, the image region with the vehicle is of minor importance with regard to its brightness.
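A minimal sketch of excluding the vehicle region from the adjustment: a boolean mask marks where the vehicle is shown, and only the remaining pixels are modified; the vehicle rectangle and the factor are illustrative assumptions.

    import numpy as np

    def adjust_outside_vehicle(overall, vehicle_mask, factor):
        # Apply the brightness factor only where the mask does not mark the vehicle.
        img = overall.astype(np.float32)
        img[~vehicle_mask] = np.clip(img[~vehicle_mask] * factor, 0, 255)
        return img.round().astype(np.uint8)

    overall = np.random.randint(0, 256, (480, 480, 3), dtype=np.uint8)
    vehicle_mask = np.zeros(overall.shape, dtype=bool)
    vehicle_mask[180:300, 200:280, :] = True          # assumed region showing the vehicle itself
    adjusted = adjust_outside_vehicle(overall, vehicle_mask, 1.1)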
Preferably, the signal processor is designed such that, as a first step in forming the suitable overall image in the entire procedure, a determination of a contrast distribution of the image data, in particular by adjustment of a gamma value, can be performed before the brightness adjustment and after receiving the image data from the image data storage unit.
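A minimal sketch of this first step, assuming a bright-sky/dark-foreground situation is detected from the share of very bright and very dark pixels (as described for the embodiment further below) and then corrected by a gamma adjustment; all thresholds and the gamma value are illustrative assumptions.

    import numpy as np

    def gamma_adjust_if_bsdf(overall, dark_thr=40, bright_thr=215, share_thr=0.25, gamma=0.7):
        img = overall.astype(np.float32)
        dark_share = np.mean(img < dark_thr)            # fraction of very dark pixels
        bright_share = np.mean(img > bright_thr)        # fraction of very bright pixels
        if dark_share > share_thr and bright_share > share_thr:   # BSDF-like scene
            img = 255.0 * (img / 255.0) ** gamma                  # lift the dark regions
        return img.round().astype(np.uint8)

    overall = np.random.randint(0, 256, (480, 480, 3), dtype=np.uint8)
    corrected = gamma_adjust_if_bsdf(overall)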
In particular, a first input is connected to a first signal path of the camera. It is electrically connected to an image chip of the camera. A microprocessor inside the camera is connected into the first signal path. A transmitting/receiving unit inside the camera can also be connected into this first signal path. In particular, the first signal path has a bidirectional bus, in particular an I2C bus, between the microprocessor and the image chip. Preferably, a camera has two separate inputs. Thus, a second input and/or output is connected to a second signal path directly leading to an image chip of the camera. The encoded image data is transmitted through it as video data stream to the central signal processor external to the camera. Verifications of the image data in the registers of the image chip are performed through the data bus between the microprocessor and the image chip. For example, the color components red, green and blue of the acquired images are acquired, manipulated and verified. Furthermore, a white balance of the image chip can be performed.
Since, preferably, both signal paths of the camera are formed for bidirectional signal transmission, the inputs of the camera are also to be considered as outputs.
Preferably, the ECU having the central signal processor also has a first signal path. A transmitting/receiving unit is also connected into it. The first signal path of the ECU is connected to a host controller, which in turn is connected to the central signal processor. Here too, the signal connections are preferably formed for bidirectional signal transmission.
In particular, the ECU also has a second signal path, which is connected to the second signal path of the camera. It is formed without connection to the host controller and directly connected to the signal processor. In particular, the output of the central signal processor is connected to an encoder, in particular an NTSC encoder or a digital interface, like Ethernet, which then provides a correspondingly adjusted overall image at the output of the ECU.
Preferably, a unit is connected into the first signal path of the ECU, which is designed for converting the serial image data stream received from the camera into a parallel image data stream. Preferably, the central signal processor has a storage, in which the image data is stored. After the stored image data has been combined to an overall image, a required brightness adjustment is then performed in the signal processor. Such a brightness adjustment is thus effected after the image data storage and after the formation of an overall image in the signal processor.
It can be provided that the signal processor has a single input storage for the image data. It can also be provided that it has a separate storage for each camera. The storage or storages can also be arranged externally to the signal processor in the ECU.
It can be provided that the microprocessor inside the camera is also formed in the first signal path for brightness adjustment of image data. However, such an adjustment is then effected in the camera itself. However, with such an approach, it is not the image data sent through the second signal path to the central signal processor that is modified.
Rather, here, by control signals of the central signal processor, the microprocessor inside the camera is acted upon through the first signal path, and a modification of the image data still to be recorded is then performed. In particular, herein, an operational parameter of the camera influencing the image quality, for example the exposure time, is modified. With such an approach, thus, the image data stream transmitted through the second signal path of the camera is not already adjusted in its brightness in the camera, and it is not this transmitted image data that is modified through the first signal path; only the image data still to be recorded is influenced.
In the second signal path of the camera, a unit can be connected, which is designed for converting the parallel image data stream of the image chip of the camera into a serial image data stream.
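The conversion itself is a hardware function of the camera and the ECU; purely to illustrate the idea, the following sketch flattens a "parallel" frame of pixel words into a serial byte stream and rebuilds it on the receiving side. The formats are assumptions, not the patent's actual line coding.

    import numpy as np

    def to_serial(parallel_frame):
        # Flatten a (rows, cols, 3) frame into a serial byte stream for the signal line.
        return parallel_frame.astype(np.uint8).tobytes()

    def to_parallel(serial_stream, shape):
        # Rebuild the parallel frame from the serial stream on the ECU side.
        return np.frombuffer(serial_stream, dtype=np.uint8).reshape(shape)

    frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
    assert np.array_equal(to_parallel(to_serial(frame), frame.shape), frame)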
Preferably, signals for camera control and for diagnostic purposes can be transmitted through the first signal path.
Such an architecture is also provided in the at least second camera, wherein here too, a further corresponding architecture with first and second signal paths is then formed in the ECU. However, the ECU only has one central signal video processor.
Preferably, the signal processor is connected to at least one camera, in particular to all of the cameras of the image forming device, by a signal line. The signal processor is designed for controlling a basic brightness adjustment of the respective camera through this signal line. In particular, the exposure time of the respective camera can be controlled and individually set through this signal connection. This is particularly advantageous, since the functionality of the signal processor is thus once again extended and the operational parameters of the camera can therefore be directly acted upon under specific environmental conditions, and thus the image data generated by this camera can already be affected in advance. Since the image data generated by the camera can thereby already be provided in improved quality with regard to a desired brightness, the effort for the further brightness adjustment of an overall image generated from it can then be reduced in the signal processor. Thereby, the processing effort in the signal processor is reduced and therefore, the period of time for forming an overall image adjusted with the desired brightness can be reduced.
The signal processor is preferably designed for controlling an AEC (Automatic Exposure Control) adjustment. If a part of an image does not have content in it, e.g., if a scene presents input data to a camera that is either below the lowest numerical level or above the highest numerical level possible for the input data, then no amount of brightness changes will recover that data. However, changing the exposure control of the camera can cause data to be recovered. Details in parts of a scene that are too dark can be recovered by increasing the exposure time, and ones that are in a region that is too bright can be recovered by reducing the exposure time. In this way, brightness and exposure at the level of a camera are different.
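A minimal sketch of such an exposure decision: pixel data that is already clipped at the bottom or top of the numerical range cannot be recovered by brightness changes, but a longer or shorter exposure time can recover it; the thresholds, step size and function name are illustrative assumptions.

    import numpy as np

    def recommend_exposure_change(frame, clip_share=0.05):
        # Return a relative exposure-time change for the camera that produced `frame`.
        dark_clipped = np.mean(frame == 0)            # share of pixels clipped to black
        bright_clipped = np.mean(frame == 255)        # share of pixels clipped to white
        if dark_clipped > clip_share:
            return +0.25    # increase exposure time to recover dark details
        if bright_clipped > clip_share:
            return -0.25    # reduce exposure time to recover bright details
        return 0.0          # leave the camera's exposure unchanged

    frame = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
    print(recommend_exposure_change(frame))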
Furthermore, the invention relates to a driver assistance facility including an image forming device according to the invention or an advantageous development thereof. For example, the driver assistance facility can include a parking assistance system. The overall images generated by the image forming device as a complete view, such as a top view or panoramic view of the vehicle with the environment of the vehicle, can then be displayed to a driver, and the driver and/or electronic components of an assistance system of the vehicle can simply and securely identify obstacles or the like in the near periphery of the vehicle. Driving the vehicle, for example performing a parking procedure, can thereby be substantially assisted.
Preferably, the driver assistance facility is designed such that the image forming device includes at least four cameras. They are disposed on the vehicle such that they are formed for complete periphery detection around the vehicle. In particular, it is provided in this connection that one camera detects the front region of the vehicle, a further camera detects the rear region of the vehicle, and two further cameras each detect to one side of the vehicle. With these arrangements and positions of the cameras, the entire periphery extending around the vehicle can be detected, and from these cameras directed to the front, rearwards and to the sides, an overall image can then be formed, which presents a top view. This can be effected with corresponding image processing algorithms.
Furthermore, the invention relates to a vehicle with an image forming device according to the invention or an advantageous development thereof, and/or a driver assistance facility according to the invention or an advantageous development thereof.
Advantageous developments of the image forming device according to the invention are to be considered as advantageous developments of the driver assistance facility according to the invention.
Furthermore, the invention relates to a method for forming an overall image, which is formed from images of at least two cameras of an image forming device in the vehicle. The overall image, for example and preferably a top view, is formed of the environment of the vehicle, wherein an image acquired by a camera is stored in an image storage unit. In the signal processing chain, a central signal processor is arranged after the image storage unit, by which the brightness of at least partial regions of the combined overall image is adjusted.
Preferably, for adjusting the brightness of the overall image, an iterative adjustment loop is performed inside the signal processor, so that several passes of the brightness adjustment are performed until a desired overall brightness is achieved. In particular, the brightness adjustment of the image data of the overall image is performed exclusively in the signal processor. In particular, a comparison of the overall brightness of the overall image to the individual images formed by the cameras is performed in the signal processor, and depending on the comparison, the brightness adjustment of the combined overall image is performed in the signal processor.
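A minimal Python sketch of such an iterative loop, assuming the target brightness is derived from the mean brightness of the individual camera images; the tolerance, the per-pass gain rule and the pass limit are illustrative assumptions.

```python
import numpy as np

def adjust_overall_brightness(overall, individual_images,
                              tolerance=2.0, max_passes=10):
    """overall: combined image as array; individual_images: list of arrays."""
    overall = overall.astype(np.float32)
    # Desired overall brightness derived from a comparison with the individual images.
    target = np.mean([img.mean() for img in individual_images])
    for _ in range(max_passes):
        error = target - overall.mean()
        if abs(error) < tolerance:
            break
        # Partial correction per pass; clipping keeps the 8-bit output range.
        gain = 1.0 + 0.5 * error / max(overall.mean(), 1.0)
        overall = np.clip(overall * gain, 0, 255)
    return np.round(overall).astype(np.uint8)

# Example with synthetic data: four differently bright camera images and a
# combined image that came out too dark.
cams = [np.full((10, 10), v, dtype=np.uint8) for v in (60, 90, 120, 150)]
combined = (np.vstack(cams) * 0.8).astype(np.uint8)
print(adjust_overall_brightness(combined, cams).mean())   # converges towards ~105
```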
Advantageous developments of the invention according to the image forming device are to be considered as advantageous developments of the method according to the invention.
Preferably, three different approaches for brightness adjustment of the overall image are performed in the signal processor and combined. Preferably, these three possibilities of adjustment include the adjustment of the contrast distribution, the adjustment of the brightness and of the color balance, as well as the adjustment of the exposure time of at least one camera of the image forming device. The cameras of the image forming device can be arranged on the vehicle with different orientations of their detection regions relative to the horizontal plane: some cameras, for example the cameras detecting sideward, can be slightly inclined downwards with respect to the horizontal such that they increasingly detect towards the roadway. In distinction from this, it can preferably be provided that the camera detecting to the front and/or the camera detecting rearwards are substantially oriented horizontally with their main detection direction. Due to these different orientations of the cameras, different image qualities with regard to the brightness can per se occur in the individual cameras. This is then also reflected in the overall image, which is then adjusted in brightness as a whole according to the invention. This means that, first, the overall image is combined and generated, and then specific regions are adjusted in brightness such that an overall image arises which conveys a uniform degree of brightness.
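A schematic Python sketch of how these three approaches could be combined in one processing step; the individual routines (a gamma placeholder, a mean-based brightness pull, and an exposure-proposal rule) are deliberately simplified assumptions and merely stand in for the processing performed in the signal processor.

```python
import numpy as np

def adjust_contrast_gamma(img, gamma=0.9):
    # Contrast distribution via a gamma curve (placeholder for the BSDF handling).
    return np.clip((img / 255.0) ** gamma * 255.0, 0, 255)

def adjust_brightness_colour(img, individual_images):
    # Pull the combined image towards the mean level of the individual images.
    target = np.mean([i.mean() for i in individual_images])
    return np.clip(img * (target / max(img.mean(), 1.0)), 0, 255)

def propose_exposures(individual_images, base_us=10000):
    # Illustrative rule: darker individual images get a longer exposure request.
    return [(idx, int(base_us * 128.0 / max(i.mean(), 1.0)))
            for idx, i in enumerate(individual_images)]

def harmonise(overall, individual_images):
    overall = adjust_contrast_gamma(overall.astype(np.float32))
    overall = adjust_brightness_colour(overall, individual_images)
    return np.round(overall).astype(np.uint8), propose_exposures(individual_images)

cams = [np.full((8, 8), v, dtype=np.uint8) for v in (70, 100, 130, 160)]
overall, exposure_requests = harmonise(np.vstack(cams), cams)
print(overall.mean(), exposure_requests)
```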
The term color can be expressed in terms of luma and chroma. Luma refers to the amount of black-white content in an image and corresponds to what is conventionally described as brightness. Chroma refers to the hue, the degree to which a color looks like red or green or blue. The invention does not only allow luma values for different camera outputs (conventional brightness) to converge, but chroma values as well.
With regard to the modification of the brightness and of the color balance, this is performed in the signal processor by modifying the color content for the partial regions of interest of the overall image, which are actually seen and viewed by the user. In particular, this can be effected in those partial regions of the overall image which are contributed by the individual images of the respective cameras, such that the corresponding partial regions representing an individual image in the overall image are modified in this respect.
For example, if one partial region of the overall image appears bluish and dark, and another partial region, which is provided by another camera as an individual image, appears reddish and brighter, the more bluish region is adjusted to be more reddish and the reddish region is adjusted to be more bluish.
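An illustrative Python sketch of such a mutual adjustment of two partial regions. Working directly on the per-channel means moves both the luma and the chroma means of the regions towards each other; the adjustment strength of 0.5 is an assumption.

```python
import numpy as np

def converge_regions(region_a, region_b, strength=0.5):
    """Pull the per-channel mean colours of two RGB regions towards a common value."""
    a = region_a.astype(np.float32)
    b = region_b.astype(np.float32)
    common = 0.5 * (a.mean(axis=(0, 1)) + b.mean(axis=(0, 1)))
    a += strength * (common - a.mean(axis=(0, 1)))   # bluish region becomes more reddish
    b += strength * (common - b.mean(axis=(0, 1)))   # reddish region becomes more bluish
    return (np.clip(a, 0, 255).astype(np.uint8),
            np.clip(b, 0, 255).astype(np.uint8))

# Example: a bluish, dark tile and a reddish, brighter tile.
bluish = np.tile(np.array([40, 50, 90], dtype=np.uint8), (4, 4, 1))
reddish = np.tile(np.array([160, 110, 100], dtype=np.uint8), (4, 4, 1))
a, b = converge_regions(bluish, reddish)
print(a[0, 0], b[0, 0])   # both tiles are now closer to a common mean colour
```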
By the modification and active control of the exposure time of a camera by the central signal processor through the specific signal line of the first signal path, a brighter foreground in the image data of this driven camera can be effected, and thus a more detailed image formation can be allowed. This is particularly advantageous for those cameras on the vehicle which are inclined downwards with respect to the horizontal in their main detection direction and thus detect towards the roadway. Furthermore, by this control of the exposure time of a camera, a higher degree of recoverable data can be achieved, whereby here too the image quality can then be improved.
Further features of the invention are apparent from the claims, the figures and the description of figures. The features and feature combinations mentioned above in the description as well as the features and feature combinations mentioned below in the description of figures and/or shown in the figures alone are usable not only in the respectively specified combination, but also in other combinations or alone, without departing from the scope of the invention.
Below, embodiments are explained in more detail based on schematic drawings. They show:
Fig. 1 a top view of an embodiment of a vehicle according to the invention with a driver assistance facility according to the invention;
Fig. 2 a simplified schematic representation of an overall image formed by an
embodiment of an image forming device according to the invention; and
Fig. 3 a block diagram representation of partial components of an embodiment of an image forming device according to the invention.
In the figures, the same or functionally equivalent elements are provided with the same reference characters.
In fig. 1, in a top view representation, a vehicle 1 is shown, which is a passenger car. The vehicle 1 includes a driver assistance facility 2 having an image forming device 3. The image forming device 3 includes a first camera 4, a second camera 5, a third camera 6 and a fourth camera 7. The first camera 4 is a front camera and formed for detecting the front region in the environment of the vehicle 1. Schematically, the edge regions or lateral boundaries of the detection region are partially drawn.
Moreover, the second camera 5 is formed for detecting the rear region of the vehicle 1, wherein here too, the boundaries of the detection region are schematically illustrated. Moreover, the cameras 6 and 7 are for example disposed in the side-view mirrors of the vehicle 1 and formed for detecting the lateral environment of the vehicle 1. Here too, the exemplary edges of the detection regions are schematically drawn. By the four cameras 4 to 7, the entire periphery around the vehicle 1 can be detected. In particular, the cameras 6 and 7 detecting sideward are disposed inclined to the bottom with their main detection direction with respect to the horizontal plane, which corresponds to the figure plane and a plane parallel thereto. Therefore, they are oriented towards the roadway. In contrast, the two cameras 4 and 5 are substantially horizontally oriented in their main detection direction.
The cameras 4 to 7 are each formed for acquiring images of the corresponding environmental regions and generate image data using an image chip in this connection. This video data or the video data signal is appropriately processed by a data encoder of the image chip such that pixel data are formed.
In the signal processing chain after the storage, a central digital signal processor external to the camera is arranged, which is provided with these pixel data of the individual images. In this signal processor, an overall image is then formed from the individual images of the cameras 4 to 7 and further image processing is performed. For this, a brightness evaluation of the overall image is performed and the overall image is adjusted in brightness at least in partial regions as needed. Due to the different positions and optionally the different orientations of the cameras 4 to 7, the respectively formed individual images are also formed with different brightnesses. Thus, the overall image can turn out to have different degrees of brightness, which may be disadvantageous for an observer. For the brightness adjustment of this overall image, which is performed in the signal processor, preferably several approaches are then combined.
For this, it can be provided that first, in a first step, a contrast distribution in the image data of the overall image is adjusted, and in this connection a specific adjustment of a gamma value is effected. This is in particular advantageous if the image has a bright background, for example a sky, and a dark foreground. This is referred to as BSDF (Bright Sky Darkening Foreground) adjustment. Afterwards, the actual brightness adjustment is performed, wherein preferably a comparison of the individual images formed by the cameras 4 to 7 to the overall image is made, and in case of undesired brightness differences the overall image is appropriately modified. Preferably, apart from the brightness adjustment, the adjustment of at least one further parameter describing the image quality, for example equalization and/or color balance, is also performed in the signal processor.
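The following Python sketch illustrates such a BSDF-style gamma adjustment triggered by pixel-count thresholds; the concrete threshold values, the triggering fraction and the gamma value are assumptions for illustration only.

```python
import numpy as np

def bsdf_adjust(gray, dark_thr=40, bright_thr=215, fraction=0.25, gamma=0.7):
    """Detect a bright-sky / dark-foreground frame and redistribute its contrast."""
    gray = gray.astype(np.float32)
    n = gray.size
    dark_frac = np.count_nonzero(gray < dark_thr) / n
    bright_frac = np.count_nonzero(gray > bright_thr) / n
    if dark_frac > fraction and bright_frac > fraction:
        # A gamma below 1 lifts the dark foreground more than the bright sky.
        gray = (gray / 255.0) ** gamma * 255.0
    return np.round(np.clip(gray, 0, 255)).astype(np.uint8)

# Example: half very dark foreground, half very bright sky.
frame = np.vstack([np.full((5, 10), 20, np.uint8), np.full((5, 10), 240, np.uint8)])
print(bsdf_adjust(frame)[::5, 0])   # foreground lifted, sky nearly unchanged
```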
The term color can be expressed in terms of luma and chroma. Luma refers to the amount of black-white content in an image and corresponds to what is conventionally described as brightness. Chroma refers to the hue, the degree to which a color looks like red or green or blue. The invention does not only allow luma values for different camera outputs (conventional brightness) to converge, but chroma values as well.
Preferably, it is provided that the signal processor is also formed such that the entire image quality adjustment of the overall image is performed inside the signal processor.
Moreover, for further improving the image quality and adjusting the brightness of the overall image, it can also be provided that the digital signal processor is connected by a respective separate direct signal line through a first signal path to the cameras 4 to 7, in particular to a respective control unit of the cameras 4 to 7. The signal processor can control the cameras 4 to 7 and in particular individually adjust their exposure times through these signal lines. Thereby, the brightness of the images acquired by the respective cameras 4 to 7 can already be affected upon acquisition. The subsequently required processing for the brightness adjustment of the overall image in the signal processor can thereby at least be reduced.
In fig. 2, in a schematic representation, an overall image 8 is shown, which is formed in the signal processor as a top view or bird view from the individual images formed by the cameras 4 to 7. From the image data information of the cameras 4 to 7 detecting sideward, to the front as well as rearwards, an overall image 8 is formed by the image forming device 3, which shows a top view of the vehicle 1 and the near environment of the vehicle 1.
Therein, the environment 9 in the overall image 8 is illustrated completely around the vehicle 1. Thus, obstacles in the close region around the vehicle 1 and thus in the environment 9 of the vehicle 1 can be identified in the overall image 8. This overall image 8 can be displayed to a driver on a display unit in the vehicle. This is a particularly helpful information presentation in specific driving situations, for example in parking into or out of a parking space.
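A highly simplified Python sketch of composing such a top view from four camera tiles. It assumes the tiles are already rectified to the top-view perspective; the canvas layout, tile sizes and the plain placeholder for the vehicle are illustrative assumptions and do not reflect the actual image processing algorithms.

```python
import numpy as np

def compose_top_view(front, rear, left, right, canvas=(120, 80)):
    """Paste four pre-rectified top-view tiles around a vehicle placeholder."""
    h, w = canvas
    overall = np.zeros((h, w), dtype=np.uint8)
    overall[:30, :] = front                    # front region, 30 x 80 tile
    overall[-30:, :] = rear                    # rear region
    overall[30:-30, :20] = left                # left strip, 60 x 20 tile
    overall[30:-30, -20:] = right              # right strip
    overall[30:-30, 20:-20] = 128              # grey placeholder for the vehicle
    return overall

tiles = [np.full((30, 80), 90, np.uint8), np.full((30, 80), 120, np.uint8),
         np.full((60, 20), 70, np.uint8), np.full((60, 20), 150, np.uint8)]
print(compose_top_view(*tiles).shape)
```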
In fig. 3, in a schematic block diagram, the image forming device 3 with essential components is shown. The camera 4 includes an image chip, which in turn includes an imager or pixel array 41 and an image data encoder 42. Moreover, the image chip has a signal processing device 43.
This image chip is connected to a first input 4c of the camera 4 through a first signal path 4'. In the first signal path 4', a microprocessor 44 inside the camera is connected. It is electrically connected to the signal processing device 43 through a signal connection, which is a bidirectional data bus 4a. Moreover, in the first signal path 4', a
transmitting/receiving unit 45 is connected, which is a LIN transmitting/receiving unit in the embodiment. Other types of communication buses are possible too. Between the microprocessor 44 and the transmitting/receiving unit 45, there is also provided a bidirectional signal transmission.
Moreover, the camera 4 includes a second signal path 4". It represents a connection between a second input 4d of the camera 4 and the image data encoder 42. The image data of the image chip is transmitted to an electronic control unit (ECU) 10 external to the camera through this second signal path 4" including the signal line 4b. For this, it is provided that, with regard to the connection to the first camera 4, this ECU 10 also has a first signal path 13' electrically connected to the first signal path 4' of the camera 4. A further transmitting/receiving unit 13, which is also a LIN transmitting/receiving unit, is connected into this first signal path 13' of the ECU 10. This transmitting/receiving unit 13 is connected to a host controller 11 through a bidirectional data connection. It is a component of the ECU 10.
Moreover, between the ECU 10 and the camera 4, a second signal path 14' is also formed in the ECU 10, which is electrically connected to the second signal path 4" of the camera 4. A unit 14 is connected in the second signal path 14' of the ECU 10. It is formed for converting the image data of the camera 4, which has been converted into a serial data stream, back into a parallel digital image data stream and for providing it to the central signal processor 12 of the ECU 10. The image forming device 3 only has one such central signal processor 12.
In the representation according to fig. 3, a further unit, which is not drawn, is connected into the second signal path 4" in the camera 4; it is formed for converting the parallel digital image data stream generated by the image chip into a serial image data stream.
In corresponding configuration, in the embodiment, it is provided that the cameras 5 to 7 are formed analogously to the camera 4. Thus, here too, image chips are respectively provided, which have imagers or pixel arrays 51, 61 and 71, respectively, image data encoders 52, 62 as well as 72, and signal processing devices 53, 63 and 73.
Moreover, here too, first signal paths 5', 6' and 7' as well as second signal paths 5", 6" and 7" are provided. Microprocessors 54, 64 and 74 are connected into the first signal paths 5', 6' and 7', which are connected to the signal processing devices 53, 63 and 73 through data busses 5a, 6a, 7a. Moreover, the first signal paths 5', 6' and 7' also have transmitting/receiving units 55, 65 and 75. The second signal paths 5", 6" and 7" have the signal lines 5b, 6b and 7b, which are electrically connected to the second inputs 5d, 6d and 7d. The first signal paths 5', 6' and 7' are electrically connected to the first inputs 5c, 6c and 7c of the cameras 5 to 7.
Moreover, first signal paths 15', 17' and 19' are also formed in the ECU 10, which are connected to the host controller 11 and are formed for bidirectional communication in this respect. Here too, ECU-side transmitting/receiving units 15, 17 and 19 are connected into the first signal paths 15', 17' and 19', which are also formed as LIN transmitting/receiving units.
Moreover, here too, an ECU-side second signal path 16', 18' and 20' is formed associated with each further camera 5 to 7, which is each connected to the central signal processor 12. Here too, units 16, 18 or 20 are each connected into these second ECU-side signal paths 16', 18' and 20', which are formed for converting the serial image data stream transmitted from the cameras 5 to 7 back into a parallel image data stream.
The operation of the image forming device 3 with regard to the representation in fig. 3 has already been explained above. It is briefly summarized once again by way of the example of the camera 4. The camera 4 acquires the data of the environment via the imager 41, which is processed in the image data encoder 42 and is provided as pixel data in the form of video data. It is directly transmitted into a storage 12a in the ECU 10 through the signal connection 4b. The storage 12a can be disposed in the signal processor 12. The corresponding is effected with the image data of the cameras 5 to 7. An overall image representing a top view of the vehicle 1 and the environment 9 is then formed from the image data of the individual images in the signal processor 12. In this connection, in a first step, a contrast distribution is adjusted by adjusting a gamma value of the received image data or pixel data. This is in particular effected if the image is a so-called BSDF shot and has a very bright background and a very dark foreground. This is in particular identified based on threshold values for the number of bright and dark pixels in the image. In subsequent steps, the further image processing is effected with regard to the mapping as well as the brightness adjustment. With regard to the brightness adjustment and a color balance, the formed overall image is compared to the image information of the individual images in the signal processor 12, and depending on the comparison, a brightness adjustment of at least partial regions of the overall image is performed. The corresponding is effected with the color balance. Such an adjustment of the image quality of at least the brightness, and preferably also an equalization and/or a color balance, is performed in the signal processor 12 in an iterative manner until the desired image quality of the overall image is achieved. The result of the image quality adjustment in the signal processor 12 is then the overall image according to fig. 2. Therein, brightness differences, which may occur upon combination of the individual images from the cameras 4 to 7 to a corresponding overall image, are balanced. The corresponding applies to color differences, which may occur upon combination of the individual images of the cameras 4 to 7 to a corresponding overall image.
Preferably, the entire process of adjustment of the image quality, in particular the brightness adjustment, is performed in the signal processor 12. In this connection, iterative quality adjustment loops, and thus multiple modifications, for example of the brightness, are also performed in the signal processor 12 itself. Further components for this image quality adjustment, or even a loop external to the signal processor, are not required.
The signal processor 12 can have a further internal image data storage, in which the final overall image 8 schematically shown in fig. 2 and adjusted in image quality is then stored. Preferably, a further encoder unit 21 is provided, which follows the signal processor 12 and which then preferably provides the overall image at an output 10a of the ECU 10. This is then transmitted onwards for presentation on a display unit. It can also be provided that this image data storage for storing the finally completed overall image 8 is arranged externally to the signal processor 12 and follows the signal processor 12 in the signal processing chain.
In particular with regard to the formation of the overall image 8, it is provided that in the signal processor 12, after the primary basic formation of the overall image from the individual images and before performing the brightness adjustment, fields of interest in the form of specific pixel data are selected, which are then adjusted in brightness afterwards. Thus, these specific fields of interest represent a pixel-reduced overall image, which is then processed with regard to the image quality. By such an approach, on the one hand, the amount of data to be processed is reduced, and on the other hand, specifically that image data is processed which is actually of interest to the user.
In particular, it is provided that those fields of interest in the individual images of the cameras 4 to 7 which show the vehicle 1 are identified by the signal processor 12 and are not taken into account in the further brightness adjustment of the overall image. This means that the image information concerning the vehicle 1 in the overall image is not modified in brightness.
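An illustrative Python sketch of excluding the vehicle region from the adjustment by means of a mask; the rectangular vehicle region and the gain value are assumptions.

```python
import numpy as np

def adjust_fields_of_interest(overall, vehicle_box, gain=1.2):
    """Adjust only the masked pixels; pixels showing the vehicle stay untouched."""
    top, bottom, left, right = vehicle_box
    mask = np.ones(overall.shape[:2], dtype=bool)
    mask[top:bottom, left:right] = False       # vehicle pixels are excluded
    adjusted = overall.astype(np.float32)
    adjusted[mask] = np.clip(adjusted[mask] * gain, 0, 255)
    return np.round(adjusted).astype(np.uint8)

frame = np.full((120, 80), 100, np.uint8)
out = adjust_fields_of_interest(frame, vehicle_box=(30, 90, 20, 60))
print(out[0, 0], out[60, 40])   # environment brightened, vehicle region unchanged
```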
The brightness adjustment of the specific pixel data extracted from the entire amount of pixel data is effected by a linear adjustment. This means, in a preferred embodiment, that the values are modified by a fixed factor, wherein the adjustment depends on the input value.
Preferably, the linear adjustment is subject to the limits and the discretization of the video output range. For example, if the numerical range is integers from 0 to 255 and an input value of 200 is to be multiplied by a factor of 1.3, then it is limited to an output value of 255 instead of 260. Similarly, if an input value of 21 is to be multiplied by a factor of 1.3, then the output will not be 27.3 but will be rounded to the nearest integer, 27. The values are multiplied by a linear factor in order to preserve relative contrast, but the outputs may be clipped or rounded as described above. For that reason, it is best described as a quasi-linear operation. To avoid clipping, some changes could be made in the gamma region, for example.
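A small Python sketch of this quasi-linear operation, reproducing the clipping and rounding of the numerical example above.

```python
import numpy as np

def quasi_linear(values, factor):
    """Multiply by a factor, then clip to the 8-bit range and round to integers."""
    scaled = np.asarray(values, dtype=np.float64) * factor
    return np.clip(np.rint(scaled), 0, 255).astype(np.uint8)

print(quasi_linear([200, 21], 1.3))   # -> [255  27]: 260 is clipped, 27.3 is rounded
```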
With regard to the improvement of the image quality of the combined overall image, thus, various approaches are combined, namely the modification of the contrast distribution by adjustment of the gamma value on the one hand, the modification of the brightness and/or of the color balance of the overall image on the other hand, and further the possible control of the exposure time of at least one of the cameras 4 to 7 by the signal processor 12 through the first signal paths 4', 5', 6' and 7'. In particular, such a manifold combination allows, in a particularly advantageous manner, a fast and efficient approach for forming an overall image 8, which is substantially improved with regard to brightness differences and color differences with respect to the prior art. Moreover, this result can be achieved in a particularly fast and resource-saving manner.

Claims

1. Image forming device for a vehicle (1), including a first (4 to 7) and at least a second camera (4 to 7), which are each designed for image acquisition of a vehicle environment, and an image storing unit (12a), in which an acquired image of a camera (4 to 7) is stored, wherein the images of the cameras (4 to 7) can be combined to an overall image (8),
characterized in that
in the signal processing chain, following the image storage unit (12a) external to the camera, a central signal processor (12) external to the camera is arranged, which is designed for adjusting the brightness of at least partial regions of the combined overall image (8).
2. Image forming device according to claim 1,
characterized in that
the signal processor (12) is designed for internally performing an iterative adjustment loop for adjusting the brightness of the overall image (8).
3. Image forming device according to claim 1 or 2,
characterized in that
the signal processor (12) is designed for adjusting at least one further parameter describing the image quality, in particular equalization and/or mapping and/or color balance.
4. Image forming device according to any one of the preceding claims,
characterized in that an overall image (8) is output at the output of the signal processor (12), which is adjusted in its brightness as a whole.
5. Image forming device according to any one of the preceding claims,
characterized in that
the signal processor (12) is designed for extracting specific pixel data from the overall pixel data obtained from the image storage unit (12a) and representing the overall image (8) before the brightness adjustment, wherein a pixel-reduced overall image is formed by the specific pixel data, which is provided for the brightness adjustment.
6. Image forming device according to any one of the preceding claims,
characterized in that
the signal processor (12) is designed for comparing the overall brightness of the overall image (8) to the individual images formed by the cameras (4 to 7) and the brightness adjustment can be performed in the signal processor (12) dependent on the comparison.
7. Image forming device according to any one of the preceding claims,
characterized in that
the brightness adjustment is a linear adjustment, especially depending on the input value to be adjusted.
8. Image forming device according to any one of the preceding claims,
characterized in that
the signal processor (12) is designed for identifying image data representing the vehicle (1) in the overall image and is designed for suppressing a brightness adjustment of that image data of the overall image (8), in which the vehicle (1) is represented.
9. Image forming device according to any one of the preceding claims,
characterized in that
the signal processor (12) is designed for determining a contrast distribution of the image data, in particular by adjusting a gamma value, in a first step before brightness adjustment after receiving the image data from the image storage unit (12a).
10. Image forming device according to any one of the preceding claims,
characterized in that
the signal processor (12) is connected to a camera (4 to 7) through a first signal path (4', 13'; 5', 15'; 6', 17'; 7', 19') and is formed for controlling at least one operational parameter, in particular the exposure time, of the camera (4 to 7) through this first signal path (4', 13'; 5', 15'; 6', 17'; 7', 19').
11. Image forming device according to any one of the preceding claims,
characterized in that
the signal processor (12) is connected to a camera (4 to 7) through a second signal path (4", 14'; 5", 16'; 6", 18'; 7", 20'), through which the image data of the cameras (4 to 7) can be transmitted to the signal processor (12).
12. Driver assistance facility including an image forming device (3) according to anyone of the preceding claims.
13. Driver assistance facility according to claim 12, which includes an image forming device (3) with at least four cameras (4 to 7), which are formed for complete detection of the environment (9) around the vehicle (1).
14. Vehicle with an image forming device (3) according to any one of claims 1 to 11 and/or a driver assistance facility (2) according to claim 12 or 13.
15. Method for forming an overall image (8) representing at least the environment of a vehicle (1), which is formed from image data of at least two cameras (4 to 7) of an image forming device (3) for a vehicle (1), wherein an image acquired by a camera (4 to 7) is stored in an image storage unit (12a),
characterized in that
in the signal processing chain, after the image storage unit (12a) external to the camera, a central signal processor (12) external to the camera is arranged, by which the brightness of at least partial regions of the combined overall image (8) is adjusted.
16. Method according to claim 15, characterized in that
for iterative adjustment of the brightness, brightness adjustment loops are performed inside the signal processor (12), in particular a brightness adjustment of the image data is exclusively performed in the signal processor, in particular a comparison of the overall brightness of the overall image (8) to the individual images formed by the cameras (4 to 7) is performed in the signal processor (12), and depending on the comparison, the brightness adjustment of the combined overall image (8) is performed in the signal processor (12).
PCT/EP2010/000049 2010-01-08 2010-01-08 Image forming device for a vehicle as well as driver assistance facility with such an image forming device as well as method for forming an overall image WO2011082716A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP10707812A EP2522126A1 (en) 2010-01-08 2010-01-08 Image forming device for a vehicle as well as driver assistance facility with such an image forming device as well as method for forming an overall image
PCT/EP2010/000049 WO2011082716A1 (en) 2010-01-08 2010-01-08 Image forming device for a vehicle as well as driver assistance facility with such an image forming device as well as method for forming an overall image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/EP2010/000049 WO2011082716A1 (en) 2010-01-08 2010-01-08 Image forming device for a vehicle as well as driver assistance facility with such an image forming device as well as method for forming an overall image

Publications (1)

Publication Number Publication Date
WO2011082716A1 true WO2011082716A1 (en) 2011-07-14

Family

ID=42060673

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2010/000049 WO2011082716A1 (en) 2010-01-08 2010-01-08 Image forming device for a vehicle as well as driver assistance facility with such an image forming device as well as method for forming an overall image

Country Status (2)

Country Link
EP (1) EP2522126A1 (en)
WO (1) WO2011082716A1 (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102013001644A1 (en) 2013-01-31 2014-07-31 Connaught Electronics Ltd. Method for white balance of an image representation and camera system for a motor vehicle
DE102013011844A1 (en) 2013-07-16 2015-02-19 Connaught Electronics Ltd. Method for adjusting a gamma curve of a camera system of a motor vehicle, camera system and motor vehicle
EP2846532A1 (en) 2013-09-06 2015-03-11 Application Solutions (Electronics and Vision) Limited System, device and method for displaying a harmonised combined image
DE102014110516A1 (en) * 2014-07-25 2016-01-28 Connaught Electronics Ltd. Method for operating a camera system of a motor vehicle, camera system, driver assistance system and motor vehicle
EP3035676A1 (en) 2014-12-19 2016-06-22 Conti Temic microelectronic GmbH Surround view system and vehicle including a surround view system
US10166921B2 (en) 2014-02-11 2019-01-01 Robert Bosch Gmbh Brightness and color matching video from multiple-camera system

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040201708A1 (en) * 2001-02-23 2004-10-14 Takaaki Endo Imaging apparatus controller and control method thereof, image processing apparatus and method thereof, and program code and storage medium
EP1718064A1 (en) * 2005-04-27 2006-11-02 Nissan Motor Company, Ltd. Image generating apparatus for vehicles and method
US7139412B2 (en) 2001-04-24 2006-11-21 Matsushita Electric Industrial Co., Ltd. Image synthesis display method and apparatus for vehicle camera
EP2012271A2 (en) * 2007-07-02 2009-01-07 Nissan Motor Co., Ltd. Image processing system and method
US20090290033A1 (en) * 2007-11-16 2009-11-26 Tenebraex Corporation Systems and methods of creating a virtual window

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007304407A (en) * 2006-05-12 2007-11-22 Alpine Electronics Inc Automatic exposure device and method for vehicle-mounted camera
JP4325642B2 (en) * 2006-05-31 2009-09-02 ソニー株式会社 Mobile camera system

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040201708A1 (en) * 2001-02-23 2004-10-14 Takaaki Endo Imaging apparatus controller and control method thereof, image processing apparatus and method thereof, and program code and storage medium
US7139412B2 (en) 2001-04-24 2006-11-21 Matsushita Electric Industrial Co., Ltd. Image synthesis display method and apparatus for vehicle camera
EP1718064A1 (en) * 2005-04-27 2006-11-02 Nissan Motor Company, Ltd. Image generating apparatus for vehicles and method
EP1718064B1 (en) 2005-04-27 2009-07-15 Nissan Motor Company, Ltd. Image generating apparatus for vehicles and method
EP2012271A2 (en) * 2007-07-02 2009-01-07 Nissan Motor Co., Ltd. Image processing system and method
US20090290033A1 (en) * 2007-11-16 2009-11-26 Tenebraex Corporation Systems and methods of creating a virtual window

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP2522126A1

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102013001644A1 (en) 2013-01-31 2014-07-31 Connaught Electronics Ltd. Method for white balance of an image representation and camera system for a motor vehicle
WO2014118167A1 (en) 2013-01-31 2014-08-07 Connaught Electronics Ltd. Method for white balance of an image presentation and camera system for a motor vehicle
DE102013011844A1 (en) 2013-07-16 2015-02-19 Connaught Electronics Ltd. Method for adjusting a gamma curve of a camera system of a motor vehicle, camera system and motor vehicle
EP2843937A1 (en) 2013-07-16 2015-03-04 Connaught Electronics Ltd. Method for adapting a gamma curve of a camera system of a motor vehicle, camera system and motor vehicle
EP2846532A1 (en) 2013-09-06 2015-03-11 Application Solutions (Electronics and Vision) Limited System, device and method for displaying a harmonised combined image
US9214034B2 (en) 2013-09-06 2015-12-15 Application Solutions (Electronics and Vision) Ltd. System, device and method for displaying a harmonized combined image
US10166921B2 (en) 2014-02-11 2019-01-01 Robert Bosch Gmbh Brightness and color matching video from multiple-camera system
DE102014110516A1 (en) * 2014-07-25 2016-01-28 Connaught Electronics Ltd. Method for operating a camera system of a motor vehicle, camera system, driver assistance system and motor vehicle
WO2016012288A1 (en) * 2014-07-25 2016-01-28 Connaught Electronics Ltd. Method for operating a camera system of a motor vehicle, camera system, driver assistance system and motor vehicle
EP3035676A1 (en) 2014-12-19 2016-06-22 Conti Temic microelectronic GmbH Surround view system and vehicle including a surround view system
WO2016096506A1 (en) 2014-12-19 2016-06-23 Conti Temic Microelectronic Gmbh Surround view system and vehicle including a surround view system

Also Published As

Publication number Publication date
EP2522126A1 (en) 2012-11-14

Similar Documents

Publication Publication Date Title
EP2833618B1 (en) Method for activating and deactivating an image correction function, camera system and motor vehicle
US20170330053A1 (en) Color night vision system and operation method thereof
US11477372B2 (en) Image processing method and device supporting multiple modes and improved brightness uniformity, image conversion or stitching unit, and computer readable recording medium realizing the image processing method
JP4869795B2 (en) Imaging control apparatus, imaging system, and imaging control method
US11115636B2 (en) Image processing apparatus for around view monitoring
KR101367637B1 (en) Monitoring apparatus
WO2011082716A1 (en) Image forming device for a vehicle as well as driver assistance facility with such an image forming device as well as method for forming an overall image
US9214034B2 (en) System, device and method for displaying a harmonized combined image
US20220210324A1 (en) Multi-camera vehicular vision system
US20150042806A1 (en) Vehicle vision system with reduction of temporal noise in images
US20170347008A1 (en) Method for adapting a brightness of a high-contrast image and camera system
US10455159B2 (en) Imaging setting changing apparatus, imaging system, and imaging setting changing method
KR20180001869A (en) Image Improving Apparatus for AVM System and Improving Method thereof
JP6903925B2 (en) Imaging display system, passenger equipment
JP6317914B2 (en) In-vehicle image processing device
KR101822344B1 (en) Motor vehicle camera device with histogram spreading
CN111435972B (en) Image processing method and device
EP2843937B1 (en) Method for adapting a gamma curve of a camera system of a motor vehicle, camera system and motor vehicle
US11941897B2 (en) Vehicle occupant monitoring system including an image acquisition device with a rolling shutter image sensor
US11778315B2 (en) Vehicle occupant monitoring system including an image acquisition device with a rolling shutter image sensor
CN114097216A (en) Image processing apparatus and image processing program
EP3389257A1 (en) Method for adapting brightness of image data, image capturing system and advanced driver assistance system
US11902671B2 (en) Vehicle occupant monitoring system including an image acquisition device with a rolling shutter image sensor
US20230291880A1 (en) Image processing apparatus
US20230326177A1 (en) Image processing apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 10707812

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2010707812

Country of ref document: EP

NENP Non-entry into the national phase

Ref country code: DE