WO2019057807A1 - Harmonization of image noise in a camera device of a motor vehicle - Google Patents

Harmonization of image noise in a camera device of a motor vehicle

Info

Publication number
WO2019057807A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
camera
noise
unit
motor vehicle
Prior art date
Application number
PCT/EP2018/075440
Other languages
French (fr)
Inventor
Vladimir Zlokolica
Mark Patrick Griffin
Aidan Casey
Brian Michael Thomas DEEGAN
Barry Dever
Original Assignee
Connaught Electronics Ltd.
Application filed by Connaught Electronics Ltd. filed Critical Connaught Electronics Ltd.
Publication of WO2019057807A1 publication Critical patent/WO2019057807A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/698 Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/60 Noise processing, e.g. detecting, correcting, reducing or removing noise
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20004 Adaptive image processing
    • G06T2207/20008 Globally adaptive

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)

Abstract

The invention relates to a method for operating a camera device (2) of a motor vehicle (1), including a) capturing an environment (4) of the motor vehicle (1) by a first camera unit (3a) of the camera device (2) and a second camera unit (3b) of the camera device (2); b) generating a first camera image representing the environment (4) captured by the first camera unit (3a), by the first camera unit (3a) and a second camera image representing the environment (4) captured by the second camera unit (3b), by the second camera unit (3b); c) predicting a first characteristic for image noise in the first camera image and a second characteristic for image noise in the second camera image by a computing unit (5) of the camera device (2) depending on a prediction model for the image noise recorded in the computing unit (5); d) matching the image noise of the first and the second camera image with each other depending on the characteristic respectively predicted for the image noise in the first and in the second camera image by the computing unit (5) by application of a respective noise filter to the first or second camera image; and e) generating an overall image (6) representing the environment (4) of the motor vehicle (1) based on the two camera images by the computing unit (5), to improve an appearance of the overall image (6) representing the environment (4) generated from multiple individual camera images.

Description

Harmonization of image noise in a camera device of a motor vehicle
The invention relates to a method for operating a camera device of a motor vehicle. The invention also relates to a camera device for a motor vehicle including a first and a second camera unit for capturing an environment of the motor vehicle and for generating a first camera image representing the environment captured by the first camera unit as well as a second camera image representing the environment captured by the second camera unit.
In known camera devices of motor vehicles with multiple cameras, different views of the environment are generated, which are based on multiple camera images generated by the cameras. Therein, the individual camera images of the different cameras are typically first projected onto a surface, for example a two-dimensional plane or a curved bowl, and merged. The merged images projected onto the target surface are then rendered, that is, a view of the environment from the perspective of a virtual camera, whose position and perspective can be freely preset, is generated from them. Overall, a panorama view is thus generated with the individual camera images as mosaic images. For example, views of the motor vehicle and its environment from a bird's eye perspective can be generated if the individual camera images are projected onto a two-dimensional, flat plane as the target surface and the virtual camera is positioned directly above the motor vehicle, looking at the motor vehicle perpendicularly to the two-dimensional plane. In this case, the motor vehicle itself can be represented by a previously stored model image.
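As a rough illustration of the merge step, the following minimal Python/numpy sketch composites per-camera projections into one overall image; the function name, the mask-based blending and the assumption that the camera images have already been warped onto the common ground plane are illustrative choices, not the concrete implementation of the patent.

```python
import numpy as np

def compose_birds_eye(warped_images, masks, background):
    """Merge per-camera projections into one overall (bird's-eye) image.

    warped_images : list of HxWx3 arrays, each already projected onto the
                    common ground plane (one per camera unit).
    masks         : list of HxW boolean arrays selecting the mosaic region
                    that the respective camera contributes.
    background    : HxWx3 array, e.g. the stored model image of the car.
    """
    accum = np.zeros(background.shape, dtype=np.float32)
    weight = np.zeros(background.shape[:2], dtype=np.float32)

    for img, mask in zip(warped_images, masks):
        m = mask.astype(np.float32)
        accum += img.astype(np.float32) * m[..., None]  # sum contributions
        weight += m                                     # contributing cameras

    overall = background.astype(np.float32)
    covered = weight > 0
    overall[covered] = accum[covered] / weight[covered][:, None]
    return overall
```

In overlap regions this sketch simply averages the contributing cameras; in practice a feathered blend toward the seams is common.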
In each of the described multi-camera views, it is the central idea to merge multiple individual camera images or projections of multiple individual camera images in a single overall image with the individual camera images as mosaic images. Therein, the overall image is to appear as a single camera image, which is captured by a single camera, for example by the virtual camera above the motor vehicle in case of the bird's eye perspective. This means that the textures originate from different cameras in different areas of the overall image, but nevertheless represent the same three-dimensional environment and are to be of similar brightness, color and resolution. Therein, there are already methods to adapt brightness or color reproduction or image sharpness of the different individual images of the overall image to each other.
In motor vehicles, noise filters or noise reduction algorithms are typically also employed in camera devices; however, they suppress the image noise only incompletely and achieve an unsatisfactory trade-off between suppression of the image noise and loss of image sharpness. This is mainly due to the fact that these noise filters are implemented in the camera units or cameras themselves and thus only have limited computing capacities available. Broadly speaking, averaging over adjacent image pixels of the respective camera image is usually performed. Here, this "averaging" is a simplification; a plurality of possible processing algorithms exist that use information of adjacent pixels for reducing image noise. With a spatial noise filter, adjacent image pixels of a two-dimensional image are taken into account; with a temporal noise filter, adjacency in time is additionally considered, thus image pixels adjacent in a three-dimensional (spatio-temporal) space.
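The difference between the two filter types can be sketched as follows; the box kernel and the recursive blend factor are arbitrary illustrative choices and not the algorithms actually used in the camera units.

```python
import numpy as np

def spatial_box_filter(frame, radius=1):
    """Average each pixel of a single-channel frame with its spatial
    neighbours (simple box filter over a (2*radius+1)^2 window)."""
    padded = np.pad(frame, radius, mode="edge")
    out = np.zeros_like(frame, dtype=np.float32)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            out += padded[radius + dy: radius + dy + frame.shape[0],
                          radius + dx: radius + dx + frame.shape[1]]
    return out / (2 * radius + 1) ** 2

def temporal_filter(prev_filtered, new_frame, alpha=0.25):
    """Recursive temporal averaging: blend the new frame into the running
    estimate; a smaller alpha means stronger temporal smoothing."""
    return (1.0 - alpha) * prev_filtered + alpha * new_frame
```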
Thus, there arises the object to improve an appearance of an overall image representing the environment of a motor vehicle generated from multiple individual camera images.
This object is solved by the subject matters of the independent claims. Advantageous embodiments are apparent from the dependent claims, the description and the figures.
One aspect of the invention relates to a method for operating a camera device of a motor vehicle. Therein, a first method step is capturing an environment of the motor vehicle by at least a first camera unit of the camera device and a second camera unit of the camera device. Alternatively, capturing can also be effected by more than two camera units, for example by four camera units. A next method step is generating and providing a first camera image representing the environment captured by the first camera unit, in particular a corresponding environmental region of the environment captured by the first camera unit, by the first camera unit and a second camera image representing the environment captured by the second camera unit, in particular a corresponding environmental region of the environment captured by the second camera unit, by the second camera unit.
Analogously, further camera images, for example a third and a fourth camera image, can be correspondingly generated and provided with more camera units, which represent the environment captured by the respective camera unit.
A further method step is predicting a first characteristic for image noise in the first camera image and a second characteristic for image noise in the second camera image by a computing unit of the camera device depending on a prediction model for the image noise recorded in the computing unit. Thus, the first and the second characteristic can be or include qualitatively identical characteristics, for example both include a level value for the image noise, that is the first characteristic can include a first level value for image noise in the first camera image and the second characteristic can include a second level value for image noise in the second camera image. Here, the level value can be understood in terms of a value for an intensity of the image noise. Therein, the prediction can be effected after capturing the environment and generating the camera images, however, the prediction of the respective characteristics can preferably also be effected before generating and/or before capturing. Therein, the prediction model can depend on parameters, as it is described below.
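One conceivable way to realize such a recorded prediction model is a tabulated mapping from a camera setting to an expected noise level; the gain values and sigma values below are hypothetical placeholders used only to illustrate the idea of predicting a level value per camera image.

```python
import numpy as np

# Hypothetical recorded prediction model: expected noise level (sigma, in
# grey values) determined offline for a few analog gain settings of a camera.
RECORDED_GAINS = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
RECORDED_SIGMAS = np.array([0.8, 1.3, 2.2, 3.9, 7.1])

def predict_noise_level(camera_gain):
    """Predict a level value (characteristic) for the image noise of one
    camera image from the camera's current gain setting."""
    return float(np.interp(camera_gain, RECORDED_GAINS, RECORDED_SIGMAS))

# e.g. first camera unit at gain 3.0, second camera unit at gain 10.0
sigma_1 = predict_noise_level(3.0)
sigma_2 = predict_noise_level(10.0)
```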
A further method step is matching, which can be understood in terms of making similar, thus increasing a similarity of, the image noise, thus the characteristics, for example the respective actual level values for the image noise, of the first and the second camera image with each other. Therein, the matching is effected depending on the characteristic respectively predicted for the image noise in the first and in the second camera image by the computing unit, namely by application of a respective (A) noise filter to the first and/or the second camera image. Therein, the noise filter can be understood in terms of a noise filter algorithm. This noise filter algorithm can therefore be applied to the first and/or the second camera image with a respective individual, potentially different parameter setting. For differentiating with respect to later introduced further (B) noise filters, the noise filter introduced here can be referred to as a first (A) noise filter.
Therein, the noise filter can also have a negative filter strength, that is, add noise to the image. This can be advantageous if for example an image quality, for example a sharpness, of the first camera image would undesirably suffer upon further reducing the noise, that is application of the first noise filter with a positive filter strength to the first camera image, such that reduction of the noise in the first camera image is not contemplated. By adding the noise in the other, second camera image, the image noise of the two camera images can thus still be matched, namely by applying the noise filter with the described negative filter strength to the second camera image. The negative filter strength can be advantageous since the appearance of the overall image suffers more from lack of image sharpness than from image noise under some conditions and therein uniform image noise is more pleasant for a viewer at the same time. With more camera units, corresponding image noise of the multiple, for example four, camera images can here be correspondingly matched with each other depending on the respective predicted characteristics.
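A minimal sketch of this matching step, including the "negative filter strength" case, might look as follows; the crude smoothing stand-in and the Gaussian noise model are assumptions for illustration, not the patent's actual noise filter.

```python
import numpy as np

def match_noise(image, predicted_sigma, target_sigma, rng=None):
    """Match the image noise of one camera image to a common target level.

    If the predicted noise exceeds the target, blend towards a smoothed copy
    (positive filter strength). If it is below the target, add synthetic
    noise instead ("negative filter strength"), so both mosaic images reach
    a similar noise level without sacrificing sharpness in the cleaner one.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    img = image.astype(np.float32)
    if predicted_sigma > target_sigma:
        # crude smoothing stand-in for the actual noise filter
        smoothed = 0.5 * (img + np.roll(img, 1, axis=0))
        strength = 1.0 - target_sigma / predicted_sigma   # 0 = no filtering
        return (1.0 - strength) * img + strength * smoothed
    # add independent Gaussian noise so the total std rises to the target
    extra = float(np.sqrt(max(target_sigma**2 - predicted_sigma**2, 0.0)))
    return img + rng.normal(0.0, extra, size=img.shape).astype(np.float32)
```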
Then, a further method step is generating an overall image representing the environment of the motor vehicle based on the two or, with more camera units, multiple, for example four, camera images by the computing unit. Therein, the overall image can represent the captured environment of the motor vehicle in particular from a bird's eye perspective. Here, matching of the image noise is preferably effected before generating the overall image. Thereby, the image noise can be matched independently of distortions and transformations, respectively, as they occur in generating the overall image, especially if it represents the environment from the bird's eye perspective, such that particularly few image artifacts are produced. Finally, displaying the overall image on a display unit of the camera device can be effected as a further method step.
This has the advantage that the image noise is matched or harmonized across the different areas of the overall image, namely in the areas respectively determined by the individual camera images, such that the overall image has a better, more homogeneous image quality and thereby an improved appearance. By predicting the respective image noise depending on the recorded prediction model, different circumstances or situations, as explained in the following, can be taken into account, also dynamically. Therein, a respective strength of the noise filter can also be dynamically adapted during matching by the prediction model. If the method is performed iteratively, thus repeatedly, for example continuously applied to an image data stream with continuously provided camera images, the respective image noise can be adapted or matched in a dynamic environment by the prediction model, which can accordingly be a dynamic prediction model. Thus, the invention provides noise harmonization and in particular noise reduction in the overall image by applying an adaptive noise filter, adaptable via the prediction model, to the individual camera images on which the overall image is based. This is particularly advantageous for an overall image from a bird's eye perspective, since there the areas covered by the individual camera images are typically of similar size in the overall image, such that different image noise is particularly conspicuous and disturbing. By the extremely flexible approach proposed here, a plurality of different problems in the context of the image noise can thus be solved or reduced, as further explained in the following: for example the increased occurrence of noise in a dark environment; a variable intensity of the image noise, which is for example caused by high (brightness) dynamics in a three-dimensional scene as it occurs in HDR pictures; the influence of exposure or gain (so-called "gain control") as well as of local noise reduction mechanisms, which can be implemented in the respective camera units; an influence of a lens geometry on a spatial distribution of the image noise; as well as a geometric distortion of the image noise by the projection of the camera images to the target surface.
In a particularly advantageous embodiment, it is provided that the characteristics each include a spatial distribution of a relative and/or absolute intensity of the respective image noise and/or a measure of a spatial correlation of the respective image noise, for example between adjacent camera pixels of the respective camera image, and/or a level value as a measure of an absolute intensity of the image noise. For example, the characteristics for the respective camera images can include a spatially resolved level value of the (expected or predicted) image noise.
This has the advantage that the image noise can be particularly effectively predicted such that the matching results in a particularly well harmonized image noise in the overall image. Especially if, as described in the following, mutually adjoining or overlapping camera images are to be matched with each other in their image noise, here, knowledge about a spatial distribution of a relative and/or absolute intensity of the image noise is particularly advantageous.
In a further advantageous embodiment, it is provided that a filter strength of the respective (first, in the above sense) noise filter is preset independently, thus for example differently, for different image areas of the respective camera image. Thus, for example, a boundary image area of the respective camera image, one that adjoins, overlaps with or is overlaid on the respectively other camera image in the overall image, or that represents a near area of the environment adjoining the motor vehicle, can be matched in its image noise to the other camera image, for example to a corresponding boundary image area of the other camera image.
This has the advantage that a particularly dynamic or adaptive filtering, namely spatially adaptive filtering, is realized. Thus, one image area can for example be filtered more strongly than another, or noise filtering can be omitted entirely in certain image areas. In extreme cases, filtering, for example temporal filtering, of the image noise can thus be omitted in certain image areas while image noise is even added in other image areas, in order to achieve an overall image harmonized or matched with respect to the image noise. Especially in matching the image noise of multiple camera images, this is advantageous since, for example in a central camera image, a first boundary area adjoining a first further camera image can be matched to this first further camera image, and the image noise in a second boundary area of the central camera image, which adjoins a second adjoining camera image, can be adapted to the second adjoining camera image independently of the adaptation of the image noise in the first boundary area. Especially in the boundary areas of the overall image, but also in the areas of the overall image near the motor vehicle, changes of the characteristic of the image noise are particularly conspicuous, since the viewer generally pays particular attention to the environment adjoining the motor vehicle and since, in a direct comparison of mutually adjoining camera images, differently developed image noise is particularly visible.
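A spatially varying filter strength of this kind can be represented as a per-pixel strength map; the linear ramp toward the adjoining boundary area in the sketch below is merely an assumed profile.

```python
import numpy as np

def boundary_strength_map(shape, side="right", ramp=64, max_strength=1.0):
    """Per-pixel filter strength for one camera image: zero in the interior,
    ramping up to max_strength in the boundary area that adjoins (or
    overlaps) the neighbouring camera image in the overall image."""
    h, w = shape
    strength = np.zeros((h, w), dtype=np.float32)
    ramp_profile = np.linspace(0.0, max_strength, ramp, dtype=np.float32)
    if side == "right":
        strength[:, w - ramp:] = ramp_profile[None, :]
    elif side == "left":
        strength[:, :ramp] = ramp_profile[::-1][None, :]
    return strength

# the matching step can then blend per pixel between the unfiltered and the
# filtered image:  out = (1 - strength) * image + strength * filtered_image
```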
In a further advantageous embodiment, it is provided that the prediction model is dependent on a property of the respective camera unit, in particular on a lens property, preferably a focal length, and/or on a position, in which the camera unit is disposed at the motor vehicle in intended use.
This has the advantage that inhomogeneity in the image noise, as it occurs even with uniform brightness conditions over the entire image, can be compensated for. In the automotive area, for example, the light intensity in a peripheral image area can decrease to as little as 30 percent of the light intensity in a central image area in camera units with a fish-eye lens. This entails increased image noise in edge areas of the camera image compared to the central area of the camera image. Here, modeling the image noise as a spatial distribution is correspondingly particularly advantageous, that is, the characteristics each include a spatial distribution of the relative and/or absolute intensity of the image noise. Thereby, the image noise can be particularly well reduced while maximum image resolution is kept at the same time.
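Under the assumption that the relative illumination of a fish-eye lens falls off roughly quadratically toward the periphery (down to about 30 percent, as mentioned above) and that the effective noise level after gain compensation scales inversely with it, a spatially resolved noise prediction could be sketched as follows; both assumptions are illustrative modelling choices.

```python
import numpy as np

def radial_noise_map(height, width, center_sigma, edge_illumination=0.3):
    """Spatially resolved noise level predicted from a simple lens model:
    relative illumination falls off radially, so the effective noise level
    rises where less light reaches the sensor."""
    ys, xs = np.mgrid[0:height, 0:width]
    cy, cx = (height - 1) / 2.0, (width - 1) / 2.0
    r = np.hypot(ys - cy, xs - cx) / np.hypot(cy, cx)      # 0 centre, 1 corner
    illumination = 1.0 - (1.0 - edge_illumination) * r**2  # assumed falloff
    return center_sigma / illumination                     # higher at the edge
```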
In a further advantageous embodiment, it is provided that the prediction model is dependent on a setting of the respective camera unit, in particular on an exposure setting and/or an image gain, which is also referred to as a so-called "camera gain", and/or a spectrum of exposure settings, as it for example occurs in a highly dynamic picture, a so-called high dynamic range or HDR picture, or else in a night picture with different exposure settings for different image areas and/or sub-images, from which the image is composed or generated. Therein, the respectively current or activated setting can for example be provided from the camera unit to the computing unit via a data channel. This can also be effected independently of providing the respective camera image, for example before capturing the respective camera image with the corresponding setting. The setting of the respective camera unit can therefore be a parameter of the prediction model.
This has the advantage that especially in scenarios with poor illumination, for example in a dark or non-uniformly illuminated environment, at night or in a tunnel, the image quality of the overall image can be very considerably improved by the matched image noise. The principal reason for this is that the camera images basically have more image noise in a darker environment. In HDR pictures with high image or exposure dynamics, the dynamics of the camera images are generally represented with a particularly great color depth, for example 20 or more bits per image pixel. However, such an HDR picture generally has to be compressed to a color depth of 8 bits to be able to be displayed on a display unit. This compression with a corresponding tone adaptation can often cause image noise to appear more clearly. The exposure setting and/or image gain are the most important settings of the respective camera unit for optimizing the optical appearance, that is the image quality, of the respective camera images. With poor illumination, the image noise will be more intense due to greater exposure and more pronounced image gain. However, this effect will be largely homogeneously distributed over the camera image as long as a respective lens property of the camera unit remains unconsidered. The spectrum of exposure settings, as it occurs in an HDR picture, will influence the image noise differently in different areas of the camera image, since the camera adapts both to a comparatively great and to a comparatively low brightness value of the environment within the camera image, such that the image noise will not be uniformly distributed over the camera image in this case, even if the said lens properties remain unconsidered.
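The effect of the compression on the visibility of noise can be estimated by first-order error propagation through the tone curve; the gamma-shaped curve below is only an assumed stand-in for the real compression curve.

```python
import numpy as np

def tonemapped_noise(hdr_value, hdr_sigma, gamma=1.0 / 2.2, hdr_max=2**20 - 1):
    """Estimate how much sensor noise survives the compression of a
    high-dynamic-range value (e.g. 20 bit) to an 8-bit display value.

    First-order propagation: sigma_out ~ |dT/dx| * sigma_in, where T is the
    compression (tone) curve; an assumed gamma curve stands in for the real one."""
    x = np.asarray(hdr_value, dtype=np.float64) / hdr_max
    # derivative of T(x) = 255 * x**gamma with respect to the raw HDR value
    slope = 255.0 * gamma * np.power(np.maximum(x, 1e-9), gamma - 1.0) / hdr_max
    return slope * hdr_sigma

# dark scene regions (small x) sit on a steep part of the tone curve, so the
# same sensor noise becomes much more visible after compression than in
# bright regions
```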
In a further advantageous embodiment, it can be provided that the prediction model is also dependent on a property of the respective camera image, for example a brightness and/or an intensity, in particular an intensity distribution, of image pixels.
This has the advantage that when the method is performed iteratively, as will be described below, the first and the second characteristic can respectively be predicted for a following first and second camera image based on a preceding first and second camera image. In this case, the mentioned (in particular additional) data channel is, for example, not required.
In a further advantageous embodiment, it can be provided that the prediction model depends on a density map, which indicates how many pixels of the respective camera image are mapped to a pixel of the overall image. Thus, the density map is dependent on the perspective of the overall image and can therefore be preset by this perspective, for example the bird's eye perspective. Namely, the perspective of the overall image determines a corresponding geometric transformation for the camera images and thus also determines image areas in which image noise is particularly easily or particularly hardly perceived, that is, image areas in which the image noise is more or less disturbing. Thus, for example in image areas of the overall image for which downsampling is effected, thus a merging of multiple pixels of the camera images into one pixel of the overall image, noise can be very efficiently suppressed by a spatial noise filter. Conversely, in image areas of the overall image for which only few pixels of the camera image are present, so-called upsampling can be required, in which multiple pixels in the overall image are generated from one pixel of the camera image. Here, the noise in the camera image will naturally manifest itself particularly severely in the overall image.
This has the advantage that in matching the image noise by application of the respective noise filter, the transformation of the camera images that is preferably effected later, in generating the overall image, is already taken into account. Since the camera images can here be regarded as raw data for the overall image, processing the camera images as raw data with the noise filter naturally provides better results than a subsequent suppression of the image noise in the overall image, thus in the processed raw data, because the transformation(s) to the overall image have not yet been effected. The knowledge about the properties of the overall image, which manifests itself in the density map and thus is taken into account in advance, thereby results in a particularly good image quality of the overall image.
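Assuming the geometric projection is available as a precomputed per-pixel lookup from the camera image into the overall image, the density map can be computed by counting how many source pixels land on each target pixel; the lookup arrays below are assumed inputs.

```python
import numpy as np

def density_map(map_x, map_y, overall_shape):
    """Count, for every pixel of the overall image, how many pixels of one
    camera image are mapped onto it.

    map_x, map_y : integer arrays with the shape of the camera image, giving
                   the target column/row in the overall image for each camera
                   pixel (assumed to be precomputed from the chosen
                   perspective, e.g. the bird's-eye view)."""
    density = np.zeros(overall_shape, dtype=np.int32)
    np.add.at(density, (map_y.ravel(), map_x.ravel()), 1)
    return density

# density >> 1: downsampling, spatial filtering suppresses noise cheaply
# density <= 1 over large areas: upsampling, camera noise is magnified
```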
In a further advantageous embodiment, it is provided that the respective first (A) noise filter includes a temporal noise filter, the filter strength of which is preset depending on the predicted characteristic for the respective image noise, wherein different image areas of the respective camera image in particular have a different image sharpness.
The use of the temporal noise filter has the advantage that an image sharpness is not influenced by filtering.
Therein, the (A) noise filter can in particular be recorded in the computing unit and preferably only include the temporal noise filter. This has the advantage that the computing capacity of the computing unit, which is generally greater than that of, for example, the camera units, can be used for the noise filter. Moreover, a buffer of presettable size can thus also be provided for the temporal filtering. In a particularly advantageous manner, the noise filter in the computing unit as a temporal noise filter can thus be combined with a spatial noise filter recorded in the camera unit, such that the existing resources of the computing unit and of the camera unit, in which a spatial noise filter is often already provided, can be used in a particularly efficient manner. Therein, the spatial noise filter in the camera unit can also be controlled and thus be dynamically preset by the computing unit as described below, such that the described advantages arise in a particularly efficient manner.
Therein, it can be provided in a particularly advantageous embodiment that the filter strength of the temporal noise filter is set depending on a speed of the motor vehicle and/or the exposure setting of the respective camera unit.
This has the advantage that motion blur, which is caused by the vehicle's own motion and/or by motions in the environment of the motor vehicle, can be reduced, or an influence of this motion blur on the noise filtering can be taken into account, such that better image quality is achieved.
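A speed-dependent scheduling of the temporal filter strength, as also used in the embodiment described further below, could be sketched as follows; the linear trade-off and the constants are assumptions.

```python
def filter_strengths(speed_kmh, exposure_ms, max_speed=120.0):
    """Trade temporal against spatial filtering depending on vehicle speed
    (and, roughly, on exposure time): at higher speed, temporal averaging
    would smear moving structures, so its strength is reduced and the
    spatial filter compensates."""
    motion = min(speed_kmh / max_speed, 1.0) * min(exposure_ms / 33.0, 1.0)
    temporal_strength = 1.0 - motion        # strong when slow / short exposure
    spatial_strength = 0.2 + 0.8 * motion   # picks up as temporal backs off
    return temporal_strength, spatial_strength
```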
In another advantageous embodiment, it is provided that the respective first (A) noise filter includes a spatial noise filter, the filter strength of which is preset depending on the predicted characteristic for the respective image noise, in particular with different filter strength for different image areas of the respective camera image. Alternatively, the first noise filter can also include no spatial noise filter but exclusively the temporal noise filter.
This has the advantage that image noise can be reduced or adapted with particularly few resources and low complexity. Since the filter strength can be preset differently for differently developed characteristics, the loss of image sharpness that generally occurs with a spatial noise filter can be minimized. Here, it is particularly advantageous if the filter strength is preset independently for different image areas of the respective camera image, since a presettable image sharpness can then, for example, be obtained in one preset image area while a preset intensity of the image noise is achieved in a further preset image area at the same time.
Therein, it can be provided that the filter strength of the spatial noise filter is set depending on the speed of the motor vehicle and/or the setting of the respective camera unit, in particular the exposure setting and/or the image gain and/or the spectrum of exposure settings.
This has the advantage that a loss of image sharpness can be dynamically minimized.
In another advantageous embodiment, it is provided that the first and/or second camera unit, and with more camera units also the respective further camera units, in particular the third and fourth camera unit, comprise an additional second spatial (B) noise filter integrated in the respective camera unit, and the filter strength of the respective first (A) noise filter, in particular of the respective first spatial noise filter, is set depending on a setting and/or property of this additional noise filter.
This has the advantage that the present resources particularly effectively cooperate and thus can be efficiently utilized. Thereby, a particularly good image quality can also be achieved in the overall image.
In another advantageous embodiment, it is provided that the first and/or second camera unit, and with more camera units also corresponding further camera units, alternatively or additionally comprise a (or the) second spatial (B) noise filter additionally integrated in the respective camera unit, and at least one setting, in particular a filter strength, of the integrated spatial noise filter is set by the computing unit in matching the image noise. Thus, the setting of the camera unit can in particular be effected depending on the respectively associated predicted characteristic and on the image noise predicted by the prediction model, respectively. In particular, the setting can be effected depending on the predicted first and/or second and/or further predicted characteristics of further camera units. When the method described further below is passed through repeatedly, in particular when capturing and generating are passed through repeatedly, this can also be effected depending on a characteristic predicted for a previously generated first and/or second camera image or on correspondingly ascertained image noise. Alternatively or additionally, it can also be provided that at least one setting of the camera unit is set by the computing unit in matching according to the image noise.
This has the advantage that feedback from the computing unit to the camera unit is realized and thus the camera unit can realize a particularly good image quality especially in repeatedly passing the method as it will be described below.
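Such a feedback path could conceptually look like the following sketch; the camera interface (set_denoise_strength) and the proportional update rule are hypothetical and only illustrate the computing unit re-parameterising the camera-integrated spatial filter.

```python
def adjust_camera_filters(cameras, predicted_sigmas, target_sigma):
    """Feedback step: let the computing unit re-parameterise the spatial
    noise filter integrated in each camera unit so that the individual
    images already leave the cameras with more similar noise.

    'cameras' are objects exposing a hypothetical set_denoise_strength()
    method; the update rule is likewise only an assumption."""
    for cam, sigma in zip(cameras, predicted_sigmas):
        # stronger in-camera denoising for units predicted to be noisier
        strength = max(0.0, min(1.0, (sigma - target_sigma) / target_sigma + 0.5))
        cam.set_denoise_strength(strength)
```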
In a further advantageous embodiment, it is provided that at least capturing the environment and generating the camera images and matching the image noise and generating the overall image are repeatedly or iteratively performed. In particular, the prediction of the respective characteristic can also be iteratively performed. Instead of predicting, the respective characteristic can also be measured in the iterative method in the further repetitions.
This has the advantage that the image quality is dynamically improved, thus also for varying environments. This is advantageous in the described method due to its inherent flexibility, in which the filtering can be individually adapted to the present conditions in each iteration.
A further aspect of the invention relates to a camera device for a motor vehicle including a first and a second camera unit for capturing an environment of the motor vehicle and for generating a first camera image representing the environment captured by the first camera unit as well as a second camera image representing the environment captured by the second camera unit. Therein, the camera device comprises a computing unit, which is formed to predict a first characteristic for image noise in the first camera image and a second characteristic for image noise in the second camera image depending on a prediction model for the image noise recorded or stored in the computing unit.
Furthermore, the computing unit is formed to match the image noise of the first and the second camera image with each other depending on the characteristic respectively predicted for the image noise in the first and in the second camera image, namely by application of a respective first (A) noise filter to the first and/or second camera image, as well as to further generate an overall image representing the environment of the motor vehicle based on the two camera images.
Here, advantages and advantageous embodiments of the camera device correspond to advantages and advantageous embodiments of the described method.
The invention also relates to a motor vehicle with such a camera device.
The features and feature combinations mentioned above in the description as well as the features and feature combinations mentioned below in the description of figures and/or shown in the figures alone are usable not only in the respectively specified combination, but also in other combinations without departing from the scope of the invention. Thus, implementations are also to be considered as encompassed and disclosed by the invention, which are not explicitly shown in the figures and explained, but arise from and can be generated by separated feature combinations from the explained implementations. Implementations and feature combinations are also to be considered as disclosed, which thus do not have all of the features of an originally formulated independent claim.
Moreover, implementations and feature combinations are to be considered as disclosed, in particular by the implementations set out above, which extend beyond or deviate from the feature combinations set out in the relations of the claims. Below, embodiments of the invention are explained in more detail based on a schematic drawing. Therein, the only figure shows a schematic representation of a motor vehicle with an exemplary embodiment of a camera device.
Therein, the motor vehicle 1 is equipped with a camera device 2, which comprises multiple camera units 3a to 3d for capturing an environment 4 of the motor vehicle as well as for generating respective camera images representing the environment captured by the respective camera units 3a to 3d, that is the respectively captured areas of the environment 4. In the shown example, the camera device 2 therein comprises a first camera unit 3a, a second camera unit 3b, a third camera unit 3c and a fourth camera unit 3d. Therein, the first camera unit 3a serves for capturing the environment 4 and generating a first camera image, while the second camera unit 3b serves for capturing the environment and generating a second camera image, which represents the environment captured by the second camera unit 3b. The same applies to the third and the fourth camera unit 3c, 3d.
Presently, the camera device 2 also includes a computing unit 5, which is formed to predict a respective characteristic for image noise in the respective camera image depending on a prediction model for the image noise recorded in the computing unit 5. Thus, the computing unit 5 is formed to predict a first characteristic for image noise in the first camera image, to predict a second characteristic for the image noise in the second camera image as well as presently to correspondingly predict a third and a fourth characteristic.
Furthermore, the computing unit 5 is formed to match the image noise of the respective camera images with each other depending on the characteristic respectively predicted for the image noise in the corresponding camera images, namely by application of a respective noise filter to the respective camera image. Thus, in the operation of the computing unit 5, the image noise of the first and the second camera image is for example matched with each other depending on the characteristic respectively predicted for the image noise in the first and the second camera image by the computing unit 5. This is also correspondingly effected for the further camera images. Thus, the image noise in presently all four camera images of the camera units 3a to 3d is overall mutually matched. Therein, the computing unit 5 is further formed to generate an overall image 6 representing the environment 4 of the motor vehicle 1 based on the respective, presently four, camera images. In the shown example, the camera device 2 also comprises a display unit 7, which serves for displaying the generated overall image 6. In the shown example, the bird's eye perspective is preset as the perspective for the overall image 6. By the perspective of the overall image 6, presently the bird's eye perspective, a geometric transformation for the individual camera images is thus correspondingly preset, which is required to be able to represent the different image areas 6a to 6d of the overall image 6 correctly in perspective based on the individual camera images of the camera units 3a to 3d.
By adapting the image noise in the camera images of the camera units 3a to 3d depending on the respective predicted characteristic, thus, harmonized image noise is achieved in the overall image 6, in which the image noise in the different image areas 6a to 6d is largely homogeneous and thus better image quality is achieved for the overall image 6.
Therein, for modeling the image noise, a spatial distribution of a relative intensity of the image noise is recorded for the prediction model depending on a lens property of the respective camera units 3a to 3d. Here, a level value for the image noise, thus an absolute intensity of the image noise, is presently additionally recorded in the prediction model for the respective camera units 3a to 3d for different exposure scenarios. Thereby, a spatial distribution for the absolute intensity of the image noise is overall preset. For recognizing the different exposure scenarios, corresponding setting values, for example an exposure setting and/or an image gain setting, a so-called camera gain, of the respective camera units 3a to 3d can be retrieved from the respective camera units 3a to 3d by the computing unit 5 in the shown example.
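In this embodiment, the absolute noise map can be thought of as the recorded level value for the current exposure scenario multiplied by the recorded relative spatial distribution; the scenario names, level values and placeholder distribution below are assumptions for illustration.

```python
import numpy as np

# recorded once per camera unit: relative spatial distribution of the noise
# (1.0 in the image centre, larger towards the edges), e.g. from a lens model
relative_distribution = np.ones((800, 1280), dtype=np.float32)  # placeholder

# recorded level values (absolute sigma in the image centre) per exposure scenario
LEVEL_PER_SCENARIO = {"daylight": 1.0, "tunnel": 3.5, "night": 6.0}

def absolute_noise_map(scenario):
    """Spatially resolved absolute noise level for the current exposure
    scenario: level value times the recorded relative distribution."""
    return LEVEL_PER_SCENARIO[scenario] * relative_distribution
```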
Presently, the prediction model is also dependent, via a density map, on the perspective selected for the overall image 6. From the density map, it is apparent how strongly a geometric deformation of the camera images is pronounced in different image areas and thereby in which image areas the image noise can be perceived particularly clearly or hardly at all by a human viewer.
The predicted characteristics for the image noise are now compared for the different camera images in the computing unit 5, and the respective noise filters are then adapted, that is, on the one hand a respective spatial noise filter in the corresponding camera units 3a to 3d and on the other hand an adaptive temporal noise filter for setting an absolute intensity of the image noise in combination with a further spatial noise filter in the computing unit 5, in order to harmonize the image noise in the different camera images and thus in the different image areas 6a to 6d of the overall image 6. Therein, the strength of the noise filter in the computing unit 5 is presently determined by an exposure setting and an image gain setting of the respective camera unit 3a to 3d. In addition, the filter strength of the spatial noise filter is presently increased depending on a speed of the motor vehicle 1, while the strength of the temporal noise filter is conversely correspondingly reduced depending on the speed of the motor vehicle 1. For a faster motor vehicle, the strength of the temporal noise filter is therefore reduced compared to a slower or stationary motor vehicle 1, whereas the strength of the spatial noise filter is increased. Furthermore, the respective settings of the noise filters implemented in the camera units 3a to 3d can presently be taken into account in the noise filter in the computing unit 5, and the spatial noise filter in the respective camera units 3a to 3d can also be controlled as a part of the entire noise filter by the computing unit 5.
By the first mentioned steps, the image noise in the camera images is modeled for different light conditions depending on the respective type of the camera unit 3a to 3d. Here, the modeling can relate to a type of the noise, for example to whether or not it is correlated noise, and/or to a noise level and/or to a spatial distribution over the image. For correlated noise, the assumption that the noise is spatially independent does not apply. Therein, the generated spatial distribution for the image noise is used in combination with the density map to predict the image noise actually present at certain spatial positions of the camera image that is used for the overall image 6. Since the respective noise filter is adapted for the individual camera images corresponding to the knowledge about the image noise to be expected, established via the prediction model, a harmonized, uniform image noise is achieved in the overall image 6. For this purpose, different image areas of the respective camera image can also be filtered with different strengths, such that for example a boundary area of the first camera image, which adjoins the image area 6b in the overall image 6, and the corresponding boundary area of the second camera image, which adjoins the image area 6a in the overall image 6, are matched with each other in their image noise. This can be correspondingly performed for the boundary area of the first camera image, which adjoins the image area 6d in the overall image 6, and for the boundary area of the fourth camera image, which adjoins the image area 6a in the overall image 6. Therein, the boundary areas can also overlap in the overall image 6. Thus, via filtering the image noise in a spatially adapted manner in the individual camera images and thereby also in the overall image 6, a particularly good image quality can overall be achieved for the overall image 6.

Claims

1. Method for operating a camera device (2) of a motor vehicle (1) including a) capturing an environment (4) of the motor vehicle (1) by a first camera unit (3a) of the camera device (2) and a second camera unit (3b) of the camera device (2); b) generating a first camera image, which represents the environment (4) captured by the first camera unit (3a), by the first camera unit (3a) and a second camera image, which represents the environment (4) captured by the second camera unit (3b), by the second camera unit (3b);
c) predicting a first characteristic for image noise in the first camera image and a second characteristic for image noise in the second camera image by a computing unit (5) of the camera device (2) depending on a prediction model for the image noise recorded in the computing unit (5);
d) matching the image noise of the first and the second camera image with each other depending on the characteristic respectively predicted for the image noise in the first and the second camera image by the computing unit (5) by applying a respective noise filter to the first or second camera image;
e) generating an overall image (6) representing the environment (4) of the motor vehicle (1) based on the two camera images by the computing unit (5).
2. Method according to claim 1,
characterized in that
the characteristics each include a spatial distribution of a relative intensity of the image noise and/or a measure of a spatial correlation of the image noise and/or a level value of the image noise.
3. Method according to any one of the preceding claims,
characterized in that
a filter strength of the respective noise filter is preset independently of each other for different image areas of the respective camera image.
4. Method according to any one of the preceding claims,
characterized in that
the prediction model is dependent on a property of the respective camera unit (3a, 3b, 3c, 3d), in particular on a lens property, preferably a focal length, and/or on a position, in which the camera unit (3a, 3b, 3c, 3d) is disposed at the motor vehicle (1) in intended use.
5. Method according to any one of the preceding claims,
characterized in that
the prediction model is dependent on a setting of the respective camera unit (3a, 3b, 3c, 3d), in particular on an exposure setting and/or an image gain and/or a spectrum of exposure settings.
6. Method according to any one of the preceding claims,
characterized in that
the prediction model depends on a density map, which indicates how many pixels of the respective camera image are mapped to a pixel of the overall image (6).
7. Method according to any one of the preceding claims,
characterized in that
the respective noise filter includes a temporal noise filter, the filter strength of which is preset depending on the predicted characteristic for the respective image noise.
8. Method according to claim 7,
characterized in that
the filter strength of the temporal noise filter is set depending on a speed of the motor vehicle (1) and/or the exposure setting of the respective camera unit (3a, 3b, 3c, 3d).
9. Method according to any one of the preceding claims,
characterized in that
the respective noise filter includes a spatial noise filter, the filter strength of which is preset depending on the predicted characteristic for the respective image noise.
10. Method according to claim 9,
characterized in that
the filter strength of the spatial noise filter is set depending on the setting of the respective camera unit (3a, 3b, 3c, 3d), in particular the exposure setting and/or the image gain and/or the spectrum of exposure settings.
11. Method according to any one of the preceding claims,
characterized in that
the first and/or the second camera unit (3a, 3b) comprise an additional spatial noise filter integrated in the respective camera unit (3a, 3b) and the filter strength of the respective, in particular the respective spatial, noise filter is set depending on a setting of the additional noise filter.
12. Method according to any one of the preceding claims,
characterized in that
at least one setting of the camera unit (3a, 3b, 3c, 3d) is set by the computing unit (5) in matching according to method step d) and/or the first and/or the second camera unit (3a, 3b) comprise the additional spatial noise filter integrated in the respective camera unit (3a, 3b), and a setting of the integrated spatial noise filter is set by the computing unit (5) in matching according to method step d).
13. Method according to any one of the preceding claims,
characterized in that
capturing the environment (4) according to method step a), generating the camera images according to method step b), matching the image noise according to method step d) and generating the overall image (6) according to method step e) are repeatedly performed, in particular also the prediction of the respective
characteristic according to method step c).
14. Camera device (2) for a motor vehicle (1), including
- a first and a second camera unit (3a, 3b) for capturing an environment (4) of the motor vehicle (1) and for generating a first camera image, which represents the environment (4) captured by the first camera unit (3a, 3b), as well as a second camera image, which represents the environment (4) captured by the second camera unit (3a, 3b);
characterized by
- a computing unit (5), which is formed to predict a first characteristic for image noise in the first camera image and a second characteristic for image noise in the second camera image depending on a prediction model for the image noise recorded in the computing unit (5), as well as to match the image noise of the first and the second camera image with each other depending on the characteristic respectively predicted for the image noise in the first and in the second camera image, namely by application of a respective noise filter to the first or second camera image, and further to generate an overall image (6) representing the environment (4) of the motor vehicle (1 ) based on the two camera images.
15. Motor vehicle (1) with a camera device (2) according to claim 14.
PCT/EP2018/075440 2017-09-21 2018-09-20 Harmonization of image noise in a camera device of a motor vehicle WO2019057807A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102017121916.1 2017-09-21
DE102017121916.1A DE102017121916A1 (en) 2017-09-21 2017-09-21 Harmonization of image noise in a camera device of a motor vehicle

Publications (1)

Publication Number Publication Date
WO2019057807A1 (en) 2019-03-28

Family

ID=63667926

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2018/075440 WO2019057807A1 (en) 2017-09-21 2018-09-20 Harmonization of image noise in a camera device of a motor vehicle

Country Status (2)

Country Link
DE (1) DE102017121916A1 (en)
WO (1) WO2019057807A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1361747A2 (en) * 2002-05-06 2003-11-12 Eastman Kodak Company Method and apparatus for enhancing digital images utilizing non-image data
US20130089262A1 (en) * 2008-08-29 2013-04-11 Adobe Systems Incorporated Metadata-Driven Method and Apparatus for Constraining Solution Space in Image Processing Techniques
EP2590397A2 (en) * 2011-11-02 2013-05-08 Robert Bosch Gmbh Automatic image equalization for surround-view video camera systems
WO2016012288A1 (en) * 2014-07-25 2016-01-28 Connaught Electronics Ltd. Method for operating a camera system of a motor vehicle, camera system, driver assistance system and motor vehicle

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111754565A (en) * 2019-03-29 2020-10-09 浙江宇视科技有限公司 Image processing method and device
CN111754565B (en) * 2019-03-29 2024-04-26 浙江宇视科技有限公司 Image processing method and device
CN116709046A (en) * 2023-07-03 2023-09-05 深圳市度申科技有限公司 Fixed pattern noise calculation and compensation method
CN116709046B (en) * 2023-07-03 2023-12-15 深圳市度申科技有限公司 Fixed pattern noise calculation and compensation method

Also Published As

Publication number Publication date
DE102017121916A1 (en) 2019-03-21

Similar Documents

Publication Publication Date Title
US11558558B1 (en) Frame-selective camera
CN111986129B (en) HDR image generation method, equipment and storage medium based on multi-shot image fusion
US9336574B2 (en) Image super-resolution for dynamic rearview mirror
Rao et al. A Survey of Video Enhancement Techniques.
US8625881B2 (en) Enhanced ghost compensation for stereoscopic imagery
CN113168670A (en) Bright spot removal using neural networks
CN104980652B (en) Image processing apparatus and image processing method
CN111406275A (en) Method for generating an output image showing a motor vehicle and an environmental region of the motor vehicle in a predetermined target view, camera system and motor vehicle
KR20150045877A (en) Image processing apparatus and image processing method
Mangiat et al. Spatially adaptive filtering for registration artifact removal in HDR video
JPWO2019146226A1 (en) Image processing device, output information control method, and program
CN113039576A (en) Image enhancement system and method
JP6087612B2 (en) Image processing apparatus and image processing method
WO2019057807A1 (en) Harmonization of image noise in a camera device of a motor vehicle
JP2010278890A (en) Image forming apparatus, and image forming method
CN112819699A (en) Video processing method and device and electronic equipment
Bengtsson et al. Regularized optimization for joint super-resolution and high dynamic range image reconstruction in a perceptually uniform domain
US11544830B2 (en) Enhancing image data with appearance controls
JPWO2018220780A1 (en) Image generation apparatus, image generation method, and program
CN112740264A (en) Design for processing infrared images
EP4090006A2 (en) Image signal processing based on virtual superimposition
US11145093B2 (en) Semiconductor device, image processing system, image processing method and computer readable storage medium
Sá et al. Range-enhanced active foreground extraction
WO2020084894A1 (en) Multi-camera system, control value calculation method and control device
KR101633634B1 (en) Method and system for color matching between left and right images acquired from stereoscopic camera in digital cinemas

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18773435

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18773435

Country of ref document: EP

Kind code of ref document: A1