WO2016012288A1 - Method for operating a camera system of a motor vehicle, camera system, driver assistance system and motor vehicle - Google Patents
- Publication number: WO2016012288A1 (PCT/EP2015/065925)
- Authority
- WO
- WIPO (PCT)
Classifications
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/10—Image enhancement or restoration by non-spatial domain filtering
- G06T5/70; G06T5/73; G06T5/94
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20004—Adaptive image processing; G06T2207/20012—Locally adaptive
- G06T2207/20048—Transform domain processing; G06T2207/20064—Wavelet transform [DWT]
- G06T2207/20212—Image combination; G06T2207/20221—Image fusion; Image merging
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30248—Vehicle exterior or interior; G06T2207/30252—Vehicle exterior; Vicinity of vehicle
Definitions
- the invention relates to a method for operating a camera system of a motor vehicle, in which at least a first and a second camera capture different environmental regions of the motor vehicle. A first image is captured by the first camera and a second image is captured by the second camera. From at least the first and the second image, an overall image is provided by an image processing device of the camera system.
- the invention relates to a camera system for a motor vehicle with at least a first and a second camera, which capture different environmental regions of the motor vehicle.
- the invention relates to a driver assistance system with such a camera system as well as to a motor vehicle with such a driver assistance system.
- an overall image is, for example, composed from a first image of a first camera and a second image of a second camera by an image processing device of the camera system.
- methods can be applied to the overall image in order to adapt brightness values and/or color values.
- methods can be used, which globally smooth the overall image like a Gaussian filter. This is required because the overall image is composed of different images and thus is heterogeneous.
- the overall image is also heterogeneous because the images, of which the overall image is composed, are provided by different cameras.
- each of the cameras can have different settings for the light sensitivity, the exposure time and the aperture.
- the respective cameras are differently oriented or have a different field of view.
- objects in the environmental region are also differently far away from the respective camera.
- Methods intended to counteract the inhomogeneity of the overall image are furthermore known, such as spatial correction methods, calibration algorithms, or algorithms that fuse the individual images into the overall image and thereby blur the edge areas of the respective images.
- this object is achieved by a method, by a camera system, by a driver assistance system as well as by a motor vehicle having the features according to the respective independent claims.
- Advantageous implementations of the invention are the subject matter of the dependent claims, of the description and of the figures.
- a method according to the invention serves for operating a camera system of a motor vehicle, in which at least a first and a second camera capture different environmental regions of the motor vehicle.
- a first image is captured by the first camera and a second image is captured by the second camera.
- an overall image is provided from at least the first and the second image by an image processing device of the camera system, wherein the overall image is displayed on a display in the motor vehicle.
- At least a partial area of the overall image is determined, which differs in image sharpness and/or image contrast with respect to at least another image area of the overall image.
- a correction function is applied to the at least one partial area by means of the image processing device, wherein the difference in the image sharpness and/or the image contrast is reduced by the correction function.
- the inhomogeneity of the image sharpness is remedied.
- the varying contrast of the overall image can also be matched or adapted. This has the advantage that the overall image appears particularly homogeneous. Furthermore, an overall image with high image sharpness is better suited for recognizing objects in the environmental region of the motor vehicle.
- the first and the second camera are preferably mounted on the motor vehicle such that the entire environmental region of the motor vehicle can be captured.
- the first and the second camera are preferably video cameras, each able to provide a plurality of images (frames) per second.
- the first and/or the second camera can be a CCD camera or a CMOS camera.
- the overall image is provided as a plan view image of the environmental regions of the motor vehicle with a superimposed picture of the motor vehicle. This plan view image is displayed on a display in the motor vehicle. Based on the plan view image, a driver of the motor vehicle can see, for example, which objects or obstacles are located in the environmental region of the motor vehicle. The superimposed picture of the motor vehicle helps the driver assess the distance between the motor vehicle and the objects.
- the correction function is applied for at least one pixel within the partial area and the correction function is adapted depending on a position of the at least one pixel in the partial area.
- This is advantageous because the method is not applied globally to the overall image but can be executed in a locally adapted manner.
- Locally adapted means that the position of the pixel to be adapted in the overall image plays a role in applying the correction function.
- a position of the first and/or the second camera is specified in the overall image and the correction function is adapted depending on the position of the first camera and/or the second camera with respect to the at least one pixel.
- the correction function can be adapted with respect to a distance from the pixel to the respective camera.
- the distance of an object or of the pixel from the camera is a component of the quality of the image and thus of the quality of the overall image.
- taking the position of the camera into account yields a homogenization or improvement of the overall image that better corresponds to its local characteristics.
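The distance dependence described above can be sketched as a simple weighting term. This is a hypothetical illustration only; the patent does not specify a concrete formula, and the function name and the linear falloff are assumptions:

```python
import math

def correction_weight(pixel, camera_pos, max_dist):
    """Hypothetical weight for the correction function: plan-view pixels
    farther from the contributing camera receive a stronger correction,
    since fewer source pixels map onto them."""
    d = math.dist(pixel, camera_pos)
    return min(d / max_dist, 1.0)

# A pixel far from the camera gets a larger correction weight
# than one close to it.
w_near = correction_weight((10.0, 10.0), (0.0, 0.0), 100.0)
w_far = correction_weight((60.0, 80.0), (0.0, 0.0), 100.0)
```

Such a weight could then scale the strength of the shrinkage curve applied at that pixel.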
- the correction function is adapted depending on an orientation of the first and/or the second camera to the at least one pixel.
- the orientation or pose of the camera relative to the object in the environmental region also has an effect on the quality of the image, and thus on the overall image at the respective position of the pixel.
- the objects captured in an edge area of a field of view of the camera are depicted with greater distortions than objects close to the optical axis of the camera.
- the advantage in considering the orientation of the camera is now that the correction function can be applied with higher accuracy.
- the correction function is adapted depending on geometric transformation parameters of the camera.
- the geometric transformation parameters can for example be an internal orientation and/or an external orientation.
- the internal orientation can for example be used to consider biases or distortions tracing back to a lens of the camera.
- the external orientation can be used to map the position of an object in the environmental region to the position of the corresponding pixel in the image and thus in the overall image.
- the advantage is that using the geometric transformation parameters results in a further improvement of the overall image.
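The roles of the external and internal orientation can be illustrated with a minimal pinhole-camera sketch. This model is an assumption for illustration; the patent does not prescribe it, and real lenses add distortion terms to the internal orientation:

```python
def project(point_world, rotation, translation, focal_length):
    """Map a 3-D world point to image coordinates.
    External orientation: rotation and translation into the camera frame.
    Internal orientation: simplified pinhole projection with a single
    focal-length parameter."""
    x, y, z = (
        sum(rotation[i][j] * point_world[j] for j in range(3)) + translation[i]
        for i in range(3)
    )
    return (focal_length * x / z, focal_length * y / z)

identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
# A point 5 units in front of the camera, offset (1, 2) sideways/up:
pixel = project((1.0, 2.0, 5.0), identity, (0.0, 0.0, 0.0), 100.0)
```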
- the correction function is applied to the at least one partial area within a predetermined computing time.
- the application of the correction function therefore has to proceed fast enough to keep pace with the current speed of movement of the motor vehicle.
- the driver must always be shown current information about the environmental region on the display with minimal delay, in order to be able to react to possible obstacles or risks.
- the advantage of improving the overall image by the correction function within a predetermined computing time is therefore the fast provision of the overall image on the display of the motor vehicle.
- the correction function is adapted depending on a variation of a direction of travel and/or a traveling speed of the motor vehicle.
- This information can for example be tapped from a CAN bus and is usually determined by means of sensors of the motor vehicle.
- This has the advantage that areas of the overall image, which change upon movement of the respective camera or the motor vehicle depending on the direction of travel and/or the traveling speed, can be taken into account.
- the area of the overall image captured by a laterally oriented camera of the motor vehicle, for example, changes differently from the area of the overall image captured by a forward or rearward oriented camera of the motor vehicle.
- edges and the image sharpness also change differently.
- the image sharpness of an image edge extending in the direction of travel of the motor vehicle changes differently from that of an image edge extending transversely to the direction of travel.
- the correction function can be specially adapted for such cases.
- the correction function is adapted depending on a current temperature of the first camera and/or the second camera.
- the temperature of the camera, in particular the temperature of an image sensor of the camera, has an effect on the quality of the provided image. Usually, a higher temperature causes more intense image noise.
- the advantage is that, knowing the current temperature, the intensity of the image noise can be inferred and the image or the overall image can be improved with a correspondingly adapted correction function.
- a wavelet shrinkage method is performed with the correction function.
- in the wavelet shrinkage method, which can also be referred to as a wavelet reduction method, the amount of noise on the wavelet coefficients resulting from a wavelet transformation is reduced.
- the wavelet coefficients are improved with a shrinkage curve.
- the wavelet coefficients are obtained by splitting the overall image into low-frequency image contents and high-frequency image contents. This is done by means of a so-called wavelet transformation, which designates a certain family of linear time-frequency transformations.
- the advantage of the wavelet shrinkage method is that the image sharpness and/or the homogeneity of the partial area of the overall image, or of the entire overall image, can be increased very precisely with it.
- the wavelet shrinkage method is executed for a predetermined number of wavelet scales.
- a predetermined number of scales can be used, which serve to divide the image into different frequency ranges.
- the different frequency ranges can also be treated separately from each other, namely, presently, in the form of wavelet coefficients.
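A common realization of such a shrinkage step is soft thresholding. This is only one possible choice, named here as an assumption; the patent itself specifies a piecewise-linear shrinkage curve, of which soft thresholding is a special case:

```python
def soft_shrink(coeff, threshold):
    """Soft thresholding: coefficients with magnitude below the
    threshold (mostly noise) are zeroed; larger ones are pulled
    toward zero by the threshold amount."""
    if coeff > threshold:
        return coeff - threshold
    if coeff < -threshold:
        return coeff + threshold
    return 0.0

# Small (noisy) coefficients vanish, strong (edge) coefficients survive:
shrunk = [soft_shrink(c, 0.5) for c in [0.1, -0.3, 4.0, -2.5]]
```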
- a camera system according to the invention for a motor vehicle includes at least a first and a second camera, which capture different environmental regions of the motor vehicle, wherein the camera system is adapted to perform a method according to the invention.
- a driver assistance system includes a camera system according to the invention.
- a motor vehicle according to the invention includes a driver assistance system according to the invention.
- Fig. 1 in schematic plan view a motor vehicle with a camera system capturing an environmental region of the motor vehicle;
- Fig. 2 in schematic illustration an overall image as a plan view image of the environmental region of the motor vehicle;
- Fig. 3 a flow diagram of a method according to an embodiment of the invention.
- Fig. 4 a graph of a correction function, which is piecewise linear
- Fig. 5 a pixel density map with fields of the plan view image, wherein each of the fields represents a portion of a pixel;
- Fig. 6 in schematic illustration the overall image, which is displayed on a display in the motor vehicle, wherein the overall image is inhomogeneous and has blurred edges;
- Fig. 7 in schematic illustration the overall image analogous to Fig. 6, wherein the overall image is homogeneous and the edges are sharp.
- Fig. 1 a plan view of a motor vehicle 1 with a camera system 2 according to an embodiment of the invention is schematically illustrated.
- the camera system 2 includes a first camera 3, a second camera 4 and an image processing device 5.
- the image processing device 5 can be disposed in any position in the motor vehicle 1.
- the respective cameras 3, 4 are disposed such that they capture an environmental region 6 of the motor vehicle 1.
- any number of further cameras 7 and 9 can be provided. These further cameras 7, 9 are also disposed on the motor vehicle 1 such that they capture the environmental region 6.
- the respective cameras 3, 4, 7, 9 are CMOS cameras or else CCD cameras or any image capturing device, by which the environmental region 6 can be captured.
- the cameras 3, 4, 7, 9 are disposed in a rear region and/or in a front region and/or in a lateral region of the motor vehicle 1.
- the invention is not restricted to such an arrangement of the cameras 3, 4, 7, 9.
- the arrangement of the cameras 3, 4, 7, 9 can be different according to embodiment.
- multiple cameras can also be disposed in the lateral region.
- the cameras 3, 4, 7, 9 are video cameras, which continuously capture a sequence of images.
- the image processing device 5 then processes the sequence of images in real time or in a predetermined computing time and provides an overall image 8 from a first image of the first camera 3, a second image of the second camera 4 and alternatively or additionally a further image of the further cameras 7, 9.
- Fig. 2 exemplarily shows the overall image 8, which is displayed by the image processing device 5 on a display 15 in the motor vehicle 1 in the situation according to Fig. 1.
- the overall image 8 is provided in the form of a plan view image.
- the overall image 8 exhibits artifacts and differently sharp edges due to the composition of several images.
- the overall image 8 can be referred to as inhomogeneous.
- the following method is performed for improving the overall image 8.
- the overall image 8 is divided or decomposed with a wavelet transformation in a step S1.
- a certain family of linear time-frequency transformations is designated by the wavelet transformation.
- the wavelet transformation is composed of a wavelet analysis, which designates a transition from a time representation of the overall image 8 into a wavelet representation, and a wavelet synthesis, which designates a retransformation of the wavelet representation into the time representation.
- a result of the wavelet analysis of step S1 are wavelet coefficients 10, which contain the high frequencies of the overall image 8, and wavelet coefficients 11, which contain the low frequencies of the overall image 8.
- the wavelet analysis is performed for three scales.
- more or fewer scales can also be used, for example two or four.
- three high-frequency wavelet coefficients 10 result, wherein one of these wavelet coefficients 10 describes the horizontal frequencies of the overall image 8, another one describes the vertical frequencies of the overall image 8 and the last wavelet coefficient 10 describes the diagonal frequencies of the overall image 8. From the last scale of the wavelet analysis, a wavelet coefficient 11 with the low frequencies of the overall image 8 is also present.
- in a step S2, the high-frequency wavelet coefficients 10 are treated with a correction function 14.
- the result of this step are the corrected wavelet coefficients 12.
- in a step S3, the corrected wavelet coefficients 12 are retransformed into the time representation, i.e. the overall image 8, together with the low-frequency wavelet coefficients 11 by means of the wavelet synthesis.
- the result after step S3 is an improved overall image 13.
- the improved overall image 13 is a harmonized or matched image with respect to the image sharpness, local image contrast and image noise. Generally, it can be said that the improved overall image 13 is more homogeneous than the original overall image 8.
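The analysis–correction–synthesis pipeline of steps S1 to S3 can be sketched with a one-scale 1-D Haar transform. This is a deliberately minimal stand-in: the actual method operates on 2-D images over several scales, and the wavelet family is not fixed by the text:

```python
def haar_analysis(signal):
    """One scale of a Haar wavelet analysis: split an even-length signal
    into low-frequency (pair averages) and high-frequency (pair
    differences) coefficients."""
    low = [(a + b) / 2 for a, b in zip(signal[0::2], signal[1::2])]
    high = [(a - b) / 2 for a, b in zip(signal[0::2], signal[1::2])]
    return low, high

def haar_synthesis(low, high):
    """Inverse of haar_analysis: reconstruct the signal from the
    low- and high-frequency coefficients."""
    out = []
    for l, h in zip(low, high):
        out += [l + h, l - h]
    return out

# Two flat regions with small noise riding on them:
signal = [10.0, 10.2, 9.9, 10.1, 30.0, 30.3, 29.8, 30.1]
low, high = haar_analysis(signal)            # step S1 (analysis)
high = [h if abs(h) > 0.12 else 0.0 for h in high]  # step S2 (shrinkage)
denoised = haar_synthesis(low, high)         # step S3 (synthesis)
```

Without the shrinkage step, synthesis reproduces the input exactly; with it, small high-frequency (noise-like) detail is removed while larger coefficients survive.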
- in Fig. 4, a graph of the correction function 14 is shown.
- an input value I, thus the value of the high-frequency wavelet coefficients 10, is mapped to an output value O, thus the value of the corrected wavelet coefficients 12.
- the correction function 14 presently corresponds to a shrinkage curve, which is piecewise linear.
- the shrinkage curve can be adapted for the respective scale and thus purposefully improve the homogeneity and/or the image sharpness and/or the image contrast and/or the image artifacts for certain frequency ranges of the overall image 8.
- the basic idea is to vary the shrinkage curve or the transfer curve depending on properties or characteristics of the overall image 8 and physical properties of the cameras 3, 4, 7, 9.
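The piecewise-linear mapping from input I to output O can be sketched as linear interpolation between knot points. The knot values below are purely illustrative (a flat "dead zone" around zero suppresses small, noise-like coefficients), not values given by the patent:

```python
def shrinkage_curve(value, knots):
    """Piecewise-linear mapping of an input wavelet coefficient I to an
    output coefficient O. `knots` is a sorted list of (I, O) pairs;
    inputs outside the knot range are clamped."""
    if value <= knots[0][0]:
        return knots[0][1]
    if value >= knots[-1][0]:
        return knots[-1][1]
    for (x0, y0), (x1, y1) in zip(knots, knots[1:]):
        if x0 <= value <= x1:
            return y0 + (y1 - y0) * (value - x0) / (x1 - x0)

# Dead zone in [-0.1, 0.1], slightly attenuating further out:
curve = [(-1.0, -0.9), (-0.1, 0.0), (0.1, 0.0), (1.0, 0.9)]
```

Adapting the curve per scale then means choosing different knot sets for different frequency ranges.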
- Fig. 5 shows how inhomogeneous the image resolution of the overall image has become due to the transformation to the plan view image.
- the representation of Fig. 5 shows an image resolution for individual areas of the overall image 8.
- the image resolution becomes lower the further an area is away from the camera.
- the further an area is away from the camera 3, 4, 7, 9, the fewer pixels are available from the image of the respective camera 3, 4, 7, 9 for the overall image 8.
- the overall image 8 is therefore more out of focus or blurred in the areas farther away from the camera 3, 4, 7, 9.
- the respective areas of Fig. 5 can be determined by means of a calibration of the camera system 2.
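The density map of Fig. 5 can be approximated by rating each plan-view cell by its distance to the nearest camera. This is a rough sketch under an assumed falloff model; the real map comes from the calibration of the camera system, not from this formula:

```python
import math

def density_map(width, height, cameras):
    """Rough stand-in for a pixel density map: cells far from every
    camera get a low density value (few source pixels per cell)."""
    grid = []
    for y in range(height):
        row = []
        for x in range(width):
            d = min(math.dist((x, y), c) for c in cameras)
            row.append(1.0 / (1.0 + d))  # density falls off with distance
        grid.append(row)
    return grid

# Two cameras on the left and right edge of a small grid:
dmap = density_map(8, 4, [(0, 2), (7, 2)])
```

Cells in the middle of the grid, far from both cameras, receive the lowest density and would therefore get the strongest sharpening correction.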
- Fig. 6 shows the original overall image 8, which is displayed on the display in the motor vehicle 1. It is clearly apparent that the edges are not presented sharply in the overall image 8. In contrast, Fig. 7 shows the improved overall image 13 on the display in the motor vehicle 1. Comparing Fig. 6 and Fig. 7, it can be observed that in the improved overall image 13 the edges are presented more sharply, the image contrast is matched and the entire corrected overall image 13 appears more homogeneous.
- the overall image 8 is divided into partial areas before applying the correction function 14 in order to increase the quality of the image.
- the correction function 14 can be locally adapted depending on the local image sharpness or the local image artifacts.
- the adapted correction function 14 can also be employed only in the case of a limit value being exceeded; for example, it can be omitted if the inhomogeneity of the overall image 8 is not extreme.
- the improvement can also be limited up to a certain point by the correction function 14. This saves computing time in improving the overall image 8 and contributes to the aim to improve the overall image 8 immediately or within a predetermined computing time.
- an output or a reconstruction of the improved overall image 13 can also be effected.
- the correction function 14 can also be applied only partially to the overall image 8. For example, denoising may be desired to reduce the image noise while the image sharpness is not to be changed, for instance because no texture with sharp edges is present in the environmental region 6. The computing time for increasing the image sharpness can then be saved.
- a partial color desaturation can be applied to high-frequency areas of the overall image 8 by the correction function 14. This approach is helpful to attenuate the effect of image artifacts, in particular Moiré artifacts.
- Moiré artifacts manifest as seemingly coarse rasters superimposed on regular fine rasters.
- Moiré artifacts are a special case of an aliasing effect, which arises from subsampling.
- prior knowledge can also be used for correcting these edges or high-frequency areas.
- this prior knowledge includes, for example, a change of the direction of travel of the motor vehicle 1 and/or the traveling speed of the motor vehicle 1.
- edges longitudinal to the direction of travel change less severely than edges transverse to the direction of travel.
- the edges extending transversely to the direction of travel can be specially sharpened, while the longitudinally extending edges are not sharpened by the correction function 14. This approach allows effective employment of the correction function 14 with respect to the computing time.
- the sharpening of the edges in transverse direction can also be adapted to the current traveling speed by the correction function 14.
- for example, in the case of an acceleration, increasing blurriness of the edges transverse to the direction of travel is to be expected, and the correction function 14 can be adapted accordingly.
- the correction function 14 can also be applied as follows: in the case of a turning maneuver of the motor vehicle 1, the correction function 14 can be adjusted accordingly, and various radial movement artifacts can be avoided.
- a further useful way to recalculate the limit values for the image sharpness is to count the edges or high-frequency areas in a certain direction. Above or below a certain number of edges in that direction, the limit value for applying the correction function 14 with respect to the image sharpness can be adapted. This ensures that the improved overall image 13 is always presented with an image sharpness adapted to the respective current situation in the environmental region 6.
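The edge counting described above can be sketched as counting strong gradients per direction and gating the sharpening on the count. The counting scheme and the gating rule here are illustrative assumptions:

```python
def count_edges(img, horizontal, threshold):
    """Count strong gradients in a grayscale image (list of rows):
    horizontal=True counts gradients along a row (vertical edges),
    otherwise gradients down a column (horizontal edges)."""
    count = 0
    for y in range(len(img) - (0 if horizontal else 1)):
        for x in range(len(img[0]) - (1 if horizontal else 0)):
            step = img[y][x + 1] if horizontal else img[y + 1][x]
            if abs(step - img[y][x]) > threshold:
                count += 1
    return count

# A single strong vertical edge down the middle of a tiny image:
img = [
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
]
vertical_edges = count_edges(img, horizontal=True, threshold=5)
# Sharpen in that direction only if enough edges are present:
apply_sharpening = vertical_edges >= 3
```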
- the correction function 14 can be applied depending on a respective temperature of the respective camera 3, 4, 7, 9.
- a known relation between sensor noise of the respective camera 3, 4, 7, 9 and the temperature of the respective camera 3, 4, 7, 9 can be used.
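The known temperature–noise relation can be sketched as a model that scales the shrinkage threshold with expected sensor noise. The doubling model below is a hypothetical placeholder; the real relation would come from characterizing the specific sensor:

```python
def noise_sigma(temp_celsius, sigma_ref=1.0, doubling_step=8.0):
    """Hypothetical sensor-noise model: noise roughly doubles every
    `doubling_step` degrees Celsius, similar to dark-current behavior."""
    return sigma_ref * 2.0 ** (temp_celsius / doubling_step)

def shrink_threshold(temp_celsius, k=3.0):
    """Scale the wavelet shrinkage threshold with the expected noise,
    so a hotter sensor is denoised more aggressively."""
    return k * noise_sigma(temp_celsius)
```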
Abstract
The invention relates to a method for operating a camera system (2) of a motor vehicle (1), in which at least a first camera (3) and a second camera (4) capture different environmental regions (6) of the motor vehicle (1). A first image is captured by the first camera (3) and a second image is captured by the second camera (4). Furthermore, an overall image (8) is provided from at least the first and the second image by an image processing device (5) of the camera system (2). At least a partial area of the overall image (8), which differs with respect to at least another image area of the overall image (8) in an image sharpness and/or an image contrast, is determined, and a correction function (14) is applied to the at least one partial area by means of the image processing device (5), wherein the difference in the image sharpness and/or the image contrast is reduced by the correction function (14).
Description
Method for operating a camera system of a motor vehicle, camera system, driver assistance system and motor vehicle
Methods for operating a camera system of a motor vehicle, in which at least a first and a second camera capture different environmental regions of the motor vehicle, are known from the prior art.
It is disadvantageous in the mentioned prior art that, as already mentioned above, the existing algorithms focus only on the adaptation of brightness and color, or merely algorithms for spatial correction, calibration or fusion are used. However, other characteristics of the overall image also have to be taken into account to make the overall image appear homogeneous.
It is the object of the invention to demonstrate a solution, how the quality of an overall image provided with a camera system of a motor vehicle of the initially mentioned kind can be improved.
The first and the second camera are preferably mounted on the motor vehicle such that the entire environmental region of the motor vehicle can be captured; thus, at least a front camera and/or at least a rear camera and/or at least one camera on each side are provided.
In an embodiment, it is provided that the overall image is provided as a plan view image of the environmental regions of the motor vehicle with a superimposed picture of the motor vehicle. This plan view image is displayed on a display in the motor vehicle. Based on the plan view image, a driver of the motor vehicle can for example see, which objects or obstacles are located in the environmental region of the motor vehicle. The superimposed picture of the motor vehicle assists the driver in being able to assess a distance to the objects from the motor vehicle.
In particular, it is provided that the correction function is applied for at least one pixel within the partial area and the correction function is adapted depending on a position of the at least one pixel in the partial area. This is advantageous because thus it is not applied a global method to the overall image, but the method can be executed in locally adapted manner. Locally adapted means that the position of the pixel to be adapted in the overall image plays a role in applying the correction function.
In a further development, it is provided that a position of the first and/or the second camera is specified in the overall image and the correction function is adapted depending on the position of the first camera and/or the second camera with respect to the at least one pixel. Thus, the correction function can be adapted with respect to the distance from the pixel to the respective camera. The distance of an object or of the pixel from the camera is one factor in the quality of the image and thus in the quality of the overall image. The advantage arising from the consideration of the camera position is a homogenization or improvement of the overall image that better corresponds to its local characteristics.
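A distance-dependent adaptation of the correction strength can be sketched as follows. This is a hypothetical illustration only: the patent does not give a concrete formula, so the linear model and the parameter names (`base_gain`, `gain_per_px`) are assumptions.

```python
import numpy as np

def correction_gain(pixel_xy, camera_xy, base_gain=1.0, gain_per_px=0.01):
    """Illustrative: strength of the correction for one pixel, growing with
    its distance from the camera position specified in the overall image."""
    dist = np.hypot(pixel_xy[0] - camera_xy[0], pixel_xy[1] - camera_xy[1])
    return base_gain + gain_per_px * dist

# Areas far from the camera are blurrier, so they receive a stronger correction
assert correction_gain((300, 400), (0, 0)) > correction_gain((30, 40), (0, 0))
```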
In a particular development, it is provided that the correction function is adapted depending on an orientation of the first and/or the second camera relative to the at least one pixel. The orientation or pose of the camera relative to the object in the environmental region also has an effect on the quality of the image, and thus on the overall image at the respective position of the pixel. Thus, for example, objects captured in an edge area of a field of view of the camera are depicted with greater distortions than objects close to the optical axis of the camera. The advantage of considering the orientation of the camera is that the correction function can be applied with higher accuracy.
Preferably, the correction function is adapted depending on geometric transformation parameters of the camera. The geometric transformation parameters can, for example, be an internal orientation and/or an external orientation. The internal orientation can, for example, be used to account for biases or distortions tracing back to a lens of the camera. The external orientation can be used to map from the position of the object in the environmental region to a position of the pixel in the image, and thus in the overall image. The advantage is that the use of the geometric transformation parameters results in a further improvement of the overall image.
Furthermore, it is preferably provided that the correction function is applied to the at least one partial area within a predetermined computing time. This means that the overall image is displayed on the display in the motor vehicle directly after application of the correction function. The application of the correction function therefore has to proceed fast enough to be consistent with the current speed of movement of the motor vehicle. The driver must always be shown current information about the environmental region on the display in a timely manner in order to be able to react correspondingly to possible obstacles or risks. The advantage of the predetermined computing time for improving the overall image by the correction function is therefore the fast provision of the overall image on the display of the motor vehicle.
In a development, it is provided that the correction function is adapted depending on a variation of a direction of travel and/or a traveling speed of the motor vehicle. This information can, for example, be tapped from a CAN bus and is usually determined by means of sensors of the motor vehicle. This has the advantage that areas of the overall image which change upon movement of the respective camera or the motor vehicle, depending on the direction of travel and/or the traveling speed, can be taken into account. Thus, the area of the overall image captured with a laterally oriented camera of the motor vehicle, for example, changes differently from the area of the overall image captured with a forward or rearward oriented camera of the motor vehicle. Within one of the respective images of the overall image, edges or the image sharpness also change differently. For example, the image sharpness of an image edge extending in the direction of travel of the motor vehicle changes differently from that of an image edge extending transversely to the direction of travel. The edge extending transversely to the direction of travel of the motor vehicle will usually be more severely blurred or more out of focus than the edge extending longitudinally to the direction of travel. With the knowledge about the change of the direction of travel and/or the traveling speed, the correction function can now be specially adapted for such cases.
Similarly, it is provided that the correction function is adapted depending on a current temperature of the first camera and/or the second camera. The temperature of the camera, in particular the temperature of its image sensor, has an effect on the quality of the provided image. Usually, more intense image noise arises at a higher temperature. The advantage is that the intensity of the image noise can be inferred from the knowledge of the current temperature, and the image or the overall image can be improved with a correspondingly adapted correction function.
In particular, a wavelet shrinkage method is performed with the correction function. In the wavelet shrinkage method, which can also be referred to as a wavelet reduction method, the amount of noise on the wavelet coefficients, which result from a wavelet decomposition or wavelet analysis, is estimated. In order to remove the noise, i.e. the additive noise proportion, from the coefficients again, the wavelet coefficients are corrected with a shrinkage curve. The wavelet coefficients are obtained by splitting the overall image into low-frequency and high-frequency image contents. This occurs by means of a so-called wavelet transformation, which designates a certain family of linear time-frequency transformations. The advantage of the wavelet shrinkage method is that the image sharpness and/or the homogeneity of the partial area of the overall image, or even of the entire overall image, can thereby be increased very precisely.
In a further development, it is provided that the wavelet shrinkage method is executed for a predetermined number of wavelet scales. In the wavelet transformation or wavelet decomposition, a predetermined number of scales can be used, which serve for dividing the image into different frequency ranges. The more scales are used, the more frequency ranges can be processed individually, i.e. isolated from the other frequencies. This is advantageous in that the different frequency ranges can be treated separately from each other, namely presently in the form of wavelet coefficients.
A camera system according to the invention for a motor vehicle includes at least a first and a second camera, which capture different environmental regions of the motor vehicle, wherein the camera system is adapted to perform a method according to the invention.
A driver assistance system according to the invention includes a camera system according to the invention.
A motor vehicle according to the invention includes a driver assistance system according to the invention.
The preferred embodiments presented with respect to the method according to the invention and the advantages thereof correspondingly apply to the camera system according to the invention, to the driver assistance system according to the invention as well as to the motor vehicle according to the invention.
Further features of the invention are apparent from the claims, the figures and the description of figures. All of the features and feature combinations mentioned above in the description as well as the features and feature combinations mentioned below in the description of figures and/or shown in the figures alone are usable not only in the respectively specified combination, but also in other combinations or else alone.
Now, the invention is explained in more detail based on a preferred embodiment as well as with reference to the attached drawings.
There show:
Fig. 1 in schematic plan view a motor vehicle with a camera system capturing an environmental region of the motor vehicle;
Fig. 2 in schematic illustration an overall image as a plan view image of the environmental region with a superimposed picture of the motor vehicle;
Fig. 3 a flow diagram of a method according to an embodiment of the invention;
Fig. 4 a graph of a correction function, which is piecewise linear;
Fig. 5 a pixel density map with fields of the plan view image, wherein each of the fields represents a proportion of pixels;
Fig. 6 in schematic illustration the overall image, which is displayed on a display in the motor vehicle, wherein the overall image is inhomogeneous and has blurred edges; and
Fig. 7 in schematic illustration the overall image analogous to Fig. 6, wherein the overall image is homogeneous and the edges are sharp.
In Fig. 1, a plan view of a motor vehicle 1 with a camera system 2 according to an embodiment of the invention is schematically illustrated. The camera system 2 includes a first camera 3, a second camera 4 and an image processing device 5. The image processing device 5 can be disposed in any position in the motor vehicle 1. In the embodiment, the respective cameras 3, 4 are disposed such that they capture an environmental region 6 of the motor vehicle 1. In order to capture the environmental region 6, any number of further cameras 7 and 9 can be provided. These further cameras 7, 9 are also disposed on the motor vehicle 1 such that they capture the environmental region 6.
The respective cameras 3, 4, 7, 9 are CMOS cameras or else CCD cameras or any image capturing device by which the environmental region 6 can be captured. In the embodiment according to Fig. 1, the cameras 3, 4, 7, 9 are disposed in a rear region and/or in a front region and/or in a lateral region of the motor vehicle 1. However, the invention is not restricted to such an arrangement of the cameras 3, 4, 7, 9. The arrangement of the cameras 3, 4, 7, 9 can be different according to the embodiment. For example, multiple cameras can also be disposed in the lateral region.
The cameras 3, 4, 7, 9 are video cameras, which continuously capture a sequence of images. The image processing device 5 then processes the sequence of images in real time or in a predetermined computing time and provides an overall image 8 from a first image of the first camera 3, a second image of the second camera 4 and alternatively or additionally a further image of the further cameras 7, 9.
Fig. 2 exemplarily shows the overall image 8, which is displayed by the image processing device 5 on a display 15 in the motor vehicle 1 in the situation according to Fig. 1.
Presently, the overall image 8 is provided in the form of a plan view image. The overall image 8 exhibits artifacts and differently sharp edges due to the composition of several images. Thus, the overall image 8 can be referred to as inhomogeneous. In order to reduce the inhomogeneity, the following method is performed for improving the overall image 8.
According to Fig. 3, the overall image 8 is divided or decomposed with a wavelet transformation in a step S1. A certain family of linear time-frequency transformations is designated by the wavelet transformation. The wavelet transformation is composed of a wavelet analysis, which designates a transition from a time representation of the overall image 8 into a wavelet representation, and a wavelet synthesis, which designates a retransformation of the wavelet representation into the time representation. The results of the wavelet analysis of step S1 are wavelet coefficients 10, which contain the high frequencies of the overall image 8, and wavelet coefficients 11, which contain the low frequencies of the overall image 8.
According to the embodiment, the wavelet analysis is performed for three scales.
However, more or fewer scales can also be used, for example two or four. For each of these scales, three high-frequency wavelet coefficients 10 result, wherein one of these wavelet coefficients 10 describes the horizontal frequencies of the overall image 8, another one describes the vertical frequencies of the overall image 8 and the last wavelet coefficient 10 describes the diagonal frequencies of the overall image 8. From the last scale of the wavelet analysis, a wavelet coefficient 11 with the low frequencies of the overall image 8 is also present.
In the steps S2a to S2n, the high-frequency wavelet coefficients 10 are now treated with a correction function 14. The result of this step is the set of corrected wavelet coefficients 12. The corrected wavelet coefficients 12 are then retransformed into the time representation, i.e. the overall image, together with the low-frequency wavelet coefficients 11 by means of the wavelet synthesis. The result after step S3 is an improved overall image 13. The improved overall image 13 is a harmonized or matched image with respect to image sharpness, local image contrast and image noise. Generally, it can be said that the improved overall image 13 is more homogeneous than the original overall image 8.
Fig. 4 shows a graph of the correction function 14. On the abscissa, an input value I, i.e. the value of the high-frequency wavelet coefficients 10, is plotted, while an output value O, i.e. the value of the corrected wavelet coefficients 12, is plotted on the ordinate of the graph. The correction function 14 presently corresponds to a shrinkage curve, which is piecewise linear. The shrinkage curve can be adapted for the respective scale and thus purposefully improve the homogeneity and/or the image sharpness and/or the image contrast and/or the image artifacts for certain frequency ranges of the overall image 8.
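A piecewise-linear shrinkage curve of the kind shown in Fig. 4 can be sketched with simple interpolation; the knot values below are assumptions chosen for illustration, not taken from the patent.

```python
import numpy as np

KNOTS_IN = np.array([0.0, 0.1, 0.5, 1.0])    # input coefficient magnitude I
KNOTS_OUT = np.array([0.0, 0.02, 0.7, 1.0])  # output coefficient magnitude O

def shrinkage_curve(coeff):
    """Map a wavelet coefficient through the piecewise-linear transfer curve,
    preserving its sign: small magnitudes (noise) are suppressed, mid-range
    magnitudes (edges) are boosted."""
    return np.sign(coeff) * np.interp(np.abs(coeff), KNOTS_IN, KNOTS_OUT)

assert shrinkage_curve(0.1) < 0.1  # noise-range input is attenuated
assert shrinkage_curve(0.5) > 0.5  # edge-range input is amplified
```

Adapting the curve per scale, as the text describes, would amount to keeping a separate pair of knot arrays for each wavelet scale.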
The basic idea is to vary the shrinkage curve or transfer curve depending on properties or characteristics of the overall image 8 as well as physical properties or characteristics, such as, for example, a relative distance between the respective camera 3, 4, 7, 9 and an object in the environmental region 6 or a pixel in the overall image 8.
The inhomogeneity of the overall image 8 arises, among other things, from the composition of the images of the cameras 3, 4, 7, 9. Fig. 5 shows how inhomogeneous the image resolution of the overall image has become due to the transformation to the plan view image. The representation of Fig. 5 shows an image resolution for individual areas of the overall image 8. The image resolution becomes lower the further an area is away from the camera. The further an area is away from the camera 3, 4, 7, 9, the fewer pixels are available from the image of the respective camera 3, 4, 7, 9 for the overall image 8. The overall image 8 is therefore more out of focus or blurred in the areas farther away from the camera 3, 4, 7, 9. The respective areas of Fig. 5 can be determined by means of a calibration of the camera system 2.
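The falloff shown in Fig. 5 can be approximated with a rough geometric model: for a camera looking onto the ground plane, the number of source pixels covering one metre of ground shrinks roughly with the square of the distance. The focal length and mounting height below are assumed example values, not figures from the patent.

```python
def pixels_per_metre(distance_m, focal_px=800.0, cam_height_m=1.0):
    """Illustrative pinhole-style estimate of how many source-image pixels
    cover one metre of ground at a given distance from the camera."""
    return focal_px * cam_height_m / distance_m ** 2

# Ground areas far from the camera are covered by far fewer source pixels,
# which is why those areas of the plan view image appear blurred
assert pixels_per_metre(8.0) < pixels_per_metre(2.0)
```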
Fig. 6 shows the original overall image 8, which is displayed on the display in the motor vehicle 1. It is clearly apparent that the edges are not sharply presented in the overall image 8. In contrast to this, in Fig. 7, the improved overall image 13 is presented on the display in the motor vehicle 1. Comparing Fig. 6 and Fig. 7, it can now be observed that in the improved overall image 13 the edges are presented more sharply, the image contrast is matched and the entire corrected overall image 13 appears more homogeneous.
While in Fig. 6 and Fig. 7 the overall image 8, 13 is illustrated on the left image half in the form of the plan view image, a side view is respectively presented on the right image half.
Furthermore, it can be helpful to divide the overall image 8 into partial areas before applying the correction function 14 in order to increase the quality of the image improvement. Thus, the correction function 14 can be locally adapted depending on the local image sharpness or the local image artifacts. The adapted correction function 14 can also be employed only in the case of a limit value exceedance; if the inhomogeneity of the overall image 8 is not extreme, for example, the correction can be omitted. On the other hand, the improvement by the correction function 14 can also be limited to a certain point. This saves computing time in improving the overall image 8 and contributes to the aim of improving the overall image 8 immediately or within a predetermined computing time. At the same time, while the wavelet coefficients 10 are being improved, an output or a reconstruction of the improved overall image 13 can already be effected.
The correction function 14 can also be applied only partially to the overall image 8. Thus, it can for example be that denoising is desired to reduce the image noise, but the image sharpness is not to be changed, for example because no texture with sharp edges is present in the environmental region 6. The computing time for increasing the image sharpness can therefore be saved.
Additionally or alternatively, a partial color desaturation can be applied to high-frequency areas of the overall image 8 by the correction function 14. This approach is helpful to attenuate the effect of image artifacts, in particular Moire artifacts. Moire artifacts manifest themselves as seemingly coarse rasters superimposed on regular fine rasters. Moire artifacts are a special case of the aliasing effect, which arises from subsampling.
Furthermore, from a certain number of image sharpening operations performed by the correction function 14, information about the texture of the overall image 8 can be inferred. In this case, a certain number of horizontal and/or vertical edges of the overall image 8 can be determined with a very simple method.
Since horizontal, vertical and diagonal edges are separately present in the form of wavelet coefficients 10 after the wavelet analysis, prior knowledge can also be used for correcting these edges or high-frequency areas. The prior knowledge includes, for example, a change of a direction of travel of the motor vehicle 1 and/or a traveling speed of the motor vehicle 1. Thus, edges longitudinal to the direction of travel change less severely than edges transverse to the direction of travel. The edges extending transversely to the direction of travel can now be specially sharpened, while the longitudinally extending edges are not sharpened by the correction function 14. This approach allows efficient employment of the correction function 14 with respect to computing time.
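Direction-dependent sharpening of this kind could be sketched as a per-orientation gain. Since the wavelet analysis yields separate horizontal, vertical and diagonal coefficients, each orientation can receive its own gain; the mapping of orientations to "transverse" and "longitudinal" and all gain values below are assumptions for illustration.

```python
def orientation_gains(speed_mps, base=1.0, k=0.05):
    """Illustrative per-orientation sharpening gains: edges transverse to the
    direction of travel blur more with speed, so their sub-band gets a
    stronger gain, while longitudinal edges are left unchanged."""
    return {
        "transverse": base + k * speed_mps,      # blurs most, sharpen most
        "longitudinal": base,                    # changes little, leave as is
        "diagonal": base + 0.5 * k * speed_mps,  # intermediate case
    }

gains = orientation_gains(10.0)
assert gains["transverse"] > gains["diagonal"] > gains["longitudinal"]
```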
Furthermore, the sharpening of the edges in the transverse direction can also be adapted to the current traveling speed by the correction function 14. Thus, for example, in the case of an acceleration, increasing blurriness of the edges transverse to the direction of travel is to be expected.
The correction function 14 can also be applied as follows: in the case of a turning maneuver of the motor vehicle 1, the correction function 14 can be adjusted accordingly, and various radial movement artifacts can be avoided.
A further useful possibility is to recalculate the limit values for the image sharpness by counting the edges or high-frequency areas in a certain direction. Above or below a certain number of edges in that direction, the limit value for the application of the correction function 14 with respect to the image sharpness can be adapted. This ensures that the improved overall image 13 is always presented with image sharpness adapted to the respective current situation in the environmental region 6.
In particular, it is provided that the correction function 14 can be applied depending on a respective temperature of the respective camera 3, 4, 7, 9. For this case, a known relation between sensor noise of the respective camera 3, 4, 7, 9 and the temperature of the respective camera 3, 4, 7, 9 can be used.
Claims
1. Method for operating a camera system (2) of a motor vehicle (1), in which at least a first (3) and a second camera (4) capture different environmental regions (6) of the motor vehicle (1), including the steps of:
- capturing a first image by the first camera (3),
- capturing a second image by the second camera (4),
- providing an overall image (8) from at least the first and the second image by an image processing device (5) of the camera system (2),
- determining at least one partial area of the overall image (8), which differs with respect to at least another image area of the overall image (8) in an image sharpness and/or an image contrast, and
- applying a correction function (14) to the at least one partial area by means of the image processing device (5), wherein the difference in the image sharpness and/or the image contrast is reduced by the correction function (14),
characterized in that
a wavelet shrinkage method is performed with the correction function (14).
2. Method according to claim 1,
characterized in that
the overall image (8) is provided as a plan view image of the environmental regions (6) of the motor vehicle (1) with a superimposed picture of the motor vehicle (1).
3. Method according to claim 1 or 2,
characterized in that
the correction function (14) is applied for at least one pixel within the partial area and the correction function (14) is adapted depending on a position of the at least one pixel in the partial area.
4. Method according to claim 3,
characterized in that
a position of the first camera (3) and/or the second camera (4) is specified in the overall image (8) and the correction function (14) is adapted depending on the
position of the first camera (3) and/or the second camera (4) with respect to the at least one pixel.
5. Method according to claim 3 or 4,
characterized in that
the correction function (14) is adapted depending on an orientation of the first camera (3) and/or the second camera (4) to the at least one pixel.
6. Method according to any one of the preceding claims,
characterized in that
the correction function (14) is adapted depending on geometric transformation parameters of the camera (3, 4, 7, 9).
7. Method according to any one of the preceding claims,
characterized in that
the correction function (14) is applied to the at least one partial area within a predetermined computing time.
8. Method according to any one of the preceding claims,
characterized in that
the correction function (14) is adapted depending on a change of a direction of travel and/or a traveling speed of the motor vehicle (1).
9. Method according to any one of the preceding claims,
characterized in that
the correction function (14) is adapted depending on a current temperature of the first camera (3) and/or the second camera (4).
10. Method according to claim 1,
characterized in that
the wavelet shrinkage method is executed for a predetermined number of wavelet scales.
11. Camera system (2) for a motor vehicle (1) including at least a first camera (3) and a second camera (4), which capture different environmental regions (6) of the motor vehicle (1),
characterized in that
the camera system (2) is adapted to perform a method according to any one of the preceding claims.
12. Driver assistance system with a camera system (2) according to claim 11.
13. Motor vehicle (1) with a driver assistance system according to claim 12.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
DE102014110516.8A DE102014110516A1 (en) | 2014-07-25 | 2014-07-25 | Method for operating a camera system of a motor vehicle, camera system, driver assistance system and motor vehicle |
DE102014110516.8 | 2014-07-25 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2016012288A1 true WO2016012288A1 (en) | 2016-01-28 |
Family
ID=53761329
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/EP2015/065925 WO2016012288A1 (en) | 2014-07-25 | 2015-07-13 | Method for operating a camera system of a motor vehicle, camera system, driver assistance system and motor vehicle |
Country Status (2)
Country | Link |
---|---|
DE (1) | DE102014110516A1 (en) |
WO (1) | WO2016012288A1 (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE102016217489A1 (en) | 2016-09-14 | 2018-03-15 | Robert Bosch Gmbh | Method and an associated device for guiding a means of locomotion |
WO2019057807A1 (en) * | 2017-09-21 | 2019-03-28 | Connaught Electronics Ltd. | Harmonization of image noise in a camera device of a motor vehicle |
WO2019072909A1 (en) * | 2017-10-10 | 2019-04-18 | Connaught Electronics Ltd. | Method for generating an output image showing a motor vehicle and an environmental region of the motor vehicle in a predetermined target view, camera system as well as motor vehicle |
KR20200013728A (en) * | 2017-06-30 | 2020-02-07 | 코너트 일렉트로닉스 리미티드 | Method for generating at least one merged perspective image of a car and a surrounding area of the car, camera system and car |
US11508043B2 (en) * | 2020-06-18 | 2022-11-22 | Connaught Electronics Ltd. | Method and apparatus for enhanced anti-aliasing filtering on a GPU |
EP4109393A4 (en) * | 2020-02-21 | 2023-08-16 | Huawei Technologies Co., Ltd. | Method and device for removing moiré patterns |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE102016112483A1 (en) * | 2016-07-07 | 2018-01-11 | Connaught Electronics Ltd. | Method for reducing interference signals in a top view image showing a motor vehicle and a surrounding area of the motor vehicle, driver assistance system and motor vehicle |
AT522757A1 (en) * | 2019-06-25 | 2021-01-15 | MAN TRUCK & BUS OESTERREICH GesmbH | Commercial vehicle and method for retrofitting a commercial vehicle with an assistance system to support turning maneuvers |
DE102021214952A1 (en) | 2021-12-22 | 2023-06-22 | Robert Bosch Gesellschaft mit beschränkter Haftung | Method for displaying a virtual view of an environment of a vehicle, computer program, control unit and vehicle |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2011082716A1 (en) * | 2010-01-08 | 2011-07-14 | Valeo Schalter Und Sensoren Gmbh | Image forming device for a vehicle as well as driver assistance facility with such an image forming device as well as method for forming an overall image |
US20140198992A1 (en) * | 2013-01-15 | 2014-07-17 | Apple Inc. | Linear Transform-Based Image Processing Techniques |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR100866450B1 (en) * | 2001-10-15 | 2008-10-31 | 파나소닉 주식회사 | Automobile surrounding observation device and method for adjusting the same |
JP5269026B2 (en) * | 2010-09-29 | 2013-08-21 | 日立建機株式会社 | Work machine ambient monitoring device |
- 2014-07-25: DE application DE102014110516.8A filed, published as DE102014110516A1 (not active, withdrawn)
- 2015-07-13: WO application PCT/EP2015/065925 filed, published as WO2016012288A1 (active, application filing)
Non-Patent Citations (3)
Title |
---|
AL BOVIK (ED) ED - BOVIK A (ED): "Handbook of Image and Video Processing, Passages", 1 January 2000, HANDBOOK OF IMAGE AND VIDEO PROCESSING; [COMMUNICATIONS, NETWORKING AND MULTIMEDIA], SAN DIEGO, CA : ACADEMIC PRESS, US, PAGE(S) 71 - 267,687-704,C-25-C-40,705-715, ISBN: 978-0-12-119790-2, XP002507635 * |
WEISHENG DONG ET AL: "Image Deblurring and Super-Resolution by Adaptive Sparse Domain Selection and Adaptive Regularization", IEEE TRANSACTIONS ON IMAGE PROCESSING, IEEE SERVICE CENTER, PISCATAWAY, NJ, US, vol. 20, no. 7, 1 July 2011 (2011-07-01), pages 1838 - 1857, XP011411837, ISSN: 1057-7149, DOI: 10.1109/TIP.2011.2108306 * |
ZONG X ET AL: "DE-NOISING AND CONTRAST ENHANCEMENT VIA WAVELET SHRINKAGE AND NONLINEAR ADAPTIVE GAIN", PROCEEDINGS OF SPIE, S P I E - INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING, US, vol. 2762, 8 April 1996 (1996-04-08), pages 566 - 574, XP008006539, ISSN: 0277-786X, DOI: 10.1117/12.236028 * |
Cited By (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE102016217489A1 (en) | 2016-09-14 | 2018-03-15 | Robert Bosch Gmbh | Method and an associated device for guiding a means of locomotion |
EP3306524A2 (en) | 2016-09-14 | 2018-04-11 | Robert Bosch GmbH | Method and related device for guiding a means of locomotion |
CN110809780B (en) * | 2017-06-30 | 2024-01-12 | 康诺特电子有限公司 | Method for generating a combined viewing angle view image, camera system and motor vehicle |
KR20200013728A (en) * | 2017-06-30 | 2020-02-07 | 코너트 일렉트로닉스 리미티드 | Method for generating at least one merged perspective image of a car and a surrounding area of the car, camera system and car |
CN110809780A (en) * | 2017-06-30 | 2020-02-18 | 康诺特电子有限公司 | Method for generating at least one merged perspective view of a motor vehicle and of an environmental region of a motor vehicle, camera system and motor vehicle |
KR102257727B1 (en) | 2017-06-30 | 2021-06-01 | 코너트 일렉트로닉스 리미티드 | Method, camera system and vehicle for generating at least one merged perspective image of a vehicle and surrounding areas of the vehicle |
WO2019057807A1 (en) * | 2017-09-21 | 2019-03-28 | Connaught Electronics Ltd. | Harmonization of image noise in a camera device of a motor vehicle |
JP2020537250A (en) * | 2017-10-10 | 2020-12-17 | コノート、エレクトロニクス、リミテッドConnaught Electronics Ltd. | A method of generating an output image showing a motor vehicle and the environmental area of the motor vehicle in a predetermined target view, a camera system, and a motor vehicle. |
CN111406275A (en) * | 2017-10-10 | 2020-07-10 | 康诺特电子有限公司 | Method for generating an output image showing a motor vehicle and an environmental region of the motor vehicle in a predetermined target view, camera system and motor vehicle |
KR20200052357A (en) * | 2017-10-10 | 2020-05-14 | 코너트 일렉트로닉스 리미티드 | How to generate an output image showing a vehicle and its environmental area in a predefined target view, camera system and vehicle |
KR102327762B1 (en) * | 2017-10-10 | 2021-11-17 | 코너트 일렉트로닉스 리미티드 | A method of generating output images showing a car and its environment areas in a predefined target view, a camera system and a car |
JP7053816B2 (en) | 2017-10-10 | 2022-04-12 | コノート、エレクトロニクス、リミテッド | How to generate an output image showing a powered vehicle and the environmental area of the powered vehicle in a given target view, a camera system, and a powered vehicle. |
CN111406275B (en) * | 2017-10-10 | 2023-11-28 | 康诺特电子有限公司 | Method for generating an image showing a motor vehicle and its environment in a predetermined target view, camera system and motor vehicle |
WO2019072909A1 (en) * | 2017-10-10 | 2019-04-18 | Connaught Electronics Ltd. | Method for generating an output image showing a motor vehicle and an environmental region of the motor vehicle in a predetermined target view, camera system as well as motor vehicle |
EP4109393A4 (en) * | 2020-02-21 | 2023-08-16 | Huawei Technologies Co., Ltd. | Method and device for removing moiré patterns |
US11508043B2 (en) * | 2020-06-18 | 2022-11-22 | Connaught Electronics Ltd. | Method and apparatus for enhanced anti-aliasing filtering on a GPU |
Also Published As
Publication number | Publication date |
---|---|
DE102014110516A1 (en) | 2016-01-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2016012288A1 (en) | Method for operating a camera system of a motor vehicle, camera system, driver assistance system and motor vehicle | |
KR102327762B1 (en) | Method for generating output images showing a motor vehicle and its environmental regions in a predefined target view, camera system and motor vehicle | |
US20140193032A1 (en) | Image super-resolution for dynamic rearview mirror | |
CN110023810B (en) | Digital correction of optical system aberrations | |
US9898803B2 (en) | Image processing apparatus, image processing method, and recording medium storing image processing program | |
JP2011041089A5 (en) | ||
WO2018226437A1 (en) | Shutterless far infrared (fir) camera for automotive safety and driving systems | |
DE102013114996A1 (en) | Method for applying super-resolution to images detected by camera device of vehicle e.g. motor car, involves applying spatial super-resolution to area-of-interest within frame to increase the image sharpness within area-of-interest | |
CN109873997A (en) | Projected picture correcting method and device | |
WO2012011227A1 (en) | Stereo distance measurement apparatus and stereo distance measurement method | |
DE102016105753A1 (en) | Method and apparatus for determining lens tint correction for a multiple camera device with different fields of view | |
US9554058B2 (en) | Method, apparatus, and system for generating high dynamic range image | |
DE102015206477A1 (en) | Method for displaying a vehicle environment of a vehicle | |
CN102821230A (en) | Image processing apparatus, image processing method | |
WO2014069103A1 (en) | Image processing device | |
CN106572285B (en) | Camera module, electronic device and operation method thereof | |
DE102012001858A1 (en) | Method for calibrating wafer level camera of stereo camera assembly, for vehicle for environmental detection, involves identifying corresponding pixels within data sets detected by image capture units during correspondence analysis | |
EP3176748B1 (en) | Image sharpening with an elliptical model of a point spread function | |
EP3127324A1 (en) | System and method for images distortion correction | |
US20150092089A1 (en) | System And Method For Under Sampled Image Enhancement | |
EP3231172A1 (en) | Method for adapting a brightness of a high-contrast image and camera system | |
EP1943626B1 (en) | Enhancement of images | |
US10748252B2 (en) | Method and device for image correction | |
CN108010005B (en) | Method and device for adjusting image brightness and vehicle | |
US10943334B2 (en) | Method and system for representation of vehicle surroundings with reduced artefacts |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 15744136 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 15744136 Country of ref document: EP Kind code of ref document: A1 |