CN111800586A - Virtual exposure processing method for vehicle-mounted image, vehicle-mounted image splicing processing method and image processing device - Google Patents
- Publication number
- CN111800586A (application CN202010595985.2A)
- Authority
- CN
- China
- Prior art keywords
- image
- revert
- atmospheric
- illumination intensity
- transmission rate
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- H04N23/76 - Circuitry for compensating brightness variation in the scene by influencing the image signals
- G06T3/4038 - Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
- H04N23/13 - Cameras or camera modules comprising electronic image sensors; control thereof for generating image signals from different wavelengths with multiple sensors
- H04N23/71 - Circuitry for evaluating the brightness variation
- H04N23/951 - Computational photography systems using two or more images to influence resolution, frame rate or aspect ratio
- H04N7/181 - Closed-circuit television [CCTV] systems for receiving images from a plurality of remote sources
Abstract
The invention provides a virtual exposure processing method for a vehicle-mounted image, a vehicle-mounted image stitching processing method, and an image processing device. The method performs virtual exposure processing on an image captured by a visible-light camera and improves the calculation of the atmospheric illumination intensity and the transmission rate to obtain a clear image. The technical scheme of the invention addresses the poor imaging caused by local image distortion around strong light sources such as vehicle lamps and street lamps under insufficient illumination, so that the processed image is smoother and the imaging effect is better.
Description
Technical Field
The invention relates to the field of automobiles, in particular to a virtual exposure processing method of a vehicle-mounted image, a vehicle-mounted image splicing processing method and an image processing device.
Background
In the field of driving assistance, a visible-light camera is generally employed as the imaging device at the front of a vehicle. However, most target detection technologies assume daytime driving conditions: in daytime the light is sufficient and uniform, the noise in the video data acquired by the camera is low, and intelligent algorithms can be applied successfully to realize efficient target detection and provide assisted driving decisions for the driver. At night, however, the imaging quality of a visible-light camera is often poor because of poor exposure conditions, which is especially evident where street lighting is inadequate (for example, on rural roads). In poor light at night the driver cannot see the surroundings clearly with the naked eye, and the visible-light camera, being under-exposed, cannot provide a better image for the driver either. As a result, the driver often fails to notice pedestrians, vehicles, and road-surface defects ahead in time, which can cause traffic accidents. To improve the imaging effect of the camera, patent CN105991938A provides a virtual exposure method; however, in the image restoration model used there, the colour of a region is severely distorted when t(x, y) approaches 0.
In addition, in the application scenario of a vehicle-mounted camera, the vehicle lamps must be turned on for road illumination when light is insufficient, and the illumination intensity is affected by the vehicle lamps, the street lamps on both sides of the road, and so on. In the prior art, the atmospheric illumination intensity A is calculated by cutting the grayscale image into a preset number of grayscale blocks in a preset order (top to bottom, left to right); when the blocks are smaller than a preset size, the mean and variance of the brightness of each block are obtained, and the maximum pixel brightness in the block with the largest mean and smallest variance is taken as the atmospheric illumination intensity. This is feasible when there is no light interference, but when the illumination is poor and many lamps are present, taking the maximum pixel brightness overestimates the atmospheric illumination intensity, so the final imaging effect is poor. For example, the light intensity at a position lit by a lamp may be much larger than at an unlit position; estimated this way, the atmospheric illumination intensity follows the intensity at the lamp position and deviates further from the true value.
Therefore, under insufficient light, how to obtain the atmospheric illumination intensity and the transmission rate more accurately, and thereby obtain a better virtual exposure image, is a difficult problem that current automobiles need to solve.
Disclosure of Invention
Aiming at the defects existing in the prior art, the invention provides a virtual exposure method of a vehicle-mounted image, which is characterized by comprising the following steps:
step S1, acquiring an image captured by the visible-light camera;
step S2, inverting the acquired image to generate an inverted image;
step S3, obtaining the atmospheric illumination intensity;
step S4, obtaining the transmission rate of the grayscale image;
step S5, generating a restored image based on a preset image restoration model, the inverted image, the atmospheric illumination intensity, and the transmission rate;
in step S3, the obtaining of the atmospheric illumination intensity in one obtaining manner includes: calculating the atmospheric illumination intensity in blocks by taking preset blocks as units, and taking the average value in the blocks as the final atmospheric illumination intensity A of the blocksΨk:
In the above formula, AΨkIs represented at ΨkAtmospheric illumination intensity, | ΨkI denotes ΨkThe total number of all pixel points in the area in the station; i isd(x, y) is ΨkDifference values of all pixels in the local area; a isk,bkIs ΨkCoefficients within a local region; (x, y) represents two-dimensional coordinates of the pixel;
the difference value is defined as the difference between the maximum-channel image value and the minimum-channel image value obtained from the inverted image.
The virtual exposure method for a vehicle-mounted image, further characterized in that the maximum-channel image value in the inverted image is calculated as: I_max(x,y) = max{I_revert_R(x,y), I_revert_G(x,y), I_revert_B(x,y)}
The minimum-channel image value in the inverted image is calculated as: I_min(x,y) = min{I_revert_R(x,y), I_revert_G(x,y), I_revert_B(x,y)};
The difference value is calculated as: I_d(x,y) = I_max(x,y) - I_min(x,y);
In the above formulas, I_revert_R(x,y), I_revert_G(x,y), I_revert_B(x,y) represent the values of the R, G, and B channels of the inverted image, respectively.
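As a quick illustration (not part of the patent text; the function name and the H x W x 3 float array layout are assumptions), the maximum-channel value, minimum-channel value, and difference value can be computed with NumPy as:

```python
import numpy as np

def channel_difference(inverted):
    """Compute I_max, I_min and the difference value I_d from an inverted
    RGB image, following the three formulas above.

    inverted: H x W x 3 float array holding the R, G, B channels of the
    inverted image (layout is an assumption for this sketch).
    """
    i_max = inverted.max(axis=2)  # per-pixel maximum over the three channels
    i_min = inverted.min(axis=2)  # per-pixel minimum over the three channels
    return i_max, i_min, i_max - i_min  # I_d = I_max - I_min
```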
The virtual exposure method for a vehicle-mounted image, further, wherein the step of calculating a_k, b_k comprises: establishing a relation between the atmospheric illumination intensity and the difference value of the pixels in the local region Ψk, then establishing an objective function E(a_k, b_k) between the atmospheric illumination intensity A(x,y) and the brightness of the inverted image, and solving the objective function by linear regression or the least-squares method to obtain a_k, b_k:
a_k = ((1/|Ψk|) · Σ_{(x,y)∈Ψk} I_d(x,y)·I(x,y) - μ_k·Ī_Ψk) / (σ_k + λ)
b_k = Ī_Ψk - a_k·μ_k
In the above formulas, μ_k and σ_k are respectively the mean and variance of the difference values I_d(x,y) of all pixels in the preset Ψk local region; I(x,y) is the brightness of a pixel of the inverted image; Ī_Ψk is the average of I(x,y) over all pixels in the Ψk local region; |Ψk| is the total number of pixels in the Ψk local region; λ is an error adjustment factor.
In step S3, another way of obtaining the atmospheric illumination intensity includes: acquiring the maximum-channel image, i.e. the image composed of the per-pixel maxima over the R, G, B channels of the inverted image:
I_m = max{I_revert_R(x,y), I_revert_G(x,y), I_revert_B(x,y)}
The pixel brightnesses of the maximum-channel image are sorted by magnitude, the brightest 0.1%-10% of pixels are selected, the brightness of the original image at the same positions is taken, and the average of these brightnesses is used as the atmospheric illumination intensity.
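A minimal NumPy sketch of this estimate (the function name, the `top_fraction` parameter, and the use of the per-pixel channel mean as "brightness" are assumptions; the patent only fixes the percentile to the 0.1%-10% range):

```python
import numpy as np

def estimate_atmospheric_light(inverted, original=None, top_fraction=0.01):
    """Estimate the atmospheric illumination intensity from the
    maximum-channel image: sort its pixel brightnesses, keep the brightest
    `top_fraction` of pixels, and average the brightness at the same
    positions in the reference image (the inverted image by default)."""
    i_m = inverted.max(axis=2)                 # maximum-channel image I_m
    n = max(1, int(i_m.size * top_fraction))   # number of pixels to keep
    idx = np.argsort(i_m.ravel())[-n:]         # indices of the brightest pixels
    ref = inverted if original is None else original
    brightness = ref.mean(axis=2).ravel()      # per-pixel brightness (assumed: channel mean)
    return float(brightness[idx].mean())
```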
The virtual exposure method for a vehicle-mounted image, wherein in step S4 one way of obtaining the transmission rate comprises:
step S41, acquiring a grayscale image of the inverted image:
I_gray(x,y) = 0.3·I_revert_R(x,y) + 0.59·I_revert_G(x,y) + 0.11·I_revert_B(x,y)
step S42, dividing the grayscale image into blocks;
step S43, taking each block as a unit, according to the atmospheric illumination intensity A and the formula J(x,y) = (I_gray(x,y) - A)/t(x,y) + A, setting the transmission rate t(x,y) of the current block to several values between 0.1 and 0.9 respectively, calculating the corresponding J(x,y) for each, and then taking the t that gives the image contrast its maximum value, recorded as t1(x,y);
The contrast is calculated as:
C = (1/N) · Σ_{(x,y)} (J(x,y) - J_mean)²
wherein J(x,y) is the restored image generated at a given transmission rate, J_mean is the mean value of the restored image, N is the number of pixels of each block corresponding to J(x,y), and p(x,y) represents a correction factor;
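Steps S41-S43 might be sketched as below. The correction factor p(x, y) is not fully reproduced in this text, so the sketch substitutes a simple clipping penalty (the fraction of restored values pushed outside [0, 255]); the penalty, the function name, and the candidate grid are assumptions:

```python
import numpy as np

def search_transmission(gray_block, A, t_candidates=np.arange(0.1, 0.95, 0.1)):
    """Per-block transmission search: for each candidate t, restore
    J = (I_gray - A) / t + A and score it by the contrast
    C = sum((J - mean(J))**2) / N, penalised for clipped pixels."""
    best_t, best_score = None, -np.inf
    n = gray_block.size
    for t in t_candidates:
        j = (gray_block - A) / t + A
        contrast = np.sum((j - j.mean()) ** 2) / n
        clipped = np.mean((j < 0) | (j > 255))   # stand-in for p(x, y)
        score = contrast * (1.0 - clipped)
        if score > best_score:
            best_t, best_score = float(t), score
    return best_t
```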
The virtual exposure method for a vehicle-mounted image, further, wherein another way of obtaining the transmission rate comprises the following steps:
step S4A, obtaining the transmission rate t_LR corresponding to the maximum channel of the light-source region and the transmission rate t_NR corresponding to the minimum channel of the non-light-source region, each calculated per local region;
in these calculations, I_c(x,y) denotes the value of the r, g, or b channel of a pixel of the inverted image within an LR or NR region Ψk; Ψk represents a local region centered on pixel k; A_c(x,y) is the atmospheric illumination intensity of the r, g, or b channel of each pixel in the Ψk region;
step S4B, obtaining a brightness perception coefficient α(x,y);
step S4C, calculating the final transmission rate t(x,y):
t(x,y) = t_LR(x,y)·α(x,y) + t_NR(x,y)·(1 - α(x,y))
The virtual exposure method for a vehicle-mounted image, further, wherein step S5 includes:
substituting, according to the restoration formula, the three channel values I_revert_R(x,y), I_revert_G(x,y), I_revert_B(x,y), the atmospheric illumination intensity A(x,y), and the transmission rate t(x,y), to obtain the three restored channels J_revert_R(x,y), J_revert_G(x,y), J_revert_B(x,y) of the restored image J_revert(x,y);
The restored image is calculated as:
J_revert(x,y) = (I_revert(x,y) - A(x,y)) / t(x,y) + A(x,y)
and each of the three restored channels is calculated in the same form:
J_revert_c(x,y) = (I_revert_c(x,y) - A(x,y)) / t(x,y) + A(x,y), c ∈ {R, G, B};
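Per channel, the restoration formula can be applied as in the sketch below. Clamping t(x, y) from below (`t_min`, an assumed safeguard) reflects the background section's observation that colours distort severely when t(x, y) approaches 0; the claims themselves do not specify it:

```python
import numpy as np

def restore_channel(i_revert_c, A, t, t_min=0.1):
    """Apply J = (I - A) / t + A to one inverted channel, with t clamped
    below at t_min to avoid division by near-zero transmission."""
    t_safe = np.maximum(t, t_min)
    return (i_revert_c - A) / t_safe + A
```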
The virtual exposure method for a vehicle-mounted image, further comprising: using an image captured by an infrared camera and selecting from it a target region of interest to be processed. One way of acquiring the region of interest is: sorting the brightness values of the pixels of the whole infrared image and taking the pixels whose brightness falls in the top fifty percent as the pixels of the region of interest;
or selecting targets in the infrared image by a target detection method, wherein the targets include at least one or more of people, vehicles, and street lamps.
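The first ROI method (pixels in the top fifty percent of brightness) can be sketched as follows; using the median as the fifty-percent threshold is an assumption about how ties are handled:

```python
import numpy as np

def roi_mask_top_half(ir_image):
    """Return a boolean mask marking infrared pixels whose brightness is in
    the top fifty percent (at or above the median)."""
    return ir_image >= np.median(ir_image)
```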
A vehicle-mounted image stitching processing method comprises the following steps:
acquiring an infrared image and a visible-light image at the same viewing angle;
according to the pixel coordinates of the region of interest selected from the infrared image, selecting from the visible-light image the region image with the same coordinates for virtual exposure processing;
stitching the infrared image and the processed visible-light image left and right to obtain a stitched image, and sending the stitched image to a display screen;
the virtual exposure processing uses the virtual exposure processing method for a vehicle-mounted image described above.
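The left-right stitching step reduces to a horizontal concatenation when both images share height and channel count (names are illustrative, not from the patent):

```python
import numpy as np

def stitch_side_by_side(infrared, visible_processed):
    """Stitch the infrared image (left) and the virtually exposed
    visible-light image (right) into one frame for the display screen."""
    assert infrared.shape[0] == visible_processed.shape[0], "heights must match"
    return np.hstack([infrared, visible_processed])
```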
An image processing apparatus, characterized by comprising:
an image acquisition module for acquiring the image captured by the visible-light camera;
an inverted image generation module for inverting the acquired image to generate an inverted image;
an atmospheric illumination intensity generation module for acquiring the atmospheric illumination intensity;
a transmission rate generation module for acquiring the transmission rate of the grayscale image;
an image restoration module for generating a restored image according to a preset image restoration model, the inverted image, the atmospheric illumination intensity, and the transmission rate;
the process of obtaining the atmospheric illumination intensity comprises: calculating the atmospheric illumination intensity block by block, taking a preset block as the unit, and taking the average value within each block as that block's final atmospheric illumination intensity A_Ψk:
A_Ψk = (1/|Ψk|) · Σ_{(x,y)∈Ψk} (a_k · I_d(x,y) + b_k)
In the above formula, A_Ψk represents the atmospheric illumination intensity over Ψk; |Ψk| denotes the total number of pixels in the Ψk region; I_d(x,y) is the difference value of the pixels in the Ψk local region; a_k, b_k are coefficients in the Ψk local region; (x,y) represents the two-dimensional coordinates of a pixel;
the difference value is defined as the difference between the maximum-channel image value and the minimum-channel image value obtained from the inverted image.
The image processing apparatus, further, wherein the step of calculating a_k, b_k comprises: establishing a relation between the atmospheric illumination intensity and the difference value of the pixels in the local region Ψk, then establishing an objective function E(a_k, b_k) between the atmospheric illumination intensity A(x,y) and the brightness of the inverted image, and solving for a_k, b_k by linear regression or the least-squares method under the condition that the objective function attains its minimum value:
b_k = Ī_Ψk - a_k·μ_k
In the above formula, μ_k and σ_k are respectively the mean and variance of the difference values I_d(x,y) of all pixels in the preset Ψk local region; I(x,y) is the brightness of a pixel of the inverted image; Ī_Ψk is the average of I(x,y) over all pixels in the Ψk local region; |Ψk| is the total number of pixels in the Ψk local region; λ is an error adjustment factor.
Advantageous effects:
1. Aiming at the poor imaging caused by insufficient exposure of current visible-light cameras, the invention provides a virtual exposure image processing method. By improving the prior-art calculation of the atmospheric illumination intensity and the transmission rate, the technical scheme of the invention resists the interference of vehicle lamps, street lamps, and brighter white objects under insufficient illumination, so that the processed image is smoother and the imaging effect is better.
Drawings
The following drawings are only schematic illustrations and explanations of the present invention, and do not limit the scope of the present invention.
FIG. 1 is a schematic view illustrating a virtual exposure process of a vehicle-mounted image according to an embodiment of the present invention.
FIG. 2 is a schematic structural diagram of an apparatus for vehicle-mounted imaging with a visible-light camera according to an embodiment of the present invention.
Fig. 3 is a schematic structural diagram of an apparatus for vehicle-mounted image including an infrared camera and a visible light camera according to an embodiment of the present invention.
Detailed Description
For a more clear understanding of the technical features, objects, and effects herein, embodiments of the present invention will now be described with reference to the accompanying drawings, in which like reference numerals refer to like parts throughout. For the sake of simplicity, the drawings are schematic representations of relevant parts of the invention and are not intended to represent actual structures as products. In addition, for simplicity and clarity of understanding, only one of the components having the same structure or function is schematically illustrated or labeled in some of the drawings.
As for the control system, the functional module, application program (APP), is well known to those skilled in the art, and may take any suitable form, either hardware or software, and may be a plurality of functional modules arranged discretely, or a plurality of functional units integrated into one piece of hardware. In its simplest form, the control system may be a controller, such as a combinational logic controller, a micro-programmed controller, or the like, so long as the operations described herein are enabled. Of course, the control system may also be integrated as a different module into one physical device without departing from the basic principle and scope of the invention.
The term "connected" in the present invention may include direct connection, indirect connection, communication connection, and electrical connection, unless otherwise specified.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, values, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, values, steps, operations, elements, components, and/or groups thereof. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
It should be understood that the term "vehicle" or "vehicular" or other similar terms as used herein generally includes motor vehicles such as passenger automobiles including Sport Utility Vehicles (SUVs), buses, trucks, various commercial vehicles, watercraft including a variety of boats, ships, aircraft, and the like, and includes hybrid vehicles, electric vehicles, plug-in hybrid electric vehicles, hydrogen-powered vehicles, and other alternative fuel vehicles (e.g., fuels derived from non-petroleum sources). As referred to herein, a hybrid vehicle is a vehicle having two or more power sources, such as both gasoline-powered and electric-powered vehicles.
A virtual exposure processing device for images captured by a vehicle-mounted camera, referring to FIGS. 2 and 3, specifically comprises: at least one visible-light camera and a micro-processing controller, wherein the visible-light camera is connected with the micro-processing controller through a vehicle-mounted Ethernet bus, is mounted on the vehicle, and is used for capturing pictures;
the visible-light camera comprises a fisheye camera, on which a rotating part and a lifting part are arranged;
the rotating part can drive the visible-light camera to rotate, and the lifting part can drive the visible-light camera to rise or descend;
the virtual exposure processing device further comprises an infrared camera connected with the micro-processing controller.
The connection of the visible-light camera and the infrared camera to the micro-processing controller includes a vehicle-mounted Ethernet bus;
a virtual exposure method of an onboard image, see fig. 1, comprising: acquiring a picture shot by a visible light camera, detecting the current illumination intensity through an illumination sensor, and starting a virtual exposure method to perform virtual exposure processing on the picture shot by the visible light when the illumination intensity is smaller than a preset threshold value;
step S1, converting the image shot by the visible camera into an RGB format diagram;
specifically, there are various formats of images of the camera, such as a widely used bayer format image, and when the acquired image is a bayer image, the bayer image needs to be converted into an RGB image;
step S2: inverting the RGB format image to generate an inverted image;
step S3, obtaining the atmospheric illumination intensity;
step S4, obtaining the transmission rate of the gray image;
step S5 of generating a restored image based on a pre-configured image restoration model, the inverted image, the atmospheric light intensity, and the transmission rate;
in step S1, the image format is RGB; if the original image captured by the camera is in Bayer format, it is converted into an RGB image by interpolation;
in step S2, the R, G, and B components of the input visible-light image are inverted to obtain the inverted image I_revert(x,y);
I_revert_R(x,y), I_revert_G(x,y), I_revert_B(x,y) respectively represent the R, G, and B channel images of the pixels after inversion, and (x,y) represents the position coordinates of a two-dimensional pixel.
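The inversion formula itself is not spelled out in the text; for 8-bit data the conventional choice, assumed here, is I_revert_c(x, y) = 255 - I_c(x, y) per channel:

```python
import numpy as np

def invert_rgb(image):
    """Invert every channel of an 8-bit RGB image: I_revert = 255 - I.
    The result is float so later arithmetic does not wrap around."""
    return 255.0 - image.astype(np.float64)
```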
In the prior art, one way to calculate the atmospheric illumination intensity is to select the brightest 0.1% of pixels in the dark-channel picture, map those pixel positions back to the corresponding positions in the original picture, find the highest brightness value among them, and use it as the overall atmospheric illumination intensity A. This calculation is very inaccurate, however, especially under insufficient illumination: the original RGB image is already very dark, and the dark-channel image obtained from the RGB channels is biased and produces a large number of colour spots under the illumination of high-beam lamps and street lamps. To reduce the interference of vehicle lamps, street lamps, and the like on the atmospheric light value under insufficient light, while still guaranteeing real-time processing of the images acquired by the camera, one method of calculating the atmospheric light value in step S3 of this embodiment is as follows:
Maximum-channel image acquisition: from the RGB inverted image, the image composed of the per-pixel maxima of the channels is taken as the maximum-channel image I_m:
I_m = max{I_revert_R(x,y), I_revert_G(x,y), I_revert_B(x,y)}
The pixel brightnesses of the maximum-channel image are sorted by magnitude, the brightest 0.1%-10% of pixels are selected, the brightness of the original image at the same positions is taken, and the average of these brightnesses is used as the atmospheric light value, which avoids overestimating it. For example, in a real low-light image with brighter light present, such as lamp light or moonlight, the atmospheric light is not the brightest value in the image; selecting the "brightest pixel" as the atmospheric light would therefore introduce noise. To avoid this, this implementation uses the average intensity based on the maximum-channel image as the estimate. Experimental results show that the quality of the enhanced image is improved, and that the maximum-channel image resists the interference of lamp light and moonlight more easily than the dark-channel image. This calculation is fast, meeting the real-time requirements of on-vehicle picture display and computation.
Although the above calculation solves for the average of a preset number of brightnesses as the overall atmospheric light intensity based on a bright-channel estimate, in practice the atmospheric light intensity is not constant, particularly when light is insufficient, such as at night. Because the illumination varies spatially and the sky contributes no illumination to local areas at night, the prior-art method of using the maximum value has a large error.
It is assumed that the human eye perceives the brightness of an object from ambient illumination and reflection from the surface of the object. Mathematically, the restored image J can be written as the product of the atmospheric illumination intensity a and the reflectance R:
J(x,y)=A(x,y)R(x,y);
in the above equation, J (x, y) represents a restored image, a (x, y) represents an atmospheric illumination intensity, R (x, y) represents a reflectance, and x, y represent two-dimensional coordinates of a pixel.
The inverted image can be expressed as:
I_revert(x,y) = A(x,y)·(R(x,y)·t(x,y) + (1 - t(x,y)));
In the above formula, I_revert represents the inverted image converted from the image captured by the camera, A(x,y) represents the atmospheric illumination intensity, R(x,y) represents the reflectance, and t(x,y) represents the transmission rate.
Conventional algorithms estimate A(x,y) from the low-frequency components using Gaussian filtering, but Gaussian smoothing is isotropic and does not preserve edges, which makes the result inaccurate and loses some local information.
To smooth the image while preserving edges, another method of calculating the atmospheric illumination intensity in step S3 of this embodiment is as follows:
Step S31, according to the preset maximum-channel value and minimum-channel value, compute their difference and record it as the difference value I_d:
The maximum channel value is:
Imax(x,y)=max{Irevert_R(x,y),Irevert_G(x,y),Irevert_B(x,y)}
the minimum channel values are:
Imin(x,y)=min{Irevert_R(x,y),Irevert_G(x,y),Irevert_B(x,y)}
Id(x,y)=Imax(x,y)-Imin(x,y)
In step S32, the atmospheric illumination intensity A(x,y) is linear in the difference value within a local region centered on pixel k:
A(x,y) = a_k · I_d(x,y) + b_k, (x,y) ∈ Ψk;
In the above formula, a_k, b_k are the coefficients in the Ψk local region, which are constant in this implementation.
Step S33, establishing the atmospheric illumination intensity A (x, y) and the reverse imageSolving for an objective function between the luminances of (a)k,bk;
E(ak,bk)=Σ(x,y)∈Ψk[(ak*Id(x,y)+bk-I(x,y))²+λ*ak²]
in the above formula, E(ak, bk) expresses the objective function and λ is an error adjustment factor;
solving the objective function by linear regression or the least square method gives ak, bk:
ak=((1/|Ψk|)*Σ(x,y)∈Ψk Id(x,y)*I(x,y)-μk*Īk)/(σk+λ)
bk=Īk-ak*μk
in the above formulas, μk, σk are respectively the mean and variance of the difference value Id(x, y) of all pixels in the preset local region Ψk, I(x, y) is the luminance of the pixels of the inverted image, Īk is the average of I(x, y) over all pixels in the local region Ψk, |Ψk| is the total number of pixels in the local region Ψk, and λ is the error adjustment factor.
Step S34, when calculating the final atmospheric illumination intensity A, the atmospheric illumination intensity is calculated in blocks by taking a preset block as the unit, and the average value within each block is taken as the final atmospheric illumination intensity of that block:
AΨk=(1/|Ψk|)*Σ(x,y)∈Ψk A(x,y)
in the above formula, AΨk represents the atmospheric illumination intensity of the block Ψk, and |Ψk| denotes the total number of pixel points in the block Ψk;
This calculation method avoids the final imaging distortion caused in the prior art by substituting a single maximum or fixed value for the atmospheric illumination intensity of the whole image; it also solves the technical problems of smoothing the image while preserving edges, so a better imaging effect is obtained.
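Steps S31-S34 can be sketched in Python as a blockwise least-squares fit of the local linear model A = ak*Id + bk followed by block averaging. The block size and λ below are assumed parameters, and the non-overlapping block layout is a simplification of the local regions Ψk:

```python
import numpy as np

def estimate_atmospheric_light(i_d, luminance, block=15, lam=1e-3):
    """Blockwise atmospheric light A via the local linear model A = a_k*Id + b_k.

    i_d:       H x W difference image Id(x, y)
    luminance: H x W luminance I(x, y) of the inverted image
    block:     side length of the square region Psi_k (assumed)
    lam:       error adjustment factor lambda (assumed value)
    Returns an H x W map where each block holds its average A (step S34).
    """
    h, w = i_d.shape
    a_map = np.empty_like(i_d, dtype=float)
    for y in range(0, h, block):
        for x in range(0, w, block):
            d = i_d[y:y+block, x:x+block]
            i = luminance[y:y+block, x:x+block]
            mu, var = d.mean(), d.var()          # mean/variance of Id in Psi_k
            a_k = (np.mean(d * i) - mu * i.mean()) / (var + lam)
            b_k = i.mean() - a_k * mu            # least-squares intercept
            a_map[y:y+block, x:x+block] = np.mean(a_k * d + b_k)  # block average
    return a_map
```

Because the fit is done per block rather than by isotropic Gaussian smoothing, flat regions get a flat estimate while block statistics still follow local structure, in the spirit of the edge-preserving goal stated above.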
The image transmission rate is another critical factor. In the prior art, some methods for calculating the transmission rate still have large errors; in particular, when bright objects such as lamplight and moonlight are present, selecting the transmission rate that maximizes image contrast as the initial value leaves the enhancement of bright foreground objects insignificant. To solve this problem, one possible calculation method is:
in step S4, one way of calculating the transmission rate includes:
step S41, acquiring a grayscale image of the inverted image:
Igray(x,y)=0.3*Irevert_R(x,y)+0.59*Irevert_G(x,y)+0.11*Irevert_B(x,y)
step S42, partitioning the grayscale image into blocks Ψk (e.g., blocks of 15×15 pixels);
step S43, taking the block as the unit, according to the atmospheric illumination intensity A and the recovery formula
J(x,y)=(Irevert(x,y)-A(x,y))/t(x,y)+A(x,y)
the transmission rate t(x, y) of the current block is set to 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8 and 0.9 in turn, the corresponding J(x, y) is obtained for each value, and the transmission rate whose J(x, y) maximizes the image contrast is recorded as t1(x, y);
The formula for calculating the contrast is:
C=Σ(x,y)(J(x,y)-J̄)²/N
wherein J(x, y) is the restored image generated at a predetermined transmission rate, J̄ is the mean (common-mode component) of the restored block, and N is the number of pixel points of the block corresponding to J(x, y);
By introducing the correction factor, the final transmission rate can effectively enhance the information of objects such as lamplight and white spots while maintaining the spatial continuity of the transmission rate, so the restored scene image has a smoother visual effect.
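Step S43's search over candidate transmission rates can be sketched as follows; the variance-based contrast measure matches the formula above, and note that for the linear recovery formula this raw criterion is biased toward small t, which is part of what the correction factor discussed above compensates for:

```python
import numpy as np

def best_block_transmission(block, a, candidates=np.arange(0.1, 1.0, 0.1)):
    """Pick the candidate transmission t that maximizes the variance-based
    contrast of the restored block J = (I - A)/t + A (recovery formula).

    block: H x W inverted-image block; a: its atmospheric light (scalar).
    """
    best_t, best_c = candidates[0], -np.inf
    for t in candidates:
        j = (block - a) / t + a            # restore with candidate t
        c = np.mean((j - j.mean()) ** 2)   # contrast: variance around the mean
        if c > best_c:
            best_t, best_c = t, c
    return best_t
```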
Another method of calculating the transmission rate includes:
in night photographing, unlike an image captured in the daytime, there are generally multiple light sources in a night image, and the color characteristics of the light source region (LR) differ greatly from those of the non-light source region (NR). Therefore, both the LR and NR regions need to be considered when calculating the transmission rate.
Step S4A, obtaining the atmospheric illumination intensity value; the transmission rate tLR corresponding to the maximum channel of the light source region and the transmission rate tNR corresponding to the minimum channel of the non-light source region are calculated by the following formulas, respectively:
in the above formulas, Ic(x, y) denotes the value of the r, g, b channels of the pixels of the inverted image in a region Ψk belonging to LR (light source region) or NR (non-light source region), where Ψk represents a local region centered on pixel k; Ac(x, y) is the atmospheric illumination intensity of the r, g, b channels of each pixel in the region Ψk.
Note that the transmission rate of the maximum channel for the light source region and the transmission rate of the minimum channel for the non-light source region are effective only in their respective regions, so they must be combined to obtain the final transmission rate. One prior-art method divides the light source and non-light source regions with a hard boundary line in the image, which makes it difficult to decide the attribution of pixels near the boundary. In this embodiment, a luminance-perception weighting method is proposed to calculate the probability α(x, y) that each pixel belongs to the light source region: in the light source region, at least one of the RGB channels of a pixel has high intensity, and the higher that value, the more likely the pixel belongs to the light source region;
step S4B, obtaining the brightness perception coefficient α(x, y);
step S4C, calculating the final transmission rate t (x, y):
t(x,y)=tLR(x,y)*α(x,y)+tNR(x,y)*(1-α(x,y))
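The weighted fusion in step S4C can be sketched as below. The patent's own formula for α(x, y) is not reproduced in this text, so the choice here (per-pixel maximum RGB channel of the inverted image, so brighter pixels lean toward the light-source estimate) is an assumption consistent with the description above:

```python
import numpy as np

def blend_transmission(t_lr, t_nr, i_revert):
    """Fuse light-source and non-light-source transmission maps with a
    brightness weight alpha: t = tLR*alpha + tNR*(1 - alpha).

    t_lr, t_nr: H x W transmission maps; i_revert: H x W x 3 inverted image.
    """
    alpha = np.clip(i_revert.max(axis=2), 0.0, 1.0)  # prob. of belonging to LR
    return t_lr * alpha + t_nr * (1.0 - alpha)
```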
in step S5, the acquiring of the restored image specifically includes:
respectively substituting, according to the recovery formula, the three channel values Irevert_R(x, y), Irevert_G(x, y), Irevert_B(x, y), the atmospheric illumination intensity A(x, y) and the transmission rate t(x, y) to obtain the recovered values Jrevert_R(x, y), Jrevert_G(x, y), Jrevert_B(x, y) of the three channels of the restored image Jrevert(x, y);
The calculation formula of the restored image is:
Jrevert(x,y)=(Irevert(x,y)-A(x,y))/max(t(x,y),t0)+A(x,y)
and the three-channel recovery values are calculated as:
Jrevert_c(x,y)=(Irevert_c(x,y)-A(x,y))/max(t(x,y),t0)+A(x,y), c∈{R,G,B}
in the above equations, if the calculated t(x, y) is close to 0, direct restoration severely distorts the color of the region, so a lower limit value t0 is set; in this embodiment t0 ranges from 0.1 to 0.15;
step S6, performing Gamma correction on the restored image Jrevert_R(x, y), Jrevert_G(x, y), Jrevert_B(x, y) to obtain the corrected image Jgamma_R(x, y), Jgamma_G(x, y), Jgamma_B(x, y);
step S7, inverting the corrected image to obtain the enhanced, clear visible light image Ioutput(x, y).
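The full pipeline of steps S2 and S5-S7 can be sketched end to end as follows. The gamma value and the [0, 1] float image convention are assumptions; A and t are taken as precomputed maps from steps S3-S4:

```python
import numpy as np

def virtual_exposure(image, a_map, t_map, t0=0.1, gamma=2.2):
    """Sketch of steps S2, S5-S7: invert, restore per channel with the
    lower-bounded transmission, gamma-correct, and invert back.

    image: H x W x 3 float array in [0, 1] from the visible-light camera.
    a_map: H x W atmospheric illumination intensity A(x, y).
    t_map: H x W transmission rate t(x, y); t0 bounds it away from zero
           (0.1-0.15 per the text). The gamma value is an assumption.
    """
    i_revert = 1.0 - image                          # S2: inverted image
    t = np.maximum(t_map, t0)[..., None]            # clamp t with t0
    a = a_map[..., None]
    j_revert = (i_revert - a) / t + a               # S5: recovery formula
    j_revert = np.clip(j_revert, 0.0, 1.0)
    j_gamma = j_revert ** (1.0 / gamma)             # S6: Gamma correction
    return 1.0 - j_gamma                            # S7: invert back
```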
In the images shot by the camera, the main purpose is to help the driver identify moving objects and obstacles more clearly. Therefore, to quickly find objects of interest in the images, an infrared camera can be used together with the visible light camera, and a target area to be processed is selected from the images shot by the infrared camera.
Specifically, acquiring a region of interest from an infrared thermal imaging image;
the method for acquiring the region of interest comprises the following steps: sorting the brightness values of the pixels of the whole image, and then taking the pixels whose brightness values rank in the top fifty percent as the pixels of the region of interest; or detecting regions of vehicles, pedestrians and the like in the infrared thermal imaging image by means of target detection, and then taking the pixels of those regions as the pixels of the region of interest. In this embodiment, the pixel-brightness-sorting method is used to obtain the region of interest.
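The pixel-brightness-sorting variant (top fifty percent) amounts to a median threshold on the infrared image; a minimal sketch:

```python
import numpy as np

def roi_mask_from_infrared(ir_image):
    """Region of interest: pixels whose brightness is in the top fifty
    percent of the infrared image (the pixel-brightness-sorting method).

    ir_image: H x W float array of infrared brightness values.
    Returns a boolean mask, True where the pixel belongs to the ROI.
    """
    threshold = np.median(ir_image)      # 50th percentile of brightness
    return ir_image >= threshold         # brighter half of the image
```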
A vehicle-mounted image stitching processing method comprises the following steps:
acquiring an infrared image and a visible light image at the same visual angle;
according to the pixel coordinates of the region of interest selected from the infrared image, selecting from the visible light image the region image with the same coordinates for virtual exposure processing;
and splicing the infrared image and the visible light image left and right, and sending the spliced image to a screen.
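The left-right splicing step is a simple horizontal concatenation when both images share a viewpoint and height; an illustrative sketch:

```python
import numpy as np

def stitch_side_by_side(ir_image, visible_image):
    """Splice the infrared image and the processed visible-light image
    left and right for display on the screen.

    Both inputs are H x W x 3 arrays captured at the same visual angle;
    only the heights must match for horizontal concatenation.
    """
    assert ir_image.shape[0] == visible_image.shape[0], "heights must match"
    return np.hstack((ir_image, visible_image))
```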
The driver can view the processed high-definition image through the display screen.
What has been described above is only a preferred embodiment of the present invention, and the present invention is not limited to the above examples. It is clear to those skilled in the art that neither the form of this embodiment nor the ways in which it may be adjusted are limited thereto. It is to be understood that other modifications and variations directly derivable or suggested to one skilled in the art without departing from the basic concept of the invention are considered to be within the scope of the invention.
Claims (11)
1. A virtual exposure method for vehicle-mounted images is characterized by comprising the following steps:
step S1, acquiring an image shot by a visible light camera;
step S2 of inverting the acquired image to generate an inverted image;
step S3, obtaining the atmospheric illumination intensity;
step S4, obtaining the transmission rate of the gray image;
step S5 of generating a restored image based on a pre-arranged image restoration model, the inverted image, the atmospheric light intensity, and the transmission rate;
in step S3, one manner of obtaining the atmospheric illumination intensity comprises: calculating the atmospheric illumination intensity in blocks by taking a preset block as the unit, and taking the average value within each block as the final atmospheric illumination intensity AΨk of that block:
AΨk=(1/|Ψk|)*Σ(x,y)∈Ψk(ak*Id(x,y)+bk)
in the above formula, AΨk represents the atmospheric illumination intensity of the block Ψk; |Ψk| denotes the total number of pixel points in the block Ψk; Id(x, y) is the difference value of the pixels in the local region Ψk; ak, bk are the coefficients in the local region Ψk; (x, y) represents the two-dimensional coordinates of a pixel;
the difference value is defined as the difference between the maximum channel image value and the minimum channel image value obtained from the inverted image.
2. The virtual exposure method for the vehicle-mounted image according to claim 1, wherein the calculation for obtaining the maximum channel image value in the inverted image is: Imax(x,y)=max{Irevert_R(x,y),Irevert_G(x,y),Irevert_B(x,y)};
the calculation for obtaining the minimum channel image value in the inverted image is: Imin(x,y)=min{Irevert_R(x,y),Irevert_G(x,y),Irevert_B(x,y)};
the difference value is calculated as: Id(x,y)=Imax(x,y)-Imin(x,y);
in the above formulas, Irevert_R(x, y), Irevert_G(x, y), Irevert_B(x, y) represent the values of the R, G, B channels of the inverted image, respectively.
3. The virtual exposure method for the vehicle-mounted image according to claim 1, wherein the step of calculating ak, bk comprises: establishing a relation between the atmospheric illumination intensity and the difference value of the pixel points in the local region Ψk, then establishing an objective function E(ak, bk) between the atmospheric illumination intensity A(x, y) and the luminance of the inverted image, and solving the objective function by linear regression or the least square method to obtain ak, bk:
ak=((1/|Ψk|)*Σ(x,y)∈Ψk Id(x,y)*I(x,y)-μk*Īk)/(σk+λ)
bk=Īk-ak*μk
in the above formulas, μk, σk are respectively the mean and variance of the difference value Id(x, y) of all pixels in the preset local region Ψk, I(x, y) is the luminance of the pixels of the inverted image, Īk is the average of I(x, y) over all pixels in the local region Ψk, |Ψk| is the total number of pixels in the local region Ψk, and λ is an error adjustment factor.
4. The virtual exposure method for vehicle-mounted images according to claim 1, wherein another manner of obtaining the atmospheric illumination intensity in step S3 comprises: acquiring the maximum channel image, specifically an image composed of the maximum RGB (red, green, blue) value of each pixel point of the inverted image:
Im=max{Irevert_R(x,y),Irevert_G(x,y),Irevert_B(x,y)}
sorting the pixel points of the maximum channel image by brightness, selecting the brightest 0.1%-10% of the pixel points, taking the brightness of the original image at the same positions as the selected pixel points, and using the average of those brightness values as the atmospheric illumination intensity.
5. The virtual exposure method for vehicle-mounted images according to claim 1, wherein in step S4, the transmission rate is obtained in a manner including:
step S41, acquiring a grayscale image of the inverted image:
Igray(x,y)=0.3*Irevert_R(x,y)+0.59*Irevert_G(x,y)+0.11*Irevert_B(x,y)
step S42, blocking the grayscale image;
step S43, taking the block as the unit, according to the atmospheric illumination intensity A and the recovery formula
J(x,y)=(Irevert(x,y)-A(x,y))/t(x,y)+A(x,y)
the transmission rate t(x, y) of the current block is set to several values between 0.1 and 0.9 respectively, the corresponding J(x, y) is calculated for each value, and the transmission rate whose J(x, y) maximizes the image contrast is recorded as t1(x, y);
the formula for calculating the contrast is:
C=Σ(x,y)(J(x,y)-J̄)²/N
wherein J(x, y) is the restored image generated at a predetermined transmission rate, J̄ is the mean (common-mode component) of the restored block, and N is the number of pixel points of the block corresponding to J(x, y).
6. The method of claim 1, wherein another manner of acquiring the transmission rate comprises:
step S4A, obtaining the transmission rate tLR corresponding to the maximum channel of the light source region and the transmission rate tNR corresponding to the minimum channel of the non-light source region, calculated by the following formulas, respectively:
in the above formulas, Ic(x, y) denotes the value of the r, g, b channels of the pixels of the inverted image in a region Ψk belonging to LR or NR, where Ψk represents a local region centered on pixel k; Ac(x, y) is the atmospheric illumination intensity of the r, g, b channels of each pixel in the region Ψk;
step S4B, obtaining a brightness perception coefficient alpha (x, y);
step S4C, calculating the final transmission rate t (x, y):
t(x,y)=tLR(x,y)*α(x,y)+tNR(x,y)*(1-α(x,y)).
7. the method for virtually exposing an on-vehicle image according to claim 1, wherein in step S5, the obtaining of the restored image specifically includes:
respectively substituting, according to the recovery formula, the three channel values Irevert_R(x, y), Irevert_G(x, y), Irevert_B(x, y), the atmospheric illumination intensity A(x, y) and the transmission rate t(x, y) to obtain the recovered values Jrevert_R(x, y), Jrevert_G(x, y), Jrevert_B(x, y) of the three channels of the restored image Jrevert(x, y);
the calculation formula of the restored image is:
Jrevert_c(x,y)=(Irevert_c(x,y)-A(x,y))/max(t(x,y),t0)+A(x,y), c∈{R,G,B}.
8. The virtual exposure method of the vehicle-mounted image according to claim 1, further comprising the steps of taking an image by an infrared camera and selecting a target area of interest to be processed from the image taken by the infrared camera; the method for acquiring the target area of interest comprises the following steps: sorting the brightness values of the pixels of the whole infrared image, and then taking the pixels whose brightness values rank in the top fifty percent as the pixels of the region of interest;
or selecting a target from the infrared image for detection by a target detection method, wherein the target at least comprises one or more of people, vehicles and street lamps.
9. A vehicle-mounted image stitching processing method is characterized in that,
acquiring an infrared image and a visible light image at the same visual angle;
selecting a region image with the same coordinate as the region of interest in the infrared image from the visible light image according to the pixel coordinate of the region of interest selected from the infrared image for virtual exposure processing;
splicing the infrared image and the processed visible light image left and right to obtain a spliced image and sending the spliced image to a display screen;
the virtual exposure processing includes a virtual exposure processing method of the in-vehicle image according to any one of claims 1 to 8.
10. An image processing apparatus characterized by comprising:
an image acquisition module for acquiring the image shot by the visible camera,
the reverse image generation module is used for reversing the acquired image to generate a reverse image;
the atmospheric illumination intensity generating module is used for acquiring atmospheric illumination intensity;
the transmission rate generating module is used for acquiring the transmission rate of the gray level image;
the image restoration module is used for generating a restoration image according to a preset image restoration model, a reversal image, the atmospheric illumination intensity and the transmission rate;
the process of obtaining the atmospheric illumination intensity comprises the following steps: calculating the atmospheric illumination intensity in blocks by taking a preset block as the unit, and taking the average value within each block as the final atmospheric illumination intensity AΨk of that block:
AΨk=(1/|Ψk|)*Σ(x,y)∈Ψk(ak*Id(x,y)+bk)
in the above formula, AΨk represents the atmospheric illumination intensity of the block Ψk; |Ψk| denotes the total number of pixel points in the block Ψk; Id(x, y) is the difference value of the pixels in the local region Ψk; ak, bk are the coefficients in the local region Ψk; (x, y) represents the two-dimensional coordinates of a pixel;
the disparity value is defined as the difference between the maximum pass image value and the minimum pass image value obtained from the inverted image.
11. The image processing apparatus as claimed in claim 10, wherein the step of calculating ak, bk comprises: establishing a relation between the atmospheric illumination intensity and the difference value of the pixel points in the local region Ψk, then establishing an objective function E(ak, bk) between the atmospheric illumination intensity A(x, y) and the luminance of the inverted image, and solving the objective function by linear regression or the least square method to obtain ak, bk:
ak=((1/|Ψk|)*Σ(x,y)∈Ψk Id(x,y)*I(x,y)-μk*Īk)/(σk+λ)
bk=Īk-ak*μk
in the above formulas, μk, σk are respectively the mean and variance of the difference value Id(x, y) of all pixels in the preset local region Ψk, I(x, y) is the luminance of the pixels of the inverted image, Īk is the average of I(x, y) over all pixels in the local region Ψk, |Ψk| is the total number of pixels in the local region Ψk, and λ is an error adjustment factor.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010595985.2A CN111800586B (en) | 2020-06-24 | 2020-06-24 | Virtual exposure processing method for vehicle-mounted image, vehicle-mounted image splicing processing method and image processing device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111800586A true CN111800586A (en) | 2020-10-20 |
CN111800586B CN111800586B (en) | 2021-08-20 |
Family
ID=72803198
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010595985.2A Active CN111800586B (en) | 2020-06-24 | 2020-06-24 | Virtual exposure processing method for vehicle-mounted image, vehicle-mounted image splicing processing method and image processing device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111800586B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113840123A (en) * | 2020-06-24 | 2021-12-24 | 上海赫千电子科技有限公司 | Image processing device of vehicle-mounted image and automobile |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104899844A (en) * | 2015-06-30 | 2015-09-09 | 北京奇艺世纪科技有限公司 | Image defogging method and device |
CN105991938A (en) * | 2015-03-04 | 2016-10-05 | 深圳市朗驰欣创科技有限公司 | Virtual exposure method, device and traffic camera |
CN106530246A (en) * | 2016-10-28 | 2017-03-22 | 大连理工大学 | Image dehazing method and system based on dark channel and non-local prior |
CN109427041A (en) * | 2017-08-25 | 2019-03-05 | 中国科学院上海高等研究院 | A kind of image white balance method and system, storage medium and terminal device |
US20190304120A1 (en) * | 2018-04-03 | 2019-10-03 | Altumview Systems Inc. | Obstacle avoidance system based on embedded stereo vision for unmanned aerial vehicles |
Non-Patent Citations (1)
Title |
---|
陆天舒等: "基于全视野数字图像的能见度估算方法", 《应用气象学报》 * |
Also Published As
Publication number | Publication date |
---|---|
CN111800586B (en) | 2021-08-20 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||