CN113038029A - Compensation method for enhancing night vision effect

Info

Publication number
CN113038029A
CN113038029A (application CN202110332698.7A)
Authority
CN
China
Prior art keywords
night vision
module
image
infrared
video
Prior art date
Legal status
Pending
Application number
CN202110332698.7A
Other languages
Chinese (zh)
Inventor
段小燕 (Duan Xiaoyan)
梁卫华 (Liang Weihua)
Current Assignee
Shenzhen Xinjingyuan Technology Co., Ltd.
Original Assignee
Shenzhen Xinjingyuan Technology Co., Ltd.
Priority date: 2021-03-29
Filing date: 2021-03-29
Publication date: 2021-06-25
Application filed by Shenzhen Xinjingyuan Technology Co., Ltd.
Priority application: CN202110332698.7A (2021-03-29)
Published as: CN113038029A (2021-06-25)

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/76 - Cameras or camera modules comprising electronic image sensors; circuitry for compensating brightness variation in the scene by influencing the image signals
    • H04N 23/81 - Camera processing pipelines; components thereof for suppressing or minimising disturbance in the image signal generation
    • H04N 23/88 - Camera processing pipelines; components thereof for processing colour signals for colour balance, e.g. white-balance circuits or colour temperature control
    • H04N 5/33 - Details of television systems; transforming infrared radiation

Abstract

The invention provides a compensation method for enhancing night vision effect, and relates to the technical field of night vision enhancement. The method at least comprises the following steps: providing a night vision effect device comprising a night vision module, a statistical module, an infrared video module and a compensation module, with the statistical module connected to the night vision module, the infrared video module and the compensation module respectively; the compensation module comprises a filtering processing module, a matting processing module and a fusion module, wherein the filtering processing module is connected with the matting processing module and the matting processing module is connected with the fusion module. Because the method is a passive video imaging system that does not actively emit a light source, it is not easily detected by other equipment or people, which improves concealment and safety.

Description

Compensation method for enhancing night vision effect
Technical Field
The invention relates to the technical field of night vision effect enhancement, in particular to a compensation method for enhancing night vision effect.
Background
With the development of technology, low-light-level imaging has gradually come into general use, especially on special devices that require passive low-light imaging for concealment and safety. However, the low-light imaging systems of such devices lack sensitivity when operating in dark or dim outdoor environments, so the resulting images are faint and unclear.
At present, night vision is mainly enhanced by raising the effective light sensitivity through active illumination, but on some special equipment this is unsuitable because it reduces concealment and safety. In existing fusions of night vision systems with infrared systems, the frame-synchronization problem between the two systems is often not fully considered, so the fusion effect is poor; incomplete fusion methods or parameters also cause jagged edges and noise.
Some existing night vision and infrared fusion methods simply take the pixels of a partial-area image from the infrared video and overlay or cover them onto the visible-light video; as a result, the color of the overlaid area is abrupt, or the image is distorted, and the final effect is poor.
Existing methods for fusing visible-light video with infrared video also do not consider the fusion conditions under different illumination levels, and instead superimpose in the same way under all illumination conditions. The result is that, after the infrared is superimposed, the visible-light image shows double images, and the fused infrared content looks abrupt and does not blend with the original visible-light colors.
Finally, some visible-light and infrared fusion methods pay no attention to which areas need to be fused and which do not: there is no process of analyzing and gathering statistics over different regions of the visible-light image before fusing. When an infrared salient-region image is fused with visible light, ghosting and similar artifacts are produced.
Disclosure of Invention
Technical problem to be solved
Aiming at the defects of the prior art, the invention provides a compensation method for enhancing the night vision effect. It solves the problem of choosing the fusion mode and matching under different illumination conditions; it also solves the problem of region-by-region fusion with visible light by analyzing the actual image to determine which regions need fusion and which do not, thereby avoiding double images and virtual images in the fused video and genuinely improving the night vision effect of special equipment.
(II) technical scheme
Existing work mainly fuses ordinary visible-light video with infrared video, whereas this system focuses on fusing night vision video with infrared video. Night vision video and ordinary video both belong to visible-light imaging, but the night vision sensor of this system has stronger light sensitivity, which improves imaging under low-light conditions.
The invention provides a compensation method for enhancing night vision effect, which comprises the following steps:
Step 1: providing a night vision effect device comprising a night vision module, a statistical module, an infrared video module and a compensation module;
the statistical module is connected to the night vision module, the infrared video module and the compensation module respectively;
the compensation module comprises a filtering processing module, a matting processing module and a fusion module, wherein the filtering processing module is connected with the matting processing module, and the matting processing module is connected with the fusion module;
Step 2: setting the first central point of the night vision module's video frame to coincide with the second central point of the infrared module's video frame;
Step 3: setting an infrared emission source as the reference target to be shot and setting a group of illumination intensities; imaging with the night vision video and the infrared video under each set illumination intensity to obtain a night vision video image and an infrared video image; superposing the image of the infrared emission source from the infrared video image onto the night vision video image and adjusting the intensity of its brightness component Y1 until it matches the visible light well, recording the compensation light quantity at that moment as C1; by analogy, obtaining the data sets {Y1, C1}, {Y2, C2} ... {Yn, Cn} under the different intensities, which form a prestored illumination and compensation intensity threshold list;
Step 4: placing the infrared emission source in the shooting space, recording the pixel positions of the infrared emission source in the night vision video image and in the infrared image respectively, and recording and storing the deviation f between the two;
Step 5: the statistical module outputs the data sets {Y1, C1}, {Y2, C2} ... {Yn, Cn} to the compensation module;
Step 6: dividing the night vision video image into a number of sub-blocks; the statistical module computes the R/G/B mean of the whole night vision video image to obtain first mean data, computes the R/G/B means of all pixels in each sub-block and, at the same time, counts the pixels in each sub-block that fall within the set upper and lower R/G/B limits, obtaining as many groups of second mean data as there are sub-blocks; the first mean data and the second mean data are sent to the compensation module;
Step 7: performing noise reduction on the infrared video image with the filtering processing module, converting the gray-scale image of the infrared video image from the spatial domain into the frequency domain to obtain frequency-domain data;
Step 8: filtering the frequency-domain data and then converting it back into a gray-scale map;
Step 9: performing matting on the noise-reduced image with the matting processing module;
Step 10: converting, fusing and correcting the image obtained after matting to obtain a processed night vision video image, and performing white-balance processing on the processed night vision video image.
Further, step 7 specifically comprises the following steps:
Step 7A: changing the mode of the infrared video imaging module from the black-hot palette mode to the original gray-map mode;
Step 7B: performing a Fourier transform to obtain the continuous spectrum of the signal x(t), where the transform formula is:
X(f) = ∫ x(t) * e^(-j2πft) dt  (integrated over t from -∞ to +∞)
Step 7C: calculating the spectrum of the signal x(t) from the discrete samples x(nT) according to the DFT formula
X(k) = Σ_{n=0}^{N-1} x(n) * e^(-j2πkn/N),  k = 0, 1, ..., N-1;
Step 7D: converting the gray-scale image of the infrared video image from the spatial domain into the frequency domain.
Further, step 8 specifically comprises the following steps:
Step 8A: applying Gaussian low-pass filtering to the frequency-domain data, where the filter formula is:
H(u, v) = e^(-D^2(u, v) / (2*D0^2))
where D0 is a constant parameter and D(u, v) is the distance between a point (u, v) in the frequency domain and the center of the frequency-domain rectangle;
Step 8B: saving the data obtained in step 8A as H1;
Step 8C: processing the frequency-domain data with a Gaussian high-pass filter, where the formula is:
H(u, v) = 1 - e^(-D^2(u, v) / (2*D0^2))
Step 8D: saving the data obtained in step 8C as H2;
Step 8E: summing H1 and H2 pixel by pixel to obtain H;
Step 8F: transforming the summed frequency-domain data H back into a gray-scale map by the inverse Fourier transform:
x(n) = (1/N) * Σ_{k=0}^{N-1} X(k) * e^(j2πkn/N),  n = 0, 1, ..., N-1
Further, step 9 specifically comprises the following steps:
Step 9A: applying black-hot toning to the denoised image according to a palette;
Step 9B: drawing division lines on the whole toned image from outside to inside according to gradient grades;
Step 9C: performing pattern matching between the figures obtained by line drawing and division and prestored figures; if a figure can be identified by pattern matching, proceeding to step 10; otherwise processing it as its minimum enclosing circle and proceeding to step 10.
Further, in step 10, the image obtained after matting is converted, fused and corrected, specifically comprising the following steps:
Step 10A: mapping the pixel positions of the image matted out in step 9 into the night vision video according to the pre-measured position deviation f. The pixel X coordinate is mapped by the formula:
X_dst = X_src * (Width_dst / Width_src) + f_dst
wherein X_dst is the pixel X coordinate in the night vision video, X_src is the X coordinate value in the infrared video, Width_dst is the resolution width of the night vision video, Width_src is the resolution width of the infrared video, and f_dst is the compensated position offset of the infrared video;
the pixel Y coordinate is mapped by the formula:
Y_dst = Y_src * (Height_dst / Height_src) + f_dst
wherein Y_dst is the pixel Y coordinate in the night vision video, Y_src is the Y coordinate value in the infrared video, and Height_dst and Height_src are the image resolution heights;
Step 10B: forming a group of pixel coordinates {x1, y1}, {x2, y2}, ... {xn, yn} along the outline edge of the matte;
Step 10C: examining the area enclosed by this group of pixel coordinates: the peripheral pixel region circled by the matted image occupies certain blocks of the night vision video image; if the valid statistical count of the occupied blocks is close to their pixel count, the pixel values of the region occupied by the infrared-emission-source figure in the night vision video image are essentially uniform, i.e. the region contains no object and is an invalid blank pixel area, satisfying the first condition that it can be covered by the image of the infrared emission source; the brightness Y component value provided by the statistical module is then analyzed, and if it is small, the environment is dark and the second condition for infrared compensation is satisfied; fusion is allowed to start only if the first condition and the second condition are met simultaneously, otherwise the fusion ends;
Step 10D: starting the fusion: the fused pixel coordinates are mapped according to the coordinates in step 10A; the extracted black-hot image is first toned into color according to a white-hot palette so that human eyes can identify it, and the color RGB image is then converted to YUV for brightness compensation; the luminance conversion from color RGB is:
Gray (Y) = R*0.299 + G*0.587 + B*0.114
After conversion to YUV components, the converted luminance is denoted Y_old; referring to the illumination and compensation intensity threshold list, which relates brightness Y to compensation C as {Y, C}, the current entries are denoted Y_current and C_comp. The new component value Y_new is:
Y_new = Y_old * C_comp / Y_current
wherein:
Y_old is the value obtained after converting the infrared RGB to YUV;
Y_current is the current Y value provided by the statistical module to the compensation module, which also equals the Y value in the threshold list;
C_comp is the corresponding value in the threshold list;
similarly, the U and V components are converted in the same way, giving new Y, U and V components.
The YUV is then converted back into the RGB color domain by the formulas:
R = Y + 1.4075*(V - 128)
G = Y - 0.3455*(U - 128) - 0.7169*(V - 128)
B = Y + 1.779*(U - 128)
All pixels in the matte are converted into new RGB values by this rule, and the new RGB matte image is copied into the night vision video according to the pixel-mapping coordinate relation.
Further, the infrared emission source is a human.
(III) advantageous effects
The invention provides a compensation method for enhancing night vision effect. The method has the following beneficial effects:
1. The method performs deep, effective fusion of the night vision video and the infrared video module. It is a passive video imaging system that does not actively emit a light source, so it is not easily discovered by other equipment or people, improving concealment and safety.
2. The invention adopts fusion modes adapted to different brightness conditions, combined with region-by-region recognition and analysis of the night vision visible-light video, to perform targeted regional fusion; this improves the night vision effect while avoiding double images and ghosting.
3. By matting out valid object image blocks from the infrared video and compensating them into the night vision image, followed by night vision AWB white balance and color restoration, the invention markedly enhances the richness and quality of objects and scenes imaged by the night vision system.
Drawings
FIG. 1 is a block diagram of the present invention;
FIG. 2 is a schematic view of the processing direction of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The first embodiment is as follows:
As shown in figs. 1-2, an embodiment of the present invention provides a compensation method for enhancing the night vision effect, comprising the following.
The system is arranged and connected as shown in fig. 1: the night vision module is connected with the statistical module, the infrared video module is connected with the compensation module, and the statistical module is connected with the compensation module; the compensation module comprises a filtering processing module, a matting processing module and a fusion module, with the filtering processing module interconnected with the matting processing module and the matting processing module interconnected with the fusion module.
In an actual situation, in view of cost, the resolution of the video frame of the night vision module is higher than that of the video frame of the infrared module, for example, the resolution of the video frame of the night vision module is 1920x1080 or higher, and the resolution of the infrared module is 320x 240;
Preset stage: the brightness and compensation parameters are measured and stored. (General video imaging cannot measure absolute brightness data, but brightness detection and capture can be realized by the photosensitive system of the sensor & ISP under a standard environment.) The preset method is as follows:
A sealed space with controllable illumination intensity or grade is arranged, and the device of the method is placed in it. An infrared emission source is added as the reference target to be photographed, for example a natural person. A set of illumination intensities L1, L2 ... Ln is set. Night vision video and infrared video images are then captured under each set illumination intensity, and the two pictures are exported. The image of the person in the infrared video is superposed onto the night vision picture with an auxiliary tool, and the intensity of the Y1 component in the pixels of the human-shaped matte from the infrared picture is adjusted until it matches the visible light well; the compensation light quantity C1 at that moment is recorded, together with the current luminance reference Y1 (sensor & ISP output). By analogy, a group of data {Y1, C1}, {Y2, C2} ... {Yn, Cn} is acquired under the different intensities, forming the prestored illumination and compensation intensity threshold list.
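As a concrete illustration, the {Y, C} threshold list can be held as a small sorted table and queried with the luminance detected at run time. The following Python sketch is only a minimal interpretation of this preset data structure; the names (CompensationTable, lookup) and the nearest-entry lookup rule are assumptions, not details given in the patent.

```python
# Minimal sketch of the illumination/compensation threshold list (assumed design).
from bisect import bisect_left

class CompensationTable:
    """Stores (Y, C) pairs measured in the preset stage, sorted by Y."""

    def __init__(self, pairs):
        # pairs: iterable of (Y_luminance, C_compensation) from calibration
        self.pairs = sorted(pairs)
        self.ys = [y for y, _ in self.pairs]

    def lookup(self, y_current):
        """Return the (Y, C) entry whose luminance is closest to y_current."""
        i = bisect_left(self.ys, y_current)
        candidates = self.pairs[max(0, i - 1):i + 1]
        return min(candidates, key=lambda p: abs(p[0] - y_current))

# Example with made-up calibration values {Y1,C1} ... {Yn,Cn}:
table = CompensationTable([(16, 3.0), (32, 2.2), (64, 1.5), (128, 1.0)])
y_ref, c_comp = table.lookup(40)   # -> entry nearest to the detected Y
```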
The relative positions are also preprocessed. Because of installation factors, the same point in the scene differs in position and size between the night vision video image and the infrared video image, so the deviation must be calibrated. A photographed object such as a natural person is placed in the shooting space, the pixel positions of the person in the night vision video image and in the infrared image are recorded respectively, and the deviation f of the reference "person" image between the two images is calculated, recorded and stored (f is referenced to the night vision image), so that the fusion module can perform position mapping when fusing the matte.
The system is then powered on so that it works normally, and the night vision video module and the infrared video module each produce images. The statistical module detects the current brightness Y value of the night vision video in real time (the sensor configuration during detection must match the preset stage: aperture size, exposure mode, exposure time, gain and so on). The detected brightness Y value is used to search the compensation intensity threshold list {Y1, Y2, ... Yn}, and the corresponding compensation information (the enhancement luminance C and the current Y value) is output to the compensation module.
Meanwhile, the statistical module divides the entire image of the current night vision video into blocks, preferably 32x32 of them. It computes the R/G/B means of all pixels in each block and, at the same time, counts the pixels in each block that satisfy the set upper and lower RGB limits; the whole-image R/G/B mean statistics and the valid statistical counts of all 32x32 blocks are sent to the compensation module. This RGB statistics function is part of the AWB logic in the sensor & ISP or DSP, so it can be used directly without an additional hardware unit.
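The per-block statistics described above can be expressed compactly with NumPy. This sketch assumes a frame whose height and width are divisible by the 32x32 grid; the function name block_stats and the default RGB limits are illustrative assumptions.

```python
import numpy as np

def block_stats(frame, rgb_lo=(16, 16, 16), rgb_hi=(235, 235, 235), grid=32):
    """Per-block R/G/B means and count of pixels within [rgb_lo, rgb_hi].

    frame: HxWx3 uint8 night vision image, H and W divisible by `grid`.
    Returns (means: grid x grid x 3, valid_counts: grid x grid).
    """
    h, w, _ = frame.shape
    bh, bw = h // grid, w // grid
    # Reshape into (grid, bh, grid, bw, 3) so each block is contiguous.
    blocks = frame.reshape(grid, bh, grid, bw, 3)
    means = blocks.mean(axis=(1, 3))                      # R/G/B mean per block
    ok = np.logical_and(frame >= rgb_lo, frame <= rgb_hi).all(axis=2)
    valid_counts = ok.reshape(grid, bh, grid, bw).sum(axis=(1, 3))
    return means, valid_counts
```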
7. The infrared video is imaged and noise reduction is performed by the filtering processing module. The infrared video imaging module is switched from the black-hot palette mode to the original gray-map mode. Because infrared video is easily disturbed and relatively noisy, a Fourier transform is performed first (the fast Fourier transform, FFT, is generally adopted to speed up the transform). The transform formula is as follows:
X(f) = ∫ x(t) * e^(-j2πft) dt  (integrated over t from -∞ to +∞)
The above formula gives the continuous spectrum of the signal x(t). In a practical control system, however, only discrete sampled values x(nT) of the continuous signal x(t) are available, so the spectrum of x(t) must be calculated from the discrete signal x(nT).
The DFT of a finite-length discrete signal x(n), n = 0, 1, ..., N-1, is defined as:
X(k) = Σ_{n=0}^{N-1} x(n) * e^(-j2πkn/N),  k = 0, 1, ..., N-1
This converts the gray-scale image of the infrared video image from the spatial domain into the frequency domain.
8. After conversion into the frequency domain, the data is filtered with Gaussian filters. Gaussian low-pass filtering preserves the basic material of the picture well, so it is applied first; the filter formula is:
H(u, v) = e^(-D^2(u, v) / (2*D0^2))
In the above formula, D0 is a reasonable constant, preset according to actual conditions, and D(u, v) is the distance between a point (u, v) in the frequency domain and the center of the frequency-domain rectangle. The low-pass filtered data is saved as H1.
Since noise tends to occur in the high-frequency domain, the data is further processed by a Gaussian high-pass filter; the formula is as follows:
H(u, v) = 1 - e^(-D^2(u, v) / (2*D0^2))
In the Gaussian high-pass filter formula, D(u, v) is again the distance between a point (u, v) in the frequency domain and the center of the frequency-domain rectangle, and D0 is likewise a reasonably preset constant. The high-pass filtered data is saved as H2.
Then H1 and H2 are summed pixel by pixel to obtain H.
Finally, the summed frequency-domain data H is converted back to a gray-scale map by the inverse Fourier transform.
The inverse Fourier transform equation is as follows:
x(n) = (1/N) * Σ_{k=0}^{N-1} X(k) * e^(j2πkn/N),  n = 0, 1, ..., N-1
the converted grayscale image will have significantly less noise than the original image.
9. The noise-reduced image is matted by the matting processing module. The main purpose of matting is to separate the important, most relevant valid image blocks and discard background regions of no interest. First, black-hot toning is applied to the denoised image according to a palette; black toning suits human observation and is convenient for digital processing. The whole toned image is then divided with lines drawn from outside to inside according to color gradient grades, similar to contour lines marking hill terrain heights: circle-shaped marking lines at different gradient levels are drawn on the image, where a gradient level is not a height value but a color-component boundary value. In the black-hot toning mode, the larger the brightness component, the higher the energy and the higher the temperature; conversely, the darker the pixel, the lower the energy and the lower the temperature. Pattern matching is performed on each circled shape (for the pattern matching, data for many cases is prestored: people, dogs and other animals, and various objects). If a circled figure can be identified by pattern matching, the next step is taken; otherwise it is processed as its minimum enclosing circle and the next step is taken.
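One way to realize this circling-and-matching step with off-the-shelf tools is OpenCV contour extraction plus Hu-moment shape matching, sketched below. The gradient levels, the minimum blob area and the 0.2 similarity threshold are illustrative assumptions, not values stated in the text; the fallback mirrors the minimum-enclosing-circle rule.

```python
import cv2
import numpy as np

def extract_mattes(toned_gray, templates, levels=(200, 160, 120)):
    """Circle hot regions at several gradient levels and pattern-match them.

    toned_gray: 8-bit toned infrared image (per the text, brighter = hotter).
    templates:  list of prestored template contours ("person", "dog", ...).
    Returns a list of (contour, matched: bool) pairs.
    """
    results = []
    for level in levels:                          # outside-in gradient grades
        _, mask = cv2.threshold(toned_gray, level, 255, cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:
            if cv2.contourArea(c) < 50:           # ignore tiny blobs (assumed)
                continue
            # Hu-moment shape matching against the prestored figures.
            scores = [cv2.matchShapes(c, t, cv2.CONTOURS_MATCH_I1, 0.0)
                      for t in templates]
            if scores and min(scores) < 0.2:      # recognized figure
                results.append((c, True))
            else:                                 # fall back: min enclosing circle
                (x, y), r = cv2.minEnclosingCircle(c)
                circle = cv2.ellipse2Poly((int(x), int(y)), (int(r), int(r)),
                                          0, 0, 360, 10)
                results.append((circle.reshape(-1, 1, 2), False))
    return results
```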
Note: the matting and identification could also adopt a neural-network deep-learning algorithm, but the computational load is large, the cost is high, and the real-time performance and economy are slightly worse.
Through the fusion module, the extracted image is converted, fused and corrected in the following steps. First the pixel positions of the matted image are mapped into the night vision video according to the pre-measured position deviation f; because the infrared and night vision video images differ in pixel size and position, remapping is required, otherwise the fusion becomes disordered.
Mapping pixel X coordinates, the formula is:
X_dst = X_src * (Width_dst / Width_src) + f_dst
where X_dst is the pixel X coordinate in the night vision video, X_src is the X coordinate value in the infrared video, Width_dst is the resolution width of the night vision video, Width_src is the resolution width of the infrared video, and f_dst is the compensated position offset of the infrared video.
Similarly, the pixel Y coordinate is mapped by the formula:
Y_dst = Y_src * (Height_dst / Height_src) + f_dst
where Y_dst is the pixel Y coordinate in the night vision video, Y_src is the Y coordinate in the infrared video, and Height_dst and Height_src are the image resolution heights.
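Because the mapping is just a per-axis rescale plus the calibrated offset, it reduces to a few lines; map_point below is an assumed helper name, and the separate fx/fy offsets generalize the single f_dst in the text.

```python
def map_point(x_src, y_src, src_size, dst_size, offset):
    """Map an infrared pixel (x_src, y_src) into night vision coordinates.

    src_size: (width, height) of the infrared frame, e.g. (320, 240).
    dst_size: (width, height) of the night vision frame, e.g. (1920, 1080).
    offset:   (fx, fy) calibrated position deviation f, night-vision-referenced.
    """
    (ws, hs), (wd, hd), (fx, fy) = src_size, dst_size, offset
    x_dst = x_src * (wd / ws) + fx   # X_dst = X_src * (Width_dst/Width_src) + f
    y_dst = y_src * (hd / hs) + fy   # Y_dst = Y_src * (Height_dst/Height_src) + f
    return int(round(x_dst)), int(round(y_dst))

# Example with the resolutions named in the embodiment:
print(map_point(160, 120, (320, 240), (1920, 1080), (4, -2)))  # -> (964, 538)
```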
After the mapping is completed, the outline edge of the matte forms a group of pixel coordinates {x1, y1}, {x2, y2}, ... {xn, yn}.
The area enclosed by these coordinates is then examined against the statistics provided by the statistical module. The rule is as follows: determine which blocks of the night vision video image are occupied by the peripheral pixel area circled by the matte (for example, the "person" area in the upper right corner of fig. 2). If the valid statistical count of the occupied blocks is close to their pixel count, the pixel values of the area occupied by the "person" figure in the night vision video are essentially uniform, i.e. it is a blank pixel area containing no object or only invalid content, which satisfies the first condition: the area can be covered by the "person"-type image.
The brightness Y component value provided by the statistical module is analyzed next; if it is small, the environment is dark, which satisfies the second condition for infrared compensation.
If the brightness is high, fusion is unnecessary, because in a bright environment the night vision video effect is better than the infrared; likewise, if an area of the night vision image already contains objects, fusion is unnecessary, because the night vision visible-light effect is better than the infrared there. Only when conditions 1 and 2 are both satisfied does the fusion process start.
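The two gating conditions can be checked directly from the block statistics computed earlier; in this sketch the blank-block rule (valid count at least 95% of the block's pixels) and the darkness threshold y_dark = 60 are assumed values for illustration only.

```python
def fusion_allowed(valid_counts, block_pixels, occupied_blocks,
                   y_current, y_dark=60, blank_ratio=0.95):
    """Gate the fusion on the two conditions described in the embodiment.

    valid_counts:    grid x grid array from block_stats().
    block_pixels:    number of pixels per block.
    occupied_blocks: list of (row, col) blocks under the matte outline.
    y_current:       luminance Y detected by the statistical module.
    """
    # Condition 1: every occupied block is an effectively blank pixel area.
    blank = all(valid_counts[r, c] >= blank_ratio * block_pixels
                for r, c in occupied_blocks)
    # Condition 2: the scene is dark enough to need infrared compensation.
    dark = y_current < y_dark
    return blank and dark
```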
Fusion then starts, and the fused pixel coordinates are mapped according to the coordinates computed above. The extracted black-hot image is first toned into color according to the white-hot (or another) palette so that human eyes can identify it easily; the color image is then converted to YUV and brightness compensation is applied. The conversion formula of color RGB to YUV is as follows:
Gray (Y) = R*0.299 + G*0.587 + B*0.114
After conversion into YUV components, the converted luminance is denoted Y_old. In the reference illumination and compensation intensity threshold list, brightness Y is related to compensation C as {Y, C}; the current entries are denoted Y_current and C_comp. The new component value Y_new is:
Y_new = Y_old * C_comp / Y_current
where:
Y_old is the value obtained after converting the infrared RGB to YUV;
Y_current is the current Y value provided by the statistical module to the compensation module, which also equals the Y value in the threshold list;
C_comp is the corresponding value in the threshold list.
Similarly, the U and V components are converted in the same way to obtain new Y, U and V components.
The YUV is then converted back into the RGB color domain. The formulas are as follows:
R=Y+1.4075*(V-128)
G=Y–0.3455*(U–128)–0.7169*(V–128)
B=Y+1.779*(U–128)
All pixels in the matte are converted into new RGB values according to this rule.
The new RGB matte image is then copied into the night vision video according to the pixel-mapping coordinate relation.
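The whole recoloring step therefore reduces to a per-pixel scaling of the YUV components followed by the stated YUV-to-RGB formulas. In the sketch below the forward U/V conversion is chosen as the exact inverse of the document's backward formulas, and scaling U and V by the same factor as Y takes the phrase "converted in the same way" literally; both choices are interpretations.

```python
import numpy as np

def compensate_and_convert(rgb_matte, c_comp, y_current):
    """Apply Y_new = Y_old * C_comp / Y_current to a colorized matte, return RGB.

    rgb_matte: N x 3 float array of white-hot-toned matte pixels (R, G, B).
    c_comp, y_current: entries from the illumination/compensation threshold list.
    """
    r, g, b = rgb_matte[:, 0], rgb_matte[:, 1], rgb_matte[:, 2]
    y = r * 0.299 + g * 0.587 + b * 0.114       # Gray = R*0.299 + G*0.587 + B*0.114
    u = (b - y) / 1.779 + 128                   # inverse of B = Y + 1.779*(U-128)
    v = (r - y) / 1.4075 + 128                  # inverse of R = Y + 1.4075*(V-128)
    scale = c_comp / y_current                  # brightness compensation factor
    y, u, v = y * scale, u * scale, v * scale   # U, V scaled "in the same way" (assumed)
    # Back to the RGB color domain with the document's coefficients:
    r2 = y + 1.4075 * (v - 128)
    g2 = y - 0.3455 * (u - 128) - 0.7169 * (v - 128)
    b2 = y + 1.779 * (u - 128)
    return np.clip(np.stack([r2, g2, b2], axis=1), 0, 255).astype(np.uint8)
```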
Finally, AWB white-balance processing is applied to the new night vision video image to achieve better color restoration and smoothing.
As shown in fig. 2, when the luminance component Y is small (i.e. in a dim environment), the night vision video cannot capture the person but only the vehicle, while the infrared video captures both. Through the processing above, the human-shaped image is extracted and fused into the night vision video, while the image of the "car" is not fused. This handles the fusion modes under different light conditions well and avoids double images, ghosting and abrupt-looking regions. By modifying the values in the threshold list, the fusion weight between the night vision video and the infrared video can be adjusted, finally realizing deep fusion.
The method can output a real-time preview video and, through compensation, greatly enhances the night vision effect output by the night vision module.
Although embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that changes, modifications, substitutions and alterations can be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.

Claims (6)

1. A compensation method for enhancing night vision effect, comprising the following steps:
Step 1: providing a night vision effect device comprising a night vision module, a statistical module, an infrared video module and a compensation module;
the statistical module is connected to the night vision module, the infrared video module and the compensation module respectively;
the compensation module comprises a filtering processing module, a matting processing module and a fusion module, wherein the filtering processing module is connected with the matting processing module, and the matting processing module is connected with the fusion module;
Step 2: setting the first central point of the night vision module's video frame to coincide with the second central point of the infrared module's video frame;
Step 3: setting an infrared emission source as the reference target to be shot and setting a group of illumination intensities; imaging with the night vision video and the infrared video under each set illumination intensity to obtain a night vision video image and an infrared video image; superposing the image of the infrared emission source from the infrared video image onto the night vision video image and adjusting the intensity of its brightness component Y1 until it matches the visible light well, recording the compensation light quantity at that moment as C1; by analogy, obtaining the data sets {Y1, C1}, {Y2, C2} ... {Yn, Cn} under the different intensities, which form a prestored illumination and compensation intensity threshold list;
Step 4: placing the infrared emission source in the shooting space, recording the pixel positions of the infrared emission source in the night vision video image and in the infrared image respectively, and recording and storing the deviation f between the two;
Step 5: the statistical module outputting the data sets {Y1, C1}, {Y2, C2} ... {Yn, Cn} to the compensation module;
Step 6: dividing the night vision video image into a number of sub-blocks; the statistical module computing the R/G/B mean of the whole night vision video image to obtain first mean data, computing the R/G/B means of all pixels in each sub-block and, at the same time, counting the pixels in each sub-block that fall within the set upper and lower R/G/B limits, so as to obtain as many groups of second mean data as there are sub-blocks; and sending the first mean data and the second mean data to the compensation module;
Step 7: performing noise reduction on the infrared video image with the filtering processing module, converting the gray-scale image of the infrared video image from the spatial domain into the frequency domain to obtain frequency-domain data;
Step 8: filtering the frequency-domain data and then converting it back into a gray-scale map;
Step 9: performing matting on the noise-reduced image with the matting processing module;
Step 10: converting, fusing and correcting the image obtained after matting to obtain a processed night vision video image, and performing white-balance processing on the processed night vision video image.
2. The compensation method for enhancing night vision effect according to claim 1, wherein step 7 specifically comprises the following steps:
Step 7A: changing the mode of the infrared video imaging module from the black-hot palette mode to the original gray-map mode;
Step 7B: performing a Fourier transform to obtain the continuous spectrum of the signal x(t), where the transform formula is:
X(f) = ∫ x(t) * e^(-j2πft) dt  (integrated over t from -∞ to +∞)
Step 7C: calculating the spectrum of the signal x(t) from the discrete samples x(nT) according to the DFT formula
X(k) = Σ_{n=0}^{N-1} x(n) * e^(-j2πkn/N),  k = 0, 1, ..., N-1;
Step 7D: converting the gray-scale image of the infrared video image from the spatial domain into the frequency domain.
3. The compensation method for enhancing night vision effect according to claim 1, wherein step 8 specifically comprises the following steps:
Step 8A: applying Gaussian low-pass filtering to the frequency-domain data, where the filter formula is:
H(u, v) = e^(-D^2(u, v) / (2*D0^2))
where D0 is a constant parameter and D(u, v) is the distance between a point (u, v) in the frequency domain and the center of the frequency-domain rectangle;
Step 8B: saving the data obtained in step 8A as H1;
Step 8C: processing the frequency-domain data with a Gaussian high-pass filter, where the formula is:
H(u, v) = 1 - e^(-D^2(u, v) / (2*D0^2))
Step 8D: saving the data obtained in step 8C as H2;
Step 8E: summing H1 and H2 pixel by pixel to obtain H;
Step 8F: transforming the summed frequency-domain data H back into a gray-scale map by the inverse Fourier transform:
x(n) = (1/N) * Σ_{k=0}^{N-1} X(k) * e^(j2πkn/N),  n = 0, 1, ..., N-1
4. The compensation method for enhancing night vision effect according to claim 1, wherein step 9 specifically comprises the following steps:
Step 9A: applying black-hot toning to the denoised image according to a palette;
Step 9B: drawing division lines on the whole toned image from outside to inside according to gradient grades;
Step 9C: performing pattern matching between the figures obtained by line drawing and division and prestored figures; if a figure can be identified by pattern matching, proceeding to step 10; otherwise processing it as its minimum enclosing circle and proceeding to step 10.
5. The compensation method for enhancing night vision effect according to claim 1, wherein in step 10 the image obtained after matting is converted, fused and corrected, specifically comprising the following steps:
Step 10A: mapping the pixel positions of the image matted out in step 9 into the night vision video according to the pre-measured position deviation f; the pixel X coordinate is mapped by the formula:
X_dst = X_src * (Width_dst / Width_src) + f_dst
wherein X_dst is the pixel X coordinate in the night vision video, X_src is the X coordinate value in the infrared video, Width_dst is the resolution width of the night vision video, Width_src is the resolution width of the infrared video, and f_dst is the compensated position offset of the infrared video;
the pixel Y coordinate is mapped by the formula:
Y_dst = Y_src * (Height_dst / Height_src) + f_dst
wherein Y_dst is the pixel Y coordinate in the night vision video, Y_src is the Y coordinate value in the infrared video, and Height_dst and Height_src are the image resolution heights;
Step 10B: forming a group of pixel coordinates {x1, y1}, {x2, y2}, ... {xn, yn} along the outline edge of the matte;
Step 10C: examining the area enclosed by this group of pixel coordinates: the peripheral pixel region circled by the matted image occupies certain blocks of the night vision video image; if the valid statistical count of the occupied blocks is close to their pixel count, the pixel values of the region occupied by the infrared-emission-source figure in the night vision video image are essentially uniform, i.e. the region contains no object and is an invalid blank pixel area, satisfying the first condition that it can be covered by the image of the infrared emission source; analyzing the brightness Y component value provided by the statistical module, and if it is small, the environment is dark and the second condition for infrared compensation is satisfied; fusion is allowed to start only if the first condition and the second condition are met simultaneously, otherwise the fusion ends;
Step 10D: starting the fusion: mapping the fused pixel coordinates according to the coordinates in step 10A; first toning the extracted black-hot image into color according to a white-hot palette so that human eyes can identify it, then converting the color RGB image to YUV for brightness compensation, where the luminance conversion from color RGB is:
Gray (Y) = R*0.299 + G*0.587 + B*0.114
after conversion to YUV components, the converted luminance is denoted Y_old; referring to the illumination and compensation intensity threshold list relating brightness Y to compensation C as {Y, C}, the current entries are denoted Y_current and C_comp; the new component value Y_new is:
Y_new = Y_old * C_comp / Y_current
wherein:
Y_old is the value obtained after converting the infrared RGB to YUV;
Y_current is the current Y value provided by the statistical module to the compensation module, which also equals the Y value in the threshold list;
C_comp is the corresponding value in the threshold list;
the U and V components are converted in the same way, giving new Y, U and V components;
the YUV is then converted back into the RGB color domain by the formulas:
R = Y + 1.4075*(V - 128)
G = Y - 0.3455*(U - 128) - 0.7169*(V - 128)
B = Y + 1.779*(U - 128)
all pixels in the matte are converted into new RGB values by this rule, and the new RGB matte image is copied into the night vision video according to the pixel-mapping coordinate relation.
6. The compensation method for enhancing night vision effect according to any one of claims 1-5, wherein the infrared emission source is a human.
CN202110332698.7A (filed 2021-03-29) - Compensation method for enhancing night vision effect - Pending - published as CN113038029A

Priority Applications (1)

    • CN202110332698.7A - priority date 2021-03-29, filing date 2021-03-29 - Compensation method for enhancing night vision effect

Publications (1)

    • CN113038029A - published 2021-06-25

Family ID: 76452547

Family Applications (1)

    • CN202110332698.7A - Compensation method for enhancing night vision effect - Pending

Country Status (1)

    • CN: CN113038029A (en)


Cited By (2)

(* cited by examiner, † cited by third party)

    • CN116993729A * - Night vision device imaging system and method based on second harmonic - 南京铂航电子科技有限公司 (Nanjing Bohang Electronic Technology Co., Ltd.) - priority date 2023-09-26, published 2023-11-03
    • CN116993729B * - Night vision device imaging system and method based on second harmonic - granted 2024-03-29


Legal Events

    • PB01 - Publication
    • SE01 - Entry into force of request for substantive examination
    • RJ01 - Rejection of invention patent application after publication (application publication date: 2021-06-25)