CN116664460A - Infrared night vision fusion method - Google Patents


Publication number
CN116664460A
Authority
CN
China
Prior art keywords: image, night vision, infrared, fusion, low
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310415586.7A
Other languages
Chinese (zh)
Inventor
郭健 (Guo Jian)
刘春华 (Liu Chunhua)
位小龙 (Wei Xiaolong)
王懿斌 (Wang Yibin)
莫志君 (Mo Zhijun)
黄广成 (Huang Guangcheng)
Current Assignee
Avic Guohua Shanghai Laser Display Technology Co ltd
Original Assignee
Avic Guohua Shanghai Laser Display Technology Co ltd
Application filed by Avic Guohua Shanghai Laser Display Technology Co., Ltd.
Priority: CN202310415586.7A
Publication: CN116664460A
Legal status: Pending

Links

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/10 Image enhancement or restoration by non-spatial domain filtering (also G06T 5/70, G06T 5/80, G06T 5/90)
    • G06T 5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T 11/00 2D [Two Dimensional] image generation
    • G06T 11/001 Texturing; Colouring; Generation of texture or colour
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G06T 7/13 Edge detection
    • G06T 7/136 Segmentation involving thresholding
    • G06T 7/187 Segmentation involving region growing, region merging or connected component labelling
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10004 Still image; Photographic image
    • G06T 2207/10048 Infrared image
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20212 Image combination
    • G06T 2207/20221 Image fusion; Image merging

Abstract

The application relates to the technical field of night-vision image fusion, and in particular to an infrared night vision fusion method. A low-light camera device and an infrared camera device simultaneously acquire a low-light night vision image and an infrared image of the same observation area, and the two images are then fused to generate a fused image. The method comprises: S1, acquiring the night-vision sub-band source images; S2, preprocessing the low-light night vision image and the infrared image separately, where the preprocessing comprises enhancement, signal-to-noise-ratio processing, noise removal, and distortion correction of the low-light night vision image, followed by equalization and filtering of the processed images; S3, combining the source images to obtain a processed image; and S4, mapping the processed image into a color space and fusing to obtain a color night vision image. The method has the advantages of increasing fusion speed and improving the quality of the fused image.

Description

Infrared night vision fusion method
Technical Field
The application relates to the technical field of night vision image fusion, in particular to an infrared night vision fusion method.
Background
The low-light night vision device offers visual perception close to that of visible light: it distinguishes objects of different visible-light reflectivity well and produces images with a strong sense of depth. Its drawbacks are insensitivity to temperature, difficulty detecting concealed targets, a viewing distance strongly affected by weather conditions, and a relatively short range. The thermal infrared imager, by contrast, is sensitive to temperature, detects targets with temperature contrast well, has a long detection range, penetrates smoke, and recognizes camouflage effectively; its drawbacks are a weak sense of image depth and difficulty distinguishing scene targets whose temperatures are very close. Fusing the images of the low-light sensor and the infrared sensor makes their information complementary: it enhances scene understanding, highlights targets, and helps detect concealed, camouflaged, or deceptive targets faster and more accurately in a military context. Displaying the fused image in a natural form suited to human observation markedly improves recognition performance and reduces operator fatigue.
Infrared/low-light image fusion is widely applied in the military field. It combines the rich scene detail of the low-light image with the strong target-background contrast of the infrared image in a single picture, so that an observer obtains more accurate, comprehensive, and reliable information about a scene. However, the image quality and the fusion speed obtained after fusion still leave considerable room for improvement.
Disclosure of Invention
The application provides an infrared night vision fusion method and system whose beneficial effects are increased fusion speed and improved fused-image quality, addressing the problem noted in the background art that the post-fusion image quality and fusion speed still leave considerable room for improvement.
The application provides the following technical scheme: an infrared night vision fusion method in which a low-light camera device and an infrared camera device simultaneously acquire a low-light night vision image and an infrared image of an observation area, and the two images are then fused to generate a fused image.
The specific steps include:
S1, acquiring the night-vision sub-band source images;
S2, preprocessing the low-light night vision image and the infrared image of the night-vision sub-band separately, the preprocessing comprising enhancement, signal-to-noise-ratio processing, noise removal, distortion correction, equalization, and filtering of the low-light night vision image;
S3, combining the source images to obtain a processed image;
S4, mapping the processed image into a color space and fusing to obtain a color night vision image.
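The four steps above can be sketched as a minimal pipeline. This is an illustrative reading, not the patent's implementation: all function names are assumptions, a 3x3 mean filter stands in for the full preprocessing chain, a weighted sum stands in for the combination step, and the R/G/B channel assignment is arbitrary.

```python
import numpy as np

def preprocess(img):
    """S2 (sketch): normalize to [0, 1], then apply a crude 3x3 mean filter
    standing in for the enhancement/denoising/equalization chain."""
    img = img.astype(np.float64)
    img = (img - img.min()) / (np.ptp(img) + 1e-9)
    padded = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            out += padded[1 + di:1 + di + img.shape[0], 1 + dj:1 + dj + img.shape[1]]
    return out / 9.0

def fuse(low_light, infrared, w=0.5):
    """S3 (sketch): weighted combination of the two preprocessed sources."""
    return w * low_light + (1 - w) * infrared

def to_color(fused, low_light, infrared):
    """S4 (sketch): map the channels into a color space
    (infrared -> R, fused -> G, low-light -> B)."""
    return np.stack([infrared, fused, low_light], axis=-1)

# S1: acquire the night-vision sub-band source images (random stand-ins here)
rng = np.random.default_rng(0)
ll, ir = rng.random((8, 8)), rng.random((8, 8))
pll, pir = preprocess(ll), preprocess(ir)
color = to_color(fuse(pll, pir), pll, pir)
```

The weight `w` and the channel mapping would in practice be tuned against the color-lookup-table stage described later.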
As an alternative of the infrared night vision fusion method of the present application: after the image intensifier amplifies the incoming energy, passing its output through a circuit-controlled electro-optic filter yields intensified field-of-view images in different color states (red, green, and blue); finally the human eye observes the color night vision image through the eyepiece.
This process requires the control circuit to generate a high-frequency signal to drive the electro-optic filter, so that the image perceived by the viewer does not flicker.
As an alternative of the infrared night vision fusion method of the present application: the low-light and infrared images are fused by an enhancement equation in which E is the enhanced output; I1 is the input image corresponding to the receptive-field center; I2 is the input image with the receptive-field surround suppressed; G(s) is shorthand for a Gaussian function; i, j are the pixel coordinates; and a, C are adjustment parameters.
Fusing the result with the thermal-imager image yields a color night vision image with natural-looking color.
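Since only the variables of the fusion equation are defined in the text, the following is one plausible center-surround reading, not the patent's actual formula: the enhanced output E is taken as a scaled difference between a narrow Gaussian blur of the center image I1 and a wide Gaussian blur of the surround image I2, i.e. E(i, j) = a * (G_c * I1 - G_s * I2)(i, j) + C. The specific form and all parameter defaults are assumptions.

```python
import numpy as np

def gaussian_kernel(sigma, radius=None):
    """1-D Gaussian G(s), normalized to unit sum, applied separably in 2-D."""
    if radius is None:
        radius = int(3 * sigma)
    s = np.arange(-radius, radius + 1, dtype=np.float64)
    k = np.exp(-(s ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def blur(img, sigma):
    """Separable Gaussian blur via 1-D convolutions along rows, then columns."""
    k = gaussian_kernel(sigma)
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, tmp)

def center_surround_enhance(I1, I2, a=1.0, C=0.0, sigma_c=1.0, sigma_s=3.0):
    """Hypothetical reading of the patent's enhancement E at pixel (i, j):
    narrow-Gaussian center response minus wide-Gaussian surround response,
    scaled by adjustment parameter a and offset by adjustment parameter C."""
    return a * (blur(I1, sigma_c) - blur(I2, sigma_s)) + C
```

On a uniform field the center and surround responses cancel in the interior, so E reduces to the offset C, which is the behavior expected of a difference-of-Gaussians operator.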
As an alternative of the infrared night vision fusion method of the present application: the low-light and infrared channels image at different wavelengths; the target is detected synchronously and then imaged, so that its two-dimensional geometric space and one-dimensional spectral information are captured simultaneously; the result is processed with the fusion scheme of claim 3, and the useful information of the various channels is used to synthesize the image.
As an alternative of the infrared night vision fusion method of the present application: the image processing in S1 includes signal-to-noise-ratio processing. Under low illumination the signal-to-noise ratio of the image captured by the detector is low; under extremely low illumination the image may even have a negative-dB signal-to-noise ratio (SNR below 1).
In that case the signal from the target scene is extremely weak and almost submerged in noise: the typical "invisible" phenomenon in night-vision imaging.
Negative-dB-SNR signal-inversion imaging exploits the short-time constancy of the energy reflected (or radiated) by a given source and the short-time constancy of fixed-pattern noise (dark-current noise and non-uniformity). It establishes a cross-correlation model among the inter-frame target-scene signal, the non-uniformity noise, the dynamic noise, and the imaging projection matrices, and uses the correlation of homologous signals across frames and across pixels to invert the target-scene signal from the original negative-dB-SNR image. The signal-to-noise ratio can thereby be raised by an order of magnitude or more, finally yielding a clear target image.
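The cross-correlation inversion model itself is not given, so the following is a drastically simplified stand-in that only illustrates the two constancy assumptions it relies on: averaging N frames shrinks zero-mean dynamic noise by roughly 1/sqrt(N), and subtracting averaged dark (shuttered) frames removes the short-time-constant fixed pattern. The noise levels and the dark-frame mechanism are assumptions, not the patent's algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
H, W, N = 16, 16, 64

scene = rng.random((H, W)) * 0.1        # weak target-scene signal (short-time constant)
fpn = rng.normal(0, 0.5, (H, W))        # fixed-pattern noise (dark current, non-uniformity)

# N frames: scene + fixed pattern + frame-varying dynamic noise (negative-dB SNR per frame)
frames = scene + fpn + rng.normal(0, 0.5, (N, H, W))
# Shuttered dark frames see only the fixed pattern plus dynamic noise
dark = fpn + rng.normal(0, 0.5, (N, H, W))

# Averaging suppresses the dynamic noise; subtracting averaged dark frames
# removes the (short-time constant) fixed pattern, inverting the scene signal.
recovered = frames.mean(axis=0) - dark.mean(axis=0)

err_single = np.abs((frames[0] - fpn) - scene).mean()  # one frame, perfect FPN knowledge
err_multi = np.abs(recovered - scene).mean()           # multi-frame inversion
```

Even with the fixed pattern given for free in the single-frame case, the multi-frame estimate is substantially closer to the true scene.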
As an alternative of the infrared night vision fusion method of the present application, the preprocessing step of distortion correction includes the following. For infrared night vision images, the noise is caused mainly by image non-uniformity arising from pixel-to-pixel dispersion; it has strong spatial and temporal correlation and varies with the thermal-radiation signal and the ambient temperature.
An active calibration correction is adopted, i.e., periodic fixed-temperature correction using a baffle (shutter). When the background thermal radiation is complex and changeable, the detector's correction parameters easily become mismatched and image quality deteriorates rapidly.
A mask modulates the light intensity in the aperture plane of the imaging system, and on this basis the point spread function is shaped to be anisotropic and to cover several physical pixels. Under different aperture codes the imaging system samples the same scene several times; the captured low-resolution aliased images are then iteratively reconstructed and de-aliased in the frequency domain. Finally, high-frequency detail components smaller than the pixel size can be recovered from the target scene.
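The baffle-based fixed-temperature correction can be illustrated with a classical two-point (gain/offset) non-uniformity correction. The linear detector model and the two calibration radiances L1 and L2 are assumptions standing in for the patent's unspecified calibration procedure.

```python
import numpy as np

rng = np.random.default_rng(2)
H, W = 8, 8

# Per-pixel gain/offset non-uniformity of a hypothetical linear detector
gain = 1.0 + rng.normal(0, 0.05, (H, W))
offset = rng.normal(0, 2.0, (H, W))

def read(radiance):
    """Detector response model: raw = gain * radiance + offset, per pixel."""
    return gain * radiance + offset

# Baffle/shutter calibration at two known blackbody radiances L1 < L2
L1, L2 = 10.0, 50.0
R1, R2 = read(L1), read(L2)

def correct(raw):
    """Two-point correction: invert the per-pixel linear response using the
    two calibration readings, mapping raw counts back to radiance."""
    return (raw - R1) * (L2 - L1) / (R2 - R1) + L1

scene = rng.random((H, W)) * 40 + 10
corrected = correct(read(scene))
```

For a truly linear detector this correction is exact; the mismatch the text warns about appears when gain and offset drift between calibrations.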
As an alternative of the infrared night vision fusion method of the present application, the specific steps of enhancing the low-light night vision image are as follows. The input window of the image-intensifier tube transmits the low-brightness optical image to the photocathode; the photocathode film layer, grown with high quantum efficiency on the optical substrate by evaporating alkali sources under full vacuum with photocurrent monitoring, receives the signal.
The intensifier converts the weak or invisible optical image into an electron image, the gain stage amplifies it by a factor of several hundred, and the intensified electron image is rendered by the fluorescent screen.
As an alternative of the infrared night vision fusion method of the present application, the specific steps of S3 include:
1) Extraction of the infrared thermal target: exploiting the fact that the brightness of a thermal target region in an infrared image is generally higher than that of non-target regions, first perform edge extraction on the infrared image to be segmented, compute the average gray level of the pixel set as one gray-threshold condition for growth, and apply region-growing segmentation to obtain the infrared thermal target region in preparation for the primary fusion;
2) Primary fusion of the infrared thermal target region with the visible-light image: using the extracted infrared target as the basis of the fusion decision, the important target information of the infrared image is added to the visible-light image;
3) Secondary fusion of the primary result with the two original images: to fully incorporate the original information of both source images, and to avoid losing important information through segmentation errors that may occur when segmenting the infrared image, a lifting-wavelet secondary fusion takes the primary fusion result as a third image source and fuses it with the two originals.
Concretely, the three source images are first decomposed by lifting wavelets into their low-frequency and high-frequency components; the low-frequency and high-frequency components are then fused with different rules to obtain the low- and high-frequency components of the fused image; finally a lifting-wavelet reconstruction yields the secondary fusion result.
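Step 1) can be sketched as a simple region growing from a bright seed, with the running average gray level of the region supplying the threshold condition. The 4-neighborhood, the seed choice, and the threshold ratio are assumptions, not the patent's exact rule.

```python
import numpy as np
from collections import deque

def grow_thermal_target(ir, seed, thresh_ratio=0.8):
    """Region-grow from a bright seed pixel: a 4-neighbor joins the region
    while its gray level stays above thresh_ratio times the running mean
    gray level of the region (a stand-in for the patent's average-gray
    threshold condition)."""
    H, W = ir.shape
    mask = np.zeros((H, W), dtype=bool)
    mask[seed] = True
    total, count = float(ir[seed]), 1
    q = deque([seed])
    while q:
        i, j = q.popleft()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < H and 0 <= nj < W and not mask[ni, nj]:
                if ir[ni, nj] >= thresh_ratio * (total / count):
                    mask[ni, nj] = True
                    total += float(ir[ni, nj])
                    count += 1
                    q.append((ni, nj))
    return mask

# Toy infrared image: a hot 3x3 block on a cool background
ir = np.zeros((9, 9))
ir[3:6, 3:6] = 1.0
target = grow_thermal_target(ir, seed=(4, 4))
```

In a full implementation the seed would come from the edge-extraction step rather than being chosen by hand.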
As an alternative of the infrared night vision fusion method of the present application, step S4 specifically includes:
1) Transferring the reference-image color to the dual-channel night-vision fused image;
2) Building a color lookup table from the gray information of the resulting color night-vision fused image and the dual-channel sub-band images;
3) Outputting the color night vision image in real time through the lookup table.
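Steps 2) and 3) can be sketched as follows: a lookup table indexed by the (low-light, infrared) gray pair is filled once with the colors produced by some fusion rule, after which each frame is rendered by table lookup alone. The toy color rule used to fill the table is an assumption; only the lookup mechanism is the point.

```python
import numpy as np

rng = np.random.default_rng(3)

# Build the LUT once: for each 8-bit (low-light, infrared) gray pair, store the
# RGB that the (here: toy) color-fusion rule would produce for that pair.
levels = 256
ll_idx, ir_idx = np.meshgrid(np.arange(levels), np.arange(levels), indexing="ij")
lut = np.stack([ir_idx, (ll_idx + ir_idx) // 2, ll_idx], axis=-1).astype(np.uint8)

def render(ll_frame, ir_frame):
    """Real-time path: one gather per pixel instead of recomputing the fusion."""
    return lut[ll_frame, ir_frame]

ll = rng.integers(0, 256, (4, 4), dtype=np.uint8)
ir = rng.integers(0, 256, (4, 4), dtype=np.uint8)
rgb = render(ll, ir)
```

Because the per-pixel work reduces to an index into a 256x256x3 table, the lookup stage is cheap enough for real-time output regardless of how expensive the original fusion rule was.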
As an alternative of the infrared night vision fusion method of the present application: the infrared camera device is a thermal infrared imager, and the low-light camera device is a low-light night vision device.
The application has the following beneficial effects:
1. In the infrared night vision fusion method and system, after the low-light night vision image and the infrared image are collected and preprocessed (enhancement, signal-to-noise-ratio processing, noise removal, distortion correction, equalization, and filtering), the source images are combined, fused, and mapped in a color space; the multi-stage fusion also makes full use of the useful information of the various channels to synthesize the image, increasing fusion speed and improving fused-image quality.
2. The method fuses according to an equation whose quantities include the enhancement term, the pixel coordinates, and the adjustment parameters, and synthesizes the useful channel information before fusion. Night-vision image fusion in particular enhances scene understanding, highlights targets, and helps detect concealed targets faster and more accurately; displaying the fused image in a natural form suited to human observation markedly improves recognition performance and reduces operator fatigue.
3. In step S3, target extraction is followed by a primary fusion and then a secondary fusion, so that the salient content of the original images is fully incorporated; the final lifting-wavelet reconstruction further improves the fusion result.
Drawings
FIG. 1 is a schematic flow chart of the method of the application.
Detailed Description
The following description of the embodiments of the present application is made clearly and completely with reference to the accompanying drawings. The embodiments described are only some, not all, of the embodiments of the application. All other embodiments obtained by those skilled in the art from these embodiments without inventive effort fall within the scope of the application.
Example 1
The low-light night vision device offers visual perception close to that of visible light: it distinguishes objects of different visible-light reflectivity well and produces images with a strong sense of depth. Its drawbacks are insensitivity to temperature, difficulty detecting concealed targets, a viewing distance strongly affected by weather conditions, and a relatively short range. The thermal infrared imager, by contrast, is sensitive to temperature, detects targets with temperature contrast well, has a long detection range, penetrates smoke, and recognizes camouflage effectively; its drawbacks are a weak sense of image depth and difficulty distinguishing scene targets whose temperatures are very close. Fusing the images of the low-light sensor and the infrared sensor makes their information complementary: it enhances scene understanding, highlights targets, and helps detect concealed, camouflaged, or deceptive targets faster and more accurately in a military context. Displaying the fused image in a natural form suited to human observation markedly improves recognition performance and reduces operator fatigue.
Infrared/low-light image fusion is widely applied in the military field. It combines the rich scene detail of the low-light image with the strong target-background contrast of the infrared image in a single picture, so that an observer obtains more accurate, comprehensive, and reliable information about a scene. However, the image quality and the fusion speed obtained after fusion still leave considerable room for improvement.
The application provides the following technical scheme. Referring to fig. 1, an infrared night vision fusion method is disclosed in which a low-light camera device and an infrared camera device acquire, respectively, a low-light night vision image and an infrared image of an observation area; the two images are then fused to generate a fused image.
The specific steps include:
S1, acquiring the night-vision sub-band source images;
S2, preprocessing the low-light night vision image and the infrared image of the night-vision sub-band separately, the preprocessing comprising enhancement, signal-to-noise-ratio processing, noise removal, distortion correction, equalization, and filtering of the low-light night vision image;
S3, combining the source images to obtain a processed image;
S4, mapping the processed image into a color space and fusing to obtain a color night vision image.
In this embodiment, in steps S1 to S4, the collected low-light night vision image and infrared image undergo the preprocessing of enhancement, signal-to-noise-ratio processing, noise removal, distortion correction, equalization, and filtering; the source images are then combined, fused, and mapped in a color space. The infrared and low-light devices image at different wavelengths, the two-dimensional geometric space and one-dimensional spectral information of the target are detected synchronously, the multiband images are analyzed with suitable image-processing algorithms, and the useful information of the various channels is fully used to synthesize the image, increasing fusion speed and fused-image quality.
Example 2
This embodiment illustrates embodiment 1; please refer to fig. 1. After the image intensifier amplifies the incoming energy, passing its output through a circuit-controlled electro-optic filter yields intensified field-of-view images in different color states (red, green, and blue); finally the human eye observes the color night vision image through the eyepiece.
This process requires the control circuit to generate a high-frequency signal to drive the electro-optic filter, so that the image perceived by the viewer does not flicker.
The low-light and infrared images are fused by an enhancement equation in which E is the enhanced output; I1 is the input image corresponding to the receptive-field center; I2 is the input image with the receptive-field surround suppressed; G(s) is shorthand for a Gaussian function; i, j are the pixel coordinates; and a, C are adjustment parameters.
Fusing the result with the thermal-imager image yields a color night vision image with natural-looking color.
The low-light and infrared channels image at different wavelengths; the target is detected synchronously and then imaged, so that its two-dimensional geometric space and one-dimensional spectral information are captured simultaneously; the fusion equation of the low-light and infrared images is used for processing, and the useful information of the various channels is used to synthesize the image.
In this embodiment, fusion is performed according to the equation's enhancement term, pixel coordinates, adjustment parameters, and other quantities, and the useful channel information is synthesized before fusion. Night-vision image fusion in particular enhances scene understanding, highlights targets, and helps detect concealed targets faster and more accurately; displaying the fused image in a natural form suited to human observation markedly improves recognition performance and reduces operator fatigue.
Example 3
This example illustrates example 1. The image processing in S1 includes signal-to-noise-ratio processing. Under low illumination the signal-to-noise ratio of the image captured by the detector is low; under extremely low illumination the image may even have a negative-dB signal-to-noise ratio (SNR below 1).
In that case the signal from the target scene is extremely weak and almost submerged in noise: the typical "invisible" phenomenon in night-vision imaging.
Negative-dB-SNR signal-inversion imaging exploits the short-time constancy of the energy reflected (or radiated) by a given source and the short-time constancy of fixed-pattern noise (dark-current noise and non-uniformity). It establishes a cross-correlation model among the inter-frame target-scene signal, the non-uniformity noise, the dynamic noise, and the imaging projection matrices, and uses the correlation of homologous signals across frames and pixels to invert the target-scene signal from the original negative-dB-SNR image, raising the signal-to-noise ratio by an order of magnitude or more and finally yielding a clear target image.
The preprocessing step of distortion correction includes the following. For infrared night vision images, the noise is caused mainly by image non-uniformity arising from pixel-to-pixel dispersion; it has strong spatial and temporal correlation and varies with the thermal-radiation signal and the ambient temperature.
An active calibration correction is adopted, i.e., periodic fixed-temperature correction using a baffle (shutter). When the background thermal radiation is complex and changeable, the detector's correction parameters easily become mismatched and image quality deteriorates rapidly.
A mask modulates the light intensity in the aperture plane of the imaging system, and on this basis the point spread function is shaped to be anisotropic and to cover several physical pixels. Under different aperture codes the imaging system samples the same scene several times; the captured low-resolution aliased images are then iteratively reconstructed and de-aliased in the frequency domain. Finally, high-frequency detail components smaller than the pixel size can be recovered.
The specific steps of enhancing the low-light night vision image are as follows: the input window of the image-intensifier tube transmits the low-brightness optical image to the photocathode; the photocathode film layer, grown with high quantum efficiency on the optical substrate by evaporating alkali sources under full vacuum with photocurrent monitoring, receives the signal.
The intensifier converts the weak or invisible optical image into an electron image, the gain stage amplifies it by a factor of several hundred, and the intensified electron image is rendered by the fluorescent screen.
In this embodiment, the collected low-light night vision image and infrared image are preprocessed. Distortion correction, signal-to-noise-ratio processing, and enhancement improve the originally collected images; the multiband information of a single sensor, or the information provided by different types of sensors, is synthesized while redundancy and contradictions between the sensors are eliminated. This increases the transparency of the information in the image and improves the accuracy, reliability, and usability of interpretation, forming a clear, complete, and accurate description of the target. The cleanly processed images facilitate the later fusion; the finally synthesized high-quality image raises the utilization rate of the image information, improves the accuracy and reliability of computer interpretation, raises the spatial and spectral resolution relative to the original images, and facilitates monitoring.
Example 4
This embodiment illustrates embodiment 1; please refer to fig. 1. The specific steps of S3 include:
1) Extraction of the infrared thermal target: exploiting the fact that the brightness of a thermal target region in an infrared image is generally higher than that of non-target regions, first perform edge extraction on the infrared image to be segmented, compute the average gray level of the pixel set as one gray-threshold condition for growth, and apply region-growing segmentation to obtain the infrared thermal target region in preparation for the primary fusion;
2) Primary fusion of the infrared thermal target region with the visible-light image: using the extracted infrared target as the basis of the fusion decision, the important target information of the infrared image is added to the visible-light image;
3) Secondary fusion of the primary result with the two original images: to fully incorporate the original information of both source images, and to avoid losing important information through segmentation errors that may occur when segmenting the infrared image, a lifting-wavelet secondary fusion takes the primary fusion result as a third image source and fuses it with the two originals.
Concretely, the three source images are first decomposed by lifting wavelets into their low-frequency and high-frequency components; these are then fused with different rules to obtain the low- and high-frequency components of the fused image; finally a lifting-wavelet reconstruction yields the secondary fusion result.
In this embodiment, step S3 performs extraction and fusion, with a secondary fusion following the primary fusion, so that the salient content of the original images is fully incorporated; the final lifting-wavelet reconstruction further improves the fusion result.
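The lifting-wavelet secondary fusion described above can be sketched with a one-level Haar lifting scheme (predict/update), an averaging rule for the low-frequency bands, and a maximum-absolute-value rule for the high-frequency bands. The Haar wavelet, the single decomposition level, and both fusion rules are illustrative choices, not the patent's.

```python
import numpy as np

def lift(x, axis):
    """Haar lifting step: predict the detail d, then update the approximation s."""
    even = np.take(x, np.arange(0, x.shape[axis], 2), axis=axis)
    odd = np.take(x, np.arange(1, x.shape[axis], 2), axis=axis)
    d = odd - even          # predict: high-frequency detail
    s = even + d / 2        # update: low-frequency approximation
    return s, d

def unlift(s, d, axis):
    """Inverse lifting: recover even/odd samples and re-interleave them."""
    shape = list(s.shape)
    shape[axis] *= 2
    out = np.empty(shape)
    sl = lambda start: tuple(slice(start, None, 2) if a == axis else slice(None)
                             for a in range(s.ndim))
    even = s - d / 2
    out[sl(0)] = even
    out[sl(1)] = d + even
    return out

def decompose(img):
    """One 2-D level: rows then columns -> (ll, lh, hl, hh)."""
    sr, dr = lift(img, axis=1)
    return lift(sr, 0) + lift(dr, 0)

def reconstruct(ll, lh, hl, hh):
    return unlift(unlift(ll, lh, 0), unlift(hl, hh, 0), axis=1)

def maxabs(arrs):
    """High-frequency rule: keep the largest-magnitude coefficient per position."""
    stack = np.stack(arrs)
    idx = np.abs(stack).argmax(axis=0)
    return np.take_along_axis(stack, idx[None], axis=0)[0]

def secondary_fusion(primary, low_light, infrared):
    """Fuse the primary-fusion result with both originals: average the
    low-frequency bands, take max-abs over the high-frequency bands."""
    bands = [decompose(im) for im in (primary, low_light, infrared)]
    ll = np.mean([b[0] for b in bands], axis=0)
    highs = [maxabs([b[k] for b in bands]) for k in (1, 2, 3)]
    return reconstruct(ll, *highs)

rng = np.random.default_rng(5)
fused = secondary_fusion(rng.random((8, 8)), rng.random((8, 8)), rng.random((8, 8)))
```

The lifting formulation computes the transform in place with only adds and shifts, which is why it suits the real-time requirement better than a filter-bank wavelet.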
Example 5
This embodiment illustrates embodiment 1; please refer to fig. 1. Step S4 specifically includes:
1) Transferring the reference-image color to the dual-channel night-vision fused image;
2) Building a color lookup table from the gray information of the resulting color night-vision fused image and the dual-channel sub-band images;
3) Outputting the color night vision image in real time through the lookup table.
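Step 1), transferring the reference-image color, can be sketched with a Reinhard-style statistics transfer that matches each channel's mean and standard deviation to a daylight reference. This specific transfer rule is an assumption standing in for the patent's unspecified method.

```python
import numpy as np

def transfer_color(fused_rgb, reference_rgb):
    """Shift and scale each channel of the fused image so that its mean and
    standard deviation match those of the daylight reference image
    (a Reinhard-style stand-in for 'transmitting the reference image color')."""
    out = np.empty_like(fused_rgb, dtype=np.float64)
    for c in range(3):
        f = fused_rgb[..., c].astype(np.float64)
        r = reference_rgb[..., c].astype(np.float64)
        out[..., c] = (f - f.mean()) / (f.std() + 1e-9) * r.std() + r.mean()
    return out

rng = np.random.default_rng(4)
fused = rng.random((16, 16, 3))        # dual-channel fusion result (stand-in)
ref = rng.random((16, 16, 3)) * 0.5 + 0.25  # daylight reference (stand-in)
colored = transfer_color(fused, ref)
```

Production systems usually apply this transfer in a decorrelated color space (e.g. lαβ) rather than directly in RGB; the per-channel RGB version keeps the sketch short.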
Wherein: the infrared camera device is a thermal infrared imager, and the low-light camera device is a low-light night vision device.
In this embodiment, a color night vision image is output during fusion, reducing the risk that details such as faces and license plates appear unclear in certain special shooting scenes. The fusion stage identifies the reflection wavelengths of different colors and predicts and reconstructs the colors in the scene, achieving full-color infrared imaging and a clearer fused image.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, the above embodiments may take the form of a computer program product, in whole or in part. The computer program product comprises one or more computer instructions or computer programs. When the computer instructions or computer program are loaded or executed on a computer, the processes or functions described in the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, they may be transmitted from one website, computer, server, or data center to another by wired (e.g., coaxial cable, optical fiber, digital subscriber line) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium may be any available medium that a computer can access, or a data storage device such as a server or data center containing one or more sets of available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium such as a solid-state drive.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate parts may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application essentially, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product stored in a storage medium and comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The foregoing is merely a preferred embodiment of the present application and is not intended to limit it. Any modification, equivalent substitution, or improvement that a person skilled in the art can readily conceive within the technical scope disclosed herein shall fall within the protection scope of the present application, which is defined by the appended claims.

Claims (10)

1. An infrared night vision fusion method, characterized in that: a low-light night vision image and an infrared image of an observation area are acquired simultaneously, using a low-light camera device and an infrared camera device respectively; the obtained low-light night vision image and infrared image are then fused to generate a fused image;
the specific steps comprise:
s1, acquiring a night vision sub-band source image;
s2, preprocessing a low-light night vision image and an infrared image of a night vision sub-band respectively;
the preprocessing comprises enhancement, signal-to-noise ratio processing, noise removal, and distortion correction of the low-light night vision image, followed by equalization and filtering of the processed image;
s3, combining the source images to obtain a processed image;
and S4, mapping the processed image to a color space, and fusing to obtain a color night vision image.
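The claim names "equalization" as a preprocessing step without specifying the algorithm. As one plausible realization (an assumption, not the patent's stated method), global histogram equalization spreads the gray levels of a low-contrast night vision frame across the full range:

```python
import numpy as np

def equalize(gray):
    """Global histogram equalization for a uint8 grayscale image.
    Maps each gray level through the normalized cumulative histogram
    so that the output levels are spread over the full 0..255 range."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]          # first non-empty bin of the CDF
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min + 1e-9) * 255)
    return lut.clip(0, 255).astype(np.uint8)[gray]
```

The filtering step that follows equalization could likewise be any standard smoothing (e.g. a median or Gaussian filter); the patent text leaves the choice open.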
2. The infrared night vision fusion method of claim 1, wherein: after the image intensifier amplifies the energy, intensified field-of-view images in different color states (red, green, and blue) are obtained through an electro-optical filter controlled by a circuit, and the human eye finally observes the color night vision image through the eyepiece;
this process requires the control circuit to generate a high-frequency signal driving the electro-optical filter so that the image formed in vision does not flicker.
3. The infrared night vision fusion method of claim 2, wherein: the low-light and infrared images are fused according to the following equation:
wherein E is the enhanced output; I1 is the input image corresponding to the receptive-field center; I2 is the input image in which the receptive-field surround is suppressed; G(s) is shorthand for a Gaussian function;
i, j is the pixel position; and A, C are adjustment parameters;
a color night vision image with natural-looking colors is obtained after fusion with the thermal-infrared-imager image.
4. The infrared night vision fusion method according to claim 3, wherein: the low-light and infrared channels image at different wavelengths; after the target is detected synchronously, its two-dimensional geometric space and one-dimensional spectral information are acquired simultaneously, processed with the fusion mode of claim 3, and the useful information of the various channels is used to synthesize the image.
5. The infrared night vision fusion method of claim 1, wherein: the image processing in S1 comprises signal-to-noise ratio processing; specifically, under low illumination the signal-to-noise ratio of the image captured by the detector is low, and under extremely low illumination an image with a negative-dB signal-to-noise ratio (signal-to-noise ratio less than 1) may even be formed;
at this point the signal from the target scene is extremely weak and almost submerged in noise, a typical 'invisible' phenomenon in night vision imaging;
the negative-dB signal-to-noise-ratio signal-inversion imaging technique establishes a cross-correlation model among the inter-frame target-scene signal, the non-uniformity noise, the dynamic noise, and the imaging projection matrices, based on the short-time constancy of the reflected (radiated) energy of the same signal source and of the fixed-pattern noise (dark-current noise and non-uniformity); using the correlation of homologous signals between frames and between different pixels in space, the target-scene signal is computed and inverted from the original negative-dB image, so that the signal-to-noise ratio can be improved by an order of magnitude or more, finally yielding a clear target image.
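The claimed order-of-magnitude gain can be illustrated with the simplest special case of exploiting short-time constancy: averaging N frames of a static scene suppresses independent noise power by a factor of N, i.e. roughly 10·log10(N) dB. The toy sketch below is a stand-in for the patent's cross-correlation inversion model, not its implementation; the scene and noise levels are invented for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# A static (short-time constant) scene, plus heavy per-frame noise:
# noise power far exceeds signal power, i.e. a negative-dB SNR per frame.
signal = rng.uniform(0.4, 0.6, size=(32, 32))
frames = signal + rng.normal(0.0, 2.0, size=(100, 32, 32))

def snr_db(estimate, truth):
    """Signal-to-noise ratio of an estimate against ground truth, in dB."""
    noise = estimate - truth
    return 10 * np.log10((truth**2).mean() / (noise**2).mean())

single = snr_db(frames[0], signal)          # one raw frame: below 0 dB
averaged = snr_db(frames.mean(axis=0), signal)
# averaging 100 frames raises the SNR by about 10*log10(100) = 20 dB,
# i.e. the "order of magnitude or more" the claim refers to
```

The patent's technique goes further by also modeling fixed-pattern noise and inter-pixel correlation, but the N-frame redundancy is the common underlying resource.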
6. The infrared night vision fusion method of claim 5, wherein: the preprocessing step of distortion correction includes: for infrared night vision images, the noise mainly arises from image non-uniformity caused by pixel discreteness; it has strong spatial and temporal correlation and varies with the thermal radiation signal and the ambient temperature;
an active calibration correction, i.e. fixed-temperature timed correction with a baffle, easily leads to mismatched detector correction parameters and rapidly deteriorating image quality when the background thermal radiation is complex and changeable;
instead, a mask modulates the light intensity at the aperture plane of the imaging system, regulating its point spread function to be anisotropic and to cover several physical pixels; under different aperture codes the imaging system samples the same scene multiple times, and the captured low-resolution aliased images are then iteratively reconstructed and de-aliased in the frequency domain; finally, high-frequency detail components smaller than the pixel size in the target scene can be recovered.
7. The infrared night vision fusion method of claim 6, wherein: the specific steps of enhancing the low-light night vision image are as follows: the image intensifier receives a low-brightness optical image through an input window as the original optical signal, which is received by the photocathode, a high-quantum-efficiency film layer grown on an optical substrate by evaporating an alkali source under full vacuum with photocurrent monitoring;
the intensifier converts the weak or invisible optical image into an electronic image, the gain system amplifies the electronic image several hundred times, and the intensified electronic image is rendered by the phosphor screen.
8. The infrared night vision fusion method of claim 1, wherein: the specific steps in the step S3 include:
1) Extraction of the infrared thermal target: using the property that the brightness of a thermal-target region in an infrared image is generally higher than that of non-target regions, edge extraction is first performed on the infrared image to be segmented; the average gray level of the pixel set is calculated as one gray-threshold condition for growth, and region-growing segmentation is applied to the image to obtain the infrared thermal-target region in preparation for primary fusion;
2) Primary fusion of the infrared thermal-target region with the visible-light image: taking the extracted infrared target as the basis of the fusion decision, the important target information of the infrared image is added into the visible-light image;
3) The primary fusion result is fused a second time with the two original images; to fully incorporate the information of both original images and to avoid losing important information through segmentation errors that may occur when the infrared image is segmented, a lifting-wavelet-based secondary fusion method takes the primary fusion result as a further image source and fuses it with the two original images;
specifically, the three source images are first decomposed by the lifting wavelet transform to obtain their respective low-frequency and high-frequency components; the low-frequency and the high-frequency components are then fused with separate fusion rules to obtain the low-frequency and high-frequency components of the fused image; finally, lifting-wavelet reconstruction yields the secondary fusion result.
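Steps 1) and 2) of this claim — mean-gray region growing followed by pasting the hot region into the visible image — can be sketched as below. This is a simplified illustration: it uses the whole-image mean as a single fixed growth threshold and a caller-supplied seed pixel (both assumptions), skipping the edge-extraction stage; names are illustrative.

```python
import numpy as np
from collections import deque

def grow_hot_region(ir, seed):
    """Region-grow the infrared hot-target area from a seed pixel,
    accepting 4-connected neighbours whose gray level is at least the
    mean gray of the image (the claim's gray-threshold condition)."""
    thresh = ir.mean()
    mask = np.zeros(ir.shape, dtype=bool)
    if ir[seed] < thresh:
        return mask                       # seed is not a hot pixel
    queue = deque([seed])
    mask[seed] = True
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < ir.shape[0] and 0 <= nx < ir.shape[1]
                    and not mask[ny, nx] and ir[ny, nx] >= thresh):
                mask[ny, nx] = True
                queue.append((ny, nx))
    return mask

def primary_fuse(visible, ir, mask):
    """Primary fusion: copy the segmented hot-target pixels of the
    infrared image into the visible-light image."""
    return np.where(mask, ir, visible)
```

In practice the seed(s) would come from the edge-extraction stage the claim describes, rather than being passed in by hand.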
9. The infrared night vision fusion method of claim 1, wherein: the step S4 specifically comprises the following steps:
1) Transferring the colors of a reference image to the dual-channel night vision fusion image;
2) Establishing a color lookup table from the gray information of the resulting color night vision fusion image and the dual-channel sub-band images;
3) Outputting the color night vision image in real time through the lookup table.
10. The infrared night vision fusion method of claim 1, wherein: the infrared device is a thermal infrared imager, and the low-light camera device is a low-light night vision device.
CN202310415586.7A 2023-04-18 2023-04-18 Infrared night vision fusion method Pending CN116664460A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310415586.7A CN116664460A (en) 2023-04-18 2023-04-18 Infrared night vision fusion method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310415586.7A CN116664460A (en) 2023-04-18 2023-04-18 Infrared night vision fusion method

Publications (1)

Publication Number Publication Date
CN116664460A true CN116664460A (en) 2023-08-29

Family

ID=87714301

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310415586.7A Pending CN116664460A (en) 2023-04-18 2023-04-18 Infrared night vision fusion method

Country Status (1)

Country Link
CN (1) CN116664460A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116993729A (en) * 2023-09-26 2023-11-03 南京铂航电子科技有限公司 Night vision device imaging system and method based on second harmonic
CN116993729B (en) * 2023-09-26 2024-03-29 南京铂航电子科技有限公司 Night vision device imaging system and method based on second harmonic

Similar Documents

Publication Publication Date Title
CN109377469B (en) Processing method, system and storage medium for fusing thermal imaging with visible light image
EP2721828B1 (en) High resolution multispectral image capture
WO2005072431A2 (en) A method and apparatus for combining a plurality of images
US20090175535A1 (en) Improved processing of multi-color images for detection and classification
US11948277B2 (en) Image denoising method and device, apparatus, and storage medium
CN104899836B (en) A kind of Misty Image intensifier and method based near infrared multispectral imaging
Ko et al. Artifact-free low-light video enhancement using temporal similarity and guide map
CN110349114A (en) Applied to the image enchancing method of AOI equipment, device and road video monitoring equipment
CN110163807B (en) Low-illumination image enhancement method based on expected bright channel
CN116664460A (en) Infrared night vision fusion method
Wang et al. Low-light image joint enhancement optimization algorithm based on frame accumulation and multi-scale Retinex
Zhang et al. Image dehazing based on dark channel prior and brightness enhancement for agricultural remote sensing images from consumer-grade cameras
Joshi et al. Quantification of retinex in enhancement of weather degraded images
CN113676629A (en) Image sensor, image acquisition device, image processing method and image processor
Qian et al. Underwater image recovery method based on hyperspectral polarization imaging
Honda et al. Make my day-high-fidelity color denoising with near-infrared
Qian et al. Effective contrast enhancement method for color night vision
Khan et al. Robust contrast enhancement method using a retinex model with adaptive brightness for detection applications
Christinal et al. A novel color image fusion for multi sensor night vision images
Wang et al. Nighttime image dehazing using color cast removal and dual path multi-scale fusion strategy
Yuan et al. Tunable-liquid-crystal-filter-based low-light-level color night vision system and its image processing method
CN110020999B (en) Uncooled infrared thermal image self-adaptive mapping method based on homomorphic filtering
CN114170668A (en) Hyperspectral face recognition method and system
Jiao Optimization of Color Enhancement Processing for Plane Images Based on Computer Vision
Chaudhuri et al. Frequency and spatial domains adaptive-based enhancement technique for thermal infrared images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination