CN112651469A - Infrared and visible light image fusion method and system - Google Patents

Infrared and visible light image fusion method and system

Info

Publication number
CN112651469A
Authority
CN
China
Prior art keywords
frequency coefficient
low
coefficient information
image
frequency
Legal status
Pending
Application number
CN202110089292.0A
Other languages
Chinese (zh)
Inventor
路陈红
Current Assignee
Xian Peihua University
Original Assignee
Xian Peihua University
Application filed by Xian Peihua University
Priority to CN202110089292.0A
Publication of CN112651469A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/25 Fusion techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/52 Scale-space analysis, e.g. wavelet analysis


Abstract

The invention discloses an infrared and visible light image fusion method and system. An infrared image and a visible light image are acquired; multi-scale transformation is applied to the infrared image and the visible light image respectively to obtain first low-frequency coefficient information, first high-frequency coefficient information, second low-frequency coefficient information and second high-frequency coefficient information; third low-frequency coefficient information is determined from the first and second low-frequency coefficient information based on a global-contrast-sequence maximum principle; third high-frequency coefficient information is determined from the first and second high-frequency coefficient information; and a fused image is generated from the third low-frequency coefficient information and the third high-frequency coefficient information. The method and system make full use of inter-pixel relative-value information: the relative values between pixels in the low-frequency coefficient information obtained by directly applying multi-scale decomposition to the infrared and visible light images are fused, so that the contrast of the fused image is improved and more detail information in the image is retained.

Description

Infrared and visible light image fusion method and system
Technical Field
The invention belongs to the technical field of image fusion, and particularly relates to an infrared and visible light image fusion method and system.
Background
Image fusion is a technique for fusing the information contained in images of the same scene acquired by multiple imaging systems into one image. The fused image is more suitable for human and machine perception than an image acquired by a single imaging system, so image fusion is widely applied in fields such as industry, medicine and the military. Because images collected by infrared and visible light imaging systems carry complementary information and the imaging equipment is relatively simple and easy to obtain, fusing infrared and visible light images is particularly advantageous and is an important way to improve the application value of video monitoring, medical diagnosis, target detection and the like.
The fusion of infrared and visible light images generally adopts a multi-scale transformation consistent with the visual characteristics of the human eye to represent image information, and generally comprises three steps: 1) applying multi-scale decomposition to the input infrared image and the input visible light image respectively to obtain their multi-scale representation coefficients; 2) fusing the multi-scale representation coefficients of the infrared and visible light images to obtain the multi-scale representation coefficients of the fused image; 3) applying the multi-scale inverse transformation to the fused coefficients to obtain the fused image. The coefficient fusion in step 2), especially the fusion of the low-frequency coefficients, is the key to image fusion, because the low-frequency coefficients carry most of the source images' information.
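As an illustration of this three-step skeleton only, the following minimal sketch uses the wavelet transform from PyWavelets as a stand-in for the multi-scale transformation (NSCT, used later in the embodiment, has no implementation in the common Python scientific stack) and deliberately simple placeholder fusion rules in step 2); all function names and parameters here are illustrative and not the invention's method:

```python
import numpy as np
import pywt  # PyWavelets

def fuse_multiscale(ir, vis, wavelet="db2", levels=4):
    """Three-step fusion skeleton: 1) decompose, 2) fuse coefficients, 3) invert."""
    # 1) multi-scale decomposition of each source image
    cA, *hA = pywt.wavedec2(ir, wavelet, level=levels)
    cB, *hB = pywt.wavedec2(vis, wavelet, level=levels)
    # 2) fuse coefficients (placeholder rules; the patent's low-frequency rule
    #    is developed in the detailed description below)
    cF = 0.5 * (cA + cB)                                  # low-frequency: equal-weight average
    hF = [tuple(np.where(np.abs(a) >= np.abs(b), a, b)    # high-frequency: max-|.| per subband
                for a, b in zip(ta, tb))
          for ta, tb in zip(hA, hB)]
    # 3) multi-scale inverse transformation of the fused coefficients
    return pywt.waverec2([cF, *hF], wavelet)

ir, vis = np.random.rand(64, 64), np.random.rand(64, 64)  # stand-ins for registered inputs
fused = fuse_multiscale(ir, vis)
```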
Early low-frequency coefficient fusion methods either selected, at each pixel position, the maximum of the infrared and visible-light low-frequency coefficients, or took a simple equal-coefficient weighted average of them; such methods easily introduce artifacts into the fused image or lose image detail information. How to optimize the fusion weights assigned to the weighted average of the low-frequency coefficients has therefore become a recent focus of image fusion research. For example, Chen et al. proposed assigning the fusion weight of each pixel based on the energy ratio of the neighboring regions of the infrared and visible-light low-frequency coefficients, and some researchers have proposed assigning the low-frequency fusion weights with a CNN (convolutional neural network).
Image fusion methods based on the neighboring-region energy ratio or on a CNN consider the mutual influence among pixels when fusing the low-frequency coefficients obtained by multi-scale decomposition of the source images, and assign the weighted-average fusion weights with reference to information from neighboring pixels. The contrast of the resulting fused image is therefore improved relative to the early fusion algorithms. However, their use of relative-value information between adjacent pixels is indirect, and the fusion is still an operation on the absolute values of the low-frequency coefficients from the different source images at a single pixel position.
In fact, the above methods still lose detail information, so the contrast of the fused image still needs further improvement. Moreover, although a CNN-based image fusion method can exploit relative information between adjacent pixels through the depth of the learned network, it is limited by the difficulty of obtaining target data for the fused image when training the network model; fusion of the low-frequency coefficients over the whole region is difficult to achieve, and the contrast of the fused image remains low.
Disclosure of Invention
The invention aims to provide an infrared and visible light image fusion method and system that improve the contrast of the fused image by fusing the relative values between pixels of the low-frequency coefficients obtained by multi-scale transformation and decomposition of the infrared and visible light images.
The invention adopts the following technical scheme: an infrared and visible light image fusion method comprises the following steps:
acquiring an infrared image and a visible light image;
respectively carrying out multi-scale transformation on the infrared image and the visible light image to obtain first low-frequency coefficient information, first high-frequency coefficient information, second low-frequency coefficient information and second high-frequency coefficient information;
determining third low-frequency coefficient information according to the first low-frequency coefficient information and the second low-frequency coefficient information based on a global contrast sequence maximum principle;
determining third high-frequency coefficient information according to the first high-frequency coefficient information and the second high-frequency coefficient information;
and determining a fused image according to the third low-frequency coefficient information and the third high-frequency coefficient information.
Further, determining the third low frequency coefficient information according to the first low frequency coefficient information and the second low frequency coefficient information includes:
generating a first global contrast sequence according to the first low-frequency coefficient information, and generating a second global contrast sequence according to the second low-frequency coefficient information;
determining a third global contrast sequence according to the first global contrast sequence and the second global contrast sequence;
and determining third low-frequency coefficient information according to the third global contrast sequence.
Further, determining third low frequency coefficient information from the third global contrast sequence comprises:
calculating third low-frequency coefficient intermediate information according to the third global contrast sequence;
determining third low-frequency coefficient information according to the third low-frequency coefficient intermediate information;
the calculation method of the third low-frequency coefficient intermediate information comprises the following steps:
setting the coefficient of at least one pixel in the third low-frequency coefficient intermediate information as a preset value, and minimizing
Figure BDA0002912123380000031
Calculating to obtain third low-frequency coefficient intermediate information; (ii) a
Wherein the content of the first and second substances,
Figure BDA0002912123380000032
is the third low-frequency coefficient intermediate information, J is the total number of layers of multi-scale transformation, M is the total number of pixels in the fused image, x and y are pixel ordinal numbers in the image, and x is less than y,
Figure BDA0002912123380000033
is the coefficient value of the xth pixel in the third low frequency coefficient intermediate information,
Figure BDA0002912123380000034
the coefficient value of the y-th pixel in the third low-frequency coefficient intermediate information, dF(x, y) is the contrast value between the x-th pixel and the y-th pixel in the fused image, and F is the fused image.
Further, determining the third low-frequency coefficient information according to the third low-frequency coefficient intermediate information includes:

compensating the coefficient value of each pixel in $\{\tilde{C}_F^J(x)\}$ to obtain the coefficient value of each pixel in the third low-frequency coefficient information.
Further, determining the third high-frequency coefficient information from the first high-frequency coefficient information and the second high-frequency coefficient information includes:

constructing a third high-frequency coefficient information calculation model

$$H_F^{j,l}(m) = w_m H_A^{j,l}(m) + (1 - w_m) H_B^{j,l}(m), \quad m \in \{1,2,\dots,M\};$$

wherein $H_F^{j,l}(m)$ is the coefficient value of the m-th pixel in the high-frequency subgraph of the l-th directional subband of the j-th layer multi-scale transformation of the fused image, $H_A^{j,l}(m)$ is the coefficient value of the m-th pixel in the high-frequency subgraph of the l-th directional subband of the j-th layer multi-scale transformation of the infrared image, $w_m$ is the fusion weight of $H_A^{j,l}(m)$, and $H_B^{j,l}(m)$ is the coefficient value of the m-th pixel in the high-frequency subgraph of the l-th directional subband of the j-th layer multi-scale transformation of the visible light image;

determining the fusion weight $w_m$ of $H_A^{j,l}(m)$;

determining the high-frequency coefficient of each pixel in the third high-frequency coefficient information according to the calculation model and the fusion weight $w_m$, and obtaining the third high-frequency coefficient information.
Further, determining the fusion weight of $H_A^{j,l}(m)$ includes:

computing the fusion weight $w_m$ of $H_A^{j,l}(m)$ by

$$w_m = \frac{1}{|R_m|}\sum_{n \in R_m} w_{R_n};$$

wherein $R_m$ is the local region of the m-th pixel, $R_n$ is the local region of the n-th pixel, $n \in \{1,2,\dots,M\}$, and $w_{R_n}$ is the fusion weight of the local region $R_n$.
Further, the local region fusion weight $w_{R_n}$ is calculated as follows:

respectively calculating the local area variances $(\sigma_{A,n}^{j,l})^2$ and $(\sigma_{B,n}^{j,l})^2$ of the n-th pixel in the infrared high-frequency subgraph corresponding to the first high-frequency coefficient information and in the visible light high-frequency subgraph corresponding to the second high-frequency coefficient information;

determining the local region fusion weight by

$$w_{R_n} = \begin{cases} 1, & \left(\sigma_{A,n}^{j,l}\right)^2 \geq \left(\sigma_{B,n}^{j,l}\right)^2, \\ 0, & \text{otherwise}. \end{cases}$$
The other technical scheme of the invention is as follows: an infrared and visible light image fusion system, comprising:
the acquisition module is used for acquiring an infrared image and a visible light image;
the multi-scale transformation module is used for respectively carrying out multi-scale transformation on the infrared image and the visible light image to obtain first low-frequency coefficient information, first high-frequency coefficient information, second low-frequency coefficient information and second high-frequency coefficient information;
the first determining module is used for determining third low-frequency coefficient information according to the first low-frequency coefficient information and the second low-frequency coefficient information based on a global contrast sequence maximum principle;
the second determining module is used for determining third high-frequency coefficient information according to the first high-frequency coefficient information and the second high-frequency coefficient information;
and the generating module is used for generating a fusion image according to the third low-frequency coefficient information and the third high-frequency coefficient information.
The other technical scheme of the invention is as follows: an infrared and visible light image fusion system comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the infrared and visible light image fusion method described above when executing the computer program.
The other technical scheme of the invention is as follows: a computer-readable storage medium, in which a computer program is stored, wherein the computer program, when executed by a processor, implements any of the above-mentioned infrared and visible image fusion methods.
The invention has the beneficial effects that the method and system make full use of inter-pixel relative-value information: the relative values between pixels in the low-frequency coefficient information obtained by directly applying multi-scale transformation and decomposition to the infrared and visible light images are fused, so that the contrast of the fused image is improved and more detail information in the image is retained; at the same time, the implementation is simpler than CNN-based fusion methods and is not constrained to particular application scenes.
Drawings
FIG. 1 is a flowchart of a method for fusing infrared and visible light images according to an embodiment of the present invention;
FIG. 2 is a flow chart illustrating the detailed steps in an embodiment of the present invention;
FIG. 3 is a diagram illustrating an image decomposition process by NSCT according to an embodiment of the present invention;
FIG. 4 is a schematic diagram illustrating an index sequence of 5 rows and 5 columns of pixels according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of an infrared and visible light input source image sample 1 in a validation embodiment of the present invention;
FIG. 6 shows fused images output from the source images of FIG. 5 by different fusion methods;
FIG. 7 is a schematic illustration of an infrared and visible light input source image sample 2 in a validation embodiment of the present invention;
FIG. 8 is a fused image obtained by different fusion methods of the source image in FIG. 7;
FIG. 9 is a schematic structural diagram of an infrared and visible light image fusion system according to an embodiment of the present invention;
fig. 10 is a schematic structural diagram of an infrared and visible light image fusion system according to another embodiment of the present invention.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings and specific embodiments.
In the image fusion process, human vision is sensitive not to the absolute value of a single pixel but to the relative values between pixels. Therefore, when inter-pixel relative-value information is not fully utilized, detail information is still lost and the contrast of the fused image needs further improvement. In addition, although a CNN-based image fusion method can exploit relative information between adjacent pixels through the depth of the learned network, it is limited by the difficulty of obtaining fused-image target data when training the network model, and at present can only be applied to fusing the low-frequency coefficients of partial local regions where the infrared and visible light images are very similar.
The purpose of image fusion is to obtain new information helpful for subsequent application decisions by improving the sensitivity of human vision to the fused image. Since image fusion is still a relatively new research field, how to use inter-pixel relative-value information to improve this sensitivity still requires continued experiment and verification.
The invention aims to provide an infrared and visible light image fusion method and system, which are used for directly fusing the relative values among the pixels of the low-frequency coefficient obtained by carrying out multi-scale transformation decomposition on an infrared image and a visible light image, so that the contrast of the fused image is improved.
Specifically, the embodiment of the invention discloses an infrared and visible light image fusion method, as shown in fig. 1, comprising the following steps: step S110, acquiring an infrared image and a visible light image; step S120, respectively carrying out multi-scale transformation on the infrared image and the visible light image to obtain first low-frequency coefficient information, first high-frequency coefficient information, second low-frequency coefficient information and second high-frequency coefficient information; step S130, determining third low-frequency coefficient information according to the first low-frequency coefficient information and the second low-frequency coefficient information based on a global contrast sequence maximum principle; step S140, determining third high-frequency coefficient information according to the first high-frequency coefficient information and the second high-frequency coefficient information; and S150, generating a fusion image according to the third low-frequency coefficient information and the third high-frequency coefficient information.
The method and system provided by the invention make full use of inter-pixel relative-value information: the relative values between pixels in the low-frequency coefficient information obtained by directly applying multi-scale transformation and decomposition to the infrared and visible light images are fused, so that the contrast of the fused image is improved and more detail information in the image is retained; at the same time, the implementation is simpler than CNN-based fusion methods and is not constrained to particular application scenes.
As a specific implementation, in the more detailed flowchart of this embodiment shown in fig. 2, the infrared image and visible light image acquired in step S110 may be a pair of images of the same scene respectively captured by an infrared acquisition device and a visible light acquisition device, used as the input images. This ensures that the size and content of the collected images correspond, which facilitates the subsequent fusion. Preferably, the two acquisition devices capture the images at the same moment, to avoid the image content differing between capture times.
In step S120, the input infrared image $I_A$ and visible light image $I_B$ are each subjected to multi-scale transformation to obtain their respective low-frequency coefficient subgraph and several high-frequency coefficient subgraphs. The coefficient value at each pixel of the low-frequency coefficient subgraph is a low-frequency coefficient, and all the low-frequency coefficients form the low-frequency coefficient information corresponding to that subgraph; likewise, the coefficient value at each pixel of a high-frequency coefficient subgraph is a high-frequency coefficient, and each high-frequency subgraph corresponds to one set of high-frequency coefficient information.
As for the multi-scale transform method, any of the pyramid transform, CT (curvelet transform), WT (wavelet transform), NSCT (non-subsampled contourlet transform) and other multi-scale transform methods may be selected; in this embodiment the NSCT method is used.
The NSCT is a multi-scale and multi-direction image decomposition method, and consists of two core modules, namely NSPFB (non-subsampled pyramid filter bank) and NSDFB (non-subsampled directional filter bank).
The image decomposition process by NSCT is shown in fig. 3, in which NSPFB is a set of cascaded NSPFs (non-subsampled pyramid filters) for completing the decomposition of the image in multiple scales. Each NSPF is a two-channel non-down sampling two-dimensional filter composed of a low-pass decomposition module and a high-pass decomposition module, and two sub-images with the same size as the input image are output after the input image is subjected to NSPF filtering decomposition and respectively correspond to the low-frequency part and the high-frequency part of the input image. And the low-frequency part output by each NSPF in the cascaded group of NSPFs is further decomposed as an input image of the next scale NSPF, and the high-frequency part obtains high-frequency coefficient subgraphs in multiple directions corresponding to the current scale through one NSDFB. The number of cascades of the NSPFBs, i.e., the number of layers J of the image multi-scale decomposition in the NSPFB, is called a scale parameter of multi-scale transformation, and may be configured as any integer greater than or equal to 1, and is preferably 4 in this embodiment.
The NSDFB is a group of multi-directional non-subsampled directional filters used to complete the multi-directional decomposition of the high-frequency image output by each stage of NSPF. Through a k-layer tree-structured decomposition, the NSDFB effectively divides the signal's frequency band into $L = 2^k$ wedge-shaped directional subbands. The direction parameter k of the NSDFB can be configured as any integer greater than or equal to 1 and may take different values at different scales, preferably between 2 and 4.
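Only as a rough illustration of the non-subsampled (undecimated) filtering idea behind the NSPFB, the sketch below uses the à trous scheme, in which the filter is upsampled instead of the image being downsampled, so every scale produces subimages the same size as the input. The directional NSDFB stage is omitted, and the B3-spline kernel is a conventional choice rather than the patent's filters:

```python
import numpy as np
from scipy.ndimage import convolve

def atrous_pyramid(img, levels):
    """Non-subsampled pyramid sketch (a trous scheme): each level outputs a
    low-frequency and a high-frequency subimage the same size as the input,
    and the low-frequency part is fed to the next-scale filter, as above."""
    h = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0    # B3-spline low-pass taps
    low, highs = img.astype(float), []
    for j in range(levels):
        k = np.zeros((len(h) - 1) * 2**j + 1)
        k[:: 2**j] = h                                # upsample the filter (insert zeros)
        smoothed = convolve(convolve(low, k[None, :], mode="nearest"),
                            k[:, None], mode="nearest")
        highs.append(low - smoothed)                  # high-frequency part at scale j
        low = smoothed                                # input to the next scale
    return low, highs                                 # one low-freq subgraph, J high-freq parts

low, highs = atrous_pyramid(np.random.rand(64, 64), levels=4)
```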
Taking the non-subsampled contourlet transformation with scale parameter J = 4 and direction parameter $k_j = 3$ for each scale j = 1, 2, 3, 4 as an example, the multi-scale decomposition results are as follows. NSCT decomposition of the infrared image $I_A$ with J = 4 and $k_j = 3$ yields one low-frequency subgraph and $4 \times 2^3 = 32$ high-frequency subgraphs $H_A^{j,l}$, $j \in \{1,2,3,4\}$, $l \in \{1,\dots,2^{k_j}\}$, where $H_A^{j,l}$ denotes the high-frequency subgraph of the infrared image $I_A$ in direction l at scale j (i.e., the j-th layer); the value at each pixel position of such a subgraph is a high-frequency coefficient. The low-frequency subgraph of the infrared image is denoted $C_A^J$, and the value at each pixel position in it is an infrared-image low-frequency coefficient. The subgraph set obtained by multi-scale transformation and decomposition of the infrared image $I_A$ can thus be represented as $\{C_A^J\} \cup \{H_A^{j,l}\}$. Likewise, the subgraph set obtained by multi-scale decomposition of the visible light image $I_B$ can be represented as $\{C_B^J\} \cup \{H_B^{j,l}\}$, where $C_B^J$ is the low-frequency subgraph of the visible light image $I_B$ and $H_B^{j,l}$ is its high-frequency subgraph in direction l at scale j.

To facilitate subsequent computation, the image pixels are indexed row by row from top to bottom, each row being numbered sequentially from left to right. $C_A^J$ can then be expressed as the sequence $\{C_A^J(x)\}_{x=1,\dots,M}$ and $H_A^{j,l}$ as $\{H_A^{j,l}(x)\}_{x=1,\dots,M}$, where M denotes the total number of pixels of the image and x is the pixel ordinal. In the same way, $C_B^J$ can be expressed as $\{C_B^J(x)\}_{x=1,\dots,M}$ and $H_B^{j,l}$ as $\{H_B^{j,l}(x)\}_{x=1,\dots,M}$.
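In NumPy terms, this row-major ordinal convention is simply the flattened view of each subgraph; a small illustrative check:

```python
import numpy as np

sub = np.arange(12).reshape(3, 4)   # toy 3 x 4 subgraph
seq = sub.ravel()                   # {C(x)}_{x=1..M}: ordinal x maps to seq[x - 1]
assert seq[5 - 1] == sub[1, 0]      # ordinal x = 5 is row 2, column 1 (row-major)
```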
In step S130, determining the third low frequency coefficient information from the first low frequency coefficient information and the second low frequency coefficient information includes:
generating a first global contrast sequence according to the first low-frequency coefficient information, and generating a second global contrast sequence according to the second low-frequency coefficient information; and determining a third global contrast sequence according to the first global contrast sequence and the second global contrast sequence.
The global contrast sequence of the infrared-image low-frequency subgraph $C_A^J$ obtained in step S120 records the contrast value between each pixel and every subsequent pixel and is given by $\{d_A(x,y)\}_{x=1,\dots,M-1;\ y=x+1,\dots,M}$, where the contrast value between two pixels is the difference between the low-frequency coefficient of the former pixel and that of the latter pixel, i.e. $d_A(x,y) = C_A^J(x) - C_A^J(y)$. Accordingly, the global contrast sequence of the visible-light-image low-frequency subgraph is $\{d_B(x,y)\}_{x=1,\dots,M-1;\ y=x+1,\dots,M}$, where $d_B(x,y) = C_B^J(x) - C_B^J(y)$.
when determining the third global contrast sequence, the embodiment obtains the global contrast sequence of the desired low-frequency subgraph of the fused image based on the maximum principle of the global contrast sequence of the fused image, { dF(x,y)}x=1,...,M-1;y=x+1,x+2,...,MWherein, in the step (A),
Figure BDA0002912123380000104
that is to say, the contrast value between two pixels in the global contrast sequence of the low-frequency subgraph of the fused image is determined by the contrast value of the low-frequency subgraph of the infrared image corresponding to the two pixels and the contrast value of the low-frequency subgraph of the visible image, and the contrast value with the maximum absolute value in the contrast values of the low-frequency subgraph of the infrared image and the low-frequency subgraph of the visible image is selected as the contrast value corresponding to the two pixels in the global contrast sequence of the low-frequency subgraph of the fused image.
Third low-frequency coefficient information is then determined from the third global contrast sequence. In this embodiment the low-frequency coefficients of the fused image, i.e. the third low-frequency coefficients, are calculated from the global contrast sequence of the fused-image low-frequency subgraph; specifically, the fused-image low-frequency subgraph sequence $\{C_F^J(x)\}_{x=1,\dots,M}$ is found such that the objective function

$$\sum_{x=1}^{M-1}\sum_{y=x+1}^{M}\left(C_F^J(x)-C_F^J(y)-d_F(x,y)\right)^2$$

is minimal, where the value $C_F^J(x)$ of each pixel x is a low-frequency coefficient of the fused image. The optimization with the above objective function can preferably be solved by Gaussian elimination.
More specifically, determining the third low-frequency coefficient information by means of the intermediate information includes the following steps.

Since the global contrast sequence defines only relative values between pixels and does not define a reference value, the above equation has infinitely many possible solutions. To facilitate the solution, the coefficient of at least one pixel may be assumed to be a predetermined value; in this embodiment, for example, the coefficient of the first pixel $\tilde{C}_F^J(1)$ may be set to the corresponding infrared or visible-light low-frequency coefficient, or to a constant such as zero, or the low-frequency coefficient at any other pixel location may be fixed arbitrarily.

After this setting, Gaussian elimination is used to minimize

$$\sum_{x=1}^{M-1}\sum_{y=x+1}^{M}\left(\tilde{C}_F^J(x)-\tilde{C}_F^J(y)-d_F(x,y)\right)^2$$

and solve for $\{\tilde{C}_F^J(x)\}_{x=1,\dots,M}$, where $\{\tilde{C}_F^J(x)\}$ is the third low-frequency coefficient intermediate information, J is the total number of layers of the multi-scale transformation, M is the total number of pixels in the fused image, x and y are pixel ordinals with x < y, $\tilde{C}_F^J(x)$ and $\tilde{C}_F^J(y)$ are the intermediate coefficient values of the x-th and y-th pixels, $d_F(x,y)$ is the contrast value between the x-th and y-th pixels in the fused image, and F is the fused image.

However, the sequence $\{\tilde{C}_F^J(x)\}$ obtained under the above assumption is not necessarily the best set of fused-image low-frequency coefficients; therefore, it must be adjusted further.
In the embodiment of the invention, a simple adjustment is based on the prior information that the mean of the fused-image low-frequency coefficients should equal the average of the infrared and visible-light low-frequency coefficient means, i.e.

$$\overline{C_F^J} = \tfrac{1}{2}\left(\overline{C_A^J} + \overline{C_B^J}\right).$$

The coefficient of each pixel in $\{\tilde{C}_F^J(x)\}$ is therefore compensated by the mean difference

$$\Delta = \tfrac{1}{2}\left(\overline{C_A^J} + \overline{C_B^J}\right) - \overline{\tilde{C}_F^J},$$

yielding the desired fused-image low-frequency subgraph

$$C_F^J(x) = \tilde{C}_F^J(x) + \Delta, \quad x = 1,\dots,M.$$

That is, the coefficient value of each pixel in $\{\tilde{C}_F^J(x)\}$ is compensated by the compensation value Δ to obtain the coefficient value of each pixel in the third low-frequency coefficient information.
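Putting the low-frequency branch together under the equations as reconstructed above, a minimal sketch: the free reference value is fixed by making the intermediate coefficients sum to zero (one admissible preset value), in which case the least-squares normal equations of the pairwise objective reduce to a row mean of the antisymmetric contrast matrix (the same solution Gaussian elimination on the normal equations would return), after which the mean compensation Δ is applied. All names are illustrative:

```python
import numpy as np

def fuse_low_frequency(cA, cB):
    """Low-frequency branch sketch: contrast fusion, least-squares recovery
    of absolute coefficients from relative values, then mean compensation."""
    dA = cA[:, None] - cA[None, :]
    dB = cB[:, None] - cB[None, :]
    dF = np.where(np.abs(dA) >= np.abs(dB), dA, dB)  # fused contrast matrix (antisymmetric)

    # Minimize sum_{x<y} (c_x - c_y - d_F(x,y))^2 with the free constant fixed
    # by sum_x c_x = 0: the normal equations then give c_x = mean_y d_F(x, y).
    c_tilde = dF.mean(axis=1)                        # intermediate coefficients

    # Compensate so the fused mean equals the average of the source means.
    delta = 0.5 * (cA.mean() + cB.mean()) - c_tilde.mean()
    return c_tilde + delta

cA = np.array([0.2, 0.9, 0.4])                       # flattened low-frequency subgraphs
cB = np.array([0.5, 0.1, 0.8])
cF = fuse_low_frequency(cA, cB)                      # fused low-frequency coefficients
```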
As described above, the process of fusing the first low-frequency coefficient information and the second low-frequency coefficient information into the third low-frequency coefficient information is completed.
The process of determining the third high-frequency coefficient information from the first high-frequency coefficient information and the second high-frequency coefficient information in step S140 is now described. The infrared-image high-frequency subgraphs $H_A^{j,l}$ and visible-light-image high-frequency subgraphs $H_B^{j,l}$ obtained by the decomposition in step S120 are combined by weighted averaging to obtain the high-frequency subgraphs of the fused image, the fusion weights of the weighted average being assigned based on the local-variance maximum principle.

The specific implementation includes: constructing the third high-frequency coefficient information calculation model

$$H_F^{j,l}(m) = w_m H_A^{j,l}(m) + (1 - w_m) H_B^{j,l}(m), \quad m \in \{1,2,\dots,M\};$$

determining the fusion weight $w_m$ of $H_A^{j,l}(m)$; and determining the high-frequency coefficient of each pixel in the third high-frequency coefficient information according to the calculation model and the fusion weight $w_m$, obtaining the third high-frequency coefficient information. Here $H_F^{j,l}(m)$ is the coefficient value of the m-th pixel in the high-frequency subgraph of the l-th directional subband of the j-th layer multi-scale transformation of the fused image, $H_A^{j,l}(m)$ is the corresponding coefficient value for the infrared image, $w_m$ is the fusion weight of $H_A^{j,l}(m)$ with $0 \le w_m \le 1$, and $H_B^{j,l}(m)$ is the corresponding coefficient value for the visible light image.
further, determining
Figure BDA00029121233800001214
The fusion weight value comprises:
firstly, determining a pixel set covered by a local region corresponding to each pixel in the high-frequency subgraph, and determining a local region R of the mth pixel in the infrared imagem. The local area window parameter W is set, W being a positive integer larger than 0, preferably a number between 1 and 3. Local region R corresponding to pixel m in high frequency sub-imagemThe image area is an image area covered by a rectangle having a width of (2 × W +1) and a height of (2 × W +1) with the pixel m as the center. Taking W as 1 as an example, the width of the local region is (2 × 1+1) 3, and the height is (2 × 1+1) 3. Local region R corresponding to pixel m in high frequency sub-imagemThis means that 3 × 3 — 9 pixels centered on pixel m are covered. In particular, for pixels located at the sides and corners of the image, the number of pixels covered by the local area is smaller because there is no other pixel on the left or right side or on the top or bottom side.
Taking an example of a 5-row and 5-column image, as shown in fig. 4, the image pixel indices are numbered 1,2, …, 25 sequentially in a row-by-row serial order from top to bottom and left to right. Then, the local region R corresponding to the pixel m 13m=13Covering 9 pixels centered at 13 {7, 8, 9, 12, 13, 14, 17, 18, 19 }. Still taking the image with 5 rows and 5 columns and W equal to 1 as an example, the pixel m equal to 1 is located at the upper left corner of the image, and there are no other pixels at the upper edge and the left edge, so that the local region R corresponding to the pixel m equal to 1 corresponds tom=1Only 4 pixels 1,2, 6, 7 are covered.
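A small helper reproducing these border-clipped local regions with 1-based ordinals (the asserts check the two examples from fig. 4; names illustrative):

```python
def local_region(m, rows, cols, W=1):
    """1-based ordinals of the pixels covered by R_m: the (2W+1) x (2W+1)
    window centered on pixel m, clipped at the image borders."""
    r, c = divmod(m - 1, cols)                          # row-major 0-based position
    rr = range(max(r - W, 0), min(r + W, rows - 1) + 1)
    cc = range(max(c - W, 0), min(c + W, cols - 1) + 1)
    return [i * cols + j + 1 for i in rr for j in cc]

assert local_region(13, 5, 5) == [7, 8, 9, 12, 13, 14, 17, 18, 19]
assert local_region(1, 5, 5) == [1, 2, 6, 7]            # corner pixel: only 4 covered
```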
The fusion weight of the local region of pixel m is recorded as $w_{R_m}$. The local area variance of pixel m in a high-frequency subgraph $H^{j,l}$ is computed as

$$\left(\sigma_m^{j,l}\right)^2 = \frac{1}{|R_m|}\sum_{n \in R_m}\left(H^{j,l}(n) - \mu_m^{j,l}\right)^2, \qquad \mu_m^{j,l} = \frac{1}{|R_m|}\sum_{n \in R_m} H^{j,l}(n),$$

where $|R_m|$ denotes the total number of pixels covered by the local region $R_m$.
The local region fusion weight $w_{R_n}$ of the n-th pixel is obtained by maximizing its local area variance, as follows. Respectively calculate the local area variances $(\sigma_{A,n}^{j,l})^2$ and $(\sigma_{B,n}^{j,l})^2$ of the n-th pixel in the infrared high-frequency subgraph corresponding to the first high-frequency coefficient information and in the visible light high-frequency subgraph corresponding to the second high-frequency coefficient information:

$$\left(\sigma_{A,n}^{j,l}\right)^2 = \frac{1}{|R_n|}\sum_{m \in R_n}\left(H_A^{j,l}(m) - \mu_{A,n}^{j,l}\right)^2, \qquad \left(\sigma_{B,n}^{j,l}\right)^2 = \frac{1}{|R_n|}\sum_{m \in R_n}\left(H_B^{j,l}(m) - \mu_{B,n}^{j,l}\right)^2,$$

where $\mu_{A,n}^{j,l}$ and $\mu_{B,n}^{j,l}$ are the corresponding local means. The local region fusion weight is then determined by

$$w_{R_n} = \begin{cases} 1, & \left(\sigma_{A,n}^{j,l}\right)^2 \geq \left(\sigma_{B,n}^{j,l}\right)^2, \\ 0, & \text{otherwise}. \end{cases}$$
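For all pixels at once, the truncated-window means and variances can be computed with a border-clipped integral image; the sketch below implements the local-variance comparison as reconstructed above (illustrative names, and the vectorization is an implementation choice, not the patent's):

```python
import numpy as np

def window_sums(a, W):
    """Sum of `a` over the (2W+1) x (2W+1) window centered at each pixel,
    truncated at the borders (matching the local regions R_n above)."""
    rows, cols = a.shape
    ii = np.zeros((rows + 1, cols + 1))
    ii[1:, 1:] = a.cumsum(0).cumsum(1)                 # integral image
    r0 = np.clip(np.arange(rows) - W, 0, rows)
    r1 = np.clip(np.arange(rows) + W + 1, 0, rows)
    c0 = np.clip(np.arange(cols) - W, 0, cols)
    c1 = np.clip(np.arange(cols) + W + 1, 0, cols)
    return ii[np.ix_(r1, c1)] - ii[np.ix_(r0, c1)] - ii[np.ix_(r1, c0)] + ii[np.ix_(r0, c0)]

def region_weights(hA, hB, W=1):
    """w_{R_n} per pixel: 1 where the IR subband has the larger local variance."""
    cnt = window_sums(np.ones_like(hA), W)             # |R_n|, smaller at the borders
    var = lambda h: window_sums(h * h, W) / cnt - (window_sums(h, W) / cnt) ** 2
    return (var(hA) >= var(hB)).astype(float)
```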
Finally, the fusion weight $w_m$ of $H_A^{j,l}(m)$ is computed by

$$w_m = \frac{1}{|R_m|}\sum_{n \in R_m} w_{R_n},$$

i.e., the average of the local-region fusion weights of the pixels in the local region of pixel m.
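Continuing the previous sketch (and reusing its `window_sums` and `region_weights` helpers), the per-pixel weight $w_m$ is the window average of the region weights, and the fused subband follows from the weighted-average model; this would be applied to each (j, l) subband pair:

```python
import numpy as np

def fuse_high_frequency(hA, hB, W=1):
    """Fuse one pair of high-frequency subbands H_A^{j,l}, H_B^{j,l}."""
    wR = region_weights(hA, hB, W)                     # binary weight per local region
    cnt = window_sums(np.ones_like(hA), W)
    w = window_sums(wR, W) / cnt                       # w_m: mean of w_{R_n} over n in R_m
    return w * hA + (1.0 - w) * hB

hA, hB = np.random.rand(64, 64), np.random.rand(64, 64)
hF = fuse_high_frequency(hA, hB)                       # one fused (j, l) subband
```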
After the third low-frequency coefficient information and the third high-frequency coefficient information are obtained, they are used as the low-frequency and high-frequency coefficients of the fused image, and the multi-scale inverse transformation is applied to obtain the fused image. The inverse transformation corresponds to the multi-scale transformation method adopted in step S120; that is, when the non-subsampled contourlet transformation is adopted in step S120, the non-subsampled contourlet inverse transformation is applied to the coefficient set $\{C_F^J\} \cup \{H_F^{j,l}\}$ obtained in steps S130 and S140 to obtain the fused image.
In the embodiment of the invention, a global contrast sequence is defined to represent the relative-value information of the low-frequency coefficients between pixels in the low-frequency subgraph of the image's multi-scale transformation. The image fusion operation is then a fusion of this inter-pixel relative-value information from the infrared and visible-light low-frequency subgraphs, i.e., a fusion of their global contrast sequences as defined above. A simple method is also provided for recovering the absolute values of the fused-image low-frequency coefficients from the relative-value information among them.
Verification of the examples:
FIG. 5 shows infrared and visible light input source image sample 1: (a) the input infrared source image, (b) the input visible light source image. FIG. 6 shows the fused images obtained from the source images of FIG. 5 by different fusion methods: (a) the early low-frequency fusion method that directly takes an equal-coefficient weighted average of the multi-scale decomposition low-frequency coefficients; (b) the low-frequency fusion method based on the neighboring-region energy ratio proposed by Chen et al.; (c) the low-frequency fusion method based on a CNN model; (d) the fusion method of the present invention. The fused image output by the equal-coefficient weighted-average method has low contrast; in particular, the sky and branch regions lose the details of many branches. The methods based on the neighboring-region energy ratio and on the CNN greatly improve the contrast over the early equal-coefficient method, but details are still lost: the peaked house loses detail in the energy-ratio result, and the cabin and the person in a skirt lose detail in the CNN result. The details in the fused image output by the fusion method of the present invention are comparatively better preserved.
Fig. 7 shows infrared and visible light input source image sample 2: (a) the input infrared source image, (b) the input visible light source image. FIG. 8 shows the fused images obtained from the source images of FIG. 7 by the same four fusion methods. Again, the fused image output by the fusion method of the present invention is the best among the different fusion methods and accords better with the visual effect of the human eye.
Another embodiment of the present invention discloses an infrared and visible light image fusion system, as shown in fig. 9, comprising:
an acquiring module 210, configured to acquire an infrared image and a visible light image; the multi-scale transformation module 220 is configured to perform multi-scale transformation on the infrared image and the visible light image respectively to obtain first low-frequency coefficient information, first high-frequency coefficient information, second low-frequency coefficient information, and second high-frequency coefficient information; a first determining module 230, configured to determine, based on a global contrast sequence maximization principle, third low-frequency coefficient information according to the first low-frequency coefficient information and the second low-frequency coefficient information; a second determining module 240, configured to determine third high frequency coefficient information according to the first high frequency coefficient information and the second high frequency coefficient information; and a generating module 250, configured to generate a fused image according to the third low-frequency coefficient information and the third high-frequency coefficient information.
It should be noted that, because the contents of information interaction, execution process, and the like between the modules are based on the same concept as the method embodiment of the present invention, specific functions and technical effects thereof may be referred to specifically in the method embodiment section, and are not described herein again.
It will be clear to those skilled in the art that, for convenience and simplicity of description, the foregoing division of the functional modules is merely illustrated, and in practical applications, the above function distribution may be performed by different functional modules according to needs, that is, the internal structure of the apparatus is divided into different functional modules to perform all or part of the above described functions. Each functional module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, the specific names of the functional modules are only for convenience of distinguishing from each other and are not used for limiting the protection scope of the present invention. The specific working process of the modules in the system may refer to the corresponding process in the foregoing method embodiment, and is not described herein again.
Another embodiment of the present invention further discloses an infrared and visible light image fusion system, as shown in fig. 10, which includes a memory 31, a processor 32, and a computer program 33 stored in the memory 31 and executable on the processor 32, and when the computer program is executed by the processor, the processor implements an infrared and visible light image fusion method of any one of the above.
Another embodiment of the present invention further discloses a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a processor, the computer program implements any one of the above-mentioned infrared and visible light image fusion methods.
The computer readable medium may include at least: any entity or device capable of carrying computer program code to a photographing apparatus/terminal apparatus, a recording medium, computer Memory, Read-Only Memory (ROM), random-access Memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium. Such as a usb-disk, a removable hard disk, a magnetic or optical disk, etc. In certain jurisdictions, computer-readable media may not be an electrical carrier signal or a telecommunications signal in accordance with legislative and patent practice.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.

Claims (10)

1. An infrared and visible light image fusion method is characterized by comprising the following steps:
acquiring an infrared image and a visible light image;
respectively carrying out multi-scale transformation on the infrared image and the visible light image to obtain first low-frequency coefficient information, first high-frequency coefficient information, second low-frequency coefficient information and second high-frequency coefficient information;
determining third low-frequency coefficient information according to the first low-frequency coefficient information and the second low-frequency coefficient information based on a global contrast sequence maximum principle;
determining third high-frequency coefficient information according to the first high-frequency coefficient information and the second high-frequency coefficient information;
and generating a fusion image according to the third low-frequency coefficient information and the third high-frequency coefficient information.
2. The method of claim 1, wherein determining third low frequency coefficient information based on the first low frequency coefficient information and the second low frequency coefficient information comprises:
generating a first global contrast sequence according to the first low-frequency coefficient information, and generating a second global contrast sequence according to the second low-frequency coefficient information;
determining a third global contrast sequence according to the first global contrast sequence and the second global contrast sequence;
and determining third low-frequency coefficient information according to the third global contrast sequence.
3. The method of claim 2, wherein determining third low frequency coefficient information from the third global contrast sequence comprises:
calculating third low-frequency coefficient intermediate information according to the third global contrast sequence;
determining third low-frequency coefficient information according to the third low-frequency coefficient intermediate information;
the calculation method of the third low-frequency coefficient intermediate information comprising:

setting the coefficient of at least one pixel in the third low-frequency coefficient intermediate information to a preset value, and calculating the third low-frequency coefficient intermediate information by minimizing

$$\sum_{x=1}^{M-1}\sum_{y=x+1}^{M}\left(\tilde{C}_F^J(x)-\tilde{C}_F^J(y)-d_F(x,y)\right)^2;$$

wherein $\{\tilde{C}_F^J(x)\}_{x=1,\dots,M}$ is the third low-frequency coefficient intermediate information, J is the total number of layers of the multi-scale transformation, M is the total number of pixels in the fused image, x and y are pixel ordinals in the image with x < y, $\tilde{C}_F^J(x)$ is the coefficient value of the x-th pixel in the third low-frequency coefficient intermediate information, $\tilde{C}_F^J(y)$ is the coefficient value of the y-th pixel in the third low-frequency coefficient intermediate information, $d_F(x,y)$ is the contrast value between the x-th pixel and the y-th pixel in the fused image, and F is the fused image.
4. The method of claim 2 or 3, wherein determining the third low-frequency coefficient information based on the third low-frequency coefficient intermediate information comprises:

compensating the coefficient value of each pixel in $\{\tilde{C}_F^J(x)\}$ to obtain the coefficient value of each pixel in the third low-frequency coefficient information.
5. The method of claim 4, wherein determining the third high-frequency coefficient information based on the first high-frequency coefficient information and the second high-frequency coefficient information comprises:

constructing a third high-frequency coefficient information calculation model

$$H_F^{j,l}(m) = w_m H_A^{j,l}(m) + (1 - w_m) H_B^{j,l}(m), \quad m \in \{1,2,\dots,M\};$$

wherein $H_F^{j,l}(m)$ is the coefficient value of the m-th pixel in the high-frequency subgraph of the l-th directional subband of the j-th layer multi-scale transformation of the fused image, $H_A^{j,l}(m)$ is the coefficient value of the m-th pixel in the high-frequency subgraph of the l-th directional subband of the j-th layer multi-scale transformation of the infrared image, $w_m$ is the fusion weight of $H_A^{j,l}(m)$, and $H_B^{j,l}(m)$ is the coefficient value of the m-th pixel in the high-frequency subgraph of the l-th directional subband of the j-th layer multi-scale transformation of the visible light image;

determining the fusion weight $w_m$ of $H_A^{j,l}(m)$;

determining the high-frequency coefficient of each pixel in the third high-frequency coefficient information according to the calculation model and the fusion weight $w_m$, and obtaining the third high-frequency coefficient information.
6. The method of claim 4, wherein determining the fusion weight of $H_A^{j,l}(m)$ comprises:

computing the fusion weight $w_m$ of $H_A^{j,l}(m)$ by

$$w_m = \frac{1}{|R_m|}\sum_{n \in R_m} w_{R_n};$$

wherein $R_m$ is the local region of the m-th pixel, $R_n$ is the local region of the n-th pixel, $n \in \{1,2,\dots,M\}$, and $w_{R_n}$ is the fusion weight of the local region $R_n$.
7. The infrared and visible light image fusion method of claim 6, wherein the local region fusion weight $w_{R_n}$ is calculated by:

respectively calculating the local area variances $(\sigma_{A,n}^{j,l})^2$ and $(\sigma_{B,n}^{j,l})^2$ of the n-th pixel in the infrared high-frequency subgraph corresponding to the first high-frequency coefficient information and in the visible light high-frequency subgraph corresponding to the second high-frequency coefficient information;

determining the local region fusion weight by

$$w_{R_n} = \begin{cases} 1, & \left(\sigma_{A,n}^{j,l}\right)^2 \geq \left(\sigma_{B,n}^{j,l}\right)^2, \\ 0, & \text{otherwise}. \end{cases}$$
8. An infrared and visible light image fusion system, comprising:
the acquisition module is used for acquiring an infrared image and a visible light image;
the multi-scale transformation module is used for respectively carrying out multi-scale transformation on the infrared image and the visible light image to obtain first low-frequency coefficient information, first high-frequency coefficient information, second low-frequency coefficient information and second high-frequency coefficient information;
the first determining module is used for determining third low-frequency coefficient information according to the first low-frequency coefficient information and the second low-frequency coefficient information based on a global contrast sequence maximum principle;
the second determining module is used for determining third high-frequency coefficient information according to the first high-frequency coefficient information and the second high-frequency coefficient information;
and the generating module is used for generating a fusion image according to the third low-frequency coefficient information and the third high-frequency coefficient information.
9. An infrared and visible image fusion system comprising a memory, a processor and a computer program stored in said memory and executable on said processor, characterized in that said processor implements an infrared and visible image fusion method according to any one of claims 1 to 7 when executing said computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the infrared and visible image fusion method according to any one of claims 1 to 7.
CN202110089292.0A 2021-01-22 2021-01-22 Infrared and visible light image fusion method and system Pending CN112651469A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110089292.0A CN112651469A (en) 2021-01-22 2021-01-22 Infrared and visible light image fusion method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110089292.0A CN112651469A (en) 2021-01-22 2021-01-22 Infrared and visible light image fusion method and system

Publications (1)

Publication Number Publication Date
CN112651469A true CN112651469A (en) 2021-04-13

Family

ID=75370659

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110089292.0A Pending CN112651469A (en) 2021-01-22 2021-01-22 Infrared and visible light image fusion method and system

Country Status (1)

Country Link
CN (1) CN112651469A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190318463A1 (en) * 2016-12-27 2019-10-17 Zhejiang Dahua Technology Co., Ltd. Systems and methods for fusing infrared image and visible light image
CN109308691A (en) * 2017-07-28 2019-02-05 南京理工大学 Infrared and visible light image fusion method based on image enhancement and NSCT
CN109242888A (en) * 2018-09-03 2019-01-18 中国科学院光电技术研究所 Infrared and visible light image fusion method combining image significance and non-subsampled contourlet transformation
CN111899209A (en) * 2020-08-11 2020-11-06 四川警察学院 Visible light infrared image fusion method based on convolution matching pursuit dictionary learning

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
SHEN YU; DANG JIANWU; WANG YANGPING: "Noisy image fusion based on NSCT and bilateral filters", Journal of Lanzhou Jiaotong University, no. 04, 15 August 2017 (2017-08-15) *
LUO PING; HUANG LIANGXUE; LI XIANFEI; MA QIANG: "Adaptive multi-decision image fusion method based on NSCT", Laser & Infrared, no. 03, 20 March 2016 (2016-03-20) *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113470055A (en) * 2021-07-16 2021-10-01 南京信息工程大学 Image fusion processing method based on FPGA acceleration
CN113628151A (en) * 2021-08-06 2021-11-09 苏州东方克洛托光电技术有限公司 Infrared and visible light image fusion method
CN113628151B (en) * 2021-08-06 2024-04-26 苏州东方克洛托光电技术有限公司 Infrared and visible light image fusion method
CN116503454A (en) * 2023-06-27 2023-07-28 季华实验室 Infrared and visible light image fusion method and device, electronic equipment and storage medium
CN116503454B (en) * 2023-06-27 2023-10-20 季华实验室 Infrared and visible light image fusion method and device, electronic equipment and storage medium
CN116580062A (en) * 2023-07-12 2023-08-11 南京诺源医疗器械有限公司 Data processing method of infrared laser diagnostic device suitable for infrared excitation light source
CN116580062B (en) * 2023-07-12 2024-04-12 南京诺源医疗器械有限公司 Data processing method of infrared laser diagnostic device suitable for infrared excitation light source
CN117218048A (en) * 2023-11-07 2023-12-12 天津市测绘院有限公司 Infrared and visible light image fusion method based on three-layer sparse smooth model
CN117218048B (en) * 2023-11-07 2024-03-08 天津市测绘院有限公司 Infrared and visible light image fusion method based on three-layer sparse smooth model

Similar Documents

Publication Publication Date Title
CN112651469A (en) Infrared and visible light image fusion method and system
CN105096280B (en) Handle the method and device of picture noise
CN111402146B (en) Image processing method and image processing apparatus
Ancuti et al. Single-scale fusion: An effective approach to merging images
CN108399611B (en) Multi-focus image fusion method based on gradient regularization
CN109993707B (en) Image denoising method and device
CN114041161A (en) Method and device for training neural network model for enhancing image details
CN103020933B (en) A kind of multisource image anastomosing method based on bionic visual mechanism
CN111951195A (en) Image enhancement method and device
CN111914997A (en) Method for training neural network, image processing method and device
CN114120176A (en) Behavior analysis method for fusion of far infrared and visible light video images
CN109658354A (en) A kind of image enchancing method and system
CN113362338B (en) Rail segmentation method, device, computer equipment and rail segmentation processing system
CN115311186B (en) Cross-scale attention confrontation fusion method and terminal for infrared and visible light images
CN112446835A (en) Image recovery method, image recovery network training method, device and storage medium
CN114187214A (en) Infrared and visible light image fusion system and method
Luo et al. Deep wavelet network with domain adaptation for single image demoireing
CN115131256A (en) Image processing model, and training method and device of image processing model
CN116757986A (en) Infrared and visible light image fusion method and device
CN109064402A (en) Based on the single image super resolution ratio reconstruction method for enhancing non local total variation model priori
CN116847209A (en) Log-Gabor and wavelet-based light field full-focusing image generation method and system
Guo et al. Multi-scale multi-attention network for moiré document image binarization
CN111353982B (en) Depth camera image sequence screening method and device
CN111462004B (en) Image enhancement method and device, computer equipment and storage medium
CN109614976A (en) A kind of heterologous image interfusion method based on Gabor characteristic

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination