CN114627035A - Multi-focus image fusion method, system, device and storage medium

Info

Publication number: CN114627035A
Application number: CN202210116063.8A
Authority: CN (China)
Other languages: Chinese (zh)
Inventors: 尹海涛, 周伟
Assignee (current and original): Nanjing University of Posts and Telecommunications
Prior art keywords: fusion, image, network model, focus, focus image
Priority/filing date: 2022-01-29
Publication date: 2022-06-14
Legal status: Pending


Classifications

    • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G06F 18/253 Fusion techniques of extracted features
    • G06N 3/045 Combinations of networks
    • G06T 2207/20221 Image fusion; Image merging


Abstract

The invention discloses a multi-focus image fusion method, system, device and storage medium, belonging to the technical field of multi-focus image fusion. The method comprises the following steps: acquiring the multi-focus images to be fused; and inputting the multi-focus images into a pre-trained fusion network model for fusion to obtain a fused image. The fusion network model is constructed from a dilated convolution network and an attention-based dense convolutional neural network. The method overcomes two shortcomings of the prior art, namely single-scale feature extraction from the source images and the failure to effectively highlight the importance of salient features; it retains rich detail information during multi-focus image fusion, and the resulting fused image has better visual quality.

Description

Multi-focus image fusion method, system, device and storage medium
Technical Field
The invention relates to a multi-focus image fusion method, system, device and storage medium, and belongs to the technical field of multi-focus image fusion.
Background
When collecting a digital image, the limited depth of field of the optical sensor's lens makes it difficult for the camera system to capture a full-depth-of-field image: scenery within the depth of field is sharp while scenery outside it is blurred. Such partially unclear regions hinder subsequent image understanding and application and may introduce errors. Multi-focus image fusion effectively solves this problem by complementarily combining images of the same target scene captured with different focus areas, so that the whole scene is imaged clearly and a single image contains richer information, meeting practical needs. Multi-focus image fusion currently plays a vital role in fields such as machine vision, remote sensing, and military and medical applications.
Existing multi-focus image fusion techniques fall mainly into transform-domain methods, spatial-domain methods, and deep learning methods. Transform-domain methods generally decompose the source images into transform coefficients, fuse the coefficients with corresponding fusion rules, and finally apply the inverse transform to the fused coefficients to obtain the fused image. Spatial-domain methods fuse pixels or regions of the source images directly to extract sharp pixels from the focused regions. However, both families require hand-crafted activity-level measurements and fusion rules for the salient information, which limits the generality of the fusion algorithms to a certain extent. In recent years, thanks to the strong feature extraction and data characterization capabilities of deep learning, deep-learning-based multi-focus image fusion has become popular. Most current deep-learning approaches use a convolutional neural network to detect the focused areas in the multi-focus source images and then fuse the focused areas from different source images to generate a full-scene sharp image. Compared with traditional techniques, deep-learning-based fusion improves fusion quality to some extent, but it still has two main limitations: 1. feature extraction from the source images is single-scale; 2. the importance of salient features is not effectively highlighted.
Disclosure of Invention
The invention aims to provide a multi-focus image fusion method, system, device and storage medium that overcome two shortcomings of the prior art, namely single-scale feature extraction from the source images and the failure to effectively highlight the importance of salient features, retain rich detail information during multi-focus image fusion, and produce fused images with better visual quality.
To achieve this objective, the invention adopts the following technical solutions:
in a first aspect, the present invention provides a multi-focus image fusion method, including:
acquiring a multi-focus image to be fused;
inputting the multi-focus images into a pre-trained fusion network model for fusion to obtain a fused image;
the fusion network model being constructed as follows: a fusion network model is built from a dilated convolution network and an attention-based dense convolutional neural network.
With reference to the first aspect, the method further includes the step of establishing a training data set, including:
and extracting images from the public data set MS-COCO, cutting the size of the images into uniform size to obtain label images, and forming a training data set by the label images.
With reference to the first aspect, the method further includes a step of preprocessing the training data set, including:
and performing Gaussian blur processing on different areas of the label images in the training data set.
With reference to the first aspect, the method further includes a step of setting the loss function of the fusion network model, including:
the following loss function is set:
L = L_mse + α·L_ssim + β·L_per

L_ssim = 1 − SSIM(O, T)

L_per = (1/(C_f·H_f·W_f)) · Σ_{i=1..C_f} Σ_{x=1..H_f} Σ_{y=1..W_f} (Φ_i^O(x, y) − Φ_i^T(x, y))²

where L is the total loss function, L_mse is the mean-square-error loss, L_ssim is the structural-similarity loss, L_per is the perceptual loss, α and β are balance parameters, O is the fused image, T is the label image, SSIM(O, T) denotes the structural similarity between O and T, Φ_i^O(x, y) denotes the pixel value at coordinates (x, y) in the i-th channel of the feature map extracted from the fused image by VGG16, Φ_i^T(x, y) denotes the corresponding pixel value for the label image, and C_f, H_f and W_f denote the number of channels, height and width of the feature map, respectively.
With reference to the first aspect, further, the fusion network model is trained as follows:
and training the fusion network model by using the constructed training data set in the Pythrch, wherein the size of the batch in the training process is set as 8.
With reference to the first aspect, the method further includes a step of optimizing parameters of the fusion network model, including:
and optimizing parameters of the fusion network model by adopting an Adam optimizer, wherein the initial learning rate of the Adam optimizer is set to be 0.001.
In a second aspect, the present invention further provides a multi-focus image fusion system, including:
an acquisition module, configured to acquire the multi-focus images to be fused;
a fusion module, configured to input the multi-focus images into a pre-trained fusion network model for fusion to obtain a fused image.
In a third aspect, the present invention further provides a multi-focus image fusion apparatus, which includes a processor and a storage medium;
the storage medium is used for storing instructions;
the processor is configured to operate according to the instructions to perform the steps of the method according to any one of the implementations of the first aspect.
In a fourth aspect, the invention further provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the method according to any one of the implementations of the first aspect.
Compared with the prior art, the invention has the following beneficial effects:
With the multi-focus image fusion method, system, device and storage medium provided by the invention, the multi-focus images are input into a pre-trained fusion network model for fusion to obtain the fused image, thereby realizing the fusion of multi-focus images. The dilated convolution network enlarges the receptive field by increasing the dilation rate, thereby extracting the multi-scale features of the source images (the multi-focus images to be fused) more comprehensively. The dense convolutional neural network effectively alleviates the vanishing-gradient problem of deep networks; moreover, to further highlight the importance of salient features, an attention mechanism is introduced into the dense convolutional neural network to select salient features adaptively, improving fusion performance. In summary, the scheme of the invention retains rich detail information when fusing multi-focus images, and the resulting fused images have better visual quality.
Drawings
Fig. 1 is a flowchart of a multi-focus image fusion method according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of the fusion network model provided in an embodiment of the present invention;
fig. 3 is a second flowchart of a multi-focus image fusion method according to an embodiment of the present invention.
Detailed Description
The present invention is further described below with reference to the accompanying drawings. The following examples are intended only to illustrate the technical solutions of the invention clearly and should not be taken as limiting its scope.
Example 1
As shown in fig. 1, a multi-focus image fusion method provided by an embodiment of the present invention includes:
and S1, acquiring a multi-focus image to be fused.
And acquiring a multi-focus image with part unclear for subsequent fusion.
And S2, inputting the multi-focus image into a pre-trained fusion network model for fusion to obtain a fusion image.
Constructing the training dataset: a labeled training dataset (a multi-focus image dataset) is constructed based on the public dataset MS-COCO.
In practical applications it is difficult to obtain multi-focus images paired with their all-in-focus counterparts, so the invention constructs a set of simulated multi-focus image data (i.e., the training dataset).
8000 high-definition natural images are selected from the public dataset MS-COCO and uniformly cropped to 128 × 128 to serve as label images.
Preprocessing the training dataset: Gaussian blur is applied to different regions of each label image; specifically, complementary regions of the label image are blurred.
To simulate the varying degrees of blur produced by different depths of field, the 8000 selected label images are evenly divided into 4 groups, which are blurred with Gaussian blur radii of 2, 4, 6 and 8, respectively.
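As an illustration, below is a minimal sketch of this pair construction in Python, assuming PIL; blurring complementary left/right halves, the file name, and the helper name are illustrative assumptions, since the patent does not specify the shape of the complementary regions.

from PIL import Image, ImageFilter

def make_multifocus_pair(label_img: Image.Image, radius: int):
    """Blur complementary regions of a sharp label image to simulate two
    source images focused on different parts of the scene."""
    blurred = label_img.filter(ImageFilter.GaussianBlur(radius))
    w, h = label_img.size
    mask = Image.new("L", (w, h), 0)
    mask.paste(255, (0, 0, w // 2, h))  # assumed focus map: left half of source A is sharp
    src_a = Image.composite(label_img, blurred, mask)  # left sharp, right blurred
    src_b = Image.composite(blurred, label_img, mask)  # complementary focus map
    return src_a, src_b

# Label images are 128 x 128 crops; the four groups use radii 2, 4, 6 and 8.
label = Image.open("coco_patch.png").resize((128, 128))  # hypothetical file name
src_a, src_b = make_multifocus_pair(label, radius=4)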
A fusion network model is constructed from the dilated convolution network, convolutional layers, and the attention-based dense convolutional neural network.
As shown in fig. 2, the fusion network model comprises three parts: feature extraction, feature fusion, and image reconstruction.
The feature extraction part comprises two network branches that share weights; each branch consists of one multi-branch parallel dilated convolution network, one 1 × 1 convolutional layer, and one dense convolutional neural network with an attention mechanism.
The multi-branch parallel dilated convolution network has 192 feature channels and is composed of three 3 × 3 dilated convolutions with dilation rates of 1, 2 and 3, respectively.
The dense convolutional neural network with the attention mechanism has 64 input channels and 256 output channels; it consists of one dense block and three Squeeze-and-Excitation (SE) blocks, where the dense block comprises 3 × 3 convolutional layers and the output of each layer is concatenated into the input of the next layer.
The 1 × 1 convolutional layer adjusts the number of feature channels.
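For concreteness, the following PyTorch sketch shows one feature-extraction branch under the configuration stated above: three parallel 3 × 3 dilated convolutions (dilation rates 1, 2, 3) concatenated to 192 channels, a 1 × 1 reduction to 64 channels, then a dense block with SE attention ending at 256 channels. The per-layer widths inside the dense block, the SE reduction ratio, and the 3-channel input are assumptions the patent does not fix.

import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation channel attention."""
    def __init__(self, channels: int, reduction: int = 16):  # reduction ratio assumed
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):
        w = self.fc(x).unsqueeze(-1).unsqueeze(-1)  # per-channel weights in (0, 1)
        return x * w

class ExtractionBranch(nn.Module):
    def __init__(self, in_channels: int = 3):  # RGB input assumed
        super().__init__()
        # Three parallel dilated convolutions, 64 channels each -> 192 after concatenation
        self.dilated = nn.ModuleList([
            nn.Conv2d(in_channels, 64, 3, padding=r, dilation=r) for r in (1, 2, 3)])
        self.reduce = nn.Conv2d(192, 64, 1)  # the 1 x 1 layer adjusting channel dimension
        # Dense block: each 3 x 3 layer sees the concatenation of all earlier outputs
        self.dense = nn.ModuleList([
            nn.Conv2d(64, 64, 3, padding=1),
            nn.Conv2d(128, 64, 3, padding=1),
            nn.Conv2d(192, 64, 3, padding=1)])
        self.se = nn.ModuleList([SEBlock(64) for _ in range(3)])

    def forward(self, x):
        feats = torch.cat([conv(x) for conv in self.dilated], dim=1)  # 192 channels
        cat = self.reduce(feats)                                      # 64 channels
        for conv, se in zip(self.dense, self.se):
            out = se(torch.relu(conv(cat)))
            cat = torch.cat([cat, out], dim=1)  # dense concatenation
        return cat                              # 64 + 3 * 64 = 256 channels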
The feature fusion part consists of a concatenation operation and a 1 × 1 convolutional layer and realizes the fusion of features.
The feature fusion part concatenates the multi-focus image features produced by the feature extraction part and applies a 1 × 1 convolution to obtain the fused features; the input and output channels of this 1 × 1 convolutional layer are 512 and 64, respectively.
The image reconstruction part generates the fused image from the fused features. It consists of four 3 × 3 convolutional layers whose numbers of feature channels are 64, 64, 64 and 3, respectively; every convolutional layer except the last uses ReLU as the activation function.
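Continuing the branch sketch above, the feature fusion and image reconstruction parts might look as follows; weight sharing between the two branches is realized by reusing a single branch instance, and the 64-64-64-3 reconstruction widths follow the description above.

class FusionNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.branch = ExtractionBranch()   # one instance -> shared weights
        self.fuse = nn.Conv2d(512, 64, 1)  # concatenation (2 x 256) followed by 1 x 1 conv
        self.reconstruct = nn.Sequential(  # four 3 x 3 layers: 64, 64, 64, 3 channels
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, 3, padding=1))  # last layer has no activation

    def forward(self, src_a, src_b):
        f = torch.cat([self.branch(src_a), self.branch(src_b)], dim=1)  # 512 channels
        return self.reconstruct(self.fuse(f))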
To make the reconstructed image more accurate, the loss function of the fusion network model is set as follows:
L = L_mse + α·L_ssim + β·L_per

L_ssim = 1 − SSIM(O, T)

L_per = (1/(C_f·H_f·W_f)) · Σ_{i=1..C_f} Σ_{x=1..H_f} Σ_{y=1..W_f} (Φ_i^O(x, y) − Φ_i^T(x, y))²

where L is the total loss function, L_mse is the mean-square-error loss, L_ssim is the structural-similarity loss, L_per is the perceptual loss, α and β are balance parameters, O is the fused image, T is the label image, SSIM(O, T) denotes the structural similarity between O and T, Φ_i^O(x, y) denotes the pixel value at coordinates (x, y) in the i-th channel of the feature map extracted from the fused image by VGG16, Φ_i^T(x, y) denotes the corresponding pixel value for the label image, and C_f, H_f and W_f denote the number of channels, height and width of the feature map, respectively.
In the present embodiment, α and β are both 0.5.
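A sketch of this composite loss with α = β = 0.5 follows, assuming the pytorch_msssim package for the SSIM term and torchvision's pretrained VGG16 for the perceptual term; the choice of the relu3_3 feature layer (and using unnormalized inputs) is an assumption, since the patent does not name the layer.

import torch.nn.functional as F
from torchvision.models import vgg16, VGG16_Weights
from pytorch_msssim import ssim

# Frozen VGG16 features up to relu3_3 (layer index 15); layer choice assumed
vgg_feats = vgg16(weights=VGG16_Weights.IMAGENET1K_V1).features[:16].eval()
for p in vgg_feats.parameters():
    p.requires_grad_(False)

def fusion_loss(O, T, alpha=0.5, beta=0.5):
    """L = L_mse + alpha * L_ssim + beta * L_per for fused image O and label T,
    both (N, 3, H, W) tensors with values in [0, 1]."""
    l_mse = F.mse_loss(O, T)
    l_ssim = 1.0 - ssim(O, T, data_range=1.0)       # L_ssim = 1 - SSIM(O, T)
    l_per = F.mse_loss(vgg_feats(O), vgg_feats(T))  # mean squared VGG feature difference
    return l_mse + alpha * l_ssim + beta * l_per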
The fusion network model is trained with the constructed training dataset. During training, the hyper-parameters of the fusion network model include the batch size, the initial learning rate, the number of iterations (epochs), and the learning-rate decay strategy.
In this embodiment, training of the fusion network model is implemented in PyTorch (a Python-based scientific computing library); the program runs on an RTX 3080 (10 GB) GPU with an Intel Core i7-10700K @ 3.80 GHz.
The batch size during training is set to 8, an Adam optimizer is used to optimize the parameters with its initial learning rate set to 0.001, the learning rate is adjusted by cosine annealing decay, and the network is trained for 500 epochs in total.
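A sketch of this training configuration, reusing FusionNet and fusion_loss from the sketches above and assuming a train_set Dataset that yields (source A, source B, label) tensors:

import torch
from torch.utils.data import DataLoader

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = FusionNet().to(device)
vgg_feats.to(device)  # perceptual-loss backbone from the loss sketch

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # initial learning rate 0.001
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=500)
loader = DataLoader(train_set, batch_size=8, shuffle=True)  # batch size 8; train_set assumed

for epoch in range(500):  # 500 epochs in total
    for src_a, src_b, label in loader:
        src_a, src_b = src_a.to(device), src_b.to(device)
        fused = model(src_a, src_b)
        loss = fusion_loss(fused, label.to(device))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    scheduler.step()  # cosine annealing decay of the learning rate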
The multi-focus images are then input into the pre-trained fusion network model for fusion to obtain the fused image.
To demonstrate the effectiveness of the proposed fusion technique, 8 mainstream multi-focus image fusion methods are compared with the invention on the Lytro multi-focus color image dataset: the NSCT, SR, IMF, MWGF, CNN, DeepFuse, DenseFuse (including DenseFuse-ADD and DenseFuse-L1) and IFCNN-MAX methods. All comparison methods are tested with the default parameters provided in the corresponding literature.
In this embodiment, four objective metrics are adopted for quantitative evaluation: average gradient (AG), spatial frequency (SF), visual information fidelity (VIF), and edge retention (Q^(AB/F)). Table 1 gives the average metric values on the Lytro dataset. As can be seen from Table 1, the invention obtains the best results on the AG, SF and VIF metrics and is second only to the CNN method on the Q^(AB/F) metric. These results show that the method is a feasible and efficient multi-focus image fusion method.
TABLE 1. Average metric values on the Lytro dataset (the table is reproduced as an image in the original document)
Example 2
The embodiment of the invention provides a multi-focus image fusion system, which comprises:
an acquisition module, configured to acquire the multi-focus images to be fused;
a fusion module, configured to input the multi-focus images into a pre-trained fusion network model for fusion to obtain a fused image.
The fusion network model is constructed as follows: a fusion network model is built from the dilated convolution network and the attention-based dense convolutional neural network.
Example 3
The embodiment of the invention provides a multi-focus image fusion device, which comprises a processor and a storage medium;
the storage medium is used for storing instructions;
the processor is configured to operate according to the instructions to perform the steps of the following method:
acquiring the multi-focus images to be fused;
inputting the multi-focus images into a pre-trained fusion network model for fusion to obtain a fused image;
the fusion network model is constructed as follows: a fusion network model is built from the dilated convolution network and the attention-based dense convolutional neural network.
Example 4
An embodiment of the present invention provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program implements the steps of the following method:
acquiring the multi-focus images to be fused;
inputting the multi-focus images into a pre-trained fusion network model for fusion to obtain a fused image;
the fusion network model is constructed as follows: a fusion network model is built from the dilated convolution network and the attention-based dense convolutional neural network.
Example 5
As shown in fig. 3, a multi-focus image fusion method provided by an embodiment of the present invention includes:
and S1, constructing a training data set, and preprocessing the training data set.
And S2, constructing a fusion network model by using an attention mechanism and a dense convolution neural network.
And S3, setting a loss function of the fusion network model, and optimizing network parameters.
And S4, training the fusion network model by using the training data set to obtain the trained fusion network model.
And S5, acquiring a multi-focus image to be fused, and inputting the multi-focus image into the trained fusion network model to obtain a fusion image.
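As a usage illustration of step S5, the following sketch assumes the trained weights were saved to a checkpoint file and reuses the FusionNet sketch above; all file names are hypothetical.

import torch
from PIL import Image
from torchvision.transforms.functional import to_tensor, to_pil_image

model = FusionNet()
model.load_state_dict(torch.load("fusion_net.pth", map_location="cpu"))  # hypothetical checkpoint
model.eval()

src_a = to_tensor(Image.open("near_focus.png").convert("RGB")).unsqueeze(0)  # hypothetical inputs
src_b = to_tensor(Image.open("far_focus.png").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    fused = model(src_a, src_b).clamp(0.0, 1.0).squeeze(0)
to_pil_image(fused).save("fused.png")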
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description covers only preferred embodiments of the invention. It should be noted that those skilled in the art can make several modifications and variations without departing from the technical principles of the invention, and such modifications and variations also fall within the protection scope of the invention.

Claims (9)

1. A multi-focus image fusion method, comprising:
acquiring a multi-focus image to be fused;
inputting the multi-focus images into a pre-trained fusion network model for fusion to obtain a fused image;
wherein the fusion network model is constructed as follows: a fusion network model is built from the dilated convolution network and the attention-based dense convolutional neural network.
2. The method of claim 1, further comprising the step of creating a training data set, comprising:
and extracting images from the public data set MS-COCO, cutting the images into uniform sizes to obtain label images, and forming a training data set by the label images.
3. The method of claim 2, further comprising the step of preprocessing the training data set, comprising:
and performing Gaussian blur processing on different areas of the label images in the training data set.
4. The method of claim 2, further comprising a step of setting a loss function of the fusion network model, comprising:
the following loss function is set:
L = L_mse + α·L_ssim + β·L_per

L_ssim = 1 − SSIM(O, T)

L_per = (1/(C_f·H_f·W_f)) · Σ_{i=1..C_f} Σ_{x=1..H_f} Σ_{y=1..W_f} (Φ_i^O(x, y) − Φ_i^T(x, y))²

where L is the total loss function, L_mse is the mean-square-error loss, L_ssim is the structural-similarity loss, L_per is the perceptual loss, α and β are balance parameters, O is the fused image, T is the label image, SSIM(O, T) denotes the structural similarity between O and T, Φ_i^O(x, y) denotes the pixel value at coordinates (x, y) in the i-th channel of the feature map extracted from the fused image by VGG16, Φ_i^T(x, y) denotes the corresponding pixel value for the label image, and C_f, H_f and W_f denote the number of channels, height and width of the feature map, respectively.
5. The method of claim 2, wherein the fusion network model is trained by:
and training the fusion network model by using the constructed training data set in the Pythrch, wherein the size of the batch in the training process is set as 8.
6. The method of claim 1, further comprising the step of optimizing parameters of the fusion network model, comprising:
and optimizing parameters of the fusion network model by adopting an Adam optimizer, wherein the initial learning rate of the Adam optimizer is set to be 0.001.
7. A multi-focus image fusion system, comprising:
an acquisition module, configured to acquire the multi-focus images to be fused;
a fusion module, configured to input the multi-focus images into a pre-trained fusion network model for fusion to obtain a fused image.
8. A multi-focus image fusion device is characterized by comprising a processor and a storage medium;
the storage medium is used for storing instructions;
the processor is configured to operate in accordance with the instructions to perform the steps of the method according to any one of claims 1 to 6.
9. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 6.
Application CN202210116063.8A, priority date 2022-01-29, filing date 2022-01-29: Multi-focus image fusion method, system, device and storage medium. Status: Pending. Publication: CN114627035A (en).

Priority Application (1)

CN202210116063.8A, priority date 2022-01-29, filing date 2022-01-29: Multi-focus image fusion method, system, device and storage medium

Publication (1)

CN114627035A, published 2022-06-14

Family

ID: 81898423
Country: CN


Cited By (6)

* Cited by examiner, † Cited by third party

  • CN116597268A * (priority 2023-07-17, published 2023-08-15), 中国海洋大学: Efficient multi-focus image fusion method and model building method thereof
  • CN116597268B * (priority 2023-07-17, granted 2023-09-22), 中国海洋大学: Efficient multi-focus image fusion method and model building method thereof
  • CN117073848A * (priority 2023-10-13, published 2023-11-17), 中国移动紫金(江苏)创新研究院有限公司: Temperature measurement method, device, equipment and storage medium
  • CN117372274A * (priority 2023-10-31, published 2024-01-09), 珠海横琴圣澳云智科技有限公司: Scanned image refocusing method, apparatus, electronic device and storage medium
  • CN118195926A * (priority 2024-05-17, published 2024-06-14), 昆明理工大学: Registration-free multi-focus image fusion method based on spatial position offset sensing
  • CN118195926B * (priority 2024-05-17, granted 2024-07-12), 昆明理工大学: Registration-free multi-focus image fusion method based on spatial position offset sensing


Legal Events

  • PB01: Publication
  • SE01: Entry into force of request for substantive examination