CN113837976B - Multi-focus image fusion method based on joint multi-domain - Google Patents

Multi-focus image fusion method based on joint multi-domain

Info

Publication number
CN113837976B
CN113837976B (grant) · Application CN202111092155.9A
Authority
CN
China
Prior art keywords
image
focus
dctconv
convolution
domain
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111092155.9A
Other languages
Chinese (zh)
Other versions
CN113837976A (en)
Inventor
聂茜茜 (Nie Xixi)
高新波 (Gao Xinbo)
肖斌 (Xiao Bin)
王涌超 (Wang Yongchao)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University of Posts and Telecommunications
Original Assignee
Chongqing University of Posts and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University of Posts and Telecommunications
Priority to CN202111092155.9A
Publication of CN113837976A
Application granted
Publication of CN113837976B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T5/70 Denoising; Smoothing
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a multi-focus image fusion method based on joint multi-domain processing, relating to the fields of digital image processing, pattern recognition, deep learning and multi-imaging applications. The method comprises the following steps: 1) converting a multi-focus source image pair into grayscale images and obtaining high-dimensional nonlinear feature maps through the DCTConv and LBPConv feature extraction modules; 2) feeding the high-dimensional nonlinear feature maps into a fully connected (FC) layer implemented as a 1×1 convolution to obtain a focus measure map for each image, and comparing focus values with a softmax activation function to generate an initial binary decision map; 3) post-processing the initial decision map with a conditional random field and morphological operations to reduce noise and smooth the result, yielding the final decision map; 4) fusing the source image pair according to the final decision map to generate the multi-focus fused image. The invention produces images with higher edge contrast and introduces fewer artifacts in the motion regions of the fused image.

Description

Multi-focus image fusion method based on joint multi-domain
Technical Field
The invention relates to the application fields of digital image processing, pattern recognition, deep learning and multi-imaging, and in particular to a multi-focus image fusion method based on the combination of the spatial domain, the transform domain and deep learning.
Background
With the development of image processing technology, the continuous pursuit of image quality has driven research on multi-focus image fusion. The multi-focus fusion technique combines a series of partially focused images into one image in which all objects are clearly imaged. A multi-focus fused image describes its targets more richly and contains more accurate information than any single source image, and therefore better satisfies both human and machine perception. A good multi-focus image fusion method improves the utilization of image information resources. At present, multi-focus image fusion is widely applied in medical imaging, microscopic imaging, image enhancement, target identification, image reconstruction, computed radiography and other fields, and has good application prospects. With the New Generation Artificial Intelligence Development Plan issued by the State Council, great importance has been attached to the deep application of artificial intelligence in the field of public safety. In 2019, national public security organs deployed the "Yun Jian" (Cloud Sword) operation against fraud and fugitive-related crimes, using Internet cloud services and cloud platforms to capture suspects and create a good social security environment. Artificial intelligence promotes the construction of safe cities: city-level video surveillance systems collect and analyze video image data from different types of surveillance cameras at different city locations, realizing the identification and tracking of facial identities (criminal suspects, missing persons, and so on). Multi-focus image fusion can fuse multiple target images into one all-in-focus image, which aids image analysis, facilitates further analysis of facial images, and supports case investigation.
A modern multi-focus image fusion taxonomy generally includes four main branches: spatial-domain methods, transform-domain methods, methods combining the transform and spatial domains (joint methods), and deep-learning methods. Spatial-domain methods act directly on the source images and complete the fusion task at the pixel, block or region level by weighted averaging of the source images. Unlike spatial-domain methods, transform-domain methods complete the fusion task in another fusion space through a transform and its inverse: the source images are typically transformed into another feature domain by an image decomposition method, the coefficients are fused according to a pre-designed fusion criterion, and the fused coefficients are then inverse-transformed. Transform-domain methods can effectively avoid discontinuity or blocking effects. While the earlier spatial-domain and transform-domain approaches are simple and efficient, these hand-designed methods rely on hand-crafted features, which often limits the representational power over the source images; from a practical standpoint, it is almost impossible to hand-design an ideal scheme that takes all the necessary factors into account. In recent years, deep learning research has advanced rapidly; owing to its strong image representation capability, deep-learning-based methods generally achieve better image quality and have driven substantial progress in image fusion. However, existing deep-learning-based methods share a common disadvantage: their model parameters are too numerous, so the models are time-consuming and fusion efficiency is low. Based on the above discussion, each type of multi-focus image fusion method has its own advantages and disadvantages.
Disclosure of Invention
The present invention is directed to solving the above problems of the prior art by providing a lightweight network model and a joint multi-domain multi-focus image fusion method with higher fusion quality.
The technical scheme adopted by the invention for achieving the purpose is as follows: a multi-focus image fusion method based on joint multi-domain comprises the following steps:
1) Feature extraction: read a plurality of multi-focus source image pairs, convert them into grayscale images, and extract features with the two convolution layers DCTConv and LBPConv to obtain high-dimensional nonlinear feature maps.
2) Initial decision map generation: feed the high-dimensional nonlinear feature maps into a fully connected (FC) layer implemented as a 1×1 convolution to generate a focus measure map for each image, and compare focus values with a softmax activation function to generate an initial binary decision map.
3) Final decision map generation: post-process the binary decision map with a conditional random field and morphological operations to reduce noise and smooth the result, obtaining the final decision map.
4) Image fusion: fuse the source image pair according to the final decision map to generate the multi-focus fused image. The advantages and beneficial effects of the invention are as follows:
Aiming at the defects of existing multi-focus image fusion methods, the invention provides a multi-focus image fusion method based on joint multi-domain processing, which directly generates the output through a single forward propagation. In the transform domain, DCTConv (Discrete Cosine Transform-based Convolution) extracts the high- and low-frequency information of the image, effectively avoiding discontinuity or blocking effects. In the spatial domain, LBPConv (Local Binary Patterns-based Convolution) extracts edge and texture information, and finally the FC layer automatically selects the focus points. The invention proposes, for the first time, a joint multi-domain multi-focus image fusion method combining the transform domain, the spatial domain and deep learning, in which the domains mutually suppress each other's shortcomings and highlight their respective advantages. The results show that the invention generates images with higher edge contrast and introduces fewer artifacts in the motion regions of the fused image.
In practical application, in step 1) the DCTConv introduces anchored convolution weights on the basis of the DCT kernel coefficients to reduce the number of network parameters to be learned, thereby greatly reducing the time complexity of the model. The DCTConv feature extraction module effectively extracts the high- and low-frequency information of the image, making the network interpretable and improving the characterization capability of the network model. LBPConv is embedded into the lightweight network model; it has edge-preserving and structure-preserving properties, extracts spatial information, and enlarges the receptive field, which also makes the network interpretable and further improves the quality of the fused image. The BiReLU in the LBPConv module adopts a simple threshold, so it is computationally efficient, makes the network converge faster, and does not saturate, i.e., BiReLU resists the vanishing-gradient problem. In step 2), the network model introduces a 1×1 convolution layer in place of a conventional fully connected layer, so images of any size can be used as input. By controlling the 1×1 convolution kernels and the number of channels, the number of parameters in the network can be limited to a certain range, further reducing the parameter count and improving network performance. In step 3), applying the conditional random field and morphological operations to the initial decision map further improves the fusion effect. Compared with traditional focus evaluation functions, the joint multi-domain multi-focus fusion method selects focused pixels more accurately, distinguishes focused regions better, and achieves a better fusion effect when applied to multi-focus image fusion.
Drawings
FIG. 1 is a flow chart of the fusion scheme of a preferred embodiment of the present invention;
FIG. 2 is a first set of source images;
FIG. 3 is a fused image of a first set of source images;
FIG. 4 is a second set of source images;
FIG. 5 is a fused image of a second set of source images;
FIG. 6 is the LBPConv filter.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and specifically below with reference to the drawings. The described embodiments are only some of the embodiments of the present invention.
The technical scheme for solving the technical problems is as follows:
FIG. 1 illustrates the technical scheme of the invention in detail. The multi-focus image fusion method based on joint multi-domain comprises the following steps:
Step 1, feature extraction: convert the multi-focus source image pair into grayscale images and obtain high-dimensional nonlinear feature maps through the DCTConv and LBPConv feature extraction modules;
Step 2, initial decision map generation: feed the high-dimensional nonlinear feature maps into the FC layer implemented as a 1×1 convolution to generate a focus measure map for each image, and compare focus values with a softmax activation function to generate an initial binary decision map;
Step 3, final decision map generation: post-process the binary decision map with a conditional random field and morphological operations to reduce noise and smooth the result, obtaining the final decision map;
Step 4, image fusion: fuse the source image pair according to the final decision map to generate the multi-focus fused image.
In the first step, obtaining the high-dimensional nonlinear feature maps with the DCTConv feature extraction module comprises the following steps: the DCTConv module realizes the DCT transform through convolution and extracts transform-domain image features. Anchored (fixed) convolution weights are introduced on the basis of the DCT kernel coefficients to reduce the number of network parameters to be learned. For a given n×n DCT kernel, the weight parameters of its DCTConv filter can be defined as

W_n^p = (K_i)^T K_j

wherein (K_i)^T and K_j represent the i-th row and j-th column coefficients of the DCT kernel function, respectively, p represents the order of the DCT coefficients, and n represents the n-th convolution kernel of DCTConv, calculated as n = (1+i+j)(i+j)/2 + i + 1. Thus W_n^p represents the weight of the n-th DCTConv kernel obtained from the coefficients of the p-th order DCT kernel function.
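The weight construction above can be made concrete in code. The following is a minimal sketch, not the patent's reference implementation: it builds anchored DCT convolution kernels as outer products of 1-D DCT-II basis vectors and orders them by the stated index n = (1+i+j)(i+j)/2 + i + 1; the kernel size m = 4 and the use of PyTorch are assumptions.

```python
import numpy as np
import torch
import torch.nn as nn

def dct_basis(m):
    """1-D DCT-II basis matrix of size m x m; row i holds the i-th basis vector K_i."""
    K = np.zeros((m, m))
    for i in range(m):
        scale = np.sqrt(1.0 / m) if i == 0 else np.sqrt(2.0 / m)
        for t in range(m):
            K[i, t] = scale * np.cos((2 * t + 1) * i * np.pi / (2 * m))
    return K

class DCTConv(nn.Module):
    """Anchored (non-learned) convolution whose kernels are the 2-D DCT bases
    (K_i)^T K_j, arranged by the patent's index n = (1+i+j)(i+j)/2 + i + 1."""
    def __init__(self, m=4):
        super().__init__()
        K = dct_basis(m)
        kernels = []
        for i in range(m):
            for j in range(m):
                n = (1 + i + j) * (i + j) // 2 + i + 1  # diagonal index from the text
                kernels.append((n, np.outer(K[i], K[j])))
        kernels.sort(key=lambda t: t[0])                # order kernels by n
        w = torch.tensor(np.stack([k for _, k in kernels]),
                         dtype=torch.float32).unsqueeze(1)  # (m*m, 1, m, m)
        self.conv = nn.Conv2d(1, m * m, kernel_size=m, padding='same', bias=False)
        self.conv.weight = nn.Parameter(w, requires_grad=False)  # anchored weights

    def forward(self, x):  # x: (B, 1, H, W) grayscale image
        return self.conv(x)
```

Because the weights are anchored (requires_grad=False), this layer contributes no learnable parameters, which is consistent with the claimed reduction in model complexity.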
The features extracted by DCTConv are then normalized; in the adaptive normalization of the DCTConv feature extraction process, I(x, y) represents the padded source grayscale image, m is consistent with the size of the DCT kernel, I_p(x, y) represents the DCTConv features extracted in the transform domain, and Î_p(x, y) denotes the feature map after normalization of the DCTConv features.
Obtaining the high-dimensional nonlinear feature maps with the LBPConv feature extraction module comprises the following steps: the LBPConv module realizes the LBP through convolution and extracts spatial-domain image features. The LBPConv layer consists of a set of convolution kernels in 8 directions, whose essence is to compare the intensities of the adjacent pixels in the 8 directions in turn with the intensity of the center pixel within the block: a positive value is assigned where the convolution response is high, and a negative value otherwise. A nonlinear operation is then performed by the BiReLU activation function, and the image intensities yield an 8-channel binary feature map as the input of the next layer. BiReLU mainly converts the important information in all directions into an 8-bit binary pattern. Thus, by analogy with the traditional ReLU activation function, BiReLU can be restated as

BiReLU(X) = 1 if X > 0, and 0 otherwise,

where X is the value of the feature map.
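A small sketch of the LBPConv kernels and BiReLU as described above; the exact BiReLU formula is an image in the original, so the hard threshold at zero (positive responses map to 1, others to 0) is an assumption consistent with the stated 8-channel binary output.

```python
import torch
import torch.nn as nn

class LBPConv(nn.Module):
    """Fixed 3x3 difference kernels for the 8 neighbours of the centre pixel:
    each kernel holds +1 at one neighbour and -1 at the centre, so the response
    is positive exactly where that neighbour is brighter than the centre."""
    def __init__(self):
        super().__init__()
        offsets = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
        w = torch.zeros(8, 1, 3, 3)
        for c, (r, s) in enumerate(offsets):
            w[c, 0, r, s] = 1.0    # neighbour position in direction c
            w[c, 0, 1, 1] = -1.0   # centre pixel
        self.conv = nn.Conv2d(1, 8, kernel_size=3, padding=1, bias=False)  # biases all 0
        self.conv.weight = nn.Parameter(w, requires_grad=False)

    def forward(self, x):  # x: (B, 1, H, W)
        return self.conv(x)

def birelu(x):
    """Assumed BiReLU: hard threshold at zero, yielding the 8-channel binary map."""
    return (x > 0).float()
```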
Finally, the attention mechanism LBPCA (LBP Channel Attention) is introduced to treat different channels differently, which improves the characterization capability of the network. LBPCA designs a predefined base-2 8-channel anchor, whose response to an input can be denoted A_c = [2^0, 2^1, ..., 2^7], and the 8-bit binary feature map is obtained after BiReLU activation, wherein σ_BiReLU represents the BiReLU activation function, i represents the 8 directions of the neighborhood, c represents the channel index, W_i^c represents the weight applied to the feature value of the c-th channel in direction i, and H and W represent the height and width of the feature map, respectively; Y is the generated binary feature map. Here c runs over the 8 neighbors of the center pixel u(x, y), and the biases in LBPConv are all 0, so the fused image generated by LBPConv retains rich boundary information.
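The predefined-anchor part of LBPCA can be sketched as follows: weighting the 8 binary channels by the anchors A_c = [2^0, ..., 2^7] and summing reproduces an 8-bit LBP code per pixel. Any learned attention components beyond the fixed anchors are not reproduced here.

```python
import torch

def lbpca(binary_maps):
    """Weight the 8 binary channels by the predefined anchors A_c = [2^0, ..., 2^7]
    and sum over channels, giving an 8-bit LBP code per pixel.
    binary_maps: (B, 8, H, W) tensor of 0/1 values produced by BiReLU."""
    anchors = torch.tensor([2.0 ** c for c in range(8)]).view(1, 8, 1, 1)
    return (binary_maps * anchors).sum(dim=1, keepdim=True)  # (B, 1, H, W)
```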
Finally, the initial decision map is generated as follows: M_A(x, y) and M_B(x, y) denote the focus measure values generated from the multi-focus source image pair I_A and I_B at pixel (x, y); the larger the value, the more focused the pixel of the corresponding source image at that position, and vice versa. The generated initial decision map T(x, y) takes the value 1 where M_A(x, y) > M_B(x, y) and 0 otherwise.
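A sketch of the focus comparison of step 2, with the FC layer realized as a 1×1 convolution; the input channel count of 16 is an assumed stand-in for the concatenated DCTConv/LBPConv feature channels, which are not specified here.

```python
import torch
import torch.nn as nn

# FC layer realized as a 1x1 convolution; 16 input channels is an assumption.
fc = nn.Conv2d(16, 1, kernel_size=1)

def initial_decision(feat_a, feat_b):
    """Compare per-pixel focus measures M_A, M_B via softmax and threshold
    to the initial binary decision map T (1 where I_A is more focused)."""
    m_a, m_b = fc(feat_a), fc(feat_b)                      # focus measure maps
    probs = torch.softmax(torch.cat([m_a, m_b], dim=1), dim=1)
    return (probs[:, :1] > probs[:, 1:]).float()           # T(x, y)
```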
In the third step, the decision map is further optimized by post-processing with a conditional random field and a morphological method to obtain the final decision map.
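The CRF refinement is not specified in detail in this text, so the following post-processing sketch stands in with morphological opening and closing only (via OpenCV), covering the morphological half of the described step; the structuring-element size k is an assumed value.

```python
import cv2
import numpy as np

def postprocess(T, k=7):
    """Remove isolated misclassified pixels and smooth region boundaries with
    morphological opening followed by closing on the binary decision map T."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (k, k))
    D = cv2.morphologyEx(T.astype(np.uint8), cv2.MORPH_OPEN, kernel)
    D = cv2.morphologyEx(D, cv2.MORPH_CLOSE, kernel)
    return D.astype(np.float32)
```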
In the fourth step, the multi-focus fused image is generated according to the final decision map. The fusion scheme is as follows:

F(x, y) = ~D(x, y) ⊙ I_B(x, y) + D(x, y) ⊙ I_A(x, y)

wherein ⊙ represents the element-wise (Hadamard) product between the decision map and the source image, D(x, y) represents the final decision map, I_A(x, y) and I_B(x, y) represent the source multi-focus image pair, F(x, y) represents the fused image, and ~ represents the inverse of the binary decision map.
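The fusion rule maps directly to array code. A minimal NumPy sketch of F = D ⊙ I_A + (1 - D) ⊙ I_B:

```python
import numpy as np

def fuse(I_A, I_B, D):
    """F = D * I_A + (1 - D) * I_B, element-wise; D is the final binary
    decision map, broadcast over the colour axis when the sources are RGB."""
    if I_A.ndim == 3 and D.ndim == 2:
        D = D[..., None]
    return D * I_A + (1.0 - D) * I_B
```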
In order to verify the effect of the present invention, the following experiments were performed:
this laboratory, implemented in the pytorch platform, runs on an Intel (R) Core (TM) i9-10900K CPU,32.0GB RAM computer.
The experimental method is as follows: images from the Kodak image database are used as experimental data, and the two source images to be fused are images of the same scene focused on the foreground and on the background, respectively. The proposed joint multi-domain multi-focus image fusion method fuses the two multi-focus source images to obtain a fully focused image of the target objects.
FIG. 2 is a first set of source images (left view: left foreground clock in focus, right background clock out of focus; right view: left foreground clock out of focus, right background clock in focus);
FIG. 3 is a fused image of a first set of source images;
FIG. 4 is a second set of source images (left: lower foreground bottle focused, upper background axis defocused; right: lower foreground bottle defocused, upper background axis focused);
FIG. 5 is a fused image of a second set of source images;
FIG. 6 shows the LBPConv filters; each of the eight direction positions is set to 1, so that the intensities of the adjacent pixels in the 8 directions are compared in turn with the intensity of the center pixel within the block: where the convolution response is larger the value is positive, otherwise it is negative.
The above examples should be understood as illustrative only and not limiting the scope of the invention. Various changes and modifications to the present invention may be made by one skilled in the art after reading the teachings herein, and such equivalent changes and modifications are intended to fall within the scope of the invention as defined in the appended claims.

Claims (5)

1. A multi-focus image fusion method based on joint multi-domain, characterized by comprising the following steps:
1) reading a plurality of multi-focus source image pairs, converting them into grayscale images, and extracting features with the two convolution layers DCTConv and LBPConv to obtain high-dimensional nonlinear feature maps, wherein the DCTConv module realizes the DCT (discrete cosine transform) through convolution and extracts transform-domain image features, and the LBPConv module realizes the LBP transform through convolution and extracts spatial-domain image features;
the DCTConv module realizes the DCT transform through convolution and introduces fixed convolution weights on the basis of the DCT kernel coefficients: for a given n×n DCT kernel, the weight parameters of DCTConv are defined as

W_n^p = (K_i)^T K_j

wherein (K_i)^T and K_j represent the i-th row and j-th column coefficients of the DCT kernel function, respectively, p represents the order of the DCT coefficients, and n represents the n-th convolution kernel of DCTConv, calculated as n = (1+i+j)(i+j)/2 + i + 1; W_n^p represents the weight of the n-th DCTConv kernel derived from the coefficients of the p-th order DCT kernel function;
the extracted DCTConv features are normalized by an adaptive DCTConv feature normalization process, wherein I(x, y) represents the padded source grayscale image, m is consistent with the size of the DCT kernel coefficients, I_p(x, y) represents the DCTConv features extracted in the transform domain, and Î_p(x, y) represents the feature map after normalization of the DCTConv features;
the LBPConv module realizes the LBP transform through convolution, wherein the LBPConv module consists of a group of convolution kernels in 8 directions which compare the intensities of the adjacent pixels in the 8 directions in turn with the intensity of the central pixel in the block, a positive value being assigned where the convolution response is high and a negative value otherwise; a nonlinear operation is then performed by the BiReLU activation function, and the image intensities generate an 8-channel binary feature map as the input of the next layer;
2) concatenating the high-dimensional nonlinear feature maps and feeding them into a fully connected layer implemented as a 1×1 convolution to generate a focus measure map for each image, then comparing focus values with a softmax activation function to generate an initial binary decision map;
3) post-processing the initial binary decision map with a conditional random field and morphological operations to obtain the final decision map;
4) fusing the source image pair according to the final decision map to generate the multi-focus fused image.
2. The multi-focus image fusion method based on joint multi-domain according to claim 1, wherein the BiReLU activation function is expressed as

BiReLU(X) = 1 if X > 0, and 0 otherwise,

where X is the value of the feature map.
3. The multi-focus image fusion method based on joint multi-domain according to claim 2, further comprising introducing the attention mechanism LBPCA to treat different channels differently; LBPCA designs a predefined base-2 8-channel anchor weight whose response to an input is denoted A_c = [2^0, 2^1, ..., 2^7], and the 8-bit binary feature map is obtained after BiReLU activation, wherein σ_BiReLU represents the BiReLU activation function, i represents the 8 directions of the neighborhood, c represents the channel index, W_i^c represents the weight of the c-th channel in direction i, H and W respectively represent the height and width of the feature map, and Y is the generated binary feature map.
4. The multi-focus image fusion method based on joint multi-domain according to claim 1, wherein step 2) comprises the steps of: M_A(x, y) and M_B(x, y) respectively denote the focus measure values generated from the multi-focus source image pair I_A and I_B at pixel (x, y); the larger the value, the more focused the corresponding source image is at that position, and vice versa; the generated initial decision map T(x, y) takes the value 1 where M_A(x, y) > M_B(x, y) and 0 otherwise.
5. The multi-focus image fusion method based on joint multi-domain according to claim 1, wherein the fusion scheme for generating the multi-focus fused image from the final decision map in step 4) is:

F(x, y) = ~D(x, y) ⊙ I_B(x, y) + D(x, y) ⊙ I_A(x, y)

wherein ⊙ represents the element-wise product between the decision map and the source image, D(x, y) represents the final decision map, I_A(x, y) and I_B(x, y) represent the source multi-focus image pair, F(x, y) represents the fused image, and ~ represents the inverse of the binary decision map.
CN202111092155.9A 2021-09-17 2021-09-17 Multi-focus image fusion method based on joint multi-domain Active CN113837976B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111092155.9A CN113837976B (en) 2021-09-17 2021-09-17 Multi-focus image fusion method based on joint multi-domain

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111092155.9A CN113837976B (en) 2021-09-17 2021-09-17 Multi-focus image fusion method based on joint multi-domain

Publications (2)

Publication Number Publication Date
CN113837976A (en) 2021-12-24
CN113837976B (en) 2024-03-19

Family

ID=78959713

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111092155.9A Active CN113837976B (en) 2021-09-17 2021-09-17 Multi-focus image fusion method based on joint multi-domain

Country Status (1)

Country Link
CN (1) CN113837976B (en)

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9805255B2 (en) * 2016-01-29 2017-10-31 Conduent Business Services, Llc Temporal fusion of multimodal data from multiple data acquisition systems to automatically recognize and classify an action

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103310236A (en) * 2013-06-27 2013-09-18 上海数据分析与处理技术研究所 Mosaic image detection method and system based on local two-dimensional characteristics
WO2016084071A1 (en) * 2014-11-24 2016-06-02 Isityou Ltd. Systems and methods for recognition of faces e.g. from mobile-device-generated images of faces
CN104881855A (en) * 2015-06-10 2015-09-02 北京航空航天大学 Multi-focus image fusion method using morphology and free boundary condition active contour model
CN107203769A (en) * 2017-04-27 2017-09-26 天津大学 Image characteristic extracting method based on DCT and LBP Fusion Features
CN107403168A (en) * 2017-08-07 2017-11-28 青岛有锁智能科技有限公司 A kind of facial-recognition security systems
CN107369148A (en) * 2017-09-20 2017-11-21 湖北工业大学 Based on the multi-focus image fusing method for improving SML and Steerable filter
CN108304833A (en) * 2018-04-17 2018-07-20 哈尔滨师范大学 Face identification method based on MBLBP and DCT-BM2DPCA
CN109359607A (en) * 2018-10-25 2019-02-19 辽宁工程技术大学 A kind of palm print and palm vein fusion identification method based on texture
WO2021041715A2 (en) * 2019-08-30 2021-03-04 University Of Kansas Compositions including igg fc mutations and uses thereof
CN111027570A (en) * 2019-11-20 2020-04-17 电子科技大学 Image multi-scale feature extraction method based on cellular neural network
CN111260599A (en) * 2020-01-20 2020-06-09 重庆邮电大学 Multi-focus image fusion method based on DCT and focus evaluation
CN112364809A (en) * 2020-11-24 2021-02-12 辽宁科技大学 High-accuracy face recognition improved algorithm

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"MLNet: A Multi-domain Lightweight Network for Multi-focus Image Fusion";Xixi Nie等;《IEEE Transactions on Multimedia ( Early Access )》;第1-14页 *

Also Published As

Publication number Publication date
CN113837976A (en) 2021-12-24


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant