AU2020100199A4 - A medical image fusion method based on two-layer decomposition and improved spatial frequency - Google Patents

A medical image fusion method based on two-layer decomposition and improved spatial frequency

Info

Publication number
AU2020100199A4
Authority
AU
Australia
Prior art keywords
image
equation
fusion
detail
fused
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
AU2020100199A
Inventor
Sihua Cao
Shuying Huang
Yong Yang
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to AU2020100199A priority Critical patent/AU2020100199A4/en
Application granted granted Critical
Publication of AU2020100199A4 publication Critical patent/AU2020100199A4/en
Ceased legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/251 Fusion techniques of input or preprocessed data
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G06V10/464 Salient features, e.g. scale invariant feature transforms [SIFT] using a plurality of salient features, e.g. bag-of-words [BoW] representations
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

Abstract: Multi-modal medical image fusion is widely used in clinical diagnosis and treatment. This patent presents a novel multi-modal medical image fusion method based on Latent Low-Rank Representation (LatLRR) and a modified spatial frequency. First, LatLRR is used to learn a projection matrix for extracting salient features. The learned projection matrix L is then used to decompose each source image into a base part and a detail part. The detail parts are fused using the modified spatial frequency, and a guided filter is applied to preserve the edges of the fused detail layer, while the base parts are fused with a method based on visual saliency mapping (VSM). The fused image is obtained by combining the fused detail part and the fused base part. Experimental results demonstrate that, compared with other state-of-the-art fusion methods, the proposed method achieves better fusion performance in both subjective visual quality and objective evaluation.

Description

BACKGROUND AND PURPOSE [0001] With the development of science and technology, computer imaging technology has also been greatly improved. In the field of medicine there are more and more types of medical images, each of which provides specific information. For example, computerized tomography (CT) images capture hard-tissue information, such as bone, but their soft-tissue characterization is limited. Magnetic resonance imaging (MRI) images provide better soft-tissue definition and higher spatial resolution, but they lack functional information about soft tissues. Positron emission tomography (PET) images can quantitatively and dynamically detect tissue metabolism and are widely used in tumor tracking and detection, but their resolution is low. Single-photon emission computed tomography (SPECT) images are used to study blood flow in tissues and organs by means of nuclear imaging. In summary, each imaging modality has its own advantages and disadvantages, so it is necessary to fuse multi-modal medical images with medical image fusion technology to obtain fused images with comprehensive information.
[0002] Medical image fusion methods can be divided into two categories: methods based on the transform domain and methods based on the spatial domain. Transform-domain fusion methods have been widely used in medical image fusion; the most representative are the sparse-representation-based methods and the multi-scale-transform-based methods. The biggest disadvantage of the sparse-representation-based approach is that dictionary learning is often time-consuming, especially online dictionary learning. Although the multi-scale methods conform to the human visual system, the fused images often suffer from artifacts and distortion. According to the characteristics of spatial information, spatial-domain methods select pixels or pixel blocks from the source images to construct the fused image, which preserves the details of the original images well: the information retained in the fused image is consistent with that in the source images. However, due to the limitations of the fusion rules, the contrast and sharpness of the fused images are reduced. Based on the above considerations, namely decomposing the source images in the spatial domain and improving the fusion rules accordingly, a multi-modal medical image fusion method based on Latent Low-Rank Representation (LatLRR) decomposition and improved spatial frequency is proposed.
[0003] Existing medical image fusion methods have shortcomings, so this patent proposes a new medical image fusion framework based on latent low-rank decomposition and improved spatial frequency, as shown in Fig. 1. The whole fusion framework consists of three parts. First, LatLRR is used to decompose each source image into a base layer, representing the approximate components of the source image, and a detail layer, containing the detail information and texture of the source image. Then the base layers are fused with a rule based on the visual saliency map, while the proposed improved spatial frequency is used to obtain an initial fused detail layer; a maximum rule applied to the two detail layers produces the guidance image for a guided filter, and filtering the initial fused detail layer under this guidance preserves the strongest gradient and detail information in the final detail layer. Finally, the fusion image is obtained by adding the fused base layer and the fused detail layer. The results show that this method not only has a good visual effect but also performs excellently on objective evaluation indexes; experiments show that it is superior to existing fusion methods in both subjective and objective terms.
DESCRIPTION OF IMAGE DECOMPOSITION ALGORITHM BASED ON LATLRR [0001] In the field of image fusion, image decomposition and reconstruction are closely related to the quality of the fused image. The purpose of image decomposition is to extract the different types of information in the source image. Here, an image decomposition method based on latent low-rank representation is selected, which not only effectively decomposes the source image into a base layer and a detail layer, but also has the advantages of fast decomposition and low computational cost. The procedure of the image decomposition is as follows:
[0002] Step 1: Pre-train the projection matrix L.
First, a small training set of five medical images is built. In the training phase, a sliding window is used to segment all the training images into non-overlapping patches of size 4×4, and 1000 patches are randomly selected to generate the input matrix X, in which each column contains all the pixels of one image patch. The size of X is therefore N×M, where N = 4×4 = 16 and M = 1000. The projection matrix L, whose size is N×N, is then obtained by solving the following optimization problem:
$$\min_{Z,L,E} \|Z\|_* + \|L\|_* + \lambda\|E\|_1 \quad \text{s.t.} \quad X = XZ + LX + E$$ Equation 1

where Z, L and E represent the low-rank coefficient matrix, the pre-trained projection matrix and the sparse noise matrix, respectively, and X is the constructed input matrix. $\|\cdot\|_1$ and $\|\cdot\|_*$ denote the $l_1$ norm and the nuclear norm, respectively; the nuclear norm is the sum of the singular values of a matrix. $\lambda > 0$ is the balance coefficient, set to 0.4 here. Equation 1 is solved by the inexact Augmented Lagrange Multiplier (ALM) method, which yields the projection matrix L.
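As an illustration only, the following minimal NumPy sketch shows how the training matrix X can be assembled; `solve_latlrr` names a hypothetical inexact-ALM solver of Equation 1 and is not part of the patent:

```python
import numpy as np

def build_training_matrix(images, patch=4, m=1000, seed=0):
    """Collect non-overlapping patch x patch blocks from the training
    images and stack m randomly chosen ones as the columns of X."""
    rng = np.random.default_rng(seed)
    blocks = []
    for img in images:                              # 2-D grayscale arrays
        h, w = img.shape
        for y in range(0, h - patch + 1, patch):    # non-overlapping grid
            for x in range(0, w - patch + 1, patch):
                blocks.append(img[y:y + patch, x:x + patch].reshape(-1))
    idx = rng.choice(len(blocks), size=m, replace=False)
    return np.stack([blocks[i] for i in idx], axis=1).astype(float)

# X = build_training_matrix(train_imgs)  # X.shape == (16, 1000)
# L = solve_latlrr(X)                    # hypothetical inexact-ALM solver of Equation 1
```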
[0003] Step 2: Decompose the source images.
Assume that $I_k, k \in \{1,2\}$ represents an input source image, which is divided into many overlapping image patches by the sliding window technique; these patches are reshuffled into a source matrix in which each column corresponds to one image patch. The detail part and the base part are then calculated by Equation 2:
$$V_{dk} = L \times P(I_k), \quad I_{dk} = R(V_{dk}), \quad I_{bk} = I_k - I_{dk}$$
$$\text{s.t.} \quad I_k = I_{bk} + I_{dk}, \quad k \in \{1,2\}$$ Equation 2

where $I_{dk}$ and $I_{bk}$ represent the detail part and the base part obtained by decomposition, L denotes the projection matrix learned by LatLRR, and $V_{dk}$ represents the detail vectors extracted from the source image $I_k$. $P(\cdot)$ is the sliding window and reshuffling operator, and $R(\cdot)$ is the reconstruction operator that reassembles the detail vectors into the detail image. As shown in Equation 2, the detail part of an input image is obtained from $I_k$, L and $P(\cdot)$; the base part is obtained by subtracting the detail part from the input image.
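A minimal sketch of this decomposition step, continuing the NumPy code above; overlap-averaging is assumed for the reconstruction operator $R(\cdot)$, which the patent does not spell out:

```python
def latlrr_decompose(img, L, patch=4, stride=1):
    """Two-layer decomposition of Equation 2: detail via the learned
    projection matrix L, base as the residual I - I_d."""
    h, w = img.shape
    detail = np.zeros((h, w))
    weight = np.zeros((h, w))
    for y in range(0, h - patch + 1, stride):        # overlapping windows, P(.)
        for x in range(0, w - patch + 1, stride):
            v = img[y:y + patch, x:x + patch].reshape(-1)
            d = (L @ v).reshape(patch, patch)        # V_d = L x P(I)
            detail[y:y + patch, x:x + patch] += d    # R(.): overlap-average
            weight[y:y + patch, x:x + patch] += 1.0
    detail /= np.maximum(weight, 1.0)
    base = img - detail                              # I_b = I - I_d
    return base, detail
```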
[0004] Step 3: Reconstruction of the decomposed images.
After the LatLRR-based decomposition, each input source image is decomposed into a base part and a detail part. The base parts are fused with the visual saliency mapping method, while the detail parts are fused with the proposed method based on the improved spatial frequency and guided filtering. The fused base image is calculated by Equation 3:
$$I_{bf} = FS_b(I_{b1}, I_{b2})$$ Equation 3

where $FS_b$ is the base-part fusion strategy, which is introduced in the next subsection; $I_{b1}$ and $I_{b2}$ are the base parts obtained by decomposing the two source images, and $I_{bf}$ denotes the fused base image. The fused detail image is calculated by Equation 4:

$$I_{df} = FS_d(I_{d1}, I_{d2})$$ Equation 4

where $FS_d$ is the detail-part fusion strategy, which is introduced in the next subsection; $I_{d1}$ and $I_{d2}$ are the detail parts obtained by decomposing the two source images, and $I_{df}$ denotes the fused detail image.
[0005] After the fused detail image and the fused base image are obtained through the designed fusion rules, the fusion image is reconstructed from these two parts.
THE BASE PART FUSION RULE DESCRIPTION [0001] The base part of the input image mainly contains the approximate components of the source image and the brightness information. In this patent, we use a fusion rule based on visual saliency mapping to fuse the base parts. The visual saliency mapping algorithm defines the saliency of the current pixel in terms of its contrast with all other pixels. Assume that $I_p$ is the intensity value at pixel p in image I; the saliency value V(p) at pixel p is defined in Equation 5.
[0002] Step 1: Calculate the saliency value of each pixel.

$$V(p) = |I_p - I_1| + |I_p - I_2| + |I_p - I_3| + \cdots + |I_p - I_N|$$ Equation 5

where N denotes the total number of pixels in I. If two pixels have the same intensity value, their saliency values are equal, so Equation 5 can be rewritten as:

$$V(p) = \sum_{j=0}^{L-1} M_j \left| I_p - I_j \right|$$ Equation 6

where j denotes a pixel intensity, $M_j$ is the number of pixels whose intensity equals j, and L is the number of gray levels (256 in this patent). Finally, V(p) is normalized to [0, 1].
[0003] Step 2: Fuse the base parts.
Visual saliency mapping is applied to the two input images to obtain the saliency maps $VSM_A$ and $VSM_B$. The fused base part is then obtained by the following adaptive weighted averaging:

$$I_{bf} = W_b I_{b1} + (1 - W_b) I_{b2}$$ Equation 7

where the weight $W_b$ is defined as:

$$W_b = \frac{VSM_A}{VSM_A + VSM_B}$$ Equation 8

[0004] Based on the above base-part fusion strategy, the information of the base parts can be effectively fused.
THE DETAIL PART FUSION RULE DESCRIPTION [0001] Step 1: Compute the spatial frequency.
The detail parts of the input images mainly contain structural information and significant features. Therefore, this patent proposes a fusion rule based on the improved spatial frequency and guided filtering for the fusion of the detail parts, which effectively preserves the structural information and significant features of the source images. Spatial frequency is a no-reference image quality index that evaluates the overall clarity of an image. The spatial frequency is defined as follows:
$$SF = \sqrt{(RF)^2 + (CF)^2}$$ Equation 9

where RF and CF represent the row frequency and the column frequency, respectively, defined below:

$$RF = \sqrt{\frac{1}{MN}\sum_{x=1}^{M}\sum_{y=2}^{N}\left[f(x,y) - f(x,y-1)\right]^2}$$ Equation 10

$$CF = \sqrt{\frac{1}{MN}\sum_{y=1}^{N}\sum_{x=2}^{M}\left[f(x,y) - f(x-1,y)\right]^2}$$ Equation 11

where $M \times N$ is the size of the input image and $f(x,y)$ denotes the pixel value at position $(x,y)$.
[0002] Step 2: Compute the improved spatial frequency.
On the basis of the above spatial frequency, this patent additionally considers the main diagonal spatial frequency and the secondary diagonal spatial frequency, so Equation 9 can be rewritten as:

$$SF = \sqrt{(RF)^2 + (CF)^2 + (MDF)^2 + (SDF)^2}$$ Equation 12

where MDF and SDF represent the main diagonal and secondary diagonal spatial frequencies, respectively, defined below:

$$MDF = \sqrt{\frac{1}{MN}\sum_{x=2}^{M}\sum_{y=2}^{N}\left[f(x,y) - f(x-1,y-1)\right]^2}$$ Equation 13

$$SDF = \sqrt{\frac{1}{MN}\sum_{x=1}^{M-1}\sum_{y=2}^{N}\left[f(x,y) - f(x+1,y-1)\right]^2}$$ Equation 14
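A per-patch sketch of Equations 12 to 14, continuing the NumPy code above; the squared differences are averaged over the patch size MN before the square root, as in the definitions:

```python
def improved_sf(p):
    """Improved spatial frequency of one patch (Equations 12-14):
    row, column, main-diagonal and secondary-diagonal differences."""
    p = p.astype(float)
    n = p.size                                             # M * N
    rf2 = ((p[:, 1:] - p[:, :-1]) ** 2).sum() / n          # RF^2,  Eq. 10
    cf2 = ((p[1:, :] - p[:-1, :]) ** 2).sum() / n          # CF^2,  Eq. 11
    mdf2 = ((p[1:, 1:] - p[:-1, :-1]) ** 2).sum() / n      # MDF^2, Eq. 13
    sdf2 = ((p[:-1, 1:] - p[1:, :-1]) ** 2).sum() / n      # SDF^2, Eq. 14
    return np.sqrt(rf2 + cf2 + mdf2 + sdf2)                # Eq. 12
```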
[0003] Step 3: Fuse the initial detail parts.
In this patent, the detail images are cut into non-overlapping patches and the improved spatial frequency of each pair of corresponding patches is calculated. Here $M \times N$ is the patch size, which is set to 2×2. The calculated spatial frequencies are then used to form a weighted average of the detail parts, as shown below:
I =w I +W I 1 df ” d\‘d\' ” dPdi
Equation 15
Where he weight Wdl an(j Wd2 js defined as:
SFk +a SFX + SF + ci ke {1,2}
Equation 16 where and SF2 two source images.a is the corresponding spatial frequencies of the details of the is a scalar parameter that approaches 0, in order to prevent the denominator from being equal to 0, which is set here to 0.0000001.
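A sketch of this patch-wise weighting, continuing the code above. The second weight is taken as $1 - W_{d1}$ so the two weights sum exactly to one, which differs from the literal Equation 16 only by a term of order a:

```python
def fuse_detail_initial(d1, d2, patch=2, a=1e-7):
    """Patch-wise weighted average of Equations 15 and 16 using the
    improved spatial frequency of each pair of corresponding patches."""
    out = np.zeros_like(d1, dtype=float)
    h, w = d1.shape
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            p1 = d1[y:y + patch, x:x + patch]
            p2 = d2[y:y + patch, x:x + patch]
            sf1, sf2 = improved_sf(p1), improved_sf(p2)
            w1 = (sf1 + a) / (sf1 + sf2 + 2 * a)   # W_d1; 2a keeps w1 + w2 = 1
            out[y:y + patch, x:x + patch] = w1 * p1 + (1.0 - w1) * p2
    return out
```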
[0004] Through the above fusion rule based on the improved spatial frequency, the initial fused detail image is obtained. However, in order to introduce more edge information, the initial fused image is further refined by guided filtering.
[0005] Guided filtering is an edge-preserving filtering algorithm, like bilateral filtering, that filters an input image under the guidance of a guidance image. It relies on a local linear model: within each local window, the output is a linear function of the guidance image, so the output at any point is given by a linear analytic expression for that window.
[0006] Step 4: Filter the initial fused detail image with the guided filter.
The filter output image O is a local linear transformation of the guidance image G. In a window $w_k$ centered at pixel k, the filter output $O_i$ at pixel i is obtained as follows:

$$O_i = a_k G_i + b_k, \quad \forall i \in w_k$$ Equation 17

The guidance image G is obtained by taking the pixel-wise maximum of the two detail parts:

$$G_i = \max(I_{d1,i}, I_{d2,i}), \quad i = 1,2,\ldots,N$$ Equation 18

[0007] Taking the derivative of Equation 17 shows that wherever the guidance image G has a gradient, the output image O also has a gradient. $a_k$ and $b_k$ are linear coefficients in $w_k$ and can be estimated by minimizing the following cost function in the local window:
$$E(a_k, b_k) = \sum_{i \in w_k}\left[(a_k G_i + b_k - P_i)^2 + \varepsilon a_k^2\right]$$ Equation 19

where P is the input image to be filtered, that is, the initial fused detail part obtained with the improved spatial frequency, and $\varepsilon$ is an important regularization parameter that adjusts the filtering effect; in this patent it is set to 0.000001. Solving Equation 19 by the least-squares method gives:

$$a_k = \frac{\frac{1}{H}\sum_{i \in w_k} G_i P_i - \mu_k \bar{P}_k}{\sigma_k^2 + \varepsilon}, \qquad b_k = \bar{P}_k - a_k \mu_k$$ Equation 20

Here $\mu_k$ and $\sigma_k^2$ represent the mean and variance of G in the local window $w_k$, H is the number of pixels in the window, and $\bar{P}_k$ is the mean of P in window $w_k$. A pixel is contained in multiple windows, that is, each pixel is described by multiple linear functions; the output value at a point is therefore the average of all the linear functions covering it:
$$O_i = \frac{1}{H}\sum_{k:\, i \in w_k}(a_k G_i + b_k) = \bar{a}_i G_i + \bar{b}_i$$ Equation 21
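A compact sketch of Equations 17 to 21 using box-filter means, continuing the code above; the window radius r and the use of SciPy's uniform_filter are assumptions, since the patent does not specify the window size:

```python
from scipy.ndimage import uniform_filter

def guided_filter(G, P, r=4, eps=1e-6):
    """Guided filter of Equations 17-21 with box-filter window means;
    the (2r+1) x (2r+1) window is an assumed choice."""
    box = lambda arr: uniform_filter(arr.astype(float), size=2 * r + 1)
    mu_g, mu_p = box(G), box(P)                    # window means of G and P
    a = (box(G * P) - mu_g * mu_p) / (box(G * G) - mu_g ** 2 + eps)  # Eq. 20
    b = mu_p - a * mu_g                            # b_k = mean(P) - a_k * mu_k
    return box(a) * G + box(b)                     # Eq. 21: averaged a, b
```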
[0008] Step 5: Fuse the final image.
Through the guided filtering, the edge information in the guidance image G is effectively retained in the output image O, so the final fused detail image is rich in detail; the output image O is taken as the final fused detail image $I_{df}$. Finally, the fused base image and the fused detail image are added to obtain the final fusion image $I_f$, as shown below:

$$I_f = I_{bf} + I_{df}$$ Equation 22

where $I_{bf}$ and $I_{df}$ are the fused base image and the fused detail image, respectively, and $I_f$ is the final fused image.
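Putting the pieces together, a hypothetical end-to-end driver for the whole pipeline might look as follows; all helpers are the sketches defined above, and 8-bit source images are assumed for the saliency step:

```python
def fuse(img1, img2, L):
    """End-to-end sketch: LatLRR two-layer decomposition, VSM base
    fusion, improved-SF detail fusion refined by guided filtering,
    and additive reconstruction (Equation 22)."""
    b1, d1 = latlrr_decompose(img1, L)
    b2, d2 = latlrr_decompose(img2, L)
    base = fuse_base(b1, b2, vsm(img1), vsm(img2))       # Equations 5-8
    init = fuse_detail_initial(d1, d2)                   # Equations 15-16
    detail = guided_filter(np.maximum(d1, d2), init)     # Equations 17-21
    return base + detail                                 # Equation 22
```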

Claims (3)

1. The procedures of the image decomposition algorithm based on LatLRR are as follows: [0001] Step 1: Pre-train the projection matrix L.
First, a small training set of five medical images is built. In the training phase, a sliding window is used to segment all the training images into non-overlapping patches of size 4×4, and 1000 patches are randomly selected to generate the input matrix X, in which each column contains all the pixels of one image patch. The size of X is therefore N×M, where N = 4×4 = 16 and M = 1000. The projection matrix L, whose size is N×N, is then obtained by solving the following optimization problem:

$$\min_{Z,L,E} \|Z\|_* + \|L\|_* + \lambda\|E\|_1 \quad \text{s.t.} \quad X = XZ + LX + E$$ Equation 1

where Z, L and E represent the low-rank coefficient matrix, the pre-trained projection matrix and the sparse noise matrix, respectively, and X is the constructed input matrix. $\|\cdot\|_1$ and $\|\cdot\|_*$ denote the $l_1$ norm and the nuclear norm, respectively; the nuclear norm is the sum of the singular values of a matrix. $\lambda > 0$ is the balance coefficient, set to 0.4 here. Equation 1 is solved by the inexact Augmented Lagrange Multiplier (ALM) method, which yields the projection matrix L.
[0002] Step 2: Decompose the source images.
Assume that $I_k, k \in \{1,2\}$ represents an input source image, which is divided into many overlapping image patches by the sliding window technique; these patches are reshuffled into a source matrix in which each column corresponds to one image patch. The detail part and the base part are then calculated by Equation 2:

$$V_{dk} = L \times P(I_k), \quad I_{dk} = R(V_{dk}), \quad I_{bk} = I_k - I_{dk}$$
$$\text{s.t.} \quad I_k = I_{bk} + I_{dk}, \quad k \in \{1,2\}$$ Equation 2

where $I_{dk}$ and $I_{bk}$ represent the detail part and the base part obtained by decomposition, L denotes the projection matrix learned by LatLRR, and $V_{dk}$ represents the detail vectors extracted from the source image $I_k$. $P(\cdot)$ is the sliding window and reshuffling operator, and $R(\cdot)$ is the reconstruction operator that reassembles the detail vectors into the detail image. As shown in Equation 2, the detail part of an input image is obtained from $I_k$, L and $P(\cdot)$; the base part is obtained by subtracting the detail part from the input image.
[0003] Step 3: Reconstruction of the decomposed images.
After the LatLRR-based decomposition, each input source image is decomposed into a base part and a detail part. The base parts are fused with the visual saliency mapping method, while the detail parts are fused with the proposed method based on the improved spatial frequency and guided filtering. The fused base image is calculated by Equation 3:

$$I_{bf} = FS_b(I_{b1}, I_{b2})$$ Equation 3

where $FS_b$ is the base-part fusion strategy, which is introduced in the next claim; $I_{b1}$ and $I_{b2}$ are the base parts obtained by decomposing the two source images, and $I_{bf}$ denotes the fused base image. The fused detail image is calculated by Equation 4:

$$I_{df} = FS_d(I_{d1}, I_{d2})$$ Equation 4

where $FS_d$ is the detail-part fusion strategy, which is introduced below; $I_{d1}$ and $I_{d2}$ are the detail parts obtained by decomposing the two source images, and $I_{df}$ denotes the fused detail image.
[0004] After the fused detail image and the fused base image are obtained through the designed fusion rules, the fusion image is reconstructed from these two parts.
2. The procedures of the base part fusion rule are as follows: [0001] Step 1: Calculate the saliency value of each pixel.

$$V(p) = |I_p - I_1| + |I_p - I_2| + |I_p - I_3| + \cdots + |I_p - I_N|$$ Equation 5

where N denotes the total number of pixels in I. If two pixels have the same intensity value, their saliency values are equal, so Equation 5 can be rewritten as:

$$V(p) = \sum_{j=0}^{L-1} M_j \left| I_p - I_j \right|$$ Equation 6

where j denotes a pixel intensity, $M_j$ is the number of pixels whose intensity equals j, and L is the number of gray levels (256 in this patent). Finally, V(p) is normalized to [0, 1].
[0002] Step 2: Fuse the base parts.
Visual saliency mapping is applied to the two input images to obtain the saliency maps $VSM_A$ and $VSM_B$. The fused base part is then obtained by the following adaptive weighted averaging:

$$I_{bf} = W_b I_{b1} + (1 - W_b) I_{b2}$$ Equation 7

where the weight $W_b$ is defined as:

$$W_b = \frac{VSM_A}{VSM_A + VSM_B}$$ Equation 8

[0003] Based on the above base-part fusion strategy, the information of the base parts can be effectively fused.
3. The procedures of the detail part fusion rule are as follows:
[0001] The detail parts of the input images mainly contain structural information and significant features. Therefore, this patent proposes a fusion rule based on the improved spatial frequency and guided filtering for the fusion of the detail parts, which effectively preserves the structural information and significant features of the source images. Spatial frequency is a no-reference image quality index that evaluates the overall clarity of an image.
[0002] Step 1: Compute the spatial frequency. The spatial frequency is defined as follows:

$$SF = \sqrt{(RF)^2 + (CF)^2}$$ Equation 9

where RF and CF represent the row frequency and the column frequency, respectively, defined below:

$$RF = \sqrt{\frac{1}{MN}\sum_{x=1}^{M}\sum_{y=2}^{N}\left[f(x,y) - f(x,y-1)\right]^2}$$ Equation 10

$$CF = \sqrt{\frac{1}{MN}\sum_{y=1}^{N}\sum_{x=2}^{M}\left[f(x,y) - f(x-1,y)\right]^2}$$ Equation 11

where $M \times N$ is the size of the input image and $f(x,y)$ denotes the pixel value at position $(x,y)$.
[0003] Step 2: Compute the improved spatial frequency.

$$SF = \sqrt{(RF)^2 + (CF)^2 + (MDF)^2 + (SDF)^2}$$ Equation 12

where MDF and SDF represent the main diagonal and secondary diagonal spatial frequencies, respectively, defined below:

$$MDF = \sqrt{\frac{1}{MN}\sum_{x=2}^{M}\sum_{y=2}^{N}\left[f(x,y) - f(x-1,y-1)\right]^2}$$ Equation 13

$$SDF = \sqrt{\frac{1}{MN}\sum_{x=1}^{M-1}\sum_{y=2}^{N}\left[f(x,y) - f(x+1,y-1)\right]^2}$$ Equation 14
[0004] Step 3: Fuse the initial detail parts.

$$I_{df} = W_{d1} I_{d1} + W_{d2} I_{d2}$$ Equation 15

where the weights $W_{d1}$ and $W_{d2}$ are defined as:

$$W_{dk} = \frac{SF_k + a}{SF_1 + SF_2 + a}, \quad k \in \{1,2\}$$ Equation 16

where $SF_1$ and $SF_2$ are the improved spatial frequencies of the corresponding patches of the two detail images, and a is a scalar parameter approaching 0 whose purpose is to prevent the denominator from being 0; it is set here to 0.0000001.
[0005] Step 4: Filter the initial fused detail image with the guided filter.
The filter output image O is a local linear transformation of the guidance image G. In a window $w_k$ centered at pixel k, the filter output $O_i$ at pixel i is obtained as follows:

$$O_i = a_k G_i + b_k, \quad \forall i \in w_k$$ Equation 17

The guidance image G is obtained by taking the pixel-wise maximum of the two detail parts:

$$G_i = \max(I_{d1,i}, I_{d2,i}), \quad i = 1,2,\ldots,N$$ Equation 18

Taking the derivative of Equation 17 shows that wherever the guidance image G has a gradient, the output image O also has a gradient. $a_k$ and $b_k$ are linear coefficients in $w_k$ and can be estimated by minimizing the following cost function in the local window:
$$E(a_k, b_k) = \sum_{i \in w_k}\left[(a_k G_i + b_k - P_i)^2 + \varepsilon a_k^2\right]$$ Equation 19

where P is the input image to be filtered, that is, the initial fused detail part obtained with the improved spatial frequency, and $\varepsilon$ is an important regularization parameter that adjusts the filtering effect; in this patent it is set to 0.000001. Solving Equation 19 by the least-squares method gives:

$$a_k = \frac{\frac{1}{H}\sum_{i \in w_k} G_i P_i - \mu_k \bar{P}_k}{\sigma_k^2 + \varepsilon}, \qquad b_k = \bar{P}_k - a_k \mu_k$$ Equation 20

Here $\mu_k$ and $\sigma_k^2$ represent the mean and variance of G in the local window $w_k$, H is the number of pixels in the window, and $\bar{P}_k$ is the mean of P in window $w_k$. A pixel is contained in multiple windows, that is, each pixel is described by multiple linear functions; the output value at a point is therefore the average of all the linear functions covering it:
$$O_i = \frac{1}{H}\sum_{k:\, i \in w_k}(a_k G_i + b_k) = \bar{a}_i G_i + \bar{b}_i$$ Equation 21

[0006] Step 5: Fuse the final image.
Through the guided filtering, the edge information in the guidance image G is effectively retained in the output image O, so the final fused detail image is rich in detail; the output image O is taken as the final fused detail image $I_{df}$. Finally, the fused base image and the fused detail image are added to obtain the final fusion image $I_f$, as shown below:

$$I_f = I_{bf} + I_{df}$$ Equation 22

where $I_{bf}$ and $I_{df}$ are the fused base image and the fused detail image, respectively, and $I_f$ is the final fused image.
AU2020100199A 2020-02-08 2020-02-08 A medical image fusion method based on two-layer decomposition and improved spatial frequency Ceased AU2020100199A4 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2020100199A AU2020100199A4 (en) 2020-02-08 2020-02-08 A medical image fusion method based on two-layer decomposition and improved spatial frequency

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
AU2020100199A AU2020100199A4 (en) 2020-02-08 2020-02-08 A medical image fusion method based on two-layer decomposition and improved spatial frequency

Publications (1)

Publication Number Publication Date
AU2020100199A4 true AU2020100199A4 (en) 2020-03-19

Family

ID=69805000

Family Applications (1)

Application Number Title Priority Date Filing Date
AU2020100199A Ceased AU2020100199A4 (en) 2020-02-08 2020-02-08 A medical image fusion method based on two-layer decomposition and improved spatial frequency

Country Status (1)

Country Link
AU (1) AU2020100199A4 (en)

Cited By (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111429391B (en) * 2020-03-23 2023-04-07 西安科技大学 Infrared and visible light image fusion method, fusion system and application
CN111429391A (en) * 2020-03-23 2020-07-17 西安科技大学 Infrared and visible light image fusion method, fusion system and application
CN111507913A (en) * 2020-04-08 2020-08-07 四川轻化工大学 Image fusion algorithm based on texture features
CN111539936A (en) * 2020-04-24 2020-08-14 河北工业大学 Mixed weight multispectral fusion method for lithium battery image
CN111539936B (en) * 2020-04-24 2023-06-09 河北工业大学 Mixed weight multispectral fusion method of lithium battery image
CN111833284B (en) * 2020-07-16 2022-10-14 昆明理工大学 Multi-source image fusion method based on low-rank decomposition and convolution sparse coding
CN111833284A (en) * 2020-07-16 2020-10-27 昆明理工大学 Multi-source image fusion method based on low-rank decomposition and convolution sparse coding
CN112328782A (en) * 2020-11-04 2021-02-05 福州大学 Multi-modal abstract generation method fusing image filter
CN112561842B (en) * 2020-12-07 2022-12-09 昆明理工大学 Multi-source damaged image fusion and recovery combined implementation method based on dictionary learning
CN112561842A (en) * 2020-12-07 2021-03-26 昆明理工大学 Multi-source damaged image fusion and recovery combined implementation method based on dictionary learning
CN112950518B (en) * 2021-03-19 2022-10-04 中国科学院长春光学精密机械与物理研究所 Image fusion method based on potential low-rank representation nested rolling guide image filtering
CN112950518A (en) * 2021-03-19 2021-06-11 中国科学院长春光学精密机械与物理研究所 Image fusion method based on potential low-rank representation nested rolling guide image filtering
CN113742802A (en) * 2021-09-03 2021-12-03 国网经济技术研究院有限公司 Two-dimensional multi-element signal empirical mode decomposition rapid method for engineering drawing fusion
CN115065761A (en) * 2022-06-13 2022-09-16 中亿启航数码科技(北京)有限公司 Multi-lens scanning device and scanning method thereof
CN115065761B (en) * 2022-06-13 2023-09-12 中亿启航数码科技(北京)有限公司 Multi-lens scanning device and scanning method thereof
WO2024066090A1 (en) * 2022-09-26 2024-04-04 上海闻泰电子科技有限公司 Corner detection method and system based on texture features, electronic device, and medium
CN115797347A (en) * 2023-02-06 2023-03-14 临沂农业科技职业学院(筹) Automatic production line abnormity monitoring method based on computer vision
CN115797347B (en) * 2023-02-06 2023-04-28 临沂农业科技职业学院(筹) Automatic production line abnormality monitoring method based on computer vision
CN116188305A (en) * 2023-02-16 2023-05-30 长春理工大学 Multispectral image reconstruction method based on weighted guided filtering
CN116188305B (en) * 2023-02-16 2023-12-19 长春理工大学 Multispectral image reconstruction method based on weighted guided filtering
CN116167956A (en) * 2023-03-28 2023-05-26 无锡学院 ISAR and VIS image fusion method based on asymmetric multi-layer decomposition
CN116167956B (en) * 2023-03-28 2023-11-17 无锡学院 ISAR and VIS image fusion method based on asymmetric multi-layer decomposition
CN116630762A (en) * 2023-06-25 2023-08-22 山东卓业医疗科技有限公司 Multi-mode medical image fusion method based on deep learning
CN116630762B (en) * 2023-06-25 2023-12-22 山东卓业医疗科技有限公司 Multi-mode medical image fusion method based on deep learning
CN117197014A (en) * 2023-09-12 2023-12-08 南京诺源医疗器械有限公司 Lung medical image fusion method and system capable of reducing noise and electronic equipment
CN117197014B (en) * 2023-09-12 2024-02-20 南京诺源医疗器械有限公司 Lung medical image fusion method and system capable of reducing noise and electronic equipment

Similar Documents

Publication Publication Date Title
AU2020100199A4 (en) A medical image fusion method based on two-layer decomposition and improved spatial frequency
Zhou et al. Deep learning of the sectional appearances of 3D CT images for anatomical structure segmentation based on an FCN voting method
Liu et al. Deep iterative reconstruction estimation (DIRE): approximate iterative reconstruction estimation for low dose CT imaging
Selver et al. Patient oriented and robust automatic liver segmentation for pre-evaluation of liver transplantation
Zhao et al. A new approach for medical image enhancement based on luminance-level modulation and gradient modulation
CN107194912B (en) Brain CT/MR image fusion method based on sparse representation and improved coupled dictionary learning
Ramlal et al. An improved multimodal medical image fusion scheme based on hybrid combination of nonsubsampled contourlet transform and stationary wavelet transform
US8705821B2 (en) Method and apparatus for multimodal visualization of volume data sets
Liu et al. Robust spiking cortical model and total-variational decomposition for multimodal medical image fusion
Panigrahy et al. Parameter adaptive unit-linking pulse coupled neural network based MRI–PET/SPECT image fusion
Chen et al. Blood vessel enhancement via multi-dictionary and sparse coding: Application to retinal vessel enhancing
Wang et al. Medical image fusion based on hybrid three-layer decomposition model and nuclear norm
Khan et al. Medical image colorization for better visualization and segmentation
Jia et al. Denoising for low-dose CT image by discriminative weighted nuclear norm minimization
Huang et al. Multi-modal feature-fusion for CT metal artifact reduction using edge-enhanced generative adversarial networks
Feng et al. Rethinking PET image reconstruction: ultra-low-dose, sinogram and deep learning
Irshad et al. Gradient compass-based adaptive multimodal medical image fusion
Li et al. Learning non-local perfusion textures for high-quality computed tomography perfusion imaging
Fu et al. MDRANet: A multiscale dense residual attention network for magnetic resonance and nuclear medicine image fusion
Krishnan et al. Medical image enhancement in health care applications using modified sun flower optimization
Yin et al. Unpaired low-dose CT denoising via an improved cycle-consistent adversarial network with attention ensemble
Liang et al. A self-supervised deep learning network for low-dose CT reconstruction
Xu et al. Bi-MGAN: bidirectional T1-to-T2 MRI images prediction using multi-generative multi-adversarial nets
Khan et al. Multimodal medical image fusion towards future research: A review
Goyal et al. An efficient medical assistive diagnostic algorithm for visualisation of structural and tissue details in CT and MRI fusion

Legal Events

Date Code Title Description
FGI Letters patent sealed or granted (innovation patent)
MK22 Patent ceased section 143a(d), or expired - non payment of renewal fee or expiry