CN105678725B - Image fusion method and device - Google Patents

Image fusion method and device

Info

Publication number
CN105678725B
CN105678725B CN201511019552.8A CN201511019552A
Authority
CN
China
Prior art keywords
image
function
image array
fused
indicate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201511019552.8A
Other languages
Chinese (zh)
Other versions
CN105678725A (en)
Inventor
白旭
任婧婧
赵海英
陈洪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
DIGITAL TELEVISION TECHNOLOGY CENTER BEIJING PEONY ELECTRONIC GROUP Co Ltd
Original Assignee
DIGITAL TELEVISION TECHNOLOGY CENTER BEIJING PEONY ELECTRONIC GROUP Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by DIGITAL TELEVISION TECHNOLOGY CENTER BEIJING PEONY ELECTRONIC GROUP Co Ltd
Priority to CN201511019552.8A
Publication of CN105678725A
Application granted
Publication of CN105678725B
Legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10081 Computed x-ray tomography [CT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20048 Transform domain processing
    • G06T2207/20056 Discrete and fast Fourier transform [DFT, FFT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30016 Brain

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

An embodiment of the invention discloses an image fusion method. The method obtains the pseudo-polar Fourier transform quantities of two images to be fused and, from them, the Fourier spectrum function of each of the two images; it then obtains a mean spectrum function. From all local minimum points of the mean spectrum function on the domain [-π, π], an empirical wavelet function and its scale-function group are determined. With the empirical wavelet function and its scale-function group, each image is decomposed into one coarse-layer image matrix and a second quantity of detail-layer image matrices, which are fused into one coarse-layer fused image matrix and a second quantity of detail-layer fused image matrices. The coarse fused image matrix and each detail fused image matrix are convolved with the empirical wavelet function and its scale-function group, and the results are added to obtain the image matrix of the fused image. Because the embodiment of the invention layers the different source images to be fused with the same empirical wavelet function and scale-function group, the loss of information in the fused image is reduced.

Description

Image fusion method and device
Technical field
The present invention relates to the field of image processing, and in particular to an image fusion method and device.
Background technology
Image fusion is widely used in both theoretical research and practical applications. Owing to the limitations of imaging devices, no single imaging sensor can present a complete scene; that is, clear imaging of every photographic target cannot be guaranteed. Image fusion technology uses multiple sensors of the same or different types to capture images of the same scene and merges them according to certain rules to provide a single complete image. For example, Fig. 1a is a CT image of a human brain, which can image the bone but not the soft tissue of the brain; Fig. 1b shows an MRI image of a human brain, which can image the soft tissue but not the bone; Fig. 1c is the human-brain CT-MRI image obtained after image fusion, in which both the bone information and the soft-tissue information of the brain are imaged simultaneously in one image.
In the prior art, multi-scale algorithms are the more commonly used class of image fusion algorithms. Such an algorithm first decomposes the source images into several layers to be fused according to their structural components; it then designs different fusion rules for the different layers to obtain the fusion coefficients of each decomposition layer; finally, the inverse of the decomposition yields the fused image matrix, from which the fused image is obtained. The empirical mode decomposition (EMD) method is often used to layer the source images. For each source image, this method performs the layering with the family of intrinsic mode functions (IMFs) corresponding to that image; since the source images differ, the IMF families used also differ. This causes the structural components of corresponding layers of different source images to be inconsistent, so that when corresponding layers are fused with the same fusion rule, image information is lost.
Summary of the invention
Embodiments of the invention disclose an image fusion method and device, in order to reduce the loss of information in the fused image.
In order to achieve the above objective, an embodiment of the invention discloses an image fusion method, the method comprising the steps of:
performing a pseudo-polar Fourier transform on the image matrices of the two images to be fused, correspondingly obtaining two groups of pseudo-polar Fourier transform quantities;
obtaining, from the two groups of pseudo-polar Fourier transform quantities, the Fourier spectrum functions of the corresponding two images to be fused;
averaging the two Fourier spectrum functions to obtain a mean spectrum function;
determining, from all local minimum points of the mean spectrum function on the domain [-π, π], an empirical wavelet function and its scale-function group, wherein the number of scale functions in the scale-function group and the number of minimum points are both a first quantity;
decomposing, with the empirical wavelet function and its scale-function group, the image matrix of each of the two images to be fused into one coarse-layer image matrix and a second quantity of detail-layer image matrices, the second quantity being the first quantity minus one;
fusing the two obtained coarse-layer image matrices into one coarse fused image matrix;
fusing the obtained detail-layer image matrices into a second quantity of detail fused image matrices;
convolving the coarse fused image matrix with the empirical wavelet function, convolving each detail fused image matrix with the corresponding scale function in the scale-function group, and adding the results of all the convolutions to obtain the image matrix of the fused image.
Preferably, obtaining the Fourier spectrum functions of the corresponding two images to be fused from the two groups of pseudo-polar Fourier transform quantities includes:
obtaining the Fourier spectrum function of each image to be fused according to the following formula:
wherein I_i is the identifier of the image to be fused, F(I_i)(|ω|) denotes the Fourier spectrum function of the image I_i to be fused, F_P(I_i)(θ_k, |ω|) denotes the group of pseudo-polar Fourier transform quantities of the image I_i to be fused, θ_k is a preset angle parameter of the pseudo-polar Fourier transform, and N_θ is the number of angle parameters.
Obtaining the mean spectrum function from the Fourier spectrum functions includes:
obtaining the mean spectrum function according to the following formula:
F̄(|ω|) = (1/N_i) Σ_i F(I_i)(|ω|)
wherein I_i is the identifier of the image to be fused, F(I_i)(|ω|) denotes the Fourier spectrum function of the image I_i to be fused, F̄(|ω|) is the mean spectrum function, and N_i is the number of images to be fused.
Preferably, determining the empirical wavelet function and its scale-function group from all local minimum points of the mean spectrum function on the domain [-π, π] includes:
applying an inverse pseudo-polar Fourier transform to the empirical wavelet function determined by the following formula, to compute the empirical wavelet function:
wherein φ_1(p) denotes the empirical wavelet function, F_p(φ_1(p)) denotes the pseudo-polar Fourier transform of the empirical wavelet function φ_1(p), γ is a preset empirical value, and ω_1 denotes the abscissa of the minimum point with the smallest abscissa in the mean spectrum function;
applying an inverse pseudo-polar Fourier transform to the scale-function group determined by the following formula, to compute the scale-function group:
wherein m takes the values 1, ..., L-1, L denotes the total number of minimum points and is numerically equal to the first quantity, ψ_m(p) denotes each scale function in the scale-function group, F_p(ψ_m(p)) denotes the pseudo-polar Fourier transform of the scale function ψ_m(p), ω_m denotes the abscissa of the minimum point with the m-th smallest abscissa in the mean spectrum function, and γ is a preset empirical value;
the function β is determined by the following formula:
Preferably, decomposing, with the empirical wavelet function and its scale-function group, the image matrix of each of the two images to be fused into one coarse-layer image matrix and a second quantity of detail-layer image matrices includes:
determining the coarse-layer image matrix according to the following formula:
W_i^0(p) = F^{-1}( F(I_i)(ω) · conj(F(φ_1)(ω)) )
wherein I_i is the identifier of the image to be fused, F(I_i)(ω) denotes the Fourier transform of image I_i, conj(F(φ_1)(ω)) denotes the conjugate of the Fourier transform of the empirical wavelet function φ_1(p), F^{-1} denotes the inverse Fourier transform, and W_i^0(p) denotes the coarse-layer image matrix of image I_i;
determining the detail-layer image matrices according to the following formula:
W_i^m(p) = F^{-1}( F(I_i)(ω) · conj(F(ψ_m)(ω)) )
wherein I_i is the identifier of the image to be fused, F(I_i)(ω) denotes the Fourier transform of image I_i, conj(F(ψ_m)(ω)) denotes the conjugate of the Fourier transform of the scale function ψ_m(p), F^{-1} denotes the inverse Fourier transform, m takes the values 1, ..., L-1, where L denotes the total number of minimum points and is numerically equal to the first quantity, and W_i^m(p) denotes each detail-layer image matrix of image I_i.
Preferably, fusing the two obtained coarse-layer image matrices into one coarse fused image matrix includes:
calculating the element coefficients of the coarse-layer image matrix W_1^0(p) according to the following formula, where W_1^0(p) is either one of the two obtained coarse-layer image matrices:
wherein r_(x,y) denotes the element coefficient of the element at row x, column y of the coarse-layer image matrix W_1^0(p), S(W_1^0(x,y)) denotes the contrast of the element at row x, column y of W_1^0(p), S(W_2^0(x,y)) denotes the contrast of the element at row x, column y of the coarse-layer image matrix W_2^0(p), where W_2^0(p) denotes the one of the two obtained coarse-layer image matrices other than W_1^0(p), and a is a preset numerical value; the contrast of the element at row x, column y of an arbitrary coarse-layer image matrix is determined by the following formula:
wherein S(W_*^0(x,y)) denotes the contrast of the element at row x, column y of an arbitrary coarse-layer image matrix W_*^0(p), M and N are respectively the numbers of rows and columns of the coarse-layer image matrix, W_*^0(x,y) denotes the value of the element at row x, column y of W_*^0(p), and W_*^0(x',y') denotes the values of all elements of W_*^0(p) other than the element at row x, column y;
calculating the element coefficients of the coarse-layer image matrix W_2^0(p) according to the following formula:
r'_(x,y) = 1 - r_(x,y)
wherein r'_(x,y) denotes the element coefficient of the element at row x, column y of the coarse-layer image matrix W_2^0(p);
multiplying each element of the two coarse-layer image matrices by its corresponding element coefficient and adding the corresponding products, to obtain the coarse fused image matrix.
Fusing the obtained detail-layer image matrices into a second quantity of detail fused image matrices includes:
calculating the modulus of each detail-layer image matrix;
for each detail layer of the two images to be fused, taking the detail-layer image matrix with the larger modulus as the detail fused image matrix for that layer, to obtain a second quantity of detail fused image matrices.
An embodiment of the invention also discloses an image fusion device, the device comprising:
a pseudo-polar Fourier transform module, configured to perform a pseudo-polar Fourier transform on the image matrices of the two images to be fused, correspondingly obtaining two groups of pseudo-polar Fourier transform quantities;
a Fourier spectrum function acquisition module, configured to obtain, from the two groups of pseudo-polar Fourier transform quantities, the Fourier spectrum functions of the corresponding two images to be fused;
a mean spectrum function acquisition module, configured to average the two Fourier spectrum functions to obtain a mean spectrum function;
an empirical wavelet module, configured to determine, from all local minimum points of the mean spectrum function on the domain [-π, π], an empirical wavelet function and its scale-function group, wherein the number of scale functions in the scale-function group and the number of minimum points are both the first quantity;
a layering module, configured to decompose, with the empirical wavelet function and its scale-function group, the image matrix of each of the two images to be fused into one coarse-layer image matrix and a second quantity of detail-layer image matrices, the second quantity being the first quantity minus one;
a coarse fusion module, configured to fuse the two obtained coarse-layer image matrices into one coarse fused image matrix;
a detail fusion module, configured to fuse the obtained detail-layer image matrices into a second quantity of detail fused image matrices;
a fusion module, configured to convolve the coarse fused image matrix with the empirical wavelet function, convolve each detail fused image matrix with the corresponding scale function in the scale-function group, and add the results of all the convolutions to obtain the image matrix of the fused image.
Preferably, the Fourier spectrum function acquisition module is specifically configured to:
obtain the Fourier spectrum function of each image to be fused according to the following formula:
wherein I_i is the identifier of the image to be fused, F(I_i)(|ω|) denotes the Fourier spectrum function of the image I_i to be fused, F_P(I_i)(θ_k, |ω|) denotes the group of pseudo-polar Fourier transform quantities of the image I_i to be fused, θ_k is a preset angle parameter of the pseudo-polar Fourier transform, and N_θ is the number of angle parameters.
The mean spectrum function acquisition module is specifically configured to:
obtain the mean spectrum function according to the following formula:
F̄(|ω|) = (1/N_i) Σ_i F(I_i)(|ω|)
wherein I_i is the identifier of the image to be fused, F(I_i)(|ω|) denotes the Fourier spectrum function of the image I_i to be fused, F̄(|ω|) is the mean spectrum function, and N_i is the number of images to be fused.
Preferably, the empirical wavelet module is specifically configured to:
apply an inverse pseudo-polar Fourier transform to the empirical wavelet function determined by the following formula, to compute the empirical wavelet function:
wherein φ_1(p) denotes the empirical wavelet function, F_p(φ_1(p)) denotes the pseudo-polar Fourier transform of the empirical wavelet function φ_1(p), γ is a preset empirical value, and ω_1 denotes the abscissa of the minimum point with the smallest abscissa in the mean spectrum function;
apply an inverse pseudo-polar Fourier transform to the scale-function group determined by the following formula, to compute the scale-function group:
wherein m takes the values 1, ..., L-1, L denotes the total number of minimum points and is numerically equal to the first quantity, ψ_m(p) denotes each scale function in the scale-function group, F_p(ψ_m(p)) denotes the pseudo-polar Fourier transform of the scale function ψ_m(p), ω_m denotes the abscissa of the minimum point with the m-th smallest abscissa in the mean spectrum function, and γ is a preset empirical value;
the function β is determined by the following formula:
Preferably, the layering module is specifically configured to:
determine the coarse-layer image matrix according to the following formula:
W_i^0(p) = F^{-1}( F(I_i)(ω) · conj(F(φ_1)(ω)) )
wherein I_i is the identifier of the image to be fused, F(I_i)(ω) denotes the Fourier transform of image I_i, conj(F(φ_1)(ω)) denotes the conjugate of the Fourier transform of the empirical wavelet function φ_1(p), F^{-1} denotes the inverse Fourier transform, and W_i^0(p) denotes the coarse-layer image matrix of image I_i;
determine the detail-layer image matrices according to the following formula:
W_i^m(p) = F^{-1}( F(I_i)(ω) · conj(F(ψ_m)(ω)) )
wherein I_i is the identifier of the image to be fused, F(I_i)(ω) denotes the Fourier transform of image I_i, conj(F(ψ_m)(ω)) denotes the conjugate of the Fourier transform of the scale function ψ_m(p), F^{-1} denotes the inverse Fourier transform, m takes the values 1, ..., L-1, where L denotes the total number of minimum points and is numerically equal to the first quantity, and W_i^m(p) denotes each detail-layer image matrix of image I_i.
Preferably, the coarse fusion module is specifically configured to:
calculate the element coefficients of the coarse-layer image matrix W_1^0(p) according to the following formula, where W_1^0(p) is either one of the two obtained coarse-layer image matrices:
wherein r_(x,y) denotes the element coefficient of the element at row x, column y of the coarse-layer image matrix W_1^0(p), S(W_1^0(x,y)) denotes the contrast of the element at row x, column y of W_1^0(p), S(W_2^0(x,y)) denotes the contrast of the element at row x, column y of the coarse-layer image matrix W_2^0(p), where W_2^0(p) denotes the one of the two obtained coarse-layer image matrices other than W_1^0(p), and a is a preset numerical value; the contrast of the element at row x, column y of an arbitrary coarse-layer image matrix is determined by the following formula:
wherein S(W_*^0(x,y)) denotes the contrast of the element at row x, column y of an arbitrary coarse-layer image matrix W_*^0(p), M and N are respectively the numbers of rows and columns of the coarse-layer image matrix, W_*^0(x,y) denotes the value of the element at row x, column y of W_*^0(p), and W_*^0(x',y') denotes the values of all elements of W_*^0(p) other than the element at row x, column y;
calculate the element coefficients of the coarse-layer image matrix W_2^0(p) according to the following formula:
r'_(x,y) = 1 - r_(x,y)
wherein r'_(x,y) denotes the element coefficient of the element at row x, column y of the coarse-layer image matrix W_2^0(p);
multiply each element of the two coarse-layer image matrices by its corresponding element coefficient and add the corresponding products, to obtain the coarse fused image matrix.
The detail fusion module is specifically configured to:
calculate the modulus of each detail-layer image matrix;
for each detail layer of the two images to be fused, take the detail-layer image matrix with the larger modulus as the detail fused image matrix for that layer, to obtain a second quantity of detail fused image matrices.
As can be seen from the above technical solutions, an embodiment of the present invention provides an image fusion method. The method performs a pseudo-polar Fourier transform on the image matrices of the two images to be fused, correspondingly obtaining two groups of pseudo-polar Fourier transform quantities; obtains, from the two groups of pseudo-polar Fourier transform quantities, the Fourier spectrum functions of the corresponding two images to be fused; averages the two Fourier spectrum functions to obtain a mean spectrum function; determines, from all local minimum points of the mean spectrum function on the domain [-π, π], an empirical wavelet function and its scale-function group, wherein the number of scale functions in the scale-function group and the number of minimum points are both a first quantity; decomposes, with the empirical wavelet function and its scale-function group, the image matrix of each of the two images to be fused into one coarse-layer image matrix and a second quantity of detail-layer image matrices, the second quantity being the first quantity minus one; fuses the two obtained coarse-layer image matrices into one coarse fused image matrix; fuses the obtained detail-layer image matrices into a second quantity of detail fused image matrices; and convolves the coarse fused image matrix with the empirical wavelet function, convolves each detail fused image matrix with the corresponding scale function in the scale-function group, and adds the results of all the convolutions to obtain the image matrix of the fused image. Because the embodiment of the present invention layers the different source images to be fused with the same empirical wavelet function and scale-function group, the loss of information in the fused image is reduced.
Description of the drawings
In order to explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1a is a CT image of a human brain;
Fig. 1b is an MRI image of a human brain;
Fig. 1c is the human-brain CT-MRI image obtained after image fusion;
Fig. 2 is a flow diagram of an image fusion method provided by an embodiment of the present invention;
Fig. 3 is a structural diagram of an image fusion device provided by an embodiment of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art on the basis of the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
The present invention is described in detail below by way of specific embodiments.
Fig. 2 is a flow diagram of an image fusion method provided by an embodiment of the present invention. The method is applied to a terminal and may include the following steps:
S201: performing a pseudo-polar Fourier transform on the image matrices of the two images to be fused, correspondingly obtaining two groups of pseudo-polar Fourier transform quantities.
For example, the image matrices of the images to be fused, I_1 and I_2, are each subjected to a pseudo-polar Fourier transform with preset angle parameters, respectively yielding the groups of pseudo-polar Fourier transform quantities F_P(I_1)(θ_k, |ω|) and F_P(I_2)(θ_k, |ω|), where θ_k denotes the preset angle parameters; further, θ_k may take the values θ_1 = -135°, θ_2 = -90°, θ_3 = -45°, θ_4 = 0°, θ_5 = 45°, θ_6 = 90°, θ_7 = 135°, θ_8 = 180°. Performing a pseudo-polar Fourier transform on an image matrix with preset angle parameters is prior art and is not described further here.
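As a rough illustration of the "one 1-D spectrum per preset angle" idea, the following sketch samples the 2-D FFT magnitude of an image along rays at the angles listed above. It is only a stand-in, not the patent's transform: a true pseudo-polar Fourier transform is evaluated on a dedicated concentric-squares frequency grid, and the function and variable names here are hypothetical.

```python
import numpy as np

def directional_spectra(img, thetas, n_radii=None):
    """Sample the centered 2-D FFT magnitude of `img` along rays at
    angles `thetas`, giving one 1-D spectrum per angle. Crude
    nearest-neighbor sampling; illustrative only."""
    H, W = img.shape
    F = np.fft.fftshift(np.fft.fft2(img))
    cy, cx = H // 2, W // 2
    if n_radii is None:
        n_radii = min(cy, cx)
    radii = np.arange(n_radii)
    spectra = np.empty((len(thetas), n_radii))
    for k, th in enumerate(thetas):
        # nearest grid point along the ray at angle th
        ys = np.clip(np.round(cy + radii * np.sin(th)).astype(int), 0, H - 1)
        xs = np.clip(np.round(cx + radii * np.cos(th)).astype(int), 0, W - 1)
        spectra[k] = np.abs(F[ys, xs])
    return spectra

# the eight preset angles from the example above
thetas = np.deg2rad([-135, -90, -45, 0, 45, 90, 135, 180])
img = np.random.default_rng(0).random((64, 64))
S = directional_spectra(img, thetas)  # shape (N_theta, N_radii)
```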
S202: obtaining, from the two groups of pseudo-polar Fourier transform quantities, the Fourier spectrum functions of the corresponding two images to be fused.
For example, for the group of pseudo-polar Fourier transform quantities F_P(I_1)(θ_k, |ω|) of image I_1 obtained in step S201, the Fourier spectrum function of image I_1 can be obtained by the following formula:
wherein F(I_1)(|ω|) denotes the Fourier spectrum function of the image I_1 to be fused, F_P(I_1)(θ_k, |ω|) denotes the group of pseudo-polar Fourier transform quantities of I_1, θ_k is a preset angle parameter of the pseudo-polar Fourier transform, and N_θ is the number of angle parameters; for example, θ_k may take the values θ_1 = -135°, θ_2 = -90°, θ_3 = -45°, θ_4 = 0°, θ_5 = 45°, θ_6 = 90°, θ_7 = 135°, θ_8 = 180°, in which case N_θ = 8. For the image I_2 obtained in step S201, its Fourier spectrum function is obtained in the same way.
S203: obtaining a mean spectrum function from the Fourier spectrum functions.
The Fourier spectrum functions of the two images to be fused are added and then averaged, which gives the mean spectrum function.
The mean spectrum function can be obtained according to the following formula:
F̄(|ω|) = (F(I_1)(|ω|) + F(I_2)(|ω|)) / 2
wherein F(I_1)(|ω|) denotes the Fourier spectrum function of the image I_1 to be fused and F(I_2)(|ω|) denotes the Fourier spectrum function of the image I_2 to be fused.
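A minimal sketch of steps S202 and S203, under the assumption (consistent with N_θ appearing in the variable definitions, but not confirmed by the omitted formula) that each per-image spectrum function is the average of its directional transform quantities over the N_θ angles; the mean spectrum is then the average of the two per-image spectra. All array names are hypothetical.

```python
import numpy as np

# Hypothetical stand-in data: one 1-D directional spectrum per preset
# angle, shape (N_theta, N_radii), for each of the two images.
rng = np.random.default_rng(0)
spectra_1 = rng.random((8, 32))
spectra_2 = rng.random((8, 32))

# S202 (assumed rule): per-image spectrum = mean over the angle axis.
F1 = spectra_1.mean(axis=0)
F2 = spectra_2.mean(axis=0)

# S203: mean spectrum function = average of the two spectra.
F_mean = (F1 + F2) / 2.0
```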
S204: determining, from all local minimum points of the mean spectrum function on the domain [-π, π], an empirical wavelet function and its scale-function group, wherein the number of scale functions in the scale-function group and the number of minimum points are both the first quantity.
An inverse pseudo-polar Fourier transform can be applied to the empirical wavelet function determined by the following formula, to compute the empirical wavelet function:
wherein φ_1(p) denotes the empirical wavelet function, F_p(φ_1(p)) denotes the pseudo-polar Fourier transform of the empirical wavelet function φ_1(p), γ is a preset empirical value, and ω_1 denotes the abscissa of the minimum point with the smallest abscissa in the mean spectrum function.
An inverse pseudo-polar Fourier transform is applied to the scale-function group determined by the following formula, to compute the scale-function group:
wherein m takes the values 1, ..., L-1, L denotes the total number of minimum points and is numerically equal to the first quantity, ψ_m(p) denotes each scale function in the scale-function group, F_p(ψ_m(p)) denotes the pseudo-polar Fourier transform of the scale function ψ_m(p), ω_m denotes the abscissa of the minimum point with the m-th smallest abscissa in the mean spectrum function, and γ is a preset empirical value;
the function β is determined by the following formula:
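The formula for β is not reproduced in the available text. In the standard empirical wavelet transform construction, on which this scheme appears to be modeled, the transition function is commonly chosen as (an assumption here, not confirmed by the patent):

$$\beta(x) = x^{4}\left(35 - 84x + 70x^{2} - 20x^{3}\right),\qquad x \in [0,1],$$

extended by β(x) = 0 for x ≤ 0 and β(x) = 1 for x ≥ 1. This polynomial satisfies β(0) = 0, β(1) = 1, and β(x) + β(1-x) = 1, which is what makes the resulting filters partition the spectrum smoothly around each minimum ω_m with transition half-width γω_m.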
In this way, for the two images to be fused, one empirical wavelet function φ_1(p) and one scale-function group ψ_m(p) are obtained, the scale-function group comprising L-1 scale functions.
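The local minima of the mean spectrum drive the whole construction, so a small helper for finding them is worth sketching. This is a hypothetical illustration (names such as `local_minima` are not from the patent), assuming the mean spectrum is sampled on a uniform grid over [-π, π]:

```python
import numpy as np

def local_minima(y):
    """Indices of strict interior local minima of a 1-D sequence."""
    y = np.asarray(y)
    mask = (y[1:-1] < y[:-2]) & (y[1:-1] < y[2:])
    return np.where(mask)[0] + 1

# Toy mean spectrum with three dips; L such minima give one empirical
# wavelet function plus L-1 scale functions.
omega = np.linspace(-np.pi, np.pi, 7)
spec = np.array([3.0, 1.0, 2.0, 0.5, 2.5, 2.0, 4.0])
idx = local_minima(spec)
L = len(idx)                    # the "first quantity"
minima_abscissae = omega[idx]   # the omega_m values, in abscissa order
```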
S205: decomposing, with the empirical wavelet function and its scale-function group, the image matrix of each of the two images to be fused into one coarse-layer image matrix and a second quantity of detail-layer image matrices.
For example, for the image I_1 to be fused, its coarse-layer image matrix W_1^0(p) is determined according to the following formula:
W_1^0(p) = F^{-1}( F(I_1)(ω) · conj(F(φ_1)(ω)) )
wherein F(I_1)(ω) denotes the Fourier transform of image I_1, conj(F(φ_1)(ω)) denotes the conjugate of the Fourier transform of the empirical wavelet function φ_1(p), F^{-1} denotes the inverse Fourier transform, and W_1^0(p) denotes the coarse-layer image matrix of image I_1.
For the image I_1 to be fused, each of its detail-layer image matrices is determined according to the following formula:
W_1^m(p) = F^{-1}( F(I_1)(ω) · conj(F(ψ_m)(ω)) )
wherein F(I_1)(ω) denotes the Fourier transform of image I_1, conj(F(ψ_m)(ω)) denotes the conjugate of the Fourier transform of the scale function ψ_m(p), F^{-1} denotes the inverse Fourier transform, m takes the values 1, ..., L-1, where L denotes the total number of minimum points and is numerically equal to the first quantity, and W_1^m(p) denotes each detail-layer image matrix of image I_1; that is, as m runs from 1 to L-1, this yields W_1^1(p), ..., W_1^{L-1}(p).
For the image I_2 to be fused, the same method yields its coarse-layer image matrix W_2^0(p) and its detail-layer image matrices W_2^1(p), ..., W_2^{L-1}(p).
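The decomposition of step S205 can be sketched as multiplication by conjugated filter responses in the Fourier domain (equivalent to spatial convolution). The ideal band filters below are hypothetical stand-ins for the empirical wavelet and scale functions, which in the patent are built from the spectrum minima; with a real 0/1 partition of the spectrum the layers sum back to the input exactly.

```python
import numpy as np

def ewt_layers(img, filters):
    """Split `img` into layers by multiplying its FFT with the
    conjugate of each frequency-domain filter response (same shape
    as `img`); filters[0] plays the role of the empirical wavelet
    (low-pass), the rest play the role of the scale functions."""
    F = np.fft.fft2(img)
    return [np.real(np.fft.ifft2(F * np.conj(H))) for H in filters]

img = np.random.default_rng(1).random((32, 32))
# toy filters: ideal low-pass plus the complementary high-pass
fy = np.fft.fftfreq(32)[:, None]
fx = np.fft.fftfreq(32)[None, :]
low = (np.hypot(fy, fx) < 0.1).astype(float)
layers = ewt_layers(img, [low, 1.0 - low])  # coarse layer, detail layer
recon = sum(layers)                          # partition => exact sum
```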
S206:Obtain two rough layer image arrays are permeated a coarse blending image matrix.
The element coefficients of one of the coarse-layer image matrices are calculated according to the following formula, wherein that coarse-layer image matrix may be either of the two obtained coarse-layer image matrices:
wherein r_(x,y) denotes the element coefficient of the element at row x, column y of that coarse-layer image matrix; the two contrast terms denote, respectively, the contrast of the element at row x, column y of that coarse-layer image matrix and the contrast of the element at row x, column y of the other of the two obtained coarse-layer image matrices; and a is a preset value. The contrast of the element at row x, column y of either coarse-layer image matrix is determined by the following formula:
wherein S(W_*^0(x, y)) denotes the contrast of the element at row x, column y of either coarse-layer image matrix W_*^0(p); M and N are respectively the number of rows and columns of the coarse-layer image matrix; W_*^0(x, y) denotes the value of the element at row x, column y of W_*^0(p); and W_*^0(x', y') denotes the value of any element of W_*^0(p) other than the element at row x, column y.
The element coefficients of the other coarse-layer image matrix are calculated according to the following formula:
r'_(x,y) = 1 - r_(x,y)
wherein r'_(x,y) denotes the element coefficient of the element at row x, column y of the other coarse-layer image matrix.
Each element of the two coarse-layer image matrices is multiplied by its corresponding element coefficient, and the results are added element-wise to obtain the coarse fused-image matrix.
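A minimal sketch of this coarse-layer fusion. Both the contrast (taken here as the mean absolute difference of an element against all other elements) and the weight form S1^a / (S1^a + S2^a) are assumptions, since the patent's coefficient and contrast formulas appear only as images; only the structure r' = 1 - r is stated in the text.

```python
import numpy as np

def contrast(W):
    """Assumed contrast S: mean absolute difference of each element
    against all other elements of the same coarse-layer matrix."""
    M, N = W.shape
    # |W[x,y] - W[x',y']| for every pair, summed per element
    diffs = np.abs(W[:, :, None, None] - W[None, None, :, :])
    return diffs.sum(axis=(2, 3)) / (M * N - 1)

def fuse_coarse(W1, W2, a=2.0):
    """Contrast-weighted fusion of the two coarse layers; the second
    weight is 1 - r, as stated in the patent."""
    S1, S2 = contrast(W1), contrast(W2)
    r = S1**a / (S1**a + S2**a + 1e-12)   # assumed form of the weight
    return r * W1 + (1.0 - r) * W2
```

A flat (zero-contrast) layer receives weight zero, so the fused result equals the other layer.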
S207: Fuse the obtained detail-layer image matrices into the second-quantity detail fused-image matrices.
Specifically, the modulus of each detail-layer image matrix may be calculated first; then, for each detail layer of the two images to be fused, the detail-layer image matrix with the larger modulus is taken as the detail fused-image matrix for that layer, yielding the corresponding number of detail fused-image matrices.
For example, for the detail-layer image matrix W_1^1(p) of the image I_1 to be fused and the corresponding detail-layer image matrix W_2^1(p) of the image I_2 to be fused, the moduli |W_1^1(p)| and |W_2^1(p)| are calculated; if |W_1^1(p)| > |W_2^1(p)|, the corresponding detail fused-image matrix is determined as W_1^1(p). Further, each detail fused-image matrix of the images I_1 and I_2 to be fused may be obtained using the following formula:
wherein m takes values from 1 to L-1, L is the number of the minimum points, W_1^m(p) and W_2^m(p) respectively denote the detail-layer image matrices of the images I_1 and I_2 to be fused for the corresponding value of m, and the result is the detail fused-image matrix for the corresponding value of m.
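Under the reading that the modulus of a whole detail-layer matrix decides the winner per band (the patent does not specify which matrix norm; the Frobenius norm is assumed here), the max-modulus rule can be sketched as:

```python
import numpy as np

def fuse_details(details1, details2):
    """Per band, keep whichever detail-layer matrix has the larger
    modulus (Frobenius norm assumed; the patent only says 'modulus')."""
    fused = []
    for D1, D2 in zip(details1, details2):
        fused.append(D1 if np.linalg.norm(D1) > np.linalg.norm(D2) else D2)
    return fused
```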
S208: Perform a convolution operation on the coarse fused-image matrix and the empirical wavelet function, perform a convolution operation on each detail fused-image matrix and the corresponding scaling function in the scaling function group, and add the results of all the convolution operations to obtain the image matrix of the fused image.
For example, the image matrix of the fused image of the images I_1 and I_2 to be fused is obtained according to the following formula:
wherein the operator denotes convolution; F(p) denotes the image matrix of the fused image; the coarse fused-image matrix and, for each value of m, the detail fused-image matrices appear as operands; φ_1(p) is the empirical wavelet function; and, as m varies, each different scaling function is used.
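Assuming circular convolution (consistent with the FFT-based decomposition; the patent's formula appears only as an image), the reconstruction in S208 can be sketched as:

```python
import numpy as np

def reconstruct(coarse, details, phi, psis):
    """Sum of the coarse fused matrix convolved with the empirical
    wavelet function and each detail fused matrix convolved with its
    scaling function. Circular convolution via FFT is assumed."""
    def circ_conv(A, B):
        return np.real(np.fft.ifft2(np.fft.fft2(A) * np.fft.fft2(B)))
    out = circ_conv(coarse, phi)
    for D, psi in zip(details, psis):
        out += circ_conv(D, psi)
    return out
```

Convolving with a unit impulse leaves each layer unchanged, so with impulse kernels the output is simply the sum of the layers.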
For the case where the structural components of the corresponding layers to be fused are inconsistent between different source images, the embodiment of the present invention determines the empirical wavelet function and its scaling function group from the mean spectrum of the images to be fused, and obtains each layer to be fused of the images to be fused from that empirical wavelet function and scaling function group, so that the structural components of the layers to be fused are consistent, reducing the information lost from the fused image.
The image fusion method can be applied to electronic devices having an image processing function, such as CT scanners, MRI scanners, smartphones, smart cameras, and computers.
Fig. 3 is a schematic structural diagram of an image fusion apparatus provided by an embodiment of the present invention. The apparatus may include:
a pseudo-polar Fourier transform module 301, configured to perform a pseudo-polar Fourier transform on the image matrices of two images to be fused respectively, correspondingly obtaining two groups of pseudo-polar Fourier transform quantities;
a Fourier spectrum function acquisition module 302, configured to obtain the Fourier spectrum functions of the corresponding two images to be fused according to the two groups of pseudo-polar Fourier transform quantities;
a mean spectral function acquisition module 303, configured to average the two Fourier spectrum functions to obtain one mean spectral function;
an empirical wavelet module 304, configured to determine an empirical wavelet function and its scaling function group according to all minimum points of the mean spectral function over the domain [-π, π], wherein the number of scaling functions in the scaling function group and the number of the minimum points are both the first quantity;
a layering module 305, configured to decompose the image matrices of the two images to be fused respectively into one coarse-layer image matrix and the second-quantity detail-layer image matrices according to the empirical wavelet function and its scaling function group, the second quantity being the first quantity minus one;
a coarse fusion module 306, configured to fuse the two obtained coarse-layer image matrices into one coarse fused-image matrix;
a detail fusion module 307, configured to fuse the obtained detail-layer image matrices into the second-quantity detail fused-image matrices;
a fusion module 308, configured to perform a convolution operation on the coarse fused-image matrix and the empirical wavelet function, perform a convolution operation on each detail fused-image matrix and the corresponding scaling function in the scaling function group, and add the results of all the convolution operations to obtain the image matrix of the fused image.
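For orientation only, a crude stand-in for module 301's pseudo-polar Fourier transform. A true pseudo-polar FT is computed with fractional FFTs; this sketch merely samples the centered 2-D FFT along rays by nearest-neighbour lookup, to mimic the (θ_k, |ω|) output layout that the later modules consume.

```python
import numpy as np

def pseudo_polar_ft(image, n_theta=16):
    """Rough (theta_k, |w|) sampling of an image spectrum.
    Not a true pseudo-polar transform -- an illustrative stand-in."""
    F = np.fft.fftshift(np.fft.fft2(image))
    n = min(image.shape) // 2                     # radial samples
    cy, cx = image.shape[0] // 2, image.shape[1] // 2
    out = np.empty((n_theta, n), dtype=complex)
    for k, th in enumerate(np.linspace(0, np.pi, n_theta, endpoint=False)):
        r = np.arange(n)
        ys = np.clip(np.round(cy + r * np.sin(th)).astype(int), 0, image.shape[0] - 1)
        xs = np.clip(np.round(cx + r * np.cos(th)).astype(int), 0, image.shape[1] - 1)
        out[k] = F[ys, xs]                        # spectrum along ray theta_k
    return out
```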
Further, the Fourier spectrum function acquisition module 302 is specifically configured to:
obtain the Fourier spectrum function of each image to be fused according to the following formula:
wherein I_i is the identifier of the image to be fused; F(I_i)(|ω|) denotes the Fourier spectrum function of the image I_i to be fused; F_P(I_i)(θ_k, |ω|) denotes the group of pseudo-polar Fourier transform quantities of the image I_i to be fused, wherein θ_k is a preset angle parameter of the pseudo-polar Fourier transform and N_θ is the number of angle parameters.
The mean spectral function acquisition module 303 is specifically configured to:
obtain the mean spectral function according to the following formula:
wherein I_i is the identifier of the image to be fused, F(I_i)(|ω|) denotes the Fourier spectrum function of the image I_i to be fused, F(|ω|) is the mean spectral function, and N_i is the number of images to be fused.
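The two averaging formulas of modules 302 and 303 can be sketched together. Taking the magnitude before averaging over angles is an assumption, since the formulas themselves appear only as images in the original.

```python
import numpy as np

def radial_spectrum(pp):
    """F(I_i)(|w|): average a pseudo-polar transform over its angle
    axis.  `pp` is assumed shaped (N_theta, N_radius)."""
    return np.abs(pp).mean(axis=0)

def mean_spectrum(pps):
    """F(|w|): average the per-image radial spectra over the N_i
    images to be fused."""
    return np.mean([radial_spectrum(p) for p in pps], axis=0)
```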
Further, the empirical wavelet module 304 is specifically configured to:
perform an inverse pseudo-polar Fourier transform on the empirical wavelet function determined by the following formula, to calculate the empirical wavelet function:
wherein φ_1(p) denotes the empirical wavelet function, F_p(φ_1(p)) denotes the pseudo-polar Fourier transform of the empirical wavelet function φ_1(p), γ is a preset empirical value, and ω_1 denotes the abscissa of the minimum point with the smallest abscissa in the mean spectral function;
perform an inverse pseudo-polar Fourier transform on the scaling function group determined by the following formula, to calculate the scaling function group:
wherein m takes the values 1, ..., L-1, L denotes the total number of the minimum points and is numerically equal to the first quantity, ψ_m(p) denotes each scaling function in the scaling function group, F_p(ψ_m(p)) denotes the pseudo-polar Fourier transform of the scaling function ψ_m(p), ω_m denotes the abscissa of the minimum point with the m-th smallest abscissa in the mean spectral function, and γ is a preset empirical value;
the function β is determined by the following formula:
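The patent's β and filter formulas appear only as images. This sketch assumes the standard empirical-wavelet-transform choices (Gilles' transition polynomial, and a cosine-tapered low-pass filter around ω_1 with transition width controlled by γ, the patent's "empirical wavelet function" playing the low-pass role); treat it as an illustration of the filter shape, not the patent's exact formula.

```python
import numpy as np

def beta(x):
    """Standard EWT transition polynomial: beta(0)=0, beta(1)=1,
    smooth in between (assumed, not taken from the patent image)."""
    return x**4 * (35 - 84*x + 70*x**2 - 20*x**3)

def lowpass(w, w1, gamma):
    """1-D low-pass filter: 1 inside (1-gamma)*w1, cosine taper on the
    transition band, 0 beyond (1+gamma)*w1."""
    w = np.abs(w)
    out = np.zeros_like(w)
    out[w <= (1 - gamma) * w1] = 1.0
    trans = (w > (1 - gamma) * w1) & (w <= (1 + gamma) * w1)
    out[trans] = np.cos(0.5 * np.pi *
                        beta((w[trans] - (1 - gamma) * w1) / (2 * gamma * w1)))
    return out
```

The β polynomial makes the filter continuous at both band edges: the taper starts at exactly 1 and ends at exactly 0.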
Further, the layering module 305 is specifically configured to:
determine the coarse-layer image matrix according to the following formula:
wherein I_i is the identifier of the image to be fused, F(I_i)(ω) denotes the Fourier transform of the image I_i, the conjugate of the Fourier transform of the empirical wavelet function φ_1(p) appears as the other factor, F^(-1) denotes the inverse Fourier transform, and W_i^0(p) denotes the coarse-layer image matrix of the image I_i;
determine the detail-layer image matrices according to the following formula:
wherein I_i is the identifier of the image to be fused, F(I_i)(ω) denotes the Fourier transform of the image I_i, the conjugate of the Fourier transform of the scaling function ψ_m(p) appears as the other factor, F^(-1) denotes the inverse Fourier transform, m takes the values 1, ..., L-1, where L denotes the total number of the minimum points and is numerically equal to the first quantity, and W_i^m(p) denotes each detail-layer image matrix of the image I_i.
Further, the coarse fusion module 306 is specifically configured to:
calculate the element coefficients of one of the coarse-layer image matrices according to the following formula, wherein that coarse-layer image matrix may be either of the two obtained coarse-layer image matrices:
wherein r_(x,y) denotes the element coefficient of the element at row x, column y of that coarse-layer image matrix; the two contrast terms denote, respectively, the contrast of the element at row x, column y of that coarse-layer image matrix and the contrast of the element at row x, column y of the other of the two obtained coarse-layer image matrices; and a is a preset value, wherein the contrast of the element at row x, column y of either coarse-layer image matrix is determined by the following formula:
wherein S(W_*^0(x, y)) denotes the contrast of the element at row x, column y of either coarse-layer image matrix W_*^0(p); M and N are respectively the number of rows and columns of the coarse-layer image matrix; W_*^0(x, y) denotes the value of the element at row x, column y of W_*^0(p); and W_*^0(x', y') denotes the value of any element of W_*^0(p) other than the element at row x, column y;
calculate the element coefficients of the other coarse-layer image matrix according to the following formula:
r'_(x,y) = 1 - r_(x,y)
wherein r'_(x,y) denotes the element coefficient of the element at row x, column y of the other coarse-layer image matrix;
multiply each element of the two coarse-layer image matrices by its corresponding element coefficient and add the results element-wise to obtain the coarse fused-image matrix.
The detail fusion module 307 is specifically configured to:
calculate the modulus of each detail-layer image matrix;
for each detail layer of the two images to be fused, take the detail-layer image matrix with the larger modulus as the detail fused-image matrix for that layer, obtaining the second-quantity detail fused-image matrices.
The image fusion apparatus can be applied to electronic devices having an image processing function, such as CT scanners, MRI scanners, smartphones, smart cameras, and computers.
An embodiment of the present invention provides an image fusion method and apparatus. The method performs a pseudo-polar Fourier transform on the image matrices of two images to be fused respectively, correspondingly obtaining two groups of pseudo-polar Fourier transform quantities; obtains the Fourier spectrum functions of the corresponding two images to be fused according to the two groups of pseudo-polar Fourier transform quantities; averages the two Fourier spectrum functions to obtain one mean spectral function; determines an empirical wavelet function and its scaling function group according to all minimum points of the mean spectral function over the domain [-π, π], wherein the number of scaling functions in the scaling function group and the number of the minimum points are both the first quantity; decomposes the image matrices of the two images to be fused respectively into one coarse-layer image matrix and the second-quantity detail-layer image matrices according to the empirical wavelet function and its scaling function group, the second quantity being the first quantity minus one; fuses the two obtained coarse-layer image matrices into one coarse fused-image matrix; fuses the obtained detail-layer image matrices into detail fused-image matrices; and performs a convolution operation on the coarse fused-image matrix and the empirical wavelet function, performs a convolution operation on each detail fused-image matrix and the corresponding scaling function in the scaling function group, and adds the results of all the convolution operations to obtain the image matrix of the fused image. Since in the embodiment of the present invention the different source images to be fused are layered using the same empirical wavelet function and scaling function group, the information lost from the fused image is reduced.
For the apparatus embodiment, since it is substantially similar to the method embodiment, its description is relatively brief; for relevant points, refer to the corresponding description of the method embodiment.
It should be noted that, herein, relational terms such as first and second are used merely to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "including a ..." does not exclude the presence of other identical elements in the process, method, article, or device that includes that element.
Those of ordinary skill in the art will appreciate that all or part of the steps of the above method embodiments can be completed by a program instructing the relevant hardware, and the program can be stored in a computer-readable storage medium, such as a ROM/RAM, a magnetic disk, or an optical disc.
The foregoing is merely a preferred embodiment of the present invention and is not intended to limit the protection scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.

Claims (8)

1. An image fusion method, characterized in that the method comprises the steps of:
performing a pseudo-polar Fourier transform on the image matrices of two images to be fused respectively, correspondingly obtaining two groups of pseudo-polar Fourier transform quantities;
obtaining the Fourier spectrum functions of the corresponding two images to be fused according to the two groups of pseudo-polar Fourier transform quantities; averaging the two Fourier spectrum functions to obtain one mean spectral function;
determining an empirical wavelet function and its scaling function group according to all minimum points of the mean spectral function over the domain [-π, π], wherein the number of scaling functions in the scaling function group and the number of the minimum points are both a first quantity;
decomposing the image matrices of the two images to be fused respectively into one coarse-layer image matrix and a second quantity of detail-layer image matrices according to the empirical wavelet function and its scaling function group, the second quantity being the first quantity minus one;
fusing the two obtained coarse-layer image matrices into one coarse fused-image matrix, including:
calculating the element coefficients of one of the coarse-layer image matrices according to the following formula, wherein that coarse-layer image matrix may be either of the two obtained coarse-layer image matrices:
wherein r_(x,y) denotes the element coefficient of the element at row x, column y of that coarse-layer image matrix; the two contrast terms denote, respectively, the contrast of the element at row x, column y of that coarse-layer image matrix and the contrast of the element at row x, column y of the other of the two obtained coarse-layer image matrices; and a is a preset value, wherein the contrast of the element at row x, column y of either coarse-layer image matrix is determined by the following formula:
wherein S(W_*^0(x, y)) denotes the contrast of the element at row x, column y of either coarse-layer image matrix W_*^0(p); M and N are respectively the number of rows and columns of the coarse-layer image matrix; W_*^0(x, y) denotes the value of the element at row x, column y of W_*^0(p); and W_*^0(x', y') denotes the value of any element of W_*^0(p) other than the element at row x, column y;
calculating the element coefficients of the other coarse-layer image matrix according to the following formula:
r'_(x,y) = 1 - r_(x,y)
wherein r'_(x,y) denotes the element coefficient of the element at row x, column y of the other coarse-layer image matrix;
multiplying each element of the two coarse-layer image matrices by its corresponding element coefficient and adding the results element-wise to obtain the coarse fused-image matrix;
fusing the obtained detail-layer image matrices into the second-quantity detail fused-image matrices, including:
calculating the modulus of each detail-layer image matrix;
for each detail layer of the two images to be fused, taking the detail-layer image matrix with the larger modulus as the detail fused-image matrix for that layer, obtaining the second-quantity detail fused-image matrices;
performing a convolution operation on the coarse fused-image matrix and the empirical wavelet function, performing a convolution operation on each detail fused-image matrix and the corresponding scaling function in the scaling function group, and adding the results of all the convolution operations to obtain the image matrix of the fused image.
2. The method according to claim 1, characterized in that obtaining the Fourier spectrum functions of the corresponding two images to be fused according to the two groups of pseudo-polar Fourier transform quantities comprises:
obtaining the Fourier spectrum function of each image to be fused according to the following formula:
wherein I_i is the identifier of the image to be fused; F(I_i)(|ω|) denotes the Fourier spectrum function of the image I_i to be fused; F_P(I_i)(θ_k, |ω|) denotes the group of pseudo-polar Fourier transform quantities of the image I_i to be fused, wherein θ_k is a preset angle parameter of the pseudo-polar Fourier transform and N_θ is the number of angle parameters;
and that obtaining the mean spectral function from the Fourier spectrum functions comprises:
obtaining the mean spectral function according to the following formula:
wherein I_i is the identifier of the image to be fused, F(I_i)(|ω|) denotes the Fourier spectrum function of the image I_i to be fused, F(|ω|) is the mean spectral function, and N_i is the number of images to be fused.
3. The method according to claim 1, characterized in that determining the empirical wavelet function and its scaling function group according to all minimum points of the mean spectral function over the domain [-π, π] comprises:
performing an inverse pseudo-polar Fourier transform on the empirical wavelet function determined by the following formula, to calculate the empirical wavelet function:
wherein φ_1(p) denotes the empirical wavelet function, F_p(φ_1(p)) denotes the pseudo-polar Fourier transform of the empirical wavelet function φ_1(p), γ is a preset empirical value, and ω_1 denotes the abscissa of the minimum point with the smallest abscissa in the mean spectral function;
performing an inverse pseudo-polar Fourier transform on the scaling function group determined by the following formula, to calculate the scaling function group:
wherein m takes the values 1, ..., L-1, L denotes the total number of the minimum points and is numerically equal to the first quantity, ψ_m(p) denotes each scaling function in the scaling function group, F_p(ψ_m(p)) denotes the pseudo-polar Fourier transform of the scaling function ψ_m(p), ω_m denotes the abscissa of the minimum point with the m-th smallest abscissa in the mean spectral function, and γ is a preset empirical value;
wherein the function β is determined by the following formula:
4. The method according to claim 1, characterized in that decomposing the image matrices of the two images to be fused respectively into one coarse-layer image matrix and the second quantity of detail-layer image matrices according to the empirical wavelet function and its scaling function group comprises:
determining the coarse-layer image matrix according to the following formula:
wherein I_i is the identifier of the image to be fused, F(I_i)(ω) denotes the Fourier transform of the image I_i, the conjugate of the Fourier transform of the empirical wavelet function φ_1(p) appears as the other factor, F^(-1) denotes the inverse Fourier transform, and W_i^0(p) denotes the coarse-layer image matrix of the image I_i;
determining the detail-layer image matrices according to the following formula:
wherein I_i is the identifier of the image to be fused, F(I_i)(ω) denotes the Fourier transform of the image I_i, the conjugate of the Fourier transform of the scaling function ψ_m(p) appears as the other factor, F^(-1) denotes the inverse Fourier transform, m takes the values 1, ..., L-1, where L denotes the total number of the minimum points and is numerically equal to the first quantity, and W_i^m(p) denotes each detail-layer image matrix of the image I_i.
5. An image fusion apparatus, characterized in that the apparatus comprises:
a pseudo-polar Fourier transform module, configured to perform a pseudo-polar Fourier transform on the image matrices of two images to be fused respectively, correspondingly obtaining two groups of pseudo-polar Fourier transform quantities;
a Fourier spectrum function acquisition module, configured to obtain the Fourier spectrum functions of the corresponding two images to be fused according to the two groups of pseudo-polar Fourier transform quantities; a mean spectral function acquisition module, configured to average the two Fourier spectrum functions to obtain one mean spectral function;
an empirical wavelet module, configured to determine an empirical wavelet function and its scaling function group according to all minimum points of the mean spectral function over the domain [-π, π], wherein the number of scaling functions in the scaling function group and the number of the minimum points are both a first quantity;
a layering module, configured to decompose the image matrices of the two images to be fused respectively into one coarse-layer image matrix and a second quantity of detail-layer image matrices according to the empirical wavelet function and its scaling function group, the second quantity being the first quantity minus one;
a coarse fusion module, configured to fuse the two obtained coarse-layer image matrices into one coarse fused-image matrix, and specifically configured to:
calculate the element coefficients of one of the coarse-layer image matrices according to the following formula, wherein that coarse-layer image matrix may be either of the two obtained coarse-layer image matrices:
wherein r_(x,y) denotes the element coefficient of the element at row x, column y of that coarse-layer image matrix; the two contrast terms denote, respectively, the contrast of the element at row x, column y of that coarse-layer image matrix and the contrast of the element at row x, column y of the other of the two obtained coarse-layer image matrices; and a is a preset value, wherein the contrast of the element at row x, column y of either coarse-layer image matrix is determined by the following formula:
wherein S(W_*^0(x, y)) denotes the contrast of the element at row x, column y of either coarse-layer image matrix W_*^0(p); M and N are respectively the number of rows and columns of the coarse-layer image matrix; W_*^0(x, y) denotes the value of the element at row x, column y of W_*^0(p); and W_*^0(x', y') denotes the value of any element of W_*^0(p) other than the element at row x, column y;
calculate the element coefficients of the other coarse-layer image matrix according to the following formula:
r'_(x,y) = 1 - r_(x,y)
wherein r'_(x,y) denotes the element coefficient of the element at row x, column y of the other coarse-layer image matrix;
multiply each element of the two coarse-layer image matrices by its corresponding element coefficient and add the results element-wise to obtain the coarse fused-image matrix;
a detail fusion module, configured to fuse the obtained detail-layer image matrices into the second-quantity detail fused-image matrices, and specifically configured to:
calculate the modulus of each detail-layer image matrix;
for each detail layer of the two images to be fused, take the detail-layer image matrix with the larger modulus as the detail fused-image matrix for that layer, obtaining the second-quantity detail fused-image matrices;
a fusion module, configured to perform a convolution operation on the coarse fused-image matrix and the empirical wavelet function, perform a convolution operation on each detail fused-image matrix and the corresponding scaling function in the scaling function group, and add the results of all the convolution operations to obtain the image matrix of the fused image.
6. The apparatus according to claim 5, characterized in that the Fourier spectrum function acquisition module is specifically configured to:
obtain the Fourier spectrum function of each image to be fused according to the following formula:
wherein I_i is the identifier of the image to be fused; F(I_i)(|ω|) denotes the Fourier spectrum function of the image I_i to be fused; F_P(I_i)(θ_k, |ω|) denotes the group of pseudo-polar Fourier transform quantities of the image I_i to be fused, wherein θ_k is a preset angle parameter of the pseudo-polar Fourier transform and N_θ is the number of angle parameters;
and that the mean spectral function acquisition module is specifically configured to:
obtain the mean spectral function according to the following formula:
wherein I_i is the identifier of the image to be fused, F(I_i)(|ω|) denotes the Fourier spectrum function of the image I_i to be fused, F(|ω|) is the mean spectral function, and N_i is the number of images to be fused.
7. The apparatus according to claim 5, characterized in that the empirical wavelet module is specifically configured to:
perform an inverse pseudo-polar Fourier transform on the empirical wavelet function determined by the following formula, to calculate the empirical wavelet function:
wherein φ_1(p) denotes the empirical wavelet function, F_p(φ_1(p)) denotes the pseudo-polar Fourier transform of the empirical wavelet function φ_1(p), γ is a preset empirical value, and ω_1 denotes the abscissa of the minimum point with the smallest abscissa in the mean spectral function;
perform an inverse pseudo-polar Fourier transform on the scaling function group determined by the following formula, to calculate the scaling function group:
wherein m takes the values 1, ..., L-1, L denotes the total number of the minimum points and is numerically equal to the first quantity, ψ_m(p) denotes each scaling function in the scaling function group, F_p(ψ_m(p)) denotes the pseudo-polar Fourier transform of the scaling function ψ_m(p), ω_m denotes the abscissa of the minimum point with the m-th smallest abscissa in the mean spectral function, and γ is a preset empirical value;
wherein the function β is determined by the following formula:
8. The apparatus according to claim 5, characterized in that the layering module is specifically configured to:
determine the coarse-layer image matrix according to the following formula:
wherein I_i is the identifier of the image to be fused, F(I_i)(ω) denotes the Fourier transform of the image I_i, the conjugate of the Fourier transform of the empirical wavelet function φ_1(p) appears as the other factor, F^(-1) denotes the inverse Fourier transform, and W_i^0(p) denotes the coarse-layer image matrix of the image I_i;
determine the detail-layer image matrices according to the following formula:
wherein I_i is the identifier of the image to be fused, F(I_i)(ω) denotes the Fourier transform of the image I_i, the conjugate of the Fourier transform of the scaling function ψ_m(p) appears as the other factor, F^(-1) denotes the inverse Fourier transform, m takes the values 1, ..., L-1, where L denotes the total number of the minimum points and is numerically equal to the first quantity, and W_i^m(p) denotes each detail-layer image matrix of the image I_i.
CN201511019552.8A 2015-12-29 2015-12-29 A kind of image interfusion method and device Active CN105678725B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201511019552.8A CN105678725B (en) 2015-12-29 2015-12-29 A kind of image interfusion method and device


Publications (2)

Publication Number Publication Date
CN105678725A CN105678725A (en) 2016-06-15
CN105678725B true CN105678725B (en) 2018-09-18

Family

ID=56298012

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201511019552.8A Active CN105678725B (en) 2015-12-29 2015-12-29 A kind of image interfusion method and device

Country Status (1)

Country Link
CN (1) CN105678725B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106600657A (en) * 2016-12-16 2017-04-26 重庆邮电大学 Adaptive contourlet transformation-based image compression method
CN109600584A (en) * 2018-12-11 2019-04-09 中联重科股份有限公司 Method and device for observing tower crane, tower crane and machine readable storage medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102982523A (en) * 2012-12-25 2013-03-20 中国科学院长春光学精密机械与物理研究所 Multisource and multi-focus color image fusion method


Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
2D Empirical Transforms: Wavelets, Ridgelets, and Curvelets Revisited; Gilles, J. et al.; SIAM Journal on Imaging Sciences; 2014-01-23; vol. 7, no. 1; p. 3 last paragraph to p. 5 third-from-last paragraph, p. 12 last paragraph to p. 14 *
Empirical Wavelet Transform; Gilles, J.; IEEE Transactions on Signal Processing; 2013-08-15; vol. 61, no. 16; entire document *
Multispectral and Panchromatic Image Fusion using Empirical Wavelet Transform; S. Moushmi et al.; Indian Journal of Science and Technology; 2015-09-30; vol. 8, no. 24; abstract *
Pseudopolar-Based Estimation of Large Translations, Rotations, and Scalings in Images; Yosi Keller et al.; IEEE Transactions on Image Processing; 2005-01-31; vol. 14, no. 1; entire document *
Research on mechanical fault diagnosis methods based on the empirical wavelet transform; Li Zhinong et al.; Chinese Journal of Scientific Instrument; 2014-11-30; vol. 35, no. 11; entire document *
Correlation analysis and performance evaluation of quality assessment metrics for fused images; Zhang Xiaoli et al.; Acta Automatica Sinica; 2014-02-28; vol. 40, no. 2; entire document *



Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant