CN116342444B - Dual-channel multi-mode image fusion method and electronic equipment - Google Patents

Dual-channel multi-mode image fusion method and electronic equipment

Info

Publication number
CN116342444B
CN116342444B (application CN202310123425.0A)
Authority
CN
China
Prior art keywords
image
fusion
channel
energy
representing
Prior art date
Legal status
Active
Application number
CN202310123425.0A
Other languages
Chinese (zh)
Other versions
CN116342444A (en)
Inventor
刘慧
朱积成
王欣雨
郭强
张永霞
Current Assignee
Shandong University of Finance and Economics
Original Assignee
Shandong University of Finance and Economics
Priority date
Filing date
Publication date
Application filed by Shandong University of Finance and Economics filed Critical Shandong University of Finance and Economics
Priority to CN202310123425.0A
Publication of CN116342444A
Application granted
Publication of CN116342444B


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10 Complex mathematical operations
    • G06F17/16 Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Computational Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Pure & Applied Mathematics (AREA)
  • Mathematical Optimization (AREA)
  • Mathematical Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Quality & Reliability (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Medical Informatics (AREA)
  • Algebra (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a dual-channel multi-mode image fusion method and a fusion imaging terminal, relating to the technical field of medical imaging. A source image is decomposed into a structural channel and an energy channel through a joint bilateral filter (JBF) transform; a local gradient energy operator fuses the small-edge and small-scale detail information of the structural channel, such as tissue fibers, while a local entropy detail-enhancement operator, a PCNN and an NSCT with phase consistency fuse the edge intensity, texture characteristics and gray-level variation of organs in the energy channel; the fused image is then obtained through the inverse JBF transform. On the basis of preserving edges and smoothing noise, the invention strengthens detail information and improves the similarity of the fused image to the multi-mode medical images. The structural channel adopts an improved local gradient energy operator, further improving the expression of detail information in the fused image.

Description

Dual-channel multi-mode image fusion method and electronic equipment
Technical Field
The invention relates to the technical field of medical imaging, in particular to a dual-channel multi-mode image fusion method and a fusion imaging terminal.
Background
With the application and development of sensor technology and computer technology, medical imaging plays an increasingly important role in modern medical diagnosis and treatment. Owing to imaging mechanisms and technical limitations, images acquired by a single sensor can only reflect local characteristics of a lesion. To observe all characteristics of a lesion in one image, the useful information of each target-modality medical image must be extracted and the complementary information of multiple source medical images fused, so that the fused image provides a more comprehensive and reliable description of the lesion and helps doctors make a more accurate and complete diagnosis.
In the prior art, image fusion techniques are widely studied in the medical field, and many scholars have proposed a large number of image fusion algorithms, which can be roughly divided into spatial-domain techniques and frequency-domain techniques. Spatial-domain techniques perform the fusion operation directly at the pixel level or in the color space of the source images; currently common methods include the pixel-maximum method, pixel-weighted averaging, principal component analysis (PCA) and the Brovey transform. Spatial-domain techniques effectively preserve the spatial information of medical images, but fusion may suffer from loss of image detail, reduced contrast, partial loss of spectral information and spectral degradation.
The introduction of frequency-domain techniques significantly alleviates the above problems; currently common frequency-domain techniques include pyramid transforms, wavelet transforms and multi-scale transforms (MST). Among them, MST-related work has made breakthrough progress in recent years and comprises three steps: multi-scale decomposition (MSD), selection of high- and low-frequency coefficients under a specific rule, and inverse MSD reconstruction. As a representative of multi-scale geometric analysis, the non-subsampled contourlet transform (NSCT) introduces the idea of non-subsampling on the basis of the conventional contourlet transform (CT), overcoming the directional aliasing and pseudo-Gibbs phenomena of the latter. However, NSCT, as a frequency-domain technique, lacks an expression of spatial neighborhood information such as inter-pixel similarity and depth distance, which limits its ability to preserve edges and to smooth noise.
Meanwhile, with the development of bilateral filtering theory, the joint bilateral filter (JBF) is being widely applied to medical image fusion as a novel signal-processing means. Unlike the fusion rules of conventional linear filters, the JBF is a nonlinear filter: taking the Euclidean distance between pixels as a weight, it computes a combination of spatial and range weights and thereby effectively extracts the structural characteristics between pixels, overcoming the global blurring and unsatisfactory edge structure that arise when conventional averaging or low-pass filtering is used for base-detail separation. However, the limited number of decomposition levels and directions still leaves the fused image deficient in the degree to which structural information and details are decomposed, which restricts further application. Improving the multi-feature expression and texture quality of each modality image therefore remains a significant challenge.
Disclosure of Invention
The dual-channel multi-mode image fusion method provided by the invention not only overcomes global blurring and unsatisfactory edge structure, but also guarantees the required degree of decomposition of structural information and details in the fused image, improving the multi-feature expression and texture quality of each modality image and meeting practical requirements.
The method comprises the following steps: step 1, decomposing a source image into a structural channel and an energy channel through the JBF transform;
step 2, fusing the small-edge and small-scale detail information of the structural channel, such as tissue fibers, with a local gradient energy operator, and fusing the edge intensity, texture characteristics and gray-level variation of organs in the energy channel with a local entropy detail-enhancement operator, a PCNN and an NSCT with phase consistency;
step 3, obtaining the fused image through the inverse JBF transform.
It should be further noted that step 1 further includes: globally blurring the input image I, i.e.

R_m = G_m * I (11)

where R_m denotes the smoothed result under standard deviation σ, and G_m denotes a Gaussian filter with variance σ²; the Gaussian filter G_m at (x, y) is defined as

G_m(x, y) = (1 / 2πσ²) · exp(−(x² + y²) / 2σ²) (12)

A global blurred image G is generated with a weighted-average Gaussian filter, as in equation (13), where I denotes the input image, N(i) denotes the set of pixels adjacent to pixel i, σ² denotes the variance of the pixel values, and Z_j denotes the normalization operation of equation (14).

The JBF is employed to recover the large-scale structure of the energy channel, as in equation (15), where g_s denotes an intensity-range function based on the intensity difference between pixels, g_d denotes a spatial-distance function based on the pixel distance, Z_j denotes the normalization operation of equation (16), and σ_s, σ_r denote the spatial weight and the range weight controlling the bilateral filter, respectively.

The energy channels E_I(x, y) of the source images A and B are thereby obtained, and the structural channel S_I(x, y) is obtained by equation (19):

S_I(x, y) = I(x, y) − E_I(x, y) (19)
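To make the two-channel decomposition concrete, the following Python sketch illustrates equations (11)-(19): a Gaussian pre-blur supplies the guidance image, a joint bilateral filter recovers the energy channel, and the structural channel is the residual. It is an illustrative reconstruction rather than the patented implementation; the defaults σ_s = 3 and σ_r = 0.05 follow the parameter study reported later in the description (and assume images normalized to [0, 1]), while the function name and window radius are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def jbf_decompose(img, sigma_s=3.0, sigma_r=0.05, radius=4):
    """Split a float image in [0, 1] into (structure, energy) channels.

    A Gaussian pre-blur gives the guidance image (Eq. (11) style); the
    joint bilateral filter then recovers the large-scale energy channel,
    and the structural channel is the residual S = I - E (Eq. (19))."""
    guide = gaussian_filter(img, sigma_s)      # global blur R_m = G_m * I
    h, w = img.shape
    pad = radius
    gp = np.pad(guide, pad, mode='reflect')
    ip = np.pad(img, pad, mode='reflect')
    num = np.zeros_like(img)
    den = np.zeros_like(img)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            g_shift = gp[pad + dy: pad + dy + h, pad + dx: pad + dx + w]
            i_shift = ip[pad + dy: pad + dy + h, pad + dx: pad + dx + w]
            w_d = np.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2))      # spatial weight g_d
            w_s = np.exp(-(g_shift - guide) ** 2 / (2 * sigma_r ** 2))   # range weight g_s
            num += w_d * w_s * i_shift
            den += w_d * w_s
    energy = num / den                         # energy channel E_I(x, y)
    structure = img - energy                   # structural channel S_I(x, y)
    return structure, energy
```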
It should be further noted that step 2 further includes: constructing a local gradient energy operator, i.e.

LGE(x, y) = NE_1(x, y) · ST(x, y) (20)

where ST(x, y) denotes the structure-tensor saliency image generated by the STS, and NE_1(x, y) denotes the local energy of the image at (x, y), given by equation (21); the neighborhood at (x, y) has size (2N+1) × (2N+1), with N = 4.

By comparing the magnitudes of the local gradient energy between the source images, a decision matrix S_map(x, y) is defined as in equation (22). The decision matrix for structural channel fusion is then updated to S'_map(x, y) as in equation (23), where Ω_1 denotes a local area of size T × T centered on (x, y), with T = 21.

The fused structural channel S_F(x, y) is obtained according to equation (24), where S_A(x, y) and S_B(x, y) are the structural channels of the source images A and B, respectively.
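As an illustration of the rule in equations (20)-(24), the following sketch fuses two structural channels by comparing local gradient energy. The structure-tensor saliency image ST is taken as an input (see the structure-tensor passage in the detailed description), and the consistency update over the T × T window Ω_1 is simplified to a majority vote, which is an assumption; N = 4 and T = 21 follow the text.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lge(channel, st_saliency, N=4):
    """Local gradient energy, Eq. (20): LGE = NE_1 * ST.

    NE_1 is computed as the windowed mean of squared values over a
    (2N+1) x (2N+1) neighbourhood; this is a constant factor off the
    windowed sum, which does not change the A-vs-B comparison."""
    ne1 = uniform_filter(channel ** 2, size=2 * N + 1)
    return ne1 * st_saliency

def fuse_structure(sa, sb, st_a, st_b, T=21):
    """Per-pixel winner-takes-all on LGE, then a T x T majority vote
    standing in for the patent's consistency update of S_map."""
    smap = (lge(sa, st_a) >= lge(sb, st_b)).astype(float)
    smap = uniform_filter(smap, size=T) > 0.5     # consistency over Omega_1
    return np.where(smap, sa, sb)
```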
It should be further noted that step 2 further includes: configuring the energy-channel high-frequency sub-band fusion rule.

Specifically, to describe the details of the energy-channel high-frequency sub-bands, the local entropy of the image centered on (x, y) is defined as in equation (25), where S denotes a window of size (2N+1) × (2N+1) centered on (x, y).

The gray-level change rate at (x, y) is calculated from the spatial frequency to reflect its detail features, as in equation (26), where h and w denote the height and width of the source image, and CF and RF denote the first-order differences in the x and y directions:

CF(x, y) = f(x, y) − f(x−1, y) (27)
RF(x, y) = f(x, y) − f(x, y−1) (28)

The gradient magnitude of the edge pixels at (x, y) is calculated from the edge density, defined as in equation (29), where s_x and s_y denote the results of convolution with the Sobel operator in the x and y directions, respectively:

s_x = T * h_x (30)
s_y = T * h_y (31)

Here T denotes the image at pixel (x, y), and h_x, h_y denote the Sobel operators in the x and y directions, given by equation (32).

The energy-channel high-frequency sub-bands are fused through the high-frequency comprehensive measurement operator HM, in which the parameters α_1, β_1, γ_1 adjust the weights of the local entropy, the spatial frequency and the edge density, respectively.

By comparing the magnitudes of HM between the energy-channel high-frequency sub-bands, the decision matrix E_Hmap(x, y) of the high-frequency sub-band fusion is obtained as in equation (34). The fused images of the layer-1 to layer-4 high-frequency sub-bands are then obtained according to equation (35), whose terms denote the layer-1 to layer-4 energy-channel high-frequency sub-bands of the source images A and B, respectively.
In the method, the PCNN is adopted to fuse the layer-5 high-frequency sub-band, and the fused energy-channel high-frequency sub-band is obtained by counting the PCNN excitation times, as in equation (37), whose terms denote the layer-5 energy-channel high-frequency sub-bands of the source images A and B and the corresponding PCNN excitation counts, the latter given by

T_ij(n) = T_ij(n−1) + P_ij(n) (38)

where P_ij(n) denotes the output model of the PCNN.
In the method, to obtain the output model of the PCNN, the feeding input and the linking input of the neuron at (x, y) are defined as

D_ij(n) = I_ij (39)

and equation (40), where the parameter V_L denotes the amplitude of the linking input and W_ijop denotes the previous excitation state of the eight-neighborhood neurons, as in equation (41).

Next, the decay of the previous value of the internal activity term U_ij(n) is calculated with the exponential decay coefficient η_f, and D_ij(n) and C_ij(n) are nonlinearly modulated through the linking strength β to obtain the current internal activity term, defined as in equation (42). At the same time, the current dynamic threshold is iteratively updated as in equation (43), where η_e and V_E denote the exponential decay coefficient and the amplitude of E_ij(n), respectively.

The state of the PCNN output model P_ij(n) is determined by comparing the current internal activity term U_ij(n) with the dynamic threshold E_ij(n−1) of the (n−1)-th iteration, as defined in equation (44).

The fusion result of the layer-5 high-frequency sub-band is obtained according to equations (37) and (44), and the fused energy-channel high-frequency sub-band is obtained according to equation (45).
It should be further noted that the method further includes configuring the energy-channel low-frequency sub-band fusion rule.

Specifically, the PC value at (x, y) is defined as in equation (46), where θ_k denotes the direction angle at index k; A_{n,θ_k} denotes the amplitude of the n-th Fourier component at angle θ_k; and ω denotes a parameter for removing the phase component of the image signal, given by equation (47). The convolution result of the image pixel at (x, y) is given by equation (48), where I_L(x, y) denotes the pixel value of the energy-channel low-frequency sub-band at (x, y), and the even- and odd-symmetric two-dimensional Log-Gabor filter bank at scale n provides the quadrature pair.
It should be further noted that the method reflects the local contrast variation of the image by calculating the sharpness change in the neighborhood of (x, y), specifically defined as in equation (49), where M and N take the value 3 and the sharpness change measure SCM is given by equation (50), in which Ω_2 denotes a local area of size 3 × 3.

A local energy NE_2 is configured as in equation (51), where M and N take the value 3.

The energy-channel low-frequency sub-bands are fused through the low-frequency comprehensive measurement operator LM of equation (56), in which the parameters α_2, β_2, γ_2 adjust the weights of the phase-consistency value, the local sharpness change and the local energy, respectively.

The fused energy-channel low-frequency sub-band is obtained according to equation (57), whose terms denote the energy-channel low-frequency sub-bands of the source images; E_Lmap(x, y) denotes the decision matrix of the low-frequency sub-band fusion, defined as in equation (58), with R_i(x, y) given by equation (59), where N denotes the number of source images and Ω_3 denotes a sliding window of size 7 × 7 centered on (x, y).

The fused high-frequency sub-bands and low-frequency sub-band are linearly reconstructed with the dual-coordinate-system operator to realize the inverse NSCT and obtain the energy-channel fusion image E_F.
It should be further noted that the method further includes: generating the structural-channel fusion image S_F(x, y) and the energy-channel fusion image E_F(x, y), and obtaining the final fusion image by superposition:

F(x, y) = S_F(x, y) + E_F(x, y) (60)

The input is set as the source images A and B, and the output as the fusion image F. The specific steps are as follows:

Step 1: read in the source images A and B, and generate the structural channels {S_A, S_B} and energy channels {E_A, E_B} by JBF decomposition;
Step 2: fuse the structural channels {S_A, S_B} with the local gradient energy operator of equation (20) to generate the structural-channel fusion image S_F;
Step 3: fuse the energy channels {E_A, E_B} to generate the energy-channel fusion image E_F;
Step 3.1: decompose the energy channels {E_A, E_B} with the NSCT to produce the energy-channel high-frequency and low-frequency sub-bands;
Step 3.2: fuse the layer-1 to layer-4 high-frequency sub-bands with the high-frequency comprehensive measurement operator HM rule based on LE, SF and ED;
Step 3.3: fuse the layer-5 high-frequency sub-band with the PCNN rule of equation (37);
Step 3.4: fuse the low-frequency sub-bands with the low-frequency comprehensive measurement operator LM rule of equation (56), based on PC, LSCM and NE_2;
Step 3.5: apply the inverse NSCT to the fused high- and low-frequency sub-bands to generate the fused energy channel E_F;
Step 4: apply the inverse JBF transform of equation (60) to the fused structural channel S_F and energy channel E_F to generate the final fusion image F.
The invention also provides an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the steps of the dual-channel multi-mode image fusion method when executing the program.
From the above technical scheme, the invention has the following advantages:
The dual-channel multi-mode image fusion method enables the fused image to strengthen detail information and improve its similarity to the multi-mode medical images while preserving edges and smoothing noise. The invention also adopts an improved local gradient energy operator for the structural channel, and computes the energy-channel low-frequency sub-band with a low-frequency comprehensive measurement operator composed of phase consistency, local sharpness change and local energy, further improving the expression of detail information in the fused image. The energy channel generated by the JBF transform is decomposed again through the NSCT and fused, improving the multi-directional and multi-scale character of the decomposition framework. A detail-enhancement operator based on local entropy is proposed: the layer-1 to layer-4 high-frequency sub-bands of the NSCT-decomposed energy channel are processed by calculating the local entropy, spatial frequency and edge density of the image, while the layer-5 high-frequency sub-band is processed with a pulse coupled neural network (PCNN), combining deep learning with traditional methods to improve the extraction and utilization of edge contour structures and texture features in the energy channel.
Drawings
In order to more clearly illustrate the technical solutions of the present invention, the drawings that are needed in the description will be briefly introduced below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings can be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a framework diagram of a two-channel multi-modality image fusion method;
FIG. 2 is a flow chart of a dual channel multi-modality image fusion method;
FIG. 3 is a graph showing the fusion results of MR-T1/MR-T2 images at different values of σ_s;
FIG. 4 is a plot of MR-T1/MR-T2 fusion quality at different values of σ_s;
FIG. 5 is a plot of the fusion quality of MR-T1/MR-T2 at different values of S.
Detailed Description
The invention provides a dual-channel multi-mode image fusion method involving a dual-channel medical image fusion scheme that combines JBF, NSCT and structure-tensor theory with local entropy and gradient energy. The JBF is adopted to exploit the spatial structure of the image effectively, so that the fused medical images exhibit good edge preservation while remaining smooth. The scheme comprises three steps: first, the source images A and B undergo the JBF transform to obtain the structural channels {S_A, S_B} and energy channels {E_A, E_B}; second, structural-channel and energy-channel information is extracted and fused with specific fusion rules to obtain {S_F, E_F}; finally, the fusion image F is obtained through the inverse JBF transform. This process not only reflects the spatial proximity between pixels but also considers their gray-level similarity, achieving edge preservation and denoising, and it is simple, non-iterative and local. However, the JBF is a two-channel technique and its decomposition has limitations: incomplete decomposition leaves the energy channel containing some detail-texture information that belongs to the structural channel, so subsequent fusion rules cannot effectively identify and extract the corresponding information, degrading fusion quality. The NSCT is multi-scale and anisotropic, can describe the singular information of an image, and characterizes different frequency bands and directions. In view of this, the NSCT is embedded into the energy channel, and the structure and detail texture of the energy channel are decomposed and fused again, improving the multi-directional and multi-scale capability of the model.
Regarding the NSCT of the invention: built on the CT, it replaces the downsampling in the decomposition process with upsampled filters, and consists of a non-subsampled pyramid (NSP) and a non-subsampled directional filter bank (NSDFB), which perform scale decomposition and directional decomposition of the image, respectively. This avoids the directional aliasing and pseudo-Gibbs phenomena caused by sampling, guarantees translation invariance during decomposition, and improves the extraction of image edge information. It comprises three steps: first, the source-image energy channels {E_A, E_B} are decomposed by the NSCT to obtain the energy-channel high-frequency and low-frequency sub-bands; second, the high- and low-frequency sub-band information of the energy channel is extracted and fused by specific rules; finally, the energy-channel image E_F is obtained through the inverse NSCT.
For the structure-tensor theory of the invention, a local window Ω_0 is used, letting ε → 0⁺ along direction α, and the variation of the image f(x, y) at (x, y) is defined accordingly. In general, the local geometric features of the image f(x, y) at (x, y) are characterized by the local rate of change C(α), in which S denotes the structure tensor, a positive semi-definite matrix given by the second moment of the local gradient vector of f(x, y), and λ_1, λ_2 denote the eigenvalues of the structure tensor. On this basis, the structure tensor saliency detection operator (STS) is defined.
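Since the STS formulas did not survive extraction, the following sketch shows one standard way to compute a structure-tensor saliency map from the smoothed second-moment matrix and its eigenvalues λ_1 ≥ λ_2; returning their sum (the tensor trace) as saliency is an assumption standing in for the patent's exact STS combination.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def structure_tensor_saliency(img, sigma=1.0):
    """Structure-tensor saliency sketch for ST(x, y) in Eq. (20).

    Builds the second-moment matrix from image gradients, smooths each
    entry, and solves the 2x2 symmetric eigenproblem in closed form."""
    fx = sobel(img, axis=1)
    fy = sobel(img, axis=0)
    jxx = gaussian_filter(fx * fx, sigma)
    jxy = gaussian_filter(fx * fy, sigma)
    jyy = gaussian_filter(fy * fy, sigma)
    tr = jxx + jyy
    disc = np.sqrt((jxx - jyy) ** 2 + 4 * jxy ** 2)
    lam1, lam2 = (tr + disc) / 2, (tr - disc) / 2    # lambda_1 >= lambda_2
    return lam1 + lam2                               # trace as saliency (assumed)
```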
Based on the above techniques, the source image is decomposed into a structural channel and an energy channel through the JBF transform; the local gradient energy operator fuses the small-edge and small-scale detail information of the structural channel, such as tissue fibers, while the local entropy detail-enhancement operator, PCNN and NSCT with phase consistency fuse the edge intensity, texture characteristics and gray-level variation of organs in the energy channel; the fusion image is then obtained through the inverse JBF transform. The medical image fusion can thereby strengthen detail information and improve the similarity to the multi-mode medical images while preserving edges and smoothing noise.
The dual-channel multi-mode image fusion method can also acquire and process the associated data based on artificial intelligence technology: a digital computer, or a machine controlled by a digital computer, is used to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge, and use that knowledge to obtain the best results. The method involves both hardware-level and software-level technologies. Hardware technologies typically include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big-data processing, operation/interaction systems and mechatronics. Software technologies mainly include computer vision, machine learning/deep learning and programming languages. Programming languages include, but are not limited to, object-oriented languages such as Java, Smalltalk and C++, and conventional procedural languages such as the C language or similar.
Fig. 1 and Fig. 2 show a preferred embodiment of the dual-channel multi-mode image fusion method of the present invention. The method is applied to one or more fusion imaging terminals, which are devices capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions; their hardware includes, but is not limited to, a microprocessor, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a digital signal processor (DSP), an embedded device, and the like.
The fusion imaging terminal can be any electronic product capable of human-machine interaction with a user, such as a personal computer, tablet computer, smartphone, personal digital assistant (PDA), interactive internet protocol television (IPTV), intelligent wearable device, and the like.
The fusion imaging terminal may also include a network device and/or a user device. The network device includes, but is not limited to, a single network server, a server group composed of multiple network servers, or a cloud composed of a large number of hosts or network servers based on cloud computing.
The network in which the fusion imaging terminal is located includes, but is not limited to, the internet, a wide area network, a metropolitan area network, a local area network, a virtual private network (VPN), and the like.
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
In order to obtain a fused image with rich details and clear textures, the invention comprises three steps, as shown in Figs. 1 and 2: decomposition with the joint bilateral filter, fusion of the structural and energy channels, and image reconstruction. First, the source image is decomposed into a structural channel and an energy channel through the JBF transform; second, the small-edge and small-scale detail information of the structural channel, such as tissue fibers, is fused with the local gradient energy operator, while the organ edge intensity, texture characteristics and gray-level variation of the energy channel are fused with the local entropy detail-enhancement operator, PCNN and NSCT with phase consistency; finally, the fusion image is obtained through the inverse JBF transform.
In one exemplary embodiment, to maximally preserve the detail texture of the source image, the input image I is first globally blurred, i.e.

R_m = G_m * I (11)

where R_m denotes the smoothed result under standard deviation σ, and G_m denotes a Gaussian filter with variance σ²; the Gaussian filter G_m at (x, y) is defined as

G_m(x, y) = (1 / 2πσ²) · exp(−(x² + y²) / 2σ²) (12)

Subsequently, a global blurred image G is generated with a weighted-average Gaussian filter, as in equation (13), where I denotes the input image, N(i) denotes the set of pixels adjacent to pixel i, σ² denotes the variance of the pixel values, and Z_j denotes the normalization operation of equation (14).

After global blurring, however, the image intensity information is relatively dispersed; if it were used directly as the energy channel, the subsequent fusion rules could not extract the intensity information, leading to boundary blurring and artifacts in the fused image. To generate relatively concentrated edge-intensity information, the JBF is employed to recover the large-scale structure of the energy channel, as in equation (15), where g_s denotes an intensity-range function based on the intensity difference between pixels, g_d denotes a spatial-distance function based on the pixel distance, Z_j denotes the normalization operation of equation (16), and σ_s, σ_r denote the spatial weight and the range weight of the bilateral filter, respectively.

In summary, the energy channels E_I(x, y) of the source images A and B are obtained, and the structural channel S_I(x, y) is obtained by equation (19):

S_I(x, y) = I(x, y) − E_I(x, y) (19)
In the embodiment of the invention, a structural-channel fusion rule must be configured. In medical imaging, the quality of detail expression plays a decisive role in the diagnosis of organ lesions. To accurately reflect detail information such as small edge structures and fibers within organs and tissues, the structural-channel information is extracted and fused with a local gradient energy operator based on the structure tensor and neighborhood energy. To address the inability of the STS to detect tiny detail features missing from the intensity-function image, a local gradient energy (LGE) operator is constructed, i.e.

LGE(x, y) = NE_1(x, y) · ST(x, y) (20)

where ST(x, y) denotes the structure-tensor saliency image generated by the STS, and NE_1(x, y) denotes the local energy of the image at (x, y), given by equation (21); the neighborhood at (x, y) has size (2N+1) × (2N+1), with N = 4.

By comparing the magnitudes of the local gradient energy between the source images, the decision matrix S_map(x, y) of equation (22) is defined. To ensure region integrity in the target image, the decision matrix for structural-channel fusion is updated to S'_map(x, y) as in equation (23), where Ω_1 denotes a local area of size T × T centered on (x, y), with T = 21.

In summary, the fused structural channel S_F(x, y) is obtained according to equation (24), where S_A(x, y) and S_B(x, y) are the structural channels of the source images A and B, respectively.
The invention also configures the energy-channel fusion rule. After JBF decomposition, the energy channel contains organ contour structures and edge-intensity information; at the same time, the limited resolution of the decomposition leaves the energy channel containing small amounts of texture features such as fibers. The complex energy-channel information is therefore decomposed again through the NSCT, and the local entropy detail-enhancement operator, PCNN and phase consistency are adopted to extract and fuse texture features and the contour structures of organs and bones, respectively, further improving the utilization of energy-channel information and the fusion effect.
The invention also configures the energy-channel high-frequency sub-band fusion rule. After NSCT decomposition, each decomposition layer of the energy-channel high-frequency sub-band contains organ contour structures and fiber texture features at different scales, and the quality of information extraction from these layers directly influences the fusion result. Meanwhile, as the number of decomposition layers increases, the image-scale information decreases, so that general image fusion rules struggle to extract the information of the highest decomposition layer effectively.

The PCNN, as a neural network model, has pulse-synchronization and global-coupling characteristics and can extract effective information from complex backgrounds; it outperforms most traditional methods and has notable advantages in edge detection, refinement and recognition for image fusion. In view of this, the PCNN is embedded into the processing of the high-frequency sub-bands to improve the extraction of layer-5 high-frequency information, thereby improving the structure and texture characteristics of the fused image. Meanwhile, the local entropy detail-enhancement operator is adopted to fuse the layer-1 to layer-4 high-frequency sub-bands, further improving the fidelity of organ contours and fiber textures in the fused image. Extensive experiments show that using the local entropy detail-enhancement operator for layers 1 to 4 of the energy-channel high-frequency sub-band and the PCNN for layer 5 has clear advantages for extracting and fusing structure and texture information.
In the present invention, the fusion rule for the layer-1 to layer-4 high-frequency sub-bands is described first. Image entropy is a statistical measure of the amount of information in an image and reflects the detail it contains: in general, the greater the entropy, the more detail information the image holds. The entropy of the whole image, however, often fails to reflect local detail. To address this, the local entropy of the image is introduced to further describe the detail of the energy-channel high-frequency sub-bands. The local entropy (LE) of the image centered on (x, y) is defined as in equation (25), where S denotes a window of size (2N+1) × (2N+1) centered on (x, y).
The invention introduces the spatial frequency (SF) to further highlight texture information, reflecting detail features by calculating the gray-level change rate at (x, y), as in equation (26), where h and w denote the height and width of the source image, and CF and RF denote the first-order differences in the x and y directions:

CF(x, y) = f(x, y) − f(x−1, y) (27)
RF(x, y) = f(x, y) − f(x, y−1) (28)
LE and SF, however, estimate the detail information of an image and lack extraction and expression of large-scale structural information such as contours. The edge density (ED) is therefore introduced, highlighting the hierarchy of structure and contour edges by computing the gradient magnitude of edge pixels at (x, y), defined as in equation (29), where s_x and s_y denote the results of convolution with the Sobel operator in the x and y directions, respectively:

s_x = T * h_x (30)
s_y = T * h_y (31)

Here T denotes the image at pixel (x, y), and h_x, h_y denote the Sobel operators in the x and y directions, given by equation (32).
Thereby, the energy-channel high-frequency sub-bands are fused through the high-frequency comprehensive measurement operator HM, in which the parameters α_1, β_1, γ_1 adjust the weights of the local entropy, the spatial frequency and the edge density, respectively.

By comparing the magnitudes of HM between the energy-channel high-frequency sub-bands, the decision matrix E_Hmap(x, y) of the high-frequency sub-band fusion is obtained as in equation (34). The fused images of the layer-1 to layer-4 high-frequency sub-bands are then obtained according to equation (35), whose terms denote the layer-1 to layer-4 energy-channel high-frequency sub-bands of the source images A and B, respectively.
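The following sketch illustrates the layer-1 to layer-4 rule: a comprehensive measure HM built from local entropy (LE), spatial frequency (SF) and edge density (ED), followed by per-pixel winner-takes-all between the two source sub-bands. The 16-level quantization for the entropy, the local (windowed) form of SF, and the equal weights α_1 = β_1 = γ_1 = 1 are assumptions; the patent leaves the weight values to its parameter settings.

```python
import numpy as np
from scipy.ndimage import uniform_filter, sobel

def hm_measure(band, N=3, alpha=1.0, beta=1.0, gamma=1.0):
    """HM = alpha*LE + beta*SF + gamma*ED, in the spirit of Eqs. (25)-(32).

    `band` must be a float array.  LE is a windowed entropy over a
    16-level quantisation; SF is a windowed spatial frequency from
    first-order row/column differences; ED is a windowed Sobel magnitude."""
    size = 2 * N + 1
    edges = np.histogram_bin_edges(band, bins=16)
    q = np.clip(np.digitize(band, edges) - 1, 0, 15)
    le = np.zeros_like(band)
    for level in range(16):                       # local entropy LE
        p = uniform_filter((q == level).astype(float), size=size)
        le -= p * np.log2(p + 1e-12)              # p=0 contributes ~0
    cf = np.zeros_like(band); cf[:, 1:] = band[:, 1:] - band[:, :-1]  # Eq. (27)
    rf = np.zeros_like(band); rf[1:, :] = band[1:, :] - band[:-1, :]  # Eq. (28)
    sf = np.sqrt(uniform_filter(cf ** 2 + rf ** 2, size=size))        # local SF
    ed = uniform_filter(np.hypot(sobel(band, 1), sobel(band, 0)), size=size)
    return alpha * le + beta * sf + gamma * ed

def fuse_high(bands_a, bands_b):
    """Per-pixel winner-takes-all on HM for decomposition layers 1-4."""
    return [np.where(hm_measure(a) >= hm_measure(b), a, b)
            for a, b in zip(bands_a, bands_b)]
```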
Second, the invention adopts the PCNN to fuse the layer-5 high-frequency sub-band, obtaining the fused energy-channel high-frequency sub-band by counting the PCNN excitation times, as in equation (37), whose terms denote the layer-5 energy-channel high-frequency sub-bands of the source images A and B and the corresponding PCNN excitation counts, the latter given by

T_ij(n) = T_ij(n−1) + P_ij(n) (38)

where P_ij(n) denotes the output model of the PCNN.
In the PCNN, D_ij(n) and C_ij(n) denote the feeding input and the linking input of the neuron at (x, y) after n iterations, respectively. D_ij(n) is related to the intensity of the input image I_ij throughout the iterations, while the synaptic weighting of C_ij(n) is related to the previous excitation state of the eight-neighborhood neurons. To obtain the output model of the PCNN, the feeding input and linking input of the neuron at (x, y) are first defined as

D_ij(n) = I_ij (39)

and equation (40), where the parameter V_L denotes the amplitude of the linking input and W_ijop denotes the previous excitation state of the eight-neighborhood neurons, as in equation (41).

Next, the decay of the previous value of the internal activity term U_ij(n) is calculated with the exponential decay coefficient η_f, and D_ij(n) and C_ij(n) are nonlinearly modulated through the linking strength β to obtain the current internal activity term, defined as in equation (42). At the same time, the current dynamic threshold is iteratively updated as in equation (43), where η_e and V_E denote the exponential decay coefficient and the amplitude of E_ij(n), respectively.

Finally, the state of the PCNN output model P_ij(n) is determined by comparing the current internal activity term U_ij(n) with the dynamic threshold E_ij(n−1) of the (n−1)-th iteration, as defined in equation (44).

In summary, the fusion result of the layer-5 high-frequency sub-band is obtained according to equations (37) and (44), and the fused energy-channel high-frequency sub-band is obtained according to equation (45).
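A minimal sketch of the layer-5 rule follows, implementing the simplified PCNN of equations (38)-(44): the feeding input D is the normalized sub-band, the linking input C couples each neuron to the previous firing state of its eight neighbours, U is the internal activity term, E the dynamic threshold, and coefficients are fused by comparing firing counts T. All numeric parameter values (iteration count, β, V_L, V_E, η_f, η_e and the linking weights W) are assumptions.

```python
import numpy as np
from scipy.ndimage import convolve

def pcnn_fire_counts(band, n_iter=110, beta=0.2, v_l=1.0, v_e=20.0,
                     eta_f=0.1, eta_e=0.2):
    """Firing-count map T_ij of a simplified PCNN, per Eqs. (38)-(44)."""
    i = (band - band.min()) / (band.max() - band.min() + 1e-12)  # feeding D_ij
    w = np.array([[0.5, 1.0, 0.5],
                  [1.0, 0.0, 1.0],
                  [0.5, 1.0, 0.5]])               # eight-neighbourhood weights W
    u = np.zeros_like(i)                          # internal activity U_ij
    e = np.ones_like(i)                           # dynamic threshold E_ij
    p = np.zeros_like(i)                          # output P_ij
    t = np.zeros_like(i)                          # firing counts T_ij
    for _ in range(n_iter):
        c = v_l * convolve(p, w, mode='constant')        # linking input C_ij
        u = np.exp(-eta_f) * u + i * (1.0 + beta * c)    # Eq. (42) style
        p = (u > e).astype(float)                        # Eq. (44): fire if U > E
        e = np.exp(-eta_e) * e + v_e * p                 # Eq. (43) style
        t += p                                           # Eq. (38)
    return t

def fuse_layer5(ha, hb):
    """Pick, per pixel, the layer-5 coefficient whose PCNN fires more."""
    return np.where(pcnn_fire_counts(ha) >= pcnn_fire_counts(hb), ha, hb)
```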
As an embodiment of the present invention, the energy-channel low-frequency sub-band fusion rule is also configured.

The low-frequency sub-bands contain the pixel brightness and gray-level variation of the energy channel. To further increase the information content of the low-frequency sub-band, phase congruency is adopted to enhance its image information. Phase congruency (PC) is a dimensionless measure commonly used to reflect the sharpness of an image and the importance of image features. The PC value at (x, y) is defined as in equation (46), where θ_k denotes the direction angle at index k; A_{n,θ_k} denotes the amplitude of the n-th Fourier component at angle θ_k; and ω denotes a parameter for removing the phase component of the image signal, given by equation (47). The convolution result of the image pixel at (x, y) is given by equation (48), where I_L(x, y) denotes the pixel value of the energy-channel low-frequency sub-band at (x, y), and the even- and odd-symmetric two-dimensional Log-Gabor filter bank at scale n provides the quadrature pair.
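Because the PC formulas were lost in extraction, the following sketch only illustrates the quantity PC measures, the ratio of local energy to total Log-Gabor amplitude. It substitutes an isotropic radial Log-Gabor bank with a Riesz (quadrature) kernel for the patent's oriented even/odd filter pairs over angles θ_k; the orientation sum, the noise-compensation term and all parameter values are simplified away or assumed.

```python
import numpy as np

def phase_congruency(img, n_scales=4, f0=0.1, mult=2.0, sigma_f=0.55):
    """Simplified, orientation-free phase-congruency map."""
    h, w = img.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    radius = np.hypot(fx, fy)
    radius[0, 0] = 1.0                       # avoid log(0)/divide-by-0 at DC
    riesz = (fx + 1j * fy) / radius          # quadrature kernel for the odd part
    spec = np.fft.fft2(img)
    sum_even = np.zeros((h, w))
    sum_odd = np.zeros((h, w), dtype=complex)
    amp_sum = np.zeros((h, w))
    for s in range(n_scales):
        fc = f0 * mult ** s                  # centre frequency of scale s
        lg = np.exp(-np.log(radius / fc) ** 2 / (2 * np.log(sigma_f) ** 2))
        lg[0, 0] = 0.0                       # zero DC gain
        even = np.real(np.fft.ifft2(spec * lg))      # even-symmetric response
        odd = np.fft.ifft2(spec * lg * riesz)        # odd-symmetric response
        sum_even += even
        sum_odd += odd
        amp_sum += np.hypot(even, np.abs(odd))       # local amplitude A_n
    return np.hypot(sum_even, np.abs(sum_odd)) / (amp_sum + 1e-12)
```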
However, PC, being contrast invariant, cannot reflect local contrast variation. The local sharpness change measure (LSCM) is therefore introduced, reflecting the local contrast variation of the image by accumulating the sharpness change measure (SCM) in the neighborhood of (x, y), as defined in equation (49), where M and N take the value 3 and the SCM is given by equation (50), in which Ω_2 denotes a local area of size 3 × 3.
Since PC and LSCM do not fully reflect the local signal strength, the local energy NE_2 of equation (51) is introduced, where M and N take the value 3.
Thereby, the energy-channel low-frequency sub-bands are fused through the low-frequency comprehensive measurement operator LM of equation (56), in which the parameters α_2, β_2, γ_2 adjust the weights of the phase-congruency value, the local sharpness change and the local energy, respectively.

In summary, the fused energy-channel low-frequency sub-band is obtained according to equation (57), whose terms denote the energy-channel low-frequency sub-bands of the source images; E_Lmap(x, y) denotes the decision matrix of the low-frequency sub-band fusion, defined as in equation (58), with R_i(x, y) given by equation (59), where N denotes the number of source images and Ω_3 denotes a sliding window of size 7 × 7 centered on (x, y).
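The following sketch mirrors the low-frequency rule: a comprehensive measure LM built from a phase-congruency map (e.g. from the sketch above), the windowed sharpness-change measure LSCM and the local energy NE_2, followed by winner-takes-all with a sliding-window consistency vote standing in for E_Lmap and R_i. The diff-to-local-mean form of SCM, the equal weights α_2 = β_2 = γ_2 = 1 and the majority vote are assumptions; M = 3 and the window value 7 follow the text.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lm_measure(low, pc_map, M=3, a2=1.0, b2=1.0, g2=1.0):
    """LM = a2*PC + b2*LSCM + g2*NE_2, in the spirit of Eqs. (46)-(56)."""
    size = 2 * M + 1
    # SCM stand-in: squared deviation from the 3x3 neighbourhood mean
    # (a compact proxy for summing (f(x,y) - f(neighbour))^2 over Omega_2).
    scm = (low - uniform_filter(low, size=3)) ** 2
    lscm = uniform_filter(scm, size=size) * size ** 2      # windowed SCM sum
    ne2 = uniform_filter(low ** 2, size=size) * size ** 2  # local energy NE_2
    return a2 * pc_map + b2 * lscm + g2 * ne2

def fuse_low(la, lb, pc_a, pc_b, win=7):
    """Winner-takes-all on LM, then a win x win majority vote standing in
    for the patent's sliding-window decision map E_Lmap / R_i."""
    m = (lm_measure(la, pc_a) >= lm_measure(lb, pc_b)).astype(float)
    m = uniform_filter(m, size=win) > 0.5
    return np.where(m, la, lb)
```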
Finally, the fused high-frequency sub-bands and low-frequency sub-band are linearly reconstructed with the dual-coordinate-system operator to realize the inverse NSCT and obtain the energy-channel fusion image E_F.
In the embodiment of the invention, the fusion image is reconstructed: the structural-channel fusion image S_F(x, y) and the energy-channel fusion image E_F(x, y) generated by the above steps are superposed to obtain the final fusion image:

F(x, y) = S_F(x, y) + E_F(x, y) (60)

The input is set as the source images A and B, and the output as the fusion image F. The specific steps are as follows (a condensed sketch follows the step list):

Step 1: read in the source images A and B, and generate the structural channels {S_A, S_B} and energy channels {E_A, E_B} by JBF decomposition;
Step 2: fuse the structural channels {S_A, S_B} with the local gradient energy operator of equation (20) to generate the structural-channel fusion image S_F;
Step 3: fuse the energy channels {E_A, E_B} to generate the energy-channel fusion image E_F;
Step 3.1: decompose the energy channels {E_A, E_B} with the NSCT to produce the energy-channel high-frequency and low-frequency sub-bands;
Step 3.2: fuse the layer-1 to layer-4 high-frequency sub-bands with the high-frequency comprehensive measurement operator HM rule based on LE, SF and ED;
Step 3.3: fuse the layer-5 high-frequency sub-band with the PCNN rule of equation (37);
Step 3.4: fuse the low-frequency sub-bands with the low-frequency comprehensive measurement operator LM rule of equation (56), based on PC, LSCM and NE_2;
Step 3.5: apply the inverse NSCT to the fused high- and low-frequency sub-bands to generate the fused energy channel E_F;
Step 4: apply the inverse JBF transform of equation (60) to the fused structural channel S_F and energy channel E_F to generate the final fusion image F.
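For orientation, the condensed sketch below wires the previously sketched helpers into the Step 1-4 pipeline. `nsct_decompose` and `nsct_reconstruct` are hypothetical stand-ins for an NSCT implementation (no standard Python NSCT package is implied); everything else reuses the functions sketched earlier, under the same assumptions.

```python
def fuse_images(img_a, img_b, nsct_decompose, nsct_reconstruct):
    """End-to-end sketch of Steps 1-4 using the earlier helper sketches
    (jbf_decompose, structure_tensor_saliency, fuse_structure, fuse_high,
    fuse_layer5, phase_congruency, fuse_low).  `nsct_decompose(img)` is
    assumed to return (list of 5 high-frequency layers, low band) and
    `nsct_reconstruct(highs, low)` to invert it."""
    # Step 1: JBF decomposition into structural + energy channels.
    sa, ea = jbf_decompose(img_a)
    sb, eb = jbf_decompose(img_b)
    # Step 2: structural-channel fusion with the LGE rule.
    sf = fuse_structure(sa, sb,
                        structure_tensor_saliency(sa),
                        structure_tensor_saliency(sb))
    # Step 3: energy-channel fusion in the NSCT domain.
    highs_a, low_a = nsct_decompose(ea)
    highs_b, low_b = nsct_decompose(eb)
    fused_highs = fuse_high(highs_a[:4], highs_b[:4])        # layers 1-4: HM
    fused_highs.append(fuse_layer5(highs_a[4], highs_b[4]))  # layer 5: PCNN
    fused_low = fuse_low(low_a, low_b,
                         phase_congruency(low_a), phase_congruency(low_b))
    ef = nsct_reconstruct(fused_highs, fused_low)
    # Step 4: reconstruction, Eq. (60): F = S_F + E_F.
    return sf + ef
```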
Therefore, the dual-channel multi-mode image fusion method enables the fused image to strengthen detail information and improve its similarity to the multi-mode medical images while preserving edges and smoothing noise. The invention also adopts an improved local gradient energy operator for the structural channel, and computes the energy-channel low-frequency sub-band with a low-frequency comprehensive measurement operator composed of phase consistency, local sharpness change and local energy, further improving the expression of detail information in the fused image. The energy channel generated by the JBF transform is decomposed again through the NSCT and fused, improving the multi-directional and multi-scale character of the decomposition framework. A detail-enhancement operator based on local entropy is proposed: the layer-1 to layer-4 high-frequency sub-bands of the NSCT-decomposed energy channel are processed by calculating the local entropy, spatial frequency and edge density of the image, while the layer-5 high-frequency sub-band is processed with a pulse coupled neural network (PCNN), combining deep learning with traditional methods to improve the extraction and utilization of edge contour structures and texture features in the energy channel.
Further, as experiments and result analysis of the above embodiments, the technical effects of the method are verified with specific implementation results. The experimental data and test images were set as follows. To fully verify the superiority of the method, a comprehensive and extensive experimental analysis was performed. Experiments were conducted on human brain image datasets captured by four different imaging mechanisms, obtained from the Harvard Medical School website, with the resolution of each test image set to 256 × 256; 118 pairs of multi-mode medical images were used to fully verify the validity of the method. Experimental results of 4 pairs of magnetic resonance imaging groups (MR-T1/MR-T2), 4 pairs of computed tomography and magnetic resonance groups (CT/MR), 4 pairs of magnetic resonance and single-photon emission computed tomography groups (MR/SPECT), and 4 pairs of magnetic resonance and positron emission tomography groups (MR/PET) were randomly chosen and analyzed in terms of visual quality and objective indices, respectively.
All experiments were written in Matlab 2018; the running environment was an AMD Ryzen 7 5800 with Radeon Graphics at 3.20 GHz with 16.0 GB of RAM.
The invention uses six commonly used measurement indices to comprehensively and quantitatively evaluate the performance of different fusion methods. First, three indices are adopted to measure the similarity between the fused image and the source images: peak signal-to-noise ratio (PSNR), structural similarity (SSIM) and mutual information (MI). The higher these indices, the less distortion the fusion process produces and the more similar the source and fused images are.
PSNR measures similarity by calculating the mean square error between the source and fused images; SSIM measures the structural similarity between them; and MI measures their correlation by calculating the information entropy of the fused image and the joint information entropy of the fused and source images. Second, three indices are adopted to measure the retention of source-image edge information and detail texture and the contrast of the fused image: spatial frequency (SF), standard deviation (SD) and edge-information preservation (Qabf). The higher these indices, the more detail and texture information the fused image contains and the better the quality of visual information obtained from the source images. In addition, to further evaluate fusion performance, information entropy (EN) and visual information fidelity of fusion (VIFF) are introduced to measure the information content of the fused image and the degree to which it restores the source images. The higher these indices, the better the fusion performance and the smaller the distortion of the fused image.
The parameters of the invention are verified by fixing all other parameters and adjusting one at a time: a series of fusion results is generated on the 118 pairs of multi-mode medical images and evaluated in terms of similarity indices, visual effect and other aspects to determine the optimal value of each parameter. The optimal parameters are analyzed below, taking MR-T1/MR-T2 image fusion as an example.
1) Gaussian standard deviation σ_s:
The Gaussian standard deviation σ_s serves as the spatial weight of the bilateral filter, determines how well the spatial information of the source image is identified, and influences the texture structure of the fused image and its degree of similarity to the source image. Setting an appropriate Gaussian standard deviation σ_s is therefore particularly important.
To determine the optimal value of σ_s, the other parameters were fixed and σ_s was varied between 1 and 6; the experimental results are shown in Fig. 3. As can be seen in the close-up region of Fig. 3c, the detail information from the source image is severely attenuated, there are significant artifacts in the gyrus region of the brain, and severe loss of detail even occurs at the sellar cistern. Fig. 3c also shows the gray-level information of the fused image being distorted and failing to match the source-image information, which is unacceptable in medical diagnosis. The close-up region of Fig. 3d shows improvement: the contour structure of the fused image is more vivid than in Fig. 3c, but texture features are still clearly missing at the cerebral sulci, the degree of similarity to the source images is unbalanced, and the gray-level changes cannot accurately reflect lesion information, seriously affecting the accuracy of medical diagnosis. As can be seen from the close-up regions of Figs. 3f, 3g and 3h, as σ_s increases the fusion energy loss grows, the contrast decreases and the fiber texture features weaken markedly; at the same time, the imbalance between the fused image and the MR-T1 and MR-T2 images becomes more pronounced: the MR-T2 information contained at the anterior horns of the lateral ventricles in the fused image gradually decreases, and when σ_s reaches 6 the fused image no longer reflects the MR-T2 information at all. When σ_s is 3, however, the fused image excels in texture-detail expression, source-image restoration and the balance of MR-T1 and MR-T2 information, with clear advantages over other values. As can be seen from Fig. 3e, the sulcal texture of the fused image is clearer, the fiber-detail changes are distinct, the gray level at the anterior horns of the lateral ventricles is balanced, the edges are clear, and there are no artifacts or distortion. In addition, the objective evaluation indices in Table 1 show that when σ_s is 3, the pixel gray-level and structural similarity between the fused image and the source-image information are highest and the fusion performance is best.
The data of Table 1 are presented as a line graph in Fig. 4, allowing the performance under varying σ_s to be analyzed more intuitively. As can be seen from Fig. 4, the objective indices increase with σ_s and peak when σ_s is 3; once σ_s exceeds 3, the similarity between the fused image and the source images gradually decreases, the adverse effects grow as σ_s increases, and the image fusion performance declines accordingly. Therefore, whether judged by subjective analysis or objective indices, when σ_s is 3 the similarity between the fused and source images is highest, the texture-detail information is most distinct, and the fusion performance is best; the optimal value of σ_s is accordingly set to 3. In addition, a large number of experiments found that the value of the parameter σ_r in equation (16) does not affect the experimental results, so σ_r is set to 0.05.
Table 1 Objective evaluation of MR-T1/MR-T2 fusion results at different values of σ_s
Note: the optimal values are indicated in bold.
2) Window size S:
In the energy-channel high-frequency sub-band detail-enhancement operator, the window size S is the window size of the local image entropy and determines how the source image is partitioned into blocks, so that the source-image information is characterized by calculating the entropy of each block. Setting an appropriate window size S therefore plays a critical role in how well the source-image information is extracted.
With the other parameters fixed, the window size S was varied between 1 and 6; the experimental results are shown in Table 2. As can be seen from Table 2, the objective indices increase with the window size S until S reaches 3, where the PSNR and MI values attain their optimum; fusion performance is then best and the restoration of source-image information highest. As S increases beyond 3, the similarity indices between the fused image and the source images gradually decrease. Meanwhile, the SSIM value increases with S, reaching its optimum at 2 and remaining there until S exceeds 5, after which it begins to decline and the fusion performance and image quality deteriorate. On comprehensive analysis, when S is 3 the pixel gray-level and structural similarity between the fused image and the source-image information are highest and the fusion performance is best.
Similarly, the data of Table 2 are presented as a line graph in Fig. 5. As can be seen from Fig. 5, when the window size S is 3 every objective index is at its peak compared with other values; the fused image then reaches its optimum in source-image restoration, similarity and detail-texture expression, which helps medical staff capture and analyze a patient's lesion information and improves the reliability and authenticity of medical diagnosis. The optimal value of S is therefore set to 3.
TABLE 2 objective evaluation of MR-T1/MR-T2 fusion results at different values
And (5) pouring. The optimal values are indicated in bold.
The units and algorithm steps of each example described in the embodiments disclosed in the dual-channel multi-mode image fusion method provided by the invention can be implemented by electronic hardware, computer software or a combination of the two, and in order to clearly illustrate the interchangeability of hardware and software, the components and steps of each example have been generally described in terms of functions in the above description. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The two-channel multi-modality image fusion method provided by the invention is the unit and algorithm steps of each example described in connection with the embodiments disclosed herein, and can be implemented in electronic hardware, computer software, or a combination of both, and to clearly illustrate the interchangeability of hardware and software, the components and steps of each example have been generally described in terms of functionality in the foregoing description. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (2)

1. The double-channel multi-mode image fusion method is characterized by comprising the following steps of:
Setting an input as a source image ,
Setting output as a fusion image F;
The method specifically comprises the following steps:
step1, reading source image ,Generating structural channels by JBF decompositionAnd an energy channel
Step1 further comprises: for input imageGlobal blurring, i.e.
(11)
Wherein,Expressed in standard deviation ofThe following smoothing result; Representing variance as In (2) Gaussian filter, inGaussian filter atThe definition is as follows:
(12)
Generating a global blurred image using a weighted average gaussian filter I.e.
(13)
Wherein,Representing an input image; representing pixel points Is a set of adjacent pixels; Representing the variance of the pixel values; Representing normalization operations, i.e.
(14)
Large-scale structure employing JBF to recover energy channels, i.e
(15)
Wherein,Representing an intensity range function based on the intensity differences between pixels; Representing a spatial distance function based on the pixel distance; Representing normalization operations, i.e.
(16)
(17)
(18)
, Respectively representing the spatial weight and the range weight of the control bilateral filter;
energy channel for obtaining input image And obtaining structural channels by formula (19)
(19);
Thereby respectively obtaining the structural channels of the input source images A, BAnd an energy channel
Step2, structural channelGenerating a structural channel fusion image by adopting local gradient energy operator fusion of (20)
Step2 further comprises: constructing local gradient energy operators, i.e.
(20)
Wherein,Representing a structure tensor saliency image generated by the STS;
Is shown in Local energy of the image at, i.e
(21)
The size of the neighborhood is that, The value is 4;
obtaining a decision matrix by comparing the magnitudes of local gradient energy between source images Is defined as
(22)
Updating the decision matrix of the structure channel fusion toI.e.
(23)
Wherein,Expressed in terms ofA local region of size 21 x 21, being the centre;
Obtaining a fused structure channel fused image according to the following rule I.e.
(24)
Wherein,, Respectively source images, Is provided;
step3, energy channel Fusion generation of energy channel fusion images
Step3.1, energy channel pairEnergy channel high frequency subband generation using NSCT decompositionAnd energy channel low frequency sub-bands
Step3.2, fusing the high-frequency sub-bands of the 1 st layer to the 4 th layer by adopting a high-frequency comprehensive measurement operator HM rule based on LE, SF and ED;
Step3.2 further comprises: configuring an energy channel high-frequency subband fusion rule;
The configuration of the energy channel high-frequency subband fusion rule comprises the following steps: details of the high-frequency sub-bands of the energy channels are described as The local entropy of the image for the center is defined as:
(25)
Wherein, Expressed in terms ofIs centered and has the size ofIs a window of (2);
Spatial frequency based computation The gray scale change rate at the position reflecting the detail characteristics thereof, namely
(26)
Wherein,, Representing the length and width of the source image, respectively; CF, RF are respectively locatedFirst order difference between x and y directions is expressed as
(27)
(28)
Edge density based computationThe magnitude of the edge pixel gradient at the point is specifically defined as:
(29)
Wherein, , Respectively represent, Results after convolution of the Sobel operator in the direction, i.e.
(30)
(31)
Representing each pixel point; Respectively representSobel operator in direction, i.e.
(32)
(33)
Fusing high-frequency sub-bands of the energy channel through a high-frequency comprehensive measurement operator HM;
(34)
Wherein the parameters are , , The weights are respectively used for adjusting the local entropy, the spatial frequency and the edge density of the image in the HM;
By comparing the sizes of the high-frequency sub-bands HM of the energy channel, a decision matrix for fusion of the high-frequency sub-bands of the energy channel is obtained Defined as
(35)
Meanwhile, fused images of the 1 st to 4 th layers of high-frequency sub-bands after fusion are obtained according to the following rules
(36)
Wherein,, Respectively representing source images, Layer 1 to 4 energy channel high frequency subbands;
Step3.3, fusing the high-frequency sub-bands of the 5 th layer by adopting a PCNN rule of a formula (37);
in the method, PCNN is adopted for fusing the 5 th layer high-frequency sub-band, and the fused energy channel high-frequency sub-band is obtained by calculating the PCNN excitation times
(37)
Wherein,, Respectively representing source images, Layer 5 energy channel high frequency sub-bands;, Respectively representing the source image representing the number of excitation of the layer 5 energy channel high frequency sub-band PCNN, The formula is
(38)
An output model representing PCNN;
in the method, in order to obtain the output model of PCNN Feed input and link input of neurons at a site, defined as
(39)
(40)
Wherein the parameters areRepresenting the amplitude of the link input;
Representing the previous state of excitation of eight neighborhood neurons, i.e
(41)
Second, using exponential decay coefficientsComputing internal activity itemsThe decay magnitude of the previous value and by link strengthFor a pair ofAndNonlinear modulation is carried out to obtain the current internal activity item, which is defined as
(42)
At the same time, the current dynamic threshold is iteratively updated, i.e
(43)
Wherein,AndRespectively represent the exponential decay coefficientsAmplitude of (2);
using current internal activity items And the firstDynamic threshold at multiple iterationsComparing the sizes, and judging the PCNN output modelIs defined as the state of
(44)
Obtaining a fusion result of the 5 th layer high frequency sub-band according to formulas (37) and (44);
step3.4, PC-based, LSCM-based, using (56) for the low frequency sub-band, The low-frequency comprehensive measurement operator LM rule of (2) is fused;
the method is realized by calculating The neighborhood sharpness change reflects the local contrast change condition of the image, and is specifically defined as:
(53)
Wherein, , The value is 3, and the SCM formula is
(54)
Representing a size ofA local area;
configuring local energy
(55)
Wherein,, The value is 3;
And fusing the low-frequency sub-bands of the energy channel by a low-frequency comprehensive measurement operator LM:
(56)
Wherein the parameters are , , Weights for adjusting the phase coincidence value, local sharpness variation and local energy in LM, respectively;
the method further comprises the steps of configuring an energy channel low-frequency subband fusion rule;
The method specifically comprises the following steps: at the position of The PC value at is defined as
(46)
Wherein,The representation is located atA direction angle at; Represent the first Fourier components and anglesAmplitude magnitude of (a) is determined; A parameter indicating a phase component for removing the image signal;
The formula is
(47)
(48)
(49)
, The representation is located atConvolution results of pixels of the image, i.e
(50)
(51)
(52)
The representation is located atPixel values at the low frequency sub-band of the energy channel; And Representing a scale of sizeParity-symmetric filter bank of two-dimensional Log-Gabor;
Obtaining the fused energy channel low-frequency sub-band according to the following rule I.e.
(57)
Wherein,, Respectively representing source image energy channel low frequency sub-bands; Decision matrix representing energy channel low frequency subband fusion, defined as
(58)
Is defined as
(59)
Representing the number of source images; expressed in terms of Is centered and has the size ofIs provided with a sliding window which is arranged on the upper surface of the glass substrate,,The value is 7;
step3.5, obtaining the fused energy channel high frequency subband according to the following rule ,
I.e.
(45);
Step3.6, pair of fused high and low frequency sub-bandsEnergy channel fusion image generation using NSCT inverse transformation
High frequency subband using dual coordinate system operatorAnd low frequency sub-bandsPerforming linear reconstruction to realize NSCT inverse transformation to obtain an energy channel fusion image
Step4, fusing the images of the fused structural channelsAnd energy channel fusion imageGenerating a final fusion image F by adopting the inverse JBF conversion of the formula (60);
(60)。
2. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the steps of the two-channel multi-modality image fusion method of claim 1 when the program is executed by the processor.
CN202310123425.0A 2023-02-14 2023-02-14 Dual-channel multi-mode image fusion method and electronic equipment Active CN116342444B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310123425.0A CN116342444B (en) 2023-02-14 2023-02-14 Dual-channel multi-mode image fusion method and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310123425.0A CN116342444B (en) 2023-02-14 2023-02-14 Dual-channel multi-mode image fusion method and electronic equipment

Publications (2)

Publication Number Publication Date
CN116342444A CN116342444A (en) 2023-06-27
CN116342444B true CN116342444B (en) 2024-07-26

Family

ID=86878173

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310123425.0A Active CN116342444B (en) 2023-02-14 2023-02-14 Dual-channel multi-mode image fusion method and electronic equipment

Country Status (1)

Country Link
CN (1) CN116342444B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116883803B (en) * 2023-09-07 2023-12-05 南京诺源医疗器械有限公司 Image fusion method and system for glioma edge acquisition
CN118097581B (en) * 2024-04-28 2024-06-25 山东领军智能交通科技有限公司 Road edge recognition control method and device

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114494093A (en) * 2022-01-17 2022-05-13 广东工业大学 Multi-modal image fusion method
CN115222725A (en) * 2022-08-05 2022-10-21 兰州交通大学 NSCT domain-based PCRGF and two-channel PCNN medical image fusion method

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102722877B (en) * 2012-06-07 2014-09-10 内蒙古科技大学 Multi-focus image fusing method based on dual-channel PCNN (Pulse Coupled Neural Network)
CN107403416B (en) * 2017-07-26 2020-07-28 温州大学 NSCT-based medical ultrasonic image denoising method with improved filtering and threshold function
CN113496473A (en) * 2020-04-07 2021-10-12 无锡盛高计算机科技有限公司 Image fusion method based on dynamic target detection
CN113436128B (en) * 2021-07-23 2022-12-06 山东财经大学 Dual-discriminator multi-mode MR image fusion method, system and terminal
CN115018728A (en) * 2022-06-15 2022-09-06 济南大学 Image fusion method and system based on multi-scale transformation and convolution sparse representation
CN115100172A (en) * 2022-07-11 2022-09-23 西安邮电大学 Fusion method of multi-modal medical images

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114494093A (en) * 2022-01-17 2022-05-13 广东工业大学 Multi-modal image fusion method
CN115222725A (en) * 2022-08-05 2022-10-21 兰州交通大学 NSCT domain-based PCRGF and two-channel PCNN medical image fusion method

Also Published As

Publication number Publication date
CN116342444A (en) 2023-06-27

Similar Documents

Publication Publication Date Title
Lahoud et al. Zero-learning fast medical image fusion
CN116342444B (en) Dual-channel multi-mode image fusion method and electronic equipment
Deng et al. An edge detection approach of image fusion based on improved Sobel operator
CN106023200A (en) Poisson model-based X-ray chest image rib inhibition method
Dolui et al. A new similarity measure for non-local means filtering of MRI images
Lu et al. Nonlocal Means‐Based Denoising for Medical Images
US20150371372A1 (en) System and method for medical image quality enhancement using multiscale total variation flow
CN116630762B (en) Multi-mode medical image fusion method based on deep learning
CN111815766A (en) Processing method and system for reconstructing blood vessel three-dimensional model based on 2D-DSA image
Bhateja et al. An improved medical image fusion approach using PCA and complex wavelets
Rajalingam et al. Review of multimodality medical image fusion using combined transform techniques for clinical application
Jeevakala Sharpening enhancement technique for MR images to enhance the segmentation
Dogra et al. Multi-modality medical image fusion based on guided filter and image statistics in multidirectional shearlet transform domain
El-Shafai et al. Traditional and deep-learning-based denoising methods for medical images
Sahu et al. MRI de-noising using improved unbiased NLM filter
Nageswara Reddy et al. BRAIN MR IMAGE SEGMENTATION BY MODIFIED ACTIVE CONTOURS AND CONTOURLET TRANSFORM.
Zhang et al. Multi-resolution depth image restoration
Kahol et al. Deep learning-based multimodal medical image fusion
CN108961171B (en) Mammary gland DTI image denoising method
Al-Dabbas et al. Medical image enhancement to extract brain tumors from CT and MRI images
Wang et al. Retracted: Complex image denoising framework with CNN‐wavelet under concurrency scenarios for informatics systems
Qiu et al. A despeckling method for ultrasound images utilizing content-aware prior and attention-driven techniques
Brindha et al. Fusion of radiological images of glioblastoma multiforme using weighted average and maximum selection method
Dhore et al. Chest x-ray segmentation using watershed and super pixel segmentation technique
Gautam et al. Implementation of NLM and PNLM for de-noising of MRI images

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant