CN103606138B - Medical image fusion method based on texture region division - Google Patents

Medical image fusion method based on texture region division Download PDF

Info

Publication number
CN103606138B
CN103606138B CN201310379493.XA CN201310379493A
Authority
CN
China
Prior art keywords
image
pcnn
texture region
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201310379493.XA
Other languages
Chinese (zh)
Other versions
CN103606138A (en)
Inventor
张宝华
刘鹤
刘新
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Inner Mongolia University of Science and Technology
Original Assignee
Inner Mongolia University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Inner Mongolia University of Science and Technology filed Critical Inner Mongolia University of Science and Technology
Priority to CN201310379493.XA priority Critical patent/CN103606138B/en
Publication of CN103606138A publication Critical patent/CN103606138A/en
Application granted granted Critical
Publication of CN103606138B publication Critical patent/CN103606138B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Abstract

The invention discloses a medical image fusion method based on texture region division. Addressing the shortcomings of existing fusion techniques, the method fuses CT and MR multimodal medical images. Using multi-feature information as the clustering basis, K-means clustering is applied to segment and extract the corresponding feature points of the source images; the classified feature points are merged to build the feature-point sets of the multimodal medical images; according to the feature-point distribution, each image is divided into texture regions and non-texture regions; the coefficients of the texture regions are fed into a PCNN to obtain firing maps and the fusion coefficients are selected according to the firing counts, while the coefficients of the non-texture regions are fused by a dual-channel PCNN. Experimental results show that the method can accurately partition the texture regions of an image and then exploit the respective advantages of the PCNN and the dual-channel PCNN when selecting coefficients in different image regions; the fused image has clear details and improved quality.

Description

Medical image fusion method based on texture region division
Technical field
The present invention relates to the technical field of image processing, and specifically to a medical image fusion method based on multiple cluster centres and texture region division.
Background technology
Medical image fusion is an important branch of the image fusion field and is currently both a difficult problem and a research focus. Medical image fusion integrates image data of different types (e.g. CT, MR and PET) acquired for the same organ by different kinds of medical equipment. The fused image contains richer useful information than any single image, which facilitates subsequent diagnosis and treatment by physicians and therefore has strong practical value.
According to the level at which the image data are processed, image fusion can be divided into three levels: pixel level, feature level and decision level. Pixel-level fusion is the most widely used and operates most directly on the pixels, but the correlation between pixels is seldom considered in the fusion decision. Feature-level fusion extracts feature information from the images using statistical measures and then processes and analyses it comprehensively. Decision-level fusion performs further abstraction on the basis of the extracted feature information and provides a basis for the next decision-making step.
Medical images typically have low contrast, severe noise and poor imaging quality. These characteristics limit the application of pixel-level fusion methods in medical image fusion and reduce the quality of the fused image.
Summary of the invention
The technical problem to be solved by the present invention is to overcome the defects of the prior art and to provide a medical image fusion method based on texture region division. Compared with pixel-level fusion, the region-based feature-level fusion considers the correlation between neighbouring pixels, highlights regional characteristics, reduces the interference of noise with important information such as texture, effectively protects the texture information of the image, and can extract more useful information.
To solve the above problem, the present invention adopts the following technical scheme:
The invention provides a medical image fusion method based on texture region division, comprising the following steps:
1. For each source image, compute the mean, standard deviation, entropy and maximum gradient value, and use these statistics as the initial cluster centres; the cluster centres of the two images are thus derived on the same basis as the objective image-quality evaluation indices;
2. Cluster the two source images with the K-means clustering algorithm using these cluster centres, obtaining the feature space vectors;
3. Extract the feature distribution regions of each image from the feature space vectors, compare the corresponding regions of the two images, set a threshold T, record the positions where the coefficients of both images exceed the threshold, and segment out the corresponding regions accordingly; these are defined as texture regions;
4. Feed the pixel values of the texture regions into a PCNN to obtain the respective firing maps, and take the pixel with the larger firing count in the two images as the fusion coefficient of the fused image; the pixel values of the non-texture regions are fused by a dual-channel PCNN;
5. Obtain the fused image from the fusion coefficients.
Addressing the shortcomings of existing fusion techniques, the present invention fuses CT and MR multimodal medical images. With multi-feature information as the clustering basis, K-means clustering is used to segment and extract the corresponding feature points of the source images; the classified feature points are merged to build the feature-point sets of the multimodal medical images; according to the feature-point distribution, each image is divided into texture regions and non-texture regions; the coefficients of the texture regions are fed into a PCNN to obtain firing maps and the fusion coefficients are selected according to the firing counts, while the coefficients of the non-texture regions are fused by a dual-channel PCNN. Experimental results show that the method can accurately partition the texture regions of an image and then exploit the respective advantages of the PCNN and the dual-channel PCNN when selecting coefficients in different image regions; the fused image has clear details and improved quality.
Accompanying drawing explanation
Fig. 1 shows the results of an embodiment of the present invention.
In the figure: (a) is the CT image, (b) is the MR image, (c) is the result of the embodiment, (d) is the fusion result based on the Laplacian pyramid transform (Lap), and (e) is the fusion result based on the discrete wavelet transform (DWT).
Embodiment
An embodiment of the invention is elaborated below. The embodiment is implemented on the premise of the technical solution of the invention, and a detailed implementation and concrete operating procedure are given, but the protection scope of the invention is not limited to the following embodiment.
The present embodiment comprises the following steps:
The first step: for each source image, compute the mean, standard deviation, entropy and maximum gradient value, and use these statistics as the initial cluster centres; the cluster centres of the two images are thus derived on the same basis as the objective image-quality evaluation indices;
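The following is a minimal sketch (not part of the patent text) of how these per-image statistics could be computed; it assumes grayscale images stored as NumPy arrays with grey levels in [0, 255], and the function name is illustrative.

```python
import numpy as np

def initial_cluster_features(img):
    """Compute the statistics used as initial cluster centres:
    mean, standard deviation, entropy and maximum gradient magnitude."""
    img = img.astype(np.float64)
    mean = img.mean()
    std = img.std()

    # Shannon entropy of the grey-level histogram
    hist, _ = np.histogram(img, bins=256, range=(0, 255), density=True)
    hist = hist[hist > 0]
    entropy = -np.sum(hist * np.log2(hist))

    # Maximum gradient magnitude from simple finite differences
    gy, gx = np.gradient(img)
    max_grad = np.sqrt(gx ** 2 + gy ** 2).max()

    return np.array([mean, std, entropy, max_grad])
```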
Second step: cluster the two source images with the K-means clustering algorithm using these cluster centres, obtaining the feature space vectors;
The K-means clustering algorithm divides N samples into K clusters J = {J_1, J_2, ..., J_K}; samples within a cluster have high similarity, while samples in different clusters differ markedly. Let Arg = {Arg_1, Arg_2, ..., Arg_K} be the class centres of the K classes, where Arg_k is the mean of the samples in the k-th cluster J_k, so that each cluster can be represented by its class prototype. The K-means clustering algorithm partitions the data by minimising a within-class sum-of-squared-error criterion function, whose objective function is defined as follows:
$$T(M,J)=\sum_{k=1}^{K}\sum_{m_x\in J_k}\left\|m_x-Arg_k\right\|^{2}\qquad(1)$$
$$J_k=\left\{m_x\in M\;\middle|\;k=\arg\min_{j}\left\|m_x-Arg_j\right\|^{2}\right\}\qquad(2)$$
$$Arg_k=\frac{\sum_{m_x\in J_k}m_x}{\left|J_k\right|}\qquad(3)$$
For two-dimensional data such as an image, the K-means clustering algorithm can be described as follows: take an image as the training sample R, let I_xy denote an arbitrary pixel of the image, and cluster the sample R into K clusters.
The K-means clustering algorithm mainly comprises the following steps:
1. Initialisation: randomly select K cluster centres;
2. Sample assignment: compute the Euclidean distance from each pixel to each class centre and assign each sample to the nearest class;
3. Update: recompute the centre of each new cluster;
4. Repeat steps 2 and 3 until the criterion function converges and the cluster centres no longer change, thus obtaining the K clusters.
The criterion function here is defined as the sum-of-squared-error criterion function. Its physical meaning is that the similarity between pixels is represented by the distance between them: the smaller the distance, the smaller the difference between the pixels; the larger the distance, the larger the difference.
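As an illustration of the steps above, here is a minimal sketch (not from the patent) of K-means on the grey values of a single image, using the statistics from the first step as initial cluster centres; the function name and the one-dimensional distance on grey values are simplifying assumptions.

```python
import numpy as np

def kmeans_image(img, init_centers, n_iter=50, tol=1e-4):
    """Plain K-means on the grey values of an image, mirroring steps 1-4 above.
    `init_centers` is a 1-D array of K initial cluster centres; returns the
    label map and the final centres."""
    pixels = img.astype(np.float64).ravel()
    centers = np.asarray(init_centers, dtype=np.float64).copy()

    for _ in range(n_iter):
        # Step 2: assign each pixel to the nearest centre (1-D Euclidean distance)
        dist = np.abs(pixels[:, None] - centers[None, :])
        labels = dist.argmin(axis=1)

        # Step 3: recompute each cluster centre
        new_centers = np.array([
            pixels[labels == k].mean() if np.any(labels == k) else centers[k]
            for k in range(len(centers))
        ])

        # Step 4: stop when the centres no longer change
        if np.max(np.abs(new_centers - centers)) < tol:
            centers = new_centers
            break
        centers = new_centers

    return labels.reshape(img.shape), centers
```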
3. Extract the feature distribution regions of each image from the feature space vectors and compare the corresponding regions of the two images. Set a threshold T (in this embodiment T is set to half of the image grey-level mean), record the positions where the coefficients of both images exceed T, and segment out the corresponding regions accordingly; these are defined as the texture regions;
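The following sketch illustrates one way this texture/non-texture division could be realised, assuming per-pixel coefficient maps produced by the clustering step; whether T is shared by both images or computed per image is not spelled out above, so a per-image threshold is used here as an assumption.

```python
import numpy as np

def texture_region_mask(coef_a, coef_b, img_a, img_b):
    """Return a boolean mask of the texture region: positions where the
    coefficients of BOTH images exceed the threshold T, with T taken as
    half of each image's grey-level mean (as in this embodiment)."""
    T_a = img_a.mean() / 2.0
    T_b = img_b.mean() / 2.0
    return (coef_a > T_a) & (coef_b > T_b)
```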
4. Feed the pixel values of the texture regions into a PCNN to obtain the respective firing maps, and take the pixel with the larger firing count in the two images as the fusion coefficient of the fused image; the pixel values of the non-texture regions are fused by a dual-channel PCNN;
The initialisation means that, at the initial moment, every neuron is in the unfired state: F_xy(0)=0, U_xy(0)=0, Y_xy(0)=0, T_xy(0)=0.
The concrete steps of the iteration are as follows:
A) Initial condition: every neuron is in the unfired state, F_xy(0)=0, U_xy(0)=0, Y_xy(0)=0, T_xy(0)=0;
B) Iterative operation: the decomposition coefficients are fed into the network; through the expressions of the receptive field, the modulation field and the pulse-generation field, U_xy(n) and T_xy(n-1) are computed node by node and compared to decide whether a firing event occurs. Specifically:
In the PCNN iteration, each neuron consists of a receptive field, a modulation field and a pulse-generation field:
Receptive field:
$$F_{xy}[n]=e^{-\alpha_F}F_{xy}[n-1]+V_F\sum W_{xy}Y_{xy}[n-1]+S_{xy}\qquad(4)$$
Modulation field:
$$U_{xy}[n]=1+\beta F_{xy}[n]\qquad(5)$$
Pulse-generation field:
$$Y_{xy}[n]=\begin{cases}1, & U_{xy}[n]>T_{xy}[n]\\0, & \text{otherwise}\end{cases}\qquad(6)$$
$$T_{xy}[n]=e^{-\alpha_T}T_{xy}[n-1]+V_T\,Y_{xy}[n]\qquad(7)$$
In these formulas, x and y are the horizontal and vertical coordinates of each pixel. S_xy is the input stimulus, which in general can be the normalised grey value at (x, y) or, for decomposition coefficients, a quantity such as the Laplacian energy, gradient energy or spatial frequency. n is the iteration number, F_xy is the feedback-channel input, W_xy is the synaptic connection weight, V_T is a normalisation constant and U_xy is the neuron's internal activity term. β is the linking strength, and Y_xy is the neuron's pulse output, whose value is 0 or 1. T_xy is the dynamic threshold, and α_T and α_F are constants that regulate the corresponding formulas. If U_xy[n] > T_xy[n], the neuron emits a pulse, which is called one firing. In effect, after n iterations the total firing count of the neuron at (x, y) represents the information at the corresponding position of the image. After PCNN firing, the firing map composed of the neurons' total firing counts is taken as the output of the PCNN.
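The single-channel PCNN iteration of equations (4)-(7) can be sketched as follows; this is an illustrative implementation, not the patent's code. The iteration count and the values of α_T, α_F and V_T follow the initialisation given later (n = 200, α_T = 0.1, α_F = 0, V_T = 25), while V_F, β and the 3×3 linking kernel W are assumed values, since they are not specified here.

```python
import numpy as np
from scipy.ndimage import convolve

def pcnn_firing_map(S, n_iter=200, alpha_F=0.0, alpha_T=0.1,
                    V_F=0.5, V_T=25.0, beta=0.2):
    """Run the simplified PCNN of Eqs. (4)-(7) on the stimulus S (normalised
    to [0, 1]) and return the firing map (total firing count per neuron)."""
    F = np.zeros_like(S, dtype=np.float64)   # feedback input
    T = np.zeros_like(F)                     # dynamic threshold
    Y = np.zeros_like(F)                     # pulse output
    fire_map = np.zeros_like(F)              # accumulated firing counts
    W = np.array([[0.5, 1.0, 0.5],           # illustrative 3x3 linking kernel
                  [1.0, 0.0, 1.0],
                  [0.5, 1.0, 0.5]])

    for _ in range(n_iter):
        F = np.exp(-alpha_F) * F + V_F * convolve(Y, W, mode='constant') + S  # Eq. (4)
        U = 1.0 + beta * F                                                    # Eq. (5)
        Y = (U > T).astype(np.float64)                                        # Eq. (6)
        T = np.exp(-alpha_T) * T + V_T * Y                                    # Eq. (7)
        fire_map += Y

    return fire_map
```

The texture-region coefficients of the fused image can then be selected pixel by pixel from whichever source image yields the larger firing count.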
The dual-channel PCNN is an improved form of the PCNN; in the corresponding iteration each neuron likewise consists of a receptive field, a modulation field and a pulse-generation field:
Receptive field:
$$F^{A}_{xy}[n]=S^{A}_{xy}[n]\qquad(8)$$
$$F^{B}_{xy}[n]=S^{B}_{xy}[n]\qquad(9)$$
Modulation field:
$$U_{xy}[n]=\max\!\left(F^{A}_{xy}[n]\,(1+\beta^{A}_{xy}[n]),\;F^{B}_{xy}[n]\,(1+\beta^{B}_{xy}[n])\right)\qquad(10)$$
Pulse-generation field:
$$Y_{xy}[n]=\begin{cases}1, & U_{xy}[n]>T_{xy}[n]\\0, & \text{otherwise}\end{cases}\qquad(11)$$
$$T_{xy}[n]=e^{-\alpha_T}T_{xy}[n-1]+V_T\,Y_{xy}[n]\qquad(12)$$
Here F^A_xy and F^B_xy are the feedback inputs of the xy-th neuron in the two channels, S^A_xy and S^B_xy are the external stimulus inputs, T_xy is the neuron's dynamic threshold, α_T is a time constant, V_T is a normalisation constant, U_xy is the internal activity term, β^A_xy and β^B_xy are the respective weighting coefficients, Y_xy is the output of the xy-th neuron, and n is the iteration number.
The receptive field accepts external inputs from the two channels, corresponding respectively to the two source images; these two quantities are modulated in the modulation part to produce the internal activity term U_xy. U_xy is fed into the pulse-generation part to produce the neuron's pulse output Y_xy. In the pulse-generation field, when U_xy[n] > T_xy[n-1], the neuron is activated and emits a pulse; at the same time T_xy[n] is raised rapidly through feedback, and the next iteration proceeds. When U_xy[n] ≤ T_xy[n-1], the pulse generator is closed and stops producing pulses. Thereafter the threshold starts to decay exponentially, and when U_xy[n] > T_xy[n-1] again, the pulse generator opens and a new iteration cycle begins.
C) Iteration stopping criterion: after all decomposition coefficients have been processed, the current iteration is complete.
The pulse generator determines firing events according to the current threshold, and the number of neurons that have fired is recorded after each iteration. When the number of iterations reaches N, the iteration stops; N is the number of iterations set for the network. The fusion coefficients are then determined:
Let I_f(x, y) = U_xy(n), where I_f(x, y) denotes the sub-band coefficient of the fused image, U_xy(n) denotes the internal activity term, and (x, y) is the pixel located in row x and column y of the image, with x = 1, 2, ..., P, y = 1, 2, ..., Q, where P is the total number of rows and Q the total number of columns of the image.
The fusion coefficients corresponding to I_f(x, y) are then normalised. Because some values of I_f(x, y) may exceed the dynamic range of the image and cannot be used directly as output image data, the values of I_f(x, y) are normalised to [0, 1].
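A trivial sketch of this normalisation step, assuming I_f is a NumPy array:

```python
import numpy as np

def normalize_coefficients(I_f):
    """Rescale the fused coefficients I_f(x, y) linearly into [0, 1] so they
    can be used as output image data."""
    I_f = I_f.astype(np.float64)
    lo, hi = I_f.min(), I_f.max()
    return np.zeros_like(I_f) if hi == lo else (I_f - lo) / (hi - lo)
```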
The fusion rules used by the present invention are as follows:
A. Fusion coefficient selection by PCNN
The number of firings produced by the neuron onto which a pixel is mapped is used as the index for preferring that pixel, and the fusion coefficient at the corresponding position is selected from the two images;
B. Fusion coefficient selection by dual-channel PCNN
The dual-channel PCNN improves the PCNN's feature selection in the darker regions of medical images. Compared with the traditional single-channel PCNN, the dual-channel PCNN is formed by two simplified PCNNs working in parallel. First, within the 3×3 neighbourhood centred on pixel A(x, y), the sum of any 3 points is compared with the sum of any other 3 points; the minimum and maximum of these differences are obtained, the difference H between the maximum and the minimum is formed, and from H a further computation yields the β value of A(x-1, y-1). The firing state of a pixel is controlled by selecting the neuron's internal activity term U_xy between the two channels; accordingly, the pixel with the larger U_xy in the two images is selected as the pixel of the fused image.
Initialisation: U_xy[0]=0, T_xy[0]=0, Y_xy[0]=0;
n=200, α_T=0.1, α_F=0, V_T=25.
U_xy[n], T_xy[n] and Y_xy[n] of the PCNN are calculated according to formulas (5)-(7), and U_xy[n], T_xy[n] and Y_xy[n] of the dual-channel PCNN are calculated according to formulas (10)-(12).
The selection rule of fusion coefficients is as follows:
$$CogfF_{xy}=\begin{cases}CogfA_{xy}, & \text{if }U_{xy}[n]=F^{A}_{xy}[n]\,(1+\beta^{A}_{xy})\\ CogfB_{xy}, & \text{if }U_{xy}[n]=F^{B}_{xy}[n]\,(1+\beta^{B}_{xy})\end{cases}\qquad(13)$$
CogfF_xy denotes the fusion coefficient, and CogfA_xy and CogfB_xy denote the corresponding coefficients in the source images I_A and I_B respectively.
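A minimal sketch of the dual-channel PCNN fusion of equations (8)-(13) for the non-texture region follows; it is illustrative only. The β maps are passed in as arguments because the formula that derives β from the neighbourhood difference H is not reproduced above, and the stimuli are taken to be the coefficient maps themselves.

```python
import numpy as np

def dual_channel_pcnn_fuse(S_A, S_B, beta_A, beta_B,
                           n_iter=200, alpha_T=0.1, V_T=25.0):
    """Fuse two coefficient maps with the dual-channel PCNN of Eqs. (8)-(13).
    S_A, S_B are the stimuli (here taken to be the coefficient maps of the
    two source images) and beta_A, beta_B are the per-pixel weighting maps."""
    T = np.zeros_like(S_A, dtype=np.float64)   # dynamic threshold, Eq. (12)
    Y = np.zeros_like(T)                       # pulse output, Eq. (11)

    # Eqs. (8)-(9): the feed inputs are simply the external stimuli
    U_A = S_A * (1.0 + beta_A)
    U_B = S_B * (1.0 + beta_B)
    U = np.maximum(U_A, U_B)                   # Eq. (10)

    for _ in range(n_iter):
        Y = (U > T).astype(np.float64)         # Eq. (11)
        T = np.exp(-alpha_T) * T + V_T * Y     # Eq. (12)

    # Eq. (13): take the coefficient of whichever channel won the modulation
    return np.where(U_A >= U_B, S_A, S_B)
```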
5. obtain fused images by fusion coefficients.
Fig. 1 shows the results of an embodiment of the present invention.
In the figure: (a) is the CT image, (b) is the MR image, (c) is the result of the embodiment, (d) is the fusion result based on the Laplacian pyramid transform (Lap), and (e) is the fusion result based on the discrete wavelet transform (DWT).
In summary, the comparison of results in Fig. 1 shows that this method fuses the respective information of the source images better: it not only effectively enriches the background information of the image, but also preserves the details in the image to the greatest extent, which accords with the characteristics of human vision. The fused image is therefore faithful to the real information of the source images, and the method of the invention is significantly better than the fusion results based on the Laplacian pyramid, the discrete wavelet transform, principal component analysis (PCA) and the FSD pyramid.
Table 1 uses Q^{AB/F} and mutual information (MI) to measure the quality of the fused images obtained by the different fusion methods. Q^{AB/F} indicates how rich the edge information of the fused image is, and MI indicates the degree to which the fused image contains the information of the source images. The data in Table 1 show that this method clearly improves on the other methods in both the Q^{AB/F} and the mutual-information indices, indicating that the fused image generated by this method has larger local gradients, a more dispersed grey-level distribution, richer texture and more prominent details, i.e. a better fusion result.
Table 1 Comparison of objective evaluation indices of the fused images
Finally, it should be noted that the above embodiment is obviously only an example given to illustrate the present invention clearly and is not a limitation of the embodiments. For those of ordinary skill in the art, other changes in different forms can be made on the basis of the above description. It is neither necessary nor possible to list all embodiments exhaustively here, and obvious changes or variations derived therefrom still fall within the protection scope of the present invention.

Claims (1)

1. A medical image fusion method based on texture region division, characterised in that the method comprises the following steps:
1) For each source image, compute the mean, standard deviation, entropy and maximum gradient value respectively and use them as the initial cluster centres; the cluster centres of the two images are thereby derived on the same basis as the objective image-quality evaluation indices;
2) Cluster the two source images with the K-means clustering algorithm using these cluster centres, obtaining the feature space vectors;
3) Extract the feature distribution regions of each image from the feature space vectors, compare the corresponding regions of the two images, set a threshold T, record the positions where the coefficients of both images exceed the threshold, and segment out the corresponding regions accordingly; these are defined as texture regions;
4) Feed the pixel values of the texture regions into a PCNN to obtain the respective firing maps, and take the pixel with the larger firing count in the two images as the fusion coefficient of the fused image; the pixel values of the non-texture regions are fused by a dual-channel PCNN;
5) Obtain the fused image from the fusion coefficients.
CN201310379493.XA 2013-08-28 2013-08-28 Medical image fusion method based on texture region division Active CN103606138B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310379493.XA CN103606138B (en) 2013-08-28 2013-08-28 Medical image fusion method based on texture region division

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310379493.XA CN103606138B (en) 2013-08-28 2013-08-28 Medical image fusion method based on texture region division

Publications (2)

Publication Number Publication Date
CN103606138A CN103606138A (en) 2014-02-26
CN103606138B true CN103606138B (en) 2016-04-27

Family

ID=50124358

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310379493.XA Active CN103606138B (en) Medical image fusion method based on texture region division

Country Status (1)

Country Link
CN (1) CN103606138B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106558043B (en) * 2015-09-29 2019-07-23 阿里巴巴集团控股有限公司 A kind of method and apparatus of determining fusion coefficients
CN105427269A (en) * 2015-12-09 2016-03-23 西安理工大学 Medical image fusion method based on WEMD and PCNN
CN108198184B (en) * 2018-01-09 2020-05-05 北京理工大学 Method and system for vessel segmentation in contrast images
JP2019175446A (en) * 2018-03-26 2019-10-10 株式会社リコー Image processing device, imaging system, image processing method, and program
CN108629826A (en) * 2018-05-15 2018-10-09 天津流形科技有限责任公司 A kind of texture mapping method, device, computer equipment and medium
CN109345494B (en) * 2018-09-11 2020-11-24 中国科学院长春光学精密机械与物理研究所 Image fusion method and device based on potential low-rank representation and structure tensor
CN110321920B (en) * 2019-05-08 2021-10-22 腾讯科技(深圳)有限公司 Image classification method and device, computer readable storage medium and computer equipment

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102722877B (en) * 2012-06-07 2014-09-10 内蒙古科技大学 Multi-focus image fusing method based on dual-channel PCNN (Pulse Coupled Neural Network)
CN103037168B (en) * 2012-12-10 2016-12-21 内蒙古科技大学 Steady Surfacelet domain multi-focus image fusing method based on compound PCNN

Also Published As

Publication number Publication date
CN103606138A (en) 2014-02-26

Similar Documents

Publication Publication Date Title
CN103606138B (en) Medical image fusion method based on texture region division
Srinivas et al. Knowledge transfer with jacobian matching
CN105184309B (en) Classification of Polarimetric SAR Image based on CNN and SVM
CN104268593A (en) Multiple-sparse-representation face recognition method for solving small sample size problem
CN105138993A (en) Method and device for building face recognition model
CN106778745A (en) A kind of licence plate recognition method and device, user equipment
Cai et al. Classification complexity assessment for hyper-parameter optimization
Bouchaffra et al. Structural hidden Markov models for biometrics: Fusion of face and fingerprint
Ram et al. Image denoising using nl-means via smooth patch ordering
CN103745205A (en) Gait recognition method based on multi-linear mean component analysis
CN109145974A (en) One kind being based on the matched multi-level image Feature fusion of picture and text
CN107180436A (en) A kind of improved KAZE image matching algorithms
CN106570183A (en) Color picture retrieval and classification method
CN104881682A (en) Image classification method based on locality preserving mapping and principal component analysis
Peer et al. Strategies for exploiting independent cloud implementations of biometric experts in multibiometric scenarios
CN110569882B (en) Image information classification method and device
CN106709566A (en) Deep learning-based data missing value refilling method
Feng et al. Generative memory-guided semantic reasoning model for image inpainting
CN105426836A (en) Single-sample face recognition method based on segmented model and sparse component analysis
CN103927730A (en) Image noise reduction method based on Primal Sketch correction and matrix filling
Yang et al. A multi-domain and multi-modal representation disentangler for cross-domain image manipulation and classification
CN103037168A (en) Stable Surfacelet domain multi-focus image fusion method based on compound type pulse coupled neural network (PCNN)
CN110472495B (en) Deep learning face recognition method based on graphic reasoning global features
CN106354850A (en) Image recognition method based on K-nearest neighbor classification
US9208402B2 (en) Face matching for mobile devices

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant