CN103606138A - Fusion method of medical images based on texture region division - Google Patents
- Publication number
- CN103606138A CN103606138A CN201310379493.XA CN201310379493A CN103606138A CN 103606138 A CN103606138 A CN 103606138A CN 201310379493 A CN201310379493 A CN 201310379493A CN 103606138 A CN103606138 A CN 103606138A
- Authority
- CN
- China
- Prior art keywords
- images
- image
- pcnn
- texture
- fusion
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Landscapes
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The invention discloses a fusion method for medical images based on texture region division. Addressing the shortcomings of existing fusion techniques, the method fuses CT and MR multimodal medical images. It uses multi-feature information as the clustering basis and applies the K-means clustering algorithm to segment and extract corresponding feature points from the source images. A feature point set of the multimodal medical images is built by classification and merging, and the images are divided into texture regions and non-texture regions according to the distribution of the feature points. The coefficients of the texture regions are fed into a PCNN to obtain fire maps, and fusion coefficients are selected according to the firing counts; the coefficients of the non-texture regions are fused by a dual-channel PCNN. Experimental results show that the method partitions image texture regions accurately and exploits the respective strengths of the PCNN and the dual-channel PCNN in selecting coefficients for different image regions; the fused image has clear texture and improved quality.
Description
Technical field
The present invention relates to a method in the field of image processing, specifically a medical image fusion method based on multiple cluster centers and texture region division.
Background technology
Medical image fusion is an important branch of the image fusion field and is currently both a difficult problem and an active research topic. Medical image fusion combines different types of image data of the same organ acquired by different classes of medical devices (such as CT, MR and PET images). The fused image contains richer useful information than any single image, which facilitates subsequent diagnosis and treatment by doctors and therefore has strong practical value.
According to the way the image data are processed, image fusion is divided into three levels: pixel level, feature level and decision level. Pixel-level fusion is the most widely used; it operates on pixels directly, but its fusion decisions take little account of the correlation between pixels. Feature-level fusion extracts feature information from the images using mathematical statistics and then analyzes and processes it comprehensively. Decision-level fusion performs further abstraction on the extracted feature information and provides the basis for subsequent decisions.
Medical images have low contrast, severe noise and poor imaging quality. These characteristics limit the application of pixel-level fusion methods in medical image fusion and degrade the quality of the fused image.
Summary of the invention
The technical problem to be solved by the present invention is to overcome the defects of the prior art by providing a fusion method for medical images based on texture region division. Compared with pixel-level fusion, the region-based feature-level fusion considers the correlation between neighboring pixels, highlights regional characteristics, reduces the interference of noise with important information such as texture, effectively protects the texture information of the image, and can extract more useful information.
To solve the above problem, the present invention adopts the following technical scheme:
The invention provides a fusion method for medical images based on texture region division, comprising the following steps:
1. Compute the mean, standard deviation, entropy and maximum gradient value of each source image as initial cluster centers, keeping the objective image-quality evaluation indexes produced for the cluster centers of the two images consistent;
2. Cluster the two source images with these cluster centers using the K-means clustering algorithm to obtain feature space vectors;
3. Extract the feature distribution regions of each image according to the feature space vectors; compare the corresponding regions of the two images, set a threshold T, extract the positions where the coefficients of both images exceed the threshold, segment out the corresponding regions by class, and define them as texture regions;
4. Feed the texture-region pixel values of each image into a PCNN to obtain its fire map, and take the pixel with the larger firing count in the two images as the fusion coefficient of the fused image; fuse the non-texture-region pixel values with a dual-channel PCNN;
5. Obtain the fused image from the fusion coefficients.
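Step 3's texture/non-texture split can be sketched in Python as follows. This is a minimal illustration: the toy feature maps and the default threshold choice are assumptions for the sketch, not the patent's exact computation.

```python
import numpy as np

def texture_mask(feat_a, feat_b, T):
    """Label as texture the positions where the feature responses of
    BOTH source images exceed the threshold T (step 3)."""
    return (feat_a > T) & (feat_b > T)

# toy feature maps for two registered source images (hypothetical values)
fa = np.array([[10.0, 100.0], [100.0, 10.0]])
fb = np.array([[10.0, 100.0], [10.0, 100.0]])
T = 0.5 * fa.mean()               # e.g. half the mean response
mask = texture_mask(fa, fb, T)    # True only where both images exceed T
```

Pixels inside the mask would then be routed to the PCNN branch and the remaining pixels to the dual-channel PCNN branch.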
Addressing the deficiencies of existing fusion technology, the present invention fuses CT and MR multimodal medical images. It takes multi-feature information as the clustering basis and uses K-means clustering to segment and extract corresponding feature points from the source images; the feature point set of the multimodal medical images is built by classification and merging, and the images are divided into texture and non-texture regions according to the feature-point distribution. Texture-region coefficients are fed into a PCNN to obtain fire maps, and fusion coefficients are selected by firing count; non-texture-region coefficients are fused by a dual-channel PCNN. Experimental results show that the method partitions image texture regions accurately and uses the PCNN and the dual-channel PCNN to select coefficients in different image regions according to their respective strengths; the fused image has clear texture and improved quality.
Brief description of the drawings
Fig. 1 is a schematic diagram of the effect of an embodiment of the present invention.
In the figure: (a) is the CT image, (b) is the MR image, (c) is the result of the embodiment, (d) is the fusion result based on the Laplacian pyramid transform (Lap), and (e) is the fusion result based on the discrete wavelet transform (DWT).
Embodiment
The following describes an embodiment of the invention in detail. The embodiment is implemented on the premise of the technical solution of the present invention, and a detailed implementation and concrete operating process are given, but the protection scope of the present invention is not limited to the following embodiment.
The present embodiment comprises the following steps:
The first step: compute the mean, standard deviation, entropy and maximum gradient value of each source image as initial cluster centers, keeping the objective image-quality evaluation indexes produced for the cluster centers of the two images consistent;
The second step: cluster the two source images with these cluster centers using the K-means clustering algorithm to obtain feature space vectors;
The K-means clustering algorithm divides n samples into K clusters J = {J_1, J_2, ..., J_K}; samples within a cluster have high similarity, while samples of different clusters differ markedly. Let Arg = {Arg_1, Arg_2, ..., Arg_K} be the corresponding K class centers, where Arg_k is the mean of the samples in the k-th cluster J_k; each cluster can be represented by its class prototype. The K-means algorithm partitions the data by minimizing the within-class sum-of-squared-error criterion, whose objective function is defined as:

J_k = { m_x ∈ M : k = argmin_j || m_x − Arg_j ||² }   (2)

For two-dimensional data such as images, the K-means algorithm can be described as follows: take an image as the training sample R, let I_xy denote any pixel of the image, and cluster the sample R into K clusters.
The K-means clustering algorithm mainly comprises the following steps:
1. Initialization: randomly choose K cluster centers;
2. Sample assignment: compute the Euclidean distance from each pixel to each class center and assign each sample to its nearest class;
3. Update: recompute the center of each new cluster;
4. Repeat steps 2 and 3 until the criterion function converges and the cluster centers no longer change, yielding K clusters.
The criterion function here is the sum-of-squared-error criterion. Its physical meaning is that the similarity between pixels is expressed by the distance between them: the smaller the distance, the smaller the difference between the pixels; the larger the distance, the larger the difference.
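The four steps above can be sketched as a minimal K-means loop. This illustration clusters 1-D intensity samples with fixed initial centers; the patent's multi-feature vectors and quality-index-based initialization are omitted.

```python
import numpy as np

def kmeans(samples, centers, max_iter=100, tol=1e-6):
    """Minimal K-means: assign each sample to the nearest center,
    then recompute centers, until the centers stop moving."""
    samples = np.asarray(samples, dtype=float)
    centers = np.asarray(centers, dtype=float)
    for _ in range(max_iter):
        # distance of every sample to every center
        d = np.abs(samples[:, None] - centers[None, :])
        labels = np.argmin(d, axis=1)              # nearest-class assignment
        new_centers = np.array([
            samples[labels == k].mean() if np.any(labels == k) else centers[k]
            for k in range(len(centers))
        ])
        if np.max(np.abs(new_centers - centers)) < tol:   # criterion converged
            break
        centers = new_centers
    return labels, centers

# toy "image": two intensity populations
img = np.array([10, 12, 11, 200, 205, 198], dtype=float)
labels, centers = kmeans(img, centers=[0.0, 255.0])
```

The loop terminates when the centers stop moving, which is the convergence condition of step 4.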
3. Extract the feature distribution regions of each image according to the feature space vectors; compare the corresponding regions of the two images and set a threshold T (in this embodiment T is set to half of the mean gray level of the image); extract the positions where the coefficients of both images exceed the threshold T, segment out the corresponding regions by class, and define them as texture regions;
4. Feed the texture-region pixel values of each image into a PCNN to obtain its fire map, and take the pixel with the larger firing count in the two images as the fusion coefficient of the fused image; fuse the non-texture-region pixel values with a dual-channel PCNN;
The concrete steps of the iteration comprise:
B) Iterative operation: input the decomposition coefficients into the network and, passing through the receptive field, the modulation domain and the pulse-generation domain, compute U_xy[n] and θ_xy[n] node by node; comparing their magnitudes determines whether a firing event occurs. Specifically:
Each neuron in the corresponding PCNN iteration consists of a receptive field, a modulation domain and a pulse-generation domain:

Receptive field:
F_xy[n] = S_xy   (3)
L_xy[n] = V_L Σ_kl W_xy,kl Y_kl[n−1]   (4)

Modulation domain:
U_xy[n] = F_xy[n] (1 + β L_xy[n])   (5)

Pulse-generation domain:
Y_xy[n] = 1 if U_xy[n] > θ_xy[n−1], otherwise 0   (6)
θ_xy[n] = e^(−α_θ) θ_xy[n−1] + V_θ Y_xy[n]   (7)

In the formulas, x and y are the horizontal and vertical coordinates of each pixel of the image. S_xy is the input stimulus, which in general may be the normalized gray value of the pixel, or the Laplacian energy, gradient energy or spatial frequency of the decomposition coefficients. n is the iteration number, F_xy[n] is the feedback channel input, W_xy,kl is the synaptic connection weight, V_L is a normalization constant, U_xy[n] is the internal activity item of the neuron, β is the linking strength, Y_xy[n] is the neuron's pulse output with value 0 or 1, θ_xy[n] is the dynamic threshold, and α_θ and V_θ are constants regulating the corresponding formulas. If U_xy[n] > θ_xy[n−1], the neuron produces a pulse, called one firing. In practice, after N iterations the total firing count of each neuron is used to represent the information at the corresponding point of the image. After the PCNN firing, the fire map composed of the total firing counts of the neurons is taken as the output of the PCNN.
The dual-channel PCNN is an improved form of the PCNN. Each neuron in the corresponding iteration likewise consists of a receptive field, a modulation domain and a pulse-generation domain:

Receptive field:
F_xy^A[n] = S_xy^A   (8)
F_xy^B[n] = S_xy^B   (9)

Modulation domain:
U_xy[n] = (1 + β_xy^A F_xy^A[n]) (1 + β_xy^B F_xy^B[n])   (10)

Pulse-generation domain:
Y_xy[n] = 1 if U_xy[n] ≥ θ_xy[n−1], otherwise 0   (11)
θ_xy[n] = e^(−α_θ) θ_xy[n−1] + V_θ Y_xy[n]   (12)

Here F_xy^A[n] and F_xy^B[n] are the feedback inputs of the two channels of the (x, y)-th neuron, S_xy^A and S_xy^B are the external stimulus inputs, θ_xy[n] is the neuron's dynamic threshold, α_θ is a time constant, V_θ is a normalization constant, U_xy[n] is the internal activity item, β_xy^A and β_xy^B are the weight coefficients of F_xy^A[n] and F_xy^B[n] respectively, Y_xy[n] is the output of the (x, y)-th neuron, and n is the iteration number.
The receptive field accepts external inputs from the two channels, corresponding to the two source images; the two quantities are combined in the modulation part to produce the internal activity item U_xy[n]. U_xy[n] is input to the pulse-generation part, which produces the neuron's pulse output Y_xy[n]. In the pulse-generation domain, when U_xy[n] ≥ θ_xy[n−1] the neuron is activated and outputs a pulse; at the same time θ_xy[n] is rapidly raised through the feedback, and the next iteration proceeds. When U_xy[n] < θ_xy[n−1], the pulse generator closes and stops producing pulses. Thereafter the threshold decays exponentially, and when U_xy[n] ≥ θ_xy[n−1] again the pulse generator reopens and a new iteration cycle begins.
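Collapsing the dual-channel neuron to its modulation-and-select essence gives the sketch below, based on equation (10) of the reconstruction above with uniform β weights. This is a one-step sketch under those assumptions, not the patent's full iterative network.

```python
import numpy as np

def dual_channel_select(SA, SB, beta_a=0.5, beta_b=0.5):
    """Per-pixel channel factors of the internal activity of a
    dual-channel PCNN neuron; the stimulus whose channel factor
    dominates is kept (betas are illustrative assumptions)."""
    UA = 1.0 + beta_a * SA    # channel-A factor of U, cf. eq. (10)
    UB = 1.0 + beta_b * SB    # channel-B factor of U, cf. eq. (10)
    return np.where(UA >= UB, SA, SB)

a = np.array([[0.8, 0.2], [0.5, 0.9]])   # non-texture coefficients, image A
b = np.array([[0.3, 0.7], [0.6, 0.1]])   # non-texture coefficients, image B
fused = dual_channel_select(a, b)
```

With equal β weights this reduces to a per-pixel maximum; unequal weights would bias the selection toward one channel, which is how the β map computed from the 3×3 neighborhood steers the fusion.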
C) Iteration stopping criterion: the current iteration is complete once all decomposition coefficients have been computed.
The pulse generator determines firing events according to the current threshold, and the number of firing neurons is recorded after each iteration. The iteration stops when the iteration count reaches N, where N is the number of iterations set in the network. Then the fusion coefficients are determined:
Let F(x, y) = U_xy[N], where F(x, y) denotes the sub-band coefficient of the fused image, U_xy[N] the internal activity item, and (x, y) the pixel located at row x and column y of the image, with 1 ≤ x ≤ P and 1 ≤ y ≤ Q, P being the total number of rows and Q the total number of columns of the image.
The normalized F(x, y) gives the corresponding fusion coefficient. Because some values of U_xy[N] may exceed the dynamic range of the image and cannot be used directly as output image data, the values of U_xy[N] are normalized to [0, 1].
The fusion rules involved in the present invention are:
A. Selecting fusion coefficients by PCNN
Using the firing count produced by the neuron mapped to each pixel as the selection index, the fusion coefficient is chosen from the corresponding positions of the two images;
B. Selecting fusion coefficients by dual-channel PCNN
The dual-channel PCNN improves the feature selection of the PCNN in the darker regions of medical images. Compared with the traditional single-channel PCNN, the dual-channel PCNN is formed by two simplified PCNNs in parallel. First, within the 3×3 neighborhood centered at pixel A(x, y), the sums of any 3 points are compared with the sums of any other 3 points to obtain the minimum and maximum values; the difference H between the maximum and the minimum is then processed to obtain the β value at A(x−1, y−1). The internal activity item U_xy[n] of the neuron in the two channels controls the firing state of the pixel; accordingly, the pixel with the larger U_xy[n] in the two images is selected as the pixel of the fused image.
U_xy[n] in the PCNN is computed according to formulas (5)-(7), and U_xy[n] in the dual-channel PCNN according to formulas (10)-(12).
The selection rule for the fusion coefficients is as follows: let C_F denote the fusion coefficient and C_A and C_B the corresponding coefficients of source images A and B; then C_F(x, y) = C_A(x, y) if the firing count of A at (x, y) is not smaller than that of B, and C_F(x, y) = C_B(x, y) otherwise.
5. Obtain the fused image from the fusion coefficients.
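The texture-region selection of rule A above reduces to a per-pixel comparison of total firing counts; a minimal sketch (the tie-breaking in favor of image A is an assumption):

```python
import numpy as np

def select_by_firing(coef_a, coef_b, fires_a, fires_b):
    """Rule A: at each position, take the source coefficient whose PCNN
    neuron fired more often; ties go to image A (an assumption)."""
    return np.where(fires_a >= fires_b, coef_a, coef_b)

ca = np.array([1.0, 2.0, 3.0])   # texture-region coefficients, image A
cb = np.array([9.0, 8.0, 7.0])   # texture-region coefficients, image B
fa = np.array([5, 2, 4])         # total firing counts, image A
fb = np.array([3, 6, 4])         # total firing counts, image B
cf = select_by_firing(ca, cb, fa, fb)
```

The fused image is then reassembled from these selected coefficients together with the dual-channel PCNN output for the non-texture regions.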
Fig. 1 is a schematic diagram of the effect of this embodiment of the present invention.
In the figure: (a) is the CT image, (b) is the MR image, (c) is the result of the embodiment, (d) is the fusion result based on the Laplacian pyramid transform (Lap), and (e) is the fusion result based on the discrete wavelet transform (DWT).
In summary, the comparison in Fig. 1 shows that this method better fuses the respective information of the source images: it not only effectively enriches the background information of the image, but also preserves the details in the image to the greatest extent, conforming to the visual characteristics of the human eye. In terms of faithfulness of the fused image to the true information of the source images, the method of the invention is significantly better than the fusion results based on the Laplacian pyramid, the discrete wavelet transform, principal component analysis (PCA) and the FSD pyramid.
Table 1 evaluates the quality of the fused images obtained by the different fusion methods using Q^{AB/F} and mutual information (MI). Q^{AB/F} measures how rich the edge information in the fused image is, and MI measures how much of the source-image information the fused image contains. The data in Table 1 show that this method clearly improves on the other methods in both the Q^{AB/F} and the MI index, indicating that the fused image generated by this method has larger local gradients, a more dispersed gray-level distribution, richer image texture, more prominent details and a better fusion result.
Table 1: comparison of the objective evaluation indexes of the fused images
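The MI index used in Table 1 can be estimated from the joint gray-level histogram of two images. A minimal sketch follows (base-2 logarithm, so MI is in bits; the bin count is an illustrative assumption):

```python
import numpy as np

def mutual_information(x, y, bins=64):
    """MI between two images, estimated from their joint histogram."""
    hist, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    pxy = hist / hist.sum()               # joint distribution estimate
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y
    nz = pxy > 0                          # skip empty cells (avoid log 0)
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64))
mi_self = mutual_information(img, img)    # image against itself: maximal MI
mi_other = mutual_information(img, rng.integers(0, 256, (64, 64)))
```

For fusion evaluation the overall index is usually the sum of the MI between the fused image and each source image; a higher value means more source information retained.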
Finally, it should be noted that the above embodiment is obviously only an example given to clearly illustrate the present invention and is not a restriction on the embodiments. Those of ordinary skill in the art can make other changes in different forms on the basis of the above description. It is neither necessary nor possible to exhaustively list all embodiments here, and obvious variations or changes derived therefrom remain within the protection scope of the present invention.
Claims (1)
1. A fusion method for medical images based on texture region division, characterized in that the method comprises the following steps:
1) Compute the mean, standard deviation, entropy and maximum gradient value of each source image as initial cluster centers, keeping the objective image-quality evaluation indexes produced for the cluster centers of the two images consistent;
2) Cluster the two source images with these cluster centers using the K-means clustering algorithm to obtain feature space vectors;
3) Extract the feature distribution regions of each image according to the feature space vectors; compare the corresponding regions of the two images, set a threshold T, extract the positions where the coefficients of both images exceed the threshold, segment out the corresponding regions by class, and define them as texture regions;
4) Feed the texture-region pixel values of each image into a PCNN to obtain its fire map, and take the pixel with the larger firing count in the two images as the fusion coefficient of the fused image; fuse the non-texture-region pixel values with a dual-channel PCNN;
5) Obtain the fused image from the fusion coefficients.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310379493.XA CN103606138B (en) | 2013-08-28 | 2013-08-28 | Fusion method of medical images based on texture region division
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310379493.XA CN103606138B (en) | 2013-08-28 | 2013-08-28 | Fusion method of medical images based on texture region division
Publications (2)
Publication Number | Publication Date |
---|---|
CN103606138A true CN103606138A (en) | 2014-02-26 |
CN103606138B CN103606138B (en) | 2016-04-27 |
Family
ID=50124358
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201310379493.XA Expired - Fee Related CN103606138B (en) | 2013-08-28 | 2013-08-28 | Fusion method of medical images based on texture region division
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103606138B (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105427269A (en) * | 2015-12-09 | 2016-03-23 | 西安理工大学 | Medical image fusion method based on WEMD and PCNN |
CN106558043A (en) * | 2015-09-29 | 2017-04-05 | 阿里巴巴集团控股有限公司 | A kind of method and apparatus for determining fusion coefficients |
CN108198184A (en) * | 2018-01-09 | 2018-06-22 | 北京理工大学 | The method and system of contrastographic picture medium vessels segmentation |
CN108629826A (en) * | 2018-05-15 | 2018-10-09 | 天津流形科技有限责任公司 | A kind of texture mapping method, device, computer equipment and medium |
CN109345494A (en) * | 2018-09-11 | 2019-02-15 | 中国科学院长春光学精密机械与物理研究所 | Image interfusion method and device based on potential low-rank representation and structure tensor |
CN110321920A (en) * | 2019-05-08 | 2019-10-11 | 腾讯科技(深圳)有限公司 | Image classification method, device, computer readable storage medium and computer equipment |
CN110365873A (en) * | 2018-03-26 | 2019-10-22 | 株式会社理光 | Image processing apparatus, camera chain, image processing method |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102722877A (en) * | 2012-06-07 | 2012-10-10 | 内蒙古科技大学 | Multi-focus image fusing method based on dual-channel PCNN (Pulse Coupled Neural Network) |
CN103037168A (en) * | 2012-12-10 | 2013-04-10 | 内蒙古科技大学 | Stable Surfacelet domain multi-focus image fusion method based on compound type pulse coupled neural network (PCNN) |
- 2013-08-28: CN CN201310379493.XA patent/CN103606138B/en not_active Expired - Fee Related
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102722877A (en) * | 2012-06-07 | 2012-10-10 | 内蒙古科技大学 | Multi-focus image fusing method based on dual-channel PCNN (Pulse Coupled Neural Network) |
CN103037168A (en) * | 2012-12-10 | 2013-04-10 | 内蒙古科技大学 | Stable Surfacelet domain multi-focus image fusion method based on compound type pulse coupled neural network (PCNN) |
Non-Patent Citations (3)
Title |
---|
ZHAOBIN WANG ET AL.: "Medical image fusion using m-PCNN", 《INFORMATION FUSION》 * |
ZHANG Baohua et al.: "Multi-focus image fusion method in the Surfacelet domain based on a compound excitation model", Opto-Electronic Engineering *
SU Dongxue et al.: "Image fusion method based on multi-feature fuzzy clustering", Journal of Computer-Aided Design & Computer Graphics *
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106558043A (en) * | 2015-09-29 | 2017-04-05 | 阿里巴巴集团控股有限公司 | A kind of method and apparatus for determining fusion coefficients |
CN106558043B (en) * | 2015-09-29 | 2019-07-23 | 阿里巴巴集团控股有限公司 | A kind of method and apparatus of determining fusion coefficients |
CN105427269A (en) * | 2015-12-09 | 2016-03-23 | 西安理工大学 | Medical image fusion method based on WEMD and PCNN |
CN108198184A (en) * | 2018-01-09 | 2018-06-22 | 北京理工大学 | The method and system of contrastographic picture medium vessels segmentation |
CN108198184B (en) * | 2018-01-09 | 2020-05-05 | 北京理工大学 | Method and system for vessel segmentation in contrast images |
CN110365873A (en) * | 2018-03-26 | 2019-10-22 | 株式会社理光 | Image processing apparatus, camera chain, image processing method |
CN108629826A (en) * | 2018-05-15 | 2018-10-09 | 天津流形科技有限责任公司 | A kind of texture mapping method, device, computer equipment and medium |
CN109345494A (en) * | 2018-09-11 | 2019-02-15 | 中国科学院长春光学精密机械与物理研究所 | Image interfusion method and device based on potential low-rank representation and structure tensor |
CN109345494B (en) * | 2018-09-11 | 2020-11-24 | 中国科学院长春光学精密机械与物理研究所 | Image fusion method and device based on potential low-rank representation and structure tensor |
CN110321920A (en) * | 2019-05-08 | 2019-10-11 | 腾讯科技(深圳)有限公司 | Image classification method, device, computer readable storage medium and computer equipment |
Also Published As
Publication number | Publication date |
---|---|
CN103606138B (en) | 2016-04-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN103606138A (en) | Fusion method of medical images based on texture region division | |
Song et al. | Saliency detection for strip steel surface defects using multiple constraints and improved texture features | |
Laibacher et al. | M2u-net: Effective and efficient retinal vessel segmentation for real-world applications | |
Yan et al. | Automatic photo adjustment using deep neural networks | |
CN105184309B (en) | Classification of Polarimetric SAR Image based on CNN and SVM | |
Cai et al. | IterDANet: Iterative intra-domain adaptation for semantic segmentation of remote sensing images | |
Laibacher et al. | M2U-Net: Effective and efficient retinal vessel segmentation for resource-constrained environments | |
Alata et al. | Choice of a pertinent color space for color texture characterization using parametric spectral analysis | |
CN106462724A (en) | Methods and systems for verifying face images based on canonical images | |
CN105138993A (en) | Method and device for building face recognition model | |
CN104268593A (en) | Multiple-sparse-representation face recognition method for solving small sample size problem | |
Feng et al. | Generative memory-guided semantic reasoning model for image inpainting | |
Liu et al. | A fabric defect detection algorithm via context-based local texture saliency analysis | |
CN103617604A (en) | Image fusion method based on characteristic extraction of two dimension empirical mode decomposition method | |
Tun et al. | Federated learning with intermediate representation regularization | |
Qi et al. | Learning explainable embeddings for deep networks | |
Avi-Aharon et al. | Differentiable histogram loss functions for intensity-based image-to-image translation | |
CN103037168A (en) | Stable Surfacelet domain multi-focus image fusion method based on compound type pulse coupled neural network (PCNN) | |
Alvi et al. | An evolving spatio-temporal approach for gender and age group classification with spiking neural networks | |
Zhu et al. | Exploiting enhanced and robust RGB-D face representation via progressive multi-modal learning | |
Yang et al. | Privileged information-based conditional structured output regression forest for facial point detection | |
Chen et al. | Hyperspectral remote sensing IQA via learning multiple kernels from mid-level features | |
Liu et al. | Domain-Adaptive generative adversarial networks for sketch-to-photo inversion | |
Hu et al. | Illumination robust single sample face recognition based on ESRC | |
Kuznetsov et al. | Deep Learning Based Face Liveliness Detection |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20160427 |
CF01 | Termination of patent right due to non-payment of annual fee |