CN103606138A - Fusion method of medical images based on texture region division - Google Patents

Fusion method of medical images based on texture region division

Info

Publication number
CN103606138A
CN103606138A (application CN201310379493.XA)
Authority
CN
China
Prior art keywords
images
image
pcnn
texture
fusion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201310379493.XA
Other languages
Chinese (zh)
Other versions
CN103606138B (en)
Inventor
张宝华
刘鹤
刘新
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Inner Mongolia University of Science and Technology
Original Assignee
Inner Mongolia University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Inner Mongolia University of Science and Technology filed Critical Inner Mongolia University of Science and Technology
Priority to CN201310379493.XA priority Critical patent/CN103606138B/en
Publication of CN103606138A publication Critical patent/CN103606138A/en
Application granted granted Critical
Publication of CN103606138B publication Critical patent/CN103606138B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a fusion method of medical images based on texture region division. Aiming at the defects of existing fusion technology, the method fuses CT and MR multimodal medical images. Multi-feature information is used as the clustering mode, and a K-means clustering algorithm segments and extracts the corresponding feature points of the source images; the feature point sets of the multimodal medical images are established by classification and merging, and the images are divided into texture regions and non-texture regions according to the distribution of the feature points. The coefficients corresponding to the texture regions are input to a PCNN to obtain firing maps, and the fusion coefficients are selected according to the firing counts; the coefficients of the non-texture regions are fused by a dual-channel PCNN. Experimental results show that the method can partition the texture regions of the images accurately, and by using the PCNN and the dual-channel PCNN to select the coefficients of different image regions it exploits their respective advantages; the fused image has clear texture and improved quality.

Description

Fusion method of medical images based on texture region division
Technical field
The present invention relates to the technical field of image processing, and specifically to a medical image fusion method based on multiple cluster centers and texture region division.
Background art
Medical image fusion is an important branch in image co-registration field, is also difficult point and the focus of research at present.Medical image fusion is, by what derive from that multiclass Medical Devices obtain, same biorgan's dissimilar view data (image such as CT, MR and PET) is carried out to informix utilization, image ratio single image after fusion has comprised abundanter useful information, for follow-up doctor's diagnosis and treatment, provide convenience, there is very strong using value.
Depending on the level at which the image data are processed, image fusion is divided into three levels: pixel level, feature level and decision level. Pixel-level fusion is the most widely used and operates directly on pixels, but it gives little consideration to the correlation between pixels in the fusion decision. Feature-level fusion extracts feature information from the images using statistical measures and processes it comprehensively. Decision-level fusion performs further abstraction on the extracted feature information and provides the basis for subsequent decisions.
Medical images have low contrast, severe noise and poor imaging quality. These characteristics limit the application of pixel-level fusion methods to medical image fusion and reduce the quality of the fused image.
Summary of the invention
The technical problem to be solved by the present invention is to overcome the defects of the prior art and to provide a fusion method of medical images based on texture region division. Compared with pixel-level fusion, the region-based feature-level fusion considers the correlation between neighboring pixels, highlights regional characteristics, reduces the interference of noise with important information such as texture, effectively protects the texture information of the image, and can extract more useful information.
To solve the above problem, the present invention adopts the following technical scheme:
The invention provides a fusion method of medical images based on texture region division, comprising the following steps:
1. Compute the mean, standard deviation, entropy and maximum gradient value of each source image as the initial cluster centers, so that the cluster centers of the two images are generated from the same objective image-quality evaluation indices and are therefore consistent;
2. Cluster the two source images with these cluster centers respectively by the K-means clustering algorithm to obtain the feature space vectors;
3. Extract the feature distribution regions of each image according to the feature space vectors, compare the corresponding regions of the two images, set a threshold T, extract the positions where the coefficients of both images are greater than the threshold, segment the corresponding regions by class, and define them as texture regions;
4. Input the texture-region pixel values into a PCNN to obtain their respective firing maps, take the pixel with the larger firing count in the two images as the fusion coefficient of the fused image, and fuse the non-texture-region pixel values by a dual-channel PCNN;
5. Obtain the fused image from the fusion coefficients.
Aiming at the deficiencies of existing fusion technology, the present invention fuses CT and MR multimodal medical images. Multi-feature information is used as the clustering mode, the K-means clustering algorithm segments and extracts the corresponding feature points of the source images, and the feature point sets of the multimodal medical images are established by classification and merging. According to the distribution of the feature points, the images are divided into texture regions and non-texture regions; the coefficients corresponding to the texture regions are input to a PCNN to obtain firing maps, and the fusion coefficients are selected according to the firing counts, while the coefficients of the non-texture regions are fused by a dual-channel PCNN. Experimental results show that the method partitions the texture regions of the images accurately and then uses the PCNN and the dual-channel PCNN to select the coefficients of different image regions, exploiting their respective advantages; the fused image has clear texture and improved quality.
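To make the five steps concrete, here is a minimal Python/NumPy sketch of the pipeline under simplifying assumptions: the helper names (initial_cluster_centers, kmeans_feature_map, pcnn_firing_map, dual_channel_pcnn_fuse) are hypothetical placeholders (some are sketched in the detailed description below), the per-pixel feature response compared against T is left abstract because the text does not pin down its exact form, and fusion is applied directly to pixel values rather than to transform coefficients.

```python
import numpy as np

def fuse_medical_images(ct_img, mr_img):
    """Sketch of the five fusion steps; helper names are hypothetical placeholders."""
    # Step 1: mean, standard deviation, entropy and maximum gradient act as initial cluster centres
    centers_ct = initial_cluster_centers(ct_img)
    centers_mr = initial_cluster_centers(mr_img)

    # Step 2: K-means clustering of each source image around those centres
    feat_ct = kmeans_feature_map(ct_img, centers_ct)   # per-pixel feature response
    feat_mr = kmeans_feature_map(mr_img, centers_mr)

    # Step 3: positions where both responses exceed T (half of the grey mean) are texture region
    T = 0.5 * ct_img.mean()
    texture_mask = (feat_ct > T) & (feat_mr > T)

    # Step 4: firing counts of a PCNN pick texture-region coefficients;
    #         a dual-channel PCNN fuses the non-texture region
    fused_texture = np.where(pcnn_firing_map(ct_img) >= pcnn_firing_map(mr_img), ct_img, mr_img)
    fused_smooth = dual_channel_pcnn_fuse(ct_img, mr_img)

    # Step 5: combine the per-region fusion coefficients into the fused image
    return np.where(texture_mask, fused_texture, fused_smooth)
```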
Brief description of the drawings
Fig. 1 is a schematic diagram of the effect of the embodiment of the present invention.
In the figure: (a) is the CT image, (b) is the MR image, (c) is the result of the embodiment, (d) is the fusion result based on the Laplacian pyramid transform (Lap), and (e) is the fusion result based on the discrete wavelet transform (DWT).
Detailed description of the embodiments
The embodiments of the present invention are described in detail below. The present embodiment is implemented on the premise of the technical solution of the present invention, and detailed implementation modes and concrete operating procedures are given, but the protection scope of the present invention is not limited to the following embodiment.
The present embodiment comprises the following steps:
The first step: compute the mean, standard deviation, entropy and maximum gradient value of each source image as the initial cluster centers, so that the cluster centers of the two images are generated from the same objective image-quality evaluation indices and are therefore consistent;
The second step: cluster the two source images with the cluster centers respectively by the K-means clustering algorithm to obtain the feature space vectors;
The K-means clustering algorithm partitions n samples into K clusters $J = \{J_1, J_2, \ldots, J_K\}$; samples within a cluster have high similarity, while samples in different clusters differ markedly. Let $Arg = \{Arg_1, Arg_2, \ldots, Arg_K\}$ be the K corresponding class centers, where $Arg_k$ is the mean of the samples in the k-th cluster; each cluster can be represented by its class prototype. The K-means algorithm partitions the data by minimizing the within-class sum-of-squared-errors criterion function, and its objective function is defined as follows:
$$T(M, J) = \sum_{k=1}^{K} \sum_{m_x \in J_k} \lVert m_x - Arg_k \rVert^2 \qquad (1)$$

$$J_k = \{\, m_x \in M \mid k = \arg\min_j \lVert m_x - Arg_j \rVert^2 \,\} \qquad (2)$$

$$Arg_k = \frac{\sum_{m_x \in J_k} m_x}{\lvert J_k \rvert} \qquad (3)$$
For two-dimensional data such as an image, the K-means clustering algorithm can be described as follows: take an image as the training sample R, let any pixel of the image be $I_{xy}$, and cluster the sample R into K clusters.
The K-means clustering algorithm mainly comprises the following steps:
1. Initialization: randomly choose K cluster centers;
2. Sample assignment: compute the Euclidean distance from each pixel to each class center and assign the sample to the nearest class;
3. Update: recompute the center of each new cluster;
4. Repeat steps 2 and 3 until the criterion function converges and the cluster centers no longer change, which yields the K clusters.
The criterion function here is the sum-of-squared-errors criterion. Its physical meaning is that the similarity between pixels is usually represented by the distance between them: the smaller the distance, the smaller the difference between the pixels; the larger the distance, the larger the difference.
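The following is a minimal NumPy sketch of the K-means iteration of equations (1)-(3) and steps 1-4 above; treating the grey value as the per-pixel feature is an assumption made for illustration, and the helpers named in the usage comment (entropy_estimate, max_gradient) are hypothetical.

```python
import numpy as np

def kmeans(samples, init_centers, max_iter=100, tol=1e-6):
    """K-means per equations (1)-(3): assign samples to the nearest centre, then update centres."""
    centers = np.asarray(init_centers, dtype=float)          # K initial cluster centres
    samples = np.asarray(samples, dtype=float).reshape(-1, 1)
    for _ in range(max_iter):
        # step 2: Euclidean distance of every sample to every centre, assign to the nearest (eq. 2)
        dist = np.abs(samples - centers.reshape(1, -1))       # (n, K) distances
        labels = dist.argmin(axis=1)
        # step 3: recompute each cluster centre as the mean of its members (eq. 3)
        new_centers = np.array([samples[labels == k].mean() if np.any(labels == k)
                                else centers[k] for k in range(len(centers))])
        # step 4: stop once the centres (and hence the criterion of eq. 1) no longer change
        if np.max(np.abs(new_centers - centers)) < tol:
            break
        centers = new_centers
    return labels, centers

# usage sketch: cluster the pixels of a source image around statistics-based centres (step 1)
# img = ...                                                   # 2-D array of grey values
# centers0 = [img.mean(), img.std(), entropy_estimate(img), max_gradient(img)]
# labels, centers = kmeans(img.ravel(), centers0)
# feature_map = labels.reshape(img.shape)
```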
3. Extract the feature distribution regions of each image according to the feature space vectors and compare the corresponding regions of the two images. Set a threshold T (in this embodiment T is half of the grey-level mean of the image), extract the positions where the coefficients of both images are greater than the threshold T, segment the corresponding regions by class, and define them as texture regions;
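A minimal sketch of this texture/non-texture partition, assuming the per-pixel feature responses from the clustering step are held in NumPy arrays and taking T as half of the grey-level mean as in this embodiment (which image's mean is used is not stated, so the first source is assumed here):

```python
import numpy as np

def partition_texture_regions(feat_a, feat_b, img):
    """Label as texture region the positions where both feature responses exceed T."""
    T = 0.5 * img.mean()                        # embodiment: half of the grey-level mean
    return (feat_a > T) & (feat_b > T)          # True = texture region, False = non-texture
```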
4. Input the texture-region pixel values into a PCNN to obtain their respective firing maps, take the pixel with the larger firing count in the two images as the fusion coefficient of the fused image, and fuse the non-texture-region pixel values by a dual-channel PCNN;
The initialization means that at the start every neuron is in the unfired (extinguished) state, i.e. its output, linking input, internal activity item and dynamic threshold are all zero: $Y_{xy}[0] = L_{xy}[0] = U_{xy}[0] = \theta_{xy}[0] = 0$.
The concrete steps of the iteration comprise:
A) Starting condition: every neuron is in the unfired state, with $Y_{xy}[0] = L_{xy}[0] = U_{xy}[0] = \theta_{xy}[0] = 0$;
B) Iterative operation: the decomposition coefficients are input to the network; through the expressions of the receptive field, the modulation field and the pulse generation field, $U_{xy}[n]$ and $\theta_{xy}[n]$ are computed node by node and their magnitudes are compared to determine whether a firing event occurs, specifically as follows:
During the iterative operation, the PCNN neuron consists of the receptive field, the modulation field and the pulse generation field:
Receptive field:
$$F_{xy}[n] = S_{xy}, \qquad L_{xy}[n] = e^{-\alpha_L}\, L_{xy}[n-1] + V_L \sum_{kl} W_{xy,kl}\, Y_{kl}[n-1] \qquad (4)$$
Modulation field:
$$U_{xy}[n] = F_{xy}[n]\,\bigl(1 + \beta\, L_{xy}[n]\bigr) \qquad (5)$$
Pulse generation field:
$$Y_{xy}[n] = \begin{cases} 1, & U_{xy}[n] > \theta_{xy}[n-1] \\ 0, & \text{otherwise} \end{cases} \qquad (6)$$
$$\theta_{xy}[n] = e^{-\alpha_\theta}\, \theta_{xy}[n-1] + V_\theta\, Y_{xy}[n] \qquad (7)$$
In these formulas, x and y denote the row and column coordinates of each pixel of the image. $S_{xy}$ denotes the input stimulus, generally the normalized grey value at position (x, y), or the Laplacian energy, gradient energy or spatial frequency of the coefficients after decomposition. n denotes the iteration count, $F_{xy}[n]$ denotes the feedback channel input, W is the synaptic connection weight, V is a normalization constant, $U_{xy}[n]$ denotes the internal activity item of the neuron, $\beta$ denotes the linking strength, $Y_{xy}[n]$ denotes the pulse output of the neuron, whose value is 0 or 1, $\theta_{xy}[n]$ is the dynamic threshold, and $\alpha$ is a constant that adjusts the corresponding formula. If $U_{xy}[n] > \theta_{xy}[n-1]$, the neuron emits a pulse, which is called one firing. In practice, after the iterations are finished, the total firing count of the neuron at (x, y) is used to represent the information at the corresponding image position. After the PCNN firing process, the firing map formed by the total firing counts of the neurons is taken as the output of the PCNN.
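A minimal NumPy/SciPy sketch of the simplified PCNN iteration of equations (4)-(7) and of the firing-map output described above; the parameter values and the 3×3 linking kernel are illustrative assumptions, not values given in the patent.

```python
import numpy as np
from scipy.ndimage import convolve

def pcnn_firing_map(stimulus, n_iter=200, beta=0.2,
                    alpha_L=0.7, alpha_theta=0.2, V_L=1.0, V_theta=20.0):
    """Simplified PCNN: returns the per-pixel total firing count (firing map)."""
    S = stimulus.astype(float)
    S = (S - S.min()) / (S.max() - S.min() + 1e-12)   # normalised grey values as input stimulus
    L = np.zeros_like(S); U = np.zeros_like(S)
    Y = np.zeros_like(S); theta = np.zeros_like(S)
    fire_count = np.zeros_like(S)
    W = np.array([[0.5, 1.0, 0.5],                    # illustrative 3x3 synaptic weight kernel
                  [1.0, 0.0, 1.0],
                  [0.5, 1.0, 0.5]])
    for _ in range(n_iter):
        F = S                                                              # feedback input (eq. 4)
        L = np.exp(-alpha_L) * L + V_L * convolve(Y, W, mode='constant')   # linking input (eq. 4)
        U = F * (1.0 + beta * L)                                           # internal activity (eq. 5)
        Y = (U > theta).astype(float)                                      # pulse output (eq. 6)
        theta = np.exp(-alpha_theta) * theta + V_theta * Y                 # dynamic threshold (eq. 7)
        fire_count += Y                                                    # accumulate the firing map
    return fire_count
```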
The dual-channel PCNN is an improved form of the PCNN. During the iterative operation, its neuron likewise consists of the receptive field, the modulation field and the pulse generation field:
Receptive field:
$$F^A_{xy}[n] = S^A_{xy} \qquad (8)$$
$$F^B_{xy}[n] = S^B_{xy} \qquad (9)$$
Modulation field:
$$U_{xy}[n] = \beta^A_{xy}\, F^A_{xy}[n] + \beta^B_{xy}\, F^B_{xy}[n] \qquad (10)$$
Pulse generation field:
$$Y_{xy}[n] = \begin{cases} 1, & U_{xy}[n] > \theta_{xy}[n-1] \\ 0, & \text{otherwise} \end{cases} \qquad (11)$$
$$\theta_{xy}[n] = e^{-\alpha_\theta}\, \theta_{xy}[n-1] + V_\theta\, Y_{xy}[n] \qquad (12)$$
where $F^A_{xy}[n]$ and $F^B_{xy}[n]$ are the feedback inputs of the (x, y)-th neuron in the two channels, $S^A_{xy}$ and $S^B_{xy}$ are the external stimulus inputs, $\theta_{xy}[n]$ is the dynamic threshold of the neuron, $\alpha_\theta$ is the time constant, $V_\theta$ is a normalization constant, $U_{xy}[n]$ is the internal activity item, $\beta^A_{xy}$ and $\beta^B_{xy}$ are the weight coefficients of $F^A_{xy}[n]$ and $F^B_{xy}[n]$ respectively, $Y_{xy}[n]$ is the output of the (x, y)-th neuron, and n is the iteration count.
The receptive field receives the external inputs from the two channels, which correspond to the two source images respectively; these two quantities are modulated in the modulation part to produce the internal activity item $U_{xy}[n]$. $U_{xy}[n]$ is input to the pulse generation part, which produces the pulse output $Y_{xy}[n]$ of the neuron. In the pulse generation field, when $U_{xy}[n] > \theta_{xy}[n-1]$, the neuron is activated and outputs a pulse; at the same time the threshold is raised rapidly through feedback, and the next iteration proceeds. When $U_{xy}[n] \le \theta_{xy}[n-1]$, the pulse generator is closed and stops producing pulses. Afterwards the threshold starts to decay exponentially; when $U_{xy}[n] > \theta_{xy}[n-1]$ again, the pulse generator opens and a new iteration cycle begins.
C) Iteration stop criterion: the current iteration is complete after all decomposition coefficients have been processed.
The pulse generator determines firing events according to the current threshold, and the number of firing neurons is recorded after each iteration. The iteration stops when the iteration count reaches N, where N is the number of iterations set for the network. The fusion coefficients are then determined:
Let the sub-band coefficient of the fused image at position (x, y) be the internal activity item $U_{xy}$ at that position, where $I_{xy}$ denotes the pixel located at row x and column y of the image, $1 \le x \le P$, $1 \le y \le Q$, P is the total number of rows and Q is the total number of columns of the image. The normalized value of this coefficient is the corresponding fusion coefficient: because some of the values may exceed the dynamic range of the image and cannot be used directly as output image data, the values are normalized to [0, 1].
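A one-line sketch of that normalization (min-max scaling of the fused coefficients into [0, 1]), assuming they are held in a NumPy array:

```python
import numpy as np

def normalize_01(coeffs):
    """Min-max scale fused coefficients into [0, 1] before writing the output image."""
    c = np.asarray(coeffs, dtype=float)
    return (c - c.min()) / (c.max() - c.min() + 1e-12)
```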
The fusion rules involved in the present invention are as follows:
A. Selecting fusion coefficients by the PCNN
The firing count produced by the neuron onto which a pixel is mapped serves as the selection index of that pixel, and the fusion coefficient is selected from the coefficients at the corresponding positions of the two images;
B. Selecting fusion coefficients by the dual-channel PCNN
The dual-channel PCNN improves the feature selection of the PCNN in the darker regions of medical images. Compared with the traditional single-channel PCNN, the dual-channel PCNN is formed by two simplified PCNNs working in parallel. First, within the 3×3 neighborhood centered on pixel A(x, y), the differences between the sums of any 3 points and the sums of any other 3 points are computed; the minimum and maximum of these differences are found, and the difference H between the maximum and the minimum is processed to obtain the β value of A(x-1, y-1). The internal activity item $U_{xy}[n]$ of the neuron in the two channels controls the firing state of the pixel; accordingly, the pixel with the larger $U_{xy}[n]$ in the two images is selected as the pixel of the fused image.
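A minimal sketch of dual-channel PCNN fusion along the lines of equations (8)-(12), under heavy simplifying assumptions: the weighted form of the internal activity, the neighborhood-spread surrogate for the β rule, and returning the normalized internal activity as the fused coefficient (per the coefficient rule above) are all assumptions for illustration; the pulse/threshold iteration of equations (11)-(12) is omitted because in this simplified form the fused coefficient depends only on the internal activity.

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter

def local_beta(img, eps=1e-12):
    """Per-pixel weight from the value spread in the 3x3 neighbourhood (assumed surrogate
    for the patent's max-min difference rule, which is not fully recoverable from the text)."""
    spread = maximum_filter(img, size=3) - minimum_filter(img, size=3)
    return spread / (spread.max() + eps)

def dual_channel_pcnn_fuse(img_a, img_b):
    """Dual-channel PCNN sketch: fused coefficient = normalised internal activity (eq. 10)."""
    S_a = (img_a - img_a.min()) / (img_a.max() - img_a.min() + 1e-12)  # external stimuli (eqs 8, 9)
    S_b = (img_b - img_b.min()) / (img_b.max() - img_b.min() + 1e-12)
    beta_a, beta_b = local_beta(img_a), local_beta(img_b)              # per-pixel weight coefficients
    U = beta_a * S_a + beta_b * S_b                                    # internal activity (assumed weighted form)
    return (U - U.min()) / (U.max() - U.min() + 1e-12)                 # normalised fused coefficients
```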
Initialization: the pulse outputs, internal activity items and dynamic thresholds of all neurons are set to their initial (zero) values. The internal activity item $U_{xy}[n]$ in the PCNN is computed according to formulas (5)-(7), and the internal activity item $U_{xy}[n]$ in the dual-channel PCNN is computed according to formulas (10)-(12).
The selection rule for the fusion coefficients is as follows:
$$F_{xy} = \begin{cases} A_{xy}, & \text{if the selection index of source A at } (x, y) \text{ is not smaller than that of source B} \\ B_{xy}, & \text{otherwise} \end{cases} \qquad (13)$$
where $F_{xy}$ denotes the fusion coefficient and $A_{xy}$ and $B_{xy}$ denote the corresponding coefficients in source images A and B respectively; the selection index is the firing count for the PCNN (texture regions) and the internal activity item for the dual-channel PCNN (non-texture regions).
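A one-line NumPy sketch of rule (13) for the texture regions, assuming firing-count maps produced by the PCNN sketch above:

```python
import numpy as np

def select_by_firing_count(coeff_a, coeff_b, fire_a, fire_b):
    """Rule (13): keep the coefficient whose neuron fired at least as many times."""
    return np.where(fire_a >= fire_b, coeff_a, coeff_b)
```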
5. The fused image is obtained from the fusion coefficients.
Fig. 1 is a schematic diagram of the effect of the embodiment of the present invention.
In the figure: (a) is the CT image, (b) is the MR image, (c) is the result of the embodiment, (d) is the fusion result based on the Laplacian pyramid transform (Lap), and (e) is the fusion result based on the discrete wavelet transform (DWT).
In summary, the comparison in Fig. 1 shows that this method better fuses the respective information of the source images: it not only effectively enriches the background information of the image, but also preserves the details of the image to the greatest extent, in accordance with the visual characteristics of the human eye. In terms of keeping the fused image faithful to the real information of the source images, the method of the invention is significantly better than the fusion results based on the Laplacian pyramid, the discrete wavelet transform, principal component analysis (PCA) and the FSD pyramid.
In Table 1, the edge-preservation index $Q^{AB/F}$ and the mutual information (MI) are used to measure the quality of the fused images obtained by the different fusion methods. $Q^{AB/F}$ represents the richness of the edge information in the fused image, and MI represents the degree to which the fused image contains the information of the source images. The data in Table 1 show that this method achieves clear improvements over the other methods in both $Q^{AB/F}$ and MI, indicating that the fused image generated by this method has larger local gradients, a more dispersed grey-level distribution, richer texture and more prominent details, i.e. a better fusion effect.
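As an illustration of how the mutual-information index can be computed, a minimal histogram-based sketch (the bin count and the use of base-2 logarithms are assumptions; the fusion MI is usually reported as MI(A, F) + MI(B, F)):

```python
import numpy as np

def mutual_information(img_x, img_y, bins=64):
    """Histogram-based mutual information between two images."""
    hist_2d, _, _ = np.histogram2d(img_x.ravel(), img_y.ravel(), bins=bins)
    p_xy = hist_2d / hist_2d.sum()                 # joint distribution of grey levels
    p_x = p_xy.sum(axis=1, keepdims=True)          # marginals
    p_y = p_xy.sum(axis=0, keepdims=True)
    nz = p_xy > 0                                  # avoid log(0)
    return float(np.sum(p_xy[nz] * np.log2(p_xy[nz] / (p_x @ p_y)[nz])))
```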
Table 1: Comparison of the objective evaluation indices of the fused images
Finally, it should be noted that the above embodiment is obviously only an example given to clearly illustrate the present invention and is not a limitation on the embodiments. Those of ordinary skill in the art can make other changes in different forms on the basis of the above description; it is neither necessary nor possible to enumerate all embodiments here, and obvious variations or changes derived therefrom remain within the protection scope of the present invention.

Claims (1)

1. A fusion method of medical images based on texture region division, characterized in that the method comprises the following steps:
1). computing the mean, standard deviation, entropy and maximum gradient value of each source image as the initial cluster centers, so that the cluster centers of the two images are generated from the same objective image-quality evaluation indices and are therefore consistent;
2). clustering the two source images with the cluster centers respectively by the K-means clustering algorithm to obtain feature space vectors;
3). extracting the feature distribution regions of each image according to the feature space vectors, comparing the corresponding regions of the two images, setting a threshold T, extracting the positions where the coefficients of both images are greater than the threshold, segmenting the corresponding regions by class, and defining them as texture regions;
4). inputting the texture-region pixel values into a PCNN to obtain their respective firing maps, taking the pixel with the larger firing count in the two images as the fusion coefficient of the fused image, and fusing the non-texture-region pixel values by a dual-channel PCNN;
5). obtaining the fused image from the fusion coefficients.
CN201310379493.XA 2013-08-28 2013-08-28 Fusion method of medical images based on texture region division Expired - Fee Related CN103606138B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310379493.XA CN103606138B (en) 2013-08-28 2013-08-28 Fusion method of medical images based on texture region division

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310379493.XA CN103606138B (en) 2013-08-28 2013-08-28 Fusion method of medical images based on texture region division

Publications (2)

Publication Number Publication Date
CN103606138A true CN103606138A (en) 2014-02-26
CN103606138B CN103606138B (en) 2016-04-27

Family

ID=50124358

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310379493.XA Expired - Fee Related CN103606138B (en) 2013-08-28 2013-08-28 Fusion method of medical images based on texture region division

Country Status (1)

Country Link
CN (1) CN103606138B (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105427269A (en) * 2015-12-09 2016-03-23 西安理工大学 Medical image fusion method based on WEMD and PCNN
CN106558043A (en) * 2015-09-29 2017-04-05 阿里巴巴集团控股有限公司 A kind of method and apparatus for determining fusion coefficients
CN108198184A (en) * 2018-01-09 2018-06-22 北京理工大学 The method and system of contrastographic picture medium vessels segmentation
CN108629826A (en) * 2018-05-15 2018-10-09 天津流形科技有限责任公司 A kind of texture mapping method, device, computer equipment and medium
CN109345494A (en) * 2018-09-11 2019-02-15 中国科学院长春光学精密机械与物理研究所 Image interfusion method and device based on potential low-rank representation and structure tensor
CN110321920A (en) * 2019-05-08 2019-10-11 腾讯科技(深圳)有限公司 Image classification method, device, computer readable storage medium and computer equipment
CN110365873A (en) * 2018-03-26 2019-10-22 株式会社理光 Image processing apparatus, camera chain, image processing method

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102722877A (en) * 2012-06-07 2012-10-10 内蒙古科技大学 Multi-focus image fusing method based on dual-channel PCNN (Pulse Coupled Neural Network)
CN103037168A (en) * 2012-12-10 2013-04-10 内蒙古科技大学 Stable Surfacelet domain multi-focus image fusion method based on compound type pulse coupled neural network (PCNN)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102722877A (en) * 2012-06-07 2012-10-10 内蒙古科技大学 Multi-focus image fusing method based on dual-channel PCNN (Pulse Coupled Neural Network)
CN103037168A (en) * 2012-12-10 2013-04-10 内蒙古科技大学 Stable Surfacelet domain multi-focus image fusion method based on compound type pulse coupled neural network (PCNN)

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ZHAOBIN WANG ET AL.: "Medical image fusion using m-PCNN", 《INFORMATION FUSION》 *
ZHANG BAOHUA ET AL.: "Surfacelet-domain multi-focus image fusion method based on a composite excitation model", OPTO-ELECTRONIC ENGINEERING *
SU DONGXUE ET AL.: "Image fusion method based on multi-feature fuzzy clustering", JOURNAL OF COMPUTER-AIDED DESIGN & COMPUTER GRAPHICS *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106558043A (en) * 2015-09-29 2017-04-05 阿里巴巴集团控股有限公司 A kind of method and apparatus for determining fusion coefficients
CN106558043B (en) * 2015-09-29 2019-07-23 阿里巴巴集团控股有限公司 A kind of method and apparatus of determining fusion coefficients
CN105427269A (en) * 2015-12-09 2016-03-23 西安理工大学 Medical image fusion method based on WEMD and PCNN
CN108198184A (en) * 2018-01-09 2018-06-22 北京理工大学 The method and system of contrastographic picture medium vessels segmentation
CN108198184B (en) * 2018-01-09 2020-05-05 北京理工大学 Method and system for vessel segmentation in contrast images
CN110365873A (en) * 2018-03-26 2019-10-22 株式会社理光 Image processing apparatus, camera chain, image processing method
CN108629826A (en) * 2018-05-15 2018-10-09 天津流形科技有限责任公司 A kind of texture mapping method, device, computer equipment and medium
CN109345494A (en) * 2018-09-11 2019-02-15 中国科学院长春光学精密机械与物理研究所 Image interfusion method and device based on potential low-rank representation and structure tensor
CN109345494B (en) * 2018-09-11 2020-11-24 中国科学院长春光学精密机械与物理研究所 Image fusion method and device based on potential low-rank representation and structure tensor
CN110321920A (en) * 2019-05-08 2019-10-11 腾讯科技(深圳)有限公司 Image classification method, device, computer readable storage medium and computer equipment

Also Published As

Publication number Publication date
CN103606138B (en) 2016-04-27

Similar Documents

Publication Publication Date Title
CN103606138A (en) Fusion method of medical images based on texture region division
Song et al. Saliency detection for strip steel surface defects using multiple constraints and improved texture features
Laibacher et al. M2u-net: Effective and efficient retinal vessel segmentation for real-world applications
Yan et al. Automatic photo adjustment using deep neural networks
CN105184309B (en) Classification of Polarimetric SAR Image based on CNN and SVM
Cai et al. IterDANet: Iterative intra-domain adaptation for semantic segmentation of remote sensing images
Laibacher et al. M2U-Net: Effective and efficient retinal vessel segmentation for resource-constrained environments
Alata et al. Choice of a pertinent color space for color texture characterization using parametric spectral analysis
CN106462724A (en) Methods and systems for verifying face images based on canonical images
CN105138993A (en) Method and device for building face recognition model
CN104268593A (en) Multiple-sparse-representation face recognition method for solving small sample size problem
Feng et al. Generative memory-guided semantic reasoning model for image inpainting
Liu et al. A fabric defect detection algorithm via context-based local texture saliency analysis
CN103617604A (en) Image fusion method based on characteristic extraction of two dimension empirical mode decomposition method
Tun et al. Federated learning with intermediate representation regularization
Qi et al. Learning explainable embeddings for deep networks
Avi-Aharon et al. Differentiable histogram loss functions for intensity-based image-to-image translation
CN103037168A (en) Stable Surfacelet domain multi-focus image fusion method based on compound type pulse coupled neural network (PCNN)
Alvi et al. An evolving spatio-temporal approach for gender and age group classification with spiking neural networks
Zhu et al. Exploiting enhanced and robust RGB-D face representation via progressive multi-modal learning
Yang et al. Privileged information-based conditional structured output regression forest for facial point detection
Chen et al. Hyperspectral remote sensing IQA via learning multiple kernels from mid-level features
Liu et al. Domain-Adaptive generative adversarial networks for sketch-to-photo inversion
Hu et al. Illumination robust single sample face recognition based on ESRC
Kuznetsov et al. Deep Learning Based Face Liveliness Detection

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20160427

CF01 Termination of patent right due to non-payment of annual fee