CN109086805B - Clustering method based on deep neural network and pairwise constraints - Google Patents


Info

Publication number
CN109086805B
CN109086805B (application CN201810765487.0A)
Authority
CN
China
Prior art keywords
network
neural network
self
clustering
coding
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810765487.0A
Other languages
Chinese (zh)
Other versions
CN109086805A (en
Inventor
黄嘉桥 (Huang Jiaqiao)
王家兵 (Wang Jiabing)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China University of Technology SCUT filed Critical South China University of Technology SCUT
Priority to CN201810765487.0A priority Critical patent/CN109086805B/en
Publication of CN109086805A publication Critical patent/CN109086805A/en
Application granted granted Critical
Publication of CN109086805B publication Critical patent/CN109086805B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • G06F18/232Non-hierarchical techniques
    • G06F18/2321Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions

Abstract

The invention discloses a clustering method based on a deep neural network and pairwise constraints, comprising: giving a data set containing pairwise constraints among the data; obtaining difference vectors between data set samples; constructing a self-coding network and a deep neural network; training the self-coding network with data set samples as both input and target output, then training the deep neural network with the output at the bottleneck of the self-coding network as input and the pairwise constraints as the correct labels; combining the trained self-coding network and deep neural network with a clustering algorithm; and performing clustering tasks with that algorithm. The method exploits the pairwise constraints among the data in the original data set, reduces the dimensionality of the input through the self-coding network, learns features with the deep neural network, and provides a loss function for the network model together with a gradient-descent-based optimization algorithm, effectively improving the clustering accuracy of the clustering algorithm.

Description

Clustering method based on deep neural network and pairwise constraints
Technical Field
The invention relates to the technical field of clustering methods and high-dimensional clustering based on deep neural networks and pairwise constraints, and in particular to a method for clustering based on pairwise constraints among data.
Background
Data clustering, also known as unsupervised learning, is an effective method of dividing a group of data objects into several clusters. However, because unsupervised learning clusters unlabeled data, it cannot know what each cluster specifically represents. With the continuous deepening of network informatization, the total volume of data on the internet keeps growing, and how to fully explore and exploit the useful information contained in these data has become a hot problem in computer science in recent years. High-dimensional clustering is a common difficulty: traditional clustering algorithms struggle in high-dimensional data spaces because of the "curse of dimensionality", and many clustering methods that perform well in low-dimensional data spaces cannot obtain a good clustering effect when applied to high-dimensional data spaces.
Against this background, clustering high-dimensional data has become a problem requiring deep exploration. Methods for clustering high-dimensional data fall mainly into three categories: (1) linear dimension-reduction methods such as PCA, CCA, and NMF; (2) kernel-based nonlinear dimension-reduction methods such as KDA and LLE; and (3) neural-network-based methods.
Most existing methods that use a neural network for high-dimensional clustering suffer from the defect of learning features directly from the original data, so that even training the neural network with massive data cannot further improve clustering accuracy. Finding a feature that better represents the clustering structure, and using it to train the network, has therefore become an urgent problem in the art.
Disclosure of Invention
The invention aims to overcome the above defects in the prior art and provide a clustering method based on a deep neural network and pairwise constraints that enables the network to learn a feature which better represents the clustering structure, thereby improving clustering accuracy.
The purpose of the invention is realized by the following technical scheme:
a deep neural network and pairwise constraint-based clustering method comprises the following steps:
given a data set containing pairwise constraints between data;
preprocessing a data set to obtain a difference vector between data set samples;
constructing a self-coding network and a deep neural network;
taking the difference vectors of the data set samples as the input of the self-coding network and, since the self-coding network is expected to reconstruct its input, also as its target output; taking the middle (bottleneck) output of the self-coding network as the input of the deep neural network; and using the pairwise constraints as the correct labels;
combining the trained self-coding network and the deep neural network to a clustering algorithm;
and performing clustering tasks by using a clustering algorithm.
Further, the difference vectors of the data set samples are used as the input and target output of the self-coding network, the output at the bottleneck of the self-coding network is used as the input of the deep neural network, and the pairwise constraints are used as the correct labels, as follows: an auto-encoder is constructed and the difference vectors of the data set samples are taken as its input; the output EO of the encoding part of the encoder is taken as the input of a fully-connected neural network; a softmax layer is added after the last layer of the fully-connected neural network; and the prediction result is compared with the pairwise constraints. The softmax layer converts the output into a probability distribution, and the class with the maximum probability is selected as the prediction result, i.e. the probability that the pair belongs to the same class or to different classes.
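A minimal NumPy sketch of this prediction head: a single fully-connected layer followed by a softmax over the two classes (same cluster / different cluster). The encoder output EO, the weight shapes, and all variable names are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def softmax(z):
    # Subtract the max for numerical stability, then normalize to a distribution.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def predict_pair(eo, W_fc, b_fc):
    """Map the encoder (bottleneck) output of one difference vector to a
    two-class probability [same cluster, different cluster] and pick the
    most probable class as the prediction result."""
    logits = eo @ W_fc + b_fc          # last fully-connected layer
    probs = softmax(logits)            # softmax layer
    return probs, int(np.argmax(probs))

rng = np.random.default_rng(0)
eo = rng.normal(size=8)                # stand-in for the encoder output EO
W_fc = rng.normal(size=(8, 2))         # stand-in weights of the last FC layer
b_fc = np.zeros(2)
probs, label = predict_pair(eo, W_fc, b_fc)
```

In a full implementation the EO vector would come from the trained encoding part of the auto-encoder, and several hidden layers would precede this output layer.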
Further, the trained self-coding network and deep neural network are combined with a clustering algorithm as follows: the prediction result of the network model replaces the distance-calculation part of the traditional clustering algorithm. In each round, every data point is paired with each cluster center for prediction: the difference vector between the data point and the cluster center is first obtained, dimensionality is then reduced through the self-coding network, and the output of the encoding part is passed through the deep neural network for prediction; the resulting value reflects the probability, as judged by the neural network, that the sample and the cluster center belong (or do not belong) to the same cluster.
Further, the network structure includes an auto-encoder AE and a fully-connected neural network FC, so the loss function calculation formula used in training the network is as follows:
L(AE,FC,CLU)=L(AE)+L(FC)+L(CLU)
i.e., it includes the self-coding loss L(AE), the fully-connected network prediction loss L(FC), and the clustering loss L(CLU). The self-coding loss is calculated as follows:
L(AE) = L(X, X') = ||X - X'||^2 = ||X - WH||^2
the calculation formula of the loss of the fully-connected neural network is as follows:
L(FC) = -Σ_i [ y_i·log(ŷ_i) + (1 - y_i)·log(1 - ŷ_i) ]
the formula for calculating the clustering loss is as follows:
L(CLU) = Σ_i Σ_j s_ij·||h_i - m_j||^2
for the self-coding loss, the least square operation is carried out by inputting the self and the result obtained by network reconstruction. Wherein WH represents the result reconstructed by X through the self-coding network, W is the weight matrix of the decoding network in the self-coding network, and H is the input of the decoding network of the self-coding network. For the loss of the fully-connected neural network, a cross entropy calculation method is adopted. For the clustering loss, a loss function of a k-means clustering method is adopted, hiRepresenting the ith input of the fully-connected network, M representing the cluster center, SjIs a vector with only one value of 1, indicating that the input belongs to the cluster; the remainder is 0, indicating that the input does not belong to the cluster.
Further, training of the self-encoder and the fully-connected neural network is performed alternately: in each round of training, the self-encoder is updated once, then the fully-connected neural network is updated once, and finally the cluster center points are updated. Updating stops when the overall loss converges or the number of training rounds reaches a designated limit. At the end of each training round, the weights of the network parameters are updated by stochastic gradient descent. The update of the cluster center points is similar to the update of the network parameters: the number of data points contained in each cluster is recorded, and the more data points a cluster contains, the smaller the influence of each individual point on the cluster center. The resulting average offset of the data points contained in the cluster is added to the center point to form the new cluster center.
Further, the trained network structure is combined with a clustering algorithm, and the clustering algorithm combined with the network structure is used for clustering tasks.
Compared with the prior art, the invention has the following advantages and beneficial effects:
the invention overcomes the defect that most of the existing methods for carrying out high-dimensional clustering by using the neural network directly learn the characteristics of the original data, and utilizes the pairwise constraint between the original data as input to enable the neural network to learn the pairwise constraint between the original data even if mass data are used for training the neural network and the clustering accuracy cannot be further improved. And because the invention uses the neural network to carry out dimensionality reduction on the data, the invention can also be applied to high-dimensionality data. In addition, because the dimension reduction processing data is used, the data distribution can be changed into the distribution suitable for k-means clustering, and therefore, the accuracy of the clustering method is effectively improved through the method.
Drawings
FIG. 1 is a schematic flow diagram of an example method;
fig. 2 is a schematic structural diagram of a neural network of the embodiment (the upper left and upper right networks represent self-coding networks, and the lower right corner represents a fully-connected network).
Detailed Description
The present invention will be described in further detail with reference to examples and drawings, but the present invention is not limited thereto.
Example 1
Referring to fig. 1, a deep neural network and pairwise constraint-based clustering method includes the following steps:
step S1, a data set S is given;
In this step, labeled data are often difficult to obtain and generally require annotation by experts, whereas pairwise constraints are easier to acquire than labeled data; a data set including pairwise constraints is therefore given.
Step S2: preprocessing S to obtain a difference vector DV;
the difference vector DV is obtained by differencing the samples in the two data sets, and the resulting DV is used to train the self-encoding network.
Step S3: constructing a self-coding network and a deep neural network;
and constructing a self-coding network and a deep neural network, wherein the input of the deep neural network is the output of the coding network in the self-coding network. The self-coding network is used for carrying out dimensionality reduction operation on the network, and the deep neural network is used for learning the characteristics of the data after dimensionality reduction.
Step S4: training a self-coding network and a deep neural network;
the network structure contains an auto-encoder AE and a fully-connected neural network FC, so the loss function calculation formula used in the network training is as follows:
L(AE,FC,CLU)=L(AE)+L(FC)+L(CLU)
i.e., it includes the self-coding loss L(AE), the fully-connected network prediction loss L(FC), and the clustering loss L(CLU). The self-coding loss is calculated as follows:
L(AE) = L(X, X') = ||X - X'||^2 = ||X - WH||^2
the calculation formula of the loss of the fully-connected neural network is as follows:
L(FC) = -Σ_i [ y_i·log(ŷ_i) + (1 - y_i)·log(1 - ŷ_i) ]
the formula for calculating the clustering loss is as follows:
L(CLU) = Σ_i Σ_j s_ij·||h_i - m_j||^2
for the self-coding loss, the least square operation is carried out by inputting the self and the result obtained by network reconstruction. Wherein WH represents the result reconstructed by X through the self-coding network, W is the weight matrix of the decoding network in the self-coding network, and H is the input of the decoding network of the self-coding network. For the loss of the fully-connected neural network, a cross entropy calculation method is adopted. For the clustering loss, a loss function of a k-means clustering method is adopted, hiRepresenting the ith input of the fully-connected network, M representing the cluster center, SjIs a vector with only one value of 1, indicating that the input belongs to the cluster; the remainder is 0, indicating that the input does not belong to the cluster.
Training of the self-encoder and the fully-connected neural network is performed alternately: in each round of training, the self-encoder is updated once, then the fully-connected neural network is updated once, and finally the cluster center points are updated. Updating stops when the overall loss converges or the number of training rounds reaches a designated limit. At the end of each training round, the weights of the network parameters are updated by stochastic gradient descent. The update of the cluster center points is similar to the update of the network parameters: the number of data points contained in each cluster is recorded, and the more data points a cluster contains, the smaller the influence of each individual point on the cluster center. The resulting average offset of the data points contained in the cluster is added to the center point to form the new cluster center.
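A sketch of the count-weighted center update described above: each member contributes its offset scaled by 1/n, where n is the cluster size, so the per-point influence shrinks as the cluster grows. Function and variable names are assumed for illustration:

```python
import numpy as np

def update_center(center, members):
    """Move the center by the sum of member offsets, each scaled by 1/n.
    With more points in the cluster, each single point influences the
    center less; the net effect moves the center to the member mean."""
    members = np.asarray(members)
    if members.size == 0:
        return center          # empty cluster: leave the center unchanged
    n = len(members)
    total_offset = (members - center).sum(axis=0) / n
    return center + total_offset

center = np.array([0.0, 0.0])
members = np.array([[2.0, 0.0], [0.0, 2.0]])
new_center = update_center(center, members)
```

This is the classical k-means center step expressed as an offset accumulation, matching the description of adding the offsets of the contained points to the old center.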
Step S5: combining the model with a clustering algorithm;
the prediction result of the network model is used to replace the distance calculation part in the traditional clustering algorithm. And selecting all data points and each cluster center in each round for prediction, namely obtaining a difference vector between each data point and each cluster center, then performing dimension reduction operation through a self-coding network, obtaining output of a bottleneck, and predicting through a neural network, wherein the given numerical value reflects the probability that the sample and the cluster center are considered to belong to the same cluster by the neural network or the probability that the sample and the cluster center do not belong to the same cluster.
The invention designs a new loss function:
L(AE,FC,CLU)=L(AE)+L(FC)+L(CLU)
the loss function comprises reconstruction loss L (AE), full-connection network loss L (FC) and clustering loss L (C L U), and the clustering precision of the clustering algorithm on high-dimensional data can be effectively improved by minimizing the loss function, firstly, the reconstruction can enable the high-dimensional data to be mapped to a low-dimensional space, the low-dimensional mapping can fully express the original input by minimizing the reconstruction loss, and the low-dimensional mapping can be aggregated into a reliable cluster by minimizing the full-connection loss and the clustering loss.
In the real world, hospital doctors spend a great deal of time on consultations every day. Moreover, because of a doctor's inexperience, or a patient's inability to describe his or her symptoms accurately, the inquiry process may take a long time to reach a decision and may even lead to diagnostic errors.
For example, when analyzing a tumor, its size, position, shape, and motion must all be taken into consideration before appropriate medical advice can be given. However, many factors may affect the detection of tumor markers, so a method is needed to uncover the underlying features of tumors. A neural network is well suited to this: by learning features and reducing dimensionality, it can reduce the influence of redundant information on the result. A trained neural network can then predict whether a tumor is benign or malignant, improving both the efficiency and the accuracy of decision-making.
By making efficient and accurate predictions with the neural network, the invention can give the doctor a reference result from which a more reasonable conclusion can be drawn. This speeds up the doctor's decision-making and avoids redundant information that could affect the diagnosis, thereby improving diagnostic accuracy.
The above embodiments are preferred embodiments of the present invention, but the present invention is not limited to the above embodiments, and any other changes, modifications, substitutions, combinations, and simplifications which do not depart from the spirit and principle of the present invention should be construed as equivalents thereof, and all such changes, modifications, substitutions, combinations, and simplifications are intended to be included in the scope of the present invention.

Claims (5)

1. A tumor feature extraction method based on a deep neural network and pairwise constrained clustering is characterized by comprising the following steps:
s1, giving a data set containing the size, the position and the shape of the tumor containing paired constraints among data;
s2, preprocessing a data set containing the size, position and shape of the tumor to obtain a difference vector between data set samples;
s3, constructing a self-coding network and a deep neural network;
s4, taking the difference vector of the data set sample as the input of a self-coding network, simultaneously taking the difference vector of the input data set sample as the output of the self-coding network, taking the middle output of the self-coding network as the input of a deep neural network, taking paired constraints as correct marks, and training the self-coding network and the deep neural network;
s5, combining the trained self-coding network and the deep neural network to a clustering algorithm; features of the tumor are extracted using a clustering algorithm.
2. The method for extracting tumor features based on deep neural network and pairwise constrained clustering according to claim 1, wherein step S4 specifically includes: taking the difference vectors of the data set samples as input, taking the output EO from the encoding part of the encoder, using EO as the input of a fully-connected neural network, adding a softmax layer after the last layer of the fully-connected neural network, and comparing the prediction result with the pairwise constraints; the softmax layer converts the output into a probability distribution, the class with the maximum probability is selected as the prediction result, and the prediction result is the probability of the pair being in the same class or in different classes.
3. The method for extracting tumor features based on deep neural network and pairwise constrained clustering according to claim 1, wherein step S5 specifically includes: using the prediction result of the network model to replace a distance calculation part in the traditional clustering algorithm; and selecting all data points and each cluster center in each round for prediction, firstly obtaining a difference vector between the data points and each cluster center, then performing dimension reduction operation through a self-coding network, and then obtaining the output of a coding part of the self-coding network for prediction through a deep neural network, wherein the given numerical value reflects the probability that the sample and the cluster center are considered to belong to the same cluster by the neural network or the probability that the sample and the cluster center do not belong to the same cluster.
4. The method for extracting tumor features based on deep neural network and pairwise constrained clustering according to claim 1, wherein the network structure comprises an auto-encoder AE and a fully-connected neural network FC, so that the loss function calculation formula used in training the network is as follows:
L(AE,FC,CLU)=L(AE)+L(FC)+L(CLU)
including the self-encoding loss L(AE), the fully-connected network prediction loss L(FC), and the clustering loss L(CLU);
wherein the calculation formula of the self-coding loss is as follows:
L(AE) = L(X, X') = ||X - X'||^2 = ||X - WH||^2
the calculation formula of the loss of the fully-connected neural network is as follows:
L(FC) = -Σ_i [ y_i·log(ŷ_i) + (1 - y_i)·log(1 - ŷ_i) ]
the formula for calculating the clustering loss is as follows:
L(CLU) = Σ_i Σ_j s_ij·||h_i - m_j||^2
for the self-coding loss, performing least square operation by inputting the self and a result obtained by network reconstruction; wherein WH represents the result reconstructed by the X through the self-coding network, W is a weight matrix of a decoding network in the self-coding network, and H is the input of the decoding network of the self-coding network; for the loss of the fully-connected neural network, a cross entropy calculation method is adopted; for the clustering loss, a loss function of a k-means clustering method is adopted, hiRepresenting the ith input of the fully-connected network, M representing the cluster center, SjIs a vector with only one value of 1, indicating that the input belongs to the cluster; the remainder is 0, indicating that the input does not belong to the cluster.
5. The method for extracting tumor features based on deep neural network and pairwise constrained clustering according to claim 1, wherein the training of the self-encoder and the fully-connected neural network is performed alternately, that is, in each round of training the self-encoder is updated once, then the fully-connected neural network is updated once, and finally the cluster center points are updated; updating stops when the overall loss converges or the number of training rounds reaches a designated limit; at the end of each round of training, the weights of the network parameters are updated by stochastic gradient descent; the update of the cluster center points is similar to the update of the network parameters: the number of data points contained in each cluster is recorded, and the more data points a cluster contains, the smaller the influence of each individual point on the cluster center; the average offset of the data points contained in the cluster is added to the center point to form the new cluster center.
CN201810765487.0A 2018-07-12 2018-07-12 Clustering method based on deep neural network and pairwise constraints Active CN109086805B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810765487.0A CN109086805B (en) 2018-07-12 2018-07-12 Clustering method based on deep neural network and pairwise constraints

Publications (2)

Publication Number Publication Date
CN109086805A CN109086805A (en) 2018-12-25
CN109086805B true CN109086805B (en) 2020-07-28

Family

ID=64837704

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810765487.0A Active CN109086805B (en) 2018-07-12 2018-07-12 Clustering method based on deep neural network and pairwise constraints

Country Status (1)

Country Link
CN (1) CN109086805B (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109948662B (en) * 2019-02-27 2020-10-20 浙江工业大学 Face image depth clustering method based on K-means and MMD
CN109978013B (en) * 2019-03-06 2021-01-19 华南理工大学 Deep clustering method for character action recognition
CN110032973B (en) * 2019-04-12 2021-01-19 哈尔滨工业大学(深圳) Unsupervised parasite classification method and system based on artificial intelligence
CN110232690B (en) * 2019-06-05 2023-03-17 广东工业大学 Image segmentation method, system, equipment and computer readable storage medium
CN110443318B (en) * 2019-08-09 2023-12-08 武汉烽火普天信息技术有限公司 Deep neural network method based on principal component analysis and cluster analysis
CN111709437B (en) * 2019-10-31 2023-08-04 中国科学院沈阳自动化研究所 Abnormal behavior detection method oriented to field process behavior of petrochemical industry
CN111178427B (en) * 2019-12-27 2022-07-26 杭州电子科技大学 Method for performing image dimensionality reduction and embedded clustering based on depth self-coding of Sliced-Wasserstein distance
CN111598119A (en) * 2020-02-18 2020-08-28 天津大学 Image clustering method based on residual error network
CN113033615B (en) * 2021-03-01 2022-06-07 电子科技大学 Radar signal target real-time association method based on online micro-cluster clustering
CN113743595B (en) * 2021-10-09 2023-08-15 福州大学 Structural parameter identification method based on physical driving self-encoder neural network
CN114266911A (en) * 2021-12-10 2022-04-01 四川大学 Embedded interpretable image clustering method based on differentiable k-means
CN114462548B (en) * 2022-02-23 2023-07-18 曲阜师范大学 Method for improving accuracy of single-cell deep clustering algorithm
CN115310585A (en) * 2022-07-04 2022-11-08 浙江大学 High-dimensional neural signal dimension reduction method based on self-encoder and application

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2000073972A1 (en) * 1999-05-28 2000-12-07 Texas Tech University Adaptive vector quantization/quantizer
CN103530689A (en) * 2013-10-31 2014-01-22 中国科学院自动化研究所 Deep learning-based clustering method
CN104408072A (en) * 2014-10-30 2015-03-11 广东电网有限责任公司电力科学研究院 Time sequence feature extraction method based on complicated network theory and applicable to classification
CN104933438A (en) * 2015-06-01 2015-09-23 武艳娇 Image clustering method based on self-coding neural network
CN106845863A (en) * 2017-02-23 2017-06-13 沈阳工业大学 A kind of distributed wind-power generator is exerted oneself and heat load sync index Forecasting Methodology
CN107229945A (en) * 2017-05-05 2017-10-03 中山大学 A kind of depth clustering method based on competition learning
US10202103B2 (en) * 2016-12-29 2019-02-12 Intel Corporation Multi-modal context based vehicle theft prevention

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Kiri Wagstaff et al., "Clustering with Instance-level Constraints", http://www.cs.cornell.edu/home/wkiri/research/constraints.html, Dec. 31, 2000, p. 1 *
Li Chaoming et al., "Cross-Entropy Semi-Supervised Clustering Algorithm Based on Pairwise Constraints" (in Chinese), Pattern Recognition and Artificial Intelligence, Vol. 30, No. 7, Jul. 2017, pp. 598-608 *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant