WO2019033636A1 - Imbalanced sample classification method based on minimized-loss learning - Google Patents
Imbalanced sample classification method based on minimized-loss learning
- Publication number
- WO2019033636A1 (PCT/CN2017/115848; CN2017115848W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- classification
- value
- neural network
- sample
- training
- Prior art date
Links
- 238000000034 method Methods 0.000 title claims abstract description 45
- 238000013528 artificial neural network Methods 0.000 claims abstract description 24
- 230000006870 function Effects 0.000 claims abstract description 20
- 239000011159 matrix material Substances 0.000 claims description 5
- 238000013145 classification model Methods 0.000 claims 1
- 238000004422 calculation algorithm Methods 0.000 abstract description 33
- 238000012549 training Methods 0.000 abstract description 32
- 238000011156 evaluation Methods 0.000 abstract description 8
- 238000012854 evaluation process Methods 0.000 abstract description 2
- 239000000523 sample Substances 0.000 description 30
- 238000007635 classification algorithm Methods 0.000 description 9
- 238000013461 design Methods 0.000 description 4
- 238000010801 machine learning Methods 0.000 description 4
- 238000005457 optimization Methods 0.000 description 3
- 239000002131 composite material Substances 0.000 description 2
- 230000000694 effects Effects 0.000 description 2
- 238000003062 neural network model Methods 0.000 description 2
- 238000012706 support-vector machine Methods 0.000 description 2
- 238000012952 Resampling Methods 0.000 description 1
- 238000004458 analytical method Methods 0.000 description 1
- 238000013459 approach Methods 0.000 description 1
- 238000004364 calculation method Methods 0.000 description 1
- 230000000052 comparative effect Effects 0.000 description 1
- 238000002790 cross-validation Methods 0.000 description 1
- 238000010586 diagram Methods 0.000 description 1
- 238000005516 engineering process Methods 0.000 description 1
- 238000002474 experimental method Methods 0.000 description 1
- 238000011478 gradient descent method Methods 0.000 description 1
- 230000001939 inductive effect Effects 0.000 description 1
- 238000005065 mining Methods 0.000 description 1
- 238000003672 processing method Methods 0.000 description 1
- 238000011160 research Methods 0.000 description 1
- 230000011218 segmentation Effects 0.000 description 1
- 230000008685 targeting Effects 0.000 description 1
- 230000001131 transforming effect Effects 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Definitions
- the invention belongs to the technical field of data classification, and particularly relates to an unbalanced sample classification method.
- Cost-sensitive learning methods change the weight of the original data in the evaluation criterion, usually using a manually specified cost matrix to weight the classification loss and thereby address the imbalance problem.
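As a hedged illustration of this idea (the cost values and label names below are assumptions for illustration, not taken from the patent), a hand-set cost matrix can weight each misclassification when computing the loss:

```python
# Illustrative cost-sensitive loss: a manually set cost matrix makes
# missing a minority sample far more expensive than the reverse error.
# All concrete numbers and label names here are assumptions.
COST = {
    ("minority", "majority"): 10.0,  # true minority predicted as majority
    ("majority", "minority"): 1.0,   # true majority predicted as minority
}

def weighted_loss(y_true, y_pred):
    """Sum the cost of every misclassified sample; correct ones cost 0."""
    return sum(COST.get((t, p), 0.0) for t, p in zip(y_true, y_pred))

print(weighted_loss(["minority", "majority", "majority"],
                    ["majority", "majority", "minority"]))  # 11.0
```

Under such a matrix the classifier is pushed away from the trivial "predict majority everywhere" solution, since each missed minority sample dominates the loss.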
- a common feature of classifier-oriented optimization algorithms is that, unlike the classical imbalance processing methods, they do not rebalance the data set itself. Their main idea is to modify the training process or the classification process of the classifier so that it adapts to the unbalanced data set: either the influence of the unbalanced distribution is reduced by optimizing the training process of the algorithm, or the model is trained in the ordinary way and then adjusted through a series of further steps, or an ordinary model is obtained and, in the classification stage, a method different from the classical one is used to solve the imbalance problem.
- traditional classification algorithms therefore cannot be used directly on the unbalanced sample classification problem.
- the usual remedies are very intuitive, and most of them act directly on the data set, for example by resampling or otherwise changing the samples it contains.
- the basic idea of traditional classification methods is to generalize, under an inductive bias, from the consistency hypothesis over the training sample space, so that samples that have not appeared can be predicted anywhere in the sample space.
- classifiers differ essentially only in the hypothesis class they use (characterized, for example, by its Vapnik-Chervonenkis dimension) and in their bias conditions.
- the classical unbalanced classification algorithms run contrary to this tradition. Most of them solve the imbalance problem by changing the original sample distribution, while traditional machine learning algorithms rest on the assumption that training data and real data are independent and identically distributed. If the distribution of the training data is changed, the effect on the results is unknown. Although for some discriminative models this influence is not strong enough to destroy the model as a whole, it certainly affects the model's decision process, and whether the final predictions on the real space are biased in a good or a bad direction usually cannot be judged. For algorithms that involve a random process, such as the SMOTE algorithm, the change of distribution is even more serious: even if cross-validation is used to train the model, the variance of the average classification accuracy or of the F1 value over many trials will be relatively large.
- the present invention designs an algorithm that directly uses the F1 value as a training target to solve the problem of unbalanced data set classification, and has achieved good results.
- An unbalanced sample classification method based on minimum loss learning is applied to an artificial neural network model, characterized in that the method comprises:
- Figure 1 is a schematic diagram of a data set probability density curve.
- the present invention designs a method for directly training a model by targeting the evaluation criteria.
- the basic idea of the maximum-F1 training method is introduced below, and the method can be applied to the unbalanced data set classification problem.
- the current data set is a one-dimensional unbalanced data set, containing both majority and minority samples.
- the probability density curves are shown in Figure 1. Assume the ratio of majority-class to minority-class samples is n:1, where n > 1. The basic idea of a traditional classifier is to take the maximal global accuracy rate as the final training target. Near the boundary between the two classes, even where the probability densities are similar, the sample counts differ because the class bases differ: the number of majority-class samples near the boundary will be much larger than the number of minority-class samples. The final classification boundary is therefore very likely to lie near line b in the middle of the figure, on the side biased toward the minority class.
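This boundary shift can be checked numerically. The sketch below uses illustrative assumptions (two Gaussians and a 10:1 ratio are our choices, not the patent's data): it searches for the accuracy-maximizing threshold on a synthetic one-dimensional imbalanced set and finds it displaced from the equal-density crossing toward the minority class.

```python
import numpy as np

rng = np.random.default_rng(0)
majority = rng.normal(0.0, 1.0, 10000)   # majority class ~ N(0, 1)
minority = rng.normal(3.0, 1.0, 1000)    # minority class ~ N(3, 1), ratio 10:1

def accuracy(t):
    # classify x < t as majority, x >= t as minority
    correct = np.sum(majority < t) + np.sum(minority >= t)
    return correct / (majority.size + minority.size)

thresholds = np.linspace(0.0, 3.0, 301)
best = thresholds[int(np.argmax([accuracy(t) for t in thresholds]))]
# The equal-density crossing of the two curves ("line a") is at x = 1.5, but
# the accuracy-optimal boundary ("line b") is pushed toward the minority class.
print(best > 1.5)  # True
```

Maximizing global accuracy thus sacrifices minority-class recall exactly as the figure describes.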
- the idea of the classical unbalanced data set classification algorithm is to directly reduce the sample ratio between the majority class and the minority class by some method.
- the number of sample points of the two types is the same or very close, and then the traditional classification is applied.
- the classification limit with the highest global classification accuracy rate should be the line a in the figure.
- This line uses the abscissa of the intersection of two types of probability density curves as the boundary threshold.
- the minority-class samples on the left side of the boundary line and the majority-class samples on the right side are the misclassified samples. It is easy to prove by the area method that this boundary minimizes the number of misclassified samples.
- F1, the most classic composite evaluation standard, is selected as the optimization target, so the loss function can be set to the value (1 - F1). Let X denote the feature set of the training samples and Y the set of target outputs.
- the hypothesis h: X → Y on a single sample is extended to a hypothesis on all training samples as a whole.
- minimizing the loss value is then equivalent to maximizing the F1 value.
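A minimal sketch of this (1 - F1) loss on binary labels (plain Python; the helper name and the tiny example are ours, not the patent's notation):

```python
def f1_loss(y_true, y_pred):
    """Return (1 - F1) over a batch of binary labels, 1 = minority class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    if tp == 0:
        return 1.0                      # no true positives -> F1 = 0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return 1.0 - f1

# minimizing this loss is the same as maximizing F1
print(f1_loss([1, 1, 1, 0, 0, 0], [1, 1, 0, 1, 0, 0]))
```

Because F1 is computed over the whole batch rather than per sample, the loss rewards correct minority predictions regardless of how many majority samples there are.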
- the concept of minimizing the loss is extended to maximize the objective function:
- the algorithm of the present invention exploits the neural network training process: the network in its current state classifies the samples, the loss is computed, and the loss is optimized to reach the next, better state; the evaluation process is thus folded into training.
- the expected value of the loss is computed from the probabilities of the current output, as in (8), and this expectation is optimized. In this way a direct relationship between the parameters and the target output is established, and optimizing the expected value also increases the probability that the target attains a higher value, so the meaning of the training is not lost.
- by the properties of covariance, the covariance term here can be regarded as always equal to 0, so relation (10) holds.
- the expected value is an upper bound of the approximation and relatively close to it, so continually enlarging the expectation also converges to a global or local optimal solution, which achieves the training goal.
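One common way to realize this expectation idea is a "soft" F1 computed from the output probabilities, which is differentiable in the network parameters. This is our sketch of the general technique; the patent's exact formulas (8)-(10) are not reproduced on this page and may differ in detail.

```python
def soft_f1(y_true, p_pred):
    """Expected-style F1 from output probabilities p_pred in [0, 1]."""
    tp = sum(p * t for t, p in zip(y_true, p_pred))        # expected TP
    fp = sum(p * (1 - t) for t, p in zip(y_true, p_pred))  # expected FP
    fn = sum((1 - p) * t for t, p in zip(y_true, p_pred))  # expected FN
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

# If the network is certain and correct, the soft F1 equals 1.
print(soft_f1([1, 0, 1, 0], [1.0, 0.0, 1.0, 0.0]))  # 1.0
```

Since every term is a smooth function of the probabilities, gradient-based training can raise this quantity directly, which is exactly the link between parameters and target that the passage describes.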
- the idea of maximizing the specific objective function of the algorithm for solving the imbalance problem has been introduced above, and the objective function of the overall F1 value applied to the training set is constructed by using the evaluation criteria of the unbalanced sample classification.
- the algorithm of maximizing F1 value is applied to the artificial neural network (ANN) model.
- the most commonly used effective weight-update strategy is the backpropagation algorithm. Since the algorithm ultimately trains the objective function toward its maximum value, the update process is as shown in equations (11) and (12), where η represents the learning rate; its size affects the convergence speed and convergence accuracy of the neural network, and occasionally which of several good solutions the network finally converges to.
- net_j denotes the inner-product result at node j before the sigmoid function is applied, and o_j is the result of net_j processed by the sigmoid function.
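Concretely, a node's forward step and a gradient-ascent parameter update in the spirit of (11)-(12) look like the following sketch (the tiny weight vectors and the learning rate value are illustrative assumptions):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def node_forward(weights, inputs):
    net_j = sum(w * x for w, x in zip(weights, inputs))  # inner product net_j
    return sigmoid(net_j)                                # o_j = sigmoid(net_j)

def ascend(theta, grad, eta=0.1):
    # Maximizing the objective: move *along* the gradient, scaled by eta.
    return [t + eta * g for t, g in zip(theta, grad)]

print(node_forward([0.0, 0.0], [1.0, 1.0]))  # sigmoid(0) = 0.5
print(ascend([1.0, 2.0], [0.5, -0.5]))       # approximately [1.05, 1.95]
```

The sign in `ascend` is the only difference from ordinary gradient descent: because the objective (the F1 value) is maximized, each parameter moves in the direction of its partial derivative.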
- Algorithm 1 minimizes loss neural network
- Both ⁇ l and ⁇ kl are parameters in the neural network model, and their updating methods are all updated by the gradient descent method in (11) (12), that is, each time the output layer deviation is added to the partial derivative of each node.
- the ⁇ kl and ⁇ l here are formally replaceable and are calculated according to this equation).
- (13) and (14) are the partial differentials obtained for the output layer parameter ⁇ , and the calculation method is
- the experimental data sets are all from the UCI machine learning data set.
- the data set selection process the data sets that have appeared in other unbalanced data set classification algorithms are selected, and the following 8 data sets are available.
- the parameters are shown in Table 1.
- the SMOTE algorithm, the Adaboost algorithm, the structured support vector machine algorithm (SSVM), the classical neural network algorithm (ANN) and the cost-sensitive learning algorithm (SCL) are adopted for comparison with the algorithm of the invention (ML-ANN).
- the algorithm of the present invention achieves good results on the unbalanced data set classification problem, and its results are generally superior to those of the previous algorithms.
Abstract
Description
Claims (2)
- An imbalanced sample classification method based on minimized-loss learning, applied in an artificial neural network model, characterized in that the method comprises: S2: initializing the input-hidden layer connection coefficient matrix ωkl and the hidden-output layer connection coefficient vector θl, with each component in the range (-0.1, 0.1); setting ωkl′←0, θl′←0, f′←0; S4: if fnow>f, returning the current ωkl, θl; otherwise executing S5; S5: if fnow>f′, then ωkl′←ωkl, θl′←θl; S6: updating θl according to equations (1) and (2) below, and updating ωkl according to equation (3) below; S7: returning to step S3 until the number of iterations reaches m; S8: returning ωkl′, θl′; S9: classifying the imbalanced samples using the optimized artificial neural network model.
- The method according to claim 1, characterized in that classifying the imbalanced samples specifically comprises: inputting the sample features into the artificial neural network classification model characterized by w and θ, and outputting the classification label.
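The claimed loop can be sketched as below. Steps S1 and S3 are elided in the text above, so the `evaluate` callback standing in for the per-iteration F1 computation, the `update` callback standing in for equations (1)-(3), and all helper names are our assumptions:

```python
import random

def train_min_loss(evaluate, update, dim, f_target, m, seed=0):
    """Skeleton of the claimed S2-S9 training loop (S1/S3 are elided in
    the source text; the callbacks are illustrative stand-ins)."""
    rng = random.Random(seed)
    # S2: initialize every coefficient in (-0.1, 0.1); best-so-far state <- 0
    params = [rng.uniform(-0.1, 0.1) for _ in range(dim)]
    best, f_best = [0.0] * dim, 0.0
    for _ in range(m):                   # S7: iterate until m iterations
        f_now = evaluate(params)         # (assumed S3: current F1 value)
        if f_now > f_target:             # S4: target reached -> return now
            return params
        if f_now > f_best:               # S5: remember the best state so far
            best, f_best = list(params), f_now
        params = update(params)          # S6: update per equations (1)-(3)
    return best                          # S8: return the best parameters seen

# toy check: each stand-in update step raises the stand-in objective
result = train_min_loss(evaluate=sum,
                        update=lambda p: [x + 1.0 for x in p],
                        dim=2, f_target=100.0, m=5)
print(len(result))  # 2
```

Keeping the best parameters seen (S5/S8) guards against the objective dropping on a later iteration, while the early return in S4 stops training as soon as the target F1 level f is exceeded.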
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710702075.8 | 2017-08-16 | ||
CN201710702075.8A CN107578061A (zh) | 2017-08-16 | 2017-08-16 | Imbalanced sample classification method based on minimized-loss learning |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019033636A1 true WO2019033636A1 (zh) | 2019-02-21 |
Family
ID=61034482
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2017/115848 WO2019033636A1 (zh) | 2017-12-13 | Imbalanced sample classification method based on minimized-loss learning |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN107578061A (zh) |
WO (1) | WO2019033636A1 (zh) |
Cited By (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110851959A (zh) * | 2019-10-18 | 2020-02-28 | 天津大学 | 一种融合深度学习和分位数回归的风速区间预测方法 |
CN111159935A (zh) * | 2019-12-11 | 2020-05-15 | 同济大学 | 基于lhs的bp神经网络参数标定方法 |
CN111240344A (zh) * | 2020-02-11 | 2020-06-05 | 哈尔滨工程大学 | 一种基于双神经网络强化学习技术的自主水下机器人无模型控制方法 |
CN111325338A (zh) * | 2020-02-12 | 2020-06-23 | 暗物智能科技(广州)有限公司 | 神经网络结构评价模型构建和神经网络结构搜索方法 |
CN111652384A (zh) * | 2019-03-27 | 2020-09-11 | 上海铼锶信息技术有限公司 | 一种数据量分布的平衡方法及数据处理方法 |
CN111738420A (zh) * | 2020-06-24 | 2020-10-02 | 莫毓昌 | 一种基于多尺度抽样的机电设备状态数据补全与预测方法 |
CN112529328A (zh) * | 2020-12-23 | 2021-03-19 | 长春理工大学 | 一种产品性能预测方法及系统 |
CN112766379A (zh) * | 2021-01-21 | 2021-05-07 | 中国科学技术大学 | 一种基于深度学习多权重损失函数的数据均衡方法 |
CN113298230A (zh) * | 2021-05-14 | 2021-08-24 | 西安理工大学 | 一种基于生成对抗网络的不平衡数据集的预测方法 |
CN113673579A (zh) * | 2021-07-27 | 2021-11-19 | 国网湖北省电力有限公司营销服务中心(计量中心) | 一种基于小样本的用电负荷分类算法 |
CN113723679A (zh) * | 2021-08-27 | 2021-11-30 | 暨南大学 | 基于代价敏感深度级联森林的饮用水质预测方法及系统 |
CN113807023A (zh) * | 2021-10-04 | 2021-12-17 | 北京亚鸿世纪科技发展有限公司 | 基于门控循环单元网络的工业互联网设备故障预测方法 |
CN114638336A (zh) * | 2021-12-26 | 2022-06-17 | 海南大学 | 聚焦于陌生样本的不平衡学习 |
CN114676727A (zh) * | 2022-03-21 | 2022-06-28 | 合肥工业大学 | 一种基于csi的与位置无关的人体活动识别方法 |
WO2023078240A1 (en) * | 2021-11-03 | 2023-05-11 | International Business Machines Corporation | Training sample set generation from imbalanced data in view of user goals |
CN116503385A (zh) * | 2023-06-25 | 2023-07-28 | 吉林大学 | 基于虚拟全局代理的糖网眼底图像分级方法和设备 |
CN111178897B (zh) * | 2019-12-18 | 2023-08-08 | 浙江大学 | 在不平衡数据上快速特征学习的代价敏感的动态聚类方法 |
CN117476125A (zh) * | 2023-12-27 | 2024-01-30 | 豆黄金食品有限公司 | 一种基于数据分析的腐竹余液回收数据处理系统 |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108985382B (zh) * | 2018-05-25 | 2022-07-15 | 清华大学 | 基于关键数据通路表示的对抗样本检测方法 |
CN108921095A (zh) * | 2018-07-03 | 2018-11-30 | 安徽灵图壹智能科技有限公司 | 一种基于神经网络的停车位管理系统、方法及停车位 |
CN110751175A (zh) * | 2019-09-12 | 2020-02-04 | 上海联影智能医疗科技有限公司 | 损失函数的优化方法、装置、计算机设备和存储介质 |
CN111082470B (zh) * | 2020-01-15 | 2022-09-02 | 合肥工业大学 | 含低风速分散式风电的配电网多目标动态鲁棒重构方法 |
CN113627485A (zh) * | 2021-07-10 | 2021-11-09 | 南京理工大学 | 基于admm的不平衡大数据分布式分类方法 |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104951809A (zh) * | 2015-07-14 | 2015-09-30 | 西安电子科技大学 | 基于不平衡分类指标与集成学习的不平衡数据分类方法 |
CN105787046A (zh) * | 2016-02-28 | 2016-07-20 | 华东理工大学 | 一种基于单边动态下采样的不平衡数据分类系统 |
CN105868775A (zh) * | 2016-03-23 | 2016-08-17 | 深圳市颐通科技有限公司 | 基于pso算法的不平衡样本分类方法 |
WO2017003831A1 (en) * | 2015-06-29 | 2017-01-05 | Microsoft Technology Licensing, Llc | Machine learning classification on hardware accelerators with stacked memory |
-
2017
- 2017-08-16 CN CN201710702075.8A patent/CN107578061A/zh active Pending
- 2017-12-13 WO PCT/CN2017/115848 patent/WO2019033636A1/zh active Application Filing
Non-Patent Citations (1)
Title |
---|
ZHANG, CHUNKAI: "A New Approach for Imbalanced Data Classification Based on Minimize Loss Learning", IEEE SECOND INTERNATIONAL CONFERENCE ON DATA SCIENCE IN CYBERSPACE, 29 June 2017 (2017-06-29), XP033139595 * |
Cited By (30)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111652384A (zh) * | 2019-03-27 | 2020-09-11 | 上海铼锶信息技术有限公司 | 一种数据量分布的平衡方法及数据处理方法 |
CN111652384B (zh) * | 2019-03-27 | 2023-08-18 | 上海铼锶信息技术有限公司 | 一种数据量分布的平衡方法及数据处理方法 |
CN110851959B (zh) * | 2019-10-18 | 2024-04-02 | 天津大学 | 一种融合深度学习和分位数回归的风速区间预测方法 |
CN110851959A (zh) * | 2019-10-18 | 2020-02-28 | 天津大学 | 一种融合深度学习和分位数回归的风速区间预测方法 |
CN111159935A (zh) * | 2019-12-11 | 2020-05-15 | 同济大学 | 基于lhs的bp神经网络参数标定方法 |
CN111178897B (zh) * | 2019-12-18 | 2023-08-08 | 浙江大学 | 在不平衡数据上快速特征学习的代价敏感的动态聚类方法 |
CN111240344B (zh) * | 2020-02-11 | 2023-04-07 | 哈尔滨工程大学 | 基于强化学习技术的自主水下机器人无模型控制方法 |
CN111240344A (zh) * | 2020-02-11 | 2020-06-05 | 哈尔滨工程大学 | 一种基于双神经网络强化学习技术的自主水下机器人无模型控制方法 |
CN111325338A (zh) * | 2020-02-12 | 2020-06-23 | 暗物智能科技(广州)有限公司 | 神经网络结构评价模型构建和神经网络结构搜索方法 |
CN111738420A (zh) * | 2020-06-24 | 2020-10-02 | 莫毓昌 | 一种基于多尺度抽样的机电设备状态数据补全与预测方法 |
CN111738420B (zh) * | 2020-06-24 | 2023-06-06 | 莫毓昌 | 一种基于多尺度抽样的机电设备状态数据补全与预测方法 |
CN112529328B (zh) * | 2020-12-23 | 2023-08-22 | 长春理工大学 | 一种产品性能预测方法及系统 |
CN112529328A (zh) * | 2020-12-23 | 2021-03-19 | 长春理工大学 | 一种产品性能预测方法及系统 |
CN112766379A (zh) * | 2021-01-21 | 2021-05-07 | 中国科学技术大学 | 一种基于深度学习多权重损失函数的数据均衡方法 |
CN112766379B (zh) * | 2021-01-21 | 2023-06-20 | 中国科学技术大学 | 一种基于深度学习多权重损失函数的数据均衡方法 |
CN113298230B (zh) * | 2021-05-14 | 2024-04-09 | 武汉嫦娥医学抗衰机器人股份有限公司 | 一种基于生成对抗网络的不平衡数据集的预测方法 |
CN113298230A (zh) * | 2021-05-14 | 2021-08-24 | 西安理工大学 | 一种基于生成对抗网络的不平衡数据集的预测方法 |
CN113673579A (zh) * | 2021-07-27 | 2021-11-19 | 国网湖北省电力有限公司营销服务中心(计量中心) | 一种基于小样本的用电负荷分类算法 |
CN113723679A (zh) * | 2021-08-27 | 2021-11-30 | 暨南大学 | 基于代价敏感深度级联森林的饮用水质预测方法及系统 |
CN113723679B (zh) * | 2021-08-27 | 2024-04-16 | 暨南大学 | 基于代价敏感深度级联森林的饮用水质预测方法及系统 |
CN113807023A (zh) * | 2021-10-04 | 2021-12-17 | 北京亚鸿世纪科技发展有限公司 | 基于门控循环单元网络的工业互联网设备故障预测方法 |
WO2023078240A1 (en) * | 2021-11-03 | 2023-05-11 | International Business Machines Corporation | Training sample set generation from imbalanced data in view of user goals |
CN114638336A (zh) * | 2021-12-26 | 2022-06-17 | 海南大学 | 聚焦于陌生样本的不平衡学习 |
CN114638336B (zh) * | 2021-12-26 | 2023-09-22 | 海南大学 | 聚焦于陌生样本的不平衡学习 |
CN114676727B (zh) * | 2022-03-21 | 2024-02-20 | 合肥工业大学 | 一种基于csi的与位置无关的人体活动识别方法 |
CN114676727A (zh) * | 2022-03-21 | 2022-06-28 | 合肥工业大学 | 一种基于csi的与位置无关的人体活动识别方法 |
CN116503385A (zh) * | 2023-06-25 | 2023-07-28 | 吉林大学 | 基于虚拟全局代理的糖网眼底图像分级方法和设备 |
CN116503385B (zh) * | 2023-06-25 | 2023-09-01 | 吉林大学 | 基于虚拟全局代理的糖网眼底图像分级方法和设备 |
CN117476125A (zh) * | 2023-12-27 | 2024-01-30 | 豆黄金食品有限公司 | 一种基于数据分析的腐竹余液回收数据处理系统 |
CN117476125B (zh) * | 2023-12-27 | 2024-04-05 | 豆黄金食品有限公司 | 一种基于数据分析的腐竹余液回收数据处理系统 |
Also Published As
Publication number | Publication date |
---|---|
CN107578061A (zh) | 2018-01-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2019033636A1 (zh) | Imbalanced sample classification method based on minimized-loss learning | |
CN108388651B (zh) | 一种基于图核和卷积神经网络的文本分类方法 | |
CN110263227B (zh) | 基于图神经网络的团伙发现方法和系统 | |
Wu et al. | Self-adaptive attribute weighting for Naive Bayes classification | |
CN109389151B (zh) | 一种基于半监督嵌入表示模型的知识图谱处理方法和装置 | |
CN113326731A (zh) | 一种基于动量网络指导的跨域行人重识别算法 | |
CN107729290B (zh) | 一种利用局部敏感哈希优化的超大规模图的表示学习方法 | |
CN104298873A (zh) | 一种基于遗传算法和粗糙集的属性约简方法及精神状态评估方法 | |
CN112749757B (zh) | 基于门控图注意力网络的论文分类模型构建方法及系统 | |
CN110297888A (zh) | 一种基于前缀树与循环神经网络的领域分类方法 | |
WO2020168796A1 (zh) | 一种基于高维空间采样的数据增强方法 | |
Quek et al. | A novel approach to the derivation of fuzzy membership functions using the Falcon-MART architecture | |
Tembusai et al. | K-nearest neighbor with K-fold cross validation and analytic hierarchy process on data classification | |
CN110245682A (zh) | 一种基于话题的网络表示学习方法 | |
CN112286996A (zh) | 一种基于网络链接和节点属性信息的节点嵌入方法 | |
Qiao et al. | SRS-DNN: a deep neural network with strengthening response sparsity | |
Gu et al. | Fuzzy time series forecasting based on information granule and neural network | |
Memon et al. | Neural regression trees | |
Yang et al. | Intelligent classification model for railway signal equipment fault based on SMOTE and ensemble learning | |
Özdemir et al. | The modified fuzzy art and a two-stage clustering approach to cell design | |
Behera et al. | A comparative study of back propagation and simulated annealing algorithms for neural net classifier optimization | |
Wang et al. | Kernel-based deep learning for intelligent data analysis | |
CN113408602A (zh) | 一种树突神经网络初始化方法 | |
CN114722212A (zh) | 一种面向人物关系网络的自动元路径挖掘方法 | |
Singh et al. | Adaptive genetic programming based linkage rule miner for entity linking in Semantic Web |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 17921862 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 17921862 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 25/01/2021) |