CN116050518A - Knowledge graph embedded model data poisoning effect evaluation method - Google Patents

Info

Publication number
CN116050518A
CN116050518A
Authority
CN
China
Prior art keywords
poisoning, MRR, data, knowledge graph, group
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211426092.0A
Other languages
Chinese (zh)
Inventor
王乐
朱东
顾钊铨
谢禹舜
邓建宇
谭灏南
张欢
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou University
Original Assignee
Guangzhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou University filed Critical Guangzhou University
Priority to CN202211426092.0A priority Critical patent/CN116050518A/en
Publication of CN116050518A publication Critical patent/CN116050518A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00: Computing arrangements using knowledge-based models
    • G06N5/02: Knowledge representation; Symbolic representation
    • G06N5/022: Knowledge engineering; Knowledge acquisition
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30: Information retrieval of unstructured textual data
    • G06F16/36: Creation of semantic tools, e.g. ontology or thesauri
    • G06F16/367: Ontology
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00: Road transport of goods or passengers
    • Y02T10/10: Internal combustion engine [ICE] based vehicles
    • Y02T10/40: Engine management systems


Abstract

The invention relates to the field of knowledge graph embedding data poisoning and discloses a method for evaluating the effect of data poisoning on knowledge graph embedding models. Building on MRR, the method accounts for both the poisoning strength and the concealment of a data poisoning attack, so it better measures how strongly different data poisoning attacks affect the knowledge graph and how effective the poisoning is overall. The proposed index allows different data poisoning attacks to be compared with each other on the same model, and the index can be biased toward poisoning or concealment by adjusting the parameter a.

Description

Knowledge graph embedded model data poisoning effect evaluation method
Technical Field
The invention relates to the field of knowledge-graph embedded data poisoning, in particular to a knowledge-graph embedded model data poisoning effect evaluation method.
Background
The knowledge graph is an important field of current computer science research, and knowledge graph embedding is an important branch of it. Its aim is to convert the structured data of a knowledge graph into high-dimensional vectors while preserving the graph's semantics and topological structure, so that the graph can better serve downstream tasks. With the wide application of knowledge graphs in industrial tasks such as semantic search, recommendation systems and dialogue systems, the knowledge graph embedding task has been valued by researchers, and data poisoning against it has likewise drawn wide attention. Poisoning the training data of a knowledge graph embedding model can reduce the accuracy of the embedding to a certain extent.
The main poisoning means at present is poisoning of target triples in the knowledge graph. The ideal poisoning result is that, during application of the knowledge graph, the embedding performance drops obviously whenever the target triples are involved and remains almost unchanged otherwise. Evaluating a data poisoning attack therefore means evaluating both its poisoning performance on the target triples and the degree to which the poisoning strategy affects other triples: the larger the influence on other triples, the more easily the attack is discovered by users, so a smaller influence is better. In other words, one must evaluate both the poisoning performance of the attack and its concealment on non-target triples.
Generally speaking, an attacker performs the poisoning by selecting a target triplet and then generating poisoning data from it. For example, in the paper "Data Poisoning Attack against Knowledge Graph Embedding" (Hengtong Zhang, Tianhang Zheng, Jing Gao et al., IJCAI 2019: 4853-4859), the authors design adversarial poisoning data along the direction in which the gradient of the entity embedding decreases most rapidly, and add the poisoning data to the training set without the user's knowledge, thereby degrading the embedding performance of the knowledge graph.
Currently, several evaluation indexes exist for data poisoning of knowledge graph embedding models. The closest evaluation index is MRR, a common performance index of knowledge graph embedding models: when a triplet prediction task is carried out on the embedding result, MRR is the mean of the reciprocal ranks of the correct results among the predictions. Related indexes are MR and Hits@N: MR is the mean rank of the correct results among the predictions, and Hits@N is the proportion of predictions in which the correct result ranks in the top N. For example, select one triplet (A, friend, B) from the knowledge graph; predicting the tail of (A, friend, ?) yields many candidate entities. Taking the rank of B, repeating the prediction over other triples and averaging gives the MR of the whole knowledge graph embedding model; taking the reciprocal of B's rank and averaging over many predictions gives the MRR; counting, over many predictions, the cases in which the correct entity ranks in the top N and dividing the count by the number of predictions gives Hits@N. These indexes can reflect the performance of a knowledge graph embedding model; for example, the authors of "Translating Embeddings for Modeling Multi-relational Data" (Antoine Bordes, Nicolas Usunier, Alberto García-Durán et al., NIPS 2013: 2787-2795) report the performance of the TransE embedding model with mean rank and Hits@10.
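The three ranking metrics described above can be sketched as follows. This is an illustrative implementation of the standard definitions, not code from the patent, and the `ranks` list is a made-up example of correct-entity ranks.

```python
def mr(ranks):
    """Mean Rank (MR): average rank of the correct entity across predictions."""
    return sum(ranks) / len(ranks)

def mrr(ranks):
    """Mean Reciprocal Rank (MRR): average of the reciprocal ranks."""
    return sum(1.0 / r for r in ranks) / len(ranks)

def hits_at_n(ranks, n=10):
    """Hits@N: fraction of predictions whose correct entity ranks in the top N."""
    return sum(1 for r in ranks if r <= n) / len(ranks)

# Hypothetical ranks of the correct tail entity for queries like (A, friend, ?):
ranks = [1, 2, 10, 4, 100]
print(mr(ranks))               # 23.4
print(round(mrr(ranks), 3))    # 0.372
print(hits_at_n(ranks, n=10))  # 0.8
```

A model that always ranks the correct entity first would score MR = 1, MRR = 1 and Hits@N = 1; poisoning pushes all three in the unfavourable direction.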
The degree of damage of a data poisoning attack to the embedding performance of the knowledge graph can be reflected by observing the change of these indexes before and after poisoning. For example, in "Poisoning Knowledge Graph Embeddings via Relation Inference Patterns", the authors design three data poisoning attack methods based on the inference patterns of the knowledge graph and directly use the degree of decline of the MRR value to reflect the effect of the poisoning; this, however, reflects only the poisoning performance to a certain degree.
In the above scheme, taking MRR as an example, the decline of MRR can reflect the poisoning performance of a data poisoning attack but not its concealment. This is because in current data poisoning experiments, only the target triples are used as the test set when measuring MRR, so the MRR value only reflects the degree of embedding degradation on the target triples.
Moreover, across different knowledge graph embedding models, the decline of MRR cannot serve as a standard for comparing the poisoning performance of different poisoning strategies, because the original MRR values of different models differ, and declines from different baselines are not comparable. The present invention therefore provides a knowledge graph embedding model data poisoning effect evaluation method.
Disclosure of Invention
(one) solving the technical problems
Aiming at the defects of the prior art, the invention provides a knowledge graph embedding model data poisoning effect evaluation method, in which a poisoning group and a control group are set up to measure the effect of a knowledge graph embedding poisoning attack from the angles of poisoning and concealment.
(II) technical scheme
In order to achieve the above purpose, the present invention provides the following technical solutions: a knowledge graph embedded model data poisoning effect evaluation method comprises the following steps:
the first step: when a data poisoning attack is carried out on the knowledge graph embedding model, obtaining a poisoning group A, a control group B and a flat group C;
the second step: generating a set of N poisoning triples A_p, using the triples of the poisoning group A as poisoning seeds;
the third step: randomly adding N fact triples to the original training set; after training, using A, B and C respectively as test sets to obtain the values A_MRR, B_MRR and C_MRR of the performance index MRR of the target model;
the fourth step: adding the poisoning triple set A_p to the original training set; after training, using A, B and C respectively as test sets to obtain the values A′_MRR, B′_MRR and C′_MRR of the performance index MRR of the poisoned model;
the fifth step: comparing A_MRR, B_MRR and C_MRR with A′_MRR, B′_MRR and C′_MRR. A_MRR − A′_MRR reflects the performance degradation of the poisoning group, B_MRR − B′_MRR reflects the performance decline of the control group, B_MRR − C_MRR reflects the generalization performance of the original model before poisoning, and B′_MRR − C′_MRR reflects the generalization performance of the model after poisoning; the generalization of the model before and after poisoning is introduced into the calculation formula. Accordingly, the poisoning performance index Dp and the concealment index Di of the data poisoning attack P are designed:
[The formulas for Dp and Di are given as images in the original document and are not reproduced here.]
preferably, the specific content of the first step is: when a data poisoning attack is carried out on the knowledge graph embedding model, the poisoning attack is denoted P, and the attack cost, i.e. the number of attack triples, is N; after the attacker selects N target triples as the poisoning group A, a control group B co-distributed with the poisoning group is randomly generated, and a flat group C is randomly drawn from the real triples; the three groups of data are equal in number and all serve as test sets.
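The first-step construction of the three test groups can be sketched as below. The patent does not spell out how the control group is made "co-distributed" with the poisoning group; matching the relation distribution of group A, as done here, is one plausible reading, and all function and variable names are illustrative.

```python
import random

def build_groups(all_triples, target_triples, n, seed=0):
    """Build poisoning group A, control group B and flat group C, each of size n.
    Triples are (head, relation, tail) tuples."""
    rng = random.Random(seed)
    A = list(target_triples)[:n]          # poisoning group: the attack targets
    a_set = set(A)
    remaining = [t for t in all_triples if t not in a_set]
    # Control group B: drawn to match A's relation distribution
    # (assumed interpretation of "co-distributed with the poisoning group").
    by_relation = {}
    for t in remaining:
        by_relation.setdefault(t[1], []).append(t)
    B = [rng.choice(by_relation.get(r, remaining)) for (_, r, _) in A]
    # Flat group C: drawn uniformly at random from the real triples.
    C = rng.sample(remaining, n)
    return A, B, C
```

All three groups are then used only as test sets; the flat group additionally marks which real triples are added to the baseline training run.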
Preferably, a D-score is defined to measure the comprehensive performance of the poisoning strategy; the D-score is calculated as the harmonic mean of the poisoning Dp and the concealment Di:
D1 = 2 · Dp · Di / (Dp + Di)
the value of D1 fairly balances the poisoning and the concealment of the knowledge graph embedding data poisoning: for the same poisoning Dp, a higher D1 means better concealment Di, and for the same concealment Di, a higher D1 means stronger poisoning Dp; a bias parameter a is introduced, and the value of Da is defined as follows:
[The formula for Da is given as an image in the original document and is not reproduced here.]
preferably, Da is defined such that the D-score pays more attention to the poisoning of the attack when a = 2 and to the concealment of the attack when a = 0.5.
(III) beneficial effects
Compared with the prior art, the invention provides a knowledge graph embedded model data poisoning effect evaluation method, which has the following beneficial effects:
1. The knowledge graph embedded model data poisoning effect evaluation method is clear in thought, simple in index calculation and objective in evaluation effect.
2. In the knowledge graph embedded model data poisoning effect evaluation method, a control group, a poisoning group and a flat group are set up and the poisoning Dp and the concealment Di are defined, remedying the neglect of concealment in previous comparisons of poisoning-attack effects, fully considering the generalization capability of the model, and quantifying the comparison of poisoning and concealment.
3. The knowledge graph embedded model data poisoning effect evaluation method fully considers the trade-off factors of poisoning attacks, and the D-score reflects the comprehensive effect of a poisoning attack, so that different poisoning attacks can be compared laterally.
4. Compared with using the decline of MRR to reflect the effect of a poisoning attack, the knowledge graph embedded model data poisoning effect evaluation method can intuitively quantify the poisoning and concealment of different poisoning attacks, and the D-score of the present invention can synthesize the overall effect of an attack.
Drawings
FIG. 1 is a schematic diagram of a poisoning group of a P-type poisoning method;
fig. 2 is a schematic diagram of an experimental procedure.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to fig. 1-2, a method for evaluating a poisoning effect of knowledge-graph embedded model data includes the following steps:
1. When a certain data poisoning attack (denoted P) is performed on the knowledge graph embedding model, the attack cost, i.e. the number of attack triples, is N. After the attacker selects N target triples as the poisoning group A, the experiment randomly generates a control group B co-distributed with the poisoning group, and a flat group C randomly drawn from the real triples; the three groups of data are equal in number and all serve as test sets.
2. When generating the poisoning data, a set of N poisoning triples A_p is generated using the triples of the poisoning group A as poisoning seeds, while no operation is performed on the control group and the flat group.
3. N fact triples are randomly added to the original training set; after training, A, B and C are used respectively as test sets to obtain the values A_MRR, B_MRR and C_MRR of the performance index MRR of the target model. The MRR values of the three test sets are expected to be very close.
4. The poisoning triple set A_p is added to the original training set; after training, A, B and C are used respectively as test sets to obtain the values A′_MRR, B′_MRR and C′_MRR of the performance index MRR of the poisoned model.
5. Comparing the two experiments: A_MRR − A′_MRR reflects the performance degradation of the poisoning group, B_MRR − B′_MRR reflects the performance decline of the control group, B_MRR − C_MRR reflects the generalization performance of the original model before poisoning, and B′_MRR − C′_MRR reflects the generalization performance of the model after poisoning. Considering the influence of generalization on the data poisoning attack, the generalization of the model before and after poisoning is introduced into the calculation formulas when designing the poisoning index and the concealment index. Accordingly, the poisoning performance index Dp and the concealment index Di of the data poisoning attack P are designed:
[The formulas for Dp and Di are given as images in the original document and are not reproduced here.]
the index reduction degree of the poisoning group A is mainly considered in the design of the poisoning Dp, the generalization performance of the model before and after poisoning is considered in the denominator, the greater the value of the Dp is, the stronger the poisoning of the data poisoning attack is, and the greater the value of the Di is, the stronger the concealment of the data poisoning attack is.
To conveniently observe the difference in effect between different poisoning attacks, a D-score is defined to measure the comprehensive performance of the poisoning strategy; the D-score is calculated as the harmonic mean of the poisoning Dp and the concealment Di:
D1 = 2 · Dp · Di / (Dp + Di)
The value of D1 fairly balances the poisoning and the concealment of the knowledge graph embedding data poisoning. For the same poisoning Dp, a higher D1 means better concealment Di; for the same concealment Di, a higher D1 means stronger poisoning Dp. To let the index show its effect better under different poisoning strategies, a bias parameter a is introduced and the value of Da is defined as follows:
[The formula for Da is given as an image in the original document and is not reproduced here.]
Da is defined such that the D-score pays more attention to the poisoning of the attack when a = 2 and to the concealment of the attack when a = 0.5.
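A sketch of the combined score: with a = 1 this is exactly the harmonic mean of Dp and Di stated above. The weighted form for general a is an assumption, modelled on the F-beta measure so that a = 2 emphasises poisoning and a = 0.5 emphasises concealment, consistent with the behaviour the text describes; the numeric inputs are made up.

```python
def d_score(dp, di, a=1.0):
    """Weighted harmonic mean of poisoning Dp and concealment Di.
    a = 1 gives the plain harmonic mean D1; larger a leans toward Dp,
    smaller a toward Di (F-beta-style weighting, an assumed form)."""
    return (1 + a * a) * dp * di / (a * a * di + dp)

dp, di = 0.8, 0.4
print(round(d_score(dp, di), 4))         # 0.5333  (D1)
print(round(d_score(dp, di, a=2), 4))    # 0.6667  (D2, pulled toward Dp)
print(round(d_score(dp, di, a=0.5), 4))  # 0.4444  (D0.5, pulled toward Di)
```

As a grows, Da approaches Dp; as a shrinks toward zero, Da approaches Di, which is what makes the bias parameter useful when comparing attacks with different trade-offs.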
FIG. 1 is a schematic flow chart of a poisoning group of a data poisoning experiment in the present invention.
In the diagram, 101 is the set of selected target triples, i.e. the poisoning group A, of size N; this group serves as the poisoning seeds and as the basis for generating the poisoning data. 102 is the poisoning data A_p generated from the data of the poisoning group A by the data poisoning method P; this set of data is put into the knowledge graph embedding model to produce the poisoning effect. 103 represents the data poisoning method; different data poisoning methods can generate different poisoning data from the same batch of poisoning seeds.
Fig. 2 is an experimental flow chart under the P-poisoning method designed in the present invention.
The data partitioning stage 201 in the figure is the stage before data poisoning; 202 is the model training stage; 203 shows the output results of the models. 211 is the target triples selected before poisoning as the poisoning group A, used as poisoning seeds to generate the poisoning data; 212 is the control group B, randomly selected to be co-distributed with the poisoning group; 213 is the flat group C, randomly selected from the real triples. The three groups of experimental data are equal in quantity and all serve as test sets to obtain the corresponding performance indexes of the knowledge graph embedding model. 221 is the original training set; 222 is the experimental data of the flat group C; the original training set and the flat-group data are put into the target model together and trained to obtain the flat model 223. 225 is the original training set, and 22 is the poisoning data A_p generated with the poisoning group A as poisoning seeds; the two together serve as the training set of the original model, and the poisoning model 224 is obtained after training. The flat model and the poisoning model obtained by training are tested with the poisoning group A, the control group B and the flat group C respectively as test sets, yielding six groups of MRR data, which in 203 are the flat model's poisoning-group A_MRR, control-group B_MRR and flat-group C_MRR, and the poisoning model's poisoning-group A′_MRR, control-group B′_MRR and flat-group C′_MRR.
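The two-branch flow of Fig. 2 can be sketched as the following driver. `train_fn` and `mrr_fn` stand in for a real KGE training routine and MRR evaluator (e.g. for a TransE model) and are assumed interfaces, not part of the patent; they are stubbed out for illustration.

```python
def run_experiment(train_set, groups, poison_set, baseline_adds, train_fn, mrr_fn):
    """Two-branch experiment: a flat model trained on the original set plus the
    baseline additions, and a poisoned model trained on the original set plus A_p.
    groups is a dict like {"A": [...], "B": [...], "C": [...]};
    train_fn(triples) -> model, mrr_fn(model, test_set) -> float."""
    flat_model = train_fn(train_set + baseline_adds)    # flat branch (223)
    poisoned_model = train_fn(train_set + poison_set)   # poisoned branch (224)
    before = {g: mrr_fn(flat_model, s) for g, s in groups.items()}
    after = {g: mrr_fn(poisoned_model, s) for g, s in groups.items()}
    # before["A"] - after["A"] then reflects the MRR drop on the poisoning group,
    # before["B"] - after["B"] the drop on the control group, and so on.
    return before, after
```

With real training plugged in, the six returned MRR values are exactly the inputs to Dp, Di and the D-score.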
Specifically, take the experimental results of a TransE model on the data set FB15K-237 as an example, with triplet prediction as the target task. 3000 triples are selected as the poisoning group A, a control group B distributed in the same way as the poisoning group A is generated, and a flat group C is randomly selected from the real triples; the three groups contain the same amount of experimental data. After adding the flat group C to the original training set and training, the MRR of the TransE model on the poisoning group is A_MRR = 0.644, on the control group B_MRR = 0.624, and on the flat group C_MRR = 0.612. In the experiment on the data poisoning attack P, poisoning data A_p is generated with the poisoning group A as seeds. A_p is added to the training set of the original model M, the model is retrained, and A, B and C are again used as test sets to obtain the performance after poisoning: A′_MRR = 0.502, B′_MRR = 0.555 and C′_MRR = 0.562.
From these six MRR values, the poisoning Dp of the poisoning attack P, its concealment Di, its D-score D1, and the biased scores D2 and D0.5 are calculated in turn. [The numeric calculations are given as images in the original document and are not reproduced here.]
Among the above indexes, a larger value indicates better performance. By calculating Dp, Di and the D-score of various attack methods, the poisoning, concealment and comprehensive performance of different data poisoning attacks can be compared.
Although embodiments of the present invention have been shown and described, it will be understood by those skilled in the art that various changes, modifications, substitutions and alterations can be made therein without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.

Claims (4)

1. The knowledge graph embedding model data poisoning effect evaluation method is characterized by comprising the following steps of:
the first step: when a data poisoning attack is carried out on the knowledge graph embedding model, obtaining a poisoning group A, a control group B and a flat group C;
the second step: generating a set of N poisoning triples A_p, using the triples of the poisoning group A as poisoning seeds;
the third step: randomly adding N fact triples to the original training set; after training, using A, B and C respectively as test sets to obtain the values A_MRR, B_MRR and C_MRR of the performance index MRR of the target model;
the fourth step: adding the poisoning triple set A_p to the original training set; after training, using A, B and C respectively as test sets to obtain the values A′_MRR, B′_MRR and C′_MRR of the performance index MRR of the poisoned model;
the fifth step: comparing A_MRR, B_MRR and C_MRR with A′_MRR, B′_MRR and C′_MRR. A_MRR − A′_MRR reflects the performance degradation of the poisoning group, B_MRR − B′_MRR reflects the performance decline of the control group, B_MRR − C_MRR reflects the generalization performance of the original model before poisoning, and B′_MRR − C′_MRR reflects the generalization performance of the model after poisoning; the generalization of the model before and after poisoning is introduced into the calculation formula. Accordingly, the poisoning performance index Dp and the concealment index Di of the data poisoning attack P are designed:
[The formulas for Dp and Di are given as images in the original document and are not reproduced here.]
2. The knowledge graph embedding model data poisoning effect evaluation method according to claim 1, characterized in that: the specific content of the first step is that, when a data poisoning attack is carried out on the knowledge graph embedding model, the poisoning attack is denoted P and the attack cost, i.e. the number of attack triples, is N; after selecting N target triples as the poisoning group A, the attacker randomly generates a control group B co-distributed with the poisoning group, and a flat group C randomly drawn from the real triples; the three groups of data are equal in number and all serve as a test set.
3. The knowledge graph embedding model data poisoning effect evaluation method according to claim 1, characterized in that: a D-score is defined to measure the comprehensive performance of the poisoning strategy, and the D-score is calculated as the harmonic mean of the poisoning Dp and the concealment Di:
D1 = 2 · Dp · Di / (Dp + Di)
the value of D1 fairly balances the poisoning and the concealment of the knowledge graph embedding data poisoning: for the same poisoning Dp, a higher D1 means better concealment Di, and for the same concealment Di, a higher D1 means stronger poisoning Dp; a bias parameter a is introduced, and the value of Da is defined as follows:
[The formula for Da is given as an image in the original document and is not reproduced here.]
4. The knowledge graph embedding model data poisoning effect evaluation method according to claim 3, characterized in that: Da is defined such that the D-score pays more attention to the poisoning of the attack when a = 2 and to the concealment of the attack when a = 0.5.
CN202211426092.0A 2022-11-14 2022-11-14 Knowledge graph embedded model data poisoning effect evaluation method Pending CN116050518A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211426092.0A CN116050518A (en) 2022-11-14 2022-11-14 Knowledge graph embedded model data poisoning effect evaluation method


Publications (1)

Publication Number Publication Date
CN116050518A true CN116050518A (en) 2023-05-02

Family

ID=86122514

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211426092.0A Pending CN116050518A (en) 2022-11-14 2022-11-14 Knowledge graph embedded model data poisoning effect evaluation method

Country Status (1)

Country Link
CN (1) CN116050518A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117952205A (en) * 2024-03-26 2024-04-30 电子科技大学(深圳)高等研究院 Back door attack method, system and medium for knowledge graph embedding model


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination