CN116522007B - Recommendation system model-oriented data forgetting learning method, device and medium

Recommendation system model-oriented data forgetting learning method, device and medium

Info

Publication number
CN116522007B
Authority
CN
China
Prior art keywords
model
data
data set
recommendation system
learning method
Prior art date
Legal status
Active
Application number
CN202310814010.8A
Other languages
Chinese (zh)
Other versions
CN116522007A (en)
Inventor
何向南
张洋
冯福利
白移梦
胡治宇
Current Assignee
University of Science and Technology of China USTC
Original Assignee
University of Science and Technology of China USTC
Priority date
Filing date
Publication date
Application filed by University of Science and Technology of China USTC
Priority to CN202310814010.8A
Publication of CN116522007A
Application granted
Publication of CN116522007B


Classifications

    • G06F 16/9535: Information retrieval; retrieval from the web; search customisation based on user profiles and personalisation
    • G06F 18/241: Pattern recognition; classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06N 20/00: Machine learning

Abstract

The invention discloses a data forgetting learning method, device and medium for a recommendation system model. The data forgetting learning method comprises the following steps: first, obtain the calculation function of the recommendation system model for a sample under the training data set, whose output is the prediction for that sample; train the model to the optimum on the training data set; and then obtain the model with the unavailable data set erased from the optimal model by means of an influence-function estimate. The method does not change the training architecture, model architecture or deployment mode of the original model, deletes the model parameters that are not important for data erasure, and improves both the accuracy and the efficiency of influence-function-based data erasure.

Description

Recommendation system model-oriented data forgetting learning method, device and medium
Technical Field
The invention relates to the field of data processing systems or methods, in particular to a data forgetting learning method, device and medium for a recommendation system model.
Background
The recommendation system is a key basic tool for coping with the information explosion of the mobile internet era: it genuinely influences activities such as people's daily life, entertainment and travel, and it is the most commonly used interaction medium between service providers and users. Recommendation systems typically infer user interests from users' historical interaction information, which the deployed model memorizes in its parameters. However, in some cases a model is also required to erase part of the historical data: for example, a user may require deletion of personal historical information for privacy reasons, or the system itself may need to delete certain information, such as data injected by attacks. For convenience, such information to be erased is referred to as unusable data. It should be noted that it is not enough to erase these unusable data from the database; the emphasis is on how to erase them from the parameters of the model.
In current recommendation systems, the erasure of unusable data is accomplished mainly by retraining. The first approach is full retraining, i.e. training the recommendation system model from scratch without the unusable data to achieve the goal of data erasure; this is usually very time-consuming and, since a recommendation system is a real-time system, it is not practical. The second approach is partial retraining: at initial training time the data are divided into independent parts, each part is used to train a separate sub-model, and when a request to erase unusable data is received only a small number of sub-models need to be retrained, which improves retraining efficiency. However, this approach requires the unusable data to affect only a small fraction of the sub-models, and because the distribution of unusable data is usually unknown and unrestricted, this assumption limits the practical application of the approach. Besides retraining methods, some research works record the gradient-update information of the training process and then retroactively cancel the gradient updates contributed by the unusable data to achieve data erasure; however, such methods ignore the interactions between different samples.
In technical fields outside recommendation systems, there are also research efforts that use the influence function to achieve data erasure, but these cannot be applied directly to recommendation systems, because they cannot evaluate the effect that erasing the unusable data has on the calculation functions of other data. In addition, direct application also incurs a massive computational overhead. The invention improves the influence function so that it can measure the change caused to the calculation functions of other data when the unusable data are erased, and realizes computational acceleration through a pruning scheme.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a data forgetting learning method, device and medium for a recommendation system model, which do not change the training architecture, model architecture or deployment mode of the original model and which improve both the accuracy and the efficiency of influence-function-based data erasure.
In order to achieve the aim of the invention, the invention adopts the following technical scheme:
in a first aspect, the present invention provides a data forgetting learning method for a recommendation system model, where the data forgetting learning method includes the following steps:
s1, definitionIs a parameter of->The recommendation system model of the representation is +.>In training data set->The following calculation function, which is output as a prediction for the sample, and is +.>The optimal model ∈>Expressed as->Wherein->For the recommendation system model in the data set +.>The sum of the loss functions below, expressed as +.>Representing data set +.>One of them, < >>Representing the user->Representing articles->Representing user +.>For articles->Is->Representing a loss function;
s2, in the data setThe unavailable data set is +.>The remaining data set is +.>The model after erasing the data is +.>Expressed as->Estimating ∈according to the influence function>The estimated result is recorded as +.>The resulting erase unavailable data set +.>The latter model is denoted->
Further, when the recommendation system model uses different data sets $D$ and thus computes its calculation function for an input sample in different ways, the model calculation functions after erasing the unavailable data set $D_u$ are also different.
Still further, the specific calculation process of the model calculation function after erasing the unusable data set $D_u$ is as follows:
(1) When the data set is $D$, denote the recommendation system model calculation function for an input user-item pair $(u,i)$ by $f_\theta(u,i\mid D)$; record all sample points $(u,i,y_{ui})$ of the remaining data set $D_r$ that satisfy the condition $f_\theta(u,i\mid D)\neq f_\theta(u,i\mid D_r)$ as the calculation-function change data set $D_c$; the difference between the loss functions of the recommendation system model, computed under the data set $D$ and under the remaining data set $D_r$, over all sample points of the data set $D_c$ is $L_1(\theta)$, expressed as $L_1(\theta)=\sum_{(u,i,y_{ui})\in D_c}\big[l\big(f_\theta(u,i\mid D_r),y_{ui}\big)-l\big(f_\theta(u,i\mid D),y_{ui}\big)\big]$;
(2) Compute the loss function of the unavailable data set $D_u$ under the data set $D$, denoted $L_2(\theta)$, as $L_2(\theta)=\sum_{(u,i,y_{ui})\in D_u}l\big(f_\theta(u,i\mid D),y_{ui}\big)$;
(3) Based on the obtained $L_1(\theta)$ and $L_2(\theta)$, define $L_\Delta(\theta)=L_1(\theta)-L_2(\theta)$; the optimal model obtained by adding $L_\Delta$ with strength $\varepsilon$ to $L(D;\theta)$ is $\theta^{*}_{\varepsilon}$, expressed as $\theta^{*}_{\varepsilon}=\arg\min_{\theta}\,L(D;\theta)+\varepsilon L_\Delta(\theta)$, where $\varepsilon$ denotes the perturbation term; it follows that $\theta^{*}_{\varepsilon=0}=\theta^{*}$ and $\theta^{*}_{\varepsilon=1}=\theta^{*}_{r}$;
(4) Define the influence function of the unavailable data set $D_u$ for the recommendation system model as $\mathcal{I}(D_u)$, expressed as $\mathcal{I}(D_u)=\frac{d\theta^{*}_{\varepsilon}}{d\varepsilon}\Big|_{\varepsilon=0}=-H_{\theta^{*}}^{-1}\nabla_{\theta}L_\Delta(\theta^{*})$, where $\nabla_{\theta}$ denotes taking the derivative with respect to $\theta$, $H_{\theta^{*}}=\nabla^{2}_{\theta}L(D;\theta^{*})$ denotes the Hessian matrix, and $H_{\theta^{*}}^{-1}$ denotes the inverse of the Hessian matrix;
(5) Then, based on the influence function $\mathcal{I}(D_u)$, the estimate of $\Delta$ is $\hat{\Delta}=\mathcal{I}(D_u)=-H_{\theta^{*}}^{-1}\nabla_{\theta}L_\Delta(\theta^{*})$, and the model after erasing the unavailable data set $D_u$ is: $\hat{\theta}_{r}=\theta^{*}+\hat{\Delta}$.
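As an illustration of steps (1) to (5), the sketch below estimates $\hat{\Delta}=-H_{\theta^{*}}^{-1}\nabla_{\theta}L_\Delta(\theta^{*})$ without forming the Hessian explicitly, using Hessian-vector products and a conjugate-gradient solve; this is a common way of applying influence functions but is only an assumed implementation here, and the names `grad_L_delta`, `hvp` and `influence_update` are hypothetical. It reuses `loss_L` from the previous sketch; note that for a plain matrix-factorization model the calculation function does not depend on the data set, so $D_c$ is empty and $L_1$ vanishes, and the two-argument form below is kept only to mirror the general definition above.

```python
# Sketch (assumptions: PyTorch; model and loss_L as in the previous sketch;
# D, D_u, D_c given as (user, item, label) tensors).
import torch
from torch.autograd import grad

def flat_grad(loss, params, create_graph=False):
    gs = grad(loss, params, create_graph=create_graph)
    return torch.cat([g.reshape(-1) for g in gs])

def grad_L_delta(model, D_c_old, D_c_new, D_u):
    """Gradient of L_Delta = L_1 - L_2 at theta*.
    D_c_old / D_c_new: D_c evaluated with the calculation function under D / D_r."""
    params = [p for p in model.parameters() if p.requires_grad]
    L1 = loss_L(model, D_c_new) - loss_L(model, D_c_old)
    L2 = loss_L(model, D_u)
    return flat_grad(L1 - L2, params)

def hvp(model, D, v):
    """Hessian-vector product H_{theta*} v, with H = grad^2 L(D; theta*)."""
    params = [p for p in model.parameters() if p.requires_grad]
    g = flat_grad(loss_L(model, D), params, create_graph=True)
    return flat_grad((g * v).sum(), params)

def influence_update(model, D, g_delta, cg_iters=100, damping=1e-3):
    """Solve H x = g_delta by conjugate gradient and return Delta_hat = -x."""
    x = torch.zeros_like(g_delta)
    r = g_delta.clone()
    p = r.clone()
    rs = r @ r
    for _ in range(cg_iters):
        Ap = hvp(model, D, p) + damping * p
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if rs_new.sqrt() < 1e-6:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return -x  # Delta_hat (flattened); theta_hat_r = theta* + Delta_hat
```

A caller would flatten $\theta^{*}$, add the returned $\hat{\Delta}$, and write the result back into the model parameters to obtain $\hat{\theta}_{r}$.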
Furthermore, when the data forgetting learning method performs data erasure, acceleration is realized by a pruning method that deletes the model parameters which are not important for data erasure.
Furthermore, in the pruning method, the specific flow of deleting the unimportant model parameters is as follows:
(1) Denote the set of all users and items in the data set $D_u$ by $V$, and for each element $v\in V$ compute, by counting interactions, the set of elements that interact with it, denoted $N_v$; set the maximum number of iterations $K$, initialize an importance score $s_v$ for every element $v\in V$, and set the clipping ratio per iteration $\alpha_k$, $k=0,1,\ldots,K$;
(2) Initialize an empty set $V_0$; traverse each sample $(u,i,y_{ui})$ of the data set $D_u$, add the user-item pair $\{u,i\}$ to $V_0$, and for any $v\in\{u,i\}$ update its importance score $s_v$; after the traversal is completed, update $V_0$ and the importance scores according to the clipping ratio $\alpha_0$; finally let $k=1$;
(3) If $k\le K$, initialize an empty set $V_k$; traverse $V_{k-1}$, and for each $v\in V_{k-1}$ and each $v'\in N_v$, update the importance score $s_{v'}$ and add $v'$ to $V_k$; after the traversal is completed, update $V_k$ and the importance scores according to the clipping ratio $\alpha_k$; let $k\leftarrow k+1$;
(4) Continue executing step (3) until $k>K$, obtaining $\Psi$, the set of model parameters $\theta_v$ corresponding to the selected elements $v$, and return $\Psi$; according to the obtained $\Psi$, denote the other parameters in $\theta$ as $\Phi$, so that $\theta=\{\Psi,\Phi\}$, where $\Psi$ are the model parameters important for erasing data and $\Phi$ are the model parameters that are not important for erasing data; in the update, the change of $\Phi$ is ignored and only the change of $\Psi$ is considered.
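The sketch below illustrates one possible realization of the importance-score propagation in steps (1) to (4); the score-update and clipping rules used here (additive scores, keeping the top $\alpha_k$ fraction by score) are simplifying assumptions, and the helper names `select_important_entities` and `clip_by_score` are hypothetical.

```python
# Sketch (assumptions: additive importance scores, keep-top-alpha clipping;
# interactions maps each user/item node v to the set N_v of its neighbors).
from collections import defaultdict

def clip_by_score(candidates, scores, alpha):
    """Keep the top-alpha fraction of candidates by importance score."""
    ranked = sorted(candidates, key=lambda v: scores[v], reverse=True)
    return set(ranked[: max(1, int(alpha * len(ranked)))])

def select_important_entities(D_u, interactions, K=2, alphas=None):
    """Return the users/items whose embedding parameters form Psi."""
    alphas = alphas or [0.5] * (K + 1)   # clipping ratio alpha_k per iteration
    scores = defaultdict(float)          # importance score s_v

    # Step (2): score entities appearing directly in the unusable data D_u.
    frontier = set()
    for u, i, _y in D_u:
        for v in (("user", u), ("item", i)):
            frontier.add(v)
            scores[v] += 1.0
    frontier = clip_by_score(frontier, scores, alphas[0])
    selected = set(frontier)

    # Step (3): propagate scores to interaction neighbors for K iterations.
    for k in range(1, K + 1):
        nxt = set()
        for v in frontier:
            neigh = interactions.get(v, ())
            for v2 in neigh:
                scores[v2] += scores[v] / max(len(neigh), 1)
                nxt.add(v2)
        frontier = clip_by_score(nxt, scores, alphas[k])
        selected |= frontier

    # Step (4): Psi = parameters of the selected users/items; the rest is Phi.
    return selected
```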
Further, in the influence function of the recommendation system model, $\theta$ is replaced by $\{\Psi,\Phi\}$, and $\Phi$ is fixed at its value $\Phi^{*}$ in the optimal model and treated as a constant; $\nabla_{\theta}$ is then simplified to $\nabla_{\Psi}$, i.e. the influence function is simplified as: $\mathcal{I}_{\Psi}(D_u)=-H_{\Psi}^{-1}\nabla_{\Psi}L_\Delta(\theta^{*})$, where $H_{\Psi}=\nabla^{2}_{\Psi}L(D;\theta^{*})$; the model after erasing the unavailable data set $D_u$ is: $\hat{\theta}_{r}=\{\Psi^{*}+\hat{\Delta}_{\Psi},\,\Phi^{*}\}$, where $\hat{\Delta}_{\Psi}=\mathcal{I}_{\Psi}(D_u)$.
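To illustrate restricting the update to $\Psi$, the sketch below reuses `hvp` and `grad_L_delta` from the earlier sketch but projects both the gradient and the Hessian-vector products onto the coordinates selected by the pruning step, so that the conjugate-gradient solve effectively works on the sub-block system $H_{\Psi}\hat{\Delta}_{\Psi}=-\nabla_{\Psi}L_\Delta(\theta^{*})$. This masked solve is an assumed implementation detail (a real speed-up would additionally build the computation only over the selected embedding rows), and `influence_update_psi` and `psi_mask` are hypothetical names.

```python
# Sketch (assumption: psi_mask is a boolean tensor over the flattened
# parameters that is True exactly on the coordinates belonging to Psi).
import torch

def influence_update_psi(model, D, g_delta, psi_mask, cg_iters=100, damping=1e-3):
    """Solve the Psi sub-block system H_Psi x = g_Psi; return Delta_hat_Psi = -x."""
    mask = psi_mask.float()
    g = g_delta * mask                   # gradient restricted to Psi
    x = torch.zeros_like(g)
    r = g.clone()
    p = r.clone()
    rs = r @ r
    for _ in range(cg_iters):
        # (H v)_Psi with v zero on Phi equals the sub-block product H_Psi v_Psi.
        Ap = hvp(model, D, p) * mask + damping * p
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if rs_new.sqrt() < 1e-6:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return -x  # zero on the Phi coordinates; only Psi is updated
```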
Further, if the number of parameters of $\Psi$ is $p'$ ($p'<p$), the computational complexity of the model update is reduced to $O(n'p'^{2}+p'^{3})$ or $O(n'p')$, where $n'$ is the number of samples that influence the update, $p$ is the number of model parameters, and $O$ denotes the time complexity.
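As a purely illustrative order-of-magnitude comparison (the concrete numbers are assumed and do not come from the patent): forming and inverting the full Hessian costs on the order of $O(np^{2}+p^{3})$, so with $p=10^{6}$ model parameters, $n=10^{6}$ training samples, $p'=10^{3}$ important parameters and $n'=10^{3}$ affected samples,

$$np^{2}+p^{3}\approx 10^{6}\cdot 10^{12}+10^{18}=2\times 10^{18},\qquad n'p'^{2}+p'^{3}\approx 10^{3}\cdot 10^{6}+10^{9}=2\times 10^{9},$$

i.e. roughly a $10^{9}$-fold reduction in this hypothetical setting.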
Further, the loss function $l$ is a binary cross entropy loss function.
In a second aspect, the present invention provides a data recommendation device, comprising a memory storing computer executable instructions and a processor configured to execute the computer executable instructions, the computer executable instructions implementing the data forgetting learning method when executed by the processor.
In a third aspect, the present invention provides a computer readable storage medium having a computer program stored thereon, wherein the computer program when executed by a processor implements the data forgetting learning method.
Compared with the prior art, the invention has the following beneficial effects:
The method improves the accuracy of erasing data based on the influence function. When the traditional influence function computes the influence of data on the model, it only considers the $L_2$ term of the loss function and ignores the $L_1$ term, which measures the part of the prediction function (i.e., the calculation function) of other data that is affected by removing the data. This affected part exists widely in recommendation models, and only by taking it into account can complete erasure of the data be achieved, thereby obtaining the same target result as training the recommendation model from scratch.
The invention improves the efficiency of erasing data. In one aspect, the invention directly estimates the parameter change $\hat{\Delta}$ and adds this quantity to the original model to obtain the model with the unusable data erased, thereby avoiding retraining; when the amount of data to be erased is not too large, this helps achieve faster erasure. On the other hand, the invention proposes the pruning method for acceleration, which changes $\nabla_{\theta}$ to $\nabla_{\Psi}$ and further speeds up the computation by deleting the model parameters that are not important for data erasure.
The method is a post-processing method: it does not need to change the model architecture or training architecture of the original model, it only needs to be able to access the model and obtain its gradients, and it can therefore be conveniently and directly grafted onto a deployed system model, which is beneficial to the wide application of the method.
Detailed Description
Example 1:
the embodiment discloses a data forgetting learning method facing a recommendation system model, which comprises the following steps:
s1, definitionIs a parameter of->The recommendation system model of the representation is +.>In training data set->The following calculation function, which is output as a prediction for the sample, and is +.>The optimal model ∈>Expressed as->Wherein->For the recommendation system model in the data set +.>The sum of the loss functions below, expressed as +.>Watch (watch)Show dataset +.>One of them, < >>Representing the user->Representing articles->Representing user +.>For articles->Is->Represents a loss function, loss function->The method specifically comprises the steps of binary cross entropy loss function;
s2, in the data setThe unavailable data set is +.>The remaining data set is +.>The model after erasing the data is +.>Expressed as->Estimating ∈according to the influence function>Estimation result recordRecorded as->The resulting erase unavailable data set +.>The latter model is denoted->
When the recommendation system model uses different data sets $D$ and computes its calculation function for an input sample in different ways, the model calculation functions after erasing the unavailable data set $D_u$ are also different. The specific calculation process of the model calculation function after erasing the unusable data set $D_u$ is as follows:
(1) When the data set is $D$, denote the recommendation system model calculation function for an input user-item pair $(u,i)$ by $f_\theta(u,i\mid D)$; record all sample points $(u,i,y_{ui})$ of the remaining data set $D_r$ that satisfy the condition $f_\theta(u,i\mid D)\neq f_\theta(u,i\mid D_r)$ as the calculation-function change data set $D_c$; the difference between the loss functions of the recommendation system model, computed under the data set $D$ and under the remaining data set $D_r$, over all sample points of the data set $D_c$ is $L_1(\theta)$, expressed as $L_1(\theta)=\sum_{(u,i,y_{ui})\in D_c}\big[l\big(f_\theta(u,i\mid D_r),y_{ui}\big)-l\big(f_\theta(u,i\mid D),y_{ui}\big)\big]$;
(2) Compute the loss function of the unavailable data set $D_u$ under the data set $D$, denoted $L_2(\theta)$, as $L_2(\theta)=\sum_{(u,i,y_{ui})\in D_u}l\big(f_\theta(u,i\mid D),y_{ui}\big)$;
(3) Based on the obtained $L_1(\theta)$ and $L_2(\theta)$, define $L_\Delta(\theta)=L_1(\theta)-L_2(\theta)$; the optimal model obtained by adding $L_\Delta$ with strength $\varepsilon$ to $L(D;\theta)$ is $\theta^{*}_{\varepsilon}$, expressed as $\theta^{*}_{\varepsilon}=\arg\min_{\theta}\,L(D;\theta)+\varepsilon L_\Delta(\theta)$, where $\varepsilon$ denotes the perturbation term; it follows that $\theta^{*}_{\varepsilon=0}=\theta^{*}$ and $\theta^{*}_{\varepsilon=1}=\theta^{*}_{r}$;
(4) Define the influence function of the unavailable data set $D_u$ for the recommendation system model as $\mathcal{I}(D_u)$, expressed as $\mathcal{I}(D_u)=\frac{d\theta^{*}_{\varepsilon}}{d\varepsilon}\Big|_{\varepsilon=0}=-H_{\theta^{*}}^{-1}\nabla_{\theta}L_\Delta(\theta^{*})$, where $\nabla_{\theta}$ denotes taking the derivative with respect to $\theta$, $H_{\theta^{*}}=\nabla^{2}_{\theta}L(D;\theta^{*})$ denotes the Hessian matrix, and $H_{\theta^{*}}^{-1}$ denotes the inverse of the Hessian matrix;
(5) Then, based on the influence function $\mathcal{I}(D_u)$, the estimate of $\Delta$ is $\hat{\Delta}=\mathcal{I}(D_u)=-H_{\theta^{*}}^{-1}\nabla_{\theta}L_\Delta(\theta^{*})$, and the model after erasing the unavailable data set $D_u$ is: $\hat{\theta}_{r}=\theta^{*}+\hat{\Delta}$.
In this embodiment, when the data forgetting learning method performs data erasure, acceleration is realized by a pruning method that deletes the model parameters which are not important for data erasure. In the pruning method, the specific flow of deleting unimportant model parameters is as follows:
(1) Denote the set of all users and items in the data set $D_u$ by $V$, and for each element $v\in V$ compute, by counting interactions, the set of elements that interact with it, denoted $N_v$; set the maximum number of iterations $K$, initialize an importance score $s_v$ for every element $v\in V$, and set the clipping ratio per iteration $\alpha_k$, $k=0,1,\ldots,K$;
(2) Initialize an empty set $V_0$; traverse each sample $(u,i,y_{ui})$ of the data set $D_u$, add the user-item pair $\{u,i\}$ to $V_0$, and for any $v\in\{u,i\}$ update its importance score $s_v$; after the traversal is completed, update $V_0$ and the importance scores according to the clipping ratio $\alpha_0$; finally let $k=1$;
(3) If $k\le K$, initialize an empty set $V_k$; traverse $V_{k-1}$, and for each $v\in V_{k-1}$ and each $v'\in N_v$, update the importance score $s_{v'}$ and add $v'$ to $V_k$; after the traversal is completed, update $V_k$ and the importance scores according to the clipping ratio $\alpha_k$; let $k\leftarrow k+1$;
(4) Continue executing step (3) until $k>K$, obtaining $\Psi$, the set of model parameters $\theta_v$ corresponding to the selected elements $v$, and return $\Psi$; according to the obtained $\Psi$, denote the other parameters in $\theta$ as $\Phi$, so that $\theta=\{\Psi,\Phi\}$, where $\Psi$ are the model parameters important for erasing data and $\Phi$ are the model parameters that are not important for erasing data; in the update, the change of $\Phi$ is ignored and only the change of $\Psi$ is considered.
According to the influence function of the recommendation system model, $\theta$ is replaced by $\{\Psi,\Phi\}$, and $\Phi$ is fixed at its value $\Phi^{*}$ in the optimal model and treated as a constant; $\nabla_{\theta}$ is then simplified to $\nabla_{\Psi}$, i.e. the influence function is simplified as: $\mathcal{I}_{\Psi}(D_u)=-H_{\Psi}^{-1}\nabla_{\Psi}L_\Delta(\theta^{*})$, where $H_{\Psi}=\nabla^{2}_{\Psi}L(D;\theta^{*})$; the model after erasing the unavailable data set $D_u$ is $\hat{\theta}_{r}=\{\Psi^{*}+\hat{\Delta}_{\Psi},\,\Phi^{*}\}$, where $\hat{\Delta}_{\Psi}=\mathcal{I}_{\Psi}(D_u)$. If the number of parameters of $\Psi$ is $p'$ ($p'<p$), the computational complexity of the model update is reduced to $O(n'p'^{2}+p'^{3})$ or $O(n'p')$, where $n'$ is the number of samples that influence the update, $p$ is the number of model parameters, and $O$ denotes the time complexity.
In order to verify the effectiveness of the learning method disclosed in Embodiment 1 when applied to a recommendation model system, the following experimental verification was performed with a real data set. The data set is the commonly used public data set Amazon; the division and setting of the data set follow the common practice of data-erasure studies on recommendation model systems, erasing attack data from the data so as to achieve better model performance. Experiments were performed on two commonly used recommendation models, MF and LGCN, comparing different data erasure methods such as Retrain, RecEraser and SISA, where Retrain is the method fully retrained from scratch, whose model performance gives the reference result that erasing the data should achieve. The experimental results are shown in Table 1:
table 1 data erasure effects with different erasure methods
From the data erasure effects recorded in Table 1, it can be found that, compared with other data erasure methods, the learning method disclosed in this embodiment is consistent with the reference result of Retrain and obtains better results than before the data were erased, which proves that the learning method of this embodiment can achieve accurate data erasure.
In addition, efficiency experiments were performed on the data set, testing the time efficiency of the learning method disclosed in this embodiment and of the Retrain method when erasing different proportions of data. The experimental results are shown in Table 2:
TABLE 2 time Performance comparison results
From the time-performance comparison results recorded in Table 2, it can be found that the learning method disclosed in this embodiment achieves data erasure faster when deleting different proportions of data. Combining the results of Table 1 and Table 2 verifies that the method of Embodiment 1 can achieve data erasure quickly and accurately.
The invention can be applied to any data-driven, differentiable recommendation system, in order to respond to data-erasure requests such as those arising from user privacy protection, platform security or the needs of other parties, and to assist in building a friendly, legal and compliant recommendation-system ecology. At the implementation level, the invention can be integrated as software into a deployed recommendation system, or installed on a website online to directly provide data-erasure request interfaces to different users.
Example 2:
the embodiment discloses a data recommendation device, which comprises a memory and a processor, wherein the memory stores computer executable instructions, the processor is configured to execute the computer executable instructions, and the computer executable instructions are executed by the processor to realize the data forgetting learning method disclosed in the first embodiment.
Example 3:
the embodiment discloses a computer readable storage medium, and a computer program is stored on the computer readable storage medium, and when the computer program is run by a processor, the data forgetting learning method disclosed in the first embodiment is realized.

Claims (8)

1. A data forgetting learning method facing a recommendation system model is characterized by comprising the following steps:
s1, definitionThe recommendation system model, represented by θ, for one parameter is trained on the user item pair (u, i) in the dataset +.>The following calculation function, which is output as a prediction for the sample, and is +.>The optimal model ∈>Denoted as->Wherein (1)>For the recommendation system model in the data set +.>The sum of the loss functions is expressed asRepresenting data set +.>One sample of (a), u represents the user, i represents the object, y ui A label representing user u for item i, l representing a loss function;
s2, in the data setThe unavailable data set is +.>The remaining data set is +.>The model after erasing the data isExpressed as->Estimating +.>The estimated result is recorded as +.>The resulting erase unavailable data set +.>The latter model is denoted->The recommender model uses different datasets +.>The erasure-disabled data set is used in a different way for the input samples>The model calculation functions of (2) are also different; erase unavailable data set->The specific calculation process of the model calculation function of (a) is as follows:
(1) When the data set is $D$, denote the recommendation system model calculation function for an input user-item pair $(u,i)$ by $f_\theta(u,i\mid D)$; record all sample points $(u,i,y_{ui})$ of the remaining data set $D_r$ that satisfy the condition $f_\theta(u,i\mid D)\neq f_\theta(u,i\mid D_r)$ as the calculation-function change data set $D_c$; the difference between the loss functions of the recommendation system model, computed under the data set $D$ and under the remaining data set $D_r$, over all sample points of the data set $D_c$ is $L_1(\theta)$, expressed as $L_1(\theta)=\sum_{(u,i,y_{ui})\in D_c}\big[l\big(f_\theta(u,i\mid D_r),y_{ui}\big)-l\big(f_\theta(u,i\mid D),y_{ui}\big)\big]$;
(2) Compute the loss function of the unavailable data set $D_u$ under the data set $D$, denoted $L_2(\theta)$, as $L_2(\theta)=\sum_{(u,i,y_{ui})\in D_u}l\big(f_\theta(u,i\mid D),y_{ui}\big)$;
(3) Based on the obtained $L_1(\theta)$ and $L_2(\theta)$, define $L_\Delta(\theta)=L_1(\theta)-L_2(\theta)$; the optimal model obtained by adding $L_\Delta$ with strength $\varepsilon$ to $L(D;\theta)$ is $\theta^{*}_{\varepsilon}$, expressed as $\theta^{*}_{\varepsilon}=\arg\min_{\theta}\,L(D;\theta)+\varepsilon L_\Delta(\theta)$, where $\varepsilon$ denotes the perturbation term; it follows that $\theta^{*}_{\varepsilon=0}=\theta^{*}$ and $\theta^{*}_{\varepsilon=1}=\theta^{*}_{r}$;
(4) Define the influence function of the unavailable data set $D_u$ for the recommendation system model as $\mathcal{I}(D_u)$, expressed as: $\mathcal{I}(D_u)=\frac{d\theta^{*}_{\varepsilon}}{d\varepsilon}\Big|_{\varepsilon=0}=-H_{\theta^{*}}^{-1}\nabla_{\theta}L_\Delta(\theta^{*})$, where $\nabla_{\theta}$ denotes taking the derivative with respect to $\theta$, $H_{\theta^{*}}=\nabla^{2}_{\theta}L(D;\theta^{*})$ denotes the Hessian matrix, and $H_{\theta^{*}}^{-1}$ denotes the inverse of the Hessian matrix;
(5) Then, based on the influence function $\mathcal{I}(D_u)$, let the estimate of $\Delta$ be $\hat{\Delta}=\mathcal{I}(D_u)=-H_{\theta^{*}}^{-1}\nabla_{\theta}L_\Delta(\theta^{*})$; the model after erasing the unavailable data set $D_u$ is: $\hat{\theta}_{r}=\theta^{*}+\hat{\Delta}$.
2. The recommendation system model-oriented data forgetting learning method according to claim 1, wherein acceleration is realized by a pruning method when data erasure is performed by the data forgetting learning method, and model parameters which are not important for data erasure are deleted.
3. The recommendation system model-oriented data forgetting learning method according to claim 2, wherein the specific flow of deleting unimportant model parameters in the pruning method is as follows:
(1) Denote the set of all users and items in the data set $D_u$ by $V$, and for each element $v\in V$ compute, by counting interactions, the set of elements that interact with it, denoted $N_v$; set the maximum number of iterations $K$, initialize an importance score $s_v$ for every element $v\in V$, and set the clipping ratio per iteration $\alpha_k$, $k=0,1,\ldots,K$;
(2) Initialize an empty set $V_0$; traverse each sample $(u,i,y_{ui})$ of the data set $D_u$, add the user-item pair $\{u,i\}$ to $V_0$, and for any $v\in\{u,i\}$ update its importance score $s_v$; after the traversal is completed, update $V_0$ and the importance scores according to the clipping ratio $\alpha_0$; finally let $k=1$;
(3) If $k\le K$, initialize an empty set $V_k$; traverse $V_{k-1}$, and for each $v\in V_{k-1}$ and each $v'\in N_v$, update the importance score $s_{v'}$ and add $v'$ to $V_k$; after the traversal is completed, update $V_k$ and the importance scores according to the clipping ratio $\alpha_k$; let $k\leftarrow k+1$;
(4) Continue executing step (3) until $k>K$ to obtain $\Psi$, where $\theta_v$ denotes the model parameters corresponding to $v$, and return $\Psi$; according to the obtained $\Psi$, the other parameters in $\theta$ are denoted $\Phi$, so that $\theta=\{\Psi,\Phi\}$, where $\Psi$ are the model parameters relatively important for erasing data and $\Phi$ are the model parameters not important for erasing data; in the update, the change of $\Phi$ is ignored and only the change of $\Psi$ is considered.
4. The recommendation system model oriented data forgetting learning method of claim 3, wherein, in the influence function of the recommendation system model, $\theta$ is replaced by $\{\Psi,\Phi\}$, $\Phi$ is fixed at its value $\Phi^{*}$ in the optimal model and treated as a constant, and $\nabla_{\theta}$ is simplified to $\nabla_{\Psi}$, i.e. the influence function is simplified as: $\mathcal{I}_{\Psi}(D_u)=-H_{\Psi}^{-1}\nabla_{\Psi}L_\Delta(\theta^{*})$, where $H_{\Psi}=\nabla^{2}_{\Psi}L(D;\theta^{*})$; the model after erasing the unavailable data set $D_u$ is: $\hat{\theta}_{r}=\{\Psi^{*}+\hat{\Delta}_{\Psi},\,\Phi^{*}\}$, where $\hat{\Delta}_{\Psi}=\mathcal{I}_{\Psi}(D_u)$.
5. The recommendation system model-oriented data forgetting learning method according to claim 4, wherein if the parameter number of $\Psi$ is $p'$ ($p'<p$), the computational complexity of the model update is reduced to $O(n'p'^{2}+p'^{3})$ or $O(n'p')$, where $n'$ is the number of samples that influence the update, $p$ is the number of model parameters, and $O$ denotes the time complexity.
6. The recommendation system model oriented data forgetting learning method according to claim 1, wherein the loss function is a binary cross entropy loss function.
7. A data recommendation device comprising a memory storing computer executable instructions and a processor configured to execute the computer executable instructions, wherein the computer executable instructions when executed by the processor implement the data forgetting learning method of any of claims 1 to 6.
8. A computer readable storage medium having a computer program stored thereon, characterized in that the computer program when executed by a processor implements the data forgetting learning method according to any one of claims 1 to 6.
CN202310814010.8A 2023-07-05 2023-07-05 Recommendation system model-oriented data forgetting learning method, device and medium Active CN116522007B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310814010.8A CN116522007B (en) 2023-07-05 2023-07-05 Recommendation system model-oriented data forgetting learning method, device and medium


Publications (2)

Publication Number Publication Date
CN116522007A (en) 2023-08-01
CN116522007B (en) 2023-10-20






Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant