CN111461306B - Feature evaluation method and device - Google Patents

Feature evaluation method and device

Info

Publication number
CN111461306B
CN111461306B (application CN202010244963.1A)
Authority
CN
China
Prior art keywords
sample
replacement
neural network
network model
feature
Prior art date
Legal status
Active
Application number
CN202010244963.1A
Other languages
Chinese (zh)
Other versions
CN111461306A (en)
Inventor
武桓州
魏龙
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202010244963.1A
Publication of application CN111461306A
Application granted; publication of CN111461306B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

An embodiment of the present application provides a feature evaluation method and apparatus in the technical field of deep learning. Specifically, for each sample of a first sample set, an electronic device uses a replacement operator to replace the feature attribute of one feature of the sample at a time, obtaining a plurality of replacement samples; it obtains the vector values that the feature attributes in the replacement samples correspond to in a neural network model; it further calculates the prediction result of each replacement sample; and it determines the weight of each feature according to the difference between each replacement sample's prediction result and the sample's actual value. In this embodiment, feature importance can be calculated using the online interaction between the electronic device and the device running the neural network model together with the electronic device's own computing resources, occupying few resources.

Description

Feature evaluation method and device
Technical Field
The present application relates to the field of deep learning technologies, and in particular, to a method and an apparatus for feature evaluation.
Background
Deep neural network (DNN) models are widely used in artificial-intelligence applications such as computer vision, speech recognition, and robotics; using statistical learning methods, they extract high-level features from samples to perform prediction, classification, and the like. In practice, feeding effective features into a DNN model helps it characterize objects more accurately and quickly and deliver accurate estimates or classifications. However, as external conditions change, once-effective features may shift or lose their effectiveness on the user side, and the model may then no longer predict or classify accurately. It is therefore necessary to analyze the influence of each feature's input neuron on the model output, evaluate feature importance, and optimize the features accordingly.
In the prior art, the feature importance of a deep neural network model is evaluated by calculation during model training. Specifically, when the deep neural network model is trained on a highly concurrent distributed cluster, feature values in the training samples are tampered with, and the feature importance evaluation is completed in that pass.
However, completing feature importance evaluation during model training is costly: tampering with the training samples requires a separate, independent training task for the computation, which consumes resources of the same magnitude as a conventional model training task.
Disclosure of Invention
The embodiments of the present application provide a feature evaluation method and apparatus, aimed at solving the technical problem in the prior art that feature importance evaluation is costly.
A first aspect of an embodiment of the present application provides a method for feature evaluation, including:
obtaining a first sample set, wherein each sample in the first sample set comprises a plurality of features and feature attributes of the sample, the feature attributes being used to describe the features; for each sample of the first sample set, replacing, with a replacement operator, the feature attribute of one feature of the sample at a time to obtain a plurality of replacement samples of the sample; obtaining the vector values corresponding to the feature attributes of the plurality of replacement samples in a neural network model, wherein the neural network model is trained on a second sample set and the first and second sample sets are disjoint; calculating, from the vector values, the input value of each replacement sample in the neural network model; performing vector calculation on the input value of each replacement sample in the neural network model to obtain the prediction result of each replacement sample; and determining the weight of each feature according to the difference between the prediction result of each replacement sample and the actual value of each sample. Thus, after the neural network model has been trained, feature importance can be computed using the electronic device's online interaction with the device running the neural network model and the electronic device's own computing resources, occupying few resources.
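The steps above amount to a permutation-style importance estimate. A minimal Python sketch under stated assumptions: `replace_operator` draws a substitute attribute from a hypothetical per-feature value pool, `model_predict` stands in for the trained neural network model's vector calculation, and mean absolute error stands in for the more general "difference" between prediction result and actual value:

```python
import random

def replace_operator(sample, feature, value_pool, rng):
    # Hypothetical replacement operator: substitute one feature's attribute
    # with a randomly drawn value, leaving all other features untouched.
    replaced = dict(sample)
    replaced[feature] = rng.choice(value_pool[feature])
    return replaced

def feature_weights(samples, actuals, features, value_pool, model_predict, seed=0):
    # For each feature, replace it in every sample and measure the mean
    # absolute gap between the replaced prediction and the actual value;
    # a larger gap means the replaced feature mattered more to the model.
    rng = random.Random(seed)
    weights = {}
    for feature in features:
        gaps = [abs(model_predict(replace_operator(s, feature, value_pool, rng)) - a)
                for s, a in zip(samples, actuals)]
        weights[feature] = sum(gaps) / len(gaps)
    return weights
```

For instance, with a toy model that depends only on age, replacing "age" yields a nonzero weight while replacing "gender" yields zero, matching the claim's intuition that a large prediction difference signals an important feature.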
Optionally, obtaining the vector values corresponding to the feature attributes of the plurality of replacement samples in the neural network model includes: sending a query request to a device running the neural network model, the query request comprising the feature attributes of the plurality of replacement samples, wherein the neural network model comprises the correspondence between those feature attributes and the vector values; and obtaining the vector values from the device running the neural network model.
Optionally, the electronic device is different from the device running the neural network model. The evaluation of feature importance can thus be localized and decoupled from the model training framework, which saves resources and supports horizontal scaling.
Optionally, the calculating an input value of each replacement sample in the neural network model according to the vector value includes: and for any one of the replacement samples, replacing the characteristic attribute of the replacement sample with the vector value corresponding to the characteristic attribute of the replacement sample to obtain the input value of the replacement sample in the neural network model.
Optionally, determining the weight of each feature according to the difference between the prediction result of each replacement sample and the actual value of each sample includes: calculating an area under the curve (AUC) value for each replacement sample according to the prediction result of each replacement sample and the actual value of each sample; and sorting the replacement samples' AUC values in reverse to obtain the weight of each feature.
Optionally, the method further includes: periodically determining the weight of each feature, and, when the fluctuation of every feature's weight is below a fluctuation threshold, indicating that the update frequency of the neural network model should be reduced. This saves neural network training resources and reduces the cost of running and maintaining the model.
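This fluctuation check can be sketched as follows, assuming the periodic weights are kept in a per-feature history window and that "fluctuation" is measured as the max-minus-min spread over that window (the claim does not pin down either detail):

```python
def should_reduce_update_frequency(weight_history, fluctuation_threshold):
    # weight_history maps each feature to its periodically measured weights.
    # If every feature's spread over the window stays below the threshold,
    # the model has stabilized and its update frequency can be reduced.
    return all(max(history) - min(history) < fluctuation_threshold
               for history in weight_history.values())
```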
Optionally, the method further includes: features having an importance below a threshold are deleted in an update sample when updating the neural network model. Therefore, the calculation amount in the process of training the neural network model can be reduced, and the training efficiency is improved.
Optionally, the method further includes: displaying the weight of each feature in a user interface. From this display the user can observe each feature's importance to the model effect and delete features that rank low over the long term, saving the computing resources of the application system.
A second aspect of the embodiments of the present application provides an apparatus for feature evaluation, including:
an obtaining module, configured to obtain a first sample set, wherein each sample in the first sample set comprises a plurality of features and feature attributes of the sample, the feature attributes being used to describe the features;
a replacement module, configured to replace, for each sample of the first sample set, the feature attribute of one feature of the sample with a replacement operator at a time, to obtain a plurality of replacement samples of the sample;
the obtaining module is further configured to obtain vector values corresponding to the feature attributes of the multiple replacement samples in the neural network model; the neural network model is obtained by adopting a second sample set for training, and the first sample set and the second sample set have no intersection;
the calculation module is used for calculating the input value of each replacement sample in the neural network model according to the vector value;
the calculation module is further configured to perform vector calculation on the input value of each replacement sample in the neural network model to obtain a prediction result of each replacement sample;
the calculating module is further configured to determine a weight of each feature according to a difference between a prediction result of each of the replacement samples and an actual value of each of the samples.
Optionally, the obtaining module is specifically configured to:
send a query request to the device running the neural network model, the query request comprising the feature attributes of the plurality of replacement samples, wherein the neural network model comprises the correspondence between those feature attributes and the vector values; and obtain the vector values from the device running the neural network model.
Optionally, the electronic device is different from the device running the neural network model.
Optionally, the calculation module is specifically configured to: and for any one of the replacement samples, replacing the characteristic attribute of the replacement sample with the vector value corresponding to the characteristic attribute of the replacement sample to obtain the input value of the replacement sample in the neural network model.
Optionally, the calculation module is specifically configured to: calculate an area under the curve (AUC) value for each replacement sample according to the prediction result of each replacement sample and the actual value of each sample; and sort the replacement samples' AUC values in reverse to obtain the weight of each feature.
Optionally, the method further includes:
and the indicating module is used for periodically determining the weight of each feature and indicating to reduce the updating frequency of the neural network model under the condition that the fluctuation of the weight of each feature is less than a fluctuation threshold value.
Optionally, the method further includes:
and the deleting module is used for deleting the characteristics with the importance degree lower than the threshold value in the updating sample when the neural network model is updated.
Optionally, the method further includes:
and the display module is used for displaying the weight of each characteristic on a user interface.
A third aspect of the embodiments of the present application provides an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of the preceding first aspects.
A fourth aspect of embodiments of the present application provides a non-transitory computer-readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any one of the preceding first aspects.
In summary, the embodiment of the present application has the following beneficial effects with respect to the prior art:
The embodiments of the present application provide a feature evaluation method and apparatus that, once the neural network model has been trained, can compute the feature weights using the electronic device's online interaction with the device running the neural network model and the electronic device's own computing resources, occupying few resources. Specifically, a first sample set may be obtained, where each sample comprises a plurality of features and feature attributes used to describe those features; for each sample of the first sample set, the feature attribute of one feature is replaced with a replacement operator at a time to obtain a plurality of replacement samples; the vector values corresponding to the feature attributes of the replacement samples in the neural network model are obtained, the model having been trained on a second sample set disjoint from the first; the input value of each replacement sample in the neural network model is calculated from the vector values; vector calculation is performed on those input values to obtain each replacement sample's prediction result; and the weight of each feature is determined from the difference between each replacement sample's prediction result and the sample's actual value.
In the embodiments of the present application, feature importance can be obtained using the computing resources of a single electronic device. Moreover, on that device the feature weights are computed from samples distinct from the neural network model's training samples; there is no need to combine the new samples with the training samples for computation, so the computation is smaller and more efficient.
Drawings
Fig. 1 is a schematic network structure diagram of a neural network model provided in an embodiment of the present application;
FIG. 2 is a schematic diagram of a system architecture for a feature evaluation method according to an embodiment of the present disclosure;
FIG. 3 is a schematic flow chart of a method for feature evaluation provided by an embodiment of the present application;
fig. 4 is a schematic view of a scenario of a method for feature evaluation provided in an embodiment of the present application;
FIG. 5 is a schematic structural diagram of a feature evaluation device according to an embodiment of the present application;
FIG. 6 is a block diagram of an electronic device for implementing a method of feature evaluation of an embodiment of the present application.
Detailed Description
The following description of the exemplary embodiments of the present application, taken in conjunction with the accompanying drawings, includes various details of the embodiments of the application for the understanding of the same, which are to be considered exemplary only. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present application. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness. The embodiments described below and the features of the embodiments can be combined with each other without conflict.
The method of the embodiment of the application can be applied to electronic equipment, such as a terminal or a server, and the terminal can include: a mobile phone, a tablet computer, a notebook computer, or a desktop computer, etc. The embodiment of the present application does not specifically limit the specific device used.
The first sample set described in the embodiment of the present application may be obtained by a system for training a neural network model from a network or a client, and the first sample set may be stored in a storage space, and the electronic device may obtain the first sample set from the storage space periodically or randomly.
The second sample set described in the embodiments of the present application is the sample set used to train the neural network model. The first sample set is disjoint from the second sample set. In one possible implementation, the second sample set is obtained earlier than the first: as time passes, new samples are generated, and those new samples can be used to calculate the importance of the features to the neural network model.
The features and feature attributes of the sample described in the embodiment of the present application may be set according to an actual application scenario, for example, in an application scenario in which the click rate of the user on an event is counted, the features of the sample may include: age, gender, location, user equipment model, etc., and the attribute of the feature may be a specific value corresponding to the feature, such as an attribute of age being a specific age value, an attribute of gender being male or female, etc. The embodiment of the present application does not specifically limit the features of the sample, the specific values and the number of the feature attributes.
The replacement operator described in the embodiments of the present application may use random replacement, replacing the feature attribute of one feature in a sample at a time, so that many replacement samples can be generated from the first sample set: if the first sample set contains M samples and each sample has N features, the replacement operator can generate M × N replacement samples. Since exactly one feature's vector value is replaced in each replacement sample, the weight (or importance) of that replaced feature can subsequently be determined from the difference between the replacement sample's prediction result and the sample's actual value.
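The M × N blow-up described here can be sketched as a small generator; the per-feature pool of substitute attribute values is an assumption standing in for whatever source the replacement operator draws from:

```python
import random

def generate_replacement_samples(samples, features, value_pool, seed=0):
    # For each of the M samples and each of its N features, emit one
    # replacement sample with that single feature randomly substituted,
    # for M * N replacement samples in total. Each entry records which
    # feature was replaced so its weight can be attributed later.
    rng = random.Random(seed)
    replacements = []
    for index, sample in enumerate(samples):
        for feature in features:
            replaced = dict(sample)
            replaced[feature] = rng.choice(value_pool[feature])
            replacements.append((index, feature, replaced))
    return replacements
```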
The vector values that the feature attributes described in the embodiments of the present application correspond to in the neural network model are specific to that model: for the same feature attribute, different neural network models may yield different vector values. In the embodiments of the present application, the real-time vector value of a feature attribute in the neural network model can therefore be obtained through online interaction with the model, and the importance of each feature to the model can then be calculated.
The electronic device described in this embodiment of the present application may provide a Graphical User Interface (GUI), and the GUI may display weight values of each feature in the GUI in a form of a curve, a text, a graph, a table, or any other image, which is not specifically limited in this embodiment of the present application.
The neural network model described in the embodiments of the present application may be a DNN model or any other neural network model; this is not specifically limited in the embodiments of the present application. Fig. 1 shows a schematic network structure of a DNN model. A DNN model contains multiple hidden layers; the feature values of the feature slots (slots, which may also be called features) of each neuron, acquired at the input layer, are processed and calculated by a large number of neurons in the hidden layers and finally output by the output layer. The transfer and calculation of feature values in such a network structure is generally not visible.
When evaluating the feature importance of a DNN model, one possible approach is: for a model with N input slots, perform N rounds of computation, each round selecting one slot, shuffling that slot's input at the input layer, and randomly replacing its value with the same slot's value from another sample. If a slot is important to the model, losing or scrambling its value will degrade the model's final effect, and the lower the resulting area under the curve (AUC), the more important the slot; once every feature has been replaced in turn, sorting by the AUC values in reverse yields the ordering of feature importance within the model. AUC is defined as the area enclosed between the receiver operating characteristic (ROC) curve and the coordinate axes; this area is at most 1, and since the ROC curve generally lies above the line y = x, AUC values range between 0.5 and 1. Using the AUC value as the evaluation standard gives the model's overall effect, and decomposing each feature input neuron's contribution to the model's final AUC reveals the effect of a single neuron on the whole.
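The AUC computation and reverse ordering can be illustrated with a short sketch. The pairwise (rank) formulation of AUC below — the probability that a randomly chosen positive scores above a randomly chosen negative — is one standard definition equivalent to the area under the ROC curve; the slot names in the test data are hypothetical:

```python
def auc(labels, scores):
    # Pairwise AUC: fraction of (positive, negative) pairs in which the
    # positive example receives the higher score; ties count half.
    positives = [s for label, s in zip(labels, scores) if label == 1]
    negatives = [s for label, s in zip(labels, scores) if label == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in positives for n in negatives)
    return wins / (len(positives) * len(negatives))

def rank_slots(auc_after_replacement):
    # A lower AUC after scrambling a slot means the slot mattered more,
    # so the reverse ordering here is an ascending sort on AUC:
    # most important slot first.
    return sorted(auc_after_replacement, key=auc_after_replacement.get)
```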
Generally, the evaluation of the feature importance of the DNN model needs to be implemented in a model training framework. For example, when the model is trained on a highly concurrent distributed cluster, the above scheme can be implemented by tampering with slot values in the training samples, and the evaluation of the feature importance of the slot dimensions is completed.
However, this scheme is relatively expensive to carry out during model training: the main purpose of training is to obtain the latest model from real training samples, so tampering with the training samples requires a separate, independent training task for the computation. Large-scale model importance evaluation on a cluster therefore consumes resources of the same magnitude as a conventional model training task, making feature importance evaluation costly.
Based on this, the embodiments of the present application localize the feature evaluation process: after the neural network model has been trained, feature importance can be computed using the online interaction between the electronic device and the device running the neural network model together with the electronic device's computing resources, occupying few resources.
Exemplarily, as shown in fig. 2, fig. 2 is a schematic diagram of an application scenario architecture to which the method provided by the embodiment of the present application is applied.
In this embodiment, the electronic device 11 may obtain a first sample set and calculate the replacement samples of each sample in it. It then sends a query request to the device 12 running the neural network model; the device 12 may be a server cluster or the like, for example comprising servers 121 to 12n (n greater than 1). The device 12 obtains the vector values that the feature attribute values in the query request correspond to in the neural network model and sends those vector values back to the electronic device 11. The electronic device obtains, from the vector values, the input value of each replacement sample in the neural network model, performs vector calculation on those input values to obtain each replacement sample's prediction result, and finally determines the importance of each feature to the neural network model from the difference between each replacement sample's prediction result and the sample's actual value.
In other words, in the embodiment of the present application, feature importance can be calculated using the computing resources of a single electronic device; on that device, the feature weights are computed from samples distinct from the neural network model's training samples, and no combined computation over new samples and training samples is needed, so the computation is small and efficient.
As shown in fig. 3, fig. 3 is a schematic flow chart of a method for feature evaluation according to an embodiment of the present application.
The method specifically comprises the following steps:
s101: obtaining a first sample set, wherein any sample in the first sample set comprises a plurality of features and feature attributes of the sample, and the feature attributes of the sample are used for describing the features.
In the embodiment of the present application, the neural network model may be updated periodically, for example, according to a certain period, the neural network model is trained by combining a new sample, and the newly trained neural network model may be run in the application system. The device for collecting the neural network model training samples can collect newly generated samples according to certain conditions and store the new samples in a storage space or a database and the like.
The electronic device of this embodiment may obtain a plurality of new samples from the storage space in which they are stored, yielding the first sample set. For example, the electronic device may query the storage space periodically, and when new samples are present, retrieve them from the storage space to obtain the first sample set.
For example, taking an application scenario of counting the click rate of the user on an event as an example, the features of the samples in the first sample set may include: age and gender, the attribute of age is a specific age value, the attribute of gender is male or female, and the actual click rate of the user on an event can also be included in the sample.
S102: and for any sample of the first sample set, replacing the characteristic attribute of one characteristic of the any sample by using a replacement operator at each time to obtain a plurality of replacement samples of the any sample.
In this embodiment of the present application, for each sample of the first sample set, a plurality of replacement samples of the sample may be obtained by having the replacement operator replace the feature attribute of one feature at a time. The replacement may be random.
Illustratively, in one of the samples of the first sample set, the features and their attributes are: age 20, gender female; two replacement samples can therefore be obtained. For example, the replacement sample obtained after replacing the feature "age" is "age 40, gender female", and the replacement sample obtained after replacing the feature "gender" is "age 20, gender male".
S103: obtaining vector values corresponding to the characteristic attributes of the plurality of replacement samples in a neural network model; the neural network model is obtained by training a second sample set, and the first sample set and the second sample set do not have intersection.
In the embodiment of the present application, the neural network model may be obtained by any possible training mode, which is not specifically limited in the embodiment of the present application, and the first sample set and the second sample set used for training the neural network model do not intersect with each other, that is, a new sample is used to verify the importance of each feature to the neural network model.
In this embodiment of the application, the electronic device may interact online with the device running the neural network model to query the vector values that the feature attributes in the plurality of replacement samples correspond to in the model. For example, the electronic device may send a query request, comprising the feature attributes of the plurality of replacement samples, to the device running the neural network model; the neural network model comprises the correspondence between feature attributes and vector values, so the electronic device can then obtain the corresponding vector values from that device. The specific feature attributes and vector values may be determined according to the actual application scenario and are not specifically limited in this embodiment.
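The query exchange might look like the following sketch; the `EmbeddingServer` class, its `query` method, and the (feature, attribute) keys are all hypothetical stand-ins, since the patent does not specify the request format:

```python
class EmbeddingServer:
    # Stands in for the device running the neural network model: it holds
    # the model's correspondence from feature attributes to vector values.
    def __init__(self, embedding_table):
        self.embedding_table = embedding_table

    def query(self, feature_attributes):
        # Answer a query request by looking up each attribute's vector value.
        return {attr: self.embedding_table[attr] for attr in feature_attributes}
```

The electronic device would then issue one such query for the feature attributes appearing in its replacement samples and receive the vector values in reply.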
S104: calculating the input value of each replacement sample in the neural network model according to the vector values.
In the embodiment of the application, the electronic device acquires the vector values corresponding to the characteristic attributes in the replacement samples in the neural network model, so that the input values of the replacement samples in the neural network model can be obtained based on the vector values.
In a possible implementation manner, for any one of the replacement samples, the feature attributes of the replacement sample are replaced with vector values corresponding to the feature attributes of the replacement sample, so that an input value of the replacement sample in the neural network model can be obtained.
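A minimal sketch of this lookup-and-substitute step, assuming the model's embedding dictionary is exposed as a plain mapping from (feature, attribute) pairs to vectors and that the input value is the concatenation of those vectors. Both the dictionary contents and the vector sizes are assumptions for illustration; the patent does not fix the data layout:

```python
# Illustrative embedding dictionary; keys and vector sizes are assumptions.
embedding_table = {
    ("age", 40): [0.1, 0.2],
    ("gender", "female"): [0.3, 0.4],
}

def input_value(replacement_sample, table):
    """Replace each feature attribute with its vector value and concatenate
    the vectors to form the sample's input value for the model."""
    vec = []
    for feature, attribute in replacement_sample.items():
        vec.extend(table[(feature, attribute)])
    return vec

print(input_value({"age": 40, "gender": "female"}, embedding_table))
# → [0.1, 0.2, 0.3, 0.4]
```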
S105: performing vector calculation on the input value of each replacement sample in the neural network model to obtain the prediction result of each replacement sample.
In the embodiment of the application, any feasible method may be adopted to perform vector calculation on the input values of the replacement samples in the neural network model to obtain the prediction results of the replacement samples. Since vector calculation is a relatively conventional method, it is not described here again.
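As a concrete (assumed) instance of such a vector calculation, the forward pass of a small fully connected network can be written out by hand; the layer sizes and weights below are arbitrary illustrations, not values from the patent:

```python
import math

def forward(x, hidden, out_w, out_b):
    """One ReLU hidden layer followed by a sigmoid output neuron.
    hidden: list of (weight_row, bias) pairs, one per hidden unit."""
    h = [max(0.0, sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in hidden]
    z = sum(w * hi for w, hi in zip(out_w, h)) + out_b
    return 1.0 / (1.0 + math.exp(-z))  # prediction in (0, 1)

# Toy example: identity-like hidden layer over a 2-dimensional input value.
p = forward([1.0, 2.0],
            hidden=[([1.0, 0.0], 0.0), ([0.0, 1.0], 0.0)],
            out_w=[1.0, 1.0], out_b=0.0)
print(round(p, 4))  # sigmoid(3) ≈ 0.9526
```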
S106: determining the weight of each feature according to the difference between the prediction result of each replacement sample and the actual value of each sample.
In the embodiment of the present application, if the difference between the prediction result of a replacement sample and the actual value of the sample is large, it indicates that the feature whose attribute was replaced in that replacement sample is of high importance to the neural network model. If the difference is small, it indicates that the feature whose attribute was replaced is of lower importance to the neural network model. The weight of each feature can thus be derived.
In a possible implementation manner, the AUC value of each replacement sample may be calculated according to the prediction result of each replacement sample and the actual value of each sample, and the AUC values of the replacement samples may be sorted in reverse order to obtain the weight of each feature. Since AUC calculation is a relatively conventional method, it is not described here again.
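The per-feature AUC-and-reverse-sort step can be sketched as follows. The rank-based AUC formula is standard; the mapping from the reverse ordering to integer weights is an illustrative assumption (the patent only requires that a larger AUC drop yields a higher weight):

```python
def auc(labels, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) formula."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def feature_weights(per_feature_results):
    """per_feature_results: {feature: (actual_labels, predictions)} gathered
    from the replacement samples in which that feature was replaced.
    Features are sorted by ascending AUC: the lower the AUC after
    replacement, the more the feature mattered, so the higher its weight."""
    aucs = {f: auc(y, s) for f, (y, s) in per_feature_results.items()}
    ranked = sorted(aucs, key=aucs.get)
    return {f: len(ranked) - i for i, f in enumerate(ranked)}

results = {"age": ([1, 0], [0.2, 0.8]),     # replacing age hurt predictions
           "gender": ([1, 0], [0.9, 0.1])}  # replacing gender barely mattered
print(feature_weights(results))  # → {'age': 2, 'gender': 1}
```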
In summary, the embodiment of the present application provides a feature evaluation method and apparatus, which can realize feature importance calculation, after training of the neural network model is completed, based on online interaction between an electronic device and a device running the neural network model and on the computational resources of the electronic device, occupying fewer resources. Specifically, a first sample set may be obtained, where any sample in the first sample set includes a plurality of features and feature attributes of the sample, and the feature attributes of the sample are used to describe the features; for any sample of the first sample set, the feature attribute of one feature of the sample is replaced at a time by a replacement operator, so as to obtain a plurality of replacement samples; vector values corresponding to the feature attributes of the plurality of replacement samples in the neural network model are obtained, where the neural network model is trained with a second sample set and the first sample set and the second sample set have no intersection; the input value of each replacement sample in the neural network model is calculated according to the vector values; vector calculation is performed on the input value of each replacement sample in the neural network model to obtain the prediction result of each replacement sample; and the weight of each feature is determined according to the difference between the prediction result of each replacement sample and the actual value of each sample.
In the embodiment of the application, the feature importance can be obtained based on the computing resources of a single electronic device. Moreover, in the electronic device, the weight of each feature can be computed from samples different from the training samples of the neural network model; the new samples need not be combined with the training samples of the neural network model for computation, so the computation amount is small and the computation efficiency is high.
On the basis of the embodiment corresponding to fig. 3, in a possible implementation manner, it may further include: and periodically determining the weight of each feature, and indicating to reduce the update frequency of the neural network model when the fluctuation of the weight of each feature is less than a fluctuation threshold value.
In the embodiment of the present application, the feature importance calculation method described in the embodiment corresponding to fig. 3 may be used to periodically calculate the weight of each feature. If the weight of each feature does not fluctuate greatly over several calculations, for example, if the fluctuation is smaller than a fluctuation threshold (which may be set according to actual conditions and is not specifically limited in the embodiment of the present application), it indicates that the neural network model is relatively stable; the user may then be instructed, by means of text, image, or voice, to reduce the update frequency of the neural network model, so as to save training resources of the neural network model and reduce the cost of operating and maintaining it.
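A small sketch of this stability check. The fluctuation measure (per-feature maximum spread across recent periods) and the decision rule are assumptions for illustration, since the patent leaves the fluctuation threshold to actual conditions:

```python
def should_reduce_update_frequency(weight_history, fluctuation_threshold):
    """weight_history: list of {feature: weight} dicts, one per period.
    Suggest reducing the model update frequency only if every feature's
    weight stayed within the fluctuation threshold."""
    for feature in weight_history[0]:
        values = [weights[feature] for weights in weight_history]
        if max(values) - min(values) >= fluctuation_threshold:
            return False  # at least one feature still fluctuates
    return True

history = [{"age": 0.50, "gender": 0.30},
           {"age": 0.51, "gender": 0.29},
           {"age": 0.50, "gender": 0.31}]
print(should_reduce_update_frequency(history, 0.05))  # → True
```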
On the basis of the embodiment corresponding to fig. 3, in a possible implementation manner, it may further include: in updating the neural network model, features having a significance below a threshold are deleted in the updated sample.
In the embodiment of the application, a feature of lower importance is of little significance for training the neural network model. Features whose importance is lower than the threshold can therefore be deleted from the update sample when the neural network model is updated, so that the calculation amount in training the neural network model can be reduced and the training efficiency improved.
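The pruning described above amounts to a simple filter over the update samples; the threshold value and the weight format below are illustrative assumptions:

```python
def prune_update_sample(sample, feature_weights, importance_threshold):
    """Drop features whose computed weight falls below the threshold
    before the sample is used to update (retrain) the model."""
    return {feature: attribute for feature, attribute in sample.items()
            if feature_weights.get(feature, 0.0) >= importance_threshold}

sample = {"age": 20, "gender": "female", "city": "Beijing"}
weights = {"age": 0.8, "gender": 0.05, "city": 0.4}
print(prune_update_sample(sample, weights, 0.1))
# → {'age': 20, 'city': 'Beijing'}
```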
On the basis of the embodiment corresponding to fig. 3, in a possible implementation manner, it may further include: the weight of each of the features is displayed in a user interface.
In this embodiment of the present application, the weight of each feature may be displayed in the user interface in any form, such as a curve, text, a graph, or a table, which is not particularly limited in this embodiment of the present application. The user can observe the importance of each feature to the model effect from the user interface display, and then delete features that rank low over the long term, thereby saving the computing resources of the application system.
Fig. 4 is a scene diagram of a specific feature evaluation method according to an embodiment of the present application. As shown in fig. 4:
At time X, model training is performed using cluster equipment and sample X to obtain a neural network model, and the neural network model is run in an application system.
At time X+1, a sample X+1 of a new, not-yet-trained sample set is obtained. The attribute of each feature in sample X+1 can be replaced in turn by a replacement operator to generate a replaced sample set, in which each replaced sample differs from the sample before replacement in the attribute of exactly one feature. The vector values of the replaced feature attributes are then obtained from the model, the prediction result of each replacement sample is obtained by DNN network calculation, and an AUC value is calculated per feature dimension by an AUC operator. The calculated AUC values can be written into a repository, and feature importance display, feature research, model delay analysis, and the like can then be performed, which is not specifically limited in the embodiment of the present application.
In the embodiment of the application, the electronic device executing the feature importance evaluation method can be different from the device running the neural network model, so the evaluation of feature importance can be local and single-machine, decoupled from the model training framework, with no distributed parameter server required. In real business, complete evaluation of one model generally consumes 200 cluster resources for 15 hours of calculation; in the embodiment of the application, the feature importance evaluation of the same model can be completed on one physical machine within 30 minutes. Large-scale data comparison shows that the evaluation conclusions of the two schemes are consistent, with a model AUC difference on the order of ten-thousandths (0.0001), fully achieving the expected effect. In addition, since the whole evaluation scheme runs locally, all sample data can be sampled and cut: on the premise of guaranteeing confidence, the sample size can be randomly compressed to shorten the evaluation time. Meanwhile, the traditional large-scale cluster training mode is replaced by online access to a discrete feature-value dictionary, which reduces the time consumption by an order of magnitude and achieves efficient evaluation. Currently applied in a real recommendation system, this process is shortened from 10 hours to 30 minutes. In the embodiment of the application, feature importance calculation can be triggered during model iteration and completed by the next iteration, and the importance result of each version is displayed visually, powerfully supporting the application system in improving benefits.
In addition, the CPU calculation in the embodiment of the application supports horizontal capacity expansion: the evaluation capacity can be expanded by adding local CPU resources, increasing the calculation speed.
Fig. 5 is a schematic structural diagram of an embodiment of the feature evaluation apparatus provided in the present application. As shown in fig. 5, the feature evaluation apparatus provided in this embodiment includes:
an obtaining module 51, configured to obtain a first sample set, where any sample in the first sample set includes a plurality of features and feature attributes of the sample, and the feature attributes of the sample are used to describe the features;
a replacing module 52, configured to, for any sample of the first sample set, replace, by using a replacing operator, a feature attribute of one feature of the any sample at a time, so as to obtain a plurality of replacing samples of the any sample;
the obtaining module 51 is further configured to obtain vector values corresponding to the feature attributes of the plurality of replacement samples in the neural network model; the neural network model is obtained by training with a second sample set, and the first sample set and the second sample set have no intersection;
a calculating module 53, configured to calculate an input value of each replacement sample in the neural network model according to the vector value;
the calculating module 53 is further configured to perform vector calculation on the input value of each replacement sample in the neural network model to obtain a prediction result of each replacement sample;
the calculating module 53 is further configured to determine a weight of each feature according to a difference between a prediction result of each of the replacement samples and an actual value of each of the samples.
Optionally, the obtaining module is specifically configured to:
sending a query request to a device running the neural network model; the query request comprises characteristic attributes in the plurality of replacement samples; the neural network model comprises the corresponding relation between the characteristic attributes of the plurality of replacement samples and the vector values; obtaining the vector values from a device running the neural network model.
Optionally, the electronic device is different from the device running the neural network model.
Optionally, the calculation module is specifically configured to: and for any one of the replacement samples, replacing the characteristic attribute of the replacement sample with the vector value corresponding to the characteristic attribute of the replacement sample to obtain the input value of the replacement sample in the neural network model.
Optionally, the calculation module is specifically configured to: calculate an area under the curve (AUC) value of each replacement sample according to the prediction result of each replacement sample and the actual value of each sample; and sort the AUC values of the replacement samples in reverse order to obtain the weight of each feature.
Optionally, the method further includes:
and the indicating module is used for periodically determining the weight of each feature and indicating to reduce the updating frequency of the neural network model under the condition that the fluctuation of the weight of each feature is less than a fluctuation threshold value.
Optionally, the method further includes:
and the deleting module is used for deleting the characteristics with the importance degree lower than the threshold value in the updating sample when the neural network model is updated.
Optionally, the method further includes:
and the display module is used for displaying the weight of each characteristic on a user interface.
The embodiment of the application provides a feature evaluation method and apparatus, which can realize feature importance calculation, after training of the neural network model is completed, based on online interaction between an electronic device and a device running the neural network model and on the computational resources of the electronic device, occupying fewer resources. Specifically, a first sample set may be obtained, where any sample in the first sample set includes a plurality of features and feature attributes of the sample, and the feature attributes of the sample are used to describe the features; for any sample of the first sample set, the feature attribute of one feature of the sample is replaced at a time by a replacement operator, so as to obtain a plurality of replacement samples; vector values corresponding to the feature attributes of the plurality of replacement samples in the neural network model are obtained, where the neural network model is trained with a second sample set and the first sample set and the second sample set have no intersection; the input value of each replacement sample in the neural network model is calculated according to the vector values; vector calculation is performed on the input value of each replacement sample in the neural network model to obtain the prediction result of each replacement sample; and the weight of each feature is determined according to the difference between the prediction result of each replacement sample and the actual value of each sample.
In the embodiment of the application, the feature importance can be obtained based on the computing resources of a single electronic device. Moreover, in the electronic device, the weight of each feature can be computed from samples different from the training samples of the neural network model; the new samples need not be combined with the training samples of the neural network model for computation, so the computation amount is small and the computation efficiency is high.
The feature evaluation device provided in each embodiment of the present application can be used to execute the method shown in each corresponding embodiment, and the implementation manner and principle thereof are the same, and are not described again.
According to an embodiment of the present application, an electronic device and a readable storage medium are also provided.
As shown in fig. 6, it is a block diagram of an electronic device according to the method of feature evaluation of the embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the present application that are described and/or claimed herein.
As shown in fig. 6, the electronic apparatus includes: one or more processors 601, memory 602, and interfaces for connecting the various components, including a high-speed interface and a low-speed interface. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions for execution within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output apparatus (such as a display device coupled to the interface). In other embodiments, multiple processors and/or multiple buses may be used, along with multiple memories, as desired. Also, multiple electronic devices may be connected, with each device providing portions of the necessary operations (e.g., as a server array, a group of blade servers, or a multi-processor system). One processor 601 is illustrated in fig. 6.
The memory 602 is a non-transitory computer readable storage medium as provided herein. Wherein the memory stores instructions executable by at least one processor to cause the at least one processor to perform the method of feature evaluation provided herein. The non-transitory computer readable storage medium of the present application stores computer instructions for causing a computer to perform the method of feature evaluation provided herein.
The memory 602, as a non-transitory computer readable storage medium, may be used to store non-transitory software programs, non-transitory computer executable programs, and modules, such as program instructions/modules (e.g., the acquisition module 51, the replacement module 52, and the calculation module 53 shown in fig. 5) corresponding to the method of feature evaluation in the embodiments of the present application. The processor 601 executes various functional applications of the server and data processing by running non-transitory software programs, instructions, and modules stored in the memory 602, that is, the method of feature evaluation in the above method embodiments.
The memory 602 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created from use of the electronic device evaluated by the characteristics, and the like. Further, the memory 602 may include high speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory 602 optionally includes memory located remotely from the processor 601, and these remote memories may be connected over a network to the electronic device for feature evaluation. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device of the method of feature evaluation may further include: an input device 603 and an output device 604. The processor 601, the memory 602, the input device 603 and the output device 604 may be connected by a bus or other means, and fig. 6 illustrates the connection by a bus as an example.
The input device 603 may receive input numeric or character information and generate key signal inputs related to user settings and function controls of the electronic device for feature evaluation, such as a touch screen, keypad, mouse, track pad, touch pad, pointer stick, one or more mouse buttons, track ball, joystick, or other input device. The output devices 604 may include a display device, auxiliary lighting devices (e.g., LEDs), and tactile feedback devices (e.g., vibrating motors), among others. The display device may include, but is not limited to, a Liquid Crystal Display (LCD), a Light Emitting Diode (LED) display, and a plasma display. In some implementations, the display device can be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, application specific ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also known as programs, software applications, or code) include machine instructions for a programmable processor, and may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic disks, optical disks, memory, programmable logic devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user can be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local area networks (LANs), wide area networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
According to the technical solution of the embodiment of the application, after the neural network model is trained, feature importance calculation can be realized based on online interaction between the electronic device and the device running the neural network model and on the computational resources of the electronic device, occupying fewer resources. Specifically, a first sample set may be obtained, where any sample in the first sample set includes a plurality of features and feature attributes of the sample, and the feature attributes are used to describe the features; for any sample of the first sample set, the feature attribute of one feature of the sample is replaced at a time by a replacement operator, so as to obtain a plurality of replacement samples; vector values corresponding to the feature attributes of the plurality of replacement samples in the neural network model are obtained, where the neural network model is trained with a second sample set and the first sample set and the second sample set have no intersection; the input value of each replacement sample in the neural network model is calculated according to the vector values; vector calculation is performed on the input value of each replacement sample in the neural network model to obtain the prediction result of each replacement sample; and the weight of each feature is determined according to the difference between the prediction result of each replacement sample and the actual value of each sample.
In the embodiment of the application, the feature importance can be obtained based on the computing resources of a single electronic device. Moreover, in the electronic device, the weight of each feature can be computed from samples different from the training samples of the neural network model; the new samples need not be combined with the training samples of the neural network model for computation, so the computation amount is small and the computation efficiency is high.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present application can be achieved, which is not limited herein.
The above-described embodiments are not intended to limit the scope of the present disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made, depending on design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (18)

1. A method for feature evaluation, applied to an electronic device, the method comprising:
obtaining a first sample set, wherein any sample in the first sample set comprises a plurality of features and feature attributes of the sample, and the feature attributes of the sample are used for describing the features;
for any sample of the first sample set, replacing the characteristic attribute of one characteristic of the any sample by using a replacement operator each time to obtain a plurality of replacement samples of the any sample;
obtaining vector values corresponding to the characteristic attributes of the plurality of replacement samples in a neural network model; the neural network model is obtained by training a second sample set, and the first sample set and the second sample set do not have an intersection;
calculating the input value of each replacement sample in the neural network model according to the vector value;
vector calculation is carried out on the input value of each replacement sample in the neural network model, and the prediction result of each replacement sample is obtained;
and determining the weight of each feature according to the difference between the prediction result of each replacement sample and the actual value of each sample.
2. The method of claim 1, wherein obtaining corresponding vector values of the feature attributes of the plurality of replacement samples in a neural network model comprises:
sending a query request to a device running the neural network model; the query request comprises characteristic attributes of the plurality of replacement samples; the neural network model comprises the corresponding relation between the characteristic attributes of the plurality of replacement samples and the vector values;
obtaining the vector values from a device running the neural network model.
3. The method of claim 2, wherein the electronic device is different from a device running the neural network model.
4. The method according to any one of claims 1-3, wherein said calculating input values of each of said replacement samples in said neural network model from said vector values comprises:
and for any one of the replacement samples, replacing the characteristic attribute of the replacement sample with the vector value corresponding to the characteristic attribute of the replacement sample to obtain the input value of the replacement sample in the neural network model.
5. The method according to any one of claims 1-3, wherein determining the weight of each feature according to the difference between the prediction result of each of the replaced samples and the actual value of each of the samples comprises:
calculating an area under the curve AUC value of each replacement sample according to the prediction result of each replacement sample and the actual value of each sample;
and reversely ordering the AUC values of the replacement samples to obtain the weight of each feature.
6. The method according to any one of claims 1-3, further comprising:
and periodically determining the weight of each feature, and indicating to reduce the update frequency of the neural network model when the fluctuation of the weight of each feature is less than a fluctuation threshold value.
7. The method according to any one of claims 1-3, further comprising:
features having an importance below a threshold are deleted in an update sample when updating the neural network model.
8. The method of any one of claims 1-3, further comprising:
the weight of each of the features is displayed on a user interface.
9. An apparatus for feature evaluation, applied to an electronic device, includes:
the device comprises an acquisition module, a processing module and a processing module, wherein the acquisition module is used for acquiring a first sample set, any sample in the first sample set comprises a plurality of features and feature attributes of the sample, and the feature attributes of the sample are used for describing the features;
a replacement module, configured to replace, for any sample of the first sample set, a feature attribute of one feature of the any sample with a replacement operator at a time, to obtain multiple replacement samples of the any sample;
the obtaining module is further configured to obtain vector values corresponding to the feature attributes of the plurality of replacement samples in the neural network model; the neural network model is obtained by training with a second sample set, and the first sample set and the second sample set have no intersection;
a calculation module, configured to calculate the input value of each replacement sample in the neural network model according to the vector values;
the calculation module is further configured to perform vector calculation on the input value of each replacement sample in the neural network model to obtain a prediction result of each replacement sample;
the calculation module is further configured to determine the weight of each feature according to the difference between the prediction result of each replacement sample and the actual value of each sample.
10. The apparatus of claim 9, wherein the obtaining module is specifically configured to:
send a query request to a device running the neural network model, wherein the query request comprises the feature attributes of the plurality of replacement samples, and the neural network model comprises the correspondence between the feature attributes of the plurality of replacement samples and the vector values; and obtain the vector values from the device running the neural network model.
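The query exchange in claim 10 can be sketched as below. The claim leaves the transport unspecified, so `transport` stands in for whatever RPC/HTTP channel connects the apparatus to the device running the model; all names here are illustrative.

```python
import json

def fetch_vector_values(feature_attributes, transport):
    # Send a query request carrying the replacement samples' feature
    # attributes to the device holding the attribute -> vector-value
    # correspondence, and return the vector values it replies with.
    request = json.dumps({"feature_attributes": sorted(feature_attributes)})
    response = transport(request)
    return json.loads(response)["vector_values"]
```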
11. The apparatus of claim 10, wherein the electronic device is different from a device running the neural network model.
12. The apparatus according to any one of claims 9-11, wherein the calculation module is specifically configured to: for any one of the replacement samples, replace the feature attribute of the replacement sample with the vector value corresponding to that feature attribute, to obtain the input value of the replacement sample in the neural network model.
13. The apparatus according to any one of claims 9-11, wherein the calculation module is specifically configured to: calculate an area under the curve (AUC) value for each replacement sample according to the prediction result of each replacement sample and the actual value of each sample; and sort the AUC values of the replacement samples in reverse order to obtain the weight of each feature.
14. The apparatus of any one of claims 9-11, further comprising:
an indication module, configured to periodically determine the weight of each feature and to instruct a reduction of the update frequency of the neural network model when the fluctuation of the weight of each feature is less than a fluctuation threshold.
15. The apparatus of any one of claims 9-11, further comprising:
a deletion module, configured to delete, from an update sample, features whose importance is below a threshold when the neural network model is updated.
16. The apparatus of any one of claims 9-11, further comprising:
a display module, configured to display the weight of each feature on a user interface.
17. An electronic device, comprising:
at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-8.
18. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-8.
CN202010244963.1A 2020-03-31 2020-03-31 Feature evaluation method and device Active CN111461306B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010244963.1A CN111461306B (en) 2020-03-31 2020-03-31 Feature evaluation method and device

Publications (2)

Publication Number Publication Date
CN111461306A CN111461306A (en) 2020-07-28
CN111461306B true CN111461306B (en) 2023-04-18

Family

ID=71682411

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010244963.1A Active CN111461306B (en) 2020-03-31 2020-03-31 Feature evaluation method and device

Country Status (1)

Country Link
CN (1) CN111461306B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI770629B (en) * 2020-10-08 2022-07-11 大陸商星宸科技股份有限公司 Neural network optimization method, device and processor
CN112528159B (en) * 2020-12-24 2024-03-26 北京百度网讯科技有限公司 Feature quality assessment method and device, electronic equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104866472A (en) * 2015-06-15 2015-08-26 百度在线网络技术(北京)有限公司 Generation method and device of word segmentation training set
CN108665064A (en) * 2017-03-31 2018-10-16 阿里巴巴集团控股有限公司 Neural network model training, object recommendation method and device
CN110162799A (en) * 2018-11-28 2019-08-23 腾讯科技(深圳)有限公司 Model training method, machine translation method and relevant apparatus and equipment
CN110222727A (en) * 2019-05-15 2019-09-10 广东电网有限责任公司电力调度控制中心 A kind of short-term load forecasting method and device based on deep neural network

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10846555B2 (en) * 2017-06-26 2020-11-24 Verizon Patent And Licensing Inc. Object recognition based on hierarchical domain-based models

Also Published As

Publication number Publication date
CN111461306A (en) 2020-07-28

Similar Documents

Publication Publication Date Title
JP7166322B2 (en) Methods, apparatus, electronics, storage media and computer programs for training models
CN111143686B (en) Resource recommendation method and device
CN111311321B (en) User consumption behavior prediction model training method, device, equipment and storage medium
CN112559870B (en) Multi-model fusion method, device, electronic equipment and storage medium
CN111582479A (en) Distillation method and device of neural network model
CN111695695A (en) Quantitative analysis method and device for user decision behaviors
CN111461306B (en) Feature evaluation method and device
CN111563593A (en) Training method and device of neural network model
CN111461343A (en) Model parameter updating method and related equipment thereof
CN111460384A (en) Policy evaluation method, device and equipment
CN111738418A (en) Training method and device for hyper network
CN112288483A (en) Method and device for training model and method and device for generating information
CN112084150A (en) Model training method, data retrieval method, device, equipment and storage medium
KR20230006601A (en) Alignment methods, training methods for alignment models, devices, electronic devices and media
CN111563198A (en) Material recall method, device, equipment and storage medium
CN112580723A (en) Multi-model fusion method and device, electronic equipment and storage medium
CN112819497B (en) Conversion rate prediction method, conversion rate prediction device, conversion rate prediction apparatus, and storage medium
CN111738325A (en) Image recognition method, device, equipment and storage medium
CN111767990A (en) Neural network processing method and device
CN111241225A (en) Resident area change judgment method, resident area change judgment device, resident area change judgment equipment and storage medium
CN113656689B (en) Model generation method and network information pushing method
CN111625710B (en) Processing method and device of recommended content, electronic equipment and readable storage medium
CN114037060A (en) Pre-training model generation method and device, electronic equipment and storage medium
CN113780548A (en) Method, apparatus, device and storage medium for training a model
CN112598136A (en) Data calibration method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant