CN116975580A - Information evaluation method, device and equipment - Google Patents

Information evaluation method, device and equipment

Info

Publication number
CN116975580A
Authority
CN
China
Prior art keywords
information
field
evaluation
probability
evaluated
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310745106.3A
Other languages
Chinese (zh)
Inventor
丁霞
王志文
王海丰
杨晶生
边超
霍巧弟
徐冰
卢莎
隗斯诺
邹刘征
徐潇
孟潇
赵臻宇
张飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zitiao Network Technology Co Ltd
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co Ltd filed Critical Beijing Zitiao Network Technology Co Ltd
Priority to CN202310745106.3A
Publication of CN116975580A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/217 Validation; Performance evaluation; Active pattern learning techniques
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415 Classification techniques relating to the classification model based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The application discloses an information evaluation method, an information evaluation device and information evaluation equipment, which acquire information to be evaluated and its field values under information feature fields. The field values of the information to be evaluated under the information feature fields are input into a pre-trained information evaluation model, and the information evaluation probability that the information to be evaluated is target type information is acquired. Furthermore, at least one key feature field among the information feature fields is acquired, where the degree to which the field value of the key feature field influences the information evaluation probability is greater than a preset degree threshold. The obtained information evaluation probability is then adjusted based on the field value of the information to be evaluated under the key feature field, and the information evaluation probability that the information to be evaluated is the target type information is acquired again. In this way, the field values of the highly influential key feature fields serve as the adjustment basis, the information evaluation probability output by the information evaluation model is readjusted, and the probability becomes more accurate.

Description

Information evaluation method, device and equipment
Technical Field
The present application relates to the field of information processing technologies, and in particular, to an information evaluation method, apparatus, and device.
Background
With the rapid development of information technology, information is collected on demand in some scenarios. For example, in a marketing scenario, marketers may collect cue information and mine consumer demand based on the collected cue information.
Typically, the collected information needs to be evaluated so that it can be screened according to the evaluation results. For example, in a marketing scenario, quality evaluation may be performed on the collected cue information.
At present, an evaluation model can be obtained through model training: information to be evaluated is input into the evaluation model, and the evaluation model outputs an evaluation result for the information. However, the evaluation accuracy of such an evaluation model is not high.
Disclosure of Invention
In view of the above, the present application provides an information evaluation method, apparatus and device, which can improve the accuracy of evaluating information to be evaluated.
In order to solve the problems, the technical scheme provided by the application is as follows:
in a first aspect, the present application provides an information evaluation method, the method comprising:
acquiring information to be evaluated, and acquiring a field value of the information to be evaluated under an information characteristic field;
inputting the field value of the information to be evaluated under the information feature field into a pre-trained information evaluation model, and acquiring the information evaluation probability, output by the information evaluation model, that the information to be evaluated is target type information;
acquiring at least one key feature field among the information feature fields, wherein the degree to which the field value of the key feature field influences the information evaluation probability is greater than a preset degree threshold;
and adjusting the information evaluation probability according to the field value of the information to be evaluated under the key feature field, and re-acquiring the information evaluation probability that the information to be evaluated is the target type information.
In a second aspect, the present application provides an information evaluation apparatus, the apparatus comprising:
the first acquisition unit is used for acquiring information to be evaluated and a field value of the information to be evaluated under an information characteristic field;
the input unit is used for inputting the field value of the information to be evaluated under the information feature field into a pre-trained information evaluation model, and acquiring the information evaluation probability, output by the information evaluation model, that the information to be evaluated is target type information;
the second acquisition unit is used for acquiring at least one key feature field among the information feature fields, wherein the degree to which the field value of the key feature field influences the information evaluation probability is greater than a preset degree threshold;
and the adjustment unit is used for adjusting the information evaluation probability according to the field value of the information to be evaluated under the key feature field, and re-acquiring the information evaluation probability that the information to be evaluated is the target type information.
In a third aspect, the present application provides an electronic device comprising:
one or more processors;
a storage device having one or more programs stored thereon,
when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement any of the information evaluation methods.
In a fourth aspect, the present application provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements any of the information evaluation methods.
Accordingly, the present application has the following beneficial effects:
The application provides an information evaluation method, an information evaluation device and information evaluation equipment. First, the information to be evaluated and its field values under the information feature fields are acquired. The field values of the information to be evaluated under the information feature fields are then input into a pre-trained information evaluation model, and the information evaluation probability, output by the information evaluation model, that the information to be evaluated is the target type information is acquired. The information evaluation probability represents the evaluation result of the information to be evaluated. Further, at least one key feature field among the information feature fields is acquired, where the degree to which the field value of a key feature field influences the information evaluation probability is greater than a preset degree threshold. The obtained information evaluation probability is then adjusted based on the field value of the information to be evaluated under the key feature field, and the information evaluation probability that the information to be evaluated is the target type information is acquired again. In this way, the field values of the highly influential key feature fields serve as the adjustment basis, the information evaluation probability output by the information evaluation model is readjusted, and the probability becomes more accurate.
Drawings
Fig. 1 is a schematic diagram of a frame of an exemplary application scenario provided in an embodiment of the present application;
FIG. 2 is a flowchart of an information evaluation method according to an embodiment of the present application;
FIG. 3 is a flow chart of adjusting weights of key feature fields according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of an information evaluation device according to an embodiment of the present application;
fig. 5 is a schematic diagram of a basic structure of an electronic device according to an embodiment of the present application.
Detailed Description
In order that the above-recited objects, features and advantages of the present application will become more readily apparent, a more particular description of embodiments of the application will be rendered by reference to the accompanying drawings and specific embodiments.
In order to facilitate understanding and explanation of the technical solutions provided by the embodiments of the present application, the following description will first explain the background art of the present application.
It will be appreciated that, before the technical solutions of the various embodiments of the disclosure are used, the user may be informed in an appropriate manner of the type of personal information involved, the scope of use, the usage scenario, and so on, and the user's authorization may be obtained. That is, when information in the present application relates to a user's personal information, the information is acquired only after the user's authorization has been obtained.
For example, in response to receiving an active request from a user, prompt information is sent to the user to explicitly indicate that the operation the user requests to perform will require the acquisition and use of the user's personal information. The user can thus decide, according to the prompt information, whether to provide personal information to the software or hardware, such as the electronic device, application program, server or storage medium, that performs the operations of the technical solution.
As an optional but non-limiting implementation, in response to receiving an active request from the user, the prompt information may be sent to the user by way of, for example, a pop-up window, in which the prompt information may be presented as text. In addition, the pop-up window may carry a selection control allowing the user to choose "agree" or "disagree" to providing personal information to the electronic device.
It will be appreciated that the above-described notification and user authorization process is merely illustrative, and not limiting of the implementations of the present disclosure, and that other ways of satisfying relevant legal regulations may be applied to the implementations of the present disclosure.
With the rapid development of information technology, information is collected on demand in some scenarios. For example, in a marketing scenario, marketers may collect cue information and mine consumer demand based on the collected cue information. Typically, the collected information needs to be evaluated so that it can be screened according to the evaluation results. For example, in a marketing scenario, quality evaluation may be performed on the just-collected cue information.
If newly collected information temporarily cannot meet the use requirement, it can be stored first and thereby converted into stock information. For example, in a marketing scenario, when the just-collected cue information indicates that the customer has no intention to consume, the cue information may be stored as stock cue information. However, stock information may change over time, and information that previously did not meet the use requirement may come to meet it. Therefore, stock information also needs to be evaluated periodically.
Currently, there are three information evaluation modes.
First, evaluation rules can be set manually, and information is evaluated through the evaluation rules. Specifically, the information is matched against the evaluation rules; the more evaluation rules the information hits, the better its evaluation result is determined to be.
Second, an evaluation index is set, and information is evaluated through the evaluation index. Specifically, the information is ranked according to the result of the information under the evaluation index, and the higher the ranking is, the better the evaluation effect of the information is.
However, the setting and execution of the above two evaluation methods are subjective, which may cause inaccurate evaluation results.
Thirdly, an evaluation model can be obtained through model training: information to be evaluated is input into the evaluation model, and the evaluation model outputs an evaluation result for the information. However, according to the applicant's research, when the evaluation model is trained on historical information, the historical information may contain key information that strongly influences its evaluation result, yet the historical information containing such key information accounts for only a small and sparse portion of all the historical information. As a result, the evaluation model cannot learn sufficiently, the key information cannot be fully learned, the training effect is poor, and the evaluation accuracy of the trained evaluation model is low.
Based on the above, the embodiments of the application provide an information evaluation method, an information evaluation device and information evaluation equipment. First, the information to be evaluated and its field values under the information feature fields are acquired. The field values of the information to be evaluated under the information feature fields are then input into a pre-trained information evaluation model, and the information evaluation probability, output by the information evaluation model, that the information to be evaluated is the target type information is acquired. The information evaluation probability represents the evaluation result of the information to be evaluated. Further, at least one key feature field among the information feature fields is acquired, where the degree to which the field value of a key feature field influences the information evaluation probability is greater than a preset degree threshold. The field values under the key feature fields are the key information in the information to be evaluated. The obtained information evaluation probability is therefore adjusted based on the field value of the information to be evaluated under the key feature field, and the information evaluation probability that the information to be evaluated is the target type information is acquired again. In this way, the field values of the highly influential key feature fields serve as the adjustment basis, the information evaluation probability output by the information evaluation model is readjusted, and the probability becomes more accurate.
It will be appreciated that the drawbacks of the above solutions were identified by the applicant through practice and careful study. Accordingly, both the discovery of the above problems and the solutions proposed below by the embodiments of the present application should be regarded as contributions made by the applicant in the course of the present application.
In order to facilitate understanding of the information evaluation method provided by the embodiment of the present application, the following description is made in connection with the scenario example shown in fig. 1. Referring to fig. 1, the diagram is a schematic frame diagram of an exemplary application scenario provided in an embodiment of the present application. The information evaluation method may be implemented by a terminal device or a server, and is not limited herein.
In practical applications, an information evaluation model is trained in advance, and the field values of the information to be evaluated under the information feature fields are acquired.
At least one key feature field among the information feature fields is obtained. The field value of a key feature field is key information, and in general, the key information in the information to be evaluated is sparse. However, the degree to which the field value of a key feature field influences the information evaluation probability is greater than the preset degree threshold; that is, the field value of a key feature field strongly influences the information evaluation probability. The feature fields other than the key feature fields among the information feature fields are common feature fields. As shown in fig. 1, the common feature fields include field X1, field X2, …, field Xn, and the key feature fields include field Y1, field Y2, …, field Yn, where n is a positive integer. The field values of the information to be evaluated under the information feature fields include its field values under the common feature fields and its field values under the key feature fields.
The field values of the information to be evaluated under the information feature fields are input into the information evaluation model, which outputs the information evaluation probability that the information to be evaluated is the target type information; this probability can be denoted prob1. The "information evaluation probability that the information to be evaluated is the target type information" represents the evaluation result of the information to be evaluated.
Since the field values of the key feature fields strongly influence the information evaluation probability that the information to be evaluated is the target type information, the obtained information evaluation probability can be adjusted according to the field values of the information to be evaluated under the key feature fields (the adjustment link in fig. 1), and the information evaluation probability that the information to be evaluated is the target type information can be obtained again; it can be denoted prob2. Adjusting the information evaluation probability includes increasing it or decreasing it.
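To make the flow in fig. 1 concrete, the following is a minimal sketch in Python. The function and variable names (evaluate_information, adjust_probability, and so on) are illustrative assumptions rather than the embodiment's concrete implementation, and the model is assumed to be any callable that maps field values to a probability.

    # Minimal sketch of the fig. 1 flow (all names are illustrative assumptions).
    def adjust_probability(prob1, field_values, key_fields):
        # Placeholder for the adjustment link of fig. 1; a concrete weighted-sum
        # version is sketched under S204 below.
        return prob1

    def evaluate_information(info, model, common_fields, key_fields):
        # Field values of the information to be evaluated under the information
        # feature fields (common fields X1..Xn plus key fields Y1..Yn).
        field_values = {field: info.get(field) for field in common_fields + key_fields}
        # prob1: information evaluation probability output by the pre-trained model.
        prob1 = model(field_values)
        # prob2: re-acquired probability after the adjustment link.
        prob2 = adjust_probability(prob1, field_values, key_fields)
        return prob2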
Those skilled in the art will appreciate that the frame diagram shown in fig. 1 is but one example in which embodiments of the present application may be implemented. The scope of applicability of the embodiments of the application is not limited in any way by the framework.
In order to facilitate understanding of the present application, an information evaluation method provided by an embodiment of the present application is described below with reference to the accompanying drawings.
Referring to fig. 2, a flowchart of an information evaluation method according to an embodiment of the present application is shown. As shown in fig. 2, the method may include S201-S204:
s201: and acquiring the field value of the information to be evaluated under the information characteristic field.
The information to be evaluated is the information on which evaluation is to be performed. In the embodiment of the present application, the information may be of different types in different scenarios, which is not limited here. For example, in a marketing scenario, the information is specifically cue information. Specifically, the marketing department may collect a large amount of cue information through the registration page, consultation page, and so on of the product official website, or obtain cue information from other information platforms by purchasing traffic. The cue information may refer to client information. As another example, in an information security scenario, the information may be network information.
The information feature field corresponds to a field value, which represents the information feature. After obtaining a piece of information, according to the information characteristic field, a field value corresponding to the information characteristic field can be extracted or deduced from the information. For example, the information feature field is "xx name". Taking the obtained information 1 as an example, the field value corresponding to the information feature field extracted from the information 1 is "name 1", and the "name 1" is the information feature under the "xx name".
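As a small illustration of the relationship between information, information feature fields and field values, consider the following sketch; the field names and values are hypothetical examples only.

    # A piece of collected information, represented as a mapping from information
    # feature fields to field values (all names are hypothetical).
    information_1 = {
        "xx name": "name 1",
        "xxx industry": "industry 1",
    }

    information_feature_fields = [
        "xx name",
        "xxx industry",
        "whether the client triggered a payment request",
    ]

    # Extract the field value under each information feature field; a field that the
    # information does not contain is recorded as None here.
    field_values = {field: information_1.get(field) for field in information_feature_fields}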
In the embodiment of the application, the number of the information characteristic fields and the acquisition mode are not limited. The information characteristic field may be a fixed field or updated according to the actual situation. For example, in a marketing scenario, a number of filling items are set at registration that need to be filled in. The content corresponding to the filling in items is the collected information, and the filling in items can be used as information characteristic fields. In addition, some other relevant information feature fields may be extended based on the filling item. For example, when the filling item is "xx name", not only "xx name" may be used as one information feature field, but also "xxx industry" associated with "xx name" may be used as one information feature field. It will be appreciated that the information feature field may be adaptively updated if some fill-in items are added or subtracted at registration.
In addition, information may be classified into newly collected information or stock information, and the information to be evaluated in this step may be either. If newly collected information temporarily cannot meet the use requirement, it can be stored first and thereby converted into stock information.
For example, in a marketing scenario, part of the cue information may indicate that the customer has a consumption intention at the time it is created (or acquired), while another part indicates that the customer has no consumption intention; the latter can be stored as stock cue information. Whether the customer currently has a consumption intention can be determined through contact between the marketing personnel and the customer, or through the customer's attributes and behavior. For example, when the customer replies that he wants to try the product first, needs time to compare offers, temporarily has no budget, and so on, this indicates that the customer has no consumption intention.
However, stock cue information may change over time, and stock cue information that previously indicated that the customer had no consumption intention may come to indicate that the customer has a consumption intention. For example, in a marketing scenario, when a customer triggers a payment request after a certain period of time, the previously stored stock information is updated, and the updated stock information records that the customer triggered a payment request, which indicates that the customer has a consumption intention. It should be noted that stock cue information can be stored with a number, and the same piece of stock cue information keeps the same number before and after updating.
S202: Inputting the field values of the information to be evaluated under the information feature fields into a pre-trained information evaluation model, and acquiring the information evaluation probability, output by the information evaluation model, that the information to be evaluated is the target type information.
The information evaluation model is trained in advance. As an alternative example, the information evaluation model is pre-trained from a training information set. The training information set comprises at least one piece of history information, a field value of each piece of history information under an information characteristic field and an information label of each piece of history information. The training information set may be considered to include at least one piece of training data, each piece of training data including a piece of history information, a field value of the history information under an information feature field, and an information tag of the history information.
It will be appreciated that the historical information in the training information set may be a portion of the historical information selected from all of the collected historical information, with the remainder of the historical information and corresponding information tags being available for the testing/verification process of the information assessment model. The historical information in the training information set can be selected randomly from all collected historical information, or can be selected according to a preset rule, and the preset rule is not limited.
When the information to be evaluated is newly collected information, the historical information in the training information set is specifically the newly collected information in the history (namely, non-historical stock information) when the information evaluation model is trained; when the information to be evaluated is stock information to be evaluated, the history information in the training information set is specifically the history stock information stored in the history when the information evaluation model is trained. That is, when the information evaluation model is trained and applied, the information input to the model is the same type of information.
In one possible implementation manner, the embodiment of the application provides a training process of an information evaluation model, which comprises the following steps:
a1: and inputting field values of the history information under the information characteristic field into an information evaluation model, and acquiring the prediction information evaluation probability of the history information as the target type information.
As an alternative example, the information evaluation model may be a machine learning model. In the embodiment of the present application, the model structure of the machine learning model is not limited and may be, for example, a gradient boosting tree model, a decision tree model, or a neural network model.
In this step, the history information is history information in the training information set. The field value of the history information under the information characteristic field is used as the input of an information evaluation model, and the information evaluation model outputs the prediction information evaluation probability that the history information is the target type information.
The prediction information evaluation probability represents the predicted evaluation result of "the history information is the target type information" and is usually a fraction between 0 and 1. The embodiment of the application does not limit the target type information, which can be determined according to the scenario. For example, in a marketing scenario, the history information is specifically cue information, and quality evaluation can be performed on it. The target type information may then be "high quality cue information": the higher the prediction information evaluation probability that the history information is "high quality cue information", the more likely the history information is "high quality cue information". The target type information may also be "non-high quality cue information": the higher the probability that the history information is "non-high quality cue information", the more likely it is "non-high quality cue information" and the less likely it is "high quality cue information". Information may be regarded as high quality when it satisfies a quality evaluation condition (for example, that the information can promote sales of a product); the embodiment does not limit how high quality is determined. For instance, a higher value of the information on a certain evaluation index can also indicate higher quality. It can be appreciated that in other scenarios risk assessment may be performed on the information to be evaluated, in which case the target type information may be "risk information" or "risk-free information", which will not be described here.
A2: and calculating a loss value according to the prediction information evaluation probability of the historical information serving as the target type information and the information label of the historical information.
The information label of the history information represents the real evaluation result of the history information being the target type information, and the information label of the history information can be obtained by manual marking. In practical application, when the history information is clue information in the marketing scene, the marketing personnel can communicate with the clients and mark according to the communication condition.
The information label of the history information can be represented by 0 or 1, indicating that the information evaluation model is trained as a standard binary classification model. If the information label is 1, the history information is high quality cue information; if the information label is 0, the history information is non-high quality cue information.
The loss value represents the gap between the "prediction information evaluation probability that the history information is the target type information" and the "information label of the history information", i.e., the gap between the predicted evaluation result and the real evaluation result. Specifically, a loss function may be constructed over the prediction information evaluation probability and the information label. After both are obtained, they are input into the loss function to obtain the loss value. The loss function may be a cross-entropy loss function, but it is understood that the application is not limited thereto and other loss functions may be used.
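For reference, when the loss function is chosen as the binary cross-entropy mentioned above, the loss value for one piece of training data takes the standard textbook form, where p is the prediction information evaluation probability and y in {0, 1} is the information label; this is given only as an illustration:

    L(p, y) = -[ y \log p + (1 - y) \log(1 - p) ]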
A3: and training the information evaluation model by using the loss value, and obtaining the trained information evaluation model.
Training the information evaluation model with the loss value means adjusting the model parameters of the information evaluation model. After the model parameters are adjusted, whether the training stop condition is reached is judged. If so, training of the information evaluation model is finished and the trained information evaluation model is obtained. If not, training is not yet finished; A1-A3 are executed again to obtain a recalculated loss value, and the model parameters are adjusted again based on the recalculated loss value until the training stop condition is reached.
As an alternative example, the training stop condition is that the maximum number of training times is reached or that the loss value reaches a preset range. The embodiment of the application does not limit the maximum training times and the preset range, and can be set according to actual conditions. When the training stop condition is reached, the loss value is indicated to be approaching 0, and the difference between the predicted information evaluation probability and the information label is small enough, so that the training of the information evaluation model is finished.
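The A1-A3 loop can be illustrated with the following minimal sketch, which uses a plain logistic-regression-style model and NumPy purely for demonstration; the embodiment does not fix the model structure (a gradient boosting tree, decision tree or neural network are equally possible), so every concrete choice below is an assumption.

    import numpy as np

    def train_information_evaluation_model(X, y, lr=0.1, max_steps=1000, loss_target=1e-3):
        # X: field values of the history information (num_samples x num_fields), numeric.
        # y: information labels (1 = target type information, 0 = not).
        w = np.zeros(X.shape[1])  # model parameters of the information evaluation model
        b = 0.0
        for step in range(max_steps):                      # repeat A1-A3 until the stop condition
            # A1: prediction information evaluation probability of the history information
            p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
            # A2: cross-entropy loss between the predicted probability and the information label
            eps = 1e-12
            loss = -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
            if loss <= loss_target:                        # training stop condition reached
                break
            # A3: adjust the model parameters using the loss (gradient descent step)
            grad_w = X.T @ (p - y) / len(y)
            grad_b = np.mean(p - y)
            w -= lr * grad_w
            b -= lr * grad_b
        return w, b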
In one possible implementation, when the information to be evaluated is stock information to be evaluated and the history information in the training information set is specifically history stock information, it is ensured that, for each piece of history stock information, the field value recording time and the information label recording time fall within the same time range. That is, the field values and the information label are aligned in the time dimension.
For example, if a piece of history stock information is marked on day T, the information label recorded at that time is L(T), and its field values under the information feature fields on day T also need to be recorded as V(T). That is, the recording time of both L(T) and V(T) is day T. Similarly, on days T+1, T+2 and so on, other history stock information is marked, and the field values and information labels of the corresponding history stock information are recorded as V(T+1), L(T+1), V(T+2), L(T+2), and so on. If a piece of history stock information is marked on day T, its information label of day T is spliced with its field values of day T to form one piece of training data; this is what is meant by field value and information label alignment. In this way, the training information set can be obtained.
The embodiment of the application does not limit what counts as the same time range. For example, the "same time range" may be the same day, i.e., field values are recorded by the dimension of "day" and the label is marked on the same day: the field values are recorded on day T and the label is marked on day T. The "same time range" may also be the same hour or the same minute, i.e., field values are updated by the hour or minute dimension, and field values and labels are recorded by the hour or minute dimension. This is only an example and is not limiting.
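A minimal sketch of this "field value and information label alignment", assuming day-level recording and hypothetical column names (pandas is used only for illustration):

    import pandas as pd

    # Field values V(T), V(T+5), ... recorded per information number and recording day.
    field_value_records = pd.DataFrame([
        {"info_no": 1, "record_day": "T",   "triggered_payment": "no"},
        {"info_no": 1, "record_day": "T+5", "triggered_payment": "yes"},
    ])

    # Information labels L(T), L(T+5), ... marked on particular days.
    label_records = pd.DataFrame([
        {"info_no": 1, "record_day": "T",   "label": 0},
        {"info_no": 1, "record_day": "T+5", "label": 1},
    ])

    # Keep only pairs whose field value recording time and label recording time fall
    # within the same time range (here: the same day), i.e. splice V(T) with L(T).
    training_data = field_value_records.merge(label_records, on=["info_no", "record_day"])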
Referring to table 1, table 1 is a training information set in a marketing scenario, and history information in the training information set is history stock information. As shown in table 1, the field value recording time of the information feature field is the same as the information label recording time, which represents "field value and information label alignment". In addition, in the marketing scenario, the information tag of the history stock information may represent whether the history stock information is high quality cue information, and the information tag may be 1 (i.e., "yes") or 0 (i.e., "no"). In addition, as shown in table 1, the historical stock information in the training information set may also be numbered, such as information numbers "1, 2, 3, … …, 9999".
Table 1 training information set
It will be appreciated that stock information may change over time. For example, for the information feature field "whether the client triggered a payment request", the field value of a piece of stock information is initially "no" and may later be updated to "yes", indicating that the stock information has changed. On this basis, if the current time is day T and the field value recording time differs from the information label recording time (for example, the field values are the latest values of day T while the information label was marked on day T-5 based on the field values of day T-5), the label of day T-5 cannot accurately represent the real evaluation result of the field values of day T. Training the information evaluation model on the field values of day T and the label of day T-5 then causes a training-inference inconsistency, so the information evaluation accuracy of the trained model is low.
In the embodiment of the application, the field value recording time and the information label recording time of the history stock information fall within the same time range, which ensures that the information label accurately reflects the real evaluation result of the history stock information. The problem of training-inference inconsistency therefore does not arise, and the information evaluation accuracy of the trained information evaluation model can be improved.
Based on the above, after the pre-trained information evaluation model is acquired, the field values of the information to be evaluated under the information feature fields are used as its input, and the model outputs the information evaluation probability that the information to be evaluated is the target type information.
S203: acquiring at least one key characteristic field in the information characteristic fields; the influence degree of the field value of the key characteristic field on the information evaluation probability is larger than a preset degree threshold.
In an embodiment of the application, the information characteristic field comprises at least one characteristic field. The information feature fields include key feature fields and normal feature fields, classified by category. That is, part of the characteristic fields in the information characteristic fields are key characteristic fields, and the rest of the characteristic fields are common characteristic fields.
Typically, the degree to which the field value of a key feature field influences the information evaluation probability is greater than a preset degree threshold. That is, taking the information to be evaluated as an example, the field value of the information to be evaluated under a key feature field strongly influences the information evaluation probability that the information to be evaluated is the target type information. The specific size of the preset degree threshold is not limited here. In addition, the field value of a key feature field can be regarded as key information. In general, key information accounts for a small, sparse portion of the information. For example, history information containing key information makes up only a small part of all history information in the training information set. Likewise, the key information in the information to be evaluated (i.e., its field values under the key feature fields) accounts for only a small part of the whole information to be evaluated.
Taking the marketing scenario as an example, the key feature field may be "whether the product official website payment page was viewed in the last 7 days", and its field value may be "yes" (the payment page was viewed in the last 7 days) or "no" (the payment page was not viewed in the last 7 days). The target type information may be "high quality cue information". When the field value of the key feature field is "yes", the user is more likely to exhibit consumption behavior, so the information evaluation probability that information containing this field value is high quality cue information is large. When the field value is "no", the user is less likely to exhibit consumption behavior, so the information evaluation probability that information containing this field value is high quality cue information is small. That is, the field value of the key feature field strongly influences the information evaluation probability.
It will be appreciated that embodiments of the present application are not limited to "degree of influence" implementations, and may be implemented using any method that can calculate or obtain the degree of influence, either existing or future.
In practical application, at least one key feature field in the information feature fields can be acquired first, and then other feature fields in the information feature fields are common feature fields. It will be appreciated that the field values of the key feature fields affect the information evaluation probability to a greater extent than the field values of the normal feature fields affect the information evaluation probability.
In one possible implementation manner, the embodiment of the application provides a specific implementation manner for acquiring at least one key feature field in information feature fields based on a training information set, which comprises the following steps:
b1: calculating the quantity ratio of the information related to the target feature field in the first information set, and acquiring the probability that the information related to the target feature field is the target type information; the target feature field is each of the information feature fields.
As an alternative example, the first information set may be the training information set used for training the information evaluation model in the above-described embodiment. The first set of information includes at least one piece of information. It can be understood that, when the information to be evaluated is the stock information to be evaluated, the historical information in the training information set is the historical stock information, and the information in the first information set in B1-B2 is also the stock information.
The target feature field is each feature field among the information feature fields; for convenience of description, a single target feature field is taken as an example below. The information related to the target feature field in the first information set is obtained, where "related to the target feature field" means that the information contains a field value under the target feature field.
The quantity ratio is the ratio of the amount of information related to the target feature field to the total amount of information in the first information set. The "probability that information related to the target feature field is the target type information" is a fraction between 0 and 1, i.e., an information evaluation probability. In practical applications, this probability value may be manually annotated; for example, the probability value is acquired in response to its input on the terminal device/server. It should be appreciated that the manually provided probability value may be an empirical value or may be obtained through other information evaluation models (not provided by the embodiments of the present application), which is not limited here.
B2: When the quantity ratio is lower than a preset ratio threshold and/or the probability satisfies a preset probability range, determining the target feature field as a key feature field.
The preset ratio threshold may be set in advance, generally to values such as 1% or 2%. When the quantity ratio is lower than the preset ratio threshold, the amount of information related to the target feature field is small, which matches the characteristic that the field values of key feature fields (i.e., key information) are sparse.
The preset probability range is also set in advance and is determined according to the target type information; it is related to a preset probability threshold, which can generally be set to values such as 20% or 30%. For example, in a marketing scenario, when the target type information is high quality cue information, the preset probability range is greater than the preset probability threshold; when the target type information is non-high quality cue information, the preset probability range is less than or equal to the preset probability threshold.
When the probability satisfies the preset probability range, the target feature field has a strong influence on whether the information is the target type information, which matches the characteristic that the field value of a key feature field strongly influences the information evaluation probability.
To make the determined key feature fields more accurate, the target feature field may be determined to be a key feature field only when the quantity ratio is lower than the preset ratio threshold and the probability satisfies the preset probability range.
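The B1-B2 selection can be sketched as follows; the thresholds, the representation of "related to the target feature field" (here: the information contains a non-empty field value under that field), and the source of the manually annotated probabilities are all assumptions for illustration.

    def select_key_feature_fields(first_info_set, candidate_fields, field_probability,
                                  ratio_threshold=0.01, probability_threshold=0.2,
                                  target_is_high_quality=True):
        # first_info_set: list of dicts mapping information feature fields to field values.
        # field_probability: manually annotated probability that information related to
        # each field is the target type information (see B1).
        total = len(first_info_set)
        key_fields = []
        for field in candidate_fields:
            # B1: quantity ratio of information related to the target feature field.
            related = sum(1 for info in first_info_set if info.get(field) is not None)
            quantity_ratio = related / total
            # B2: sparse (low quantity ratio) and probability within the preset range
            # determined by the target type information.
            prob = field_probability[field]
            if target_is_high_quality:
                prob_in_range = prob > probability_threshold
            else:
                prob_in_range = prob <= probability_threshold
            if quantity_ratio < ratio_threshold and prob_in_range:
                key_fields.append(field)
        return key_fields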
Referring to table 2, table 2 shows the common feature fields and key feature fields obtained by dividing the information feature fields in a marketing scenario, and is described only as an example. As shown in table 2, the common feature fields include field X1, field X2, …, field Xn, and the key feature fields include field Y1, field Y2, …, field Yn.
It should be understood that the execution order of S203 and S202 is not limited when the information evaluation method provided by the embodiment of the present application is executed. As an alternative example, S203 (acquiring the key feature fields and the common feature fields among the information feature fields) may be executed first, and then S202 may be executed.
In practical application, the key feature field and the common feature field in the information feature field can be acquired first, and then the information evaluation model can be trained. The field values of the information characteristic fields are input into the information evaluation model, namely, the field values of the key characteristic fields and the field values of the common characteristic fields are input into the information evaluation model. In the marketing scenario, the "the influence degree of the field value of the key feature field on the information evaluation probability is large" indicates that the field value of the key feature field can directly reflect the quality of the clue information. Although the field value of the common characteristic field can not directly reflect the quality of the clue information, the clue information can be enriched and the coverage rate is higher, and when the information evaluation model is trained, the field value of the common characteristic field can assist the information evaluation model to learn more comprehensively and better. Thus, in training the information evaluation model, the information evaluation model is trained based on both the field values of the key feature fields and the field values of the normal feature fields.
Table 2 information characteristics field
S204: and adjusting the information evaluation probability according to the field value of the information to be evaluated under the key characteristic field, and re-acquiring the information evaluation probability of the information to be evaluated as the target type information.
The influence degree of the field value of the information to be evaluated under the key characteristic field on the information evaluation probability that the information to be evaluated is the target type information is larger. Therefore, the information evaluation probability is readjusted by taking the field value of the key characteristic field with larger influence degree as an adjusting basis, and the information evaluation probability is adjusted to be more accurate.
As an alternative example, the information evaluation probability is adjusted according to the field value of the information to be evaluated under the key feature field and the target type information. To facilitate an understanding of this alternative example, details are described below.
First, consider the field value of the information to be evaluated under the key feature field. Specifically, when the field value of the information to be evaluated under the key feature field is an expected field value, the information evaluation probability is adjusted; if it is not the expected field value, the information evaluation probability is left unchanged. An expected field value is one that promotes the information to be evaluated being the target type information; that is, if the field value of the information to be evaluated under the key feature field is the expected field value, the information to be evaluated is more likely to be the target type information. It will be appreciated that different key feature fields correspond to different expected field values, determined by the key feature fields themselves.
Next, consider the target type information. Specifically, if the information evaluation probability is adjusted, the adjustment direction is either to increase or to decrease the information evaluation probability. The adjustment direction may be determined by the target type information; in practical applications, a correspondence between adjustment directions and target type information may be predetermined. For example, in a marketing scenario, when the target type information is high quality cue information, the adjustment direction is to increase the information evaluation probability; when the target type information is non-high quality cue information, the adjustment direction is to decrease the information evaluation probability. It will be appreciated that the embodiments of the present application do not limit the magnitude of the increase or decrease, which may be determined according to the actual situation.
For example, the key feature field is "whether the product web payment page was viewed on the last 7 days," and the field value under the key feature field is "yes" (i.e., the product web payment page was viewed on the last 7 days, which is the desired field value) or "no" (i.e., the product web payment page was not viewed on the last 7 days). When the target type information is high-quality clue information, if the field value is 'no', the information evaluation probability is not adjusted. If yes, the information evaluation probability is increased. When the target type information is non-high quality clue information, if the field value is 'no', the information evaluation probability is not adjusted. If yes, the information evaluation probability is reduced.
It is understood that, in this step, if there are multiple key feature fields and the field values of several of them are all expected field values, the number of key feature fields used as the adjustment basis is not limited, nor is which key feature fields' field values are used once that number is determined. For example, the key feature fields include field Y1, field Y2, …, field Yn, and the field values corresponding to field Y1, field Y2 and field Y3 are all their corresponding expected field values. The information evaluation probability may then be adjusted according to the field value of one or more of field Y1, field Y2 and field Y3. As an alternative example, the adjustment amplitude of the information evaluation probability may, but need not, grow proportionally with the number of key feature fields used as the adjustment basis, and can be set according to the actual situation.
Based on the above description of S201-S204, the present application provides an information evaluation method. First, the information to be evaluated and its field values under the information feature fields are acquired. The field values of the information to be evaluated under the information feature fields are then input into a pre-trained information evaluation model, and the information evaluation probability, output by the information evaluation model, that the information to be evaluated is the target type information is acquired; this probability represents the evaluation result of the information to be evaluated. Further, at least one key feature field among the information feature fields is acquired, where the degree to which the field value of a key feature field influences the information evaluation probability is greater than a preset degree threshold. The obtained information evaluation probability is then adjusted based on the field value of the information to be evaluated under the key feature field, and the information evaluation probability that the information to be evaluated is the target type information is acquired again. In this way, the field values of the highly influential key feature fields serve as the adjustment basis, the information evaluation probability output by the information evaluation model is readjusted, and the probability becomes more accurate.
In a possible implementation manner, the embodiment of the present application provides a specific implementation manner for adjusting an information evaluation probability according to a field value of information to be evaluated in a key feature field, and re-acquiring the information evaluation probability that the information to be evaluated is target type information, including:
and adjusting the information evaluation probability according to the field value of the information to be evaluated under the key characteristic field and the weight of the key characteristic field, and obtaining the information evaluation probability of the information to be evaluated.
The weight of a key feature field is used to represent the adjustment magnitude of the information evaluation probability, and is typically a fraction between 0 and 1. Different weights indicate different adjustment amplitudes: the larger the weight, the larger the adjustment amplitude of the information evaluation probability, and the smaller the weight, the smaller the adjustment amplitude of the information evaluation probability.
As an optional example, a correspondence between the weight and the probability adjustment amount may be preset, and when the information evaluation probability is actually adjusted, the corresponding probability adjustment amount may be obtained according to the weight, and then the information evaluation probability is adjusted.
As another alternative example, the weights themselves may be used as the probability adjustment amounts. For example, when the field value of the key feature field is the corresponding expected field value, if the adjustment direction is to increase the information evaluation probability, the weight corresponding to the key feature field can be directly added on the basis of the information evaluation probability of the information to be evaluated; if the adjustment direction is to reduce the information evaluation probability, the weight corresponding to the key feature field can be directly subtracted on the basis of the information evaluation probability of the information to be evaluated.
Based on the above, the embodiment of the present application provides a specific implementation manner for adjusting an information evaluation probability according to a field value of information to be evaluated in a key feature field and a weight of the key feature field, to obtain the information evaluation probability of the information to be evaluated, including:
binarizing a field value of the information to be evaluated under the key characteristic field to obtain a binarization result, calculating a product of the binarization result and the weight of the key characteristic field, calculating a sum of the products, and obtaining a result value;
and re-determining the information evaluation probability of the information to be evaluated as the target type information according to the result value and the information evaluation probability.
There are two cases of "information evaluation probability of re-determining that information to be evaluated is target type information according to the result value and the information evaluation probability".
First, the sum of the result value and the information evaluation probability is redetermined as the information evaluation probability that the information to be evaluated is the target type information. That is, prob2 = prob1 + w1·Y1 + w2·Y2 + … + wn·Yn.
The values of Y1, Y2, …, Yn are the binary values obtained by binarizing the field value of each key feature field, and each takes the value 0 or 1. For example, when the field value of the key feature field is the corresponding expected field value, the field value may be denoted by "1"; when the field value of the key feature field is not the corresponding expected field value, the field value may be denoted by "0". The parameters w1, w2, …, wn are the weights of the key feature fields. prob2 is the re-acquired information evaluation probability that the information to be evaluated is the target type information.
It will be appreciated that the condition for the formula to hold is that the direction of adjustment is to increase the information evaluation probability.
And secondly, the difference between the information evaluation probability and the result value is redetermined as the information evaluation probability that the information to be evaluated is the target type information. That is, prob2 = prob1 - (w1·Y1 + w2·Y2 + … + wn·Yn).
It will be appreciated that the condition for the formula to hold is that the direction of adjustment is to reduce the probability of information evaluation. It should be understood that the adjustment direction should be determined according to the target type information, and then a corresponding calculation formula should be selected.
It is also understood that both of the above formulas express "the weight itself being used as the probability adjustment amount". Wherein Y1, Y2, …, Yn are the binarized field values of the key feature fields used as the adjustment basis of the information evaluation probability, which may be the field values of all the key feature fields or the field values of part of the key feature fields.
From the first formula above, when the information to be evaluated hits a certain key feature field (i.e., the field value of the key feature field is the desired field value), the information evaluation probability is increased (for example, when Y1 in w1·Y1 is 1, prob2 is increased by w1), and the increase amplitude of the information evaluation probability is determined by the weight of the key feature field. That is, whether a key feature field is hit is determined by the value of Yn, the increase amplitude of the information evaluation probability is determined by wn, and the more key feature fields are hit, the greater the increase amplitude of the information evaluation probability and the higher prob2 will be. Therefore, the information evaluation probability can be made more accurate through the obtained field values of the key feature fields, and the information to be evaluated that is the target type information can be identified more accurately. A higher prob2 represents a greater likelihood that the information to be evaluated is the target type information (e.g., in a marketing scenario, if the target type information is high-quality clue information, a higher prob2 represents a higher quality of the information to be evaluated).
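The two formulas above can be sketched in Python as follows; the variable names, the example weights, and the clipping of the result to a valid probability range are illustrative assumptions rather than part of the original description.

```python
# A minimal sketch of prob2 = prob1 +/- (w1*Y1 + ... + wn*Yn); names and example
# values are illustrative, and the final clipping is an added safeguard.
def readjust_probability(prob1, field_values, desired_values, weights, increase=True):
    # Binarize: Yi = 1 when the field value equals the desired field value, else 0.
    y = [1 if value == desired else 0 for value, desired in zip(field_values, desired_values)]
    # Result value: sum of the products of the binarization results and the weights.
    result_value = sum(w * yi for w, yi in zip(weights, y))
    prob2 = prob1 + result_value if increase else prob1 - result_value
    return min(max(prob2, 0.0), 1.0)

# Usage: three key feature fields, two of which hit their desired field values.
prob2 = readjust_probability(
    prob1=0.62,
    field_values=["yes", "no", "yes"],
    desired_values=["yes", "yes", "yes"],
    weights=[0.05, 0.03, 0.02],
)  # 0.62 + 0.05*1 + 0.03*0 + 0.02*1 = 0.69
```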
As an alternative example, the weights of the key feature fields may be manually input, and the weights of the key feature fields are acquired in response to an input operation of the weights of the key feature fields in the terminal device/server. In practical applications, the weights of the key feature fields may be obtained from empirical values.
As another alternative example, a second information set is obtained, the second information set comprising at least one piece of information, and the information related to the key feature field in the second information set is determined. The weight of each key feature field is then obtained, the weight being inversely related to the number of pieces of information related to the key feature field in the second information set. That is, the larger the number of pieces of information related to the key feature field in the second information set, the lower the weight of the key feature field; the smaller the number of pieces of information related to the key feature field in the second information set, the higher the weight of the key feature field. By increasing the weight value of a sparse key feature field, the field value of the sparse key feature field has a larger effect on adjusting the information evaluation probability, which alleviates the inaccuracy of the information evaluation probability caused by sparse key information.
Wherein the second information set may be the same information set as the first information set. As an alternative example, the second information set may be the training information set used for training the information evaluation model in the above-described embodiment. It can be understood that when the information to be evaluated is the stock information to be evaluated, the history information in the training information set is the history stock information, and the information in the second information set is also the stock information.
When the weight of the key feature field is inversely related to the number of the information related to the key feature field in the second information set, the embodiment of the present application provides a detailed implementation manner of obtaining the weight of the key feature field.
Table 3 data related to acquiring weights of key feature fields
Referring to table 3, table 3 is data related to acquiring weights of key feature fields. The data includes an information tag of information in the second information set, an information evaluation probability prob1 of information in the second information set that is obtainable in S202, and a field value of information in the second information set that is obtainable in S203 under a key feature field.
Based on the data in table 3, in one possible implementation manner, the embodiment of the present application provides a specific implementation manner for acquiring the weight of each key feature field, which includes:
S301-1: setting a parameter step length, a target number and a duty ratio threshold of the key feature field, and initializing the weight of the key feature field.
The parameter step length is the adjustment quantity of each adjustment of the weight, and the target quantity is the screening quantity of the information. It will be appreciated that the weights of the key feature fields are typically initialized to 0.
S302-1: and acquiring a second information set, inputting information in the second information set into a pre-trained information evaluation model, and acquiring a first information evaluation probability of the information.
In specific implementation, field values of information in the second information set under the information characteristic field are input into a pre-trained information evaluation model, and first information evaluation probability of the information is obtained. The first information evaluation probability is the information evaluation probability prob1 obtained in S202.
S303: whether the weight adjustment condition is satisfied is determined, and if so, S304-1 is executed.
It is understood that the present application is not limited to the weight adjustment conditions, and can be determined according to practical situations. For example, in the following embodiments, alternative implementation examples of two weight adjustment conditions are provided.
S304-1: if so, acquiring second information evaluation probability of the information according to the first information evaluation probability of the information, the field value of the information under the key feature field and the weight of the key feature field, and acquiring target number of information with the highest second information evaluation probability.
The second information evaluation probability of the information is the adjusted information evaluation probability prob2. As an alternative example, the second information evaluation probability can be calculated according to the formula prob2 = prob1 + w1·Y1 + … + wn·Yn (the specific formula is determined according to the actual situation).
After the second information evaluation probability of each piece of information is obtained, the second information evaluation probabilities of the pieces of information are ordered, and the target number of pieces of information with the highest second information evaluation probability is obtained. For example, when the target number is K, then after sorting, the first K pieces of information are selected.
S305: calculating the duty ratio of information related to key characteristic fields in the target quantity of information; and when the duty ratio is smaller than the corresponding duty ratio threshold value, the sum of the weight of the key feature field and the parameter step length is redetermined as the weight of the key feature field, and when the duty ratio is larger than or equal to the corresponding duty ratio threshold value, the weight of the key feature field is unchanged.
The duty ratio of the information related to the key feature field is the proportion of the information related to the key feature field among the target number of pieces of information. When the duty ratio is smaller than the corresponding duty ratio threshold, the information related to the key feature field is sparse, and the weight of the key feature field needs to be increased.
In one or more embodiments, the present application further provides an optional implementation example of the weight adjustment condition, and fuses the setting and updating process of the weight adjustment condition into the weight adjustment process described above, as follows:
s301-2: before judging whether the weight adjustment condition is satisfied, the method further comprises: setting an accuracy threshold and the maximum iteration number, and initializing the iteration number.
In this example, an accuracy threshold, a maximum number of iterations, is used to set the weight adjustment condition. It will be appreciated that the number of iterations is typically initialized to 0.
S302-2: and calculating first evaluation accuracy according to the first information evaluation probability of the information and the information label of the information, and initializing second evaluation accuracy by using the value of the first evaluation accuracy.
It will be appreciated that any accuracy calculation method, existing or occurring in the future, may be employed to calculate the first and second assessment accuracy. The evaluation accuracy may be used to represent the degree to which the predicted value coincides with the true value.
S304-2: when the weight adjustment condition is satisfied, further comprising: and calculating the second evaluation accuracy according to the second information evaluation probability of the information and the information label of the information.
As an alternative example, the weight adjustment condition is that the number of iterations is less than or equal to the maximum number of iterations, or that the difference between the first evaluation accuracy and the second evaluation accuracy is less than or equal to the accuracy threshold. That is, whether the weight still needs to be adjusted depends on the number of iterations or the evaluation accuracy.
S306: when the duty ratio is greater than or equal to the corresponding duty ratio threshold value, after the weight of the key feature field is unchanged, the method further comprises the following steps: updating the iteration times, and re-executing the steps of judging whether the weight adjustment condition is met or not and the follow-up steps until the weight adjustment condition is not met.
It will be appreciated that updating the iteration number allows a new round of weight adjustment to continue to be executed. As an alternative example, the iteration number may be increased by one on the basis of the original iteration number, so as to update the iteration number.
Referring to fig. 3, fig. 3 is a flowchart illustrating adjustment of weights of key feature fields according to an embodiment of the present application. As shown in fig. 3, in combination with the above, the complete step of adjusting the weights of the key feature fields may include S301-S306:
s301: setting an accuracy threshold, a parameter step length, a maximum iteration number, a target number and a duty ratio threshold of the key feature field, and initializing the iteration number and the weight of the key feature field.
It is understood that S301 is composed of S301-1 and S301-2.
The accuracy threshold may be expressed as auc_threshold, which is typically a fraction close to 0, such as 0.001 or 0.002. The parameter step size may be expressed as delta, and typically takes a value of a fraction close to 0, such as 0.01 or 0.05. The maximum number of iterations may be expressed as iter_max, which is a relatively large integer, such as 50 or 100. The target number may be denoted as K, and may take a relatively large integer value, such as 1000 or 2000. For each key feature field Y1, Y2, …, Yn, a duty ratio threshold is set, denoted as P1, P2, …, Pn; the duty ratio threshold is usually 1%, 5%, 20%, or the like.
The number of iterations is denoted iter, which is initialized to 0, i.e., iter = 0. The weights of the key feature fields are initialized, i.e., the weights w1, w2, …, wn of the key feature fields are all initialized to 0.
S302: the method comprises the steps of obtaining a second information set, inputting information in the second information set into a pre-trained information evaluation model, obtaining first information evaluation probability of the information, calculating first evaluation accuracy according to the first information evaluation probability of the information and an information label of the information, and initializing the second evaluation accuracy by using a value of the first evaluation accuracy.
It is understood that S302 consists of S302-1 and S302-2. After the first information evaluation probability prob1 of the information is acquired, the first evaluation accuracy is calculated from prob1 and the information tag. The first evaluation accuracy is denoted by auc_base. The second evaluation accuracy is initialized with the value of the first evaluation accuracy, i.e., auc_prob2 = auc_base, wherein auc_prob2 represents the second evaluation accuracy and the initial value of auc_prob2 is auc_base.
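The variable names auc_base and auc_prob2 suggest an AUC-style metric; since the text states that any accuracy calculation method may be used, the following sketch assumes scikit-learn's roc_auc_score purely as one possible choice, and the variable names labels and prob1 are illustrative.

```python
# A sketch of S302, assuming AUC as the accuracy metric (one possible choice).
from sklearn.metrics import roc_auc_score

# labels: information tags of the second information set (1 = target type information, 0 = otherwise)
# prob1:  first information evaluation probabilities output by the pre-trained model
auc_base = roc_auc_score(labels, prob1)  # first evaluation accuracy
auc_prob2 = auc_base                     # second evaluation accuracy, initialized with auc_base
```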
S303: judging whether a weight adjustment condition is met; if yes, S304 is executed. Otherwise, ending.
The weight adjustment condition is that the number of iterations is smaller than or equal to the maximum number of iterations, or the difference between the first evaluation accuracy and the second evaluation accuracy is smaller than or equal to the accuracy threshold. That is, whether iter <= iter_max, or whether auc_base - auc_prob2 <= auc_threshold.
S304: acquiring second information evaluation probability of the information according to the first information evaluation probability of the information, the field value of the information under the key feature field and the weight of the key feature field, calculating second evaluation accuracy according to the second information evaluation probability of the information and the information label of the information, and acquiring target number of information with the highest second information evaluation probability.
It is understood that S304 consists of S304-1 and S304-2. For example, the second information evaluation probability prob2 of the information may be calculated according to the formula prob2 = prob1 + w1·Y1 + … + wn·Yn (the specific formula is determined according to the actual situation), and the second evaluation accuracy auc_prob2 may then be recalculated from prob2 and the information label. It will be appreciated that the recalculated second evaluation accuracy auc_prob2 overwrites the initial value of auc_prob2 in S302 above.
Further, the pieces of information are sorted from large to small according to prob2, and the first K pieces of information are taken.
S305: calculating the duty ratio of information related to key characteristic fields in the target quantity of information; and when the duty ratio is smaller than the corresponding duty ratio threshold value, the sum of the weight of the key feature field and the parameter step length is redetermined as the weight of the key feature field, and when the duty ratio is larger than or equal to the corresponding duty ratio threshold value, the weight of the key feature field is unchanged.
To calculate the duty ratios, let m take the values 1, 2, …, n in sequence, where m is a positive integer. For each key feature field Ym, the duty ratio of the information related to that key feature field among the first K pieces of information is calculated, and the duty ratios are denoted as percentage_1, percentage_2, …, percentage_n, respectively.
The values of w1, w2, …, wn are then updated. Let m take the values 1, 2, …, n in order; when percentage_m < Pm, the value of wm is updated according to the formula wm = wm + delta, and when percentage_m >= Pm, the value of wm is not updated.
S306: the iteration number is updated and S303 and subsequent steps are re-performed until the weight adjustment condition is not satisfied.
The value of the iteration number iter is updated according to the formula iter=iter+1, and S303 and subsequent steps are re-executed to adjust the weights of the key feature fields until the weight adjustment condition is not satisfied.
Based on the above-mentioned S301-S306, the embodiment of the present application provides a specific implementation manner of obtaining the weight of each key feature field, where the obtained weight of each key feature field satisfies the condition that "the weight of the key feature field and the number ratio of the information related to the key feature field in the second information set are inversely related".
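Under illustrative assumptions, the complete flow S301-S306 can be sketched as follows: the accuracy metric is again taken to be AUC, the "increase" variant of the adjustment formula is used, Y is a 0/1 matrix of binarized key feature field values, and a hard iteration cap is added as a safeguard against non-termination; none of these names or safeguards are mandated by the original description.

```python
# A sketch of the weight adjustment flow S301-S306 under the assumptions stated above.
import numpy as np
from sklearn.metrics import roc_auc_score

def fit_key_field_weights(prob1, Y, labels, duty_thresholds,
                          delta=0.01, K=1000, iter_max=50, auc_threshold=0.001):
    n_fields = Y.shape[1]
    w = np.zeros(n_fields)                    # S301: initialize the weights to 0
    auc_base = roc_auc_score(labels, prob1)   # S302: first evaluation accuracy
    auc_prob2 = auc_base
    it = 0
    # S303: weight adjustment condition, taken literally from the text (iteration bound OR accuracy gap)
    while it <= iter_max or (auc_base - auc_prob2) <= auc_threshold:
        if it > 10 * iter_max:                # safeguard against non-termination (an added assumption)
            break
        prob2 = prob1 + Y @ w                 # S304: second information evaluation probability
        auc_prob2 = roc_auc_score(labels, prob2)
        top_k = np.argsort(-prob2)[:K]        # target number of pieces with the highest prob2
        for m in range(n_fields):             # S305: duty ratio of each key feature field in the top K
            percentage_m = Y[top_k, m].mean()
            if percentage_m < duty_thresholds[m]:
                w[m] += delta                 # sparse key feature field: increase its weight by the step
        it += 1                               # S306: update the iteration count
    return w
```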
In summary, the embodiment of the present application provides an application example of S201-S204 in a marketing scenario, as follows:
in the marketing scenario, the clue information to be evaluated is stock clue information. It is assumed, without loss of generality, that the quality evaluation of the clue information to be evaluated is to be performed on day t+s.
First, on day t+s, the field values of the stock clue information under the information feature fields are acquired and denoted as V(t+s). Further, the information feature fields may be divided into common feature fields X1, …, Xn and key feature fields Y1, …, Yn. Correspondingly, V(t+s) is also divided into the field values of the common feature fields and the field values of the key feature fields.
The field value of the common feature field and the field value of the key feature field (i.e. the field value under the information feature field) are input into the information evaluation model to obtain the information evaluation probability prob1 of the stock clue information, wherein the information evaluation probability prob1 of the stock clue information is used for indicating the possibility that the stock clue information is high-quality clue information.
Further, the information evaluation probability prob1 is adjusted, and the adjusted information evaluation probability prob2 of the stock clue information is calculated according to the formula prob2 = prob1 + w1·Y1 + … + wn·Yn.
It will be appreciated that the same processing may be performed on a plurality of pieces of stock clue information, so as to obtain prob2 for each of them. prob2 is a quality evaluation index of the stock clue information; the higher prob2 is, the higher the possibility that the stock clue information is "high-quality clue information".
Furthermore, the stock clue information can be sorted from high to low according to prob2, and the first K pieces of stock clue information (K may be, for example, 1000 or 2000) are taken, so that K pieces of high-quality stock clue information can be obtained. Furthermore, the marketer can determine the follow-up priority of the stock clue information based on prob2.
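A short sketch of this ranking step follows, assuming the leads are held as (identifier, prob2) pairs; the variable names and example values are illustrative.

```python
# Rank stock clue information by prob2 from high to low and take the first K pieces.
K = 1000
leads = [("lead_001", 0.91), ("lead_002", 0.37), ("lead_003", 0.78)]  # illustrative data
ranked = sorted(leads, key=lambda item: item[1], reverse=True)
top_quality_leads = ranked[:K]   # up to K pieces of high-quality stock clue information
```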
Referring to table 4, table 4 is model evaluation index obtained after training the information evaluation model using the stock information. Model evaluation metrics include accuracy, precision, and F1 score. Wherein the stock information includes two kinds, one is the stock information in which the recording time of the field value under the information feature field and the recording time of the information tag of the stock information are not within the same time range. The other is stock information in which the recording time of the field value under the information feature field and the recording time of the information tag of the stock information are within the same time range (i.e., the training manner provided by the embodiment of the application when the information is specifically the stock information).
As can be seen from Table 4, in terms of each model evaluation index, the information evaluation model obtained by training in the manner provided by the embodiment of the application achieves higher index values. This is because the field values of the stock information and the information labels are aligned in the time dimension, so that the problem of inconsistency between training and inference in the model training process is solved, and the index values of the model evaluation indexes are improved.
Referring to table 5, table 5 shows the duty ratio results of the key feature fields. As shown in table 5, the method of "performing information evaluation only with the information evaluation model" is compared with the method provided by the embodiment of the application of "adjusting the information evaluation probability after performing information evaluation with the information evaluation model"; table 5 provides the accuracy obtained by the two methods and the proportion of hit key feature fields among the first 1000 pieces of information ranked by information evaluation probability for the two methods.
Table 4 model evaluation index
Table 5 duty ratio results
As can be seen from table 5, in the case where the accuracy is comparable to that of "performing information evaluation only with the information evaluation model" (for example, the difference is not more than the given threshold of 0.002), the method of "adjusting the information evaluation probability after performing information evaluation with the information evaluation model" provided in the embodiment of the present application significantly improves the proportion of information that "hits several key feature fields" among the first 1000 pieces of information. This illustrates that the key feature fields better play their role in determining whether the information is the target type information (such as the high-quality clue information in the marketing scenario). Meanwhile, it illustrates that with the method provided by the embodiment of the application, the finally obtained information evaluation probability is more accurate.
It will be appreciated by those skilled in the art that in the above-described method of the specific embodiments, the written order of steps is not meant to imply a strict order of execution but rather should be construed according to the function and possibly inherent logic of the steps.
Based on the information evaluation method provided by the above method embodiment, the embodiment of the present application further provides an information evaluation device, and the information evaluation device will be described below with reference to the accompanying drawings. Because the principle by which the device in the embodiment of the present disclosure solves the problem is similar to that of the information evaluation method in the embodiment of the present disclosure, the implementation of the device may refer to the implementation of the method, and repeated descriptions are omitted.
Referring to fig. 4, a schematic structural diagram of an information evaluation device according to an embodiment of the present application is shown. As shown in fig. 4, the information evaluation apparatus includes:
the first acquisition unit is used for acquiring information to be evaluated and a field value of the information to be evaluated under an information characteristic field;
the input unit is used for inputting a field value of the information to be evaluated under an information characteristic field into a pre-trained information evaluation model, and acquiring information evaluation probability of the information to be evaluated, which is output by the information evaluation model, as target type information;
A second obtaining unit, configured to obtain at least one key feature field in the information feature fields; the influence degree of the field value of the key feature field on the information evaluation probability is larger than a preset degree threshold;
and the adjusting unit is used for adjusting the information evaluation probability according to the field value of the information to be evaluated under the key characteristic field and re-acquiring the information evaluation probability of the information to be evaluated as the target type information.
In one possible implementation, the information evaluation model is trained according to a training information set; the training information set comprises at least one piece of history information, a field value of each piece of history information under an information characteristic field and an information label of each piece of history information;
the training process of the information evaluation model comprises the following steps:
inputting field values of the history information under the information characteristic fields into an information evaluation model, and acquiring prediction information evaluation probability of the history information as the target type information;
calculating a loss value according to the estimated probability of the historical information for the predicted information of the target type information and the information label of the historical information;
And training the information evaluation model by using the loss value, and obtaining the trained information evaluation model.
In one possible implementation, when the information to be evaluated is inventory information to be evaluated, the history information is history inventory information; the field value recording time and the information tag recording time of each history stock information are within the same time range.
In one possible implementation manner, the information feature field includes at least one feature field, and the second obtaining unit includes:
a first calculating subunit, configured to calculate a quantity ratio of information related to a target feature field in a first information set in the first information set, and obtain a probability that the information related to the target feature field is the target type information; the target feature field is each of the information feature fields;
and the first determining subunit is used for determining the target characteristic field as a key characteristic field when the quantity duty ratio is lower than a preset duty ratio threshold value and/or the probability meets a preset probability range.
In one possible implementation, the adjusting unit includes:
and the adjustment subunit is used for adjusting the information evaluation probability according to the field value of the information to be evaluated under the key characteristic field and the weight of the key characteristic field, and obtaining the information evaluation probability of the information to be evaluated.
In one possible implementation, the adjusting subunit includes:
the first acquisition subunit is used for binarizing the field value of the information to be evaluated under the key characteristic field to obtain a binarization result, calculating the product of the binarization result and the weight of the key characteristic field, calculating the sum of the products, and obtaining a result value;
and the second determination subunit is used for re-determining the information evaluation probability for the information to be evaluated as the target type information according to the result value and the information evaluation probability.
In one possible implementation, the apparatus further includes:
a third acquisition unit configured to acquire a second information set, and determine information related to the key feature field in the second information set;
a fourth obtaining unit, configured to obtain a weight of each key feature field; the weight of the key feature field is inversely related to the number of information related to the key feature field in the second set of information.
In one possible implementation, the apparatus further includes: a fifth acquisition unit;
the fifth obtaining unit is configured to obtain a weight of each key feature field;
The fifth acquisition unit includes:
the first setting subunit is used for setting parameter step length, target number and the duty ratio threshold of the key feature field, and initializing the weight of the key feature field;
the second acquisition subunit is used for acquiring a second information set, inputting information in the second information set into the pre-trained information evaluation model, and acquiring a first information evaluation probability of the information;
the judging subunit is used for judging whether the weight adjustment condition is met, if so, acquiring a second information evaluation probability of the information according to the first information evaluation probability of the information, the field value of the information under the key characteristic field and the weight of the key characteristic field, and acquiring the target number of information with the highest second information evaluation probability;
a second calculating subunit, configured to calculate a duty ratio of information related to the key feature field in the target number of information; and when the duty ratio is smaller than a corresponding duty ratio threshold value, the sum of the weight of the key feature field and the parameter step length is redetermined to be the weight of the key feature field, and when the duty ratio is larger than or equal to the corresponding duty ratio threshold value, the weight of the key feature field is unchanged.
In one possible implementation manner, the fifth obtaining unit further includes:
the second setting subunit is used for setting an accuracy threshold and a maximum iteration number before the judgment whether the weight adjustment condition is met or not, and initializing the iteration number;
a third computing subunit, configured to compute a first evaluation accuracy according to a first information evaluation probability of the information and an information tag of the information, and initialize a second evaluation accuracy by using a value of the first evaluation accuracy;
when the weight adjustment condition is satisfied, the fifth acquisition unit further includes:
a fourth calculating subunit, configured to calculate a second evaluation accuracy according to the second information evaluation probability of the information and the information tag of the information;
the weight adjustment condition is that the iteration number is smaller than or equal to the maximum iteration number, or the difference value between the first evaluation accuracy and the second evaluation accuracy is smaller than or equal to the accuracy threshold;
the fifth acquisition unit further includes:
and the updating subunit is used for updating the iteration times after the weight of the key characteristic field is unchanged when the duty ratio is larger than or equal to the corresponding duty ratio threshold value, and re-executing the judgment whether the weight adjustment condition is met or not and the subsequent steps until the weight adjustment condition is not met.
Further combinations of the present application may be made to provide further implementations based on the implementations provided in the above aspects.
It should be noted that, for specific implementation of each unit in this embodiment, reference may be made to the related description in the above method embodiment. The division of the units in the embodiment of the application is schematic, only one logic function is divided, and other division modes can be adopted in actual implementation. The functional units in the embodiment of the application can be integrated in one processing unit, or each unit can exist alone physically, or two or more units are integrated in one unit. For example, in the above embodiment, the processing unit and the transmitting unit may be the same unit or may be different units. The integrated units may be implemented in hardware or in software functional units.
Based on the information evaluation method provided by the embodiment of the method, the application further provides electronic equipment, which comprises the following steps: one or more processors; and a storage device having one or more programs stored thereon, which when executed by the one or more processors, cause the one or more processors to implement the information evaluation method described in any one of the above embodiments.
Referring now to fig. 5, a schematic diagram of an electronic device 500 suitable for use in implementing embodiments of the present application is shown. The terminal device in the embodiment of the present application may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (Personal Digital Assistant, personal digital assistants), PADs (portable android device, tablet computers), PMPs (Portable Media Player, portable multimedia players), car terminals (e.g., car navigation terminals), and the like, and fixed terminals such as digital TVs (televisions), desktop computers, and the like. The electronic device shown in fig. 5 is only an example and should not be construed as limiting the functionality and scope of use of the embodiments of the present application.
As shown in fig. 5, the electronic device 500 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 501, which may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 502 or a program loaded from a storage means 508 into a Random Access Memory (RAM) 503. In the RAM 503, various programs and data required for the operation of the electronic apparatus 500 are also stored. The processing device 501, the ROM 502, and the RAM 503 are connected to each other via a bus 504. An input/output (I/O) interface 505 is also connected to bus 504.
In general, the following devices may be connected to the I/O interface 505: input devices 506 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 507 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 508 including, for example, magnetic tape, hard disk, etc.; and communication means 509. The communication means 509 may allow the electronic device 500 to communicate with other devices wirelessly or by wire to exchange data. While fig. 5 shows an electronic device 500 having various means, it is to be understood that not all of the illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead.
In particular, according to embodiments of the present application, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present application include a computer program product comprising a computer program embodied on a non-transitory computer readable medium, the computer program comprising program code for performing the method shown in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 509, or from the storage means 508, or from the ROM 502. The above-described functions defined in the method of the embodiment of the present application are performed when the computer program is executed by the processing means 501.
The electronic device provided by the embodiment of the present application belongs to the same inventive concept as the information evaluation method provided by the above embodiment, and technical details not described in detail in the present embodiment can be seen in the above embodiment, and the present embodiment has the same beneficial effects as the above embodiment.
Based on the information evaluation method provided in the foregoing method embodiments, an embodiment of the present application provides a computer readable medium having a computer program stored thereon, where the program when executed by a processor implements the information evaluation method according to any one of the foregoing embodiments.
The computer readable medium of the present application may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present application, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
In some implementations, the clients, servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol ), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), the internet (e.g., the internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed networks.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device.
The computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to perform the information evaluation method described above.
Computer program code for carrying out operations of the present application may be written in one or more programming languages, including, but not limited to, an object oriented programming language such as Java, smalltalk, C ++ and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present application may be implemented in software or in hardware. The name of the unit/module is not limited to the unit itself in some cases, and, for example, the voice data acquisition module may also be described as a "data acquisition module".
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
In the context of the present application, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
It should be noted that, in the present description, each embodiment is described in a progressive manner, and each embodiment is mainly described in a different manner from other embodiments, and identical and similar parts between the embodiments are all enough to refer to each other. For the system or device disclosed in the embodiments, since it corresponds to the method disclosed in the embodiments, the description is relatively simple, and the relevant points refer to the description of the method section.
It should be understood that in the present application, "at least one (item)" means one or more, and "a plurality" means two or more. "and/or" for describing the association relationship of the association object, the representation may have three relationships, for example, "a and/or B" may represent: only a, only B and both a and B are present, wherein a, B may be singular or plural. The character "/" generally indicates that the context-dependent object is an "or" relationship. "at least one of" or the like means any combination of these items, including any combination of single item(s) or plural items(s). For example, at least one (one) of a, b or c may represent: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", wherein a, b, c may be single or plural.
It is further noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. The software modules may be disposed in Random Access Memory (RAM), memory, read Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (12)

1. An information evaluation method, the method comprising:
acquiring information to be evaluated, and acquiring a field value of the information to be evaluated under an information characteristic field;
inputting a field value of the information to be evaluated under an information characteristic field into a pre-trained information evaluation model, and acquiring information evaluation probability of the information to be evaluated, which is output by the information evaluation model, as target type information;
acquiring at least one key feature field in the information feature fields; the influence degree of the field value of the key feature field on the information evaluation probability is larger than a preset degree threshold;
And adjusting the information evaluation probability according to the field value of the information to be evaluated under the key characteristic field, and re-acquiring the information evaluation probability of the information to be evaluated as the target type information.
2. The method of claim 1, wherein the information assessment model is trained from a training information set; the training information set comprises at least one piece of history information, a field value of each piece of history information under an information characteristic field and an information label of each piece of history information;
the training process of the information evaluation model comprises the following steps:
inputting field values of the history information under the information characteristic fields into an information evaluation model, and acquiring prediction information evaluation probability of the history information as the target type information;
calculating a loss value according to the estimated probability of the historical information for the predicted information of the target type information and the information label of the historical information;
and training the information evaluation model by using the loss value, and obtaining the trained information evaluation model.
3. The method according to claim 2, wherein when the information to be evaluated is stock information to be evaluated, the history information is history stock information; the field value recording time and the information tag recording time of each history stock information are within the same time range.
4. A method according to any of claims 1-3, wherein the information feature field comprises at least one feature field, and the obtaining at least one key feature field of the information feature field comprises:
calculating the quantity ratio of the information related to the target characteristic field in a first information set in the first information set, and acquiring the probability that the information related to the target characteristic field is the target type information; the target feature field is each of the information feature fields;
and determining the target characteristic field as a key characteristic field when the number duty ratio is lower than a preset duty ratio threshold value and/or the probability satisfies a preset probability range.
5. A method according to any one of claims 1-3, wherein said adjusting the information evaluation probability according to the field value of the information to be evaluated under the key feature field, re-acquiring the information evaluation probability of the information to be evaluated as the target type information, comprises:
and adjusting the information evaluation probability according to the field value of the information to be evaluated under the key characteristic field and the weight of the key characteristic field, and obtaining the information evaluation probability of the information to be evaluated.
6. The method according to claim 5, wherein the adjusting the information evaluation probability according to the field value of the information to be evaluated under the key feature field and the weight of the key feature field, to obtain the information evaluation probability of the information to be evaluated, includes:
binarizing a field value of the information to be evaluated under the key characteristic field to obtain a binarization result, calculating a product of the binarization result and the weight of the key characteristic field, and calculating a sum of the products to obtain a result value;
and re-determining the information evaluation probability for the information to be evaluated as the target type information according to the result value and the information evaluation probability.
7. A method according to any one of claims 1-3, wherein the method further comprises:
acquiring a second information set, and determining information related to the key feature field in the second information set;
acquiring the weight of each key characteristic field; the weight of the key feature field is inversely related to the number of information related to the key feature field in the second set of information.
8. A method according to any one of claims 1-3, wherein the method further comprises: acquiring the weight of each key characteristic field;
The obtaining the weight of each key feature field includes:
setting a parameter step length, a target number and a duty ratio threshold of the key feature field, and initializing the weight of the key feature field;
acquiring a second information set, inputting information in the second information set into the pre-trained information evaluation model, and acquiring a first information evaluation probability of the information;
judging whether a weight adjustment condition is met, if so, acquiring a second information evaluation probability of the information according to the first information evaluation probability of the information, a field value of the information under the key characteristic field and the weight of the key characteristic field, and acquiring a target number of information with the highest second information evaluation probability;
calculating the duty ratio of information related to the key characteristic field in the target quantity information; and when the duty ratio is smaller than a corresponding duty ratio threshold value, the sum of the weight of the key feature field and the parameter step length is redetermined to be the weight of the key feature field, and when the duty ratio is larger than or equal to the corresponding duty ratio threshold value, the weight of the key feature field is unchanged.
9. The method according to claim 8, wherein, before determining whether the weight adjustment condition is met, the method further comprises:
setting an accuracy threshold and a maximum number of iterations, and initializing an iteration count;
calculating a first evaluation accuracy according to the first information evaluation probability of the information and the information label of the information, and initializing a second evaluation accuracy with the value of the first evaluation accuracy;
when the weight adjustment condition is met, the method further comprises:
calculating the second evaluation accuracy according to the second information evaluation probability of the information and the information label of the information;
the weight adjustment condition is that the iteration count is less than or equal to the maximum number of iterations, or the difference between the first evaluation accuracy and the second evaluation accuracy is less than or equal to the accuracy threshold;
after keeping the weight of the key feature field unchanged when the duty ratio is greater than or equal to the corresponding duty ratio threshold, the method further comprises:
updating the iteration count, and re-executing the step of determining whether the weight adjustment condition is met and the subsequent steps until the weight adjustment condition is no longer met.
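Combining claims 8 and 9 into a terminating loop, again only as a sketch: it reuses the weight_update_step helper from the sketch after claim 8, assumes a 0.5 decision threshold for computing accuracy from the probabilities and labels, and reads the adjustment condition as a conjunction so that the loop halts (the claim itself states the condition with "or"):

def tune_key_field_weight(items: list, labels: list, first_probs: list, field: str,
                          step: float = 0.05, target_number: int = 100,
                          ratio_threshold: float = 0.3, accuracy_threshold: float = 0.01,
                          max_iterations: int = 20) -> float:
    """Iteratively adjust one key feature field's weight (claims 8-9, illustrative)."""
    def accuracy(probs: list) -> float:
        # Assumption: an item is predicted as the target type when its probability >= 0.5.
        predictions = [1 if p >= 0.5 else 0 for p in probs]
        return sum(int(pred == label) for pred, label in zip(predictions, labels)) / len(labels)

    weight = 0.0                            # initialize the weight of the key feature field
    first_accuracy = accuracy(first_probs)  # first evaluation accuracy
    second_accuracy = first_accuracy        # initialized with the first evaluation accuracy
    iterations = 0                          # initialize the iteration count

    # Interpreted adjustment condition (an assumption; the claim uses "or"): keep adjusting
    # while within the iteration budget and the accuracy has not dropped beyond the threshold.
    while iterations <= max_iterations and (first_accuracy - second_accuracy) <= accuracy_threshold:
        weight, second_probs = weight_update_step(items, first_probs, field, weight,
                                                  step, target_number, ratio_threshold)
        second_accuracy = accuracy(second_probs)
        iterations += 1
    return weight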
10. An information evaluation apparatus, characterized in that the apparatus comprises:
a first acquisition unit, configured to acquire information to be evaluated and a field value of the information to be evaluated under an information feature field;
an input unit, configured to input the field value of the information to be evaluated under the information feature field into a pre-trained information evaluation model, and acquire the information evaluation probability, output by the information evaluation model, that the information to be evaluated is the target type information;
a second acquisition unit, configured to acquire at least one key feature field from the information feature fields, wherein the degree of influence of the field value of the key feature field on the information evaluation probability is greater than a preset degree threshold;
and an adjustment unit, configured to adjust the information evaluation probability according to the field value of the information to be evaluated under the key feature field and re-acquire the information evaluation probability that the information to be evaluated is the target type information.
11. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon,
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the information evaluation method of any one of claims 1-9.
12. A computer-readable storage medium, on which a computer program is stored, which computer program, when being executed by a processor, implements the information evaluation method according to any one of claims 1-9.
CN202310745106.3A 2023-06-21 2023-06-21 Information evaluation method, device and equipment Pending CN116975580A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310745106.3A CN116975580A (en) 2023-06-21 2023-06-21 Information evaluation method, device and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310745106.3A CN116975580A (en) 2023-06-21 2023-06-21 Information evaluation method, device and equipment

Publications (1)

Publication Number Publication Date
CN116975580A true CN116975580A (en) 2023-10-31

Family

ID=88478680

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310745106.3A Pending CN116975580A (en) 2023-06-21 2023-06-21 Information evaluation method, device and equipment

Country Status (1)

Country Link
CN (1) CN116975580A (en)

Similar Documents

Publication Publication Date Title
CN107908740B (en) Information output method and device
CN114265979B (en) Method for determining fusion parameters, information recommendation method and model training method
US20140358694A1 (en) Social media pricing engine
CN112204610A (en) Neural network based electronic content
CN110766513A (en) Information sorting method and device, electronic equipment and readable storage medium
CN114417174B (en) Content recommendation method, device, equipment and computer storage medium
CN112836128A (en) Information recommendation method, device, equipment and storage medium
CN113393299A (en) Recommendation model training method and device, electronic equipment and storage medium
CN110766184A (en) Order quantity prediction method and device
CN113128773B (en) Training method of address prediction model, address prediction method and device
CN114119123A (en) Information pushing method and device
CN113763077A (en) Method and apparatus for detecting false trade orders
CN111369293A (en) Advertisement bidding method and device and electronic equipment
CN116109374A (en) Resource bit display method, device, electronic equipment and computer readable medium
CN112860999B (en) Information recommendation method, device, equipment and storage medium
CN115795345A (en) Information processing method, device, equipment and storage medium
CN116975580A (en) Information evaluation method, device and equipment
CN114757700A (en) Article sales prediction model training method, article sales prediction method and apparatus
CN111582456B (en) Method, apparatus, device and medium for generating network model information
CN114723455A (en) Service processing method and device, electronic equipment and storage medium
CN114662001A (en) Resource interaction prediction model training method and device and resource recommendation method and device
CN113391988A (en) Method and device for losing user retention, electronic equipment and storage medium
CN111339770A (en) Method and apparatus for outputting information
CN116823407B (en) Product information pushing method, device, electronic equipment and computer readable medium
CN116934153A (en) Training method of information evaluation model, information evaluation method, device and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination