CN117556264B - Training method and device for evaluation model and electronic equipment - Google Patents


Info

Publication number
CN117556264B
CN117556264B (application number CN202410038790.6A)
Authority
CN
China
Prior art keywords
user
target user
data
model
behavior data
Prior art date
Legal status
Active
Application number
CN202410038790.6A
Other languages
Chinese (zh)
Other versions
CN117556264A (en)
Inventor
柯林江
黄昕宇
郭云三
Current Assignee
Zhejiang Tonghuashun Intelligent Technology Co Ltd
Original Assignee
Zhejiang Tonghuashun Intelligent Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Tonghuashun Intelligent Technology Co Ltd
Priority to CN202410038790.6A
Publication of CN117556264A
Application granted
Publication of CN117556264B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30: Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33: Querying
    • G06F16/332: Query formulation
    • G06F16/3329: Natural language query formulation or dialogue systems
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/217: Validation; Performance evaluation; Active pattern learning techniques
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/22: Matching criteria, e.g. proximity measures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00: Handling natural language data
    • G06F40/30: Semantic analysis

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Human Computer Interaction (AREA)
  • Databases & Information Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The application provides a training method and apparatus for an evaluation model, and an electronic device. The method comprises: acquiring historical behavior data of a target user associated with a model to be evaluated; determining primary index layers and secondary index items from the historical behavior data, and performing intent recognition on the text data in the historical behavior data to obtain behavior sequence data for the target user; training an evaluation model on the behavior sequence data; and optimizing the evaluation model with the target user's latest behavior data. The training method provided by the application trains the evaluation model on user behavior data, improving the accuracy of quality evaluation of the model to be evaluated.

Description

Training method and device for evaluation model and electronic equipment
Technical Field
The present application relates to the field of artificial intelligence technologies, and in particular, to a training method and apparatus for an evaluation model, and an electronic device.
Background
Existing model quality assessment methods focus mainly on the model level; they do not account for individual differences arising from users' subjective factors, and therefore lack objectivity and consistency. How to improve model quality evaluation is the problem this application addresses.
Disclosure of Invention
The embodiments of the application provide a training method and apparatus for an evaluation model, and an electronic device, which can train the evaluation model on a target user's historical behavior data that reflects individual differences, improving the quality-evaluation effect of the model and the accuracy of model evaluation.
The technical scheme of the embodiment of the application is realized as follows:
In a first aspect, an embodiment of the present application provides a training method for an evaluation model, including:
acquiring a target user associated with a model to be evaluated and the historical behavior data of that target user;
determining primary index layers and secondary index items from the historical behavior data, and performing intent recognition on text data in the historical behavior data to obtain behavior sequence data for the target user;
training an evaluation model on the behavior sequence data for the target user;
and optimizing the evaluation model with the latest behavior data of the target user.
In the above solution, the obtaining of the target user corresponding to the model to be evaluated and the historical behavior data corresponding to the target user includes:
acquiring the historical behavior data of each user in the business system where the model to be evaluated is located, wherein the historical behavior data comprises at least one of text data, behavior data, and external data;
judging, from the historical behavior data of each user, whether the user is a strongly correlated user in the business domain of the model to be evaluated;
if the user is a strongly correlated user, judging whether the user is a core user of the business system where the model to be evaluated is located;
and if the user is a core user, determining the user as the target user.
In the above solution, the determining of the primary index layers and the secondary index items according to the historical behavior data includes:
determining a first number of primary index layers according to the features of the historical behavior data and a preset first number, wherein each primary index layer characterizes a level of the target user's acceptance of the model to be evaluated;
determining, for each primary index layer, a second number of secondary index items according to the behavior features of the historical behavior data corresponding to that layer and a preset second number, wherein each secondary index item characterizes a feature index of the target user within that primary index layer.
In the above solution, the performing of intent recognition on the text data in the historical behavior data to obtain the behavior sequence data corresponding to the target user includes:
preprocessing the historical behavior data corresponding to the target user to obtain the corresponding keyword data;
for each item of keyword data, if its feature matches a target index layer among the primary index layers and also matches a target index item among the secondary index items of that layer, determining the value at the position corresponding to that secondary index item in the behavior sequence data as a target value;
and determining the behavior sequence data obtained after all keyword data corresponding to the target user have been matched as the behavior sequence data corresponding to the target user.
In the above solution, if the feature corresponding to the keyword data matches no primary index layer, or matches no secondary index item, the value at each position of the behavior sequence data is left unchanged.
In the above solution, the optimizing of the evaluation model according to the latest behavior data corresponding to the target user includes:
determining churned users according to the latest behavior data corresponding to the target user;
determining a first score value for each churned user according to that user's latest behavior data;
determining the behavior sequence data corresponding to each churned user according to all behavior data of that user;
and retraining the evaluation model with the churned users' behavior sequence data and first score values to obtain an optimized evaluation model.
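As a minimal sketch of this retraining step, suppose the evaluation model is a simple per-bit scoring table (an illustrative stand-in for the patent's actual model, not its architecture): the churned users' (behavior sequence, first score value) pairs are appended to the training set and the model is refit.

```python
def fit_evaluation_model(samples):
    """Fit a toy evaluation model: one weight per sequence position,
    set to the average score of training samples with that bit active.
    Illustrative stand-in only; the patent does not specify this model."""
    n_bits = len(samples[0][0])
    weights = []
    for i in range(n_bits):
        scores = [score for seq, score in samples if seq[i] == "1"]
        weights.append(sum(scores) / len(scores) if scores else 0.0)
    return weights

def predict_score(weights, seq):
    """Score a behavior sequence as the mean weight of its active bits."""
    active = [w for w, bit in zip(weights, seq) if bit == "1"]
    return sum(active) / len(active) if active else 0.0

def retrain_with_churned(samples, churned_samples):
    """Optimization step: retrain on the original samples plus the churned
    users' (behavior sequence, first score value) pairs."""
    return fit_evaluation_model(samples + churned_samples)
```

Here the low first score values of churned users pull down the weights of the bits their sequences activate, which is the intended effect of the optimization step.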
In the above solution, the determining of churned users according to the latest behavior data corresponding to the target user includes:
acquiring the text data from the target user's latest interactions with the model to be evaluated, and the target user's latest behavior data in the business system where the model to be evaluated is located;
and if first behavior data exists in the target user's behavior data, and/or the target user has produced no text data and no behavior data within a preset time, determining the target user as a churned user.
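The churn rule above can be sketched as a predicate. The signal keywords and the inactivity window are illustrative assumptions; the patent leaves the "first behavior data" and the preset time unspecified.

```python
from datetime import datetime, timedelta

# Assumed examples of explicit churn signals ("first behavior data").
CHURN_SIGNALS = ("uninstall", "unsubscribe", "close account")

def is_churned(latest_behavior, last_active, now, inactive_days=30):
    """A target user is treated as churned if their latest behavior records
    contain an explicit churn signal, and/or they produced no text or
    behavior data within a preset window (here 30 days, an assumption)."""
    has_signal = any(sig in record for record in latest_behavior
                     for sig in CHURN_SIGNALS)
    inactive = (now - last_active) > timedelta(days=inactive_days)
    return has_signal or inactive
```

Either condition alone marks the user as churned, matching the "and/or" wording of this step.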
In a second aspect, an embodiment of the present application provides a training apparatus for an evaluation model, where the training apparatus for an evaluation model includes:
the acquisition module is used for acquiring a target user corresponding to the model to be evaluated and historical behavior data corresponding to the target user;
The sequence determining module is used for determining a primary index layer and a secondary index item according to the historical behavior data, and carrying out intention recognition on text data in the historical behavior data to obtain behavior sequence data corresponding to the target user;
the training module is used for training an evaluation model according to the behavior sequence data corresponding to the target user;
and the optimizing module is used for optimizing the evaluation model according to the latest behavior data corresponding to the target user.
In a third aspect, an embodiment of the present application provides an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor; the memory stores instructions executable by the at least one processor, so that the at least one processor can execute the training method of the evaluation model provided by the embodiment of the application.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium comprising a set of computer-executable instructions, which when executed, are configured to perform a training method for an assessment model provided by embodiments of the present application.
According to the training method provided by the embodiments of the application, a target user corresponding to the model to be evaluated and the target user's historical behavior data are obtained; primary index layers and secondary index items are determined from the historical behavior data, and intent recognition is performed on the text data in the historical behavior data to obtain behavior sequence data for the target user; an evaluation model is trained on that behavior sequence data; and the evaluation model is then optimized with the target user's latest behavior data. By acquiring the target user's historical behavior data and deriving primary index layers and secondary index items from it, intent recognition can be performed on the text data accurately, feedback from different target users under different scenarios and different indexes of the model to be evaluated can be collected comprehensively, and more accurate behavior sequence data can be obtained. Optimizing the trained evaluation model further improves its quality-evaluation effect.
Drawings
The drawings are included to provide a better understanding of the present application and are not to be construed as limiting the application. Wherein:
FIG. 1 is a schematic diagram of an alternative process flow of a training method for an evaluation model according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a primary index layer and a secondary index item according to an embodiment of the present application;
FIG. 3 is a schematic diagram of an alternative architecture of a training apparatus for an assessment model according to an embodiment of the present application;
fig. 4 is a schematic block diagram of an alternative electronic device provided by an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings. The described embodiments should not be construed as limiting the present application, and all other embodiments obtained by those skilled in the art without inventive effort fall within the scope of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is to be understood that "some embodiments" can be the same subset or different subsets of all possible embodiments and can be combined with one another without conflict.
In the following description, the terms "first", "second", and the like merely distinguish similar objects and do not imply a particular ordering; where permitted, "first" and "second" may be interchanged so that the embodiments of the application described herein can be practiced in orders other than those illustrated or described.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the application only and is not intended to be limiting of the application.
Before further describing the embodiments of the present application in detail, a description is given of a technical solution related to the embodiments of the present application in terms of related technologies.
1. A generative dialogue model generates a reasonable, coherent natural-language reply from the natural language input by the user. The core of a neural-network-based generative dialogue model is typically a GPT (Generative Pre-trained Transformer) model.
2. When evaluating a generative dialogue model, five kinds of evaluation indexes are generally used: model size, the scale and quality of the training data, language-model task performance, performance on specific downstream tasks, and inference speed. Model size is measured by the number of parameters; a larger model usually has more parameters and can capture more complex language structures and context relations. The scale and quality of the training data are measured through the corpus the model was trained on; a larger, higher-quality corpus provides richer language information and helps improve model performance. Language-model task performance is measured by perplexity, an index characterizing the model's predictive uncertainty on the input data; lower perplexity indicates that the model understands the data better. Performance on specific downstream tasks, such as text analysis, emotion analysis, or machine translation, is evaluated with indexes such as accuracy and recall. Inference speed is measured by how efficiently the model processes data, typically the amount of data processed per second.
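The perplexity index mentioned above can be computed directly from the per-token probabilities a language model assigns to an observed sequence; a minimal sketch:

```python
import math

def perplexity(token_probs):
    """Perplexity over a token sequence, given the model's probability for
    each observed token: exp of the mean negative log-probability.
    Lower values mean the model is less 'surprised' by the data."""
    avg_nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(avg_nll)
```

For instance, a model that assigns uniform probability 1/4 to each of four observed tokens has perplexity 4, i.e. it is as uncertain as a uniform choice among four options.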
3. Evaluation indexes for generative dialogue models in the prior art have several defects in real application scenarios. In terms of objectivity and consistency, existing indexes concentrate on the model level and cannot reflect user feedback in deployed application scenarios; influenced by individual subjective factors, evaluation results differ across users and scenarios, so objectivity and consistency are lacking. In terms of evaluation cost, many specialized professional application scenarios require manual evaluation, which consumes a large amount of time and labor, making the evaluation process slow and expensive. In terms of semantic and context complexity, a generative dialogue model must understand and generate natural language; this complexity makes evaluation difficult, and simple quantitative indexes often cannot fully capture the model's understanding of complex language structures and context relations. In terms of domain requirements, universal evaluation indexes may not adapt to the needs of a specific domain and may not provide targeted evaluation methods for generative dialogue models in different fields.
Referring to fig. 1, fig. 1 is a schematic diagram of an optional process flow of the training method of the evaluation model according to the embodiment of the present application, and the following description will refer to steps S101 to S104 shown in fig. 1.
Step S101, a target user corresponding to a model to be evaluated and historical behavior data corresponding to the target user are obtained.
In some embodiments, the model to be evaluated may be a generative dialog model. The historical behavior data corresponding to each user can be obtained according to the business system where the model to be evaluated is located, wherein the historical behavior data mainly comprises at least one of text data, behavior data and external data.
The model to be evaluated may be a generative dialogue model or another model, and is generally applied within a business domain or business system of a particular industry, such as the financial domain. The text data may include text generated when the user interacts with the model to be evaluated in the business system, such as the user's questions to the dialogue model and the dialogue model's responses. The behavior data may include data on interaction behaviors in the business system that relate to feedback on the model to be evaluated, such as page visits, searches, favorites, shares, and comments. The external data may include data about the user from outside the business system where the model to be evaluated is located, or domain dynamics related to the business domain, such as user credit, market trends, and professional information.
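The three data categories above can be grouped into a simple record type; the structure and field contents are illustrative assumptions, not a format the patent prescribes.

```python
from dataclasses import dataclass, field

@dataclass
class UserHistory:
    """Historical behavior data for one user (illustrative structure)."""
    text_data: list = field(default_factory=list)      # dialogue text with the model under evaluation
    behavior_data: list = field(default_factory=list)  # page visits, searches, favorites, shares, comments
    external_data: dict = field(default_factory=dict)  # e.g. user credit, market trend, profession
```

Any of the three fields may be empty, matching the "at least one of" wording in the text.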
Collecting data at multiple levels for the model to be evaluated makes it possible to analyze user behavior patterns more accurately, to train an accurate quality-evaluation model from users' historical behavior data, and to evaluate the performance of the model to be evaluated more objectively.
In some embodiments, whether each user is a strongly correlated user in the business domain of the model to be evaluated may be determined from the obtained historical behavior data. For example, in the financial domain, strong correlation can be judged from domain-related features such as whether the user's credit meets requirements, the frequency of finance-related keywords, the number of finance-related page visits, and the frequency of searches for financial products. If the proportion of finance-related features in a user's historical behavior data exceeds a certain threshold, the user can be determined to be a strongly correlated user in the financial domain. Decision rules for strongly correlated users in other business domains are similar, and the application is not limited in this regard.
In some embodiments, if the user is a strongly correlated user in the business domain of the model to be evaluated, it is further determined whether the user is a core user of the business system where the model is located. If so, the core user is determined to be a target user, and the target user's historical behavior data is acquired; only target users and their historical behavior data serve as training data for the subsequent evaluation model. For example, whether a strongly correlated user in the financial domain is a core user can be determined by a core-user judgment rule or module based on features such as visits to specific pages, use of specific functions, and specific interaction behaviors. The judgment rule or module can be determined from the actual business system, and the application is not limited in this regard.
By using only the behavior data of core users in the business domain for subsequent training of the evaluation model, the accuracy and pertinence of the training can be ensured, and the trained evaluation model can determine user satisfaction with the model to be evaluated from users' behavior data in the business domain.
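The two-stage filtering described in this step (a strong-correlation check by the share of domain-related features against a threshold, then a core-user check) can be sketched as follows; the keywords, core actions, and thresholds are illustrative assumptions.

```python
def is_strong_correlation_user(history, domain_keywords, threshold=0.5):
    """Strongly correlated: the share of domain-related records in the
    user's history exceeds a threshold (threshold value is an assumption)."""
    if not history:
        return False
    related = sum(1 for record in history
                  if any(kw in record for kw in domain_keywords))
    return related / len(history) > threshold

def is_core_user(history, core_actions, min_hits=2):
    """Core user: repeated use of key system functions (illustrative rule)."""
    return sum(1 for record in history
               for action in core_actions if action in record) >= min_hits

def select_target_users(all_histories, domain_keywords, core_actions):
    """Keep only users who pass both checks, as in step S101."""
    return [user for user, history in all_histories.items()
            if is_strong_correlation_user(history, domain_keywords)
            and is_core_user(history, core_actions)]
```

Only users passing both filters contribute training data, which is the pertinence property the text argues for.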
Step S102, determining a primary index layer and a secondary index item according to the historical behavior data, and carrying out intention recognition on text data in the historical behavior data to obtain behavior sequence data corresponding to the target user.
In some embodiments, the first number of primary index layers may be determined according to the features of the historical behavior data corresponding to all target users and a preset first number. Each primary index layer may characterize a different level of the target users' acceptance of the model to be evaluated.
As an example, the primary index layers may be determined from features in the historical behavior data reflecting different levels of acceptance of the model to be evaluated. If the preset first number is 2, acceptance may be divided into two levels and two primary index layers determined, denoted the first index layer and the second index layer. Features of positive, favorable, highly rated, and highly satisfied user behavior data can be assigned to the first index layer, also described as the high-quality service evaluation scene; features of negative, poorly rated, low-satisfaction user behavior data can be assigned to the second index layer, also described as the low-quality service evaluation scene.
In some embodiments, a second number of secondary index items may be set within each primary index layer according to the features of the historical behavior data corresponding to that layer and a preset second number. Each secondary index item characterizes a feature index of the target user within that primary index layer, i.e. a classification index obtained by multi-way classification of the layer. The historical behavior data of all target users within each primary index layer can be classified again based on different feature indexes, dividing each primary index layer into the preset second number of secondary index items. For example, if the preset second number is four, four secondary index items are determined within each primary index layer.
As an example, consider the historical behavior data generated when users use a generative dialogue model in a business system in the financial domain. If the preset second number is 4, each primary index layer can be divided into four categories, determining four different secondary index items. As shown in fig. 2, there are two primary index layers, namely the first index layer and the second index layer, each containing four secondary index items: the first index layer corresponding to the target user includes at least one of the secondary index items adopting suggestions, active sharing, archiving or implementation, and user promotion; the second index layer includes at least one of the secondary index items negative feedback, rejecting suggestions, questioning or error correction, and repeated operation.
In the first index layer shown in fig. 2, the adopting-suggestions item characterizes whether the user adopted the professional suggestions given by the model to be evaluated: for example, whether the user searched for the professional advice provided by the generative dialogue model during interaction in the business system, added stocks mentioned in the advice as watched stocks, used a function the model recommended, opened an account it recommended, or made a payment after the dialogue ended. The active-sharing item characterizes whether the user actively shared interaction content from the model to be evaluated, such as sharing the generated dialogue content. The archiving-or-implementation item characterizes whether the user favorited or applied content recommended by the model to be evaluated, e.g. whether the user saved a corresponding reminder or used a function implementing the corresponding investment logic. The user-promotion item characterizes whether the user's expertise improved through interaction with the model to be evaluated: for example, whether the user's dialogue becomes increasingly sophisticated, whether obvious error information or facts that cannot be matched against the database in the business system appear in the user's questions, or whether the user is guided by the generated content of the generative dialogue model to learn correct knowledge.
In the second index layer shown in fig. 2, the negative-feedback item characterizes whether the user gave negative feedback on the interaction content of the model to be evaluated: for example, whether the user's reply keywords or last replied sentence contain negative evaluations. The rejecting-suggestions item characterizes whether the user refused to adopt the professional suggestions given by the model to be evaluated, e.g. whether the user refused the offered service. The questioning-or-error-correction item characterizes whether the user questioned or corrected the professional advice given by the model to be evaluated: for example, whether the user actively interrupts the dialogue or asks again after the intervention, or feeds back information in the dialogue that contradicts the generated text provided by the generative dialogue model. The repeated-operation item characterizes whether the user performed repeated operations within a single round of interaction with the model to be evaluated: for example, repeatedly asking a question, issuing repeated similar queries, or repeatedly selecting the same generated button multiple times within a single round of dialogue.
By classifying the historical behavior data of all target users in the business system at the scene level, i.e. by user acceptance, and then classifying the users' behavior data again within the scene corresponding to each primary index layer, the evaluation model can evaluate the behavior data under each primary index layer more accurately and thus produce more accurate evaluation results.
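The two-layer, eight-item structure of fig. 2 can be written down as plain data; the English item names are illustrative translations, not terms from the patent.

```python
# Primary index layers -> secondary index items, mirroring fig. 2.
INDEX_STRUCTURE = {
    "first_index_layer": [   # high-quality service evaluation scene
        "adopt_suggestion",
        "active_sharing",
        "archive_or_implement",
        "user_promotion",
    ],
    "second_index_layer": [  # low-quality service evaluation scene
        "negative_feedback",
        "reject_suggestion",
        "question_or_correct",
        "repeated_operation",
    ],
}

# One bit per secondary index item gives the length of the behavior sequence.
SEQUENCE_LENGTH = sum(len(items) for items in INDEX_STRUCTURE.values())
```

With the preset first number 2 and second number 4 from the example, this yields the 8-bit behavior sequence used later in the text.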
In some embodiments, after the primary index layer and the secondary index item are obtained, intent recognition may be performed on historical behavior data corresponding to each target user, so as to obtain behavior sequence data corresponding to the target user.
In some embodiments, the behavior sequence data may be initialized for the target user; for example, each value in the behavior sequence data corresponding to each user is set to an initial value, which may be 0, so that an 8-bit behavior sequence is 00000000 after initialization.
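As an illustrative sketch of this initialization step, the following fragment builds an 8-bit behavior sequence of initial values for each target user; the 8-bit length and the user identifiers are assumptions for illustration, not part of the described method.

```python
# Hypothetical sketch: initialize an 8-bit behavior sequence per target user.
# The bit count and user IDs are illustrative assumptions.
def init_behavior_sequence(num_bits: int = 8) -> list[int]:
    """Return a behavior sequence whose every rank starts at the initial value 0."""
    return [0] * num_bits

def as_string(sequence: list[int]) -> str:
    """Render the sequence in the 00000000 form used in the text."""
    return "".join(str(v) for v in sequence)

sequences = {user_id: init_behavior_sequence() for user_id in ["u1", "u2"]}
print(as_string(sequences["u1"]))  # 00000000
```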
The historical behavior data of each target user can be preprocessed to obtain keyword data corresponding to the preprocessed historical behavior data. The preprocessing may include performing text cleaning on text data from the interaction between the target user and the model to be evaluated, or extracting text-form data from behavior data such as the target user's comments, collections, clicks, and browsing in the business system where the model to be evaluated is located and then performing text cleaning on the extracted text-form data. Word segmentation may then be performed on the resulting text behavior data to obtain the corresponding keyword data. The text data from the interaction between the target user and the model to be evaluated may be text data from a text interaction or the corresponding text converted from a voice interaction.
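The preprocessing described above might be sketched as follows; the cleaning rules and the whitespace-based word segmentation are simplifying assumptions, and a production system would use a proper word segmenter for the target language.

```python
import re

# Hypothetical preprocessing sketch: clean raw text behavior data and split it
# into keyword data. The regex rules and whitespace split are assumptions.
def clean_text(raw: str) -> str:
    text = re.sub(r"<[^>]+>", " ", raw)   # strip markup remnants
    text = re.sub(r"[^\w\s]", " ", text)  # drop punctuation
    return re.sub(r"\s+", " ", text).strip().lower()

def extract_keywords(raw: str) -> list[str]:
    """Word segmentation reduced to a whitespace split for illustration."""
    return clean_text(raw).split(" ")

print(extract_keywords("<p>Refused the advice!!</p>"))  # ['refused', 'the', 'advice']
```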
In some embodiments, after the keyword data corresponding to each target user is obtained, intent recognition may be performed on the keyword data. The process of intent recognition may be: determine whether the keyword data matches a first-level index layer; if so, continue matching against the second-level index items in that target index layer; and if a second-level index item matches, determine the value at the rank corresponding to that second-level index item in the behavior sequence data as a target value. If neither a first-level index layer nor a second-level index item is matched, the values at all ranks of the behavior sequence data remain unchanged after the keyword data is matched. The behavior sequence data obtained after all keyword data corresponding to a target user have been matched is determined as the final behavior sequence data for that target user. Each rank value in the behavior sequence data corresponds to a different second-level index item. For example, if the rank of the second-level index item matching the keyword data is 4 and the target value is 1, the 4th bit of the behavior sequence data is set, and the behavior sequence data after matching is 00010000.
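A minimal sketch of this matching step, assuming a toy index hierarchy in which each secondary index item maps to a rank (both the hierarchy and the exact-match predicate are illustrative assumptions):

```python
# Hypothetical index hierarchy: one first-level layer mapping secondary index
# items to ranks. When a keyword matches a secondary item, the bit at that
# item's rank is set to the target value 1.
INDEX_LAYERS = {
    "user acceptance": {
        "negative feedback": 1,
        "rejection suggestion": 2,
        "question or error correction": 3,
        "repeated operation": 4,
    },
}

def match_keywords(keywords, sequence):
    for keyword in keywords:
        for layer_items in INDEX_LAYERS.values():
            rank = layer_items.get(keyword)  # None when no secondary item matches
            if rank is not None:
                sequence[rank - 1] = 1       # target value at the matched rank
    return sequence

seq = match_keywords(["repeated operation"], [0] * 8)
print("".join(map(str, seq)))  # 00010000
```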
As an example, matching may be performed by semantic similarity between the keyword data and the first-level index layers; if a first-level index layer is matched, it may be determined as the target index layer, and the keyword data may then be matched against the second-level index items in that target index layer. The keyword data can be converted into a corresponding vector representation based on word embedding, term frequency, neural networks, or similar techniques, and the first-level index layers and second-level index items are likewise converted into vector representations. If the similarity between the keyword data and a first-level index layer exceeds a preset threshold, the match succeeds, and the first-level index layer with the highest similarity to the keyword data is determined as the target index layer. Similarly, the second-level index item whose similarity to the keyword data is highest and exceeds the preset threshold may be determined as the target index item.
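The threshold-based similarity matching could be sketched as follows; the toy vectors and the 0.8 threshold are assumptions, and a real system would derive the vectors from a word-embedding model:

```python
import math

# Hypothetical sketch of threshold-based cosine-similarity matching.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def best_match(query_vec, candidates, threshold=0.8):
    """Return the candidate whose similarity is highest and above threshold, else None."""
    best_name, best_sim = None, threshold
    for name, vec in candidates.items():
        sim = cosine(query_vec, vec)
        if sim > best_sim:
            best_name, best_sim = name, sim
    return best_name

layers = {"user acceptance": [1.0, 0.1], "user churn": [0.0, 1.0]}
print(best_match([0.9, 0.2], layers))  # user acceptance
```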
By making finer-grained judgments on the user's historical behavior data based on the first-level index layers and second-level index items, different feedback from the target user on the model to be evaluated can be captured, and the generated behavior sequence data can comprehensively reflect the target user's overall feedback on the quality of the model to be evaluated, so that the evaluation model trained on the behavior sequence data is more comprehensive and accurate.
Step S103: train an evaluation model according to the behavior sequence data corresponding to the target user.
In some embodiments, the evaluation model may be trained on the behavior sequence data corresponding to the target user and the scoring data corresponding to the target user based on a multi-class logistic regression algorithm, to determine the parameters of the evaluation model. The trained evaluation model can learn, from users' behavior sequences, differences in users' satisfaction with or acceptance of the model to be evaluated, and can predict a target user's score from newly input behavior sequence data.
The scoring data corresponding to the target user may be determined based on feedback provided by the target user in a questionnaire. The questionnaire may include questions about the target user's satisfaction with the experience of the model to be evaluated and preferences for specific functions in the model to be evaluated, and the scoring data corresponding to the target user may be obtained from the target user's answers to the questions. The higher the score, the higher the user's satisfaction with and acceptance of the model to be evaluated. The evaluation model may be implemented based on a neural network, and for the j-th second-level index item in the i-th first-level index layer, the regression function expression may be as shown in formula (1).
In formula (1), s represents the score corresponding to the target user and θij are the parameters of the evaluation model, where i denotes the i-th first-level index layer and j denotes the j-th second-level index item in that layer.
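A simplified sketch of fitting such a regression over behavior sequence bits is shown below, reduced to a binary logistic regression in pure Python; the text describes a multi-class variant, and the training data, learning rate, and epoch count here are illustrative assumptions, not the patented implementation.

```python
import math

# Hypothetical sketch: fit a logistic regression mapping behavior sequence
# bits x to binarized user score labels y (satisfied = 1, unsatisfied = 0).
def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(sequences, labels, lr=0.5, epochs=200):
    theta = [0.0] * len(sequences[0])
    bias = 0.0
    for _ in range(epochs):
        for x, y in zip(sequences, labels):
            pred = sigmoid(sum(t * xi for t, xi in zip(theta, x)) + bias)
            err = pred - y
            theta = [t - lr * err * xi for t, xi in zip(theta, x)]
            bias -= lr * err
    return theta, bias

# Sequences with negative-feedback bits set label as unsatisfied.
X = [[1, 0, 0, 0], [1, 1, 0, 0], [0, 0, 0, 0], [0, 0, 1, 0]]
y = [0, 0, 1, 1]
theta, bias = train(X, y)
score = sigmoid(sum(t * xi for t, xi in zip(theta, [1, 0, 0, 0])) + bias)
print(score < 0.5)  # True: negative-feedback behavior predicts a low score
```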
Step S104: optimize the evaluation model according to the latest behavior data corresponding to the target user.
In some embodiments, the latest behavior data corresponding to the target user may be obtained. This includes text data from the interaction between the target user and the model to be evaluated, such as text data generated during a text dialogue or text converted from voice data generated during a voice dialogue, as well as the target user's behavior data in the business system where the model to be evaluated is located, including clicking, searching, collecting, and other behavior data.
In some embodiments, churn users may be determined based on the latest behavior data corresponding to the target users. A churn user may be determined as follows: if first behavior data exists in the target user's behavior data, and/or the target user has produced no interaction text data within a preset time and no behavior data in the business system where the model to be evaluated is located within the preset time, the user can be determined to be a churn user. As an example, the first behavior data may be behaviors such as unsubscribing from the business system or logging off an account; if the target user unsubscribes from the model to be evaluated, logs off the account in the business system, or performs similar first behavior data, the target user may be determined to be a churn user. Likewise, if the target user has not logged in to or interacted with the model to be evaluated for a long time, the target user may be determined to be a churn user.
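The churn rule above might be sketched as follows, where the 30-day preset window and the record fields are assumptions for illustration:

```python
from datetime import datetime, timedelta

# Hypothetical churn rule: a user churns if first behavior data (unsubscribe /
# account log-off) is present, or if neither interaction text nor business-
# system behavior occurred within a preset time window.
PRESET_WINDOW = timedelta(days=30)   # assumed window length
FIRST_BEHAVIORS = {"unsubscribe", "log_off_account"}

def is_churn_user(behaviors, last_interaction, last_system_activity, now):
    if FIRST_BEHAVIORS & set(behaviors):
        return True
    no_recent_text = last_interaction is None or now - last_interaction > PRESET_WINDOW
    no_recent_activity = (last_system_activity is None
                          or now - last_system_activity > PRESET_WINDOW)
    return no_recent_text and no_recent_activity

now = datetime(2024, 6, 1)
print(is_churn_user(["click"], datetime(2024, 1, 1), None, now))  # True
```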
In some embodiments, the first scoring value corresponding to a churn user may be determined according to the churn user's latest behavior data, for example, according to the latest number of interactions between the churn user and the model to be evaluated, or the churn user's operations on the business system and their frequencies, such as click frequency, collection frequency, and account log-off. For instance, if the churn user has logged off the account and not reopened it, the first scoring value may be determined to be 0. If the churn user has not logged off the account, the first scoring value may be determined from behaviors such as interaction counts, click frequency, and collection frequency based on a classification algorithm, or different weights may be assigned to the respective behavior data. The specific settings may be determined by the actual situation.
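One possible weighting scheme for the first scoring value is sketched below; the weights, normalization caps, and the zero score for a logged-off account are assumptions for illustration, not values given in the text:

```python
# Hypothetical first-scoring-value computation from weighted latest behavior
# data. All weights and caps are illustrative assumptions.
WEIGHTS = {"interactions": 0.5, "clicks": 0.3, "collections": 0.2}
CAPS = {"interactions": 20, "clicks": 50, "collections": 10}

def first_score(behavior_counts, account_logged_off=False):
    if account_logged_off:
        return 0.0                  # logged-off and not reopened -> score 0
    score = 0.0
    for key, weight in WEIGHTS.items():
        ratio = min(behavior_counts.get(key, 0) / CAPS[key], 1.0)
        score += weight * ratio     # weighted, capped contribution
    return round(score, 3)

print(first_score({"interactions": 10, "clicks": 25, "collections": 5}))  # 0.5
print(first_score({}, account_logged_off=True))  # 0.0
```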
In some embodiments, all behavior data corresponding to the churn user may be obtained, including all text data during interaction between the churn user and the model to be evaluated, and all behavior data of the churn user in the service system where the model to be evaluated is located, and the behavior sequence data corresponding to the churn user is determined.
The method for determining the behavior sequence data corresponding to the churn user from all of the churn user's behavior data may be the same as described above. All behavior data corresponding to the churn user can be preprocessed, including converting it into text form, performing text cleaning on the text-form data, and performing word segmentation to obtain the corresponding keyword data.
After the keyword data corresponding to each churn user is obtained, intent recognition is performed on it. If the semantic features of the keyword data match a first-level index layer, that first-level index layer is determined as the target index layer, and the semantic features of the keyword data are then matched against the semantic features of the second-level index items in the target index layer. If a second-level index item matches, the value at the rank corresponding to that second-level index item in the churn user's behavior sequence data is determined as a target value, for example 1. If the keyword data matches neither a first-level index layer nor a second-level index item, the values at the ranks of the behavior sequence data remain unchanged. The matching method may compute the similarity between the semantic features of the keyword data and those of the first-level index layers, determine the first-level index layer whose similarity is highest and exceeds a preset threshold as the target index layer, and determine the second-level index item in the target index layer whose similarity is highest and exceeds the preset threshold as the target index item.
In some embodiments, the evaluation model may be retrained according to the behavior sequence data corresponding to the churn user and the first scoring value, so as to obtain an optimized evaluation model.
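The retraining step could be sketched as appending the churn users' behavior sequences and first scoring values to the original training set and refitting; the tiny linear model, learning rate, and data below are illustrative assumptions, not the patented implementation:

```python
# Hypothetical retraining sketch: fit a small linear model predicting a user's
# score from behavior sequence bits, over the original data augmented with
# churn users' sequences and their first scoring values.
def fit_linear(X, y, lr=0.05, epochs=2000):
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for x, target in zip(X, y):
            err = sum(wi * xi for wi, xi in zip(w, x)) + b - target
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

original_X, original_y = [[0, 0], [0, 1]], [0.9, 0.4]  # existing target users
churn_X, churn_y = [[1, 1]], [0.0]                     # churn user, first score 0
w, b = fit_linear(original_X + churn_X, original_y + churn_y)
predicted = sum(wi * xi for wi, xi in zip(w, [1, 1])) + b
print(predicted < 0.2)  # True: the retrained model scores churn-like behavior low
```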
By first training with the target users' historical behavior data and initial scoring data to predict users' service quality scores, and then introducing the churn users' behavior data and first scoring values, the accuracy of the evaluation model can be improved and the service quality of the model to be evaluated can be assessed more effectively.
Fig. 3 is a schematic diagram of an alternative device structure of a training device for an evaluation model according to an embodiment of the present application, where the training device 300 for an evaluation model includes an acquisition module 301, a sequence determination module 302, a training module 303, and an optimization module 304. Wherein,
The acquiring module 301 is configured to acquire a target user corresponding to a model to be evaluated and historical behavior data corresponding to the target user;
The sequence determining module 302 is configured to determine a first level index layer and a second level index item according to the historical behavior data, and perform intent recognition on text data in the historical behavior data to obtain behavior sequence data corresponding to the target user;
The training module 303 is configured to train an evaluation model according to the behavior sequence data corresponding to the target user;
And an optimizing module 304, configured to optimize the evaluation model according to the latest behavior data corresponding to the target user.
In some embodiments, the acquisition module 301 is further configured to: acquiring historical behavior data of each user in a business system where the model to be evaluated is located, wherein the historical behavior data comprises at least one of text data, behavior data and external data; judging whether the user is a strong correlation user in the business field of the model to be evaluated according to the historical behavior data of each user; if the user is the strong correlation user, judging whether the user is a core user of a service system where the model to be evaluated is located; and if the user is the core user, determining the user as the target user.
In some embodiments, the sequence determination module 302 is further to: determining a first number of first-level index layers according to the characteristics of the historical behavior data and a preset first number; each first-level index layer is used for representing the acceptance level of the target user for the model to be evaluated; determining a second number of second-level index items corresponding to each first-level index layer according to the behavior characteristics of the historical behavior data corresponding to each first-level index layer and a preset second number; the secondary index item is used for representing the corresponding characteristic index of the target user in the primary index layer.
In some embodiments, the sequence determination module 302 is further to: preprocessing the historical behavior data corresponding to the target user to obtain keyword data corresponding to the preprocessed target user; for each keyword data, if the feature corresponding to the keyword data is matched with the corresponding target index layer in the first-level index layer and the feature corresponding to the keyword data is matched with the corresponding target index item in the second-level index item in the target index layer, determining the value of the rank corresponding to the second-level index item in the behavior sequence data as a target numerical value; and determining the corresponding behavior sequence data after all the keyword data corresponding to the target user are matched as the behavior sequence data corresponding to the target user.
In some embodiments, the sequence determination module 302 is further to: and if the characteristics corresponding to the keyword data are not matched with the corresponding first-level index layers or the characteristics corresponding to the keyword data are not matched with the corresponding second-level index items, the value of each rank in the behavior sequence data is not changed.
In some embodiments, the optimization module 304 is further to: determining a churn user according to the latest behavior data corresponding to the target user; determining a first scoring value corresponding to the churn user according to the latest behavior data corresponding to the churn user; determining behavior sequence data corresponding to the churn user according to all behavior data corresponding to the churn user; and retraining the evaluation model according to the behavior sequence data corresponding to the churn user and the first scoring value to obtain an optimized evaluation model.
In some embodiments, the optimization module 304 is further to: acquiring text data from the latest interaction between the target user and the model to be evaluated, and the latest behavior data of the target user in the business system where the model to be evaluated is located; and if the first behavior data exists in the behavior data of the target user, and/or the text data and the behavior data do not exist for the target user within a preset time, determining the target user as the churn user.
It should be noted that, the training device of the evaluation model in the embodiment of the present application is similar to the description of the embodiment of the training method of the evaluation model, and has similar beneficial effects as the embodiment of the method, so that a detailed description is omitted. The technical details of the training device for the evaluation model provided in the embodiment of the present application may be understood from the description of any one of fig. 1 to fig. 2.
Fig. 4 illustrates a schematic block diagram of an example electronic device 400 that may be used to implement embodiments of the present disclosure. The electronic device 400 is used to implement the training method of the assessment model of the embodiments of the present disclosure. In some alternative embodiments, the electronic device 400 may implement the training method of the evaluation model provided by the embodiment of the present application by running a computer program, for example, the computer program may be a software module in an operating system; a local (Native) APP (Application), i.e. a program that needs to be installed in an operating system to run; the method can also be an applet, namely a program which can be run only by being downloaded into a browser environment; but also an applet that can be embedded in any APP. In general, the computer programs described above may be any form of application, module or plug-in.
In practical applications, the electronic device 400 may be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDNs, big data, and artificial intelligence platforms. Cloud technology refers to a hosting technology that unifies resources such as hardware, software, and networks in a wide area network or a local area network to implement computing, storage, processing, and sharing of data. The electronic device 400 may also be, but is not limited to, a smart phone, tablet computer, notebook computer, desktop computer, smart speaker, smart television, or smart watch.
Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices, vehicle terminals, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the applications described and/or claimed herein.
As shown in fig. 4, the electronic device 400 includes a computing unit 401 that can perform various suitable actions and processes according to a computer program stored in a Read Only Memory (ROM) 402 or a computer program loaded from a storage unit 408 into a Random Access Memory (RAM) 403. In the RAM 403, various programs and data required for the operation of the electronic device 400 may also be stored. The computing unit 401, ROM 402, and RAM 403 are connected to each other by a bus 404. An input/output (I/O) interface 405 is also connected to bus 404.
Various components in electronic device 400 are connected to I/O interface 405, including: an input unit 406 such as a keyboard, a mouse, etc.; an output unit 407 such as various types of displays, speakers, and the like; a storage unit 408, such as a magnetic disk, optical disk, etc.; and a communication unit 409 such as a network card, modem, wireless communication transceiver, etc. The communication unit 409 allows the electronic device 400 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The computing unit 401 may be a variety of general purpose and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 401 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 401 performs the respective methods and processes described above, for example, the training method of an evaluation model. For example, in some alternative embodiments, the training method of the evaluation model may be implemented as a computer software program, which is tangibly embodied on a machine-readable medium, such as the storage unit 408. In some alternative embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 400 via the ROM 402 and/or the communication unit 409. When the computer program is loaded into RAM 403 and executed by computing unit 401, one or more steps of the training method of the evaluation model described above may be performed. Alternatively, in other embodiments, the computing unit 401 may be configured to perform the training method of the evaluation model by any other suitable means (e.g., by means of firmware).
Embodiments of the present application provide a computer readable storage medium having stored therein executable instructions that, when executed by a processor, cause the processor to perform the training method of the assessment model provided by the embodiments of the present application.
In some embodiments, the computer readable storage medium may be FRAM, ROM, PROM, EPROM, EEPROM, flash memory, magnetic surface memory, optical disk, or CD-ROM; but may be a variety of devices including one or any combination of the above memories.
In some embodiments, the executable instructions may be in the form of programs, software modules, scripts, or code, written in any form of programming language (including compiled or interpreted languages, or declarative or procedural languages), and they may be deployed in any form, including as stand-alone programs or as modules, components, subroutines, or other units suitable for use in a computing environment.
As an example, executable instructions may be deployed to be executed on one computing device or on multiple computing devices located at one site or distributed across multiple sites and interconnected by a communication network.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
It should be understood that, in the various embodiments of the present application, the sequence numbers of the implementation processes do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and shall not constitute any limitation on the implementation of the embodiments of the present application.
The above is merely an example of the present application and is not intended to limit the scope of the present application. Any modification, equivalent replacement, improvement, etc. made within the spirit and scope of the present application are included in the protection scope of the present application.

Claims (8)

1. A method of training an assessment model, the method comprising:
acquiring a target user corresponding to a model to be evaluated and historical behavior data corresponding to the target user;
determining a first-level index layer and a second-level index item according to the historical behavior data, and carrying out intention recognition on text data in the historical behavior data to obtain behavior sequence data corresponding to the target user;
training an evaluation model according to the behavior sequence data corresponding to the target user;
optimizing the evaluation model according to the latest behavior data corresponding to the target user;
Wherein the determining the first-level index layer and the second-level index item according to the historical behavior data comprises:
determining a first number of first-level index layers according to the characteristics of the historical behavior data and a preset first number; each first-level index layer is used for representing the acceptance level of the target user for the model to be evaluated;
Determining a second number of second-level index items corresponding to each first-level index layer according to the behavior characteristics of the historical behavior data corresponding to each first-level index layer and a preset second number; the secondary index item is used for representing a characteristic index corresponding to the target user in the primary index layer;
The optimizing the evaluation model according to the latest behavior data corresponding to the target user comprises the following steps:
determining a churn user according to the latest behavior data corresponding to the target user;
determining a first scoring value corresponding to the churn user according to the latest behavior data corresponding to the churn user;
determining behavior sequence data corresponding to the churn user according to all behavior data corresponding to the churn user;
and retraining the evaluation model according to the behavior sequence data corresponding to the churn user and the first scoring value to obtain an optimized evaluation model.
2. The method according to claim 1, wherein the obtaining the target user corresponding to the model to be evaluated and the historical behavior data corresponding to the target user includes:
acquiring historical behavior data of each user in a business system where the model to be evaluated is located, wherein the historical behavior data comprises at least one of text data, behavior data and external data;
Judging whether the user is a strong correlation user in the business field of the model to be evaluated according to the historical behavior data of each user;
If the user is the strong correlation user, judging whether the user is a core user of a service system where the model to be evaluated is located;
and if the user is the core user, determining the user as the target user.
3. The method according to claim 1, wherein the performing intent recognition on text data in the historical behavior data to obtain behavior sequence data corresponding to the target user includes:
Preprocessing the historical behavior data corresponding to the target user to obtain keyword data corresponding to the preprocessed target user;
For each keyword data, if the feature corresponding to the keyword data is matched with the corresponding target index layer in the first-level index layer and the feature corresponding to the keyword data is matched with the corresponding target index item in the second-level index item in the target index layer, determining the value of the rank corresponding to the second-level index item in the behavior sequence data as a target numerical value;
and determining the corresponding behavior sequence data after all the keyword data corresponding to the target user are matched as the behavior sequence data corresponding to the target user.
4. A method according to claim 3, wherein the value of each rank in the behavioural sequence data is unchanged if the feature corresponding to the keyword data does not match the corresponding primary index layer or the feature corresponding to the keyword data does not match the corresponding secondary index item.
5. The method of claim 1, wherein determining the churn user based on the latest behavior data corresponding to the target user comprises:
acquiring text data from the latest interaction between the target user and the model to be evaluated, and the latest behavior data of the target user in the business system where the model to be evaluated is located;
and if the first behavior data exists in the behavior data of the target user, and/or the text data and the behavior data do not exist for the target user within a preset time, determining the target user as the churn user.
6. A training apparatus for evaluating a model, the apparatus comprising:
the acquisition module is used for acquiring a target user corresponding to the model to be evaluated and historical behavior data corresponding to the target user;
The sequence determining module is used for determining a primary index layer and a secondary index item according to the historical behavior data, and carrying out intention recognition on text data in the historical behavior data to obtain behavior sequence data corresponding to the target user;
the training module is used for training an evaluation model according to the behavior sequence data corresponding to the target user;
the optimizing module is used for optimizing the evaluation model according to the latest behavior data corresponding to the target user;
the sequence determining module is specifically configured to: determining a first number of first-level index layers according to the characteristics of the historical behavior data and a preset first number; each first-level index layer is used for representing the acceptance level of the target user for the model to be evaluated; determining a second number of second-level index items corresponding to each first-level index layer according to the behavior characteristics of the historical behavior data corresponding to each first-level index layer and a preset second number; the secondary index item is used for representing a characteristic index corresponding to the target user in the primary index layer;
the optimizing module is specifically configured to: determining a lost user according to the latest behavior data corresponding to the target user; determining a first scoring value corresponding to the loss user according to the latest behavior data corresponding to the loss user; determining behavior sequence data corresponding to the churn user according to all behavior data corresponding to the churn user; and retraining the evaluation model according to the behavior sequence data corresponding to the loss user and the first scoring value to obtain an optimized evaluation model.
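The optimizing module's steps can be sketched as a short loop: select churned users, score each from their latest behavior, rebuild their behavior sequences, and retrain. All names here (the model interface, the `users` fields, and the scoring rule) are hypothetical; the claim fixes the steps, not the implementation.

```python
# Illustrative sketch of the optimizing module in claim 6; all identifiers
# and the scoring rule are assumptions, not taken from the patent.

class EvaluationModel:
    def __init__(self):
        self.samples = []

    def retrain(self, labelled_sequences):
        # Placeholder: retrain on (behavior_sequence, first_score) pairs.
        self.samples = list(labelled_sequences)

def first_score(latest_behavior):
    # Hypothetical scoring: fewer recent interactions -> lower score.
    return min(len(latest_behavior) / 10.0, 1.0)

def optimize(model, users, is_churned):
    churned = [u for u in users if is_churned(u)]               # step 1
    samples = [
        (u["all_behavior"],                                     # step 3: sequence
         first_score(u["latest_behavior"]))                     # step 2: score
        for u in churned
    ]
    model.retrain(samples)                                      # step 4: retrain
    return model

users = [
    {"all_behavior": ["login", "query", "uninstall"], "latest_behavior": [], "churned": True},
    {"all_behavior": ["login", "query"], "latest_behavior": ["query"], "churned": False},
]
model = optimize(EvaluationModel(), users, lambda u: u["churned"])
print(len(model.samples))  # 1
```

Only the churned user contributes a retraining sample; active users are left out, mirroring the claim's focus on churned-user feedback.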
7. An electronic device, the electronic device comprising:
at least one processor; and a memory communicatively coupled to the at least one processor;
wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-5.
8. A computer-readable storage medium, characterized in that the storage medium comprises a set of computer-executable instructions which, when executed, perform the training method for an evaluation model according to any one of claims 1-5.
CN202410038790.6A 2024-01-11 2024-01-11 Training method and device for evaluation model and electronic equipment Active CN117556264B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410038790.6A CN117556264B (en) 2024-01-11 2024-01-11 Training method and device for evaluation model and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410038790.6A CN117556264B (en) 2024-01-11 2024-01-11 Training method and device for evaluation model and electronic equipment

Publications (2)

Publication Number Publication Date
CN117556264A CN117556264A (en) 2024-02-13
CN117556264B true CN117556264B (en) 2024-05-07

Family

ID=89820870

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410038790.6A Active CN117556264B (en) 2024-01-11 2024-01-11 Training method and device for evaluation model and electronic equipment

Country Status (1)

Country Link
CN (1) CN117556264B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108805422A (en) * 2018-05-24 2018-11-13 国信优易数据有限公司 A kind of data assessment model training systems, data assessment platform and method
CN110097278A (en) * 2019-04-28 2019-08-06 广东省科技基础条件平台中心 A kind of scientific and technological resources intelligent sharing Fusion training system and application system
CN111798123A (en) * 2020-06-30 2020-10-20 平安国际智慧城市科技股份有限公司 Compliance evaluation method, device, equipment and medium based on artificial intelligence
CN112837099A (en) * 2021-02-05 2021-05-25 深圳市欢太科技有限公司 Potential loss user identification method and device, storage medium and electronic equipment
CN113761343A (en) * 2021-01-27 2021-12-07 北京沃东天骏信息技术有限公司 Information pushing method and device, terminal equipment and storage medium
WO2022121083A1 (en) * 2020-12-09 2022-06-16 南威软件股份有限公司 Enterprise risk early warning method based on association analysis fp-tree algorithm
CN114663223A (en) * 2022-04-08 2022-06-24 平安国际智慧城市科技股份有限公司 Credit risk assessment method, device and related equipment based on artificial intelligence
CN115660431A (en) * 2022-09-21 2023-01-31 中国信息通信研究院 Method and device for evaluating intelligent operation and maintenance system, electronic equipment and storage medium
CN115879463A (en) * 2022-10-12 2023-03-31 华南农业大学 Course element recognition model training and recognition method based on text mining

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110282869A1 (en) * 2010-05-11 2011-11-17 Maxim Zhilyaev Access to information by quantitative analysis of enterprise web access traffic


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Tunnel fire safety assessment model based on Bayesian network; 刘凯; 王俊峰; 聂于斐; Fire Science and Technology (消防科学与技术); 2017-10-15 (10); full text *
Research on a bank customer credit evaluation model based on genetic algorithm; 陈李钢; 叶强; 李一军; Computer Engineering (计算机工程); 2007-02-05 (03); full text *
Indicators and weights for evaluating news communication effects in the new media environment; 刘建明; 徐恬; Journalism & Communication Review (新闻与传播评论); 2018-09-08 (04); full text *


Similar Documents

Publication Publication Date Title
US11663409B2 (en) Systems and methods for training machine learning models using active learning
CN110175227B (en) Dialogue auxiliary system based on team learning and hierarchical reasoning
US11704500B2 (en) Techniques to add smart device information to machine learning for increased context
US20190180196A1 (en) Systems and methods for generating and updating machine hybrid deep learning models
US20180053092A1 (en) Method and System for Innovation Management and Optimization Under Uncertainty
CN102708153B (en) Self-adaption finding and predicting method and system for hot topics of online social network
US9710829B1 (en) Methods, systems, and articles of manufacture for analyzing social media with trained intelligent systems to enhance direct marketing opportunities
CN112799747A (en) Intelligent assistant evaluation and recommendation method, system, terminal and readable storage medium
CN116663525B (en) Document auditing method, device, equipment and storage medium
CN114647741A (en) Process automatic decision and reasoning method, device, computer equipment and storage medium
CN116561542B (en) Model optimization training system, method and related device
CN111639247A (en) Method, apparatus, device and computer-readable storage medium for evaluating quality of review
US11636411B2 (en) Apparatus for determining role fitness while eliminating unwanted bias
US20220156862A1 (en) System and method for analyzing grantability of a legal filing
Haryono et al. Aspect-based sentiment analysis of financial headlines and microblogs using semantic similarity and bidirectional long short-term memory
CN117556264B (en) Training method and device for evaluation model and electronic equipment
CN116757835A (en) Method and device for monitoring transaction risk in credit card customer credit
US20230063686A1 (en) Fine-grained stochastic neural architecture search
CN115292167A (en) Life cycle prediction model construction method, device, equipment and readable storage medium
CN114238798A (en) Search ranking method, system, device and storage medium based on neural network
CN111160662A (en) Risk prediction method, electronic equipment and storage medium
Ali Assessing AI Chatbots Through Meta-Analysis of Deep Learning Models
Zhao et al. A Method for Forecasting The Pork Price Based on Fluctuation Forecasting and Attention Mechanism
CN117033540A (en) Report generation method, report generation device, electronic equipment and medium
CN117808043A (en) Information processing method, training method, device, equipment and medium for model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant