CN117171403A - Data processing method, device, computer equipment and storage medium - Google Patents


Info

Publication number
CN117171403A
CN117171403A (application number CN202311145933.5A)
Authority
CN
China
Prior art keywords
target
data information
classification result
data
reply content
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311145933.5A
Other languages
Chinese (zh)
Inventor
赵滢
李金泽
宫婉钰
殷文莉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Industrial and Commercial Bank of China Ltd ICBC
Original Assignee
Industrial and Commercial Bank of China Ltd ICBC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Industrial and Commercial Bank of China Ltd ICBC filed Critical Industrial and Commercial Bank of China Ltd ICBC
Priority to CN202311145933.5A
Publication of CN117171403A
Legal status: Pending


Landscapes

  • Machine Translation (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The present application relates to a data processing method, apparatus, computer device, storage medium and computer program product in the field of artificial intelligence technology, financial technology and related fields. The method comprises the following steps: receiving target data information, where the target data information comprises voice data and/or text data; performing semantic analysis on the voice data and/or text data through a reinforcement-learning-based language analysis model to determine the target intention corresponding to the voice data and/or text data, where the language analysis model updates its model parameters through a feedback mechanism of a reward function and environment interaction feedback; classifying the target intention to obtain a classification result of the target data information corresponding to the target intention; and determining the reply content corresponding to the target data information based on the classification result. The method can improve the efficiency of processing the target data information.

Description

Data processing method, device, computer equipment and storage medium
Technical Field
The present application relates to the field of artificial intelligence, and in particular, to a data processing method, apparatus, computer device, storage medium, and computer program product.
Background
With the development of the banking industry, banking business has become increasingly complex, and target objects (customers) may complain about unsatisfactory products, services or behaviors. Handling such complaints is an important component of supervising bank service quality and an important means of safeguarding the rights and interests of target objects.
In the traditional approach, a bank handles complaint acceptance, complaint investigation, complaint resolution, complaint feedback and complaint record analysis manually.
However, because the volume of complaint data to be processed is large and the feedback required for complaint content is complex, this manual approach makes complaint processing inefficient.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a data processing method, apparatus, computer device, computer readable storage medium, and computer program product.
In a first aspect, the present application provides a data processing method. The method comprises the following steps:
receiving target data information; the target data information comprises voice data and/or text data;
performing semantic analysis on the voice data and/or the text data through a reinforcement-learning-based language analysis model, and determining the target intention corresponding to the voice data and/or the text data; the reinforcement-learning-based language analysis model updates its model parameters through a feedback mechanism of a reward function and environment interaction feedback;
classifying the target intention to obtain a classification result of the target data information corresponding to the target intention;
and determining the reply content corresponding to the target data information based on the classification result.
In one embodiment, after determining the reply content corresponding to the target data information based on the classification result, the method further includes:
receiving feedback data information, wherein the feedback data information comprises evaluation feedback of a target object aiming at the reply content;
and taking the evaluation feedback in the feedback data information as the environment interaction feedback of the language analysis model, and updating the feature weights, biases, learning rate and hyperparameters in the model parameters of the language analysis model through the reward function and the environment interaction feedback.
In one embodiment, the classifying the target intention to obtain a classification result of the target data information corresponding to the target intention includes:
extracting features of the target intention to obtain a feature vector corresponding to the target intention;
calculating, based on the feature vector, the probability that the target intention corresponds to each classification label;
and taking the classification label with the maximum probability as the classification result of the target data information corresponding to the target intention.
In one embodiment, the determining, based on the classification result, reply content corresponding to the target data information includes:
querying whether a classification result identical to the classification result of the target data information exists among the classification results corresponding to the predicted reply content;
if such a classification result exists, taking the preset reply content corresponding to the predicted target data information as the reply content corresponding to the target data information;
and if no such classification result exists, generating the reply content corresponding to the target data information based on the historical target data record and the behavior data of the target object.
In one embodiment, the generating reply content corresponding to the target data information based on the historical target data record and the behavior data of the target object includes:
acquiring a historical target record and behavior data of a target object;
analyzing and processing the historical target record and the behavior data of the target object according to the language analysis model to obtain the historical target demand characteristics and the historical target preference characteristics of the target object;
and generating reply content corresponding to the target data information according to the classification result, the historical target demand characteristic and the historical target preference characteristic, and displaying the reply content corresponding to the target data information to the target object.
In one embodiment, the language analysis model comprises a prediction sub-model, and the method further comprises:
acquiring a full target record and an incremental target record, and the target object information respectively corresponding to the full target record and the incremental target record;
preprocessing the full target record and the incremental target record to obtain a preprocessed full target record and a preprocessed incremental target record;
predicting, according to the prediction sub-model, the preprocessed full target record, the preprocessed incremental target record and the target object information to obtain predicted target data information;
and generating preset reply content according to the predicted target data information.
In one embodiment, the language analysis model based on reinforcement learning includes a risk assessment sub-model, and after determining reply content corresponding to the target data information based on the classification result, the method further includes:
acquiring comprehensive data;
and performing risk assessment on the comprehensive data according to the risk assessment sub-model to obtain a risk assessment result, and displaying the risk assessment result.
In one embodiment, after determining the reply content corresponding to the target data information based on the classification result, the method further includes:
responding to a manual processing request of the target object, extracting the regional information in the target data information according to the language analysis model, and taking the regional information and the target intention as conditions of a rule engine;
and allocating a recommended object according to the rule engine and the conditions of the rule engine.
In a second aspect, the application further provides a data processing device. The device comprises:
the first receiving module is used for receiving the target data information; the target data information comprises voice data and/or text data;
the analysis module is used for performing semantic analysis on the voice data and/or the text data through the reinforcement-learning-based language analysis model and determining the target intention corresponding to the voice data and/or the text data; the reinforcement-learning-based language analysis model updates its model parameters through a feedback mechanism of a reward function and environment interaction feedback;
the classifying module is used for classifying the target intention to obtain a classifying result of the target data information corresponding to the target intention;
and the first generation module is used for determining reply content corresponding to the target data information based on the classification result.
In one embodiment, the apparatus further comprises:
the second receiving module is used for receiving feedback data information, wherein the feedback data information comprises evaluation feedback of a target object for the reply content;
and the updating module is used for taking the evaluation feedback in the feedback data information as the environment interaction feedback of the language analysis model, and updating the feature weights, biases, learning rate and hyperparameters in the model parameters of the language analysis model through the reward function and the environment interaction feedback.
In one embodiment, the classifying module is specifically configured to:
extracting features of the target intention to obtain a feature vector corresponding to the target intention;
calculating, based on the feature vector, the probability that the target intention corresponds to each classification label;
and taking the classification label with the maximum probability as the classification result of the target data information corresponding to the target intention.
In one embodiment, the first generating module is specifically configured to:
querying whether a classification result identical to the classification result of the target data information exists among the classification results corresponding to the predicted reply content;
if such a classification result exists, taking the preset reply content corresponding to the predicted target data information as the reply content corresponding to the target data information;
and if no such classification result exists, generating the reply content corresponding to the target data information based on the historical target data record and the behavior data of the target object.
In one embodiment, the first generating module is specifically configured to:
acquiring a historical target record and behavior data of a target object;
analyzing and processing the historical target record and the behavior data of the target object according to the language analysis model to obtain the historical target demand characteristics and the historical target preference characteristics of the target object;
and generating reply content corresponding to the target data information according to the classification result, the historical target demand characteristic and the historical target preference characteristic, and displaying the reply content corresponding to the target data information to the target object.
In one embodiment, the apparatus further comprises:
the first acquisition module is used for acquiring the full target record and the incremental target record, and the target object information respectively corresponding to the full target record and the incremental target record;
the preprocessing module is used for preprocessing the full target record and the incremental target record to obtain a preprocessed full target record and a preprocessed incremental target record;
the prediction module is used for predicting, according to the prediction sub-model, the preprocessed full target record, the preprocessed incremental target record and the target object information to obtain predicted target data information;
and the second generation module is used for generating preset reply content according to the predicted target data information.
In one embodiment, the apparatus further comprises:
the second acquisition module is used for acquiring the comprehensive data;
and the risk assessment module is used for carrying out risk assessment on the comprehensive data according to the risk assessment sub-model to obtain a risk assessment result and displaying the risk assessment result.
In one embodiment, the apparatus further comprises:
the extraction module is used for responding to a manual processing request of the target object, extracting the regional information in the target data information according to the language analysis model, and taking the regional information and the target intention as conditions of a rule engine;
and the allocation module is used for allocating a recommended object according to the rule engine and the conditions of the rule engine.
In a third aspect, the present application also provides a computer device. The computer device comprises a memory storing a computer program and a processor which when executing the computer program performs the steps of:
receiving target data information; the target data information comprises voice data and/or text data;
performing semantic analysis on the voice data and/or the text data through a reinforcement-learning-based language analysis model, and determining the target intention corresponding to the voice data and/or the text data; the reinforcement-learning-based language analysis model updates its model parameters through a feedback mechanism of a reward function and environment interaction feedback;
classifying the target intention to obtain a classification result of the target data information corresponding to the target intention;
and determining the reply content corresponding to the target data information based on the classification result.
In a fourth aspect, the present application also provides a computer-readable storage medium. The computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of:
receiving target data information; the target data information comprises voice data and/or text data;
performing semantic analysis on the voice data and/or the text data through a reinforcement-learning-based language analysis model, and determining the target intention corresponding to the voice data and/or the text data; the reinforcement-learning-based language analysis model updates its model parameters through a feedback mechanism of a reward function and environment interaction feedback;
classifying the target intention to obtain a classification result of the target data information corresponding to the target intention;
and determining the reply content corresponding to the target data information based on the classification result.
In a fifth aspect, the present application also provides a computer program product. The computer program product comprises a computer program which, when executed by a processor, implements the steps of:
receiving target data information; the target data information comprises voice data and/or text data;
performing semantic analysis on the voice data and/or the text data through a reinforcement-learning-based language analysis model, and determining the target intention corresponding to the voice data and/or the text data; the reinforcement-learning-based language analysis model updates its model parameters through a feedback mechanism of a reward function and environment interaction feedback;
classifying the target intention to obtain a classification result of the target data information corresponding to the target intention;
and determining the reply content corresponding to the target data information based on the classification result.
According to the data processing method, apparatus, computer device, storage medium and computer program product, reply content is generated according to the classification result that the reinforcement-learning-based language analysis model produces for the target data information, so that the target requirements of the target object can be answered automatically. Meanwhile, reinforcement learning makes the language analysis model flexible: its model parameters are continuously updated through a feedback mechanism of a reward function and environment interaction feedback, which improves the accuracy of generating reply content from the target data information and thus improves the efficiency of processing the target data information.
Drawings
FIG. 1 is a flow diagram of a data processing method in one embodiment;
FIG. 2 is a flow chart of a continuous training step of a language analysis model in one embodiment;
FIG. 3 is a flow diagram of classifying target intent in one embodiment;
FIG. 4 is a flow diagram of generating reply content from preset reply content in one embodiment;
FIG. 5 is a flow diagram of direct generation of reply content in one embodiment;
FIG. 6 is a flow diagram of a method of generating preset reply content in one embodiment;
FIG. 7 is a flow diagram of a method of risk assessment in one embodiment;
FIG. 8 is a flow diagram of a method of assigning a recommendation object, in one embodiment;
FIG. 9 is a flow diagram illustrating an example of a method of data processing in one embodiment;
FIG. 10 is a block diagram of a data processing apparatus in one embodiment;
FIG. 11 is an internal block diagram of a computer device in one embodiment.
Detailed Description
To make the objects, technical solutions and advantages of the present application clearer, the present application will be described in further detail with reference to the drawings and embodiments. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
In one embodiment, as shown in fig. 1, a data processing method is provided. This embodiment is described as applied to a terminal, but it is understood that the method may also be applied to a server, or to a system comprising a terminal and a server and implemented through interaction between the terminal and the server. In this embodiment, the method includes the following steps:
step 102, receiving target data information.
Wherein the target data information comprises voice data and/or text data.
In the embodiment of the application, the target data information can be complaint data information, experience data information or consultation data information of the target object; the type of the target data information is not limited. In the embodiment of the application, complaint data information is taken as an example for explanation.
A customer of the bank can, as a target object, interact with the terminal through channels such as telephone, web pages, client software or video, and send the customer's complaint data information to the terminal. The terminal containing the language analysis model may act as a digital employee to receive complaint data information in the form of voice data and/or text data.
Optionally, if the target object uses a voice input device to transmit the complaint content to the terminal in voice form, the terminal converts the user's voice into text form through speech recognition technology and uses the text as the complaint data information, so that the terminal can further analyze and process it.
And 104, carrying out semantic analysis on the voice data and/or the text data through the language analysis model for reinforcement learning, and determining the target intention corresponding to the voice data and/or the text data.
The language analysis model for reinforcement learning updates model parameters based on a feedback mechanism of rewarding functions and environment interaction feedback.
The language analysis model is based on an LLM (Large Language Model, a deep-learning-based natural language processing model) and comprises a plurality of natural language processing tasks, combining several different model structures to realize the data processing task. For example, for the semantic analysis and intention prediction tasks, the language analysis model may be an attention mechanism model, a convolutional neural network model or a recurrent neural network model.
In the embodiment of the application, the intention is taken to be a complaint intention as an example. The terminal performs feature extraction on the received complaint data information, in the form of voice data and/or text data, using a language analysis model trained on the reinforcement learning principle. Specifically, if the complaint data information is voice data, the terminal may extract features through speech feature extraction techniques, for example Mel spectrogram features or MFCCs (Mel-scale Frequency Cepstral Coefficients); the terminal may also convert the voice data into text data through speech recognition and then perform semantic analysis on the text. For text data, the terminal can perform word segmentation, stemming, word vectorization and other processing through NLP (Natural Language Processing) techniques to extract features from the text data. Finally, from the voice data features or text data features, the terminal predicts the likely complaint intention using the learned knowledge and the reward mechanism.
Alternatively, predicting the complaint intent may be treated as a classification task, in which the model outputs the class with the highest probability as the input to the next stage. The language analysis model may be continuously trained through environment interaction during application. During training, the model must trade off between exploring new actions and exploiting learned strategies; an epsilon-greedy strategy or other exploration strategy may be used to ensure that the model is not limited to known good actions or sample labels and explores more possible options.
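For illustration only, the exploration-exploitation trade-off described above can be sketched as an epsilon-greedy action selector. This is a minimal sketch under assumed names (the function name and the list of estimated action values are not from the patent):

```python
import random

def epsilon_greedy_action(q_values, epsilon=0.1, rng=random):
    """With probability epsilon pick a random action index (explore);
    otherwise pick the action with the highest estimated value (exploit)."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda i: q_values[i])

# With epsilon = 0 the choice is purely greedy; with epsilon = 1 it is
# purely exploratory, so the model keeps sampling alternative actions.
```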
And 106, classifying the target intention to obtain a classification result of the target data information corresponding to the target intention.
In the embodiment of the application, the terminal classifies the complaint intention of the current complaint data information. The terminal can judge the type of the complaint intention through a classification task in the language analysis model to obtain a classification result of the complaint data information corresponding to the complaint intention; for example, the terminal can classify the current complaint data information as a product quality problem or a service complaint.
And step 108, determining reply content corresponding to the target data information based on the classification result.
In the embodiment of the application, according to the feature encoding of the voice data or text data of the current complaint data information, the terminal performs semantic learning and contextual information learning using the automatic target processing task in the language analysis model to obtain a feature representation of the complaint data information, determines the reply content corresponding to the complaint data information based on the learned language knowledge and semantic relations, and displays the reply content to the target object. The part of the language analysis model that generates the reply content may use a Transformer architecture (a neural network architecture).
Optionally, preset reply contents of different complaint intentions can be stored in the terminal, and if preset reply contents corresponding to the complaint intentions exist in the terminal, the language analysis model can directly call and display the preset reply contents corresponding to the complaint intentions.
According to the data processing method, reply content is generated according to the classification result that the reinforcement-learning-based language analysis model produces for the target data information, so that the target requirements of the target object can be answered automatically. Meanwhile, reinforcement learning makes the language analysis model flexible: its model parameters are continuously updated through a feedback mechanism of a reward function and environment interaction feedback, which improves the accuracy of generating reply content from the target data information and thus improves the efficiency of processing the target data information.
In one embodiment, the reinforcement-learning-based language analysis model may be continuously trained during application to improve the output accuracy of each sub-task, learning the optimal strategy by observing the state of the environment, performing actions, and obtaining feedback (rewards or penalties). After the product types and the like change, the reinforcement-learning-based language analysis model can adjust and optimize itself through interaction with the new environment. As shown in fig. 2, after step 108 determines the reply content corresponding to the target data information based on the classification result, the method further comprises:
step 202, feedback data information is received.
The feedback data information comprises evaluation feedback of the target object aiming at the reply content.
In the embodiment of the application, after the terminal displays the reply content to the target object (i.e., the customer who raised the complaint), the target object may evaluate the quality of the reply content. For example, the target object may score the reply content, or evaluate dimensions such as whether the reply content solves the problem and whether its text is complete and accurate. This evaluation is fed back as feedback data information to the language analysis model of the terminal. That is, the terminal may receive the feedback data information of the target object as environment interaction feedback.
And 204, taking the evaluation feedback in the feedback data information as the environment interaction feedback of the language analysis model, and updating the feature weights, the bias, the learning rate and the super parameters in the model parameters of the language analysis model through the reward function and the environment interaction feedback.
In the embodiment of the application, the language analysis model learns new knowledge from interaction with the new environment. That is, it takes the evaluation feedback in the feedback data information as its environment interaction feedback, updates the policy and value functions in the model, and adjusts the priority of action selection and the accuracy of value estimation according to the new feedback data information. Through continuous interaction and learning, the feature weights, biases, learning rate and hyperparameters in the model parameters are iteratively improved, and the model adapts to new tasks and environment requirements through repeated attempts, strategy optimization and knowledge transfer.
The reward function is used to evaluate the quality of the actions selected by the language analysis model. In the language analysis model, the reward function may be determined based on task goals, such as the quality of the generated text or the interactive effect of the dialog with the user, and the parameters of the policy model are updated based on the reward function and environment interaction feedback using reinforcement learning algorithms, e.g., deep reinforcement learning algorithms (such as deep Q-networks or policy gradient methods), to maximize the cumulative reward.
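As an illustration of how a reward signal could drive such a parameter update, the following is a minimal REINFORCE-style sketch for a logistic policy. It is not the patent's actual algorithm; the function name, the logistic policy and all parameters are assumptions made for the sketch:

```python
def reinforce_update(weights, features, action_prob, reward, lr=0.01):
    """One REINFORCE-style step for a logistic policy pi = sigmoid(w . x):
    move the weights along reward * grad(log pi), where for the action
    that was taken, d log(pi)/dw = (1 - pi) * x."""
    return [w + lr * reward * (1.0 - action_prob) * x
            for w, x in zip(weights, features)]

# A positive reward (good evaluation feedback) makes the taken action
# more likely; a negative reward (a penalty) makes it less likely.
```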
In this embodiment, the model parameters are continuously updated on the language analysis model through the feedback mechanism of the reward function and the environment interaction feedback, so that the accuracy of generating the reply content for the target data information is improved, and the efficiency of data processing on the target data information is further improved.
In one embodiment, as shown in fig. 3, step 106 performs a classification process on the target intention to obtain a classification result of the target data information corresponding to the target intention, including:
step 302, extracting features of the target intention to obtain a feature vector corresponding to the target intention.
In the embodiment of the application, the terminal can select features related to the target intention. For example, the terminal may use TF-IDF (term frequency-inverse document frequency, a keyword-weighting scheme) or a domain-knowledge-based method (e.g., keyword extraction) to select features capable of distinguishing different target intentions. Then, the terminal represents the extracted features numerically and combines the results into a feature vector. For example, the feature vector may be formed from a single feature or by combining multiple features, where the combination may be obtained by concatenation, weighted summation or the like; the application does not limit the method of generating the feature vector.
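The TF-IDF weighting mentioned above can be sketched as follows over a tiny corpus. This is a minimal self-contained illustration, not the patent's implementation; the example documents and helper name are invented, and a production system would use a tuned vectorizer:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Return (vocab, vectors): one TF-IDF vector per document, with
    TF = term count / document length and IDF = log(N / document frequency)."""
    tokenized = [d.split() for d in docs]
    vocab = sorted({w for toks in tokenized for w in toks})
    n = len(docs)
    # Document frequency: in how many documents each word appears.
    df = {w: sum(1 for toks in tokenized if w in toks) for w in vocab}
    vectors = []
    for toks in tokenized:
        tf = Counter(toks)
        vectors.append([(tf[w] / len(toks)) * math.log(n / df[w]) for w in vocab])
    return vocab, vectors

# Words shared by every document get an IDF of zero and so carry no
# weight, which is exactly why TF-IDF helps distinguish intentions.
```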
Step 304, calculating, based on the feature vector, the probability of each classification label corresponding to the target intention.
In the embodiment of the application, the terminal feeds the feature vector into an LLM (large language model) classifier to determine the classification probability of the target intention over a plurality of classification labels. The classifier may be constructed based on logistic regression, a support vector machine, naive Bayes, or similar principles; the type of classifier is not limited by the application.
Step 306, taking the classification label corresponding to the maximum value of the probability as the classification result of the target data information corresponding to the target intention.
In the embodiment of the application, the classification label with the maximum probability value is selected from the classification labels corresponding to the target intention and is used as the classification result of the target data information corresponding to the target intention.
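Taken together, steps 304 and 306 amount to scoring the feature vector against each label and selecting the argmax. A minimal sketch, in which the label set and the raw per-label scores are invented for illustration (the patent does not fix a label set):

```python
import math

# Illustrative classification labels.
LABELS = ["billing", "service_quality", "product_defect"]

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def classify(label_scores):
    # label_scores: one raw score per label, e.g. from a logistic-regression
    # head applied to the intent feature vector (step 304).
    probs = softmax(label_scores)
    best = max(range(len(LABELS)), key=lambda i: probs[i])
    # Step 306: the label with the maximum probability is the result.
    return LABELS[best], probs[best]

label, prob = classify([2.1, 0.3, -1.0])
print(label)  # -> billing
```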
In this embodiment, the accuracy of the reply content generated by the LLM may be improved by extracting features of the target intention to obtain a feature vector and using the feature vector to determine the classification result of the target data information corresponding to the target intention.
In one embodiment, as shown in fig. 4, step 108 determines reply content corresponding to the target data information based on the classification result, including:
Step 402, querying whether a classification result identical to the classification result of the target data information exists among the classification results corresponding to the predicted reply contents.
In the embodiment of the application, the terminal first compares the labels or categories of the classification results to judge whether they are the same: it searches the classification results corresponding to the predicted reply contents for one that matches the classification result of the target data information, thereby determining whether an identical classification result exists.
Step 404, if a classification result identical to the classification result of the target data information exists among the classification results corresponding to the predicted reply contents, taking the preset reply content corresponding to the predicted target data information as the reply content corresponding to the target data information.
In the embodiment of the application, if the classification result which is the same as the classification result of the target data information exists in the classification result corresponding to the predicted reply content, the terminal can take the preset reply content corresponding to the predicted target data information as the reply content corresponding to the target data information.
Alternatively, for the case where the predicted classification result is identical to the classification result of the target data information, the terminal may also weigh the confidence of using the preset reply against that of a dynamically generated reply. For example, in some special cases the terminal may dynamically generate personalized reply content on the basis of the preset reply content, according to the specific question and context of the user.
Step 406, if no classification result identical to the classification result of the target data information exists among the classification results corresponding to the predicted reply contents, generating the reply content corresponding to the target data information based on the historical target data records and behavior data of the target object.
In the embodiment of the present application, if no classification result identical to the classification result of the target data information exists among the classification results corresponding to the predicted reply contents, the terminal generates the reply content based on the same principle as step 108; the specific process of generating the reply content is not repeated here.
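The lookup-then-fallback logic of steps 402–406 can be sketched as a cache keyed by classification result. The preset replies and the fallback stub are invented for illustration; in the patent the fallback path is the history/behavior-based generation of step 108.

```python
# Illustrative cache mapping predicted classification results to preset replies.
PRESET_REPLIES = {
    "billing": "We have logged your billing concern and will review the charge.",
    "service_quality": "We are sorry about your experience and will follow up.",
}

def generate_reply(classification, history, behavior):
    # Stub for the step-108 generation path (history/behavior-based).
    return f"[generated reply for {classification!r} from history and behavior]"

def reply_for(classification, history=None, behavior=None):
    # Step 404: an identical predicted classification exists -> reuse the preset.
    if classification in PRESET_REPLIES:
        return PRESET_REPLIES[classification]
    # Step 406: no match -> generate from the target object's records.
    return generate_reply(classification, history, behavior)

print(reply_for("billing"))
print(reply_for("app_crash"))
```

The cache hit avoids invoking the generation model at all, which is the efficiency gain this embodiment claims.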
In this embodiment, preset reply content generated in advance serves as a default option: a query judgment is performed before reply content is generated, and when a classification result identical to the classification result of the target data information exists among the classification results corresponding to the predicted reply contents, the preset reply content corresponding to the predicted target data information is used as the reply content. This improves the efficiency of generating reply content for the target data information.
In one embodiment, as shown in fig. 5, step 108 determines reply content corresponding to the target data information based on the classification result, including:
Step 502, historical target records and behavior data of a target object are obtained.
In the embodiment of the application, taking a historical target record as a complaint record as an example, the terminal determines the target object from the complaint data information, looks up the target object in a database according to the target object's account information, and acquires the historical complaint records and behavior data corresponding to the target object, for use in generating a personalized solution for the target object.
Step 504, analyzing the historical target records and the behavior data of the target object with the language analysis model to obtain the historical target demand features and the historical target preference features of the target object.
In the embodiment of the application, the terminal first performs data cleaning and standardization using the natural language processing techniques in the language analysis model, applying word segmentation, stop-word removal, punctuation removal, and similar operations to the historical complaint records and behavior data of text type. Then, the text data is converted into high-dimensional vector representations by a feature extraction structure (e.g., a text embedding structure) in the language analysis model, yielding feature vectors for the historical complaint records and for the behavior data respectively. These feature vectors are input into the language analysis model, which extracts subject features, keyword features, emotional tendency features, and the like from the historical complaint records, and extracts the target object's product consumption tendency features, complaint-process features, and the like from the behavior data, thereby obtaining the historical complaint demand features and historical complaint preference features of the target object.
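In outline, the cleaning and vectorization described above might look as follows. The stop-word list is an invented assumption, and the hashed bag-of-words function is only a stand-in for the text-embedding structure, which in a real system would produce learned dense embeddings.

```python
import re

STOP_WORDS = {"the", "a", "i", "my", "was", "on", "and", "to"}  # illustrative

def preprocess(record):
    # Cleaning: lowercase, strip punctuation, segment into words, drop stop words.
    text = re.sub(r"[^\w\s]", " ", record.lower())
    return [tok for tok in text.split() if tok not in STOP_WORDS]

def embed(tokens, dim=8):
    # Stand-in for the text-embedding structure: hash tokens into a fixed-size
    # bag-of-words vector; a real model would produce learned dense embeddings.
    vec = [0.0] * dim
    for tok in tokens:
        vec[hash(tok) % dim] += 1.0
    return vec

tokens = preprocess("My card was charged twice on the same purchase.")
print(tokens)   # -> ['card', 'charged', 'twice', 'same', 'purchase']
print(len(embed(tokens)))
```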
Step 506, generating the reply content corresponding to the target data information according to the classification result, the historical target demand features, and the historical target preference features, and displaying the reply content to the target object.
In the embodiment of the application, the terminal generates the reply content corresponding to the complaint data information through the personalized complaint-resolution task in the language analysis model, based on the classification result, the historical complaint demand features, and the historical complaint preference features, using the language knowledge and semantic relations the model has learned. Optionally, the language analysis model may also include predefined reply templates: a reply template matching the characteristics of the current complaint data information is determined according to the classification result of the target object's current complaint data information and the feature information of the historical complaint demand features and historical complaint preference features, and the feature information is embedded into the reply template to obtain the reply content.
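The optional template path can be sketched as follows; the template text and the slot names (theme, preference, action) are invented for the example, and a miss falls back to free-form generation by the model.

```python
# Illustrative predefined reply templates keyed by classification result.
TEMPLATES = {
    "billing": ("Dear customer, regarding your {theme} complaint: we noted your "
                "preference for {preference} and will {action}."),
}

def fill_template(classification, features):
    template = TEMPLATES.get(classification)
    if template is None:
        return None  # no matching template -> fall back to free-form generation
    # Embed the extracted feature information into the matched reply template.
    return template.format(**features)

reply = fill_template("billing", {
    "theme": "duplicate charge",
    "preference": "refunds to the original card",
    "action": "reverse the extra charge within 3 business days",
})
print(reply)
```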
After generating the reply content, the terminal presents it to the target object, for example by sending it to the target object's user interface or through other means.
Optionally, through a digital employee in the form of a question-and-answer box, the terminal can prompt the target object to further describe and expand the complaint content and demands, so that the language analysis model can provide targeted suggestions and solutions for the target object.
In this embodiment, by extracting the historical target demand features and historical target preference features of the target object that sent the target data information, independent analysis can be performed for each individual target object and personalized reply content can be provided for its characteristics. This improves the accuracy with which the language analysis model generates reply content for the user's target data information, and thereby improves the efficiency of data processing on the target data information.
In one embodiment, the reinforcement-learning-based language analysis model includes a predictor model for predicting target behavior that may occur in the future. As shown in fig. 6, the method further comprises:
step 602, obtaining a full-volume target record and an increment target record, and target object information respectively corresponding to the full-volume target record and the increment target record.
In the embodiment of the application, the terminal may predict possible complaints of the target object over a preset time period. For example, the terminal of a banking system predicts possible consumer complaints on a quarterly basis, where the full-volume complaint records may be all complaint records recorded by the banking system before the current quarter, and the incremental complaint records may be the complaint records newly added in the current quarter.
The terminal acquires the full-volume complaint records and the incremental complaint records according to this division rule, and determines the target object and the target object information from each piece of complaint data information in the complaint records.
Step 604, preprocessing the full-volume target record and the incremental target record to obtain the preprocessed full-volume target record and incremental target record.
In the embodiment of the application, the preprocessing operations include removing abnormal and duplicate data, segmenting and tagging the content of each piece of complaint data information, and extracting keywords and key phrases. Specifically, the terminal processes abnormal or invalid data in the full-volume and incremental complaint records, for example deleting, repairing, or filling in missing values, erroneous formats, and abnormal characters; it also detects duplicates in the complaint data set, identifying and deleting repeated complaint records. Word segmentation is applied to the complaint data information through rule-based, statistics-based, and deep-learning-based segmentation, and part-of-speech tagging is performed on the segmented text, assigning each word its part-of-speech category within the sentence for further semantic analysis and keyword extraction. Based on the results of word segmentation and part-of-speech tagging, the terminal extracts keywords related to the subject or focus of the complaint from the complaint content using a keyword extraction algorithm, for example TF-IDF or TextRank (a graph-based keyword extraction algorithm), and extracts key phrases in the complaint data information using a phrase extraction algorithm.
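Two of these preprocessing operations, duplicate removal and keyword extraction, can be sketched as follows. The sample records are invented, and simple frequency counting stands in for TF-IDF/TextRank scoring.

```python
from collections import Counter

# Invented complaint records; the second is an exact duplicate of the first.
records = [
    "transfer failed but the account was still debited",
    "transfer failed but the account was still debited",
    "debited twice for a single transfer",
]

# Duplicate-data detection: drop exact repeats while preserving order.
seen, deduped = set(), []
for r in records:
    if r not in seen:
        seen.add(r)
        deduped.append(r)

STOP_WORDS = {"the", "but", "was", "for", "a", "still"}

def top_keywords(texts, k=3):
    # Frequency counting stands in here for TF-IDF / TextRank scoring.
    counts = Counter(w for t in texts for w in t.split() if w not in STOP_WORDS)
    return [w for w, _ in counts.most_common(k)]

print(len(deduped))
print(top_keywords(deduped))
```

Real complaint text would additionally need the word segmentation and part-of-speech tagging described above, especially for Chinese, where tokens are not whitespace-delimited.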
Step 606, predicting, according to the predictor model, the preprocessed full-volume target record, the preprocessed incremental target record, and the target object information to obtain predicted target data information.
In the embodiment of the application, the terminal predicts, from the incremental complaint records, the newly added feature information of the full-volume complaint records (such as word frequencies, emotional features, and complaint subjects of the complaint content) to obtain the classification category output by the predictor model, and thus obtains predicted complaint data information characterizing complaints that may be received in the future.
Step 608, generating preset reply content based on the predicted target data information.
In the embodiment of the application, the terminal performs the intelligent complaint analysis task by processing the predicted complaint data information with the language analysis model, generating the preset reply content. The specific process by which the language analysis model generates the preset reply content follows the same principle as the generation of reply information in step 108, and is not repeated here.
In this embodiment, the predictor model predicts target data information from the full-volume and incremental target records, so preset reply content can be generated in advance from the predicted data information. During application of the data processing method of the present application, the preset reply content matching the target data information is invoked for output and display, which improves the efficiency of generating reply content and thereby the efficiency of data processing on the target data information.
In one embodiment, the reinforcement-learning-based language analysis model includes a risk assessment sub-model used for risk assessment of businesses, products, and the like in a bank. As shown in fig. 7, after step 108 of determining the reply content corresponding to the target data information based on the classification result, the method further includes:
step 702, acquiring comprehensive data.
In the embodiment of the application, the terminal acquires various data from the bank's internal systems, such as consumer complaint data, product sales data, and consumer feedback data. Among these, consumer complaint data and feedback data can reflect the target object's concerns about, and dissatisfaction with, a product or service.
Step 704, performing risk assessment on the comprehensive data according to the risk assessment sub-model to obtain a risk assessment result, and displaying the risk assessment result.
In the embodiment of the application, the terminal performs feature extraction, feature conversion, and similar operations on the comprehensive data according to the risk assessment sub-model in the language analysis model, so as to describe the risk features, across different dimensions, of factors where risk may arise, and performs risk assessment on those factors (such as products or services). The risk assessment may be a classification task, assigning risk-class labels to the different factors so that each product or service receives a label such as high risk, medium risk, or low risk. Optionally, the risk assessment may instead be a regression task, in which risk scores are calculated for the different factors and the potentially risky factors are graded into different risk levels according to the risk scores and preset risk thresholds.
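The regression-style variant, in which risk scores are mapped to tiers via preset thresholds, can be sketched as follows; the product names, scores, and threshold values are illustrative assumptions.

```python
def risk_label(score, high=0.7, low=0.3):
    # Regression-style grading: map a risk score in [0, 1] to a tier using
    # preset thresholds (the threshold values here are illustrative).
    if score >= high:
        return "high risk"
    if score >= low:
        return "medium risk"
    return "low risk"

# Invented per-product risk scores, e.g. derived from complaint and
# negative-feedback rates in the comprehensive data.
products = {"wealth_product_a": 0.82, "credit_card_b": 0.41, "savings_c": 0.12}
assessment = {name: risk_label(score) for name, score in products.items()}
print(assessment)
```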
Optionally, through the digitized complaint recording and management task in the language analysis model, the terminal establishes a digitized record for each piece of complaint data information, including the complaint time, complaint content, complainant information, and so on; these records can be rapidly retrieved and analyzed. Meanwhile, the terminal can track processing progress and deadlines through the system and respond quickly to the target object's complaint demands. Using visual analysis techniques, the terminal can present the processing results in BI (Business Intelligence) form, displaying them as charts, maps, and the like, so that bank managers can understand the complaint situation more intuitively.
A complete technical scheme for digitized bank complaint recording and management comprises the following steps. First, consumer complaints are received and recorded through a digital platform, covering complaint information submitted through various channels (e.g., telephone, text messaging, and mobile-app feedback). Second, complaints are automatically classified, screened, and archived by a language analysis model incorporating natural language processing, and a digitized record is established for each complaint, including the complaint time, complaint content, complainant information, and so on, which can be quickly retrieved and analyzed. Finally, digital complaint management is implemented, including assigning complaint handlers, setting processing progress and deadlines, and tracking complaint progress, which improves processing efficiency and transparency and enables quick responses to consumer complaint demands. Through digitized complaint recording and management, the bank can better grasp consumer complaint conditions and take timely measures to resolve problems.
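A digitized complaint record carrying the fields named above (complaint time, content, complainant) with simple keyword retrieval might be modeled as follows; the status and handlers fields are illustrative additions standing in for the processing-progress tracking.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class ComplaintRecord:
    # Fields named in the text: complaint time, content, complainant info.
    complaint_time: datetime
    content: str
    complainant: str
    status: str = "open"                     # illustrative progress field
    handlers: list = field(default_factory=list)

records = [
    ComplaintRecord(datetime(2023, 9, 1, 10, 30), "ATM withdrawal failed", "user_a"),
    ComplaintRecord(datetime(2023, 9, 2, 14, 5), "app login error", "user_b"),
]

# Rapid retrieval: filter the digitized records by keyword in the content.
hits = [r for r in records if "login" in r.content]
print(len(hits), hits[0].complainant)  # -> 1 user_b
```

In a deployed system these records would live in the database mentioned in the computer-device embodiment and be queried by indexed search rather than a linear scan.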
In this embodiment, the risk assessment sub-model predicts over the comprehensive data, and the prediction result can reflect the target object's likely points of concern and dissatisfaction. This improves the timeliness of data processing and can thereby effectively forestall complaint behavior by the target object.
In one embodiment, as shown in fig. 8, after determining the reply content corresponding to the target data information in step 108 based on the classification result, the method further includes:
Step 802, in response to a manual processing request of the target object, extracting the regional information in the target data information according to the language analysis model, and taking the regional information and the target intention as conditions of the rule engine.
In the embodiment of the application, if the target object is not satisfied with the reply content generated by the language analysis model, it can initiate a manual processing request. In response, the terminal extracts the regional information in the complaint data information through the entity recognition and keyword extraction tasks in the language analysis model, and uses the regional information as one of the conditions of the rule engine. Meanwhile, the terminal uses the complaint intention of the complaint data information obtained in step 104, together with the regional information, as conditions of the rule engine for subsequent computation by the language analysis model.
Step 804, assigning a recommended object according to the rule engine and the conditions of the rule engine.
In the embodiment of the application, the conditions of the rule engine can be set based on specific input data, states, or events; for example, a condition may be the complaint subject, complaint demand, or regional information in the complaint data information. The terminal performs matching through the rule engine according to the extracted regional information and complaint intention, and when a condition is met, the corresponding processing flow is triggered. For example, when the regional information is Beijing and the complaint intention is a product quality problem, the corresponding processing flow is triggered and the recommended object for handling the current complaint data information is determined to be a handler in the Beijing region responsible for the product.
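The region-plus-intent matching can be sketched as a first-match rule table; the rules, fact keys, and handler names are invented for illustration.

```python
# Illustrative rule table: condition dict -> handler assignment, ordered
# most specific first so the first matching rule wins.
RULES = [
    ({"region": "Beijing", "intent": "product_quality"}, "beijing_product_team"),
    ({"region": "Beijing"}, "beijing_general_team"),
    ({"intent": "product_quality"}, "national_product_team"),
]

def assign_handler(facts):
    # A rule fires when every one of its conditions is satisfied by the
    # facts extracted from the complaint data information.
    for conditions, handler in RULES:
        if all(facts.get(k) == v for k, v in conditions.items()):
            return handler
    return "default_queue"  # no rule matched

print(assign_handler({"region": "Beijing", "intent": "product_quality"}))
print(assign_handler({"region": "Shanghai", "intent": "product_quality"}))
```

Ordering the rule table from specific to general is what lets the Beijing product-quality complaint reach the dedicated team rather than the generic regional queue.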
Optionally, after the language analysis model determines the recommended object, the terminal performs association analysis on the historical complaint demand features and historical complaint preference features of the target object through an association analysis algorithm to obtain comprehensive complaint data for the target object. The association analysis finds association rules between the historical complaint demand features and the historical complaint preference features; these rules may reveal relationships between the target object's complaint demands and preferences and are used to generate the comprehensive complaint data.
The terminal classifies the comprehensive complaint data to obtain a corresponding classification result, and performs semantic analysis on the comprehensive complaint data according to that result to obtain the complaint subject of the comprehensive complaint data. The terminal then generates a reference solution for the target object based on the complaint subject and displays it to the recommended object. The reference solution may include a summary of the target object's comprehensive complaint data, showing the target object's detailed complaint records and behavior data together with a summary of them, so that the recommended object can reply to the target object's complaint data information with the help of the reference solution.
In this embodiment, extracting the rule-engine conditions from the target object's complaint data information and assigning a recommended object through the rule engine to respond to that information can improve the accuracy of the recommended object's response.
In one embodiment, an example of a data processing method is provided. As shown in fig. 9, the method includes:
Step 901, receiving complaint data information;
step 902, generating reply content according to an automatic complaint processing module in the language analysis model, and outputting feedback of the reply content;
step 903, predicting the full-quantity complaint records and the incremental complaint records according to the intelligent complaint analysis module in the language analysis model, and generating preset reply content;
step 904, responding to the manual processing request of the target object, and distributing the recommended object to reply through a language analysis model;
step 905, generating a reference solution according to a personalized complaint solving module in the language analysis model, displaying the reference solution to a recommended object, and recording the processing result of the recommended object in a digital complaint recording module;
step 906, continuously training the language analysis model and optimizing its model parameters according to the digitized complaint recording and management module, the automatic complaint processing module, the intelligent complaint analysis module, and the personalized complaint resolution module in the language analysis model.
Optionally, step 903 may be executed after step 902 or in parallel with step 902, and step 902 may match against the preset reply content when generating reply content; the specific execution order may be set based on business requirements.
It should be understood that, although the steps in the flowcharts of the embodiments described above are shown in the sequence indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the order of execution is not strictly limited, and the steps may be executed in other orders. Moreover, at least some of the steps in these flowcharts may comprise multiple sub-steps or stages, which need not be performed at the same time but may be performed at different times, and whose order of execution is not necessarily sequential; they may be performed in turn or alternately with at least part of the other steps, sub-steps, or stages.
Based on the same inventive concept, the embodiment of the application also provides a data processing device for realizing the above related data processing method. The implementation of the solution provided by the device is similar to the implementation described in the above method, so the specific limitation of one or more embodiments of the data processing device provided below may refer to the limitation of the data processing method hereinabove, and will not be repeated herein.
In one embodiment, as shown in FIG. 10, there is provided a data processing apparatus comprising: a first receiving module 1001, an analyzing module 1002, a categorizing module 1003, and a first generating module 1004, wherein:
a first receiving module 1001, configured to receive target data information; the target data information contains speech data and/or text data.
The analysis module 1002 is configured to perform semantic analysis on the voice data and/or text data through a language analysis model for reinforcement learning, and determine a target intention corresponding to the voice data and/or text data; the reinforcement-learning language analysis model updates model parameters based on a feedback mechanism of the reward function and the environmental interaction feedback.
The classifying module 1003 is configured to perform classifying processing on the target intention, and obtain a classification result of the target data information corresponding to the target intention.
The first generating module 1004 is configured to determine, based on the classification result, reply content corresponding to the target data information.
In one embodiment, the apparatus 1000 further comprises:
the second receiving module is used for receiving feedback data information, wherein the feedback data information comprises evaluation feedback of the target object aiming at the reply content;
and the updating module is used for taking the evaluation feedback in the feedback data information as the environmental interaction feedback of the language analysis model, and updating the feature weights, biases, learning rate, and hyperparameters among the model parameters of the language analysis model through the reward function and the environmental interaction feedback.
In one embodiment, the categorizing module 1003 is specifically configured to:
extracting features of the target intention to obtain a feature vector corresponding to the target intention;
calculating the probability of the classification label corresponding to the target intention based on the feature vector;
and taking the classification label corresponding to the maximum value of the probability as a classification result of the target data information corresponding to the target intention.
In one embodiment, the first generating module 1004 is specifically configured to:
inquiring whether the classification result which is the same as the classification result of the target data information exists in the classification result which corresponds to the predicted reply content;
if the classification result corresponding to the predicted reply content is the same as the classification result of the target data information, the preset reply content corresponding to the predicted target data information is used as the reply content corresponding to the target data information;
if the classification result which is the same as the classification result of the target data information does not exist in the classification result corresponding to the predicted reply content, generating the reply content corresponding to the target data information based on the historical target data record and the behavior data of the target object.
In one embodiment, the first generating module 1004 is specifically configured to:
acquiring a historical target record and behavior data of a target object;
Analyzing and processing the historical target record and the behavior data of the target object according to the language analysis model to obtain the historical target demand characteristics and the historical target preference characteristics of the target object;
and generating reply contents corresponding to the target data information according to the classification result, the historical target demand characteristics and the historical target preference characteristics, and displaying the reply contents corresponding to the target data information to the target object.
In one embodiment, the apparatus 1000 further comprises:
the first acquisition module is used for acquiring the full target record and the increment target record and the target object information respectively corresponding to the full target record and the increment target record;
the preprocessing module is specifically used for preprocessing the full target record and the increment target record to obtain the preprocessed full target record and increment target record;
the prediction module is used for predicting the preprocessed full target record, the preprocessed increment target record and the target object information according to the predictor model to obtain predicted target data information;
and the second generation module is used for generating preset reply content according to the predicted target data information.
In one embodiment, the apparatus 1000 further comprises:
The second acquisition module is used for acquiring the comprehensive data;
the risk assessment module is used for carrying out risk assessment on the comprehensive data according to the risk assessment sub-model to obtain a risk assessment result, and displaying the risk assessment result.
In one embodiment, the apparatus 1000 further comprises:
the extraction module is used for responding to the manual processing request of the target object, extracting the regional information in the target data information according to the language analysis model, and taking the regional information and the target intention as the conditions of the rule engine;
and the distribution module is used for distributing the recommended objects according to the conditions of the rule engine and the rule engine.
Each of the modules in the above-described data processing apparatus may be implemented in whole or in part by software, hardware, and combinations thereof. The above modules may be embedded in hardware or may be independent of a processor in the computer device, or may be stored in software in a memory in the computer device, so that the processor may call and execute operations corresponding to the above modules.
In one embodiment, a computer device is provided, which may be a server, and the internal structure of which may be as shown in fig. 11. The computer device includes a processor, a memory, and a network interface connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer programs, and a database. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The database of the computer device is used to store complaint data information. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a data processing method.
It will be appreciated by those skilled in the art that the structure shown in FIG. 11 is merely a block diagram of a portion of the structure relevant to the present application and does not limit the computer device to which the present application is applied; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the following steps when executing the computer program:
receiving target data information; the target data information comprises voice data and/or text data;
performing semantic analysis on the voice data and/or the text data through a reinforcement-learning language analysis model, and determining a target intention corresponding to the voice data and/or the text data; the reinforcement-learning language analysis model updates its model parameters based on a reward function and a feedback mechanism of environment interaction feedback;
classifying the target intention to obtain a classification result of target data information corresponding to the target intention;
and determining the reply content corresponding to the target data information based on the classification result.
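By way of illustration, and not limitation, the four steps above may be sketched as a minimal pipeline. The function names and keyword rules below are illustrative assumptions, and simple keyword matching stands in for the reinforcement-learning language analysis model:

```python
# Minimal sketch: receive -> semantic analysis -> classify -> determine reply.
# Keyword matching stands in for the reinforcement-learning language analysis
# model; all names and rules here are illustrative assumptions.

def analyze_semantics(target_data: str) -> str:
    """Step 2: determine a target intention from voice/text data (toy version)."""
    return "card_service" if "card" in target_data.lower() else "general_inquiry"

def classify_intention(intention: str) -> str:
    """Step 3: map the target intention to a classification result."""
    return {"card_service": "business", "general_inquiry": "consult"}[intention]

def determine_reply(classification: str) -> str:
    """Step 4: pick reply content based on the classification result."""
    presets = {"business": "We will handle your card request.",
               "consult": "Thank you, an adviser will follow up."}
    return presets[classification]

def process(target_data: str) -> str:
    """Steps 1-4: end-to-end handling of one piece of target data information."""
    return determine_reply(classify_intention(analyze_semantics(target_data)))
```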
In one embodiment, the processor when executing the computer program further performs the steps of:
receiving feedback data information, wherein the feedback data information comprises evaluation feedback of a target object for the reply content;
and using the evaluation feedback in the feedback data information as the environment interaction feedback of the language analysis model, and updating the feature weights, biases, learning rate and hyperparameters in the model parameters of the language analysis model through the reward function and the environment interaction feedback.
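By way of illustration, the update described above may be sketched as a reward-scaled weight adjustment. The patent does not specify the exact update rule, so the REINFORCE-style linear update and all values below are assumptions:

```python
import numpy as np

def update_weights(weights, features, reward, learning_rate=0.1):
    """One reward-scaled update: positive feedback (+1) reinforces the feature
    weights that produced the reply, negative feedback (-1) suppresses them."""
    return weights + learning_rate * reward * features

w = np.zeros(3)
x = np.array([1.0, 0.5, 0.0])         # features of the analysed request
w = update_weights(w, x, reward=1.0)  # user rated the reply positively
```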
In one embodiment, the processor when executing the computer program further performs the steps of:
extracting features of the target intention to obtain a feature vector corresponding to the target intention;
calculating the probability of each classification label corresponding to the target intention based on the feature vector;
and taking the classification label corresponding to the maximum value of the probability as a classification result of the target data information corresponding to the target intention.
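These three steps correspond to a standard softmax classification over intention features; a sketch under that assumption (the label set and weights are made up):

```python
import numpy as np

def classify(feature_vector, label_weights):
    """Score each classification label, turn the scores into probabilities
    with softmax, and return the label whose probability is maximal."""
    labels = list(label_weights)
    scores = np.array([label_weights[lab] @ feature_vector for lab in labels])
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()                  # probability of each label
    return labels[int(np.argmax(probs))]  # label with the maximum probability

label_weights = {"complaint": np.array([2.0, 0.0]),
                 "consultation": np.array([0.0, 1.0])}
result = classify(np.array([1.0, 0.5]), label_weights)
```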
In one embodiment, the processor when executing the computer program further performs the steps of:
querying whether a classification result identical to the classification result of the target data information exists among the classification results corresponding to the predicted reply content;
if a classification result identical to the classification result of the target data information exists among the classification results corresponding to the predicted reply content, using the preset reply content corresponding to the predicted target data information as the reply content corresponding to the target data information;
and if no classification result identical to the classification result of the target data information exists among the classification results corresponding to the predicted reply content, generating the reply content corresponding to the target data information based on the historical target data record and the behavior data of the target object.
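By way of illustration, this branching may be sketched as a lookup with a generation fallback; the names and the fallback template are illustrative assumptions:

```python
def generate_reply(classification, history, behavior):
    """Fallback: stand-in for model-driven generation from historical target
    data records and behavior data of the target object."""
    return f"[{classification}] reply based on {len(history)} past records"

def determine_reply(classification, predicted_presets, history, behavior):
    """Use the preset reply when an identical classification result exists
    among those predicted; otherwise generate one from history/behavior."""
    if classification in predicted_presets:
        return predicted_presets[classification]
    return generate_reply(classification, history, behavior)

presets = {"business": "Your request has been forwarded to the card team."}
```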
In one embodiment, the processor when executing the computer program further performs the steps of:
acquiring a historical target record and behavior data of a target object;
analyzing and processing the historical target record and the behavior data of the target object according to the language analysis model to obtain the historical target demand characteristics and the historical target preference characteristics of the target object;
and generating reply content corresponding to the target data information according to the classification result, the historical target demand characteristics and the historical target preference characteristics, and displaying the reply content corresponding to the target data information to the target object.
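By way of illustration, composing the reply from the classification result and the two extracted feature sets might look as follows; the template and the feature shapes are assumptions:

```python
def compose_reply(classification, demand_features, preference_features):
    """Combine the classification result with the historical demand and
    preference characteristics into displayable reply content."""
    demand = ", ".join(demand_features)
    channel = preference_features.get("channel", "text")
    return (f"Regarding your {classification} request about {demand}, "
            f"we will follow up via {channel}.")

reply = compose_reply("consultation", ["mortgage rates"], {"channel": "phone"})
```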
In one embodiment, the processor when executing the computer program further performs the steps of:
acquiring a full target record and an incremental target record, and target object information respectively corresponding to the full target record and the incremental target record;
preprocessing the full target record and the incremental target record to obtain a preprocessed full target record and a preprocessed incremental target record;
predicting the preprocessed full target record, the preprocessed incremental target record and the target object information according to the predictor model to obtain predicted target data information;
and generating preset reply content according to the predicted target data information.
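By way of illustration, these four steps may be sketched with a simple frequency count standing in for the predictor model; the preprocessing and prediction rules are assumptions:

```python
from collections import Counter

def preprocess(records):
    """Toy preprocessing: drop empty entries and normalise case/whitespace."""
    return [r.strip().lower() for r in records if r.strip()]

def predict_target_data(full_records, incremental_records, object_info):
    """Stand-in predictor model: predict the target object's most frequent
    topic across the preprocessed full and incremental records."""
    counts = Counter(preprocess(full_records) + preprocess(incremental_records))
    topic, _ = counts.most_common(1)[0]
    return {"object_id": object_info["id"], "predicted_topic": topic}

def preset_reply(prediction):
    """Generate preset reply content from the predicted target data."""
    return f"Preset reply prepared for topic: {prediction['predicted_topic']}"

prediction = predict_target_data(["Loan", "loan", "card"], [" LOAN "], {"id": 7})
```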
In one embodiment, the processor when executing the computer program further performs the steps of:
acquiring comprehensive data;
and carrying out risk assessment on the comprehensive data according to the risk assessment sub-model to obtain a risk assessment result, and displaying the risk assessment result.
In one embodiment, the processor when executing the computer program further performs the steps of:
in response to a manual processing request of the target object, extracting the regional information from the target data information according to the language analysis model, and using the regional information and the target intention as conditions of a rule engine;
and assigning recommended objects according to the rule engine and the conditions of the rule engine.
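By way of illustration, the rule-engine assignment may be sketched as ordered condition matching; the rule table and agent names are made up:

```python
def assign_recommended_object(region, intention, rules, default="default_agent"):
    """Evaluate rule-engine conditions (region, target intention) in order
    and return the first matching recommended object, e.g. a human agent."""
    for condition, agent in rules:
        if condition == (region, intention):
            return agent
    return default

rules = [(("north", "complaint"), "agent_A"),
         (("south", "complaint"), "agent_B")]
```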
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when executed by a processor, implements the steps of the method embodiments described above.
In an embodiment, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the steps of the method embodiments described above.
The user information (including but not limited to user equipment information, user personal information, etc.) and the data (including but not limited to data for analysis, stored data, presented data, etc.) related to the present application are information and data authorized by the user or sufficiently authorized by each party.
Those skilled in the art will appreciate that implementing all or part of the above-described methods may be accomplished by a computer program stored on a non-transitory computer-readable storage medium, which, when executed, may include the flows of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. The non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive random access memory (ReRAM), magnetoresistive random access memory (Magnetoresistive Random Access Memory, MRAM), ferroelectric memory (Ferroelectric Random Access Memory, FRAM), phase change memory (Phase Change Memory, PCM), graphene memory, and the like. The volatile memory may include random access memory (Random Access Memory, RAM), external cache memory, and the like. By way of illustration, and not limitation, RAM is available in a variety of forms, such as static random access memory (Static Random Access Memory, SRAM) or dynamic random access memory (Dynamic Random Access Memory, DRAM). The databases referred to in the embodiments provided herein may include at least one of a relational database and a non-relational database. The non-relational database may include, but is not limited to, a blockchain-based distributed database, and the like. The processor referred to in the embodiments provided in the present application may be a general-purpose processor, a central processing unit, a graphics processor, a digital signal processor, a programmable logic unit, a data processing logic unit based on quantum computing, or the like, but is not limited thereto.
The technical features of the above embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in a combination of these technical features, it should be considered to be within the scope of this description.
The foregoing examples represent only a few embodiments of the application and are described in relative detail, but they should not be construed as limiting the scope of the application. It should be noted that several variations and modifications can be made by those skilled in the art without departing from the spirit of the application, and these all fall within the protection scope of the application. Accordingly, the protection scope of the application shall be subject to the appended claims.

Claims (12)

1. A method of data processing, the method comprising:
receiving target data information; the target data information comprises voice data and/or text data;
performing semantic analysis on the voice data and/or the text data through a reinforcement-learning language analysis model, and determining a target intention corresponding to the voice data and/or the text data; wherein the reinforcement-learning language analysis model updates model parameters based on a reward function and a feedback mechanism of environment interaction feedback;
classifying the target intention to obtain a classification result of the target data information corresponding to the target intention;
and determining the reply content corresponding to the target data information based on the classification result.
2. The method according to claim 1, wherein after determining reply content corresponding to the target data information based on the classification result, the method further comprises:
receiving feedback data information, wherein the feedback data information comprises evaluation feedback of a target object aiming at the reply content;
and using the evaluation feedback in the feedback data information as environment interaction feedback of the language analysis model, and updating the feature weights, biases, learning rate and hyperparameters in the model parameters of the language analysis model through the reward function and the environment interaction feedback.
3. The method according to claim 1, wherein the classifying the target intention to obtain a classification result of the target data information corresponding to the target intention includes:
extracting features of the target intention to obtain a feature vector corresponding to the target intention;
calculating the probability of the classification label corresponding to the target intention based on the feature vector;
and taking the classification label corresponding to the maximum value of the probability as a classification result of the target data information corresponding to the target intention.
4. The method according to claim 1, wherein the determining reply content corresponding to the target data information based on the classification result includes:
querying whether a classification result identical to the classification result of the target data information exists among the classification results corresponding to the predicted reply content;
if a classification result identical to the classification result of the target data information exists among the classification results corresponding to the predicted reply content, using the preset reply content corresponding to the predicted target data information as the reply content corresponding to the target data information;
and if no classification result identical to the classification result of the target data information exists among the classification results corresponding to the predicted reply content, generating the reply content corresponding to the target data information based on the historical target data record and the behavior data of the target object.
5. The method of claim 4, wherein generating reply content corresponding to the target data information based on the historical target data record and the behavior data of the target object, comprises:
acquiring a historical target record and behavior data of a target object;
analyzing and processing the historical target record and the behavior data of the target object according to the language analysis model to obtain the historical target demand characteristics and the historical target preference characteristics of the target object;
and generating reply content corresponding to the target data information according to the classification result, the historical target demand characteristic and the historical target preference characteristic, and displaying the reply content corresponding to the target data information to the target object.
6. The method of claim 1, wherein the language analysis model comprises a predictor model, the method further comprising:
acquiring a full target record and an incremental target record, and target object information respectively corresponding to the full target record and the incremental target record;
preprocessing the full target record and the incremental target record to obtain a preprocessed full target record and a preprocessed incremental target record;
predicting the preprocessed full target record, the preprocessed incremental target record and the target object information according to the predictor model to obtain predicted target data information;
and generating preset reply content according to the predicted target data information.
7. The method of claim 1, wherein the reinforcement learning-based language analysis model includes a risk assessment sub-model, and wherein after determining reply content corresponding to the target data information based on the classification result, the method further comprises:
acquiring comprehensive data;
and performing risk assessment on the comprehensive data according to the risk assessment sub-model to obtain a risk assessment result, and displaying the risk assessment result.
8. The method according to claim 1, wherein after determining reply content corresponding to the target data information based on the classification result, the method further comprises:
in response to a manual processing request of a target object, extracting regional information from the target data information according to the language analysis model, and using the regional information and the target intention as conditions of a rule engine;
and distributing recommended objects according to the rule engine and the conditions of the rule engine.
9. A data processing apparatus, the apparatus comprising:
the first receiving module is used for receiving the target data information; the target data information comprises voice data and/or text data;
The analysis module is used for performing semantic analysis on the voice data and/or the text data through a reinforcement-learning language analysis model and determining a target intention corresponding to the voice data and/or the text data; the reinforcement-learning language analysis model updates model parameters based on a reward function and a feedback mechanism of environment interaction feedback;
the classifying module is used for classifying the target intention to obtain a classifying result of the target data information corresponding to the target intention;
and the first generation module is used for determining reply content corresponding to the target data information based on the classification result.
10. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any one of claims 1 to 8 when the computer program is executed.
11. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 8.
12. A computer program product comprising a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the method of any one of claims 1 to 8.
CN202311145933.5A 2023-09-06 2023-09-06 Data processing method, device, computer equipment and storage medium Pending CN117171403A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311145933.5A CN117171403A (en) 2023-09-06 2023-09-06 Data processing method, device, computer equipment and storage medium


Publications (1)

Publication Number Publication Date
CN117171403A true CN117171403A (en) 2023-12-05

Family

ID=88935099




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination