CN116362777A - Service processing method, device, equipment and medium based on user emotion recognition

Publication number: CN116362777A
Application number: CN202310345325.2A
Authority: CN (China)
Prior art keywords: information, emotion, user, moment, current moment
Legal status: Pending
Other languages: Chinese (zh)
Inventors: Xue Luwen (薛璐文), Chen Yonglu (陈永录), Zhang Jingjing (张晶晶), Guo Xin (郭新)
Current Assignee: Industrial and Commercial Bank of China Ltd (ICBC)
Original Assignee: Industrial and Commercial Bank of China Ltd (ICBC)
Application filed by Industrial and Commercial Bank of China Ltd ICBC; priority to CN202310345325.2A

Classifications

    • G06Q 30/0281 Customer communication at a business location, e.g. providing product or service information, consulting
    • G06F 16/3343 Query execution using phonetics
    • G06F 16/3344 Query execution using natural language analysis
    • G06F 16/35 Clustering; Classification (unstructured textual data)
    • G06F 16/65 Clustering; Classification (audio data)
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 40/289 Phrasal analysis, e.g. finite state techniques or chunking
    • G06N 3/0442 Recurrent networks characterised by memory or gating, e.g. long short-term memory [LSTM] or gated recurrent units [GRU]
    • G06N 3/048 Activation functions
    • G06N 3/08 Learning methods
    • G10L 25/30 Speech or voice analysis characterised by the analysis technique using neural networks
    • G10L 25/63 Speech or voice analysis specially adapted for estimating an emotional state

Abstract

The application belongs to the technical field of artificial intelligence and provides a business processing method, device, equipment and medium based on user emotion recognition. Voice information of a first user at the current moment under the current service is acquired, and key text information is obtained from it based on a preset self-attention model. Global dialogue information and emotional state information of the first user at the previous moment and emotion characterization information of a second user at the previous moment are obtained, and the emotional state information of the first user at the current moment is determined according to the key text information, the global dialogue information and the emotional state information. The emotion characterization information of the first user at the current moment is then determined according to the emotion characterization information of the second user and the emotional state information of the first user at the current moment. Finally, the probabilities of this emotion characterization information under different emotions are calculated, the emotion corresponding to the highest probability value is identified as the emotion of the first user at the current moment, and a corresponding business service is provided for the user according to that emotion, thereby improving service quality.

Description

Service processing method, device, equipment and medium based on user emotion recognition
Technical Field
The present application relates to the field of artificial intelligence technologies, and in particular, to a method, an apparatus, a device, and a medium for processing a service based on user emotion recognition.
Background
In the service industry, in order to improve service quality and service grade, the emotion of the user can be identified; through emotion recognition, the user's requirements and satisfaction with the service can be better understood, and the customer service level can be improved.
In the prior art, when identifying the emotion of the user, the emotion can only be identified from the user's current voice information, and the real emotion of the user cannot be understood in connection with the context of the dialogue between the user and the seat service personnel. As a result, emotion recognition is inaccurate, the quality of service provided to the user is affected, and the user experience is poor.
Disclosure of Invention
The application provides a business processing method, device, equipment and medium based on user emotion recognition, which are used for solving the problems that the emotion recognition of a user is inaccurate and the user experience is poor in the prior art.
In a first aspect, the present application provides a service processing method based on user emotion recognition, including:
acquiring voice information of a first user at the current moment under the current service;
performing text feature extraction processing on the voice information based on a preset self-attention model to obtain key text information;
acquiring global dialogue information and emotion state information of the first user at the previous moment and emotion characterization information of the second user at the previous moment;
determining the emotion state information of the first user at the current moment based on a first gating circulating unit according to the key text information, the global dialogue information at the previous moment and the emotion state information;
determining emotion representation information of the first user at the current moment based on a second gating circulating unit according to emotion representation information of the second user at the previous moment and emotion state information of the first user at the current moment;
inputting emotion representation information of the first user at the current moment into a preset emotion classification model, calculating the probability of the emotion representation information under a plurality of preset emotions, and identifying the emotion corresponding to the highest probability value as the emotion of the first user at the current moment;
and providing corresponding business services for the user according to the identified emotion.
Optionally, the text feature extraction processing is performed on the voice information based on a preset self-attention model to obtain key text information, including:
Classifying the voice information to determine a service scene to which the voice information belongs;
coding the voice information after the service scene is determined to obtain text information in a vectorization form;
based on a preset multi-head self-attention model, extracting text features of the text information to obtain the text information containing key features;
and decoding the text information containing the key features to obtain the key text information in a preset form.
Optionally, the classifying the voice information to determine a service scenario to which the voice information belongs includes:
identifying keywords in the voice information;
matching the keywords with text information stored in a preset service scene library, wherein the preset service scene library comprises a plurality of service scenes, and each service scene stores text information conforming to the service scene;
and determining the business scene to which the text information belongs as the business scene to which the voice information belongs when the keyword is matched with the text information.
Optionally, the text feature extraction processing is performed on the text information based on a preset multi-head self-attention model to obtain text information including key features, including:
Performing word segmentation processing on the text information in the vectorization form to obtain a plurality of segmented words;
based on a preset multi-head self-attention model, performing attention calculation on each word segment, and determining a plurality of word segments focused in the voice information;
and performing splicing processing on the plurality of segmented words to obtain text information containing key features.
Optionally, the determining, based on the first gating circulation unit, the emotional state information of the first user at the current moment according to the key text information, the global dialogue information at the previous moment and the emotional state information includes:
acquiring global dialogue information of the first user before the previous moment;
determining contextual dialogue information of the current moment related to the key text information according to the global dialogue information at and before the previous moment;
and updating the emotion state information through the first gating circulation unit according to the key text information, the contextual dialogue information and the emotion state information at the previous moment at the current moment, and determining the emotion state information of the first user at the current moment.
Optionally, the determining the contextual dialogue information of the current moment related to the key text information according to the global dialogue information at and before the previous moment includes:
Normalizing the global dialogue information at the previous moment and before, and determining the attention weight of the key text information;
and carrying out pooling processing with the attention weight according to the global dialogue information at the previous moment and before to obtain the contextual dialogue information of the current moment related to the key text information.
Optionally, the method further comprises:
and determining global dialogue information of the first user at the current moment based on a third gating circulating unit according to the key text information, the global dialogue information at the previous moment and the emotion state information at the previous moment.
Optionally, the method further comprises:
and outputting prompt information corresponding to the emotion of the first user at the current moment.
In a second aspect, the present application provides a service processing apparatus based on user emotion recognition, including:
the acquisition module is used for acquiring the voice information of the first user at the current moment under the current service;
the processing module is used for extracting text characteristics of the voice information based on a preset self-attention model to obtain key text information;
the acquisition module acquires global dialogue information and emotion state information of the first user at the previous moment and emotion representation information of the second user at the previous moment;
The determining module is used for determining the emotion state information of the first user at the current moment based on a first gating circulating unit according to the key text information, the global dialogue information at the previous moment and the emotion state information;
the determining module is further configured to determine, based on a second gating circulation unit, emotion representation information of the first user at the current moment according to emotion representation information of the second user at a previous moment and emotion state information of the first user at the current moment;
the identification module is used for inputting the emotion representation information of the first user at the current moment into a preset emotion classification model, calculating the probability of the emotion representation information under a plurality of preset emotions, and identifying the emotion corresponding to the highest probability value as the emotion of the first user at the current moment;
the processing module is further used for providing corresponding business services for the user according to the identified emotion.
In a third aspect, the present application provides an electronic device, comprising:
at least one processor, memory;
the memory stores computer-executable instructions;
the at least one processor executes computer-executable instructions stored in the memory to cause the electronic device to perform the business processing method based on user emotion recognition of any one of the first aspects.
In a fourth aspect, the present application provides a computer-readable storage medium having stored therein computer-executable instructions which, when executed by a processor, are configured to implement the business processing method based on user emotion recognition according to any one of the first aspects.
In a fifth aspect, the present application provides a computer program product comprising a computer program which, when executed by a processor, implements the user emotion recognition based service processing method of any of the first aspects.
According to the business processing method, device, equipment and medium based on user emotion recognition, the voice information of the first user at the current moment under the current business is obtained, and text feature extraction processing is carried out on the voice information based on the preset self-attention model, so that key text information is obtained. And acquiring global dialogue information and emotion state information of the first user at the previous moment and emotion representation information of the second user at the previous moment, and determining the emotion state information of the first user at the current moment based on the first gating circulating unit according to the key text information, the global dialogue information and the emotion state information at the previous moment. And determining the emotion representation information of the first user at the current moment based on the second gating circulating unit according to the emotion representation information of the second user at the previous moment and the emotion state information of the first user at the current moment. And then inputting the emotion representation information of the first user at the current moment into a preset emotion classification model, calculating the probability of the emotion representation information under a plurality of preset emotions, identifying the emotion corresponding to the highest probability value as the emotion of the first user at the current moment, and providing corresponding business services for the user according to the emotion. According to the method, the emotion of the first user is accurately identified, so that better business service is conveniently provided for the user, the service quality is improved, and the user experience is improved.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
Fig. 1 is a schematic flow chart of a business processing method based on user emotion recognition according to an embodiment of the present application;
fig. 2 is a flow chart of a method for obtaining key text information according to an embodiment of the present application;
fig. 3 is a flowchart of a method for determining emotional state information of a first user at a current moment according to an embodiment of the present application;
fig. 4 is a schematic diagram of a business processing method based on user emotion recognition according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of a service processing device based on user emotion recognition according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Specific embodiments thereof have been shown by way of example in the drawings and will herein be described in more detail. These drawings and the written description are not intended to limit the scope of the inventive concepts in any way, but to illustrate the concepts of the present application to those skilled in the art by reference to specific embodiments.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present application as detailed in the accompanying claims.
In the description of the embodiments of the present application, the terms "inner", "outer", etc. refer to directions or positional relationships based on the directions or positional relationships shown in the drawings, which are merely for convenience of description, and do not indicate or imply that the apparatus or component must have a specific orientation, be constructed and operated in a specific orientation, and thus should not be construed as limiting the present application.
Furthermore, in the description of the embodiments of the present application, unless explicitly stated and limited otherwise, the terms "connected" and "coupled" are to be construed broadly, and may be, for example, fixedly connected, detachably connected, or integrally connected; can be mechanically or electrically connected; can be directly connected or indirectly connected through an intermediate medium, and can be the communication between the two components. The specific meaning of the above terms in the embodiments of the present application will be understood by those skilled in the art according to the specific circumstances.
It should be noted that, the user information (including but not limited to user equipment information, user personal information, etc.) and the data (including but not limited to data for analysis, stored data, presented data, etc.) related to the present application are information and data authorized by the user or fully authorized by each party, and the collection, use and processing of the related data need to comply with the related laws and regulations and standards of the related country and region, and provide corresponding operation entries for the user to select authorization or rejection.
Taking the financial service industry as an example, with the increase of the requirements of financial service institutions on the service quality, differentiated services become a trend more and more, and the gist of the differentiated services is to improve the service quality and the service grade of the financial institutions, so that the service can better meet the requirements of users, and thereby the market and the clients are won as much as possible.
The core of the differentiated service is to understand the real demands of users, and each sentence of the seat service personnel can influence the emotion of the users in the customer service dialogue process, so that the emotion of the users and the seat service personnel is accurately identified to have certain necessity and practicability, and the seat service personnel can timely know the demands and satisfaction degree of the users on the service and the service attitude of the users, thereby improving the service level.
In the prior art, when the emotion of the user is identified, the emotion of the user can be identified only according to the current voice information of the user, and the user cannot be contacted with the context information of the dialogue between the user and the seat service personnel to understand the true emotion of the user, so that the emotion identification of the user is inaccurate, and the user experience is affected.
Therefore, in order to solve the above technical problems in the prior art, the present application proposes a method, an apparatus, a device, and a medium for processing a service based on user emotion recognition, where by obtaining voice information of a first user at a current time under a current service, text feature extraction processing is performed on the voice information based on a preset self-attention model, so as to obtain key text information. And acquiring global dialogue information and emotion state information of the first user at the previous moment and emotion representation information of the second user at the previous moment, and determining the emotion state information of the first user at the current moment based on the first gating circulating unit according to the key text information, the global dialogue information and the emotion state information at the previous moment. And determining the emotion representation information of the first user at the current moment based on the second gating circulating unit according to the emotion representation information of the second user at the previous moment and the emotion state information of the first user at the current moment. And further, inputting the emotion representation information of the first user at the current moment into a preset emotion classification model, calculating the probability of the emotion representation information under a plurality of preset emotions, identifying the emotion corresponding to the highest probability value as the emotion of the first user at the current moment, and providing corresponding business services for the user according to the identified emotion. The method effectively improves the service quality of the user and improves the user experience.
The business processing method based on user emotion recognition can be used in customer service, and the application scene of the business processing method can be various industries requiring customer service, including but not limited to: financial industry, electronic commerce industry, educational industry, banking institution, etc.
It should be noted that the service processing method, device, equipment and medium based on user emotion recognition, which are provided by the application, can be used in the technical field of artificial intelligence, and can also be used in any field except the technical field of artificial intelligence, and the application field is not limited.
The following describes the technical solutions of the present application and how the technical solutions of the present application solve the above technical problems in detail with specific embodiments. The following embodiments may be combined with each other, and the same or similar concepts or processes may not be described in detail in some embodiments. Embodiments of the present application will be described below with reference to the accompanying drawings.
Fig. 1 is a flow chart of a business processing method based on user emotion recognition according to an embodiment of the present application, where an execution subject of the method may be a terminal with emotion recognition function, for example, may be a personal computer, a notebook computer, a smart phone, a tablet computer, a portable wearable device, a server or a server cluster, etc. The method in this embodiment may be implemented by software, hardware, or a combination of software and hardware. As shown in fig. 1, the method specifically includes the following steps:
S101, acquiring voice information of a first user at the current moment under the current service.
In this embodiment, taking the financial industry as an example, the first user may be a client performing business transaction, and the current business may be a financial consultation business, a financial recommendation business, or the like.
The voice information may be dialogue information exchanged with the seat service personnel at the current time.
S102, text feature extraction processing is carried out on the voice information based on a preset self-attention model, and key text information is obtained.
After the voice information is acquired, the voice information needs to be preprocessed to determine the service scene to which the voice information belongs. The preprocessing method can be a dictionary mapping method.
Specifically, keywords in the voice information are recognized and matched against the text information stored in a preset business scene library. The preset business scene library includes a plurality of business scenes, such as a small-talk scene, a loan recommendation scene, a credit card introduction scene or a financial recommendation scene, and each business scene stores text information conforming to that scene.
When a keyword matches a piece of text information, the business scene to which that text information belongs is determined as the business scene to which the voice information belongs. For example, it may be determined that the voice information belongs to a loan recommendation scene, a financial recommendation scene, or the like.
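As an illustration of this dictionary-mapping step, the following Python sketch matches recognized keywords against a hypothetical scene library; the library contents, scene names and matching rule are illustrative assumptions rather than part of the patent.

```python
# Hypothetical service scene library: scene name -> text snippets stored for that scene.
SCENE_LIBRARY = {
    "small talk": ["hello", "thank you", "goodbye"],
    "loan recommendation": ["loan", "interest rate", "repayment"],
    "credit card introduction": ["credit card", "annual fee", "credit limit"],
    "financial recommendation": ["financial service", "wealth management", "yield"],
}

def classify_scene(keywords):
    """Return the scene whose stored text matches the most recognized keywords."""
    best_scene, best_hits = None, 0
    for scene, snippets in SCENE_LIBRARY.items():
        hits = sum(1 for kw in keywords if any(kw in s or s in kw for s in snippets))
        if hits > best_hits:
            best_scene, best_hits = scene, hits
    return best_scene

# Keywords recognized from "I want to consult the financial service this afternoon".
print(classify_scene(["consult", "financial service", "afternoon"]))  # financial recommendation
```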
After the business scene is determined, text feature processing is carried out on the voice information through a multi-head self-attention model, so that the attended information is extracted from the voice information and key text information is obtained.
For example, assume that the voice information is "I want to consult the financial service this afternoon"; after text feature processing, the obtained key text information is "consult the financial service".
It is to be understood that the above examples are illustrative only and are not to be construed as limiting the present application.
S103, global dialogue information and emotion state information of the first user at the previous moment and emotion characterization information of the second user at the previous moment are obtained.
The second user may be an attendant of a financial institution talking to the first user. When the first user performs an interactive dialogue with the second user, the emotion of the second user affects the emotion of the first user, so in this embodiment, emotion characterization information of the second user at a previous moment needs to be acquired. Wherein, the emotion characterization information refers to index information capable of reflecting emotion of a user.
And acquiring global dialogue information of the first user at the previous moment, wherein the global dialogue information refers to all dialogue information between the user and the seat service personnel at the previous moment.
And acquiring the emotional state information of the first user at the previous moment, wherein the emotional state information refers to information capable of reflecting the emotional state of the user.
S104, determining the emotion state information of the first user at the current moment based on the first gating circulating unit according to the key text information, the global dialogue information at the previous moment and the emotion state information.
In this step, the emotional state information of the first user, i.e. the client, at the current moment is updated through the first gating circulation unit (Gated Recurrent Unit, GRU).
S105, determining the emotion representation information of the first user at the current moment based on the second gating circulating unit according to the emotion representation information of the second user at the previous moment and the emotion state information of the first user at the current moment.
In the step, the emotion representation information of the first user, namely the client, at the current moment is updated through the second gating circulating unit.
S106, inputting the emotion representation information of the first user at the current moment into a preset emotion classification model, calculating the probability of the emotion representation information under a plurality of preset emotions, and identifying the emotion corresponding to the highest probability value as the emotion of the first user at the current moment.
The preset emotion classification model may be, for example, a two-layer perceptron with softmax normalization, which is a discriminative model. It is mainly used to calculate the probabilities of the emotion characterization information of the first user at the current moment under different preset emotions. The preset emotions include, but are not limited to: happy, neutral, anger, frustration, etc.
By way of example, the calculated probability of this emotion characterization information is assumed to be 80% in a "happy" emotion, 60% in a "neutral" emotion, 65% in an "anger" emotion, and 50% in a "frustrated" emotion. And identifying the emotion 'happy' corresponding to the highest probability value of 80% as the emotion of the client at the current moment.
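As a toy illustration of this selection step, using the probability values from the example above (assumed values, not outputs of a trained model):

```python
# Assumed probabilities for the example above; in practice they come from the
# preset emotion classification model.
probs = {"happy": 0.80, "neutral": 0.60, "anger": 0.65, "frustrated": 0.50}

# The emotion with the highest probability is taken as the first user's current emotion.
current_emotion = max(probs, key=probs.get)
print(current_emotion)  # happy
```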
And S107, providing corresponding business services for the user according to the identified emotion.
For example, if the emotion of the user is identified as curiosity, the user is interested in the business currently being consulted, and more information about the business is provided for the user according to the emotion, so that the user is better served and the requirement of the user is met.
It should be noted that, in the present application, the first user may also be an agent service person of a financial institution, and the second user may be a customer of the financial institution who performs a conversation with the agent service person. The method for identifying the emotion of the first user, i.e. the seat service personnel, is the same as the method in this embodiment, and in order to avoid redundancy, repeated description is omitted here.
In the above embodiment of the present application, by acquiring the voice information of the first user at the current moment under the current service, text feature extraction processing is performed on the voice information based on a preset self-attention model, so as to obtain the key text information. And acquiring global dialogue information and emotion state information of the first user at the previous moment and emotion representation information of the second user at the previous moment, and determining the emotion state information of the first user at the current moment based on the first gating circulating unit according to the key text information, the global dialogue information and the emotion state information at the previous moment. And determining the emotion representation information of the first user at the current moment based on the second gating circulating unit according to the emotion representation information of the second user at the previous moment and the emotion state information of the first user at the current moment. And further, inputting the emotion representation information of the first user at the current moment into a preset emotion classification model, calculating the probability of the emotion representation information under a plurality of preset emotions, identifying the emotion corresponding to the highest probability value as the emotion of the first user at the current moment, and providing corresponding business services for the user according to the identified emotion. According to the embodiment, the emotion of the first user is accurately recognized, so that better business service is conveniently provided for the user, and the service quality is improved.
Further, on the basis of the above embodiment, the process of extracting text features from the voice information based on the preset self-attention model in S102 to obtain the key text information is described in the following embodiment. Fig. 2 is a flow chart of a method for obtaining key text information according to an embodiment of the present application, as shown in fig. 2, the method includes the following steps:
s201, classifying the voice information, and determining the service scene to which the voice information belongs.
S202, coding the voice information after the service scene is determined, and obtaining text information in a vectorization form.
The voice information is encoded by an encoder to obtain text information in a vectorized form.
S203, based on a preset multi-head self-attention model, text feature extraction processing is carried out on the text information, and the text information containing key features is obtained.
Specifically, word segmentation is performed on the vectorized text information to obtain a plurality of word segments, attention calculation is performed on each word segment based on the preset multi-head self-attention model, and the plurality of word segments focused on in the voice information are determined.
Since the self-attention model may focus on different words of a sentence each time it is calculated, different focused word segments can be obtained by performing the self-attention calculation multiple times.
Optionally, the attention of each word segment is calculated by the following formula (1):
Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V (1)
where Q denotes the query vector; K denotes the key vector; V denotes the value vector; and d_k denotes a preset dimension parameter of the vectors.
Optionally, the plurality of attended word segments head_i in the voice information are calculated by the following formula (2):
head_i = Attention(Q W_i^Q, K W_i^K, V W_i^V) (2)
where i denotes the i-th word segment; W_i^Q denotes a first preset initialization parameter of the i-th word segment on the query vector Q; W_i^K denotes a first preset initialization parameter of the i-th word segment on the key vector K; and W_i^V denotes a first preset initialization parameter of the i-th word segment on the value vector V.
And performing splicing processing on the plurality of segmented words to obtain text information containing key features.
Optionally, the text information containing the key features is calculated by the following formula (3):
MultiHead(Q, K, V) = Concat(head_1, …, head_n) L^O (3)
where L^O denotes the second preset initialization parameter preset on the query vector Q, the key vector K and the value vector V.
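A compact PyTorch sketch of formulas (1)-(3) is given below, treating each word-segment vector as one position of the input sequence; the dimension sizes, the use of torch and the weight layout are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def attention(Q, K, V):
    """Formula (1): scaled dot-product attention."""
    d_k = Q.size(-1)
    scores = Q @ K.transpose(-2, -1) / d_k ** 0.5
    return F.softmax(scores, dim=-1) @ V

class MultiHeadSelfAttention(torch.nn.Module):
    def __init__(self, d_model, n_heads):
        super().__init__()
        self.d_head = d_model // n_heads
        self.n_heads = n_heads
        # W_i^Q, W_i^K, W_i^V from formula (2), stored as one projection per role.
        self.w_q = torch.nn.Linear(d_model, d_model, bias=False)
        self.w_k = torch.nn.Linear(d_model, d_model, bias=False)
        self.w_v = torch.nn.Linear(d_model, d_model, bias=False)
        # L^O from formula (3): projection applied after concatenating the heads.
        self.w_o = torch.nn.Linear(d_model, d_model, bias=False)

    def forward(self, x):                     # x: (seq_len, d_model) word-segment vectors
        seq_len, d_model = x.shape
        def split(t):                         # -> (n_heads, seq_len, d_head)
            return t.view(seq_len, self.n_heads, self.d_head).transpose(0, 1)
        heads = attention(split(self.w_q(x)), split(self.w_k(x)), split(self.w_v(x)))
        concat = heads.transpose(0, 1).reshape(seq_len, d_model)   # Concat(head_1..head_n)
        return self.w_o(concat)               # formula (3)

# Example: 6 word segments represented by 64-dimensional vectorized text features.
out = MultiHeadSelfAttention(d_model=64, n_heads=4)(torch.randn(6, 64))
print(out.shape)                              # torch.Size([6, 64])
```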
S204, decoding the text information containing the key features to obtain the key text information in a preset form.
And decoding the text information containing the key features through a decoder to obtain the key text information in a preset form.
In the above embodiment of the present application, by performing classification processing on the voice information, determining a service scenario to which the voice information belongs, performing coding processing on the voice information after determining the service scenario to obtain text information in a vectorized form, performing text feature extraction processing on the text information based on a preset multi-head self-attention model to obtain text information containing key features, and finally performing decoding processing on the text information containing the key features to obtain key text information in a preset form. The method of the embodiment ensures that the obtained key text information is more accurate.
Further, on the basis of the above embodiment, the process in step S104 of determining the emotional state information of the first user at the current moment based on the first gating circulation unit, according to the key text information, the global dialogue information at the previous moment and the emotional state information, is described below. Fig. 3 is a flowchart of a method for determining the emotional state information of the first user at the current moment according to an embodiment of the present application, and as shown in fig. 3, the method includes the following steps:
s301, global dialogue information of a first user before the previous moment is acquired.
In this embodiment, the global session information refers to all session information between the first user and the second user.
The global dialogue information at and before the previous moment is [g_1, g_2, g_3, …, g_{t-1}].
The global dialogue information at each moment can be updated from the global dialogue information at the previous moment through the third gating circulation unit GRU_G.
Therefore, the global dialogue information at the current moment can be determined based on the third gating circulation unit GRU_G according to the key text information, the global dialogue information at the previous moment and the emotional state information at the previous moment.
Optionally, the global dialogue information g_t of the first user at the current moment is calculated by the following formula (4):
g_t = GRU_G(g_{t-1}, (d_t ⊕ q_{A,t-1})) (4)
where g_t denotes the global dialogue information at the current moment; g_{t-1} denotes the global dialogue information at the previous moment; d_t denotes the key text information at the current moment; q_{A,t-1} denotes the emotional state information at the previous moment; and ⊕ denotes vector concatenation.
S302, determining the contextual dialogue information of the current moment related to the key text information according to the global dialogue information at and before the previous moment.
In particular, the method comprises the steps of,
and carrying out normalization processing on the global dialogue information at the previous moment and before, and determining the attention weight of the key text information.
Optionally, the attention weight α of the key text information is calculated by the following formula (5):
α = softmax(d_t^T W_α [g_1, g_2, …, g_{t-1}]) (5)
where W_α denotes a preset training parameter.
And carrying out pooling processing with the attention weight according to the global dialogue information at the previous moment and before to obtain the contextual dialogue information of the current moment related to the key text information.
Optionally, the contextual dialogue information c_t is calculated by the following formula (6):
c_t = α[g_1, g_2, …, g_{t-1}]^T (6)
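A minimal sketch of formulas (5)-(6), assuming the key text vector d_t and the past global dialogue states g_1, …, g_{t-1} share the same dimension; all shapes and the matrix form of W_α are assumptions.

```python
import torch
import torch.nn.functional as F

def context_from_history(d_t, g_history, W_alpha):
    """Formulas (5)-(6): attention weights over the past global dialogue states,
    then attention-weighted pooling into the contextual dialogue information c_t."""
    G = torch.stack(g_history)            # (t-1, d): g_1 ... g_{t-1}
    scores = d_t @ W_alpha @ G.T          # one score per past global state
    alpha = F.softmax(scores, dim=-1)     # formula (5): normalized attention weights
    c_t = alpha @ G                       # formula (6): pooled context, shape (d,)
    return c_t, alpha

d = 64
d_t = torch.randn(d)                          # key text information at the current moment
history = [torch.randn(d) for _ in range(5)]  # g_1 ... g_{t-1}
W_alpha = torch.randn(d, d)                   # preset training parameter W_alpha
c_t, alpha = context_from_history(d_t, history, W_alpha)
print(c_t.shape, alpha.shape)                 # torch.Size([64]) torch.Size([5])
```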
S303, updating the emotional state information through the first gating circulation unit GRU_P according to the key text information at the current moment, the contextual dialogue information and the emotional state information at the previous moment, and determining the emotional state information of the first user at the current moment.
Optionally, the emotional state information q_{A,t} of the first user at the current moment is calculated by the following formula (7):
q_{A,t} = GRU_P(q_{A,t-1}, (d_t ⊕ c_t)) (7)
where q_{A,t-1} denotes the emotional state information at the previous moment, c_t denotes the contextual dialogue information at the current moment, and ⊕ denotes vector concatenation.
In the above embodiment of the present application, the global dialogue information at and before the previous moment is acquired, the contextual dialogue information of the current moment related to the key text information is determined from that global dialogue information, and the emotional state information is then updated through the first gating circulation unit according to the key text information at the current moment, the contextual dialogue information and the emotional state information at the previous moment, so that the emotional state information of the first user at the current moment is determined more accurately.
After the emotional state information of the first user at the current moment is determined, the emotion characterization information of the first user at the current moment is determined based on the second gating circulation unit GRU_E according to the emotion characterization information of the second user at the previous moment and the emotional state information of the first user at the current moment.
Optionally, the emotion characterization information e_t of the first user at the current moment is calculated by the following formula (8):
e_t = GRU_E(e_{t-1}, q_{A,t}) (8)
where e_{t-1} denotes the emotion characterization information of the second user at the previous moment, and q_{A,t} denotes the emotional state information of the first user at the current moment.
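The three GRU updates of formulas (4), (7) and (8) can be sketched with torch.nn.GRUCell as follows; the concatenated inputs are inferred from the quantities named in the text, and the exact input composition and dimensions are assumptions.

```python
import torch

d = 64                                                      # assumed state dimension
gru_G = torch.nn.GRUCell(input_size=2 * d, hidden_size=d)   # third unit: global dialogue info
gru_P = torch.nn.GRUCell(input_size=2 * d, hidden_size=d)   # first unit: emotional state
gru_E = torch.nn.GRUCell(input_size=d, hidden_size=d)       # second unit: emotion characterization

d_t = torch.randn(1, d)      # key text information at the current moment
g_prev = torch.randn(1, d)   # global dialogue information at the previous moment
q_prev = torch.randn(1, d)   # first user's emotional state at the previous moment
c_t = torch.randn(1, d)      # contextual dialogue information from formula (6)
e_prev = torch.randn(1, d)   # second user's emotion characterization at the previous moment

g_t = gru_G(torch.cat([d_t, q_prev], dim=-1), g_prev)       # formula (4)
q_t = gru_P(torch.cat([d_t, c_t], dim=-1), q_prev)          # formula (7)
e_t = gru_E(q_t, e_prev)                                    # formula (8)
print(g_t.shape, q_t.shape, e_t.shape)
```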
After determining emotion characterization information of a first user at the current moment, inputting the emotion characterization information into a preset emotion classification model, calculating the probability of the emotion characterization information under a plurality of preset emotions, and identifying the emotion corresponding to the highest probability value as the emotion of the first user at the current moment.
Optionally, the emotion of the first user at the current moment is calculated by the following formulas (9)-(10):
P_t = softmax(W_smax ReLU(W_l e_t + b_l) + b_smax) (9)
ŷ_t = argmax_i(P_t[i]) (10)
where P_t denotes the probabilities of the emotion characterization information under the preset emotions; W denotes a first training parameter; b denotes a second training parameter; and ŷ_t denotes the emotion whose probability takes the maximum value.
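A sketch of the two-layer perceptron classifier of formulas (9)-(10); the hidden width and the emotion label set are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

EMOTIONS = ["happy", "neutral", "anger", "frustrated"]   # assumed preset emotion set
d, hidden = 64, 32                                       # assumed dimensions

W_l = torch.nn.Linear(d, hidden)                         # W_l, b_l in formula (9)
W_smax = torch.nn.Linear(hidden, len(EMOTIONS))          # W_smax, b_smax in formula (9)

def classify(e_t):
    P_t = F.softmax(W_smax(F.relu(W_l(e_t))), dim=-1)    # formula (9)
    y_hat = P_t.argmax(dim=-1)                           # formula (10)
    return EMOTIONS[y_hat.item()], P_t

e_t = torch.randn(1, d)          # emotion characterization information at the current moment
emotion, P_t = classify(e_t)
print(emotion, P_t)
```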
After determining the emotion of the first user at the current moment, the prompting information corresponding to the emotion can be output.
In example 1, if the first user is a client, it is determined from the user's voice message that the business scene to which the voice information belongs is a credit card introduction scene. Assuming that the user's emotion is identified as "puzzled and confused", the seat service personnel are prompted through the display panel to introduce the credit card service to the user in detail, thereby better helping the seat service personnel guide the user to understand the product and complete the business transaction.
In example 2, if the first user is a client, it is determined from the user's voice message that the business scene to which the voice information belongs is a financial recommendation scene. Assuming that the user's emotion is identified as "negative and annoyed", the seat service personnel are prompted through the display panel that the user has low interest in the current financial service, which helps the seat service personnel ask the user follow-up questions to understand the client's interests in depth and make differentiated recommendations.
In example 3, if the first user is a seat service person, and the emotion of the first user is identified as "impatient and angry", the seat service person is prompted through the display panel to pay attention to the service wording, so as to help the seat service person adjust their emotion in time and improve service quality.
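A minimal sketch of how such prompts might be looked up from the identified emotion and business scene; the table entries merely paraphrase examples 1-3 and are illustrative.

```python
# Hypothetical prompt table keyed by (business scene, identified emotion),
# paraphrasing examples 1-3 above.
PROMPTS = {
    ("credit card introduction", "puzzled"): "Introduce the credit card service to the client in detail.",
    ("financial recommendation", "annoyed"): "Client shows low interest; ask follow-up questions and recommend differently.",
    ("any", "angry"): "Agent: please mind your service wording and adjust your emotion.",
}

def prompt_for(scene, emotion):
    # Fall back to scene-independent prompts (e.g. for the agent's own emotion).
    return PROMPTS.get((scene, emotion)) or PROMPTS.get(("any", emotion), "")

print(prompt_for("credit card introduction", "puzzled"))
```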
Further, for facilitating understanding of the method of the present application, a schematic diagram shown in fig. 4 is briefly described, and fig. 4 is a schematic diagram of a business processing method based on user emotion recognition according to an embodiment of the present application, where the schematic diagram is shown in fig. 4:
assuming that the first user to be identified is speaker a, it is desirable to identify the emotion of speaker a at the current moment.
The voice information Dt of speaker A at the current moment is acquired.
The voice information Dt is input to an encoder, the encoder's output is fed to the preset self-attention model for text feature extraction, the result is passed to a decoder, and the decoder outputs the key text information d_t.
Based on the global dialogue information [g_1, g_2, g_3, …, g_{t-1}] of speaker A at and before the previous moment, the contextual dialogue information c_t at the current moment related to the key text information is determined.
The emotional state information q_{A,t-1} of speaker A at the previous moment and the emotion characterization information e_{t-1} of speaker B at the previous moment are acquired.
According to the key text information d_t, the emotional state information q_{A,t-1} at the previous moment and the contextual dialogue information c_t at the current moment, the emotional state information q_{A,t} of speaker A at the current moment is determined based on the first gating circulation unit GRU_P.
According to the emotion characterization information e_{t-1} of speaker B at the previous moment and the emotional state information q_{A,t} of speaker A at the current moment, the emotion characterization information e_t of speaker A at the current moment is determined based on the second gating circulation unit GRU_E.
The emotion characterization information e_t of speaker A at the current moment is input into the preset emotion classification model, the probabilities of the emotion characterization information under the plurality of preset emotions are calculated, and the emotion corresponding to the maximum probability ŷ_t is identified as the emotion of speaker A at the current moment.
Wherein, according to the key text information d_t, the global dialogue information g_{t-1} at the previous moment and the emotional state information q_{A,t-1} at the previous moment, the global dialogue information g_t of speaker A at the current moment is determined based on the third gating circulation unit GRU_G. Correspondingly, g_{t+1} can also be obtained according to this method.
In fig. 4, the manner of determining the emotion of speaker B at the next moment is similar to that of speaker A; to avoid repetition, it is not described again here.
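Putting the pieces together, the per-utterance update loop of fig. 4 can be sketched as follows, with separate per-speaker states q and e and a shared global history; the random feature vectors stand in for encoded utterances, and all dimensions and module choices are assumptions.

```python
import torch

d = 64
gru_G = torch.nn.GRUCell(2 * d, d)            # global dialogue information
gru_P = torch.nn.GRUCell(2 * d, d)            # per-speaker emotional state
gru_E = torch.nn.GRUCell(d, d)                # per-speaker emotion characterization
classifier = torch.nn.Sequential(torch.nn.Linear(d, 32), torch.nn.ReLU(),
                                 torch.nn.Linear(32, 4), torch.nn.Softmax(dim=-1))
W_alpha = torch.randn(d, d)
EMOTIONS = ["happy", "neutral", "anger", "frustrated"]

# Per-speaker states: A is the client, B is the seat service person.
q = {"A": torch.zeros(1, d), "B": torch.zeros(1, d)}
e = {"A": torch.zeros(1, d), "B": torch.zeros(1, d)}
g_history = [torch.zeros(1, d)]               # g_0 placeholder

# Random vectors stand in for the key text information d_t of each utterance.
dialogue = [("A", torch.randn(1, d)), ("B", torch.randn(1, d)), ("A", torch.randn(1, d))]

for speaker, d_t in dialogue:
    other = "B" if speaker == "A" else "A"
    G = torch.cat(g_history, dim=0)                                   # g_1 ... g_{t-1}
    alpha = torch.softmax(d_t @ W_alpha @ G.T, dim=-1)                # formula (5)
    c_t = alpha @ G                                                   # formula (6)
    g_history.append(gru_G(torch.cat([d_t, q[speaker]], -1), g_history[-1]))  # formula (4)
    q[speaker] = gru_P(torch.cat([d_t, c_t], -1), q[speaker])         # formula (7)
    e[speaker] = gru_E(q[speaker], e[other])                          # formula (8)
    probs = classifier(e[speaker])                                    # formulas (9)-(10)
    print(speaker, EMOTIONS[probs.argmax(-1).item()])
```

In an actual system the random vectors would be replaced by the key text vectors d_t produced by the encoder and self-attention model described above.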
Fig. 5 is a schematic structural diagram of a service processing device based on user emotion recognition according to an embodiment of the present application, where the device includes: an acquisition module 501, a processing module 502, a determination module 503, and an identification module 504.
The obtaining module 501 is configured to obtain voice information of a first user at a current moment under a current service.
The processing module 502 is configured to perform text feature extraction processing on the voice information based on a preset self-attention model, so as to obtain key text information.
The obtaining module 501 is further configured to obtain global dialogue information, emotion state information of the first user at a previous moment, and emotion characterization information of the second user at a previous moment.
The determining module 503 is configured to determine, based on the first gating loop unit, the emotional state information of the first user at the current time according to the key text information, the global dialogue information at the previous time, and the emotional state information.
The determining module 503 is further configured to determine, based on the second gating circulation unit, emotion characterization information of the first user at the current moment according to emotion characterization information of the second user at a previous moment and emotion state information of the first user at the current moment.
The identifying module 504 is configured to input emotion characterization information of the first user at the current moment into a preset emotion classification model, calculate probabilities of the emotion characterization information under a plurality of preset emotions, and identify an emotion corresponding to a highest value of the probabilities as an emotion of the first user at the current moment;
And the processing module is also used for providing corresponding business services for the user according to the identified emotion.
In one possible implementation, the processing module 502 is specifically configured to:
and classifying the voice information to determine the service scene to which the voice information belongs.
And carrying out coding processing on the voice information after the service scene is determined to obtain text information in a vectorization form.
And carrying out text feature extraction processing on the text information based on a preset multi-head self-attention model to obtain the text information containing key features.
And decoding the text information containing the key features to obtain the key text information in a preset form.
In one possible implementation, the processing module 502 is specifically configured to:
keywords in the voice information are identified.
Matching the keywords with text information stored in a preset service scene library, wherein the preset service scene library comprises a plurality of service scenes, and each service scene stores text information conforming to the service scene.
And determining the business scene to which the text information belongs when the keyword is matched with the text information, as the business scene to which the voice information belongs.
In one possible implementation, the processing module 502 is specifically configured to:
And performing word segmentation processing on the text information in the vectorization form to obtain a plurality of word segments.
And carrying out attention calculation on each word segment based on a preset multi-head self-attention model, and determining a plurality of word segments focused in the voice information.
And performing splicing processing on the plurality of segmented words to obtain text information containing key features.
In one possible implementation, the determining module 503 is specifically configured to:
global dialogue information of the first user before the previous moment is obtained.
And determining the contextual dialogue information of the current moment related to the key text information according to the global dialogue information at and before the previous moment.
And updating the emotion state information through the first gating circulation unit according to the key text information, the context dialogue information and the emotion state information of the previous moment at the current moment, and determining the emotion state information of the first user at the current moment.
In one possible implementation, the determining module 503 is specifically configured to:
Normalizing the global dialogue information at the previous moment and before, and determining the attention weight of the key text information.
Performing pooling processing with the attention weight on the global dialogue information at the previous moment and before to obtain the contextual dialogue information of the current moment related to the key text information.
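A minimal sketch, assuming dot-product scores, of the normalization-plus-pooling step above: softmax over the history gives the attention weights, and a weighted sum pools the history into a context vector.

```python
import torch

def context_vector(key_text_vec: torch.Tensor, global_history: torch.Tensor) -> torch.Tensor:
    """global_history: (T, dim) global dialogue vectors up to the previous moment;
    key_text_vec: (dim,). Returns the contextual dialogue vector of shape (dim,)."""
    scores = global_history @ key_text_vec                      # (T,) similarity scores
    weights = torch.softmax(scores, dim=0)                      # normalized attention weights
    return (weights.unsqueeze(-1) * global_history).sum(dim=0)  # attention-weighted pooling
```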
In one possible implementation, the determining module 503 is further configured to:
Determining the global dialogue information of the first user at the current moment based on the third gated recurrent unit according to the key text information, the global dialogue information at the previous moment and the emotion state information at the previous moment.
In one possible implementation, the identification module 505 is specifically configured to:
Outputting prompt information corresponding to the emotion of the first user at the current moment.
The service processing device based on user emotion recognition provided in this embodiment is configured to execute any one of the foregoing method embodiments; its implementation principle and technical effects are similar, and are not repeated herein.
Fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application. As shown in Fig. 6, the device may include: at least one processor 601 and a memory 602.
The memory 602 is configured to store a program. Specifically, the program may include program code, and the program code includes computer operation instructions.
The memory 602 may include high-speed RAM, and may further include non-volatile memory, such as at least one magnetic disk memory.
The processor 601 is operative to execute computer-executable instructions stored in the memory 602 to implement the method described in any of the embodiments described above. The processor 601 may be a central processing unit (Central Processing Unit, abbreviated as CPU), or an application specific integrated circuit (Application Specific Integrated Circuit, abbreviated as ASIC), or one or more integrated circuits configured to implement embodiments of the present application.
Optionally, the electronic device may also include a communication interface 603. In a specific implementation, if the communication interface 603, the memory 602, and the processor 601 are implemented independently, they may be connected to each other through a bus and communicate with each other. The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, and so on; although depicted as a single line, it does not represent only one bus or one type of bus.
Alternatively, in a specific implementation, if the communication interface 603, the memory 602, and the processor 601 are integrated on a chip, the communication interface 603, the memory 602, and the processor 601 may complete communication through internal interfaces.
The electronic device provided in this embodiment is configured to execute the foregoing service processing method based on user emotion recognition, and its implementation principle and technical effects are similar to those of the method embodiment, which are not described herein again.
The present application also provides a computer-readable storage medium, which may include: a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or any other medium that can store program code. Specifically, the computer-readable storage medium stores computer-executable instructions which, when executed by a processor, are used to implement the above business processing method based on user emotion recognition.
The present application also provides a computer program product, including a computer program stored in a readable storage medium. At least one processor of the electronic device may read the computer program from the readable storage medium, and the at least one processor executes the computer program so that the electronic device implements the business processing method based on user emotion recognition provided by the various embodiments described above.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the application following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the application pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It is to be understood that the present application is not limited to the precise arrangements and instrumentalities shown in the drawings, which have been described above, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (12)

1. A business processing method based on user emotion recognition, comprising the steps of:
acquiring voice information of a first user at the current moment under the current service;
performing text feature extraction processing on the voice information based on a preset self-attention model to obtain key text information;
acquiring global dialogue information and emotion state information of the first user at a previous moment and emotion representation information of a second user at the previous moment;
determining emotion state information of the first user at the current moment based on a first gated recurrent unit according to the key text information, the global dialogue information at the previous moment and the emotion state information at the previous moment;
determining emotion representation information of the first user at the current moment based on a second gated recurrent unit according to the emotion representation information of the second user at the previous moment and the emotion state information of the first user at the current moment;
inputting the emotion representation information of the first user at the current moment into a preset emotion classification model, calculating probabilities of the emotion representation information under a plurality of preset emotions, and identifying the emotion corresponding to the highest probability value as the emotion of the first user at the current moment;
and providing corresponding business services for the user according to the identified emotion.
2. The method according to claim 1, wherein performing text feature extraction processing on the voice information based on the preset self-attention model to obtain the key text information comprises:
classifying the voice information to determine a service scenario to which the voice information belongs;
encoding the voice information after the service scenario is determined to obtain text information in vectorized form;
performing text feature extraction processing on the text information based on a preset multi-head self-attention model to obtain text information containing key features;
and decoding the text information containing the key features to obtain the key text information in a preset form.
3. The method according to claim 2, wherein classifying the voice information to determine the service scenario to which the voice information belongs comprises:
identifying keywords in the voice information;
matching the keywords with text information stored in a preset service scenario library, wherein the preset service scenario library comprises a plurality of service scenarios, and each service scenario stores text information conforming to the service scenario;
and when the keywords match the text information, determining the service scenario to which the matched text information belongs as the service scenario to which the voice information belongs.
4. The method according to claim 3, wherein performing text feature extraction processing on the text information based on the preset multi-head self-attention model to obtain the text information containing key features comprises:
performing word segmentation processing on the text information in vectorized form to obtain a plurality of word segments;
performing attention calculation on each word segment based on the preset multi-head self-attention model, and determining a plurality of word segments that are focused on in the voice information;
and performing splicing processing on the plurality of word segments to obtain the text information containing key features.
5. The method according to claim 4, wherein determining the emotion state information of the first user at the current moment based on the first gated recurrent unit according to the key text information, the global dialogue information at the previous moment and the emotion state information at the previous moment comprises:
acquiring global dialogue information of the first user before the previous moment;
determining contextual dialogue information of the current moment related to the key text information according to the global dialogue information at the previous moment and before;
and updating the emotion state information through the first gated recurrent unit according to the key text information and the contextual dialogue information at the current moment and the emotion state information at the previous moment, and determining the emotion state information of the first user at the current moment.
6. The method according to claim 5, wherein determining the contextual dialogue information of the current moment related to the key text information according to the global dialogue information at the previous moment and before comprises:
normalizing the global dialogue information at the previous moment and before, and determining the attention weight of the key text information;
and carrying out pooling processing with the attention weight according to the global dialogue information at the previous moment and before to obtain the contextual dialogue information of the current moment related to the key text information.
7. The method according to claim 6, further comprising:
determining global dialogue information of the first user at the current moment based on a third gated recurrent unit according to the key text information, the global dialogue information at the previous moment and the emotion state information at the previous moment.
8. The method according to any one of claims 1 to 7, further comprising:
outputting prompt information corresponding to the emotion of the first user at the current moment.
9. A business processing device based on user emotion recognition, comprising:
the acquisition module is used for acquiring the voice information of the first user at the current moment under the current service;
the processing module is used for performing text feature extraction processing on the voice information based on a preset self-attention model to obtain key text information;
the acquisition module is further used for acquiring global dialogue information and emotion state information of the first user at a previous moment and emotion representation information of a second user at the previous moment;
the determining module is used for determining emotion state information of the first user at the current moment based on a first gated recurrent unit according to the key text information, the global dialogue information at the previous moment and the emotion state information at the previous moment;
the determining module is further used for determining, based on a second gated recurrent unit, emotion representation information of the first user at the current moment according to the emotion representation information of the second user at the previous moment and the emotion state information of the first user at the current moment;
the identification module is used for inputting the emotion representation information of the first user at the current moment into a preset emotion classification model, calculating probabilities of the emotion representation information under a plurality of preset emotions, and identifying the emotion corresponding to the highest probability value as the emotion of the first user at the current moment;
the processing module is further used for providing corresponding business services for the user according to the identified emotion.
10. An electronic device, comprising: at least one processor, memory;
the memory stores computer-executable instructions;
the at least one processor executes the computer-executable instructions stored in the memory, so that the electronic device performs the business processing method based on user emotion recognition according to any one of claims 1 to 8.
11. A computer-readable storage medium, wherein computer-executable instructions are stored in the computer-readable storage medium, and when executed by a processor, the computer-executable instructions are used to implement the business processing method based on user emotion recognition according to any one of claims 1 to 8.
12. A computer program product, comprising a computer program which, when executed by a processor, implements the business processing method based on user emotion recognition according to any one of claims 1 to 8.

Priority Applications (1)

Application Number: CN202310345325.2A; Priority Date: 2023-03-31; Filing Date: 2023-03-31; Title: Service processing method, device, equipment and medium based on user emotion recognition


Publications (1)

Publication Number: CN116362777A; Publication Date: 2023-06-30



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination