CN116719937A - User type identification method, device, equipment and medium - Google Patents

User type identification method, device, equipment and medium Download PDF

Info

Publication number
CN116719937A
CN116719937A (application CN202310720445.6A)
Authority
CN
China
Prior art keywords
emotion
call
user
conversation
text
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310720445.6A
Other languages
Chinese (zh)
Inventor
于凤英
王健宗
程宁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN202310720445.6A priority Critical patent/CN116719937A/en
Publication of CN116719937A publication Critical patent/CN116719937A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/35Clustering; Classification
    • G06F16/353Clustering; Classification into predefined classes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/30Semantic analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063Operations research, analysis or management
    • G06Q10/0635Risk analysis of enterprise or organisation activities
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services


Abstract

The application relates to a user type identification method, device, equipment, and medium, belonging to the technical field of artificial intelligence. The method comprises: obtaining the call text of at least one user; analyzing the call text based on an emotion recognition model and recognizing the emotion classification of the user; and analyzing the emotion classification of the user and the call text based on a call escalation discrimination model to identify whether the user is of a first user type or a second user type. The user's emotion and complaint risk can be known from the user type, so that follow-up processing can be performed in time, improving the user experience on a financial service platform and providing a reference for the platform's maintenance and rectification.

Description

User type identification method, device, equipment and medium
Technical Field
The present application relates to the technical field of artificial intelligence, and in particular to a user type identification method, device, equipment, and medium.
Background
At present, with the rise of artificial intelligence technology, intelligent robot customer service systems are widely applied on financial service platforms because they can respond to users' consultation demands immediately. A financial service platform may be an insurance platform, a banking platform, a transaction platform, an order platform, and the like; it may support functions such as shopping, social interaction, interactive games, and resource transfer, and may also provide functions such as applying for loans or credit cards, or purchasing insurance and financial products.
However, intelligent robot customer service systems also have several problems. For example, an intelligent robot customer service agent may fail to resolve the problem a user is consulting about or to understand the user's emotion, and if the user becomes emotionally agitated there is a risk of complaint. The current intelligent robot customer service system cannot recognize and handle this situation, which may lead to unnecessary communication costs and reduced user satisfaction with the financial service platform. How to effectively recognize users' emotion and complaint risk so as to increase user satisfaction with the financial service platform is therefore an urgent problem to be solved.
Disclosure of Invention
In view of the above, the embodiments of the present application provide a user type identification method, apparatus, device, and medium to solve the problem of low user satisfaction with a financial service platform caused by the failure to identify users' emotion and complaint risk.
In a first aspect, an embodiment of the present application provides a method for identifying a user type, including:
acquiring a call text of at least one user;
analyzing the call text based on an emotion recognition model, and recognizing emotion classification of the user;
and analyzing the emotion classification of the user and the call text based on a call escalation discrimination model, and identifying the type of the user.
Analyzing the call text based on the emotion recognition model and recognizing the emotion classification of the user comprises: classifying the call text as a no-emotion-fluctuation utterance or an emotion-fluctuation utterance based on the emotion recognition model; when the call text is an emotion-fluctuation utterance, judging whether it hits a word in a preset first-level complaint vocabulary; if so, classifying it as a first emotion-fluctuation utterance; if not, judging whether it hits a word in a preset second-level complaint vocabulary; if so, judging it to be a second emotion-fluctuation utterance, and if not, judging it to be a third emotion-fluctuation utterance.
The call text comprises at least one call feature. Analyzing the emotion classification of the user and the call text based on the call escalation discrimination model and identifying the user as a first user type or a second user type comprises: the call escalation discrimination model analyzes the emotion classification of the user and the call text according to the call features, and identifies the user as a first user type or a second user type.
The call escalation discrimination model analyzing the emotion classification of the user and the call text according to the call features, and identifying the user as a first user type or a second user type, comprises the following steps: analyzing each call feature, assigning a value to each call feature, and calculating the prediction probability of the call text from the assigned values based on a logistic regression algorithm; if the prediction probability is less than or equal to a preset probability, the user is identified as the first user type, and if the prediction probability is greater than the preset probability, the user is identified as the second user type.
The call features include at least one of: the number of call rounds, the call duration, the proportion of utterances hitting the first-level complaint vocabulary, the proportion of utterances hitting the second-level complaint vocabulary, the proportion of no-emotion-fluctuation utterances, the proportion of first emotion-fluctuation utterances, the proportion of second emotion-fluctuation utterances, or the proportion of third emotion-fluctuation utterances.
The number of call rounds comprises the total number of rounds of the user's call. The proportion of utterances hitting the first-level complaint vocabulary comprises the ratio of the number of rounds in which the call text hits words in the first-level complaint vocabulary to the number of call rounds; the proportion of utterances hitting the second-level complaint vocabulary comprises the ratio of the number of rounds in which the call text hits words in the second-level complaint vocabulary to the number of call rounds.
The proportion of no-emotion-fluctuation utterances comprises the ratio of the number of rounds identified by the emotion recognition model as no-emotion-fluctuation utterances to the number of call rounds;
the proportion of first emotion-fluctuation utterances comprises the ratio of the number of rounds identified by the emotion recognition model as first emotion-fluctuation utterances to the number of call rounds;
the proportion of second emotion-fluctuation utterances comprises the ratio of the number of rounds identified by the emotion recognition model as second emotion-fluctuation utterances to the number of call rounds;
and the proportion of third emotion-fluctuation utterances comprises the ratio of the number of rounds identified by the emotion recognition model as third emotion-fluctuation utterances to the number of call rounds.
In a second aspect, an embodiment of the present application provides a user type identification apparatus, including:
the receiving module is used for acquiring the call text of at least one user;
the first recognition module, configured to analyze the call text received by the receiving module based on an emotion recognition model and recognize the emotion classification of the user;
and the second recognition module, configured to analyze the emotion classification recognized by the first recognition module and the call text received by the receiving module based on a call escalation discrimination model, and identify the type of the user.
In a third aspect, an embodiment of the present application provides a computer device comprising a processor, a memory, and a computer program stored in the memory and executable on the processor, the processor implementing the user type identification method according to the first aspect when executing the computer program.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium storing a computer program which, when executed by a processor, implements the user type identification method according to the first aspect.
Compared with the prior art, the embodiments of the present application have the following beneficial effects: the application obtains the call text of at least one user; analyzes the call text based on an emotion recognition model and recognizes the emotion classification of the user; and analyzes the emotion classification and the call text based on the call escalation discrimination model to identify whether the user is of a first or second user type. The user's emotion and complaint risk can be known from the user type, so that follow-up processing can be performed in time, improving the user experience, that is, improving the user's satisfaction with the financial service platform.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic view of an application environment of a user type identification method according to a first embodiment of the present application;
fig. 2 is a flow chart of a method for identifying a user type according to a second embodiment of the present application;
fig. 3 is a schematic structural diagram of a user type identification device according to a third embodiment of the present application;
fig. 4 is a schematic structural diagram of a computer device according to a fourth embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth such as the particular system architecture, techniques, etc., in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It should be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in the present specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
As used in the present description and the appended claims, the term "if" may be interpreted as "when", "once", "in response to determining", or "in response to detecting", depending on the context. Similarly, the phrases "if it is determined" or "if [a described condition or event] is detected" may be interpreted, depending on the context, as "upon determining", "in response to determining", "upon detecting [the described condition or event]", or "in response to detecting [the described condition or event]".
Furthermore, the terms "first," "second," "third," and the like in the description of the present specification and in the appended claims, are used for distinguishing between descriptions and not necessarily for indicating or implying a relative importance.
Reference in the specification to "one embodiment" or "some embodiments" or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," and the like in the specification are not necessarily all referring to the same embodiment, but mean "one or more but not all embodiments" unless expressly specified otherwise. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise.
The embodiments of the present application can acquire and process the relevant data based on artificial intelligence technology, where artificial intelligence is the theory and technology of using a digital computer, or a machine controlled by a digital computer, to simulate, extend, and expand human intelligence, sense the environment, acquire knowledge, and use knowledge to obtain optimal results.
Artificial intelligence infrastructure technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing, operation/interaction systems, and mechatronics. Artificial intelligence software technologies mainly include computer vision, robotics, biometric recognition, speech processing, natural language processing, and machine learning/deep learning.
It should be understood that the sequence numbers of the steps in the following embodiments do not mean the order of execution, and the execution order of the processes should be determined by the functions and the internal logic, and should not be construed as limiting the implementation process of the embodiments of the present application.
In order to illustrate the technical scheme of the application, the following description is made by specific examples.
The user type identification method provided by the embodiments of the present application can be applied in the application environment shown in Fig. 1, where a client communicates with a server to form a cloud-edge combined system. The clients include, but are not limited to, palmtop computers, desktop computers, notebook computers, ultra-mobile personal computers (UMPCs), netbooks, server computer devices, personal digital assistants (PDAs), and other computer devices. The server may be implemented by a stand-alone server or a server cluster formed by a plurality of servers.
The client and the server can be deployed in a financial service platform, the financial service platform can be an insurance platform, a banking platform, a transaction platform, an order platform and the like, and the financial service platform can support functions of shopping, social interaction, interactive games, resource transfer and the like and can also have functions of applying loans, credit cards or purchasing insurance, financial products and the like.
Referring to fig. 2, a flow chart of a user type identification method provided by a second embodiment of the present application is shown, where the identification method is applied to a server in fig. 1, and a computer device corresponding to the server is connected to a corresponding database to obtain corresponding data in the database. The computer equipment can be connected with a corresponding client, the client can also collect corresponding data for the server, and the data can refer to call data provided by a user based on a financial service platform. As shown in fig. 2, the identification method may include the steps of:
step 201, obtaining a call text of at least one user.
In the present application, the call text comprises the text of the communication between the user and the intelligent robot customer service system of a financial service platform.
Step S202, analyzing the call text based on the emotion recognition model and recognizing the emotion classification of the user.
In the present application, the call text is classified as a no-emotion-fluctuation utterance or an emotion-fluctuation utterance based on the emotion recognition model. When the call text is an emotion-fluctuation utterance, it is judged whether the utterance hits a word in the preset first-level complaint vocabulary; if so, the utterance is classified as a first emotion-fluctuation utterance; if not, it is judged whether the utterance hits a word in the preset second-level complaint vocabulary; if so, the utterance is judged to be a second emotion-fluctuation utterance, and if not, a third emotion-fluctuation utterance.
The first-level complaint vocabulary may include at least one of the following: words related to threatening to involve regulators, related to public opinion, related to legal litigation, or affecting operational order, and the like. The second-level complaint vocabulary may include words related to profanity or verbal abuse, and the like.
In the present application, the degree of emotion fluctuation may increase in the order: no-emotion-fluctuation utterance, third emotion-fluctuation utterance, first emotion-fluctuation utterance, second emotion-fluctuation utterance. Preferably, the third emotion-fluctuation utterance may comprise a mild emotion fluctuation, the first emotion-fluctuation utterance a severe emotion fluctuation, and the second emotion-fluctuation utterance an uncontrolled emotion fluctuation.
Adding the first-level and second-level complaint vocabularies greatly improves the accuracy of user type identification.
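The tiered classification above can be sketched as follows. This is an illustrative sketch only: the vocabulary entries are hypothetical placeholders, and the `has_emotion_fluctuation` flag stands in for the emotion recognition model's first-stage decision, which the patent does not specify in detail.

```python
# Hypothetical complaint vocabularies; the patent does not disclose actual entries.
FIRST_LEVEL_COMPLAINT_WORDS = {"regulator", "lawsuit", "media", "expose"}
SECOND_LEVEL_COMPLAINT_WORDS = {"damn", "idiot"}

def classify_utterance(text: str, has_emotion_fluctuation: bool) -> str:
    """Return the emotion classification of one call-round utterance."""
    if not has_emotion_fluctuation:
        return "no_emotion_fluctuation"
    words = set(text.lower().split())
    if words & FIRST_LEVEL_COMPLAINT_WORDS:
        # Severe: hits the first-level complaint vocabulary.
        return "first_emotion_fluctuation"
    if words & SECOND_LEVEL_COMPLAINT_WORDS:
        # Uncontrolled: hits the second-level complaint vocabulary.
        return "second_emotion_fluctuation"
    # Mild: emotion fluctuation but no complaint-vocabulary hit.
    return "third_emotion_fluctuation"
```

In practice, a production system would match against the vocabularies at the phrase level rather than by whitespace tokenization, particularly for Chinese text.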
Step S203, analyzing the emotion classification of the user and the call text based on the call escalation discrimination model, and identifying the type of the user.
Specifically, the user types include a first user type and/or a second user type: the first user type may include users without complaint risk, and the second user type may include users with complaint risk.
The call escalation discrimination model analyzes each call feature, assigns a value to each call feature, and calculates the prediction probability of the call text from the assigned values based on a logistic regression algorithm; if the prediction probability is less than or equal to a preset probability, the user is identified as the first user type, and if the prediction probability is greater than the preset probability, the user is identified as the second user type.
The call features include at least one of: the number of call rounds, the call duration, the proportion of utterances hitting the first-level complaint vocabulary, the proportion of utterances hitting the second-level complaint vocabulary, the proportion of no-emotion-fluctuation utterances, the proportion of first emotion-fluctuation utterances, the proportion of second emotion-fluctuation utterances, or the proportion of third emotion-fluctuation utterances.
In the present application, the call features and example assignments may be as follows:
the number of call rounds may include a total number of rounds of communication for the user. In general, the more the total number of communication rounds, the better the problem of the user can be solved, the more the emotion of the user tends to be stable, and the risk of complaint upgrading of the user is smaller. And compared with the historical maximum call round number in the intelligent robot customer service system, the call round number feature is normalized to be between 0 and 1, and when the call round number exceeds the historical maximum call round number, the call round number feature is assigned to be 1.
The call duration comprises the duration of the call between the user and the intelligent robot customer service system. In general, the longer the call duration, the better the user's problem can be solved, and the smaller the user's risk of complaint escalation. The call duration is compared against the historical maximum call duration in the intelligent robot customer service system, so that the feature is normalized to between 0 and 1; when the call duration exceeds the historical maximum, the feature is assigned 1.
The proportion of utterances hitting the first-level complaint vocabulary comprises the ratio of the number of rounds in which the call text hits words in the first-level complaint vocabulary to the number of call rounds. Illustratively, in a complete call interaction with 10 call rounds, if the user mentions words from the first-level complaint vocabulary in 3 rounds, this feature is assigned 0.3.
The proportion of utterances hitting the second-level complaint vocabulary comprises the ratio of the number of rounds in which the call text hits words in the second-level complaint vocabulary to the number of call rounds. Illustratively, in a complete call interaction with 10 call rounds, if the user mentions words from the second-level complaint vocabulary in 3 rounds, this feature is assigned 0.3.
The proportion of first emotion-fluctuation utterances comprises the ratio of the number of rounds identified by the emotion recognition model as first emotion-fluctuation utterances to the number of call rounds. Illustratively, if the number of call rounds is 10 and 1 round is identified by the emotion recognition model as a first emotion-fluctuation utterance, this feature is assigned 0.1.
The proportion of second emotion-fluctuation utterances comprises the ratio of the number of rounds identified by the emotion recognition model as second emotion-fluctuation utterances to the number of call rounds. Illustratively, if the number of call rounds is 10 and 2 rounds are identified by the emotion recognition model as second emotion-fluctuation utterances, this feature is assigned 0.2.
The proportion of third emotion-fluctuation utterances comprises the ratio of the number of rounds identified by the emotion recognition model as third emotion-fluctuation utterances to the number of call rounds. Illustratively, in a complete call interaction with 10 call rounds, if 3 rounds are identified by the emotion recognition model as third emotion-fluctuation utterances, this feature is assigned 0.3.
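Assembling the 8 call features described above can be sketched as follows. This is a minimal sketch under stated assumptions: the history-maximum values, the per-round label strings, and the helper name `build_call_features` are illustrative, not taken from the patent.

```python
def build_call_features(round_labels, call_duration, hits_lvl1, hits_lvl2,
                        hist_max_rounds, hist_max_duration):
    """Build the 8-dimensional feature vector for one call.

    round_labels: per-round emotion classification strings;
    hits_lvl1 / hits_lvl2: number of rounds hitting each complaint vocabulary.
    """
    n = len(round_labels)

    def frac(label):
        # Share of call rounds carrying the given emotion classification.
        return sum(1 for lab in round_labels if lab == label) / n

    return [
        min(n / hist_max_rounds, 1.0),                # call rounds, normalized to [0, 1]
        min(call_duration / hist_max_duration, 1.0),  # call duration, normalized to [0, 1]
        hits_lvl1 / n,                                # first-level complaint vocabulary hit ratio
        hits_lvl2 / n,                                # second-level complaint vocabulary hit ratio
        frac("no_emotion_fluctuation"),
        frac("first_emotion_fluctuation"),
        frac("second_emotion_fluctuation"),
        frac("third_emotion_fluctuation"),
    ]
```

The clamping with `min(..., 1.0)` mirrors the rule that a feature is assigned 1 whenever the current call exceeds the historical maximum.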
The prediction probability of the whole call is finally obtained from these 8 features through a logistic regression (LR) model; if the prediction probability is greater than 0.8, the user is considered a user with complaint-escalation risk.
In the present application, the call escalation discrimination model adopts at least these 8 call features to describe the risk level of the whole call as a whole, and uses the logistic regression (LR) algorithm so that the weighting of the 8 features is learned automatically and reasonably fits the actual situation.
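The logistic regression scoring step can be sketched as follows. The weights, bias, and function names are hypothetical; in practice the weights would be learned from labeled calls, and the 0.8 threshold is the preset probability mentioned above.

```python
import math

def predict_escalation_probability(features, weights, bias):
    # Standard logistic regression: sigmoid of the weighted feature sum.
    z = sum(w * x for w, x in zip(weights, features)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def classify_user(features, weights, bias, threshold=0.8):
    # Probability above the threshold -> second user type (complaint-escalation risk);
    # otherwise -> first user type (no complaint risk).
    p = predict_escalation_probability(features, weights, bias)
    return "second_user_type" if p > threshold else "first_user_type"
```

An equivalent model could be fitted with `sklearn.linear_model.LogisticRegression`; the hand-rolled sigmoid here is only meant to make the scoring rule explicit.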
The embodiments of the present application obtain the call text of at least one user; analyze the call text based on an emotion recognition model and recognize the emotion classification of the user; and analyze the emotion classification and the call text based on the call escalation discrimination model to identify the type of the user. The user's emotion and complaint risk can be known from the user type, so that follow-up processing can be performed in time, improving the user experience and thus the user's satisfaction with the financial service platform. By combining the model with the complaint vocabularies, the application improves recognition accuracy, in particular for the second and third emotion-fluctuation utterances.
Fig. 3 shows a block diagram of a user type identification device according to a third embodiment of the present application. The identification device is applied to the server in Fig. 1, and the computer device corresponding to the server is connected to a corresponding database to obtain the corresponding data in the database. The computer device can be connected to a corresponding client, and the client can also collect the corresponding data for the server; the data may refer to call data provided by a user on a financial service platform. For convenience of explanation, only the portions relevant to the embodiments of the present application are shown.
Referring to fig. 3, the user type identification apparatus includes:
a receiving module 301, configured to obtain a call text of at least one user; the call text comprises call text communicated and exchanged between the user and the intelligent robot customer service system.
The first recognition module 302 is configured to analyze the call text received by the receiving module 301 based on an emotion recognition model and recognize the emotion classification of the user. Specifically, the call text is classified as a no-emotion-fluctuation utterance or an emotion-fluctuation utterance based on the emotion recognition model. When the call text is an emotion-fluctuation utterance, it is judged whether the utterance hits a word in the preset first-level complaint vocabulary; if so, the utterance is classified as a first emotion-fluctuation utterance; if not, it is judged whether the utterance hits a word in the preset second-level complaint vocabulary; if so, the utterance is judged to be a second emotion-fluctuation utterance, and if not, a third emotion-fluctuation utterance.
Wherein, the first-level complaint vocabulary may include at least one of the following: related to threat supervision, related to public opinion, related to legal litigation, or related to words affecting operational order, etc. The secondary complaint vocabulary may include the following: words related to rough mouths or related to rough violence, etc.
In the present application, the degree of the mood swings may be: the mood swing free session > the third mood swing session > the first mood swing session > the second mood swing session. Preferably, the third mood swing session may comprise a mild mood swing session, the first mood swing session may comprise a severe mood swing session, and the second mood swing session may comprise an uncontrolled mood swing session.
The application adds the first-level and second-level complaint vocabularies, which greatly improves the accuracy of user type identification.
And a second recognition module 303, configured to analyze, based on a call upgrade distinguishing model, the emotion classification identified by the first recognition module 302 together with the call text received by the receiving module 301, and identify the type of the user. Specifically, the user types include a first user type and/or a second user type, where the first user type may include users without complaint risk and the second user type may include users with complaint risk.
The call upgrade distinguishing model analyzes each call feature, assigns a value to each call feature, and calculates the prediction probability of the call text from the assigned values based on a logistic regression algorithm; if the prediction probability is less than or equal to a preset probability, the user is identified as the first user type, and if it is greater than the preset probability, the user is identified as the second user type.
The call features include at least one of: the number of call rounds, the call duration, the ratio of utterances hitting the first-level complaint vocabulary, the ratio of utterances hitting the second-level complaint vocabulary, the ratio of no-emotion-fluctuation utterances, the ratio of first emotion-fluctuation utterances, the ratio of second emotion-fluctuation utterances, or the ratio of third emotion-fluctuation utterances.
In the embodiment of the present application, the call features and their assignments may be exemplified as follows:
The number of call rounds may include the total number of communication rounds for the user. In general, the more communication rounds, the better the user's problem can be resolved, the more stable the user's emotion tends to be, and the smaller the risk of complaint escalation. The feature is normalized to between 0 and 1 by comparison with the historical maximum number of call rounds in the intelligent robot customer service system; when the number of call rounds exceeds the historical maximum, the feature is assigned 1.
The call duration comprises the duration of the call between the user and the intelligent robot customer service system. In general, the longer the call duration, the better the user's problem can be resolved and the smaller the risk of complaint escalation. The feature is normalized to between 0 and 1 by comparison with the historical maximum call duration in the intelligent robot customer service system; when the call duration exceeds the historical maximum, the feature is assigned 1.
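The normalization described for the round-count and duration features can be sketched as one small helper; this is a sketch of the capping rule stated above, not code from the patent.

```python
def normalize_feature(value, historical_max):
    """Normalize a call feature (round count or duration) against the
    historical maximum, capping at 1 when the value exceeds it."""
    if historical_max <= 0:
        return 0.0                     # degenerate case: no history yet
    return min(value / historical_max, 1.0)
```

For example, 5 rounds against a historical maximum of 10 yields 0.5, while any value above the maximum yields exactly 1.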
The ratio of utterances hitting the first-level complaint vocabulary comprises the ratio of the number of rounds in the call text that hit a word in the first-level complaint vocabulary to the total number of call rounds. Illustratively, if a call consists of 10 rounds and the user mentions a word in the first-level complaint vocabulary in 3 of them, this feature is assigned 0.3.
The ratio of utterances hitting the second-level complaint vocabulary comprises the ratio of the number of rounds in the call text that hit a word in the second-level complaint vocabulary to the total number of call rounds. Illustratively, if a call consists of 10 rounds and the user mentions a word in the second-level complaint vocabulary in 3 of them, this feature is assigned 0.3.
The ratio of first emotion-fluctuation utterances comprises the ratio of the number of rounds in the call text recognized by the emotion recognition model as first emotion-fluctuation utterances to the total number of call rounds. Illustratively, if a call consists of 10 rounds and 1 round is recognized as a first emotion-fluctuation utterance, this feature is assigned 0.1.
The ratio of second emotion-fluctuation utterances comprises the ratio of the number of rounds in the call text recognized by the emotion recognition model as second emotion-fluctuation utterances to the total number of call rounds. Illustratively, if a call consists of 10 rounds and 2 rounds are recognized as second emotion-fluctuation utterances, this feature is assigned 0.2.
The ratio of third emotion-fluctuation utterances comprises the ratio of the number of rounds in the call text recognized by the emotion recognition model as third emotion-fluctuation utterances to the total number of call rounds. Illustratively, if a call consists of 10 rounds and 3 rounds are recognized as third emotion-fluctuation utterances, this feature is assigned 0.3.
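The ratio features above can all be computed in one pass over the per-round emotion labels; a minimal sketch, with label names assumed for illustration:

```python
from collections import Counter

LABELS = ("no_emotion_fluctuation", "first_emotion_fluctuation",
          "second_emotion_fluctuation", "third_emotion_fluctuation")

def ratio_features(round_labels):
    """Ratio of each emotion label over the total number of call rounds.
    `round_labels` is the per-round output of the emotion classification."""
    counts = Counter(round_labels)
    total = len(round_labels)
    return {label: counts.get(label, 0) / total for label in LABELS}
```

With the 10-round example above (5 calm rounds, 3 third-level rounds, 2 second-level rounds), the third-level ratio comes out to 0.3 and the second-level ratio to 0.2.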
The prediction probability of the entire call is finally obtained from these 8 features through a logistic regression (LR) model; if the prediction probability is greater than 0.8, the user is considered to be at risk of complaint escalation.
In the application, the call upgrade distinguishing model adopts at least 8 types of call features to describe the overall risk level of the entire call, and uses the logistic regression (LR) algorithm so that the assignments of the 8 feature types are learned automatically and fit the actual situation reasonably.
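The scoring step can be sketched as a weighted sum passed through a sigmoid; the weights, bias, and feature names below are illustrative stand-ins, since the patent does not disclose trained parameter values (only the 0.8 threshold).

```python
import math

def predict_risk(features, weights, bias=0.0, threshold=0.8):
    """Logistic-regression score over the call features.
    Returns (is_complaint_risk, probability); weights/bias are assumed."""
    z = bias + sum(weights[name] * value for name, value in features.items())
    probability = 1.0 / (1.0 + math.exp(-z))   # sigmoid
    return probability > threshold, probability
```

In practice such weights would be fit from labeled calls (e.g. with scikit-learn's `LogisticRegression`); the helper above only shows the inference-time arithmetic.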
In the embodiment of the present application, the receiving module 301 obtains the call text of at least one user; the first recognition module 302 analyzes the call text received by the receiving module 301 based on an emotion recognition model and recognizes the emotion classification of the user; the second recognition module 303 analyzes, based on a call upgrade distinguishing model, the emotion classification identified by the first recognition module 302 and the call text received by the receiving module 301, and identifies the user as the first user type or the second user type. From the user type, the emotion and complaint risk of the user can be known, so that the case can be handled in time and the user experience improved. By combining the model with the complaint vocabularies, the application improves the overall recognition accuracy, and in particular the accuracy of recognizing second and third emotion-fluctuation utterances.
It should be noted that, because the content of information interaction and execution process between the modules and the embodiment of the method of the present application are based on the same concept, specific functions and technical effects thereof may be referred to in the method embodiment section, and details thereof are not repeated herein.
Fig. 4 is a schematic structural diagram of a computer device according to a fourth embodiment of the present application. As shown in fig. 4, the computer device of this embodiment includes: at least one processor (only one shown in fig. 4), a memory, and a computer program stored in the memory and executable on the at least one processor, the processor executing the computer program to perform the steps of any of the above-described embodiments of the user type identification method.
The computer device may include, but is not limited to, a processor and a memory. It will be appreciated by those skilled in the art that fig. 4 is merely an example of a computer device and is not intended to limit the computer device; a computer device may include more or fewer components than shown, combine certain components, or have different components, and may for example also include a network interface, a display screen, an input device, and the like.
The processor may be a CPU, but may also be another general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory includes a readable storage medium, an internal memory, etc., where the internal memory may be the memory of the computer device, the internal memory providing an environment for the execution of an operating system and computer-readable instructions in the readable storage medium. The readable storage medium may be a hard disk of a computer device, and in other embodiments may be an external storage device of the computer device, for example, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), etc. that are provided on the computer device. Further, the memory may also include both internal storage units and external storage devices of the computer device. The memory is used to store an operating system, application programs, boot loader (BootLoader), data, and other programs such as program codes of computer programs, and the like. The memory may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above division of functional units and modules is illustrated; in practical applications, the above functions may be distributed among different functional units and modules as needed, i.e., the internal structure of the apparatus may be divided into different functional units or modules to perform all or part of the functions described above.

The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit; the integrated units may be implemented in the form of hardware or in the form of software functional units. In addition, the specific names of the functional units and modules are only for distinguishing them from each other and are not intended to limit the protection scope of the present application. For the specific working process of the units and modules in the above apparatus, reference may be made to the corresponding process in the foregoing method embodiment, which is not described herein again.

The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the present application may implement all or part of the flow of the methods of the above embodiments by instructing related hardware through a computer program, which may be stored in a computer readable storage medium; when executed by a processor, the computer program may implement the steps of the method embodiments described above. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like.
The computer readable medium may include at least: any entity or device capable of carrying computer program code, a recording medium, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium, such as a USB flash drive, a removable hard disk, a magnetic disk, or an optical disk. In some jurisdictions, in accordance with legislation and patent practice, computer readable media may not include electrical carrier signals and telecommunications signals.
The present application may also be implemented as a computer program product for implementing all or part of the steps of the method embodiments described above, when the computer program product is run on a computer device, causing the computer device to execute the steps of the method embodiments described above.
In the foregoing embodiments, each embodiment is described with its own emphasis; for parts not detailed or described in one embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided by the present application, it should be understood that the disclosed apparatus/computer device and method may be implemented in other manners. For example, the apparatus/computer device embodiments described above are merely illustrative, e.g., the division of modules or units is merely a logical functional division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate parts may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
The above embodiments are only for illustrating the technical solution of the present application, and are not limiting; although the application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are intended to be included in the scope of the present application.

Claims (10)

1. A method for identifying a user type, the method comprising:
acquiring a call text of at least one user;
analyzing the call text based on an emotion recognition model, and recognizing emotion classification of the user;
and analyzing the emotion classification of the user and the call text based on a call upgrade distinguishing model, and identifying the type of the user.
2. The method of claim 1, wherein analyzing the call text based on the emotion recognition model and recognizing the emotion classification of the user comprises:
classifying the call text as a no-emotion-fluctuation utterance or an emotion-fluctuation utterance based on the emotion recognition model;
when the call text is an emotion-fluctuation utterance, judging whether the utterance hits a word in a preset first-level complaint vocabulary; if so, classifying it as a first emotion-fluctuation utterance; if not, judging whether the utterance hits a word in a preset second-level complaint vocabulary; if so, judging it to be a second emotion-fluctuation utterance, and if not, a third emotion-fluctuation utterance.
3. The method of claim 1, wherein the call text comprises at least one call feature, and wherein analyzing the emotion classification of the user and the call text based on the call upgrade distinguishing model and identifying the type of the user comprises: analyzing, by the call upgrade distinguishing model, the emotion classification of the user and the call text according to the call features, and identifying the user as a first user type or a second user type.
4. The method of claim 3, wherein analyzing, by the call upgrade distinguishing model, the emotion classification of the user and the call text according to the call features and identifying the user as a first user type or a second user type comprises: analyzing, by the call upgrade distinguishing model, each call feature and assigning a value to each call feature; calculating, based on a logistic regression algorithm, the prediction probability of the call text from the assigned values; and identifying the user as the first user type if the prediction probability is less than or equal to a preset probability, and as the second user type if the prediction probability is greater than the preset probability.
5. The method of claim 4, wherein the call features include at least one of: the number of call rounds, the call duration, the ratio of utterances hitting the first-level complaint vocabulary, the ratio of utterances hitting the second-level complaint vocabulary, the ratio of no-emotion-fluctuation utterances, the ratio of first emotion-fluctuation utterances, the ratio of second emotion-fluctuation utterances, or the ratio of third emotion-fluctuation utterances.
6. The method of claim 4, wherein the number of call rounds comprises the total number of communication rounds of the user; the ratio of utterances hitting the first-level complaint vocabulary comprises the ratio of the number of rounds in the call text that hit a word in the first-level complaint vocabulary to the total number of call rounds; and the ratio of utterances hitting the second-level complaint vocabulary comprises the ratio of the number of rounds in the call text that hit a word in the second-level complaint vocabulary to the total number of call rounds.
7. The method of claim 5, wherein the ratio of no-emotion-fluctuation utterances comprises the ratio of the number of rounds in the call text recognized by the emotion recognition model as no-emotion-fluctuation utterances to the total number of call rounds;
the ratio of first emotion-fluctuation utterances comprises the ratio of the number of rounds recognized by the emotion recognition model as first emotion-fluctuation utterances to the total number of call rounds;
the ratio of second emotion-fluctuation utterances comprises the ratio of the number of rounds recognized by the emotion recognition model as second emotion-fluctuation utterances to the total number of call rounds; and
the ratio of third emotion-fluctuation utterances comprises the ratio of the number of rounds recognized by the emotion recognition model as third emotion-fluctuation utterances to the total number of call rounds.
8. An apparatus for identifying a user type, the apparatus comprising:
the receiving module is used for acquiring the call text of at least one user;
the first recognition module is used for analyzing the call text received by the receiving module based on an emotion recognition model and recognizing emotion classification of the user;
and the second recognition module is used for analyzing, based on a call upgrade distinguishing model, the emotion classification of the user recognized by the first recognition module and the call text received by the receiving module, and recognizing the type of the user.
9. A computer device comprising a processor, a memory, and a computer program stored in the memory and executable on the processor, wherein the processor implements the user type identification method of any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the user type identification method of any one of claims 1 to 7.
CN202310720445.6A 2023-06-16 2023-06-16 User type identification method, device, equipment and medium Pending CN116719937A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310720445.6A CN116719937A (en) 2023-06-16 2023-06-16 User type identification method, device, equipment and medium

Publications (1)

Publication Number Publication Date
CN116719937A true CN116719937A (en) 2023-09-08


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination