CN115460323A - Method, apparatus, device and storage medium for intelligently transferring an outbound call to a human agent - Google Patents


Info

Publication number
CN115460323A
Authority
CN
China
Prior art keywords
emotion
user
intelligent
voice
analysis engine
Prior art date
Legal status (assumed by Google; not a legal conclusion)
Pending
Application number
CN202211083161.2A
Other languages
Chinese (zh)
Inventor
徐勇攀
李乾
毛振苏
王诗达
张琛
潘仰耀
Current Assignee (listed assignee may be inaccurate)
Shanghai Pudong Development Bank Co Ltd
Original Assignee
Shanghai Pudong Development Bank Co Ltd
Priority date (assumed; not a legal conclusion)
Filing date
Publication date
Application filed by Shanghai Pudong Development Bank Co Ltd filed Critical Shanghai Pudong Development Bank Co Ltd
Priority to CN202211083161.2A
Publication of CN115460323A
Legal status: Pending

Classifications

    • H04M 3/5166 — Centralised call answering arrangements requiring operator intervention (e.g. call or contact centres for telemarketing) in combination with interactive voice response systems or voice portals, e.g. as front-ends
    • G10L 15/22 — Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L 15/26 — Speech-to-text systems
    • G10L 25/27 — Speech or voice analysis techniques characterised by the analysis technique
    • G10L 25/63 — Speech or voice analysis techniques specially adapted for estimating an emotional state

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • General Health & Medical Sciences (AREA)
  • Hospice & Palliative Care (AREA)
  • Psychiatry (AREA)
  • Child & Adolescent Psychology (AREA)
  • Business, Economics & Management (AREA)
  • Marketing (AREA)
  • Machine Translation (AREA)

Abstract

The invention discloses a method, apparatus, device and storage medium for intelligently transferring an outbound call to a human agent. The method comprises: acquiring an intelligent emotion analysis engine model; inputting the user's speech from the current call turn into the intelligent emotion analysis engine model to obtain an emotion value matched to that speech; and transferring the outbound call to a human agent when the emotion value satisfies a preset condition. Because the emotion value matched to the user's speech is obtained through the intelligent emotion analysis engine model, and the need for a human agent is determined from that value, the call can be handed over seamlessly even when the customer has run into a problem but has not explicitly expressed an intention to be transferred, improving the user experience.

Description

Method, apparatus, device and storage medium for intelligently transferring an outbound call to a human agent
Technical Field
The invention relates to the technical field of data processing, and in particular to a method, apparatus, device and storage medium for intelligently transferring an outbound call to a human agent.
Background
A traditional intelligent voice platform can only serve customers from the data stored in its database. When the automated voice system cannot solve a customer's problem, the outbound call must be transferred to a human-service platform, where a human agent serves the customer.
Current ways of transferring an outbound call to a human agent include: setting a fixed transfer node, so that a customer who follows the configured dialogue script eventually reaches the node and is then connected to a human agent; setting a call-duration threshold, with timing starting when the call is connected, so that when the duration exceeds the threshold the human-machine conversation is judged to have run into a problem and the call is automatically transferred to an agent; and scene-specific transfer, which provides different human-machine switching modules for different application scenarios.
However, with a fixed transfer node, a customer may run into a problem before the dialogue flow reaches the node, in which case the transfer is never triggered and the customer experience suffers. The call-duration threshold is a crude heuristic: it cannot tell whether the current customer actually wants a human agent. Scene-specific transfer still relies on the customer explicitly expressing an intention to be transferred, so it cannot handle the case where the customer has a problem but never clearly asks for a human agent.
Disclosure of Invention
The invention provides a method, apparatus, device and storage medium for intelligently transferring an outbound call to a human agent according to the customer's emotion value.
According to a first aspect of the present invention, there is provided a method for intelligently transferring an outbound call to a human agent, comprising: acquiring an intelligent emotion analysis engine model;
inputting the user's speech from the current call turn into the intelligent emotion analysis engine model to obtain an emotion value matched to the user's speech;
and transferring the outbound call to a human agent when the emotion value satisfies a preset condition.
According to another aspect of the present invention, there is provided an apparatus for intelligently transferring an outbound call to a human agent, comprising: an intelligent emotion analysis engine model acquisition module, configured to acquire an intelligent emotion analysis engine model;
an emotion value acquisition module, configured to input the user's speech from the current call turn into the intelligent emotion analysis engine model to obtain an emotion value matched to the user's speech;
and a transfer-to-human module, configured to transfer the outbound call to a human agent when the emotion value satisfies a preset condition.
According to another aspect of the present invention, there is provided an electronic apparatus including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores a computer program executable by the at least one processor, the computer program being executable by the at least one processor to enable the at least one processor to perform the method according to any of the embodiments of the invention.
According to another aspect of the present invention, there is provided a computer-readable storage medium storing computer instructions for causing a processor to perform the method according to any one of the embodiments of the present invention when the computer instructions are executed.
According to the technical scheme of the embodiments of the invention, the emotion value matched to the user's speech is obtained through the intelligent emotion analysis engine model, and the need for a human agent is determined from that value, so the call can be handed over seamlessly even when the customer has run into a problem but has not explicitly expressed an intention to be transferred, improving the user experience.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present invention, nor do they necessarily limit the scope of the invention. Other features of the present invention will become apparent from the following description.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings required to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the description below are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a flowchart of a method for intelligently transferring an outbound call to a human agent according to a first embodiment of the present invention;
Fig. 2 is a flowchart of an alternative method for intelligently transferring an outbound call to a human agent according to the first embodiment of the present invention;
Fig. 3 is a flowchart of a method for intelligently transferring an outbound call to a human agent according to a second embodiment of the present invention;
Fig. 4 is a schematic structural diagram of an apparatus for intelligently transferring an outbound call to a human agent according to a third embodiment of the present invention;
Fig. 5 is a schematic structural diagram of an electronic device according to a fourth embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in other sequences than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example one
Fig. 1 is a flowchart of a method for intelligently transferring an outbound call to a human agent according to a first embodiment of the present invention. The embodiment is applicable to transferring an outbound call to a human agent. The method may be performed by an apparatus for intelligently transferring an outbound call to a human agent, and the apparatus may be implemented in hardware and/or software. As shown in fig. 1, the method includes:
and step S101, obtaining an intelligent emotion analysis engine model.
Fig. 2 shows a flowchart of another method for intelligently transferring an outbound call to a human agent, which details step S101 and includes:
step S1011, an emotion sentence sample set is obtained.
The emotion sentence sample set comprises a positive emotion sentence sample set and a negative emotion sentence sample set. The positive set contains positive samples, for example, "I understood your explanation of the process, thank you for the excellent service"; the negative set contains negative samples, for example, "I don't understand what you mean" or "I didn't hear what you said". Of course, these are merely examples, and the specific contents of the sentence samples in the two sets are not limited.
Step S1012, performing word segmentation processing on the emotion sentence sample set to obtain an initial word segmentation result.
Specifically, in this embodiment the classified positive and negative emotion sentence sample sets are each preprocessed: the emotion sentence sample set is segmented into words to obtain an initial segmentation result. The segmentation may use a string-matching-based method, an understanding-based method, a statistics-based method, or the like.
For example, the negative sample "I don't understand what you mean" in the negative emotion sentence sample set is segmented into the initial result "don't understand / what / you mean".
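The string-matching segmentation mentioned above can be sketched as forward maximum matching. The function name `fmm_segment`, the mini-dictionary `vocab`, and the sample sentence are illustrative assumptions, not the patent's actual dictionary or implementation:

```python
def fmm_segment(text, dictionary, max_len=4):
    """Forward-maximum-matching segmentation over a known word dictionary.

    Scans left to right, greedily taking the longest dictionary word that
    matches at the current position; characters not covered by any word
    fall back to single-character tokens.
    """
    tokens = []
    i = 0
    while i < len(text):
        for size in range(min(max_len, len(text) - i), 0, -1):
            piece = text[i:i + size]
            if size == 1 or piece in dictionary:
                tokens.append(piece)
                i += size
                break
    return tokens

# Illustrative mini-dictionary covering the sample sentence "听不懂什么意思"
vocab = {"听不懂", "什么", "意思"}
segments = fmm_segment("听不懂什么意思", vocab)  # ['听不懂', '什么', '意思']
```

A production system would more likely use a full segmenter (e.g. a statistical or dictionary-plus-HMM tool) rather than this minimal greedy matcher.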
And S1013, screening the initial word segmentation results according to the exclusive foreign call emotion dictionary to obtain effective word segmentation results.
The outbound-call-specific emotion dictionary contains words that carry strong emotional colour in human-machine interaction. Filtering the initial segmentation result against this dictionary removes emotionally neutral words and retains the words that clearly indicate the emotion category of a sentence, yielding the effective segmentation result.
And step S1014, converting the effective word segmentation result into a sample word vector, and training the sample word vector by adopting a naive Bayes algorithm to obtain an intelligent emotion analysis engine model.
Specifically, in the text-vectorisation stage a word-vector model converts the effective segmentation result into sample word vectors, i.e. into a numerical form the computer can process, and a naive Bayes algorithm is used to train on the sample word vectors to obtain the intelligent emotion analysis engine model. Naive Bayes rests on the feature-independence assumption, i.e. that the features are mutually independent: given a sample to be classified, the posterior probability of each class is computed and the sample is assigned to the most probable class. Since the details of the naive Bayes algorithm are not the focus of the present application, they are not elaborated in this embodiment.
And S102, inputting the voice of the user in the current call turn into the intelligent emotion analysis engine model to obtain an emotion value matched with the voice of the user.
Optionally, inputting the user's speech from the current call turn into the intelligent emotion analysis engine model to obtain an emotion value matched to the user's speech includes: segmenting the user's speech from the current call turn to obtain a user segmentation result; converting the user segmentation result into user word vectors; and inputting the user word vectors into the intelligent emotion analysis engine model to obtain the emotion value matched to the user's speech.
Specifically, in this embodiment, when the customer answers the intelligent outbound call and interacts with the robot, speech recognition converts the user's speech in the current call turn into text; the text is segmented to obtain a user segmentation result, for example "I don't understand / say it again"; the segmentation result is converted into user word vectors; and the word vectors are fed into the intelligent emotion analysis engine model to obtain the emotion value matched to the user's speech, for example 0.5.
Step S103, transferring the outbound call to a human agent when the emotion value satisfies a preset condition.
Optionally, transferring the outbound call to a human agent when the emotion value satisfies the preset condition includes: acquiring an alarm call record, the record comprising the number of alarms raised in historical call turns; and transferring the call to a human agent when the emotion value is below a preset threshold and the number of alarms has reached a specified count.
Specifically, the terminal reads the stored alarm call record and a preset threshold. The record holds the number of alarms from historical call turns; the preset threshold, for example 0.7, is the critical value below which the customer likely needs a human agent. When the emotion value of the current call turn is below the threshold and the number of alarms has reached the specified count, the call is transferred. For example, if the alarm call record already shows 2 alarms and the emotion value of the current turn is also below the threshold, then the emotion values of three consecutive turns have all been below the threshold, and at that point the outbound call is transferred to a human agent.
Optionally, the method further comprises: when the emotion value is smaller than a preset threshold value and the alarm times are smaller than the designated times, generating alarm information of the current turn; and updating the alarm times in the alarm call records according to the alarm information of the current turn.
Specifically, when the emotion value is below the preset threshold but the number of alarms in the record is still below 2, the transfer is not executed yet; instead, alarm information is sent to the agent terminal, and the alarm count in the record is updated according to the alarm information of the current turn. After the alarm information for the current turn is generated, the user's speech is also sent to the agent terminal so that it can be displayed there in real time. This lets the human agent adjust subsequent dialogue nodes in time according to the displayed speech and resolve the user's problem efficiently.
Optionally, the method further comprises: when the emotion value is larger than a preset threshold value, triggering and starting a talking node of the next round of talking; and carrying out a new round of man-machine interaction conversation with the user based on the telephony node.
When the emotion value is above the preset threshold, the customer's current human-machine interaction is normal and no problem has been encountered, so no human agent is needed; the dialogue node of the next turn is triggered, and a new round of human-machine conversation is carried out with the user based on that newly started node.
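The per-turn routing described above — transfer, record an alarm, or continue to the next dialogue node — can be sketched as one decision function. The threshold 0.7 and the alarm limit of 2 follow the example values in the text, while `route_call` itself is a hypothetical name:

```python
THRESHOLD = 0.7   # example critical emotion value from the text
MAX_ALARMS = 2    # alarms recorded before the turn that triggers transfer

def route_call(emotion_value, alarm_count):
    """Decide the action for one call turn; returns (action, new_alarm_count).

    "transfer"  - emotion below threshold and enough alarms already recorded
    "alarm"     - emotion below threshold; record an alarm, notify the agent
    "next_node" - emotion normal; continue with the next dialogue node
    """
    if emotion_value < THRESHOLD:
        if alarm_count >= MAX_ALARMS:
            return "transfer", alarm_count
        return "alarm", alarm_count + 1
    return "next_node", alarm_count

# Three consecutive low-emotion turns end in a transfer to a human agent
alarms = 0
for value in (0.5, 0.4, 0.3):
    action, alarms = route_call(value, alarms)
```

Under these example values, the third consecutive low-emotion turn yields the "transfer" action.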
It should be noted that in this embodiment, when the emotion value satisfies the preset condition, the customer may first be prompted about the transfer, and the transfer to a human agent is executed only after the customer's confirmation is received. Executing the transfer only after the user's confirmation avoids misjudgements and further improves the user experience.
It should be noted that, in this embodiment, while the intelligent emotion analysis engine model is computing the emotion value, speech recognition may simultaneously be applied to the user's speech, and when the speech contains a phrase that explicitly requests a human agent, the transfer is executed directly. Running the emotion analysis engine and speech recognition in parallel thus ensures the transfer to a human agent is triggered accurately.
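The parallel explicit-intent check described above can be sketched as a simple substring test over the recognised transcript; the phrase list is an illustrative assumption, not the patent's actual keyword set:

```python
# Illustrative transfer phrases; a real system would use its own keyword list
TRANSFER_KEYWORDS = ("转人工", "人工客服", "找人工")

def wants_human(transcript: str) -> bool:
    """Explicit-intent check run in parallel with the emotion engine:
    if the recognised text contains a transfer phrase, switch at once."""
    return any(kw in transcript for kw in TRANSFER_KEYWORDS)

explicit = wants_human("我要转人工")   # an explicit request: transfer directly
implicit = wants_human("帮我查下余额")  # no explicit request: rely on the emotion value
```

When `wants_human` returns True, the transfer bypasses the emotion threshold entirely.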
According to this embodiment of the invention, the emotion value matched to the user's speech is obtained through the intelligent emotion analysis engine model, and the need for a human agent is determined from that value, so the call can be handed over seamlessly even when the customer has run into a problem but has not explicitly expressed an intention to be transferred, improving the user experience.
Example two
Fig. 3 is a flowchart of a method for intelligently transferring an outbound call to a human agent according to a second embodiment of the present invention. In this embodiment, after the call has been transferred and the interaction with the human agent is completed, the call is switched back to human-machine interaction. As shown in fig. 3, the method includes:
step S201, an intelligent emotion analysis engine model is obtained.
Optionally, obtaining an intelligent emotion analysis engine model includes: acquiring an emotion statement sample set, wherein the emotion statement sample set comprises a positive emotion statement sample set and a negative emotion statement sample set; performing word segmentation processing on the emotion sentence sample set to obtain an initial word segmentation result; screening the initial word segmentation result according to the outbound exclusive emotion dictionary to obtain an effective word segmentation result; and converting the effective word segmentation result into a sample word vector, and training the sample word vector by adopting a naive Bayes algorithm to obtain an intelligent emotion analysis engine model.
Step S202, inputting the voice of the user in the current call turn into the intelligent emotion analysis engine model to obtain an emotion value matched with the voice of the user.
Optionally, inputting the user's speech from the current call turn into the intelligent emotion analysis engine model to obtain an emotion value matched to the user's speech includes: segmenting the user's speech from the current call turn to obtain a user segmentation result; converting the user segmentation result into user word vectors; and inputting the user word vectors into the intelligent emotion analysis engine model to obtain the emotion value matched to the user's speech.
Step S203, transferring the outbound call to a human agent when the emotion value satisfies the preset condition.
Optionally, transferring the outbound call to a human agent when the emotion value satisfies the preset condition includes: acquiring an alarm call record, the record comprising the number of alarms raised in historical call turns; and transferring the call to a human agent when the emotion value is below a preset threshold and the number of alarms has reached a specified count.
Step S204, switching back to human-machine interaction after the interaction with the human agent is completed.
Specifically, in this embodiment, after the emotion value has satisfied the preset condition, the call has been transferred, and the customer's interaction with the human agent is complete, the terminal system automatically switches back to the human-machine interaction flow. This spares the user further operations once the problem is solved, and at the same time frees the human agent to serve other customers as soon as possible rather than occupying the line long-term.
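The transfer and automatic switch-back flow can be sketched as a tiny session state machine; the `Mode` enum and the event names are hypothetical, introduced only to illustrate the two-way hand-over:

```python
from enum import Enum

class Mode(Enum):
    MACHINE = "machine"  # scripted human-machine dialogue
    HUMAN = "human"      # live human agent

def on_event(mode, event):
    """Session-mode transitions: a transfer hands the call to an agent;
    when the agent finishes, control returns to the machine flow."""
    if mode is Mode.MACHINE and event == "transfer":
        return Mode.HUMAN
    if mode is Mode.HUMAN and event == "agent_done":
        return Mode.MACHINE
    return mode  # all other events leave the mode unchanged

mode = on_event(Mode.MACHINE, "transfer")   # hand over to the agent
mode = on_event(mode, "agent_done")         # agent done: back to the machine flow
```

The key property is that `agent_done` returns the session to `MACHINE` without any action from the customer, matching the automatic switch-back described above.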
It should be noted that, in this embodiment, after the interaction with the human agent is completed, a prompt about switching back to human-machine interaction may be fed back to the customer, and the switch-back is performed only after the customer's confirmation is received, further improving the user experience.
According to this embodiment of the invention, the emotion value matched to the user's speech is obtained through the intelligent emotion analysis engine model, and the need for a human agent is determined from that value, so the call can be handed over seamlessly even when the customer has run into a problem but has not explicitly expressed an intention to be transferred, improving the user experience. After the customer's interaction with the human agent is complete, the terminal system automatically switches back to the human-machine interaction flow, sparing the user further operations and freeing the agent to serve other customers as soon as possible without occupying the line long-term.
Example three
Fig. 4 is a schematic structural diagram of an apparatus for intelligently transferring an outbound call to a human agent according to a third embodiment of the present invention. As shown in fig. 4, the apparatus includes: an intelligent emotion analysis engine model acquisition module 310, an emotion value acquisition module 320, and a transfer-to-human module 330.
an intelligent emotion analysis engine model acquisition module 310, configured to acquire an intelligent emotion analysis engine model;
an emotion value acquisition module 320, configured to input the user's speech from the current call turn into the intelligent emotion analysis engine model to obtain an emotion value matched to the user's speech;
and a transfer-to-human module 330, configured to transfer the outbound call to a human agent when the emotion value satisfies a preset condition.
Optionally, the intelligent emotion analysis engine model obtaining module is configured to obtain an emotion statement sample set, where the emotion statement sample set includes a positive emotion statement sample set and a negative emotion statement sample set;
performing word segmentation processing on the emotion sentence sample set to obtain an initial word segmentation result;
screening the initial word segmentation result according to the outbound exclusive emotion dictionary to obtain an effective word segmentation result;
and converting the effective word segmentation result into a sample word vector, and training the sample word vector by adopting a naive Bayes algorithm to obtain an intelligent emotion analysis engine model.
Optionally, the emotion value acquisition module is configured to perform word segmentation on the voice of the user in the current call turn to obtain a word segmentation result of the user;
converting the word segmentation result of the user into a user word vector;
and inputting the user word vectors into the intelligent emotion analysis engine model to obtain an emotion value matched with the user voice.
Optionally, the transfer-to-human module is configured to acquire an alarm call record, the record comprising the number of alarms raised in historical call turns;
and to transfer the call to a human agent when the emotion value is below a preset threshold and the number of alarms has reached a specified count.
Optionally, the device further includes an alarm call record updating module, configured to generate alarm information of a current turn when the emotion value is smaller than a preset threshold and the alarm frequency is smaller than a specified frequency;
and updating the alarm times in the alarm call record according to the alarm information of the current turn.
Optionally, the apparatus further includes a dialogue node triggering module, configured to trigger the dialogue node of the next turn when the emotion value is above a preset threshold;
and to carry out a new round of human-machine conversation with the user based on that dialogue node.
Optionally, the apparatus further includes a user speech display module, configured to send the user's speech to the agent terminal so that the agent terminal displays it in real time.
The apparatus provided by this embodiment of the invention can perform the method for intelligently transferring an outbound call to a human agent provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of the method it performs.
Example four
FIG. 5 illustrates a block diagram of an electronic device 10 that may be used to implement an embodiment of the invention. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices (e.g., helmets, glasses, watches, etc.), and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the inventions described and/or claimed herein.
As shown in FIG. 5, the electronic device 10 includes at least one processor 11 and a memory communicatively connected to it, such as a read-only memory (ROM) 12 and a random-access memory (RAM) 13. The memory stores a computer program executable by the processor 11, which can perform various suitable actions and processes according to the program stored in the ROM 12 or loaded from a storage unit 18 into the RAM 13. The RAM 13 may also store the programs and data required for the operation of the electronic device 10. The processor 11, the ROM 12, and the RAM 13 are connected to each other via a bus 14; an input/output (I/O) interface 15 is also connected to the bus 14.
A number of components in the electronic device 10 are connected to the I/O interface 15, including: an input unit 16 such as a keyboard, a mouse, or the like; an output unit 17 such as various types of displays, speakers, and the like; a storage unit 18 such as a magnetic disk, optical disk, or the like; and a communication unit 19 such as a network card, modem, wireless communication transceiver, etc. The communication unit 19 allows the electronic device 10 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The processor 11 may be any of various general- and/or special-purpose processing components having processing and computing capabilities. Some examples of the processor 11 include, but are not limited to, a central processing unit (CPU), a graphics processing unit (GPU), various specialized artificial-intelligence (AI) computing chips, various processors running machine-learning model algorithms, a digital signal processor (DSP), and any suitable processor, controller, or microcontroller. The processor 11 performs the various methods and processes described above, such as the intelligent method for transferring an outbound call to a human agent.
In some embodiments, the intelligent outbound-call transfer method can be implemented as a computer program tangibly embodied in a computer-readable storage medium, such as the storage unit 18. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 10 via the ROM 12 and/or the communication unit 19. When the computer program is loaded into the RAM 13 and executed by the processor 11, one or more steps of the method described above may be performed. Alternatively, in other embodiments, the processor 11 may be configured to perform the method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be realized in digital electronic circuitry, integrated circuitry, field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on a chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special- or general-purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
A computer program for implementing the methods of the present invention may be written in any combination of one or more programming languages. These computer programs may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the computer programs, when executed by the processor, cause the functions/acts specified in the flowchart and/or block diagram block or blocks to be performed. A computer program can execute entirely on a machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of the present invention, a computer-readable storage medium may be a tangible medium that can contain, or store a computer program for use by or in connection with an instruction execution system, apparatus, or device. A computer readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Alternatively, the computer readable storage medium may be a machine readable signal medium. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on an electronic device having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the electronic device. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), blockchain networks, and the Internet.
The computing system may include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network; their relationship arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server (also called a cloud computing server or cloud host), a host product in a cloud computing service system that overcomes the difficult management and weak service scalability of traditional physical hosts and VPS services.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present invention may be executed in parallel, sequentially, or in different orders, and are not limited herein as long as the desired result of the technical solution of the present invention can be achieved.
The above-described embodiments should not be construed as limiting the scope of the invention. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made, depending on design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. An intelligent method for transferring an outbound call to a human agent, characterized by comprising the following steps:
acquiring an intelligent emotion analysis engine model;
inputting the user voice of the current call turn into the intelligent emotion analysis engine model to obtain an emotion value matched with the user voice;
and executing a transfer of the outbound call to a human agent when the emotion value satisfies a preset condition.
2. The method of claim 1, wherein obtaining the intelligent sentiment analysis engine model comprises:
acquiring an emotion statement sample set, wherein the emotion statement sample set comprises a positive emotion statement sample set and a negative emotion statement sample set;
performing word segmentation processing on the emotion sentence sample set to obtain an initial word segmentation result;
screening the initial word segmentation result according to an outbound-call-specific emotion dictionary to obtain an effective word segmentation result;
and converting the effective word segmentation result into a sample word vector, and training the sample word vector by adopting a naive Bayes algorithm to obtain the intelligent emotion analysis engine model.
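The training pipeline of claim 2 (segmentation, dictionary filtering, word vectors, naive Bayes) can be sketched as follows. This is only an illustrative reconstruction: the dictionary contents, the whitespace tokenizer (a stand-in for a real Chinese word segmenter), and the use of the positive-class posterior as the emotion value are assumptions, not details disclosed in the patent.

```python
from collections import Counter
import math

# Hypothetical outbound-call-specific emotion dictionary (claim 2's
# "exclusive emotion dictionary"); only these tokens survive the
# screening step. The contents are invented for illustration.
EMOTION_DICT = {"thanks", "great", "helpful", "fine",
                "angry", "stop", "annoying", "useless"}

def segment(sentence):
    # Stand-in for a real Chinese word-segmentation step; here we
    # simply split on whitespace.
    return sentence.lower().split()

def filter_tokens(tokens):
    # Claim 2's screening step: keep only dictionary words.
    return [t for t in tokens if t in EMOTION_DICT]

class NaiveBayesEmotion:
    """Multinomial naive Bayes over bag-of-words counts (the patent's
    'sample word vectors'), with Laplace (add-one) smoothing."""

    def fit(self, samples, labels):
        self.classes = sorted(set(labels))
        self.class_counts = Counter(labels)
        self.word_counts = {c: Counter() for c in self.classes}
        self.vocab = set()
        for tokens, label in zip(samples, labels):
            self.word_counts[label].update(tokens)
            self.vocab.update(tokens)
        return self

    def emotion_value(self, sentence):
        # Emotion value = posterior probability of the positive class
        # (an assumed scoring convention, not stated in the patent).
        tokens = filter_tokens(segment(sentence))
        total = sum(self.class_counts.values())
        log_scores = {}
        for c in self.classes:
            denom = sum(self.word_counts[c].values()) + len(self.vocab)
            score = math.log(self.class_counts[c] / total)
            for t in tokens:
                score += math.log((self.word_counts[c][t] + 1) / denom)
            log_scores[c] = score
        m = max(log_scores.values())
        exp_scores = {c: math.exp(s - m) for c, s in log_scores.items()}
        return exp_scores["positive"] / sum(exp_scores.values())

# Tiny positive/negative emotion sentence sample sets (illustrative).
positive = ["thanks that is great", "very helpful thanks", "fine great"]
negative = ["stop calling me angry", "this is useless", "annoying stop"]
samples = [filter_tokens(segment(s)) for s in positive + negative]
labels = ["positive"] * len(positive) + ["negative"] * len(negative)

model = NaiveBayesEmotion().fit(samples, labels)
print(round(model.emotion_value("thanks very helpful"), 3))
```

A production system would train on a large labelled corpus and use a proper segmenter; the hand-rolled classifier above simply makes the claimed steps concrete.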
3. The method of claim 1, wherein the inputting the user speech of the current call turn into the intelligent emotion analysis engine model to obtain an emotion value matching the user speech comprises:
performing word segmentation processing on the voice of the user in the current call turn to obtain a word segmentation result of the user;
converting the user word segmentation result into a user word vector;
and inputting the user word vector into the intelligent emotion analysis engine model to obtain the emotion value matched with the user voice.
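The vectorisation step of claim 3 — mapping the user's segmented utterance onto the model's vocabulary before scoring — might look like this minimal sketch; the vocabulary and the bag-of-words encoding are illustrative assumptions.

```python
# Hypothetical fixed vocabulary produced at training time; its order
# defines the dimensions of the user word vector.
VOCAB = ["thanks", "great", "helpful", "angry", "stop", "useless"]

def to_word_vector(tokens, vocab=VOCAB):
    """Map a segmented utterance to a bag-of-words count vector, the
    'user word vector' fed to the emotion model in claim 3."""
    index = {w: i for i, w in enumerate(vocab)}
    vec = [0] * len(vocab)
    for t in tokens:
        if t in index:          # out-of-vocabulary tokens are dropped
            vec[index[t]] += 1
    return vec

# tokens would come from an upstream ASR + word-segmentation step
print(to_word_vector(["stop", "stop", "angry", "please"]))
```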
4. The method according to claim 1, wherein executing the transfer of the outbound call to a human agent when the emotion value satisfies the preset condition comprises:
acquiring an alarm call record, wherein the alarm call record comprises the number of alarms from historical call turns;
and executing the transfer of the outbound call to a human agent when the emotion value is smaller than a preset threshold and the number of alarms has reached a specified count.
5. The method of claim 4, further comprising:
when the emotion value is smaller than the preset threshold and the number of alarms is below the specified count, generating alarm information for the current turn;
and updating the number of alarms in the alarm call record according to the alarm information of the current turn.
6. The method of claim 4, further comprising:
when the emotion value is greater than the preset threshold, triggering the dialogue-script node that starts the next round of conversation;
and carrying out a new round of human-machine dialogue with the user based on the dialogue-script node.
7. The method of claim 5, wherein after generating the alarm information of the current turn, the method further comprises:
and sending the user voice to an agent terminal so that the agent terminal can display the user voice in real time.
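Taken together, claims 4-7 describe a per-turn dispatch: transfer to a human agent once the emotion value has stayed below the threshold for the specified number of alarm turns, otherwise raise an alarm (surfacing the user's speech to an agent) or continue the scripted dialogue. A hedged sketch, with the threshold and alarm limit chosen arbitrarily:

```python
from dataclasses import dataclass

THRESHOLD = 0.4      # illustrative emotion threshold
MAX_ALARMS = 2       # illustrative "specified count" of alarms

@dataclass
class CallRecord:
    alarm_count: int = 0   # alarms accumulated over historical turns

def handle_turn(emotion_value, record):
    """Dispatch one call turn following claims 4-7."""
    if emotion_value < THRESHOLD:
        if record.alarm_count >= MAX_ALARMS:
            return "transfer_to_human"    # claim 4: escalate
        record.alarm_count += 1           # claim 5: update alarm record
        return "alarm"                    # claim 7: show speech to agent
    return "next_dialogue_node"           # claim 6: continue the script

record = CallRecord()
print([handle_turn(v, record) for v in (0.2, 0.9, 0.1, 0.1)])
# → ['alarm', 'next_dialogue_node', 'alarm', 'transfer_to_human']
```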
8. An intelligent device for transferring an outbound call to a human agent, characterized by comprising:
an intelligent emotion analysis engine model acquisition module, used for acquiring an intelligent emotion analysis engine model;
an emotion value acquisition module, used for inputting the user voice of the current call turn into the intelligent emotion analysis engine model to obtain an emotion value matched with the user voice;
and a human-transfer module, used for executing the transfer of the outbound call to a human agent when the emotion value satisfies a preset condition.
9. An electronic device, characterized in that the electronic device comprises:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-7.
10. A computer-readable storage medium storing computer instructions for causing a processor to perform the method of any one of claims 1-7 when executed.
CN202211083161.2A 2022-09-06 2022-09-06 Method, device, equipment and storage medium for intelligent external call transfer Pending CN115460323A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211083161.2A CN115460323A (en) 2022-09-06 2022-09-06 Method, device, equipment and storage medium for intelligent external call transfer

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211083161.2A CN115460323A (en) 2022-09-06 2022-09-06 Method, device, equipment and storage medium for intelligent external call transfer

Publications (1)

Publication Number Publication Date
CN115460323A true CN115460323A (en) 2022-12-09

Family

ID=84303256

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211083161.2A Pending CN115460323A (en) 2022-09-06 2022-09-06 Method, device, equipment and storage medium for intelligent external call transfer

Country Status (1)

Country Link
CN (1) CN115460323A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110866112A (en) * 2018-08-14 2020-03-06 阿里巴巴集团控股有限公司 Response sequence determination method, server and terminal equipment

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110991178A (en) * 2019-11-08 2020-04-10 苏宁金融科技(南京)有限公司 Intelligent customer service and artificial customer service switching method and device and computer equipment
CN112269863A (en) * 2020-10-15 2021-01-26 和美(深圳)信息技术股份有限公司 Man-machine conversation data processing method and system of intelligent robot
CN112967725A (en) * 2021-02-26 2021-06-15 平安科技(深圳)有限公司 Voice conversation data processing method and device, computer equipment and storage medium
CN113435912A (en) * 2021-06-29 2021-09-24 平安科技(深圳)有限公司 Data analysis method, device, equipment and medium based on client portrait
CN113450793A (en) * 2021-06-25 2021-09-28 平安科技(深圳)有限公司 User emotion analysis method and device, computer readable storage medium and server
CN113840040A (en) * 2021-09-22 2021-12-24 平安普惠企业管理有限公司 Man-machine cooperation outbound method, device, equipment and storage medium
CN114173008A (en) * 2021-11-12 2022-03-11 杭州摸象大数据科技有限公司 Customer service switching calling method, customer service switching calling device, computer equipment and storage medium
US20220084542A1 (en) * 2020-09-11 2022-03-17 Fidelity Information Services, Llc Systems and methods for classification and rating of calls based on voice and text analysis
CN114449297A (en) * 2020-11-04 2022-05-06 阿里巴巴集团控股有限公司 Multimedia information processing method, computing equipment and storage medium
CN114942973A (en) * 2022-04-19 2022-08-26 尚特杰电力科技有限公司 Emotion recognition method and system for electric power intelligent customer service system


Similar Documents

Publication Publication Date Title
US11657234B2 (en) Computer-based interlocutor understanding using classifying conversation segments
US9105268B2 (en) Method and apparatus for predicting intent in IVR using natural language queries
CN112100352A (en) Method, device, client and storage medium for interacting with virtual object
CN110347863A (en) Talk about art recommended method and device and storage medium
CN113129868B (en) Method for obtaining speech recognition model, speech recognition method and corresponding device
US20230169964A1 (en) Methods and apparatus for leveraging sentiment values in flagging and/or removal of real time workflows
CN113450759A (en) Voice generation method, device, electronic equipment and storage medium
CN115309877A (en) Dialog generation method, dialog model training method and device
CN112148850A (en) Dynamic interaction method, server, electronic device and storage medium
CN115460323A (en) Method, device, equipment and storage medium for intelligent external call transfer
CN114979387A (en) Network telephone service method, system, equipment and medium based on analog telephone
CN113851105A (en) Information reminding method, device, equipment and storage medium
CN114202363A (en) Artificial intelligence based call method, device, computer equipment and medium
CN113689866A (en) Training method and device of voice conversion model, electronic equipment and medium
CN112632241A (en) Method, device, equipment and computer readable medium for intelligent conversation
CN114221940B (en) Audio data processing method, system, device, equipment and storage medium
CN115277951A (en) Intelligent voice outbound method, device, equipment and medium
CN112309399B (en) Method and device for executing task based on voice and electronic equipment
CN110125946B (en) Automatic call method, automatic call device, electronic equipment and computer readable medium
US20240220731A1 (en) To Computer-based Interlocutor Understanding Using Classifying Conversation Segments
CN113079262B (en) Data processing method and device for intelligent voice conversation, electronic equipment and medium
CN115798479A (en) Method and device for determining session information, electronic equipment and storage medium
CN115473963A (en) Call processing method and device, electronic equipment and computer readable storage medium
CN115731937A (en) Information processing method, information processing device, electronic equipment and storage medium
CN116939091A (en) Voice call content display method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination