CN111640436A - Method for providing a dynamic customer representation of a call partner to an agent - Google Patents

Method for providing a dynamic customer representation of a call partner to an agent

Info

Publication number
CN111640436A
CN111640436A
Authority
CN
China
Prior art keywords
call
agent
text
dynamic
providing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010413039.1A
Other languages
Chinese (zh)
Other versions
CN111640436B (en)
Inventor
许杰
马传峰
孔卫东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Qingniu Technology Co., Ltd.
Original Assignee
Beijing Qingniu Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Qingniu Technology Co., Ltd.
Priority to CN202010413039.1A
Priority claimed from CN202010413039.1A
Publication of CN111640436A
Application granted
Publication of CN111640436B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 Handling natural language data
    • G06F 40/20 Natural language analysis
    • G06F 40/279 Recognition of textual entities
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 3/00 Automatic or semi-automatic exchanges
    • H04M 3/42 Systems providing special services or facilities to subscribers
    • H04M 3/50 Centralised arrangements for answering calls; centralised arrangements for recording messages for absent or busy subscribers; centralised arrangements for recording messages
    • H04M 3/51 Centralised call answering arrangements requiring operator intervention, e.g. call or contact centers for telemarketing
    • H04M 3/5166 Centralised call answering arrangements requiring operator intervention, in combination with interactive voice response systems or voice portals, e.g. as front-ends

Abstract

The invention provides a method for providing a dynamic customer portrait of a call partner to an agent, comprising the following steps: searching for an initial customer portrait corresponding to a telephone number according to the telephone number of the call partner dialed by or connected to an agent, and pushing the initial customer portrait to the agent; and updating the initial customer portrait according to real-time voice stream data exchanged between the agent and the call partner, so as to obtain a dynamic customer portrait. The invention further provides a corresponding computer medium. The invention ensures the update rate and accuracy of the customer portrait provided during a call, which helps improve both the agent's communication efficiency and customer satisfaction.

Description

Method for providing a dynamic customer representation of a call partner to an agent
Technical Field
The invention relates to the fields of call centers and user profiling, and in particular to a method for providing a dynamic customer portrait of a call partner to an agent.
Background
At present, call centers have become an important channel through which enterprises provide comprehensive online service information. Call center agents conduct outbound or inbound telephone traffic according to enterprise needs, handling services such as customer feedback and consultation, or performing market research, telesales, after-sales follow-up, and other services for the enterprise's products.
To obtain a better communication outcome, it is often desirable that the agent knows the customer's needs and basic situation before answering or dialing the customer's call, so that the agent can address the customer's concerns in a prepared and targeted manner and avoid the customer hanging up because the communication fails. Based on this expectation, some technical schemes combine data mining techniques with call center services: a customer portrait is generated from the available customer background data, and through this portrait the agent can learn the customer's needs and basic situation before or during the call, thereby improving communication quality and efficiency. However, this approach still has limitations. The customer portrait in the prior art is usually generated from customer background data available before the call and reflects only the customer's historical needs and historical situation prior to the current call; the portrait is therefore static with respect to the current call, neither updated as the call progresses nor reflecting the customer's latest needs in it, and whether a better communication outcome is achieved still depends on the service skill and comprehension of the agent. In short, the way the customer portrait is provided in the prior art is insufficiently related to the content of the current call, so the prior art suffers from drawbacks such as one-dimensional information and lagging updates relative to the specific content of the current call.
Disclosure of Invention
To overcome the above drawbacks of the prior art, the present invention provides a method for providing a dynamic customer portrait of a call partner to an agent, the method comprising:
searching for an initial customer portrait corresponding to a telephone number according to the telephone number of the call partner dialed by or connected to an agent, and pushing the initial customer portrait to the agent; and
updating the initial customer portrait according to real-time voice stream data between the agent and the call partner, so as to obtain a dynamic customer portrait.
According to one aspect of the invention, the step of updating the initial customer portrait according to real-time voice stream data between the agent and the call partner to obtain a dynamic customer portrait comprises: collecting the real-time voice stream data; performing text transcription on the real-time voice stream data to obtain a speech text; adding to the speech text a message identifier describing the call attributes of the real-time voice stream data, so as to encapsulate the speech text into a call text message; adding the call text message to a message queue; and pulling the call text message from the message queue for natural language analysis to obtain a recognition result, finding a plurality of portrait tags corresponding to the recognition result, and updating the initial customer portrait according to the portrait tags to obtain the dynamic customer portrait.
According to another aspect of the invention, the initial customer portrait includes: at least one basic attribute tag determined from the telephone number; and/or at least one history tag having a corresponding mapping relationship with the telephone number.
According to another aspect of the invention, the message identifier comprises an identity identifier and a timestamp, the identity identifier defining the identity information and organizational affiliation information of the agent.
According to another aspect of the invention, the identity information includes a customer service identifier corresponding to the agent, and the organizational affiliation information includes an enterprise identifier and a department identifier corresponding to the agent.
According to another aspect of the invention, the step of pulling the call text message from the message queue for natural language analysis to obtain a recognition result comprises: pulling the call text message from the message queue; separating the speech text from the call text message; invoking a dialogue analysis model to perform dialogue analysis on the speech text; and determining the recognition result according to the dialogue analysis.
According to another aspect of the invention, the dialogue analysis comprises any one or a combination of: call record standard-format preprocessing, role analysis, word segmentation, part-of-speech tagging, entity recognition, dependency analysis, entity relationship extraction, sentiment analysis, automatic classification, text similarity calculation, and automatic summarization.
According to another aspect of the invention, the recognition result comprises any one or a combination of: a dialogue scene, a customer intention, a topic and keywords, and a business entity corresponding to the speech text.
According to another aspect of the invention, the method further comprises: weight-sorting a plurality of tags contained in the dynamic customer portrait, and preferentially displaying the tags whose weight exceeds a predetermined threshold.
According to another aspect of the invention, the message queue is stored and managed by a message center.
According to another aspect of the invention, the step of pulling the call text message from the message queue is performed by one or more logics/devices arranged to subscribe to the call text message.
According to another aspect of the invention, the method further comprises: clustering a plurality of tags contained in the dynamic customer portrait according to conversation scene categories.
Accordingly, the present invention also provides one or more computer-readable media storing computer-executable instructions that, when used by one or more computer devices, cause the one or more computer devices to perform the method of providing a dynamic customer portrait of a call partner to an agent as described above.
The method for providing a dynamic customer portrait of a call partner to an agent can update the initial customer portrait according to the real-time voice stream between the agent and the call partner to obtain the dynamic customer portrait, so that the generation of the dynamic customer portrait is closely tied to the content of the agent's current call.
Drawings
Other features, objects, and advantages of the invention will become more apparent upon reading the following detailed description of non-limiting embodiments, made with reference to the accompanying drawings, in which:
FIG. 1 is a schematic flow chart of one embodiment of a method for providing a dynamic customer portrait of a call partner to an agent according to the present invention;
FIG. 2 is a schematic flow chart of an alternative embodiment of step S200 shown in FIG. 1;
FIG. 3 is a schematic flow chart of an alternative embodiment of step S250 shown in FIG. 2;
FIG. 4 is a schematic flow chart of another alternative embodiment of step S200 shown in FIG. 1.
The same or similar reference numbers in the drawings identify the same or similar elements.
Detailed Description
For a better understanding and explanation of the present invention, reference will now be made in detail to the embodiments illustrated in the accompanying drawings. The present invention is not limited to these specific embodiments; rather, modifications and equivalents of the invention are intended to be included within the scope of the claims.
It should be noted that numerous specific details are set forth in the following detailed description. It will be understood by those skilled in the art that the present invention may be practiced without these specific details. In the following detailed description of various embodiments, structures and components well known in the art are not described in detail so as not to unnecessarily obscure the present invention.
Referring to FIG. 1, FIG. 1 is a schematic flow chart of an embodiment of a method for providing a dynamic customer portrait of a call partner to an agent, the method comprising:
step S100, searching for an initial customer portrait corresponding to a telephone number according to the telephone number of the call partner dialed by or connected to an agent, and pushing the initial customer portrait to the agent;
step S200, updating the initial customer portrait according to real-time voice stream data between the agent and the call partner, so as to obtain a dynamic customer portrait.
Specifically, whether a call center agent dials out or receives a call, it is desirable for the agent to obtain a customer portrait of the call partner. Obtaining such a portrait requires at least one retrieval key that marks the source of the call partner. The most direct key is the telephone number of the call partner, which is visible to the call center whenever the agent dials out or a call comes in; the telephone number therefore naturally serves as the retrieval key, which is why step S100 searches for the initial customer portrait corresponding to the telephone number and pushes it to the agent. The purpose of providing the initial customer portrait is to let the agent recognize basic information about the call partner in preparation for the subsequent live communication. Because the search uses the telephone number as its only key, the initial customer portrait typically contains only information obtainable by data mining on that number. Typically, the initial customer portrait includes: at least one basic attribute tag determined from the telephone number, and/or at least one history tag having a corresponding mapping relationship with the telephone number.
The basic attribute tags include, for example, the registered owner's name of the telephone number and related geographic information such as the number's home region, an activity location, and demographic or economic statistics for that location. The history tags include, for example, registered-user data and historical call summaries: the registered-user data covers past operations and consumption records on any service platform registered with the telephone number, while the historical call summaries are summary data stripped from past call records between that number and the call center, generally marking the types of service (such as complaint, consultation, or purchase) the corresponding customer has generated in the past. Those skilled in the art will appreciate that any lawful data mining result obtainable from the telephone number may be included in the initial customer portrait if it helps improve the agent's awareness of the call partner.
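As a minimal illustrative sketch of the lookup in step S100, the portrait store below is a hypothetical in-memory mapping from telephone numbers to basic attribute tags and history tags; a real deployment would query a database, and the phone number and tag strings are invented for the example:

```python
# Hypothetical in-memory portrait store; a real system would query a
# database keyed by telephone number.
PORTRAIT_STORE = {
    "+86-10-5550-0100": {
        "base_attribute_tags": ["owner: Zhang", "region: Beijing"],
        "history_tags": ["past service: complaint", "registered user"],
    },
}

def lookup_initial_portrait(phone_number: str) -> dict:
    """Return the initial customer portrait for a phone number.

    Falls back to an empty portrait when the number has never been seen,
    so the agent still receives a (blank) portrait to be filled in live.
    """
    empty = {"base_attribute_tags": [], "history_tags": []}
    return PORTRAIT_STORE.get(phone_number, empty)
```

The empty-portrait fallback reflects the fact that a dynamic portrait can still be built during the call even when no background data exists for the number.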
It will be understood by those skilled in the art that the sources from which the initial customer portrait of step S100 is obtained are static and lagging with respect to the call the agent is currently making, and essentially do not reflect the intention and purpose of the call partner in this call. An important purpose of step S200 is therefore to let the agent obtain a dynamic, real-time, continuously updated customer portrait during the call: step S200 updates the initial customer portrait according to the real-time voice stream data between the agent and the call partner, so as to obtain a dynamic customer portrait. Step S200 exploits the content of the ongoing call; the real-time voice stream data is closely related to the intention and purpose of the call partner, so a dynamic customer portrait generated from it naturally matches that intention and purpose. As real-time voice stream data keeps being generated, the dynamic customer portrait is updated accordingly, which guarantees its dynamism and timeliness.
Referring to FIG. 2, FIG. 2 is a schematic flow chart of an alternative embodiment of step S200 shown in FIG. 1, in which step S200 further comprises the following steps:
step S210, collecting the real-time voice stream data;
step S220, performing text transcription on the real-time voice stream data to obtain a speech text;
step S230, adding to the speech text a message identifier describing the call attributes of the real-time voice stream data, so as to encapsulate the speech text into a call text message;
step S240, adding the call text message to a message queue;
step S250, pulling the call text message from the message queue for natural language analysis to obtain a recognition result, finding a plurality of portrait tags corresponding to the recognition result, and updating the initial customer portrait according to the portrait tags to obtain the dynamic customer portrait.
Specifically, once the call between the agent and the call partner is successfully connected, a real-time voice stream is generated by the spoken exchange between the two parties, and the topics in that stream usually revolve around the services the agent is required to provide to the customer, such as telesales, after-sales visits, market research, or answering difficult questions. Step S210 collects the real-time voice stream data corresponding to this stream; more specifically, the real-time voice stream data is a digital signal recording the real-time voice stream, usually converted from the analog signal of the agent's or customer's voice.
The body performing the text transcription is typically a trained speech recognition engine that exposes a recognition interface to the call center. In step S220 this interface is invoked to convert the spoken words in the real-time voice stream data into character-type speech text, which is processed further in step S230. Note that because the real-time voice stream data is generated continuously as the agent's call proceeds, while the amount of data a speech recognition engine can process per unit time is limited, a single complete call may, if necessary, produce multiple pieces of real-time voice stream data, or one piece may be divided into several data chunks, in which case several instances of step S220 may run in parallel. In any case, the real-time voice stream data is eventually converted into the speech text.
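The chunk-and-parallelize pattern just described can be sketched as follows. Here `fake_transcribe` is a stand-in for a call to a real speech recognition engine's interface, and the chunk size and worker count are arbitrary illustrative choices:

```python
from concurrent.futures import ThreadPoolExecutor

def fake_transcribe(chunk: bytes) -> str:
    # Placeholder for a real speech-recognition call; here we just decode
    # the bytes so the sketch is runnable.
    return chunk.decode("utf-8")

def split_into_chunks(stream: bytes, size: int) -> list:
    """Divide one piece of real-time voice stream data into data chunks."""
    return [stream[i:i + size] for i in range(0, len(stream), size)]

def transcribe_stream(stream: bytes, chunk_size: int = 8) -> str:
    """Transcribe chunks in parallel, then rejoin the texts in order."""
    chunks = split_into_chunks(stream, chunk_size)
    with ThreadPoolExecutor(max_workers=4) as pool:
        texts = list(pool.map(fake_transcribe, chunks))  # map preserves order
    return "".join(texts)
```

Because `ThreadPoolExecutor.map` returns results in input order, the rejoined speech text keeps the temporal order of the call even though the chunks were recognized concurrently.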
It will be understood by those skilled in the art that a call center may carry a large number of concurrent agent calls at the same time. To ensure that the real-time speech generated by each agent call is allocated appropriate computing resources, the speech text is processed further in step S230: it is encapsulated into a call text message carrying a message identifier, where the message identifier describes the call attributes of the real-time voice stream data. Next, step S240 adds the call text message to the message queue. Since, as noted above, many agent calls may be in progress concurrently, the message queue may contain call text messages corresponding to the real-time speech of many different agent calls. These messages are distinguished from one another by their message identifiers, and the generation time and source of each call text message can be verified and traced through its identifier.
For any one call text message, the message identifier typically comprises an identity identifier and a timestamp. The identity identifier defines the agent's identity information and organizational affiliation information: the identity information includes, for example, a customer service identifier corresponding to the agent, and the organizational affiliation information includes, for example, an enterprise identifier and a department identifier corresponding to the agent. Preferably, a complete message identifier is formed as "enterprise identifier + department identifier + customer service identifier + timestamp", which both describes the temporal order of the call text message within the message queue and guarantees its global uniqueness there.
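The preferred "enterprise identifier + department identifier + customer service identifier + timestamp" composition might be sketched as below; the separator, field names, and message layout are assumptions of this example, not mandated by the embodiment:

```python
import time

def build_message_id(enterprise_id: str, department_id: str,
                     agent_id: str, ts: int = None) -> str:
    """Compose a time-ordered, globally unique message identifier."""
    ts = time.time_ns() if ts is None else ts
    return f"{enterprise_id}.{department_id}.{agent_id}.{ts}"

def package_call_text_message(speech_text: str, enterprise_id: str,
                              department_id: str, agent_id: str,
                              ts: int = None) -> dict:
    """Encapsulate transcribed speech text into a call text message."""
    return {
        "id": build_message_id(enterprise_id, department_id, agent_id, ts),
        "text": speech_text,
    }
```

Placing the timestamp last keeps identifiers from the same agent sortable in generation order, while the identity prefix makes the source of each message traceable.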
Handling the call center's many concurrent agent calls through a message queue allows the speech recognition resources they occupy to be coordinated in an orderly way, so that the real-time voice stream data generated by every agent can receive natural language analysis. Preferably, the message queue is stored and managed by a message center whose scale matches that of the call center; for a large call center, the message center is often implemented as a high-performance cluster.
Steps S210 to S240 convert the real-time voice stream data into a recognizable data format and establish a queuing mechanism for the potentially many call text messages. Step S250 is then executed to recognize a single call text message and to process the recognition result further, namely the subsequent sub-steps of S250: finding a plurality of portrait tags corresponding to the recognition result and updating the initial customer portrait according to those tags to obtain the dynamic customer portrait.
Referring to FIG. 3, FIG. 3 is a flow chart of an alternative embodiment of step S250 shown in FIG. 2, and in particular of the sub-step of pulling the call text message from the message queue for natural language analysis to obtain the recognition result. As shown in FIG. 3, this sub-step comprises:
step S251, pulling the call text message from the message queue;
step S252, separating the speech text from the call text message;
step S253, invoking a dialogue analysis model to perform dialogue analysis on the speech text;
step S254, determining the recognition result according to the dialogue analysis.
Specifically, whenever an unprocessed call text message exists in the message queue, it can be taken out by executing step S251. Step S252 separates the speech text carrying the dialogue information from the call text message; preferably, before step S252 is executed, the call text message is traced and verified against its message identifier to ensure its validity.
Step S253 is performed next: a dialogue analysis model is invoked to perform dialogue analysis on the speech text. The dialogue analysis typically includes any one or a combination of call record standard-format preprocessing, role analysis, word segmentation, part-of-speech tagging, entity recognition, dependency analysis, entity relationship extraction, sentiment analysis, automatic classification, text similarity calculation, and automatic summarization, so as to structure the speech text and strip out usable information describing the dialogue actually taking place between the agent and the call partner. To implement the dialogue analysis, the dialogue analysis model may include a trained natural language processing neural network capable of determining, from the speech text, the specific content of each dimension of the dialogue it carries. For example: the call record standard-format preprocessing arranges the speech text into a recognizable standard call record format; the role analysis determines the respective roles of the speakers in the speech text (i.e., the agent and the call partner); the word segmentation typically divides continuous sentences in the speech text into words according to dialogue logic by means of a probability model; the part-of-speech tagging assigns a part of speech to each word; the entity recognition identifies the named entity corresponding to a word (i.e., which type of named entity the word belongs to, such as a name, time, or number); the dependency analysis further improves recognition accuracy using the context of a sentence; the entity relationship extraction labels the relationships between entities represented by different words to ensure the accuracy of language recognition; the sentiment analysis determines the speaker's emotional tendency from the words; the automatic classification splits the sentences and words of the speech text, or the output of any of the preceding analyses, according to predetermined rules; the text similarity calculation analyzes homophones in the speech text; and the automatic summarization abstracts all or part of the speech text into summary content. As will be understood by those skilled in the art, the specific operations included in the dialogue analysis depend on the required accuracy, and the implementer of step S253 may reasonably select which operations to include based on a comprehensive consideration of algorithmic time complexity, computing resource load, and accuracy requirements.
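Since the embodiment lets the implementer combine analysis operations freely, one way to model that is a configurable pipeline of stages. In this sketch each stage is a deliberately toy stand-in for a trained model: word segmentation is reduced to whitespace splitting and entity recognition to a small lookup table, both invented for illustration:

```python
def word_segmentation(ctx: dict) -> dict:
    # Toy segmenter: real systems segment by dialogue logic with a
    # probability model; whitespace splitting suffices for English text.
    ctx["tokens"] = ctx["text"].split()
    return ctx

def entity_recognition(ctx: dict) -> dict:
    # Toy named-entity lookup; a real system uses a trained NLP model.
    known = {"refund": "business", "monday": "time"}
    ctx["entities"] = [(t, known[t.lower()])
                       for t in ctx["tokens"] if t.lower() in known]
    return ctx

def run_dialogue_analysis(text: str, stages: list) -> dict:
    """Apply any chosen combination of analysis stages, in order."""
    ctx = {"text": text}
    for stage in stages:
        ctx = stage(ctx)
    return ctx

result = run_dialogue_analysis("I want a refund by Monday",
                               [word_segmentation, entity_recognition])
```

The pipeline shape mirrors the embodiment's point that operations are selected per deployment: adding role analysis or sentiment analysis would simply mean appending more stage functions to the list.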
Through the above dialogue analysis, the natural language meaning of the dialogue carried by the speech text can be determined, and step S254 then determines the recognition result from that analysis. Considering that the purpose of the recognition is to find a plurality of portrait tags corresponding to the recognition result, the recognition result preferably includes any one or a combination of: a dialogue scene, a customer intention, a topic and keywords, and a business entity corresponding to the speech text. As will be understood by those skilled in the art, the topic and keywords can be determined by extracting local content from the text output by the dialogue analysis. If the dialogue scene, customer intention, and business entity are to be obtained from the dialogue analysis, it is usually necessary to match the textual meaning of the speech text produced by the dialogue analysis against corresponding scene models, intention models, and business entity models in a database, so as to determine the current dialogue scene and business entity type to which the speech text belongs and the customer intention it conveys; the model samples in the database may be preset by the implementer of this embodiment or output by another trained neural network. A dialogue scene is, for example, a scenario abstracted from the actual verbal exchange, such as product introduction, objection handling, or deal confirmation; a customer intention is, for example, an abstracted intent such as complaint, consultation, or business handling; and a business entity type is, for example, a business related to the purpose of the call, such as pre-sales consultation, after-sales service, telesales, or advertising promotion.
In further embodiments, to handle more complex conversation content between the agent and the call partner, the recognition result may also include abstract indicators such as emotion and speech rate.
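The matching of analysed text against scene and intention models might be sketched as follows. Real systems would match against trained models; here each "model" is reduced to a keyword set, and every scene name, intention name, and keyword is an invented placeholder:

```python
# Toy stand-ins for the scene and intention models held in the database.
SCENE_MODELS = {
    "deal confirmation": {"order", "confirm", "purchase"},
    "objection handling": {"complaint", "unhappy", "refund"},
}
INTENT_MODELS = {
    "complaint": {"unhappy", "complaint"},
    "business handling": {"order", "purchase"},
}

def best_match(tokens: set, models: dict):
    """Pick the model whose keyword set overlaps the tokens the most."""
    scores = {name: len(words & tokens) for name, words in models.items()}
    name = max(scores, key=scores.get)
    return name if scores[name] > 0 else None

def recognize(tokens: list) -> dict:
    """Build a (partial) recognition result from segmented tokens."""
    token_set = {t.lower() for t in tokens}
    return {
        "dialog_scene": best_match(token_set, SCENE_MODELS),
        "client_intention": best_match(token_set, INTENT_MODELS),
    }
```

Returning `None` when no model overlaps keeps the recognition result honest about what the current utterance did and did not reveal.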
With continued reference to FIG. 2: since the ultimate purpose of determining the recognition result in step S250 is to find the portrait tags corresponding to it, any abstract indicator sufficient to express the communication intention and communication effect of the two parties in the currently processed real-time voice stream data can be included in the recognition result, and these abstract indicators are all obtained through the dialogue analysis.
Specifically, the portrait tags may be provided by a preset tag library that has a preset mapping relationship with recognition results, so that inputting the recognition result into the tag library yields the corresponding portrait tags; in another embodiment, the portrait tags may be generated by extracting text directly from the recognition result. In general, the portrait tags serve to help the agent understand the information or intention of the call partner, and any data achieving this purpose may be included among them.
The operation of updating the initial customer portrait according to the portrait tags in step S250 is, for example: pushing the portrait tags to the agent, and further adding some or all of them to the initial customer portrait, or using some or all of them to replace existing tags in the initial customer portrait; the updated initial customer portrait is the dynamic customer portrait. By consulting the dynamic customer portrait, the agent can deepen their understanding of the call partner's information or intention, quickly meet the customer's communication needs, and avoid communication failures caused by gaps in the agent's service skills, ultimately improving communication quality and customer satisfaction.
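Looking up tags in a preset tag library and then merging them into the portrait might look like the sketch below. The tag-library contents, the "key: value" tag format, and the merge policy (replace a stale tag that shares a key, append otherwise) are all assumptions of this example:

```python
# Hypothetical tag library mapping recognition-result values to portrait tags.
TAG_LIBRARY = {
    "complaint": ["intent: complaint", "priority: high"],
    "deal confirmation": ["scene: closing"],
}

def find_portrait_tags(recognition_result: dict) -> list:
    """Collect the portrait tags mapped to each recognized value."""
    tags = []
    for value in recognition_result.values():
        tags.extend(TAG_LIBRARY.get(value, []))
    return tags

def update_portrait(portrait: dict, new_tags: list) -> dict:
    """Merge new tags by key ('key: value'), replacing stale values."""
    merged = dict(t.split(": ", 1) for t in portrait.get("dynamic_tags", []))
    merged.update(t.split(": ", 1) for t in new_tags)
    portrait["dynamic_tags"] = [f"{k}: {v}" for k, v in merged.items()]
    return portrait
```

Merging by key models the "replace existing tags" branch of the embodiment: a newly recognized intention overwrites an earlier, now-outdated one instead of accumulating beside it.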
Referring to FIG. 4, FIG. 4 is a schematic flow chart of another alternative embodiment of step S200 shown in FIG. 1. It differs from the embodiment of FIG. 2 in that step S200 further comprises the following steps:
step S260, weight-sorting the tags contained in the dynamic customer portrait, and preferentially displaying the tags whose weight exceeds a predetermined threshold;
step S270, clustering a plurality of tags contained in the dynamic customer portrait according to conversation scene categories.
Specifically, step S260 addresses the situation in which the dynamic customer portrait contains many tags: the agent must keep the conversation flowing and cannot easily review many tags at once, so a weight-sorting mechanism is introduced to preferentially display the tags whose weight exceeds a predetermined threshold, allowing the most important part of the dynamic customer portrait to be shown to the agent with emphasis. As will be appreciated by those skilled in the art, step S260 can be implemented as long as the tags in the dynamic customer portrait are weighted in advance. In another embodiment, step S260 may instead preferentially display the tag with the highest weight.
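The weight-sorting mechanism of step S260 reduces to a sort plus a threshold filter; where the weights come from (an upstream scoring step) is not specified by the embodiment, so they are plain numbers here:

```python
def tags_to_display(weighted_tags: list, threshold: float) -> list:
    """Sort tags by weight and keep those above the display threshold.

    `weighted_tags` is a list of (tag, weight) pairs; the result is the
    tags to show the agent, highest weight first.
    """
    above = [(tag, w) for tag, w in weighted_tags if w > threshold]
    return [tag for tag, w in sorted(above, key=lambda p: p[1], reverse=True)]
```

The highest-weight-only variant mentioned above would be `tags_to_display(weighted_tags, threshold)[:1]`.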
Step S270 serves a different purpose. To let the agent review whether the call content was standard and complete, and to self-assess whether it achieved the expected customer satisfaction, the tag data generated during the call can be consolidated by clustering the tags included in the dynamic customer portrait according to conversation scene category; the agent can then evaluate and review the tags generated in each conversation scene, category by category, so as to raise his or her service level. Conversation scene categories include, for example, the subdivided scene segments that may occur in call-center traffic, such as the opening, the communication session, and the closing remarks, and each scene segment may be further divided along dimensions such as content completeness, service standardization, customer intention, service satisfaction, and complaint risk. In general, the purpose of defining conversation scene categories is to tally for the agent the tags of each stage of a completed call. Accordingly, step S270 may be triggered by the hang-up event of the call.
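The per-scene clustering triggered at hang-up can be sketched as a simple grouping operation. Again, this is an illustrative assumption about the data shapes, not the patented implementation.

```python
from collections import defaultdict

def cluster_by_scene(tagged_events):
    """Group the tags produced during a call by conversation scene
    category, e.g. when the call's hang-up event fires.

    tagged_events: iterable of (scene_category, tag) pairs, in the
                   order the tags were generated during the call
    returns: dict mapping scene_category -> list of tags in order
    """
    clusters = defaultdict(list)
    for scene, tag in tagged_events:
        clusters[scene].append(tag)
    return dict(clusters)
```

Each resulting cluster corresponds to one stage of the call (opening, communication session, closing remarks, ...), which the agent can then review stage by stage.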
Whether in step S250 shown in fig. 2 or fig. 4, or in step S251 shown in fig. 3, the step of pulling the call text message from the message queue may be performed automatically and continuously. Preferably, one or more logics/devices configured to subscribe to the call text messages are programmed to perform this pulling step: the logics/devices continuously access the message queue to determine whether it contains a call text message, and as soon as one appears in the message queue they pull it in real time. When the message queue contains multiple call text messages, they are typically arranged in the chronological order of their generation. It should be noted that the logic is, for example, a running instance of computer code that can carry out the pull operation, such as a virtual server, service logic, or a process or thread. This pull mechanism ensures that no call text message in the message queue is missed, so that all call text messages across the call center can be distributed to computing resources for dialogue analysis.
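A minimal sketch of such a subscriber loop, using Python's standard `queue` module as a stand-in for the message queue, might look like the following. The function and parameter names are illustrative assumptions; a production system would more likely use a message broker client than an in-process queue.

```python
import queue
import threading

def pull_call_texts(message_queue, handle, stop_event):
    """Continuously pull call-text messages from the queue and hand each
    one to the dialogue-analysis stage. With a single consumer, messages
    are processed in the chronological order in which they were enqueued."""
    while not stop_event.is_set():
        try:
            msg = message_queue.get(timeout=0.1)  # block briefly, then recheck stop flag
        except queue.Empty:
            continue
        handle(msg)  # e.g. separate the voice text and run natural language analysis
        message_queue.task_done()
```

Because `get` blocks only briefly before rechecking the stop flag, the loop can be shut down cleanly while still pulling each message as soon as it appears.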
To facilitate the agent's viewing of the initial customer portrait and the dynamic customer portrait, the agent is typically provided with a client configured to subscribe to the voice text and the portrait tags and to present the subscribed results promptly on a computer visual interface device. In an alternative embodiment, the messages generated from the voice text and the portrait tags may be stored and managed by a message center.
It is noted that while the operations of the method of the present invention are depicted in the drawings in a particular order, this does not require or imply that these operations must be performed in that particular order, or that all of the illustrated operations must be performed, to achieve desirable results. Rather, the steps depicted in the flowcharts may be executed in a different order. Additionally or alternatively, certain steps may be omitted, multiple steps may be combined into a single step, and/or one step may be broken down into multiple steps.
Accordingly, one or more computer-readable media storing computer-executable instructions that, when used by one or more computer devices, cause the one or more computer devices to perform a method of providing a dynamic customer representation of a call object to an agent as described above, such as the method of providing a dynamic customer representation of a call object to an agent illustrated in FIG. 1, are also disclosed. The computer readable media may be any available media that can be accessed by the computer device and includes both volatile and nonvolatile media, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer-readable media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, Digital Versatile Disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computing device. Combinations of any of the above should also be included within the scope of computer readable media.
The portions of the method of providing a dynamic customer portrait of a call partner provided by the present invention that involve software logic may be implemented using programmable logic devices, or as a computer program product that causes a computer to perform the method. The computer program product includes a computer-readable storage medium having computer program logic or code portions embodied therein for performing the steps of those software-logic portions described above. The computer-readable storage medium may be a built-in medium installed in the computer or a removable medium detachable from the computer main body (e.g., a hot-pluggable storage device). The built-in medium includes, but is not limited to, rewritable non-volatile memories such as RAM, ROM, and hard disks. The removable media include, but are not limited to: optical storage media (e.g., CD-ROMs and DVDs), magneto-optical storage media (e.g., MOs), magnetic storage media (e.g., magnetic tapes or removable hard disks), media with a built-in rewritable non-volatile memory (e.g., memory cards), and media with a built-in ROM (e.g., ROM cartridges).
Those skilled in the art will appreciate that any computer system having suitable programming means will be capable of executing the steps of the method of the invention as embodied in a computer program product. Although most of the specific embodiments described in this specification focus on software routines, alternative embodiments for implementing the methods provided by the present invention in hardware are also within the scope of the invention as claimed.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are, therefore, to be considered as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, it will be obvious that the term "comprising" does not exclude other elements, units or steps, and the singular does not exclude the plural. A plurality of components, units or means recited in the claims may also be implemented by one component, unit or means in software or hardware.
The method provided by the present invention for providing a dynamic customer portrait of a call partner to an agent can update the initial customer portrait according to the real-time voice stream between the agent and the call partner to obtain the dynamic customer portrait, so that the manner in which the dynamic customer portrait is generated is closely tied to the content of the agent's call.
While the invention has been described with reference to a preferred embodiment, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention.

Claims (13)

1. A method of providing a dynamic client representation of a call partner to an agent, the method comprising:
searching for an initial customer portrait corresponding to a telephone number according to the telephone number of a call partner called or connected by an agent, and pushing the initial customer portrait to the agent;
and updating the initial customer portrait according to real-time voice stream data between the agent and the call partner so as to obtain a dynamic customer portrait.
2. A method of providing a dynamic client representation of a call partner to an agent as claimed in claim 1, wherein said step of updating said initial client representation to obtain a dynamic client representation based on real-time speech stream data between said agent and said call partner comprises:
collecting the real-time voice stream data;
performing text transcription processing on the real-time voice stream data to obtain a voice text;
adding a message identifier for describing call attributes of the real-time voice stream data to the voice text so as to package the voice text into a call text message;
adding the call text message into a message queue;
and pulling the call text message from the message queue to perform natural language analysis to obtain a recognition result, searching a plurality of portrait tags corresponding to the recognition result, and updating the initial customer portrait according to the portrait tags to obtain the dynamic customer portrait.
3. A method of providing a dynamic client representation of a call partner to an agent as claimed in claim 1 or 2, wherein the initial client representation comprises:
at least one base attribute tag determined from the telephone number; and/or
and at least one history tag having a corresponding mapping relation with the telephone number.
4. A method of providing a dynamic client representation of a call partner to an agent as claimed in claim 2, wherein the message identification comprises:
an identity and a timestamp;
the identity is used for defining identity information and organization structure attribution information of the agent.
5. A method of providing a dynamic client representation of a call partner to an agent, as claimed in claim 4, wherein:
the identity information comprises a customer service identifier corresponding to the agent;
the organization structure attribution information comprises an enterprise identifier and a department identifier corresponding to the agent.
6. A method of providing a dynamic client representation of a call partner to an agent as claimed in claim 2, wherein said step of pulling said call text message from said message queue for natural language analysis to obtain a recognition result comprises:
pulling the call text message from the message queue;
separating the voice text from the call text message;
calling a dialogue analysis model to carry out dialogue analysis on the voice text;
and determining the recognition result according to the dialogue analysis.
7. A method of providing a dynamic client representation of a call partner to an agent as claimed in claim 6, wherein the dialogue analysis comprises:
any one or combination of call record standard format preprocessing, role analysis, word segmentation, part of speech tagging, entity identification, dependency analysis, entity relation extraction, emotion analysis, automatic classification, text similarity calculation and automatic summarization.
8. A method of providing a dynamic client representation of a call partner to an agent as claimed in claim 2 or 6, wherein the recognition result comprises:
and any one or combination of a dialog scene, a client intention, a theme and a keyword, and a business entity corresponding to the voice text.
9. A method of providing a dynamic client representation of a call partner to an agent, as claimed in claim 2, the method further comprising:
performing weight sorting on a plurality of tags contained in the dynamic customer representation;
preferentially displaying the labels with the weight greater than a predetermined threshold.
10. A method of providing a dynamic client representation of a call partner to an agent, as claimed in claim 2, wherein:
the message queue is stored and managed by a message center.
11. A method of providing a dynamic client representation of a call partner to an agent, as claimed in claim 2, wherein:
the step of pulling the conversational text message from the message queue is performed by one or more logics/devices arranged to subscribe to the conversational text message.
12. A method of providing a dynamic client representation of a call partner to an agent, as claimed in claim 2, the method further comprising:
and clustering a plurality of labels contained in the dynamic customer images according to the conversation scene categories.
13. One or more computer-readable media storing computer-executable instructions that, when used by one or more computer devices, cause the one or more computer devices to perform the method of providing a dynamic client representation of a call partner to an agent of any of claims 1 to 12.
CN202010413039.1A 2020-05-15 Method for providing dynamic customer portraits of conversation objects to agents Active CN111640436B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010413039.1A CN111640436B (en) 2020-05-15 Method for providing dynamic customer portraits of conversation objects to agents

Publications (2)

Publication Number Publication Date
CN111640436A true CN111640436A (en) 2020-09-08
CN111640436B CN111640436B (en) 2024-04-19



Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7039165B1 (en) * 1999-09-13 2006-05-02 Microstrategy Incorporated System and method for personalizing an interactive voice broadcast of a voice service based on automatic number identification
CN101150419A (en) * 2007-11-12 2008-03-26 中国电信股份有限公司 A new generation call center system and automatic service realization method
CN101478613A (en) * 2009-02-03 2009-07-08 中国电信股份有限公司 Multi-language voice recognition method and system based on soft queuing call center
US20130030854A1 (en) * 2011-07-29 2013-01-31 Avaya Inc. Method and system for managing contacts in a contact center
CN106503015A (en) * 2015-09-07 2017-03-15 国家计算机网络与信息安全管理中心 A kind of method for building user's portrait
CN107864301A (en) * 2017-10-26 2018-03-30 平安科技(深圳)有限公司 Client's label management method, system, computer equipment and storage medium
CN109684330A (en) * 2018-12-17 2019-04-26 深圳市华云中盛科技有限公司 User's portrait base construction method, device, computer equipment and storage medium
CN110326041A (en) * 2017-02-14 2019-10-11 微软技术许可有限责任公司 Natural language interaction for intelligent assistant
CN110908883A (en) * 2019-11-15 2020-03-24 江苏满运软件科技有限公司 User portrait data monitoring method, system, equipment and storage medium
CN111026843A (en) * 2019-12-02 2020-04-17 北京智乐瑟维科技有限公司 Artificial intelligent voice outbound method, system and storage medium
CN111028007A (en) * 2019-12-06 2020-04-17 中国银行股份有限公司 User portrait information prompting method, device and system

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112235470A (en) * 2020-09-16 2021-01-15 重庆锐云科技有限公司 Incoming call client follow-up method, device and equipment based on voice recognition
CN112214588A (en) * 2020-10-16 2021-01-12 平安国际智慧城市科技股份有限公司 Multi-intention recognition method and device, electronic equipment and storage medium
CN112214588B (en) * 2020-10-16 2024-04-02 深圳赛安特技术服务有限公司 Multi-intention recognition method, device, electronic equipment and storage medium
CN112509575A (en) * 2020-11-26 2021-03-16 上海济邦投资咨询有限公司 Financial consultation intelligent guiding system based on big data
CN112632989A (en) * 2020-12-29 2021-04-09 中国农业银行股份有限公司 Method, device and equipment for prompting risk information in contract text
CN112632989B (en) * 2020-12-29 2023-11-03 中国农业银行股份有限公司 Method, device and equipment for prompting risk information in contract text
CN112967721A (en) * 2021-02-03 2021-06-15 上海明略人工智能(集团)有限公司 Sales lead information identification method and system based on voice identification technology
CN113344415A (en) * 2021-06-23 2021-09-03 中国平安财产保险股份有限公司 Deep neural network-based service distribution method, device, equipment and medium
CN113794851A (en) * 2021-09-08 2021-12-14 平安信托有限责任公司 Video call processing method and device, electronic equipment and readable storage medium

Similar Documents

Publication Publication Date Title
US11289077B2 (en) Systems and methods for speech analytics and phrase spotting using phoneme sequences
US9477752B1 (en) Ontology administration and application to enhance communication data analytics
CN111641757A (en) Real-time quality inspection and auxiliary speech pushing method for seat call
US9582757B1 (en) Scalable curation system
CN104598445B (en) Automatically request-answering system and method
US8798255B2 (en) Methods and apparatus for deep interaction analysis
US20220141335A1 (en) Partial automation of text chat conversations
CN108520046B (en) Method and device for searching chat records
US10860566B1 (en) Themes surfacing for communication data analysis
CN111639484A (en) Method for analyzing seat call content
CN111966689B (en) Application knowledge base construction method and device
WO2022267174A1 (en) Script text generating method and apparatus, computer device, and storage medium
US11361759B2 (en) Methods and systems for automatic generation and convergence of keywords and/or keyphrases from a media
EP4352630A1 (en) Reducing biases of generative language models
JP6208794B2 (en) Conversation analyzer, method and computer program
CN111640436A (en) Method for providing a dynamic customer representation of a call partner to an agent
US20220309413A1 (en) Method and apparatus for automated workflow guidance to an agent in a call center environment
CN111640436B (en) Method for providing dynamic customer portraits of conversation objects to agents
CN113297365B (en) User intention judging method, device, equipment and storage medium
CN112836517A (en) Method for processing mining risk signal based on natural language
CN111916110A (en) Voice quality inspection method and device
CN113094471A (en) Interactive data processing method and device
CN110879868A (en) Consultant scheme generation method, device, system, electronic equipment and medium
CN110929005A (en) Emotion analysis-based task follow-up method, device, equipment and storage medium
US11947872B1 (en) Natural language processing platform for automated event analysis, translation, and transcription verification

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant