CN111640436B - Method for providing dynamic customer portraits of conversation objects to agents - Google Patents

Method for providing dynamic customer portraits of conversation objects to agents

Info

Publication number
CN111640436B
CN111640436B (application CN202010413039.1A)
Authority
CN
China
Prior art keywords
conversation
call
customer
agent
text
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010413039.1A
Other languages
Chinese (zh)
Other versions
CN111640436A (en)
Inventor
许杰
马传峰
孔卫东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Qingniu Technology Co ltd
Original Assignee
Beijing Qingniu Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Qingniu Technology Co ltd filed Critical Beijing Qingniu Technology Co ltd
Priority to CN202010413039.1A priority Critical patent/CN111640436B/en
Publication of CN111640436A publication Critical patent/CN111640436A/en
Application granted granted Critical
Publication of CN111640436B publication Critical patent/CN111640436B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/20Natural language analysis
    • G06F40/279Recognition of textual entities
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M3/00Automatic or semi-automatic exchanges
    • H04M3/42Systems providing special services or facilities to subscribers
    • H04M3/50Centralised arrangements for answering calls; Centralised arrangements for recording messages for absent or busy subscribers ; Centralised arrangements for recording messages
    • H04M3/51Centralised call answering arrangements requiring operator intervention, e.g. call or contact centers for telemarketing
    • H04M3/5166Centralised call answering arrangements requiring operator intervention, e.g. call or contact centers for telemarketing in combination with interactive voice response systems or voice portals, e.g. as front-ends

Abstract

The invention provides a method for providing a dynamic customer portrait of a call object to an agent, comprising the following steps: searching for an initial customer portrait corresponding to a telephone number according to the telephone number of a call object called or answered by an agent, and pushing the initial customer portrait to the agent; and updating the initial customer portrait according to the real-time voice stream data between the agent and the call object to obtain a dynamic customer portrait. In addition, the invention also provides a corresponding computer medium. The invention can ensure the update rate and accuracy of the customer portrait during the call, which helps to improve the agent's communication efficiency and customer satisfaction.

Description

Method for providing dynamic customer portraits of conversation objects to agents
Technical Field
The present invention relates to the field of call centers and user portraits, and more particularly to a method for providing a dynamic customer portrait of a call object to an agent.
Background
At present, the call center has become an important channel for enterprises to provide comprehensive online service information. Agents of a call center conduct outbound or inbound telephone traffic services according to the needs of the enterprise, handling services such as customer opinion feedback and consultation, or carrying out market research, telephone sales, after-sales follow-up and the like for the enterprise's products.
In order to obtain a better communication effect, an agent needs to understand the customer's needs and basic situation before answering or dialing a customer call, so that the agent can address the customer's appeal in a prepared and targeted manner and avoid the customer hanging up because of failed communication. Based on this expectation, some technical solutions combine data mining technology with call center services: the customer's available background data is used to generate a customer portrait, through which the agent can learn the customer's needs and basic situation before or during the call, improving communication quality and efficiency. However, this approach still has limitations. Customer portraits in the prior art are usually generated from the customer background data available before the call, and only reflect the customer's historical needs and basic situation prior to the current call. Such portraits are static relative to the current call; they can neither be updated as the current call progresses nor reflect the customer's latest needs within it, so whether a better communication effect can be achieved still depends on the agent's service level and comprehension ability. It can be seen that the way customer portraits are provided in the prior art is not fully associated with the content of the current call, and therefore suffers from defects such as single-source information and lagging updates relative to the specific content of the current call.
Disclosure of Invention
In order to overcome the above-mentioned drawbacks of the prior art, the present invention provides a method for providing a dynamic customer representation of a call object to an agent, the method comprising:
Searching an initial customer portrait corresponding to a telephone number according to the telephone number of a call object called out or accessed by an agent, and pushing the initial customer portrait to the agent;
And updating the initial customer portrait according to the real-time voice stream data between the agent and the call object so as to obtain a dynamic customer portrait.
According to one aspect of the invention, the step of updating the initial customer representation based on real-time voice stream data between the agent and the call object to obtain a dynamic customer representation comprises: collecting the real-time voice stream data; performing text transcription processing on the real-time voice stream data to obtain voice text; attaching a message identifier for describing the call attribute of the real-time voice stream data to the voice text so as to encapsulate the voice text into a call text message; adding the call text message into a message queue; and pulling the call text message from the message queue for natural language analysis to obtain a recognition result, searching a plurality of portrait labels corresponding to the recognition result, and updating the initial customer portrait according to the portrait labels to obtain the dynamic customer portrait.
According to another aspect of the invention, the method wherein the initial customer representation comprises: at least one basic attribute tag determined from the telephone number; and/or at least one history tag having a corresponding mapping relationship with the telephone number.
According to another aspect of the invention, the message identification in the method comprises: an identity identifier and a timestamp; the identity identifier is used to define the identity information and organization structure attribution information of the agent.
According to another aspect of the present invention, in the method, the identity information includes a customer service identifier corresponding to the agent; the organization structure attribution information comprises enterprise identifications and department identifications corresponding to the agents.
According to another aspect of the present invention, the step of pulling the call text message from the message queue for natural language analysis to obtain the recognition result comprises: pulling the call text message from the message queue; separating the voice text from the call text message; calling a dialogue analysis model to perform dialogue analysis on the voice text; and determining the recognition result according to the dialogue analysis.
According to another aspect of the invention, the dialogue analysis in the method comprises any one or a combination of: preprocessing into a standard call-record format, role analysis, word segmentation, part-of-speech tagging, entity recognition, dependency analysis, entity relation extraction, sentiment analysis, automatic classification, text similarity calculation, and automatic summarization.
According to another aspect of the invention, the recognition result in the method comprises any one or a combination of: dialogue scenes, customer intentions, topics, keywords and business entities corresponding to the voice text.
According to another aspect of the invention, the method further comprises: weight-sorting a plurality of tags contained in the dynamic customer portrait; tags whose weights are greater than a predetermined threshold are preferentially displayed.
According to another aspect of the invention, the message queues in the method are stored and managed by a message center.
According to another aspect of the invention, the step of pulling the call text message from the message queue in the method is performed by one or more logical bodies/devices arranged to subscribe to the call text message.
According to another aspect of the invention, the method further comprises: and clustering a plurality of labels contained in the dynamic customer portrait according to the dialogue scene category.
Accordingly, the present invention also provides one or more computer-readable media storing computer-executable instructions that, when used by one or more computer devices, cause the one or more computer devices to perform a method of providing a dynamic customer portrait of a call object to an agent as described hereinbefore.
The method for providing a dynamic customer portrait of a call object to an agent can update the initial customer portrait according to the real-time voice stream between the agent and the call object so as to obtain the dynamic customer portrait; the way the dynamic customer portrait is generated is thus closely related to the agent's current call content.
Drawings
Other features, objects and advantages of the present invention will become more apparent upon reading of the detailed description of non-limiting embodiments, made with reference to the accompanying drawings in which:
FIG. 1 is a flow diagram of one embodiment of a method for providing a dynamic customer portrait of a call object to an agent in accordance with the present invention;
FIG. 2 is a flow chart of an alternative embodiment of step S200 shown in FIG. 1;
FIG. 3 is a flow chart of an alternative embodiment of step S250 shown in FIG. 2;
fig. 4 is a flow chart of another alternative embodiment of step S200 shown in fig. 1.
The same or similar reference numbers in the drawings refer to the same or similar parts.
Detailed Description
For a better understanding and explanation of the present invention, reference will be made to the following detailed description of the invention taken in conjunction with the accompanying drawings. The invention is not limited to these specific embodiments only. On the contrary, the invention is intended to cover modifications and equivalent arrangements included within the scope of the appended claims.
It should be noted that numerous specific details are set forth in the following detailed description. It will be understood by those skilled in the art that the present invention may be practiced without these specific details. In the following description of various embodiments, structures and components well known in the art are not described in detail, so as not to obscure the salient features of the present invention.
The present invention provides a method for providing a dynamic customer portrait of a call object to an agent; please refer to fig. 1, which is a flow chart of a specific embodiment of a method for providing a dynamic customer portrait of a call object to an agent according to the present invention. The method comprises:
step S100, searching an initial customer portrait corresponding to a telephone number according to the telephone number of a call object called out or accessed by an agent, and pushing the initial customer portrait to the agent;
Step S200, updating the initial customer portrait according to the real-time voice stream data between the agent and the call object to obtain a dynamic customer portrait.
Specifically, whether an agent of the call center is making an outbound call or answering an inbound call, it is desirable for the agent to obtain the customer portrait of the call object with whom the agent is talking. Obtaining the customer portrait requires at least a search factor marking the source of the call object; conveniently, the telephone number of the call object is visible to the call center whenever the agent dials out or a call is connected, so the telephone number can naturally serve as the search factor. Therefore, in step S100, the initial customer portrait corresponding to the telephone number is retrieved according to the telephone number and pushed to the agent. The purpose of providing the initial customer portrait to the agent is to let the agent recognize the basic information of the call object, in preparation for subsequent effective communication. Because the telephone number is used as the search factor when searching for the initial customer portrait, the initial customer portrait often only contains information obtained by data mining around the telephone number. Typically, the initial customer portrait comprises: at least one basic attribute tag determined from the telephone number, and/or at least one history tag having a corresponding mapping relationship with the telephone number.
The basic attribute tags are, for example, the owner name of the telephone number and related geographic information, where the related geographic information includes the home location of the telephone number, its active location, population information of the active location, economic statistics of the active location, and the like. The history tags are, for example, user data registered with the telephone number and historical call summaries: the registered user data includes historical operations, consumption records and the like on a service platform registered using the telephone number, while a historical call summary includes summary data stripped from the historical call records generated between the telephone number and the call center, and is generally used to mark the types of traffic (such as complaints, consultations, purchases and the like) generated in the past by the customer corresponding to the telephone number. Those skilled in the art will appreciate that any legal data mining result available on the basis of the telephone number may be included in the initial customer portrait, as long as it helps enhance the agent's awareness of the call object.
It will be appreciated by those skilled in the art that the initial customer portrait provided to the agent in step S100 is static and lagging with respect to the call currently being conducted by the agent, and does not fundamentally reflect the willingness and purpose of the call object in the call. Enabling the agent to obtain a dynamically updated, real-time customer portrait during the call is therefore an important object of performing step S200, namely updating the initial customer portrait based on real-time voice stream data between the agent and the call object to obtain a dynamic customer portrait. Executing step S200 makes use of the call content between the agent and the call object: the real-time voice stream data is closely related to the willingness and purpose of the call object, so the dynamic customer portrait generated from it naturally matches that willingness and purpose. As real-time voice stream data is continuously generated, the dynamic customer portrait is correspondingly updated throughout, which guarantees its dynamic and real-time nature.
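The lookup in step S100 amounts to keying portrait data by telephone number. A minimal sketch in Python follows; the storage backend, field names, and sample data are illustrative assumptions rather than details from the patent:

```python
# Toy portrait store keyed by telephone number. In a real system this
# would be a database or cache; the fields here are assumptions.
PORTRAIT_DB = {
    "13800000001": {
        "basic_attributes": {"owner_name": "Zhang San", "home_location": "Beijing"},
        "history_tags": ["complaint", "after-sales consultation"],
    },
}

def find_initial_portrait(phone_number):
    """Step S100: return the initial customer portrait for the number,
    or an empty portrait when the number has never been seen."""
    empty = {"basic_attributes": {}, "history_tags": []}
    return PORTRAIT_DB.get(phone_number, empty)
```

The empty-portrait fallback is a design choice for the sketch: an unknown number still yields a well-formed (if uninformative) portrait to push to the agent.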
Referring to fig. 2, fig. 2 is a schematic flow chart of an alternative embodiment of step S200 shown in fig. 1, in which step S200 further includes the following steps:
Step S210, collecting the real-time voice stream data;
Step S220, performing text transcription processing on the real-time voice stream data to obtain voice text;
Step S230, attaching a message identifier describing the call attributes of the real-time voice stream data to the voice text, so as to encapsulate the voice text into a call text message;
Step S240, adding the call text message to a message queue;
Step S250, pulling the call text message from the message queue for natural language analysis to obtain a recognition result, searching for a plurality of portrait tags corresponding to the recognition result, and updating the initial customer portrait according to the plurality of portrait tags so as to obtain the dynamic customer portrait.
Specifically, when the call between the agent and the call object is successfully connected, a real-time voice stream is generated by the verbal communication between the two parties, and the communication subject carried by the real-time voice stream often revolves around the service the agent provides to the customer, such as telephone sales, after-sales return visits, market research, or answering questions. Step S210 is performed to collect the real-time voice stream data corresponding to the real-time voice stream; more specifically, the real-time voice stream data is a digital signal recording the real-time voice stream, typically converted from the analog signal carrying the agent's or the customer's voice.
The subject performing the text transcription in step S220 is typically a trained speech recognition engine that provides a recognition interface oriented to call centers; the call center invokes this interface to convert the spoken words in the real-time voice stream data into character-type voice text, which is further processed in step S230. It should be noted that, since the real-time voice stream data is continuously generated as the agent talks, and the amount of data the speech recognition engine can process per unit time is limited, several pieces of real-time voice stream data may be generated during one complete call if necessary, or one piece of real-time voice stream data may be split into several data segments, in which case multiple instances of step S220 may be executed in parallel. In any event, the real-time voice stream data is ultimately converted into the voice text.
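The splitting of one piece of real-time voice stream data into segments can be sketched as simple chunking of the raw byte stream; the fixed chunk size and the byte-string representation of audio are assumptions made for illustration:

```python
def split_stream(pcm_bytes, chunk_size):
    """Split raw voice-stream bytes into fixed-size segments so that
    several transcription workers can run step S220 in parallel."""
    return [pcm_bytes[i:i + chunk_size]
            for i in range(0, len(pcm_bytes), chunk_size)]
```

A real splitter would cut on silence or frame boundaries rather than at arbitrary byte offsets; the sketch only shows the fan-out structure.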
Those skilled in the art will appreciate that a large number of concurrent agent calls may exist at the call center at the same time. To ensure that the real-time voice generated by each agent call is allocated appropriate computing resources, the voice text is further processed in step S230: the purpose of step S230 is to encapsulate the voice text into a call text message carrying a message identifier describing the call attributes of the real-time voice stream data. Next, in step S240, the call text message is added to the message queue. Since the call center may carry multiple agent calls concurrently, it will be understood that the message queue may contain call text messages corresponding to the real-time voice of multiple different agent calls. These call text messages are distinguished by their message identifiers, and the generation time and source of each call text message can be verified and traced back through its message identifier.
For any one of the call text messages, the message identifier typically includes an identity identifier and a timestamp, where the identity identifier defines the identity information and organization structure attribution information of the agent: the identity information includes, for example, a customer service identifier corresponding to the agent, and the organization structure attribution information includes, for example, an enterprise identifier and a department identifier corresponding to the agent. Preferably, a complete message identifier may be formed as "enterprise identifier + department identifier + customer service identifier + timestamp", which ensures that the message identifier correctly describes the time order of the call text message in the message queue and guarantees the global uniqueness of the call text message in the message queue.
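The "enterprise identifier + department identifier + customer service identifier + timestamp" composition described above can be sketched as follows; the separator character and the millisecond timestamp resolution are assumptions:

```python
import time

def build_message_id(enterprise_id, department_id, service_id, ts=None):
    """Compose a message identifier that both orders call text messages
    in time and is globally unique within the message queue."""
    if ts is None:
        ts = time.time()
    return "{}-{}-{}-{}".format(enterprise_id, department_id,
                                service_id, int(ts * 1000))
```

Because the customer-service identifier is unique per agent and an agent produces at most one message per timestamp tick, the concatenation is unique queue-wide while remaining sortable by its trailing timestamp.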
Adopting a message queue copes with the application scenario in which a large number of concurrent agent calls exist in the call center, and coordinates in an orderly manner the speech recognition resources occupied by the concurrent agent calls, so as to ensure that the real-time voice stream data generated by each agent can be analyzed and processed by natural language processing. Preferably, the message queue is stored and managed by a message center, which may be sized according to the scale of the call center; when the call center is large, the message center is often implemented by a high-performance cluster.
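Steps S230 and S240 — attaching the message identifier and adding the call text message to the queue — can be sketched with the Python standard-library queue standing in for the message center; the dictionary message layout is an assumption:

```python
import queue

call_text_queue = queue.Queue()  # stands in for the message-center queue

def encapsulate_and_enqueue(voice_text, message_id, q=call_text_queue):
    """Step S230: wrap the transcribed voice text together with its
    message identifier into a call text message; step S240: add the
    message to the queue."""
    msg = {"id": message_id, "text": voice_text}
    q.put(msg)
    return msg
```

A production message center (e.g. a broker cluster) would add persistence and multi-consumer delivery; the in-process `Queue` only models the FIFO ordering the text relies on.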
Steps S210 to S240 convert the real-time voice stream data into a recognizable data form and establish a queuing mechanism for the plurality of call text messages that may exist. Step S250 is then performed to recognize a single call text message and to further process the recognition result, as in the subsequent sub-steps of step S250: searching for a plurality of portrait tags corresponding to the recognition result, and updating the initial customer portrait according to the plurality of portrait tags so as to obtain the dynamic customer portrait.
Referring to fig. 3, fig. 3 is a schematic flow chart of an alternative embodiment of step S250 shown in fig. 2, and in particular, an alternative embodiment of step S250, in which the step of pulling the call text message from the message queue for performing natural language analysis to obtain the recognition result, as shown in fig. 3, includes:
Step S251, pulling the call text message from the message queue;
Step S252, separating the voice text from the call text message;
Step S253, calling a dialogue analysis model to perform dialogue analysis on the voice text;
Step S254, determining the recognition result according to the dialogue analysis.
Specifically, when an unprocessed call text message exists in the message queue, the call text message may be fetched by performing step S251. Step S252 is performed to separate the voice text carrying the dialogue information from the call text message; preferably, before step S252, the call text message may be traced and verified according to its message identifier to ensure its validity.
Step S253 is performed next, i.e. a dialogue analysis model is called to perform dialogue analysis on the voice text. The dialogue analysis typically includes any one or a combination of: preprocessing into a standard call-record format, role analysis, word segmentation, part-of-speech tagging, entity recognition, dependency analysis, entity relation extraction, sentiment analysis, automatic classification, text similarity calculation, and automatic summarization, so as to structure the voice text and extract from it the usable information conveyed in the dialogue describing the actual progress between the agent and the call object. To implement the dialogue analysis, the dialogue analysis model may include a trained natural language processing neural network which can determine, from the voice text, the specific content of each dimension of the dialogue it carries. For example, the preprocessing described above may arrange the voice text into a recognizable standard call-record format; the role analysis may determine the roles of the several speakers in the voice text (i.e. the agent and the call object); the word segmentation may divide continuous sentences in the voice text into words consistent with the dialogue logic, typically by means of a probabilistic model; the part-of-speech tagging may assign parts of speech to the words; the entity recognition may analyze the named entities corresponding to the words (i.e. which type of named entity a word belongs to, such as a name, time, or number); the dependency analysis may further improve recognition accuracy using the context information of the sentences; and the entity relation extraction may extract the relations between the recognized entities. The specific operations included in the dialogue analysis are related to the accuracy required of it, and the practitioner of step S253 may reasonably select which specific operations to include based on a comprehensive consideration of algorithmic time complexity, computing resource load, and accuracy requirements.
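Steps S251 to S254 for one message can be sketched end to end; the keyword-matching "model" below merely stands in for the trained dialogue analysis model, and its cue words are invented for illustration:

```python
def toy_dialogue_model(text):
    """Stand-in for the trained NLP model: crude keyword matching
    instead of real segmentation, tagging and entity recognition."""
    result = {"keywords": [], "customer_intent": None}
    if "refund" in text:
        result["keywords"].append("refund")
        result["customer_intent"] = "complaint"
    return result

def analyze_call_text_message(msg, model=toy_dialogue_model):
    """S251 has already pulled `msg` from the queue; S252 separates the
    voice text, S253 runs the dialogue analysis, S254 returns the result."""
    voice_text = msg["text"]          # S252: separate the voice text
    return model(voice_text)          # S253/S254: analyse, produce result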
Through the above dialogue analysis, the natural language meaning of the dialogue carried by the voice text can be determined, and the function of determining the recognition result from the dialogue analysis is then implemented in step S254. Considering that the purpose of performing the recognition is to find a plurality of portrait tags corresponding to the recognition result, the recognition result preferably includes any one or a combination of: dialogue scenes, customer intentions, topics, keywords and business entities corresponding to the voice text. One skilled in the art will understand that if the dialogue scene, customer intention and business entity are to be obtained from the dialogue analysis, the textual meaning of the voice text obtained by the dialogue analysis usually needs to be matched against the corresponding scene models, intention models and business entity models in a database, so as to determine the dialogue scene and business entity type to which the voice text currently belongs and the customer intention it conveys; the model samples in the database may be preset by the implementer of the specific embodiment or output by another trained neural network. The dialogue scenes are scenes abstracted and summarized from actual verbal communication, such as product introduction, objection handling and transaction confirmation; the customer intentions are, for example, complaint, consultation and business handling, likewise abstracted from actual verbal communication; and the business entity types are the businesses related to the purpose of the call, such as pre-sales consultation, after-sales service, telephone sales and advertising promotion. In further embodiments, to cope with more complex dialogue content between agents and call objects, the recognition result may further include abstract indicators such as emotion and speech rate.
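The matching of analysed text against preset scene models can be sketched with keyword-cue models; the cue lists, the scene names, and the first-match policy are all illustrative assumptions:

```python
# Toy "scene models": each dialogue scene is represented by cue phrases.
SCENE_MODELS = {
    "objection handling": ["too expensive", "not interested"],
    "transaction confirmation": ["confirm the order", "place the order"],
}

def match_dialogue_scene(text):
    """Return the first scene whose cues appear in the text,
    or 'unknown' when nothing matches."""
    for scene, cues in SCENE_MODELS.items():
        if any(cue in text for cue in cues):
            return scene
    return "unknown"
```

A database of model samples or a second neural network, as the text suggests, would replace the cue lists; the lookup-and-classify shape stays the same.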
With continued reference to fig. 2, since the final purpose after determining the recognition result in step S250 is to find the plurality of portrait tags corresponding to it, any abstract indicator sufficient to express the communication intention and communication effect of the two parties in the currently processed real-time voice stream data may be included in the recognition result; all such indicators are obtained by relying on the dialogue analysis.
Specifically, the portrait tags may be provided by a preset tag library which has a preset mapping relationship with the recognition result: inputting the recognition result into the tag library yields the plurality of corresponding portrait tags. In another embodiment, the portrait tags may be generated by directly extracting text from the recognition result. In general, a portrait tag serves to help the agent deepen their understanding of the call object's information or intention, and any data achieving this purpose may be included in a portrait tag.
The operation of updating the initial customer portrait based on the plurality of portrait tags in step S250 is, for example: pushing the plurality of portrait tags to the agent, and further selecting some or all of them to be added to the initial customer portrait, or selecting some or all of them to replace tags already existing in the initial customer portrait; the updated initial customer portrait is the dynamic customer portrait. Consulting the dynamic customer portrait helps the agent gain a deeper understanding of the call object's information or intention and satisfy the customer's communication needs quickly, avoiding communication failure caused by an insufficient service level of the agent, and finally achieving the effect of improving communication quality and customer satisfaction.
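The tag-library mapping and the portrait update can be sketched together; the library contents and the merge policy (append new tags while de-duplicating) are assumptions:

```python
# Toy tag library mapping a recognised customer intent to portrait tags.
TAG_LIBRARY = {
    "complaint": ["dissatisfied customer", "churn risk"],
    "purchase": ["buying intent"],
}

def update_portrait(current_tags, recognition_result):
    """Look up portrait tags for the recognised customer intent and
    merge them into the portrait, preserving order and dropping
    duplicates (dict.fromkeys keeps first occurrence)."""
    new_tags = TAG_LIBRARY.get(recognition_result.get("customer_intent"), [])
    return list(dict.fromkeys(list(current_tags) + new_tags))
```

The replace-existing-tags variant the text mentions would simply overwrite matching entries instead of appending; both fit the same lookup-then-merge shape.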
Referring to fig. 4, fig. 4 is a schematic flow chart of another alternative embodiment of step S200 shown in fig. 1, which is different from the embodiment shown in fig. 2 in that step S200 further includes the following steps in the alternative embodiment shown in fig. 4:
Step S260, weight-sorting the tags contained in the dynamic customer portrait, and preferentially displaying the tags whose weights are greater than a predetermined threshold;
Step S270, clustering the plurality of labels contained in the dynamic customer portrait according to dialogue scene categories.
Specifically, step S260 copes with the scenario in which the dynamic customer portrait contains a large number of tags: for the agent, it is difficult to keep the conversation flowing while also reviewing every tag in the dynamic customer portrait, so a weight-sorting mechanism is introduced to preferentially display the tags whose weights are greater than a predetermined threshold, thereby highlighting the important parts of the dynamic customer portrait for the agent. It will be understood by those skilled in the art that step S260 can be implemented on the premise that the tags contained in the dynamic customer portrait have each been assigned a weight in advance. In another embodiment, step S260 may also be implemented so as to preferentially display the tag with the highest weight.
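Step S260's weight sorting and threshold display can be sketched directly; representing the weights as a tag-to-float mapping is an assumption:

```python
def tags_to_display(weighted_tags, threshold):
    """Sort tags by descending weight and keep those whose weight
    exceeds the predetermined threshold for priority display."""
    ordered = sorted(weighted_tags.items(), key=lambda kv: kv[1], reverse=True)
    return [tag for tag, weight in ordered if weight > threshold]
```

The highest-weight-only variant mentioned in the text is the special case of taking just the first element of the sorted list.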
To make it easy for the agent to review whether the overall call content is standard and complete, and to self-check whether the call achieved the expected customer satisfaction, the tag data generated during the call can be consolidated by clustering the tags contained in the dynamic customer portrait according to dialogue scene category. This allows the agent to evaluate and supervise the tags produced in each dialogue scene, category by category, with the aim of improving service quality; this is the purpose of step S270. The dialogue scene categories comprise the sub-scenes that may occur in call-center traffic service, such as the opening, the communication itself, and the closing remarks, and each sub-scene can be further divided along dimensions such as content completeness, service normativity, customer intention, service satisfaction, and complaint risk. In general, the purpose of defining dialogue scene categories is to tally, for the agent, the tags of each stage of the completed call. Accordingly, step S270 may be triggered by the hang-up event of the call.
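The per-scene clustering of step S270 is essentially a group-by over the tags; the following sketch uses hypothetical field names (`scene`, `tag`) and scene labels:

```python
from collections import defaultdict

def cluster_by_scene(labels):
    """Group dynamic-portrait tags by their dialogue scene category
    (opening, communication, closing, ...) for post-call review."""
    clusters = defaultdict(list)
    for label in labels:
        clusters[label["scene"]].append(label["tag"])
    return dict(clusters)

# hypothetical tags accumulated over one call, grouped on hang-up
tags = [{"scene": "opening", "tag": "greeting complete"},
        {"scene": "communication", "tag": "refund intent"},
        {"scene": "closing", "tag": "satisfaction confirmed"},
        {"scene": "communication", "tag": "complaint risk"}]
print(cluster_by_scene(tags))
```

A call-center system would run this on the hang-up event, as the paragraph notes, so the agent can review each stage of the finished call.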
Whether in step S250 shown in fig. 2 or fig. 4, or in step S251 shown in fig. 3, the step of pulling the call text message from the message queue may be performed automatically and sequentially. Preferably, one or more logic bodies/devices configured to subscribe to the call text messages are preset to perform this pulling step: they continuously poll the message queue to determine whether it contains a call text message, and pull it in real time as soon as one appears. When the message queue contains multiple call text messages, they are typically arranged in chronological order of generation. It should be noted that a logic body is, for example, a running instance of computer code capable of performing the pull operation, such as a virtual server, a piece of service logic, a process, or a thread. This pull mechanism ensures that no call text message in the message queue is skipped or lost, so that all call text messages in the call center are distributed to the computing resources that perform the dialogue analysis.
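A single-consumer pull loop of this kind can be sketched with the standard library; this is a minimal illustration (the names `pull_loop` and `analyze` are hypothetical), using a thread as the "logic body" and a FIFO queue so messages are consumed in order of generation:

```python
import queue
import threading

def pull_loop(message_queue, analyze, stop_event):
    """Continuously poll the queue; pull and analyze each call text
    message as soon as it appears, oldest first (FIFO order)."""
    while not stop_event.is_set():
        try:
            msg = message_queue.get(timeout=0.05)
        except queue.Empty:
            continue  # queue momentarily empty; keep polling
        analyze(msg)
        message_queue.task_done()

# demo: three call text messages enqueued in chronological order
q = queue.Queue()
results = []
stop = threading.Event()
worker = threading.Thread(target=pull_loop, args=(q, results.append, stop))
worker.start()
for text in ["msg-1", "msg-2", "msg-3"]:
    q.put(text)
q.join()          # block until every message has been pulled and analyzed
stop.set()
worker.join()
print(results)    # messages processed in order of generation
```

A production system would replace `queue.Queue` with a durable broker, but the subscribe-poll-pull shape is the same.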
To facilitate the agent's viewing of the initial customer portrait and the dynamic customer portrait, a client is typically provided on the agent side that is configured to subscribe to the voice text and the portrait tags and to present the subscribed results promptly on a visual computer interface. In an alternative embodiment, the various messages generated from the voice text and the portrait tags can be stored and managed uniformly by a message center.
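The agent-side subscription can be modeled as a minimal publish/subscribe hub; the class and topic names below (`MessageCenter`, `voice_text`, `portrait_tag`) are hypothetical illustrations, not terms from the patent:

```python
class MessageCenter:
    """Minimal publish/subscribe hub: the agent-side client subscribes
    to voice-text and portrait-tag topics and renders each update on
    its interface as soon as it is published."""
    def __init__(self):
        self._subscribers = {}

    def subscribe(self, topic, callback):
        self._subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        for callback in self._subscribers.get(topic, []):
            callback(message)

center = MessageCenter()
screen = []  # stands in for the agent's visual interface
center.subscribe("voice_text", lambda m: screen.append(("text", m)))
center.subscribe("portrait_tag", lambda m: screen.append(("tag", m)))
center.publish("voice_text", "I'd like a refund, please.")
center.publish("portrait_tag", "refund intent")
print(screen)
```

This also matches the alternative embodiment in which one message center stores and routes both kinds of message uniformly.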
It should be noted that although the operations of the method of the present invention are depicted in the drawings in a particular order, this does not require or imply that the operations must be performed in that particular order, or that all illustrated operations must be performed, to achieve desirable results. Rather, the steps depicted in the flowcharts may be executed in a different order. Additionally or alternatively, certain steps may be omitted, multiple steps may be combined into one step, and/or one step may be decomposed into multiple steps.
Accordingly, one or more computer-readable media storing computer-executable instructions that, when used by one or more computer devices, cause the one or more computer devices to perform a method of providing a dynamic customer representation of a conversation object to an agent as described previously, such as the method of providing a dynamic customer representation of a conversation object to an agent shown in FIG. 1. Computer readable media can be any available media that can be accessed by the computer device and includes both volatile and nonvolatile media, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer-readable media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computing device. Combinations of any of the above should also be included within the scope of computer readable media.
The portions of the method of providing a dynamic customer representation of a call object to an agent that relate to software logic may be implemented using programmable logic devices, or as a computer program product that causes a computer to perform the method as exemplified above. The computer program product comprises a computer-readable storage medium having computer program logic or code portions embodied therein for carrying out the steps of the above-described portions relating to software logic. The computer readable storage medium may be a built-in medium installed in a computer or a removable medium (e.g., a hot-pluggable storage device) detachable from the computer main body. The built-in medium includes, but is not limited to, rewritable nonvolatile memory such as RAM, ROM, and hard disk. The removable media includes, but is not limited to: optical storage media (e.g., CD-ROM and DVD), magneto-optical storage media (e.g., MO), magnetic storage media (e.g., magnetic tape or removable hard disk), media with built-in rewritable non-volatile memory (e.g., memory card), and media with built-in ROM (e.g., ROM cartridge).
It will be appreciated by those skilled in the art that any computer system having suitable programming means is capable of executing the steps of the method of the present invention embodied in a computer program product. Although most of the specific embodiments described in this specification focus on software programs, alternative embodiments that implement the methods provided by the present invention in hardware are also within the scope of the invention as claimed.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. The present embodiments are, therefore, to be considered as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, it is evident that the word "comprising" does not exclude other elements, units or steps, and that the singular does not exclude a plurality. A plurality of components, units or means recited in the claims can also be implemented by means of one component, unit or means in software or hardware.
The method for providing a dynamic customer portrait of a call object to an agent according to the present invention can update the initial customer portrait according to the real-time voice stream between the agent and the call object to obtain a dynamic customer portrait, so that the dynamic customer portrait is generated in close connection with the content of the agent's current call.
The foregoing disclosure is only illustrative of the preferred embodiments of the present invention and is not to be construed as limiting the scope of the invention, which is defined by the appended claims and their equivalents.

Claims (12)

1. A method of providing a dynamic customer representation of a conversation object to an agent, the method comprising:
Searching an initial customer portrait corresponding to a telephone number according to the telephone number of a call object called out or accessed by an agent, and pushing the initial customer portrait to the agent;
Updating the initial customer representation according to real-time voice stream data between the agent and the call object to obtain a dynamic customer representation, comprising the steps of: collecting the real-time voice stream data; performing text transcription processing on the real-time voice stream data to obtain voice text; attaching a message identifier for describing the call attribute of the real-time voice stream data to the voice text so as to encapsulate the voice text into a call text message; adding the call text message into a message queue; and pulling the call text message from the message queue for natural language analysis to obtain a recognition result, searching a plurality of portrait labels corresponding to the recognition result, and updating the initial customer portrait according to the portrait labels to obtain the dynamic customer portrait.
2. The method of providing a dynamic customer representation of a conversation object to an agent of claim 1 wherein the initial customer representation comprises:
at least one basic attribute tag determined from the telephone number; and/or
At least one history tag having a corresponding mapping relationship with the telephone number.
3. The method of providing a dynamic customer representation of a conversation object to an agent of claim 1 wherein the message identification comprises:
Identity and timestamp;
the identity mark is used for defining the identity information and the organization structure attribution information of the seat.
4. The method for providing a dynamic customer representation of a conversation object to an agent of claim 3 wherein:
The identity information comprises customer service identifiers corresponding to the agents;
the organization structure attribution information comprises enterprise identifications and department identifications corresponding to the agents.
5. The method of providing a dynamic customer representation of a conversation object to an agent of claim 1 wherein the step of pulling the conversation text message from the message queue for natural language analysis to obtain a recognition result comprises:
pulling the call text message from the message queue;
Separating the voice text from the call text message;
Calling a dialogue analysis model to perform dialogue analysis on the voice text;
and determining the identification result according to the dialogue analysis.
6. The method of providing a dynamic customer representation of a conversation object to an agent of claim 5 wherein the conversation analysis comprises:
any one or a combination of: call-record standard-format preprocessing, role analysis, word segmentation, part-of-speech tagging, entity recognition, dependency parsing, entity-relation extraction, sentiment analysis, automatic classification, text-similarity computation, and automatic summarization.
7. The method for providing a dynamic customer representation of a conversation object to an agent of claim 1 or 5 wherein the recognition result comprises:
Any one or combination of dialogue scenes, customer intentions, topics, keywords and business entities corresponding to the voice texts.
8. The method of providing a dynamic customer representation of a conversation object to an agent of claim 1 further comprising:
weight ordering a plurality of labels contained in the dynamic customer representation;
the tags having weights greater than a predetermined threshold are preferentially displayed.
9. The method for providing a dynamic customer representation of a conversation object to an agent of claim 1 wherein:
the message queues are stored and managed by a message center.
10. The method for providing a dynamic customer representation of a conversation object to an agent of claim 1 wherein:
the step of pulling the talk text message from the message queue is performed by one or more logic bodies/devices arranged to subscribe to the talk text message.
11. The method of providing a dynamic customer representation of a conversation object to an agent of claim 1 further comprising:
clustering a plurality of labels contained in the dynamic customer portrait according to the dialogue scene category.
12. One or more computer-readable media storing computer-executable instructions that, when used by one or more computer devices, cause the one or more computer devices to perform the method of providing a dynamic customer representation of a conversation object to an agent of any of claims 1 to 11.
CN202010413039.1A 2020-05-15 2020-05-15 Method for providing dynamic customer portraits of conversation objects to agents Active CN111640436B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010413039.1A CN111640436B (en) 2020-05-15 2020-05-15 Method for providing dynamic customer portraits of conversation objects to agents


Publications (2)

Publication Number Publication Date
CN111640436A CN111640436A (en) 2020-09-08
CN111640436B true CN111640436B (en) 2024-04-19

Family

ID=72332855

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010413039.1A Active CN111640436B (en) 2020-05-15 2020-05-15 Method for providing dynamic customer portraits of conversation objects to agents

Country Status (1)

Country Link
CN (1) CN111640436B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112235470B (en) * 2020-09-16 2021-11-23 重庆锐云科技有限公司 Incoming call client follow-up method, device and equipment based on voice recognition
CN112214588B (en) * 2020-10-16 2024-04-02 深圳赛安特技术服务有限公司 Multi-intention recognition method, device, electronic equipment and storage medium
CN112509575B (en) * 2020-11-26 2022-07-22 上海济邦投资咨询有限公司 Financial consultation intelligent guiding system based on big data
CN112632989B (en) * 2020-12-29 2023-11-03 中国农业银行股份有限公司 Method, device and equipment for prompting risk information in contract text
CN112967721A (en) * 2021-02-03 2021-06-15 上海明略人工智能(集团)有限公司 Sales lead information identification method and system based on voice identification technology
CN113344415A (en) * 2021-06-23 2021-09-03 中国平安财产保险股份有限公司 Deep neural network-based service distribution method, device, equipment and medium
CN113794851A (en) * 2021-09-08 2021-12-14 平安信托有限责任公司 Video call processing method and device, electronic equipment and readable storage medium

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7039165B1 (en) * 1999-09-13 2006-05-02 Microstrategy Incorporated System and method for personalizing an interactive voice broadcast of a voice service based on automatic number identification
CN101150419A (en) * 2007-11-12 2008-03-26 中国电信股份有限公司 A new generation call center system and automatic service realization method
CN101478613A (en) * 2009-02-03 2009-07-08 中国电信股份有限公司 Multi-language voice recognition method and system based on soft queuing call center
CN106503015A (en) * 2015-09-07 2017-03-15 国家计算机网络与信息安全管理中心 A kind of method for building user's portrait
CN107864301A (en) * 2017-10-26 2018-03-30 平安科技(深圳)有限公司 Client's label management method, system, computer equipment and storage medium
CN109684330A (en) * 2018-12-17 2019-04-26 深圳市华云中盛科技有限公司 User's portrait base construction method, device, computer equipment and storage medium
CN110326041A (en) * 2017-02-14 2019-10-11 微软技术许可有限责任公司 Natural language interaction for intelligent assistant
CN110908883A (en) * 2019-11-15 2020-03-24 江苏满运软件科技有限公司 User portrait data monitoring method, system, equipment and storage medium
CN111028007A (en) * 2019-12-06 2020-04-17 中国银行股份有限公司 User portrait information prompting method, device and system
CN111026843A (en) * 2019-12-02 2020-04-17 北京智乐瑟维科技有限公司 Artificial intelligent voice outbound method, system and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8923501B2 (en) * 2011-07-29 2014-12-30 Avaya Inc. Method and system for managing contacts in a contact center


Also Published As

Publication number Publication date
CN111640436A (en) 2020-09-08

Similar Documents

Publication Publication Date Title
CN111640436B (en) Method for providing dynamic customer portraits of conversation objects to agents
US9477752B1 (en) Ontology administration and application to enhance communication data analytics
CN111641757A (en) Real-time quality inspection and auxiliary speech pushing method for seat call
US20160019885A1 (en) Word cloud display
US20160019882A1 (en) Systems and methods for speech analytics and phrase spotting using phoneme sequences
US11258902B2 (en) Partial automation of text chat conversations
CN107633380A (en) The task measures and procedures for the examination and approval and system of a kind of anti-data-leakage system
CN111539221B (en) Data processing method and system
CN111639484A (en) Method for analyzing seat call content
US9697246B1 (en) Themes surfacing for communication data analysis
US11436446B2 (en) Image analysis enhanced related item decision
CN111260102A (en) User satisfaction prediction method and device, electronic equipment and storage medium
CN109800354B (en) Resume modification intention identification method and system based on block chain storage
CN111966689B (en) Application knowledge base construction method and device
WO2022267174A1 (en) Script text generating method and apparatus, computer device, and storage medium
CN110929011A (en) Conversation analysis method, device and equipment
CN110955770A (en) Intelligent dialogue system
JP6208794B2 (en) Conversation analyzer, method and computer program
CN102402717A (en) Data analysis facility and method
CN111126071A (en) Method and device for determining questioning text data and data processing method of customer service group
CN113297365B (en) User intention judging method, device, equipment and storage medium
CN114356982A (en) Marketing compliance checking method and device, computer equipment and storage medium
CN111611391B (en) Method, device, equipment and storage medium for classifying conversation
CN114860742A (en) Artificial intelligence-based AI customer service interaction method, device, equipment and medium
CN115186051A (en) Sensitive word detection method and device and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant