CN113886547B - Client real-time dialogue switching method and device based on artificial intelligence and electronic equipment - Google Patents
Client real-time dialogue switching method and device based on artificial intelligence and electronic equipment
- Publication number
- CN113886547B (application CN202111156648.4A)
- Authority
- CN
- China
- Prior art keywords
- dialogue
- feature
- question
- decision tree
- answer
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06F16/3329—Natural language query formulation or dialogue systems (G06F—Electric digital data processing; G06F16/33—Querying; G06F16/332—Query formulation)
- G06F16/3343—Query execution using phonetics (G06F—Electric digital data processing; G06F16/33—Querying; G06F16/334—Query execution)
- G06Q30/01—Customer relationship services (G06Q—ICT specially adapted for administrative, commercial, financial, managerial or supervisory purposes; G06Q30/00—Commerce)
Abstract
The application relates to the technical field of artificial intelligence, and particularly discloses a client real-time dialogue switching method and device based on artificial intelligence and electronic equipment, wherein the client real-time dialogue switching method comprises the following steps: acquiring a history dialogue set, wherein the history dialogue set comprises at least one history dialogue data; according to at least one preset dimension, respectively carrying out feature extraction on each historical dialogue data to obtain at least one feature group, wherein each feature group in the at least one feature group comprises at least one dialogue feature; according to a preset rule, determining a corresponding relation between each dialogue feature in at least one dialogue feature and each node in the decision tree model to obtain an intention decision tree; determining a screening rule according to the intention decision tree; and acquiring real-time dialogue data, and switching the client corresponding to the real-time dialogue data to the manual customer service when the real-time dialogue data accords with the screening rule.
Description
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to a client real-time dialogue switching method and device based on artificial intelligence and electronic equipment.
Background
In recent years, as the number of scenario-marketing clients has grown, the labor cost of manual customer service has risen accordingly. At present, to reduce this cost, the industry mostly adopts an 'intelligent voice robot + manual' outbound mode: the intelligent voice robot first calls the outbound marketing list, and the clients who show intention during the outbound calls are then transferred to manual customer service for follow-up, thereby reducing labor cost.
However, compared with a fully manual follow-up mode, the 'intelligent voice robot + manual' outbound mode transfers only the clients with a clear intention to customer service, and the remaining clients who are not transferred receive no further follow-up. In other words, the accuracy of current intelligent voice robots in identifying client intention is low, so the screening granularity of clients is not fine enough, clients are easily missed, and the marketing volume is reduced.
Disclosure of Invention
In order to solve the problems in the prior art, the embodiments of the present application provide an artificial-intelligence-based client real-time dialogue switching method and apparatus, and an electronic device, which can improve the accuracy of client intention identification and thereby make the screening granularity of clients finer.
In a first aspect, an embodiment of the present application provides an artificial intelligence-based method for forwarding a customer real-time session, including:
Acquiring a historical dialogue set, wherein the historical dialogue set comprises at least one historical dialogue data, and each historical dialogue data in the at least one historical dialogue data is used for recording one complete dialogue content between a client and a voice robot;
According to at least one preset dimension, respectively carrying out feature extraction on each historical dialogue data to obtain at least one feature group, wherein the at least one feature group corresponds to the at least one historical dialogue data one by one, and each feature group in the at least one feature group comprises at least one dialogue feature;
Determining a corresponding relation between each dialogue feature in at least one dialogue feature and each node in the decision tree model according to a preset rule to obtain an intention decision tree;
Determining a screening rule according to the intention decision tree;
and acquiring real-time dialogue data, and switching the client corresponding to the real-time dialogue data to the manual customer service when the real-time dialogue data accords with the screening rule.
In a second aspect, embodiments of the present application provide an artificial intelligence based client real-time conversation switching apparatus, including:
The acquisition module is used for acquiring a historical dialogue set, wherein the historical dialogue set comprises at least one historical dialogue data, and each historical dialogue data in the at least one historical dialogue data is used for recording one complete dialogue content between a client and the voice robot;
The feature extraction module is used for carrying out feature extraction on each historical dialogue data according to at least one preset dimension to obtain at least one feature group, wherein the at least one feature group corresponds to the at least one historical dialogue data one by one, and each feature group in the at least one feature group comprises at least one dialogue feature;
The processing module is used for determining the corresponding relation between each dialogue feature in at least one dialogue feature and each node in the decision tree model according to a preset rule to obtain an intention decision tree, and determining a screening rule according to the intention decision tree;
And the screening module is used for acquiring the real-time dialogue data, and switching the client corresponding to the real-time dialogue data to the manual customer service when the real-time dialogue data accords with the screening rule.
In a third aspect, an embodiment of the present application provides an electronic device, including: and a processor coupled to the memory, the memory for storing a computer program, the processor for executing the computer program stored in the memory to cause the electronic device to perform the method as in the first aspect.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium storing a computer program, the computer program causing a computer to perform the method as in the first aspect.
In a fifth aspect, embodiments of the present application provide a computer program product comprising a non-transitory computer-readable storage medium storing a computer program, the computer program being operable to cause a computer to perform the method as in the first aspect.
The implementation of the embodiment of the application has the following beneficial effects:
In the embodiment of the application, a plurality of historical dialogue data each recording one complete dialogue between a client and the voice robot are obtained to form a historical dialogue set. Then, feature extraction is performed on each historical dialogue data according to at least one preset dimension to obtain at least one feature group. Next, the correspondence between each dialogue feature in the at least one dialogue feature and each node in the decision tree model is determined to obtain an intention decision tree. Finally, rule extraction is performed on each decision tree branch of the intention decision tree, and the extracted rules are screened based on a preset rule-screening rule to determine the final screening rule. The real-time dialogue is then screened according to the final screening rule, and the clients whose real-time dialogues conform to the rule are transferred to manual customer service. In this way, the decision tree model can screen out the factors that truly identify the client's intention and divide the thresholds in sequence, so that the accuracy of client intention identification is improved, the screening granularity of clients is finer, the number of clients missed in the 'intelligent voice robot + manual' outbound mode is reduced, and the transaction rate is improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required for the description of the embodiments will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic hardware structure diagram of a client real-time dialogue switching device based on artificial intelligence according to an embodiment of the present application;
FIG. 2 is a flow chart of a method for forwarding a customer real-time dialogue based on artificial intelligence according to an embodiment of the present application;
FIG. 3 is a flow chart of a method for feature extraction of each historical dialogue data in the dialogue round dimension according to an embodiment of the present application;
FIG. 4 is a flow chart of a method for feature extraction of each historical dialogue data in the key dialogue node dimension according to an embodiment of the present application;
FIG. 5 is a flow chart of a method for feature extraction of each historical dialog data in the dimension of the dialog end node according to an embodiment of the present application;
FIG. 6 is a schematic flow chart of a method for creating a topological graph according to n+1 question-answering features according to an embodiment of the present application;
FIG. 7 is a flowchart of a method for determining a correspondence between each dialog feature of at least one dialog feature and each node of a decision tree model according to a preset rule to obtain an intent decision tree according to an embodiment of the present application;
FIG. 8 is a label diagram of each node in a decision tree model according to an embodiment of the present application;
FIG. 9 is a schematic diagram of an intent decision tree provided by an embodiment of the present application;
FIG. 10 is a flowchart of a method for determining a filtering rule according to an intent decision tree according to an embodiment of the present application;
FIG. 11 is a functional block diagram of an artificial intelligence based device for forwarding a customer's real-time conversation according to an embodiment of the present application;
fig. 12 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the present application. All other embodiments, based on the embodiments of the application, which are apparent to those of ordinary skill in the art without inventive faculty, are intended to be within the scope of the application.
The terms "first," "second," "third," and "fourth" and the like in the description and in the claims and drawings are used for distinguishing between different objects and not necessarily for describing a particular sequential or chronological order. Furthermore, the terms "comprise" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed steps or elements but may include other steps or elements not listed or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, result, or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those skilled in the art will explicitly and implicitly understand that the embodiments described herein may be combined with other embodiments.
Referring to fig. 1, fig. 1 is a schematic hardware structure diagram of a client real-time session transfer device based on artificial intelligence according to an embodiment of the present application. The client real-time conversation transfer apparatus 100 includes at least one processor 101, a communication line 102, a memory 103, and at least one communication interface 104.
In this embodiment, the processor 101 may be a general-purpose central processing unit (central processing unit, CPU), microprocessor, application-specific integrated circuit (ASIC), or one or more integrated circuits for controlling the execution of the program according to the present application.
Communication line 102 may include a pathway to transfer information between the above-described components.
The communication interface 104, which may be any transceiver-like device (e.g., antenna, etc.), is used to communicate with other devices or communication networks, such as ethernet, RAN, wireless local area network (wireless local area networks, WLAN), etc.
The memory 103 may be, but is not limited to, a read-only memory (ROM) or other type of static storage device that can store static information and instructions, a random access memory (RAM) or other type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including compact discs, laser discs, optical discs, digital versatile discs, Blu-ray discs, etc.), magnetic disk storage media or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.
In this embodiment, the memory 103 may be independently provided and connected to the processor 101 via the communication line 102. Memory 103 may also be integrated with processor 101. The memory 103 provided by embodiments of the present application may generally have non-volatility. The memory 103 is used for storing computer-executable instructions for executing the scheme of the present application, and is controlled by the processor 101 to execute the instructions. The processor 101 is configured to execute computer-executable instructions stored in the memory 103 to implement the methods provided in the embodiments of the present application described below.
In alternative embodiments, computer-executable instructions may also be referred to as application code, as the application is not particularly limited.
In alternative embodiments, processor 101 may include one or more CPUs, such as CPU0 and CPU1 in fig. 1.
In alternative embodiments, the client real-time conversation transfer apparatus 100 may include multiple processors, such as processor 101 and processor 107 in FIG. 1. Each of these processors may be a single-core (single-CPU) processor or may be a multi-core (multi-CPU) processor. A processor herein may refer to one or more devices, circuits, and/or processing cores for processing data (e.g., computer program instructions).
In an alternative embodiment, if the client real-time session transfer device 100 is a server, it may be, for example, a stand-alone server, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, content delivery network (CDN), big data, and artificial intelligence platforms. The client real-time conversation transfer apparatus 100 may further include an output device 105 and an input device 106. The output device 105 communicates with the processor 101 and may display information in a variety of ways. For example, the output device 105 may be a liquid crystal display (LCD), a light emitting diode (LED) display device, a cathode ray tube (CRT) display device, or a projector. The input device 106 is in communication with the processor 101 and may receive user input in a variety of ways. For example, the input device 106 may be a mouse, a keyboard, a touch screen device, a sensing device, or the like.
The client real-time conversation transfer apparatus 100 may be a general-purpose device or a special-purpose device. Embodiments of the present application are not limited to the type of client real-time conversation transfer apparatus 100.
Secondly, it should be noted that the embodiments of the present disclosure may acquire and process related data based on artificial intelligence technology. Artificial intelligence (AI) is the theory, method, technology, and application system that uses a digital computer or a machine controlled by a digital computer to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use knowledge to obtain optimal results.
Artificial intelligence infrastructure technologies generally include technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technologies, operation/interaction systems, mechatronics, and the like. The artificial intelligence software technology mainly comprises a computer vision technology, a robot technology, a biological recognition technology, a voice processing technology, a natural language processing technology, machine learning/deep learning and other directions.
The client real-time dialogue transferring method based on artificial intelligence disclosed by the application is described as follows:
Referring to fig. 2, fig. 2 is a schematic flow chart of a client real-time dialogue transferring method based on artificial intelligence according to an embodiment of the present application. The customer real-time dialogue transferring method comprises the following steps:
201: a historical dialog set is obtained.
In this embodiment, the historical dialogue set may include at least one historical dialogue data, each of the at least one historical dialogue data being used to record one complete dialogue content between the client and the voice robot. For example, the voice robot may extract the dialogue content after each dialogue is completed to form historical dialogue data, and store the historical dialogue data in a historical database to be retrieved when needed.
202: And respectively carrying out feature extraction on each historical dialogue data according to at least one preset dimension to obtain at least one feature group.
In this embodiment, the at least one feature group corresponds to the at least one historical dialogue data one-to-one, and each feature group in the at least one feature group includes at least one dialogue feature. By way of example, the at least one dimension may include a dialogue round number dimension, a key dialogue node dimension, and a dialogue end node dimension; the feature extraction manner in each of these three dimensions is described below.
In the dialogue round number dimension, the application provides a method for extracting features from each historical dialogue data, as shown in fig. 3, which comprises the following steps:
301: and extracting keywords from each historical dialogue data to obtain at least one keyword.
In this embodiment, word segmentation may be performed on each historical dialogue data, for example, segmenting each historical dialogue data into a plurality of candidate words according to its semantics. Then, the inverse document frequency of each candidate word is calculated, and the candidate words whose inverse document frequency is greater than a preset threshold are determined as keywords.
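As an illustration only, the inverse-document-frequency screening described above can be sketched in Python as follows; the whitespace word segmentation and the threshold value are simplifying assumptions and are not part of the patent.

```python
import math
from collections import Counter

def extract_keywords(dialogue_texts, target_text, idf_threshold=1.0):
    """Keep candidate words of target_text whose inverse document frequency
    over the historical dialogue set exceeds a preset threshold."""
    # Document frequency: in how many historical dialogues does each word appear?
    doc_freq = Counter()
    for text in dialogue_texts:
        doc_freq.update(set(text.split()))        # word segmentation simplified to a split
    n_docs = len(dialogue_texts)

    keywords = []
    for word in set(target_text.split()):
        idf = math.log(n_docs / (1 + doc_freq[word]))  # inverse document frequency
        if idf > idf_threshold:
            keywords.append(word)
    return keywords
```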
302: And cutting each history dialogue data according to at least one keyword to obtain at least one first history dialogue sub-data.
In this embodiment, at least one first history dialogue sub-data corresponds to at least one keyword one by one.
303: The number of conversation rounds per first historical conversation sub-data in the at least one first historical conversation sub-data is determined separately.
In this embodiment, the number of dialogue rounds refers to the number of question-and-answer rounds occurring in one historical dialogue sub-data, where one question sentence and one answer sentence together count as one round of dialogue.
304: And determining the average conversation round number of the conversation type corresponding to each first historical conversation sub-data according to the keywords corresponding to each first historical conversation sub-data.
In this embodiment, the word vector may be obtained by performing word embedding processing on the keyword corresponding to each of the first history dialogue sub-data. Specifically, the keyword can be subjected to data coding, and the coded data is mapped to a real space to obtain a feature vector, wherein the feature vector is a word vector corresponding to the keyword. For example, for the keyword "fund," a word vector [1,2,2,3,3,3] can be obtained after data encoding and real space mapping. And classifying the keywords according to the word vector, and determining the dialogue type of the first historical dialogue sub-data corresponding to the keywords.
Meanwhile, the average conversation round number can be determined according to massive historical conversation data stored in the historical database, and after the determination, the average conversation round number is associated with the corresponding conversation type and then stored so as to be directly called when needed. In addition, the average number of dialog turns may be dynamically updated based on an accumulation of historical dialog data to ensure the accuracy of the subsequently determined screening rules.
305: And determining the first dialogue characteristic of each historical dialogue data according to the average dialogue round number of the dialogue type corresponding to each first historical dialogue sub-data and the dialogue round number of each first historical dialogue sub-data.
In this embodiment, the difference between the average number of dialogue rounds of the dialogue type corresponding to each first historical dialogue sub-data and the number of dialogue rounds of that first historical dialogue sub-data may be determined separately, and the obtained differences are then arranged according to the order in which the corresponding first historical dialogue sub-data appear in each historical dialogue data, so as to obtain a dialogue round number vector as the first dialogue feature.
For example, for a certain historical dialogue data, the corresponding numbers of dialogue rounds are: [type 1, 2 rounds], [type 2, 5 rounds], [type 3, 7 rounds] and [type 4, 3 rounds]. Meanwhile, by query, the average number of dialogue rounds corresponding to type 1 is 3 rounds, that of type 2 is 4 rounds, that of type 3 is 7 rounds, and that of type 4 is 2 rounds. On this basis, it can be calculated that the difference corresponding to type 1 is 1, the difference corresponding to type 2 is -1, the difference corresponding to type 3 is 0, and the difference corresponding to type 4 is -1. In this historical dialogue data, the first historical dialogue sub-data corresponding to the 4 types are arranged in the order: type 2, type 3, type 1, and type 4. Thus, the dialogue round number vector is obtained: [-1, 0, 1, -1].
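The computation of this dialogue round number vector can be reproduced with a short sketch; the dictionary of average rounds per dialogue type is assumed to have been looked up from the historical database as described above, and all names are illustrative.

```python
# Worked example from the text: sub-dialogues appear in the order
# type 2, type 3, type 1, type 4, with 5, 7, 2 and 3 rounds respectively.
sub_dialogues = [("type 2", 5), ("type 3", 7), ("type 1", 2), ("type 4", 3)]

# Assumed lookup of the average number of dialogue rounds per dialogue type.
average_rounds = {"type 1": 3, "type 2": 4, "type 3": 7, "type 4": 2}

# First dialogue feature: average rounds minus actual rounds, kept in the
# original order of the sub-dialogues within the historical dialogue data.
round_vector = [average_rounds[t] - rounds for t, rounds in sub_dialogues]
print(round_vector)  # [-1, 0, 1, -1]
```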
Meanwhile, in the key dialogue node dimension, the application also provides a method for extracting features from each historical dialogue data, as shown in fig. 4, which comprises the following steps:
401: At least one key dialogue node is determined in each historical dialogue data according to the dialogue flow information of each historical dialogue data.
In this embodiment, the dialogue flow information may record the handling situation of each flow node of the service during the dialogue; based on this, each flow node may be used as one of the at least one key dialogue node.
402: And segmenting each historical dialogue data according to the at least one key dialogue node to obtain at least one second historical dialogue sub-data.
In this embodiment, the at least one second historical dialogue sub-data corresponds to the at least one key dialogue node one by one.
403: Semantic features of each of the at least one second historical dialog sub-data are determined separately.
404: And constructing a dialogue node jump graph according to the arrangement sequence of at least one key dialogue node in each historical dialogue data.
In this embodiment, the dialogue node jump graph is used to declare the jump relationship between the key dialogue nodes of the at least one key dialogue node.
405: And determining the weight of each key dialogue node according to the dialogue node jump graph.
In this embodiment, the dialogue node jump graph may represent the distribution of the user's behavior when interacting with the man-machine dialogue system. Specifically, the dialogue node jump graph may be represented as a sequential decision model, and this model may be used to solve for the weight parameter of each key dialogue node. The weight parameter may represent the impact value or contribution of the corresponding key dialogue node to the communication service.
406: The second dialogue feature of each historical dialogue data is determined according to the semantic feature of each second historical dialogue sub data, the dialogue node jump map and the weight of each key dialogue node.
In this embodiment, the ordering of each second historical dialog sub-data may be determined according to the dialog node jump map, and then, according to the ordering, the semantic features corresponding to each second historical dialog sub-data are spliced to obtain the first fusion feature as the second dialog feature of each historical dialog data. In addition, each history dialogue data may have a plurality of second dialogue features, based on which, according to the weight of each key dialogue node, the semantic features corresponding to the second history dialogue sub-data are weighted and summed to obtain a second fusion feature as another second dialogue feature of each history dialogue data.
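A minimal sketch of the two fusion features described in step 406 is given below, assuming the semantic features are fixed-length numeric vectors and the node weights have already been solved from the sequential decision model; all names are illustrative.

```python
import numpy as np

def second_dialogue_features(semantic_features, node_order, node_weights):
    """semantic_features: dict mapping key dialogue node -> 1-D semantic vector.
    node_order: node names in the order given by the dialogue node jump graph.
    node_weights: dict mapping node -> weight from the sequential decision model."""
    # First fusion feature: concatenate the semantic vectors in jump-graph order.
    concat_feature = np.concatenate([semantic_features[n] for n in node_order])

    # Second fusion feature: weighted sum of the semantic vectors by node weight.
    weighted_feature = sum(node_weights[n] * semantic_features[n] for n in node_order)
    return concat_feature, weighted_feature
```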
Finally, in the dimension of the session end node, the present application also provides a method for extracting features from each historical session data, as shown in fig. 5, where the method includes:
501: and determining a question-answer pair corresponding to the dialogue ending node in each historical dialogue data as a first question-answer pair.
In this embodiment, the question-answer pair is composed of one question and one answer, and the answer is used to answer the question, and each history dialogue data is composed of a plurality of question-answer pairs.
502: And extracting the first question-answer pair and the first n question-answer pairs of the first question-answer pair from each historical dialogue data to obtain n+1 pairs of second question-answer pairs.
In this embodiment, n may be obtained by analyzing the current historical dialogue data according to the maximum a posteriori estimation theory, and is typically an integer greater than or equal to 1.
503: And respectively extracting the characteristics of each pair of the n+1 pairs of second question-answer pairs to obtain n+1 question-answer characteristics.
In this embodiment, n+1 question-answer features are in one-to-one correspondence with n+1 pairs of second question-answer pairs. For example, semantic extraction may be performed on each pair of second question-answer pairs, and a corresponding semantic vector may be obtained as a question-answer feature.
504: And establishing a topological relation diagram according to the n+1 question-answer features.
In this embodiment, a method for establishing a topological relation diagram according to n+1 question-answer features is provided, as shown in fig. 6, and the method includes:
601: Performing C(n+1, 2) random selections, and combining the two question-answer features selected in each selection to obtain C(n+1, 2) feature groups.
In this embodiment, the pairs of question-answer features selected in any two selections are not exactly the same, and each of the C(n+1, 2) feature groups includes a first question-answer feature and a second question-answer feature, the first question-answer feature being different from the second question-answer feature.
For example, for 3 question-answer features {A, B, C}, C(3, 2) = 3 random selections are performed, resulting in the feature groups {A, B}, {A, C} and {B, C}.
602: Determining a correlation coefficient between the first question-answer feature and the second question-answer feature in each feature group respectively, to obtain C(n+1, 2) correlation coefficients.
In this embodiment, the first question-answer feature may be subjected to a modulus taking to obtain a first modulus, and the second question-answer feature may be subjected to a modulus taking to obtain a second modulus. Then, a product value of the first and second modes is determined, and an inner product between the first question-answer feature and the second question-answer feature is determined. Finally, the quotient of the inner product and the product value is used as a correlation coefficient between the first question-answer characteristic and the second question-answer characteristic.
Illustratively, an included angle cosine value between the first question-answer feature and the second question-answer feature is calculated by a dot product, and is used as a correlation coefficient between the first question-answer feature and the second question-answer feature.
Specifically, for the first question-answer feature A = [a1, a2, …, ai, …, an] and the second question-answer feature B = [b1, b2, …, bi, …, bn], where i = 1, 2, …, n, the included angle cosine value can be expressed by formula ①:
cosθ = (A·B) / (‖A‖ × ‖B‖) …………①
Wherein A·B represents the inner product of the first question-answer feature A and the second question-answer feature B, ‖ ‖ is the modulus sign, ‖A‖ represents the modulus of the first question-answer feature A, and ‖B‖ represents the modulus of the second question-answer feature B.
Further, the inner product of the first question-answer feature A and the second question-answer feature B may be represented by formula ②:
A·B = a1×b1 + a2×b2 + … + an×bn …………②
Further, the modulus of the first question-answer feature A can be expressed by formula ③:
‖A‖ = √(a1² + a2² + … + an²) …………③
And finally, the included angle cosine value is taken as the correlation coefficient between the first question-answer feature A and the second question-answer feature B. Illustratively, the correlation coefficient between the first question-answer feature A and the second question-answer feature B may be represented by formula ④:
d = cosθ …………④
Because the range of the cosine value is [-1, 1], the cosine value retains the same properties in a high-dimensional space: it is 1 when the two features point in the same direction, 0 when they are orthogonal, and -1 when they point in opposite directions. That is, the closer the cosine value is to 1, the closer the directions of the two features are and the greater their correlation; the closer it is to -1, the more opposite their directions are and the smaller their correlation; and a value approaching 0 indicates that the two features are nearly orthogonal, i.e., their directions differ substantially. Therefore, using the cosine value as the correlation coefficient between the first question-answer feature and the second question-answer feature can accurately represent the degree of correlation between them.
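Formulas ① to ④ amount to the ordinary cosine similarity; a minimal numeric sketch (illustrative only) is:

```python
import numpy as np

def correlation_coefficient(a, b):
    """Cosine of the angle between two question-answer feature vectors (formulas 1-4)."""
    inner = np.dot(a, b)               # formula 2: inner product A.B
    norm_a = np.linalg.norm(a)         # formula 3: modulus of A
    norm_b = np.linalg.norm(b)         # modulus of B
    return inner / (norm_a * norm_b)   # formulas 1 and 4: d = cos(theta)

print(correlation_coefficient(np.array([1.0, 0.0]), np.array([1.0, 1.0])))  # ~0.707
```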
603: N+1 question-answering features are taken as n+1 nodes.
In the embodiment, n+1 question-answer features are in one-to-one correspondence with n+1 nodes;
604: will n-! And each relevance coefficient in the relevance coefficients is used as an edge between two nodes corresponding to two question-answer features in the feature group corresponding to each relevance coefficient, so that a topological relation diagram is obtained.
505: And determining a third dialogue characteristic of each historical dialogue data according to the n+1 question-answer characteristics and the topological relation diagram.
In this embodiment, a graph neural network is trained using the topological relation diagram to predict the keyword at the next moment based on the first question-answer pair, and feature extraction is performed on the predicted keyword to obtain a keyword feature. Finally, the mean square error between the keyword feature and the real question-answer feature is calculated as the third dialogue feature of the corresponding historical dialogue data.
203: And determining the corresponding relation between each dialogue feature in at least one dialogue feature and each node in the decision tree model according to a preset rule to obtain an intention decision tree.
In this embodiment, the importance of each dialogue feature may be calculated by a decision tree model algorithm, and each dialogue feature of the at least one dialogue feature is assigned to a node, starting from the root node, according to the improved performance metric that each dialogue feature brings to the current node. The splitting criterion of each node is that, after the dialogue feature of the node divides the at least one historical dialogue data in the historical dialogue set, the purity of each resulting branch is highest, that is, the data of each branch belongs to the same category as far as possible.
The present application provides a method for determining a correspondence between each dialog feature in at least one dialog feature and each node in a decision tree model according to a preset rule to obtain an intent decision tree, as shown in fig. 7, the method includes:
701: at least one dialog feature is composed into a first set.
702: And executing a feature matching process on the root node in the decision tree model and the first set based on the improved performance metric of each dialogue feature in the first set to obtain a fourth dialogue feature corresponding to the root node.
In this embodiment, the improved performance metric corresponding to the fourth dialogue feature is the largest, and the improved performance metric is determined by the dialogue feature corresponding to it and the root node in the decision tree model. Illustratively, the improved performance metric may be represented using the Gini coefficient: the smaller the Gini coefficient of a certain dialogue feature, the purer the classification result based on that dialogue feature, the higher the importance of that dialogue feature, and the larger the corresponding improved performance metric. Specifically, for each dialogue feature, a classification rule based on that feature is determined to classify the at least one historical dialogue data in the historical dialogue set, and the Gini coefficient of the node is calculated based on the classification result.
In this embodiment, the Gini coefficient may be represented by formula ⑤:
Gini(x) = Σ_{j=1..K} (N_j / N) × [1 − (j_1 / N_j)² − (j_0 / N_j)²] …………⑤
Wherein x represents each dialogue feature; N represents the total number of classified historical dialogue data; K represents the number of categories of the dialogue feature, which is determined by the nature of the dialogue feature itself (for example, when the dialogue feature is the gender of the interlocutor, there are 2 categories, male and female, so the number of categories is 2); N_j represents the number of samples with dialogue feature x = j in the classified historical dialogue data; j_1 represents the number of positive samples among the samples with dialogue feature x = j; and j_0 represents the number of negative samples among the samples with dialogue feature x = j.
Based on this, in the present embodiment, the improved performance metric is obtained from the Gini coefficient by formula ⑥, such that a smaller Gini coefficient corresponds to a larger improved performance metric.
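A sketch of formula ⑤ and of one possible reading of formula ⑥ follows; taking the improved performance metric as 1 minus the Gini coefficient is an assumption consistent with "smaller Gini, larger metric" and is not a formula given in the original.

```python
from collections import defaultdict

def gini_coefficient(samples):
    """samples: list of (feature_value, label) pairs for one dialogue feature,
    label 1 = positive, 0 = negative. Weighted Gini coefficient, formula (5)."""
    groups = defaultdict(lambda: [0, 0])        # value j -> [j1 positives, j0 negatives]
    for value, label in samples:
        groups[value][0 if label == 1 else 1] += 1
    n_total = len(samples)

    gini = 0.0
    for j1, j0 in groups.values():
        n_j = j1 + j0
        gini += (n_j / n_total) * (1.0 - (j1 / n_j) ** 2 - (j0 / n_j) ** 2)
    return gini

def improved_performance_metric(samples):
    # Assumption: the metric is taken as 1 - Gini, so a purer split scores higher.
    return 1.0 - gini_coefficient(samples)

print(improved_performance_metric([("male", 1), ("male", 1), ("female", 0), ("female", 1)]))  # 0.75
```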
703: and removing the fourth dialogue characteristic from the first set to obtain a new first set, and taking the child node corresponding to the root node as a new root node.
704: And executing a feature matching process on the new root node and the new first set to obtain a new fourth dialogue feature corresponding to the new root node.
705: And removing the new fourth dialogue features from the new first set until the number of dialogue features in the new first set is 0, and obtaining the intention decision tree.
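The feature matching process of steps 701 to 705 can be sketched as a greedy recursion; the score and split helpers stand in for the Gini-based metric and the node split described above and are assumptions of this sketch.

```python
def build_intent_tree(features, dialogues, score, split):
    """features: set of candidate dialogue features (the 'first set').
    dialogues: historical dialogue data reaching the current node.
    score(feature, dialogues): improved performance metric of a feature on this data.
    split(feature, dialogues): partitions the data into child subsets by that feature.
    Returns a nested dict {'feature': ..., 'children': [...]}; leaves are None."""
    if not features or not dialogues:
        return None
    # Feature matching process: pick the feature with the largest improved metric.
    best = max(features, key=lambda f: score(f, dialogues))
    remaining = features - {best}          # remove it from the (new) first set
    children = [build_intent_tree(remaining, subset, score, split)
                for subset in split(best, dialogues)]
    return {"feature": best, "children": children}
```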
To more clearly describe the method of determining the correspondence between each dialogue feature of the at least one dialogue feature and each node of the decision tree model according to the present application, a method of labeling the nodes of the decision tree model is provided here. Specifically, the root node of the decision tree model is labeled 1, and the nodes of the layer below the root node are numbered sequentially from left to right until all nodes of that layer are labeled; the nodes of the next layer down are then numbered sequentially from left to right, and so on. FIG. 8 shows the labels of the nodes in such a decision tree model.
Based on the above labeling, the method for determining the correspondence between each dialog feature of the at least one dialog feature and each node in the decision tree model according to the present application will be described in detail below by way of an example:
Now assume that after step 202, 4 dialog features are obtained: feature 1, feature 2, feature 3, and feature 4, then the first set is [ feature 1, feature 2, feature 3, feature 4].
First, for node 1 (i.e., the root node), the improved performance metric for feature 1, feature 2, feature 3, and feature 4 can be calculated by the method provided in step 602, in combination with the historical dialog set, as 0.6, 0.7, 0.55, and 0.85. And taking the feature 4 as a dialogue feature corresponding to the node 1, and classifying the history dialogue set based on the feature 4 to obtain a history dialogue set 1 and a history dialogue set 2. Wherein the history dialogue set 1 corresponds to the sub-decision tree on the side of the node 2, and the history dialogue set 2 corresponds to the sub-decision tree on the side of the node 3. And simultaneously, removing the feature 4 from the first set to obtain a new first set [ feature 1, feature 2 and feature 3] serving as the first set corresponding to the feature matching process of the node 2 and the node 3.
Then, for node 2, the same method as provided in step 602 may be used to calculate, in combination with the historical dialog set 1 corresponding to node 2, an improved performance metric of 0.9 for feature 1, an improved performance metric of 0.85 for feature 2, and an improved performance metric of 0.3 for feature 3. Then the feature 1 is used as the dialogue feature corresponding to the node 2, and the history dialogue set 1 is classified based on the feature 1, so as to obtain a history dialogue set 3 and a history dialogue set 4. Wherein the history dialogue set 3 corresponds to the sub-decision tree on the side of the node 4, and the history dialogue set 4 corresponds to the sub-decision tree on the side of the node 5. And simultaneously, removing the feature 1 from the first set to obtain a new first set [ feature 2 and feature 3] serving as a first set corresponding to the feature matching process of the node 4 and the node 5.
Similarly, for node 3, the method provided in step 602 may be combined with the historical dialog set 2 corresponding to node 3 to calculate an improved performance metric of 0.7 for feature 1, 0.75 for feature 2, and 0.9 for feature 3. Then the feature 3 is used as a dialogue feature corresponding to the node 3, and the history dialogue set 2 is classified based on the feature 3, so as to obtain a history dialogue set 5 and a history dialogue set 6. Wherein the history dialogue set 5 corresponds to the sub-decision tree on the side of the node 6, and the history dialogue set 6 corresponds to the sub-decision tree on the side of the node 7. And simultaneously, removing the feature 3 from the first set to obtain a new first set [ feature 1 and feature 2] serving as a first set corresponding to the feature matching process of the node 6 and the node 7.
For node 4, the improved performance metric for feature 2 is 0.85 and the improved performance metric for feature 3 is 0.9, calculated in conjunction with the historical dialog set 3 corresponding to node 4. Feature 3 is taken as the corresponding dialog feature of node 4. At this time, after eliminating the feature 3, only the feature 2 remains in the new first set, and the feature 2 can be directly considered as the dialogue feature corresponding to the node 8, so as to simplify the operation.
For node 5, the improved performance metric for feature 2 is 0.85 and the improved performance metric for feature 3 is 0.4, combined with the historical dialog set 4 for node 5. Feature 2 is taken as the dialog feature corresponding to node 5. At this time, after eliminating the feature 2, only the feature 3 remains in the new first set, and the feature 3 may be directly considered as the dialogue feature corresponding to the node 9, so as to simplify the operation.
For node 6, the improved performance metric for feature 1 is 0.3 and the improved performance metric for feature 2 is 0.8, combined with the historical dialog set 5 corresponding to node 6. Feature 2 is taken as the corresponding dialog feature of node 6. At this time, after eliminating the feature 2, only the feature 1 remains in the new first set, and the feature 1 may be directly considered as the dialogue feature corresponding to the node 10, so as to simplify the operation.
For node 7, the improved performance metric for feature 1 is 0.95 and the improved performance metric for feature 2 is 0.8, combined with the historical dialog set 6 for node 7. Feature 1 is taken as the dialog feature corresponding to node 7. At this time, after eliminating the feature 1, only the feature 2 remains in the new first set, and the feature 2 may be directly considered as the dialogue feature corresponding to the node 11, so as to simplify the operation.
Thus, a correspondence between each of the at least one dialog feature and each node in the decision tree model may be determined. Resulting in an intent decision tree as shown in fig. 9.
204: And determining screening rules according to the intent decision tree.
In this embodiment, each node in the intent decision tree corresponds to a dialogue feature, each directed edge in the intent decision tree (as shown in fig. 8) corresponds to a classification rule, and the intent decision tree includes at least one decision tree branch. The classification rule is determined by dialogue characteristics corresponding to a father node in two nodes connected by the directed edge corresponding to the classification rule. Based on this, the present application provides a method for determining a screening rule according to an intent decision tree, as shown in fig. 10, the method includes:
1001: for each decision tree branch of the at least one decision tree branch, respectively determining dialogue characteristics corresponding to nodes contained in each decision tree branch and classification rules corresponding to directed edges contained in each decision tree branch.
In this embodiment, a decision tree branch runs from the root node of the intention decision tree to any one of the result nodes.
1002: And extracting the dialogue characteristics corresponding to the nodes contained in each decision tree branch and the classification rules corresponding to the directed edges contained in each decision tree branch according to the progressive sequence of the nodes contained in each decision tree branch and the directed edges contained in each decision tree branch to obtain at least one first screening rule.
In this embodiment, at least one first filtering rule corresponds to at least one branch of the decision tree one-to-one. Illustratively, assume a decision tree branch is: and (3) arranging the dialogue characteristics 1 corresponding to the root node, the classification rules 1 corresponding to the directed edge 1, the dialogue characteristics 2 corresponding to the node 1, the classification rules 2 corresponding to the directed edge 2, the dialogue characteristics 3 corresponding to the node 2, the classification rules 3 corresponding to the directed edge 3 and the dialogue characteristics 4 corresponding to the result node 1 according to the sequence of the dialogue characteristics 1-classification rules 1-dialogue characteristics 2-classification rules 2-dialogue characteristics 3-classification rules 3-dialogue characteristics 4 to obtain the first screening rule corresponding to the branch of the decision tree.
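A sketch of the branch walking in steps 1001 and 1002, assuming each tree node stores its dialogue feature and a list of (classification rule, child) edges; this data layout is illustrative, not the patent's representation.

```python
def extract_screening_rules(node, prefix=None):
    """node: {'feature': ..., 'edges': [(classification_rule, child_node), ...]}.
    Yields one first screening rule per branch, as a list alternating the
    dialogue features of the nodes and the classification rules of the edges."""
    prefix = (prefix or []) + [node["feature"]]
    if not node.get("edges"):              # result node reached: the branch is complete
        yield prefix
        return
    for rule, child in node["edges"]:
        yield from extract_screening_rules(child, prefix + [rule])
```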
1003: And respectively determining the first intention rate of each first screening rule in the at least one first screening rule, and arranging the at least one first screening rule in descending order of the first intention rate to obtain a first rule set.
In this embodiment, for the historical dialogue subset corresponding to the result node of the decision tree branch corresponding to each first screening rule, the ratio of the number of dialogues in which the business was transacted to the total number of dialogues in that subset may be calculated as the first intention rate corresponding to that first screening rule.
1004: And carrying out equal-frequency binning on the historical dialogue set according to the first rule set to obtain at least one bin.
In this embodiment, the historical dialogue set may be divided into bins by equal-frequency binning according to the number of historical dialogue data, with each bin holding 10% of the historical dialogue data, so that the clients are divided into 10 classes. Specifically, assuming that 12 first screening rules are obtained, the sorted result is: [screening rule 5: 90%, 5%; screening rule 7: 80%, 5%; screening rule 1: 75%, 10%; screening rule 9: 65%, 15%; screening rule 12: 60%, 5%; screening rule 2: 50%, 10%; screening rule 3: 45%, 5%; screening rule 6: 40%, 10%; screening rule 10: 30%, 10%; screening rule 8: 25%, 5%; screening rule 4: 15%, 10%; screening rule 11: 5%, 10%]. The first percentage represents the intention rate corresponding to the rule, and the second percentage represents the proportion of the number of historical dialogue data corresponding to the rule to the total number of historical dialogue data.
Based on this, screening rule 5 covers only 5% of the historical dialogue data, which is lower than the 10% required for one bin; therefore the next-ranked rule must contribute 5% of historical dialogue data, which together with the 5% corresponding to screening rule 5 constitutes 10% of the historical dialogue data and forms the first bin. Since the rule ranked immediately after screening rule 5, namely screening rule 7, has exactly 5% of the historical dialogue data, screening rule 5 and screening rule 7 are divided into the first bin. In this way, when the amount of data is insufficient, the missing historical dialogue data is "borrowed" from the next-ranked rule, and when the amount overflows, the overflowing historical dialogue data is divided into the next bin.
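The "borrow when short, spill when overflowing" binning of step 1004 can be sketched as below, assuming the rules arrive sorted by intention rate in descending order together with their share of the historical dialogue data; names and the bin share are illustrative.

```python
def equal_frequency_bins(sorted_rules, bin_share=0.10):
    """sorted_rules: list of (rule_name, intention_rate, data_share) tuples,
    sorted by intention_rate descending; data_share values sum to 1.0.
    Greedily fills bins of bin_share, borrowing data from the next rule when a
    bin is short and spilling any overflow into the next bin."""
    bins, current, filled = [], [], 0.0
    for name, rate, share in sorted_rules:
        while share > 1e-9:
            take = min(share, bin_share - filled)   # borrow only what the bin still needs
            current.append((name, take))
            filled += take
            share -= take                           # leftover spills into the next bin
            if filled >= bin_share - 1e-9:          # bin full: start the next one
                bins.append(current)
                current, filled = [], 0.0
    if current:
        bins.append(current)
    return bins

print(equal_frequency_bins([("rule 5", 0.90, 0.05), ("rule 7", 0.80, 0.05), ("rule 1", 0.75, 0.10)]))
```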
1005: A second intent rate is determined for each of the at least one bin.
In this embodiment, since the data in each bin is recombined, the corresponding intention rate also changes; therefore, a new intention rate needs to be recalculated based on the historical dialogue data in each bin. In addition, an accuracy rate may also be obtained for each bin by calculating its intention rate from the historical dialogue data in that bin.
1006: And taking the first screening rule corresponding to the bin with the maximum second intention rate as the screening rule.
Illustratively, assuming that, after recalculation, the second intention rate of the fourth bin is the greatest, and that the historical dialogue data in the fourth bin are derived from screening rule 6 and screening rule 8, then screening rule 6 and screening rule 8 are the final screening rules.
205: And acquiring real-time dialogue data, and switching the client corresponding to the real-time dialogue data to the manual customer service when the real-time dialogue data accords with the screening rule.
In summary, in the artificial-intelligence-based client real-time dialogue transfer method provided by the invention, a plurality of historical dialogue data each recording one complete dialogue between a client and the voice robot are obtained to form a historical dialogue set. Then, feature extraction is performed on each historical dialogue data according to at least one preset dimension to obtain at least one feature group. Next, the correspondence between each dialogue feature in the at least one dialogue feature and each node in the decision tree model is determined to obtain an intention decision tree. Finally, rule extraction is performed on each decision tree branch of the intention decision tree, and the extracted rules are screened based on a preset rule-screening rule to determine the final screening rule. The real-time dialogue is then screened according to the final screening rule, and the clients whose real-time dialogues conform to the rule are transferred to manual customer service. In this way, the decision tree model can screen out the factors that truly identify the client's intention and divide the thresholds in sequence, so that the accuracy of client intention identification is improved, the screening granularity of clients is finer, the number of clients missed in the 'intelligent voice robot + manual' outbound mode is reduced, and the transaction rate is improved.
Referring to fig. 11, fig. 11 is a functional block diagram of a client real-time conversation switching device based on artificial intelligence according to an embodiment of the present application. As shown in fig. 11, the client real-time conversation transfer apparatus 1100 includes:
The collection module 1101 is configured to obtain a historical dialog set, where the historical dialog set includes at least one historical dialog data, and each historical dialog data in the at least one historical dialog data is used for recording a complete dialog content between a client and a voice robot;
The feature extraction module 1102 is configured to perform feature extraction on each piece of historical dialogue data according to at least one preset dimension, so as to obtain at least one feature group, where the at least one feature group corresponds to the at least one piece of historical dialogue data one by one, and each feature group in the at least one feature group includes at least one dialogue feature;
The processing module 1103 is configured to determine, according to a preset rule, a correspondence between each dialog feature in the at least one dialog feature and each node in the decision tree model, obtain an intent decision tree, and determine a screening rule according to the intent decision tree;
And the screening module 1104 is used for acquiring the real-time dialogue data, and switching the client corresponding to the real-time dialogue data to the manual customer service when the real-time dialogue data accords with the screening rule.
In an embodiment of the present invention, the at least one dimension may include: a dialogue round number dimension. On this basis, in terms of performing feature extraction on each historical dialogue data according to the at least one preset dimension, the feature extraction module 1102 is specifically configured to:
Extracting keywords from each historical dialogue data to obtain at least one keyword;
Dividing each history dialogue data according to at least one keyword to obtain at least one first history dialogue sub-data, wherein the at least one first history dialogue sub-data corresponds to the at least one keyword one by one;
determining a number of conversation rounds of each first historical conversation sub-data in the at least one first historical conversation sub-data, respectively;
Determining the average dialogue round number of the dialogue type corresponding to each first historical dialogue sub-data according to the keywords corresponding to each first historical dialogue sub-data;
And determining the first dialogue characteristic of each historical dialogue data according to the average dialogue round number of the dialogue type corresponding to each first historical dialogue sub-data and the dialogue round number of each first historical dialogue sub-data.
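One plausible reading of this first dialogue feature, in which each keyword-delimited sub-dialogue is compared with the average round count of its dialogue type, is sketched below; the embodiment does not fix how the per-type ratios are aggregated, so averaging them is an assumption.

```python
def first_dialogue_feature(sub_dialogues, avg_rounds_by_type):
    """sub_dialogues: list of (keyword, round_count) pairs obtained by splitting
    one historical dialogue on its extracted keywords.
    avg_rounds_by_type: keyword -> average round count of that dialogue type,
    estimated over the whole historical dialogue set."""
    if not sub_dialogues:
        return 0.0
    ratios = []
    for keyword, round_count in sub_dialogues:
        avg = avg_rounds_by_type.get(keyword, 1.0) or 1.0
        # >1 means this sub-dialogue ran longer than a typical dialogue of its type.
        ratios.append(round_count / avg)
    # Aggregation is not fixed by the embodiment; averaging is one plausible choice.
    return sum(ratios) / len(ratios)
```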
In an embodiment of the present invention, the at least one dimension may include: a key conversation node dimension. On this basis, in terms of performing feature extraction on each historical dialogue data according to the at least one preset dimension, the feature extraction module 1102 is specifically configured to:
Determining at least one key conversation node in each historical dialogue data according to the dialogue process information of each historical dialogue data;
Dividing each historical dialogue data according to the at least one key conversation node to obtain at least one second historical dialogue sub-data, wherein the at least one second historical dialogue sub-data corresponds to the at least one key conversation node one by one;
Determining semantic features of each second historical dialogue sub-data in the at least one second historical dialogue sub-data respectively;
Constructing a dialogue node jump graph according to the arrangement order of the at least one key conversation node in each historical dialogue data, wherein the dialogue node jump graph is used for declaring the jump relation between the key conversation nodes of the at least one key conversation node;
Determining the weight of each key conversation node according to the dialogue node jump graph;
Determining the second dialogue feature of each historical dialogue data according to the semantic feature of each second historical dialogue sub-data, the dialogue node jump graph and the weight of each key conversation node.
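A sketch of this second dialogue feature follows. The jump graph is built from consecutive key conversation nodes, and the node weight is taken as the node's normalized out-degree in that graph; both the weighting scheme and the weighted-sum fusion of semantic features are assumptions, since the embodiment only states that the weight is determined from the jump graph.

```python
from collections import Counter

def second_dialogue_feature(node_sequence, node_semantics):
    """node_sequence: key conversation nodes in the order they appear in one
    historical dialogue, e.g. ["greeting", "identity_check", "offer"].
    node_semantics: key conversation node -> semantic feature vector of its
    second historical dialogue sub-data (equal-length vectors assumed)."""
    # Dialogue node jump graph as directed edge counts between consecutive nodes.
    jumps = Counter(zip(node_sequence, node_sequence[1:]))
    # Node weight: share of all jumps leaving the node (one plausible reading
    # of "determining the weight ... according to the dialogue node jump graph").
    out_degree = Counter()
    for (src, _dst), count in jumps.items():
        out_degree[src] += count
    total = sum(out_degree.values()) or 1
    weights = {node: out_degree.get(node, 0) / total for node in node_sequence}
    # Fuse the per-node semantic vectors into one second dialogue feature.
    dim = len(next(iter(node_semantics.values())))
    fused = [0.0] * dim
    for node in set(node_sequence):
        for i, value in enumerate(node_semantics[node]):
            fused[i] += weights[node] * value
    return fused
```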
In an embodiment of the present invention, the at least one dimension may include: a dialogue end node dimension. On this basis, in terms of performing feature extraction on each historical dialogue data according to the at least one preset dimension, the feature extraction module 1102 is specifically configured to:
Determining a question-answer pair corresponding to the dialogue end node in each historical dialogue data as a first question-answer pair, wherein a question-answer pair consists of one question and one answer, the answer is used for replying to the question, and each historical dialogue data consists of a plurality of question-answer pairs;
and extracting the first question-answer pair and the n question-answer pairs immediately preceding the first question-answer pair from each historical dialogue data to obtain n+1 pairs of second question-answer pairs, wherein n is an integer greater than or equal to 1;
Respectively extracting features of each of n+1 pairs of second question-answer pairs to obtain n+1 question-answer features, wherein the n+1 question-answer features are in one-to-one correspondence with the n+1 pairs of second question-answer pairs;
Establishing a topological relation diagram according to n+1 question-answer features;
And determining a third dialogue characteristic of each historical dialogue data according to the n+1 question-answer characteristics and the topological relation diagram.
In the embodiment of the present invention, in establishing a topological relation diagram according to n+1 question-answer features, the feature extraction module 1102 is specifically configured to:
Performing n! random selections on the n+1 question-answer features, and combining the two question-answer features randomly selected each time to obtain n! feature groups, wherein the question-answer features randomly selected in any two selections are not completely the same, each feature group in the n! feature groups includes a first question-answer feature and a second question-answer feature, and the first question-answer feature is different from the second question-answer feature;
Determining correlation coefficients between the first question-answer feature and the second question-answer feature in each feature group respectively, so as to obtain n! correlation coefficients;
Taking the n+1 question-answer features as n+1 nodes, wherein the n+1 question-answer features are in one-to-one correspondence with the n+1 nodes;
Taking each correlation coefficient in the n! correlation coefficients as an edge between the two nodes corresponding to the two question-answer features in the feature group corresponding to that correlation coefficient, so as to obtain the topological relation diagram.
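For illustration, the sketch below builds such a topological relation diagram. It uses the Pearson correlation coefficient (statistics.correlation, Python 3.10+) as the correlation coefficient and enumerates every unordered pair of question-answer features instead of performing the repeated random selections; both choices are simplifying assumptions.

```python
from itertools import combinations
from statistics import correlation  # Pearson correlation coefficient (Python 3.10+)

def build_topology(qa_features):
    """qa_features: list of n+1 feature vectors, one per second question-answer pair.
    Returns (nodes, edges): node indices plus an edge dict keyed by node pairs."""
    nodes = list(range(len(qa_features)))
    edges = {}
    # The embodiment pairs features through repeated random selection; enumerating
    # every unordered pair is used here as a deterministic simplification.
    for i, j in combinations(nodes, 2):
        edges[(i, j)] = correlation(qa_features[i], qa_features[j])
    return nodes, edges
```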
In the embodiment of the present invention, in determining, according to a preset rule, a correspondence between each dialog feature in at least one dialog feature and each node in the decision tree model, to obtain an intent decision tree, a processing module 1103 is specifically configured to:
Grouping at least one dialog feature into a first set;
Performing a feature matching process on the root node in the decision tree model and the first set based on the improved performance metric of each dialogue feature in the first set to obtain a fourth dialogue feature corresponding to the root node, wherein the improved performance metric corresponding to the fourth dialogue feature is the largest, and the improved performance metric is determined by each dialogue feature corresponding to the improved performance metric and the root node in the decision tree model;
Removing the fourth dialogue features from the first set to obtain a new first set, and taking the child node corresponding to the root node as a new root node;
Executing a feature matching process on the new root node and the new first set to obtain a new fourth dialogue feature corresponding to the new root node;
and removing the new fourth dialogue features from the new first set until the number of dialogue features in the new first set is 0, and obtaining the intention decision tree.
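The feature matching process can be sketched as a greedy loop that binds the dialogue feature with the largest improved performance metric to the current node and then descends to the child node. The metric is passed in as a function because the embodiment does not fix a formula; an information-gain-style score is one plausible choice.

```python
def build_intent_decision_tree(dialogue_features, improved_metric):
    """dialogue_features: the 'first set' of candidate dialogue features.
    improved_metric(feature, node): scores pairing a feature with the current
    node; the formula is left open here."""
    assignments, node = [], "root"
    remaining = list(dialogue_features)
    while remaining:
        # Feature matching process: bind the feature with the largest improved
        # performance metric to the current node ...
        best = max(remaining, key=lambda feature: improved_metric(feature, node))
        assignments.append((node, best))
        remaining.remove(best)        # ... remove it from the first set ...
        node = ("child_of", node)     # ... and treat the child node as the new root.
    return assignments                # node -> dialogue feature correspondence
```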
In the embodiment of the invention, each node in the intention decision tree corresponds to a dialogue feature, each directed edge in the intention decision tree corresponds to a classification rule, and the intention decision tree comprises at least one decision tree branch, wherein the classification rule is determined by the dialogue feature corresponding to a father node in two nodes connected by the directed edge corresponding to the classification rule, and the decision tree branch is a branch from a root node to any one result node of the intention decision tree;
based on this, in determining the screening rule according to the intent decision tree, the processing module 1103 is specifically configured to:
For each decision tree branch in at least one decision tree branch, respectively determining dialogue characteristics corresponding to nodes contained in each decision tree branch and classification rules corresponding to directed edges contained in each decision tree branch;
Extracting dialogue characteristics corresponding to nodes contained in each decision tree branch and classification rules corresponding to directed edges contained in each decision tree branch according to the progressive sequence of the nodes contained in each decision tree branch and the directed edges contained in each decision tree branch to obtain at least one first screening rule, wherein the at least one first screening rule corresponds to the at least one decision tree branch one by one;
determining a first intention rate of each first screening rule in the at least one first screening rule respectively, and arranging the at least one first screening rule in descending order of first intention rate to obtain a first rule set;
performing equal frequency division on the historical dialog set according to the first rule set to obtain at least one box body;
Determining a second intention rate of each box in at least one box respectively;
and taking the first screening rule corresponding to the box body with the maximum second intention rate as the screening rule.
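A minimal sketch of turning decision tree branches into first screening rules and ranking them by first intention rate is given below; representing each classification rule as a predicate over a historical dialogue record and labelling records with has_intention are assumptions made for illustration.

```python
def rank_branch_rules(branches, historical_dialogue_set):
    """branches: decision tree branches, each a list of (dialogue_feature, test)
    pairs read from the root down to a result node; a test is a predicate over
    one historical dialogue record (an assumed representation).
    Returns the first rule set: (first intention rate, rule) in descending order."""
    ranked = []
    for branch in branches:
        # One first screening rule per branch: every classification rule on the
        # branch must hold, in the order the nodes appear.
        rule = [test for _feature, test in branch]
        matched = [d for d in historical_dialogue_set if all(t(d) for t in rule)]
        rate = (sum(d["has_intention"] for d in matched) / len(matched)) if matched else 0.0
        ranked.append((rate, rule))
    ranked.sort(key=lambda item: item[0], reverse=True)
    return ranked
```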
Referring to fig. 12, fig. 12 is a schematic structural diagram of an electronic device according to an embodiment of the present application. As shown in fig. 12, the electronic device 1200 includes a transceiver 1201, a processor 1202, and a memory 1203, which are connected to one another through a bus 1204. The memory 1203 is used for storing a computer program and data, and the data stored in the memory 1203 may be transferred to the processor 1202.
The processor 1202 is configured to read a computer program in the memory 1203 to perform the following operations:
Acquiring a historical dialogue set, wherein the historical dialogue set comprises at least one historical dialogue data, and each historical dialogue data in the at least one historical dialogue data is used for recording one complete dialogue content between a client and a voice robot;
According to at least one preset dimension, respectively carrying out feature extraction on each historical dialogue data to obtain at least one feature group, wherein the at least one feature group corresponds to the at least one historical dialogue data one by one, and each feature group in the at least one feature group comprises at least one dialogue feature;
Determining a corresponding relation between each dialogue feature in at least one dialogue feature and each node in the decision tree model according to a preset rule to obtain an intention decision tree;
Determining a screening rule according to the intention decision tree;
and acquiring real-time dialogue data, and switching the client corresponding to the real-time dialogue data to the manual customer service when the real-time dialogue data accords with the screening rule.
In an embodiment of the present invention, the at least one dimension may include: a dialogue round number dimension. On this basis, in terms of performing feature extraction on each historical dialogue data according to the at least one preset dimension, the processor 1202 is specifically configured to:
Extracting keywords from each historical dialogue data to obtain at least one keyword;
Dividing each history dialogue data according to at least one keyword to obtain at least one first history dialogue sub-data, wherein the at least one first history dialogue sub-data corresponds to the at least one keyword one by one;
determining a number of conversation rounds of each first historical conversation sub-data in the at least one first historical conversation sub-data, respectively;
Determining the average dialogue round number of the dialogue type corresponding to each first historical dialogue sub-data according to the keywords corresponding to each first historical dialogue sub-data;
And determining the first dialogue characteristic of each historical dialogue data according to the average dialogue round number of the dialogue type corresponding to each first historical dialogue sub-data and the dialogue round number of each first historical dialogue sub-data.
In an embodiment of the present invention, the at least one dimension may include: a key conversation node dimension. On this basis, in terms of performing feature extraction on each historical dialogue data according to the at least one preset dimension, the processor 1202 is specifically configured to:
Determining at least one key conversation node in each historical dialogue data according to the dialogue process information of each historical dialogue data;
Dividing each historical dialogue data according to the at least one key conversation node to obtain at least one second historical dialogue sub-data, wherein the at least one second historical dialogue sub-data corresponds to the at least one key conversation node one by one;
Determining semantic features of each second historical dialogue sub-data in the at least one second historical dialogue sub-data respectively;
Constructing a dialogue node jump graph according to the arrangement order of the at least one key conversation node in each historical dialogue data, wherein the dialogue node jump graph is used for declaring the jump relation between the key conversation nodes of the at least one key conversation node;
Determining the weight of each key conversation node according to the dialogue node jump graph;
Determining the second dialogue feature of each historical dialogue data according to the semantic feature of each second historical dialogue sub-data, the dialogue node jump graph and the weight of each key conversation node.
In an embodiment of the present invention, the at least one dimension may include: a dialogue end node dimension. On this basis, in terms of performing feature extraction on each historical dialogue data according to the at least one preset dimension, the processor 1202 is specifically configured to:
Determining a question-answer pair corresponding to the dialogue end node in each historical dialogue data as a first question-answer pair, wherein a question-answer pair consists of one question and one answer, the answer is used for replying to the question, and each historical dialogue data consists of a plurality of question-answer pairs;
and extracting the first question-answer pair and the n question-answer pairs immediately preceding the first question-answer pair from each historical dialogue data to obtain n+1 pairs of second question-answer pairs, wherein n is an integer greater than or equal to 1;
Respectively extracting features of each of n+1 pairs of second question-answer pairs to obtain n+1 question-answer features, wherein the n+1 question-answer features are in one-to-one correspondence with the n+1 pairs of second question-answer pairs;
Establishing a topological relation diagram according to n+1 question-answer features;
And determining a third dialogue characteristic of each historical dialogue data according to the n+1 question-answer characteristics and the topological relation diagram.
In an embodiment of the present invention, the processor 1202 is specifically configured to perform the following operations in establishing a topological graph according to the n+1 question-answer features:
Performing n! random selections on the n+1 question-answer features, and combining the two question-answer features randomly selected each time to obtain n! feature groups, wherein the question-answer features randomly selected in any two selections are not completely the same, each feature group in the n! feature groups includes a first question-answer feature and a second question-answer feature, and the first question-answer feature is different from the second question-answer feature;
Determining correlation coefficients between the first question-answer feature and the second question-answer feature in each feature group respectively, so as to obtain n! correlation coefficients;
Taking the n+1 question-answer features as n+1 nodes, wherein the n+1 question-answer features are in one-to-one correspondence with the n+1 nodes;
Taking each correlation coefficient in the n! correlation coefficients as an edge between the two nodes corresponding to the two question-answer features in the feature group corresponding to that correlation coefficient, so as to obtain the topological relation diagram.
In an embodiment of the present invention, in determining a correspondence between each dialog feature in the at least one dialog feature and each node in the decision tree model according to a preset rule, the processor 1202 is specifically configured to perform the following operations:
Grouping at least one dialog feature into a first set;
Performing a feature matching process on the root node in the decision tree model and the first set based on the improved performance metric of each dialogue feature in the first set to obtain a fourth dialogue feature corresponding to the root node, wherein the improved performance metric corresponding to the fourth dialogue feature is the largest, and the improved performance metric is determined by each dialogue feature corresponding to the improved performance metric and the root node in the decision tree model;
Removing the fourth dialogue features from the first set to obtain a new first set, and taking the child node corresponding to the root node as a new root node;
Executing a feature matching process on the new root node and the new first set to obtain a new fourth dialogue feature corresponding to the new root node;
and removing the new fourth dialogue features from the new first set until the number of dialogue features in the new first set is 0, and obtaining the intention decision tree.
In the embodiment of the invention, each node in the intention decision tree corresponds to a dialogue feature, each directed edge in the intention decision tree corresponds to a classification rule, and the intention decision tree comprises at least one decision tree branch, wherein the classification rule is determined by the dialogue feature corresponding to a father node in two nodes connected by the directed edge corresponding to the classification rule, and the decision tree branch is a branch from a root node to any one result node of the intention decision tree;
based on this, the processor 1202, in determining the screening rules from the intent decision tree, is specifically configured to:
For each decision tree branch in at least one decision tree branch, respectively determining dialogue characteristics corresponding to nodes contained in each decision tree branch and classification rules corresponding to directed edges contained in each decision tree branch;
Extracting dialogue characteristics corresponding to nodes contained in each decision tree branch and classification rules corresponding to directed edges contained in each decision tree branch according to the progressive sequence of the nodes contained in each decision tree branch and the directed edges contained in each decision tree branch to obtain at least one first screening rule, wherein the at least one first screening rule corresponds to the at least one decision tree branch one by one;
determining a first intention rate of each first screening rule in the at least one first screening rule respectively, and arranging the at least one first screening rule in descending order of first intention rate to obtain a first rule set;
performing equal frequency division on the historical dialog set according to the first rule set to obtain at least one box body;
Determining a second intention rate of each box in at least one box respectively;
and taking the first screening rule corresponding to the box body with the maximum second intention rate as the screening rule.
It should be understood that the artificial-intelligence-based client real-time dialogue transfer device in the present application may include smartphones (such as Android phones, iOS phones, Windows Phone phones, etc.), tablet computers, palmtop computers, notebook computers, mobile Internet devices (MID), robots, wearable devices, and the like. The client real-time dialogue transfer devices listed above are merely examples and are not exhaustive; the present application includes but is not limited to them. In practical applications, the client real-time dialogue transfer device may further include an intelligent vehicle-mounted terminal, a computer device, and the like.
From the above description of the embodiments, it will be apparent to those skilled in the art that the present invention may be implemented by software in combination with a necessary general hardware platform. Based on this understanding, the part of the technical solution of the present invention that contributes over the prior art may be embodied in the form of a software product. The software product may be stored in a storage medium such as a ROM/RAM, a magnetic disk, or an optical disk, and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform the methods described in the embodiments, or in parts of the embodiments, of the present invention.
Accordingly, embodiments of the present application also provide a computer readable storage medium storing a computer program for execution by a processor to perform some or all of the steps of any of the artificial intelligence based client real-time conversation transfer methods described in the method embodiments above. For example, the storage medium may include a hard disk, a floppy disk, an optical disk, a magnetic tape, a magnetic disk, a flash memory, etc.
Embodiments of the present application also provide a computer program product comprising a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps of any of the artificial intelligence based client real-time conversation transfer methods described in the method embodiments above.
It should be noted that, for simplicity of description, the foregoing method embodiments are all described as a series of acts, but it should be understood by those skilled in the art that the present application is not limited by the order of acts described, as some steps may be performed in other orders or concurrently in accordance with the present application. Further, those skilled in the art will also appreciate that the embodiments described in the specification are alternative embodiments, and that the acts and modules involved are not necessarily required for the present application.
In the foregoing embodiments, the descriptions of the embodiments are focused on, and for those portions of one embodiment that are not described in detail, reference may be made to the related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, such as the division of the units, merely a logical function division, and there may be additional divisions when actually implemented, such as multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, or may be in electrical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units described above may be implemented either in hardware or in software program modules.
If the integrated units are implemented in the form of software program modules and sold or used as a stand-alone product, they may be stored in a computer-readable memory. Based on this understanding, the technical solution of the present application, in essence, or the part that contributes over the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned memory includes various media capable of storing program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Those of ordinary skill in the art will appreciate that all or a portion of the steps in the various methods of the above embodiments may be implemented by a program that instructs associated hardware, and the program may be stored in a computer readable memory, and the memory may include: flash disk, read-Only Memory (ROM), random access Memory (Random Access Memory, RAM), magnetic disk or optical disk.
The embodiments of the present application have been described in detail above, and specific examples have been used herein to explain the principles and implementations of the application; the above description of the embodiments is only intended to help understand the method and core ideas of the application. Meanwhile, those skilled in the art may make changes to the specific implementations and the scope of application according to the ideas of the application. In summary, the contents of this specification should not be construed as limiting the present application.
Claims (6)
1. A method for forwarding a customer real-time conversation based on artificial intelligence, the method comprising:
Acquiring a historical dialog set, wherein the historical dialog set comprises at least one historical dialog data, and each historical dialog data in the at least one historical dialog data is used for recording one complete dialog content between a client and a voice robot;
According to at least one preset dimension, extracting features of each historical dialogue data to obtain at least one feature group, wherein the at least one feature group corresponds to the at least one historical dialogue data one by one, and each feature group in the at least one feature group comprises at least one dialogue feature;
Determining the corresponding relation between each dialogue feature in the at least one dialogue feature and each node in the decision tree model according to a preset rule to obtain an intention decision tree;
determining a screening rule according to the intent decision tree;
Acquiring real-time dialogue data, and switching a client corresponding to the real-time dialogue data to a manual customer service when the real-time dialogue data accords with the screening rule;
Wherein the at least one dimension comprises: a dialogue end node dimension; and the performing feature extraction on each historical dialogue data according to at least one preset dimension respectively comprises:
Determining a question-answer pair corresponding to a dialogue ending node in each historical dialogue data as a first question-answer pair, wherein the question-answer pair consists of one question sentence and one answer sentence, the answer sentence is used for replying to the question sentence, and each historical dialogue data consists of a plurality of question-answer pairs;
extracting the first question-answer pair and the first n question-answer pairs of the first question-answer pair from each historical dialogue data to obtain n+1 pairs of second question-answer pairs, wherein n is an integer greater than or equal to 1;
Extracting features of each of the n+1 pairs of second question-answer pairs to obtain n+1 question-answer features, wherein the n+1 question-answer features are in one-to-one correspondence with the n+1 pairs of second question-answer pairs;
Carrying out n! random selections on the n+1 question-answer features, and combining the two question-answer features randomly selected each time to obtain n! feature groups, wherein the question-answer features randomly selected in any two selections are not completely the same, each feature group in the n! feature groups comprises a first question-answer feature and a second question-answer feature, and the first question-answer feature is different from the second question-answer feature;
Respectively determining correlation coefficients between the first question-answer feature and the second question-answer feature in each feature group to obtain n! correlation coefficients;
taking the n+1 question-answer features as n+1 nodes, wherein the n+1 question-answer features are in one-to-one correspondence with the n+1 nodes;
Taking each correlation coefficient in the n! correlation coefficients as an edge between the two nodes corresponding to the two question-answer features in the feature group corresponding to that correlation coefficient to obtain a topological relation diagram;
determining a third dialogue feature of each historical dialogue data according to the n+1 question-answer features and the topological relation diagram;
determining a correspondence between each dialogue feature in the at least one dialogue feature and each node in the decision tree model according to a preset rule to obtain an intention decision tree, wherein the method comprises the following steps:
grouping the at least one dialog feature into a first set;
performing a feature matching process on a root node in the decision tree model and the first set based on an improved performance metric of each dialogue feature in the first set to obtain a fourth dialogue feature corresponding to the root node, wherein the improved performance metric corresponding to the fourth dialogue feature is the largest, and the improved performance metric is determined by the each dialogue feature corresponding to the improved performance metric and the root node in the decision tree model;
Removing the fourth dialogue features from the first set to obtain a new first set, and taking the child node corresponding to the root node as a new root node;
Executing the feature matching process on the new root node and the new first set to obtain a new fourth dialogue feature corresponding to the new root node;
Removing the new fourth dialogue features from the new first set until the number of dialogue features in the new first set is 0, and obtaining the intention decision tree;
Each node in the intention decision tree corresponds to a dialogue feature, each directed edge in the intention decision tree corresponds to a classification rule, and the intention decision tree comprises at least one decision tree branch, wherein the classification rule is determined by the dialogue feature corresponding to a father node in two nodes connected by the directed edge corresponding to the classification rule, and the decision tree branch is a branch from a root node to any one result node of the intention decision tree;
the determining a screening rule according to the intent decision tree comprises the following steps:
For each decision tree branch in the at least one decision tree branch, respectively determining dialogue characteristics corresponding to nodes contained in each decision tree branch and classification rules corresponding to directed edges contained in each decision tree branch;
Extracting dialogue characteristics corresponding to nodes contained in each decision tree branch and classification rules corresponding to directed edges contained in each decision tree branch according to the progressive sequence of the nodes contained in each decision tree branch and the directed edges contained in each decision tree branch to obtain at least one first screening rule, wherein the at least one first screening rule corresponds to the at least one decision tree branch one by one;
determining a first intention rate of each first screening rule in the at least one first screening rule respectively, and arranging the at least one first screening rule in descending order of first intention rate to obtain a first rule set;
Performing equal frequency division on the historical dialog set according to a first rule set to obtain at least one box body;
determining a second intention rate of each box in the at least one box respectively;
And taking the first screening rule corresponding to the box body with the maximum second intention rate as the screening rule.
2. The method of claim 1, wherein the at least one dimension comprises: a dialogue round number dimension; and the performing feature extraction on each historical dialogue data according to at least one preset dimension respectively comprises:
extracting keywords from each historical dialogue data to obtain at least one keyword;
Dividing each historical dialogue data according to the at least one keyword to obtain at least one first historical dialogue sub-data, wherein the at least one first historical dialogue sub-data corresponds to the at least one keyword one by one;
determining a number of conversation rounds for each of the at least one first historical conversation sub-data, respectively;
determining the average number of dialogue rounds of the dialogue type corresponding to each first historical dialogue sub-data according to the keywords corresponding to each first historical dialogue sub-data;
and determining the first dialogue characteristic of each historical dialogue data according to the average dialogue round number of the dialogue type corresponding to each first historical dialogue sub-data and the dialogue round number of each first historical dialogue sub-data.
3. The method of claim 1, wherein the at least one dimension comprises: a key conversation node dimension; and the performing feature extraction on each historical dialogue data according to at least one preset dimension respectively comprises:
Determining at least one key conversation node in each historical dialogue data according to the dialogue process information of each historical dialogue data;
dividing each historical dialogue data according to the at least one key conversation node to obtain at least one second historical dialogue sub-data, wherein the at least one second historical dialogue sub-data corresponds to the at least one key conversation node one by one;
determining semantic features of each of the at least one second historical dialog sub-data separately;
Constructing a dialogue node jump graph according to the arrangement sequence of the at least one key conversation node in each historical dialogue data, wherein the dialogue node jump graph is used for declaring the jump relation between the key conversation nodes of the at least one key conversation node;
Determining the weight of each key conversation node according to the conversation node jump diagram;
and determining the second dialogue feature of each historical dialogue data according to the semantic features of each second historical dialogue sub-data, the dialogue node jump graph and the weight of each key conversation node.
4. An artificial intelligence based customer real-time conversation transfer apparatus, the apparatus comprising:
The acquisition module is configured to acquire a historical dialogue set, wherein the historical dialogue set comprises at least one historical dialogue data, and each historical dialogue data in the at least one historical dialogue data is used for recording one complete dialogue content between a client and the voice robot;
The feature extraction module is used for carrying out feature extraction on each historical dialogue data according to at least one preset dimension to obtain at least one feature group, wherein the at least one feature group corresponds to the at least one historical dialogue data one by one, and each feature group in the at least one feature group comprises at least one dialogue feature;
the processing module is used for determining the corresponding relation between each dialogue feature in the at least one dialogue feature and each node in the decision tree model according to a preset rule to obtain an intention decision tree, and determining a screening rule according to the intention decision tree;
The screening module is used for acquiring real-time dialogue data, and switching the client corresponding to the real-time dialogue data to the manual customer service when the real-time dialogue data accords with the screening rule;
wherein the at least one dimension comprises: a dialogue end node dimension; and in terms of performing feature extraction on each historical dialogue data according to at least one preset dimension respectively, the feature extraction module is configured to:
Determining a question-answer pair corresponding to a dialogue ending node in each historical dialogue data as a first question-answer pair, wherein the question-answer pair consists of one question sentence and one answer sentence, the answer sentence is used for replying to the question sentence, and each historical dialogue data consists of a plurality of question-answer pairs;
extracting the first question-answer pair and the first n question-answer pairs of the first question-answer pair from each historical dialogue data to obtain n+1 pairs of second question-answer pairs, wherein n is an integer greater than or equal to 1;
Extracting features of each of the n+1 pairs of second question-answer pairs to obtain n+1 question-answer features, wherein the n+1 question-answer features are in one-to-one correspondence with the n+1 pairs of second question-answer pairs;
Carrying out n! random selections on the n+1 question-answer features, and combining the two question-answer features randomly selected each time to obtain n! feature groups, wherein the question-answer features randomly selected in any two selections are not completely the same, each feature group in the n! feature groups comprises a first question-answer feature and a second question-answer feature, and the first question-answer feature is different from the second question-answer feature;
Respectively determining correlation coefficients between the first question-answer feature and the second question-answer feature in each feature group to obtain n! correlation coefficients;
taking the n+1 question-answer features as n+1 nodes, wherein the n+1 question-answer features are in one-to-one correspondence with the n+1 nodes;
Taking each correlation coefficient in the n! correlation coefficients as an edge between the two nodes corresponding to the two question-answer features in the feature group corresponding to that correlation coefficient to obtain a topological relation diagram;
determining a third dialogue feature of each historical dialogue data according to the n+1 question-answer features and the topological relation diagram;
And in terms of determining the correspondence between each dialogue feature in the at least one dialogue feature and each node in the decision tree model according to a preset rule to obtain an intention decision tree, the processing module is configured to:
grouping the at least one dialog feature into a first set;
performing a feature matching process on a root node in the decision tree model and the first set based on an improved performance metric of each dialogue feature in the first set to obtain a fourth dialogue feature corresponding to the root node, wherein the improved performance metric corresponding to the fourth dialogue feature is the largest, and the improved performance metric is determined by the each dialogue feature corresponding to the improved performance metric and the root node in the decision tree model;
Removing the fourth dialogue features from the first set to obtain a new first set, and taking the child node corresponding to the root node as a new root node;
Executing the feature matching process on the new root node and the new first set to obtain a new fourth dialogue feature corresponding to the new root node;
Removing the new fourth dialogue features from the new first set until the number of dialogue features in the new first set is 0, and obtaining the intention decision tree;
Each node in the intention decision tree corresponds to a dialogue feature, each directed edge in the intention decision tree corresponds to a classification rule, and the intention decision tree comprises at least one decision tree branch, wherein the classification rule is determined by the dialogue feature corresponding to a father node in two nodes connected by the directed edge corresponding to the classification rule, and the decision tree branch is a branch from a root node to any one result node of the intention decision tree;
in the aspect of determining the screening rule according to the intent decision tree, the processing module is configured to:
For each decision tree branch in the at least one decision tree branch, respectively determining dialogue characteristics corresponding to nodes contained in each decision tree branch and classification rules corresponding to directed edges contained in each decision tree branch;
Extracting dialogue characteristics corresponding to nodes contained in each decision tree branch and classification rules corresponding to directed edges contained in each decision tree branch according to the progressive sequence of the nodes contained in each decision tree branch and the directed edges contained in each decision tree branch to obtain at least one first screening rule, wherein the at least one first screening rule corresponds to the at least one decision tree branch one by one;
determining a first intention rate of each first screening rule in the at least one first screening rule respectively, and arranging the at least one first screening rule in descending order of first intention rate to obtain a first rule set;
Performing equal frequency division on the historical dialog set according to a first rule set to obtain at least one box body;
determining a second intention rate of each box in the at least one box respectively;
And taking the first screening rule corresponding to the box body with the maximum second intention rate as the screening rule.
5. An electronic device comprising a processor, a memory, a communication interface, and one or more programs, wherein the one or more programs are stored in the memory and configured for execution by the processor, the one or more programs comprising instructions for performing the steps of the method of any of claims 1-3.
6. A computer readable storage medium, characterized in that the computer readable storage medium stores a computer program, which is executed by a processor to implement the method of any of claims 1-3.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111156648.4A CN113886547B (en) | 2021-09-29 | 2021-09-29 | Client real-time dialogue switching method and device based on artificial intelligence and electronic equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111156648.4A CN113886547B (en) | 2021-09-29 | 2021-09-29 | Client real-time dialogue switching method and device based on artificial intelligence and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113886547A CN113886547A (en) | 2022-01-04 |
CN113886547B true CN113886547B (en) | 2024-06-28 |
Family
ID=79004576
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111156648.4A Active CN113886547B (en) | 2021-09-29 | 2021-09-29 | Client real-time dialogue switching method and device based on artificial intelligence and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113886547B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114548846A (en) * | 2022-04-28 | 2022-05-27 | 中信建投证券股份有限公司 | Man-machine task allocation decision method and device and electronic equipment |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105657201A (en) * | 2016-01-26 | 2016-06-08 | 北京京东尚科信息技术有限公司 | Method and system for processing call based on decision tree model |
CN112329843A (en) * | 2020-11-03 | 2021-02-05 | 中国平安人寿保险股份有限公司 | Call data processing method, device, equipment and storage medium based on decision tree |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107590159A (en) * | 2016-07-08 | 2018-01-16 | 阿里巴巴集团控股有限公司 | The method and apparatus that robot customer service turns artificial customer service |
US20180341870A1 (en) * | 2017-05-23 | 2018-11-29 | International Business Machines Corporation | Managing Indecisive Responses During a Decision Tree Based User Dialog Session |
US11023787B2 (en) * | 2018-10-19 | 2021-06-01 | Oracle International Corporation | Method, system and program for generating decision trees for chatbots dialog flow capabilities |
CN113268579B (en) * | 2021-06-24 | 2023-12-08 | 中国平安人寿保险股份有限公司 | Dialogue content category identification method, device, computer equipment and storage medium |
- 2021-09-29: CN application CN202111156648.4A, patent CN113886547B, status Active
Also Published As
Publication number | Publication date |
---|---|
CN113886547A (en) | 2022-01-04 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||