CN113987202A - Knowledge graph-based interactive telephone calling method and device - Google Patents


Info

Publication number
CN113987202A
Authority
CN
China
Prior art keywords
information
intention
entity
knowledge graph
text information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111250980.7A
Other languages
Chinese (zh)
Inventor
许旭
熊磊
唐超
周海洋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Tongqu Information Technology Co ltd
Original Assignee
Shanghai Tongqu Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Tongqu Information Technology Co ltd filed Critical Shanghai Tongqu Information Technology Co ltd
Priority to CN202111250980.7A
Publication of CN113987202A
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/36 Creation of semantic tools, e.g. ontology or thesauri
    • G06F16/367 Ontology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/20 Natural language analysis
    • G06F40/232 Orthographic correction, e.g. spell checking or vowelisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/20 Natural language analysis
    • G06F40/279 Recognition of textual entities
    • G06F40/289 Phrasal analysis, e.g. finite state techniques or chunking
    • G06F40/295 Named entity recognition
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/26 Speech to text systems
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M3/00 Automatic or semi-automatic exchanges
    • H04M3/42 Systems providing special services or facilities to subscribers
    • H04M3/42025 Calling or Called party identification service
    • H04M3/42034 Calling party identification service

Abstract

The invention provides an interactive telephone calling method and device based on a knowledge graph, wherein the interactive telephone calling method based on the knowledge graph comprises the following steps: S1: converting the received voice information of the user into text information; S2: identifying intention information corresponding to the text information and extracting entity information corresponding to the text information; S3: storing the intention information and the corresponding entity information in a knowledge graph to form an updated knowledge graph; S4: extracting based on the intention information and the entity information in the updated knowledge graph to obtain a corresponding relation between the intention information and the entity information; S5: and acquiring intention information corresponding to the new text information by utilizing the corresponding relation between the intention information and the entity information based on the newly acquired new text information of the user.

Description

Knowledge graph-based interactive telephone calling method and device
Technical Field
The invention relates to the technical field of computers, in particular to an interactive telephone calling method and device based on a knowledge graph.
Background
An intelligent incoming-call system is a new-generation intelligent product built on the integration of technologies such as speech recognition, natural language processing, text-to-speech and internet telephony, and is characterized by prompt response to incoming calls, interactive capability, high service efficiency and the like. It is widely applied in fields such as customer service. Most existing intelligent incoming-call products on the market still rely on the idea of task-oriented dialogue for multi-round conversational interaction, so they have certain limitations and the experience still has room for improvement; moreover, because converting speech to text can distort the user's intention, the intention recognition accuracy of existing intelligent incoming-call systems is not very high.
In order to overcome the above drawbacks of the prior art, it is necessary to provide a knowledge-graph-based interactive telephone calling method and apparatus.
Disclosure of Invention
In view of the above, the present invention provides a knowledge-graph-based interactive telephone calling method and apparatus, so as to at least partially solve the problems in the prior art or provide an alternative knowledge-graph-based interactive telephone calling method.
In order to achieve the above object, a first aspect of the present invention provides a knowledge-graph-based interactive telephone calling method, wherein the method comprises:
S1: converting the received voice information of the user into text information;
S2: identifying intention information corresponding to the text information and extracting entity information corresponding to the text information;
S3: storing the intention information and the corresponding entity information in a knowledge graph to form an updated knowledge graph;
S4: extracting based on the intention information and the entity information in the updated knowledge graph to obtain a corresponding relation between the intention information and the entity information;
S5: and acquiring intention information corresponding to the new text information by utilizing the corresponding relation between the intention information and the entity information based on the newly acquired new text information of the user.
In the knowledge-graph-based interactive telephone calling method as described above, step S1 includes:
S11: converting the received voice information of the user into initial text information;
S12: performing error correction on the initial text information to turn the initial text information into the text information.
In the knowledge-graph-based interactive telephone calling method as described above, identifying the intention information corresponding to the text information includes:
performing one or more preprocessing operations of word segmentation, stop-word removal and synonym rewriting on the text information;
and identifying the preprocessed text information to obtain the intention information corresponding to the text information.
In the knowledge-graph-based interactive telephone calling method as described above, when the intention information of the user can be obtained based on the extracted entity information, the intention information and the corresponding entity information are stored in the knowledge graph to form the updated knowledge graph.
In the knowledge-graph-based interactive telephone calling method as described above, when the intention information of the user cannot be obtained based on the extracted entity information, the real intention of the current round is determined from the combination of the entity information and intention information already in the knowledge graph together with the entity information and intention information identified in the current round, so as to update the knowledge graph.
In the knowledge-graph-based interactive telephone calling method as described above, the extracting based on the intention information and the entity information in the updated knowledge graph comprises: unifying the entity information corresponding to intention information when that intention information in the knowledge graph is synonymous.
A second aspect of the present invention provides a knowledge-graph-based interactive telephone call-in device, comprising:
the conversion module is used for converting the received voice information of the user into text information;
the identification and extraction module is used for identifying intention information corresponding to the text information and extracting entity information corresponding to the text information;
the storage module is used for storing the intention information and the corresponding entity information in a knowledge graph to form an updated knowledge graph;
the extraction module is used for extracting based on the intention information and the entity information in the updated knowledge graph to obtain the corresponding relation between the intention information and the entity information;
and the obtaining module is used for obtaining the intention information corresponding to the new text information by utilizing the corresponding relation between the intention information and the entity information based on the newly obtained new text information of the user.
In the knowledge-graph-based interactive telephone call-in device as described above, the conversion module is used for converting the received voice information of the user into initial text information, and for correcting the initial text information to turn it into the text information.
A third aspect of the present invention provides a terminal device, comprising a memory, a processor and a computer program stored in the memory and operable on the processor, wherein the processor implements the steps of the knowledge-graph-based interactive telephone calling method as described above when executing the computer program.
A fourth aspect of the present invention provides a computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the steps of the knowledge-graph-based interactive telephone calling method as described above.
The features mentioned above can be combined in various suitable ways or replaced by equivalent features as long as the object of the invention is achieved.
Drawings
FIG. 1 is a flow chart of a knowledge-graph-based interactive telephone call-in method according to an embodiment of the present invention;
FIG. 2 is a flow chart of a knowledge-graph-based interactive telephone call-in method according to an embodiment of the present invention;
FIG. 3 is a flow chart of an intent recognition algorithm according to an embodiment of the present invention;
FIG. 4 is a flowchart of an entity identification algorithm according to an embodiment of the present invention;
FIG. 5 is a flow chart of an ASR correction interaction according to an embodiment of the present invention;
FIG. 6 is a flow chart of session management according to an embodiment of the present invention;
FIG. 7 is a flow diagram of knowledge extraction according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of a structure of an interactive phone call-in device based on knowledge-graph according to an embodiment of the present invention; and
fig. 9 is a schematic structural diagram of a terminal device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to specific embodiments and the accompanying drawings.
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are illustrative only and should not be construed as limiting the invention.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element or intervening elements may also be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. As used herein, the term "and/or" includes all or any element and all combinations of one or more of the associated listed items.
It should be noted that all expressions using "first" and "second" in the embodiments of the present invention are used to distinguish between two entities or parameters that have the same name but are not identical; "first" and "second" are merely for convenience of description and should not be construed as limiting the embodiments of the present invention, which will not be repeated in the following embodiments.
The technical solution of the embodiments of the present invention is described in detail below with reference to the accompanying drawings.
As shown in fig. 1, the present invention provides a method for calling an interactive phone based on a knowledge graph, wherein the method for calling an interactive phone based on a knowledge graph of the present invention comprises:
s1: and converting the received voice information of the user into text information.
In one embodiment, step S1 includes: S11: converting the received voice information of the user into initial text information; S12: performing error correction on the initial text information to turn the initial text information into the text information.
Specifically, ASR correction is used to judge the correctness of the data after speech-to-text conversion (ASR) and to rewrite incorrect text into the correct expression. For example: "where your king address is still in" is error-corrected to "where your child king address is".
Through speech-to-text conversion, the user's speech is transcribed into text data, and the ASR correction service uses a natural language processing algorithm to predict and rewrite the transcribed data into correct text data.
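A minimal sketch of this correction step is given below in Python. The confusion table, function name and example phrases are illustrative assumptions; the patent itself performs the prediction and rewriting with a natural language processing model rather than a lookup table.

```python
# Minimal sketch of the S11/S12 error-correction step described above.
# The patent predicts rewrites with an NLP model; a hand-written confusion
# table stands in for that model here, so all names and rules are assumptions.
ASR_CONFUSIONS = {
    "king address": "home address",  # homophone-style transcription error (assumed example)
}

def correct_asr_text(initial_text: str) -> str:
    """S12: rewrite the initial text information into corrected text information."""
    corrected = initial_text
    for wrong, right in ASR_CONFUSIONS.items():
        corrected = corrected.replace(wrong, right)
    return corrected

if __name__ == "__main__":
    print(correct_asr_text("where your king address is"))  # -> where your home address is
```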
S2: and identifying intention information corresponding to the text information and extracting entity information corresponding to the text information.
In a specific embodiment, identifying the intention information corresponding to the text information includes: performing one or more preprocessing operations of word segmentation, stop-word removal and synonym rewriting on the text information; and identifying the preprocessed text information to obtain the intention information corresponding to the text information.
Specifically, intention recognition is used to predict the intention of the corrected text data. It is implemented by combining rules with machine-learning prediction, and fastText is specifically used as the classification model, which ensures both controllability and generalization ability. For example, "when will the order refund arrive" indicates that the user's intention is "order refund timeliness".
Entity recognition is used to further extract entities from the user's description after intention recognition. An entity lexicon is combined with a machine-learning model to perform the extraction, and BiLSTM + CRF is specifically used as the sequence labeling model, which ensures accuracy and recall. For example, when the user describes "when will the order refund arrive", the entities are "order" and "refund timeliness".
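The following Python sketch illustrates this division of labor, with simple keyword rules standing in for the fastText classifier and a lexicon lookup standing in for the BiLSTM + CRF sequence labeler; the intent label, keyword lists and lexicon entries are assumptions made for illustration only.

```python
# Illustrative stand-in for step S2: keyword rules replace the fastText
# intent classifier and a lexicon lookup replaces the BiLSTM + CRF labeler.
from typing import List, Optional, Tuple

INTENT_KEYWORDS = {
    "order_refund_timeliness": ["refund", "when", "arrive"],  # assumed intent label
}
ENTITY_LEXICON = {"order": "business entity", "refund timeliness": "semantic entity"}

def recognize_intent(text: str) -> Optional[str]:
    """Keyword-rule stand-in for the classification model."""
    for intent, keywords in INTENT_KEYWORDS.items():
        if all(word in text for word in keywords):
            return intent
    return None

def extract_entities(text: str) -> List[Tuple[str, str]]:
    """Lexicon-lookup stand-in for the sequence labeling model."""
    return [(entity, kind) for entity, kind in ENTITY_LEXICON.items() if entity in text]

utterance = "when will the order refund arrive"
print(recognize_intent(utterance))   # -> order_refund_timeliness
print(extract_entities(utterance))   # -> [('order', 'business entity')]
```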
S3: storing the intention information and the corresponding entity information in a knowledge graph to form an updated knowledge graph.
In a specific embodiment, when the intention information of the user can be obtained based on the extracted entity information, the intention information and the corresponding entity information are stored in the knowledge graph to form an updated knowledge graph. Further, when the intention information of the user cannot be obtained based on the extracted entity information, the real intention of the current round is determined from the combination of the entity information and intention information in the knowledge graph together with the entity information and intention information identified in the current round, so as to update the knowledge graph.
Specifically, intention recognition needs to predict the corrected text data correctly to obtain the true intention of the user, and entity extraction is performed on the error-corrected text data to obtain the entity data expressed by the user.
The graph engine is used for storing the extracted entity and relationship data and providing high-performance query services.
The system analyzes the user's expression, and the dialogue management system is responsible for storing the recognized intentions and entities of the user's dialogue, predicting the user's next dialogue, and assembling and returning the reply.
S4: and extracting the intention information and the entity information based on the updated knowledge graph to obtain the corresponding relation between the intention information and the entity information.
In a particular embodiment, knowledge extraction is used to extract entities and relationships in unstructured data based on rules and a business thesaurus. For example: the global purchase order refund time to account is extracted as global purchase order and refund time efficiency according to the inquiry method. Wherein, the global purchase order entity and the order entity are in an up-and-down relationship, and the refund aging entity and the refund entity are in an attribute relationship.
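The sketch below shows one possible way to hold such extracted facts as triples in a small in-memory graph and to read back the correspondence between intention information and entity information (steps S3 and S4); the triple schema and relation names are assumptions, not the patented storage format.

```python
# Sketch of S3/S4: store recognized intention/entity pairs as triples in a
# tiny in-memory "knowledge graph" and read the correspondence back out.
from collections import defaultdict

graph = set()  # set of (head, relation, tail) triples

def update_graph(intent: str, entities: list) -> None:
    """S3: store the intention and its entities to form an updated graph."""
    for entity in entities:
        graph.add((intent, "has_entity", entity))

def intent_entity_correspondence() -> dict:
    """S4: extract the correspondence between intention and entity information."""
    mapping = defaultdict(set)
    for head, relation, tail in graph:
        if relation == "has_entity":
            mapping[head].add(tail)
    return dict(mapping)

update_graph("order_refund_timeliness", ["global purchase order", "refund timeliness"])
graph.add(("global purchase order", "is_a", "order"))        # hypernym-hyponym relation
graph.add(("refund timeliness", "attribute_of", "refund"))   # attribute relation
print(intent_entity_correspondence())
```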
S5: and acquiring intention information corresponding to the new text information by using the corresponding relation between the intention information and the entity information based on the newly acquired new text information of the user.
In a specific embodiment, predicting the user's next utterance depends on structuring the existing knowledge base: it needs to be extracted into concrete entity and relationship data, this data needs to be stored and managed efficiently, and the graph engine provides the underlying services.
The knowledge-graph-based interactive telephone call-in method realizes multi-round interaction in an intelligent incoming-call scenario by combining technologies such as structured knowledge extraction, intention recognition, dialogue management and knowledge reasoning, which improves the user experience and achieves a better effect.
A specific embodiment of the method for interactive phone call-in based on knowledge-graph of the present invention will now be described in detail with reference to fig. 2 to 7 for clarity of the invention, which is not intended to limit the invention.
Specifically, in one embodiment, the knowledge-graph-based interactive telephone calling method comprises the following steps:
and the call center provides basic services of the network telephone, including number resource management, line management and the like.
Speech recognition for converting a user's speech utterance into text data. Further, speech recognition converts the speech expression of the user on the telephone into text data.
Specifically, ASR error correction needs to identify transcription errors in the text data and provide correct replacements and rewrites, while ensuring a certain accuracy and recall rate. That is, ASR correction corrects errors in the text data produced by speech transcription and rewrites it into correct data.
And (4) intention recognition, wherein each sentence of description of the user can be regarded as an intention, and the intention recognition is to recognize the intention of the description of the user by utilizing a natural language processing algorithm and correctly classify the description.
Specifically, after some preprocessing such as word segmentation and synonym rewriting, intention recognition uses a machine-learning algorithm to predict the user's expression and return the user's intention label, and a high accuracy rate needs to be ensured.
And (3) entity identification, wherein the description of the user comprises various entity data, and the entity identification extracts correct and complete entity data information aiming at the description of the user.
Specifically, entity recognition is performed on the corrected text data together with the result of intention recognition, and further entity extraction is carried out to recognize the entity data in the user's expression. Entity recognition needs to resolve entity ambiguity and guarantee accuracy, because its result affects the subsequent tasks of knowledge reasoning and contextual interaction.
And conversation management, namely, storing and managing data generated by the user in a conversation process and carrying out reasoning and prediction on the context information.
Specifically, dialogue management records the user's intentions and recognized entity data from a single conversation, and obtains the content of the next round of interactive dialogue by querying the graph engine with the entity data through knowledge reasoning and completion.
The graph engine stores the entity data and relationship data produced after the data has been structured; the self-developed graph engine can efficiently query and reason over the entities and relationships and provides basic data services for intelligent incoming calls.
In particular, the graph engine provides basic services for entity and relationship storage, and needs to provide highly available services.
And speech synthesis for converting the text data into speech data.
Specifically, text-to-speech converts text data into human-like speech that is returned to the user.
And (4) knowledge extraction, namely extracting unstructured text data in the customer service knowledge base into structured entity and relationship data.
Specifically, knowledge extraction uses a machine-learning algorithm to extract the text data in the customer service knowledge base, mainly question-answer pairs, into entity and relationship data. The entities are mainly divided into business entities, such as "order", "share and add", and semantic entities, such as "cash up", "return", "open". The relationships mainly include "hypernym-hyponym", "cause", "attribute" and the like.
In one embodiment, the knowledge-graph-based interactive telephone calling method comprises the following steps:
and the intention identification service is used for identifying the intention of the user, and performing a series of preprocessing operations including word segmentation, stop word removal, synonym rewriting and the like after the text data corrected by the user is obtained. The intention recognition algorithm strategy is realized based on the combination of rules and models, the preprocessed data is subjected to rule matching and classification model prediction at the same time, the confidence coefficient is output when the data is processed together, and the most reasonable result is output after the data is sorted. The rule matching part is mainly combined with certain similarity based on a regular expression, the similarity algorithm mainly used is cosine included angle similarity, the input is a rule template and a preprocessed text, and the addition of the similarity in the rule matching only accounts for a part of the proportion. The classification model selects fasttext, good gains can be obtained in the training speed and the practical effect, and the trained corpora are trained after manual labeling for the untransformed artificial dialects in the real communication of the users. The result of the intention recognition is the intention label expressed by the user, which corresponds to a standard answer and is subsequently returned to the user.
The entity recognition service is used for analyzing the entity data in the user's expression. After the user's corrected text data is obtained, a series of preprocessing work is performed, including word segmentation, stop-word removal, synonym rewriting and the like. The entity recognition algorithm strategy is likewise based on a combination of rule matching and model prediction: rule matching mainly uses an entity lexicon, and the model part mainly uses BiLSTM + CRF, which can achieve good accuracy and recall. Entity recognition is an important link in the whole invention; it provides a fine-grained understanding of the user's expression and plays a key role in the subsequent reasoning and completion.
After the ASR correction service converts speech into text, transcription errors often occur due to factors such as telephone lines and the call environment, and if no correction is performed, the effectiveness of intention recognition and entity recognition is affected. ASR correction is mainly based on the customer service knowledge base data; the system labels correct data together with a confusion set as training data. The confusion-set generator produces a portion of negative samples from the original knowledge base data using pinyin confusion and edit distance, BERT is selected as the discrimination model, and the main features include pinyin features and Chinese-character features.
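The sketch below illustrates only the confusion-set idea: correct knowledge-base sentences are corrupted into nearby negative samples by swapping confusable tokens. The pinyin features and the BERT discrimination model mentioned above are not reproduced, and the confusion pairs are assumptions.

```python
# Sketch of negative-sample generation for ASR-correction training data:
# corrupt a correct sentence by one confusable-token swap (an edit-distance-
# style perturbation). Confusion pairs below are assumed examples.
import random

TOKEN_CONFUSIONS = {"home": "king", "address": "redress"}  # assumed confusable tokens

def corrupt(sentence: str, rng: random.Random) -> str:
    """Produce a negative sample by swapping one confusable token."""
    tokens = sentence.split()
    candidates = [i for i, token in enumerate(tokens) if token in TOKEN_CONFUSIONS]
    if not candidates:
        return sentence
    i = rng.choice(candidates)
    tokens[i] = TOKEN_CONFUSIONS[tokens[i]]
    return " ".join(tokens)

rng = random.Random(0)
positive = "where your home address is"
negative = corrupt(positive, rng)
print(positive, "->", negative)  # labeled (correct, corrupted) training pair
```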
The context management system is mainly used for storing the entity recognition results and intention recognition results along the dimension of the user's conversation. If the user's intention is clear, an answer is returned directly. If the user's intention is not clear, the real intention of the current round is judged from the entities and intentions already present in the preceding context combined with the entities and intentions identified in the current round. Entities are divided into business entities and semantic entities, and each question-answer intention can be decomposed accordingly; for example, in the intention "how to refund an order", "order" is the business entity and "how to refund" is the semantic entity. During context completion, the corresponding templates and entity priorities are maintained in order to complete the intention and, when necessary, ask the user back. When the graph is constructed, it is extracted from the query phrasings in the knowledge base, so the intention of a query phrasing has a corresponding relation with entities in the graph. The specific process is to query the graph engine with the entities from the preceding context and the entities identified in the current round, rank the query results by confidence, and output the result with the highest score. The confidence is calculated as: the number of identified entities divided by the total number of entities in the standard query.
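The confidence formula described above can be sketched as follows; the candidate standard queries and entity sets are assumptions used only to show the calculation (number of identified entities divided by the total number of entities in the standard query).

```python
# Sketch of context completion: merge entities from earlier turns with
# entities identified in the current round, score each candidate standard
# query as matched entities / total entities, and return the best one.
STANDARD_QUERIES = {
    "how to refund an order": {"order", "how to refund"},
    "when will the order refund arrive": {"order", "refund timeliness"},
}

def complete_intent(context_entities: set, current_entities: set) -> tuple:
    known = context_entities | current_entities
    best, best_confidence = "", 0.0
    for query, required in STANDARD_QUERIES.items():
        confidence = len(known & required) / len(required)  # identified / total entities
        if confidence > best_confidence:
            best, best_confidence = query, confidence
    return best, best_confidence

# Earlier turns mentioned "order"; the current round only yields "refund timeliness".
print(complete_intent({"order"}, {"refund timeliness"}))
# -> ('when will the order refund arrive', 1.0)
```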
Knowledge extraction: entities and relationships are extracted from the question-answer pairs in the customer service knowledge base, and the extracted results are the entities and the relationships. The extraction process is to take the knowledge-base question-answer pairs, perform a series of preprocessing work, carry out a round of recall using entity-lexicon matching, unify entities in synonymous cases, and return the result.
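A minimal sketch of this recall-and-unify step is shown below; the entity lexicon, synonym table and example question-answer pair are assumptions for illustration.

```python
# Sketch of the knowledge-extraction recall step: take a question-answer
# pair, recall entities by lexicon matching, and unify synonymous surface
# forms to one canonical entity before returning the result.
ENTITY_LEXICON = ["order", "refund", "refund timeliness", "withdrawal"]
SYNONYMS = {"cash up": "withdrawal"}  # assumed synonym mapping for entity unification

def extract_entities(question: str) -> list:
    text = question.lower()
    for variant, canonical in SYNONYMS.items():   # entity unification
        text = text.replace(variant, canonical)
    return [entity for entity in ENTITY_LEXICON if entity in text]

qa_pair = ("when will the order refund arrive after cash up", "an assumed answer text")
print(extract_entities(qa_pair[0]))  # -> ['order', 'refund', 'withdrawal']
```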
As shown in fig. 8, a second aspect of the present invention provides a knowledge-graph-based interactive phone call-in apparatus, comprising:
a conversion module 10, configured to convert the received voice information of the user into text information;
an identifying and extracting module 20, configured to identify intention information corresponding to the text information and extract entity information corresponding to the text information;
a storage module 30, configured to store the intention information and the corresponding entity information in a knowledge graph to form an updated knowledge graph;
an extraction module 40, configured to perform extraction based on the intention information and the entity information in the updated knowledge graph to obtain the corresponding relation between the intention information and the entity information;
an obtaining module 50, configured to obtain, based on newly obtained new text information of the user, the intention information corresponding to the new text information by using the corresponding relation between the intention information and the entity information.
The detailed functions of the conversion module 10, the identifying and extracting module 20, the storage module 30, the extraction module 40 and the obtaining module 50 correspond to the processes of steps S1 to S5 described above and are not repeated here.
Fig. 9 is a schematic diagram of a terminal device according to an embodiment of the present invention. As shown in fig. 9, the terminal device 6 of this embodiment includes: a processor 60, a memory 61, and a computer program 62, such as a program for the knowledge-graph-based interactive telephone call-in method, stored in the memory 61 and operable on the processor 60. The processor 60, when executing the computer program 62, implements the steps in the above embodiments of the knowledge-graph-based interactive telephone call-in method, such as steps S1 to S5 shown above. Alternatively, the processor 60, when executing the computer program 62, implements the functions of the modules/units in the above device embodiments, such as the functions of the modules 10 to 50 shown in fig. 8.
Illustratively, the computer program 62 may be divided into one or more modules/units, which are stored in the memory 61 and executed by the processor 60 to implement the present invention. One or more of the modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution of the computer program 62 in the terminal device 6.
The terminal device 6 may be a desktop computer, a notebook, a palm computer, a cloud server, or other computing devices. The terminal device 6 may include, but is not limited to, a processor 60 and a memory 61. Those skilled in the art will appreciate that fig. 9 is merely an example of the terminal device 6 and does not constitute a limitation of the terminal device 6, which may include more or fewer components than those shown, or some components in combination, or different components; for example, the terminal device may also include input/output devices, network access devices, buses, etc.
The Processor 60 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic, discrete hardware components, etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The storage 61 may be an internal storage unit of the terminal device 6, such as a hard disk or a memory of the terminal device 6. The memory 61 may also be an external storage device of the terminal device 6, such as a plug-in hard disk provided on the terminal device 6, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like. Further, the memory 61 may also include both an internal storage unit of the terminal device 6 and an external storage device. The memory 61 is used for storing computer programs and other programs and data required by the terminal device 6. The memory 61 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer readable storage medium. Based on such understanding, all or part of the flow of the method according to the embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer-readable storage medium, and when the computer program is executed by a processor, the steps of the method embodiments may be implemented. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, recording medium, usb disk, removable hard disk, magnetic disk, optical disk, computer Memory, Read-Only Memory (ROM), Random Access Memory (RAM), electrical carrier wave signals, telecommunications signals, software distribution medium, and the like. It should be noted that the computer readable medium may contain content that is subject to appropriate increase or decrease as required by legislation and patent practice in jurisdictions, for example, in some jurisdictions, computer readable media does not include electrical carrier signals and telecommunications signals as is required by legislation and patent practice.
Those of ordinary skill in the art will understand that: the discussion of any embodiment above is meant to be exemplary only, and is not intended to intimate that the scope of the disclosure, including the claims, is limited to these examples; within the idea of the invention, also features in the above embodiments or in different embodiments may be combined, steps may be implemented in any order, and there are many other variations of the different aspects of the invention as described above, which are not provided in detail for the sake of brevity.
In addition, well known power/ground connections to Integrated Circuit (IC) chips and other components may or may not be shown within the provided figures for simplicity of illustration and discussion, and so as not to obscure the invention. Furthermore, devices may be shown in block diagram form in order to avoid obscuring the invention, and also in view of the fact that specifics with respect to implementation of such block diagram devices are highly dependent upon the platform within which the present invention is to be implemented (i.e., specifics should be well within purview of one skilled in the art). Where specific details (e.g., circuits) are set forth in order to describe example embodiments of the invention, it should be apparent to one skilled in the art that the invention can be practiced without, or with variation of, these specific details. Accordingly, the description is to be regarded as illustrative instead of restrictive.
While the present invention has been described in conjunction with specific embodiments thereof, many alternatives, modifications, and variations of these embodiments will be apparent to those of ordinary skill in the art in light of the foregoing description. For example, other memory architectures (e.g., dynamic RAM (DRAM)) may use the discussed embodiments.
Those skilled in the art will appreciate that the present invention includes apparatus directed to performing one or more of the operations described in the present application. These devices may be specially designed and manufactured for the required purposes, or they may comprise known devices in general-purpose computers. These devices have stored therein computer programs that are selectively activated or reconfigured. Such a computer program may be stored in a device (e.g., computer) readable medium, including, but not limited to, any type of disk including floppy disks, hard disks, optical disks, CD-ROMs, and magnetic-optical disks, ROMs (Read-Only memories), RAMs (Random Access memories), EPROMs (Erasable Programmable Read-Only memories), EEPROMs (Electrically Erasable Programmable Read-Only memories), flash memories, magnetic cards, or optical cards, or any type of media suitable for storing electronic instructions, and each coupled to a bus. That is, a readable medium includes any medium that stores or transmits information in a form readable by a device (e.g., a computer). It will be understood by those within the art that each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by computer program instructions. Those skilled in the art will appreciate that the computer program instructions may be implemented by a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, implement the features specified in the block or blocks of the block diagrams and/or flowchart illustrations of the present disclosure.
Those of skill in the art will appreciate that various operations, methods, steps in the processes, acts, or solutions discussed in the present application may be alternated, modified, combined, or deleted. Further, various operations, methods, steps in the flows, which have been discussed in the present application, may be interchanged, modified, rearranged, decomposed, combined, or eliminated. Further, steps, measures, schemes in the various operations, methods, procedures disclosed in the prior art and the present invention can also be alternated, changed, rearranged, decomposed, combined, or deleted. It should be understood by one of ordinary skill in the art that the above discussion of any embodiment is meant to be exemplary only, and is not intended to intimate that the scope of the disclosure, including the claims, is limited to these examples; within the idea of the invention, also features in the above embodiments or in different embodiments may be combined, steps may be implemented in any order, and there are many other variations of the different aspects of the invention as described above, which are not provided in detail for the sake of brevity. Therefore, any omissions, modifications, substitutions, improvements and the like that may be made without departing from the spirit and principles of the invention are intended to be included within the scope of the invention.

Claims (10)

1. A knowledge graph-based interactive telephone calling method is characterized by comprising the following steps:
S1: converting the received voice information of the user into text information;
S2: identifying intention information corresponding to the text information and extracting entity information corresponding to the text information;
S3: storing the intention information and the corresponding entity information in a knowledge graph to form an updated knowledge graph;
S4: extracting based on the intention information and the entity information in the updated knowledge graph to obtain a corresponding relation between the intention information and the entity information;
S5: and acquiring intention information corresponding to the new text information by utilizing the corresponding relation between the intention information and the entity information based on the newly acquired new text information of the user.
2. The knowledge-graph-based interactive telephone calling method as claimed in claim 1, wherein the step S1 comprises:
S11: converting the received voice information of the user into initial text information;
S12: performing error correction on the initial text information to turn the initial text information into the text information.
3. The method of claim 1, wherein the identifying intent information corresponding to the text information comprises:
performing one or more preprocessing operations of word segmentation, stop-word removal and synonym rewriting on the text information;
and identifying the processed text information to identify intention information corresponding to the text information.
4. The method of claim 1, wherein the intention information of the user is obtained based on the extracted entity information, and the intention information and the corresponding entity information are stored in the knowledge graph to form an updated knowledge graph.
5. The method of claim 4, wherein when the intention information of the user cannot be obtained based on the extracted entity information, the actual intention of the current round is determined according to the combination of the entity information and the intention information in the knowledge graph and the entity information and the intention information recognized by the current round, so as to update the knowledge graph.
6. The method of any of claims 1 to 5, wherein the extracting based on the intention information and the entity information in the updated knowledge graph comprises: and unifying entity information corresponding to the intention information under the condition that the intention information in the knowledge graph is synonymous.
7. An interactive phone call-in device based on knowledge graph, comprising:
the conversion module is used for converting the received voice information of the user into text information;
the identification and extraction module is used for identifying intention information corresponding to the text information and extracting entity information corresponding to the text information;
the storage module is used for storing the intention information and the corresponding entity information in a knowledge graph to form an updated knowledge graph;
the extraction module is used for extracting based on the intention information and the entity information in the updated knowledge graph to obtain the corresponding relation between the intention information and the entity information;
and the obtaining module is used for obtaining the intention information corresponding to the new text information by utilizing the corresponding relation between the intention information and the entity information based on the newly obtained new text information of the user.
8. The device according to claim 7, wherein the conversion module is configured to convert the received voice message of the user into an initial text message; and correcting the initial text information to change the initial text information into the text information.
9. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the steps of the knowledge-graph-based interactive telephone calling method according to any one of claims 1 to 6.
10. A computer-readable storage medium having a computer program stored thereon, characterized in that the computer program, when executed by a processor, implements the steps of the knowledge-graph-based interactive telephone calling method according to any one of claims 1 to 6.
CN202111250980.7A 2021-10-26 2021-10-26 Knowledge graph-based interactive telephone calling method and device Pending CN113987202A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111250980.7A CN113987202A (en) 2021-10-26 2021-10-26 Knowledge graph-based interactive telephone calling method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111250980.7A CN113987202A (en) 2021-10-26 2021-10-26 Knowledge graph-based interactive telephone calling method and device

Publications (1)

Publication Number Publication Date
CN113987202A true CN113987202A (en) 2022-01-28

Family

ID=79741986

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111250980.7A Pending CN113987202A (en) 2021-10-26 2021-10-26 Knowledge graph-based interactive telephone calling method and device

Country Status (1)

Country Link
CN (1) CN113987202A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115412640A (en) * 2022-11-02 2022-11-29 北京北投智慧城市科技有限公司 Call center information processing method and device based on knowledge graph
CN115412640B (en) * 2022-11-02 2023-05-09 北京北投智慧城市科技有限公司 Knowledge-graph-based call center information processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination