WO2020233131A1 - Question-answering processing method, apparatus, computer device, and storage medium - Google Patents
Question-answering processing method, apparatus, computer device, and storage medium
- Publication number
- WO2020233131A1 (PCT/CN2019/130597)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- question
- user
- word
- specific type
- trees
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/33—Querying
- G06F16/332—Query formulation
- G06F16/3329—Natural language query formulation or dialogue systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/33—Querying
- G06F16/3331—Query processing
- G06F16/334—Query execution
- G06F16/3344—Query execution using natural language analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/20—Natural language analysis
- G06F40/205—Parsing
- G06F40/211—Syntactic parsing, e.g. based on context-free grammar [CFG] or unification grammars
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/20—Natural language analysis
- G06F40/253—Grammatical analysis; Style critique
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/20—Natural language analysis
- G06F40/279—Recognition of textual entities
- G06F40/289—Phrasal analysis, e.g. finite state techniques or chunking
Definitions
- This application relates to a question and answer processing method, device, computer equipment and storage medium.
- FAQ robots, which are widely used in mobile assistants and smart customer service scenarios, use regular expressions, templates, or machine-learning-based classification methods to classify user intentions, and then return corresponding answers according to those intentions.
- However, traditional Q&A robots can only be configured with fixed answers and cannot deal with the richer meanings of a user's question, so the accuracy of the answers to user questions is not high.
- A question and answer processing method, device, computer equipment, and storage medium are provided.
- A question and answer processing method, including:
- receiving a user question answer instruction, and obtaining a user question according to the user question answer instruction; constructing multiple syntax trees using the user question; calculating the similarity between the multiple syntax trees and the user question, and determining a target syntax tree according to the similarity; and converting the target syntax tree into a query sentence, and executing the query sentence to obtain a user question answer corresponding to the user question answer instruction.
- a question and answer processing device including:
- the question acquisition module is configured to receive the user's question reply instruction, and obtain the user's question according to the user's question reply instruction;
- the tree building module is used to build multiple syntax trees using the user question sentence
- a target tree determination module configured to calculate the similarity between the multiple syntax trees and the user question, and determine the target syntax tree according to the similarity
- the sentence execution module is used to convert the target syntax tree into a query sentence and execute the query sentence to obtain a user question answer.
- A computer device, including a memory and one or more processors, the memory storing computer-readable instructions that, when executed by the one or more processors, cause the one or more processors to perform the following steps: receiving a user question answer instruction, and obtaining a user question according to the user question answer instruction; constructing multiple syntax trees using the user question; calculating the similarity between the multiple syntax trees and the user question, and determining a target syntax tree according to the similarity; and converting the target syntax tree into a query sentence, and executing the query sentence to obtain a user question answer corresponding to the user question answer instruction.
- One or more non-volatile storage media storing computer-readable instructions that, when executed by one or more processors, cause the one or more processors to perform the following steps: receiving a user question answer instruction, and obtaining a user question according to the user question answer instruction; constructing multiple syntax trees using the user question; calculating the similarity between the multiple syntax trees and the user question, and determining a target syntax tree according to the similarity; and converting the target syntax tree into a query sentence, and executing the query sentence to obtain a user question answer corresponding to the user question answer instruction.
- Fig. 1 is an application scenario diagram of a question processing method according to one or more embodiments.
- Fig. 2 is a schematic flowchart of a question processing method according to one or more embodiments.
- Fig. 3 is a schematic flow chart of constructing multiple syntax trees according to one or more embodiments.
- Fig. 3a is a schematic diagram of a syntax tree constructed in a specific embodiment.
- Fig. 3b is a schematic diagram of another syntax tree constructed in the embodiment of Fig. 3a.
- Fig. 4 is a schematic flowchart of obtaining a basic vocabulary sequence according to one or more embodiments.
- Fig. 5 is a schematic diagram of a flow of obtaining word fragments of a specific type according to one or more embodiments.
- Fig. 6 is a schematic flowchart of obtaining a target syntax tree according to one or more embodiments.
- Fig. 7 is a schematic diagram of a process of extracting features according to one or more embodiments.
- Fig. 8 is an application scenario diagram of a question processing method according to another embodiment.
- Fig. 9 is a schematic flowchart of a question processing method according to one or more specific embodiments.
- Fig. 9a is a schematic diagram of the target syntax tree obtained in the specific embodiment of Fig. 9.
- Fig. 10 is a structural block diagram of a question processing device according to one or more embodiments.
- Fig. 11 is an internal structure diagram of a computer device according to one or more embodiments.
- Fig. 12 is a diagram of the internal structure of a computer device in another embodiment.
- the question and answer processing method provided in this application can be applied to the application environment as shown in FIG. 1.
- the terminal 102 communicates with the server 104 through the network.
- The server 104 receives the user question answer instruction sent by the terminal 102, and obtains the user question according to the user question answer instruction; constructs multiple syntax trees using the user question; calculates the similarity between the multiple syntax trees and the user question, and determines the target syntax tree according to the similarity; and converts the target syntax tree into a query sentence, and executes the query sentence to obtain a user question answer corresponding to the user question answer instruction.
- the terminal 102 may be, but is not limited to, various personal computers, notebook computers, smart phones, tablet computers, and portable wearable devices.
- the server 104 may be implemented by an independent server or a server cluster composed of multiple servers.
- a question and answer processing method is provided. Taking the method applied to the server in FIG. 1 as an example for description, the method includes the following steps:
- S202 Receive the user's question reply instruction, and obtain the user's question according to the user's question reply instruction.
- The terminal obtains the text of the user's question: it can obtain a voice question through a voice device and convert the voice question into the text of the user's question, or obtain the text of the user's question input by the user through an input device.
- the terminal sends a user question reply instruction to the server according to the obtained user question text, and the server receives the user question reply instruction sent by the terminal.
- The user question reply instruction carries the user question text; the server parses the user question reply instruction to obtain the user question.
- the server uses a syntactic analysis algorithm to construct multiple syntax trees corresponding to the user question according to the obtained user question.
- The syntax analysis algorithm may be a CFG (context-free grammar) parsing algorithm or a dependency parsing (Dependency Parser) algorithm.
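As an illustration of how a CFG parser can yield multiple syntax trees for one ambiguous sentence, here is a minimal CYK-style parser over a toy grammar (the grammar, vocabulary, and example sentence are assumptions for demonstration, not the patent's actual rules):

```python
from itertools import product

# Minimal CYK-style parser over a toy CNF grammar (hypothetical rules and
# vocabulary), showing how one ambiguous sentence yields multiple trees.
BINARY = {                      # (left label, right label) -> parent labels
    ("NP", "VP"): ["S"],
    ("V", "NP"): ["VP"],
    ("VP", "PP"): ["VP"],
    ("NP", "PP"): ["NP"],
    ("P", "NP"): ["PP"],
}
LEXICAL = {                     # word -> labels
    "I": ["NP"], "saw": ["V"], "man": ["NP"],
    "with": ["P"], "telescope": ["NP"],
}

def parse(tokens):
    """Return every parse tree rooted at S, as nested (label, ...) tuples."""
    n = len(tokens)
    # chart[i][j] holds (label, tree) pairs spanning tokens[i:j]
    chart = [[[] for _ in range(n + 1)] for _ in range(n + 1)]
    for i, word in enumerate(tokens):
        for label in LEXICAL.get(word, []):
            chart[i][i + 1].append((label, (label, word)))
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):
                for (l1, t1), (l2, t2) in product(chart[i][k], chart[k][j]):
                    for parent in BINARY.get((l1, l2), []):
                        chart[i][j].append((parent, (parent, t1, t2)))
    return [tree for label, tree in chart[0][n] if label == "S"]

trees = parse("I saw man with telescope".split())
print(len(trees))  # 2: the PP attaches to the verb phrase or to "man"
```

The more attachment ambiguities the sentence has, the more complete trees the chart yields, which matches the observation that a more ambiguous user question produces more syntax trees.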
- S206 Calculate the similarity between the multiple syntax trees and the user question, and determine the target syntax tree according to the similarity.
- the similarity algorithm is used to calculate the similarity between each syntax tree and the user's question, and the target syntax tree is determined according to the similarity between each syntax tree and the user's question.
- the syntax tree with the greatest similarity can be used as the target syntax tree, or the syntax tree that exceeds the preset similarity threshold can be used as the target syntax tree.
- If no similarity exceeds the preset similarity threshold, the syntax tree corresponding to the similarity closest to the preset similarity threshold is used as the target syntax tree.
- the similarity algorithm can be Euclidean distance similarity algorithm or cosine similarity algorithm.
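The two similarity measures named above can be sketched as follows (the vectors here are illustrative stand-ins for real syntax-tree and question features):

```python
import math

# Sketches of the two similarity measures mentioned in the text.
def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length, nonzero vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def euclidean_similarity(a, b):
    """Map Euclidean distance into (0, 1]; 1.0 means identical vectors."""
    dist = math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return 1.0 / (1.0 + dist)

print(round(cosine_similarity([1, 0, 1], [1, 0, 1]), 6))   # 1.0
print(round(euclidean_similarity([0, 3], [4, 0]), 6))      # distance 5 -> 0.166667
```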
- S208 Convert the target syntax tree into a query sentence, and execute the query sentence to obtain a user question answer corresponding to the user question answer instruction.
- the obtained target syntax tree is converted into an executable query sentence through a translator, and the query sentence is executed in the knowledge base or the knowledge graph to obtain the user question answer corresponding to the user question answer instruction.
- The above question and answer processing method keeps multiple syntax trees when parsing the user question, obtains the similarity between each syntax tree and the user question, filters the syntax trees according to the similarity to obtain the target syntax tree, and uses the target syntax tree to construct the query sentence.
- The query is then performed, and the user question answer corresponding to the user question answer instruction is obtained. That is, the similarity is used to eliminate the ambiguity of the user's question, which improves the accuracy of the user question answer.
- step S204, that is, constructing multiple syntax trees using the user question, includes the steps:
- S302 Preprocess the user question to obtain a basic vocabulary sequence.
- The basic vocabulary sequence refers to a sequence of word fragments of specific types.
- the specific type can be entity type, time type, number type, such as: person name, place name, organization name, time, date, currency, percentage, etc.
- the specific type may also be a preset type related to the application scenario. For example, when applied to a stock price inquiry scenario, the specific types can be property, compareop, value, and entity set, and so on.
- the server preprocesses the user's question.
- The preprocessing may be to segment the user's question, obtain the type of each word segment after segmentation, and thereby obtain the basic vocabulary sequence.
- S304 Construct multiple grammar trees using grammatical rules according to the basic vocabulary sequence.
- Grammar rules are rules for constructing grammar trees based on basic vocabulary sequences, such as CFG (context-free grammar).
- the server uses grammatical rules to construct multiple grammar trees according to the basic vocabulary sequence.
- The number of syntax trees is related to the ambiguity of the user's question: the more ambiguous the user's question, the more syntax trees are obtained.
- the multiple syntax trees constructed can include the syntax trees shown in Figure 3a and Figure 3b.
- The basic vocabulary sequence is obtained, and multiple syntax trees are constructed from it using grammatical rules, so that each ambiguous syntax tree can be obtained to facilitate subsequent disambiguation processing.
- step S302 which is to preprocess the user question sentence to obtain the basic vocabulary sequence, includes the steps:
- S402 Segment the user's question to obtain a segmentation result.
- Word segmentation refers to the process of recombining a continuous character sequence into a word sequence according to certain specifications.
- the server performs word segmentation processing on the user's question to obtain the word segmentation result.
- Word segmentation can be performed using a word segmentation method based on string matching, a word segmentation method based on understanding, or a word segmentation method based on statistics. For example, after segmenting "Where does Xiao Ming live", the word segmentation result can be "Xiao Ming", "live", "what" and "place".
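The string-matching family mentioned above can be illustrated with forward maximum matching over a toy dictionary (the four-entry lexicon is a hypothetical assumption; production segmenters use far larger ones — "小明住什么地方" is "Where does Xiao Ming live"):

```python
# Forward maximum matching, one of the string-matching segmentation methods.
# DICTIONARY is a toy stand-in for a real lexicon.
DICTIONARY = {"小明", "住", "什么", "地方"}

def max_match(text):
    """Greedily take the longest dictionary word at each position;
    fall back to a single character so the scan always advances."""
    result, i = [], 0
    while i < len(text):
        for j in range(len(text), i, -1):
            if text[i:j] in DICTIONARY or j == i + 1:
                result.append(text[i:j])
                i = j
                break
    return result

print(max_match("小明住什么地方"))  # ['小明', '住', '什么', '地方']
```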
- S404 Identify the specific type corresponding to the word segmentation result, and obtain word fragments of the specific type.
- the server recognizes the word segmentation results, determines the specific type corresponding to each word segmentation result, and respectively marks the corresponding specific type for each word segmentation result, and obtains word fragments with the specific type tag.
- The specific types of "Xiao Ming", "live", "what" and "place" are identified as "person's name", "verb", "adjective" and "noun".
- S406 Combine specific types of word fragments according to grammatical rules to obtain a basic vocabulary sequence.
- The server combines word fragments of specific types according to grammatical rules, for example, combining adjacent word fragments to obtain the basic vocabulary sequence. For example, "Xiao Ming" and "live" are combined to get "Xiao Ming's residence", and "what" and "place" are combined to get "where".
- The segmentation result is obtained, the specific type corresponding to each segment is identified to obtain word fragments of specific types, and these fragments are combined according to the grammatical rules to obtain the basic vocabulary sequence, which is then ready for subsequent use.
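The combination step S406 can be sketched as a greedy merge of adjacent typed fragments (the type labels and merge table below are hypothetical placeholders, not the patent's actual grammar):

```python
# Toy sketch of step S406: adjacent typed word fragments are merged
# according to simple rules. MERGE is a hypothetical rule table.
MERGE = {  # (left type, right type) -> merged type
    ("name", "verb"): "noun_phrase",
    ("adjective", "noun"): "noun_phrase",
}

def combine(fragments):
    """fragments: list of (word, specific_type); merge adjacent pairs."""
    out, i = [], 0
    while i < len(fragments):
        if i + 1 < len(fragments) and \
                (fragments[i][1], fragments[i + 1][1]) in MERGE:
            merged_type = MERGE[(fragments[i][1], fragments[i + 1][1])]
            out.append((fragments[i][0] + " " + fragments[i + 1][0], merged_type))
            i += 2
        else:
            out.append(fragments[i])
            i += 1
    return out

fragments = [("Xiao Ming", "name"), ("live", "verb"),
             ("what", "adjective"), ("place", "noun")]
print(combine(fragments))
# [('Xiao Ming live', 'noun_phrase'), ('what place', 'noun_phrase')]
```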
- step S404, that is, identifying the specific type corresponding to the word segmentation result and obtaining word fragments of specific types, includes the steps:
- S502 Input the word segmentation result into the trained named entity recognition model for recognition, and obtain a specific type corresponding to the word segmentation result; wherein the named entity recognition model is obtained by training using a neural network algorithm.
- the named entity recognition model refers to the NER (Named Entity Recognition, named entity recognition) model.
- The model is trained using a neural network algorithm on existing named entities and their corresponding specific types; when the training completion condition is reached, the trained named entity recognition model is obtained.
- The server inputs the word segmentation result into the trained named entity recognition model for recognition, and obtains the specific type corresponding to the word segmentation result. For example, inputting the word segmentation results "Xiao Ming", "live", "what" and "place" into the trained named entity recognition model for recognition, the output specific types can be "person's name", "verb", "adjective" and "noun".
- S504 Mark the word segmentation result as a specific type of word fragment according to the specific type corresponding to the word segmentation result.
- the server marks each word segmentation result according to the output specific type, and obtains the word fragment of the specific type.
- The word segmentation result is recognized through the trained named entity recognition model to obtain the specific type corresponding to the word segmentation result, and the result is then marked according to that type to obtain word fragments of specific types, so that the specific types of the word segmentation results can be obtained quickly, improving efficiency.
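A dictionary lookup can stand in for the trained NER model to illustrate steps S502-S504 (the lexicon and type labels are assumptions for demonstration, not model output):

```python
# A dictionary lookup standing in for the trained NER model: recognize a
# specific type for each segmentation result and mark the result with it.
# TYPE_LEXICON is a hypothetical toy lexicon.
TYPE_LEXICON = {
    "Xiao Ming": "person's name",
    "live": "verb",
    "what": "adjective",
    "place": "noun",
}

def mark(segments):
    """Return (word, specific_type) word fragments; unseen words get 'unknown'."""
    return [(word, TYPE_LEXICON.get(word, "unknown")) for word in segments]

print(mark(["Xiao Ming", "live", "what", "place"]))
```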
- step S206 that is, calculating the similarity between multiple syntax trees and user questions, and determining the target syntax tree according to the similarity, includes the steps:
- S602 Extract grammatical features of the multiple syntax trees and question features of the user question: the server extracts the grammatical features of each syntax tree, and extracts the question features of the user's question.
- the trained feature extraction model can be used for feature extraction.
- S604 Calculate the similarity score between the grammatical feature and the question feature, and sort the multiple syntax trees according to the similarity score to obtain the sorting result of the multiple syntax trees.
- the server calculates the similarity between the grammatical features of each syntax tree and the question feature to obtain a similarity score, and sorts the syntax trees according to the similarity score to obtain a sorted result set of multiple syntax trees.
- the similarity algorithm can be used to calculate the similarity, and the similarity algorithm can be a cosine distance similarity algorithm or a Euclidean distance similarity algorithm.
- S606 Select the syntax tree corresponding to the maximum similarity score or the similarity score exceeding a preset threshold from the sorting result as the target syntax tree.
- the server selects the syntax tree corresponding to the largest similarity score from the sorted result set of the syntax tree as the target syntax tree.
- a syntax tree corresponding to a similarity score exceeding a preset threshold is selected from the sorted result set of the syntax tree as the target syntax tree.
- If the similarity score corresponding to each syntax tree in the sorted result set does not exceed the preset threshold, the similarity score closest to the preset threshold is obtained, and the syntax tree corresponding to it is used as the target syntax tree.
- Alternatively, one of the syntax trees exceeding the preset threshold is randomly selected as the target syntax tree.
- In this way, ambiguous syntax trees can be eliminated, resolving the ambiguity of the user's question.
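The selection logic of S606, including the fallback used when no score exceeds the threshold, can be sketched as follows (tree identifiers, scores, and the default threshold are illustrative):

```python
# Sketch of the S606 selection logic: prefer scores above the preset
# threshold; otherwise fall back to the score closest to the threshold.
def select_target(scored_trees, threshold=0.8):
    """scored_trees: list of (tree, similarity_score)."""
    above = [item for item in scored_trees if item[1] > threshold]
    if above:
        return max(above, key=lambda item: item[1])  # best score above threshold
    # no score exceeds the threshold: take the score closest to it
    return min(scored_trees, key=lambda item: abs(item[1] - threshold))

print(select_target([("tree_a", 0.91), ("tree_b", 0.85), ("tree_c", 0.40)]))
# ('tree_a', 0.91)
print(select_target([("tree_a", 0.55), ("tree_b", 0.70)]))
# ('tree_b', 0.70) -- closest to the 0.8 threshold
```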
- step S602 that is, extracting grammatical features of multiple grammar trees and question features of user questions, includes the steps:
- S702 Convert the child nodes of the syntax tree into child node word vectors, and input the child node word vectors of the syntax tree into the trained first feature extraction model for extraction to obtain the root node word vector; wherein the first feature extraction model is obtained by training using a recursive neural network algorithm.
- the word vector is a word vector generated according to the word corresponding to the node in the grammar tree.
- For example, if the word of the node is "microphone", the word vector is expressed as [0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0...].
- the dimension of the word vector is the size of the vocabulary, which means that the value of the "microphone" dimension is 1, and the values of other dimensions are all 0.
- the word vector can also be obtained according to the position of the word in the vocabulary. For example, if the position of "microphone" in the vocabulary is 3, the word vector of the obtained "microphone” can also be [3].
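Both encodings described above can be sketched with a toy five-word vocabulary (the vocabulary is a hypothetical assumption; in a real system the vocabulary determines the one-hot dimension):

```python
# One-hot vs position-based word vectors over a toy vocabulary.
VOCAB = ["what", "is", "a", "microphone", "stock"]  # hypothetical vocabulary

def one_hot(word):
    """Vocabulary-sized vector: 1 in the word's dimension, 0 elsewhere."""
    vec = [0] * len(VOCAB)
    vec[VOCAB.index(word)] = 1
    return vec

def index_vector(word):
    """Alternative encoding: the word's position in the vocabulary."""
    return [VOCAB.index(word)]

print(one_hot("microphone"))       # [0, 0, 0, 1, 0]
print(index_vector("microphone"))  # [3]
```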
- The server converts the child nodes of the syntax tree into child node word vectors, and inputs the child node word vectors of the syntax tree into the trained first feature extraction model for extraction to obtain the root node word vector; wherein the first feature extraction model is trained using a recursive neural network (TreeRNN, Tree Recursive Neural Net) algorithm.
- During training, the word vector of the corresponding root node is used as the label.
- the training completion condition means that the number of training iterations reaches the maximum value or the loss function value is less than a preset threshold.
- the tanh function can be used as the activation function and the cross-entropy loss function can be used.
- S704 Use the root node word vector as the grammatical feature of the syntax tree: the server directly uses the root node word vector output by the first feature extraction model as the grammatical feature of the syntax tree.
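A toy TreeRNN composition step can illustrate how child node word vectors are combined bottom-up into a single root node vector (the random weights stand in for the trained first feature extraction model; the 4-dimensional vectors and tree shape are illustrative):

```python
import math
import random

# Toy TreeRNN composition: h = tanh(W [left; right]) applied bottom-up
# until one root node vector remains. Random weights are stand-ins for
# the trained model.
random.seed(0)
DIM = 4
W = [[random.uniform(-0.5, 0.5) for _ in range(2 * DIM)] for _ in range(DIM)]

def compose(left, right):
    """One composition step: tanh of a linear map of [left; right]."""
    cat = left + right
    return [math.tanh(sum(w * x for w, x in zip(row, cat))) for row in W]

def encode(tree):
    """tree: a leaf vector (list) or a (left_subtree, right_subtree) tuple."""
    if isinstance(tree, list):
        return tree
    return compose(encode(tree[0]), encode(tree[1]))

leaf = lambda i: [1.0 if j == i else 0.0 for j in range(DIM)]
root = encode(((leaf(0), leaf(1)), leaf(2)))  # shape: ((w0 w1) w2)
print([round(v, 3) for v in root])
```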
- S706 Input the user question sentence into the trained second feature extraction model for extraction to obtain the question sentence vector; wherein the second feature extraction model is obtained by training using a recurrent neural network algorithm.
- the second feature extraction model is obtained by training using Recurrent Neural Networks (RNN) algorithm.
- the existing user question is used as the input of the recurrent neural network, and the corresponding question vector is used as the label for training.
- the training completion condition means that the number of training iterations reaches the maximum value or the loss function value is less than a preset threshold.
- the loss function is a cross-entropy loss function
- the activation function of the output layer is a softmax function (normalized exponential function)
- the activation function of the hidden layer is a tanh (hyperbolic tangent) function.
- the server inputs the user's question sentence into the trained second feature extraction model for extraction, obtains the question sentence vector, and uses the question sentence vector as the question sentence feature of the user's question sentence.
- the question feature of the user's question and the grammatical feature of the grammar tree are extracted through the trained feature extraction model, which can improve the efficiency of obtaining the question feature and the grammatical feature and facilitate subsequent use.
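Similarly, a toy recurrent encoder can illustrate how the second feature extraction model turns a word-vector sequence into a question sentence vector (untrained random weights stand in for the trained model; the dimensions and inputs are illustrative):

```python
import math
import random

# Toy recurrent encoder: word vectors are fed in sequence and the final
# hidden state serves as the question sentence vector. Random weights
# are stand-ins for the trained second feature extraction model.
random.seed(1)
DIM = 4
Wx = [[random.uniform(-0.5, 0.5) for _ in range(DIM)] for _ in range(DIM)]
Wh = [[random.uniform(-0.5, 0.5) for _ in range(DIM)] for _ in range(DIM)]

def rnn_encode(word_vectors):
    """h_t = tanh(Wx x_t + Wh h_{t-1}); return the last hidden state."""
    h = [0.0] * DIM
    for x in word_vectors:
        h = [math.tanh(sum(w * xi for w, xi in zip(Wx[i], x)) +
                       sum(w * hi for w, hi in zip(Wh[i], h)))
             for i in range(DIM)]
    return h

question = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0]]  # toy word vectors
print([round(v, 3) for v in rnn_encode(question)])
```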
- After step S208, that is, after the target syntax tree is converted into a query sentence and the query sentence is executed in the knowledge graph to obtain the user question answer, the method further includes:
- the user's question answer is returned to the terminal so that the terminal displays the user's question answer.
- the terminal is used to receive and display the user's question answer, and the terminal may be a terminal corresponding to the user's question or another terminal.
- The terminal may be, but is not limited to, a personal computer, notebook computer, smart phone, tablet computer, or portable wearable device.
- The display is not limited to text or image display; it may also be voice playback, video playback, etc.
- The terminal can display the received answer on the terminal display interface, display and play it on a video playback interface, or convert it into voice information for playback through a voice device.
- The server may return the obtained user question answer to the terminal corresponding to the user question, and that terminal displays the answer after receiving it, so that the user can obtain the answer information for the question, which is convenient for the user. It is also possible to return the answer corresponding to the user's question to a terminal set by the user or the server, and display the answer on that terminal.
- For example, the user asks the user question through the mobile phone 8A, the server 8B obtains the user question, finds the answer to the user question through the question and answer processing method in any of the above embodiments, and returns the user question answer to the computer 8C set by the user for display.
- the question and answer processing method includes the steps:
- S902 Receive the user's question reply instruction, obtain the user's question according to the user's question reply instruction, segment the user's question, and obtain a word segmentation result.
- S904 Input the word segmentation result into the trained NER model for recognition, obtain a specific type corresponding to the word segmentation result, and mark the word segmentation result as a specific type of word fragment according to the specific type corresponding to the word segmentation result.
- S906 Combine specific types of word pieces according to CFG grammar rules to obtain a basic vocabulary sequence, and construct multiple grammar trees according to the basic vocabulary sequence using CFG grammar rules.
- S908 Convert the child nodes of each syntax tree into child node word vectors, input the child node word vectors of the syntax tree into the trained TreeRNN model for extraction, and obtain the root node word vectors of each syntax tree.
- S910 Input the user question sentence into the trained RNN model for extraction, and obtain the question sentence vector.
- S912 Use cosine distance to calculate the similarity between the root node word vector of each syntax tree and the question vector to obtain a similarity score, and sort the multiple syntax trees according to the similarity score to obtain a sorting result of the multiple syntax trees.
- S914 Select the syntax tree corresponding to the maximum similarity score or a similarity score exceeding the preset threshold from the sorting result as the target syntax tree.
- S916 Convert the target syntax tree into a query sentence, and execute the query sentence to obtain a user question answer corresponding to the user question answer instruction.
- For example, the user question "stocks whose stock price is greater than 50 yuan?" is obtained, and the user question is segmented to obtain "stock price", "greater than", "50 yuan" and "stocks"; the segmentation results are marked with specific types,
- the specific type corresponding to "stock price (price)" is "property",
- the specific type corresponding to "greater than (>)" is "compareop",
- the specific type corresponding to "50 yuan (50)" is "value",
- and the specific type corresponding to "stocks" is "entity set".
- The target syntax tree in Figure 9a is transformed into the query sentence "SELECT ?x WHERE ( ?x a Stock . ?x attr:price ?xprice . filter(?xprice > 50) )",
- the query sentence is executed in the knowledge graph, and the user question answer corresponding to the user question answer instruction is obtained, and the user question answer is returned to the terminal for display.
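Executing such a query against a knowledge graph can be sketched with an in-memory stand-in (the graph contents and attribute names are illustrative, not from the patent):

```python
# In-memory stand-in for executing the Fig. 9a query: select entities of
# type Stock whose price attribute exceeds 50. GRAPH is illustrative data.
GRAPH = [
    {"type": "Stock", "name": "AAA", "price": 62.0},
    {"type": "Stock", "name": "BBB", "price": 48.5},
    {"type": "Bond",  "name": "CCC", "price": 99.0},
]

def query_stocks_over(threshold):
    """Analogue of SELECT ?x WHERE ( ?x a Stock . ?x attr:price ?p . filter(?p > t) )."""
    return [entity["name"] for entity in GRAPH
            if entity["type"] == "Stock" and entity["price"] > threshold]

print(query_stocks_over(50))  # ['AAA']
```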
- A question and answer processing apparatus 1000 is provided, which includes: a question acquisition module 1002, a tree building module 1004, a target tree determination module 1006, and a sentence execution module 1008, wherein:
- the question acquisition module 1002 is used to receive the user's question reply instruction, and obtain the user's question according to the user's question reply instruction;
- the tree building module 1004 is used to build multiple syntax trees using user questions
- the target tree determination module 1006 is used to calculate the similarity between multiple syntax trees and user questions, and determine the target syntax tree according to the similarity
- the sentence execution module 1008 is used to convert the target syntax tree into a query sentence, execute the query sentence, and obtain a user question answer.
- the tree building module 1004 includes:
- the preprocessing module is used to preprocess user questions to obtain basic vocabulary sequences
- the building module is used to build multiple grammar trees using grammatical rules based on the basic vocabulary sequence.
- the preprocessing module includes:
- the word segmentation module is used to segment the user's question and obtain the word segmentation result
- the recognition module is used to identify the specific type corresponding to the word segmentation result, and obtain the word piece with the specific type;
- the combination module is used to combine specific types of word pieces according to grammatical rules to obtain a basic vocabulary sequence.
- the identification module includes:
- the model recognition module is used to input the word segmentation result into the trained named entity recognition model for recognition, and obtain the specific type corresponding to the word segmentation result; wherein the named entity recognition model is obtained by training using a neural network algorithm.
- the marking module is used to mark the word segmentation result as a specific type of word fragment according to the specific type corresponding to the word segmentation result.
- the target tree determination module 1006 includes:
- the feature extraction module is used to extract the grammatical features of multiple syntax trees and the question features of user questions;
- the score calculation module is used to calculate the similarity score between the grammatical feature and the question feature, and sort multiple grammar trees according to the similarity score to obtain the sorting result of the multiple grammar trees;
- The target tree selection module is used to select, from the sorting result, the syntax tree corresponding to the maximum similarity score or a similarity score exceeding the preset threshold as the target syntax tree.
- the feature extraction module includes:
- The word vector obtaining module is used to convert the child nodes of the syntax tree into child node word vectors, and input the child node word vectors of the syntax tree into the trained first feature extraction model for extraction to obtain the root node word vector; wherein the first feature extraction model is obtained by training using a recursive neural network algorithm;
- The grammatical feature obtaining module is used to use the root node word vector as the grammatical feature of the syntax tree;
- the question vector obtaining module is used to input the user question into the trained second feature extraction model for extraction to obtain the question vector; wherein the second feature extraction model is obtained by training using a recurrent neural network algorithm;
- the question feature obtaining module is used to use the question vector as the question feature of the user's question.
- the question and answer processing apparatus 1000 further includes:
- the question display module is used to return the user's question answer to the terminal so that the terminal displays the user's question answer.
- Each module in the above question and answer processing device can be implemented in whole or in part by software, hardware, and a combination thereof.
- the foregoing modules may be embedded in the form of hardware or independent of the processor in the computer device, or may be stored in the memory of the computer device in the form of software, so that the processor can call and execute the operations corresponding to the foregoing modules.
- a computer device is provided.
- the computer device may be a server, and its internal structure diagram may be as shown in FIG. 11.
- the computer equipment includes a processor, a memory, a network interface and a database connected through a system bus.
- the processor of the computer device is used to provide calculation and control capabilities.
- the memory of the computer device includes a non-volatile storage medium and an internal memory.
- the non-volatile storage medium stores an operating system, computer readable instructions, and a database.
- the internal memory provides an environment for the operation of the operating system and computer-readable instructions in the non-volatile storage medium.
- the database of the computer device is used to store user question answer data.
- the network interface of the computer device is used to communicate with an external terminal through a network connection.
- the computer readable instruction is executed by the processor to realize a question and answer processing method.
- a computer device is provided.
- the computer device may be a terminal, and its internal structure diagram may be as shown in FIG. 12.
- the computer equipment includes a processor, a memory, a network interface, a display screen and an input device connected through a system bus.
- the processor of the computer device is used to provide calculation and control capabilities.
- the memory of the computer device includes a non-volatile storage medium and an internal memory.
- the non-volatile storage medium stores an operating system and computer readable instructions.
- the internal memory provides an environment for the operation of the operating system and computer-readable instructions in the non-volatile storage medium.
- the network interface of the computer device is used to communicate with an external terminal through a network connection.
- the computer-readable instructions, when executed by the processor, implement a question and answer processing method.
- the display screen of the computer device may be a liquid crystal display or an electronic ink display.
- the input device of the computer device may be a touch layer covering the display screen, a button, trackball, or touchpad provided on the housing of the computer device, or an external keyboard, touchpad, or mouse.
- FIG. 11 or FIG. 12 is only a block diagram of part of the structure related to the solution of the present application, and does not limit the computer device to which the solution of the present application is applied.
- the computer device may include more or fewer components than shown in the figures, or combine certain components, or have a different component arrangement.
- a computer device is provided, including a memory and one or more processors. The memory stores computer-readable instructions that, when executed, cause the one or more processors to perform the following steps: receiving a user question answer instruction and obtaining a user question according to the instruction; constructing multiple syntax trees from the user question; computing the similarity between the multiple syntax trees and the user question and determining a target syntax tree according to the similarity; and converting the target syntax tree into a query statement and executing the query statement to obtain the user question answer corresponding to the user question answer instruction.
- the processor further implements the following steps when executing the computer-readable instructions: preprocessing the user's question to obtain a basic vocabulary sequence; and constructing multiple syntax trees according to the basic vocabulary sequence using grammatical rules.
- the processor also implements the following steps when executing the computer-readable instructions: segmenting the user question to obtain a segmentation result; identifying the specific type corresponding to the segmentation result to obtain word pieces of specific types; and combining the word pieces according to the grammar rules to obtain the basic vocabulary sequence.
- the processor further implements the following steps when executing the computer-readable instructions: inputting the segmentation result into the trained named entity recognition model for recognition to obtain the specific type corresponding to the segmentation result, where the named entity recognition model is trained using a neural network algorithm; and labeling the segmentation result as word pieces of specific types according to the specific types corresponding to the segmentation result.
- the processor further implements the following steps when executing the computer-readable instructions: extracting the grammatical features of the multiple syntax trees and the question feature of the user question; computing the similarity score between the grammatical features and the question feature, and sorting the multiple syntax trees by similarity score to obtain a ranking of the multiple syntax trees; and selecting from the ranking the syntax tree corresponding to the highest similarity score, or to a similarity score exceeding a preset threshold, as the target syntax tree.
- the processor further implements the following steps when executing the computer-readable instructions: converting the child nodes of the syntax tree into child-node word vectors, and inputting the child-node word vectors into the trained first feature extraction model to obtain the root-node word vector, where the first feature extraction model is trained using a recursive neural network algorithm; using the root-node word vector as the grammatical feature of the syntax tree; inputting the user question into the trained second feature extraction model to obtain a question vector, where the second feature extraction model is trained using a recurrent neural network algorithm; and using the question vector as the question feature of the user question.
- the processor further implements the following steps when executing the computer-readable instructions: returning the user's question answer to the terminal, so that the terminal displays the user's question answer.
- one or more non-volatile storage media storing computer-readable instructions are provided. When executed by one or more processors, the computer-readable instructions cause the one or more processors to perform the following steps: receiving a user question answer instruction and obtaining a user question according to the instruction; constructing multiple syntax trees from the user question; computing the similarity between the multiple syntax trees and the user question and determining a target syntax tree according to the similarity; and converting the target syntax tree into a query statement and executing the query statement to obtain the user question answer corresponding to the user question answer instruction.
- the following steps are also implemented: preprocessing the user's question to obtain a basic vocabulary sequence; and constructing multiple syntax trees according to the basic vocabulary sequence using grammatical rules.
- the following steps are also implemented: segmenting the user question to obtain a segmentation result; identifying the specific type corresponding to the segmentation result to obtain word pieces of specific types; and combining the word pieces according to the grammar rules to obtain the basic vocabulary sequence.
- the following steps are also implemented: inputting the segmentation result into the trained named entity recognition model for recognition to obtain the specific type corresponding to the segmentation result, where the named entity recognition model is trained using a neural network algorithm; and labeling the segmentation result as word pieces of specific types according to the specific types corresponding to the segmentation result.
- the following steps are also implemented: extracting the grammatical features of multiple syntax trees and the question features of the user question; calculating the similarity score between the grammatical features and the question features, The multiple syntax trees are sorted according to the similarity score to obtain the sorting result of the multiple syntax trees; the syntax tree corresponding to the maximum similarity score or the similarity score exceeding a preset threshold is selected from the sorting result as the target syntax tree.
- the following steps are also implemented: converting the child nodes of the syntax tree into child-node word vectors, and inputting the child-node word vectors into the trained first feature extraction model to obtain the root-node word vector, where the first feature extraction model is trained using a recursive neural network algorithm; using the root-node word vector as the grammatical feature of the syntax tree; inputting the user question into the trained second feature extraction model to obtain a question vector, where the second feature extraction model is trained using a recurrent neural network algorithm; and using the question vector as the question feature of the user question.
- the following steps are further implemented: returning the user's question answer to the terminal, so that the terminal displays the user's question answer.
- Non-volatile memory may include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
- Volatile memory may include random access memory (RAM) or external cache memory.
- RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), etc.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Computational Linguistics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Artificial Intelligence (AREA)
- Audiology, Speech & Language Pathology (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Data Mining & Analysis (AREA)
- Databases & Information Systems (AREA)
- Mathematical Physics (AREA)
- Human Computer Interaction (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
This application relates to a question and answer processing method and apparatus, a computer device, and a storage medium. The method includes: receiving a user question answer instruction, and obtaining a user question according to the user question answer instruction; constructing multiple syntax trees from the user question; computing the similarity between the multiple syntax trees and the user question, and determining a target syntax tree according to the similarity; and converting the target syntax tree into a query statement, and executing the query statement to obtain the user question answer corresponding to the user question answer instruction. The method can eliminate ambiguity in the user question and improve the accuracy of the obtained answer.
Description
CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims priority to Chinese Patent Application No. 2019104286354, entitled "Question and Answer Processing Method, Apparatus, Computer Device and Storage Medium", filed with the China National Intellectual Property Administration on May 22, 2019, the entire contents of which are incorporated herein by reference.
This application relates to a question and answer processing method and apparatus, a computer device, and a storage medium.
With the development of Internet technology, FAQ robots widely used in mobile-phone assistant and intelligent customer service scenarios classify user intent using regular expressions, templates, or machine-learning-based classification methods, and then return an answer according to the intent. However, because such robots cannot fully understand the user's question and the information it contains, traditional question-answering robots can usually only be configured with fixed answers and cannot handle the richer meanings in a user question, so the accuracy of the answers to user questions is not high.
SUMMARY
According to various embodiments disclosed in this application, a question and answer processing method and apparatus, a computer device, and a storage medium are provided.
A question and answer processing method includes:
receiving a user question answer instruction, and obtaining a user question according to the user question answer instruction;
constructing multiple syntax trees from the user question;
computing the similarity between the multiple syntax trees and the user question, and determining a target syntax tree according to the similarity; and
converting the target syntax tree into a query statement, and executing the query statement to obtain the user question answer corresponding to the user question answer instruction.
A question and answer processing apparatus includes:
a question obtaining module, configured to receive a user question answer instruction and obtain a user question according to the user question answer instruction;
a tree construction module, configured to construct multiple syntax trees from the user question;
a target tree determination module, configured to compute the similarity between the multiple syntax trees and the user question, and determine a target syntax tree according to the similarity; and
a statement execution module, configured to convert the target syntax tree into a query statement, and execute the query statement to obtain the user question answer.
A computer device includes a memory and one or more processors. The memory stores computer-readable instructions that, when executed by the processors, cause the one or more processors to perform the following steps:
receiving a user question answer instruction, and obtaining a user question according to the user question answer instruction;
constructing multiple syntax trees from the user question;
computing the similarity between the multiple syntax trees and the user question, and determining a target syntax tree according to the similarity; and
converting the target syntax tree into a query statement, and executing the query statement to obtain the user question answer corresponding to the user question answer instruction.
One or more non-volatile storage media storing computer-readable instructions are provided. When executed by one or more processors, the computer-readable instructions cause the one or more processors to perform the following steps:
receiving a user question answer instruction, and obtaining a user question according to the user question answer instruction;
constructing multiple syntax trees from the user question;
computing the similarity between the multiple syntax trees and the user question, and determining a target syntax tree according to the similarity; and
converting the target syntax tree into a query statement, and executing the query statement to obtain the user question answer corresponding to the user question answer instruction.
Details of one or more embodiments of this application are set forth in the drawings and description below. Other features and advantages of this application will become apparent from the specification, the drawings, and the claims.
To describe the technical solutions in the embodiments of this application more clearly, the drawings required in the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of this application; those of ordinary skill in the art may derive other drawings from them without creative effort.
FIG. 1 is a diagram of an application scenario of a question processing method according to one or more embodiments.
FIG. 2 is a schematic flowchart of a question processing method according to one or more embodiments.
FIG. 3 is a schematic flowchart of constructing multiple syntax trees according to one or more embodiments.
FIG. 3a is a schematic diagram of a syntax tree constructed in a specific embodiment.
FIG. 3b is a schematic diagram of another syntax tree constructed in the embodiment of FIG. 3a.
FIG. 4 is a schematic flowchart of obtaining a basic vocabulary sequence according to one or more embodiments.
FIG. 5 is a schematic flowchart of obtaining word pieces of specific types according to one or more embodiments.
FIG. 6 is a schematic flowchart of obtaining a target syntax tree according to one or more embodiments.
FIG. 7 is a schematic flowchart of feature extraction according to one or more embodiments.
FIG. 8 is a diagram of an application scenario of a question processing method in another embodiment.
FIG. 9 is a schematic flowchart of a question processing method according to one or more specific embodiments.
FIG. 9a is a schematic diagram of the target syntax tree obtained in the specific embodiment of FIG. 9.
FIG. 10 is a structural block diagram of a question processing apparatus according to one or more embodiments.
FIG. 11 is an internal structure diagram of a computer device according to one or more embodiments.
FIG. 12 is an internal structure diagram of a computer device in another embodiment.
To make the technical solutions and advantages of this application clearer, this application is further described in detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain this application and are not intended to limit it.
The question and answer processing method provided by this application can be applied in the application environment shown in FIG. 1. A terminal 102 communicates with a server 104 over a network. The server 104 receives a user question answer instruction sent by the terminal 102 and obtains the user question according to the instruction; constructs multiple syntax trees from the user question; computes the similarity between the multiple syntax trees and the user question and determines a target syntax tree according to the similarity; and converts the target syntax tree into a query statement and executes the query statement to obtain the user question answer corresponding to the instruction. The terminal 102 may be, but is not limited to, a personal computer, notebook computer, smartphone, tablet computer, or portable wearable device; the server 104 may be implemented as an independent server or as a server cluster composed of multiple servers.
In one embodiment, as shown in FIG. 2, a question and answer processing method is provided. Taking the method applied to the server in FIG. 1 as an example, it includes the following steps:
S202: receive a user question answer instruction, and obtain the user question according to the user question answer instruction.
Specifically, the terminal obtains the user question text. It may obtain a spoken question through a speech device and convert the spoken question into user question text, or it may obtain user question text entered by the user through an input device, and so on. The terminal then sends a user question answer instruction to the server according to the obtained user question text; the server receives the instruction, which carries the user question text, parses it, and obtains the user question.
S204: construct multiple syntax trees from the user question.
Specifically, the server constructs multiple syntax trees corresponding to the user question using a syntactic parsing algorithm, which may be a CFG (context-free grammar) parsing algorithm or a dependency parsing algorithm.
S206: compute the similarity between the multiple syntax trees and the user question, and determine the target syntax tree according to the similarity.
Specifically, a similarity algorithm is used to compute the similarity between each syntax tree and the user question, and the target syntax tree is determined according to these similarities. The syntax tree with the highest similarity may be taken as the target syntax tree, or a syntax tree whose similarity exceeds a preset similarity threshold may be taken as the target syntax tree. When no syntax tree exceeds the preset threshold, the syntax tree whose similarity is closest to the threshold is taken as the target syntax tree. When multiple syntax trees exceed the threshold, one of them is selected at random, or one is selected according to a preset selection rule. The similarity algorithm may be a Euclidean distance similarity algorithm, a cosine similarity algorithm, or the like.
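As a rough illustration of this selection logic (a sketch, not the patent's actual implementation), the maximum-score, threshold, and closest-to-threshold fallback rules can be expressed over cosine similarity as follows; the vectors stand in for whatever features the trees and question are encoded into:

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def pick_target_tree(tree_vectors, question_vector, threshold=None):
    """Return the index of the target syntax tree.

    With no threshold, the tree with the highest similarity wins.
    With a threshold, the first tree above it wins; if no tree
    exceeds it, the tree whose score is closest to the threshold
    is used, mirroring the fallback described in the text.
    """
    scores = [cosine_similarity(v, question_vector) for v in tree_vectors]
    if threshold is None:
        return max(range(len(scores)), key=scores.__getitem__)
    above = [i for i, s in enumerate(scores) if s > threshold]
    if above:
        return above[0]  # a preset selection rule could be applied here instead
    return min(range(len(scores)), key=lambda i: abs(scores[i] - threshold))
```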
S208: convert the target syntax tree into a query statement, and execute the query statement to obtain the user question answer corresponding to the user question answer instruction.
Specifically, the target syntax tree is converted into an executable query statement by a translator, and the query statement is executed against a knowledge base or knowledge graph to obtain the user question answer corresponding to the user question answer instruction.
In the above question and answer processing method, multiple syntax trees are retained when parsing the user question; the similarity between each syntax tree and the user question is then computed, the trees are filtered by similarity to obtain the target syntax tree, and a query statement constructed from the target syntax tree is executed to obtain the answer corresponding to the user question answer instruction. In other words, similarity is used to eliminate ambiguity in the user question, improving the accuracy of the obtained answer.
In one embodiment, as shown in FIG. 3, step S204 of constructing multiple syntax trees from the user question includes the following steps:
S302: preprocess the user question to obtain a basic vocabulary sequence.
The basic vocabulary sequence consists of word pieces with specific types. A specific type may be, for example, an entity class, a time class, or a number class, such as person name, place name, organization name, time, date, currency, or percentage. Specific types may also be preset types related to the application scenario; for example, in a stock price inquiry scenario, the specific types may include property, compareop (comparison), value, entity set, and so on.
Specifically, the server preprocesses the user question. The preprocessing may segment the user question into words and determine the type of each resulting word piece, yielding the basic vocabulary sequence.
S304: construct multiple syntax trees from the basic vocabulary sequence using grammar rules.
Grammar rules are rules for constructing syntax trees from the basic vocabulary sequence, for example a CFG (context-free grammar).
Specifically, the server constructs multiple syntax trees from the basic vocabulary sequence using the grammar rules. The number of syntax trees is related to the ambiguity of the user question: the more ambiguous the question, the more syntax trees are obtained.
For example, for the ambiguous sentence "小明拿出练习本来应有的态度吗", the constructed syntax trees may include those shown in FIG. 3a and FIG. 3b.
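The link between ambiguity and the number of trees can be sketched with a toy context-free grammar (an illustration only; the patent's actual grammar is scenario-specific). The grammar below is in Chomsky normal form with the single ambiguous rule S -> S S | 'a', so a three-word sentence admits two distinct parses, just as an ambiguous user question admits several syntax trees:

```python
from functools import lru_cache

# Toy ambiguous CNF grammar: S -> S S | 'a'
BINARY = {("S", "S"): ["S"]}   # (left child, right child) -> possible parents
LEXICAL = {"a": ["S"]}         # terminal word -> possible preterminals

def parse(words):
    """Enumerate every parse tree of `words` rooted at S.

    Trees are nested tuples: leaves are (symbol, word), internal
    nodes are (symbol, left_subtree, right_subtree).
    """
    n = len(words)

    @lru_cache(maxsize=None)
    def trees(i, j, sym):
        found = []
        if j - i == 1:  # single-word span: lexical rule
            if sym in LEXICAL.get(words[i], []):
                found.append((sym, words[i]))
            return found
        for k in range(i + 1, j):  # binary split point
            for (b, c), heads in BINARY.items():
                if sym in heads:
                    for left in trees(i, k, b):
                        for right in trees(k, j, c):
                            found.append((sym, left, right))
        return found

    return trees(0, n, "S")
```

Here `len(parse(["a"] * n))` grows as the Catalan numbers, which is why keeping all trees and disambiguating afterwards (as in S206) matters.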
In the above embodiment, the user question is preprocessed to obtain a basic vocabulary sequence, and multiple syntax trees are constructed from it using the grammar rules, so each of the ambiguous syntax trees can be obtained, which facilitates subsequent disambiguation.
In one embodiment, as shown in FIG. 4, step S302 of preprocessing the user question to obtain the basic vocabulary sequence includes the following steps:
S402: segment the user question into words to obtain a segmentation result.
Word segmentation is the process of recombining a continuous character sequence into a word sequence according to certain specifications.
Specifically, the server segments the user question to obtain the segmentation result, for example using a string-matching-based, understanding-based, or statistics-based segmentation method. For instance, segmenting "小明住什么地方" ("Where does Xiao Ming live") may yield the segments "小明" (Xiao Ming), "住" (live), "什么" (what), and "地方" (place).
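The string-matching family of segmentation methods mentioned here can be sketched as forward maximum matching against a dictionary (a minimal illustration; the vocabulary below is an assumption, and a real system would use a full lexicon or a statistical segmenter):

```python
# Tiny illustrative vocabulary; single characters are always accepted
# as a fallback so segmentation never gets stuck.
VOCAB = {"小明", "住", "什么", "地方"}

def segment(text, max_len=4):
    """Forward maximum matching: at each position, take the longest
    dictionary word starting there (or a single character)."""
    words, i = [], 0
    while i < len(text):
        for span in range(min(max_len, len(text) - i), 0, -1):
            piece = text[i:i + span]
            if span == 1 or piece in VOCAB:
                words.append(piece)
                i += span
                break
    return words
```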
S404: identify the specific type corresponding to each segmented word to obtain word pieces of specific types.
Specifically, the server identifies the specific type of each segmented word and labels each word accordingly, obtaining word pieces annotated with specific types. For example, "小明", "住", "什么", and "地方" may be identified as the specific types "person name", "verb", "adjective", and "noun", respectively.
S406: combine the word pieces of specific types according to the grammar rules to obtain the basic vocabulary sequence.
Specifically, the server combines the typed word pieces according to the grammar rules, for example by combining adjacent word pieces, to obtain the basic vocabulary sequence. For instance, "小明" and "住" may be combined into "小明住", and "什么" and "地方" into "什么地方".
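The rule-driven merging of adjacent typed word pieces can be sketched as follows. The rule table is a hypothetical stand-in (modeled on the stock example later in the description, not the patent's actual rule set):

```python
# Hypothetical grammar rules: a pair of adjacent piece types merges
# into a new typed piece.
RULES = {
    ("compareop", "value"): "datarange",   # e.g. ">" + "50" -> ">50"
    ("property", "datarange"): "adj",      # "price" + ">50" -> "price>50"
}

def combine(pieces):
    """pieces: list of (text, type). Repeatedly merge the first
    adjacent pair matching a rule until no rule applies."""
    pieces = list(pieces)
    changed = True
    while changed:
        changed = False
        for i in range(len(pieces) - 1):
            key = (pieces[i][1], pieces[i + 1][1])
            if key in RULES:
                merged = (pieces[i][0] + pieces[i + 1][0], RULES[key])
                pieces[i:i + 2] = [merged]
                changed = True
                break
    return pieces
```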
In the above embodiment, the user question is segmented to obtain a segmentation result, the specific type of each segmented word is identified to obtain typed word pieces, and the typed word pieces are combined according to the grammar rules, so that the basic vocabulary sequence can be obtained conveniently.
In one embodiment, as shown in FIG. 5, step S404 of identifying the specific types corresponding to the segmentation result to obtain word pieces of specific types includes the following steps:
S502: input the segmentation result into a trained named entity recognition model for recognition to obtain the specific type corresponding to each segmented word; the named entity recognition model is trained using a neural network algorithm.
The named entity recognition model is an NER (Named Entity Recognition) model. It is trained with existing named entities and their corresponding specific types using a neural network algorithm; when the training completion condition is met, the trained named entity recognition model is obtained.
Specifically, the server inputs the segmentation result into the trained named entity recognition model for recognition and obtains the specific type of each segmented word. For example, inputting the segments "小明", "住", "什么", and "地方" into the trained model may output the specific types "person name", "verb", "adjective", and "noun".
S504: label each segmented word as a word piece of its specific type according to the specific type corresponding to the segmentation result.
Specifically, the server labels each segmented word according to the output specific type, obtaining word pieces with specific types.
In the above embodiment, the trained named entity recognition model identifies the specific type corresponding to each segmented word, and the segments are labeled accordingly to obtain typed word pieces, so the specific types can be obtained quickly and efficiency is improved.
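The labeling step (S504) can be sketched independently of the model itself. Here a lookup table plays the role of the trained NER model so the tagging interface is visible; the table entries are illustrative assumptions taken from the stock example later in the description:

```python
# Stand-in for a trained NER model: word -> specific type.
TYPE_LOOKUP = {
    "股价": "property",     # stock price
    "大于": "compareop",    # greater than
    "50元": "value",        # 50 yuan
    "个股": "entity set",   # individual stock
}

def label_segments(segments, default="unknown"):
    # Tag each segmented word with its specific type, producing
    # the typed word pieces consumed by the grammar rules.
    return [(word, TYPE_LOOKUP.get(word, default)) for word in segments]
```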
In one embodiment, as shown in FIG. 6, step S206 of computing the similarity between the multiple syntax trees and the user question and determining the target syntax tree according to the similarity includes the following steps:
S602: extract the grammatical features of the multiple syntax trees and the question feature of the user question.
Specifically, the server extracts the grammatical feature of each syntax tree and the question feature of the user question, for example using trained feature extraction models.
S604: compute the similarity score between each grammatical feature and the question feature, and sort the multiple syntax trees by similarity score to obtain a ranking of the multiple syntax trees.
Specifically, the server computes the similarity between each syntax tree's grammatical feature and the question feature to obtain similarity scores, and sorts the trees by score to obtain the ranking result set. A similarity algorithm such as cosine distance or Euclidean distance may be used to compute the similarity.
S606: from the ranking result, select the syntax tree corresponding to the highest similarity score, or to a similarity score exceeding a preset threshold, as the target syntax tree.
Specifically, the server selects from the ranking result set the syntax tree with the highest similarity score as the target syntax tree, or a syntax tree whose score exceeds the preset threshold. When no tree's score exceeds the threshold, the score closest to the threshold is obtained and the corresponding tree is taken as the target syntax tree. When multiple trees exceed the threshold, one of them is selected at random as the target syntax tree.
In the above embodiment, computing the similarity between each syntax tree and the user question and selecting the most similar tree, or a tree exceeding the preset similarity, eliminates the ambiguous syntax trees and removes the ambiguity in the user question.
In one embodiment, as shown in FIG. 7, step S602 of extracting the grammatical features of the multiple syntax trees and the question feature of the user question includes the following steps:
S702: convert the child nodes of a syntax tree into child-node word vectors, and input the child-node word vectors into a trained first feature extraction model to obtain the root-node word vector; the first feature extraction model is trained using a recursive neural network algorithm.
A word vector is generated from the word at a node of the syntax tree. For example, for the node word "话筒" (microphone), the word vector may be represented as [0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 ...]. The dimensionality of this word vector is the vocabulary size; the dimension representing "话筒" has value 1 and all other dimensions have value 0. A word vector may also be derived from the word's position in the vocabulary; for example, if "话筒" is at position 3 in the vocabulary, its word vector may be [3].
Specifically, the server converts the syntax tree's child nodes into child-node word vectors and inputs them into the trained first feature extraction model to obtain the root-node word vector. The first feature extraction model is trained using a recursive neural network (TreeRNN, Tree Recursive Neural Net) algorithm: the child nodes of existing syntax trees are used as input, the word vectors of the corresponding root nodes are used as labels, and training proceeds until the completion condition is met, yielding the first feature extraction model. The training completion condition is that the number of training iterations reaches a maximum or the loss function value falls below a preset threshold. The tanh function may be used as the activation function, with a cross-entropy loss function.
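The bottom-up TreeRNN composition can be sketched as follows: each internal node's vector is tanh applied to a linear map of its children's concatenated vectors, and the root vector becomes the tree's grammatical feature. The 2x4 weight matrix and 2-dimensional embeddings here are fixed toy values, not trained parameters:

```python
import math

# Toy composition weights: maps concatenated [left; right] (4-d) to 2-d.
W = [[0.5, -0.25, 0.5, 0.25],
     [0.25, 0.5, -0.25, 0.5]]

def compose(left, right):
    # One TreeRNN step: tanh(W @ [left; right]).
    x = left + right  # concatenation of two 2-d child vectors
    return [math.tanh(sum(w * v for w, v in zip(row, x))) for row in W]

def tree_feature(tree, embed):
    """Recursively encode a tree.

    tree: a word (leaf) or a (left, right) pair of subtrees;
    embed: word -> 2-d vector. Returns the root vector, i.e. the
    tree's grammatical feature.
    """
    if isinstance(tree, str):
        return embed[tree]
    left, right = tree
    return compose(tree_feature(left, embed), tree_feature(right, embed))
```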
S704: use the root-node word vector as the grammatical feature of the syntax tree.
Specifically, the server directly uses the root-node word vector output by the first feature extraction model as the grammatical feature of the syntax tree.
S706: input the user question into a trained second feature extraction model to obtain the question vector; the second feature extraction model is trained using a recurrent neural network algorithm.
S708: use the question vector as the question feature of the user question.
The second feature extraction model is trained using a recurrent neural network (RNN) algorithm. When training the second feature extraction model, existing user questions are used as the input of the recurrent neural network and the corresponding question vectors are used as labels; when the training completion condition is met, the second feature extraction model is obtained. The training completion condition is that the number of training iterations reaches a maximum or the loss function value falls below a preset threshold. The loss function is a cross-entropy loss function; the activation function of the output layer is the softmax (normalized exponential) function, and the activation function of the hidden layer is the tanh (hyperbolic tangent) function.
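A minimal sketch of this recurrent encoder: a plain RNN with a tanh hidden layer reads the question's word vectors one by one, and the final hidden state serves as the question vector. The weight matrices are fixed toy values standing in for trained parameters, and the softmax output layer is omitted since only the hidden-state feature is needed here:

```python
import math

Wx = [[0.6, -0.2], [0.1, 0.7]]   # input -> hidden weights (toy values)
Wh = [[0.3, 0.1], [-0.1, 0.3]]   # hidden -> hidden weights (toy values)

def encode_question(word_vectors):
    """Run h_t = tanh(Wx @ x_t + Wh @ h_{t-1}) over the sequence
    and return the final hidden state as the question vector."""
    h = [0.0, 0.0]
    for x in word_vectors:
        h = [math.tanh(sum(a * b for a, b in zip(Wx[i], x)) +
                       sum(a * b for a, b in zip(Wh[i], h)))
             for i in range(2)]
    return h
```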
Specifically, the server inputs the user question into the trained second feature extraction model to obtain the question vector, and uses the question vector as the question feature of the user question.
In the above embodiment, the trained feature extraction models extract the question feature of the user question and the grammatical features of the syntax trees, which improves the efficiency of obtaining these features and facilitates their subsequent use.
In one embodiment, after step S208, that is, after the target syntax tree is converted into a query statement and the query statement is executed in the knowledge graph to obtain the user question answer, the method further includes:
returning the user question answer to the terminal, so that the terminal displays the user question answer.
The terminal receives and displays the answer; it may be the terminal corresponding to the user question or another terminal, and is not limited to a personal computer, notebook computer, smartphone, tablet computer, or portable wearable device. Display is not limited to text or image display, voice playback, video playback, and so on. For example, the terminal may show the received answer on its display interface, play it on a video playback interface, or convert it into speech through a voice device for playback.
Specifically, the server may return the obtained user question answer to the terminal corresponding to the user question, which displays the answer after receiving it, so that the user obtains the answer to the question conveniently. The answer may also be returned to a terminal designated by the user or the server, and displayed on that terminal. For example, in a specific embodiment shown in FIG. 8, the user submits a question through mobile phone 8A; server 8B obtains the user question, finds the answer using the question and answer processing method of any of the above embodiments, and returns the answer to the computer 8C designated by the user for display.
In a specific embodiment, as shown in FIG. 9, the question and answer processing method includes the following steps:
S902: receive a user question answer instruction, obtain the user question according to the instruction, and segment the user question into words to obtain the segmentation result.
S904: input the segmentation result into a trained NER model for recognition, obtain the specific type corresponding to each segmented word, and label each segmented word as a word piece of its specific type.
S906: combine the typed word pieces according to the CFG grammar rules to obtain the basic vocabulary sequence, and construct multiple syntax trees from the basic vocabulary sequence using the CFG grammar rules.
S908: convert the child nodes of each syntax tree into child-node word vectors, and input them into the trained TreeRNN model for extraction to obtain each syntax tree's root-node word vector.
S910: input the user question into the trained RNN model for extraction to obtain the question vector.
S912: compute the similarity between each syntax tree's root-node word vector and the question vector using cosine distance to obtain similarity scores, and sort the multiple syntax trees by similarity score to obtain the ranking result.
S914: select from the ranking result the syntax tree corresponding to the highest similarity score as the target syntax tree.
S916: convert the target syntax tree into a query statement, and execute the query statement to obtain the user question answer corresponding to the user question answer instruction.
S918: return the user question answer to the terminal corresponding to the user question for display.
Specifically, suppose the user question "股价大于50元的个股?" ("Which stocks have a price greater than 50 yuan?") is obtained. Segmenting it yields "股价" (stock price), "大于" (greater than), "50元" (50 yuan), and "个股" (individual stock). Labeling the segments as typed word pieces gives: "股价" (price) has the specific type "property", "大于" (>) has the specific type "compareop", "50元" (50) has the specific type "value", and "个股" (stock) has the specific type "entity set". The pieces are then combined according to the CFG grammar rules: "大于" and "50元" combine into the Datarange (value range) ">50"; "股价" and the value range combine into the ADJ (adjective) "price>50"; and "个股" and the adjective combine into the Entity set "stock[price>50] of which". This yields the tree shown in FIG. 9a; the other ambiguous syntax trees can be obtained in the same way. Finally, the similarity between each syntax tree and the user question is computed, and the tree with the highest similarity is selected as the target syntax tree, which may be the tree of FIG. 9a. FIG. 9a is converted into the query statement "SELECT ?x WHERE (?x a Stock. ?x attr:price ?xprice. filter(?xprice>50))", which is executed in a stock-market knowledge graph to obtain the user question answer corresponding to the user question answer instruction; the answer is then returned to the terminal for display.
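The final translation step of this example (S916) can be sketched as a tree walk that renders the target syntax tree into a query string. The node shapes and the output template below are assumptions modeled on the example query above, not the patent's actual translator:

```python
def tree_to_query(tree):
    """Render a target syntax tree as a query statement.

    Assumed (hypothetical) tree shape:
        ("entity_set", class_name, ("adj", property, op, value))
    e.g. the stock example: ("entity_set", "Stock", ("adj", "price", ">", 50)).
    """
    _, cls, adj = tree
    _, prop, op, value = adj
    return ("SELECT ?x WHERE (?x a {cls}. ?x attr:{p} ?x{p}. "
            "filter(?x{p}{op}{v}))").format(cls=cls, p=prop, op=op, v=value)
```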
It should be understood that although the steps in the flowcharts of FIGS. 2-9 are shown sequentially as indicated by the arrows, they are not necessarily executed in that order. Unless explicitly stated herein, there is no strict ordering restriction on the execution of these steps, and they may be executed in other orders. Moreover, at least some of the steps in FIGS. 2-9 may include multiple sub-steps or stages, which are not necessarily completed at the same time but may be executed at different times; their execution order is not necessarily sequential, and they may be executed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
In one embodiment, as shown in FIG. 10, a question and answer processing apparatus 1000 is provided, including a question obtaining module 1002, a tree construction module 1004, a target tree determination module 1006, and a statement execution module 1008, where:
the question obtaining module 1002 is used to receive a user question answer instruction and obtain the user question according to the user question answer instruction;
the tree construction module 1004 is used to construct multiple syntax trees from the user question;
the target tree determination module 1006 is used to compute the similarity between the multiple syntax trees and the user question and determine the target syntax tree according to the similarity;
the statement execution module 1008 is used to convert the target syntax tree into a query statement and execute the query statement to obtain the user question answer.
In one embodiment, the tree construction module 1004 includes:
a preprocessing module, used to preprocess the user question to obtain a basic vocabulary sequence;
a construction module, used to construct multiple syntax trees from the basic vocabulary sequence using grammar rules.
In one embodiment, the preprocessing module includes:
a segmentation module, used to segment the user question to obtain a segmentation result;
an identification module, used to identify the specific type corresponding to the segmentation result to obtain word pieces of specific types;
a combination module, used to combine the typed word pieces according to the grammar rules to obtain the basic vocabulary sequence.
In one embodiment, the identification module includes:
a model identification module, used to input the segmentation result into the trained named entity recognition model for recognition to obtain the specific type corresponding to the segmentation result, where the named entity recognition model is trained using a neural network algorithm;
a labeling module, used to label the segmentation result as word pieces of specific types according to the specific types corresponding to the segmentation result.
In one embodiment, the target tree determination module 1006 includes:
a feature extraction module, used to extract the grammatical features of the multiple syntax trees and the question feature of the user question;
a score computation module, used to compute the similarity score between the grammatical features and the question feature, and sort the multiple syntax trees by similarity score to obtain the ranking result;
a target tree selection module, used to select from the ranking result the syntax tree corresponding to the highest similarity score, or to a similarity score exceeding a preset threshold, as the target syntax tree.
In one embodiment, the feature extraction module includes:
a word vector module, used to convert the child nodes of the syntax tree into child-node word vectors and input them into the trained first feature extraction model to obtain the root-node word vector, where the first feature extraction model is trained using a recursive neural network algorithm;
a grammatical feature module, used to use the root-node word vector as the grammatical feature of the syntax tree;
a question vector module, used to input the user question into the trained second feature extraction model to obtain the question vector, where the second feature extraction model is trained using a recurrent neural network algorithm;
a question feature module, used to use the question vector as the question feature of the user question.
In one embodiment, the question and answer processing apparatus 1000 further includes:
a question display module, used to return the user question answer to the terminal so that the terminal displays the user question answer.
For the specific limitations of the question and answer processing apparatus, refer to the limitations of the question and answer processing method above, which are not repeated here. Each module in the above apparatus may be implemented in whole or in part by software, hardware, or a combination thereof. The modules may be embedded in, or independent of, the processor of the computer device in the form of hardware, or stored in the memory of the computer device in the form of software, so that the processor can invoke and execute the operations corresponding to each module.
In one embodiment, a computer device is provided. The computer device may be a server, and its internal structure may be as shown in FIG. 11. The computer device includes a processor, memory, network interface, and database connected through a system bus. The processor of the computer device provides computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and internal memory. The non-volatile storage medium stores an operating system, computer-readable instructions, and a database. The internal memory provides an environment for running the operating system and computer-readable instructions in the non-volatile storage medium. The database of the computer device is used to store user question answer data. The network interface of the computer device is used to communicate with external terminals through a network connection. The computer-readable instructions, when executed by the processor, implement a question and answer processing method.
In one embodiment, a computer device is provided. The computer device may be a terminal, and its internal structure may be as shown in FIG. 12. The computer device includes a processor, memory, network interface, display screen, and input device connected through a system bus. The processor of the computer device provides computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and internal memory. The non-volatile storage medium stores an operating system and computer-readable instructions. The internal memory provides an environment for running the operating system and computer-readable instructions in the non-volatile storage medium. The network interface of the computer device is used to communicate with external terminals through a network connection. The computer-readable instructions, when executed by the processor, implement a question and answer processing method. The display screen of the computer device may be a liquid crystal display or an electronic ink display; the input device may be a touch layer covering the display screen, a button, trackball, or touchpad provided on the housing of the computer device, or an external keyboard, touchpad, or mouse.
Those skilled in the art can understand that the structures shown in FIG. 11 and FIG. 12 are only block diagrams of parts of the structure related to the solution of this application and do not limit the computer device to which the solution is applied; a specific computer device may include more or fewer components than shown in the figures, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, including a memory and one or more processors. The memory stores computer-readable instructions that, when executed by the processors, cause the one or more processors to perform the following steps: receiving a user question answer instruction and obtaining the user question according to the instruction; constructing multiple syntax trees from the user question; computing the similarity between the multiple syntax trees and the user question and determining a target syntax tree according to the similarity; and converting the target syntax tree into a query statement and executing the query statement to obtain the user question answer corresponding to the user question answer instruction.
In one embodiment, the processor further implements the following steps when executing the computer-readable instructions: preprocessing the user question to obtain a basic vocabulary sequence; and constructing multiple syntax trees from the basic vocabulary sequence using grammar rules.
In one embodiment, the processor further implements the following steps when executing the computer-readable instructions: segmenting the user question to obtain a segmentation result; identifying the specific type corresponding to the segmentation result to obtain word pieces of specific types; and combining the typed word pieces according to the grammar rules to obtain the basic vocabulary sequence.
In one embodiment, the processor further implements the following steps when executing the computer-readable instructions: inputting the segmentation result into the trained named entity recognition model for recognition to obtain the specific type corresponding to the segmentation result, where the named entity recognition model is trained using a neural network algorithm; and labeling the segmentation result as word pieces of specific types according to the specific types corresponding to the segmentation result.
In one embodiment, the processor further implements the following steps when executing the computer-readable instructions: extracting the grammatical features of the multiple syntax trees and the question feature of the user question; computing the similarity score between the grammatical features and the question feature, and sorting the multiple syntax trees by similarity score to obtain a ranking of the multiple syntax trees; and selecting from the ranking the syntax tree corresponding to the highest similarity score, or to a similarity score exceeding a preset threshold, as the target syntax tree.
In one embodiment, the processor further implements the following steps when executing the computer-readable instructions: converting the child nodes of the syntax tree into child-node word vectors, and inputting the child-node word vectors into the trained first feature extraction model to obtain the root-node word vector, where the first feature extraction model is trained using a recursive neural network algorithm; using the root-node word vector as the grammatical feature of the syntax tree; inputting the user question into the trained second feature extraction model to obtain the question vector, where the second feature extraction model is trained using a recurrent neural network algorithm; and using the question vector as the question feature of the user question.
In one embodiment, the processor further implements the following step when executing the computer-readable instructions: returning the user question answer to the terminal, so that the terminal displays the user question answer.
In one embodiment, one or more non-volatile storage media storing computer-readable instructions are provided. When executed by one or more processors, the computer-readable instructions cause the one or more processors to perform the following steps: receiving a user question answer instruction and obtaining the user question according to the instruction; constructing multiple syntax trees from the user question; computing the similarity between the multiple syntax trees and the user question and determining a target syntax tree according to the similarity; and converting the target syntax tree into a query statement and executing the query statement to obtain the user question answer corresponding to the user question answer instruction.
In one embodiment, when executed by the processor, the computer-readable instructions further implement the following steps: preprocessing the user question to obtain a basic vocabulary sequence; and constructing multiple syntax trees from the basic vocabulary sequence using grammar rules.
In one embodiment, when executed by the processor, the computer-readable instructions further implement the following steps: segmenting the user question to obtain a segmentation result; identifying the specific type corresponding to the segmentation result to obtain word pieces of specific types; and combining the typed word pieces according to the grammar rules to obtain the basic vocabulary sequence.
In one embodiment, when executed by the processor, the computer-readable instructions further implement the following steps: inputting the segmentation result into the trained named entity recognition model for recognition to obtain the specific type corresponding to the segmentation result, where the named entity recognition model is trained using a neural network algorithm; and labeling the segmentation result as word pieces of specific types according to the specific types corresponding to the segmentation result.
In one embodiment, when executed by the processor, the computer-readable instructions further implement the following steps: extracting the grammatical features of the multiple syntax trees and the question feature of the user question; computing the similarity score between the grammatical features and the question feature, and sorting the multiple syntax trees by similarity score to obtain a ranking of the multiple syntax trees; and selecting from the ranking the syntax tree corresponding to the highest similarity score, or to a similarity score exceeding a preset threshold, as the target syntax tree.
In one embodiment, when executed by the processor, the computer-readable instructions further implement the following steps: converting the child nodes of the syntax tree into child-node word vectors, and inputting the child-node word vectors into the trained first feature extraction model to obtain the root-node word vector, where the first feature extraction model is trained using a recursive neural network algorithm; using the root-node word vector as the grammatical feature of the syntax tree; inputting the user question into the trained second feature extraction model to obtain the question vector, where the second feature extraction model is trained using a recurrent neural network algorithm; and using the question vector as the question feature of the user question.
In one embodiment, when executed by the processor, the computer-readable instructions further implement the following step: returning the user question answer to the terminal, so that the terminal displays the user question answer.
Those of ordinary skill in the art can understand that all or part of the processes in the above method embodiments can be implemented by computer-readable instructions instructing related hardware. The computer-readable instructions may be stored in a non-volatile computer-readable storage medium, and when executed, may include the processes of the above method embodiments. Any reference to memory, storage, database, or other media used in the embodiments provided in this application may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), etc.
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as the combinations are not contradictory, they should be considered within the scope of this specification.
The above embodiments express only several implementations of this application, and their descriptions are specific and detailed, but they should not therefore be construed as limiting the scope of the patent. It should be noted that those of ordinary skill in the art may make several modifications and improvements without departing from the concept of this application, all of which fall within the scope of protection of this application. Therefore, the scope of protection of this patent shall be subject to the appended claims.
Claims (20)
- A question and answer processing method, the method comprising: receiving a user question answer instruction, and obtaining a user question according to the user question answer instruction; constructing multiple syntax trees from the user question; computing the similarity between the multiple syntax trees and the user question, and determining a target syntax tree according to the similarity; and converting the target syntax tree into a query statement, and executing the query statement to obtain the user question answer corresponding to the user question answer instruction.
- The method according to claim 1, wherein constructing multiple syntax trees from the user question comprises: preprocessing the user question to obtain a basic vocabulary sequence; and constructing multiple syntax trees from the basic vocabulary sequence using grammar rules.
- The method according to claim 2, wherein preprocessing the user question to obtain the basic vocabulary sequence comprises: segmenting the user question to obtain a segmentation result; identifying the specific type corresponding to the segmentation result to obtain word pieces of the specific type; and combining the word pieces of the specific type according to the grammar rules to obtain the basic vocabulary sequence.
- The method according to claim 3, wherein identifying the specific type corresponding to the segmentation result to obtain word pieces of the specific type comprises: inputting the segmentation result into a trained named entity recognition model for recognition to obtain the specific type corresponding to the segmentation result, wherein the named entity recognition model is trained using a neural network algorithm; and labeling the segmentation result as word pieces of the specific type according to the specific type corresponding to the segmentation result.
- The method according to any one of claims 1-4, wherein computing the similarity between the multiple syntax trees and the user question and determining the target syntax tree according to the similarity comprises: extracting the grammatical features of the multiple syntax trees and the question feature of the user question; computing the similarity score between the grammatical features and the question feature, and sorting the multiple syntax trees by similarity score to obtain a ranking of the multiple syntax trees; and selecting from the ranking the syntax tree corresponding to the highest similarity score, or to a similarity score exceeding a preset threshold, as the target syntax tree.
- The method according to claim 5, wherein extracting the grammatical features of the multiple syntax trees and the question feature of the user question comprises: converting the child nodes of the syntax tree into child-node word vectors, and inputting the child-node word vectors into a trained first feature extraction model to obtain the root-node word vector, wherein the first feature extraction model is trained using a recursive neural network algorithm; using the root-node word vector as the grammatical feature of the syntax tree; inputting the user question into a trained second feature extraction model to obtain a question vector, wherein the second feature extraction model is trained using a recurrent neural network algorithm; and using the question vector as the question feature of the user question.
- The method according to any one of claims 1-4, further comprising, after converting the target syntax tree into the query statement and executing the query statement in the knowledge graph to obtain the user question answer: returning the user question answer to a terminal, so that the terminal displays the user question answer.
- A question processing apparatus, comprising: a question obtaining module, configured to receive a user question answer instruction and obtain a user question according to the user question answer instruction; a tree construction module, configured to construct multiple syntax trees from the user question; a target tree determination module, configured to compute the similarity between the multiple syntax trees and the user question and determine a target syntax tree according to the similarity; and a statement execution module, configured to convert the target syntax tree into a query statement and execute the query statement to obtain the user question answer.
- The apparatus according to claim 8, wherein the tree construction module comprises: a preprocessing module, configured to preprocess the user question to obtain a basic vocabulary sequence; and a construction module, configured to construct multiple syntax trees from the basic vocabulary sequence using grammar rules.
- The apparatus according to claim 9, wherein the preprocessing module comprises: a segmentation module, configured to segment the user question to obtain a segmentation result; an identification module, configured to identify the specific type corresponding to the segmentation result to obtain word pieces of the specific type; and a combination module, configured to combine the word pieces of the specific type according to the grammar rules to obtain the basic vocabulary sequence.
- A computer device, comprising a memory and one or more processors, the memory storing computer-readable instructions that, when executed by the one or more processors, cause the one or more processors to perform the following steps: receiving a user question answer instruction, and obtaining a user question according to the user question answer instruction; constructing multiple syntax trees from the user question; computing the similarity between the multiple syntax trees and the user question, and determining a target syntax tree according to the similarity; and converting the target syntax tree into a query statement, and executing the query statement to obtain the user question answer corresponding to the user question answer instruction.
- The computer device according to claim 11, wherein the processor further performs the following steps when executing the computer-readable instructions: preprocessing the user question to obtain a basic vocabulary sequence; and constructing multiple syntax trees from the basic vocabulary sequence using grammar rules.
- The computer device according to claim 12, wherein the processor further performs the following steps when executing the computer-readable instructions: segmenting the user question to obtain a segmentation result; identifying the specific type corresponding to the segmentation result to obtain word pieces of the specific type; and combining the word pieces of the specific type according to the grammar rules to obtain the basic vocabulary sequence.
- The computer device according to claim 13, wherein the processor further performs the following steps when executing the computer-readable instructions: inputting the segmentation result into a trained named entity recognition model for recognition to obtain the specific type corresponding to the segmentation result, wherein the named entity recognition model is trained using a neural network algorithm; and labeling the segmentation result as word pieces of the specific type according to the specific type corresponding to the segmentation result.
- The computer device according to any one of claims 11-14, wherein the processor further performs the following steps when executing the computer-readable instructions: extracting the grammatical features of the multiple syntax trees and the question feature of the user question; computing the similarity score between the grammatical features and the question feature, and sorting the multiple syntax trees by similarity score to obtain a ranking of the multiple syntax trees; and selecting from the ranking the syntax tree corresponding to the highest similarity score, or to a similarity score exceeding a preset threshold, as the target syntax tree.
- One or more non-volatile computer-readable storage media storing computer-readable instructions that, when executed by one or more processors, cause the one or more processors to perform the following steps: receiving a user question answer instruction, and obtaining a user question according to the user question answer instruction; constructing multiple syntax trees from the user question; computing the similarity between the multiple syntax trees and the user question, and determining a target syntax tree according to the similarity; and converting the target syntax tree into a query statement, and executing the query statement to obtain the user question answer corresponding to the user question answer instruction.
- The storage medium according to claim 16, wherein the computer-readable instructions, when executed by the processor, further perform the following steps: preprocessing the user question to obtain a basic vocabulary sequence; and constructing multiple syntax trees from the basic vocabulary sequence using grammar rules.
- The storage medium according to claim 17, wherein the computer-readable instructions, when executed by the processor, further perform the following steps: segmenting the user question to obtain a segmentation result; identifying the specific type corresponding to the segmentation result to obtain word pieces of the specific type; and combining the word pieces of the specific type according to the grammar rules to obtain the basic vocabulary sequence.
- The storage medium according to claim 18, wherein the computer-readable instructions, when executed by the processor, further perform the following steps: inputting the segmentation result into a trained named entity recognition model for recognition to obtain the specific type corresponding to the segmentation result, wherein the named entity recognition model is trained using a neural network algorithm; and labeling the segmentation result as word pieces of the specific type according to the specific type corresponding to the segmentation result.
- The storage medium according to any one of claims 16-19, wherein the computer-readable instructions, when executed by the processor, further perform the following steps: extracting the grammatical features of the multiple syntax trees and the question feature of the user question; computing the similarity score between the grammatical features and the question feature, and sorting the multiple syntax trees by similarity score to obtain a ranking of the multiple syntax trees; and selecting from the ranking the syntax tree corresponding to the highest similarity score, or to a similarity score exceeding a preset threshold, as the target syntax tree.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910428635.4 | 2019-05-22 | ||
CN201910428635.4A CN110334179B (zh) | 2019-05-22 | 2019-05-22 | 问答处理方法、装置、计算机设备和存储介质 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020233131A1 true WO2020233131A1 (zh) | 2020-11-26 |
Family
ID=68139119
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/130597 WO2020233131A1 (zh) | 2019-05-22 | 2019-12-31 | 问答处理方法、装置、计算机设备和存储介质 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN110334179B (zh) |
WO (1) | WO2020233131A1 (zh) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112463949A (zh) * | 2020-12-01 | 2021-03-09 | 贝壳技术有限公司 | 数据召回方法与系统、交互方法及交互系统 |
CN113255351A (zh) * | 2021-06-22 | 2021-08-13 | 中国平安财产保险股份有限公司 | 语句意图识别方法、装置、计算机设备及存储介质 |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110334179B (zh) * | 2019-05-22 | 2020-12-29 | 深圳追一科技有限公司 | 问答处理方法、装置、计算机设备和存储介质 |
CN111259653B (zh) * | 2020-01-15 | 2022-06-24 | 重庆邮电大学 | 基于实体关系消歧的知识图谱问答方法、系统以及终端 |
CN111522966A (zh) * | 2020-04-22 | 2020-08-11 | 深圳追一科技有限公司 | 基于知识图谱的数据处理方法、装置、电子设备及介质 |
CN111783465B (zh) * | 2020-07-03 | 2024-04-30 | 深圳追一科技有限公司 | 一种命名实体归一化方法、系统及相关装置 |
CN113010651A (zh) * | 2021-03-02 | 2021-06-22 | 中国工商银行股份有限公司 | 一种针对用户提问的答复方法、装置及设备 |
CN113343713B (zh) * | 2021-06-30 | 2022-06-17 | 中国平安人寿保险股份有限公司 | 意图识别方法、装置、计算机设备及存储介质 |
CN113553411B (zh) | 2021-06-30 | 2023-08-29 | 北京百度网讯科技有限公司 | 查询语句的生成方法、装置、电子设备和存储介质 |
CN114358003A (zh) * | 2021-12-22 | 2022-04-15 | 上海浦东发展银行股份有限公司 | 目标句子识别方法、装置、设备、存储介质和程序产品 |
CN114723008A (zh) * | 2022-04-01 | 2022-07-08 | 北京健康之家科技有限公司 | 语言表征模型的训练方法、装置、设备、介质及用户响应方法 |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105786875A (zh) * | 2014-12-23 | 2016-07-20 | 北京奇虎科技有限公司 | 提供问答对数据搜索结果的方法和装置 |
CN108549658A (zh) * | 2018-03-12 | 2018-09-18 | 浙江大学 | 一种基于语法分析树上注意力机制的深度学习视频问答方法及系统 |
CN110334179A (zh) * | 2019-05-22 | 2019-10-15 | 深圳追一科技有限公司 | 问答处理方法、装置、计算机设备和存储介质 |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
AU2011269688A1 (en) * | 2010-06-25 | 2013-01-24 | Smart Technologies Ulc | Equation-based assessment grading method and participant response system employing same |
CN104462327B (zh) * | 2014-12-02 | 2018-09-11 | 百度在线网络技术(北京)有限公司 | 语句相似度的计算、搜索处理方法及装置 |
CN105701253B (zh) * | 2016-03-04 | 2019-03-26 | 南京大学 | 中文自然语言问句语义化的知识库自动问答方法 |
CN105868313B (zh) * | 2016-03-25 | 2019-02-12 | 浙江大学 | 一种基于模板匹配技术的知识图谱问答系统及方法 |
CN107885786B (zh) * | 2017-10-17 | 2021-10-26 | 东华大学 | 面向大数据的自然语言查询接口实现方法 |
-
2019
- 2019-05-22 CN CN201910428635.4A patent/CN110334179B/zh active Active
- 2019-12-31 WO PCT/CN2019/130597 patent/WO2020233131A1/zh active Application Filing
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105786875A (zh) * | 2014-12-23 | 2016-07-20 | 北京奇虎科技有限公司 | 提供问答对数据搜索结果的方法和装置 |
CN108549658A (zh) * | 2018-03-12 | 2018-09-18 | 浙江大学 | 一种基于语法分析树上注意力机制的深度学习视频问答方法及系统 |
CN110334179A (zh) * | 2019-05-22 | 2019-10-15 | 深圳追一科技有限公司 | 问答处理方法、装置、计算机设备和存储介质 |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112463949A (zh) * | 2020-12-01 | 2021-03-09 | 贝壳技术有限公司 | 数据召回方法与系统、交互方法及交互系统 |
CN113255351A (zh) * | 2021-06-22 | 2021-08-13 | 中国平安财产保险股份有限公司 | 语句意图识别方法、装置、计算机设备及存储介质 |
CN113255351B (zh) * | 2021-06-22 | 2023-02-03 | 中国平安财产保险股份有限公司 | 语句意图识别方法、装置、计算机设备及存储介质 |
Also Published As
Publication number | Publication date |
---|---|
CN110334179A (zh) | 2019-10-15 |
CN110334179B (zh) | 2020-12-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2020233131A1 (zh) | 问答处理方法、装置、计算机设备和存储介质 | |
CN108829757B (zh) | 一种聊天机器人的智能服务方法、服务器及存储介质 | |
WO2021068321A1 (zh) | 基于人机交互的信息推送方法、装置和计算机设备 | |
WO2022088672A1 (zh) | 基于bert的机器阅读理解方法、装置、设备及存储介质 | |
CN109635273B (zh) | 文本关键词提取方法、装置、设备及存储介质 | |
CN110427467B (zh) | 问答处理方法、装置、计算机设备和存储介质 | |
CN109858010B (zh) | 领域新词识别方法、装置、计算机设备和存储介质 | |
US20180276525A1 (en) | Method and neural network system for human-computer interaction, and user equipment | |
CN109522393A (zh) | 智能问答方法、装置、计算机设备和存储介质 | |
CN111695352A (zh) | 基于语义分析的评分方法、装置、终端设备及存储介质 | |
CN111046133A (zh) | 基于图谱化知识库的问答方法、设备、存储介质及装置 | |
CN111814466A (zh) | 基于机器阅读理解的信息抽取方法、及其相关设备 | |
CN111506714A (zh) | 基于知识图嵌入的问题回答 | |
WO2021204017A1 (zh) | 文本意图识别方法、装置以及相关设备 | |
WO2021027125A1 (zh) | 序列标注方法、装置、计算机设备和存储介质 | |
WO2019232893A1 (zh) | 文本的情感分析方法、装置、计算机设备和存储介质 | |
CN112287069B (zh) | 基于语音语义的信息检索方法、装置及计算机设备 | |
CN110377733B (zh) | 一种基于文本的情绪识别方法、终端设备及介质 | |
CN111400340B (zh) | 一种自然语言处理方法、装置、计算机设备和存储介质 | |
CN110362798B (zh) | 裁决信息检索分析方法、装置、计算机设备和存储介质 | |
WO2022174496A1 (zh) | 基于生成模型的数据标注方法、装置、设备及存储介质 | |
CN113094478B (zh) | 表情回复方法、装置、设备及存储介质 | |
WO2021244099A1 (zh) | 语音编辑方法、电子设备及计算机可读存储介质 | |
CN112101042A (zh) | 文本情绪识别方法、装置、终端设备和存储介质 | |
CN112183083A (zh) | 文摘自动生成方法、装置、电子设备及存储介质 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19930088 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 19930088 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205 DATED 22/04/2022) |