CN111008267A - Intelligent dialogue method and related equipment - Google Patents


Info

Publication number
CN111008267A
CN111008267A (application CN201911034425.3A)
Authority
CN
China
Prior art keywords
sentence
question
target
determining
sentences
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911034425.3A
Other languages
Chinese (zh)
Inventor
刘涛 (Liu Tao)
许开河 (Xu Kaihe)
王少军 (Wang Shaojun)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd
Priority to CN201911034425.3A
Priority to PCT/CN2019/117542 (published as WO2021082070A1)
Publication of CN111008267A
Legal status: Pending

Links

Images

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 — Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30 — Information retrieval of unstructured textual data
    • G06F 16/33 — Querying
    • G06F 16/332 — Query formulation
    • G06F 16/3329 — Natural language query formulation or dialogue systems
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 — Pattern recognition
    • G06F 18/20 — Analysing
    • G06F 18/22 — Matching criteria, e.g. proximity measures


Abstract

The application relates to the field of speech semantics, and in particular to an intelligent dialogue method and related equipment applied to an electronic device. The method comprises the following steps: determining N first question sentences based on a target question sentence input by a user, wherein each first question sentence is associated with a first answer sentence; determining N first parameters based on a preset neural network model, wherein the N first parameters correspond one-to-one to the N first question sentences; taking a target answer sentence as the answer sentence of the target question sentence, wherein the target answer sentence is the first answer sentence associated with the first question sentence corresponding to a target parameter, and the N first parameters include the target parameter; and outputting the target answer sentence. By adopting the embodiments of the present application, questions that do not appear in the corpus can be answered controllably.

Description

Intelligent dialogue method and related equipment
Technical Field
The present application relates to the field of electronic technologies, and in particular, to an intelligent dialogue method and related devices.
Background
Intelligent dialogue is an important application in the field of artificial intelligence. Humans naturally have the ability to analyze dialogue states, topics, and moods, and realizing intelligent dialogue on a machine is of great significance. Currently, intelligent dialogue is mainly implemented based on two kinds of models: the generative model and the rule model. The generative model can answer questions that do not appear in the corpus, but its answer sentences are uncontrollable; the rule model can control the answer sentences, but cannot answer questions that do not appear in the corpus. Therefore, how to controllably answer questions that do not appear in the corpus is a technical problem to be solved.
Disclosure of Invention
The embodiments of the application provide an intelligent dialogue method and related equipment, which are used to controllably answer questions that do not appear in the corpus.
In a first aspect, an embodiment of the present application provides an intelligent dialogue method, which is applied to an electronic device, and the method includes:
determining N first question sentences based on target question sentences input by a user, wherein the similarity between each first question sentence and the target question sentence is greater than or equal to a first threshold, N is an integer greater than 1, and each first question sentence is associated with a first answer sentence;
determining N first parameters based on a preset neural network model, wherein the N first parameters correspond to the N first question sentences one by one, and the N first parameters are used for evaluating the similarity between the corresponding first question sentences and the target question sentences;
taking a target answer sentence as an answer sentence of the target question sentence, wherein the target answer sentence is a first answer sentence associated with a first question sentence corresponding to a target parameter, the value of the target parameter is greater than or equal to a second threshold value, and the N first parameters comprise the target parameter;
and outputting the target answer sentence.
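As a rough illustration, the four steps of the first aspect can be sketched as follows. The toy corpus, the word-overlap similarity (standing in for both the coarse retrieval and the neural first parameter), and the thresholds are all assumptions for illustration, not the patent's implementation:

```python
# Illustrative sketch of the four claimed steps; names and data are invented.

CORPUS = {  # first question sentence -> associated first answer sentence
    "how do I reset my password": "Open Settings > Account > Reset password.",
    "how can I change my password": "Use the Reset password option in Settings.",
    "what is your refund policy": "Refunds are issued within 7 days.",
}

def word_overlap(a, b):
    """Crude Jaccard-style word overlap, standing in for a real similarity."""
    wa, wb = set(a.split()), set(b.split())
    return len(wa & wb) / len(wa | wb)

def coarse_retrieve(target, first_threshold=0.2):
    """Step 1: keep first question sentences roughly similar to the target."""
    return [q for q in CORPUS if word_overlap(target, q) >= first_threshold]

def answer(target, second_threshold=0.3):
    """Steps 2-4: score candidates, pick the best one above the threshold."""
    candidates = coarse_retrieve(target)
    if not candidates:
        return None
    best = max(candidates, key=lambda q: word_overlap(target, q))
    return CORPUS[best] if word_overlap(target, best) >= second_threshold else None

print(answer("what is your refund policy"))
```

The coarse retrieval guarantees every possible answer comes from the curated mapping (controllability), while the scoring step lets unseen phrasings still match a known question.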
In a second aspect, an embodiment of the present application provides an intelligent dialogue apparatus, which is applied to an electronic device, and the apparatus includes:
a determining unit, configured to determine N first question sentences based on a target question sentence input by a user, where a similarity between each first question sentence and the target question sentence is greater than or equal to a first threshold, where N is an integer greater than 1, and each first question sentence is associated with a first answer sentence; determining N first parameters based on a preset neural network model, wherein the N first parameters correspond to the N first question sentences one by one, and the N first parameters are used for evaluating the similarity between the corresponding first question sentences and the target question sentences; taking a target answer sentence as an answer sentence of the target question sentence, wherein the target answer sentence is a first answer sentence associated with a first question sentence corresponding to a target parameter, the value of the target parameter is greater than or equal to a second threshold value, and the N first parameters comprise the target parameter;
and the output unit is used for outputting the target answer sentence.
In a third aspect, embodiments of the present application provide an electronic device, which includes a processor, a memory, a communication interface, and one or more programs, stored in the memory and configured to be executed by the processor, the programs including instructions for performing some or all of the steps described in the method according to the first aspect of the embodiments of the present application.
In a fourth aspect, the present application provides a computer-readable storage medium, where the computer-readable storage medium is used to store a computer program, where the computer program is executed by a processor to implement part or all of the steps described in the method according to the first aspect of the present application.
It can be seen that, in the embodiment of the present application, N first question sentences are determined based on a target question sentence input by a user; N first parameters are then determined based on a preset neural network model, the N first parameters being used to evaluate the similarity between the corresponding first question sentences and the target question sentence; the first answer sentence associated with the first question sentence whose first parameter is greater than or equal to a second threshold is taken as the answer sentence of the target question sentence; and finally the answer sentence is output. Determining the N first question sentences based on the target question sentence performs a coarse screening and ensures the controllability of the answer sentence, while determining the N first parameters based on the preset neural network model allows questions that do not appear in the corpus to be answered flexibly.
These and other aspects of the present application will be more readily apparent from the following description of the embodiments.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. Obviously, the drawings in the following description are only some embodiments of the present application; for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure;
fig. 2A is a schematic flowchart of an intelligent dialogue method provided in an embodiment of the present application;
fig. 2B is a schematic diagram of a sentence similarity calculation process provided in the embodiment of the present application;
fig. 3 is a schematic flowchart of an intelligent dialogue method provided in an embodiment of the present application;
fig. 4 is a schematic structural diagram of an electronic device provided in an embodiment of the present application;
fig. 5 is a schematic structural diagram of an intelligent dialog device according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application are described below clearly and completely with reference to the drawings. Obviously, the described embodiments are only some embodiments of the present application, not all of them. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present application.
The following are detailed below.
The terms "first," "second," "third," and "fourth," etc. in the description and claims of this application and in the accompanying drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
Hereinafter, some terms in the present application are explained to facilitate understanding by those skilled in the art.
Electronic devices may include a variety of handheld devices, vehicle-mounted devices, wearable devices (e.g., smartwatches, smartbands, pedometers, etc.), computing devices or other processing devices communicatively connected to wireless modems, as well as various forms of User Equipment (UE), Mobile Stations (MS), terminal Equipment (terminal device), and so forth having wireless communication capabilities. For convenience of description, the above-mentioned devices are collectively referred to as electronic devices.
As shown in fig. 1, fig. 1 is a schematic structural diagram of an electronic device provided in an embodiment of the present application. The electronic device includes a processor, a memory, a signal processor, a transceiver, a display screen, a speaker, a microphone, a Random Access Memory (RAM), a camera, sensors, and the like. The memory, the signal processor, the display screen, the speaker, the microphone, the RAM, the camera, and the sensors are connected to the processor, and the transceiver is connected to the signal processor.
The display screen may be a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED) display, an Active-Matrix Organic Light-Emitting Diode (AMOLED) display, or the like.
The camera may be a common camera or an infrared camera, and is not limited herein. The camera may be a front camera or a rear camera, and is not limited herein.
Wherein the sensor comprises at least one of: light-sensitive sensors, gyroscopes, infrared proximity sensors, fingerprint sensors, pressure sensors, etc. Among them, the light sensor, also called an ambient light sensor, is used to detect the ambient light brightness. The light sensor may include a light sensitive element and an analog to digital converter. The photosensitive element is used for converting collected optical signals into electric signals, and the analog-to-digital converter is used for converting the electric signals into digital signals. Optionally, the light sensor may further include a signal amplifier, and the signal amplifier may amplify the electrical signal converted by the photosensitive element and output the amplified electrical signal to the analog-to-digital converter. The photosensitive element may include at least one of a photodiode, a phototransistor, a photoresistor, and a silicon photocell.
The processor is the control center of the electronic device. It connects all parts of the device via various interfaces and lines, and executes the device's functions and processes its data by running software programs and/or modules stored in the memory and calling data stored in the memory, thereby monitoring the device as a whole.
The processor may integrate an application processor and a modem processor, wherein the application processor mainly handles operating systems, user interfaces, application programs, and the like, and the modem processor mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor.
The memory is used for storing software programs and/or modules, and the processor executes various functional applications and data processing of the electronic equipment by operating the software programs and/or modules stored in the memory. The memory mainly comprises a program storage area and a data storage area, wherein the program storage area can store an operating system, a software program required by at least one function and the like; the storage data area may store data created according to use of the electronic device, and the like. Further, the memory may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The following describes embodiments of the present application in detail.
Referring to fig. 2A, fig. 2A is a schematic flowchart of an intelligent dialog method provided in an embodiment of the present application, and the method is applied to an electronic device, and the method includes:
step 201: determining N first question sentences based on target question sentences input by a user, wherein the similarity between each first question sentence and the target question sentence is greater than or equal to a first threshold, N is an integer greater than 1, and each first question sentence is associated with a first answer sentence.
The information input by the user can be voice, characters or pictures, and then the information input by the user is analyzed to obtain the target question sentence.
N may be, for example, 5, 10, 15, 20, or other values, which are not limited herein.
The first threshold may be, for example, 80%, 85%, 90%, 95%, or other values, which are not limited herein.
As shown in Table 1, Table 1 is a relational mapping table in which first question sentences and first answer sentences correspond one-to-one; the mapping table may be stored in a database associated with the electronic device.
TABLE 1
[Table 1 appears as an image in the original publication and is not reproduced here.]
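Since the patent only says the mapping may be stored in a database, here is one minimal way such a Table-1 mapping could look; the schema and the rows are assumptions for illustration:

```python
import sqlite3

# Hypothetical schema for the Table-1 question->answer mapping.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE qa_map (question TEXT PRIMARY KEY, answer TEXT)")
conn.executemany(
    "INSERT INTO qa_map VALUES (?, ?)",
    [("how do I open an account", "Visit any branch with your ID."),
     ("what are your opening hours", "We are open 9:00-17:00 on weekdays.")],
)

# Look up the first answer sentence associated with a first question sentence.
row = conn.execute(
    "SELECT answer FROM qa_map WHERE question = ?",
    ("what are your opening hours",),
).fetchone()
print(row[0])
```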
Step 202: and determining N first parameters based on a preset neural network model, wherein the N first parameters correspond to the N first question sentences one by one, and the N first parameters are used for evaluating the similarity between the corresponding first question sentences and the target question sentences.
Step 203: and taking a target answer sentence as an answer sentence of the target question sentence, wherein the target answer sentence is a first answer sentence associated with a first question sentence corresponding to a target parameter, the value of the target parameter is greater than or equal to a second threshold, and the N first parameters comprise the target parameter.
The first threshold and the second threshold are preset values.
For example, suppose 3 first question sentences are determined and the values of their first parameters are 80%, 85%, and 90%, respectively. The target parameter may then be 90%, and the first answer sentence associated with the first question sentence whose parameter is 90% is taken as the answer sentence of the target question sentence.
Step 204: and outputting the target answer sentence.
The target answer sentence may be output in voice, or the target answer sentence may be output in text, which is not limited herein.
It can be seen that, in the embodiment of the present application, N first question sentences are determined based on a target question sentence input by a user; N first parameters are then determined based on a preset neural network model, the N first parameters being used to evaluate the similarity between the corresponding first question sentences and the target question sentence; the first answer sentence associated with the first question sentence whose first parameter is greater than or equal to a second threshold is taken as the answer sentence of the target question sentence; and finally the answer sentence is output. Determining the N first question sentences based on the target question sentence performs a coarse screening and ensures the controllability of the answer sentence, while determining the N first parameters based on the preset neural network model allows questions that do not appear in the corpus to be answered flexibly.
In an implementation manner of the present application, the determining N first question sentences based on the target question sentence input by the user includes:
acquiring a target question sentence input by a user;
determining M second question sentences from a preset corpus based on a literal search, and determining W third question sentences from the preset corpus based on a semantic search, wherein keywords of the literal search are determined based on the target question sentences, the literal similarity between each second question sentence and the target question sentences is greater than or equal to a third threshold, the semantic similarity between each third question sentence and the target question sentences is greater than or equal to a fourth threshold, the first threshold is greater than or equal to the third threshold, the first threshold is greater than or equal to the fourth threshold, and M and W are integers greater than 0;
determining N first question sentences based on the M second question sentences and the W third question sentences, wherein the N first question sentences comprise at least one second question sentence and at least one third question sentence.
Specifically, the target question sentence is composed of a first character set, the first character set includes P first characters, and P is an integer greater than 0; one specific implementation manner for determining M second question sentences from the preset corpus based on the literal search is as follows: searching a preset corpus by taking at least one first character in the P first characters as a keyword to obtain Q fifth question sentences; selecting M fifth question sentences from the Q fifth question sentences; and determining the M fifth question sentences as M second question sentences.
The M fifth question sentences may be any M fifth question sentences selected manually, M fifth question sentences ranked in the top after the search, or M fifth question sentences containing the most keywords, which is not limited herein.
Further, the number of the first characters included in the M second question sentences is greater than or equal to the number of the first characters included in Q-M sixth question sentences, and the Q-M sixth question sentences are question sentences excluding the M fifth question sentences from the Q fifth question sentences.
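The literal search described above — using elements of the target question as keywords and keeping the M candidates that contain the most keywords — could be sketched as follows; the word-level keywords and the toy corpus are illustrative assumptions:

```python
def keyword_search(keywords, corpus):
    """Return corpus sentences that contain at least one keyword,
    ranked by how many keywords they contain (most first)."""
    scored = [(sum(k in s for k in keywords), s) for s in corpus]
    scored = [(count, s) for count, s in scored if count > 0]  # the Q hits
    scored.sort(key=lambda cs: cs[0], reverse=True)
    return [s for _, s in scored]

corpus = [
    "how do I reset my account password",
    "what is the weather like tomorrow",
    "how do I close my account",
]
top = keyword_search(["account", "password"], corpus)[:2]  # keep M = 2
print(top)
```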
The third threshold may be, for example, 60%, 70%, 80%, 90%, or other values, which are not limited herein; the fourth threshold may be, for example, 60%, 70%, 80%, 90%, or other values, which are not limited herein.
Specifically, one way of determining the N first question sentences based on the M second question sentences and the W third question sentences is as follows: determine n × N second question sentences from the M second question sentences, and (1 − n) × N third question sentences from the W third question sentences; then take the n × N second question sentences and the (1 − n) × N third question sentences as the N first question sentences.
Where n is a number greater than 0 and less than 1, and may be, for example, 0.1, 0.2, 0.3, 0.4, or other values, which are not limited herein.
The literal similarity between the n × N second question sentences and the target question sentence is greater than or equal to a fifth threshold, and the semantic similarity between the (1 − n) × N third question sentences and the target question sentence is greater than or equal to a sixth threshold; the fifth threshold may or may not equal the sixth threshold, which is not limited herein.
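Mixing n × N literal candidates with (1 − n) × N semantic candidates might look like the following sketch; how duplicates between the two lists are handled is an assumption the patent does not specify:

```python
def merge_candidates(literal_hits, semantic_hits, n_total, n_ratio):
    """Take n_ratio * n_total literal hits, then fill the remaining
    (1 - n_ratio) * n_total slots from the semantic hits."""
    k_literal = round(n_ratio * n_total)
    merged = literal_hits[:k_literal]
    for s in semantic_hits:          # assumed: skip duplicates while filling
        if len(merged) >= n_total:
            break
        if s not in merged:
            merged.append(s)
    return merged

literal = ["q1", "q2", "q3"]
semantic = ["q2", "q4", "q5", "q6"]
print(merge_candidates(literal, semantic, n_total=5, n_ratio=0.4))
```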
In an implementation manner of the present application, the determining W third question sentences from the preset corpus based on semantic search includes:
determining sentence construction components of the target question sentence;
filtering the target question sentence based on the sentence construction components to obtain a fourth question sentence, wherein the sentence construction components of the fourth question sentence are less than or equal to the sentence construction components of the target question sentence;
determining W third question sentences from the preset corpus, wherein the semantic similarity between each third question sentence and the fourth question sentence is greater than or equal to the fourth threshold.
Wherein, the sentence composition comprises at least one of the following: subject, predicate, object, complement, center, and verb.
For example, the subject in the target question sentence is removed to obtain a sentence without the subject. The subject may be, for example, a word such as "he", "she", "it", "they", "I", or "you". Illustratively, the target question sentence is "recommend me a suitable bag", and the sentence after removing the word "me" is "recommend a suitable bag".
In an implementation manner of the present application, the determining W third question sentences from the preset corpus based on semantic search includes:
performing word segmentation processing on the target question sentence to obtain a plurality of target words;
deleting stop words in the target words based on a preset stop word list to obtain a seventh question sentence;
determining W third question sentences from the preset corpus, wherein the semantic similarity between each third question sentence and the seventh question sentence is greater than or equal to the fourth threshold.
Stop words are words that carry no substantive meaning in a sentence, such as modal particles ("ah", "eh", "huh", and the like). Illustratively, the target question sentence is "what is the weather like tomorrow, huh", and "what is the weather like tomorrow" is the sentence with the stop word removed.
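A minimal sketch of the segmentation-plus-filtering step, with whitespace splitting standing in for real word segmentation and an invented stop-word list:

```python
# Toy stop-word list; a real system would load a preset stop-word table.
STOP_WORDS = {"ah", "eh", "um", "huh"}

def remove_stop_words(sentence):
    tokens = sentence.split()  # stand-in for proper word segmentation
    return " ".join(t for t in tokens if t not in STOP_WORDS)

print(remove_stop_words("um what is the weather like tomorrow eh"))
```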
In an implementation manner of the present application, the determining N first parameters based on the preset neural network model includes:
determining N sentence similarities, N edit distances, and N Jaccard similarities between the target question sentence and the N first question sentences based on a preset neural network model, wherein the N sentence similarities, the N edit distances, and the N Jaccard similarities correspond one-to-one to the N first question sentences;
determining N first parameters based on the N sentence similarities, the N edit distances, and the N Jaccard similarities, wherein the N first parameters correspond one-to-one to the N sentence similarities, the N edit distances, and the N Jaccard similarities.
The sentence similarity refers to the similarity between the target question sentence and the first question sentence.
The edit distance is the minimum number of edit operations required to convert the first question sentence into the target question sentence.
Specifically, determining the N first parameters based on the N sentence similarities, the N edit distances, and the N Jaccard similarities includes:
converting the N edit distances into N first similarities;
determining a first weight, a second weight, and a third weight, wherein the first weight represents the proportion of the sentence similarity when evaluating the first parameter, the second weight represents the proportion of the first similarity, the third weight represents the proportion of the Jaccard similarity, and the sum of the first weight, the second weight, and the third weight is 1;
determining N first parameters based on the first weight, the second weight, the third weight, the N sentence similarities, the N first similarities, the N Jaccard similarities, and a first parameter formula.
For example, table 2 is a one-to-one correspondence table between the edit distance and the first similarity provided in the embodiment of the present application.
TABLE 2
Edit distance d                First similarity
0 ≤ d < 3                      90%
3 ≤ d < 6                      80%
6 ≤ d < 9                      70%
9 ≤ d < 12                     60%
···                            ···
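The Table-2 conversion from edit distance to a first similarity can be sketched as a bucket lookup; the floor value returned for distances of 12 or more is an assumption, since the table is truncated:

```python
# Table-2 style buckets: (exclusive upper bound of edit distance, similarity).
BUCKETS = [(3, 0.90), (6, 0.80), (9, 0.70), (12, 0.60)]

def distance_to_similarity(d):
    """Map an edit distance to a first similarity via the bucket table."""
    for upper, sim in BUCKETS:
        if d < upper:
            return sim
    return 0.50  # assumed floor for d >= 12 (not given in the table)

print(distance_to_similarity(4))
```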
Further, the first parameter formula is S = A·a + B·b + C·c, where S is the first parameter, A is the first weight, B is the second weight, C is the third weight, a is the sentence similarity, b is the first similarity, and c is the Jaccard similarity.
For example, if A is 0.3, B is 0.5, C is 0.2, a is 80%, b is 90%, and c is 80%, then S = 0.3 × 80% + 0.5 × 90% + 0.2 × 80% = 85%.
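The worked example above (weights 0.3 / 0.5 / 0.2) corresponds to a simple weighted sum, which can be computed directly:

```python
def first_parameter(a, b, c, A=0.3, B=0.5, C=0.2):
    """Weighted combination S = A*a + B*b + C*c; weights must sum to 1."""
    assert abs(A + B + C - 1.0) < 1e-9
    return A * a + B * b + C * c

# Reproduce the example: a = 80%, b = 90%, c = 80%.
S = first_parameter(0.80, 0.90, 0.80)
print(f"{S:.0%}")
```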
In an implementation manner of the present application, determining the N sentence similarities between the target question sentence and the N first question sentences based on a preset neural network model includes:
converting the target question sentences into first sentence vectors, and converting the N first question sentences into N second sentence vectors, wherein the N second sentence vectors correspond to the N first question sentences one by one;
extracting feature information of the first sentence vector to obtain a first target vector, and extracting feature information of the N second sentence vectors to obtain N second target vectors, wherein the N second target vectors correspond to the N second sentence vectors one to one;
and determining sentence similarity of the first target vector and each second target vector based on a sentence similarity calculation formula to obtain N sentence similarities.
Further, the target question sentence is composed of a first character set, the first character set includes P first characters, and a specific implementation manner of converting the target question sentence into a first sentence vector includes: converting the P first characters into P word vectors; and combining the P word vectors to obtain a first sentence vector.
It should be noted that the P first characters may be converted into P word vectors using at least one of the following: a Bidirectional Encoder Representations from Transformers (BERT) model, an Embeddings from Language Models (ELMo) model, or a word2vec model.
The sentence similarity calculation formula appears as an image in the original publication and is not reproduced here; it is a function f(h_a, h_b), where h_a and h_b are the first target vector and a second target vector, respectively (in Siamese-LSTM architectures this function is commonly exp(−‖h_a − h_b‖₁)).
As shown in fig. 2B, fig. 2B is a schematic diagram of a sentence similarity calculation process provided in the embodiment of the present application. The target question sentence is "He is smart": the word vector of "He" is x_1^a, the word vector of "is" is x_2^a, and the word vector of "smart" is x_3^a. Feature information h_1^a, h_2^a, h_3^a is then extracted from x_1^a, x_2^a, x_3^a by the LSTM_a network. Similarly, the first question sentence is "A truly wise man": the word vector of "A" is x_1^b, that of "truly" is x_2^b, that of "wise" is x_3^b, and that of "man" is x_4^b. Feature information h_1^b, h_2^b, h_3^b, h_4^b is extracted from these vectors by the LSTM_b network. Finally, the sentence similarity A is obtained through the sentence similarity calculation formula f(h_a, h_b) and output.
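The final step of the figure applies a similarity function f(h_a, h_b) to the two sentence representations. One common choice in Siamese-LSTM models — an assumption here, since the patent's formula image is not reproduced — is exp(−‖h_a − h_b‖₁), which yields 1.0 for identical vectors and decays toward 0 as they diverge:

```python
import math

def siamese_similarity(ha, hb):
    """exp(-L1 distance) between two equal-length sentence vectors."""
    return math.exp(-sum(abs(x - y) for x, y in zip(ha, hb)))

print(siamese_similarity([0.2, 0.5, 0.1], [0.2, 0.5, 0.1]))
```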
In an implementation manner of the present application, the target question sentence is composed of a first character set, the N first question sentences are composed of N second character sets, and the N second character sets correspond to the N first question sentences one by one; the determining N edit distances of the target question statement and the N first question statements based on a preset neural network model includes:
determining the minimum number of editing operations required for converting the first character set into each second character set;
and determining the obtained N minimum editing operation times as N editing distances, wherein the N editing distances correspond to the N minimum editing operation times one to one.
Wherein the editing operation comprises at least one of: insertion, deletion, replacement.
For example, for the two words "kitten" and "sitting", converting "kitten" into "sitting" requires a minimum of three single-character editing operations: first, kitten → sitten (replace "k" with "s"); second, sitten → sittin (replace "e" with "i"); third, sittin → sitting (insert "g" at the end of the word). Thus, the edit distance between the two words "kitten" and "sitting" is 3.
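The minimum number of editing operations described above is the standard Levenshtein distance, which can be computed with dynamic programming; a minimal sketch:

```python
def edit_distance(a, b):
    """Minimum number of single-character insertions, deletions and
    replacements needed to convert string a into string b."""
    m, n = len(a), len(b)
    # dp[i][j] = edit distance between a[:i] and b[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i          # delete all of a[:i]
    for j in range(n + 1):
        dp[0][j] = j          # insert all of b[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1   # replacement needed?
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # replacement / match
    return dp[m][n]

print(edit_distance("kitten", "sitting"))  # 3, matching the example
```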
In an implementation manner of the present application, the determining N Jaccard similarities between the target question statement and the N first question statements based on a preset neural network model includes:
determining N intersections and N unions of the first character set and the N second character sets, wherein the N intersections and the N unions are in one-to-one correspondence with the N second character sets;
determining N Jaccard similarities based on the N intersections and the N unions, the N Jaccard similarities corresponding to the N intersections and the N unions one to one.
Further, the first character set comprises P first characters and the second character set comprises Q second characters, of which R second characters are identical to first characters; the intersection of the first character set and the second character set thus contains R characters, the union contains P + Q - R characters, and the Jaccard similarity is R/(P + Q - R), wherein R and Q are integers greater than 0.
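Under these definitions, the Jaccard similarity over character sets can be sketched directly with Python sets (which, like the character sets in the embodiment, deduplicate repeated characters):

```python
def jaccard_similarity(sentence_a, sentence_b):
    """R / (P + Q - R): shared characters over the size of the union."""
    first, second = set(sentence_a), set(sentence_b)
    r = len(first & second)                 # R shared characters
    union = len(first) + len(second) - r    # P + Q - R
    return r / union if union else 0.0

print(jaccard_similarity("abcd", "bcde"))  # 3 shared / 5 in union = 0.6
```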
Referring to fig. 3, in accordance with the embodiment shown in fig. 2A, fig. 3 is a schematic flowchart of an intelligent dialog method provided in an embodiment of the present application, and the method is applied to an electronic device, and the method includes:
step 301: and acquiring a target question sentence input by a user, wherein the target question sentence is composed of a first character set.
Step 302: determining M second question sentences from a preset corpus based on a literal search, wherein keywords of the literal search are determined based on the target question sentences, the literal similarity between each second question sentence and the target question sentence is greater than or equal to a third threshold, and M is an integer greater than 0.
Step 303: and determining sentence construction components of the target question sentence.
Step 304: and filtering the target question sentence based on the sentence construction components to obtain a fourth question sentence, wherein the sentence construction components of the fourth question sentence are less than or equal to the sentence construction components of the target question sentence.
Step 305: determining W third question sentences from the preset corpus, wherein the semantic similarity between each third question sentence and the fourth question sentence is greater than or equal to the fourth threshold, and W is an integer greater than 0.
Step 306: determining N first problem sentences based on the M second problem sentences and the W third problem sentences, wherein the N first problem sentences comprise at least one second problem sentence and at least one third problem sentence, the similarity between each first problem sentence and the target problem sentence is greater than or equal to a first threshold, the first threshold is greater than or equal to the third threshold, the first threshold is greater than or equal to a fourth threshold, the N first problem sentences are composed of N second character sets, and the N second character sets correspond to the N first problem sentences one by one.
Step 307: converting the target question sentences into first sentence vectors, and converting the N first question sentences into N second sentence vectors, wherein the N second sentence vectors correspond to the N first question sentences one by one.
Step 308: and extracting the characteristic information of the first sentence vector to obtain a first target vector, and extracting the characteristic information of the N second sentence vectors to obtain N second target vectors, wherein the N second target vectors correspond to the N second sentence vectors one by one.
Step 309: and determining sentence similarity of the first target vector and each second target vector based on a sentence similarity calculation formula to obtain N sentence similarities.
Step 310: determining a minimum number of editing operations required to translate the first character set into each second character set.
Step 311: and determining the obtained N minimum editing operation times as N editing distances, wherein the N editing distances correspond to the N minimum editing operation times one to one.
Step 312: and determining N intersections and N unions of the first character set and the N second character sets, wherein the N intersections and the N unions are in one-to-one correspondence with the N second character sets.
Step 313: determining N Jacard similarities based on the N intersections and the N unions, the N Jacard similarities corresponding to the N intersections and the N unions one to one.
Step 314: determining N first parameters based on the N sentence similarity, the N editing distances and the N Jacard similarity, wherein the N first parameters correspond to the N sentence similarity, the N editing distances and the N Jacard similarity one to one.
Step 315: and taking a target answer sentence as an answer sentence of the target question sentence, wherein the target answer sentence is a first answer sentence associated with a first question sentence corresponding to a target parameter, the value of the target parameter is greater than or equal to a second threshold, and the N first parameters comprise the target parameter.
Step 316: and outputting the target answer sentence.
It should be noted that step 302 and steps 303-305 may be executed simultaneously, or step 302 may be executed first and then steps 303-305, or steps 303-305 may be executed first and then step 302; similarly, steps 307-309, steps 310-311 and steps 312-313 may be executed simultaneously or in any order. The specific implementation process of this embodiment may refer to the specific implementation process described in the above method embodiment, and will not be described here.
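The embodiment leaves open how the N sentence similarities, editing distances and Jaccard similarities of steps 309-313 are fused into the N first parameters of step 314. One plausible sketch is a weighted sum, where the weights, the function name and the edit-distance normalization are purely illustrative assumptions, not taken from the embodiment:

```python
def first_parameter(sent_sim, edit_dist, jaccard, max_len, weights=(0.5, 0.2, 0.3)):
    """Hypothetical fusion of the three scores into one first parameter.
    The weights and the edit-distance normalization are assumptions."""
    edit_sim = 1.0 - edit_dist / max(max_len, 1)   # map distance into a [0, 1] similarity
    w1, w2, w3 = weights
    return w1 * sent_sim + w2 * edit_sim + w3 * jaccard

# One candidate first question sentence: high sentence similarity,
# small edit distance, moderate Jaccard similarity.
score = first_parameter(sent_sim=0.9, edit_dist=3, jaccard=0.6, max_len=10)
print(round(score, 3))  # 0.5*0.9 + 0.2*0.7 + 0.3*0.6 = 0.77
```

The candidate whose fused score meets the second threshold of step 315 would then have its associated first answer sentence selected as the target answer sentence.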
In accordance with the embodiments shown in fig. 2A and fig. 3, please refer to fig. 4, which is a schematic structural diagram of an electronic device according to an embodiment of the present application. As shown in the figure, the electronic device includes a processor, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, and the programs include instructions for performing the following steps:
determining N first question sentences based on target question sentences input by a user, wherein the similarity between each first question sentence and the target question sentence is greater than or equal to a first threshold, N is an integer greater than 1, and each first question sentence is associated with a first answer sentence;
determining N first parameters based on a preset neural network model, wherein the N first parameters correspond to the N first question sentences one by one, and the N first parameters are used for evaluating the similarity between the corresponding first question sentences and the target question sentences;
taking a target answer sentence as an answer sentence of the target question sentence, wherein the target answer sentence is a first answer sentence associated with a first question sentence corresponding to a target parameter, the value of the target parameter is greater than or equal to a second threshold value, and the N first parameters comprise the target parameter;
and outputting the target answer sentence.
In one implementation of the present application, in determining N first question sentences based on a target question sentence input by a user, the program includes instructions specifically for performing the steps of:
acquiring a target question sentence input by a user;
determining M second question sentences from a preset corpus based on a literal search, and determining W third question sentences from the preset corpus based on a semantic search, wherein keywords of the literal search are determined based on the target question sentences, the literal similarity between each second question sentence and the target question sentences is greater than or equal to a third threshold, the semantic similarity between each third question sentence and the target question sentences is greater than or equal to a fourth threshold, the first threshold is greater than or equal to the third threshold, the first threshold is greater than or equal to the fourth threshold, and M and W are integers greater than 0;
determining N first question sentences based on the M second question sentences and the W third question sentences, wherein the N first question sentences comprise at least one second question sentence and at least one third question sentence.
In an implementation of the present application, in determining W third question sentences from the preset corpus based on semantic search, the program includes instructions specifically configured to:
determining sentence construction components of the target question sentence;
filtering the target question sentence based on the sentence construction components to obtain a fourth question sentence, wherein the sentence construction components of the fourth question sentence are less than or equal to the sentence construction components of the target question sentence;
determining W third question sentences from the preset corpus, wherein the semantic similarity between each third question sentence and the fourth question sentence is greater than or equal to the fourth threshold.
In an implementation of the present application, in determining the N first parameters based on the preset neural network model, the program includes instructions specifically configured to:
determining N sentence similarities, N editing distances and N Jaccard similarities of the target question sentence and the N first question sentences based on a preset neural network model, wherein the N sentence similarities, the N editing distances and the N Jaccard similarities are in one-to-one correspondence with the N first question sentences;
determining N first parameters based on the N sentence similarities, the N editing distances and the N Jaccard similarities, wherein the N first parameters correspond to the N sentence similarities, the N editing distances and the N Jaccard similarities one to one.
In an implementation manner of the present application, in terms of determining N sentence similarities between the target question statement and the N first question statements based on a preset neural network model, the program includes instructions specifically configured to perform the following steps:
converting the target question sentences into first sentence vectors, and converting the N first question sentences into N second sentence vectors, wherein the N second sentence vectors correspond to the N first question sentences one by one;
extracting feature information of the first sentence vector to obtain a first target vector, and extracting feature information of the N second sentence vectors to obtain N second target vectors, wherein the N second target vectors correspond to the N second sentence vectors one to one;
and determining sentence similarity of the first target vector and each second target vector based on a sentence similarity calculation formula to obtain N sentence similarities.
In an implementation manner of the present application, the target question sentence is composed of a first character set, the N first question sentences are composed of N second character sets, and the N second character sets correspond to the N first question sentences one by one; in determining N edit distances of the target question statement and the N first question statements based on a preset neural network model, the program includes instructions specifically for performing the steps of:
determining the minimum number of editing operations required for converting the first character set into each second character set;
and determining the obtained N minimum editing operation times as N editing distances, wherein the N editing distances correspond to the N minimum editing operation times one to one.
In an implementation manner of the present application, in determining N Jaccard similarities of the target question statement and the N first question statements based on a preset neural network model, the program includes instructions specifically configured to perform the following steps:
determining N intersections and N unions of the first character set and the N second character sets, wherein the N intersections and the N unions are in one-to-one correspondence with the N second character sets;
determining N Jaccard similarities based on the N intersections and the N unions, the N Jaccard similarities corresponding to the N intersections and the N unions one to one.
It should be noted that, for the specific implementation process of the present embodiment, reference may be made to the specific implementation process described in the above method embodiment, and a description thereof is omitted here.
The above embodiments mainly introduce the scheme of the embodiments of the present application from the perspective of the method-side implementation process. It is understood that the electronic device comprises corresponding hardware structures and/or software modules for performing the respective functions in order to realize the above-mentioned functions. Those of skill in the art would readily appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as hardware or combinations of hardware and computer software. Whether a function is performed as hardware or computer software drives hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiment of the present application, the electronic device may be divided into the functional units according to the method example, for example, each functional unit may be divided corresponding to each function, or two or more functions may be integrated into one processing unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit. It should be noted that the division of the unit in the embodiment of the present application is schematic, and is only a logic function division, and there may be another division manner in actual implementation.
The following is an embodiment of the apparatus of the present application, which is used to execute the method implemented by the embodiment of the method of the present application. Referring to fig. 5, fig. 5 is a schematic structural diagram of an intelligent dialog apparatus provided in an embodiment of the present application, and the intelligent dialog apparatus is applied to an electronic device, where the apparatus includes:
a determining unit 501, configured to determine N first question sentences based on a target question sentence input by a user, where a similarity between each first question sentence and the target question sentence is greater than or equal to a first threshold, where N is an integer greater than 1, and each first question sentence is associated with a first answer sentence; determining N first parameters based on a preset neural network model, wherein the N first parameters correspond to the N first question sentences one by one, and the N first parameters are used for evaluating the similarity between the corresponding first question sentences and the target question sentences; taking a target answer sentence as an answer sentence of the target question sentence, wherein the target answer sentence is a first answer sentence associated with a first question sentence corresponding to a target parameter, the value of the target parameter is greater than or equal to a second threshold value, and the N first parameters comprise the target parameter;
an output unit 502, configured to output the target answer sentence.
In an implementation of the present application, in determining N first question sentences based on a target question sentence input by a user, the determining unit 501 includes an acquiring unit 5011, a first sub-determining unit 5012, a second sub-determining unit 5013, and a third sub-determining unit 5014, in which:
the acquiring unit 5011 is configured to acquire a target question sentence input by a user;
the first sub-determination unit 5012 is configured to determine M second question sentences from a preset corpus based on a literal search, where a keyword of the literal search is determined based on the target question sentence;
the second sub-determining unit 5013 is configured to determine W third question sentences from the preset corpus based on semantic search, where a literal similarity between each second question sentence and the target question sentence is greater than or equal to a third threshold, a semantic similarity between each third question sentence and the target question sentence is greater than or equal to a fourth threshold, the first threshold is greater than or equal to the third threshold, the first threshold is greater than or equal to the fourth threshold, and both M and W are integers greater than 0;
the third sub-determining unit 5014 is configured to determine N first question sentences based on the M second question sentences and the W third question sentences, where the N first question sentences include at least one second question sentence and at least one third question sentence.
In an implementation manner of the present application, in determining W third question sentences from the preset corpus based on semantic search, the second sub-determining unit 5013 is specifically configured to determine sentence constituting components of the target question sentence; filtering the target question sentence based on the sentence construction components to obtain a fourth question sentence, wherein the sentence construction components of the fourth question sentence are less than or equal to the sentence construction components of the target question sentence; determining W third question sentences from the preset corpus, wherein the semantic similarity between each third question sentence and the fourth question sentence is greater than or equal to the fourth threshold.
In an implementation of the present application, in determining the N first parameters based on the preset neural network model, the determining unit 501 further includes a fourth sub-determining unit 5015 and a fifth sub-determining unit 5016, wherein:
the fourth sub-determination unit 5015 is configured to determine, based on a preset neural network model, N sentence similarities, N edit distances, and N Jaccard similarities between the target question sentence and the N first question sentences, where the N sentence similarities, the N edit distances, and the N Jaccard similarities are all in one-to-one correspondence with the N first question sentences;
the fifth sub-determining unit 5016 is configured to determine N first parameters based on the N sentence similarities, the N editing distances, and the N Jaccard similarities, where the N first parameters correspond to the N sentence similarities, the N editing distances, and the N Jaccard similarities one to one.
In an implementation manner of the present application, in terms of determining N sentence similarities between the target question sentence and the N first question sentences based on a preset neural network model, the fourth sub-determining unit 5015 is specifically configured to:
converting the target question sentences into first sentence vectors, and converting the N first question sentences into N second sentence vectors, wherein the N second sentence vectors correspond to the N first question sentences one by one;
extracting feature information of the first sentence vector to obtain a first target vector, and extracting feature information of the N second sentence vectors to obtain N second target vectors, wherein the N second target vectors correspond to the N second sentence vectors one to one;
and determining sentence similarity of the first target vector and each second target vector based on a sentence similarity calculation formula to obtain N sentence similarities.
In an implementation manner of the present application, the target question sentence is composed of a first character set, the N first question sentences are composed of N second character sets, and the N second character sets correspond to the N first question sentences one by one; in terms of determining the N edit distances between the target question statement and the N first question statements based on a preset neural network model, the fourth sub-determining unit 5015 is specifically configured to:
determining the minimum number of editing operations required for converting the first character set into each second character set;
and determining the obtained N minimum editing operation times as N editing distances, wherein the N editing distances correspond to the N minimum editing operation times one to one.
In an implementation manner of the present application, in terms of determining N Jaccard similarities of the target question statement and the N first question statements based on a preset neural network model, the fourth sub-determining unit 5015 is specifically configured to:
determining N intersections and N unions of the first character set and the N second character sets, wherein the N intersections and the N unions are in one-to-one correspondence with the N second character sets;
determining N Jaccard similarities based on the N intersections and the N unions, the N Jaccard similarities corresponding to the N intersections and the N unions one to one.
It is to be noted that the acquisition unit 5011, the first sub-determination unit 5012, the second sub-determination unit 5013, the third sub-determination unit 5014, the fourth sub-determination unit 5015, the fifth sub-determination unit 5016, and the output unit 502 may be implemented by processors. Embodiments of the present application also provide a computer storage medium, where the computer storage medium stores a computer program for electronic data exchange, the computer program enabling a computer to execute part or all of the steps of any one of the methods described in the above method embodiments, and the computer includes an electronic device.
Embodiments of the present application also provide a computer program product comprising a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps of any of the methods as described in the above method embodiments. The computer program product may be a software installation package, the computer comprising an electronic device.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described embodiments of the apparatus are merely illustrative, and for example, the above-described division of the units is only one type of division of logical functions, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of some interfaces, devices or units, and may be an electric or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit may be stored in a computer readable memory if it is implemented in the form of a software functional unit and sold or used as a stand-alone product. Based on such understanding, the technical solution of the present application may be substantially implemented or a part of or all or part of the technical solution contributing to the prior art may be embodied in the form of a software product stored in a memory, and including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the above-mentioned method of the embodiments of the present application. And the aforementioned memory comprises: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by associated hardware instructed by a program, which may be stored in a computer-readable memory, which may include: flash Memory disks, Read-Only memories (ROMs), Random Access Memories (RAMs), magnetic or optical disks, and the like.
The foregoing detailed description of the embodiments of the present application has been presented to illustrate the principles and implementations of the present application, and the above description of the embodiments is only provided to help understand the method and the core concept of the present application; meanwhile, for a person skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (10)

1. An intelligent dialogue method applied to an electronic device, the method comprising:
determining N first question sentences based on target question sentences input by a user, wherein the similarity between each first question sentence and the target question sentence is greater than or equal to a first threshold, N is an integer greater than 1, and each first question sentence is associated with a first answer sentence;
determining N first parameters based on a preset neural network model, wherein the N first parameters correspond to the N first question sentences one by one, and the N first parameters are used for evaluating the similarity between the corresponding first question sentences and the target question sentences;
taking a target answer sentence as an answer sentence of the target question sentence, wherein the target answer sentence is a first answer sentence associated with a first question sentence corresponding to a target parameter, the value of the target parameter is greater than or equal to a second threshold value, and the N first parameters comprise the target parameter;
and outputting the target answer sentence.
2. The method of claim 1, wherein determining N first question sentences based on the target question sentence input by the user comprises:
acquiring a target question sentence input by a user;
determining M second question sentences from a preset corpus based on a literal search, and determining W third question sentences from the preset corpus based on a semantic search, wherein keywords of the literal search are determined based on the target question sentences, the literal similarity between each second question sentence and the target question sentences is greater than or equal to a third threshold, the semantic similarity between each third question sentence and the target question sentences is greater than or equal to a fourth threshold, the first threshold is greater than or equal to the third threshold, the first threshold is greater than or equal to the fourth threshold, and M and W are integers greater than 0;
determining N first question sentences based on the M second question sentences and the W third question sentences, wherein the N first question sentences comprise at least one second question sentence and at least one third question sentence.
3. The method of claim 2, wherein the determining W third question sentences from the preset corpus based on semantic search comprises:
determining sentence construction components of the target question sentence;
filtering the target question sentence based on the sentence construction components to obtain a fourth question sentence, wherein the sentence construction components of the fourth question sentence are less than or equal to the sentence construction components of the target question sentence;
determining W third question sentences from the preset corpus, wherein the semantic similarity between each third question sentence and the fourth question sentence is greater than or equal to the fourth threshold.
4. The method according to any one of claims 1-3, wherein the determining N first parameters based on the pre-defined neural network model comprises:
determining N sentence similarities, N editing distances and N Jaccard similarities of the target question sentence and the N first question sentences based on a preset neural network model, wherein the N sentence similarities, the N editing distances and the N Jaccard similarities are in one-to-one correspondence with the N first question sentences;
determining N first parameters based on the N sentence similarities, the N editing distances and the N Jaccard similarities, wherein the N first parameters correspond to the N sentence similarities, the N editing distances and the N Jaccard similarities one to one.
5. The method of claim 4, wherein the determining N sentence similarities between the target question sentence and the N first question sentences based on a preset neural network model comprises:
converting the target question sentence into a first sentence vector, and converting the N first question sentences into N second sentence vectors, wherein the N second sentence vectors are in one-to-one correspondence with the N first question sentences;
extracting feature information of the first sentence vector to obtain a first target vector, and extracting feature information of the N second sentence vectors to obtain N second target vectors, wherein the N second target vectors are in one-to-one correspondence with the N second sentence vectors;
and determining sentence similarity of the first target vector and each second target vector based on a sentence similarity calculation formula to obtain N sentence similarities.
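Claim 5 encodes both sentences into vectors, extracts features, and applies a "sentence similarity calculation formula". The neural encoder is not specified; the sketch below substitutes bag-of-words count vectors for the learned sentence vectors and assumes cosine similarity as the formula:

```python
import math
from collections import Counter

def sentence_vector(sentence, vocab):
    """Bag-of-words counts over a fixed vocabulary; a toy stand-in for
    the neural sentence vector of the patent."""
    counts = Counter(sentence.lower().split())
    return [counts[w] for w in vocab]

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors; 0.0 if either is zero."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0
```

In the patented system the vectors would come from a trained network, but the similarity step downstream of the encoder is the same.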
6. The method according to claim 4 or 5, wherein the target question sentence is composed of a first character set, the N first question sentences are composed of N second character sets, and the N second character sets are in one-to-one correspondence with the N first question sentences; the determining N edit distances of the target question sentence and the N first question sentences based on a preset neural network model comprises:
determining the minimum number of editing operations required for converting the first character set into each second character set;
and determining the N minimum numbers of editing operations thus obtained as the N edit distances, wherein the N edit distances are in one-to-one correspondence with the N minimum numbers of editing operations.
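The minimum number of editing operations in claim 6 is the classic Levenshtein edit distance, computable with the standard dynamic program. The claim speaks of character sets; the sketch assumes ordered character sequences, as is standard for edit distance:

```python
def edit_distance(s, t):
    """Levenshtein distance: minimum insertions, deletions and
    substitutions needed to turn string s into string t."""
    m, n = len(s), len(t)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i          # delete all of s[:i]
    for j in range(n + 1):
        dp[0][j] = j          # insert all of t[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if s[i - 1] == t[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution
    return dp[m][n]
```

Running this once per candidate yields the N edit distances the claim feeds into the first parameters.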
7. The method of claim 6, wherein the determining N Jaccard similarities of the target question sentence and the N first question sentences based on a preset neural network model comprises:
determining N intersections and N unions of the first character set and the N second character sets, wherein the N intersections and the N unions are in one-to-one correspondence with the N second character sets;
determining N Jaccard similarities based on the N intersections and the N unions, wherein the N Jaccard similarities are in one-to-one correspondence with the N intersections and the N unions.
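Claim 7's Jaccard similarity is the ratio |intersection| / |union| of the two character sets; a direct sketch (treating each sentence as its set of characters, and defining two empty sentences as maximally similar, which is a choice the patent does not address):

```python
def jaccard_similarity(sentence_a, sentence_b):
    """Jaccard similarity of the character sets of two sentences."""
    a, b = set(sentence_a), set(sentence_b)
    union = a | b
    # Edge case not covered by the claim: both sentences empty.
    return len(a & b) / len(union) if union else 1.0
```

For Chinese text this character-level set comparison is a common cheap complement to the neural similarity, since it needs no segmentation.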
8. An intelligent dialogue apparatus, applied to an electronic device, the apparatus comprising:
a determining unit, configured to determine N first question sentences based on a target question sentence input by a user, where a similarity between each first question sentence and the target question sentence is greater than or equal to a first threshold, where N is an integer greater than 1, and each first question sentence is associated with a first answer sentence; determining N first parameters based on a preset neural network model, wherein the N first parameters are in one-to-one correspondence with the N first question sentences, and the N first parameters are used for evaluating the similarity between the corresponding first question sentences and the target question sentence; taking a target answer sentence as an answer sentence of the target question sentence, wherein the target answer sentence is a first answer sentence associated with a first question sentence corresponding to a target parameter, the value of the target parameter is greater than or equal to a second threshold value, and the N first parameters comprise the target parameter;
and the output unit is used for outputting the target answer sentence.
9. An electronic device comprising a processor, a memory, a communication interface, and one or more programs stored in the memory and configured to be executed by the processor, the programs comprising instructions for performing the steps in the method of any of claims 1-7.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which is executed by a processor to implement the method of any one of claims 1 to 7.
CN201911034425.3A 2019-10-29 2019-10-29 Intelligent dialogue method and related equipment Pending CN111008267A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201911034425.3A CN111008267A (en) 2019-10-29 2019-10-29 Intelligent dialogue method and related equipment
PCT/CN2019/117542 WO2021082070A1 (en) 2019-10-29 2019-11-12 Intelligent conversation method and related device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911034425.3A CN111008267A (en) 2019-10-29 2019-10-29 Intelligent dialogue method and related equipment

Publications (1)

Publication Number Publication Date
CN111008267A true CN111008267A (en) 2020-04-14

Family

ID=70111048

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911034425.3A Pending CN111008267A (en) 2019-10-29 2019-10-29 Intelligent dialogue method and related equipment

Country Status (2)

Country Link
CN (1) CN111008267A (en)
WO (1) WO2021082070A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111694942A (en) * 2020-05-29 2020-09-22 平安科技(深圳)有限公司 Question answering method, device, equipment and computer readable storage medium
CN112667794A (en) * 2020-12-31 2021-04-16 民生科技有限责任公司 Intelligent question-answer matching method and system based on twin network BERT model
CN113407699A (en) * 2021-06-30 2021-09-17 北京百度网讯科技有限公司 Dialogue method, dialogue device, dialogue equipment and storage medium

Citations (10)

Publication number Priority date Publication date Assignee Title
JP2009003814A (en) * 2007-06-22 2009-01-08 National Institute Of Information & Communication Technology Method and system for answering question
CN104598445A (en) * 2013-11-01 2015-05-06 腾讯科技(深圳)有限公司 Automatic question-answering system and method
CN109472008A (en) * 2018-11-20 2019-03-15 武汉斗鱼网络科技有限公司 A kind of Text similarity computing method, apparatus and electronic equipment
CN109710744A (en) * 2018-12-28 2019-05-03 合肥讯飞数码科技有限公司 A kind of data matching method, device, equipment and storage medium
CN109740077A (en) * 2018-12-29 2019-05-10 北京百度网讯科技有限公司 Answer searching method, device and its relevant device based on semantic indexing
CN109829040A (en) * 2018-12-21 2019-05-31 深圳市元征科技股份有限公司 A kind of Intelligent dialogue method and device
CN109948143A (en) * 2019-01-25 2019-06-28 网经科技(苏州)有限公司 The answer extracting method of community's question answering system
CN110096580A (en) * 2019-04-24 2019-08-06 北京百度网讯科技有限公司 A kind of FAQ dialogue method, device and electronic equipment
CN110162611A (en) * 2019-04-23 2019-08-23 苏宁易购集团股份有限公司 A kind of intelligent customer service answer method and system
CN110263346A (en) * 2019-06-27 2019-09-20 卓尔智联(武汉)研究院有限公司 Lexical analysis method, electronic equipment and storage medium based on small-sample learning

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
CN110334356B (en) * 2019-07-15 2023-08-04 腾讯科技(深圳)有限公司 Article quality determining method, article screening method and corresponding device

Also Published As

Publication number Publication date
WO2021082070A1 (en) 2021-05-06

Similar Documents

Publication Publication Date Title
CN109871532B (en) Text theme extraction method and device and storage medium
CN107943860B (en) Model training method, text intention recognition method and text intention recognition device
US20220188521A1 (en) Artificial intelligence-based named entity recognition method and apparatus, and electronic device
CN109697239B (en) Method for generating teletext information
CN109918676B (en) Method and device for detecting intention regular expression and terminal equipment
CN111507099A (en) Text classification method and device, computer equipment and storage medium
CN111951805A (en) Text data processing method and device
CN110633577B (en) Text desensitization method and device
CN110851641B (en) Cross-modal retrieval method and device and readable storage medium
EP3926531B1 (en) Method and system for visio-linguistic understanding using contextual language model reasoners
CN111008267A (en) Intelligent dialogue method and related equipment
CN114676704B (en) Sentence emotion analysis method, device and equipment and storage medium
CN110619050B (en) Intention recognition method and device
CN107807968B (en) Question answering device and method based on Bayesian network and storage medium
CN110781273B (en) Text data processing method and device, electronic equipment and storage medium
CN110866098B (en) Machine reading method and device based on transformer and lstm and readable storage medium
CN108846138B (en) Question classification model construction method, device and medium fusing answer information
CN110580516B (en) Interaction method and device based on intelligent robot
CN111666379B (en) Event element extraction method and device
CN112287085B (en) Semantic matching method, system, equipment and storage medium
CN111858898A (en) Text processing method and device based on artificial intelligence and electronic equipment
CN110930969A (en) Background music determination method and related equipment
CN112232066A (en) Teaching outline generation method and device, storage medium and electronic equipment
CN111444321B (en) Question answering method, device, electronic equipment and storage medium
CN112784011B (en) Emotion problem processing method, device and medium based on CNN and LSTM

Legal Events

Date Code Title Description
PB01 Publication
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40019542

Country of ref document: HK

SE01 Entry into force of request for substantive examination