WO2021051558A1 - Knowledge graph-based question answering method and apparatus, and storage medium - Google Patents

Knowledge graph-based question answering method and apparatus, and storage medium

Info

Publication number
WO2021051558A1
Authority
WO
WIPO (PCT)
Prior art keywords
recognition model
label
entity
artificial
entity element
Prior art date
Application number
PCT/CN2019/117583
Other languages
English (en)
Chinese (zh)
Inventor
刘翔
姚飞
Original Assignee
平安科技(深圳)有限公司
Priority date
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司 (Ping An Technology (Shenzhen) Co., Ltd.)
Publication of WO2021051558A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 - Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33 - Querying
    • G06F16/332 - Query formulation
    • G06F16/3329 - Natural language query formulation or dialogue systems
    • G06F16/36 - Creation of semantic tools, e.g. ontology or thesauri
    • G06F16/367 - Ontology

Definitions

  • This application relates to the field of information processing technology, and in particular to a question and answer method, device and storage medium based on a knowledge graph.
  • A question answering system is an advanced form of information retrieval system: it can answer questions that users pose in natural language with accurate and concise natural-language responses.
  • The development and improvement of question answering systems is therefore a research direction that attracts much attention and has broad prospects.
  • A traditional question answering system trains a model on a given question-and-answer corpus: the user's natural-language input is processed and fed into the trained model, and the result is obtained by querying similar corpus entries in the model.
  • The accuracy of such a question answering system therefore depends on the coverage of the training corpus.
  • As a result, the question answering results output by a traditional question answering system are often inaccurate.
  • the main purpose of this application is to provide a question and answer method, device and storage medium based on a knowledge graph, aiming to solve the technical problem of inaccurate question and answer results output by a traditional question and answer system.
  • this application provides a question and answer method based on a knowledge graph, which includes the following steps:
  • the entity element, the label, and the question and answer sentence are input into a Bayesian classifier, the matching degree between each preset template in the Bayesian classifier and the question and answer sentence is calculated, and the preset template with the highest matching degree is determined as the query template;
  • the entity element and the label are input into the query template to obtain the corresponding query sentence, and the query sentence is input into the knowledge graph for query to obtain the corresponding question and answer result.
  • The present application also provides a device that includes a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor; when the computer-readable instructions are executed by the processor, the steps of the question answering method based on the knowledge graph described above are implemented.
  • The present application also provides a non-volatile computer-readable storage medium having computer-readable instructions stored thereon; when the computer-readable instructions are executed by a processor, the steps of the aforementioned question answering method based on the knowledge graph are implemented.
  • the application discloses a question and answer method, device and storage medium based on a knowledge graph.
  • The method first obtains the question and answer sentence input by the user, performs word segmentation on the question and answer sentence, and obtains the entity elements in the question and answer sentence and the labels corresponding to the entity elements through a trained entity element recognition model and a trained label recognition model, respectively;
  • it then inputs the entity elements, the labels, and the question and answer sentence into a Bayesian classifier, calculates the matching degree between each preset template in the Bayesian classifier and the question and answer sentence, and determines the preset template with the highest matching degree as the query template;
  • finally, it inputs the entity elements and the labels into the query template to obtain the corresponding query sentence, and inputs the query sentence into the knowledge graph for query to obtain the corresponding question and answer result.
  • By analyzing the user's question and answer sentence with the trained entity element recognition model and label recognition model, the application obtains the entity elements and labels of the sentence, mines the sentence at a deeper level to determine the most suitable query template, generates the corresponding query sentence, and obtains the corresponding question and answer result from the knowledge graph; the whole process reduces the dependence on the training corpus and avoids missed and false detections by the question answering system, thereby improving the accuracy of the question answering results it outputs.
  • FIG. 1 is a schematic diagram of the device structure of the hardware operating environment involved in the solution of the embodiment of the present application;
  • FIG. 2 is a schematic flowchart of an embodiment of a question answering method based on a knowledge graph of this application;
  • FIG. 3 is a schematic flowchart of another embodiment of the question answering method based on the knowledge graph of this application;
  • FIG. 4 is a schematic flowchart of another embodiment of the question answering method based on the knowledge graph of this application.
  • FIG. 1 is a schematic diagram of a terminal structure of a hardware operating environment involved in a solution of an embodiment of the present application.
  • the terminal of this application is a device, and the device may be a terminal device with a storage function such as a mobile phone, a computer, or a mobile computer.
  • the terminal may include: a processor 1001, such as a CPU, a communication bus 1002, a user interface 1003, a network interface 1004, and a memory 1005.
  • the communication bus 1002 is used to implement connection and communication between these components.
  • The user interface 1003 may include a display screen (Display) and an input unit such as a keyboard (Keyboard); optionally, the user interface 1003 may also include a standard wired interface and a wireless interface.
  • the network interface 1004 may optionally include a standard wired interface and a wireless interface (such as a WI-FI interface).
  • the memory 1005 can be a high-speed RAM memory or a stable memory (non-volatile memory), such as disk storage.
  • the memory 1005 may also be a storage device independent of the aforementioned processor 1001.
  • the terminal may also include a camera, a Wi-Fi module, etc., which will not be repeated here.
  • The terminal structure shown in FIG. 1 does not constitute a limitation on the terminal; the terminal may include more or fewer components than shown in the figure, combine some components, or arrange the components differently.
  • the network interface 1004 is mainly used to connect to a back-end server and communicate with the back-end server;
  • the user interface 1003 mainly includes an input unit such as a keyboard.
  • The keyboard may be a wireless keyboard or a wired keyboard, and is used to connect to a client and perform data communication with the client; the processor 1001 can be used to call the computer-readable instructions stored in the memory 1005 and execute the steps of the question answering method based on the knowledge graph.
  • the optional embodiments of the device are basically the same as the following embodiments of the question and answer method based on the knowledge graph, and will not be repeated here.
  • Referring to FIG. 2, FIG. 2 is a schematic flowchart of an embodiment of the question answering method based on the knowledge graph of this application.
  • the question answering method based on the knowledge graph provided in this embodiment includes the following steps:
  • Step S10: Obtain the question and answer sentence input by the user, perform word segmentation on the question and answer sentence, and obtain the entity elements in the question and answer sentence and the labels corresponding to the entity elements through the trained entity element recognition model and the trained label recognition model, respectively;
  • the question and answer sentences expressed by the user can be obtained by means of voice recognition, and the question and answer sentences input by the user can also be obtained by other methods, which is not limited in this embodiment.
  • The obtained question and answer sentence is segmented into words; the segmented question and answer sentence is passed through the entity element recognition model to extract its entity elements, and through the label recognition model to extract its labels.
  • Taking the question and answer sentence "What is the relationship between Huang Xiaoming and Changsheng Medicine" as an example of entity elements and labels: "Huang Xiaoming" is an entity element whose corresponding label is person, and "Changsheng Medicine" is an entity element whose corresponding label is company.
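  • A minimal sketch of step S10 is shown below. It assumes the jieba library for Chinese word segmentation, and a hypothetical lookup table stands in for the trained entity element recognition model and label recognition model, whose architectures the application does not fix.

```python
import jieba

# Illustrative stand-in for the trained recognition models: maps a known entity
# element to its label. In the real method these come from the trained models.
ENTITY_LABELS = {"黄晓明": "person", "长生医药": "company"}
for w in ENTITY_LABELS:
    jieba.add_word(w)  # make sure segmentation keeps each entity as one word

def recognize(question: str):
    words = jieba.lcut(question)                          # word segmentation
    entities = [w for w in words if w in ENTITY_LABELS]   # entity element recognition
    labels = {w: ENTITY_LABELS[w] for w in entities}      # label recognition
    return words, entities, labels

words, entities, labels = recognize("黄晓明和长生医药是什么关系")
print(entities, labels)
```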
  • Step S20: Input the entity elements, the labels, and the question and answer sentence into a Bayesian classifier, calculate the matching degree between each preset template in the Bayesian classifier and the question and answer sentence, and determine the preset template with the highest matching degree as the query template;
  • A Bayesian classifier uses the prior probability of an object and the Bayes formula to calculate the posterior probability that the object belongs to each class, and selects the class with the largest posterior probability as the class to which the object belongs.
  • In this embodiment there are three preset templates in the Bayesian classifier, namely a relation query template, an entity query template, and an attribute query template; by calculating the matching degree of each preset template with the entity elements, the labels, and the question and answer sentence, combined with the contextual semantics of the question and answer sentence, the template with the highest matching degree is determined as the query template.
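  • The following sketch illustrates the template selection of step S20 with a small hand-written naive Bayes classifier over simple features (the labels of the recognized entities plus trigger words). The training counts and features are illustrative only; the application does not specify how the classifier is trained.

```python
import math
from collections import Counter, defaultdict

TEMPLATES = ["relation_query", "entity_query", "attribute_query"]

# toy training data: (feature list, template class)
train = [
    (["person", "company", "relationship"], "relation_query"),
    (["person", "who_is"], "entity_query"),
    (["company", "registered_capital"], "attribute_query"),
]

prior = Counter(t for _, t in train)     # prior probabilities P(template)
feat_counts = defaultdict(Counter)       # feature counts per template
for feats, t in train:
    feat_counts[t].update(feats)
vocab = {f for feats, _ in train for f in feats}

def classify(features):
    """Return the template with the highest posterior (log space, add-one smoothing)."""
    def log_posterior(t):
        total = sum(feat_counts[t].values())
        lp = math.log(prior[t] / len(train))
        for f in features:
            lp += math.log((feat_counts[t][f] + 1) / (total + len(vocab)))
        return lp
    return max(TEMPLATES, key=log_posterior)

print(classify(["person", "company", "relationship"]))   # -> relation_query
```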
  • Step S30: Input the entity elements and the labels into the query template to obtain the corresponding query sentence, and input the query sentence into the knowledge graph for query to obtain the corresponding question and answer result.
  • The entity elements and labels corresponding to the question and answer sentence are input into the query template to obtain the corresponding query sentence.
  • A knowledge graph is also preset in this embodiment; the query sentence is input into the knowledge graph, and the corresponding question and answer result is obtained by means of vector feature matching.
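  • A minimal sketch of step S30 follows: the entity elements and labels fill slots in the selected query template to form a query sentence. The Cypher-style template is an assumption used only for illustration; the application does not prescribe a particular graph query language.

```python
# Hypothetical relation-query template with slots for two entities and their labels.
QUERY_TEMPLATES = {
    "relation_query":
        'MATCH (a:{label_a} {{name: "{ent_a}"}})-[r]-(b:{label_b} {{name: "{ent_b}"}}) '
        'RETURN type(r)',
}

def build_query(template_id, entities, labels):
    ent_a, ent_b = entities
    return QUERY_TEMPLATES[template_id].format(
        ent_a=ent_a, label_a=labels[ent_a],
        ent_b=ent_b, label_b=labels[ent_b],
    )

print(build_query("relation_query",
                  ["Huang Xiaoming", "Changsheng Medicine"],
                  {"Huang Xiaoming": "person", "Changsheng Medicine": "company"}))
```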
  • This embodiment first obtains the question and answer sentence input by the user, performs word segmentation on it, and obtains the entity elements in the question and answer sentence and the labels corresponding to the entity elements through the trained entity element recognition model and the trained label recognition model;
  • it then inputs the entity elements, the labels, and the question and answer sentence into the Bayesian classifier, calculates the matching degree between each preset template in the Bayesian classifier and the question and answer sentence, and determines the preset template with the highest matching degree as the query template;
  • finally, it inputs the entity elements and the labels into the query template to obtain the corresponding query sentence, and inputs the query sentence into the knowledge graph for query to obtain the corresponding question and answer result.
  • The entity elements and labels of the question and answer sentence are obtained through the trained entity element recognition model and label recognition model, and the user's question and answer sentence is analyzed and mined at a deeper level to determine the most suitable query template, generate the corresponding query sentence, and obtain the corresponding question and answer result from the knowledge graph.
  • The whole process reduces the dependence on the training corpus and avoids missed and false detections by the question answering system, thereby improving the accuracy of the question answering results output by the system.
  • Before step S10, in which the question and answer sentence input by the user is obtained and segmented into words, the method further includes:
  • Step S40: Obtain training corpus through web crawler technology, and perform word segmentation on the training corpus;
  • Web crawler technology is used to obtain a large amount of information from existing databases as the training corpus, and the obtained training corpus is segmented into words.
  • A web crawler is a program or script that automatically captures information on the World Wide Web according to certain rules and automatically updates the stored content; it is a method of information acquisition and retrieval. For example, for the question and answer sentence "What is the relationship between Huang Xiaoming and Changsheng Medicine", a crawler can crawl related databases such as the national enterprise information publicity system, news databases, and enterprise credit databases to obtain the relevant training corpus.
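  • A minimal sketch of the corpus acquisition in step S40, assuming the requests and BeautifulSoup libraries; the seed URL is hypothetical and only illustrates crawling public enterprise or news pages for raw training corpus.

```python
import requests
from bs4 import BeautifulSoup

SEED_URLS = ["https://example.com/enterprise-news"]   # hypothetical sources

def crawl_corpus(urls, timeout=10):
    corpus = []
    for url in urls:
        resp = requests.get(url, timeout=timeout)
        if resp.status_code != 200:
            continue                                   # skip unreachable pages
        soup = BeautifulSoup(resp.text, "html.parser")
        # keep visible paragraph text as raw training corpus
        corpus.extend(p.get_text(strip=True) for p in soup.find_all("p"))
    return corpus
```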
  • Step S50: Receive the artificial entity elements and artificial labels corresponding to the words obtained after word segmentation;
  • The obtained words are manually annotated with entity elements and labels.
  • When an entity element consists of multiple Chinese characters, the label should be manually annotated at the preset position of each Chinese character.
  • The type of the label may be determined according to the type of the preset query templates; in this embodiment, the preset query templates include a relation query template, an entity query template, and an attribute query template.
  • For example, when the entity elements are "Huang Xiaoming" and "Changsheng Medicine", the result of manual labeling is that each character of "Huang Xiaoming" is annotated with the label person and each character of "Changsheng Medicine" is annotated with the label company.
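  • The character-level annotation of step S50 can be represented, for example, as a list of (character, label) pairs; this data layout is an illustrative choice, not one mandated by the application.

```python
def annotate(entity: str, label: str):
    """Attach the entity's label to every character of the entity element."""
    return [(ch, label) for ch in entity]

print(annotate("黄晓明", "person"))     # [('黄', 'person'), ('晓', 'person'), ('明', 'person')]
print(annotate("长生医药", "company"))  # each character of "Changsheng Medicine" tagged company
```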
  • Step S60: Input the artificial entity elements and the training corpus into a preset entity element recognition model to train the entity element recognition model;
  • the artificial entity elements and training corpus are input into the preset entity element recognition model, and the preset entity element recognition model is trained.
  • Step S70: Input the artificial labels and the training corpus into a preset label recognition model to train the label recognition model.
  • the artificial label and training corpus are input into the preset label recognition model, and the preset label recognition model is trained.
  • This embodiment uses crawler technology to obtain a large training corpus so as to provide the models with the data required for training, and uses manual annotation to label the entity elements and labels in the training corpus in order to train the entity element recognition model and the label recognition model, thereby ensuring the accuracy with which the entity element recognition model extracts entity elements and the label recognition model extracts labels.
  • The step of training the entity element recognition model includes:
  • Step S61: Extract entity elements from the training corpus through the entity element recognition model to obtain the corresponding extracted entity elements;
  • The training corpus is input into the entity element recognition model, entity elements are extracted from the training corpus through the model, and the entity elements extracted by the model are used as the extracted entity elements. It should be understood that, since the accuracy of the entity element recognition model has not yet been assessed, the model may extract wrong entity elements, that is, entity elements that do not belong to the training corpus.
  • Step S62: Determine the entity elements among the artificial entity elements that overlap with the extracted entity elements as the entity element set;
  • Since the training corpus has been manually annotated with entity elements, the artificial entity elements are compared with the extracted entity elements, and the artificial entity elements that coincide with the extracted entity elements are taken as the entity element set. It is easy to understand that the entity elements in the entity element set must be correct entity elements, that is, they must belong to the input training corpus.
  • Step S63: Calculate the accuracy of the entity element recognition model according to the entity element set, the artificial entity elements and the extracted entity elements;
  • A preset formula is applied to the entity element set, the artificial entity elements and the extracted entity elements to obtain the accuracy of the trained entity element recognition model.
  • Step S64: Use the entity element recognition model whose accuracy exceeds a preset first accuracy as the trained entity element recognition model.
  • If the accuracy of the obtained entity element recognition model does not exceed the preset first accuracy, the model's extraction of entity elements is not yet accurate enough and wrong entity elements will be extracted; the training corpus then continues to be used as input to train the entity element recognition model until its accuracy exceeds the preset first accuracy.
  • This embodiment uses the training corpus and the artificial entity elements to train the entity element recognition model, and calculates the accuracy of the entity element recognition model to ensure that the trained model meets the preset accuracy, thereby ensuring that the model can accurately extract the entity elements in question and answer sentences.
  • The step of calculating the accuracy of the entity element recognition model based on the entity element set, the artificial entity elements and the extracted entity elements includes:
  • Step S631: Divide the number of entity elements in the entity element set by the number of extracted entity elements to obtain the accuracy rate of the entity element recognition model;
  • The number of entity elements in the entity element set is divided by the number of entity elements extracted by the entity element recognition model, and the result is used as the accuracy rate of the entity element recognition model.
  • Step S632: Divide the number of entity elements in the entity element set by the number of artificial entity elements to obtain the recall rate of the entity element recognition model;
  • The number of entity elements in the entity element set is divided by the number of manually annotated entity elements, and the result is used as the recall rate of the entity element recognition model.
  • Step S633: Calculate the product of the accuracy rate and the recall rate and the sum of the accuracy rate and the recall rate, divide the product by the sum, and multiply the result by a preset value to obtain the first F value of the entity element recognition model;
  • After the accuracy rate and the recall rate of the entity element recognition model are obtained, the accuracy rate is multiplied by the recall rate, the accuracy rate is added to the recall rate, the product is divided by the sum, and the result is multiplied by a preset value; this value is used as the first F value of the entity element recognition model (for the standard F1 score the preset value is 2).
  • Step S634: Calculate the accuracy of the entity element recognition model according to its accuracy rate, recall rate and first F value.
  • The accuracy of the entity element recognition model can be obtained from these three values, for example by setting different weights for the accuracy rate, the recall rate and the first F value; the weight ratio corresponding to each value can be drawn up by the developer.
  • In this embodiment, the accuracy rate, recall rate and first F value of the entity element recognition model are calculated from the entity element set, the artificial entity elements and the extracted entity elements, and the accuracy of the entity element recognition model is then calculated from the accuracy rate, the recall rate and the first F value.
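  • The sketch below computes the accuracy rate, recall rate and F value as described in steps S631 to S633, followed by the weighted combination of step S634; the weights are illustrative, since the application leaves the weight ratio to the developer.

```python
def evaluate_recognition_model(extracted, manual):
    """Accuracy rate (precision), recall rate and F value from the extracted
    elements and the manually annotated ones."""
    overlap = set(extracted) & set(manual)                           # the "entity element set"
    precision = len(overlap) / len(extracted) if extracted else 0.0  # step S631
    recall = len(overlap) / len(manual) if manual else 0.0           # step S632
    # step S633: product divided by sum, times a preset value (2 for the standard F1)
    f_value = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f_value

def model_accuracy(precision, recall, f_value, weights=(0.3, 0.3, 0.4)):
    """Step S634: weighted combination; the weight ratio is chosen by the developer."""
    w_p, w_r, w_f = weights
    return w_p * precision + w_r * recall + w_f * f_value

extracted = ["Huang Xiaoming", "Changsheng Medicine", "relationship"]  # model output
manual = ["Huang Xiaoming", "Changsheng Medicine"]                     # human annotation
p, r, f = evaluate_recognition_model(extracted, manual)
print(p, r, f, model_accuracy(p, r, f))
```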
  • The step of training the label recognition model includes:
  • Step S71: Extract labels from the training corpus through the label recognition model to obtain the corresponding extracted labels;
  • The training corpus is input into the label recognition model, labels are extracted from the training corpus through the model, and the labels extracted by the model are used as the extracted labels corresponding to the training corpus. It should be understood that, since the accuracy of the label recognition model has not yet been assessed, the model may extract wrong labels, that is, labels that do not belong to the training corpus.
  • Step S72: Determine the labels among the artificial labels that overlap with the extracted labels as the label set;
  • Since the training corpus has been manually labeled in the above steps, the artificial labels are compared with the extracted labels, and the artificial labels that coincide with the extracted labels are used as the label set. It is easy to understand that the labels in the label set must be correct labels, that is, they must belong to the input training corpus.
  • Step S73: Calculate the accuracy of the label recognition model according to the label set, the artificial labels and the extracted labels;
  • A preset formula is applied to the label set, the artificial labels and the extracted labels to obtain the accuracy of the trained label recognition model.
  • Step S74: Use the label recognition model whose accuracy exceeds a preset second accuracy as the trained label recognition model.
  • If the accuracy of the obtained label recognition model does not exceed the preset second accuracy, the model's label extraction is not yet accurate enough and wrong labels will be extracted; the training corpus then continues to be used as input to train the label recognition model until its accuracy exceeds the preset second accuracy.
  • This embodiment uses the training corpus and the artificial labels to train the label recognition model, and calculates the accuracy of the label recognition model to ensure that the trained model meets the preset accuracy, thereby ensuring that the model can accurately extract the labels in question and answer sentences.
  • The step of calculating the accuracy of the label recognition model according to the label set, the artificial labels and the extracted labels includes:
  • Step S731: Divide the number of labels in the label set by the number of extracted labels to obtain the accuracy rate of the label recognition model;
  • The labels in the label set must be correct labels, whereas the labels extracted by the label recognition model may not belong to the training corpus; therefore, the number of labels in the label set is divided by the number of labels extracted by the label recognition model, and the result is used as the accuracy rate of the label recognition model.
  • Step S732: Divide the number of labels in the label set by the number of artificial labels to obtain the recall rate of the label recognition model;
  • The number of labels in the label set is divided by the number of manually annotated labels, and the result is used as the recall rate of the label recognition model.
  • Step S733: Calculate the product of the accuracy rate and the recall rate and the sum of the accuracy rate and the recall rate, divide the product by the sum, and multiply the result by a preset value to obtain the second F value of the label recognition model;
  • The accuracy rate is multiplied by the recall rate, the accuracy rate is added to the recall rate, the product is divided by the sum, and the result is multiplied by the preset value to obtain the second F value.
  • Step S734: Calculate the accuracy of the label recognition model according to its accuracy rate, recall rate and second F value.
  • The accuracy of the label recognition model can be obtained from these three values by setting different weights for the accuracy rate, the recall rate and the second F value; it is easy to understand that the weight ratio corresponding to each value can be drawn up by the developer.
  • In this embodiment, the accuracy rate, recall rate and second F value of the label recognition model are calculated from the label set, the artificial labels and the extracted labels, and the accuracy of the label recognition model is then calculated from the accuracy rate, the recall rate and the second F value.
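  • Steps S64 and S74 both train until a preset accuracy threshold is exceeded. The loop below sketches that idea, reusing the helpers from the previous sketch; `model` is a hypothetical recognizer exposing fit() and extract(), not an interface defined by the application.

```python
def train_until_accurate(model, corpus, manual_annotations, threshold, max_rounds=50):
    """Keep training on the corpus until the model's accuracy (computed as in
    steps S63x / S73x) exceeds the preset first or second accuracy."""
    for _ in range(max_rounds):
        model.fit(corpus, manual_annotations)        # one more training pass
        extracted = model.extract(corpus)            # extracted entity elements or labels
        p, r, f = evaluate_recognition_model(extracted, manual_annotations)
        if model_accuracy(p, r, f) > threshold:
            return model                             # training completed
    raise RuntimeError("accuracy threshold not reached within max_rounds")
```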
  • FIG. 4 is a schematic flowchart of another embodiment of the question answering method based on the knowledge graph of this application.
  • Step S80: Use the artificial entity elements and artificial labels as inputs to a preset TransE algorithm, so that the artificial entity elements and the artificial labels are embedded into a low-dimensional vector space to generate the corresponding vector template;
  • The TransE algorithm is also preset in this embodiment; TransE learns a distributed vector representation of entities and relations.
  • The artificial entity elements and artificial labels are input into the preset TransE algorithm, which embeds them into the low-dimensional vector space to generate the corresponding vector template.
  • Step S90: Store the vector template in the graph database to construct the corresponding knowledge graph.
  • the vector template is stored in the graph database, and the corresponding knowledge graph is constructed according to the multiple vector templates stored in the graph database.
  • The detailed construction method of the knowledge graph is not described further in this embodiment.
  • This embodiment uses the preset TransE algorithm to obtain vector templates from the artificial entity elements and artificial labels, and constructs the corresponding knowledge graph from the vector templates, thereby ensuring the comprehensiveness and accuracy of the data in the knowledge graph.
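  • A minimal numpy sketch of TransE follows: entities and labels from (head, relation, tail) triples are embedded so that head + relation lies close to tail, using a margin ranking loss with corrupted tails. The triples and hyper-parameters are illustrative, not taken from the application.

```python
import numpy as np

triples = [("黄晓明", "label", "person"), ("长生医药", "label", "company")]
entities = sorted({h for h, _, _ in triples} | {t for _, _, t in triples})
relations = sorted({r for _, r, _ in triples})
dim, lr, margin = 16, 0.01, 1.0
rng = np.random.default_rng(0)
E = {e: rng.normal(scale=0.1, size=dim) for e in entities}    # entity embeddings
R = {r: rng.normal(scale=0.1, size=dim) for r in relations}   # relation embeddings

for _ in range(200):
    for h, r, t in triples:
        t_neg = rng.choice([e for e in entities if e != t])    # corrupted (negative) tail
        pos = E[h] + R[r] - E[t]                               # want ||h + r - t|| small
        neg = E[h] + R[r] - E[t_neg]
        loss = margin + np.linalg.norm(pos) - np.linalg.norm(neg)
        if loss > 0:                                           # margin ranking loss is active
            g_pos = pos / (np.linalg.norm(pos) + 1e-9)
            g_neg = neg / (np.linalg.norm(neg) + 1e-9)
            E[h] -= lr * (g_pos - g_neg)
            R[r] -= lr * (g_pos - g_neg)
            E[t] += lr * g_pos
            E[t_neg] -= lr * g_neg
```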
  • the step of inputting the query sentence into the knowledge graph for query, and obtaining the corresponding question and answer result includes:
  • Step S31: Vectorize the query sentence to generate the corresponding vector set;
  • Specifically, the query sentence is vectorized by an NLP algorithm to generate the corresponding vector set. It should be understood that the vectorization of the query sentence can also be realized in other ways, which is not limited in this embodiment.
  • Step S32: Match the vector set with the vector templates in the knowledge graph to obtain the corresponding question and answer result.
  • The vector set is matched against the vector templates in the knowledge graph.
  • The vector template in the knowledge graph that best matches the vector set is determined by calculating the matching degree between the vectors, and that vector template is then parsed to obtain the corresponding question and answer result.
  • In this embodiment, the query sentence is vectorized, and the question and answer result in the knowledge graph is determined more accurately through the matching of the vector set and the vector templates, thereby ensuring the accuracy of the question and answer result.
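  • A minimal sketch of the matching in step S32: pick the stored vector template whose embedding best matches the query vector. Cosine similarity is used here as the matching degree; the application does not prescribe a specific similarity measure.

```python
import numpy as np

def best_matching_template(query_vec, templates):
    """templates maps a template id to its vector; return the id with the highest
    cosine similarity to the query vector."""
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
    return max(templates, key=lambda tid: cosine(query_vec, templates[tid]))

templates = {"relation(黄晓明, 长生医药)": np.array([0.9, 0.1]),
             "attribute(长生医药, 注册资本)": np.array([0.1, 0.9])}   # illustrative vectors
print(best_matching_template(np.array([0.8, 0.2]), templates))
```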
  • An embodiment of the present application also proposes a non-volatile computer-readable storage medium that stores computer-readable instructions; when the computer-readable instructions are executed by a processor, the operations of the question answering method based on the knowledge graph described above are realized.
  • the optional embodiments of the non-volatile computer-readable storage medium of the present application are basically the same as the above-mentioned embodiments of the question and answer method based on the knowledge graph, and will not be repeated here.
  • The methods of the above embodiments can be implemented by means of software plus the necessary general-purpose hardware platform; they can of course also be implemented in hardware, but in many cases the former is the better implementation.
  • The technical solution of this application, in essence or in the part that contributes to the existing technology, can be embodied in the form of a software product.
  • The computer software product is stored in a storage medium (such as ROM/RAM, a magnetic disk, or an optical disk) and includes several instructions that enable a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, or a network device, etc.) to execute the methods described in the embodiments of the present application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Animal Behavior & Ethology (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

Disclosed are a knowledge graph-based question answering method and apparatus, and a storage medium, the method comprising: acquiring a question and answer sentence input by a user, performing word segmentation on the question and answer sentence, and, by means of a trained entity element recognition model and a trained label recognition model, respectively acquiring entity elements in the question and answer sentence and labels corresponding to the entity elements (S10); inputting the entity elements, the labels and the question and answer sentence into a Bayesian classifier, calculating the degree of matching between preset templates in the Bayesian classifier and the question and answer sentence, and determining the preset template with the highest degree of matching as the query template (S20); and inputting the entity elements and the labels into the query template to obtain a corresponding query sentence, and inputting the query sentence into a knowledge graph for querying so as to obtain a corresponding question and answer result (S30).
PCT/CN2019/117583 2019-09-18 2019-11-12 Knowledge graph-based question answering method and apparatus, and storage medium WO2021051558A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910885936.X 2019-09-18
CN201910885936.XA CN110781284B (zh) 2019-09-18 Knowledge graph-based question answering method, apparatus and storage medium

Publications (1)

Publication Number Publication Date
WO2021051558A1 true WO2021051558A1 (fr) 2021-03-25

Family

ID=69383813

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/117583 WO2021051558A1 (fr) 2019-09-18 2019-11-12 Knowledge graph-based question answering method and apparatus, and storage medium

Country Status (2)

Country Link
CN (1) CN110781284B (fr)
WO (1) WO2021051558A1 (fr)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111914074B (zh) * 2020-07-16 2023-06-20 华中师范大学 Restricted-domain dialogue generation method and system based on deep learning and knowledge graph
CN112182178A (zh) * 2020-09-25 2021-01-05 北京字节跳动网络技术有限公司 Intelligent question answering method, apparatus, device and readable storage medium
CN112507135B (zh) * 2020-12-17 2021-11-16 深圳市一号互联科技有限公司 Knowledge graph query template construction method, apparatus, system and storage medium
CN113254635B (zh) * 2021-04-14 2021-11-05 腾讯科技(深圳)有限公司 Data processing method, apparatus and storage medium
CN115794857A (zh) * 2022-01-19 2023-03-14 支付宝(杭州)信息技术有限公司 Query request processing method and apparatus
CN115186780B (zh) * 2022-09-14 2022-12-06 江西风向标智能科技有限公司 Subject knowledge point classification model training method, system, storage medium and device
CN116975657B (zh) * 2023-09-25 2023-11-28 中国人民解放军军事科学院国防科技创新研究院 Method and apparatus for mining instant advantage windows based on artificial experience

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140363082A1 (en) * 2013-06-09 2014-12-11 Apple Inc. Integrating stroke-distribution information into spatial feature extraction for automatic handwriting recognition
CN107066541A (zh) * 2017-03-13 2017-08-18 平安科技(深圳)有限公司 Method and system for processing customer service question answering data
CN107766483A (zh) * 2017-10-13 2018-03-06 华中科技大学 Interactive question answering method and system based on knowledge graph
CN108491433A (zh) * 2018-02-09 2018-09-04 平安科技(深圳)有限公司 Chat response method, electronic device and storage medium
CN108959366A (zh) * 2018-05-21 2018-12-07 宁波薄言信息技术有限公司 Open question answering method
CN109815318A (zh) * 2018-12-24 2019-05-28 平安科技(深圳)有限公司 Question answer query method and system in a question answering system, and computer device
CN110032632A (zh) * 2019-04-04 2019-07-19 平安科技(深圳)有限公司 Intelligent customer service question answering method, apparatus and storage medium based on text similarity

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105068661B (zh) * 2015-09-07 2018-09-07 百度在线网络技术(北京)有限公司 Artificial intelligence-based human-computer interaction method and system
CN105868313B (zh) * 2016-03-25 2019-02-12 浙江大学 Knowledge graph question answering system and method based on template matching technology
CN107992585B (zh) * 2017-12-08 2020-09-18 北京百度网讯科技有限公司 Universal label mining method, apparatus, server and medium
US11030226B2 (en) * 2018-01-19 2021-06-08 International Business Machines Corporation Facilitating answering questions involving reasoning over quantitative information
CN109033374B (zh) * 2018-07-27 2022-03-15 四川长虹电器股份有限公司 Knowledge graph retrieval method based on Bayesian classifier
CN109492077B (zh) * 2018-09-29 2020-09-29 北京智通云联科技有限公司 Knowledge graph-based question answering method and system for the petrochemical field
CN109308321A (zh) * 2018-11-27 2019-02-05 烟台中科网络技术研究所 Knowledge question answering method, knowledge question answering system and computer-readable storage medium


Also Published As

Publication number Publication date
CN110781284B (zh) 2024-05-28
CN110781284A (zh) 2020-02-11


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19945754

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19945754

Country of ref document: EP

Kind code of ref document: A1