CN111368145A - Knowledge graph creating method and system and terminal equipment - Google Patents

Knowledge graph creating method and system and terminal equipment

Info

Publication number
CN111368145A
CN111368145A (application CN201811602914.XA)
Authority
CN
China
Prior art keywords
user
information
knowledge
data
creating
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811602914.XA
Other languages
Chinese (zh)
Inventor
陈烁
王宏玉
王晓东
王海鹏
杜威
卢裕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenyang Siasun Robot and Automation Co Ltd
Original Assignee
Shenyang Siasun Robot and Automation Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenyang Siasun Robot and Automation Co Ltd filed Critical Shenyang Siasun Robot and Automation Co Ltd
Priority to CN201811602914.XA
Publication of CN111368145A
Legal status: Pending

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application relates to the technical field of data processing, and in particular discloses a knowledge graph creating method, a knowledge graph creating system and terminal equipment. The knowledge graph creating method comprises the following steps: acquiring user information and voice information within a preset range; recognizing the voice information; extracting, from the user information and the voice information, graph data for creating the knowledge graph according to the recognition result of the voice information; and storing the graph data in a specified format to obtain the created knowledge graph. In the embodiments provided by the application, the data for creating the knowledge graph are acquired both visually and by voice, and during data processing the correct relations among data of different types or from different fields are established through data fusion and extraction, so that the created knowledge graph can be applied to different scenes. The created knowledge graph contains data of different types, and data in the corresponding field can be called when the knowledge graph interacts with a user, thereby meeting the accuracy requirements of some users.

Description

Knowledge graph creating method and system and terminal equipment
Technical Field
The present application relates to the field of data processing technologies, and in particular, to a method, a system, and a terminal device for creating a knowledge graph.
Background
A Knowledge Graph (also called a scientific knowledge graph or mapping knowledge domain) is a series of graphs that show the relationship between the development process of knowledge and its structure. In general, constructing a knowledge graph means extracting knowledge elements from raw data by a series of automatic or semi-automatic means and storing them in the data layer and the schema layer of a knowledge base. Such a knowledge graph serves an external system as preset knowledge: it can only be applied after being constructed and processed in advance, and data from different sources and in different forms can only be used in the construction after being processed and fused beforehand. However, a knowledge graph constructed in this way cannot meet the requirements of some new customers who need precise marketing, nor the requirements of external systems with higher real-time demands, such as chat systems.
In addition, currently created knowledge graphs generally only extract knowledge and associate data of various aspects without fusing technologies from different fields, so their application scenarios are limited.
Disclosure of Invention
In view of this, the present application provides a method, a system and a terminal device for creating a knowledge graph, aiming to solve the problems that knowledge graphs created in the prior art cannot meet some customers' accuracy requirements and that their application scenarios are limited.
A first aspect of an embodiment of the present application provides a method for creating a knowledge graph, where the method for creating a knowledge graph includes:
acquiring user information and voice information in a preset range, wherein the user information comprises face information of a user and identity information of the user;
recognizing the voice information;
extracting graph data for creating the knowledge graph from the user information and the voice information according to a recognition result of the voice information;
storing the graph data in a specified format to obtain the created knowledge graph.
Optionally, the acquiring the user information includes:
acquiring a video file;
extracting a face image of the user from the video file;
detecting whether the user is a registered user or not according to the face image of the user;
if the user is not a registered user, acquiring the identity information of the user according to the face image;
and registering the user according to the face image and the identity information, and correspondingly storing the user information of the user.
Optionally, the recognizing the voice information specifically includes: recognizing the voice information by a neural-network-based deep learning algorithm.
Optionally, extracting, from the user information and the voice information, graph data for creating the knowledge graph according to a recognition result of the voice information includes:
extracting, from the recognition result, graph data that can be used for creating the knowledge graph according to semantic understanding;
and/or
extracting graph data for creating the knowledge graph according to the semantic understanding and the facial information of the user.
Optionally, the method for creating a knowledge graph further includes:
when an interaction request input by a user is received, acquiring, according to the interaction request, the field to which the content of the requested interaction belongs;
and calling corresponding content from the knowledge graph according to the field, so as to interact with the user.
A second aspect of an embodiment of the present application provides a system for creating a knowledge-graph, including:
the information acquisition unit is used for acquiring user information and voice information in a preset range, wherein the user information comprises face information of a user and identity information of the user;
a recognition unit for recognizing the voice information;
a data extraction unit configured to extract graph data for creating the knowledge graph from the user information and the voice information according to a recognition result of the voice information;
and the storage unit is used for storing the graph data in a specified format to obtain the created knowledge graph.
Optionally, the information obtaining unit is specifically configured to:
acquiring a video file;
extracting a face image of the user from the video file;
detecting whether the user is a registered user or not according to the face image of the user;
if the user is not a registered user, acquiring the identity information of the user according to the face image;
and registering the user according to the face image and the identity information, and correspondingly storing the user information of the user.
Optionally, the recognition unit is specifically configured to: recognize the voice information by a neural-network-based deep learning algorithm.
Optionally, the data extracting unit is specifically configured to:
extracting, from the recognition result, graph data that can be used for creating the knowledge graph according to semantic understanding;
and/or
extracting graph data for creating the knowledge graph according to the semantic understanding and the facial information of the user.
A third aspect of embodiments of the present application provides a terminal device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of any one of the methods for creating a knowledge-graph as provided in the first aspect.
In the embodiments provided by the application, the data for creating the knowledge graph are acquired both visually and by voice, and data acquisition and data processing can be carried out in real time; during data processing, correct relations among data of different types or from different fields are established through data fusion and extraction, so that the created knowledge graph can be applied to different scenes. The created knowledge graph contains data of different types, and data in the corresponding field can be called when interacting with a user, thereby meeting the accuracy requirements of some users.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present application, and that those skilled in the art can obtain other drawings based on them without inventive effort.
FIG. 1 shows a schematic diagram of the overall structure of a knowledge-graph creation system provided by the present application;
FIG. 2 is a schematic flow chart of a method for creating a knowledge graph according to an embodiment of the present application;
FIG. 3 is a schematic structural diagram of a knowledge-graph creation system provided in accordance with an embodiment of the present application;
fig. 4 is a schematic diagram of a terminal device provided according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
Fig. 1 shows the knowledge graph creation system provided in the present application as applied on an Android platform. Its structure includes a peripheral layer, a data processing layer and a knowledge processing layer: the peripheral layer comprises a video camera and an audio microphone, used respectively to acquire video files and voice information within a certain range; the data processing layer comprises a vision module and a speech module, which respectively process the data acquired by the video camera and the audio microphone in the peripheral layer; and the knowledge processing layer extracts, fuses and stores the data from the data processing layer to create the corresponding knowledge graph.
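To make the three-layer division concrete, the following is a minimal sketch of how the peripheral layer, data processing layer and knowledge processing layer might be wired together. All class names, method names and placeholder data are illustrative assumptions for explanation only, not the implementation disclosed by the patent.

```python
# Illustrative sketch only: the layer interfaces and placeholder data below are
# assumptions made for explanation, not the patent's actual implementation.

class PeripheralLayer:
    """Wraps the video camera and the audio microphone."""
    def capture_video_frame(self):
        return "<raw video frame>"          # placeholder for camera output

    def capture_audio_chunk(self):
        return "<raw audio samples>"        # placeholder for microphone output


class DataProcessingLayer:
    """Vision module and speech module that process the peripheral layer's data."""
    def process_vision(self, frame):
        return {"face": "face features", "age": 30, "gender": "female"}

    def process_speech(self, audio):
        return {"text": "I usually drink coffee in the morning"}


class KnowledgeProcessingLayer:
    """Extracts, fuses and stores graph data to build the knowledge graph."""
    def __init__(self):
        self.triples = []                   # (subject, predicate, object) store

    def fuse_and_store(self, vision_result, speech_result):
        self.triples.append(("user", "has_age", vision_result["age"]))
        self.triples.append(("user", "said", speech_result["text"]))
        return self.triples


if __name__ == "__main__":
    peripherals = PeripheralLayer()
    processing = DataProcessingLayer()
    knowledge = KnowledgeProcessingLayer()
    vision = processing.process_vision(peripherals.capture_video_frame())
    speech = processing.process_speech(peripherals.capture_audio_chunk())
    print(knowledge.fuse_and_store(vision, speech))
```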
In order to explain the technical solution described in the present application, the following description will be given by way of specific examples.
The first embodiment is as follows:
fig. 2 is a schematic flow chart illustrating an implementation of a method for creating a knowledge graph provided by the present application in an embodiment, and includes steps S21-S24, which are detailed as follows:
step S21, obtaining user information and voice information within a preset range, where the user information includes face information of a user and identity information of the user.
In the embodiment provided by the application, the video camera acquires the user information of the user. The user information comprises facial information of the user, such as a face picture and facial expressions, and the identity information of the user, such as the user's age and gender, is further acquired according to the face picture. The video camera may be an ordinary 2D camera, such as the camera of a smartphone, tablet or pad, and may also be installed on another terminal device; in that case, the other terminal device establishes a connection with the knowledge graph creation system for communication, and the system obtains the user information from the other terminal device. Meanwhile, the knowledge graph creation system can acquire voice information within a certain range through its audio microphone. The user information and the voice information are stored locally or in the cloud for convenient use.
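As a rough illustration of this acquisition step, the sketch below grabs one camera frame and a short audio clip from local peripherals. It assumes the OpenCV and sounddevice packages, neither of which is named in the patent; the device index, recording length and sample rate are likewise assumptions.

```python
# Sketch under assumptions: OpenCV for the 2D camera and sounddevice for the
# microphone; the patent does not prescribe either library.
import cv2
import sounddevice as sd

def acquire_user_and_voice(camera_index=0, seconds=3, sample_rate=16000):
    """Grab one video frame (for face information) and a short audio clip."""
    camera = cv2.VideoCapture(camera_index)
    ok, frame = camera.read()               # single frame that should contain the user's face
    camera.release()
    if not ok:
        raise RuntimeError("camera not available")

    audio = sd.rec(int(seconds * sample_rate), samplerate=sample_rate, channels=1)
    sd.wait()                               # block until the recording finishes
    return frame, audio

if __name__ == "__main__":
    frame, audio = acquire_user_and_voice()
    print("frame shape:", frame.shape, "audio samples:", audio.shape)
```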
Optionally, the acquiring the user information includes:
acquiring a video file;
extracting a face image of the user from the video file;
detecting whether the user is a registered user or not according to the face image of the user;
if the user is not a registered user, acquiring the identity information of the user according to the face image;
and registering the user according to the face image and the identity information, and correspondingly storing the user information of the user.
In this step, the system first obtains a video file through the video camera and then detects the user's face image in the video file using big-data deep learning technology; face images can be detected automatically during video acquisition. If the corresponding user is judged to be unregistered according to the information in the face image, the system automatically obtains the user's age, gender and other information and registers the user based on the face image, so that the user can interact with the system later. In this step, automatic registration and extraction of face information are realized on the Android platform, and the knowledge information or graph data for creating the knowledge graph is extracted subsequently.
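A minimal sketch of this register-if-unknown flow follows. It assumes the open-source face_recognition package and a toy in-memory store, neither of which is named in the patent, and the age/gender estimation is stubbed out because no concrete model is specified.

```python
# Sketch of the register-if-unknown flow, assuming the face_recognition package.
import face_recognition

registered_faces = {}     # user_id -> face encoding (toy in-memory store)
user_profiles = {}        # user_id -> identity information (age, gender, ...)

def estimate_identity(face_image):
    # Placeholder: a real system would run an age/gender estimation model here.
    return {"age": "unknown", "gender": "unknown"}

def register_if_unknown(face_image):
    """Return the user's id, registering the user first if the face is new."""
    encodings = face_recognition.face_encodings(face_image)
    if not encodings:
        return None                                   # no face found in this frame
    encoding = encodings[0]
    known_ids = list(registered_faces.keys())
    matches = face_recognition.compare_faces(
        [registered_faces[uid] for uid in known_ids], encoding)
    if any(matches):
        return known_ids[matches.index(True)]         # already registered
    user_id = f"user_{len(registered_faces) + 1}"     # new user: register
    registered_faces[user_id] = encoding
    user_profiles[user_id] = estimate_identity(face_image)
    return user_id

# Usage: frame = face_recognition.load_image_file("frame_from_video.jpg")
#        print(register_if_unknown(frame))
```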
Step S22, recognizing the voice information.
In the embodiment provided by the application, the voice information obtained in step S21 is recognized. Specifically, a neural-network-based deep learning algorithm recognizes the voice information, which greatly improves recognition efficiency; during speech recognition, both voice information stored inside the system and voice information stored in the cloud can be recognized. In addition, during recognition, candidate content is screened by setting factors such as a confidence threshold and response time, and audio processing algorithms such as noise reduction and signal enhancement are integrated, which effectively improves recognition accuracy. This recognition approach balances response speed and accuracy and effectively improves the overall efficiency of the system.
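The patent specifies a neural-network-based recognizer but no concrete engine. As a stand-in, the sketch below uses the SpeechRecognition package's Google Web Speech backend, with ambient-noise calibration and a confidence threshold loosely mirroring the noise-reduction and confidence factors mentioned above; the function name and threshold value are assumptions.

```python
# Stand-in sketch: the SpeechRecognition package substitutes for the
# neural-network recognizer described in the text.
import speech_recognition as sr

def recognize_file(wav_path, min_confidence=0.6):
    recognizer = sr.Recognizer()
    with sr.AudioFile(wav_path) as source:
        recognizer.adjust_for_ambient_noise(source)    # crude noise handling
        audio = recognizer.record(source)
    result = recognizer.recognize_google(audio, show_all=True)  # all hypotheses
    if not result:
        return None
    best = result["alternative"][0]
    # Discard low-confidence hypotheses, mirroring the confidence factor above.
    if best.get("confidence", 1.0) < min_confidence:
        return None
    return best["transcript"]

# Usage: print(recognize_file("utterance.wav"))
```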
Step S23, extracting graph data for creating the knowledge graph from the user information and the voice information according to the recognition result of the voice information.
Optionally, in another embodiment provided by the present application, extracting, from the user information and the voice information, graph data for creating the knowledge graph according to the recognition result of the voice information includes:
extracting, from the recognition result, graph data that can be used for creating the knowledge graph according to semantic understanding;
and/or
extracting graph data for creating the knowledge graph according to the semantic understanding and the facial information of the user.
Specifically, after the recognition result of the voice information is obtained, the recognition result is understood to obtain useful information, the intention of the user and so on, and a corresponding answer is made according to the understood result. Understanding the recognition result includes synonym processing, content-word extraction, syntactic analysis, ambiguity resolution, template matching, answer generation and the like. For example, the recognition result may contain homophones, synonyms and other variants; the system maintains correspondence lists of such words, adds pinyin information to them, and preprocesses the recognition result against these lists. Furthermore, the system for creating the knowledge graph collects a large number of content words ("real words") in multiple fields in advance so as to extract the content words from the recognition result of the voice information; content-word extraction builds a content-word tree and searches it to extract the content words and their parts of speech from the sentence, while the content-word lexicon is continuously updated. Syntactic analysis uses the dependency relationships between the words in a sentence to represent its syntactic structure (such as subject-predicate, verb-object and attributive relationships) and uses a tree structure to represent the structure of the whole sentence, so that the user's intention can be understood effectively. In addition, in many cases the meaning of a sentence is ambiguous and depends heavily on the context; ambiguity resolution judges the true intention of an ambiguous sentence according to contextual information. The system expresses the preprocessed sentence according to established rules and then matches it against the rules in a template library, which is continuously updated, so as to determine the real meaning of the sentence. Finally, according to the understood intention, an appropriate answer template is selected and combined with the relevant data to generate the answer. Semantic understanding of the recognition result helps the system learn the language the user uses in daily communication, so that words that sound obtrusive to the user are avoided during human-machine interaction.
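As one concrete reading of the content-word tree described above, the sketch below builds a character-level trie of content words and scans a sentence with longest-prefix matching to recover the words and their parts of speech. The vocabulary and the matching strategy are illustrative assumptions.

```python
# Hedged sketch of the content-word ("real word") tree lookup: build a trie of
# domain content words, then longest-match scan a sentence against it.

def build_word_tree(words):
    root = {}
    for word, pos in words:
        node = root
        for ch in word:
            node = node.setdefault(ch, {})
        node["_pos"] = pos          # mark the end of a content word with its part of speech
    return root

def extract_content_words(sentence, tree):
    found, i = [], 0
    while i < len(sentence):
        node, j, last = tree, i, None
        while j < len(sentence) and sentence[j] in node:
            node = node[sentence[j]]
            j += 1
            if "_pos" in node:
                last = (sentence[i:j], node["_pos"])   # longest match so far
        if last:
            found.append(last)
            i += len(last[0])
        else:
            i += 1
    return found

tree = build_word_tree([("knowledge graph", "noun"), ("create", "verb")])
print(extract_content_words("please create a knowledge graph", tree))
```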
Based on the recognition result of the voice information, the semantic understanding and the user information, graph data that can be used for creating the knowledge graph is obtained. For example, visual information is extracted by analyzing the video file when acquiring the user information; text is obtained from the user's voice interaction contained in the voice information and the required knowledge is extracted from it; the two kinds of information (visual information and speech text information) are then fused into the graph data used for creating the knowledge graph. Alternatively, text information may be extracted directly from the recognition result of the voice information as graph data for creating the knowledge graph.
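The following sketch illustrates, under assumed field names, how the visual identity information and the recognized speech text might be fused into candidate graph data (triples); it is not the fusion algorithm disclosed by the patent.

```python
# Illustrative fusion of visual identity information and speech-derived facts
# into candidate triples; all field names are assumptions.

def fuse_to_graph_data(user_id, identity, utterance_text, extracted_facts):
    triples = [
        (user_id, "has_age", identity.get("age")),
        (user_id, "has_gender", identity.get("gender")),
        (user_id, "said", utterance_text),
    ]
    triples.extend(extracted_facts)                    # facts mined from the recognized text
    return [t for t in triples if t[2] is not None]    # drop incomplete triples

print(fuse_to_graph_data(
    "user_1",
    {"age": 30, "gender": "female"},
    "I usually drink coffee in the morning",
    [("user_1", "likes", "coffee")],
))
```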
Step S24, storing the graph data in a specified format to obtain the created knowledge graph.
In the embodiment provided by the application, the acquired graph data is stored in the form of Resource Description Framework (RDF) triples to finally form the knowledge graph; RDF provides a standard data model with which the knowledge graph can describe entities, attributes and relationships. The data in the knowledge graph can therefore be stored as RDF triples, either locally or in the cloud, so that the data in the knowledge graph remains usable whether or not the user has a network connection.
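For the storage step, the sketch below uses the rdflib package, which implements the RDF data model mentioned above; the namespace URI, the predicates and the output path are illustrative assumptions, and the serialized file could equally be written to local or cloud storage.

```python
# RDF triple storage sketch using rdflib; namespace and predicates are assumed.
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/kg/")     # assumed namespace, not from the patent

def store_triples(triples, path="knowledge_graph.ttl"):
    g = Graph()
    for subject, predicate, obj in triples:
        g.add((EX[subject], EX[predicate], Literal(obj)))
    g.serialize(destination=path, format="turtle")   # write the graph to a local file
    return g

g = store_triples([("user_1", "likes", "coffee"), ("user_1", "has_age", 30)])
print(g.serialize(format="turtle"))
```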
Optionally, in another embodiment provided by the present application, the method for creating a knowledge-graph further includes:
when an interaction request input by a user is received, acquiring, according to the interaction request, the field to which the content of the requested interaction belongs;
and calling corresponding content from the knowledge graph according to the field, so as to interact with the user.
In the embodiment provided by the application, while the knowledge graph is being created, if an interaction request from a user is received (for example, the user wants to have a conversation with the system or asks the system a question), the field to which the user's interaction content belongs is determined according to the interaction request, and then relevant data suitable for communicating with the user is retrieved from the knowledge graph, realizing interaction and communication with the user. It should be noted that communication with the user can take place while the knowledge graph is being created: while the system is interacting with the user, the voice information and other data input by the user can be collected, stored and then processed as basic data for creating the knowledge graph. The construction of the knowledge graph and the interaction with the user can therefore proceed simultaneously, so the knowledge graph can be applied immediately, in real time, to the conversation or service with the user as it is being constructed.
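The sketch below illustrates this optional interaction path with a naive keyword-based domain classifier and a lookup over domain-tagged triples; the domains, keywords and the fourth "domain" element on each triple are assumptions made only for illustration.

```python
# Toy interaction step: classify the request into a domain, then pull matching
# knowledge from a domain-tagged triple store. All domains/keywords are assumed.

DOMAIN_KEYWORDS = {
    "food": ["coffee", "eat", "drink"],
    "schedule": ["meeting", "time", "tomorrow"],
}

def detect_domain(request_text):
    text = request_text.lower()
    for domain, keywords in DOMAIN_KEYWORDS.items():
        if any(keyword in text for keyword in keywords):
            return domain
    return "general"

def answer_from_graph(request_text, triples):
    domain = detect_domain(request_text)
    relevant = [t for t in triples if t[3] == domain]   # 4th element = domain tag
    return relevant or "no matching knowledge yet"

store = [("user_1", "likes", "coffee", "food")]
print(answer_from_graph("what do I usually drink?", store))
```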
In the embodiments provided by the application, the data for creating the knowledge graph are acquired both visually and by voice, and data acquisition and data processing can be carried out in real time; during data processing, correct relations among data of different types or from different fields are established through data fusion and extraction, so that the created knowledge graph can be applied to different scenes. The created knowledge graph contains data of different types, and data in the corresponding field can be called when interacting with a user, thereby meeting the accuracy requirements of some users.
Example two:
fig. 3 shows a block diagram of a system for creating a knowledge graph according to an embodiment of the present application, which corresponds to the method for creating a knowledge graph according to the above embodiment, and for convenience of explanation, only the parts related to the embodiment of the present application are shown.
As shown in fig. 3: the system for creating the knowledge graph comprises:
an information obtaining unit 31, configured to obtain user information and voice information within a preset range, where the user information includes face information of a user and identity information of the user;
a recognition unit 32 for recognizing the voice information;
a data extraction unit 33 for extracting graph data for creating the knowledge graph from the user information and the voice information according to a recognition result of the voice information;
a storage unit 34 for storing the graph data in a specified format to obtain the created knowledge graph.
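As with the method above, the four units can be read as a pipeline. The sketch below wires assumed unit interfaces together; the class and method names, and the stub units used for the demonstration, are not from the patent.

```python
# Assumed composition of the four units; the interfaces mirror steps S21-S24.

class CreationSystem:
    def __init__(self, info_unit, recognition_unit, extraction_unit, storage_unit):
        self.info_unit = info_unit
        self.recognition_unit = recognition_unit
        self.extraction_unit = extraction_unit
        self.storage_unit = storage_unit

    def run_once(self):
        user_info, voice = self.info_unit.acquire()                  # S21
        text = self.recognition_unit.recognize(voice)                # S22
        graph_data = self.extraction_unit.extract(user_info, text)   # S23
        return self.storage_unit.store(graph_data)                   # S24


class _Stub:
    """Tiny stand-ins so the wiring can be demonstrated end to end."""
    def acquire(self):              return {"age": 30}, b"raw-audio"
    def recognize(self, voice):     return "I like coffee"
    def extract(self, info, text):  return [("user_1", "likes", "coffee")]
    def store(self, graph_data):    return {"stored_triples": len(graph_data)}


print(CreationSystem(_Stub(), _Stub(), _Stub(), _Stub()).run_once())
```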
Optionally, the information obtaining unit 31 is specifically configured to:
acquiring a video file;
extracting a face image of the user from the video file;
detecting whether the user is a registered user or not according to the face image of the user;
if the user is not a registered user, acquiring the identity information of the user according to the face image;
and registering the user according to the face image and the identity information, and correspondingly storing the user information of the user.
Optionally, the recognition unit 32 is specifically configured to: recognize the voice information by a neural-network-based deep learning algorithm.
Optionally, the data extracting unit 33 is specifically configured to:
extracting, from the recognition result, graph data that can be used for creating the knowledge graph according to semantic understanding;
and/or
extracting graph data for creating the knowledge graph according to the semantic understanding and the facial information of the user.
Optionally, the system for creating a knowledge-graph is further configured to:
when an interaction request input by a user is received, acquiring, according to the interaction request, the field to which the content of the requested interaction belongs;
and calling corresponding content from the knowledge graph according to the field, so as to interact with the user.
The above-described embodiments of the present invention should not be construed as limiting the scope of the present invention. Any other corresponding changes and modifications made according to the technical idea of the present invention should be included in the protection scope of the claims of the present invention.
Example three:
fig. 4 shows a schematic structural diagram of a terminal device provided in an embodiment of the present application. The terminal device 4 of this embodiment includes: a processor 40, a memory 41, and a computer program 42 stored in the memory 41 and executable on the processor 40, such as a program implementing the method for creating a knowledge graph. When executing the computer program 42, the processor 40 implements the steps in the above-described embodiments of the method for creating a knowledge graph, such as steps S21 to S24 shown in fig. 2.
The terminal device 4 may be a robot. The terminal device 4 may include, but is not limited to, a processor 40 and a memory 41. It will be understood by those skilled in the art that fig. 4 is only an example of the terminal device 4 and does not constitute a limitation of the terminal device 4, which may include more or fewer components than those shown, combine some components, or have different components; for example, the terminal device 4 may further include an input-output device, a network access device, a bus, etc.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process described above may refer to the corresponding process in the foregoing method embodiment, and is not described herein again.

Claims (10)

1. A method for creating a knowledge graph, the method comprising:
acquiring user information and voice information in a preset range, wherein the user information comprises face information of a user and identity information of the user;
recognizing the voice information;
extracting graph data for creating the knowledge graph from the user information and the voice information according to a recognition result of the voice information;
storing the graph data in a specified format to obtain the created knowledge graph.
2. The method of knowledge-graph creation as claimed in claim 1, wherein said obtaining user information comprises:
acquiring a video file;
extracting a face image of the user from the video file;
detecting whether the user is a registered user or not according to the face image of the user;
if the user is not a registered user, acquiring the identity information of the user according to the face image;
and registering the user according to the face image and the identity information, and correspondingly storing the user information of the user.
3. The method for knowledge-graph creation as claimed in claim 1, wherein said recognizing the voice information specifically comprises: recognizing the voice information by a neural-network-based deep learning algorithm.
4. The method of creating a knowledge graph according to claim 1, wherein extracting graph data for creating the knowledge graph from the user information and the voice information based on the recognition result of the voice information comprises:
extracting, from the recognition result, graph data that can be used for creating the knowledge graph according to semantic understanding;
and/or
extracting graph data for creating the knowledge graph according to the semantic understanding and the facial information of the user.
5. The method of knowledge-graph creation as claimed in any one of claims 1 to 4, wherein said method of knowledge-graph creation further comprises:
when an interaction request input by a user is received, acquiring, according to the interaction request, the field to which the content of the requested interaction belongs;
and calling corresponding content from the knowledge graph according to the field, so as to interact with the user.
6. A system for creating a knowledge graph, the system comprising:
the information acquisition unit is used for acquiring user information and voice information in a preset range, wherein the user information comprises face information of a user and identity information of the user;
a recognition unit for recognizing the voice information;
a data extraction unit configured to extract graph data for creating the knowledge graph from the user information and the voice information according to a recognition result of the voice information;
and the storage unit is used for storing the graph data in a specified format to obtain the created knowledge graph.
7. The system for knowledge-graph creation according to claim 6, wherein the information acquisition unit is specifically configured to:
acquiring a video file;
extracting a face image of the user from the video file;
detecting whether the user is a registered user or not according to the face image of the user;
if the user is not a registered user, acquiring the identity information of the user according to the face image;
and registering the user according to the face image and the identity information, and correspondingly storing the user information of the user.
8. The system for knowledge-graph creation as defined in claim 6, wherein the recognition unit is specifically configured to: recognize the voice information by a neural-network-based deep learning algorithm.
9. The system for knowledge-graph creation as defined in claim 6, wherein the data extraction unit is specifically configured to:
extract, from the recognition result, graph data that can be used for creating the knowledge graph according to semantic understanding;
and/or
extract graph data for creating the knowledge graph according to the semantic understanding and the facial information of the user.
10. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any of claims 1 to 5 when executing the computer program.
CN201811602914.XA 2018-12-26 2018-12-26 Knowledge graph creating method and system and terminal equipment Pending CN111368145A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811602914.XA CN111368145A (en) 2018-12-26 2018-12-26 Knowledge graph creating method and system and terminal equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811602914.XA CN111368145A (en) 2018-12-26 2018-12-26 Knowledge graph creating method and system and terminal equipment

Publications (1)

Publication Number Publication Date
CN111368145A true CN111368145A (en) 2020-07-03

Family

ID=71208975

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811602914.XA Pending CN111368145A (en) 2018-12-26 2018-12-26 Knowledge graph creating method and system and terminal equipment

Country Status (1)

Country Link
CN (1) CN111368145A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111858892A (en) * 2020-07-24 2020-10-30 中国平安人寿保险股份有限公司 Voice interaction method, device, equipment and medium based on knowledge graph
CN112465144A (en) * 2020-12-11 2021-03-09 北京航空航天大学 Multi-modal demonstration intention generation method and device based on limited knowledge
CN112507138A (en) * 2020-12-28 2021-03-16 医渡云(北京)技术有限公司 Method and device for constructing disease-specific knowledge map, medium and electronic equipment
CN113408690A (en) * 2021-07-01 2021-09-17 之江实验室 Robot personalized emotion interaction device and method based on multi-mode knowledge graph

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101958892A (en) * 2010-09-16 2011-01-26 汉王科技股份有限公司 Electronic data protection method, device and system based on face recognition
CN104834849A (en) * 2015-04-14 2015-08-12 时代亿宝(北京)科技有限公司 Dual-factor identity authentication method and system based on voiceprint recognition and face recognition
CN106156365A (en) * 2016-08-03 2016-11-23 北京智能管家科技有限公司 A kind of generation method and device of knowledge mapping
CN106875949A (en) * 2017-04-28 2017-06-20 深圳市大乘科技股份有限公司 A kind of bearing calibration of speech recognition and device
CN107609478A (en) * 2017-08-09 2018-01-19 广州思涵信息科技有限公司 A kind of real-time analysis of the students system and method for matching classroom knowledge content
CN107665252A (en) * 2017-09-27 2018-02-06 深圳证券信息有限公司 A kind of method and device of creation of knowledge collection of illustrative plates
WO2018036239A1 (en) * 2016-08-24 2018-03-01 慧科讯业有限公司 Method, apparatus and system for monitoring internet media events based on industry knowledge mapping database

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101958892A (en) * 2010-09-16 2011-01-26 汉王科技股份有限公司 Electronic data protection method, device and system based on face recognition
CN104834849A (en) * 2015-04-14 2015-08-12 时代亿宝(北京)科技有限公司 Dual-factor identity authentication method and system based on voiceprint recognition and face recognition
CN106156365A (en) * 2016-08-03 2016-11-23 北京智能管家科技有限公司 A kind of generation method and device of knowledge mapping
WO2018036239A1 (en) * 2016-08-24 2018-03-01 慧科讯业有限公司 Method, apparatus and system for monitoring internet media events based on industry knowledge mapping database
CN106875949A (en) * 2017-04-28 2017-06-20 深圳市大乘科技股份有限公司 A kind of bearing calibration of speech recognition and device
CN107609478A (en) * 2017-08-09 2018-01-19 广州思涵信息科技有限公司 A kind of real-time analysis of the students system and method for matching classroom knowledge content
CN107665252A (en) * 2017-09-27 2018-02-06 深圳证券信息有限公司 A kind of method and device of creation of knowledge collection of illustrative plates

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111858892A (en) * 2020-07-24 2020-10-30 中国平安人寿保险股份有限公司 Voice interaction method, device, equipment and medium based on knowledge graph
CN111858892B (en) * 2020-07-24 2023-09-29 中国平安人寿保险股份有限公司 Voice interaction method, device, equipment and medium based on knowledge graph
CN112465144A (en) * 2020-12-11 2021-03-09 北京航空航天大学 Multi-modal demonstration intention generation method and device based on limited knowledge
CN112507138A (en) * 2020-12-28 2021-03-16 医渡云(北京)技术有限公司 Method and device for constructing disease-specific knowledge map, medium and electronic equipment
CN113408690A (en) * 2021-07-01 2021-09-17 之江实验室 Robot personalized emotion interaction device and method based on multi-mode knowledge graph

Similar Documents

Publication Publication Date Title
CN107492379B (en) Voiceprint creating and registering method and device
CN106776544B (en) Character relation recognition method and device and word segmentation method
CN106875941B (en) Voice semantic recognition method of service robot
CN111368145A (en) Knowledge graph creating method and system and terminal equipment
WO2018045646A1 (en) Artificial intelligence-based method and device for human-machine interaction
WO2019084810A1 (en) Information processing method and terminal, and computer storage medium
CN106294774A (en) User individual data processing method based on dialogue service and device
JP2020030408A (en) Method, apparatus, device and medium for identifying key phrase in audio
CN109086276B (en) Data translation method, device, terminal and storage medium
CN110096599B (en) Knowledge graph generation method and device
US11036996B2 (en) Method and apparatus for determining (raw) video materials for news
CN110910903A (en) Speech emotion recognition method, device, equipment and computer readable storage medium
CN113486170B (en) Natural language processing method, device, equipment and medium based on man-machine interaction
CN112650842A (en) Human-computer interaction based customer service robot intention recognition method and related equipment
CN113836303A (en) Text type identification method and device, computer equipment and medium
CN113919360A (en) Semantic understanding method, voice interaction method, device, equipment and storage medium
CN116894078A (en) Information interaction method, device, electronic equipment and medium
CN114676705B (en) Dialogue relation processing method, computer and readable storage medium
CN111444321B (en) Question answering method, device, electronic equipment and storage medium
CN115033661A (en) Natural language semantic understanding method and device based on vertical domain knowledge graph
CN112286916A (en) Data processing method, device, equipment and storage medium
CN117112065A (en) Large model plug-in calling method, device, equipment and medium
CN111931503A (en) Information extraction method and device, equipment and computer readable storage medium
CN115858776B (en) Variant text classification recognition method, system, storage medium and electronic equipment
CN116643814A (en) Model library construction method, model calling method based on model library and related equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20200703