CN111667840A - Robot knowledge graph node updating method based on voiceprint recognition - Google Patents
- Publication number
- CN111667840A (application CN202010526839.4A)
- Authority
- CN
- China
- Prior art keywords
- robot
- knowledge graph
- entity
- new
- voiceprint
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L17/00—Speaker identification or verification
- G10L17/22—Interactive procedures; Man-machine interfaces
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/36—Creation of semantic tools, e.g. ontology or thesauri
- G06F16/367—Ontology
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/60—Information retrieval; Database structures therefor; File system structures therefor of audio data
- G06F16/63—Querying
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L17/00—Speaker identification or verification
- G10L17/04—Training, enrolment or model building
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L17/00—Speaker identification or verification
- G10L17/06—Decision making techniques; Pattern matching strategies
- G10L17/14—Use of phonemic categorisation or speech recognition prior to speaker recognition or verification
Abstract
The invention belongs to the field of service robots and artificial intelligence, and specifically relates to a robot knowledge graph node updating method based on voiceprint recognition. The method mainly comprises the following steps: starting voiceprint recognition and collecting voiceprint information of a target person; querying a local or cloud database; judging whether the voiceprint information is already in the knowledge graph database; if not, adding a new person entity with the voiceprint information as its identification tag; actively questioning the target person to obtain basic information and adding new triple instances; reasoning out relations between the target-person entity and other entities; and distinguishing different person entities so as to make targeted, personalized human-machine interaction choices. By distinguishing each person entity through voiceprint recognition, the robot can identify the different entities conversing with it, actively add entity information to its personalized knowledge graph, continuously infer new triples during interaction, update the relations between entities, and perfect the knowledge graph.
Description
Technical Field
The invention belongs to the field of service robots and artificial intelligence, and particularly relates to a robot knowledge graph node updating method based on voiceprint recognition.
Background
With the continuous, in-depth development of robot technology, robots are becoming increasingly intelligent, and the robot knowledge graph is a key technology in that progress. At present, most service robots rely only on a fixed knowledge graph for interaction and have no knowledge graph updating method. The more advanced robots that do update knowledge graph nodes use one of two methods: manual node updating, in which a user must manually upload relations between entities and store them in the knowledge graph; and automatic node updating, which relies on natural language processing to understand the entities appearing in sentences and to reason about the relations among those entities during question-answer interaction.
Manual node updating is too limited and tedious: a user or developer must enter the triples personally, the process is time-consuming and labor-intensive, and manual entry is always incomplete, with many details overlooked. Updating the knowledge graph by having the robot infer triples through natural language processing is very flexible, since the robot can continuously improve its knowledge graph while communicating with people; however, this method is severely limited in learning person-entity relations, because the robot cannot identify the person entities conversing with it, and triples related to person entities are difficult to collect through natural language interaction alone.
Disclosure of Invention
The invention aims to provide a robot knowledge graph node updating method based on voiceprint recognition. The method distinguishes each person entity through voiceprint recognition, so that the robot can determine which entity is conversing with it, actively add entity information to its personalized knowledge graph, continuously infer new triples during interaction, update the relations between entities, and perfect the knowledge graph, thereby enabling more natural human-machine interaction in a home environment.
In order to achieve the purpose, the invention is realized by the following technical scheme:
a robot knowledge graph node updating method based on voiceprint recognition is characterized by comprising the following steps:
Step 1: the robot starts voiceprint recognition and collects voiceprint information of a target person from the audio input;
Step 2: a local or cloud database is queried according to the collected voiceprint information;
Step 3: it is judged whether the collected voiceprint information is in the database of the robot's knowledge graph; if so, go to Step 7, otherwise continue with Step 4;
Step 4: a new person entity is added to the robot's knowledge graph, with the voiceprint information as its identification tag;
Step 5: the robot actively questions the target person to obtain basic information, and adds new triple instances according to that information;
Step 6: relations between the target-person entity and other entities are inferred from the new triple instances and the existing knowledge graph;
Step 7: different person entities are distinguished by voiceprint recognition, and targeted, personalized human-machine interaction choices are made;
Step 8: new related triples are inferred with NLP (natural language processing) technology from the interaction information with the target-person entity.
According to the robot knowledge graph node updating method based on voiceprint recognition, the voice of the target person collected in Step 1 is processed to extract the corresponding speech features, namely MFCCs (Mel-frequency cepstral coefficients).
In the robot knowledge graph node updating method based on voiceprint recognition, the knowledge graph in Step 2 can be stored locally or on a cloud server, and the general knowledge graph and the personalized knowledge graph can be stored separately.
In the robot knowledge graph node updating method based on voiceprint recognition, the judging method is specifically as follows: using a GMM-UBM (Gaussian mixture model-universal background model) speaker recognition system, a probability score is computed for the speaker from the extracted MFCC feature parameters against each existing speaker model in the database; if the highest score exceeds a set threshold, the speaker is determined to be that registered person, and if no score exceeds the threshold, the voice is determined to be unregistered.
In the robot knowledge graph node updating method based on voiceprint recognition, the method for adding a new tag is specifically as follows: using the GMM-UBM speaker recognition system, a GMM for the speaker is trained from the extracted MFCC feature parameters by maximum a posteriori (MAP) adaptation, thereby registering a new voiceprint; a new person entity is then created in the knowledge graph, and a triple of person tag-voiceprint-speaker model is established. The act of registering the tag of the new target-person entity is performed actively by the robot.
In Step 6, after a new entity node is added to the knowledge graph, the robot attempts inference every time a triple is added: from the existing direct relations it derives indirect relations between the new entity and other entities, and then tries to create further new triples.
In the robot knowledge graph node updating method based on voiceprint recognition, the personalized interaction of Step 7 means that, when interacting with a person, the robot adjusts its speaking tone, form of address, proactive suggestions, and the like, based on the known person entity's personal information such as gender, age, family relationships, social relationships, interests, and hobbies.
According to the robot knowledge graph node updating method based on voiceprint recognition, the robot relies on reliable NLP technology together with the voiceprint recognition system to determine the person entity it is interacting with; in everyday communication it can continuously and actively create new triples related to the person entities in the conversation and update its knowledge graph, so that the robot keeps learning.
In summary, compared with existing technologies in which the knowledge graph is preset and the robot cannot update it automatically or actively, the robot knowledge graph node updating method based on voiceprint recognition provided by the invention can flexibly and quickly expand the boundary of the knowledge graph and give the robot a richer understanding of person-entity relations. Based on a knowledge graph updated in real time, the robot can interact with people more intelligently and more humanly.
The foregoing is a summary of the present application and thus contains, by necessity, simplifications, generalizations and omissions of detail; those skilled in the art will appreciate that the summary is illustrative of the application and is not intended to be in any way limiting. Other aspects, features and advantages of the devices and/or methods and/or other subject matter described in this specification will become apparent as the description proceeds. The summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
Drawings
The above-described and other features of the present application will become more fully apparent from the following description and appended claims, taken in conjunction with the accompanying drawings. It is to be understood that these drawings depict only several embodiments of the present application and are not to be considered limiting of its scope; the application will be described with additional specificity and detail through use of the accompanying drawings.
FIG. 1 is a flowchart of a robot knowledge graph node updating method based on voiceprint recognition.
Detailed Description
In the following detailed description, reference is made to the accompanying drawings, which form a part hereof. In the drawings, the same/similar reference numerals generally refer to the same/similar parts unless otherwise specified in the specification. The illustrative embodiments described in the detailed description, drawings, and claims should not be considered limiting of the application. Other embodiments of, and changes to, the present application may be made without departing from the spirit or scope of the subject matter presented in the present application. It should be readily understood that the aspects of the present application, as generally described in the specification and illustrated in the figures herein, could be arranged, substituted, combined, designed in a wide variety of different configurations, and that all such modifications are expressly contemplated and made part of this application.
Referring to FIG. 1, a flowchart of the robot knowledge graph node updating method based on voiceprint recognition provided by the invention, the specific steps are as follows:
Step 1: the robot starts voiceprint recognition and collects voiceprint information of a target person from the audio input;
Step 2: a local or cloud database is queried according to the collected voiceprint information;
Step 3: it is judged whether the collected voiceprint information is in the database of the robot's knowledge graph; if so, go to Step 7, otherwise continue with Step 4;
Step 4: a new person entity is added to the robot's knowledge graph, with the voiceprint information as its identification tag;
Step 5: the robot actively questions the target person to obtain basic information, and adds new triple instances according to that information;
Step 6: relations between the target-person entity and other entities are inferred from the new triple instances and the existing knowledge graph;
Step 7: different person entities are distinguished by voiceprint recognition, and targeted, personalized human-machine interaction choices are made;
Step 8: new related triples are inferred with NLP (natural language processing) technology from the interaction information with the target-person entity.
1) In Step 1, the collected voice of the target person is processed by computing its spectrum and extracting the corresponding speech features, MFCCs (Mel-frequency cepstral coefficients). The method is well suited to robots whose main mode of interaction is voice; here, voiceprint recognition is also known as speaker recognition (including speaker verification).
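The MFCC extraction described above can be sketched in a few lines of numpy. This is a minimal illustration only, not the patent's implementation; the frame length, hop size, filter count, and FFT size are assumed typical values:

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc(signal, sr=16000, frame_len=400, hop=160, n_mels=26, n_ceps=13):
    """Minimal MFCC pipeline: framing -> Hamming window -> power spectrum
    -> triangular mel filterbank -> log -> DCT-II (keep first n_ceps)."""
    n_fft = 512
    # Slice the signal into overlapping frames.
    frames = np.stack([signal[i:i + frame_len]
                       for i in range(0, len(signal) - frame_len + 1, hop)])
    frames = frames * np.hamming(frame_len)
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    # Triangular filters spaced evenly on the mel scale, 0 Hz to Nyquist.
    mel_pts = np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        l, c, r = bins[m - 1], bins[m], bins[m + 1]
        for k in range(l, c):
            fbank[m - 1, k] = (k - l) / max(c - l, 1)
        for k in range(c, r):
            fbank[m - 1, k] = (r - k) / max(r - c, 1)
    log_mel = np.log(power @ fbank.T + 1e-10)
    # DCT-II over the filterbank axis decorrelates the log energies.
    n = np.arange(n_mels)
    dct = np.cos(np.pi / n_mels * (n[None, :] + 0.5) * np.arange(n_ceps)[:, None])
    return log_mel @ dct.T

# Example: MFCCs of one second of synthetic audio at 16 kHz.
sig = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)
feats = mfcc(sig)
print(feats.shape)  # (98, 13): 98 frames, 13 cepstral coefficients each
```

In practice a library such as librosa would be used; the sketch only makes the computation in the description concrete.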
2) The knowledge graph in the second step can be stored locally or in a cloud server, and the general knowledge graph and the personalized knowledge graph can be stored separately.
3) The judging method in Step 3 is specifically as follows: using a GMM-UBM (Gaussian mixture model-universal background model) speaker recognition system, a probability score is computed for the speaker from the extracted MFCC feature parameters against each existing speaker model in the database; if the highest score exceeds a set threshold, the speaker is determined to be that registered person, and if no score exceeds the threshold, the voice is determined to be unregistered.
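The GMM-UBM decision rule described here — score each enrolled speaker model against the universal background model and accept only if the best score clears a threshold — can be sketched as follows. The toy one-dimensional models, speaker names, and threshold value are illustrative assumptions, not values from the patent:

```python
import numpy as np

def diag_gmm_logpdf(x, weights, means, variances):
    """Per-frame log-likelihood under a diagonal-covariance GMM.
    x: (T, D); weights: (K,); means, variances: (K, D)."""
    d = x[:, None, :] - means[None, :, :]                      # (T, K, D)
    log_comp = (-0.5 * np.sum(d * d / variances + np.log(2 * np.pi * variances),
                              axis=2) + np.log(weights))       # (T, K)
    m = log_comp.max(axis=1, keepdims=True)                    # log-sum-exp
    return (m + np.log(np.exp(log_comp - m).sum(axis=1, keepdims=True))).ravel()

def identify(features, ubm, speakers, threshold=0.5):
    """Score = mean log-likelihood ratio of speaker model vs. the UBM.
    Return the best-scoring enrolled speaker if the score clears the
    threshold, else None (the voice is treated as unregistered)."""
    ubm_ll = diag_gmm_logpdf(features, *ubm).mean()
    scores = {name: diag_gmm_logpdf(features, *gmm).mean() - ubm_ll
              for name, gmm in speakers.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > threshold else None

# Toy 1-D setup: broad UBM at 0, two enrolled speakers at +3 and -3.
rng = np.random.default_rng(0)
ubm = (np.array([1.0]), np.array([[0.0]]), np.array([[4.0]]))
speakers = {
    "alice": (np.array([1.0]), np.array([[3.0]]), np.array([[1.0]])),
    "bob":   (np.array([1.0]), np.array([[-3.0]]), np.array([[1.0]])),
}
obs = rng.normal(3.0, 1.0, size=(50, 1))   # frames from "alice"
print(identify(obs, ubm, speakers))        # alice
```

Feeding in frames far from every enrolled model (e.g. drawn around 10) drives all log-likelihood ratios below the threshold, so `identify` returns `None`, mirroring the "unregistered voice" branch.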
4) The method for adding a new tag in Step 4 is specifically as follows: using the GMM-UBM speaker recognition system, a GMM for the speaker is trained from the extracted MFCC feature parameters by maximum a posteriori (MAP) adaptation, thereby registering a new voiceprint; a new person entity is then created in the knowledge graph, and a triple of person tag-voiceprint-speaker model is established. The act of registering the tag of the new target-person entity is performed actively by the robot.
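MAP adaptation of the UBM, as used here to enroll a new voiceprint, can be sketched as follows. This is a Reynolds-style relevance-factor adaptation of the means only; the relevance factor r=16 is a conventional choice, not a value specified by the patent:

```python
import numpy as np

def map_adapt_means(features, ubm_weights, ubm_means, ubm_vars, r=16.0):
    """MAP mean adaptation: blend posterior-weighted data means with the
    UBM means via the relevance factor r. Only the means are adapted;
    weights and variances are kept at their UBM values."""
    # Responsibilities of each UBM component for each frame.
    d = features[:, None, :] - ubm_means[None, :, :]
    log_comp = (-0.5 * np.sum(d * d / ubm_vars + np.log(2 * np.pi * ubm_vars),
                              axis=2) + np.log(ubm_weights))
    log_comp -= log_comp.max(axis=1, keepdims=True)
    post = np.exp(log_comp)
    post /= post.sum(axis=1, keepdims=True)                 # (T, K)
    n_k = post.sum(axis=0)                                  # soft frame counts
    e_k = (post.T @ features) / np.maximum(n_k[:, None], 1e-10)
    alpha = (n_k / (n_k + r))[:, None]                      # adaptation weight
    return alpha * e_k + (1 - alpha) * ubm_means

# Toy 1-D UBM with components at 0 and 5; enrollment frames near 0.5.
weights = np.array([0.5, 0.5])
means = np.array([[0.0], [5.0]])
variances = np.ones((2, 1))
features = np.full((100, 1), 0.5)
m = map_adapt_means(features, weights, means, variances)
print(m.ravel())  # component 1 pulled toward 0.5, component 2 nearly unmoved
```

The component that the enrollment data actually covers moves toward the data mean, while components with negligible soft counts stay at their UBM values — which is why MAP enrollment works from short utterances.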
5) In Step 5, a person entity has many attributes in the knowledge graph, such as gender and name, and completing this basic information allows more entities to be associated. The knowledge graph is composed of individual pieces of knowledge, each represented as an SPO triple (Subject-Predicate-Object), i.e., entity-relation-entity. The predicate can be an "attribute" or a "relationship": for an "attribute", the two ends are usually an entity and a literal string, while for a "relationship", both ends are usually entities.
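The SPO structure with the attribute/relationship distinction can be sketched as a tiny in-memory triple store; the entity and predicate names below are invented for illustration:

```python
class KnowledgeGraph:
    """Tiny SPO triple store. An 'attribute' triple links an entity to a
    literal value; a 'relationship' triple links two entities."""

    def __init__(self):
        self.triples = set()              # {(subject, predicate, object)}

    def add(self, s, p, o):
        self.triples.add((s, p, o))

    def objects(self, s, p):
        """All objects o such that (s, p, o) is in the graph."""
        return {o for (s2, p2, o) in self.triples if s2 == s and p2 == p}

kg = KnowledgeGraph()
kg.add("guest_1", "gender", "male")              # attribute: literal value
kg.add("guest_1", "sibling_of", "owner")         # relationship: entity-entity
kg.add("guest_1", "voiceprint", "gmm_model_17")  # identification tag triple
print(kg.objects("guest_1", "sibling_of"))       # {'owner'}
```

A production system would use a graph database, but the set-of-tuples view is enough to make the triple operations in the following steps concrete.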
6) In Step 6, after a new entity node is added to the knowledge graph, the robot attempts inference every time a triple is added: from the existing direct relations it derives indirect relations between the new entity and other entities, and then tries to create further new triples.
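This try-to-infer-on-every-new-triple behavior can be sketched with a single forward-chaining rule — the same relation exercised in Implementation Example 1 (a sibling's father is one's own father). The rule and relation names are illustrative only:

```python
def infer(triples):
    """One illustrative forward-chaining rule, run to a fixpoint:
    (A, sibling_of, B) and (B, father, F)  =>  (A, father, F)."""
    triples = set(triples)
    changed = True
    while changed:
        changed = False
        new = set()
        for (a, p1, b) in triples:
            if p1 != "sibling_of":
                continue
            for (b2, p2, f) in triples:
                if b2 == b and p2 == "father":
                    new.add((a, "father", f))
        if not new <= triples:          # any genuinely new triple?
            triples |= new
            changed = True
    return triples

facts = {("guest", "sibling_of", "owner"), ("owner", "father", "mr_wang")}
out = infer(facts)
print(("guest", "father", "mr_wang") in out)  # True
```

A real system would hold many such rules (or use an ontology reasoner); the point is only that each newly added triple can trigger derivation of indirect relations.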
7) The personalized interaction in Step 7 means that, when interacting with a person, the robot adjusts its speaking tone, form of address, proactive suggestions, and the like, based on the known person entity's personal information such as gender, age, family relationships, social relationships, interests, and hobbies. The robot's replies depend entirely on the knowledge graph; constructing a reliable and comprehensive knowledge graph helps the robot achieve truly humanized interaction.
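Selecting a personalized reply from a person entity's attributes might look like the following sketch; the attribute names and reply strings are invented for illustration and are not part of the patent:

```python
def greet(attrs):
    """Choose a greeting style from a person entity's known attributes
    (a stand-in for tone / form-of-address / suggestion selection)."""
    name = attrs.get("name", "there")
    if attrs.get("relation") == "owner":
        return f"Welcome home, {name}!"
    if attrs.get("age", 0) >= 60:
        # More formal address plus a proactive suggestion for elders.
        return f"Good day, {name}. Shall I fetch your reading glasses?"
    return f"Hello, {name}."

print(greet({"name": "Li", "relation": "owner"}))   # Welcome home, Li!
print(greet({"name": "Grandpa Zhang", "age": 72}))
```

The attributes would come from the person entity identified by voiceprint, so two different speakers saying the same thing receive different responses.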
8) In Step 8, the robot determines the person entity it is interacting with by relying on reliable NLP technology and the voiceprint recognition system, analyzes the language of daily communication, and can continuously and actively create new triples related to the person entities in the conversation, updating its knowledge graph and thus learning continuously. When acquiring triples, the robot should attempt extraction in a natural way within an ordinary conversation, without asking deliberately.
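Extracting a triple from an ordinary utterance might be approximated with a pattern rule. A real system would use a full NLP pipeline; this is only a toy illustration with a single made-up pattern:

```python
import re

PATTERNS = [
    # "play <song>, which is the favorite song of my <relative>"
    (re.compile(r"play (.+?), (?:which is|it's) the favorite song of my (\w+)"),
     lambda m, speaker: (f"{speaker}_{m.group(2)}", "favorite_song", m.group(1))),
]

def extract_triples(utterance, speaker):
    """Naive pattern-based triple extraction from one utterance.
    The subject is keyed off the identified speaker, so the same words
    from different speakers yield different triples."""
    out = []
    for pattern, build in PATTERNS:
        m = pattern.search(utterance.lower())
        if m:
            out.append(build(m, speaker))
    return out

res = extract_triples(
    "Play Little Apple, which is the favorite song of my wife", "owner")
print(res)  # [('owner_wife', 'favorite_song', 'little apple')]
```

This mirrors Implementation Example 2 below: because the speaker was identified by voiceprint, the extracted triple is attached to the right person entity.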
Implementation Example 1:
step 1: when a guest visits, the home robot is awakened and voiceprint recognition is started;
step 2: the robot tries to actively register information by judging and recognizing that the voiceprint of the guest is not registered;
and step 3: the robot sends out voice interaction: what relation i don't know your what you are with the home owner "
Step 4, the guest answers: "I is a sibling of xxx (the owner's name of the family);
step 5, guests actively ask questions: "My father likes to eat what"
And 6, the robot deduces according to the knowledge graph to obtain that the guest is a brother of the family owner, so that the father of the guest is the father of the family owner, and answers according to the knowledge graph: "he likes to eat pineapple. "
This experiment fully demonstrates the superiority of the active-learning capability conferred by the method of this patent: the speaker can be identified quickly through voiceprint recognition, and, combined with the technique of actively registering and updating knowledge graph nodes, truly humanized communication is achieved.
Implementation Example 2:
step 1: the household robot is awakened and voiceprint recognition is started;
step 2: the voiceprint of the current speaker is registered;
and step 3: the robot responds: "you Happy, xxx (speaker name)";
and step 4, responding by the speaker: "play a small apple, which is the favorite song of my wife";
and 5, the robot plays a song 'small apple', and actively creates the 'small apple' as a new triple: the speaker wife's character entity-favorite song-apple;
and 6, inquiring the character entity which has a relation with the speaker and is wife: "what songs I like best";
and 7: the robot determines the physical entity according to the voiceprint information of the current speaker and responds: the apple is named as the small apple.
Result of the experiment: this experiment fully demonstrates the robot's ability, relying on voiceprint recognition technology, to extract triples and to reason about entity relations in daily conversation.
In summary, compared with existing technologies in which the knowledge graph is preset and the robot cannot update it automatically or actively, the robot knowledge graph node updating method based on voiceprint recognition provided by the invention can flexibly and quickly expand the boundary of the knowledge graph and give the robot a richer understanding of person-entity relations. Based on a knowledge graph updated in real time, the robot can interact with people more intelligently and more humanly.
The foregoing has described various embodiments of the apparatus and/or methods of the present application via block diagrams, flowcharts, and/or implementation examples. When such block diagrams, flowcharts, and/or examples contain one or more functions and/or operations, those skilled in the art will understand that each function and/or operation can be implemented, individually and/or collectively, by a wide range of hardware, software, firmware, or virtually any combination thereof. Those skilled in the art will further recognize that some aspects of the embodiments described in this specification can be implemented, in whole or in part, equivalently in integrated circuits, as one or more computer programs running on one or more computers (e.g., on one or more computer systems), as one or more programs running on one or more processors (e.g., on one or more microprocessors), as firmware, or as virtually any combination thereof, and that designing the circuitry and/or writing the software and/or firmware code is well within the ability of those skilled in the art in light of the teachings disclosed herein. In addition, those skilled in the art will appreciate that the mechanisms of the subject matter described in this specification can be distributed as a program product in a variety of forms, and that an illustrative embodiment of the subject matter applies regardless of the particular type of signal-bearing medium actually used to carry out the distribution.
For example, signal-bearing media include, but are not limited to: recordable media such as floppy disks, hard disks, compact discs (CDs), digital video discs (DVDs), digital tape, and computer memory; and transmission media such as digital and/or analog communication media (e.g., fiber-optic cables, waveguides, wired communication links, wireless communication links, etc.).
Those skilled in the art will recognize that it is common within the art to describe devices and/or methods in the manner described in this specification and then to perform engineering practices to integrate the described devices and/or methods into a data processing system. That is, at least a portion of the devices and/or methods described herein may be integrated into a data processing system through a reasonable amount of experimentation. Those skilled in the art will recognize that a typical data processing system will typically include one or more of the following: a system unit housing, a video display device, memory such as volatile and non-volatile memory, processors such as microprocessors and digital signal processors, computing entities such as operating systems, drivers, graphical user interfaces, and applications, one or more interaction devices such as a touch pad or screen, and/or a control system including feedback loops and control motors (e.g., feedback to detect position and/or velocity; control motors to move and/or adjust components and/or size). A typical data processing system may be implemented using any suitable commercially available components such as those typically found in data computing/communication and/or network computing/communication systems.
With respect to substantially any plural and/or singular terms used in this specification, those skilled in the art may interpret the plural as singular and/or the singular as plural as appropriate from a context and/or application. Various singular/plural combinations may be explicitly stated in this specification for the sake of clarity.
Various aspects and embodiments of the present application are disclosed herein, and other aspects and embodiments of the present application will be apparent to those skilled in the art. The various aspects and embodiments disclosed in this application are presented by way of example only, and not by way of limitation, and the true scope and spirit of the application is to be determined by the following claims.
Claims (8)
1. A robot knowledge graph node updating method based on voiceprint recognition is characterized by comprising the following steps:
Step 1: the robot starts voiceprint recognition and collects voiceprint information of a target person from the audio input;
Step 2: a local or cloud database is queried according to the collected voiceprint information;
Step 3: it is judged whether the collected voiceprint information is in the database of the robot's knowledge graph; if so, go to Step 7, otherwise continue with Step 4;
Step 4: a new person entity is added to the robot's knowledge graph, with the voiceprint information as its identification tag;
Step 5: the robot actively questions the target person to obtain basic information, and adds new triple instances according to that information;
Step 6: relations between the target-person entity and other entities are inferred from the new triple instances and the existing knowledge graph;
Step 7: different person entities are distinguished by voiceprint recognition, and targeted, personalized human-machine interaction choices are made;
Step 8: new related triples are inferred with NLP (natural language processing) technology from the interaction information with the target-person entity.
2. The method as claimed in claim 1, wherein the voice of the target person collected in step one is processed by computing its spectrum to extract the corresponding speech features, namely MFCCs (Mel-frequency cepstral coefficients).
3. The method as claimed in claim 1, wherein the knowledge graph in step two may be stored locally or in a cloud server, and the generic knowledge graph and the customized knowledge graph may be stored separately.
4. The method for updating knowledge graph nodes of a robot based on voiceprint recognition according to claim 1 or 2, wherein the judging method in the third step is specifically as follows: using a GMM-UBM (Gaussian mixture model-universal background model) speaker recognition system, a probability score is computed for the speaker from the extracted MFCC feature parameters against each existing speaker model in the database; if the highest score exceeds a set threshold, the speaker is determined to be that registered person, and if no score exceeds the threshold, the voice is determined to be unregistered.
5. The method for updating nodes of a robot knowledge graph based on voiceprint recognition according to any one of claims 1, 2, or 4, wherein the method for adding new labels in the fourth step is specifically as follows: using the GMM-UBM speaker recognition system, a GMM for the speaker is trained from the extracted MFCC feature parameters by maximum a posteriori (MAP) adaptation, thereby registering a new voiceprint; a new person entity is then created in the knowledge graph, and a triple of person tag-voiceprint-speaker model is established; the act of registering the tag of the new target-person entity is performed actively by the robot.
6. The method as claimed in claim 1, wherein in step six, after a new entity node is added to the knowledge graph, the robot tries to perform inference every time a triple is added, obtains an indirect correlation between the new entity and another entity according to an existing direct correlation, and tries to create a new triple again.
7. The method as claimed in claim 1, wherein the personalized interaction in step seven means that, when interacting with a person, the robot adjusts its speaking tone, form of address, proactive suggestions, and the like, based on the known person entity's gender, age, family relationships, social relationships, hobbies, and other personal information.
8. The method as claimed in claim 1, wherein the robot relies on reliable NLP technology together with the voiceprint recognition system to determine the person entity it is interacting with, and in daily communication can actively create new triples related to the person entities in the conversation and update its knowledge graph so as to learn continuously.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010526839.4A CN111667840A (en) | 2020-06-11 | 2020-06-11 | Robot knowledge graph node updating method based on voiceprint recognition |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010526839.4A CN111667840A (en) | 2020-06-11 | 2020-06-11 | Robot knowledge graph node updating method based on voiceprint recognition |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111667840A true CN111667840A (en) | 2020-09-15 |
Family
ID=72386887
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010526839.4A Pending CN111667840A (en) | 2020-06-11 | 2020-06-11 | Robot knowledge graph node updating method based on voiceprint recognition |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111667840A (en) |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030148790A1 (en) * | 2002-02-01 | 2003-08-07 | Microsoft Corporation | Method and system for managing changes to a contact database |
US20140188944A1 (en) * | 2004-05-26 | 2014-07-03 | Facebook, Inc. | Relationship Confirmation in an Online Social Network |
CN104490570A (en) * | 2014-12-31 | 2015-04-08 | 桂林电子科技大学 | Embedding type voiceprint identification and finding system for blind persons |
CN106294813A (en) * | 2016-08-15 | 2017-01-04 | 歌尔股份有限公司 | A kind of method and apparatus of smart machine person recognition |
CN106355627A (en) * | 2015-07-16 | 2017-01-25 | 中国石油化工股份有限公司 | Method and system used for generating knowledge graphs |
CN106462384A (en) * | 2016-06-29 | 2017-02-22 | 深圳狗尾草智能科技有限公司 | Multi-modal based intelligent robot interaction method and intelligent robot |
CN109147770A (en) * | 2017-06-16 | 2019-01-04 | 阿里巴巴集团控股有限公司 | The optimization of voice recognition feature, dynamic registration method, client and server |
CN109657238A (en) * | 2018-12-10 | 2019-04-19 | 宁波深擎信息科技有限公司 | Context identification complementing method, system, terminal and the medium of knowledge based map |
CN110110155A (en) * | 2019-04-03 | 2019-08-09 | 中国人民解放军战略支援部队信息工程大学 | Personage's knowledge mapping attribute acquisition methods and device based on first social relationships circle |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112331201A (en) * | 2020-11-03 | 2021-02-05 | 珠海格力电器股份有限公司 | Voice interaction method and device, storage medium and electronic device |
CN112530438A (en) * | 2020-11-27 | 2021-03-19 | 贵州电网有限责任公司 | Identity authentication method based on knowledge graph assisted voiceprint recognition |
CN112530438B (en) * | 2020-11-27 | 2023-04-07 | 贵州电网有限责任公司 | Identity authentication method based on knowledge graph assisted voiceprint recognition |
CN113254666A (en) * | 2021-06-02 | 2021-08-13 | 上海酒贝乐信息技术有限公司 | Method and system for artificial intelligence self-learning and perfect growth |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10977452B2 (en) | Multi-lingual virtual personal assistant | |
US11822859B2 (en) | Self-learning digital assistant | |
US10770073B2 (en) | Reducing the need for manual start/end-pointing and trigger phrases | |
US10884503B2 (en) | VPA with integrated object recognition and facial expression recognition | |
EP3158427B1 (en) | System and method for speech-enabled personalized operation of devices and services in multiple operating environments | |
CN111667840A (en) | Robot knowledge graph node updating method based on voiceprint recognition | |
US11106868B2 (en) | System and method for language model personalization | |
JP5155943B2 (en) | Data processing apparatus, data processing apparatus control program, and data processing method | |
WO2019000991A1 (en) | Voice print recognition method and apparatus | |
JPWO2019142427A1 (en) | Information processing equipment, information processing systems, information processing methods, and programs | |
EP3631793B1 (en) | Dynamic and/or context-specific hot words to invoke automated assistant | |
CN115713949A (en) | Encapsulation and synchronization state interactions between devices | |
US11393459B2 (en) | Method and apparatus for recognizing a voice | |
CN111696559B (en) | Providing emotion management assistance | |
EP2946311A2 (en) | Accumulation of real-time crowd sourced data for inferring metadata about entities | |
US10789961B2 (en) | Apparatus and method for predicting/recognizing occurrence of personal concerned context | |
CN114270361A (en) | System and method for registering devices for voice assistant services | |
US11257482B2 (en) | Electronic device and control method | |
JP2023549975A (en) | Speech individuation and association training using real-world noise | |
Hamidi et al. | Emotion recognition from Persian speech with neural network | |
CN117041807B (en) | Bluetooth headset play control method | |
CN109074809A (en) | Information processing equipment, information processing method and program | |
CN111524514A (en) | Voice control method and central control equipment | |
US11727085B2 (en) | Device, method, and computer program for performing actions on IoT devices | |
US10959050B2 (en) | Action based object location system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20200915 |