CN107301168A - Intelligent robot and its mood exchange method, system - Google Patents
- Publication number
- CN107301168A (application CN201710402814.1A)
- Authority
- CN
- China
- Prior art keywords
- text information
- answer
- mood
- intelligent robot
- sentence
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/30—Semantic analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/903—Querying
- G06F16/9032—Query formulation
- G06F16/90332—Natural language query formulation or dialogue systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/26—Speech to text systems
Abstract
The invention belongs to the field of intelligent robotics and provides an intelligent robot and an emotion interaction method and system therefor, so as to improve the intelligence level with which an intelligent robot interacts emotionally with a person. The method includes: collecting any one or more of the speech, facial expressions, and body movements produced by a user while the user interacts with the intelligent robot; converting the speech, facial expressions, and/or body movements into corresponding text information or sentence-semantic representations; processing the text information or sentence-semantic representations with an emotion engine to determine an answer matching them; and outputting the matched answer as the intelligent robot's response to the user. Because the answers the intelligent robot of the present technical solution outputs as responses to the user match the text information or sentence-semantic representations and carry rich emotion, they embody a higher level of intelligence in the intelligent robot.
Description
Technical field
The invention belongs to the field of intelligent robotics, and more particularly relates to an intelligent robot and an emotion interaction method and system therefor.
Background art
Robots, including industrial robots, service robots, companion robots, and emotional-care robots, have now entered the public eye, and worldwide these robots have sustained double-digit growth for a number of years. With the development and progress of intelligent robot technology, robots are entering people's businesses and lives ever more broadly. Through long-term interaction with people, a robot continually learns and is subtly influenced by what it sees and hears, gradually becoming clever and understanding: to a certain extent it can understand human emotions (for example, joy, anger, grief, and happiness) and can interpret human expressions and body movements.
As the aging of society deepens, empty-nest elderly people are becoming more and more numerous, and demand for robots positioned around affective interaction is rising accordingly. Such affective interaction includes conversing with the owner, understanding the owner's joys and sorrows, reminding the owner to take medicine on time, and detecting abnormal conditions arising in an elderly person's body. Current intelligent robots have a very high IQ and can complete many tasks assigned by people, but their EQ remains rather low, which greatly limits their functions and range of application. Although some robots have been endowed with "emotions" of a sort, these emotions are all rudimentary, fragmentary, intermittent, or mechanical; the interactions among the various emotions lack continuity, internal logic, and dialectical unity. Such robots merely imitate certain human emotional expressions rather than possessing emotion in the true sense.
In summary, the main drawback of current intelligent robots is that they can only act according to programs and emotion data prepared in advance by people, and cannot autonomously conduct genuine affective interaction with people.
Summary of the invention
It is an object of the present invention to provide an intelligent robot and an emotion interaction method and system therefor, so as to improve the intelligence level with which an intelligent robot interacts emotionally with a person.
A first aspect of the present invention provides an emotion interaction method for an intelligent robot, the method including:
collecting any one or more of the speech, facial expressions, and body movements produced by a user while the user interacts with the intelligent robot;
converting any one or more of the speech, facial expressions, and body movements into corresponding text information or sentence-semantic representations;
processing the text information or sentence-semantic representations with an emotion engine to determine an answer matching the text information or sentence-semantic representations; and
outputting the answer matching the text information or sentence-semantic representations as the intelligent robot's response to the user.
A second aspect of the present invention provides an emotion interaction system for an intelligent robot, the system including:
a collection module for collecting any one or more of the speech, facial expressions, and body movements produced by a user while the user interacts with the intelligent robot;
a conversion module for converting any one or more of the speech, facial expressions, and body movements into corresponding text information or sentence-semantic representations;
an emotion-engine module for processing the text information or sentence-semantic representations with the emotion engine to determine an answer matching the text information or sentence-semantic representations; and
an output module for outputting the answer matching the text information or sentence-semantic representations as the intelligent robot's response to the user.
A third aspect of the present invention provides a terminal device including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the computer program, implements the following steps:
collecting any one or more of the speech, facial expressions, and body movements produced by a user while the user interacts with the intelligent robot;
converting any one or more of the speech, facial expressions, and body movements into corresponding text information or sentence-semantic representations;
processing the text information or sentence-semantic representations with an emotion engine to determine an answer matching the text information or sentence-semantic representations; and
outputting the answer matching the text information or sentence-semantic representations as the intelligent robot's response to the user.
A fourth aspect of the present invention provides a computer-readable storage medium storing a computer program that, when executed by a processor, implements the following steps:
collecting any one or more of the speech, facial expressions, and body movements produced by a user while the user interacts with the intelligent robot;
converting any one or more of the speech, facial expressions, and body movements into corresponding text information or sentence-semantic representations;
processing the text information or sentence-semantic representations with an emotion engine to determine an answer matching the text information or sentence-semantic representations; and
outputting the answer matching the text information or sentence-semantic representations as the intelligent robot's response to the user.
As can be seen from the above technical solution of the present invention, the speech, facial expressions, and/or body movements produced while the user interacts with the intelligent robot are converted into corresponding text information or sentence-semantic representations, and processing the text information or sentence-semantic representations with the emotion engine is a deep-learning process, so the answer determined to match them is highly adaptive. Therefore, compared with prior-art intelligent robots that lack emotion when interacting with users, the intelligent robot of the present technical solution outputs answers matching the text information or sentence-semantic representations as its responses to the user; these responses carry rich emotion and embody a higher level of intelligence in the intelligent robot.
Brief description of the drawings
Fig. 1 is a schematic flowchart of an emotion interaction method for an intelligent robot provided by an embodiment of the present invention;
Fig. 2 is a schematic structural diagram of an emotion interaction system for an intelligent robot provided by an embodiment of the present invention;
Fig. 3 is a schematic structural diagram of an emotion interaction system for an intelligent robot provided by another embodiment of the present invention;
Fig. 4 is a schematic structural diagram of an emotion interaction system for an intelligent robot provided by a further embodiment of the present invention;
Fig. 5 is a schematic structural diagram of an emotion interaction system for an intelligent robot provided by yet another embodiment of the present invention;
Fig. 6 is a schematic structural diagram of an intelligent robot provided by an embodiment of the present invention.
Detailed description of embodiments
To make the objects, technical solutions, and beneficial effects of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are only intended to explain the present invention, not to limit it.
An embodiment of the present invention provides an intelligent robot and an emotion interaction method therefor, the method including: collecting any one or more of the speech, facial expressions, and body movements produced by a user while the user interacts with the intelligent robot; converting any one or more of the speech, facial expressions, and body movements into corresponding text information or sentence-semantic representations; processing the text information or sentence-semantic representations with an emotion engine to determine an answer matching them; and outputting the matched answer as the intelligent robot's response to the user. Embodiments of the present invention also provide a corresponding emotion interaction system for an intelligent robot, and an intelligent robot. Each is described in detail below.
Referring to Fig. 1, a schematic flowchart of the emotion interaction method for an intelligent robot provided by Embodiment 1 of the present invention, the method mainly includes the following steps S101 to S104, described in detail as follows:
In S101, any one or more of the speech, facial expressions, and body movements produced by the user while the user interacts with the intelligent robot are collected.
In embodiments of the present invention, the speech produced while the user interacts with the intelligent robot can be collected by an audio recording device, for example a microphone, and the facial expressions or body movements produced during the interaction can be collected by an image capture device, for example a camera. The facial expressions produced while the user interacts with the intelligent robot may include smiling, getting angry, shedding tears, frowning, raised eyebrows, closed lips, upturned corners of the mouth, and so on; the body movements may include standing and sitting postures and the postures or motions of the head, hands, and legs, such as shaking the head, nodding, holding the head high with chest out, shaking a fist, applauding, yawning, patting someone on the shoulder or back, and scratching the ear or cheek. These facial expressions and body movements carry the user's rich emotions. In terms of facial expressions, for example, "frowning" generally expresses an angry or distressed mood, including melancholy, doubt, or suspicion; "raised eyebrows" usually expresses great appreciation or extreme surprise; "closed lips" generally expresses natural calm and dignity; "upturned corners of the mouth" generally expresses goodwill and happiness, making the other party feel sincerity and understanding; and a "smile" usually accompanies a friendly gaze. These facial expressions and their corresponding meanings can be compiled into the intelligent robot's facial-expression library. In terms of movements, "shaking the head" expresses disagreement or dissent in most cultures, while "nodding" expresses agreement or approval in most cultures; "holding the head high with chest out" generally expresses confidence and resolution; "shaking a fist" generally expresses anger or aggressiveness; "applauding" generally expresses approval or joy; "yawning" generally expresses boredom; "patting the shoulder or back" generally expresses encouragement, congratulation, or comfort; and "scratching the ear or cheek" generally expresses puzzlement or disbelief. These body movements and their corresponding meanings can be compiled into the intelligent robot's body-movement library.
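The expression and movement libraries described above can be sketched as plain lookup tables mapping an observed cue to the textual meaning the robot reasons over. The entries, names, and fallback string below are illustrative assumptions for this sketch, not data from the patent:

```python
# Minimal sketch of the facial-expression and body-movement libraries.
# Each observed cue maps to a textual meaning; the entries are examples only.

FACIAL_EXPRESSION_LIBRARY = {
    "frowning": "angry or distressed (melancholy, doubt, suspicion)",
    "raised eyebrows": "great appreciation or extreme surprise",
    "closed lips": "natural calm and dignity",
    "upturned mouth corners": "goodwill and happiness",
    "smile": "friendly, intimate gaze",
}

BODY_MOVEMENT_LIBRARY = {
    "shaking head": "disagreement",
    "nodding": "agreement or approval",
    "shaking fist": "anger or aggressiveness",
    "applauding": "approval or joy",
    "yawning": "boredom",
}

def cue_to_text(cue: str) -> str:
    """Convert an observed cue into text information by querying the libraries."""
    for library in (FACIAL_EXPRESSION_LIBRARY, BODY_MOVEMENT_LIBRARY):
        if cue in library:
            return library[cue]
    return "unknown cue"

print(cue_to_text("nodding"))   # agreement or approval
print(cue_to_text("frowning"))
```

In a real system these tables would be populated from the trained expression and movement recognizers rather than hand-written literals.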
In S102, any one or more of the speech, facial expressions, and body movements are converted into corresponding text information or sentence-semantic representations.
As noted above, the facial expressions and body movements produced while the user interacts with the intelligent robot carry rich emotional meaning, and for the intelligent robot to understand them they must be converted. In embodiments of the present invention, the speech, facial expressions, and body movements produced during the interaction can be converted into text information. In a specific implementation, speech-processing software or hardware can generate the corresponding text information from the speech; the meaning of a facial expression can be obtained by querying the intelligent robot's facial-expression library and thus converted into corresponding text information; and the meaning of a body movement can be obtained by querying the intelligent robot's body-movement library and likewise converted into corresponding text information.
It should be noted that plain text information may still be insufficient for the intelligent robot to understand. In embodiments of the present invention, the text information can be further converted into a corresponding sentence-semantic representation, for example a real-valued vector. In a specific implementation, the text information can be fed into a convolutional neural network (CNN) sentence model to obtain its sentence-semantic representation, i.e., a real-valued vector.
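The mapping from text to a real-valued vector can be sketched in the standard shape of a CNN sentence model: embed each word, convolve over word windows, and max-pool over time. The sketch below is untrained and uses hashed pseudo-embeddings in place of learned ones; all dimensions, the hashing trick, and the fixed filter bank are illustrative assumptions, not the patent's model:

```python
import hashlib
import math

DIM = 8          # word-embedding dimensionality (illustrative)
WINDOW = 2       # convolution window size
FILTERS = 4      # number of convolution filters

def word_vector(word: str) -> list[float]:
    """Deterministic pseudo-embedding: hash the word into DIM floats in [-1, 1]."""
    digest = hashlib.sha256(word.encode("utf-8")).digest()
    return [digest[i] / 127.5 - 1.0 for i in range(DIM)]

# Fixed (untrained) filters; a real CNN sentence model learns these weights.
FILTER_BANK = [
    [math.sin(f + 1.0 + i) for i in range(WINDOW * DIM)] for f in range(FILTERS)
]

def sentence_vector(text: str) -> list[float]:
    """Convolve word windows and max-pool over time -> real-valued sentence vector."""
    embedded = [word_vector(w) for w in text.lower().split()]
    pooled = [-math.inf] * FILTERS
    for start in range(max(1, len(embedded) - WINDOW + 1)):
        window = embedded[start:start + WINDOW]
        flat = [x for vec in window for x in vec]
        flat += [0.0] * (WINDOW * DIM - len(flat))      # pad short windows
        for f, filt in enumerate(FILTER_BANK):
            activation = math.tanh(sum(a * b for a, b in zip(filt, flat)))
            pooled[f] = max(pooled[f], activation)      # max-over-time pooling
    return pooled

vec = sentence_vector("classes are over and today I am very happy")
print(len(vec), vec)   # FILTERS real numbers
```

The point of the max-over-time pooling is that the output length is fixed at FILTERS regardless of sentence length, which is what lets downstream components compare sentences of different lengths.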
In S103, the text information or sentence-semantic representations are processed with the emotion engine to determine an answer matching the text information or sentence-semantic representations.
As one embodiment of the present invention, processing with the emotion engine the text information or sentence-semantic representations converted from any one or more of the speech, facial expressions, and body movements, and determining an answer matching them, can be realized by the following steps Sa1031 and Sa1032:
Sa1031: extract the keywords in the above text information.
For example, suppose a piece of the user's speech has been converted into the text "Classes are over, and today I am very happy"; the keywords that can be extracted from it include "classes are over" and "happy". For another example, suppose one of the user's facial expressions has been converted into the text "I feel sad now and want to cry"; the keywords that can be extracted include "feel sad" and "want to cry"; and so on.
Sa1032: retrieve a pre-built question-and-answer knowledge base and find in it an answer matching the keywords.
In implementations of the present invention, the question-and-answer knowledge base can be built either with a manual-template technique or by a method of constructing a sentiment dictionary with a self-expansion technique. Building the knowledge base with the manual-template technique means that, within a specific domain and for each application scenario, various answers and keywords are designed in advance, and the set of correspondences between keywords and answers constitutes the knowledge base. Because a knowledge base built this way uses large amounts of data covering the various application scenarios, the correspondence between keywords and answers is generally quite accurate. Building the knowledge base by constructing a sentiment dictionary with a self-expansion technique is essentially a form of knowledge acquisition and incremental learning; its feature is that only a small number of data samples are needed, and on this basis the data is effectively expanded through successive rounds of training until the question-and-answer knowledge base reaches the required scale of data. Specifically, building the question-and-answer knowledge base by constructing a sentiment dictionary with a self-expansion technique includes the following key steps i) to iv): i) repeatedly sample n samples from an emotion sample set D; ii) perform statistical learning on each sampled subsample set to obtain a hypothesis Hi; iii) combine the several hypotheses to form a final sample set; iv) the final sample sets used for a specific classification task constitute the question-and-answer knowledge base.
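Steps i) to iv) resemble bootstrap aggregation: repeatedly sample from a small emotion sample set, learn a hypothesis per subsample, and combine the hypotheses into the final set. The sketch below shows only this sampling-and-combining skeleton; the "statistical learning" step is reduced to a trivial keyword/label counter, which, like the toy data, is an assumption for illustration:

```python
import random

def sample_subsets(dataset, n, rounds, seed=0):
    """Step i): repeatedly draw n samples (with replacement) from the emotion set D."""
    rng = random.Random(seed)
    return [[rng.choice(dataset) for _ in range(n)] for _ in range(rounds)]

def learn_hypothesis(subsample):
    """Step ii): a stand-in for statistical learning - count keyword/label pairs."""
    hypothesis = {}
    for keyword, label in subsample:
        hypothesis.setdefault(keyword, {}).setdefault(label, 0)
        hypothesis[keyword][label] += 1
    return hypothesis

def combine_hypotheses(hypotheses):
    """Steps iii)-iv): merge the hypotheses into the final expanded sample set."""
    combined = {}
    for h in hypotheses:
        for keyword, labels in h.items():
            for label, count in labels.items():
                combined.setdefault(keyword, {}).setdefault(label, 0)
                combined[keyword][label] += count
    # Keep, per keyword, the majority emotion label across all hypotheses.
    return {kw: max(labels, key=labels.get) for kw, labels in combined.items()}

D = [("happy", "joy"), ("want to cry", "sadness"), ("angry", "anger"),
     ("classes are over", "joy"), ("feel sad", "sadness")]
subsets = sample_subsets(D, n=4, rounds=3)
final = combine_hypotheses([learn_hypothesis(s) for s in subsets])
print(final)
```

The appeal of this shape is exactly what the text claims: a handful of seed samples can be resampled and relearned many times, growing a usable keyword-to-emotion mapping incrementally.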
For step Sa1032, retrieval can use the Lucene full-text search framework together with algorithms such as Boolean operations, wildcard search, field search, fuzzy query, and range search, building an index and search function over the data in the question-and-answer knowledge base. Fuzzy matching is then performed in the knowledge base on the text information converted from the speech, facial expressions, and body movements produced while the user interacts with the intelligent robot, so as to find the most suitable answer. Taking as an example a piece of the user's speech converted into the text "Classes are over, and today I am very happy": by extracting the two keywords "classes are over" and "happy", the answer retrieved from the question-and-answer knowledge base that matches the keywords may be "I am also very happy. Were you praised by the teacher today?". Taking as an example a facial expression of the user converted into the text "I feel sad now and want to cry": by extracting the two keywords "feel sad" and "want to cry", the matching answer retrieved from the knowledge base may be "What is making you unhappy? Tell me about it; perhaps I can help you"; and so on.
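The pair of steps Sa1031/Sa1032 — extract keywords, then look up the best-matching answer — can be sketched without Lucene as a simple keyword-overlap score over a toy knowledge base. The keyword phrases, entries, and fallback answer below are illustrative assumptions:

```python
# Toy question-and-answer knowledge base: keyword sets mapped to canned answers.
QA_KNOWLEDGE_BASE = [
    ({"classes are over", "happy"},
     "I am also very happy. Were you praised by the teacher today?"),
    ({"feel sad", "want to cry"},
     "What is making you unhappy? Tell me about it; perhaps I can help you."),
]

KNOWN_KEYWORDS = {kw for keywords, _ in QA_KNOWLEDGE_BASE for kw in keywords}

def extract_keywords(text: str) -> set[str]:
    """Sa1031: pick out the known keyword phrases present in the text."""
    lowered = text.lower()
    return {kw for kw in KNOWN_KEYWORDS if kw in lowered}

def match_answer(text: str) -> str:
    """Sa1032: return the answer whose keyword set overlaps the text the most."""
    keywords = extract_keywords(text)
    best_answer, best_overlap = "Sorry, I do not understand yet.", 0
    for entry_keywords, answer in QA_KNOWLEDGE_BASE:
        overlap = len(keywords & entry_keywords)
        if overlap > best_overlap:
            best_answer, best_overlap = answer, overlap
    return best_answer

print(match_answer("Classes are over, and today I am very happy"))
```

A production system would replace the substring test and overlap count with an indexed fuzzy search such as the Lucene setup the text describes; the control flow, however, is the same.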
As another embodiment of the present invention, processing with the emotion engine the text information or sentence-semantic representations converted from any one or more of the speech, facial expressions, and body movements, and determining an answer matching them, can be realized by the following steps Sb1031 and Sb1032:
Sb1031: retrieve an emotion knowledge base and find in it the index information with the highest similarity to the above text information.
In embodiments of the present invention, the emotion knowledge base is a branch of the knowledge base and contains phrases expressing various emotions such as joy, anger, sorrow, and happiness, including "happy", "angry", "sad", "sorrowful", "joyful", "delighted", and so on. These phrases appear in various emotion-related application scenarios, and those scenarios are matched with response contents corresponding to the phrases. As one embodiment of the present invention, retrieving the emotion knowledge base and finding the index information with the highest similarity to the text information can be realized with an edit-distance algorithm. Specifically, to improve retrieval accuracy, the edit-distance (Edit Distance) algorithm is used: the minimum number of edit operations (including additions, deletions, insertions, etc.) required to transform string A into string B is called the edit distance from string A to string B; in general, the smaller this edit distance, the higher the similarity between string A and string B.
Taking as an example the text "Classes are over, and today I am very happy" converted from a piece of the user's speech: using the Lucene full-text search framework together with algorithms such as Boolean operations, wildcard search, field search, fuzzy query, and range search, suppose three pieces of index information are retrieved from the emotion base: "Today classes are over, very happy", "Today is a festival, very happy", and "Classes are over, time is up, great joy". Clearly, by the principle of the edit-distance algorithm described above, the index information "Today classes are over, very happy" has the highest similarity to "Classes are over, and today I am very happy".
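The edit-distance criterion of step Sb1031 is the classic Levenshtein dynamic program. A direct sketch, with a small helper that selects the closest index entry (the sample entries are the ones from the example above):

```python
def edit_distance(a: str, b: str) -> int:
    """Minimum number of single-character edits turning string a into string b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # delete from a
                            curr[j - 1] + 1,      # insert into a
                            prev[j - 1] + cost))  # substitute
        prev = curr
    return prev[len(b)]

def most_similar(query: str, index_entries: list[str]) -> str:
    """Sb1031: the index information with the smallest edit distance to the query."""
    return min(index_entries, key=lambda entry: edit_distance(query, entry))

entries = ["Today classes are over, very happy",
           "Today is a festival, very happy",
           "Classes are over, time is up, great joy"]
print(most_similar("Classes are over, and today I am very happy", entries))
```

The rolling single-row table keeps memory at O(len(b)) rather than the full O(len(a) * len(b)) matrix, which matters when scoring a query against many index entries.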
Sb1032: determine the response content corresponding to the index information with the highest similarity to the text information as the answer matching the text information.
As noted above, the application scenarios in the emotion knowledge base are matched with response contents corresponding to the emotion phrases. Continuing the example in which a piece of the user's speech has been converted into the text "Classes are over, and today I am very happy": since the index information "Today classes are over, very happy" has the highest similarity to that text, the response content corresponding to this index information in the emotion knowledge base, for example "I am also very happy. Did the teacher praise you today?", can be determined as the answer matching "Classes are over, and today I am very happy".
As yet another embodiment of the present invention, processing with the emotion engine the text information or sentence-semantic representations converted from any one or more of the speech, facial expressions, and body movements, and determining an answer matching them, can be realized by the following steps Sc1031 and Sc1032:
Sc1031: input the response content and the sentence-semantic representations into a neural network, and judge with the neural network the matching degree between the sentence-semantic representation and the response content.
In a specific implementation, a response content can be retrieved from the knowledge base (including the emotion knowledge base, the question-and-answer knowledge base, and so on) according to the text information; the text information and the response content are then fed separately into two convolutional neural network (CNN) sentence models to obtain their respective sentence-semantic representations, for example real-valued vectors. The two sentence-semantic representations are then input into a multilayer neural network, which judges their matching degree, thereby judging whether the response content can serve as a matching answer for the text information.
Sc1032: if the matching degree between the sentence-semantic representation and the response content reaches a preset threshold, determine the response content as the answer matching the sentence-semantic representation.
If, through the judgment of the neural network in step Sc1031, the matching degree between the sentence-semantic representations of the text information and the response content reaches the preset threshold, the response content is determined as the answer matching the sentence-semantic representation corresponding to the text information.
Again taking the text "Classes are over, and today I am very happy" converted from a piece of the user's speech as an example: if the matching degree between the sentence-semantic representations of "Classes are over, and today I am very happy" and the response content "I am also very happy. Were you praised by the teacher today?" reaches the preset threshold, then the response content "I am also very happy. Were you praised by the teacher today?" is determined as the answer matching the text "Classes are over, and today I am very happy".
In the embodiments of the present invention, a characteristic of the parallel matching algorithm of steps Sc1031 and Sc1032 is that the sentence-semantic representations of the text information and of the response content are obtained by two independent convolutional neural networks (CNN), so that no information flows between the two sentences before each representation is obtained. This model matches the two sentences globally on their overall semantics, whereas in sentence-matching problems the two sentences to be matched often also exhibit local matching with each other.
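The parallel matching architecture described above (two independent CNN sentence encoders feeding a multilayer scoring network, with acceptance against a preset threshold as in step Sc1032) can be sketched as follows. This is a minimal illustration with untrained, randomly initialised weights and invented dimensions, not the patent's actual trained model:

```python
import numpy as np

rng = np.random.default_rng(0)
V, D, NF, W = 20, 6, 4, 2          # vocab size, embedding dim, n filters, filter width
params = {                          # randomly initialised, untrained (sketch only)
    "emb":     rng.normal(0, 0.1, (V, D)),
    "filters": rng.normal(0, 0.1, (NF, W, D)),
    "Wm":      rng.normal(0, 0.1, (2 * NF, NF)),   # hidden layer of the matching MLP
    "v":       rng.normal(0, 0.1, NF),             # output weights of the matching MLP
}

def cnn_encode(token_ids):
    """CNN sentence model: embed, 1-D convolve, max-pool over time."""
    x = params["emb"][token_ids]                                   # (T, D)
    win = np.stack([x[i:i + W] for i in range(len(x) - W + 1)])    # (T-W+1, W, D)
    conv = np.tanh(np.tensordot(win, params["filters"],
                                axes=([1, 2], [1, 2])))            # (T-W+1, NF)
    return conv.max(axis=0)            # fixed-length sentence-semantic representation

def match_degree(text_ids, response_ids):
    """Score the two independently encoded sentences with a multilayer network."""
    h = np.tanh(np.concatenate([cnn_encode(text_ids),
                                cnn_encode(response_ids)]) @ params["Wm"])
    return float(1.0 / (1.0 + np.exp(-(h @ params["v"]))))         # in (0, 1)

THRESHOLD = 0.5                     # preset threshold of step Sc1032
score = match_degree([3, 7, 2, 9], [5, 1, 8])
accept = score >= THRESHOLD         # accept the response content only above threshold
```

Note that, as the description points out, the two encoders do not exchange any information before pooling; only the final multilayer network sees both representations.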
It should be noted that the knowledge bases mentioned in the above embodiments, such as the question-answer knowledge base and the mood knowledge base, can be local knowledge bases of the intelligent robot or cloud knowledge bases with which the intelligent robot can interact, and they can be obtained or expanded by training the intelligent robot. This training is divided into a primary stage and an advanced stage. In the primary stage the intelligent robot passively receives new knowledge: the more the robot is trained and the more data it is fed, the more knowledge it masters. The back end of the intelligent robot provides a function for training the intelligent robot, whose method is to enter a large amount of template question-answer data simulating real question-and-answer exchanges. The data fall into two classes. The first class is data in the intelligent robot's professional domain: ready-made textual data can be imported directly into the back-end corpus database; otherwise, professional question-answer data must be entered. The second class is data with emotion, resembling people's everyday speech (for example, "How is your mood today?", "I quarrelled with a colleague today and feel unhappy", smiling, being angry, and so on). The advanced stage is the stage in which the intelligent robot learns automatically: in daily interaction with users, the intelligent robot gradually becomes familiar with the manner and personality of different users through voiceprint technology. The fields conventionally involved in this learning include data mining, visual analysis, speech recognition and natural language processing. The intelligent robot can automatically search for relevant knowledge according to its own personality and interests; based on the extraction of text features and the weights attached to them, it builds an optimal classifier and an inverted-index classification from the data, and stores the results in its own knowledge base. Because of the complexity and uncertainty of natural language, the division of linguistic structure types is not unique; when ambiguous words or structurally complex sentences are encountered, the user's "intention" may not be identified and analysed accurately. Therefore, during the daily training of the intelligent robot, its cognition and the weights attached to problems are continuously strengthened, so that the intelligent robot can correctly understand the true intention of the user.
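The keyword-based retrieval over an inverted index mentioned above can be illustrated with a toy sketch. The knowledge-base entries, stopword list and function names below are hypothetical, purely to show how keywords extracted from the text information vote for a matching question-answer pair:

```python
from collections import defaultdict

# Toy question-answer knowledge base (hypothetical entries for illustration).
QA_BASE = [
    ("what is your name", "I am the robot."),
    ("how is the weather today", "It looks sunny today."),
    ("are you happy today", "Yes, I am very happy today."),
]

# Build an inverted index: keyword -> ids of the questions containing it.
index = defaultdict(set)
for qid, (question, _) in enumerate(QA_BASE):
    for word in question.split():
        index[word].add(qid)

STOPWORDS = {"is", "the", "a", "are", "you", "your", "what", "how"}

def retrieve(text):
    """Extract keywords from the text information and return the best answer."""
    keywords = [w for w in text.lower().split() if w not in STOPWORDS]
    votes = defaultdict(int)
    for w in keywords:
        for qid in index.get(w, ()):
            votes[qid] += 1
    if not votes:
        return None                         # no entry shares any keyword
    best = max(votes, key=votes.get)        # question sharing the most keywords
    return QA_BASE[best][1]
```

A real system would weight keywords (as the training description above suggests) rather than count them equally, but the index structure is the same.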
S104: output the answer that matches the text information or sentence-semantic representation as the response of the intelligent robot to the user.
When outputting the answer that matches the text information or sentence-semantic representation, an important link is how to automatically generate a response or reply expressed in natural language. In the embodiments of the present invention, the retrieval-based reply or response mechanism automatically generates an answer composed of a word sequence according to the information currently input by the user. This mechanism mainly uses a large amount of interaction data to build a natural-language generation model: given a piece of information, it can automatically generate a response expressed in natural language, and the key issue is how to realise this natural-language generation model. Automatic response generation needs to solve two major problems: first, sentence-semantic representation; second, natural-language generation. Because recurrent neural networks show excellent performance in both the representation and the generation of language, the present invention uses the neural-network-based dialogue model "Neural Responding Machine" (NRM) to build the natural-language generation model, which is used to realise single-turn dialogue between human and machine. The NRM retrieves the best responses from large-scale information pairs, such as question-answer pairs, and stores the acquired patterns in the model parameters of the system, thereby obtaining a natural-language generation model. Through this natural-language generation model, the answer that matches the text information or sentence-semantic representation can be output as the response of the intelligent robot to the user.
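The encode-then-generate data flow of such a natural-language generation model can be sketched as below. The weights are random and untrained, and a plain RNN stands in for the trained recurrent networks of the actual NRM; the sketch only shows how an input message is summarised into a fixed-length vector and then decoded token by token:

```python
import numpy as np

rng = np.random.default_rng(1)
V, H = 12, 8                 # vocab size, hidden size (invented for illustration)
P = {                        # randomly initialised parameters (untrained sketch)
    "E":  rng.normal(0, 0.1, (V, H)),   # token embeddings
    "Wh": rng.normal(0, 0.1, (H, H)),   # recurrent weights (shared enc/dec for brevity)
    "Wx": rng.normal(0, 0.1, (H, H)),   # input weights
    "Wo": rng.normal(0, 0.1, (H, V)),   # output projection over the vocabulary
}
BOS, EOS = 0, 1              # begin/end-of-sentence token ids

def rnn_step(h, x_emb):
    return np.tanh(h @ P["Wh"] + x_emb @ P["Wx"])

def encode(src_ids):
    """Sentence-semantic representation: run the RNN over the input message."""
    h = np.zeros(H)
    for t in src_ids:
        h = rnn_step(h, P["E"][t])
    return h                 # fixed-length summary of the user's input

def decode(h, max_len=10):
    """Natural-language generation: greedily emit tokens until EOS."""
    out, tok = [], BOS
    for _ in range(max_len):
        h = rnn_step(h, P["E"][tok])
        tok = int(np.argmax(h @ P["Wo"]))   # greedy choice of next token id
        if tok == EOS:
            break
        out.append(tok)
    return out

reply_ids = decode(encode([3, 5, 7]))       # token ids of the generated response
```

The trained NRM additionally conditions each decoding step on the encoder states through attention; this skeleton omits that for brevity.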
As can be seen from the intelligent robot mood exchange method exemplified in Figure 1, the voice, facial expression and/or limb action produced when the user interacts with the intelligent robot are converted into corresponding text information or sentence-semantic representations, and the mood engine processes the text information or sentence-semantic representations through a deep-learning process, so that the identified answer matching the text information or sentence-semantic representation is highly adaptive. Therefore, compared with the prior art, in which the interaction between an intelligent robot and a user lacks emotion, the intelligent robot of the technical solution of the present invention uses these answers matching the text information or sentence-semantic representations as responses to the user; the responses are rich in emotion and embody a higher level of intelligence of the intelligent robot.
Refer to Figure 2, which is a structural schematic diagram of the intelligent robot mood interactive system provided in an embodiment of the present invention. For ease of explanation, Figure 2 shows only the parts related to the embodiment of the present invention. The intelligent robot mood interactive system exemplified in Figure 2 mainly includes an acquisition module 201, a conversion module 202, a mood engine module 203 and an output module 204, described in detail as follows:
Acquisition module 201, for collecting any one or more of the voice, facial expression and limb action produced by the user when the user interacts with the intelligent robot;
Conversion module 202, for converting any one or more of the voice, facial expression and limb action produced by the user when the user interacts with the intelligent robot into corresponding text information or sentence-semantic representations;
Mood engine module 203, for processing the text information or sentence-semantic representations with the mood engine and determining the answer that matches the text information or sentence-semantic representations;
Output module 204, for outputting the answer that matches the text information or sentence-semantic representations as the response of the intelligent robot to the user.
The mood engine module 203 exemplified in Figure 2 can include an extraction unit 301 and a first retrieval unit 302, as in the intelligent robot mood interactive system provided by another embodiment of the present invention shown in Figure 3, wherein:
Extraction unit 301, for extracting keywords from the text information converted from any one or more of the voice, facial expression and limb action produced by the user when the user interacts with the intelligent robot;
First retrieval unit 302, for retrieving the built question-answer knowledge base and searching the question-answer knowledge base for the answer matching the keywords.
The mood engine module 203 exemplified in Figure 2 can include a second retrieval unit 401 and a first determining unit 402, as in the intelligent robot mood interactive system provided by another embodiment of the present invention shown in Figure 4, wherein:
Second retrieval unit 401, for retrieving the mood knowledge base and searching it for the index information with the highest similarity to the text information converted from any one or more of the voice, facial expression and limb action produced by the user when interacting with the intelligent robot;
First determining unit 402, for determining the response content corresponding to the index information to be the answer matching the text information.
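For illustration, the similarity search performed by a unit such as the second retrieval unit 401 could be approximated by bag-of-words cosine similarity over a hypothetical mood knowledge base; the entries and function names below are invented, not the patent's actual index:

```python
import math
from collections import Counter

# Hypothetical mood knowledge base: index text -> canned emotional response.
MOOD_BASE = {
    "class is over and I am very happy":
        "I am also very happy. Were you praised by the teacher today?",
    "I quarrelled with a colleague and feel unhappy":
        "Do not be sad. Would you like to talk about it?",
}

def cosine(a, b):
    """Cosine similarity between two sentences as bags of words."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[w] * cb[w] for w in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def best_response(text):
    """Return the response whose index text is most similar to the input."""
    best = max(MOOD_BASE, key=lambda k: cosine(text, k))
    return MOOD_BASE[best]
```

A production system would use learned sentence embeddings rather than raw word overlap, but the retrieve-by-highest-similarity step is the same.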
The mood engine module 203 exemplified in Figure 2 can include a judging unit 501 and a second determining unit 502, as in the intelligent robot mood interactive system provided by another embodiment of the present invention shown in Figure 5, wherein:
Judging unit 501, for inputting the response content and the sentence-semantic representation into a neural network respectively, the neural network judging the matching degree between the sentence-semantic representation and the response content;
Second determining unit 502, for determining, if the judgement result of the judging unit 501 is that the matching degree between the sentence-semantic representation and the response content reaches the preset threshold, the response content to be the answer matching the sentence-semantic representation.
Fig. 6 is a schematic diagram of the intelligent robot provided in an embodiment of the present invention. As shown in Fig. 6, the intelligent robot 6 of this embodiment includes: a processor 60, a memory 61, and a computer program 62 stored in the memory 61 and runnable on the processor 60. When executing the computer program 62, the processor 60 realises the steps in the above-mentioned method embodiment of Figure 1, for example steps S101 to S104 shown in Fig. 1. Alternatively, when executing the computer program 62, the processor 60 realises the functions of the modules/units in the above-mentioned device embodiments, such as the functions of the modules shown in Fig. 2.
Exemplarily, the computer program 62 can be divided into one or more modules/units, which are stored in the memory 61 and executed by the processor 60 to carry out the present invention. The one or more modules/units can be a series of computer program instruction segments capable of completing specific functions, the instruction segments describing the execution process of the computer program 62 in the intelligent robot 6. For example, the computer program 62 can be divided into an acquisition module, a conversion module, a mood engine module and an output module (modules in a virtual device), whose specific functions are as follows: the acquisition module is for collecting any one or more of the voice, facial expression and limb action produced by the user when the user interacts with the intelligent robot; the conversion module is for converting any one or more of the voice, facial expression and limb action into corresponding text information or sentence-semantic representations; the mood engine module is for processing the text information or sentence-semantic representations with the mood engine and determining the answer that matches the text information or sentence-semantic representations; the output module is for outputting the answer that matches the text information or sentence-semantic representations as the response of the intelligent robot to the user.
The intelligent robot 6 can be a computing device such as a desktop computer, a notebook, a palmtop computer or a cloud server. The intelligent robot may include, but is not limited to, the processor 60 and the memory 61. Those skilled in the art will understand that Fig. 6 is only an example of the intelligent robot 6 and does not constitute a limitation of the intelligent robot 6, which may include more or fewer parts than illustrated, combine certain parts, or use different parts; for example, the intelligent robot may also include input and output devices, network access devices, buses and the like.
The processor 60 can be a central processing unit (CPU), and can also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. The general-purpose processor can be a microprocessor, or the processor can be any conventional processor or the like.
The memory 61 can be an internal storage unit of the intelligent robot 6, such as a hard disk or internal memory of the intelligent robot 6. The memory 61 can also be an external storage device of the intelligent robot 6, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card or a flash card equipped on the intelligent robot 6. Further, the memory 61 can include both an internal storage unit of the intelligent robot 6 and an external storage device. The memory 61 is used to store the computer program and the other programs and data needed by the intelligent robot. The memory 61 can also be used to temporarily store data that has been output or will be output.
It is apparent to those skilled in the art that, for convenience and brevity of description, the division into the above functional units and modules is used only as an example. In practical applications, the above functions can be assigned to different functional units and modules as needed; that is, the internal structure of the intelligent robot can be divided into different functional units or modules to complete all or part of the functions described above. The functional units and modules in the embodiments can be integrated into one processing unit, or each unit can exist alone physically, or two or more units can be integrated into one unit; the above integrated units can be realised in the form of hardware or in the form of software functional units. In addition, the specific names of the functional units and modules are only for the convenience of distinguishing them from each other and do not limit the protection scope of the present application. For the specific working process of the units and modules in the above system, reference may be made to the corresponding process in the foregoing method embodiments, which will not be repeated here.
In the above-described embodiments, the description of each embodiment has its own emphasis; for a part not detailed or recorded in a certain embodiment, reference may be made to the related descriptions of the other embodiments.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described with reference to the embodiments disclosed herein can be realised by electronic hardware, or by a combination of computer software and electronic hardware. Whether these functions are performed by hardware or software depends on the specific application and design constraints of the technical solution. Skilled professionals may use different methods to realise the described functions for each specific application, but such realisation should not be considered to go beyond the scope of the present invention.
In the embodiments provided by the present invention, it should be understood that the disclosed device/intelligent robot and method can be realised in other ways. For example, the device/intelligent robot embodiments described above are only schematic; for instance, the division of the modules or units is only a division by logical function, and there can be other ways of division in actual realisation: multiple units or components can be combined or integrated into another system, or some features can be ignored or not executed. In addition, the mutual couplings or direct couplings or communication connections shown or discussed can be indirect couplings or communication connections through some interfaces, devices or units, and can be electrical, mechanical or in other forms.
The units described as separate parts may or may not be physically separate, and the parts shown as units may or may not be physical units; that is, they can be located in one place or distributed over multiple network elements. Some or all of the units can be selected according to actual needs to achieve the purpose of the scheme of this embodiment.
In addition, the functional units in the embodiments of the present invention can be integrated into one processing unit, or each unit can exist alone physically, or two or more units can be integrated into one unit. The above integrated units can be realised in the form of hardware or in the form of software functional units.
If the integrated module/unit is realised in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium. Based on this understanding, the present invention realises all or part of the flow of the above embodiment methods, which can also be completed by instructing the relevant hardware through a computer program. The computer program can be stored in a computer-readable storage medium, and when executed by a processor the computer program can realise the steps of the above method embodiments, for example steps S101 to S104 of Figure 1. The computer program includes computer program code, which can be in source-code form, object-code form, executable-file form, some intermediate form, etc. The computer-readable medium can include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electric carrier signal, a telecommunication signal, a software distribution medium, etc. It should be noted that the content contained in the computer-readable medium can be appropriately increased or decreased according to the requirements of legislation and patent practice in the jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, the computer-readable medium does not include electric carrier signals and telecommunication signals.
The embodiments described above are only used to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they can still modify the technical solutions recorded in the foregoing embodiments, or make equivalent substitutions for some of the technical features; and these modifications or substitutions do not make the essence of the corresponding technical solutions depart from the spirit and scope of the technical solutions of the embodiments of the present invention, and should all be included within the protection scope of the present invention.
Claims (10)
1. An intelligent robot mood exchange method, characterised in that the method includes:
collecting any one or more of the voice, facial expression and limb action produced by the user when the user interacts with the intelligent robot;
converting any one or more of the voice, facial expression and limb action into corresponding text information or a sentence-semantic representation;
processing the text information or sentence-semantic representation with a mood engine, and determining the answer that matches the text information or sentence-semantic representation;
outputting the answer that matches the text information or sentence-semantic representation as the response of the intelligent robot to the user.
2. The method of claim 1, characterised in that processing the text information or sentence-semantic representation with the mood engine and determining the answer that matches the text information or sentence-semantic representation includes:
extracting keywords from the text information;
retrieving the built question-answer knowledge base, and searching the question-answer knowledge base for the answer matching the keywords.
3. The method of claim 1, characterised in that processing the text information or sentence-semantic representation with the mood engine and determining the answer that matches the text information or sentence-semantic representation includes:
retrieving a mood knowledge base, and searching the mood knowledge base for the index information with the highest similarity to the text information;
determining the response content corresponding to the index information to be the answer matching the text information.
4. The method of claim 1, characterised in that processing the text information or sentence-semantic representation with the mood engine and determining the answer that matches the text information or sentence-semantic representation includes:
inputting a response content and the sentence-semantic representation into a neural network respectively, and judging, by the neural network, the matching degree between the sentence-semantic representation and the response content;
if the matching degree between the sentence-semantic representation and the response content reaches a preset threshold, determining the response content to be the answer matching the sentence-semantic representation.
5. An intelligent robot mood interactive system, characterised in that the system includes:
an acquisition module, for collecting any one or more of the voice, facial expression and limb action produced by the user when the user interacts with the intelligent robot;
a conversion module, for converting any one or more of the voice, facial expression and limb action into corresponding text information or a sentence-semantic representation;
a mood engine module, for processing the text information or sentence-semantic representation with a mood engine and determining the answer that matches the text information or sentence-semantic representation;
an output module, for outputting the answer that matches the text information or sentence-semantic representation as the response of the intelligent robot to the user.
6. The system of claim 5, characterised in that the mood engine module includes:
an extraction unit, for extracting keywords from the text information;
a first retrieval unit, for retrieving the built question-answer knowledge base and searching the question-answer knowledge base for the answer matching the keywords.
7. The system of claim 5, characterised in that the mood engine module includes:
a second retrieval unit, for retrieving a mood knowledge base and searching it for the index information with the highest similarity to the text information;
a first determining unit, for determining the response content corresponding to the index information to be the answer matching the text information.
8. The system of claim 5, characterised in that the mood engine module includes:
a judging unit, for inputting a response content and the sentence-semantic representation into a neural network respectively, the neural network judging the matching degree between the sentence-semantic representation and the response content;
a second determining unit, for determining, if the judgement result of the judging unit is that the matching degree between the sentence-semantic representation and the response content reaches a preset threshold, the response content to be the answer matching the sentence-semantic representation.
9. An intelligent robot, including a memory, a processor, and a computer program stored in the memory and runnable on the processor, characterised in that, when the processor executes the computer program, the steps of the method of any one of claims 1 to 4 are realised.
10. A computer-readable storage medium storing a computer program, characterised in that, when the computer program is executed by a processor, the steps of the method of any one of claims 1 to 4 are realised.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710402814.1A CN107301168A (en) | 2017-06-01 | 2017-06-01 | Intelligent robot and its mood exchange method, system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710402814.1A CN107301168A (en) | 2017-06-01 | 2017-06-01 | Intelligent robot and its mood exchange method, system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107301168A true CN107301168A (en) | 2017-10-27 |
Family
ID=60138047
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710402814.1A Pending CN107301168A (en) | 2017-06-01 | 2017-06-01 | Intelligent robot and its mood exchange method, system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107301168A (en) |
Cited By (47)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108053826A (en) * | 2017-12-04 | 2018-05-18 | 泰康保险集团股份有限公司 | For the method, apparatus of human-computer interaction, electronic equipment and storage medium |
CN108171114A (en) * | 2017-12-01 | 2018-06-15 | 深圳竹信科技有限公司 | The recognition methods of heart line, terminal and readable storage medium |
CN108305511A (en) * | 2017-12-11 | 2018-07-20 | 南京萌宝睿贝教育科技有限公司 | A kind of children's feeling quotrient training system and its method |
CN108470188A (en) * | 2018-02-26 | 2018-08-31 | 北京物灵智能科技有限公司 | Exchange method based on image analysis and electronic equipment |
CN108595406A (en) * | 2018-01-04 | 2018-09-28 | 广东小天才科技有限公司 | User state reminding method and device, electronic equipment and storage medium |
CN108833941A (en) * | 2018-06-29 | 2018-11-16 | 北京百度网讯科技有限公司 | Man-machine dialogue system method, apparatus, user terminal, processing server and system |
CN108847239A (en) * | 2018-08-31 | 2018-11-20 | 上海擎感智能科技有限公司 | Interactive voice/processing method, system, storage medium, engine end and server-side |
CN109086368A (en) * | 2018-07-20 | 2018-12-25 | 吴怡 | A kind of legal advice robot based on artificial intelligence cloud platform |
CN109243582A (en) * | 2018-09-19 | 2019-01-18 | 江苏金惠甫山软件科技有限公司 | The human-computer interaction motion management method and system of knowledge based graphical spectrum technology |
CN109271018A (en) * | 2018-08-21 | 2019-01-25 | 北京光年无限科技有限公司 | Exchange method and system based on visual human's behavioral standard |
CN109308466A (en) * | 2018-09-18 | 2019-02-05 | 宁波众鑫网络科技股份有限公司 | The method that a kind of pair of interactive language carries out Emotion identification |
CN109324688A (en) * | 2018-08-21 | 2019-02-12 | 北京光年无限科技有限公司 | Exchange method and system based on visual human's behavioral standard |
CN109343695A (en) * | 2018-08-21 | 2019-02-15 | 北京光年无限科技有限公司 | Exchange method and system based on visual human's behavioral standard |
CN109346079A (en) * | 2018-12-04 | 2019-02-15 | 北京羽扇智信息科技有限公司 | Voice interactive method and device based on Application on Voiceprint Recognition |
CN109350415A (en) * | 2018-11-30 | 2019-02-19 | 湖南新云医疗装备工业有限公司 | A kind of shared intelligent system of accompanying and attending to of hospital |
CN109545212A (en) * | 2018-12-11 | 2019-03-29 | 百度在线网络技术(北京)有限公司 | Exchange method, smart machine and storage medium |
CN109697462A (en) * | 2018-12-13 | 2019-04-30 | 井冈山大学 | A kind of human brain language cognition model foundation system and method Internet-based |
CN109783669A (en) * | 2019-01-21 | 2019-05-21 | 美的集团武汉制冷设备有限公司 | Screen methods of exhibiting, robot and computer readable storage medium |
CN109783516A (en) * | 2019-02-19 | 2019-05-21 | 北京奇艺世纪科技有限公司 | A kind of query statement retrieval answering method and device |
CN109840009A (en) * | 2017-11-28 | 2019-06-04 | 浙江思考者科技有限公司 | A kind of intelligence true man's advertisement screen interactive system and implementation method |
CN109940636A (en) * | 2019-04-02 | 2019-06-28 | 广州创梦空间人工智能科技有限公司 | Humanoid robot for commercial performance |
WO2019165732A1 (en) * | 2018-02-27 | 2019-09-06 | 深圳狗尾草智能科技有限公司 | Robot emotional state-based reply information generating method and apparatus |
CN110288077A (en) * | 2018-11-14 | 2019-09-27 | 腾讯科技(深圳)有限公司 | A kind of synthesis based on artificial intelligence is spoken the method and relevant apparatus of expression |
CN110349577A (en) * | 2019-06-19 | 2019-10-18 | 深圳前海达闼云端智能科技有限公司 | Man-machine interaction method, device, storage medium and electronic equipment |
CN110427472A (en) * | 2019-08-02 | 2019-11-08 | 深圳追一科技有限公司 | The matched method, apparatus of intelligent customer service, terminal device and storage medium |
WO2019231405A1 (en) * | 2018-06-01 | 2019-12-05 | Kaha Pte. Ltd. | System and method for generating notifications related to user queries |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2933071A1 (en) * | 2014-04-17 | 2015-10-21 | Aldebaran Robotics | Methods and systems for managing dialogs of a robot |
CN105082150A (en) * | 2015-08-25 | 2015-11-25 | 国家康复辅具研究中心 | Robot man-machine interaction method based on user mood and intension recognition |
CN106294774A (en) * | 2016-08-11 | 2017-01-04 | 北京光年无限科技有限公司 | User individual data processing method based on dialogue service and device |
CN106599998A (en) * | 2016-12-01 | 2017-04-26 | 竹间智能科技(上海)有限公司 | Method and system for adjusting response of robot based on emotion feature |
CN106649524A (en) * | 2016-10-20 | 2017-05-10 | 宁波江东大金佰汇信息技术有限公司 | Improved advanced study intelligent response system based on computer cloud data |
CN106683672A (en) * | 2016-12-21 | 2017-05-17 | 竹间智能科技(上海)有限公司 | Intelligent dialogue method and system based on emotion and semantics |
- 2017-06-01: application CN201710402814.1A filed (CN); published as CN107301168A; status: Pending
Non-Patent Citations (3)
Title |
---|
@houkai: "boosting, adaboost", https://www.cnblogs.com/houkai/p/4863406.html *
xiaopihaierletian: "Notes on Statistical Learning Methods (8): Boosting Methods", https://blog.csdn.net/xiaopihaierletian/article/details/53241003?locationnum=14&fps=1 *
码迷 (Mamicode): "The adaboost algorithm", http://www.mamicode.com/info-detail-1168256.html *
Cited By (60)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109840009A (en) * | 2017-11-28 | 2019-06-04 | 浙江思考者科技有限公司 | A kind of intelligence true man's advertisement screen interactive system and implementation method |
CN108171114A (en) * | 2017-12-01 | 2018-06-15 | 深圳竹信科技有限公司 | The recognition methods of heart line, terminal and readable storage medium |
CN108053826A (en) * | 2017-12-04 | 2018-05-18 | 泰康保险集团股份有限公司 | For the method, apparatus of human-computer interaction, electronic equipment and storage medium |
CN108053826B (en) * | 2017-12-04 | 2021-01-15 | 泰康保险集团股份有限公司 | Method and device for man-machine interaction, electronic equipment and storage medium |
CN108305511A (en) * | 2017-12-11 | 2018-07-20 | 南京萌宝睿贝教育科技有限公司 | A kind of children's feeling quotrient training system and its method |
CN108595406A (en) * | 2018-01-04 | 2018-09-28 | 广东小天才科技有限公司 | User state reminding method and device, electronic equipment and storage medium |
CN108595406B (en) * | 2018-01-04 | 2022-05-17 | 广东小天才科技有限公司 | User state reminding method and device, electronic equipment and storage medium |
CN108470188A (en) * | 2018-02-26 | 2018-08-31 | 北京物灵智能科技有限公司 | Exchange method based on image analysis and electronic equipment |
CN108470188B (en) * | 2018-02-26 | 2022-04-22 | 北京物灵智能科技有限公司 | Interaction method based on image analysis and electronic equipment |
WO2019165732A1 (en) * | 2018-02-27 | 2019-09-06 | 深圳狗尾草智能科技有限公司 | Robot emotional state-based reply information generating method and apparatus |
WO2019231405A1 (en) * | 2018-06-01 | 2019-12-05 | Kaha Pte. Ltd. | System and method for generating notifications related to user queries |
CN110660412A (en) * | 2018-06-28 | 2020-01-07 | Tcl集团股份有限公司 | Emotion guiding method and device and terminal equipment |
US11282516B2 (en) | 2018-06-29 | 2022-03-22 | Beijing Baidu Netcom Science Technology Co., Ltd. | Human-machine interaction processing method and apparatus thereof |
CN108833941A (en) * | 2018-06-29 | 2018-11-16 | 北京百度网讯科技有限公司 | Man-machine dialogue system method, apparatus, user terminal, processing server and system |
CN109086368A (en) * | 2018-07-20 | 2018-12-25 | 吴怡 | A kind of legal advice robot based on artificial intelligence cloud platform |
CN109271018A (en) * | 2018-08-21 | 2019-01-25 | 北京光年无限科技有限公司 | Exchange method and system based on visual human's behavioral standard |
CN109343695A (en) * | 2018-08-21 | 2019-02-15 | 北京光年无限科技有限公司 | Exchange method and system based on visual human's behavioral standard |
CN109324688A (en) * | 2018-08-21 | 2019-02-12 | 北京光年无限科技有限公司 | Exchange method and system based on visual human's behavioral standard |
CN108847239A (en) * | 2018-08-31 | 2018-11-20 | 上海擎感智能科技有限公司 | Interactive voice/processing method, system, storage medium, engine end and server-side |
CN109308466A (en) * | 2018-09-18 | 2019-02-05 | 宁波众鑫网络科技股份有限公司 | The method that a kind of pair of interactive language carries out Emotion identification |
CN109243582A (en) * | 2018-09-19 | 2019-01-18 | 江苏金惠甫山软件科技有限公司 | The human-computer interaction motion management method and system of knowledge based graphical spectrum technology |
CN110970032A (en) * | 2018-09-28 | 2020-04-07 | 深圳市冠旭电子股份有限公司 | Sound box voice interaction control method and device |
CN111046148A (en) * | 2018-10-11 | 2020-04-21 | 上海智臻智能网络科技股份有限公司 | Intelligent interaction system and intelligent customer service robot |
CN111048075A (en) * | 2018-10-11 | 2020-04-21 | 上海智臻智能网络科技股份有限公司 | Intelligent customer service system and intelligent customer service robot |
CN110288077B (en) * | 2018-11-14 | 2022-12-16 | 腾讯科技(深圳)有限公司 | Method and related device for synthesizing speaking expression based on artificial intelligence |
CN110288077A (en) * | 2018-11-14 | 2019-09-27 | 腾讯科技(深圳)有限公司 | A kind of synthesis based on artificial intelligence is spoken the method and relevant apparatus of expression |
CN109350415A (en) * | 2018-11-30 | 2019-02-19 | 湖南新云医疗装备工业有限公司 | A kind of shared intelligent system of accompanying and attending to of hospital |
CN109346079A (en) * | 2018-12-04 | 2019-02-15 | 北京羽扇智信息科技有限公司 | Voice interactive method and device based on Application on Voiceprint Recognition |
CN109545212A (en) * | 2018-12-11 | 2019-03-29 | 百度在线网络技术(北京)有限公司 | Exchange method, smart machine and storage medium |
CN109697462A (en) * | 2018-12-13 | 2019-04-30 | 井冈山大学 | A kind of human brain language cognition model foundation system and method Internet-based |
CN111368609A (en) * | 2018-12-26 | 2020-07-03 | 深圳Tcl新技术有限公司 | Voice interaction method based on emotion engine technology, intelligent terminal and storage medium |
CN111368609B (en) * | 2018-12-26 | 2023-10-17 | 深圳Tcl新技术有限公司 | Speech interaction method based on emotion engine technology, intelligent terminal and storage medium |
CN109783669A (en) * | 2019-01-21 | 2019-05-21 | 美的集团武汉制冷设备有限公司 | Screen methods of exhibiting, robot and computer readable storage medium |
CN109783516A (en) * | 2019-02-19 | 2019-05-21 | 北京奇艺世纪科技有限公司 | A kind of query statement retrieval answering method and device |
CN111722702A (en) * | 2019-03-22 | 2020-09-29 | 北京京东尚科信息技术有限公司 | Human-computer interaction method and system, medium and computer system |
CN109940636A (en) * | 2019-04-02 | 2019-06-28 | 广州创梦空间人工智能科技有限公司 | Humanoid robot for commercial performance |
CN110349577B (en) * | 2019-06-19 | 2022-12-06 | 达闼机器人股份有限公司 | Man-machine interaction method and device, storage medium and electronic equipment |
CN110349577A (en) * | 2019-06-19 | 2019-10-18 | 深圳前海达闼云端智能科技有限公司 | Man-machine interaction method, device, storage medium and electronic equipment |
CN110427472A (en) * | 2019-08-02 | 2019-11-08 | 深圳追一科技有限公司 | The matched method, apparatus of intelligent customer service, terminal device and storage medium |
CN112489797A (en) * | 2019-09-11 | 2021-03-12 | 北京国双科技有限公司 | Accompanying method, device and terminal equipment |
CN110634491A (en) * | 2019-10-23 | 2019-12-31 | 大连东软信息学院 | Series connection feature extraction system and method for general voice task in voice signal |
CN111177346A (en) * | 2019-12-19 | 2020-05-19 | 爱驰汽车有限公司 | Man-machine interaction method and device, electronic equipment and storage medium |
CN111309862A (en) * | 2020-02-10 | 2020-06-19 | 贝壳技术有限公司 | User interaction method and device with emotion, storage medium and equipment |
CN111696538A (en) * | 2020-06-05 | 2020-09-22 | 北京搜狗科技发展有限公司 | Voice processing method, apparatus and medium |
CN111696536A (en) * | 2020-06-05 | 2020-09-22 | 北京搜狗科技发展有限公司 | Voice processing method, apparatus and medium |
CN111696537B (en) * | 2020-06-05 | 2023-10-31 | 北京搜狗科技发展有限公司 | Voice processing method, device and medium |
CN111696538B (en) * | 2020-06-05 | 2023-10-31 | 北京搜狗科技发展有限公司 | Voice processing method, device and medium |
CN111696536B (en) * | 2020-06-05 | 2023-10-27 | 北京搜狗智能科技有限公司 | Voice processing method, device and medium |
CN111696537A (en) * | 2020-06-05 | 2020-09-22 | 北京搜狗科技发展有限公司 | Voice processing method, apparatus and medium |
CN111881695A (en) * | 2020-06-12 | 2020-11-03 | 国家电网有限公司 | Audit knowledge retrieval method and device |
CN112001275A (en) * | 2020-08-09 | 2020-11-27 | 成都未至科技有限公司 | Robot for collecting student information |
CN112562652A (en) * | 2020-12-02 | 2021-03-26 | 湖南翰坤实业有限公司 | Voice processing method and system based on Untiy engine |
CN112562652B (en) * | 2020-12-02 | 2024-01-19 | 湖南翰坤实业有限公司 | Voice processing method and system based on Untiy engine |
CN112528000A (en) * | 2020-12-22 | 2021-03-19 | 北京百度网讯科技有限公司 | Virtual robot generation method and device and electronic equipment |
CN113766253A (en) * | 2021-01-04 | 2021-12-07 | 北京沃东天骏信息技术有限公司 | Live broadcast method, device, equipment and storage medium based on virtual anchor |
CN114121041A (en) * | 2021-11-19 | 2022-03-01 | 陈文琪 | Intelligent accompanying method and system based on intelligent accompanying robot |
CN114121041B (en) * | 2021-11-19 | 2023-12-08 | 韩端科技(深圳)有限公司 | Intelligent accompanying method and system based on intelligent accompanying robot |
CN116052646A (en) * | 2023-03-06 | 2023-05-02 | 北京水滴科技集团有限公司 | Speech recognition method, device, storage medium and computer equipment |
CN117708305A (en) * | 2024-02-05 | 2024-03-15 | 天津英信科技有限公司 | Dialogue processing method and system for response robot |
CN117708305B (en) * | 2024-02-05 | 2024-04-30 | 天津英信科技有限公司 | Dialogue processing method and system for response robot |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107301168A (en) | Intelligent robot and its mood exchange method, system | |
Abdullah et al. | SEDAT: sentiment and emotion detection in Arabic text using CNN-LSTM deep learning | |
Wu et al. | Multimodal large language models: A survey | |
Zhang et al. | Intelligent facial emotion recognition and semantic-based topic detection for a humanoid robot | |
CN107562863A (en) | Chat robots reply automatic generation method and system | |
CN110148318A (en) | A kind of number assiatant system, information interacting method and information processing method | |
CN110110169A (en) | Man-machine interaction method and human-computer interaction device | |
CN113724882B (en) | Method, device, equipment and medium for constructing user portrait based on inquiry session | |
CN110457466A (en) | Generate method, computer readable storage medium and the terminal device of interview report | |
Liu et al. | Speech emotion recognition based on convolutional neural network with attention-based bidirectional long short-term memory network and multi-task learning | |
JP6076425B1 (en) | Interactive interface | |
CN106855879A (en) | The robot that artificial intelligence psychology is seeked advice from music | |
JP6366749B2 (en) | Interactive interface | |
CN110297906A (en) | Generate method, computer readable storage medium and the terminal device of interview report | |
CN112115242A (en) | Intelligent customer service question-answering system based on naive Bayes classification algorithm | |
CN116010581A (en) | Knowledge graph question-answering method and system based on power grid hidden trouble shooting scene | |
Chandiok et al. | CIT: Integrated cognitive computing and cognitive agent technologies based cognitive architecture for human-like functionality in artificial systems | |
WO2019165732A1 (en) | Robot emotional state-based reply information generating method and apparatus | |
Huang et al. | Developing context-aware dialoguing services for a cloud-based robotic system | |
Ruwa et al. | Affective visual question answering network | |
CN112233648B (en) | Data processing method, device, equipment and storage medium combining RPA and AI | |
Zhao et al. | Transferring age and gender attributes for dimensional emotion prediction from big speech data using hierarchical deep learning | |
Yang et al. | User behavior fusion in dialog management with multi-modal history cues | |
CN114970561B (en) | Dialogue emotion prediction model with reinforced characters and construction method thereof | |
CN116029303A (en) | Language expression mode identification method, device, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 2017-10-27