CN106799733A - Robot motion method and system - Google Patents

Robot motion method and system

Info

Publication number
CN106799733A
Authority
CN
China
Prior art keywords
scene
words
robot
current session
phrases
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201611222614.XA
Other languages
Chinese (zh)
Inventor
陈伟国
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Qianhai Yyd Robot Co Ltd
Original Assignee
Shenzhen Qianhai Yyd Robot Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Qianhai Yyd Robot Co Ltd
Priority to CN201611222614.XA
Publication of CN106799733A
Legal status: Pending

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1602Programme controls characterised by the control system, structure, architecture
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J11/00Manipulators not otherwise provided for
    • B25J11/0005Manipulators having means for high-level communication with users, e.g. speech generator, face recognition means
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J13/00Controls for manipulators
    • B25J13/08Controls for manipulators by means of sensing devices, e.g. viewing or touching devices

Abstract

The invention discloses a robot motion method comprising the following steps: S110: pre-storing action instructions corresponding to different dialogue scenes; S120: receiving a voice input and determining the current dialogue scene from the voice input; S130: executing at least one action instruction corresponding to the current dialogue scene. The invention enables a robot to perform actions adapted to the dialogue scene.

Description

Robot motion method and system
Technical field
The present invention relates generally to robots, and more particularly to a robot motion method and system.
Background technology
With the continuous development of science and technology, robots are developing ever faster and in ever greater variety, from industrial robots to household robots and entertainment robots. Robot technology is advancing rapidly in four directions: increasingly rich sensory functions, ever lower manufacturing cost, ever simpler design and programming, and ever safer use. As one of the greatest inventions of mankind in the 20th century, robotics has made considerable progress in the four decades since its debut in the early 1960s. Industrial robots have passed through birth, growth and maturity and have become indispensable core equipment in manufacturing; about 750,000 industrial robots around the world now work side by side with human workers on all kinds of production lines. Special-purpose robots, the rising stars of the robot family, are catching up quickly because of their wide range of applications: humanoid robots, agricultural robots, service robots, underwater robots, medical robots, military robots, entertainment robots and other special-purpose machines keep emerging and are moving toward practical use at high speed.
Current robots cannot communicate well with people and, when interpreting language, easily give irrelevant answers. Human-robot communication is usually a simple question-and-answer mode: as shown in Fig. 1, a person puts a question to robot 1; after receiving the voice through its microphone, robot 1 performs speech recognition, makes a corresponding answer based on the recognized content, and plays the answer through its loudspeaker. However, the limbs and facial expression of robot 1 do not change at all during this process; that is, robot 1 does not output an expression or action adapted to the dialogue scene, and the dialogue is stiff and dull.
Summary of the invention
In view of the defects of the prior art, the present invention provides a robot motion method and system.
A robot motion method comprises the following steps: S110: pre-storing action instructions corresponding to different dialogue scenes; S120: receiving a voice input and determining the current dialogue scene from the voice input; S130: executing at least one action instruction corresponding to the current dialogue scene.
Optionally, step S120 specifically includes: S121: pre-storing a scene phrase library representing different dialogue scenes; S122: receiving dialogue speech; S123: converting the dialogue speech into dialogue text; S124: when a phrase in the scene phrase library matches the dialogue text, setting the dialogue scene corresponding to the matched scene phrase as the current scene. The method may further comprise the following steps: S210: pre-storing, on a cloud server, the action instructions corresponding to different dialogue scenes; S220: receiving a local voice input, sending the local voice input to the cloud server, and determining the current dialogue scene from the voice; S230: obtaining at least one set of action data corresponding to the current dialogue scene; S240: sending the action data to the local robot so that it performs the corresponding action. Step S130 may include: randomly executing, at the same time, a plurality of action instructions corresponding to the current dialogue scene. Step S130 may include: obtaining the action data corresponding to the action instruction and performing the corresponding action according to the action data.
A robot motion system includes the following units: a storage unit for pre-storing action instructions corresponding to different dialogue scenes; a receiving unit for receiving a voice input; a processing unit for determining the current dialogue scene from the voice input; and a motion unit for executing at least one action instruction corresponding to the current dialogue scene.
Optionally, the storage unit is further configured to pre-store a scene phrase library representing different dialogue scenes; the receiving unit is further configured to receive dialogue speech; and the processing unit is further configured to convert the dialogue speech into dialogue text and, when a phrase in the scene phrase library matches the dialogue text, to set the dialogue scene corresponding to the matched scene phrase as the current scene. The system may also include a cloud server: the cloud server pre-stores the action instructions corresponding to different dialogue scenes; the local voice input is received and sent to the cloud server, which determines the current dialogue scene from the voice; at least one set of action data corresponding to the current dialogue scene is obtained; and the action data is sent to the local robot so that it performs the corresponding action.
The beneficial effects of the invention are as follows: by establishing correspondences between different action instructions and dialogue scenes, the current dialogue scene can be determined from the scene phrase library once a voice input is obtained, so that an action adapted to the dialogue scene can be made according to the correspondence. The dialogue between a person and the robot thus becomes more entertaining and the robot becomes more human-like; having a cloud server process and store the data also favours miniaturization of the robot.
Brief description of the drawings
Fig. 1 is a schematic diagram of the background art of the present invention;
Fig. 2 is an external view of the robot of the present invention;
Fig. 3 is a structural diagram of the motion system of the present invention;
Fig. 4 is a flow chart of Embodiment 1 of the present invention;
Fig. 5 is a flow chart of step S120;
Fig. 6 is a schematic diagram of a first action of the robot;
Fig. 7 is a schematic diagram of a second action of the robot;
Fig. 8 is a flow chart of Embodiment 2 of the present invention.
Specific embodiments
To make the above objects, features and advantages of the present invention more apparent and easier to understand, specific embodiments of the invention are described in detail below with reference to the accompanying drawings, from which the above and other objects, features and advantages of the invention will become clearer. Identical reference numerals denote identical parts throughout the drawings. The drawings are not necessarily drawn to scale; the emphasis is on showing the gist of the invention.
Referring to Fig. 2 and Fig. 3, the system architecture of the invention is first outlined. The robot in the present invention may be a humanoid robot as shown in Fig. 2, with movable limbs and a movable head; it may of course also have a display screen, voice input/output devices, and so on. In Fig. 3, the robot comprises a processing unit and, connected to it, a receiving unit, a communication unit, a voice unit, a motion unit and a storage unit. The receiving unit may be a microphone used to receive voice input; the communication unit is used to communicate with the cloud server described later; the voice unit may be a loudspeaker used to output the response voice; the motion unit receives action data from the processing unit and drives actuators such as the robot's motors according to the action data so that the robot performs the corresponding action; the storage unit stores various kinds of data. The system may further include a cloud server, although it should be understood from the subsequent detailed description that the cloud server is not an essential part of the invention. The robot communicates bidirectionally with the cloud server through the communication unit; the connection may be wired or wireless, for example WiFi or 3G/4G, and the specific form of communication is not limited by the present invention.
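As an illustration only, the unit layout of Fig. 3 could be organized in software roughly as in the following minimal sketch; the class and method names are assumptions made for the sketch and are not part of the patent.

```python
# Illustrative sketch only -- class and method names are assumptions, not from the patent.

class StorageUnit:
    """Stores scene/action data and basic operating parameters."""
    def __init__(self) -> None:
        self.data: dict = {}

class ReceivingUnit:
    """Stands in for the microphone that receives voice input."""
    def capture_voice(self) -> bytes:
        raise NotImplementedError("hardware-dependent")

class VoiceUnit:
    """Stands in for the loudspeaker that plays the spoken response."""
    def speak(self, text: str) -> None:
        print(f"[speaker] {text}")

class MotionUnit:
    """Drives motors/actuators according to the received action data."""
    def execute(self, action_data: list) -> None:
        for frame in action_data:          # each frame: joint positions, duration, speed
            print(f"[motor] {frame}")

class CommunicationUnit:
    """Handles the optional two-way link to the cloud server (WiFi, 3G/4G, ...)."""
    def send(self, payload: bytes) -> bytes:
        raise NotImplementedError("network-dependent")

class ProcessingUnit:
    """Coordinates the other units, as described for Fig. 3."""
    def __init__(self, storage: StorageUnit, motion: MotionUnit,
                 voice: VoiceUnit, comms: CommunicationUnit) -> None:
        self.storage, self.motion, self.voice, self.comms = storage, motion, voice, comms
```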
Embodiment 1
This embodiment discloses a robot motion method. As shown in Fig. 4, the method recognizes the received voice, looks up in the storage unit the action data adapted to the dialogue scene according to the recognition result, and the robot performs the corresponding action according to this action data while issuing the voice response.
S110: pre-storing action instructions corresponding to different dialogue scenes.
In the present invention, the action instructions corresponding to different dialogue scenes are pre-stored in the storage unit. The dialogue scenes may be, for example, greeting on meeting, joy, sadness, doubt, refusal, and so on. Different actions can be set for each dialogue scene; the instructions corresponding to these actions are called action instructions and are stored in the storage unit together with the dialogue scenes, as shown in Table 1.
Table 1
Table 1 lists the action instructions corresponding to different dialogue scenes; as explained below, the robot can perform corresponding actions according to the action instructions of these dialogue scenes. It should be noted that each dialogue scene can correspond to several action instructions; in Table 1, for example, the "meeting" scene corresponds to four action instructions: "stretch out the right hand", "smile", "move forward" and "lift the right hand".
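As an illustration of how the Table 1 correspondence could be held in memory, the sketch below uses a simple dictionary; the "meeting" entries come from the description above, while the entries for the other scenes and all names are assumptions.

```python
# Illustrative in-memory form of the Table 1 correspondence. The "meeting" entries
# are taken from the description; the other scenes' actions are invented examples.
SCENE_ACTIONS = {
    "meeting": ["stretch out the right hand", "smile", "move forward", "lift the right hand"],
    "joy":     ["clap hands"],       # assumed example entry
    "sadness": ["lower the head"],   # assumed example entry
}

def actions_for(scene: str) -> list:
    """S110: look up the pre-stored action instructions for a dialogue scene."""
    return SCENE_ACTIONS.get(scene, [])
```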
S120: receiving a voice input and determining the current dialogue scene from the voice input.
Referring to Fig. 5, after a person starts a dialogue with the robot, the robot receives the person's voice through its receiving unit and obtains a voice signal (S122). The processing unit then performs speech recognition on the received voice signal and converts it into text form, i.e. dialogue text (S123), and the current dialogue scene is determined from the content of the dialogue text. Speech recognition is a relatively mature technology; for its specific methods reference may be made to the related art, and it is not described further here. The following focuses on how the current dialogue scene is determined from the obtained dialogue text.
Scene phrases representing different dialogue scenes are pre-stored, i.e. a scene phrase library is established (S121). A scene phrase is a word or sentence that can represent the main content of the current dialogue. Returning to Table 1, for the "meeting" scene, the phrases expressing a greeting on meeting may be "hello", "how are you", "hi", "nice to meet you", and so on. These scene phrases are stored in the storage unit together with the corresponding scene. After the dialogue text is obtained, the processing unit checks whether the dialogue text contains a phrase that matches a scene phrase; when such a match is found, the dialogue scene corresponding to the matched scene phrase is set as the current scene (S124). The current dialogue scene can thus be determined from the voice input.
For example, when a person starts a dialogue with the robot by saying "Hello, robot Ah Dai!", the robot (Ah Dai) receives this voice through its microphone, and the processing unit performs speech recognition on the received voice signal and converts it into text: "Hello, robot Ah Dai". The processing unit then queries the scene phrase library for an entry matching the dialogue text "Hello, robot Ah Dai" and finds the match "hello". The dialogue scene corresponding to the matched scene phrase "hello" is "meeting", so the processing unit determines that the current dialogue scene is: meeting.
As can be seen, step S120 is implemented by performing the following steps in order: S121: pre-storing a scene phrase library representing different dialogue scenes; S122: receiving dialogue speech; S123: converting the dialogue speech into dialogue text; S124: when a phrase in the scene phrase library matches the dialogue text, setting the dialogue scene corresponding to the matched scene phrase as the current scene.
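A minimal sketch of steps S121-S124 follows; the "meeting" phrases come from the description above, while the other entries and the function name are assumptions.

```python
# Sketch of steps S121-S124. The "meeting" phrases come from the description;
# the other entries and the function name are assumptions.
SCENE_PHRASES = {
    "meeting": ["hello", "how are you", "hi", "nice to meet you"],
    "refusal": ["no thanks", "i don't want to"],   # assumed example entry
}

def judge_scene(dialog_text: str):
    """S124: return the dialogue scene whose phrase appears in the dialogue text."""
    text = dialog_text.lower()
    for scene, phrases in SCENE_PHRASES.items():
        if any(phrase in text for phrase in phrases):
            return scene
    return None   # no scene phrase matched

# Example from the description: judge_scene("Hello, robot Ah Dai") -> "meeting"
```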
S130: executing at least one action instruction corresponding to the current dialogue scene.
After the current scene is determined, the processing unit further obtains at least one action instruction corresponding to that dialogue scene. Continuing the example, after the current dialogue scene is judged to be "meeting", the processing unit randomly selects at least one of the four action instructions corresponding to the "meeting" scene to execute. Suppose the selected action instruction is "lift the right hand"; the robot then queries the local database in the storage unit and obtains the action data matching the "lift the right hand" action. The action data may be an array containing the positions of the controlled parts, durations, rotation speeds and so on. The control unit sends the action data to the motion unit, and the motion unit controls actuators such as the robot's motors according to the action data so that the robot performs the corresponding action. As shown for the robot in Fig. 6, after executing the action data the robot lifts its right hand. While the robot lifts its right hand, the voice unit can make a speech response based on the dialogue text, for example "Hello!". How the robot makes a speech response from the dialogue text is not the core of the present invention and is not described in detail here.
In the above process the robot executes only one action instruction and completes a single action. More preferably, the processing unit can randomly select several action instructions so that the robot makes multiple actions at the same time. For example, after the current dialogue scene is judged to be "meeting", the processing unit randomly selects three of the four action instructions corresponding to the "meeting" scene: smile, lift the right hand, and move forward. As shown for the robot in Fig. 7, after executing the action data the robot simultaneously lifts its right hand, smiles and moves forward.
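The random selection of step S130 and the traversal of the action-data arrays could look roughly like the sketch below; the array layout (joint, position, duration, speed) follows the description, but the concrete field names and values are assumptions.

```python
import random

# Sketch of step S130: randomly pick one or several action instructions for the
# current scene and execute their action data. The field names in each action-data
# frame (joint, position, duration, speed) are assumptions based on the description.
ACTION_DATA = {
    "lift the right hand": [
        {"joint": "right_shoulder", "position": 90, "duration_ms": 500, "speed": 30},
    ],
    "smile": [
        {"joint": "mouth_servo", "position": 20, "duration_ms": 300, "speed": 10},
    ],
    "move forward": [
        {"joint": "left_wheel", "position": 10, "duration_ms": 800, "speed": 20},
        {"joint": "right_wheel", "position": 10, "duration_ms": 800, "speed": 20},
    ],
}

def perform_scene_actions(commands: list, how_many: int = 1) -> None:
    """Randomly execute `how_many` of the scene's action instructions."""
    chosen = random.sample(commands, k=min(how_many, len(commands)))
    for command in chosen:
        for frame in ACTION_DATA.get(command, []):   # traverse the action-data array
            print(f"[motor] {command}: {frame}")     # stand-in for driving an actuator

# For the "meeting" scene of Table 1, executing three actions at once:
# perform_scene_actions(["smile", "lift the right hand", "move forward"], how_many=3)
```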
As can be seen from the description of steps S110-S130 above, the present invention establishes correspondences between different action instructions and dialogue scenes; after a voice input is obtained, the current dialogue scene is judged from the scene phrase library, so that an action adapted to the dialogue scene can be made according to the correspondence. The dialogue between a person and the robot thus becomes more entertaining, and the robot becomes more human-like.
Embodiment 2
This embodiment describes the differences from Embodiment 1; what the two embodiments have in common is not repeated. In Embodiment 1 the robot itself must perform the speech recognition, store the scene phrase library, execute the actions and so on, which requires equipping the robot with a large-capacity storage device and a powerful processor and is unfavourable for miniaturization of the robot.
Referring again to Fig. 3, in this embodiment the speech recognition, the storage of the scene phrase library, and the computation and storage of the action data are performed by the cloud server. The storage unit in the robot keeps only the basic parameters necessary for operation, such as the motor parameters, while the processing unit is mainly responsible for communication with the cloud server; this reduces the performance requirements on the storage unit and the processing unit and is conducive to miniaturization. The robot monitors voice input through its microphone and processes the audio data; the server returns the parsing result, such as the text and the scene, to the robot as JSON data, and the robot parses the JSON to obtain the scene, the answer content and other data. The robot then queries its local database for the data matching the scene and extracts one set of action data by a random algorithm. The action data is an array containing the positions of the controlled parts, durations, rotation speeds and so on; the array is traversed and the robot's actuators are driven so that the action is performed according to the set flow, thereby realizing random actions while the robot chats. That is, as shown in Fig. 8, the present embodiment comprises the following steps (an illustrative sketch of the client-side flow is given after the step list):
S210: pre-storing, on the cloud server, the action instructions corresponding to different dialogue scenes;
S220: receiving the local voice input, sending the local voice input to the cloud server, and determining the current dialogue scene from the voice;
S230: obtaining at least one set of action data corresponding to the current dialogue scene;
S240: sending the action data to the local robot so that it performs the corresponding action.
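A minimal sketch of this client-side flow follows, under the assumption that the server reply is a JSON object with text, scene and answer fields; the patent does not fix the field names or the local database layout, so these are assumptions.

```python
import json
import random

# Sketch of the Embodiment 2 client-side flow (S220-S240). The JSON field names
# and the local database layout are assumptions; the patent only says the server
# returns the parsed text and scene as JSON data.
LOCAL_ACTION_DB = {
    "meeting": [   # each entry is one set of action data (an array of frames)
        [{"joint": "right_shoulder", "position": 90, "duration_ms": 500, "speed": 30}],
        [{"joint": "mouth_servo", "position": 20, "duration_ms": 300, "speed": 10}],
    ],
}

def handle_server_reply(reply_bytes: bytes) -> None:
    reply = json.loads(reply_bytes)   # e.g. {"text": "...", "scene": "meeting", "answer": "Hello!"}
    candidates = LOCAL_ACTION_DB.get(reply.get("scene"), [])
    if candidates:
        action_data = random.choice(candidates)   # random algorithm picks one data set
        for frame in action_data:                 # traverse the array to drive the motors
            print(f"[motor] {frame}")
    print(f"[speaker] {reply.get('answer', '')}")

# handle_server_reply(b'{"text": "hello", "scene": "meeting", "answer": "Hello!"}')
```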
Embodiment 3
Correspondingly, the present invention also provides a robot motion system including the following units: a storage unit for pre-storing action instructions corresponding to different dialogue scenes; a receiving unit for receiving a voice input; a processing unit for determining the current dialogue scene from the voice input; and a motion unit for executing at least one action instruction corresponding to the current dialogue scene.
The storage unit is further configured to pre-store a scene phrase library representing different dialogue scenes; the receiving unit is further configured to receive dialogue speech; and the processing unit is further configured to convert the dialogue speech into dialogue text and, when a phrase in the scene phrase library matches the dialogue text, to set the dialogue scene corresponding to the matched scene phrase as the current scene. The system may also include a cloud server: the cloud server pre-stores the action instructions corresponding to different dialogue scenes; the local voice input is received and sent to the cloud server, which determines the current dialogue scene from the voice; at least one set of action data corresponding to the current dialogue scene is obtained; and the action data is sent to the local robot so that it performs the corresponding action. The motion unit may be configured to randomly execute, at the same time, a plurality of action instructions corresponding to the current dialogue scene, and to obtain the action data corresponding to the action instruction and perform the corresponding action according to the action data.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and the same or similar parts of the embodiments may be referred to one another. Since the method embodiments of the present application are substantially similar to the device embodiments, their description is relatively brief, and reference may be made to the corresponding parts of the device embodiments.
Many specific details have been set forth in the above description in order to provide a thorough understanding of the present invention. However, the above description covers only preferred embodiments of the invention, and the invention can be practised in many ways other than those described here; the invention is therefore not limited by the specific implementations disclosed above. Any person skilled in the art can, without departing from the technical solution of the invention, use the methods and technical content disclosed above to make many possible changes and modifications to the technical solution of the invention, or revise it into equivalent embodiments. Any simple modification, equivalent variation or modification made to the above embodiments according to the technical spirit of the invention, without departing from the content of the technical solution of the invention, still falls within the scope of protection of the technical solution of the invention.

Claims (10)

1. A robot motion method, characterized by comprising the following steps:
S110: pre-storing action instructions corresponding to different dialogue scenes;
S120: receiving a voice input, and determining the current dialogue scene from the voice input;
S130: executing at least one action instruction corresponding to the current dialogue scene.
2. The robot motion method according to claim 1, characterized in that step S120 specifically comprises: S121: pre-storing a scene phrase library representing different dialogue scenes; S122: receiving dialogue speech; S123: converting the dialogue speech into dialogue text; S124: when a phrase in the scene phrase library matches the dialogue text, setting the dialogue scene corresponding to the matched scene phrase as the current scene.
3. The robot motion method according to claim 1, characterized by further comprising the following steps:
S210: pre-storing, on a cloud server, the action instructions corresponding to different dialogue scenes;
S220: receiving a local voice input, sending the local voice input to the cloud server, and determining the current dialogue scene from the voice;
S230: obtaining at least one set of action data corresponding to the current dialogue scene;
S240: sending the action data to the local robot so that it performs the corresponding action.
4. The robot motion method according to claim 1, characterized in that step S130 comprises: randomly executing, at the same time, a plurality of action instructions corresponding to the current dialogue scene.
5. The robot motion method according to claim 1, characterized in that step S130 comprises: obtaining action data corresponding to the action instruction, and performing the corresponding action according to the action data.
6. A robot motion system, characterized by comprising the following units:
a storage unit for pre-storing action instructions corresponding to different dialogue scenes;
a receiving unit for receiving a voice input;
a processing unit for determining the current dialogue scene from the voice input;
a motion unit for executing at least one action instruction corresponding to the current dialogue scene.
7. The robot motion system according to claim 6, characterized in that the storage unit is further configured to pre-store a scene phrase library representing different dialogue scenes; the receiving unit is further configured to receive dialogue speech; and the processing unit is further configured to convert the dialogue speech into dialogue text and, when a phrase in the scene phrase library matches the dialogue text, to set the dialogue scene corresponding to the matched scene phrase as the current scene.
8. The robot motion system according to claim 6, characterized by further comprising a cloud server, wherein: the cloud server pre-stores the action instructions corresponding to different dialogue scenes; the local voice input is received and sent to the cloud server, which determines the current dialogue scene from the voice; at least one set of action data corresponding to the current dialogue scene is obtained; and the action data is sent to the local robot so that it performs the corresponding action.
9. The robot motion system according to claim 6, characterized in that the motion unit is configured to randomly execute, at the same time, a plurality of action instructions corresponding to the current dialogue scene.
10. The robot motion system according to claim 6, characterized in that action data corresponding to the action instruction is obtained, and the corresponding action is performed according to the action data.
CN201611222614.XA 2016-12-27 2016-12-27 Robot motion method and system Pending CN106799733A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611222614.XA CN106799733A (en) 2016-12-27 2016-12-27 Robot motion method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611222614.XA CN106799733A (en) 2016-12-27 2016-12-27 Robot motion method and system

Publications (1)

Publication Number Publication Date
CN106799733A true CN106799733A (en) 2017-06-06

Family

ID=58985561

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611222614.XA Pending CN106799733A (en) 2016-12-27 2016-12-27 Robot motion method and system

Country Status (1)

Country Link
CN (1) CN106799733A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2492850A1 (en) * 2009-10-21 2012-08-29 Thecorpora, S.L. Social robot
CN104965426A (en) * 2015-06-24 2015-10-07 百度在线网络技术(北京)有限公司 Intelligent robot control system, method and device based on artificial intelligence
CN105598972A (en) * 2016-02-04 2016-05-25 北京光年无限科技有限公司 Robot system and interactive method
CN105785813A (en) * 2016-03-18 2016-07-20 北京光年无限科技有限公司 Intelligent robot system multi-modal output method and device

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108177151A (en) * 2017-11-28 2018-06-19 上海魔龙机器人科技有限公司 A kind of robot and its ontology voice interactive system
CN110154048A (en) * 2019-02-21 2019-08-23 北京格元智博科技有限公司 Control method, control device and the robot of robot
CN110125932A (en) * 2019-05-06 2019-08-16 达闼科技(北京)有限公司 A kind of dialogue exchange method, robot and the readable storage medium storing program for executing of robot
CN110125932B (en) * 2019-05-06 2024-03-19 达闼科技(北京)有限公司 Dialogue interaction method for robot, robot and readable storage medium
CN110348335A (en) * 2019-06-25 2019-10-18 平安科技(深圳)有限公司 Method, apparatus, terminal device and the storage medium of Activity recognition

Similar Documents

Publication Publication Date Title
CN106799733A (en) Robot motion method and system
CN106560892B (en) Intelligent robot, cloud interaction method thereof and cloud interaction system
CN107340865B (en) Multi-modal virtual robot interaction method and system
US11302302B2 (en) Method, apparatus, device and storage medium for switching voice role
JP6053847B2 (en) Action control system, system and program
US9026447B2 (en) Command and control of devices and applications by voice using a communication base system
KR102211595B1 (en) Speech recognition apparatus and control method thereof
CN108009573B (en) Robot emotion model generation method, emotion model and interaction method
EP3047481A1 (en) Local and remote speech processing
CN205508398U (en) Intelligent robot with high in clouds interactive function
US20210056966A1 (en) System and method for dialog session management
CN110265013A (en) The recognition methods of voice and device, computer equipment, storage medium
CN106503786B (en) Multi-modal interaction method and device for intelligent robot
CN113127609A (en) Voice control method, device, server, terminal equipment and storage medium
CN111344717B (en) Interactive behavior prediction method, intelligent device and computer readable storage medium
US20180158458A1 (en) Conversational voice interface of connected devices, including toys, cars, avionics, mobile, iot and home appliances
JP2014191029A (en) Voice recognition system and method for controlling voice recognition system
CN110232924A (en) Vehicle-mounted voice management method, device, vehicle and storage medium
WO2018135276A1 (en) Speech and behavior control device, robot, control program, and control method for speech and behavior control device
US20200143235A1 (en) System and method for providing smart objects virtual communication
JP4062591B2 (en) Dialog processing apparatus and method, and robot apparatus
US10777198B2 (en) Apparatus for determining speech properties and motion properties of interactive robot and method thereof
CN117122927A (en) NPC interaction method, device and storage medium
CN105835064A (en) Multi-mode output method of intelligent robot, and intelligent robot system
KR102063389B1 (en) Character display device based the artificial intelligent and the display method thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20170606