CN106933344A - Method and device for realizing multi-modal interaction between intelligent robots - Google Patents
- Publication number
- CN106933344A CN106933344A CN201710033259.XA CN201710033259A CN106933344A CN 106933344 A CN106933344 A CN 106933344A CN 201710033259 A CN201710033259 A CN 201710033259A CN 106933344 A CN106933344 A CN 106933344A
- Authority
- CN
- China
- Prior art keywords
- script information
- robot
- modal
- script
- output
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J13/00—Controls for manipulators
Abstract
The invention discloses a method and device for realizing multi-modal interaction between intelligent robots, each intelligent robot being provided with a robot operating system. The method includes: obtaining specific script information, the script information being preset script information according to which multiple robots each perform multi-modal output in a set order; determining, according to the script information, the robots with which a communication connection needs to be established, and establishing connections with them; and performing multi-modal output, based on the script information, in the set order together with the other robots. The invention not only realizes multi-modal interaction among multiple robots, but also improves the accuracy of the robots' multi-modal output, allows an intelligent robot to better meet user needs, enhances its multi-modal interaction capability, enriches its functions, and improves the user experience.
Description
Technical field
The present invention relates to the field of intelligent robotics, and more particularly to a method and device for realizing multi-modal interaction between intelligent robots.
Background art
With the continuing development of science and technology and the introduction of information technology, computer technology, and artificial-intelligence technology, robotics research has gradually moved beyond industry and extended into fields such as medical care, health care, the home, entertainment, and the service industry. Accordingly, what people require of a robot has risen from simple repeated mechanical actions to an intelligent robot capable of human-like question answering, autonomy, and interaction with other robots. Human-robot interaction has thus become a key factor determining the development of intelligent robots; improving the interactive capability of intelligent robots and raising their intelligence is therefore an important problem urgently needing to be solved.
Summary of the invention
One of technical problems to be solved by the invention are to need to provide a kind of man-machine friendship that can improve intelligent robot
The solution for realizing multi-modal interaction between intelligent robot of mutual ability.
In order to solve the above technical problem, an embodiment of the present application first provides a method for realizing multi-modal interaction between intelligent robots, the intelligent robot being provided with a robot operating system. The method includes: obtaining specific script information, the script information being preset script information according to which multiple robots each perform multi-modal output in a set order; determining, according to the script information, the robots with which a communication connection needs to be established, and establishing connections with them; and performing multi-modal output, based on the script information, in the set order together with the other robots.
Preferably, the step of obtaining the specific script information further includes: downloading the script information from a cloud server; or generating the script information according to multi-modal input from a user.
Preferably, after the multi-modal output has been performed according to the script information, a trigger signal is sent to the next robot that is to perform multi-modal output.
Preferably, the corresponding multi-modal output is performed according to an execution time or an execution time interval set in the script information.
Preferably, when the script information is obtained, the robot is automatically matched, according to the script information, to a predetermined role in the script information.
In addition, an embodiment of the present application further provides a device for realizing multi-modal interaction between intelligent robots, the intelligent robot being provided with a robot operating system. The device includes: a script-information acquisition module, which obtains specific script information, the script information being preset script information according to which multiple robots each perform multi-modal output in a set order; a communication-connection establishing module, which determines, according to the script information, the robots with which a communication connection needs to be established, and establishes connections with them; and a multi-modal output module, which performs multi-modal output, based on the script information, in the set order together with the other robots.
Preferably, the script-information acquisition module further downloads the script information from a cloud server, or generates the script information according to multi-modal input from a user.
Preferably, the device also includes a trigger-signal sending module, which, after the multi-modal output has been performed according to the script information, sends a trigger signal to the next robot that is to perform multi-modal output.
Preferably, the multi-modal output module further performs the corresponding multi-modal output according to an execution time or an execution time interval set in the script information.
Preferably, the device also includes a role-matching module, which, when the script information is obtained, automatically matches the robot, according to the script information, to a predetermined role in the script information.
Compared with the prior art, one or more embodiments of the above scheme can have the following advantages or beneficial effects: by having the robots that need to carry out multi-modal interaction obtain specific script information, determining from the script information the robots with which a communication connection needs to be established and establishing those connections, and then performing multi-modal output, based on the script information, in the set order together with the other robots, the embodiments of the present invention not only realize multi-modal interaction among multiple robots, but also improve the accuracy of the robots' multi-modal output, allow an intelligent robot to better meet user needs, enhance its multi-modal interaction capability, enrich its functions, and improve the user experience.
Other features and advantages of the present invention will be set forth in the following description, and will in part become apparent from the description or be understood by implementing the technical scheme of the present invention. The objects and other advantages of the present invention can be realized and obtained through the structures and/or flows particularly pointed out in the specification, claims, and accompanying drawings.
Brief description of the drawings
The accompanying drawings are provided for a further understanding of the technical scheme of the application or of the prior art, and constitute a part of the specification. The drawings, which illustrate embodiments of the application, serve together with those embodiments to explain the technical scheme of the application, but do not constitute a limitation of it.
Fig. 1 is a flow diagram of example one of the method for realizing multi-modal interaction between intelligent robots according to the invention.
Fig. 2 is a flow diagram of example two of the method for realizing multi-modal interaction between intelligent robots according to the invention.
Fig. 3 is a structural block diagram of example three, a device 300 for realizing multi-modal interaction between intelligent robots according to the invention.
Detailed description of the embodiments
Embodiments of the present invention are described in detail below with reference to the drawings and examples, so that the manner in which the invention applies technical means to solve technical problems and achieve the relevant technical effects can be fully understood and implemented. The features of the embodiments of the present application can be combined with each other provided they do not conflict, and the technical schemes so formed all fall within the protection scope of the present invention.
In addition, the steps illustrated in the flowcharts of the drawings can be performed in a computer system, for example as a set of computer-executable instructions. Moreover, although a logical order is shown in the flowcharts, in some cases the steps shown or described can be performed in an order different from that given here.
In the existing field of intelligent robotics, most robots can carry out only single-mode voice interaction with a user, completing tasks assigned by the user or conducting simple question-and-answer exchanges with the user. However, because existing robots are of limited intelligence and are easily affected by environmental factors, they cannot realize multi-modal interaction with other robots, which reduces the user's interest in using an intelligent robot.
Embodiments of the invention provide a solution to the above problems. Before multi-modal interaction is carried out between robots, specific script information is obtained in advance; a robot can download the script information from a cloud server or generate it from the user's multi-modal input. After obtaining the script information, the robot establishes communication connections with the other robots in the script that need to take part in the multi-modal interaction, and each robot then performs multi-modal output based on the script information. Multi-modal interaction between robots is thereby realized, and no automatic speech recognition (Automatic Speech Recognition, ASR) or generation of voice output from recognized speech is needed during the interaction, so the interaction between robots is simpler, more direct, and more controllable, and is not affected by the external environment, such as noise.
When performing multi-modal output based on the script information, a robot can execute according to an execution time or execution time interval set in the script information. In this way, even if the robots do not communicate with one another, the interactive operation can still be completed according to the script information.
In other examples, after a robot has performed its multi-modal output according to the script information, it can send a trigger signal to the next robot that is to perform multi-modal output; that robot, after completing its own multi-modal output, in turn sends a trigger signal to the robot after it, so that the script information is executed successively in the set order. It is easy to see that, when the script information is executed according to time settings, an occasional unexpected condition while the current robot is performing its multi-modal output may cause a timing error, producing the uncoordinated situation in which the next robot begins outputting multi-modal data before the current robot has finished. Using signal triggering avoids this scenario.
Embodiment one
Fig. 1 is a flow diagram of example one of the method for realizing multi-modal interaction between intelligent robots according to the invention. Each intelligent robot is preferably a robot provided with a robot operating system; however, the present embodiment can also be realized with intelligent robots (or devices) that have expressive capabilities such as voice, expression, and action but do not use the robot operating system. Each step of the method is described below with reference to Fig. 1.
In step S110, the intelligent robot obtains specific script information.
" script information " in this example is to have preset multiple robots to perform multi-modal output respectively according to setting order
Script information, can typically include robot quantity, script theme, each robot multigroup multi-modal output task to be performed
Particular content etc..Two script examples as follows:
Example 1: involves two robots (robot quantity), who are chatting (script theme). The particular content is as follows:
Robot A: TTS: Hello, I am Xiaomi. Expression: happy. Action: raise left hand in greeting.
Robot B: TTS: Hello Xiaomi, you are very cute. I am Xiao'er. Expression: affection. Action: shake hands.
Robot A: TTS: Xiao'er is quite the sweet talker. Expression: laugh. Action: swing arms.
Robot B: TTS: Can you dance, Xiaomi? Expression: expectation. Action: clasp both arms.
Robot A: ...
Example 2: involves three robots, who are playing house. The particular content is as follows:
Robot A: TTS: I am the mother. Expression: proud. Action: point to self.
Robot B: TTS: I am the father. Expression: happy. Action: touch head with hand.
Robot C: TTS: I am the baby. Expression: joyful. Action: imitate flying with arms.
Robots A and B: TTS: Our favorite baby. Expression: love. Action: make a heart gesture.
...
Of course, in addition to the script elements described above, the script information can also include other elements, such as the time required for a robot to complete one group of pending multi-modal output, or the execution time of each group of pending multi-modal output. No limitation is imposed here; script elements can be added or deleted according to actual needs.
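The patent does not fix a concrete file format for the script information. Purely as an illustrative assumption, a script such as example 1 above could be represented as a simple data structure like the following; every field name here (robot_count, theme, steps, and so on) is hypothetical, not part of the disclosure:

```python
# Hypothetical representation of the "script information" described above.
# Field names are illustrative assumptions, not a format defined by the patent.
script = {
    "robot_count": 2,
    "theme": "chat",
    "steps": [
        {"role": "A", "tts": "Hello, I am Xiaomi.",
         "expression": "happy", "action": "raise_left_hand"},
        {"role": "B", "tts": "Hello Xiaomi, you are very cute. I am Xiao'er.",
         "expression": "affection", "action": "shake_hands"},
        {"role": "A", "tts": "Xiao'er is quite the sweet talker.",
         "expression": "laugh", "action": "swing_arms"},
        {"role": "B", "tts": "Can you dance, Xiaomi?",
         "expression": "expectation", "action": "clasp_arms"},
    ],
}

# A robot playing role "A" would extract only its own pending steps,
# matching the preferred example in which each robot holds only its own content:
my_steps = [s for s in script["steps"] if s["role"] == "A"]
```

A structure of this kind also leaves room for the extra elements mentioned above, such as a per-group execution time.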
In this step, each robot can obtain script information containing the execution content of all robots. Alternatively, in a preferred example, each robot obtains only the script content it is itself to perform; this prevents confusion while the multi-modal output is being executed and makes clear to each robot exactly which content it must perform.
As for the manner of obtaining the script information, a robot can download it from a cloud server. Typically, a script designer designs script information for many different themes; to make it convenient to change roles or to assign different roles to different robots, the designer can upload these scripts to the robots' cloud brain, namely the cloud server, for robots to download on demand. When the user needs a robot to perform a certain script, the robot sends a request to download that script information to the cloud server, and the cloud server transfers the matching script information to the corresponding robot.
Alternatively, to meet the user's demand for personalized customization and to increase the interest of using the robot, each robot can generate corresponding script information according to the user's multi-modal input. For example, the user speaks to the robot, telling it the multi-modal output to perform, the execution time, and which robots to establish communication connections with; the robot recognizes the user's speech through an automatic speech recognition system and turns this information into script information. Viewed another way, such script information can also be regarded as pending multi-modal instructions. Of course, the user can also input edited script information directly into the robot's script execution module.
In addition, in other embodiments, script information can also be obtained from other robots. Specifically, one robot first downloads the script information from the cloud server and then sends it to the other robots with which it has established communication connections.
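The two acquisition paths just described (downloading from the cloud server versus generating from recognized user input) can be sketched as below. The endpoint layout, function names, and returned fields are all assumptions for illustration; the patent does not specify them:

```python
import json
import urllib.request

def download_script(server_url: str, script_id: str) -> dict:
    # Request a specific script from the cloud server ("cloud brain").
    # The /scripts/<id> endpoint layout is a hypothetical example.
    with urllib.request.urlopen(f"{server_url}/scripts/{script_id}") as resp:
        return json.load(resp)

def generate_script_from_user(recognized_text: str) -> dict:
    # Toy stand-in for turning an ASR result into script information.
    # A real system would parse the recognized speech for the outputs to
    # perform, the execution times, and the peer robots to connect to.
    return {
        "theme": "user_defined",
        "steps": [{"role": "A", "tts": recognized_text}],
    }
```

Either path yields the same script structure, so the later steps (connection establishment, role matching, output) need not care where the script came from.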
In step S120, the robots with which a communication connection needs to be established are determined according to the script information, and the connections are established.
Specifically, after a robot obtains the script information, it determines from the script information the number N of robots participating in the multi-modal interaction, determines the ID information of the N-1 robots to be connected to it, and then establishes communication connections with those N-1 robots. The particular connection mode is not limited; a wireless connection, such as WiFi, is preferred.
In addition, after the communication connections are established, if the script information obtained by the N robots contains the execution content of all robots, each robot can automatically match itself, according to the script information, to a predetermined role in the script; after matching, the role is associated with the robot's identification code (ID), and the other robots are informed by broadcast. To prevent conflicting role assignments, the roles can be matched one by one in the set order; in this way, the robots distribute the roles automatically. On the other hand, if the script information obtained by each robot is the script information that robot alone is to perform, the role corresponding to the script is likewise associated with the robot's identification code ID, and the other robots are informed by broadcast.
This step not only completes the establishment of the communication connections among the multiple robots, but also achieves the role assignment of each robot, preparing each robot for the multi-modal interaction of the next step.
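The automatic role distribution above might be sketched as follows, under the assumption that matching roles one by one in a fixed order over the robot IDs yields conflict-free assignments; the broadcast announcement is abstracted away, and all names are illustrative:

```python
def assign_roles(robot_ids, roles):
    """Associate each predetermined script role with a robot identification code.

    Matching in a set order (here, sorted ID order) prevents two robots
    from claiming the same role; each resulting pairing would then be
    announced to the other robots by broadcast.
    """
    assignments = {}
    for robot_id, role in zip(sorted(robot_ids), roles):
        assignments[role] = robot_id  # role -> robot ID
    return assignments

# Three robots matched to the three roles of script example 2:
mapping = assign_roles({"ID-7", "ID-2", "ID-9"}, ["mother", "father", "baby"])
# mapping -> {"mother": "ID-2", "father": "ID-7", "baby": "ID-9"}
```

Because every robot applies the same deterministic rule to the same inputs, each one can compute the full mapping locally and the broadcast serves only as confirmation.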
In step S130, multi-modal output is performed based on the script information.
Specifically, after the roles are assigned, the robots execute the script in succession according to the multi-modal output order of the robots in the script information. When execution starts, the robot whose role is first in the sequence parses the multi-modal output it is to perform, determining the voice output, action output, and/or expression output required. The other robots then perform their multi-modal output following the same steps.
Take "Robot A: TTS: Hello, I am Xiaomi. Expression: happy. Action: raise left hand in greeting." as an example. By parsing this script information, the content of the voice output is "Hello, I am Xiaomi", the content of the expression output is "happy", and the content of the action output is "raise left hand". Corresponding voice, expression, and action output instructions are then prepared according to the parsed content, and, to make the three output results harmonious, the three instructions are kept as synchronized in execution as possible. For the voice output, the robot uses TTS technology to output "Hello, I am Xiaomi" through a voice output device. For the expression output, since the expressions of most robots are virtual expressions displayed as images, an image representing "happy" can be shown on the display device. An action instruction typically includes the complete instruction data needed to complete the corresponding action, such as the specific hardware controls and the values corresponding to each hardware degree of freedom. For example, to realize raising the left hand, degree of freedom 1 is set as: relative to the state of the left arm hanging freely, lift it 180° to the side. When the robot realizes the action of degree of freedom 1, specifically, the drive motor controller that drives the robot's arm receives the control data, and the motor drive module executes the control data to drive the robot's arm to make the action.
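Parsing one script line into its three output channels, as described for step S130, could look like the following sketch. The line format mirrors the example above; treating "TTS:", "Expression:", and "Action:" as fixed markers is an assumption made for illustration:

```python
def parse_step(line: str) -> dict:
    """Split a script line of the form
    'TTS: ... Expression: ... Action: ...' into its three output channels."""
    out = {}
    markers = ["TTS:", "Expression:", "Action:"]
    # Walk the known markers and take the text up to the next marker.
    for i, marker in enumerate(markers):
        start = line.index(marker) + len(marker)
        end = line.index(markers[i + 1]) if i + 1 < len(markers) else len(line)
        out[marker.rstrip(":").lower()] = line[start:end].strip()
    return out

step = parse_step("TTS: Hello, I am Xiaomi. Expression: happy Action: raise left hand")
# step["tts"] -> "Hello, I am Xiaomi."
# step["expression"] -> "happy"
# step["action"] -> "raise left hand"
```

The three parsed fields would then be handed to the TTS engine, the display device, and the motor drive module respectively, launched as close to simultaneously as possible to keep the outputs coordinated.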
In step S140, after the multi-modal output has been performed according to the script information, a trigger signal is sent to the next robot that is to perform multi-modal output.
After a robot completes the group of multi-modal output corresponding to a certain role in the script information, it first determines whether the script information has been executed in full; if so, it exits the multi-modal interaction scenario. Otherwise, it determines the robot execution order from the content of the script information (or uses the already-known execution order), generates a trigger signal, and sends it over the communication link to the next robot that is to perform multi-modal output. That robot, after receiving the trigger signal, completes its corresponding multi-modal output according to the content of step S130, then likewise judges whether the script information has been executed in full and, if not, generates a trigger signal for the robot after it in turn. By repeating this cycle, all robots that have established communication connections complete all the content of the script information. Each time a robot sends a trigger signal, it also records the script information its role has already completed, so that when it next receives a trigger signal it knows which group of script information is to be executed; alternatively, after completing one group of script information, it can mark the next group of pending script information.
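The trigger-signal cycle of step S140 amounts to passing control along the script's role order until the script is exhausted. The sketch below simulates that chain in a single process; in the actual system each iteration would run on a different robot, with the trigger sent over the communication link (all names here are illustrative):

```python
from collections import deque

def run_script_with_triggers(steps):
    """Simulate the trigger chain: each robot performs its group of
    multi-modal output, records it as completed, then 'sends' a trigger
    to the next robot until the script information is exhausted."""
    performed = []
    pending = deque(steps)
    while pending:                      # a trigger signal has been received
        step = pending.popleft()
        performed.append(step["role"])  # perform and record this group
        # If steps remain, a trigger signal would now be sent over the
        # communication link to the robot owning the next group.
    return performed

order = run_script_with_triggers(
    [{"role": "A"}, {"role": "B"}, {"role": "A"}, {"role": "B"}]
)
# order -> ["A", "B", "A", "B"]
```

Because each robot acts only on receipt of a trigger, the chain cannot produce the overlap problem described earlier for purely time-based execution.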
Multi-modal interaction between robots is realized in the above manner, and no ASR recognition is needed during the interaction, so the interaction between robots is simpler, more direct, and more controllable, and is not affected by the external environment, such as noise.
Embodiment two
Fig. 2 is a flow diagram of example two of the method for realizing multi-modal interaction between intelligent robots according to the invention. The method of this embodiment mainly includes the following steps, in which the steps similar to those of embodiment one are marked with the same labels and their particular content is not repeated; only the differing steps are described in detail.
In step S110, the intelligent robot obtains specific script information.
In step S120, the robots with which a communication connection needs to be established are determined according to the script information, and the connections are established.
In step S210, the corresponding multi-modal output is performed according to the execution time or execution time interval set in the script information.
It should be noted that, in the script information, each group of multi-modal output content corresponding to each role is given a set execution time, or a time interval separating it from the previous group of multi-modal output. In this way, each robot only needs to perform its role's groups of multi-modal output in the script information according to the time constraints; the operation is relatively simple and requires no real-time monitoring of trigger messages. The execution time or time interval corresponding to each group of a role's multi-modal output content can be calculated from the script-information content of the other roles lying between that group and the role's previous group of multi-modal output. For example, in script example 2 of embodiment one, the first and second groups of script information of robot A are separated by two groups of script information belonging to the two other roles (robots B and C), whereas the first and second groups of script information of robot B are separated by one group belonging to the other role (robot C); therefore, although robots A and B each perform a second group of script information, the time intervals between each one's group and its previous group are entirely different. Furthermore, the time interval can also be designed according to the number of hardware devices involved in executing the script information, the length of the voice output content, and so on, which is not described further here.
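Under the simplifying assumption that every group of output in the script occupies a known, uniform duration, the interval-based variant of step S210 could be sketched as below; the uniform per-group duration and all names are hypothetical (the patent allows per-group durations to vary with hardware and voice-content length):

```python
def start_offsets(steps, my_role, group_duration=2.0):
    """Return the start times (in seconds) of my_role's groups, assuming
    each group of multi-modal output in the script takes group_duration.

    The interval between two of a role's groups thus grows with the number
    of other roles' groups lying between them, as in the example above."""
    return [i * group_duration
            for i, step in enumerate(steps) if step["role"] == my_role]

# Script example 2 pattern: A, B, C, A, B, C
steps = [{"role": r} for r in ["A", "B", "C", "A", "B", "C"]]
offsets_a = start_offsets(steps, "A")  # -> [0.0, 6.0]
offsets_b = start_offsets(steps, "B")  # -> [2.0, 8.0]
```

Each robot can compute its own offsets once from the shared script and then run on a local timer, with no inter-robot messages during execution.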
During the execution of the multi-modal output, each robot can also monitor in real time whether there is a control instruction from the user. Because a control instruction from the user has higher priority, regardless of whether the instruction the user sends is a pause/exit instruction for the script information, the robot pauses or exits the execution of the script information and executes the user's control instruction.
Embodiment three
Fig. 3 is a structural block diagram of a device 300 for realizing multi-modal interaction between intelligent robots according to an embodiment of the present invention. As shown in Fig. 3, the device 300 of the embodiment of the application mainly includes: a script-information acquisition module 310, a communication-connection establishing module 320, and a multi-modal output module 330.
The script-information acquisition module 310 obtains specific script information, the script information being preset script information according to which multiple robots each perform multi-modal output in a set order. The script-information acquisition module 310 further downloads the script information from a cloud server, or generates the script information according to multi-modal input from a user.
The communication-connection establishing module 320 determines, according to the script information, the robots with which a communication connection needs to be established, and establishes connections with them.
The multi-modal output module 330 performs multi-modal output, based on the script information, in the set order together with the other robots. The multi-modal output module 330 further performs the corresponding multi-modal output according to the execution time or execution time interval set in the script information.
In addition, as shown in Fig. 3, the present device also includes a trigger-signal sending module 340 and a role-matching module 350. The trigger-signal sending module 340, after the multi-modal output has been performed according to the script information, sends a trigger signal to the next robot that is to perform multi-modal output. The role-matching module 350 further, when the script information is obtained, automatically matches the robot, according to the script information, to a predetermined role in the script information.
Through appropriate configuration, the device 300 of the present embodiment can perform the steps of embodiment one and embodiment two, which are not repeated here.
Because the method for the present invention describes what is realized in computer systems.The computer system can for example be set
In the control core processor of robot.For example, method described herein can be implemented as what can be performed with control logic
Software, it is performed by the CPU in robot operating system.Function as herein described can be implemented as storage to be had in non-transitory
Programmed instruction set in shape computer-readable medium.When implemented in this fashion, the computer program includes one group of instruction,
When group instruction is run by computer, it promotes computer to perform the method that can implement above-mentioned functions.FPGA can be temporary
When or be permanently mounted in non-transitory tangible computer computer-readable recording medium, for example ROM chip, computer storage,
Disk or other storage mediums.In addition to being realized with software, logic as herein described can utilize discrete parts, integrated electricity
What road and programmable logic device (such as, field programmable gate array (FPGA) or microprocessor) were used in combination programmable patrols
Volume, or embodied including any other equipment that they are combined.All such embodiments are intended to fall under model of the invention
Within enclosing.
It should be understood that the disclosed embodiments of the invention are not limited to the particular structures, process steps, or materials disclosed herein, but extend to their equivalents as understood by those of ordinary skill in the relevant art. It should also be understood that the terminology used here serves only to describe particular embodiments and is not intended to be limiting.
" one embodiment " or " embodiment " mentioned in specification means special characteristic, the structure for describing in conjunction with the embodiments
Or characteristic is included at least one embodiment of the present invention.Therefore, the phrase " reality that specification various places throughout occurs
Apply example " or " embodiment " same embodiment might not be referred both to.
Although embodiments have been disclosed as above, the content stated is only an embodiment adopted to facilitate understanding of the present invention and is not intended to limit it. Any person skilled in the art to which the invention pertains can make modifications and changes in the form and details of implementation without departing from the spirit and scope disclosed by the invention, but the scope of patent protection of the invention must still be as defined by the appended claims.
Claims (10)
1. A method for realizing multi-modal interaction between intelligent robots, the intelligent robot being provided with a robot operating system, the method comprising:
obtaining specific script information, the script information being preset script information according to which multiple robots each perform multi-modal output in a set order;
determining, according to the script information, the robots with which a communication connection needs to be established, and establishing connections with them;
performing multi-modal output, based on the script information, in the set order together with the other robots.
2. The method according to claim 1, characterized in that the step of obtaining the specific script information further includes:
downloading the script information from a cloud server; or
generating the script information according to multi-modal input from a user.
3. The method according to claim 1 or 2, characterized in that,
after the multi-modal output has been performed according to the script information, a trigger signal is sent to the next robot that is to perform multi-modal output.
4. The method according to claim 1 or 2, characterized in that,
the corresponding multi-modal output is performed according to an execution time or an execution time interval set in the script information.
5. The method according to any one of claims 1 to 4, wherein
when the script information is obtained, the robot is automatically matched, according to the script information, to a predetermined role in the script information.
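The method of claims 1 to 5 can be illustrated with a minimal sketch. All class, field, and function names below (`ScriptEntry`, `Robot`, `run_script`, and so on) are hypothetical and not part of the patent; the sketch simply shows one way the claimed steps could fit together: a robot matches itself to a predetermined role in the script (claim 5), then the entries are performed in the set order, each after its configured time interval (claim 4).

```python
import time
from dataclasses import dataclass

@dataclass
class ScriptEntry:
    role: str            # which robot role performs this entry
    modality: str        # e.g. "speech", "gesture", "display"
    content: str         # what to output
    delay: float = 0.0   # execution time interval before this entry (claim 4)

@dataclass
class Robot:
    name: str
    role: str = ""

    def match_role(self, script):
        # Claim 5: automatically match this robot to a predetermined
        # role declared in the script information (here, by name).
        roles = {e.role for e in script}
        self.role = self.name if self.name in roles else ""
        return self.role

    def perform(self, entry):
        # Stand-in for real multi-modal output (speech, motion, screen).
        return f"{self.name} -> {entry.modality}: {entry.content}"

def run_script(script, robots):
    """Execute the preset entries in the set order (claim 1), waiting
    the configured interval before each entry (claim 4)."""
    by_role = {r.role: r for r in robots if r.match_role(script)}
    log = []
    for entry in script:
        time.sleep(entry.delay)          # timed execution
        log.append(by_role[entry.role].perform(entry))
    return log

script = [
    ScriptEntry("A", "speech", "Hello!"),
    ScriptEntry("B", "gesture", "wave"),
]
robots = [Robot("A"), Robot("B")]
print(run_script(script, robots))
```

In a real multi-robot deployment each `perform` call would run on a different machine, with the hand-off between robots carried by a trigger signal (claim 3) rather than a shared loop.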
6. A device for realizing multi-modal interaction between intelligent robots, wherein the intelligent robot is provided with a robot operating system, the device comprising:
a script information acquisition module, which obtains specific script information, the script information being preset script information according to which a plurality of robots each perform multi-modal output in a set order;
a communication connection establishment module, which determines, according to the script information, the robots with which a communication connection needs to be established, and establishes the connection with them;
a multi-modal output module, which performs, together with the plurality of other robots, multi-modal output based on the script information, respectively in the set order.
7. The device according to claim 6, wherein
the script information acquisition module further downloads the script information from a cloud server, or generates the script information according to the user's multi-modal input.
8. The device according to claim 6 or 7, wherein the device further comprises:
a trigger signal sending module, which, after multi-modal output has been performed according to the script information, sends a trigger signal to the next robot that is to perform multi-modal output.
9. The device according to claim 6 or 7, wherein
the multi-modal output module further performs the corresponding multi-modal output according to an execution time or an execution time interval set in the script information.
10. The device according to any one of claims 6 to 8, wherein the device further comprises:
a role matching module, which, when the script information is obtained, automatically matches the robot, according to the script information, to a predetermined role in the script information.
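The device of claims 6 to 10 decomposes into cooperating modules. The sketch below is purely illustrative; the module classes, the `link://` placeholder, and the dictionary-based "cloud" are assumptions for demonstration, not the patent's implementation. It shows the division of labor: acquisition (download or generate, claim 7), connection establishment from the script's roles (claim 6), and ordered output with a trigger hand-off (claim 8).

```python
class ScriptInfoAcquisitionModule:
    """Claim 7: obtain script information either by downloading it from
    a cloud server, or by generating it from the user's multi-modal input."""
    def __init__(self, cloud=None):
        self.cloud = cloud or {}  # stand-in for a cloud server

    def obtain(self, script_id=None, user_input=None):
        if script_id is not None:
            return self.cloud[script_id]          # download path
        # generation path: turn each user utterance into a script entry
        return [{"role": f"R{i}", "content": text}
                for i, text in enumerate(user_input)]

class ConnectionModule:
    """Claim 6: determine from the script which robots need a
    communication connection, and establish it."""
    def connect(self, script):
        peers = sorted({e["role"] for e in script})
        return {p: f"link://{p}" for p in peers}  # stand-in for real links

class MultiModalOutputModule:
    """Claims 8/9: perform output in the set order, passing a trigger
    signal to the next robot after each entry."""
    def run(self, script, links):
        trace = []
        for e in script:
            assert e["role"] in links             # connection must exist
            trace.append((e["role"], e["content"], "trigger->next"))
        return trace

acq = ScriptInfoAcquisitionModule()
script = acq.obtain(user_input=["hello", "goodbye"])
links = ConnectionModule().connect(script)
print(MultiModalOutputModule().run(script, links))
```

Keeping acquisition, connection, and output in separate modules mirrors the claim structure and lets the timing policy (claim 9) or role matching (claim 10) be swapped without touching the transport layer.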
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710033259.XA CN106933344A (en) | 2017-01-18 | 2017-01-18 | Method and device for realizing multi-modal interaction between intelligent robots |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710033259.XA CN106933344A (en) | 2017-01-18 | 2017-01-18 | Method and device for realizing multi-modal interaction between intelligent robots |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106933344A true CN106933344A (en) | 2017-07-07 |
Family
ID=59444717
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710033259.XA Pending CN106933344A (en) | Method and device for realizing multi-modal interaction between intelligent robots | 2017-01-18 | 2017-01-18 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106933344A (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107688983A (en) * | 2017-07-27 | 2018-02-13 | Beijing Guangnian Wuxian Technology Co., Ltd. | Intelligent robot customized service processing method and system based on a business platform |
CN107894831A (en) * | 2017-10-17 | 2018-04-10 | Beijing Guangnian Wuxian Technology Co., Ltd. | An interaction output method and system for an intelligent robot |
CN109510753A (en) * | 2017-09-15 | 2019-03-22 | Shanghai Washu Internet Technology Co., Ltd. | Construction method for a group IP robot, interaction response method and apparatus, storage medium, and server |
CN111686448A (en) * | 2019-03-13 | 2020-09-22 | Shanghai Boke City Network Technology Co., Ltd. | Game script operation control method and device, storage medium, and server |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20030073429A (en) * | 2002-03-11 | 2003-09-19 | 주식회사 보스텍 | A communication system and method between intelligent toy robots |
CN1553845A (en) * | 2001-11-07 | 2004-12-08 | Sony Corporation | Robot system and robot apparatus control method |
CN101217488A (en) * | 2008-01-16 | 2008-07-09 | Central South University | A reconfigurable multi-mobile-robot communication method |
CN104238552A (en) * | 2014-09-19 | 2014-12-24 | Nanjing University of Science and Technology | Redundant multi-robot formation system |
CN105680972A (en) * | 2016-01-20 | 2016-06-15 | Shandong University | Network synchronization control method for robot cluster cooperation tasks |
CN105975622A (en) * | 2016-05-28 | 2016-09-28 | Cai Hongming | Multi-role intelligent chat method and system |
2017-01-18: CN application CN201710033259.XA filed (published as CN106933344A); status: Pending
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106933344A (en) | Method and device for realizing multi-modal interaction between intelligent robots | |
KR102038016B1 (en) | Personalized service operation system and method for a smart device and robot using a smart mobile device | |
US9250622B2 (en) | System and method for operating a smart service robot | |
CN105141587B (en) | A virtual puppet interaction method and device | |
CN107340859A (en) | Multi-modal interaction method and system for a multi-modal virtual robot | |
US20080096533A1 (en) | Virtual Assistant With Real-Time Emotions | |
CN108665890A (en) | Operate method, electronic equipment and the system for supporting the equipment of speech-recognition services | |
CN103877727B (en) | An electronic pet controlled by and interacting with a mobile phone | |
CN105082150A (en) | Robot human-machine interaction method based on user mood and intention recognition | |
AU2014236686A1 (en) | Apparatus and methods for providing a persistent companion device | |
CN103218654A (en) | Robot emotion generating and expressing system | |
US20190389075A1 (en) | Robot system and robot dialogue method | |
CN106573376A (en) | Activity monitoring of a robot | |
CN107813306B (en) | Robot and motion control method and device thereof | |
CN106471444A (en) | An interaction method and system for a virtual 3D robot, and the robot | |
US11581086B2 (en) | System and method for delivering a digital therapeutic specific to a user's EMS and profile | |
US20190182193A1 (en) | System and Method for Delivering a Digital Therapeutic from a Parsed Electronic Message | |
CN107307873A (en) | Mood interactive device and method | |
CN106326087B (en) | Web page experience method and system based on robot operating system | |
CN106903695A (en) | Projection interaction method and system applied to intelligent robots | |
CN106445153A (en) | Man-machine interaction method and device for intelligent robot | |
US11478925B2 (en) | Robot and method for controlling same | |
US20230333541A1 (en) | Mobile Brain Computer Interface | |
CN104778044B (en) | Method and device for touch-screen gesture event stream distribution | |
CN106970704A (en) | A human-machine interaction method and device for intelligent robots | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20170707 ||