CN107678617A - Data interaction method and system for a virtual robot - Google Patents

Data interaction method and system for a virtual robot

Info

Publication number
CN107678617A
CN107678617A (application CN201710828403.9A)
Authority
CN
China
Prior art keywords
robot
user
virtual robot
output
operating system
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710828403.9A
Other languages
Chinese (zh)
Inventor
王恺 (Wang Kai)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Guangnian Wuxian Technology Co Ltd
Original Assignee
Beijing Guangnian Wuxian Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Guangnian Wuxian Technology Co Ltd filed Critical Beijing Guangnian Wuxian Technology Co Ltd
Priority to CN201710828403.9A priority Critical patent/CN107678617A/en
Publication of CN107678617A publication Critical patent/CN107678617A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present invention provides a data interaction system for a virtual robot. The interaction system includes: a hardware device, which has a data processing unit, a storage unit, an input interface and an output interface; a robot operating system, which is loaded in the storage unit so that, when loaded and running, the hardware device as a whole has robot processing capability; and a virtual robot tool, which uses the capability unit of the robot operating system to parse single-modal and/or multi-modal interaction instructions and generate multi-modal output content, which is then sent to the output interface of the hardware device for presentation to the user. With the data interaction method and system for a virtual robot of the present invention, on the one hand, the availability of multiple functions, scenes and tools makes the interaction between the user and the virtual robot richer; on the other hand, the virtual robot and the robot operating system form a complete product, so that interaction and docking with other systems are more convenient, the performance is excellent and the compatibility is high.

Description

Data interaction method and system for a virtual robot
Technical field
The present invention relates to the field of artificial intelligence and, in particular, to a data interaction method and system for a virtual robot.
Background art
The development of robot chat interaction systems aims at imitating human conversation. Early, widely known chatbot applications, such as the Xiaoi chatbot and the Siri chatbot on the iPhone, process received input (including speech or text) and respond to it, attempting to imitate human dialogue in context. A chatbot is one kind of virtual robot; existing virtual robots also include robots with a virtual avatar, and the virtual robot is an important direction in the development of intelligent robots.
However, virtual robots have not yet reached the form of a product; they are usually embedded into applications as chatbots or virtual avatars. A virtual robot cannot be used as a finished product, its docking with other systems is poor, its compatibility cannot be guaranteed, and it brings a large amount of development effort.
Summary of the invention
To solve the above problems, the present invention provides a data interaction system for a virtual robot. The data interaction system includes:
a hardware device, which has a data processing unit, a storage unit, an input interface and an output interface, the input interface being configured to receive single-modal and/or multi-modal interaction instructions sent by a user, and the output interface being configured to present multi-modal output content to the user;
a robot operating system, which is loaded in the storage unit so that, when loaded and running, the hardware device as a whole has robot processing capability;
a virtual robot tool, which has a calling relationship with the capability unit of the robot operating system and uses the capability unit to parse the single-modal and/or multi-modal interaction instructions and generate multi-modal output content, which is then sent to the output interface of the hardware device for presentation to the user.
According to one embodiment of the present invention, the robot operating system includes:
a capability unit, which includes a speech recognition module, an action module, an expression module and a wake-up module;
a tool unit, which is a configuration module used to support the robot operating system in carrying out data processing;
a scene unit, which includes a conversation scene and a user-defined scene;
wherein the virtual robot is a tool of the robot operating system.
According to one embodiment of the present invention, the hardware device may also be provided with a hardware embedded operating system that is compatible with or can dock with the robot operating system.
According to one embodiment of the present invention, the system includes a three-dimensional dynamic model unit, which outputs three-dimensional dynamic model data according to a determined UI avatar design.
According to one embodiment of the present invention, the system includes:
an expression output unit, which is connected with the robot operating system to receive the expression output data sent by the robot operating system and pass it to the three-dimensional dynamic model unit for expression output of the UI avatar;
an action output unit, which is connected with the robot operating system to receive the action output data sent by the robot operating system and pass it to the three-dimensional dynamic model unit for action output of the UI avatar;
a mouth-shape output unit, which is connected with the robot operating system to receive the mouth-shape output data sent by the robot operating system and pass it to the three-dimensional dynamic model unit for mouth-shape output of the UI avatar (a minimal sketch of this data flow follows below).
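For illustration only, the data flow from the robot operating system through the expression, action and mouth-shape output units into the three-dimensional dynamic model unit could look like the following minimal Python sketch; all class and method names here (FramePayload, ThreeDimensionalDynamicModelUnit, receive, render) are assumptions of this description, not identifiers disclosed by the patent.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class FramePayload:
    """One packet of animation data produced by the robot operating system."""
    channel: str           # "expression", "action" or "mouth_shape"
    keyframes: List[dict]  # e.g. blendshape weights or joint angles per frame


class ThreeDimensionalDynamicModelUnit:
    """Renders the determined UI avatar from the received output data."""

    def render(self, payload: FramePayload) -> None:
        # A real system would drive the 3D avatar; here we only log the hand-off.
        print(f"rendering {len(payload.keyframes)} keyframes on channel '{payload.channel}'")


class OutputUnit:
    """Base behaviour shared by the expression, action and mouth-shape output units."""

    channel = "generic"

    def __init__(self, model_unit: ThreeDimensionalDynamicModelUnit):
        self.model_unit = model_unit

    def receive(self, keyframes: List[dict]) -> None:
        # Data arrives from the robot operating system and is passed on unchanged.
        self.model_unit.render(FramePayload(self.channel, keyframes))


class ExpressionOutputUnit(OutputUnit):
    channel = "expression"


class ActionOutputUnit(OutputUnit):
    channel = "action"


class MouthShapeOutputUnit(OutputUnit):
    channel = "mouth_shape"


if __name__ == "__main__":
    model = ThreeDimensionalDynamicModelUnit()
    ExpressionOutputUnit(model).receive([{"smile": 0.8}])
    MouthShapeOutputUnit(model).receive([{"viseme": "AA"}, {"viseme": "OO"}])
```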
According to one embodiment of the present invention, the user-defined scene is used to interact with the user in a scene selected by the user.
According to another aspect of the present invention, a data interaction method for a virtual robot is also provided. The method includes:
receiving single-modal and/or multi-modal interaction instructions sent by a user;
parsing the single-modal and/or multi-modal interaction instructions through a capability unit and generating multi-modal output content;
presenting the output content to the user through the output interface of a hardware device.
With the data interaction method and system for a virtual robot of the present invention, on the one hand, the availability of multiple functions, scenes and tools makes the interaction between the user and the virtual robot richer; on the other hand, the virtual robot and the robot operating system form a complete product, so that interaction and docking with other systems are more convenient, the performance is excellent and the compatibility is high.
Other features and advantages of the present invention will be set forth in the following description, and will partly become apparent from the description or be understood by practicing the present invention. The objects and other advantages of the present invention can be realized and obtained through the structures particularly pointed out in the description, the claims and the accompanying drawings.
Brief description of the drawings
The accompanying drawings are provided for a further understanding of the present invention and constitute a part of the specification. Together with the embodiments of the present invention they serve to explain the present invention, and they are not to be construed as limiting the present invention. In the drawings:
Fig. 1 shows an interaction schematic diagram of a data interaction system for a multi-modal virtual robot according to an embodiment of the present invention;
Fig. 2 shows a device schematic diagram of the data interaction system for a virtual robot according to an embodiment of the present invention;
Fig. 3 shows a structural schematic diagram of the robot operating system of the data interaction system for a virtual robot according to an embodiment of the present invention;
Fig. 4 shows a structural block diagram of the data interaction system for a virtual robot according to an embodiment of the present invention;
Fig. 5 shows a module structure diagram of the data interaction system for a virtual robot according to an embodiment of the present invention;
Fig. 6 shows a flowchart of a data interaction method for a virtual robot according to an embodiment of the present invention;
Fig. 7 shows a more detailed flowchart of the data interaction method for a virtual robot according to an embodiment of the present invention;
Fig. 8 shows another flowchart of the data interaction method for a virtual robot according to an embodiment of the present invention; and
Fig. 9 shows, in further detail, a flowchart of the communication among the user, the hardware device and the cloud server according to an embodiment of the present invention.
Detailed description of the embodiments
To make the objects, technical solutions and advantages of the present invention clearer, the embodiments of the present invention are described below in further detail with reference to the accompanying drawings.
For clarity, the following statements need to be made before the embodiments are described:
The virtual robot is a multi-modal interactive robot, so that the multi-modal interactive robot becomes a participant in the interaction and the user carries out question answering, chatting and games with the multi-modal interactive robot. The virtual avatar is the carrier of the multi-modal interactive robot and presents the multi-modal output of the multi-modal interactive robot. A virtual robot (with a virtual avatar as its carrier) is the combination of the multi-modal interactive robot and the virtual avatar that serves as its carrier, that is, a robot that takes a determined UI avatar design as its carrier, is based on multi-modal human-computer interaction, has AI capabilities such as semantics, emotion and cognition, and lets the user enjoy a personalized and intelligent service robot with a smooth experience. In this embodiment, the virtual robot includes a high-fidelity 3D animated virtual robot avatar.
The cloud server is the terminal that provides the multi-modal interactive robot with the capability to process the user's interaction demands and realizes the interaction with the user.
Fig. 1 shows an interaction schematic diagram of the data interaction system for a multi-modal virtual robot according to an embodiment of the present invention. A primary purpose of the data interaction system for a virtual robot provided by the present invention is to make the interaction between the user and the virtual robot more convenient; therefore, the interaction between the user and the virtual robot is introduced here first.
As shown in Fig. 1, the system involves a user 101, a hardware device 102, a virtual robot 103 and a cloud server 104. The user 101 who interacts with the virtual robot 103 may be a single person, another virtual robot, or a physical robot; the interaction of another virtual robot or a physical robot with the virtual robot is similar to the interaction between a single person and the virtual robot, so Fig. 1 only shows the multi-modal interaction between a user (a person) and the virtual robot.
In addition, the hardware device 102 includes a display area 1021 and hardware apparatus 1022. The display area 1021 is used to display the avatar of the virtual robot 103, and the hardware apparatus 1022 cooperates with the cloud server 104 for data processing during the interaction. The virtual robot 103 needs a screen as a display carrier. Therefore, the display area 1021 includes: a PC screen, a projector, a television, a multimedia display screen, holographic projection, VR and AR. The multi-modal interaction proposed by the present invention needs certain hardware as support; in general, a PC with a host computer can serve as the hardware apparatus 1022. In Fig. 1 a PC screen is chosen as the display area 1021.
What distinguishes it from a general hardware device 102 is that, in the data interaction system for a virtual robot provided by the present invention, a virtual robot operating system is installed on the hardware device 102. In addition, an embedded system is also installed on the hardware device 102. The virtual robot operating system is loaded in the storage unit of the hardware device 102 so that, when loaded and running, the hardware device as a whole has robot processing capability.
The process of interaction between the virtual robot 103 and the user 101 in Fig. 1 may be as follows:
First, before the interaction, the user 101 needs to wake up the virtual robot 103 so that it enters the interaction mode. The means of waking up the virtual robot 103 may be biological features such as voiceprint and iris, touch, buttons, remote control, or specific limb actions and gestures. In addition, a specific time at which the interaction mode is entered may also be set by the user in advance. A minimal dispatch of these wake-up triggers is sketched below.
It should be noted here that the avatar and the outfits of the virtual robot 103 are not limited to a single style. The virtual robot 103 can have different avatars and outfits. For example, the virtual robot 103 can be a fresh and sweet "big sister" avatar or a handsome and sunny "big brother" avatar. Each avatar of the virtual robot 103 can also correspond to a variety of different outfits, and the outfits can be classified by season or by occasion. These avatars and outfits may reside in the cloud server 104 or in the hardware device 102, and can be called up at any time when needed. Later on, operation personnel can periodically upload new avatars and outfits to the interaction platform, and the user can select a preferred avatar and outfit as needed.
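As a concrete illustration of the wake-up step described above, the following is a minimal Python sketch assuming a simple trigger-to-mode mapping; the trigger names and the `wake` function are hypothetical and only illustrate that several wake-up means all lead to the same interaction mode.

```python
from enum import Enum, auto


class WakeTrigger(Enum):
    """Wake-up means mentioned above; the enum itself is an assumption of this sketch."""
    VOICEPRINT = auto()
    IRIS = auto()
    TOUCH = auto()
    BUTTON = auto()
    REMOTE_CONTROL = auto()
    GESTURE = auto()
    SCHEDULED_TIME = auto()  # a preset time at which the interaction mode starts


def wake(trigger: WakeTrigger) -> str:
    """Any supported trigger puts the virtual robot into interaction mode."""
    print(f"virtual robot woken by {trigger.name.lower()}")
    return "interaction_mode"


# Usage: a touch on the screen and a recognized voiceprint both wake the robot.
assert wake(WakeTrigger.TOUCH) == "interaction_mode"
assert wake(WakeTrigger.VOICEPRINT) == "interaction_mode"
```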
After the user 101 wakes up the virtual robot 103 and the virtual robot 103 enters the interaction mode, the user 101 can start interacting with the virtual robot 103. When the user 101 outputs single-modal and/or multi-modal interaction instructions, the hardware device 102 collects these instructions and transmits them to the virtual robot 103, and the virtual robot 103 calls the capability unit in the robot operating system to parse them, so as to obtain information such as the semantic information and the emotional information contained in the interaction instructions. In general, the single-modal and/or multi-modal interaction instructions output by the user 101 include voice information, text information, image information and video information.
Parsing the interaction instructions requires calling the capability unit in the robot operating system. The capability unit possesses a series of capabilities and can parse the interaction instructions. In general, the capability unit first performs semantic understanding on the interaction instructions and analyzes the shallow or deep semantic meaning they contain. Then it analyzes the emotion of the interaction instructions. If the user 101 is an old user, that is, a user who has interacted with the virtual robot 103 before, the virtual robot 103 calls up the previous interaction information and analyzes the personality and behavioural habits of the user 101. If the user 101 is a new user, that is, a user who has never interacted with the virtual robot 103 before, the virtual robot 103 treats the user as a first-time user by default and records the interaction instructions of the user 101 so that they can be retrieved conveniently in later interactions. The virtual robot 103 then makes a corresponding reply, that is, the output content, according to the semantic understanding and sentiment analysis made before.
After the output content is generated, it needs to be output and presented to the user 101. At this time, the virtual robot 103 transmits the output content to the robot operating system, and the operating system cooperates with the hardware device 102 to output it. In general, there are many ways to output the content: because the virtual robot 103 can output multi-modal information, the output generally includes virtual robot voice output, virtual robot text output, virtual robot image output and virtual robot video output. These four output modes can be coordinated to output the same output content at the same time, or the content can be output individually or in combined form. The form of output can be changed according to the needs of the user 101. A minimal data structure for such multi-modal output content is sketched below.
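For illustration only, a minimal sketch of what such multi-modal output content could look like as a data structure is given below; the field names are assumptions chosen to mirror the four output modes named above (voice, text, image, video), not an interface defined by the patent.

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class MultiModalOutput:
    """Output content that may combine several modalities in one reply."""
    text: Optional[str] = None            # virtual robot text output
    speech_audio: Optional[bytes] = None  # virtual robot voice output (e.g. synthesized audio)
    image_urls: List[str] = field(default_factory=list)  # virtual robot image output
    video_urls: List[str] = field(default_factory=list)  # virtual robot video output

    def active_modes(self) -> List[str]:
        """Which modalities are actually present in this reply."""
        modes = []
        if self.text:
            modes.append("text")
        if self.speech_audio:
            modes.append("voice")
        if self.image_urls:
            modes.append("image")
        if self.video_urls:
            modes.append("video")
        return modes


# Usage: a reply that combines text and voice, as in the travel dialogue further below.
reply = MultiModalOutput(text="There are 4 trains from Beijing to Shenzhen.",
                         speech_audio=b"...synthesized audio bytes...")
print(reply.active_modes())  # ['text', 'voice']
```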
In short, the above interaction steps are: first, receive the single-modal and/or multi-modal interaction instructions sent by the user; then, parse the single-modal and/or multi-modal interaction instructions through the capability unit and generate multi-modal output content; finally, present the output content to the user through the output interface of the hardware device.
In the present invention, the hardware device 102 is in fact the display carrier of the virtual robot 103 and the display carrier of the interactive dialogue content. The cloud server 104 is the carrier of the virtual robot data. An example of an interactive dialogue between the virtual robot 103 and the user 101 is given below; the two can chat as follows.
Virtual robot 103 says: (smiling) Hello, is there anything you need help with? (greeting)
User 101 says: Yes, I want to travel. Can you help me look up information about travel options?
Virtual robot 103 says: OK (smiling). Where would you like to travel to?
User 101 says: I want to travel from Beijing to Shenzhen.
Virtual robot 103 says: I see (smiling). Are you planning to go by train or by plane? (puzzled)
User 101 says: Why don't you recommend one to me?
Virtual robot 103 says: OK (smiling). There are 4 trains from Beijing to Shenzhen; the fastest takes 8 hours 45 minutes and the slowest 29 hours, with fares from 200 to 3000 yuan. A plane only takes 5 hours, and tomorrow's air fare is 1124 yuan. I think flying is more suitable (smiling).
User 101 says: Alright, I will go by plane.
In the above dialogue, the virtual robot 103 can change its own mood both when responding and while waiting for the other party to respond. The content in brackets in the above question-and-answer is the response in expression made by the virtual robot 103. Besides the response in expression, the virtual robot 103 can also express its mood at that moment by lowering its voice or raising its intonation. Besides expression and intonation, the virtual robot 103 can also express its mood through limb actions such as nodding, waving, sitting down, standing, walking and running.
The virtual robot 103 can judge the emotional change of the interaction object and make corresponding changes in expression, intonation and limbs according to that emotional change. When the program lags or network problems occur, the virtual robot 103 can also use dancing or other performance forms to make up for the lack of smoothness in the interaction caused by the program lag or the network problem. In addition, for users who somewhat lack recognition capabilities, this kind of interactive output can also improve their dialogue and interaction capabilities. A minimal emotion-to-behaviour mapping of this kind is sketched below.
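The following minimal Python sketch illustrates the kind of mapping described above, from the detected mood of the interaction object to an expression, an intonation and a limb action, plus the lag-compensation fallback; the specific mood labels, behaviours and the `choose_behaviour` function are illustrative assumptions, not disclosed parameters.

```python
from typing import Dict, Tuple

# (expression, intonation, limb action) chosen for each detected user mood.
EMOTION_BEHAVIOUR: Dict[str, Tuple[str, str, str]] = {
    "happy":    ("smile",   "raised",  "nod"),
    "confused": ("puzzled", "neutral", "tilt_head"),
    "sad":      ("concern", "lowered", "sit_down"),
}

FALLBACK_BEHAVIOUR = ("smile", "neutral", "dance")  # used when the program lags


def choose_behaviour(user_mood: str, network_ok: bool = True) -> Tuple[str, str, str]:
    """Pick expression, intonation and limb action for the virtual robot."""
    if not network_ok:
        # Cover a program lag or network problem with a performance, as described above.
        return FALLBACK_BEHAVIOUR
    return EMOTION_BEHAVIOUR.get(user_mood, ("smile", "neutral", "wave"))


print(choose_behaviour("confused"))                  # ('puzzled', 'neutral', 'tilt_head')
print(choose_behaviour("happy", network_ok=False))   # the dance fallback
```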
Fig. 2 shows a device schematic diagram of the data interaction system for a virtual robot according to an embodiment of the present invention. As shown in Fig. 2, the system includes a mobile phone 102A, a tablet computer 102B, a computer 102C, a presentation device 102D, a robot operating system 201 and a cloud server 104.
The data interaction system for a virtual robot provided by the present invention includes a hardware device 102. The hardware device 102 has a data processing unit, a storage unit, an input interface and an output interface; the input interface is used to receive the single-modal and/or multi-modal interaction instructions sent by the user, and the output interface is used to present multi-modal output content to the user. In general, the hardware device 102 includes a mobile phone 102A, a tablet computer 102B, a computer 102C and a presentation device 102D. In the data interaction system for a virtual robot provided by the present invention, the hardware device 102 can be any of the devices listed above. In actual use, the user 101 can select a suitable hardware device 102 according to his own needs and interact with the virtual robot 103 through the hardware device 102.
The hardware device 102 generally contains an embedded system, and the embedded system can meet daily use needs. In the present invention, in order to complete the interaction with the virtual robot, the robot operating system 201 is also installed on the hardware device 102.
The robot operating system 201 is loaded in the storage unit so that, when loaded and running, the hardware device as a whole has robot processing capability. The robot operating system 201 is a supplement to the embedded system, so that the hardware device 102 not only possesses general processing capability but also robot processing capability. After the robot operating system 201 is installed, the hardware device 102 can complete the interaction with the user.
Besides the hardware device 102 and the robot operating system 201, the support of the cloud server 104 is also needed to complete the interaction between the user 101 and the virtual robot. The cloud server 104 communicates with the hardware device 102 through a specific communication means. When the user 101 interacts with the virtual robot 103, the robot operating system 201 transmits the interaction information to the cloud server 104; the cloud server 104 processes the data produced by the interaction and then transmits the processing result back to the robot operating system 201, which then presents it to the user 101 through the avatar of the virtual robot.
Fig. 3 shows a structural schematic diagram of the robot operating system of the data interaction system for a virtual robot according to an embodiment of the present invention. As shown in Fig. 3, the robot operating system 201 includes a functional unit 2011, a scene unit 2012 and a tool unit 2013. The functional unit 2011 includes a variety of functions such as voice output, speech recognition, action, expression and wake-up. The scene unit 2012 includes a main scene 2012A and sub-scenes 2012B.
The robot operating system 201 is loaded in the storage unit so that, when loaded and running, the hardware device as a whole has robot processing capability. Specifically, the robot operating system possesses a scene selection function and can offer the user 101 a chance to select a scene when the user 101 interacts with the virtual robot 103. The scene unit 2012 in the robot operating system 201 includes two scene modes. The first scene mode is the main scene 2012A. The main scene 2012A is generally the dialogue mode; that is, under the main scene 2012A the user 101 can open a dialogue and interact with the virtual robot 103. The second scene mode is the sub-scene 2012B. The sub-scene 2012B is generally a scene set by the user, who can select one scene from those provided by the developer. For example, the sub-scenes 2012B can include a home service scene, an office service scene, a daytime service scene and so on.
In addition, the robot operating system 201 also includes the functional unit 2011. After a scene is chosen, the user 101 can open a dialogue with the virtual robot 103. During the dialogue, the functional unit 2011 contributes to the smooth development of the dialogue. The functional unit 2011 includes functions such as voice output, speech recognition, action, expression and wake-up. The speech recognition function can recognize the speech transmitted by the user 101 in order to distinguish the meaning information contained in the speech; the meaning information here includes both shallow meaning information and deep meaning information.
In daily life, even a simple utterance can contain a lot of information: there is shallow-level information that can be obtained from the surface, and there is also deep-level information that is not easy to obtain. The speech recognition function can identify the information contained in language, and it can identify not only the shallow-level information but also the deep-level information.
The expression function can recognize the expression of a human face and obtain the mood information contained in it, and it can also output the facial expression and limb actions of the virtual robot 103 and present them to the user 101.
The tool unit 2013 contains some configuration modules. The virtual robot 103 can be an application in the embedded system of the hardware device, a function in an application, or an application or function in the robot operating system. In the present invention, the virtual robot 103 is contained in the tool unit 2013. In addition, the tool unit 2013 also contains some other tools, such as a weather tool, a clock tool and an encyclopedia tool.
During the interaction between the user 101 and the virtual robot 103, there is communication between the virtual robot 103 and the tool unit 2013, between the virtual robot 103 and the functional unit 2011, and between the virtual robot 103 and the scene unit 2012. There is also communication between the scene unit 2012 and the functional unit 2011. A minimal sketch of this robot operating system structure is given below.
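The structure of Fig. 3 can be summarized by the following minimal Python sketch, in which the virtual robot is registered as one tool inside the tool unit alongside the weather, clock and encyclopedia tools; all class names here are assumptions chosen for illustration, not identifiers disclosed by the patent.

```python
class FunctionalUnit:
    """Functional unit 2011: voice output, speech recognition, action, expression, wake-up."""
    functions = ["voice_output", "speech_recognition", "action", "expression", "wake_up"]


class SceneUnit:
    """Scene unit 2012: one main (dialogue) scene plus user-selectable sub-scenes."""
    main_scene = "dialogue"
    sub_scenes = ["home_service", "office_service", "daytime_service"]


class ToolUnit:
    """Tool unit 2013: configuration modules, with the virtual robot itself as one tool."""

    def __init__(self):
        self.tools = {}

    def register(self, name, tool):
        self.tools[name] = tool


class VirtualRobotTool:
    """The virtual robot, treated as a tool of the robot operating system."""

    def __init__(self, functional_unit: FunctionalUnit):
        self.functional_unit = functional_unit


class RobotOperatingSystem:
    """Robot operating system 201, as depicted in Fig. 3."""

    def __init__(self):
        self.functional_unit = FunctionalUnit()
        self.scene_unit = SceneUnit()
        self.tool_unit = ToolUnit()
        # The virtual robot is registered as a tool, next to the other built-in tools.
        self.tool_unit.register("virtual_robot", VirtualRobotTool(self.functional_unit))
        for name in ("weather", "clock", "encyclopedia"):
            self.tool_unit.register(name, object())


ros = RobotOperatingSystem()
print(sorted(ros.tool_unit.tools))  # ['clock', 'encyclopedia', 'virtual_robot', 'weather']
```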
Fig. 4 shows a structural block diagram of the data interaction system for a virtual robot according to an embodiment of the present invention. As shown in Fig. 4, the system involves a user 101, a hardware device 102 and a cloud server 104. The user 101 includes a physical person, a virtual robot and a physical robot. The hardware device 102 includes a receiving device 401 and a display area 1021.
In the interaction between the user 101 and the virtual robot 103, the user 101 first sends an interaction instruction. This interaction instruction can be single-modal or multi-modal, and can include audio, text, image and video. In order to receive the interaction instructions sent by the user 101, the hardware device 102 needs a corresponding receiving device. In general, the receiving device includes a keyboard, a microphone and a camera. The keyboard includes a physical keyboard and a soft keyboard; the microphone can be any sound-receiving medium capable of receiving the speech data sent by the user 101; the camera can capture the image and video information of the user 101.
Of course, the reply data sent by the user 101 can include reply data in forms other than text, audio, image and video, and the hardware device 102 may be equipped with a receiving device matching the reply data of those other forms; the present invention is not limited in this respect.
In addition, examples of user input devices also include a keyboard, a cursor control device (a mouse), a microphone for voice operation, a scanner, touch functionality (for example a capacitive sensor to detect physical touch), a camera (using visible or invisible wavelengths to detect actions that do not involve touch), and so on.
Fig. 5 shows a module structure diagram of the data interaction system for a virtual robot according to an embodiment of the present invention. As shown in Fig. 5, the system includes a scene selection module 501, a receiving module 502, a processing module 503 and an output module 504. The scene selection module 501 can select the main scene and the sub-scenes. The receiving module 502 includes a text collection unit 5021, an audio collection unit 5022, an image collection unit 5023 and a video collection unit 5024.
At the beginning of the interaction, the user 101 needs to select a suitable scene and then interact with the virtual robot 103 under this scene. The main scene is the dialogue scene, and the sub-scene is the user-defined scene. After the scene selection ends, the receiving module 502 receives the interaction instructions sent by the user 101 and passes them to the processing module 503, which processes the interaction instructions and produces the output content. The output content is finally sent to the output module 504 and output through the avatar of the virtual robot.
Fig. 6 shows a flowchart of the data interaction method for a virtual robot according to an embodiment of the present invention. To accompany the use of the data interaction system for a virtual robot proposed by the present invention, the flowchart of the interaction method is introduced here.
First, in step S601, the single-modal and/or multi-modal interaction instructions sent by the user are received. When this step takes place, the user 101 has already woken up the virtual robot 103 and caused the virtual robot 103 to enter the interaction mode. There are many ways to wake up the virtual robot 103, for example voice wake-up, touch wake-up and mechanical wake-up. After being woken up, the virtual robot 103 lets the user select the interaction scene; after the user 101 selects the interaction scene, the virtual robot 103 formally enters the interaction mode. At this point the virtual robot 103 starts to receive the single-modal and/or multi-modal interaction instructions sent by the user. Receiving the interaction instructions requires a receiving device, which generally includes a microphone, a keyboard and a camera.
Then, in step S602, the single-modal and/or multi-modal interaction instructions are parsed through the capability unit and multi-modal output content is generated. In this step, the virtual robot 103 calls the robot capabilities in the capability unit, parses the received interaction instructions, and generates the output content after parsing.
Finally, in step S603, the output content is presented to the user through the output interface of the hardware device. After generating the output content, the virtual robot 103 transmits it through the robot operating system 201 to the output interface of the hardware device, and the output interface outputs it. When outputting the output content, the virtual robot 103 can coordinate expressions, actions, mood and so on to increase the visibility of the output content. A minimal sketch of this three-step loop follows.
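A minimal Python sketch of steps S601 to S603 as a single interaction step is given below; `capability_unit_parse` and `output_interface_show` stand in for the capability unit and the hardware output interface and are assumptions made only for illustration.

```python
def capability_unit_parse(instruction: dict) -> dict:
    """Stand-in for step S602: parse the instruction and generate output content."""
    text = instruction.get("text", "")
    return {"text": f"You said: {text}", "expression": "smile"}


def output_interface_show(output_content: dict) -> None:
    """Stand-in for step S603: present the content through the hardware output interface."""
    print(output_content["text"], f"({output_content['expression']})")


def interaction_step(instruction: dict) -> None:
    # S601: a single-modal or multi-modal instruction has been received from the user.
    # S602: parse it through the capability unit and generate multi-modal output content.
    output_content = capability_unit_parse(instruction)
    # S603: show the output content to the user through the output interface.
    output_interface_show(output_content)


interaction_step({"text": "I want to travel from Beijing to Shenzhen."})
```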
Fig. 7 shows a more detailed flowchart of the data interaction method for a virtual robot according to an embodiment of the present invention. First, in step S701, the user selects the interaction scene. In this step, the robot operating system provides the user with an interface listing the interaction scenes that can be selected; the user can freely select one of them and then interact with the virtual robot 103 under this scene.
Then, in step S702, the single-modal and/or multi-modal interaction instructions sent by the user are received. After entering the corresponding interaction scene, the user 101 can express his interaction demand, that is, the interaction instruction. At this time, the receiving device of the hardware device is needed to receive the user's interaction demand, which is then transmitted to the virtual robot 103 for further analysis.
Then, in step S703, the single-modal and/or multi-modal interaction instructions are parsed through the semantic understanding capability. After receiving the interaction instructions sent by the user 101, the virtual robot 103 calls the semantic understanding capability to analyze and understand the semantic information contained in the interaction instructions, so that a more suitable reply can be made when the output content is generated.
Then, in step S704, the single-modal and/or multi-modal interaction instructions are parsed through the emotion capability. After receiving the interaction instructions sent by the user 101, the virtual robot 103 calls the emotion capability to analyze and understand the emotional information contained in the interaction instructions, so that a more suitable reply can be made when the output content is generated.
Then, in step S705, the single-modal and/or multi-modal interaction instructions are parsed through the cognitive capability. After receiving the interaction instructions sent by the user 101, the virtual robot 103 calls the cognitive capability to analyze and understand the information contained in the interaction instructions, so that a more suitable reply can be made when the output content is generated.
After the semantic understanding, sentiment analysis and cognitive parsing of the interaction instructions, in step S706 the virtual robot 103 generates multi-modal output content according to the results parsed by the above capabilities.
Finally, in step S707, the output content is presented to the user through the output interface of the hardware device. The virtual robot 103 transmits the generated output content to the output interface of the hardware device, and the output interface outputs it. A minimal sketch of this parsing pipeline (steps S703 to S707) follows.
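The sequence S703 to S707 can be sketched as the following minimal Python pipeline; the three parser functions are placeholders for the semantic understanding, emotion and cognitive capabilities, and their heuristics are assumptions made only for illustration.

```python
from typing import Dict


def semantic_understanding(instruction: str) -> Dict:
    """S703: extract shallow and deep meaning (placeholder heuristic)."""
    return {"intent": "travel_query" if "travel" in instruction else "chat"}


def emotion_analysis(instruction: str) -> Dict:
    """S704: estimate the emotional information in the instruction."""
    return {"mood": "excited" if "!" in instruction else "neutral"}


def cognition_analysis(instruction: str) -> Dict:
    """S705: combine the instruction with other knowledge about the user."""
    return {"topic": "trip planning"}


def generate_output(instruction: str) -> Dict:
    """S706: merge the parsing results into multi-modal output content."""
    parsed = {}
    parsed.update(semantic_understanding(instruction))
    parsed.update(emotion_analysis(instruction))
    parsed.update(cognition_analysis(instruction))
    return {"text": f"Let me help with your {parsed['topic']}.",
            "expression": "smile" if parsed["mood"] != "sad" else "concern"}


# S707: the generated content would then be handed to the hardware output interface.
print(generate_output("I want to travel from Beijing to Shenzhen!"))
```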
Fig. 8 shows another flowchart of the data interaction method for a virtual robot according to an embodiment of the present invention. As shown in the figure, in step S801 the hardware device 102 sends the request content to the cloud server 104. Afterwards, the hardware device 102 remains in a state of waiting for the cloud server 104 to complete the cloud server's part of the task. While waiting, the hardware device 102 times how long the returned data takes.
If the returned data is not obtained for a long time, for example longer than a predetermined time length of 10 s, the hardware device 102 chooses to make a local reply and generates local common reply data. It then outputs the animation of the virtual robot avatar coordinated with the local common reply, and calls the voice playing device to play the voice. A minimal sketch of this timeout fallback follows.
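A minimal Python sketch of this wait-and-fall-back behaviour is given below, assuming a simple blocking request helper and the 10-second budget mentioned above; the `request_cloud_reply` function and the canned local replies are assumptions, not an interface disclosed in the patent.

```python
import concurrent.futures
import random
import time

TIMEOUT_SECONDS = 10  # the predetermined time length mentioned above

LOCAL_COMMON_REPLIES = [
    "Sorry, I am a little slow right now. Could you say that again?",
    "Let me think about that for a moment.",
]


def request_cloud_reply(request_content: str) -> str:
    """Placeholder for the round trip to the cloud server 104."""
    time.sleep(random.uniform(0.1, 15))  # simulate variable network latency
    return f"cloud reply to: {request_content}"


def get_reply(request_content: str) -> str:
    """Wait for the cloud server; after the timeout, fall back to a local common reply."""
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
    future = pool.submit(request_cloud_reply, request_content)
    try:
        return future.result(timeout=TIMEOUT_SECONDS)
    except concurrent.futures.TimeoutError:
        # The hardware device 102 generates local common reply data and plays it
        # together with the virtual robot animation and voice output.
        return random.choice(LOCAL_COMMON_REPLIES)
    finally:
        # Do not block on the slow cloud call; the worker may finish in the background.
        pool.shutdown(wait=False)


print(get_reply("How do I get from Beijing to Shenzhen?"))
```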
Fig. 9 shows, in further detail, a flowchart of the communication among the user, the hardware device and the cloud server according to an embodiment of the present invention.
As shown in Fig. 9, before the virtual robot 103 interacts with the user 101 it must enter the interaction mode. There are many ways to enter the interaction mode; for example, the hardware device 102 has hardware visual recognition capability or tactile perception capability, such as being equipped with a camera and having a touch screen. When the hardware device 102 receives an entry instruction through such hardware, it displays the virtual avatar in the designated display area. The animated avatar of the virtual robot 103 can be constructed as a high-fidelity 3D animated virtual robot avatar.
Before the interaction, the user 101 needs to clarify his own interaction demand, that is, to select the needed interaction scene; at this time the user 101 sends the result of the interaction scene selection to the hardware device. The interaction scenes are divided into the main scene and the sub-scenes. The main scene is the interaction scene, and the sub-scene is the user-defined scene.
After the scene selection ends, the interaction formally starts. The user 101 can now output interaction instructions, which contain the information the user 101 needs to express, and the hardware device 102 receives these interaction instructions through the receiving device.
After receiving the interaction instructions, the virtual robot 103 in the hardware device 102 sends a call instruction to the cloud server 104, requesting the cloud server 104 to process the data produced in the interaction. The virtual robot 103 can also call local capabilities to parse the interaction instructions.
Through the joint parsing and processing of the cloud server 104 and the virtual robot 103, the virtual robot 103 makes a reply to the interaction instructions sent by the user and generates the output content. Finally, the virtual robot 103 outputs the output content through the hardware device and presents it to the user 101.
It should be understood that the disclosed embodiments of the present invention are not limited to the specific structures, processing steps or materials disclosed herein, but extend to equivalents of these features as understood by those of ordinary skill in the relevant art. It should also be understood that the terms used herein are only for the purpose of describing specific embodiments and are not intended to be limiting.
Reference in the specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Therefore, the phrases "one embodiment" or "an embodiment" appearing in various places throughout the specification do not necessarily all refer to the same embodiment.
Although the embodiments of the present invention are disclosed as above, the described content is only an implementation adopted to facilitate understanding of the present invention and is not intended to limit the present invention. Any person skilled in the art to which the present invention belongs may make modifications and changes in the form and details of implementation without departing from the spirit and scope disclosed by the present invention; however, the scope of patent protection of the present invention shall still be subject to the scope defined by the appended claims.

Claims (7)

1. A data interaction system for a virtual robot, characterized in that the data interaction system includes:
a hardware device, which has a data processing unit, a storage unit, an input interface and an output interface, the input interface being configured to receive single-modal and/or multi-modal interaction instructions sent by a user, and the output interface being configured to present multi-modal output content to the user;
a robot operating system, which is loaded in the storage unit so that, when loaded and running, the hardware device as a whole has robot processing capability;
a virtual robot tool, which has a calling relationship with the capability unit of the robot operating system and uses the capability unit to parse the single-modal and/or multi-modal interaction instructions and generate multi-modal output content, which is then sent to the output interface of the hardware device for presentation to the user.
2. The data interaction system for a virtual robot as claimed in claim 1, characterized in that the robot operating system includes:
a capability unit, which includes a speech recognition module, an action module, an expression module and a wake-up module;
a tool unit, which is a configuration module used to support the robot operating system in carrying out data processing;
a scene unit, which includes a conversation scene and a user-defined scene;
wherein the virtual robot is a tool of the robot operating system.
3. The data interaction system for a virtual robot as claimed in claim 1, characterized in that the hardware device may also be provided with a hardware embedded operating system that is compatible with or can dock with the robot operating system.
4. The data interaction system for a virtual robot as claimed in claim 2, characterized in that the system includes a three-dimensional dynamic model unit, which outputs three-dimensional dynamic model data according to a determined UI avatar design.
5. The data interaction system for a virtual robot as claimed in claim 4, characterized in that the system includes:
an expression output unit, which is connected with the robot operating system to receive the expression output data sent by the robot operating system and pass it to the three-dimensional dynamic model unit for expression output of the UI avatar;
an action output unit, which is connected with the robot operating system to receive the action output data sent by the robot operating system and pass it to the three-dimensional dynamic model unit for action output of the UI avatar;
a mouth-shape output unit, which is connected with the robot operating system to receive the mouth-shape output data sent by the robot operating system and pass it to the three-dimensional dynamic model unit for mouth-shape output of the UI avatar.
6. The data interaction system for a virtual robot as claimed in claim 2, characterized in that the user-defined scene is used to interact with the user in a scene selected by the user.
7. A data interaction method for a virtual robot, characterized in that the method includes:
receiving single-modal and/or multi-modal interaction instructions sent by a user;
parsing the single-modal and/or multi-modal interaction instructions through a capability unit and generating multi-modal output content;
presenting the output content to the user through the output interface of a hardware device.
CN201710828403.9A 2017-09-14 2017-09-14 Data interaction method and system for a virtual robot Pending CN107678617A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710828403.9A CN107678617A (en) 2017-09-14 2017-09-14 The data interactive method and system of Virtual robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710828403.9A CN107678617A (en) 2017-09-14 2017-09-14 The data interactive method and system of Virtual robot

Publications (1)

Publication Number Publication Date
CN107678617A true CN107678617A (en) 2018-02-09

Family

ID=61136790

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710828403.9A Pending CN107678617A (en) 2017-09-14 2017-09-14 The data interactive method and system of Virtual robot

Country Status (1)

Country Link
CN (1) CN107678617A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110795973A (en) * 2018-08-03 2020-02-14 北京大学 Multi-mode fusion action recognition method and device and computer readable storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101187990A (en) * 2007-12-14 2008-05-28 华南理工大学 A session robotic system
CN106471444A (en) * 2016-07-07 2017-03-01 深圳狗尾草智能科技有限公司 A kind of exchange method of virtual 3D robot, system and robot
CN106663127A (en) * 2016-07-07 2017-05-10 深圳狗尾草智能科技有限公司 An interaction method and system for virtual robots and a robot
CN106863319A (en) * 2017-01-17 2017-06-20 北京光年无限科技有限公司 A kind of robot awakening method and device
CN106985137A (en) * 2017-03-09 2017-07-28 北京光年无限科技有限公司 Multi-modal exchange method and system for intelligent robot

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101187990A (en) * 2007-12-14 2008-05-28 华南理工大学 A session robotic system
CN106471444A (en) * 2016-07-07 2017-03-01 深圳狗尾草智能科技有限公司 A kind of exchange method of virtual 3D robot, system and robot
CN106663127A (en) * 2016-07-07 2017-05-10 深圳狗尾草智能科技有限公司 An interaction method and system for virtual robots and a robot
WO2018006370A1 (en) * 2016-07-07 2018-01-11 深圳狗尾草智能科技有限公司 Interaction method and system for virtual 3d robot, and robot
CN106863319A (en) * 2017-01-17 2017-06-20 北京光年无限科技有限公司 A kind of robot awakening method and device
CN106985137A (en) * 2017-03-09 2017-07-28 北京光年无限科技有限公司 Multi-modal exchange method and system for intelligent robot

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110795973A (en) * 2018-08-03 2020-02-14 北京大学 Multi-mode fusion action recognition method and device and computer readable storage medium

Similar Documents

Publication Publication Date Title
CN107340865A (en) Multi-modal virtual robot exchange method and system
CN107632706B (en) Application data processing method and system of multi-modal virtual human
CN107894833B (en) Multi-modal interaction processing method and system based on virtual human
CN107340859A (en) The multi-modal exchange method and system of multi-modal virtual robot
CN107294837A (en) Engaged in the dialogue interactive method and system using virtual robot
US11271765B2 (en) Device and method for adaptively providing meeting
CN105917404B (en) For realizing the method, apparatus and system of personal digital assistant
CN107704169B (en) Virtual human state management method and system
CN107808191A (en) The output intent and system of the multi-modal interaction of visual human
CN107329990A (en) A kind of mood output intent and dialogue interactive system for virtual robot
CN108804536B (en) Man-machine conversation and strategy generation method, equipment, system and storage medium
CN109447234A (en) A kind of model training method, synthesis are spoken the method and relevant apparatus of expression
CN108000526A (en) Dialogue exchange method and system for intelligent robot
CN105244042B (en) A kind of speech emotional interactive device and method based on finite-state automata
CN107480766A (en) The method and system of the content generation of multi-modal virtual robot
CN102868830A (en) Switching control method and device of mobile terminal themes
CN107977928A (en) Expression generation method, apparatus, terminal and storage medium
CN109324688A (en) Exchange method and system based on visual human's behavioral standard
CN107784355A (en) The multi-modal interaction data processing method of visual human and system
CN110309254A (en) Intelligent robot and man-machine interaction method
CN102801652A (en) Method, client and system for adding contact persons through expression data
CN109343695A (en) Exchange method and system based on visual human's behavioral standard
CN108416420A (en) Limbs exchange method based on visual human and system
CN106572131B (en) The method and system that media data is shared in Internet of Things
CN106649712A (en) Method and device for inputting expression information

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20180209

RJ01 Rejection of invention patent application after publication