CN106903695A - Projection interaction method and system applied to an intelligent robot - Google Patents
Projection interaction method and system applied to an intelligent robot
- Publication number
- CN106903695A (application CN201710027806.3A)
- Authority
- CN
- China
- Prior art keywords
- projection
- data
- output
- user
- modal
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J11/00—Manipulators not otherwise provided for
- B25J11/0005—Manipulators having means for high-level communication with users, e.g. speech generator, face recognition means
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/12—Picture reproducers
- H04N9/31—Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
- H04N9/3141—Constructional details thereof
Landscapes
- Engineering & Computer Science (AREA)
- Human Computer Interaction (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- General Health & Medical Sciences (AREA)
- Robotics (AREA)
- Mechanical Engineering (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Health & Medical Sciences (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Audiology, Speech & Language Pathology (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The invention discloses a projection interaction method and system applied to an intelligent robot. The method includes: during projection output, determining whether a set event is triggered and, if so, pausing the projection output; receiving and parsing multi-modal input data from a user; and outputting corresponding projection data according to the parsing result of the multi-modal input data. The invention enables the robot to interact with the user in a multi-modal way during projection, improving the intelligence and human-likeness of the robot and thereby better achieving teaching and game effects.
Description
Technical field
The present invention relates to the field of intelligent robots, and more particularly to a projection interaction method and system applied to an intelligent robot.
Background art
With the continuing development of science and technology and the introduction of information technology, computer technology and artificial intelligence technology, robot research has gradually moved beyond the industrial field and extended to fields such as medical treatment, health care, the household, entertainment and the service industry. Accordingly, people's requirements for robots have risen from simple repetitive mechanical actions to intelligent robots capable of anthropomorphic question answering, autonomy and interaction with other robots. Human-computer interaction has thus become a key factor determining the development of intelligent robots. Therefore, improving the interactive capability of intelligent robots and enhancing their intelligence has become an important problem that urgently needs to be solved.
Summary of the invention
One of the technical problems to be solved by the invention is to provide a solution that enables a robot to interact with a user in a multi-modal way during projection, improving the intelligence and human-likeness of the robot.
To solve the above technical problem, an embodiment of the application first provides a projection interaction method applied to an intelligent robot. The method includes: during projection output, determining whether a set event is triggered and, if a set event is triggered, pausing the projection output; receiving and parsing multi-modal input data from the user; and outputting corresponding projection data according to the parsing result of the multi-modal input data.
Preferably, the set event includes detecting multi-modal input data output by the user, or the projection output time reaching a specific interaction time point. The multi-modal input data from the user includes multi-modal input data actively output by the user and multi-modal input data responding to multi-modal output data, where the multi-modal output data is the data that the robot sends to the user at the specific interaction time point.
Preferably, the projection interaction method is implemented by means of a projection application program. A user intention is obtained according to the parsing result, and it is judged whether predefined projection data corresponding to the user intention exists in the projection application program. If it exists, the projection data is output by projection; otherwise, the user intention is sent to a cloud server and the feedback data produced by the cloud server's analysis is received.
Preferably, the method further includes: judging, through the projection application program, whether to perform projection output of the feedback data.
Preferably, other multi-modal data corresponding to the projection data is also output while the projection data is being projected.
An embodiment of the application further provides a projection interaction system applied to an intelligent robot. The system includes: a projection output control module, which determines during projection output whether a set event is triggered and, if so, pauses the projection output; a user data receiving module, which receives and parses multi-modal input data from the user; and a projection data output module, which outputs corresponding projection data according to the parsing result of the multi-modal input data.
Preferably, the set event includes detecting multi-modal input data output by the user, or the projection output time reaching a specific interaction time point. The multi-modal input data from the user includes multi-modal input data actively output by the user and multi-modal input data responding to multi-modal output data, where the multi-modal output data is the data that the robot sends to the user at the specific interaction time point.
Preferably, the projection data output module further obtains a user intention according to the parsing result and judges whether predefined projection data corresponding to the user intention exists. If it exists, the projection data is output by projection; otherwise, the user intention is sent to a cloud server and the feedback data produced by the cloud server's analysis is received.
Preferably, the projection data output module further judges whether to perform projection output of the feedback data.
Preferably, the projection data output module further outputs other multi-modal data corresponding to the projection data while the projection data is being projected.
Compared with the prior art, one or more embodiments of the above scheme can have the following advantages or beneficial effects:
The embodiments provide a projection interaction method applied to an intelligent robot. During projection output, the projection is paused at the moment a set event is triggered; multi-modal input data from the user is then received and parsed, and corresponding projection data is finally output according to the parsing result. This enables the robot to interact with the user in a multi-modal way during projection, improving the intelligence and human-likeness of the robot and thereby better achieving teaching and game effects.
Other features and advantages of the invention will be set forth in the following description, and will in part become apparent from the description or be understood by implementing the technical scheme of the invention. The objects and other advantages of the invention can be realized and obtained through the structures and/or flows specifically pointed out in the specification, claims and accompanying drawings.
Brief description of the drawings
The accompanying drawings provide a further understanding of the technical scheme of the application or of the prior art, and constitute a part of the specification. The drawings illustrating the embodiments of the application serve, together with the embodiments, to explain the technical scheme of the application, but do not constitute a limitation of it.
Fig. 1 is a schematic flow chart of example one of the projection interaction method applied to an intelligent robot according to the invention.
Fig. 2 is a schematic flow chart of example two of the projection interaction method applied to an intelligent robot according to the invention.
Fig. 3 is a schematic flow chart of example three of the projection interaction method applied to an intelligent robot according to the invention.
Fig. 4 is a structural block diagram of example four, the projection interaction system 400 applied to an intelligent robot according to the invention.
Detailed description of the embodiments
Embodiments of the present invention are described in detail below with reference to the drawings and examples, so that the implementation process by which the invention applies technical means to solve technical problems and achieve the relevant technical effects can be fully understood and practiced accordingly. The features of the embodiments of this application can be combined with each other provided they do not conflict, and the resulting technical schemes all fall within the protection scope of the present invention.
In addition, the steps shown in the flow charts of the drawings may be executed in a computer system such as one running a set of computer-executable instructions. Moreover, although a logical order is shown in the flow charts, in some cases the steps shown or described may be performed in an order different from that given here.
In the prior art, most existing projection devices simply project image data or video data onto a projection screen (for example, a floor or a wall); in some cases, accompanying sound or background music is also played while the projection is output, so that the user watches it passively. However, when a projection device is used to carry out educational activities in particular, the viewers are usually children with relatively little knowledge, and various questions often arise during study. Watching a projection for a long time without guidance or interaction therefore easily causes the user to become bored, so that the educational activity has little effect. The embodiment of the present invention therefore provides a solution that can improve the intelligence and human-likeness of an intelligent robot, enabling the robot to interact with the user in a multi-modal way during projection.
The projection interaction method applied to an intelligent robot according to the embodiment of the invention enables the robot to interact with the user while carrying out projection output, improving the intelligence and human-likeness of the robot. Specifically, for example, when the robot teaches through projection and a specific event is triggered, such as the projection output time reaching a specific interaction time point, the robot can, like a teacher, actively pause the projection output in time and ask the child whether the content just presented has been understood. The robot obtains the information of the child shaking or nodding his or her head through its visual capability, and can also obtain the yes/no voice information uttered by the child, so as to decide whether to replay the previous projection or to continue playing. In addition, during projection output the robot also monitors the user. If the set event of the user actively outputting multi-modal input data is detected, for example when, while an ancient poem is being recited, the child suddenly interrupts to ask "what does 'bent neck' mean", the robot pauses the projection output because the user's speech data has been detected, parses the user's multi-modal input data, and outputs corresponding projection data according to the parsing result.
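For purposes of illustration only, the flow just described (project, pause on a set event, collect the user's multi-modal input, respond) could be sketched as a simple control loop. All names here (`interaction_loop`, `get_user_input`, the "not understood" response) are illustrative assumptions, not terms of the patent:

```python
def interaction_loop(segments, interaction_points, get_user_input, respond):
    """Play projection segments in order; at each specific interaction time
    point, pause, collect the user's multi-modal input, and respond to it."""
    played = []
    for idx, segment in enumerate(segments):
        played.append(segment)                 # stands in for projecting the segment
        if idx in interaction_points:          # set event: timer reached
            answer = get_user_input()          # pause projection, collect input
            respond(answer)
            if answer == "not understood":
                played.append(segment)         # replay the previous content
    return played

log = []
result = interaction_loop(
    segments=["verse 1", "verse 2"],
    interaction_points={0},
    get_user_input=lambda: "not understood",
    respond=log.append,
)
```

In this sketch the decision to replay or continue is driven entirely by the parsed user response, mirroring the shake/nod and yes/no handling described above.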
In a specific implementation, in order to reduce development cost, manpower and material resources, the projection interaction method can be implemented by means of an application program (APP), hereinafter referred to as the "projection application program".
It should be noted that, when outputting the projection data corresponding to the parsing result, it is first judged whether predefined projection data corresponding to the user intention in the parsing result exists; if it exists, the projection data is output. If no predefined projection data exists, the user intention is sent to the cloud server and the feedback data produced by the cloud server's analysis is received. The cloud server can be regarded as the cloud brain of the robot: tasks or information that the robot cannot process locally are sent to the cloud server for processing, which helps the robot accomplish complex tasks and relieves the processing burden of the robot's local CPU. After the robot obtains the feedback data from the cloud, it judges whether to perform projection output of the feedback data. The feedback data from the cloud can be projection data to be projected, or a combination of projection data and speech data matching the projection data; of course, the feedback data can also be other forms of result data.
In addition, while carrying out projection output, the robot can also output other multi-modal data corresponding to the projection data, for example a mechanical control instruction that makes the robot perform a certain limb action, or an expression control instruction that makes the robot present a facial expression. In some practical application scenarios the robot can thereby imitate human expressions and actions, improving the user's engagement with and attachment to the robot.
Embodiment one
Fig. 1 is a schematic flow chart of example one of the projection interaction method applied to an intelligent robot according to the invention. The method of this embodiment mainly includes the following steps.
In step S110, the intelligent robot carries out projection output according to the set projection data.
The intelligent robot of this embodiment has a projection function. Before projecting, it can obtain the data to be projected from an external device, or it can select the data to be projected from its own memory according to the user's control instruction. A large amount of projection data is stored in the robot's memory; this data can be image data or video data. When the set data to be projected (for example, image data or video data) is projected, the data is first decoded and converted into corresponding projection information, and the decoded projection information is then projected.
In step S120, during the projection output process, it is determined whether a set event is triggered; if it is judged that a set event is triggered, the projection output is paused. The set event includes detecting multi-modal input data output by the user, or reaching a specific interaction time point.
In this step, the robot carries out projection output on the one hand and, on the other hand, can monitor various request events during projection, such as timer request events and semaphore request events. In order for the robot to interact with the user during projection, this embodiment defines the operation of pausing the projection output when a set event is triggered. When the embodiment of the invention is implemented by an application program, the application program includes a message loop that repeatedly monitors the message queue and checks whether a set-event message has arrived; these set-event messages include a timer being reached, the detection of multi-modal input data sent by the user, and so on. In fact, these events are first received by the robot operating system; after receiving them, the operating system generates messages describing the events and sends these messages to the application program. On receiving the messages, the application program queries its message mapping table, calls the corresponding message response function, and completes the operation of pausing the projection output.
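The message loop just described could, under stated assumptions, be sketched as follows. The message shapes, handler names and the `pause` response function are all illustrative stand-ins, not part of any actual robot operating system:

```python
import queue

def message_loop(msg_queue, handlers):
    """Drain the message queue and dispatch set-event messages (timer
    reached, user input detected) via a message mapping table."""
    handled = []
    while True:
        try:
            msg = msg_queue.get_nowait()
        except queue.Empty:
            break                             # no more pending event messages
        handler = handlers.get(msg["type"])   # query the message mapping table
        if handler:
            handled.append(handler(msg))      # call the message response function
    return handled

q = queue.Queue()
q.put({"type": "timer_reached"})
q.put({"type": "user_speech", "text": "what does bent neck mean"})
state = {"projecting": True}

def pause(msg):
    state["projecting"] = False               # pause the projection output
    return msg["type"]

out = message_loop(q, {"timer_reached": pause, "user_speech": pause})
```

Mapping both event types to the same pause handler reflects the fact that, in the method, either trigger leads to the same pause operation before the user's input is parsed.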
It should be noted that the timer in the set event being reached means reaching the specific interaction time point. The specific interaction time point can be set to coincide with the time point after a set piece of content in the projection finishes playing; in a teaching scenario, the robot then pauses the projection when this time point is reached and sends multi-modal output data to the user to ask about the user's understanding of the content just played, so that the teaching effect can be better achieved. On the other hand, during projection playback, the robot's image acquisition device and sound acquisition device collect the user's information in real time or at set intervals. If multi-modal input data sent by the user is collected, the set event is considered triggered and projection playback is paused. For example, when the robot explains an ancient poem to a child by means of projection output and the child suddenly interrupts to ask "what does 'bent neck' mean", the robot determines that the set event has been triggered after the sound acquisition device collects this information, pauses the projection, and records the content currently being played.
In step S130, multi-modal input data from the user is received and parsed. The multi-modal input data from the user includes multi-modal input data actively output by the user and multi-modal input data responding to multi-modal output data, where the multi-modal output data is the data that the robot sends to the user at the specific interaction time point.
It should be noted that if the robot pauses the projection after the set time point is reached, it can actively send multi-modal output data to the user. This multi-modal output data is mainly speech data, mostly questions inquiring about the user's understanding of the previously shown projection content, for example "Did you understand what was just said?". The user then responds to this multi-modal output data and produces multi-modal input data, for example uttering voice information such as "understood", "not understood" or "XXX is still unclear", or making an action such as shaking or nodding the head.
After the multi-modal input data from the user is received, the data is parsed. The parsing result can include information such as the data characteristics of the multi-modal input data and/or the task information that the multi-modal input data conveys. For different multi-modal input data, the complexity and process of parsing are entirely different. If the obtained information is sound information, the robot submits the multi-modal data to a local ASR engine, a cloud-server ASR engine, or a mixed local-and-cloud ASR and VPR (Voiceprint Recognition) engine. These engines use ASR technology to convert the voice data into text information. Preprocessing such as denoising is first performed on the multi-modal input data, the preprocessed voice information is then comprehensively analyzed by speech recognition, and text information corresponding to the voice information is generated. Further, during recognition, the features of the input voice signal can be compared with pre-stored sound templates according to the speech recognition model, and a series of optimal templates matching the input voice can be found according to a certain search and matching strategy; the recognition result can then be given by looking up the definitions of these templates. If the obtained information is image data, the human body posture is obtained by parsing the two-dimensional image with motion analysis technology.
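The modality-dependent parsing above could be illustrated by a small dispatcher. The two analysis functions are trivial stand-ins for a real ASR engine and a real motion-analysis step, and every name here is an assumption for illustration:

```python
def fake_asr(audio):
    """Stand-in for a local or cloud ASR engine returning text information."""
    return audio.get("transcript", "")

def fake_pose(frame):
    """Stand-in for motion analysis on a 2-D image yielding a posture."""
    return "nod" if frame.get("head_dy", 0) > 0 else "shake"

def parse_multimodal(data):
    """Dispatch multi-modal input to the matching analysis path, as the
    parsing step above does: speech -> ASR, image -> posture analysis."""
    if data["modality"] == "speech":
        return {"kind": "text", "value": fake_asr(data["payload"])}
    if data["modality"] == "image":
        return {"kind": "posture", "value": fake_pose(data["payload"])}
    raise ValueError("unsupported modality")
```

The point of the sketch is only the dispatch structure: the complexity lives inside the per-modality engines, as the description notes.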
In step S140, corresponding projection data is output according to the parsing result of the multi-modal input data.
After parsing obtains the result of the user's response to the multi-modal output data sent by the robot, the robot queries the mapping list of parsing results and projection data and outputs the corresponding projection data. For example, if the user's parsing result is "not understood", the robot outputs the content before the specific interaction time point again, or outputs more detailed explanatory content to the user. If the user's parsing result is "understood", the robot continues to play the set projection.
It is easy to understand that after the projection data corresponding to the parsing result has been output, since the previously set projection data has not yet been fully output, the robot continues to play the projection from the content recorded when the projection was paused, completing this projection output.
In summary, the embodiment of the invention enables the robot to interact with the user in a multi-modal way during projection, improving the intelligence and human-likeness of the robot and thereby better achieving teaching and game effects.
Embodiment two
Fig. 2 is a schematic flow chart of example two of the projection interaction method applied to an intelligent robot according to the invention. The method of this embodiment mainly includes the following steps. Steps similar to those of embodiment one are marked with the same labels and their details are not repeated; only the differing steps are described in detail.
In step S110, the intelligent robot carries out projection output according to the set projection data.
In step S120, during the projection output process, it is determined whether a set event is triggered; if it is judged that a set event is triggered, the projection output is paused. The set event includes detecting multi-modal input data output by the user, or the projection output time reaching a specific interaction time point.
In step S130, multi-modal input data from the user is received and parsed.
In step S210, a user intention is obtained according to the parsing result.
The information obtained by analyzing the multi-modal data in step S130 is generally just text information corresponding to voice information, or human posture information corresponding to the user's action; the concrete user intention that this information expresses still needs further screening and matching before the robot can understand it. Taking voice information as an example, suppose the parsing result obtained by speech recognition is "what does 'bent neck' mean". After obtaining this result, the robot extracts the key information, such as "bent neck" and "what meaning", and uses this information as a guide to screen out the matching user intention from a preset user intention database, for example obtaining the user intention "explain the meaning of 'bent neck'" from the parsing result "what does 'bent neck' mean".
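The keyword-guided screening just described might, as a minimal sketch, look like the following. The intent database contents and scoring rule are assumptions chosen only to illustrate matching against extracted key information:

```python
def match_intent(text, intent_db):
    """Screen the preset user-intention database for the intention whose
    keywords best match the key information extracted from the text."""
    best, best_score = None, 0
    for intent, keywords in intent_db.items():
        score = sum(1 for kw in keywords if kw in text)  # count keyword hits
        if score > best_score:
            best, best_score = intent, score
    return best  # None when no keyword matches at all

intents = {
    "explain_bent_neck": ["bent neck", "meaning"],
    "continue_playing": ["understood", "go on"],
}
```

A production system would use a more robust matcher, but the structure — extracted key terms guiding a lookup into a predefined intention database — is what the step above relies on.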
In step S220, it is judged whether predefined projection data corresponding to the user intention exists. If predefined projection data corresponding to the user intention exists, step S230 is performed; otherwise step S240 is performed.
A database associating preset user intentions with projection data is stored in advance in the robot's local memory, and projection data corresponding to a user intention can be found by querying this database. For example, the projection data corresponding to the user intention "explain the meaning of 'bent neck'" is image data containing a goose's curved neck. If the embodiment of the invention is implemented by an application program, it is judged whether predefined projection data corresponding to the user intention exists in the projection application program.
In step S230, the projection data is output by projection, and the flow returns to step S110 after the output ends.
After the projection data corresponding to the user intention is found, that projection data is output. To ensure the integrity of the previously output projection data, the projection that has not yet been output then continues to play from the content recorded at the time point when the projection output was paused.
In step S240, the user intention is sent to the cloud server, and the feedback data produced by the cloud server's analysis is received.
When querying the robot's locally stored database for projection data, owing to limitations of the robot's internal memory hardware and CPU processing capability, predefined projection data corresponding to the user intention may well not be stored. In order to interact with the user better, when no corresponding projection data is found, the user intention is sent to the cloud server and processed by the cloud server.
The cloud server processes the user intention and queries content corresponding to it. This content can include projection data, a combination of projection data and speech data, or feedback data of other forms.
In step S250, it is judged whether to perform projection output of the feedback data. If the judgment result is yes, step S230 is performed; otherwise the flow returns to step S110.
The robot receives the feedback data from the cloud server and judges whether the feedback data includes projection data that can be output by projection. If it includes projection data, step S230 is performed to carry out projection output according to the feedback data; otherwise the projection continues to play from the content recorded when the projection was paused, completing this projection output. If the embodiment of the invention is implemented by an application program, whether to perform projection output of the feedback data is judged by the projection application program.
Embodiment three
Fig. 3 is a schematic flow chart of example three of the projection interaction method applied to an intelligent robot according to the invention. The method of this embodiment mainly includes the following steps. Steps identical to those of embodiments one and two are marked with the same labels and their details are not repeated; only the differing steps are described in detail.
In step S110, the intelligent robot carries out projection output according to the set projection data.
In step S120, during the projection output process, it is determined whether a set event is triggered; if it is judged that a set event is triggered, the projection output is paused. The set event includes detecting multi-modal input data output by the user, or the projection output time reaching a specific interaction time point.
In step S130, multi-modal input data from the user is received and parsed.
In step S210, a user intention is obtained according to the parsing result.
In step S220, it is judged whether predefined projection data corresponding to the user intention exists. If predefined projection data corresponding to the user intention exists, step S310 is performed; otherwise step S240 is performed.
In step S310, it is further determined whether other multi-modal data corresponding to the projection data exists. If the judgment result is that other multi-modal data corresponding to the projection data exists, step S320 is performed; otherwise step S230 is performed.
It should be noted that the other multi-modal data can be, for example, a mechanical control instruction that makes the robot perform a certain limb action, or an expression control instruction that makes the robot present a facial expression, so that in some practical application scenarios the robot can imitate human expressions and actions. In this example, besides the associated user intentions and predefined projection data, the robot's memory also stores a database associating projection data with multi-modal data; by querying this database, it is judged whether other multi-modal data corresponding to the predefined projection data exists. For example, the user intention "explain the meaning of 'bent neck'" corresponds to the projection data of a goose's curved neck, and that projection data in turn corresponds to speech data explaining "bent neck": the two characters of "bent neck" describe the goose singing heartily toward the sky, referring to the goose's curving neck.
In step S320, the projection data is output by projection and the other multi-modal data is also output; the flow returns to step S110 after the output ends.
The other multi-modal data is output at the same time as, or after, the projection data; the output time of the other multi-modal data is not restricted. In this way the projection data can be further explained, helping the user understand the projected content. If the multi-modal data is speech data, the operation is performed by the voice output device; if the multi-modal data is a mechanical control instruction, the corresponding mechanical structure of the robot executes it according to the instruction.
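The dispatch of accompanying multi-modal data to the voice output device or to a mechanical structure could be sketched as below; the item format and the two device callbacks are illustrative assumptions:

```python
def output_multimodal(items, voice_device, mechanism):
    """Route each accompanying multi-modal datum: speech data goes to the
    voice output device, mechanical control instructions to the
    corresponding mechanical structure."""
    for item in items:
        if item["type"] == "speech":
            voice_device(item["data"])        # performed by voice output
        elif item["type"] == "mech":
            mechanism(item["data"])           # executed by the mechanism

spoken, moved = [], []
output_multimodal(
    [{"type": "speech", "data": "the goose sings to the sky"},
     {"type": "mech", "data": "raise_arm"}],
    voice_device=spoken.append,
    mechanism=moved.append,
)
```

Since the description leaves the output time unrestricted, a real implementation could equally run these dispatches concurrently with the projection rather than sequentially as here.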
In step S230, the projection data is output by projection, and the flow returns to step S110 after the output ends.
In step S240, the user intention is sent to the cloud server, and the feedback data produced by the cloud server's analysis is received.
In step S250, it is judged whether to perform projection output of the feedback data. If the judgment result is yes, step S230 is performed; otherwise the flow returns to step S110.
Embodiment four
Fig. 4 is a structural block diagram of the projection interaction system 400 applied to an intelligent robot according to an embodiment of the present application. As shown in Fig. 4, the projection interaction system 400 of this embodiment mainly includes: a projection output control module 410, a user data receiving module 420, and a projection data output module 430.
The projection output control module 410 determines, during projection output, whether a set event is triggered, and if so, pauses the projection output. The set event includes monitoring multi-modal input data output by the user, or the projection output time reaching a specific interaction time point.
The user data receiving module 420 receives and parses multi-modal input data from the user. The multi-modal input data from the user include multi-modal input data actively output by the user and multi-modal input data responding to multi-modal output data, where the multi-modal output data are data sent by the robot to the user at the specific interaction time point.
The projection data output module 430 outputs, according to the parsing result of the multi-modal input data, the projection data corresponding thereto. The projection data output module 430 further obtains the user intent according to the parsing result and judges whether pre-defined projection data corresponding to the user intent exist; if so, the projection data are output by projection; otherwise, the user intent is sent to a cloud server, and feedback data are received after the cloud server completes its analysis. The projection data output module 430 further determines whether the feedback data should be output by projection. In addition, while the projection data are output by projection, other multi-modal data corresponding to the projection data are also output.
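The division of labor among the three modules could be composed as below — a hypothetical sketch in which each module is reduced to a single injected callable, to show only how modules 410, 420, and 430 cooperate in one cycle:

```python
class ProjectionInteractionSystem:
    """Hypothetical sketch of system 400 as three cooperating modules."""

    def __init__(self, is_event_triggered, parse_input, lookup_projection):
        self.is_event_triggered = is_event_triggered   # module 410's trigger check
        self.parse_input = parse_input                 # module 420's parser
        self.lookup_projection = lookup_projection     # module 430's lookup

    def step(self, raw_input):
        """One interaction cycle: pause on a set event, then answer the user."""
        if not self.is_event_triggered(raw_input):
            return None                                # no event: keep projecting
        intent = self.parse_input(raw_input)           # module 420: parse input
        return self.lookup_projection(intent)          # module 430: pick output
```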
Through appropriate configuration, the projection interaction system 400 of this embodiment can perform each step of embodiments one, two, and three, which are not repeated here.
This is because the method of the present invention is described as being implemented in a computer system. The computer system may, for example, be provided in the control core processor of the robot. For example, the methods described herein may be implemented as software executable with control logic, which is executed by the CPU in the robot operating system. The functions described herein may be implemented as a set of program instructions stored in a non-transitory tangible computer-readable medium. When implemented in this manner, the computer program includes a set of instructions which, when run by a computer, cause the computer to perform a method that implements the functions described above. The program instructions may be temporarily or permanently installed in a non-transitory tangible computer-readable medium, such as a ROM chip, computer memory, a disk, or another storage medium. In addition to being realized in software, the logic described herein may be embodied using discrete components, an integrated circuit, programmable logic used in combination with a programmable logic device (such as a field-programmable gate array (FPGA) or a microprocessor), or any other device comprising any combination thereof. All such embodiments are intended to fall within the scope of the present invention.
It should be understood that the disclosed embodiments of the present invention are not limited to the particular structures, process steps, or materials disclosed herein, but extend to their equivalents as would be understood by those of ordinary skill in the relevant art. It should also be understood that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting.
Reference in the specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, appearances of the phrase "one embodiment" or "an embodiment" in various places throughout the specification do not necessarily all refer to the same embodiment.
Although the embodiments above are disclosed to facilitate understanding and use of the present invention, they do not limit the present invention. Any person skilled in the art to which this invention pertains may make modifications and changes in the form and details of implementation without departing from the spirit and scope disclosed by this invention, but the scope of patent protection of the present invention is still defined by the appended claims.
Claims (10)
1. A projection interaction method applied to an intelligent robot, characterized in that the method comprises:
during projection output, determining whether a set event is triggered, and if a set event is triggered, pausing the projection output;
receiving and parsing multi-modal input data from a user;
outputting, according to the parsing result of the multi-modal input data, projection data corresponding thereto.
2. The method according to claim 1, characterized in that:
the set event includes monitoring multi-modal input data output by the user, or the projection output time reaching a specific interaction time point;
the multi-modal input data from the user include multi-modal input data actively output by the user and multi-modal input data responding to multi-modal output data, the multi-modal output data being data sent by the robot to the user at the specific interaction time point.
3. The method according to claim 1, characterized in that:
the projection interaction method is realized by means of an application program;
a user intent is obtained according to the parsing result, and it is judged whether pre-defined projection data corresponding to the user intent exist in the projection application program; if so, the projection data are output by projection; otherwise, the user intent is sent to a cloud server, and feedback data are received after the cloud server completes its analysis.
4. The method according to claim 3, characterized by further comprising:
judging, by the projection application program, whether the feedback data should be output by projection.
5. The method according to any one of claims 1 to 4, characterized in that:
while the projection data are output by projection, other multi-modal data corresponding to the projection data are also output.
6. A projection interaction system applied to an intelligent robot, characterized in that the system comprises:
a projection output control module, which determines, during projection output, whether a set event is triggered, and pauses the projection output if a set event is triggered;
a user data receiving module, which receives and parses multi-modal input data from a user;
a projection data output module, which outputs, according to the parsing result of the multi-modal input data, projection data corresponding thereto.
7. The system according to claim 6, characterized in that:
the set event includes monitoring multi-modal input data output by the user, or the projection output time reaching a specific interaction time point;
the multi-modal input data from the user include multi-modal input data actively output by the user and multi-modal input data responding to multi-modal output data, the multi-modal output data being data sent by the robot to the user at the specific interaction time point.
8. The system according to claim 6, characterized in that:
the projection data output module further obtains a user intent according to the parsing result and judges whether pre-defined projection data corresponding to the user intent exist; if so, the projection data are output by projection; otherwise, the user intent is sent to a cloud server, and feedback data are received after the cloud server completes its analysis.
9. The system according to claim 8, characterized in that:
the projection data output module further determines whether the feedback data should be output by projection.
10. The system according to any one of claims 6 to 9, characterized in that:
the projection data output module further outputs, while the projection data are output by projection, other multi-modal data corresponding to the projection data.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710027806.3A CN106903695B (en) | 2017-01-16 | 2017-01-16 | Projection interactive method and system applied to intelligent robot |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106903695A true CN106903695A (en) | 2017-06-30 |
CN106903695B CN106903695B (en) | 2019-04-26 |
Family
ID=59206486
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710027806.3A Active CN106903695B (en) | 2017-01-16 | 2017-01-16 | Projection interactive method and system applied to intelligent robot |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106903695B (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107553505A (en) * | 2017-10-13 | 2018-01-09 | 刘杜 | Autonomous introduction system platform robot and explanation method |
CN107741882A (en) * | 2017-11-22 | 2018-02-27 | 阿里巴巴集团控股有限公司 | The method and device and electronic equipment of distribution task |
CN108748141A (en) * | 2018-05-04 | 2018-11-06 | 安徽三弟电子科技有限责任公司 | A kind of children animation dispensing robot control system based on voice control |
CN111152232A (en) * | 2018-11-08 | 2020-05-15 | 现代自动车株式会社 | Service robot and method for operating the same |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104985599A (en) * | 2015-07-20 | 2015-10-21 | 百度在线网络技术(北京)有限公司 | Intelligent robot control method and system based on artificial intelligence and intelligent robot |
US20160039097A1 (en) * | 2014-08-07 | 2016-02-11 | Intel Corporation | Context dependent reactions derived from observed human responses |
CN105807933A (en) * | 2016-03-18 | 2016-07-27 | 北京光年无限科技有限公司 | Man-machine interaction method and apparatus used for intelligent robot |
CN105835064A (en) * | 2016-05-03 | 2016-08-10 | 北京光年无限科技有限公司 | Multi-mode output method of intelligent robot, and intelligent robot system |
CN105900051A (en) * | 2014-01-06 | 2016-08-24 | 三星电子株式会社 | Electronic device and method for displaying event in virtual reality mode |
CN205521501U (en) * | 2015-11-14 | 2016-08-31 | 华中师范大学 | Robot based on three -dimensional head portrait of holographically projected 3D |
CN106228982A (en) * | 2016-07-27 | 2016-12-14 | 华南理工大学 | A kind of interactive learning system based on education services robot and exchange method |
Also Published As
Publication number | Publication date |
---|---|
CN106903695B (en) | 2019-04-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106903695B (en) | Projection interactive method and system applied to intelligent robot | |
US11765439B2 (en) | Intelligent commentary generation and playing methods, apparatuses, and devices, and computer storage medium | |
Lombard et al. | Social responses to media technologies in the 21st century: The media are social actors paradigm | |
CN109977208B (en) | Dialogue system integrating FAQ (failure-based query language) and task and active guidance | |
RU2690071C2 (en) | Methods and systems for managing robot dialogs | |
US20190122409A1 (en) | Multi-Dimensional Puppet with Photorealistic Movement | |
CN109176535B (en) | Interaction method and system based on intelligent robot | |
CN109710748B (en) | Intelligent robot-oriented picture book reading interaction method and system | |
CN106997243B (en) | Speech scene monitoring method and device based on intelligent robot | |
CN109789550A (en) | Control based on the social robot that the previous role in novel or performance describes | |
CN109960723A (en) | A kind of interactive system and method for psychological robot | |
CN107480766B (en) | Method and system for content generation for multi-modal virtual robots | |
CN110531849A (en) | A kind of intelligent tutoring system of the augmented reality based on 5G communication | |
CN109977238A (en) | Generate the system for drawing this, method and apparatus | |
WO2023226913A1 (en) | Virtual character drive method, apparatus, and device based on expression recognition | |
CN109343695A (en) | Exchange method and system based on visual human's behavioral standard | |
CN106502382A (en) | Active exchange method and system for intelligent robot | |
Gao et al. | Architecture of visual design creation system based on 5G virtual reality | |
CN109857929A (en) | A kind of man-machine interaction method and device for intelligent robot | |
CN110309470A (en) | A kind of virtual news main broadcaster system and its implementation based on air imaging | |
Gelfert | Steps to an Ecology of Knowledge: Continuity and Change in the Genealogy of Knowledge | |
CN112860213B (en) | Audio processing method and device, storage medium and electronic equipment | |
Van Oijen et al. | Agent communication for believable human-like interactions between virtual characters | |
CN110728604B (en) | Analysis method and device | |
CN108959488A (en) | Safeguard the method and device of Question-Answering Model |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||