CN107783650A - Human-computer interaction method and device based on a virtual robot - Google Patents

Human-computer interaction method and device based on a virtual robot

Info

Publication number
CN107783650A
CN107783650A (application CN201710840497.1A)
Authority
CN
China
Prior art keywords
information
user
modal
virtual robot
feedback information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710840497.1A
Other languages
Chinese (zh)
Inventor
王恺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Guangnian Wuxian Technology Co Ltd
Original Assignee
Beijing Guangnian Wuxian Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Guangnian Wuxian Technology Co Ltd filed Critical Beijing Guangnian Wuxian Technology Co Ltd
Priority to CN201710840497.1A
Publication of CN107783650A
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00 Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01 Indexing scheme relating to G06F3/01
    • G06F2203/011 Emotion or mood input determined on the basis of sensed human body parameters such as pulse, heart rate or beat, temperature of skin, facial expressions, iris, voice pitch, brain activity patterns

Abstract

A human-computer interaction method based on a virtual robot, comprising: Step 1: acquiring multimodal input information; Step 2: parsing the multimodal input information and judging whether a user is present in a designated area, wherein step 3 is performed if a user is present in the designated area; Step 3: performing intent recognition based on the parsing result of the multimodal input information to determine the user's intent and, according to the user's intent, invoking multimodal interaction data related to the current interaction scenario to generate and output corresponding multimodal feedback information, wherein the virtual robot character image in the multimodal feedback information is related to the current interaction scenario. The method outputs content about an enterprise and its products in the form of virtual robot interaction, so that customers receive the enterprise's promotional content more completely and users can quickly and conveniently obtain that content through the virtual robot, which helps promote the enterprise's business.

Description

Human-computer interaction method and device based on a virtual robot
Technical field
The present invention relates to the field of robot technology, and in particular to a human-computer interaction method and device based on a virtual robot.
Background technology
With the continuous development of science and technology and the introduction of information technology, computer technology and artificial intelligence technology, robotics research has gradually stepped out of the industrial field and extended into fields such as medical care, health care, the home, entertainment and the service industry. Accordingly, people's requirements for robots have risen from simple repeated mechanical actions to intelligent robots capable of anthropomorphic question answering, autonomy and interaction with other robots, and human-computer interaction has become an important factor determining the development of intelligent robots.
At present, virtual robots, as a kind of intelligent robot with a virtual image, are increasingly welcomed by users, but their interaction scenarios are relatively limited and the services they can provide are few. How to broaden the service fields of virtual robots has become a problem urgently needing a solution.
Summary of the invention
To solve the above problems, the present invention provides a human-computer interaction method based on a virtual robot. The method enables a virtual robot and displays the image of the virtual robot in a preset display area, and comprises:
Step 1: acquiring multimodal input information;
Step 2: parsing the multimodal input information and judging whether a user is present in a designated area, wherein step 3 is performed if a user is present in the designated area;
Step 3: performing intent recognition based on the parsing result of the multimodal input information to determine the user's intent and, according to the user's intent, invoking multimodal interaction data related to the current interaction scenario to generate and output corresponding multimodal feedback information, wherein the virtual robot character image in the multimodal feedback information is related to the current interaction scenario.
According to one embodiment of the present invention, in step 2, if no user is present in the designated area, step 4 is performed, in which preset enterprise promotion information is generated and output.
According to one embodiment of the present invention, in step 3, the multimodal input information is also parsed to obtain the user's emotion information, and the corresponding multimodal feedback information is generated and output in combination with the user emotion information.
According to one embodiment of the present invention, the multimodal feedback information further includes voice feedback information corresponding to the virtual robot character image.
According to one embodiment of the present invention, in step 3, reception-function prompt information is actively generated and output, and whether the reception function needs to be started is judged from the input information the user feeds back in response to the prompt. If the reception function needs to be started, corresponding multimodal feedback information is generated and output according to the current interaction scenario, the multimodal feedback information including a QR code corresponding to the reception function.
The present invention also provides a human-computer interaction device based on a virtual robot. The device is configured to display the image of the virtual robot in a preset display area, and comprises:
an input information acquisition module for acquiring multimodal input information; and
a data processing module, connected to the input information acquisition module, for parsing the multimodal input information and judging whether a user is present in a designated area, wherein, if a user is present in the designated area, intent recognition is performed based on the parsing result of the multimodal input information to determine the user's intent, and multimodal interaction data related to the current interaction scenario is invoked according to the user's intent to generate and output corresponding multimodal feedback information, the virtual robot character image in which is related to the current interaction scenario.
According to one embodiment of the present invention, if no user is present in the designated area, the data processing module is configured to generate and output preset enterprise promotion information.
According to one embodiment of the present invention, the data processing module is configured to also parse the multimodal input information to obtain the user's emotion information, and to generate and output the corresponding multimodal feedback information in combination with the user emotion information.
According to one embodiment of the present invention, the multimodal feedback information further includes voice feedback information corresponding to the virtual robot character image.
According to one embodiment of the present invention, the data processing module is configured to actively generate and output reception-function prompt information and to judge, from the input information the user feeds back in response to the prompt, whether the reception function needs to be started; if so, corresponding multimodal feedback information is generated and output according to the current interaction scenario, the multimodal feedback information including a QR code corresponding to the reception function.
The present invention also provides a storage medium storing program code executable to perform the steps of any of the above human-computer interaction methods based on a virtual robot.
The human-computer interaction method based on a virtual robot provided by the present invention makes the character image or actions of the displayed virtual robot match the current interaction scenario, and outputs content about an enterprise and its products (such as an enterprise introduction or product introductions) in the form of virtual robot interaction, so that customers receive the enterprise's promotional content more completely. Users can thus quickly and conveniently obtain content about the enterprise and its products through the virtual robot, which helps promote the enterprise's business.
Other features and advantages of the present invention will be set forth in the following description, and will in part become apparent from the description or be understood by practicing the present invention. The objects and other advantages of the present invention can be realized and obtained by the structures particularly pointed out in the specification, the claims and the accompanying drawings.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the accompanying drawings required in the description of the embodiments or the prior art are briefly introduced below:
Fig. 1 is a schematic diagram of an implementation scenario of a human-computer interaction method based on a virtual robot according to an embodiment of the present invention;
Fig. 2 is a schematic flowchart of a human-computer interaction method based on a virtual robot according to an embodiment of the present invention;
Fig. 3 is a schematic flowchart of a human-computer interaction method based on a virtual robot according to another embodiment of the present invention;
Fig. 4 is a schematic flowchart of a human-computer interaction method based on a virtual robot according to yet another embodiment of the present invention;
Fig. 5 is a schematic structural diagram of a human-computer interaction device based on a virtual robot according to an embodiment of the present invention.
Detailed description of the embodiments
The embodiments of the present invention are described in detail below with reference to the accompanying drawings and examples, so that how the present invention applies technical means to solve technical problems and achieve technical effects can be fully understood and implemented. It should be noted that, as long as no conflict arises, the embodiments of the present invention and the features in each embodiment can be combined with each other, and the resulting technical solutions all fall within the protection scope of the present invention.
Meanwhile, in the following description, numerous specific details are set forth for illustrative purposes to provide a thorough understanding of the embodiments of the present invention. It will be apparent to those skilled in the art, however, that the present invention may be practiced without these specific details or in ways other than those specifically described here.
In addition, the steps illustrated in the flowcharts of the accompanying drawings may be performed in a computer system such as a set of computer-executable instructions, and although a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed in an order different from the one herein.
The present invention provides a new human-computer interaction method based on a virtual robot, which conducts human-computer interaction with the user by displaying a virtual robot character image related to an enterprise and its products. Fig. 1 shows a schematic diagram of an implementation scenario of the method in this embodiment.
As shown in Fig. 1, in this embodiment, the human-computer interaction method based on a virtual robot may use an image display device 101 to show the virtual robot image 103. It should be noted that in different embodiments of the present invention, the image display device 101 may be implemented with different equipment as actually needed, and the invention is not limited in this respect. For example, in one embodiment of the present invention, the image display device 101 may use a liquid crystal display to show the virtual robot image, while in another embodiment it may use a holographic projector to show the virtual robot image 103.
In this embodiment, the virtual robot image shown by the image display device 101 corresponds to the enterprise and its products. To explain the realization principle, implementation process and advantages of the human-computer interaction method based on a virtual robot provided by the present invention more clearly, the method is further described below in combination with different embodiments.
Embodiment one:
Fig. 2 shows a schematic flowchart of the human-computer interaction method based on a virtual robot provided by this embodiment.
As shown in Fig. 2, the method provided by this embodiment first acquires multimodal input information in step S201. In this embodiment, the multimodal input information acquired in step S201 may include video information about the user (i.e., user image information) as well as the audio information the user inputs (i.e., user voice information). Of course, in other embodiments of the present invention, the multimodal input information acquired in step S201 may also include other appropriate information depending on the actual situation, and the invention is not limited in this respect.
After acquiring the multimodal input information, the method parses it in step S202 and, in step S203, judges from the parsing result of step S202 whether a user is present in a designated area. In this embodiment, the designated area may be a certain region in front of the image display device that shows the virtual robot character image; the present invention does not limit the specific position and size of the designated area. In other embodiments of the present invention, the method may also determine whether a user is present in the designated area by other means, for example with a microphone array: when user speech is received within the pickup range of the microphone array, it can be determined that a user is present.
Specifically, in this embodiment, the method preferably performs image processing on the image information in the multimodal input information in step S202 to determine whether a human figure exists in the specific region of the acquired image (i.e., the region corresponding to the designated area). If no human figure exists in that region, the method judges in step S203 that no user is present in the designated area; if a human figure does exist in that region, the method judges in step S203 that a user is present in the designated area.
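As a minimal illustration of this presence check (not the patent's actual implementation), the sketch below runs OpenCV's stock HOG person detector over the image region that corresponds to the designated area; the region coordinates are hypothetical.

```python
# Sketch of the presence check in steps S202/S203, assuming OpenCV's built-in
# HOG person detector; the designated-area coordinates are hypothetical.
import cv2

DESIGNATED_AREA = (100, 0, 440, 480)  # (x, y, w, h) region in front of the display

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def user_present(frame) -> bool:
    """Return True if a human figure is detected inside the designated area."""
    x, y, w, h = DESIGNATED_AREA
    roi = frame[y:y + h, x:x + w]                 # crop the designated region
    rects, _weights = hog.detectMultiScale(roi, winStride=(8, 8))
    return len(rects) > 0                         # any detection counts as a user
```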
If no user is present in the designated area, no pedestrian or customer has appeared in front of the image display device, so the method can preferably generate and output preset enterprise promotion information. In this embodiment, the preset enterprise promotion information may be an introduction video of the enterprise in the current interaction scenario, an introduction video of its products, or the like. Of course, in other embodiments of the present invention, if no user is present in the designated area, the method may instead output no feedback information and leave the image display device in a dormant state, which helps reduce energy consumption.
If a user is present in the designated area, a pedestrian or customer has appeared in front of the image display device, and the method performs intent recognition in step S204 based on the parsing result of the multimodal input information to determine the user's intent.
Specifically, in this embodiment, the method may parse the audio information in the multimodal input information in step S204 to obtain the interaction topic. The interaction topic characterizes the subject around which the user and the robot interact in a single round or over multiple rounds, and the method uses the determined interaction topic to preliminarily determine the context required by the feedback information that finally needs to be generated.
Of course, in other embodiments of the present invention, when the multimodal input information contains text information, the method may also determine the interaction topic in step S204 by extracting keywords from the text, as sketched below. When what the user inputs is spoken dialogue, the method may first convert the voice dialogue information into corresponding text and then determine the interaction topic by parsing that text.
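A minimal sketch of that keyword-based topic determination follows, assuming Chinese text input and the third-party jieba package; the keyword-to-topic table is a hypothetical illustration, not data from the patent.

```python
# Sketch of keyword-based extraction of the interaction topic from (transcribed)
# user text, using the third-party `jieba` package; the table is hypothetical.
import jieba.analyse

TOPIC_KEYWORDS = {"产品": "product", "价格": "product", "公司": "company"}

def interaction_topic(text: str) -> str | None:
    """Map the top-ranked keywords of the utterance to a known interaction topic."""
    for kw in jieba.analyse.extract_tags(text, topK=5):
        if kw in TOPIC_KEYWORDS:
            return TOPIC_KEYWORDS[kw]
    return None  # no known topic; the dialogue can fall back to a default flow
```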
After determining the interaction topic, the method can use a preset intention graph to determine the user's intent from that topic. An intent can be regarded as the robot's attempt, from its own perspective and during the interaction, to understand the plan by which the user expects to achieve some purpose under a certain theme or topic. Because the content involved in an interaction topic is relatively broad, the method needs the intention graph to dig out the information the user wants to obtain from the robot in the subsequent interaction (i.e., the information the robot needs to feed back to the user).
Specifically, in this embodiment, when determining the user's intent from the interaction topic, the method first determines the node in the preset intention graph corresponding to the interaction topic, and then determines the nodes (i.e., terminal nodes) reached by edges whose start node is the node corresponding to the interaction topic, so that the user's intent is determined from a terminal node.
Since there may be multiple nodes connected to the start node, the method may determine multiple terminal nodes. In this case, in this embodiment, the method first determines multiple candidate user intents from the multiple nodes connected to the start node, then ranks these candidate intents by confidence, and determines the required user intent from the ranking result.
Specifically, in this embodiment, the method ranks the candidate user intents according to the preset weight of each edge in the intention graph, and selects the candidate intent with the largest weight as the final user intent.
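The weighted-edge selection just described can be pictured with the following sketch, in which the intention graph is reduced to a dictionary from a topic (start node) to weighted candidate intents (terminal nodes); all node names and weights are hypothetical.

```python
# Sketch of the intention-graph lookup: start node = interaction topic, terminal
# nodes = candidate intents, edge weight = confidence. All data is hypothetical.
INTENT_GRAPH = {
    "product": [("ask_product_intro", 0.8), ("ask_price", 0.6)],
    "company": [("ask_company_intro", 0.9), ("ask_contact", 0.4)],
}

def resolve_intent(topic: str) -> str | None:
    """Rank candidate intents by edge weight and return the highest-ranked one."""
    candidates = INTENT_GRAPH.get(topic, [])
    if not candidates:
        return None
    return max(candidates, key=lambda edge: edge[1])[0]  # largest weight wins
```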
Of course, in other embodiments of the present invention, the method may also determine the user's intent from the multimodal input information in other reasonable ways, and the invention is not limited in this respect.
As shown in Fig. 2, in this embodiment, after the user's intent is determined, the method can, in step S205, invoke the multimodal interaction data related to the current interaction scenario according to the user's intent to generate the corresponding multimodal feedback information.
In this embodiment, the multimodal feedback information generated by the method contains a virtual robot character image, and that image is related to the enterprise or enterprise product corresponding to the current interaction scenario. For example, the virtual robot character image generated and displayed in step S205 may be the image of the enterprise's mascot, a cartoon character of one of the enterprise's products, or the like.
It should be noted that in this embodiment, depending on actual needs, steps S201 to S205 may be implemented entirely by the relevant hardware device in the current interaction scenario that interacts directly with the user, or implemented by that hardware device in cooperation with a cloud server, and the invention is not limited in this respect.
For example, when the above steps need to be implemented by the hardware device that interacts directly with the user in cooperation with a cloud server, the method can transmit the acquired multimodal input information to the cloud server, have the cloud server generate the multimodal feedback information by performing steps S202 to S205, and then transmit the generated multimodal feedback information to the hardware device that interacts directly with the user for output.
It should also be noted that in different embodiments of the present invention, the multimodal feedback information generated and output in step S205 may contain only an animation of the virtual robot character image, a combination of that animation and corresponding voice information (such as story voice related to the virtual robot character image), or a combination of the animation and information in other appropriate forms, and the invention is not limited in this respect.
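The combinations just listed can be represented by a simple container such as the sketch below; the field names are illustrative assumptions rather than the patent's actual data format.

```python
# Sketch of a multimodal feedback record: a scenario-specific character
# animation plus optional voice and other modalities. Field names are assumed.
from dataclasses import dataclass, field

@dataclass
class MultimodalFeedback:
    avatar_animation: str                        # animation of the character image
    voice: str | None = None                     # e.g. story voice matching the avatar
    extras: dict = field(default_factory=dict)   # on-screen text, QR code, etc.

feedback = MultimodalFeedback(avatar_animation="mascot_intro.anim",
                              voice="mascot_story.wav")
```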
As can be seen from the foregoing description, the human-computer interaction method based on a virtual robot provided by this embodiment makes the displayed virtual robot character image or actions match the current interaction scenario, and outputs content about the enterprise and its products (such as an enterprise introduction or product introductions) in the form of virtual robot interaction, so that customers receive the enterprise's promotional content more completely. Users can thus quickly and conveniently obtain content about the enterprise and its products through the virtual robot, which helps promote the enterprise's business.
Embodiment two:
Fig. 3 shows a schematic flowchart of the human-computer interaction method based on a virtual robot provided by this embodiment.
As shown in Fig. 3, the method provided by this embodiment first acquires multimodal input information in step S301, then parses the multimodal input information in step S302, and in step S303 judges from the parsing result of step S302 whether a user is present in the designated area. If a user is present in the designated area, the method performs intent recognition in step S304 on the multimodal input information acquired in step S301 to determine the user's intent.
It should be noted that in this embodiment, the specific implementation principle and process of steps S301 to S304 are identical to those of steps S201 to S204 in embodiment one, so the related content of steps S301 to S304 is not repeated here.
In this embodiment, besides determining the user's intent, the method also parses the multimodal input information acquired in step S301 to determine user emotion information in step S305. Specifically, in this embodiment, the method may perform face recognition on the image information in the multimodal input information acquired in step S301 to obtain the face image in the picture, then perform expression recognition on the obtained face image, and determine the user emotion information from the expression recognition result.
Of course, in other embodiments of the present invention, the method may also use other reasonable ways in step S305 to determine the user emotion information from the multimodal input information, and the invention is not limited in this respect. For example, in one embodiment of the invention, the method may perform voiceprint recognition on the voice information in the acquired multimodal input information and determine the user emotion information from the voiceprint recognition result.
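A minimal sketch of the face-based variant of step S305 follows, using OpenCV's bundled Haar cascade for face detection; `classify_expression` is a hypothetical placeholder for whatever expression-recognition model is used.

```python
# Sketch of step S305: detect a face, then map its expression to an emotion
# label. The expression classifier below is a hypothetical placeholder.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def classify_expression(face_roi) -> str:
    # Placeholder: a real system would run an expression-recognition model here.
    return "neutral"

def user_emotion(frame) -> str | None:
    """Return an emotion label for the first detected face, or None if no face."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    return classify_expression(gray[y:y + h, x:x + w])
```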
After obtaining the user's intent and the user emotion information, in this embodiment, the method can, in step S306, invoke the multimodal interaction data related to the current interaction scenario according to the user's intent and emotion information to generate the corresponding multimodal feedback information.
Embodiment three:
Fig. 4 shows a schematic flowchart of the human-computer interaction method based on a virtual robot provided by this embodiment.
As shown in Fig. 4, the method provided by this embodiment first acquires multimodal input information in step S401, then parses the multimodal input information in step S402, and in step S403 judges from the parsing result of step S402 whether a user is present in the designated area.
It should be noted that in this embodiment, the specific implementation principle and process of steps S401 to S403 are identical to those of steps S201 to S203 in embodiment one, so the related content of steps S401 to S403 is not repeated here.
In this embodiment, if a user is present in the designated area, the method actively generates and outputs reception-function prompt information in step S404. For example, when the method detects that a user is present in the designated area (i.e., a user has entered the designated service range), it may show the virtual image of the enterprise's mascot and output voice information such as "Hello, do you need my service?".
After outputting the reception-function prompt information, the method obtains in step S405 the input information the user feeds back in response to the prompt, and judges from that input whether the reception function needs to be started.
For example, if the input the user feeds back in response to the prompt is voice information such as "I would like to learn about your company's products", the method judges that the reception function needs to be started; if the feedback is voice information such as "Not needed, thanks", the method judges that the reception function does not need to be started, in which case it performs step S407 to generate and output the preset enterprise promotion information.
In this embodiment, if the method judges in step S405 that the reception function needs to be started, it generates and outputs the corresponding multimodal feedback information in step S406 according to the current interaction scenario. The multimodal feedback information generated and output in step S406 preferably includes a QR code corresponding to the reception function (a generation sketch follows the examples below). By scanning the QR code, the user can obtain the public account of the current enterprise or the enterprise's reception app, so that the user can receive the virtual robot's guidance in real time on their own smart terminal.
For example, if the application scenario of the method is a hotel, the multimodal feedback information output in step S406 may include the hotel's reception app, which the user can install on their own smart terminal. While using the hotel's reception app, the user can learn the hotel's room-type guidance information, overall-structure information and so on.
For another example, if the application scenario of the method is an amusement park, the multimodal feedback information output in step S406 may include the park's public account, through which the user can learn the park's site layout, introductions to points of attention, the queue status at each site and other information.
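Generating such a QR code could look like the sketch below, which uses the third-party qrcode package; the link is a hypothetical placeholder for the venue's public account or reception app.

```python
# Sketch of QR-code generation for the reception function in step S406, using
# the third-party `qrcode` package; the URL is a hypothetical placeholder.
import qrcode

def reception_qr(link: str, path: str = "reception_qr.png") -> None:
    """Render a QR code pointing at the venue's reception app or public account."""
    img = qrcode.make(link)  # returns a PIL-backed image object
    img.save(path)

reception_qr("https://example.com/hotel-reception-app")
```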
The present invention also provides a storage medium storing program code executable to perform the steps of the human-computer interaction method based on a virtual robot as described above. In addition, the present invention also provides a human-computer interaction device based on a virtual robot; Fig. 5 shows a schematic structural diagram of the device in this embodiment.
As shown in Fig. 5, in this embodiment, the human-computer interaction device based on a virtual robot preferably includes an input information acquisition module 501 and a data processing module 502. Depending on actual needs, the input information acquisition module 501 may be implemented with different devices or equipment so as to acquire different types of input information.
For example, if image information needs to be acquired, the input information acquisition module 501 needs to include corresponding image acquisition equipment (such as a camera); if voice information needs to be acquired, it needs to include corresponding voice acquisition equipment (such as a microphone); and if text information needs to be acquired, it needs to include corresponding text input equipment (such as a physical or virtual keyboard).
The data processing module 502 is connected to the input information acquisition module 501. It can parse the multimodal input information transmitted from the input information acquisition module 501 and perform intent recognition on the parsing result, so as to determine the user's intent.
Specifically, in this embodiment, the data processing module 502 preferably includes a cloud server. After the input information acquisition module 501 acquires the multimodal input information, it can transmit that information through a relevant data transmission network (such as Ethernet) to the cloud server, so that the cloud server parses the multimodal input information.
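Where this cloud-backed split is used, the upload could look like the minimal sketch below, assuming an HTTP transport via the third-party requests package; the endpoint URL and payload fields are hypothetical.

```python
# Sketch of shipping the acquired multimodal input to the cloud server over
# HTTP, assuming the `requests` package; endpoint and fields are hypothetical.
import requests

def send_to_cloud(multimodal_input: dict) -> dict:
    """Upload the acquired input and return the server's multimodal feedback."""
    resp = requests.post("https://example.com/api/interact",
                         json=multimodal_input, timeout=5)
    resp.raise_for_status()
    return resp.json()  # multimodal feedback information to render locally
```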
In this embodiment, the cloud server preferably performs intent recognition on the parsing result of the multimodal input information through a preset intention graph, so as to obtain the user's intent. Of course, in other embodiments of the present invention, the cloud server may also determine the user's intent in other reasonable ways, and the invention is not limited in this respect.
After obtaining the user's intent, the cloud server also invokes, according to the determined intent, the multimodal interaction data related to the current interaction scenario, so as to generate and output the corresponding multimodal feedback information. The virtual robot character image in the multimodal feedback information is related to the current interaction scenario (for example, the image of the mascot of the enterprise corresponding to the current interaction scenario, a related cartoon character, or the like).
After the multimodal feedback information is generated, the cloud server can transmit it to the output equipment arranged in the current interaction scenario, so that the output equipment outputs it (for example, displays the virtual robot character image or outputs voice corresponding to the virtual robot character image).
It should be noted that in different embodiments of the present invention, the specific principle and process by which the cloud server realizes its functions may be identical to the content disclosed in steps S202 to S205 of embodiment one, in steps S302 to S306 of embodiment two, or in steps S402 to S407 of embodiment three, so the related content of the cloud server is not repeated here.
Of course, in other embodiments of the present invention, the functions of the cloud server and of the output equipment may also be integrated into a single piece of equipment arranged in the current interaction scenario. In that case the input information acquisition module 501 no longer needs to upload the acquired multimodal input information to a cloud server, and the data processing can be performed locally instead.
It should be understood that the disclosed embodiments of the present invention are not limited to the specific structures or processing steps disclosed herein, but extend to equivalents of these features as understood by those of ordinary skill in the relevant art. It should also be understood that the terms used herein are only for the purpose of describing specific embodiments and are not intended to be limiting.
"One embodiment" or "an embodiment" mentioned in the specification means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Therefore, the phrase "one embodiment" or "an embodiment" appearing in various places throughout the specification does not necessarily refer to the same embodiment.
Although the above examples are used to illustrate the principle of the present invention in one or more applications, it is obvious to those skilled in the art that various modifications in form, details of usage and implementation can be made without departing from the principle and spirit of the present invention and without creative work. Therefore, the present invention is defined by the appended claims.

Claims (11)

1. A human-computer interaction method based on a virtual robot, characterized in that a virtual robot is enabled and the image of the virtual robot is displayed in a preset display area, the method comprising:
Step 1: acquiring multimodal input information;
Step 2: parsing the multimodal input information and judging whether a user is present in a designated area, wherein, if a user is present in the designated area, step 3 is performed;
Step 3: performing intent recognition based on the parsing result of the multimodal input information to determine the user's intent and, according to the user's intent, invoking multimodal interaction data related to the current interaction scenario to generate and output corresponding multimodal feedback information, wherein the virtual robot character image in the multimodal feedback information is related to the current interaction scenario.
2. The method of claim 1, characterized in that, in step 2, if no user is present in the designated area, step 4 is performed, in which preset enterprise promotion information is generated and output.
3. The method of claim 1 or 2, characterized in that, in step 3, the multimodal input information is also parsed to obtain the user's emotion information, and the corresponding multimodal feedback information is generated and output in combination with the user emotion information.
4. The method of any one of claims 1 to 3, characterized in that the multimodal feedback information further includes voice feedback information corresponding to the virtual robot character image.
5. The method of any one of claims 1 to 4, characterized in that, in step 3, reception-function prompt information is actively generated and output, and whether the reception function needs to be started is judged from the input information the user feeds back in response to the prompt, wherein, if the reception function needs to be started, corresponding multimodal feedback information is generated and output according to the current interaction scenario, the multimodal feedback information including a QR code corresponding to the reception function.
6. A human-computer interaction device based on a virtual robot, characterized in that the device is configured to display the image of a virtual robot in a preset display area, the device comprising:
an input information acquisition module for acquiring multimodal input information; and
a data processing module, connected to the input information acquisition module, for parsing the multimodal input information and judging whether a user is present in a designated area, wherein, if a user is present in the designated area, intent recognition is performed based on the parsing result of the multimodal input information to determine the user's intent, and, according to the user's intent, multimodal interaction data related to the current interaction scenario is invoked to generate and output corresponding multimodal feedback information, wherein the virtual robot character image in the multimodal feedback information is related to the current interaction scenario.
7. The device of claim 6, characterized in that, if no user is present in the designated area, the data processing module is configured to generate and output preset enterprise promotion information.
8. The device of claim 6 or 7, characterized in that the data processing module is configured to also parse the multimodal input information to obtain the user's emotion information, and to generate and output the corresponding multimodal feedback information in combination with the user emotion information.
9. The device of any one of claims 6 to 8, characterized in that the multimodal feedback information further includes voice feedback information corresponding to the virtual robot character image.
10. The device of any one of claims 6 to 9, characterized in that the data processing module is configured to actively generate and output reception-function prompt information and to judge, from the input information the user feeds back in response to the prompt, whether the reception function needs to be started, wherein, if the reception function needs to be started, corresponding multimodal feedback information is generated and output according to the current interaction scenario, the multimodal feedback information including a QR code corresponding to the reception function.
11. A storage medium, characterized in that program code executable to perform the method steps of any one of claims 1 to 5 is stored on the storage medium.
CN201710840497.1A 2017-09-18 2017-09-18 Human-computer interaction method and device based on a virtual robot Pending CN107783650A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710840497.1A CN (en) Human-computer interaction method and device based on a virtual robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710840497.1A CN (en) Human-computer interaction method and device based on a virtual robot

Publications (1)

Publication Number Publication Date
CN107783650A true CN107783650A (en) 2018-03-09

Family

ID=61437876

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710840497.1A Pending CN107783650A (en) Human-computer interaction method and device based on a virtual robot

Country Status (1)

Country Link
CN (1) CN107783650A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109857929A (en) * 2018-12-29 2019-06-07 北京光年无限科技有限公司 Human-computer interaction method and device for an intelligent robot
CN111383346A (en) * 2020-03-03 2020-07-07 深圳创维-Rgb电子有限公司 Interaction method and system based on intelligent voice, intelligent terminal and storage medium
CN111966212A (en) * 2020-06-29 2020-11-20 百度在线网络技术(北京)有限公司 Multi-mode-based interaction method and device, storage medium and smart screen device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103440677A (en) * 2013-07-30 2013-12-11 四川大学 Multi-view free stereoscopic interactive system based on Kinect somatosensory device
US20150089445A1 (en) * 2006-01-30 2015-03-26 Microsoft Corporation Controlling Application Windows In An Operating System
CN105868827A (en) * 2016-03-25 2016-08-17 北京光年无限科技有限公司 Multi-mode interaction method for intelligent robot, and intelligent robot

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150089445A1 (en) * 2006-01-30 2015-03-26 Microsoft Corporation Controlling Application Windows In An Operating System
CN103440677A (en) * 2013-07-30 2013-12-11 四川大学 Multi-view free stereoscopic interactive system based on Kinect somatosensory device
CN105868827A (en) * 2016-03-25 2016-08-17 北京光年无限科技有限公司 Multi-mode interaction method for intelligent robot, and intelligent robot

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
张红琼: "《替代人类互作的机器人》", 30 April 2012, 安徽美术出版社 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109857929A (en) * 2018-12-29 2019-06-07 北京光年无限科技有限公司 Human-computer interaction method and device for an intelligent robot
CN109857929B (en) * 2018-12-29 2021-06-15 北京光年无限科技有限公司 Intelligent robot-oriented man-machine interaction method and device
CN111383346A (en) * 2020-03-03 2020-07-07 深圳创维-Rgb电子有限公司 Interaction method and system based on intelligent voice, intelligent terminal and storage medium
CN111383346B (en) * 2020-03-03 2024-03-12 深圳创维-Rgb电子有限公司 Interactive method and system based on intelligent voice, intelligent terminal and storage medium
CN111966212A (en) * 2020-06-29 2020-11-20 百度在线网络技术(北京)有限公司 Multi-mode-based interaction method and device, storage medium and smart screen device

Similar Documents

Publication Publication Date Title
CN107728780A Human-computer interaction method and device based on a virtual robot
US11036469B2 (en) Parsing electronic conversations for presentation in an alternative interface
CN107294837A Method and system for dialogue interaction using a virtual robot
US11151765B2 (en) Method and apparatus for generating information
CN107704169B (en) Virtual human state management method and system
US11024286B2 (en) Spoken dialog system, spoken dialog device, user terminal, and spoken dialog method, retrieving past dialog for new participant
CN107632706B (en) Application data processing method and system of multi-modal virtual human
CN110400251A (en) Method for processing video frequency, device, terminal device and storage medium
CN107329990A Emotion output method and dialogue interaction system for a virtual robot
CN106464768A (en) In-call translation
CN104735480B Method and system for sending information between a mobile terminal and a TV
CN107808191A Output method and system for multimodal interaction of a virtual human
CN106471444A Interaction method and system for a virtual 3D robot, and robot
CN113793398A (en) Drawing method and device based on voice interaction, storage medium and electronic equipment
CN107783650A Human-computer interaction method and device based on a virtual robot
CN109005190A Method for realizing full-duplex voice dialogue and page control based on a web page
CN113850898A (en) Scene rendering method and device, storage medium and electronic equipment
CN111063346A (en) Cross-media star emotion accompany interaction system based on machine learning
CN113703585A (en) Interaction method, interaction device, electronic equipment and storage medium
CN113792196A (en) Method and device for man-machine interaction based on multi-modal dialog state representation
CN113742473A (en) Digital virtual human interaction system and calculation transmission optimization method thereof
CN107705166A (en) Information processing method and device
CN113157241A (en) Interaction equipment, interaction device and interaction system
KR20210025943A (en) Messenger based advertising method and apparatus
CN116628153B (en) Method, device, equipment and medium for controlling dialogue of artificial intelligent equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20180309