CN105093986A - Humanoid robot control method based on artificial intelligence, system, and humanoid robot - Google Patents


Info

Publication number
CN105093986A
CN105093986A (application CN201510437864.4A)
Authority
CN
China
Prior art keywords
user
intention
humanoid robot
artificial intelligence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201510437864.4A
Other languages
Chinese (zh)
Inventor
王志昊
葛行飞
李福祥
孟超超
孙艳虎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Baidu Online Network Technology Beijing Co Ltd
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201510437864.4A priority Critical patent/CN105093986A/en
Publication of CN105093986A publication Critical patent/CN105093986A/en
Pending legal-status Critical Current

Classifications

    • G — PHYSICS
    • G05 — CONTROLLING; REGULATING
    • G05B — CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B19/00 — Programme-control systems
    • G05B19/02 — Programme-control systems electric
    • G05B19/04 — Programme control other than numerical control, i.e. in sequence controllers or logic controllers

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Manipulator (AREA)

Abstract

The invention provides a humanoid robot control method based on artificial intelligence, a control system, and a humanoid robot. The method comprises the following steps: receiving a voice signal and/or an image signal input by a user; determining the user's intention according to the voice signal and/or the image signal; and processing the user's intention and feeding the processing result back to the user in a multi-modal output mode, wherein the multi-modal output mode comprises one or more of a motion output mode of the humanoid robot, an image or video output mode, and an audio output mode. With this method, the humanoid robot is autonomously controlled, after artificial-intelligence analysis of the collected voice and/or image signals of the user, to interact with the user in several modes, so that the results of artificial intelligence can be displayed intuitively and effectively, achieving a friendly and effective effect in publicity, demonstration, and service applications.

Description

Humanoid robot control method based on artificial intelligence, control system, and humanoid robot
Technical field
The present invention relates to the technical field of intelligent terminals, and in particular to a humanoid robot control method based on artificial intelligence (AI), a control system, and a humanoid robot.
Background technology
The carrier of artificial intelligence is normally a device such as a computer or a mobile terminal. However, limited by the design of computers and mobile terminals themselves, such devices cannot interact well with human beings. To highlight the advantages of artificial intelligence, large humanoid robots equipped with artificial intelligence have emerged. Existing large humanoid robots are more intuitive than computers and mobile terminals in their interaction with humans, but they usually possess only limited locomotion abilities and simple interactive means, and their interaction modes are single. For example, the user controls the humanoid robot through simple interactive means such as a remote control to perform simple actions, or the robot performs single, simple actions according to a pre-set program.
Existing humanoid robots lack a human-like organizational structure and human-like interaction modes; their interaction is single and usually passive, for example performing a corresponding action according to a remote command from the user. They cannot exploit the advantages of artificial intelligence and are poorly "anthropomorphic"; that is, they cannot truly be given human-like thinking or active interaction. In summary, in the related art the artificial-intelligence part of the humanoid robot is separated from hardware parts such as the motion structure; no organic whole is established, so the robot cannot express itself more richly or interact with humans in more diverse ways.
Summary of the invention
The present invention aims to solve at least one of the technical deficiencies described above.
To this end, one object of the present invention is to propose a humanoid robot control method based on artificial intelligence. With this method, after artificial-intelligence analysis of the collected voice signal and/or image signal of the user, the humanoid robot can be autonomously controlled to interact with the user in many ways, displaying the achievements of artificial intelligence more intuitively and effectively and producing a very friendly and effective effect in publicity, demonstration, and service applications.
Another object of the present invention is to propose a humanoid robot control system based on artificial intelligence.
A further object of the present invention is to propose a humanoid robot.
To achieve the above objects, an embodiment of the first aspect of the present invention discloses a humanoid robot control method based on artificial intelligence, comprising the following steps: receiving a voice signal and/or an image signal input by a user; determining the intention of the user according to the voice signal and/or the image signal; and processing the intention of the user and feeding the processing result back to the user in a multi-modal output mode, wherein the multi-modal output mode comprises one or more of a motion output mode of the humanoid robot, an image or video output mode, and an audio output mode.
According to the humanoid robot control method based on artificial intelligence of the embodiment of the present invention, the user's voice signal and/or image signal can be collected in real time, and after artificial-intelligence analysis the humanoid robot is autonomously controlled to perform a corresponding action, display an image related to the user's intention, or play audio related to the user's intention, enriching the means of interaction with the user and displaying the achievements of artificial intelligence more intuitively and effectively. In addition, the motion of the humanoid robot is realized entirely by a feedback system based on vision and hearing, giving it a human-like sense of autonomous movement; it is easy for the user to operate, reflects the intelligence of the humanoid robot more comprehensively, improves the user experience, and produces a very friendly and effective effect in publicity, demonstration, and service applications.
An embodiment of the second aspect of the present invention discloses a humanoid robot control system based on artificial intelligence, comprising: a receiver module for receiving a voice signal and/or an image signal input by a user; an artificial intelligence module for determining the intention of the user according to the voice signal and/or the image signal; a control module for processing the intention of the user; and a feedback module for feeding the processing result of the control module back to the user in a multi-modal output mode, wherein the multi-modal output mode comprises one or more of a motion output mode of the humanoid robot, an image or video output mode, and an audio output mode.
According to the humanoid robot control system based on artificial intelligence of the embodiment of the present invention, the user's voice signal and/or image signal can be collected in real time, and after artificial-intelligence analysis the humanoid robot is autonomously controlled to perform a corresponding action, display an image related to the user's intention, or play audio related to the user's intention, enriching the means of interaction with the user and displaying the achievements of artificial intelligence more intuitively and effectively. In addition, the motion of the humanoid robot is realized entirely by a feedback system based on vision and hearing, giving it a human-like sense of autonomous movement; it is easy for the user to operate, reflects the intelligence of the humanoid robot more comprehensively, improves the user experience, and produces a very friendly and effective effect in publicity, demonstration, and service applications.
An embodiment of the third aspect of the present invention discloses a humanoid robot comprising the humanoid robot control system based on artificial intelligence according to the embodiment of the second aspect. This humanoid robot can collect the user's voice signal and/or image signal in real time and, after artificial-intelligence analysis, autonomously perform a corresponding action, display an image related to the user's intention, or play audio related to the user's intention, enriching the means of interaction with the user and displaying the achievements of artificial intelligence more intuitively and effectively. In addition, the motion of the humanoid robot is realized entirely by a feedback system based on vision and hearing, giving it a human-like sense of autonomous movement; it is easy to operate, reflects the intelligence of the humanoid robot more comprehensively, improves the user experience, and produces a very friendly and effective effect in publicity, demonstration, and service applications.
Additional aspects and advantages of the present invention will be given in part in the following description, will in part become obvious from the following description, or will be learned through practice of the present invention.
Accompanying drawing explanation
The above and/or additional aspects and advantages of the present invention will become obvious and easy to understand from the following description of the embodiments in conjunction with the accompanying drawings, in which:
Fig. 1 is a flowchart of a humanoid robot control method based on artificial intelligence according to an embodiment of the present invention;
Fig. 2 is a detailed flowchart of determining the user's intention in a humanoid robot control method based on artificial intelligence according to an embodiment of the present invention;
Fig. 3 is a detailed flowchart of determining the user's intention in a humanoid robot control method based on artificial intelligence according to another embodiment of the present invention;
Fig. 4 is a flowchart of controlling the humanoid robot to move in front of the user in a humanoid robot control method based on artificial intelligence according to an embodiment of the present invention;
Fig. 5 is a structural block diagram of a humanoid robot control system based on artificial intelligence according to an embodiment of the present invention; and
Fig. 6 is a frame diagram of a humanoid robot based on artificial intelligence according to an embodiment of the present invention.
Embodiment
Embodiments of the present invention are described in detail below; examples of the embodiments are shown in the accompanying drawings, in which the same or similar reference numerals throughout denote the same or similar elements or elements having the same or similar functions. The embodiments described below with reference to the drawings are exemplary, intended only to explain the present invention, and should not be construed as limiting the present invention.
In the description of the present invention, it should be understood that terms indicating orientation or positional relationships, such as "longitudinal", "transverse", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", and "outer", are based on the orientations or positional relationships shown in the drawings, are used only to facilitate and simplify the description of the present invention, and do not indicate or imply that the device or element referred to must have a specific orientation or be constructed and operated in a specific orientation; they therefore should not be construed as limiting the present invention.
In the description of the present invention, it should be noted that, unless otherwise specified and limited, the terms "mounted", "connected", and "coupled" should be understood broadly; for example, the connection may be a mechanical connection or an electrical connection, an internal connection between two elements, a direct connection, or an indirect connection through an intermediary. For those of ordinary skill in the art, the specific meaning of these terms can be understood according to the specific circumstances.
In order to solve the problems in the related art that humanoid robots are not intelligent enough and cannot interact well with human beings, the present invention provides, on the basis of artificial intelligence, a humanoid robot control method, a control system, and a humanoid robot with a high degree of intelligence and a good human-interaction experience. Artificial intelligence (AI) is a new technological science that studies and develops theories, methods, and application systems for simulating, extending, and expanding human intelligence. Artificial intelligence is a branch of computer science that attempts to understand the essence of intelligence and to produce a new kind of intelligent machine that can respond in a way similar to human intelligence; research in this field includes robotics, speech recognition, image recognition, natural language processing, and expert systems.
Artificial intelligence simulates the information processes of human consciousness and thinking. Artificial intelligence is not human intelligence, but it can think like a person and may even exceed human intelligence. Artificial intelligence covers a very wide range of sciences and is composed of different fields, such as machine learning and computer vision. In general, one main goal of artificial-intelligence research is to enable machines to handle complex tasks that usually require human intelligence.
The humanoid robot control method based on artificial intelligence, the control system, and the humanoid robot according to embodiments of the present invention are described below with reference to the accompanying drawings.
Fig. 1 is a flowchart of a humanoid robot control method based on artificial intelligence according to an embodiment of the present invention. As shown in Fig. 1, the method comprises the following steps:
S101: receiving a voice signal and/or an image signal input by a user.
Specifically, the voice signal may be input by the user through a microphone, and the image signal may be collected by a camera.
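The patent does not name any particular capture API. Purely as an illustration, a minimal sketch of collecting a short audio clip and a single camera frame with common off-the-shelf libraries (sounddevice and OpenCV, both assumptions not taken from the source) might look like this:

```python
# Illustrative only: capture one short audio clip and one camera frame.
# The libraries (sounddevice, OpenCV) are assumptions; the patent does not name them.
import sounddevice as sd
import cv2

SAMPLE_RATE = 16000      # 16 kHz mono audio, a common speech-recognition rate
RECORD_SECONDS = 2       # short window, matching the ~2 s window used later

def capture_voice_signal():
    """Record a short mono audio clip from the default microphone."""
    audio = sd.rec(int(RECORD_SECONDS * SAMPLE_RATE),
                   samplerate=SAMPLE_RATE, channels=1, dtype="float32")
    sd.wait()                      # block until recording finishes
    return audio.squeeze()

def capture_image_signal(camera_index=0):
    """Grab a single frame from the robot's camera."""
    cap = cv2.VideoCapture(camera_index)
    ok, frame = cap.read()
    cap.release()
    return frame if ok else None

if __name__ == "__main__":
    voice = capture_voice_signal()
    image = capture_image_signal()
    print("audio samples:", voice.shape, "image:", None if image is None else image.shape)
```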
S102: determining the user's intention according to the voice signal and/or the image signal. That is, the voice signal and/or the image signal may be analyzed and processed by artificial intelligence to determine the user's intention. It should be noted that the user's intention may be determined by analyzing either the voice signal or the image signal alone, or by combining the two.
Specifically, as shown in Fig. 2, the user's intention may be determined in either of the following two ways:
1. Speech recognition is performed on the voice signal, and one or more of natural language understanding, semantic analysis, and sentiment analysis are performed on the recognition result to determine the user's intention. Specifically, processing the voice signal requires speech recognition, natural language understanding, semantic analysis, machine translation, sentiment analysis, and the like; through this processing, the humanoid robot can understand the meaning of the user's speech whenever the user says something during the interaction.
2. Speech recognition is performed on the voice signal, one or more of natural language understanding, semantic analysis, and sentiment analysis are performed on the recognition result, and the image signal is combined to determine the user's intention. Specifically, once the humanoid robot understands the meaning of the user's speech, it can further combine the image signal of the user to determine the user's intention definitively. For example, when the user's speech is "shake hands" and the image signal shows that the user is extending the right hand, it can be determined that the user intends to shake hands with the humanoid robot.
In addition, combining the two signals not only determines the user's intention definitively; when one signal cannot be recognized, the intention can still be determined from the other. For example, when the user's action is ambiguous or no clear action is given, the user's intention can be determined from the recognized speech "shake hands". Likewise, when the voice signal is poor and cannot be recognized, the user's intention, namely shaking hands, can be determined from the recognized action of the user extending the right hand in the image signal.
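As an informal illustration of this combination, the following sketch fuses a recognized utterance with a recognized gesture and falls back to whichever cue is available; the intent labels, dictionaries, and recognizer outputs are assumptions made for the example, not part of the patent:

```python
# Hypothetical sketch of combining speech and image cues to determine the user's
# intention, with either modality able to stand in when the other is missing.
SPEECH_INTENTS = {"shake hands": "handshake", "let's dance": "dance"}
GESTURE_INTENTS = {"extend_right_hand": "handshake", "wave": "greet"}

def intent_from_speech(transcript):
    """Map a recognized utterance to an intent label (None if unclear)."""
    if transcript is None:
        return None
    return SPEECH_INTENTS.get(transcript.strip().lower())

def intent_from_gesture(gesture_label):
    """Map a recognized limb action to an intent label (None if unclear)."""
    return GESTURE_INTENTS.get(gesture_label)

def determine_user_intent(transcript=None, gesture_label=None):
    """Fuse both cues: if both agree the intent is confirmed; if one is missing
    or ambiguous, the other one decides."""
    speech_intent = intent_from_speech(transcript)
    gesture_intent = intent_from_gesture(gesture_label)
    if speech_intent and gesture_intent:
        return speech_intent if speech_intent == gesture_intent else None  # conflict: undecided
    return speech_intent or gesture_intent

if __name__ == "__main__":
    # e.g. the user says "shake hands" while extending the right hand
    print(determine_user_intent("shake hands", "extend_right_hand"))  # -> "handshake"
    print(determine_user_intent(None, "extend_right_hand"))           # -> "handshake"
```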
As shown in Fig. 3, the user's intention may also be determined in the following two further ways:
3. Image recognition is performed on the image signal to identify the user in the image, the user's limb action is determined from the differences in the user's posture between multiple frames, and the user's intention is determined from that limb action. Specifically, a video signal of the user (i.e., multiple consecutive images) is collected over a short period of time (e.g., 2 seconds), and the user's limb action is determined from the differences between the consecutive frames. Taking the action of the user extending the right hand as an example, the limb action is determined to be "extending the right hand", from which the user's intention of "shaking hands" is determined.
4. Image recognition is performed on the image signal to identify the user in the image, the user's limb action is determined from the differences in the user's posture between multiple frames, and the user's intention is determined from that limb action and/or the voice signal. Specifically, after the image signal shows that the user's limb action is "extending the right hand", the voice signal is combined; if the meaning of the voice signal is recognized as "shake hands", combining the two determines the user's intention definitively.
In addition, as noted above, combining the two signals not only determines the user's intention definitively; when one signal cannot be recognized, the intention can still be determined from the other. For example, when the user's action is ambiguous or no clear action is given, the user's intention can be determined from the recognized speech "shake hands"; likewise, when the voice signal cannot be recognized, the intention of shaking hands can be determined from the recognized action of the user extending the right hand in the image signal.
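A rough sketch of the frame-difference idea is given below; the region split and the threshold are illustrative assumptions, since the patent only states that the limb action is inferred from differences between consecutive frames over a short window:

```python
# Illustrative sketch: infer a coarse limb action from inter-frame differences
# in a short (~2 s) image sequence.
import numpy as np

def detect_limb_action(frames, motion_threshold=12.0):
    """frames: list of grayscale images (H x W, uint8) covering about two seconds."""
    if len(frames) < 2:
        return None
    diffs = [np.abs(frames[i + 1].astype(np.int16) - frames[i].astype(np.int16))
             for i in range(len(frames) - 1)]
    motion = np.mean(diffs, axis=0)          # average per-pixel motion energy
    h, w = motion.shape
    # Assumption for this sketch: the camera faces the user, so the user's right
    # arm appears in the left half of the image.
    right_arm_region = motion[:, : w // 2]
    other_region = motion[:, w // 2 :]
    if right_arm_region.mean() > motion_threshold and \
       right_arm_region.mean() > 2 * other_region.mean():
        return "extend_right_hand"
    return None
```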
S103: processing the user's intention and feeding the processing result back to the user in a multi-modal output mode, wherein the multi-modal output mode comprises one or more of a motion output mode of the humanoid robot, an image or video output mode, and an audio output mode.
In one embodiment of the present invention, feeding the processing result back to the user in the multi-modal output mode comprises: controlling the humanoid robot to perform an action corresponding to the user's intention; and/or displaying an expression related to the user's intention; and/or presenting an image or video related to the user's intention; and/or playing audio related to the user's intention.
For example, when the humanoid robot determines that the user's intention is "shaking hands", it can intelligently perform one of the following: extend its right hand, display a "smiling face" image corresponding to a friendly scene such as shaking hands, or play music corresponding to such a friendly scene; it can also perform several of these feedback modes in combination, for example displaying the "smiling face" image while extending the right hand to show friendliness. This improves the human-machine interaction experience of the humanoid robot.
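The mapping from a determined intention to a combination of outputs could be organized as a simple lookup, as in the following sketch; the command names and file names are assumptions for illustration only:

```python
# Sketch of mapping a determined intention to a combination of multi-modal outputs
# (motion, expression image, audio). Command and file names are illustrative.
MULTIMODAL_RESPONSES = {
    "handshake": {
        "motion": "extend_right_arm",       # action output
        "expression": "smiling_face.png",   # expression image shown on the head screen
        "audio": "friendly_theme.mp3",      # audio matching the friendly scene
    },
    "dance": {
        "motion": "swing_head_and_arms",
        "expression": "happy_face.png",
        "audio": "favorite_song.mp3",
    },
}

def feedback_for_intent(intent):
    """Return the set of output commands for an intention; any subset may be used."""
    return MULTIMODAL_RESPONSES.get(intent, {"expression": "neutral_face.png"})

# e.g. feedback_for_intent("handshake") -> motion + expression + audio together
```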
As shown in Fig. 4, in one embodiment of the present invention, before determining the user's intention according to the voice signal and/or the image signal, the method further comprises:
S401: detecting whether the user is calling the humanoid robot. For example, the humanoid robot is given a name in advance, such as "Xiao Bai". When the user calls "Xiao Bai", the humanoid robot performs speech recognition on the call through artificial intelligence, understands its meaning, and thereby knows whether it is being called.
S402: if so, activating the humanoid robot, performing sound source localization according to the user's call to determine the position of the user, and controlling the humanoid robot to move in front of the user.
Specifically, if the humanoid robot determines that the user is indeed calling it, it is activated, for example by powering each functional module of the humanoid robot from the battery, so that the humanoid robot is in an activated state. In one embodiment of the present invention, to improve the human-machine interaction experience, when the humanoid robot determines that the user is calling it, it may display an activation expression. The activation expression may be predefined; any expression that intuitively reflects that the robot has been activated is acceptable. This further improves the human-machine interaction experience.
In addition, sound source localization is performed according to the user's call to determine the user's position. The sounds around the humanoid robot are detected using the ManyEars sound-source-localization technique. Specifically, sound-source signals are collected by a microphone array, effective sound-signal detection is then performed on them, and the detected sound sources are separated by the ManyEars technique to obtain multiple independent sources (the term "at least one" in this embodiment can be understood as one or more). Further, the sound-source-localization computation in the ManyEars technique locates the relevant source, thereby determining the user's position.
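The ManyEars framework itself performs multi-source separation and localization over the eight-microphone array; as a greatly simplified stand-in, the sketch below estimates a bearing from a single microphone pair by cross-correlation, with the microphone spacing and sampling rate assumed:

```python
# Toy direction-of-arrival sketch for one microphone pair: estimate the time
# difference of arrival (TDOA) by cross-correlation and convert it to an angle.
# This is not the ManyEars algorithm, only an illustration of the principle.
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s
MIC_SPACING = 0.2        # metres between the two microphones (assumed)
SAMPLE_RATE = 16000      # Hz (assumed)

def estimate_direction(left_mic, right_mic):
    """Return the bearing of the sound source in degrees (0 = straight ahead)."""
    corr = np.correlate(left_mic, right_mic, mode="full")
    lag = np.argmax(corr) - (len(right_mic) - 1)      # delay between channels, in samples
    tdoa = lag / SAMPLE_RATE                           # seconds
    # Far-field assumption: sin(theta) = c * tdoa / d, clipped to the valid range.
    sin_theta = np.clip(SPEED_OF_SOUND * tdoa / MIC_SPACING, -1.0, 1.0)
    return float(np.degrees(np.arcsin(sin_theta)))
```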
After the user's position is determined, the humanoid robot can be automatically controlled to move in front of the user. Further, the method also comprises: detecting whether the humanoid robot has moved in front of the user, and if not, continuing to control the humanoid robot to move until it arrives in front of the user. A closed loop is thus formed in the motion control of the humanoid robot, ensuring that its motion is more accurate.
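The closed-loop structure described here can be sketched as a loop that repeatedly re-senses the user and keeps driving until arrival; the sensing and drive functions are placeholders (assumptions), and only the feedback structure follows the text:

```python
# Closed-loop "move in front of the user" sketch. Sensing and drive functions are
# passed in as placeholders; only the re-check-until-arrived loop is the point.
import time

ARRIVAL_DISTANCE = 0.5   # metres considered "in front of the user" (assumed)

def move_to_user(get_user_bearing_and_distance, send_drive_command, timeout_s=30.0):
    """get_user_bearing_and_distance() -> (bearing_deg, distance_m);
    send_drive_command(linear_m_s, angular_rad_s) drives the two-wheel base."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        bearing_deg, distance_m = get_user_bearing_and_distance()
        if distance_m <= ARRIVAL_DISTANCE:
            send_drive_command(0.0, 0.0)       # arrived: stop
            return True
        # Simple proportional control: turn toward the user, drive forward.
        angular = 0.02 * bearing_deg
        linear = min(0.4, 0.5 * (distance_m - ARRIVAL_DISTANCE))
        send_drive_command(linear, angular)
        time.sleep(0.1)                        # re-check, forming the closed loop
    send_drive_command(0.0, 0.0)
    return False
```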
In one embodiment of the present invention, the method further comprises: when controlling the humanoid robot to move in front of the user, turning the camera of the humanoid robot toward the user to photograph the user, and performing face recognition on the image signal of the user to determine the user's identity information. In this way, unauthorized users can be prevented from using the humanoid robot on the one hand, and on the other hand the humanoid robot can provide personalized services according to the user's identity information, improving the experience of using the humanoid robot. For example, just as a computer can set up multiple accounts for different users, each of whom can enter the system through his or her own account and make personal settings under it, the humanoid robot provides personalized services in a similar way: after determining the user's identity, it can provide different users with personalized services according to their identities.
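A hedged sketch of this identity step follows: a face embedding is compared against registered users, and an unknown face yields no personalized profile. The embedding source, the user records, and the threshold are all assumptions:

```python
# Sketch: look up the user's identity from a face embedding and select a
# personalized profile; unknown faces get no personalization.
import numpy as np

REGISTERED_USERS = {          # user id -> (stored face embedding, personal settings)
    "user_a": (np.array([0.11, 0.82, 0.45]), {"favorite_music": "waltz.mp3"}),
    "user_b": (np.array([0.73, 0.20, 0.66]), {"favorite_music": "jazz.mp3"}),
}
MATCH_THRESHOLD = 0.9         # minimum cosine similarity to accept an identity

def identify_user(face_embedding):
    """Return (user_id, settings) for the best match, or (None, None) if unknown."""
    best_id, best_score = None, -1.0
    for user_id, (stored, _) in REGISTERED_USERS.items():
        score = float(np.dot(face_embedding, stored) /
                      (np.linalg.norm(face_embedding) * np.linalg.norm(stored)))
        if score > best_score:
            best_id, best_score = user_id, score
    if best_score < MATCH_THRESHOLD:
        return None, None      # unknown person: no personalized service
    return best_id, REGISTERED_USERS[best_id][1]
```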
As a specific example, suppose it is judged through artificial intelligence that the user is calling the humanoid robot. The humanoid robot locates the user's position through the microphone array, intelligently moves in front of the user, turns its head to face the user, takes a photograph with its camera, and then performs face recognition to determine the user's identity, so as to provide the user with a suitable means of interaction or a personalized service. When the user then reaches out a hand, the robot collects the image signal corresponding to this action, uses artificial intelligence to analyze the user's behavior, and determines that a handshake is intended; it can then actively extend its arm to a suitable position, display a "smiling face" image corresponding to a friendly scene such as shaking hands, or play music corresponding to such a friendly scene, or perform several of these feedback modes in combination, for example displaying the "smiling face" image while extending the right hand to show friendliness. This improves the human-machine interaction experience of the humanoid robot.
The method of the embodiment of the present invention is described in more detail below with a specific example. In an indoor environment, the user of the humanoid robot calls its name; the humanoid robot captures this sound through the microphone array arranged on its body and locates the user's position with a localization algorithm. After capturing the call, the humanoid robot is automatically activated and power is supplied to each of its components; the display screen on the humanoid robot's head shows the post-activation expression (i.e., the activation expression), and the wheels on the humanoid robot's chassis then move it in front of the user in a two-wheel differential manner. The head is rotated toward the user so that the head camera can photograph and recognize the user, and the most appropriate way of interacting with the user, or a personalized service, is then retrieved from an existing database. After the user's voice signal is collected by the microphone or the user's image signal is collected by the camera, the user's intention is learned through artificial-intelligence processing and analysis; audio related to the user's intention can then be output through the speaker, pictures or video related to the user's intention can be output through the main display, and the robot can even interact with the user through expressions and actions. For example, when the user says "let's dance", the humanoid robot captures this voice signal through the microphone, analyzes and understands its meaning through artificial intelligence, plays music the user likes through the speaker, captures the user's position and actions through the camera, moves to a suitable position, and swings its head and arms along with the music, achieving the effect intended by the user's "let's dance" voice signal. The structural hardware of the humanoid robot is thus organically combined with artificial intelligence, achieving a good interaction effect with the user.
According to the humanoid robot control method based on artificial intelligence of the embodiment of the present invention, the user's voice signal and/or image signal can be collected in real time, and after artificial-intelligence analysis the humanoid robot is autonomously controlled to perform a corresponding action, display an image related to the user's intention, or play audio related to the user's intention, enriching the means of interaction with the user and displaying the achievements of artificial intelligence more intuitively and effectively. In addition, the motion of the humanoid robot is realized entirely by a feedback system based on vision and hearing, giving it a human-like sense of autonomous movement; it is easy for the user to operate, reflects the intelligence of the humanoid robot more comprehensively, improves the user experience, and produces a very friendly and effective effect in publicity, demonstration, and service applications.
Fig. 5 is a structural block diagram of a humanoid robot control system based on artificial intelligence according to an embodiment of the present invention. As shown in Fig. 5, the control system 500 comprises a receiver module 510, an artificial intelligence module 520, a control module 530, and a feedback module 540.
Specifically, the receiver module 510 is configured to receive a voice signal and/or an image signal input by the user. The voice signal may be input by the user through a microphone, and the image signal may be collected by a camera.
The artificial intelligence module 520 is configured to determine the user's intention according to the voice signal and/or the image signal.
For example, the voice signal and/or the image signal may be analyzed and processed by artificial intelligence to determine the user's intention. It should be noted that the user's intention may be determined by analyzing either the voice signal or the image signal alone, or by combining the two.
Specifically, as shown in Fig. 2, the user's intention may be determined in either of the following two ways:
1. Speech recognition is performed on the voice signal, and one or more of natural language understanding, semantic analysis, and sentiment analysis are performed on the recognition result to determine the user's intention. Specifically, processing the voice signal requires speech recognition, natural language understanding, semantic analysis, machine translation, sentiment analysis, and the like; through this processing, the humanoid robot can understand the meaning of the user's speech whenever the user says something during the interaction.
2. Speech recognition is performed on the voice signal, one or more of natural language understanding, semantic analysis, and sentiment analysis are performed on the recognition result, and the image signal is combined to determine the user's intention. Specifically, once the humanoid robot understands the meaning of the user's speech, it can further combine the image signal of the user to determine the user's intention definitively. For example, when the user's speech is "shake hands" and the image signal shows that the user is extending the right hand, it can be determined that the user intends to shake hands with the humanoid robot.
In addition, combining the two signals not only determines the user's intention definitively; when one signal cannot be recognized, the intention can still be determined from the other. For example, when the user's action is ambiguous or no clear action is given, the user's intention can be determined from the recognized speech "shake hands". Likewise, when the voice signal is poor and cannot be recognized, the user's intention, namely shaking hands, can be determined from the recognized action of the user extending the right hand in the image signal.
As shown in Fig. 3, the user's intention may also be determined in the following two further ways:
3. Image recognition is performed on the image signal to identify the user in the image, the user's limb action is determined from the differences in the user's posture between multiple frames, and the user's intention is determined from that limb action. Specifically, a video signal of the user (i.e., multiple consecutive images) is collected over a short period of time (e.g., 2 seconds), and the user's limb action is determined from the differences between the consecutive frames. Taking the action of the user extending the right hand as an example, the limb action is determined to be "extending the right hand", from which the user's intention of "shaking hands" is determined.
4. Image recognition is performed on the image signal to identify the user in the image, the user's limb action is determined from the differences in the user's posture between multiple frames, and the user's intention is determined from that limb action and/or the voice signal. Specifically, after the image signal shows that the user's limb action is "extending the right hand", the voice signal is combined; if the meaning of the voice signal is recognized as "shake hands", combining the two determines the user's intention definitively.
In addition, as noted above, combining the two signals not only determines the user's intention definitively; when one signal cannot be recognized, the intention can still be determined from the other. For example, when the user's action is ambiguous or no clear action is given, the user's intention can be determined from the recognized speech "shake hands"; likewise, when the voice signal cannot be recognized, the intention of shaking hands can be determined from the recognized action of the user extending the right hand in the image signal.
The control module 530 is configured to process the user's intention.
The feedback module 540 is configured to feed the processing result back to the user in a multi-modal output mode, wherein the multi-modal output mode comprises one or more of a motion output mode of the humanoid robot, an image or video output mode, and an audio output mode.
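The division of work among the four modules can be pictured with the following structural sketch; the method bodies are placeholders and the class and method names are assumptions, with only the responsibilities taken from the description above:

```python
# Structural sketch of the four modules (receiver, artificial intelligence,
# control, feedback). Method bodies are placeholders for illustration only.
class ReceiverModule:
    def receive(self):
        """Return the user's voice signal and/or image signal (placeholder)."""
        return {"voice": None, "image": None}

class ArtificialIntelligenceModule:
    def determine_intent(self, signals):
        """Analyze the voice/image signals and return the user's intention."""
        return "handshake"                      # placeholder result

class ControlModule:
    def process(self, intent):
        """Process the intention into a concrete plan of outputs."""
        return {"motion": "extend_right_arm", "expression": "smiling_face.png"}

class FeedbackModule:
    def feed_back(self, plan):
        """Deliver the plan through motion, image/video and audio outputs."""
        for channel, command in plan.items():
            print(f"output on {channel}: {command}")

class HumanoidRobotControlSystem:
    def __init__(self):
        self.receiver = ReceiverModule()
        self.ai = ArtificialIntelligenceModule()
        self.control = ControlModule()
        self.feedback = FeedbackModule()

    def run_once(self):
        signals = self.receiver.receive()
        intent = self.ai.determine_intent(signals)
        plan = self.control.process(intent)
        self.feedback.feed_back(plan)
```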
In one embodiment of the present invention, feeding the processing result back to the user in the multi-modal output mode comprises: controlling the humanoid robot to perform an action corresponding to the user's intention; and/or displaying an expression related to the user's intention; and/or presenting an image or video related to the user's intention; and/or playing audio related to the user's intention.
For example, when the humanoid robot determines that the user's intention is "shaking hands", it can intelligently perform one of the following: extend its right hand, display a "smiling face" image corresponding to a friendly scene such as shaking hands, or play music corresponding to such a friendly scene; it can also perform several of these feedback modes in combination, for example displaying the "smiling face" image while extending the right hand to show friendliness. This improves the human-machine interaction experience of the humanoid robot.
As shown in Fig. 4, in one embodiment of the present invention, before determining the user's intention according to the voice signal and/or the image signal, the artificial intelligence module 520 is also configured to perform:
S401: detecting whether the user is calling the humanoid robot. For example, the humanoid robot is given a name in advance, such as "Xiao Bai". When the user calls "Xiao Bai", the humanoid robot performs speech recognition on the call through artificial intelligence, understands its meaning, and thereby knows whether it is being called.
S402: if so, activating the humanoid robot, performing sound source localization according to the user's call to determine the position of the user, and controlling the humanoid robot to move in front of the user.
Specifically, if the humanoid robot determines that the user is indeed calling it, it is activated, for example by powering each functional module of the humanoid robot from the battery, so that the humanoid robot is in an activated state. In one embodiment of the present invention, to improve the human-machine interaction experience, when the humanoid robot determines that the user is calling it, it may display an activation expression. The activation expression may be predefined; any expression that intuitively reflects that the robot has been activated is acceptable. This further improves the human-machine interaction experience.
In addition, sound source localization is performed according to the user's call to determine the user's position. The sounds around the humanoid robot are detected using the ManyEars sound-source-localization technique. Specifically, sound-source signals are collected by a microphone array, effective sound-signal detection is then performed on them, and the detected sound sources are separated by the ManyEars technique to obtain multiple independent sources (the term "at least one" in this embodiment can be understood as one or more). Further, the sound-source-localization computation in the ManyEars technique locates the relevant source, thereby determining the user's position.
After the user's position is determined, the humanoid robot can be automatically controlled to move in front of the user. Further, the control module 530 is also configured to detect whether the humanoid robot has moved in front of the user and, if not, to continue to control the humanoid robot to move until it arrives in front of the user. A closed loop is thus formed in the motion control of the humanoid robot, ensuring that its motion is more accurate and reliable.
In one embodiment of the present invention, the control module 530 is also configured to: when controlling the humanoid robot to move in front of the user, turn the camera of the humanoid robot toward the user to photograph the user, and perform face recognition on the image signal of the user to determine the user's identity information. In this way, unauthorized users can be prevented from using the humanoid robot on the one hand, and on the other hand the humanoid robot can provide personalized services according to the user's identity information, improving the experience of using the humanoid robot. For example, just as a computer can set up multiple accounts for different users, each of whom can enter the system through his or her own account and make personal settings under it, the humanoid robot provides personalized services in a similar way: after determining the user's identity, it can provide different users with personalized services according to their identities.
As a specific example, suppose it is judged through artificial intelligence that the user is calling the humanoid robot. The humanoid robot locates the user's position through the microphone array, intelligently moves in front of the user, turns its head to face the user, takes a photograph with its camera, and then performs face recognition to determine the user's identity, so as to provide the user with a suitable means of interaction or a personalized service. When the user then reaches out a hand, the robot collects the image signal corresponding to this action, uses artificial intelligence to analyze the user's behavior, and determines that a handshake is intended; it can then actively extend its arm to a suitable position, display a "smiling face" image corresponding to a friendly scene such as shaking hands, or play music corresponding to such a friendly scene, or perform several of these feedback modes in combination, for example displaying the "smiling face" image while extending the right hand to show friendliness. This improves the human-machine interaction experience of the humanoid robot.
The system of the embodiment of the present invention is described in more detail below with a specific example. In an indoor environment, the user of the humanoid robot calls its name; the humanoid robot captures this sound through the microphone array arranged on its body and locates the user's position with a localization algorithm. After capturing the call, the humanoid robot is automatically activated and power is supplied to each of its components; the display screen on the humanoid robot's head shows the post-activation expression (i.e., the activation expression), and the wheels on the humanoid robot's chassis then move it in front of the user in a two-wheel differential manner. The head is rotated toward the user so that the head camera can photograph and recognize the user, and the most appropriate way of interacting with the user, or a personalized service, is then retrieved from an existing database. After the user's voice signal is collected by the microphone or the user's image signal is collected by the camera, the user's intention is learned through artificial-intelligence processing and analysis; audio related to the user's intention can then be output through the speaker, pictures or video related to the user's intention can be output through the main display, and the robot can even interact with the user through expressions and actions. For example, when the user says "let's dance", the humanoid robot captures this voice signal through the microphone, analyzes and understands its meaning through artificial intelligence, plays music the user likes through the speaker, captures the user's position and actions through the camera, moves to a suitable position, and swings its head and arms along with the music, achieving the effect intended by the user's "let's dance" voice signal. The structural hardware of the humanoid robot is thus organically combined with artificial intelligence, achieving a good interaction effect with the user.
According to the humanoid robot control system based on artificial intelligence of the embodiment of the present invention, the user's voice signal and/or image signal can be collected in real time, and after artificial-intelligence analysis the humanoid robot is autonomously controlled to perform a corresponding action, display an image related to the user's intention, or play audio related to the user's intention, enriching the means of interaction with the user and displaying the achievements of artificial intelligence more intuitively and effectively. In addition, the motion of the humanoid robot is realized entirely by a feedback system based on vision and hearing, giving it a human-like sense of autonomous movement; it is easy for the user to operate, reflects the intelligence of the humanoid robot more comprehensively, improves the user experience, and produces a very friendly and effective effect in publicity, demonstration, and service applications.
It should be noted that the specific implementation of the humanoid robot control system based on artificial intelligence of the embodiment of the present invention is similar to that of the humanoid robot control method based on artificial intelligence of the embodiment of the present invention; refer specifically to the description of the method part, which is not repeated here in order to reduce redundancy.
Further, the present invention discloses a humanoid robot comprising the humanoid robot control system based on artificial intelligence according to any of the above embodiments. This humanoid robot can collect the user's voice signal and/or image signal in real time and, after artificial-intelligence analysis, autonomously perform a corresponding action, enriching the means of interaction with the user. In addition, the motion of the humanoid robot is realized entirely by a feedback system based on vision and hearing, giving it a human-like sense of autonomous movement; it is easy to operate, reflects the intelligence of the humanoid robot more comprehensively, and improves the user experience.
As shown in Fig. 6, as a specific example, the structural design of the humanoid robot mainly meets the following four requirements: omnidirectional locomotion capability, a multi-degree-of-freedom form of presentation, visual and auditory perception, and the ability to express itself through expressions, voice, video, and other means.
The composition of the humanoid robot, as shown in Fig. 6, consists of a head assembly, a skeleton and shell, a control system, an energy system, and a chassis assembly. The structural design and functional realization of these five parts are set out below.
1. Head assembly
The head assembly comprises a head skeleton and the mechanisms mounted on the head, including an expression display screen, a speaker, and a camera. On the one hand, it performs video acquisition as the input element of visual feedback, collecting information such as the user's position, actions, and expressions, which after analysis is fed back to the outputs. On the other hand, it carries the speaker and the expression display screen, and can therefore output audio and expressions.
2. Skeleton and shell
The skeleton structure of the humanoid robot is equivalent to human bones; it is the main load-bearing part of the robot and provides the mounting structure for each electrical device of the robot. The shell structure is equivalent to human skin and gives the robot a more anthropomorphic appearance, so that the user feels more natural when communicating with the robot. At appropriate positions on the shell, eight microphones are uniformly arranged in two layers to form a microphone array serving as the input element of auditory feedback; the array collects the user's sound, and through comparative analysis of the microphone signals the user's relative position is computed, providing the basis for the target position of the robot's motion.
3. Control system
The control system is the core of the robot structure and comprises three main parts: a mainboard, a drive board, and actuators. On the one hand, after the artificial-intelligence analysis issues an instruction, the mainboard receives the action command and converts it into control signals, and through the drive board the servos output speed and position, realizing the motion of the robot. On the other hand, other interactive instructions can be converted into signals that control the speaker, the screen, and the like, realizing the robot's interaction in audio, video, expressions, and other respects.
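A minimal sketch of this flow, under the assumption of a hypothetical drive-board interface and made-up joint names and angles, might look like:

```python
# Sketch: the mainboard turns an action instruction from the AI analysis into
# servo position/speed targets for a (hypothetical) drive-board interface.
ACTION_LIBRARY = {
    "extend_right_arm": [
        # (joint name, target position in degrees, speed in degrees per second)
        ("right_shoulder_pitch", 60.0, 45.0),
        ("right_elbow", 10.0, 45.0),
    ],
}

def execute_action(action_name, drive_board_send):
    """drive_board_send(joint, position_deg, speed_deg_s) is the assumed drive-board API."""
    for joint, position, speed in ACTION_LIBRARY.get(action_name, []):
        drive_board_send(joint, position, speed)

# Example: execute_action("extend_right_arm", lambda j, p, s: print(j, p, s))
```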
4. Energy system
The energy system provides the energy and power for the whole robot and comprises a 36 V 40 Ah high-capacity lithium battery, a power conversion module, a power switch, an emergency-stop circuit breaker, wiring harnesses, and the like, ensuring that each part of the robot can perform its corresponding function.
5. Chassis assembly
The chassis assembly comprises high-power brushless motors, a chassis skeleton, driving wheels, driven wheels, and other parts. After the energy system supplies power, the brushless motors drive the driving wheels, and a two-wheel differential motion-planning scheme is adopted so that the robot can move omnidirectionally. The driven wheels play a supporting role, keeping the robot's center of gravity between the driving wheels and the driven wheels and keeping the whole machine balanced.
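The two-wheel differential scheme can be summarized by the standard kinematics below; the wheel radius and track width are illustrative values, not taken from the patent:

```python
# Two-wheel differential-drive kinematics sketch: convert a desired body velocity
# (forward speed v, turn rate w) into left/right wheel angular speeds.
WHEEL_RADIUS = 0.08   # metres (assumed)
TRACK_WIDTH = 0.40    # distance between the two driving wheels, metres (assumed)

def wheel_speeds(v, w):
    """Return (left, right) wheel angular speeds in rad/s for body velocity v (m/s)
    and yaw rate w (rad/s). Equal speeds drive straight; opposite speeds turn the
    robot in place, which is what gives the chassis its turn-anywhere mobility."""
    v_left = v - w * TRACK_WIDTH / 2.0
    v_right = v + w * TRACK_WIDTH / 2.0
    return v_left / WHEEL_RADIUS, v_right / WHEEL_RADIUS

# Example: wheel_speeds(0.3, 0.0) drives straight; wheel_speeds(0.0, 0.8) spins
# the wheels in opposite directions to rotate on the spot.
```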
For example, in an indoor environment, the user of the humanoid robot calls its name; the humanoid robot captures this sound through the microphone array arranged on its body and locates the user's position with a localization algorithm. After capturing the call, the humanoid robot is automatically activated and power is supplied to each of its components; the display screen on the humanoid robot's head shows the post-activation expression (i.e., the activation expression), and the wheels on the humanoid robot's chassis then move it in front of the user in a two-wheel differential manner. The head is rotated toward the user so that the head camera can photograph and recognize the user, and the most appropriate way of interacting with the user, or a personalized service, is then retrieved from an existing database. After the user's voice signal is collected by the microphone or the user's image signal is collected by the camera, the user's intention is learned through artificial-intelligence processing and analysis; audio related to the user's intention can then be output through the speaker, pictures or video related to the user's intention can be output through the main display, and the robot can even interact with the user through expressions and actions. For example, when the user says "let's dance", the humanoid robot captures this voice signal through the microphone, analyzes and understands its meaning through artificial intelligence, plays music the user likes through the speaker, captures the user's position and actions through the camera, moves to a suitable position, and swings its head and arms along with the music, achieving the effect intended by the user's "let's dance" voice signal. The structural hardware of the humanoid robot is thus organically combined with artificial intelligence, achieving a good interaction effect with the user.
According to the humanoid robot of the embodiment of the present invention, the user's voice signal and/or image signal can be collected in real time, and after artificial-intelligence analysis the humanoid robot is autonomously controlled to perform a corresponding action, display an image related to the user's intention, or play audio related to the user's intention, enriching the means of interaction with the user and displaying the achievements of artificial intelligence more intuitively and effectively. In addition, the motion of the humanoid robot is realized entirely by a feedback system based on vision and hearing, giving it a human-like sense of autonomous movement; it is easy for the user to operate, reflects the intelligence of the humanoid robot more comprehensively, improves the user experience, and produces a very friendly and effective effect in publicity, demonstration, and service applications.
In addition, other configurations and effects of the humanoid robot according to the embodiments of the present invention are known to those of ordinary skill in the art and are not repeated here in order to reduce redundancy.
In the description of the present invention, it should be understood that terms indicating orientation or positional relationships, such as "center", "longitudinal", "transverse", "length", "width", "thickness", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", "clockwise", "counterclockwise", "axial", "radial", and "circumferential", are based on the orientations or positional relationships shown in the drawings, are used only to facilitate and simplify the description of the present invention, and do not indicate or imply that the device or element referred to must have a specific orientation or be constructed and operated in a specific orientation; they therefore cannot be construed as limiting the present invention.
In addition, the terms "first" and "second" are used for descriptive purposes only and cannot be understood as indicating or implying relative importance or as implying the number of the indicated technical features. Thus, a feature qualified by "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "multiple" means at least two, for example two or three, unless otherwise specifically limited.
In the description of this specification, descriptions referring to the terms "an embodiment", "some embodiments", "an example", "a specific example", "some examples", and the like mean that specific features, structures, materials, or characteristics described in conjunction with that embodiment or example are included in at least one embodiment or example of the present invention. In this specification, schematic expressions of these terms do not necessarily refer to the same embodiment or example, and the specific features, structures, materials, or characteristics described may be combined in a suitable manner in any one or more embodiments or examples. In addition, where there is no contradiction, those skilled in the art may combine the different embodiments or examples described in this specification and the features of those different embodiments or examples.
Describe and can be understood in process flow diagram or in this any process otherwise described or method, represent and comprise one or more for realizing the module of the code of the executable instruction of the step of specific logical function or process, fragment or part, and the scope of the preferred embodiment of the present invention comprises other realization, wherein can not according to order that is shown or that discuss, comprise according to involved function by the mode while of basic or by contrary order, carry out n-back test, this should understand by embodiments of the invention person of ordinary skill in the field.
In flow charts represent or in this logic otherwise described and/or step, such as, the sequencing list of the executable instruction for realizing logic function can be considered to, may be embodied in any computer-readable medium, for instruction execution system, device or equipment (as computer based system, comprise the system of processor or other can from instruction execution system, device or equipment instruction fetch and perform the system of instruction) use, or to use in conjunction with these instruction execution systems, device or equipment.With regard to this instructions, " computer-readable medium " can be anyly can to comprise, store, communicate, propagate or transmission procedure for instruction execution system, device or equipment or the device that uses in conjunction with these instruction execution systems, device or equipment.The example more specifically (non-exhaustive list) of computer-readable medium comprises following: the electrical connection section (electronic installation) with one or more wiring, portable computer diskette box (magnetic device), random access memory (RAM), ROM (read-only memory) (ROM), erasablely edit ROM (read-only memory) (EPROM or flash memory), fiber device, and portable optic disk ROM (read-only memory) (CDROM).In addition, computer-readable medium can be even paper or other suitable media that can print described program thereon, because can such as by carrying out optical scanning to paper or other media, then carry out editing, decipher or carry out process with other suitable methods if desired and electronically obtain described program, be then stored in computer memory.
It should be appreciated that the various parts of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, multiple steps or methods may be implemented by software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, they may be implemented by any one of the following technologies known in the art, or by a combination thereof: a discrete logic circuit having logic gates for implementing logical functions on data signals, an application-specific integrated circuit having suitable combinational logic gates, a programmable gate array (PGA), a field programmable gate array (FPGA), and so on.
Those of ordinary skill in the art will appreciate that all or part of the steps carried by the methods of the above embodiments may be completed by a program instructing the relevant hardware; the program may be stored in a computer-readable storage medium and, when executed, performs one of the steps of the method embodiments or a combination thereof.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing module, or each unit may exist physically on its own, or two or more units may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software function module. If the integrated module is implemented in the form of a software function module and is sold or used as an independent product, it may also be stored in a computer-readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like. Although embodiments of the present invention have been shown and described above, it is to be understood that the above embodiments are exemplary and shall not be construed as limiting the present invention, and those of ordinary skill in the art may change, modify, replace and vary the above embodiments within the scope of the present invention.
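To make the relationship between the flow-chart logic described above and executable instructions concrete, the following is a minimal, hypothetical Python sketch of the claimed control flow: the robot waits to be called, activates, locates the user by sound, moves in front of the user, identifies the user by face, determines the user's intention, and feeds the result back multi-modally. Every function name and every recognition or actuation back end below is an illustrative assumption (a stub), not part of the claimed invention.

```python
from typing import Optional


def detect_call(voice: bytes) -> bool:
    """Stub: keyword spotting that decides whether the user is calling the robot."""
    return b"robot" in voice


def localize_sound(voice: bytes) -> float:
    """Stub: sound source localization, returning a bearing in degrees."""
    return 30.0


def recognize_face(image: bytes) -> str:
    """Stub: face recognition that returns the identity of the user in the image."""
    return "known-user"


def determine_intention(voice: bytes, image: Optional[bytes]) -> str:
    """Stub for speech recognition, natural language understanding, semantic and
    sentiment analysis, optionally combined with image-based gesture recognition."""
    return "greet"


def multimodal_feedback(intention: str, user: str) -> None:
    """Feed the processing result back as an action, an expression, an image or
    video demonstration and/or audio."""
    print(f"action: perform a gesture for '{intention}' toward {user}")
    print(f"screen: show an expression related to '{intention}'")
    print(f"audio:  play a spoken reply related to '{intention}'")


def handle_interaction(voice: bytes, image: Optional[bytes]) -> None:
    if not detect_call(voice):
        return  # stay idle until the user calls the robot
    print("robot activated, showing an activation expression")
    direction = localize_sound(voice)
    print(f"moving in front of the user located at {direction} degrees")
    user = recognize_face(image) if image is not None else "unknown-user"
    intention = determine_intention(voice, image)
    multimodal_feedback(intention, user)


if __name__ == "__main__":
    handle_interaction(b"hey robot, hello", image=b"\x00fake-jpeg")
```

Each stub corresponds to one step recited in the claims that follow; a real implementation would replace the stubs with the speech, vision and motion subsystems of the humanoid robot.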

Claims (15)

1. A humanoid robot control method based on artificial intelligence, characterized in that it comprises the following steps:
receiving a voice signal and/or an image signal input by a user;
determining an intention of the user according to the voice signal and/or the image signal; and
processing the intention of the user, and feeding a processing result back to the user in a multi-modal output mode, wherein the multi-modal output mode comprises one or more of an action output mode of the humanoid robot, an image or video output mode, and an audio output mode.
2. The humanoid robot control method based on artificial intelligence according to claim 1, characterized in that, before determining the intention of the user according to the voice signal and/or the image signal, the method further comprises:
detecting whether the user calls the humanoid robot;
if so, activating the humanoid robot, performing sound source localization according to the user's call to determine the position of the user, and controlling the humanoid robot to move in front of the user.
3. The humanoid robot control method based on artificial intelligence according to claim 2, characterized in that, when the humanoid robot is controlled to move in front of the user, the method further comprises: turning a camera of the humanoid robot toward the direction of the user so as to photograph the user, and performing face recognition according to the image signal of the user so as to determine identity information of the user.
4. The humanoid robot control method based on artificial intelligence according to claim 2, characterized in that, after the humanoid robot is activated, the method further comprises: displaying an activation expression of the humanoid robot.
5. The humanoid robot control method based on artificial intelligence according to claim 1, characterized in that determining the intention of the user according to the voice signal and/or the image signal specifically comprises:
performing speech recognition on the voice signal, and performing one or more of natural language understanding, semantic analysis and sentiment analysis on the recognition result, so as to determine the intention of the user; or
performing speech recognition on the voice signal, performing one or more of natural language understanding, semantic analysis and sentiment analysis on the recognition result, and determining the intention of the user in combination with the image signal.
6. The humanoid robot control method based on artificial intelligence according to claim 1, characterized in that determining the intention of the user according to the voice signal and/or the image signal specifically comprises:
performing image recognition on the image signal, determining the user in the image signal, determining a limb action of the user according to differences in the user's movement between multiple frames of the image signal, and determining the intention of the user according to the limb action of the user; or
performing image recognition on the image signal, determining the user in the image signal, determining a limb action of the user according to differences in the user's movement between multiple frames of the image signal, and determining the intention of the user according to the limb action of the user and/or the voice signal.
7. The humanoid robot control method based on artificial intelligence according to claim 1, characterized in that feeding the processing result back to the user in the multi-modal output mode specifically comprises:
controlling the humanoid robot to perform an action corresponding to the intention of the user; and/or
displaying an expression related to the intention of the user; and/or
performing an image demonstration or video display related to the intention of the user; and/or
playing audio related to the intention of the user.
8. A humanoid robot control system based on artificial intelligence, characterized in that it comprises:
a receiver module, configured to receive a voice signal and/or an image signal input by a user;
an artificial intelligence module, configured to determine an intention of the user according to the voice signal and/or the image signal;
a control module, configured to process the intention of the user; and
a feedback module, configured to feed the processing result back to the user in a multi-modal output mode, wherein the multi-modal output mode comprises one or more of an action output mode of the humanoid robot, an image or video output mode, and an audio output mode.
9. The humanoid robot control system based on artificial intelligence according to claim 8, characterized in that the artificial intelligence module is further configured to judge, according to the voice signal, whether the user calls the humanoid robot before determining the intention of the user according to the voice signal and/or the image signal; if so, the control module activates the humanoid robot, the artificial intelligence module further performs sound source localization according to the user's call to determine the position of the user, and the control module controls the humanoid robot to move in front of the user.
10. The humanoid robot control system based on artificial intelligence according to claim 9, characterized in that the control module is further configured to, when controlling the humanoid robot to move in front of the user, turn a camera of the humanoid robot toward the direction of the user so that the camera photographs the user, and the artificial intelligence module is further configured to perform face recognition according to the image signal of the user so as to determine identity information of the user.
11. The humanoid robot control system based on artificial intelligence according to claim 9, characterized in that, after the humanoid robot is activated, the control module further controls the feedback module to display an activation expression of the humanoid robot.
12. The humanoid robot control system based on artificial intelligence according to claim 8, characterized in that the artificial intelligence module is configured to:
perform speech recognition on the voice signal, and perform one or more of natural language understanding, semantic analysis and sentiment analysis on the recognition result, so as to determine the intention of the user; or
perform speech recognition on the voice signal, perform one or more of natural language understanding, semantic analysis and sentiment analysis on the recognition result, and determine the intention of the user in combination with the image signal.
13. The humanoid robot control system based on artificial intelligence according to claim 8, characterized in that the artificial intelligence module is configured to:
perform image recognition on the image signal, determine the user in the image signal, determine a limb action of the user according to differences in the user's movement between multiple frames of the image signal, and determine the intention of the user according to the limb action of the user; or
perform image recognition on the image signal, determine the user in the image signal, determine a limb action of the user according to differences in the user's movement between multiple frames of the image signal, and determine the intention of the user according to the limb action of the user and/or the voice signal.
14. The humanoid robot control system based on artificial intelligence according to claim 8, characterized in that feeding the processing result back to the user in the multi-modal output mode by the feedback module specifically comprises:
controlling the humanoid robot to perform an action corresponding to the intention of the user; and/or
displaying an expression related to the intention of the user; and/or
performing an image demonstration or video display related to the intention of the user; and/or
playing audio related to the intention of the user.
15. A humanoid robot, characterized in that it comprises the humanoid robot control system based on artificial intelligence according to any one of claims 8 to 14.
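To illustrate how the functional units recited in claims 8 to 14 could be organized as the software function modules contemplated in the description, here is a minimal, hypothetical Python sketch of a receiver module, an artificial intelligence module, a control module and a feedback module wired into one control system. All class and method names are illustrative assumptions and every back end is stubbed; this is a sketch of the claimed module structure under those assumptions, not an implementation of the claimed invention.

```python
from typing import Optional, Tuple


class ReceiverModule:
    """Receives the voice signal and/or image signal input by the user."""

    def receive(self) -> Tuple[Optional[bytes], Optional[bytes]]:
        # Stub for microphone and camera capture.
        return b"\x00fake-pcm", None


class ArtificialIntelligenceModule:
    """Determines the user's intention from the received signals."""

    def determine_intention(self, voice: Optional[bytes], image: Optional[bytes]) -> str:
        # Stand-in for speech recognition plus natural language understanding,
        # semantic and sentiment analysis, and image-based gesture recognition.
        if voice is not None:
            return "greet"
        if image is not None:
            return "wave"
        return "unknown"


class ControlModule:
    """Processes the user's intention, e.g. selects a behaviour to run."""

    def process(self, intention: str) -> str:
        return f"behaviour-for-{intention}"


class FeedbackModule:
    """Feeds the processing result back to the user multi-modally."""

    def feed_back(self, result: str) -> None:
        # An action, an expression, an image/video demonstration and/or audio.
        print(f"multi-modal output: {result}")


class HumanoidRobotControlSystem:
    """Wires the four software function modules together."""

    def __init__(self) -> None:
        self.receiver = ReceiverModule()
        self.ai = ArtificialIntelligenceModule()
        self.control = ControlModule()
        self.feedback = FeedbackModule()

    def step(self) -> None:
        voice, image = self.receiver.receive()
        intention = self.ai.determine_intention(voice, image)
        result = self.control.process(intention)
        self.feedback.feed_back(result)


if __name__ == "__main__":
    HumanoidRobotControlSystem().step()
```

The same decomposition would allow the modules to be integrated into a single processing module or kept as separate units, as the description permits.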
CN201510437864.4A 2015-07-23 2015-07-23 Humanoid robot control method based on artificial intelligence, system and the humanoid robot Pending CN105093986A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510437864.4A CN105093986A (en) 2015-07-23 2015-07-23 Humanoid robot control method based on artificial intelligence, system and the humanoid robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510437864.4A CN105093986A (en) 2015-07-23 2015-07-23 Humanoid robot control method based on artificial intelligence, system and the humanoid robot

Publications (1)

Publication Number Publication Date
CN105093986A true CN105093986A (en) 2015-11-25

Family

ID=54574688

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510437864.4A Pending CN105093986A (en) 2015-07-23 2015-07-23 Humanoid robot control method based on artificial intelligence, system and the humanoid robot

Country Status (1)

Country Link
CN (1) CN105093986A (en)

Cited By (60)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105446491A (en) * 2015-12-16 2016-03-30 北京光年无限科技有限公司 Intelligent robot based interactive method and apparatus
CN105700438A (en) * 2016-03-18 2016-06-22 北京光年无限科技有限公司 Electronic control system for multi-joint small robot
CN105759650A (en) * 2016-03-18 2016-07-13 北京光年无限科技有限公司 Method used for intelligent robot system to achieve real-time face tracking
CN105785813A (en) * 2016-03-18 2016-07-20 北京光年无限科技有限公司 Intelligent robot system multi-modal output method and device
CN105807933A (en) * 2016-03-18 2016-07-27 北京光年无限科技有限公司 Man-machine interaction method and apparatus used for intelligent robot
CN105843381A (en) * 2016-03-18 2016-08-10 北京光年无限科技有限公司 Data processing method for realizing multi-modal interaction and multi-modal interaction system
CN105844329A (en) * 2016-03-18 2016-08-10 北京光年无限科技有限公司 Method and system for processing thinking data for intelligent robot
CN105856261A (en) * 2016-05-26 2016-08-17 王帅 Voice control action system of robot
CN105868827A (en) * 2016-03-25 2016-08-17 北京光年无限科技有限公司 Multi-mode interaction method for intelligent robot, and intelligent robot
CN105892372A (en) * 2016-05-31 2016-08-24 北京光年无限科技有限公司 Intelligent robot expression output method and intelligent robot
CN105898487A (en) * 2016-04-28 2016-08-24 北京光年无限科技有限公司 Interaction method and device for intelligent robot
CN105957525A (en) * 2016-04-26 2016-09-21 珠海市魅族科技有限公司 Interactive method of a voice assistant and user equipment
CN106022294A (en) * 2016-06-01 2016-10-12 北京光年无限科技有限公司 Intelligent robot-oriented man-machine interaction method and intelligent robot-oriented man-machine interaction device
CN106328132A (en) * 2016-08-15 2017-01-11 歌尔股份有限公司 Voice interaction control method and device for intelligent equipment
CN106361356A (en) * 2016-08-24 2017-02-01 北京光年无限科技有限公司 Emotion monitoring and early warning method and system
CN106371583A (en) * 2016-08-19 2017-02-01 北京智能管家科技有限公司 Control method and apparatus for intelligent device
CN106462254A (en) * 2016-06-29 2017-02-22 深圳狗尾草智能科技有限公司 Robot interaction content generation method, system and robot
CN106502382A (en) * 2016-09-21 2017-03-15 北京光年无限科技有限公司 Active exchange method and system for intelligent robot
CN106529504A (en) * 2016-12-02 2017-03-22 合肥工业大学 Dual-mode video emotion recognition method with composite spatial-temporal characteristic
CN106548231A (en) * 2016-11-24 2017-03-29 北京地平线机器人技术研发有限公司 Mobile controller, mobile robot and the method for moving to optimal interaction point
CN106557164A (en) * 2016-11-18 2017-04-05 北京光年无限科技有限公司 It is applied to the multi-modal output intent and device of intelligent robot
CN106584480A (en) * 2016-12-31 2017-04-26 天津菲戈博特智能科技有限公司 Robot and facial recognition method and voice control method thereof
CN106625711A (en) * 2016-12-30 2017-05-10 华南智能机器人创新研究院 Method for positioning intelligent interaction of robot
CN106682638A (en) * 2016-12-30 2017-05-17 华南智能机器人创新研究院 System for positioning robot and realizing intelligent interaction
CN106773923A (en) * 2016-11-30 2017-05-31 北京光年无限科技有限公司 The multi-modal affection data exchange method and device of object manipulator
CN106934651A (en) * 2017-01-18 2017-07-07 北京光年无限科技有限公司 A kind of advertisement information output intent and system for robot
CN107015490A (en) * 2017-02-28 2017-08-04 北京光年无限科技有限公司 A kind of intelligent robot and intelligent robot operating system
CN107073314A (en) * 2016-07-07 2017-08-18 深圳狗尾草智能科技有限公司 A kind of robotic training method and apparatus based on virtual environment
CN107283429A (en) * 2017-08-23 2017-10-24 北京百度网讯科技有限公司 Control method, device, system and terminal based on artificial intelligence
CN107433591A (en) * 2017-08-01 2017-12-05 上海未来伙伴机器人有限公司 Various dimensions interact robot application control system and method
CN107437419A (en) * 2016-05-27 2017-12-05 广州零号软件科技有限公司 A kind of method, instruction set and the system of the movement of Voice command service robot
CN107450729A (en) * 2017-08-10 2017-12-08 上海木爷机器人技术有限公司 Robot interactive method and device
CN107515944A (en) * 2017-08-31 2017-12-26 广东美的制冷设备有限公司 Exchange method, user terminal and storage medium based on artificial intelligence
WO2018006470A1 (en) * 2016-07-07 2018-01-11 深圳狗尾草智能科技有限公司 Artificial intelligence processing method and device
CN107643753A (en) * 2017-09-14 2018-01-30 广东格兰仕集团有限公司 A kind of intelligent robot positions addressing method
CN108297098A (en) * 2018-01-23 2018-07-20 上海大学 The robot control system and method for artificial intelligence driving
CN108664889A (en) * 2017-03-28 2018-10-16 卡西欧计算机株式会社 Object detection device, object object detecting method and recording medium
CN109074502A (en) * 2018-07-26 2018-12-21 深圳前海达闼云端智能科技有限公司 Method, apparatus, storage medium and the robot of training artificial intelligence model
CN109079813A (en) * 2018-08-14 2018-12-25 重庆四通都成科技发展有限公司 Automobile Marketing service robot system and its application method
CN109116981A (en) * 2018-07-03 2019-01-01 北京理工大学 A kind of mixed reality interactive system of passive touch feedback
CN109214556A (en) * 2018-08-14 2019-01-15 重庆四通都成科技发展有限公司 Automobile Innovative Service Modes platform
CN109377991A (en) * 2018-09-30 2019-02-22 珠海格力电器股份有限公司 Intelligent equipment control method and device
CN109382827A (en) * 2018-10-26 2019-02-26 深圳市三宝创新智能有限公司 A kind of robot system and its intelligent memory recognition methods
CN109571507A (en) * 2019-01-16 2019-04-05 鲁班嫡系机器人(深圳)有限公司 A kind of service robot system and method for servicing
CN109859751A (en) * 2018-12-03 2019-06-07 珠海格力电器股份有限公司 A method of it controlling equipment and its executes instruction
CN109885104A (en) * 2017-12-06 2019-06-14 湘潭宏远电子科技有限公司 A kind of tracking terminal system
CN109920425A (en) * 2019-04-03 2019-06-21 北京石头世纪科技股份有限公司 Robot voice control method and device, robot and medium
CN110021294A (en) * 2018-01-09 2019-07-16 深圳市优必选科技有限公司 Robot control method, device and storage device
CN110524559A (en) * 2019-08-30 2019-12-03 成都未至科技有限公司 Intelligent human-machine interaction system and method based on human behavior data
CN110673716A (en) * 2018-07-03 2020-01-10 百度在线网络技术(北京)有限公司 Method, device and equipment for interaction between intelligent terminal and user and storage medium
CN110751951A (en) * 2019-10-25 2020-02-04 智亮君 Handshake interaction method and system based on intelligent mirror and storage medium
CN110969053A (en) * 2018-09-29 2020-04-07 深圳市神州云海智能科技有限公司 Lottery buyer classification method and device and lottery robot
CN111086008A (en) * 2018-10-24 2020-05-01 国网河南省电力公司南阳供电公司 Electric power safety knowledge learning robot and method for preventing electric power operation fault
WO2020133405A1 (en) * 2018-12-29 2020-07-02 深圳市大疆创新科技有限公司 Method and device for controlling ground remote control robot
CN111722702A (en) * 2019-03-22 2020-09-29 北京京东尚科信息技术有限公司 Human-computer interaction method and system, medium and computer system
CN111741225A (en) * 2020-08-07 2020-10-02 成都极米科技股份有限公司 Human-computer interaction device, method and computer-readable storage medium
CN112001248A (en) * 2020-07-20 2020-11-27 北京百度网讯科技有限公司 Active interaction method and device, electronic equipment and readable storage medium
CN112099743A (en) * 2020-08-17 2020-12-18 数智医疗(深圳)有限公司 Interactive system, interactive device and interactive method
WO2022156611A1 (en) * 2021-01-21 2022-07-28 深圳市普渡科技有限公司 Sound source positioning method and device during interaction, and computer readable storage medium
CN116383620A (en) * 2023-03-29 2023-07-04 北京鹅厂科技有限公司 Method and device for applying multi-mode artificial intelligence

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101036838A (en) * 2007-04-19 2007-09-19 复旦大学 Intelligent robot friend for study and entertainment
JP2009131928A (en) * 2007-11-30 2009-06-18 Olympus Corp Robot control system, robot, program and information recording medium
CN101977240A (en) * 2010-11-15 2011-02-16 南开大学 IPhone smart phone based robot human-machine interactive system
US20120268580A1 (en) * 2011-04-12 2012-10-25 Hyun Kim Portable computing device with intelligent robotic functions and method for operating the same
CN102981604A (en) * 2011-06-07 2013-03-20 索尼公司 Image processing apparatus, image processing method, and program
CN103324100A (en) * 2013-05-02 2013-09-25 郭海锋 Emotion vehicle-mounted robot driven by information
CN103996155A (en) * 2014-04-16 2014-08-20 深圳市易特科信息技术有限公司 Intelligent interaction and psychological comfort robot service system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
李弼程,邵美珍,黄洁: "《模式识别原理与应用》", 28 February 2008, 西安电子科技大学出版社 *

Cited By (80)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105446491A (en) * 2015-12-16 2016-03-30 北京光年无限科技有限公司 Intelligent robot based interactive method and apparatus
CN105446491B (en) * 2015-12-16 2018-09-18 北京光年无限科技有限公司 A kind of exchange method and device based on intelligent robot
CN105807933A (en) * 2016-03-18 2016-07-27 北京光年无限科技有限公司 Man-machine interaction method and apparatus used for intelligent robot
CN105759650A (en) * 2016-03-18 2016-07-13 北京光年无限科技有限公司 Method used for intelligent robot system to achieve real-time face tracking
CN105843381B (en) * 2016-03-18 2020-07-28 北京光年无限科技有限公司 Data processing method for realizing multi-modal interaction and multi-modal interaction system
CN105843381A (en) * 2016-03-18 2016-08-10 北京光年无限科技有限公司 Data processing method for realizing multi-modal interaction and multi-modal interaction system
CN105844329A (en) * 2016-03-18 2016-08-10 北京光年无限科技有限公司 Method and system for processing thinking data for intelligent robot
CN105807933B (en) * 2016-03-18 2019-02-12 北京光年无限科技有限公司 A kind of man-machine interaction method and device for intelligent robot
CN105700438A (en) * 2016-03-18 2016-06-22 北京光年无限科技有限公司 Electronic control system for multi-joint small robot
CN105785813A (en) * 2016-03-18 2016-07-20 北京光年无限科技有限公司 Intelligent robot system multi-modal output method and device
CN105868827A (en) * 2016-03-25 2016-08-17 北京光年无限科技有限公司 Multi-mode interaction method for intelligent robot, and intelligent robot
CN105868827B (en) * 2016-03-25 2019-01-22 北京光年无限科技有限公司 A kind of multi-modal exchange method of intelligent robot and intelligent robot
CN105957525A (en) * 2016-04-26 2016-09-21 珠海市魅族科技有限公司 Interactive method of a voice assistant and user equipment
CN105898487A (en) * 2016-04-28 2016-08-24 北京光年无限科技有限公司 Interaction method and device for intelligent robot
CN105898487B (en) * 2016-04-28 2019-02-19 北京光年无限科技有限公司 A kind of exchange method and device towards intelligent robot
CN105856261A (en) * 2016-05-26 2016-08-17 王帅 Voice control action system of robot
CN105856261B (en) * 2016-05-26 2018-05-18 弘丰塑胶制品(深圳)有限公司 The voice control moving system of robot
CN107437419A (en) * 2016-05-27 2017-12-05 广州零号软件科技有限公司 A kind of method, instruction set and the system of the movement of Voice command service robot
CN105892372A (en) * 2016-05-31 2016-08-24 北京光年无限科技有限公司 Intelligent robot expression output method and intelligent robot
CN106022294A (en) * 2016-06-01 2016-10-12 北京光年无限科技有限公司 Intelligent robot-oriented man-machine interaction method and intelligent robot-oriented man-machine interaction device
CN106462254A (en) * 2016-06-29 2017-02-22 深圳狗尾草智能科技有限公司 Robot interaction content generation method, system and robot
CN107590120A (en) * 2016-07-07 2018-01-16 深圳狗尾草智能科技有限公司 Artificial intelligence process method and device
WO2018006364A1 (en) * 2016-07-07 2018-01-11 深圳狗尾草智能科技有限公司 Robot training method and device based on virtual environment
WO2018006470A1 (en) * 2016-07-07 2018-01-11 深圳狗尾草智能科技有限公司 Artificial intelligence processing method and device
CN107073314A (en) * 2016-07-07 2017-08-18 深圳狗尾草智能科技有限公司 A kind of robotic training method and apparatus based on virtual environment
US11037561B2 (en) 2016-08-15 2021-06-15 Goertek Inc. Method and apparatus for voice interaction control of smart device
CN106328132A (en) * 2016-08-15 2017-01-11 歌尔股份有限公司 Voice interaction control method and device for intelligent equipment
WO2018032930A1 (en) * 2016-08-15 2018-02-22 歌尔股份有限公司 Method and device for voice interaction control of smart device
CN106371583A (en) * 2016-08-19 2017-02-01 北京智能管家科技有限公司 Control method and apparatus for intelligent device
CN106361356A (en) * 2016-08-24 2017-02-01 北京光年无限科技有限公司 Emotion monitoring and early warning method and system
CN106502382A (en) * 2016-09-21 2017-03-15 北京光年无限科技有限公司 Active exchange method and system for intelligent robot
CN106557164A (en) * 2016-11-18 2017-04-05 北京光年无限科技有限公司 It is applied to the multi-modal output intent and device of intelligent robot
CN106548231A (en) * 2016-11-24 2017-03-29 北京地平线机器人技术研发有限公司 Mobile controller, mobile robot and the method for moving to optimal interaction point
CN106548231B (en) * 2016-11-24 2020-04-24 北京地平线机器人技术研发有限公司 Mobile control device, mobile robot and method for moving to optimal interaction point
CN106773923A (en) * 2016-11-30 2017-05-31 北京光年无限科技有限公司 The multi-modal affection data exchange method and device of object manipulator
CN106529504B (en) * 2016-12-02 2019-05-31 合肥工业大学 A kind of bimodal video feeling recognition methods of compound space-time characteristic
CN106529504A (en) * 2016-12-02 2017-03-22 合肥工业大学 Dual-mode video emotion recognition method with composite spatial-temporal characteristic
CN106625711A (en) * 2016-12-30 2017-05-10 华南智能机器人创新研究院 Method for positioning intelligent interaction of robot
CN106682638A (en) * 2016-12-30 2017-05-17 华南智能机器人创新研究院 System for positioning robot and realizing intelligent interaction
CN106584480A (en) * 2016-12-31 2017-04-26 天津菲戈博特智能科技有限公司 Robot and facial recognition method and voice control method thereof
CN106934651A (en) * 2017-01-18 2017-07-07 北京光年无限科技有限公司 A kind of advertisement information output intent and system for robot
CN107015490A (en) * 2017-02-28 2017-08-04 北京光年无限科技有限公司 A kind of intelligent robot and intelligent robot operating system
CN108664889A (en) * 2017-03-28 2018-10-16 卡西欧计算机株式会社 Object detection device, object object detecting method and recording medium
CN107433591A (en) * 2017-08-01 2017-12-05 上海未来伙伴机器人有限公司 Various dimensions interact robot application control system and method
CN107450729B (en) * 2017-08-10 2019-09-10 上海木木机器人技术有限公司 Robot interactive method and device
CN107450729A (en) * 2017-08-10 2017-12-08 上海木爷机器人技术有限公司 Robot interactive method and device
CN107283429A (en) * 2017-08-23 2017-10-24 北京百度网讯科技有限公司 Control method, device, system and terminal based on artificial intelligence
CN107515944A (en) * 2017-08-31 2017-12-26 广东美的制冷设备有限公司 Exchange method, user terminal and storage medium based on artificial intelligence
CN107643753A (en) * 2017-09-14 2018-01-30 广东格兰仕集团有限公司 A kind of intelligent robot positions addressing method
CN109885104A (en) * 2017-12-06 2019-06-14 湘潭宏远电子科技有限公司 A kind of tracking terminal system
CN110021294A (en) * 2018-01-09 2019-07-16 深圳市优必选科技有限公司 Robot control method, device and storage device
CN108297098A (en) * 2018-01-23 2018-07-20 上海大学 The robot control system and method for artificial intelligence driving
CN110673716A (en) * 2018-07-03 2020-01-10 百度在线网络技术(北京)有限公司 Method, device and equipment for interaction between intelligent terminal and user and storage medium
CN109116981A (en) * 2018-07-03 2019-01-01 北京理工大学 A kind of mixed reality interactive system of passive touch feedback
CN109074502A (en) * 2018-07-26 2018-12-21 深圳前海达闼云端智能科技有限公司 Method, apparatus, storage medium and the robot of training artificial intelligence model
CN109079813A (en) * 2018-08-14 2018-12-25 重庆四通都成科技发展有限公司 Automobile Marketing service robot system and its application method
CN109214556A (en) * 2018-08-14 2019-01-15 重庆四通都成科技发展有限公司 Automobile Innovative Service Modes platform
CN110969053B (en) * 2018-09-29 2023-12-22 深圳市神州云海智能科技有限公司 Method and device for classifying players and lottery robot
CN110969053A (en) * 2018-09-29 2020-04-07 深圳市神州云海智能科技有限公司 Lottery buyer classification method and device and lottery robot
CN109377991B (en) * 2018-09-30 2021-07-23 珠海格力电器股份有限公司 Intelligent equipment control method and device
CN109377991A (en) * 2018-09-30 2019-02-22 珠海格力电器股份有限公司 Intelligent equipment control method and device
CN111086008A (en) * 2018-10-24 2020-05-01 国网河南省电力公司南阳供电公司 Electric power safety knowledge learning robot and method for preventing electric power operation fault
CN109382827A (en) * 2018-10-26 2019-02-26 深圳市三宝创新智能有限公司 A kind of robot system and its intelligent memory recognition methods
CN109859751A (en) * 2018-12-03 2019-06-07 珠海格力电器股份有限公司 A method of it controlling equipment and its executes instruction
WO2020133405A1 (en) * 2018-12-29 2020-07-02 深圳市大疆创新科技有限公司 Method and device for controlling ground remote control robot
CN109571507A (en) * 2019-01-16 2019-04-05 鲁班嫡系机器人(深圳)有限公司 A kind of service robot system and method for servicing
CN111722702A (en) * 2019-03-22 2020-09-29 北京京东尚科信息技术有限公司 Human-computer interaction method and system, medium and computer system
CN109920425A (en) * 2019-04-03 2019-06-21 北京石头世纪科技股份有限公司 Robot voice control method and device, robot and medium
CN109920425B (en) * 2019-04-03 2021-11-12 北京石头世纪科技股份有限公司 Robot voice control method and device, robot and medium
CN110524559B (en) * 2019-08-30 2022-06-10 成都未至科技有限公司 Intelligent man-machine interaction system and method based on personnel behavior data
CN110524559A (en) * 2019-08-30 2019-12-03 成都未至科技有限公司 Intelligent human-machine interaction system and method based on human behavior data
CN110751951A (en) * 2019-10-25 2020-02-04 智亮君 Handshake interaction method and system based on intelligent mirror and storage medium
CN110751951B (en) * 2019-10-25 2022-11-11 智亮君 Handshake interaction method and system based on intelligent mirror and storage medium
CN112001248A (en) * 2020-07-20 2020-11-27 北京百度网讯科技有限公司 Active interaction method and device, electronic equipment and readable storage medium
CN112001248B (en) * 2020-07-20 2024-03-01 北京百度网讯科技有限公司 Active interaction method, device, electronic equipment and readable storage medium
CN111741225A (en) * 2020-08-07 2020-10-02 成都极米科技股份有限公司 Human-computer interaction device, method and computer-readable storage medium
CN112099743A (en) * 2020-08-17 2020-12-18 数智医疗(深圳)有限公司 Interactive system, interactive device and interactive method
WO2022156611A1 (en) * 2021-01-21 2022-07-28 深圳市普渡科技有限公司 Sound source positioning method and device during interaction, and computer readable storage medium
CN116383620A (en) * 2023-03-29 2023-07-04 北京鹅厂科技有限公司 Method and device for applying multi-mode artificial intelligence
CN116383620B (en) * 2023-03-29 2023-10-20 北京鹅厂科技有限公司 Method and device for applying multi-mode artificial intelligence

Similar Documents

Publication Publication Date Title
CN105093986A (en) Humanoid robot control method based on artificial intelligence, system and the humanoid robot
CN104985599B (en) Study of Intelligent Robot Control method, system and intelligent robot based on artificial intelligence
US11858118B2 (en) Robot, server, and human-machine interaction method
CN108297098A (en) The robot control system and method for artificial intelligence driving
US20150298315A1 (en) Methods and systems to facilitate child development through therapeutic robotics
CN109545206B (en) Voice interaction processing method and device of intelligent equipment and intelligent equipment
CN107030691A (en) A kind of data processing method and device for nursing robot
CN203861914U (en) Pet robot
CN104951077A (en) Man-machine interaction method and device based on artificial intelligence and terminal equipment
CN106346487A (en) Interactive VR sand table show robot
CN105204642A (en) Adjustment method and device of virtual-reality interactive image
CN106363644B (en) A kind of Internet education Intelligent Service robot
Trafton et al. Integrating vision and audition within a cognitive architecture to track conversations
CN206029912U (en) Interactive VR's intelligent robot
CN111324409B (en) Artificial intelligence-based interaction method and related device
US20190164444A1 (en) Assessing a level of comprehension of a virtual lecture
CN111414506A (en) Emotion processing method and device based on artificial intelligence, electronic equipment and storage medium
CN109891357A (en) Emotion intelligently accompanies device
CN108762512A (en) Human-computer interaction device, method and system
CN111314771A (en) Video playing method and related equipment
CN112860064B (en) Intelligent interaction system and equipment based on AI technology
Nagao et al. Symbiosis between humans and artificial intelligence
CN109968365A (en) The control method and robot of a kind of robot control system, control system
CN108388399A (en) The method of state management and system of virtual idol
CN111984161A (en) Control method and device of intelligent robot

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20151125