Summary of the invention
Embodiments of the present invention provide a robot system with diverse interaction capabilities and strong autonomy.
Embodiments of the present invention provide a robot control method with diverse interaction capabilities and strong autonomy.
Embodiments of the present invention provide a robot with diverse interaction capabilities and strong autonomy.
An embodiment of the present invention provides a robot system for use in a robot, comprising a power module, a voice input module, a detecting module, an image capture module, a voice output module, and a controller. The power module powers the robot. The voice input module receives external voice information. The detecting module obtains object location information of the environment in which the robot is currently located. The image capture module captures image information of the current environment. The controller controls the state of the robot, or controls the voice output module to output corresponding interaction content, according to one or more of the external voice information, the object location information of the current environment, and the image information of the current environment. The image information of the current environment comprises planar image information and depth image information.
Preferably, the robot system further comprises an image recognition module for identifying specific image information contained in the captured image information, the specific image information comprising expression information and/or predefined object information, and the controller further adjusts the interaction content output by the voice output module according to the specific image information.
Preferably, the controller further obtains position information of the robot according to the specific image information.
Preferably, the controller further performs map construction according to the object location information of the current environment and/or the image information of the current environment, so as to obtain position information of the robot.
Preferably, the robot system further comprises a display module, and the controller further controls the display module to output corresponding interactive picture content according to one or more of the external voice information, the object location information of the current environment, and the image information of the current environment.
Preferably, the voice input module comprises a plurality of microphone arrays, and the voice input module further detects the sound source position in the current environment and filters out environmental noise contained in the received external voice information.
Preferably, the detecting module comprises one or more of a laser radar, a depth camera, an ultrasonic sensor, and an infrared switch.
Preferably, the robot system further comprises a wireless transceiver module for transmitting and receiving wireless signals, so as to communicate wirelessly with an external device.
Preferably, the robot system further comprises a charging module for judging the remaining capacity of the power module and, when the capacity is insufficient, detecting whether a charging device is present in the current environment, so as to control the robot to approach the charging device for charging.
Preferably, the robot system further comprises a walking module for controlling the motion state of the robot according to one or more of the external voice information, the object location information of the current environment, and the image information of the current environment.
An embodiment of the invention further provides a robot control method applied to a robot, comprising:
receiving external voice information;
obtaining object location information of the environment in which the robot is currently located and image information of the current environment; and
controlling the state of the robot, or outputting corresponding interaction content, according to one or more of the external voice information, the object location information of the current environment, and the image information of the current environment;
wherein the image information of the current environment comprises planar image information and depth image information.
Preferably, obtaining the image information of the environment in which the robot is currently located comprises:
obtaining the image information of the current environment, and identifying specific image information contained therein, the specific image information comprising expression information and/or predefined object information, so as to adjust the output interaction content.
Preferably, obtaining the image information of the environment in which the robot is currently located comprises:
obtaining the image information of the current environment, and identifying specific image information contained therein, the specific image information comprising predefined object information, so as to obtain position information of the robot from the specific image information.
Preferably, obtaining the object location information of the environment in which the robot is currently located and the image information of the current environment comprises:
obtaining the object location information and image information of the current environment, and performing map construction according to the object location information and/or the image information, so as to obtain position information of the robot.
Preferably, receiving external voice information comprises:
detecting the sound source position in the current environment to receive external voice information, and filtering out environmental noise contained in the received external voice information.
An embodiment of the invention also provides a robot comprising a housing, a voice output component, a display component, a voice input component, and an image capture component. The voice output component is arranged on the housing. The display component is rotatably arranged on the housing, captures human body information, and adjusts its orientation toward the human body according to the captured information. The voice input component is rotatably arranged on the housing, detects the bearing of a sound source, and adjusts its orientation toward the sound source according to the detected bearing. The image capture component is rotatably arranged on the housing and captures image information of the current environment. The image information of the current environment comprises planar image information and depth image information, and the robot controls the voice output component to output corresponding interaction content according to one or more of the voice information from the sound source, the image information of the current environment, and the human body information.
Preferably, the robot further comprises a driving component arranged below the housing for driving the robot to move.
Preferably, the robot further performs map construction and path planning according to one or more of the voice information from the sound source, the image information of the current environment, and the human body information, so as to approach the sound source or the human body until the distance is no greater than a preset distance threshold.
Preferably, the robot further comprises a distance measurement component rotatably arranged on the housing for obtaining object location information of the current environment, and the robot further performs map construction according to the object location information of the current environment.
Preferably, the display component, the voice input component, the image capture component, and the distance measurement component can all perform translation and/or rotation movements.
Preferably, the image capture component further identifies specific image information contained in the captured image information, the robot further performs positioning and navigation according to the specific image information, and the specific image information comprises predefined object information.
Preferably, the robot further comprises a charging unit for judging whether a charging device is present in the current environment, and when its battery is low the robot approaches the charging device and charges through the charging unit.
By providing multiple sensing modules, the above robot, robot system, and robot control method obtain images, sounds, and human expression information from the surrounding environment and interact with people in real time, while selecting an optimal path and avoiding obstacles in real time during movement, thereby improving the user experience.
Detailed description of the invention
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.
Figure 1 shows a block diagram of the robot system in an embodiment of the present invention. In this embodiment, the robot system 1 is used in a robot 2. The robot system 1 comprises a power module 10, a voice input module 20, a detecting module 30, an image capture module 40, a voice output module 50, and a controller 60. The power module 10 supplies electric power to the robot 2. The voice input module 20 receives external voice information. The detecting module 30 obtains object location information of the environment in which the robot 2 is currently located. The image capture module 40 captures image information of the current environment. The controller 60 controls the state of the robot 2, or controls the voice output module 50 to output corresponding interaction content, according to one or more of the external voice information received by the voice input module 20, the object location information obtained by the detecting module 30, and the image information captured by the image capture module 40. In this embodiment, the external voice information received by the voice input module 20 comprises human speech and ambient sound. The detecting module 30 performs map construction from the obtained object location information, yielding map information and obstacle information of the current environment; an obstacle may be one newly added to the environment map at a given moment. The image information captured by the image capture module 40 comprises planar image information and depth image information. The controller 60 controls the state of the robot 2 according to one or more of these three kinds of information, and also adjusts the interaction content output by the voice output module 50 accordingly. The state of the robot 2 includes its walking state, operating parameters, and the like.
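The controller's multi-source decision described above can be sketched as a small dispatch function. This is a minimal illustrative sketch, not the claimed implementation: the function name, the priority ordering (obstacle safety first, then voice, then image), and the 0.3 m threshold are all assumptions made for the example.

```python
def decide_action(voice=None, obstacles=None, image=None):
    """Choose a robot state or an interaction response from whichever of the
    three inputs (voice information, object locations, image) are available.

    obstacles: list of obstacle distances in metres (assumed format)."""
    if obstacles:
        # An obstacle closer than an assumed 0.3 m safety margin overrides
        # everything else: stop the walking state.
        if min(obstacles) < 0.3:
            return ("set_state", "stop")
    if voice is not None:
        # A voice input maps to a spoken interaction response.
        return ("speak", f"reply to: {voice}")
    if image is not None:
        return ("set_state", "observe")
    return ("set_state", "idle")
```

In use, the controller would call such a function on every control tick and route the result to the walking module or the voice output module.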
Please refer to Fig. 2.As to further improvement of the present invention, robot system 1 also comprises picture recognition module 70.Picture recognition module 70 is for identifying the specific image information comprised in the image information that gathers as acquisition module 40, and controller 60 also adjusts according to the specific image information after identifying the interaction content that voice output module 50 exports.In the present embodiment, specific image information can be the expression information of personage and/or predefined object information.Robot system 1 is by can the current emotional of perception personage after the expression information that identifies personage, and then the interaction content of voice output module 50 output is adjusted according to the happiness, anger, grief and joy of personage, for example, can when perception personage be sad mood, export the interaction content with comfort character, relative to perception personage is happy emoticon, cheerful and light-hearted, interaction content easily can be exported.The interaction content that robot system prestores can be stored in cloud server, or is stored in memory (not shown) that robot comprises.
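The mood-dependent adjustment of interaction content described above can be illustrated with a simple lookup. The emotion labels and reply strings are assumptions for the sketch; in practice the table could equally live in the cloud server or on-board memory mentioned above.

```python
# Illustrative table mapping a recognized expression to interaction content.
RESPONSES = {
    "sad":   "Don't worry, I'm here with you.",     # comforting content
    "happy": "Great to see you smiling!",           # cheerful content
    "angry": "Let's take a deep breath together.",
}

def adjust_interaction(expression, default="How can I help you?"):
    """Return interaction content matched to the detected expression,
    falling back to a neutral reply for unrecognized expressions."""
    return RESPONSES.get(expression, default)
```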
The controller 60 also obtains position information of the robot 2 from the identified predefined object information, enabling the robot 2 to further perform functions such as route planning, navigation, cruising, and obstacle avoidance. By identifying predefined object information to obtain the position of the robot 2, the robot system 1 saves positioning time and reduces computational complexity.
In an embodiment of the present invention, the controller 60 also performs map construction according to the object location information obtained by the detecting module 30 and/or the image information captured by the image capture module 40, so as to obtain position information of the robot 2, enabling the robot 2 to further perform functions such as route planning, navigation, cruising, and obstacle avoidance.
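The map construction from object location information can be sketched as a toy occupancy grid: each range-and-bearing detection from the detecting module marks one obstacle cell. The grid size, resolution, and the robot-at-centre convention are assumptions of the sketch, not the claimed method.

```python
import math

def build_map(detections, size=10, resolution=0.5):
    """Build a toy occupancy grid from detections.

    detections: list of (range_m, bearing_rad) pairs relative to the robot,
    which is assumed to sit at the grid centre. Returns the set of occupied
    (x, y) grid cells."""
    occupied = set()
    cx = cy = size // 2
    for rng, bearing in detections:
        gx = cx + int(round(rng * math.cos(bearing) / resolution))
        gy = cy + int(round(rng * math.sin(bearing) / resolution))
        if 0 <= gx < size and 0 <= gy < size:
            occupied.add((gx, gy))
    return occupied
```

A path planner would then treat occupied cells as blocked when planning, navigating, and avoiding obstacles.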
As a further improvement of the present invention, the robot system 1 also comprises a walking module 80. The walking module 80 controls the motion state of the robot 2 according to one or more of the external voice information, the object location information of the current environment, and the image information of the current environment.
In an embodiment of the present invention, the voice input module 20 comprises a plurality of microphone arrays. The voice input module 20 also detects the sound source position in the current environment and filters out environmental noise contained in the received external voice information. By detecting and judging the sound source position, the robot 2 can adjust its receiving angle while receiving external voice information, making the received voice clearer, and can simultaneously determine the bearing of the sound source.
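Sound source localization with a microphone array is commonly done from the time difference of arrival between microphones. The following sketch assumes a two-microphone array with 0.1 m spacing and a crude amplitude gate standing in for the noise filtering; these numbers and helper names are illustrative assumptions, not the patented design.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s at room temperature (assumed)

def bearing_from_tdoa(delay_s, mic_spacing_m=0.1):
    """Estimate the source angle (degrees) relative to the array axis from
    the arrival-time delay between two microphones."""
    ratio = max(-1.0, min(1.0, delay_s * SPEED_OF_SOUND / mic_spacing_m))
    return math.degrees(math.asin(ratio))

def noise_gate(samples, threshold=0.05):
    """Drop low-amplitude samples: a crude stand-in for noise filtering."""
    return [s for s in samples if abs(s) >= threshold]
```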
As a further improvement of the present invention, the robot system 1 also comprises a display module 90. The controller 60 also controls the display module 90 to output corresponding interactive picture content according to one or more of the external voice information received by the voice input module 20, the object location information obtained by the detecting module 30, and the image information captured by the image capture module 40, further improving the interaction diversity of the robot 2. In this embodiment, the display module 90 may be a liquid crystal display, a projection device, or another module for displaying images.
As a further improvement of the present invention, the robot system 1 also comprises a wireless transceiver module 90. The wireless transceiver module 90 transmits and receives wireless signals, enabling the robot 2 to communicate wirelessly with an external device. The wireless transceiver module 90 may comprise one or more of WIFI, Bluetooth, ZigBee, and Z-WAVE.
As a further improvement of the present invention, the robot system 1 also comprises a charging module 100. The charging module 100 judges the remaining capacity of the power module 10 and, when it judges the capacity to be insufficient, detects whether a charging device is present in the current environment, so as to control the robot 2 to approach the charging device for charging. For example, wireless positioning devices may be installed in the charging device and inside the robot 2 to determine their relative positions, so that the robot can navigate to the vicinity of the charging device when its battery is low. In addition, the robot 2 can also perform mapping and localization through the detecting module 30 to locate the charging device and automatically navigate to it for charging.
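The charging module's behaviour described above reduces to a simple decision loop: keep working while charge is sufficient, otherwise locate the nearest charging device and navigate to it. The 20% threshold, the data shapes, and the action names below are assumptions of the sketch.

```python
LOW_BATTERY = 0.2  # assumed low-battery threshold (fraction of capacity)

def charging_step(battery_level, chargers):
    """Return the next action for the charging module.

    battery_level: remaining capacity in [0, 1].
    chargers: list of dicts like {"id": ..., "distance_m": ...} describing
    charging devices detected in the current environment."""
    if battery_level >= LOW_BATTERY:
        return ("continue", None)
    if not chargers:
        return ("no_charger_found", None)
    # Head for the nearest detected charging device.
    target = min(chargers, key=lambda c: c["distance_m"])
    return ("navigate_to", target["id"])
```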
In an embodiment of the present invention, the detecting module 30 may employ one or more of a laser radar, a depth camera, an ultrasonic sensor, and an infrared switch for detection. The image capture module 40 may combine an RGB camera and a depth camera: the RGB camera captures planar image information, and the depth camera captures depth image information. The voice output module 50 may use speech synthesis technology to convert the content to be expressed into sound and play it through a loudspeaker. It can play various alerts, reminders, music, and other content requiring voice interaction, and can thus interact on the basis of recognized human expressions.
Fig. 3 is a flow chart of the robot control method in an embodiment of the present invention. The method can be implemented by the functional modules of Fig. 1 or Fig. 2 and comprises the following steps:
S300: receiving external voice information;
S301: obtaining object location information of the environment in which the robot is currently located and image information of the current environment, wherein the image information of the current environment comprises planar image information and depth image information;
S302: controlling the state of the robot, or outputting corresponding interaction content, according to one or more of the external voice information, the object location information of the current environment, and the image information of the current environment.
In step S300, the received external voice information comprises human speech and ambient sound. In step S301, the object location information of the current environment may be obtained by the detecting module 30 to perform map construction, yielding map information and obstacle information of the current environment, and the image information of the current environment, comprising planar image information and depth image information, may be captured by the image capture module 40. In step S302, the controller 60 may control the walking state of the robot 2, or control the voice output module 50 to output corresponding interaction content, according to one or more of these three kinds of information.
As a further improvement to step S300, the voice input module 20 detects the sound source position in the current environment to receive the external voice information, and filters out environmental noise contained in the received information. By detecting and judging the sound source position, the robot 2 can adjust its receiving angle while receiving external voice information, making the received voice clearer, and can also determine the bearing of the sound source.
As a further improvement to step S301, after the image information of the environment in which the robot 2 is currently located is obtained, specific image information contained therein is identified, so as to adjust the output interaction content. In this embodiment, the specific image information may be human expression information and/or predefined object information. By recognizing a person's expression, the person's current mood can be perceived and the output interaction content adjusted according to the person's emotions: for example, when the person is perceived to be sad, comforting content can be output, whereas when the person is perceived to be happy, cheerful and light-hearted content can be output. The output interaction content may be stored in advance in a cloud server, or in a memory of the robot 2.
As another improvement to step S301, after the image information of the environment in which the robot 2 is currently located is obtained, specific image information contained therein is identified, so as to obtain position information of the robot from the specific image information, enabling the robot 2 to further perform functions such as route planning, navigation, cruising, and obstacle avoidance. In this embodiment, the specific image information comprises predefined object information. By identifying predefined object information to obtain its own position, the robot 2 saves positioning time and reduces computational complexity.
As another improvement to step S301, the object location information and image information of the current environment are obtained, and map construction is performed according to the object location information and/or the image information, so as to obtain position information of the robot 2, enabling the robot 2 to further perform functions such as route planning, navigation, cruising, and obstacle avoidance.
Fig. 4 is a structural diagram of the robot in an embodiment of the present invention. In this embodiment, the robot 2 comprises a housing 21, a voice output component 22, a display component 23, a voice input component 24, and an image capture component 25. The voice output component 22 is arranged on the housing 21. The display component 23 is rotatably arranged on the housing 21, captures human body information, and adjusts its orientation toward the human body according to the captured information; for example, it captures facial information and then turns toward the face, so that the face can directly face the display component 23. The voice input component 24 is rotatably arranged on the housing, detects the bearing of a sound source, and turns toward that bearing, so that it can clearly receive the voice information emitted by the sound source. The image capture component 25 is rotatably arranged on the housing 21 and captures image information of the current environment; it can capture the environment through 360°. The image information of the current environment comprises planar image information and depth image information, and the robot 2 controls the voice output component to output corresponding interaction content according to one or more of the voice information from the sound source, the image information of the current environment, and the human body information. In this embodiment, the display component 23, the voice input component 24, and the image capture component 25 can all translate and/or rotate relative to the housing, and can rotate like a human head. In other embodiments of the present invention, the image capture component 25 may also be arranged on the top of the housing 21.
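Turning a rotatable component (display, microphone, or camera) toward a detected bearing can be sketched as a bounded heading update per control tick. The 15° per-tick limit and the function name are assumptions of the sketch.

```python
def turn_toward(current_deg, target_deg, max_step_deg=15.0):
    """Rotate the component's heading one bounded step toward the target
    bearing, taking the shorter way around the circle."""
    # Signed angular difference wrapped into (-180, 180].
    diff = (target_deg - current_deg + 180.0) % 360.0 - 180.0
    step = max(-max_step_deg, min(max_step_deg, diff))
    return (current_deg + step) % 360.0
```

Called repeatedly, the heading converges on the face or sound source bearing without ever rotating the long way around.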
Referring to Fig. 5, as a further improvement of the present invention, the robot 2 also comprises a driving component 26 and a distance measurement component 27. The driving component 26 is arranged below the housing 21 and drives the robot 2 to move, enabling automatic movement and enhancing the robot's interactivity. The distance measurement component 27 is rotatably arranged on the housing 21 and obtains object location information of the current environment, from which the robot 2 can derive its own position as well as map and obstacle information of the current environment, and can further perform functions such as route planning, navigation, cruising, and obstacle avoidance. The distance measurement component 27 can translate and/or rotate relative to the housing, and may comprise one or more of a laser radar, a depth camera, an ultrasonic sensor, and an infrared switch.
In one embodiment of the present invention, the robot 2 also performs map construction and path planning according to one or more of the voice information from the sound source, the image information of the current environment, and the human body information, so as to approach the sound source or the human body until the distance between them is no greater than a preset distance threshold. This yields a better interaction effect: a person can interact with the robot 2 at close range, or the robot 2 can automatically follow a person. The robot 2 can realize these functions through its internal controller.
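The "approach until within a preset distance" behaviour above can be sketched as a stepping loop that stops at the threshold. The 0.8 m threshold and 0.25 m step are assumed values for illustration.

```python
def approach(distance_m, threshold_m=0.8, step_m=0.25):
    """Step toward a person or sound source until no farther than the
    preset distance threshold; return the distance after each step."""
    trace = []
    while distance_m > threshold_m:
        # Never overshoot past the threshold: clamp the final step.
        distance_m = max(threshold_m, distance_m - step_m)
        trace.append(round(distance_m, 2))
    return trace
```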
In an embodiment of the present invention, the image capture component 25 also identifies specific image information contained in the captured image information, and the robot 2 can perform positioning and navigation according to the identified specific image information. In this embodiment, the specific image information comprises predefined object information. By identifying predefined object information to obtain its own position, the robot 2 saves positioning time and reduces computational complexity. The image capture component 25 may comprise an RGB camera and a depth camera, where the RGB camera captures planar image information and the depth camera captures depth image information.
As a further improvement of the present invention, the robot 2 also comprises a charging unit 28 (not shown). The charging unit 28 monitors the charge state of the robot 2 and judges whether a charging device is present in the current environment. When the charging unit 28 judges that the battery of the robot 2 is low, it searches for a charging device in the current environment, so that the robot 2 automatically navigates to the vicinity of the charging device and obtains electric power from it through the charging unit. For example, wireless positioning devices may be installed in the charging device and inside the robot 2 to determine their relative positions, so that the robot can navigate to the vicinity of the charging device when its battery is low. In addition, the robot 2 can also perform mapping and localization through the distance measurement component 27 to locate the charging device and automatically navigate to it for charging.
By providing multiple sensing modules, the above robot, robot system, and robot control method obtain images, sounds, and human expression information from the surrounding environment and interact with people in real time, while selecting an optimal path and avoiding obstacles in real time during movement, thereby improving the user experience.
Specific examples have been applied herein to illustrate the principles and embodiments of the present invention; the above description of the embodiments is intended only to help understand the method of the present invention and its core idea. Meanwhile, those skilled in the art may make changes to the specific implementations and application scope according to the idea of the embodiments of the present invention. In summary, this description should not be construed as limiting the present invention.