CN105058389A - Robot system, robot control method, and robot - Google Patents

Robot system, robot control method, and robot

Info

Publication number
CN105058389A
Authority
CN
China
Prior art keywords
information
robot
current environment
image information
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201510417388.XA
Other languages
Chinese (zh)
Inventor
郭盖华
徐成
Current Assignee
Shenzhen LD Robot Co Ltd
Original Assignee
Shenzhen Inmotion Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Inmotion Technologies Co Ltd
Priority to CN201510417388.XA
Publication of CN105058389A


Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00: Programme-controlled manipulators
    • B25J9/16: Programme controls
    • B25J9/1602: Programme controls characterised by the control system, structure, architecture
    • B25J9/161: Hardware, e.g. neural networks, fuzzy logic, interfaces, processor
    • B25J9/1674: Programme controls characterised by safety, monitoring, diagnostic
    • B25J9/1676: Avoiding collision or forbidden zones
    • B25J9/1694: Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors; perception control, multi-sensor controlled systems, sensor fusion
    • B25J19/00: Accessories fitted to manipulators, e.g. for monitoring, for viewing; safety devices combined with or specially adapted for use in connection with manipulators

Abstract

The invention discloses a robot system comprising a power supply module, a voice input module, a detection module, an image acquisition module, a voice output module and a controller. The voice input module receives external voice information; the detection module obtains object position information for the environment in which the robot is located; the image acquisition module captures image information of the current environment, which comprises both planar image information and depth image information. The controller controls the state of the robot according to one or more of the external voice information, the object position information and the image information, or controls the voice output module to output corresponding interactive content. The invention further discloses a robot control method and a robot. By providing multiple sensing modules, the robot, robot system and robot control method obtain images, sound and other information about the surrounding environment, enabling real-time interaction between the robot and people and improving the user experience.

Description

Robot system, robot control method, and robot
Technical field
The present invention relates to the field of robotics, and in particular to a robot for home services.
Background technology
With the development of robotics and the continuing deepening of artificial-intelligence research, intelligent robots play an increasingly important role in human life. Many single-function robots are on the market; each combines one specific function with autonomous movement, and their interaction modes are limited. People's demands, however, grow daily and become increasingly varied, and such single-function robots can neither meet those demands nor offer much autonomy or intelligence. An improved robot is therefore urgently needed to overcome these defects.
Summary of the invention
Embodiments of the present invention provide a robot system with diverse interaction and strong autonomy.
Embodiments of the present invention provide a robot control method with diverse interaction and strong autonomy.
Embodiments of the present invention provide a robot with diverse interaction and strong autonomy.
An embodiment of the present invention provides a robot system for use in a robot, comprising a power supply module, a voice input module, a detection module, an image acquisition module, a voice output module and a controller. The power supply module powers the robot. The voice input module receives external voice information. The detection module obtains object position information for the current environment in which the robot is located. The image acquisition module captures image information of the current environment. The controller controls the state of the robot according to one or more of the external voice information, the object position information and the image information of the current environment, or controls the voice output module to output corresponding interactive content. The image information of the current environment comprises planar image information and depth image information.
Preferably, the robot system further comprises an image recognition module for identifying specific image information contained in the captured images, the specific image information comprising facial-expression information and/or predefined object information, and the controller further adjusts the interactive content output by the voice output module according to the specific image information.
Preferably, the controller further obtains the position information of the robot according to the specific image information.
Preferably, the controller further performs map construction according to the object position information and/or the image information of the current environment, so as to obtain the position information of the robot.
Preferably, the robot system further comprises a display module, and the controller further controls the display module to output corresponding interactive picture content according to one or more of the external voice information, the object position information and the image information of the current environment.
Preferably, the voice input module comprises an array of multiple microphones, and is further used for detecting the sound-source position in the current environment and filtering the environmental noise contained in the received external voice information.
Preferably, the detection module comprises one or more of a laser radar, a depth camera, an ultrasonic sensor and an infrared switch.
Preferably, the robot system further comprises a wireless transceiver module for transmitting and receiving wireless signals, so as to communicate wirelessly with an external device.
Preferably, the robot system further comprises a charging module for judging the remaining power of the power supply module and, when the power is insufficient, detecting whether the current environment has a charging device, so as to control the robot to approach the charging device and charge.
Preferably, the robot system further comprises a walking module for controlling the motion state of the robot according to one or more of the external voice information, the object position information and the image information of the current environment.
An embodiment of the invention further provides a robot control method, applied to a robot, comprising:
receiving external voice information;
obtaining object position information for the current environment in which the robot is located and image information of the current environment;
controlling the state of the robot according to one or more of the external voice information, the object position information and the image information of the current environment, or outputting corresponding interactive content;
wherein the image information of the current environment comprises planar image information and depth image information.
Preferably, obtaining the image information of the current environment in which the robot is located comprises:
obtaining the image information of the current environment and identifying the specific image information it contains, the specific image information comprising facial-expression information and/or predefined object information, so as to adjust the output interactive content.
Preferably, obtaining the image information of the current environment in which the robot is located comprises:
obtaining the image information of the current environment and identifying the specific image information it contains, the specific image information comprising predefined object information, so as to obtain the position information of the robot from the specific image information.
Preferably, obtaining the object position information and the image information of the current environment comprises:
obtaining the object position information and the image information of the current environment and performing map construction according to one or both of them, so as to obtain the position information of the robot.
Preferably, receiving external voice information comprises:
detecting the sound-source position in the current environment so as to receive the external voice information, and filtering the environmental noise contained in the received voice information.
An embodiment of the invention also provides a robot comprising a housing, a voice output component, a display component, a voice input component and an image acquisition component. The voice output component is arranged on the housing. The display component is rotatably arranged on the housing, captures human-body information, and adjusts its orientation toward the human body according to the captured information. The voice input component is rotatably arranged on the housing, detects the direction of a sound source, and adjusts its orientation toward the sound source according to the detected direction. The image acquisition component is rotatably arranged on the housing and captures image information of the current environment, comprising planar image information and depth image information. The robot controls the voice output component to output corresponding interactive content according to one or more of the sound source's voice information, the image information of the current environment and the human-body information.
Preferably, the robot also comprises a drive component arranged below the housing for driving the robot to move.
Preferably, the robot also performs map construction and path planning according to one or more of the sound source's voice information, the image information of the current environment and the human-body information, so as to approach the sound source or the human body until the distance is no greater than a preset distance threshold.
Preferably, the robot also comprises a distance-measurement component rotatably arranged on the housing for obtaining object position information of the current environment, the robot also performing map construction according to that information.
Preferably, the display component, the voice input component, the image acquisition component and the distance-measurement component can all translate and/or rotate.
Preferably, the image acquisition component also identifies specific image information contained in the captured images, the specific image information comprising predefined object information, and the robot also localizes and navigates according to it.
Preferably, the robot also comprises a charging component for judging whether a charging device exists in the current environment; when power is insufficient, the robot approaches the charging device and charges through the charging component.
Brief description of the drawings
To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings required for describing the embodiments are briefly introduced below. The drawings described below are obviously only some embodiments of the present invention; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a module diagram of the robot system in an embodiment of the present invention;
Fig. 2 is a module diagram of the robot system in another embodiment of the present invention;
Fig. 3 is a flow chart of the robot control method in an embodiment of the present invention;
Fig. 4 is a structural diagram of the robot in an embodiment of the present invention;
Fig. 5 is a structural diagram of the robot in another embodiment of the present invention.
Detailed description of the invention
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. The described embodiments are obviously only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art from these embodiments without creative effort fall within the protection scope of the present invention.
Fig. 1 shows a module diagram of the robot system in an embodiment of the present invention. In this embodiment, the robot system 1 is used in a robot 2. The robot system 1 comprises a power supply module 10, a voice input module 20, a detection module 30, an image acquisition module 40, a voice output module 50 and a controller 60. The power supply module 10 provides an electric power signal to power the robot 2. The voice input module 20 receives external voice information. The detection module 30 obtains object position information for the current environment in which the robot 2 is located. The image acquisition module 40 captures image information of the current environment. The controller 60 controls the state of the robot 2 according to one or more of the external voice information received by the voice input module 20, the object position information obtained by the detection module 30 and the image information captured by the image acquisition module 40, or controls the voice output module 50 to output corresponding interactive content. In this embodiment, the external voice information received by the voice input module 20 comprises a person's voice and environmental sound. The detection module 30 performs map construction from the obtained object position information, yielding map information and obstacle information for the current environment; an obstacle may be one newly added to the environment map at a given moment. The image information captured by the image acquisition module 40 comprises planar image information and depth image information. The controller 60 controls the state of the robot 2, and also adjusts the interactive content output by the voice output module 50, according to one or more of these three kinds of information: the external voice information, the object position information of the current environment and the image information of the current environment. The state of the robot 2 comprises its walking state, operating parameters and the like.
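The patent does not specify how the controller combines the three kinds of information; as an illustrative sketch only, the selection of a coarse robot state from one or more sensing inputs might be arranged as a simple priority rule. All names, thresholds and state labels below are assumptions, not part of the disclosure:

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class SensorSnapshot:
    voice_text: Optional[str] = None   # recognized external voice, if any
    obstacles: List[Tuple[float, float]] = field(default_factory=list)  # (x, y) in metres
    person_visible: bool = False       # derived from planar + depth image analysis

def decide_state(snap: SensorSnapshot, stop_words=("stop", "halt")) -> str:
    """Pick a coarse robot state from one or more of the sensing inputs."""
    if snap.voice_text and any(w in snap.voice_text.lower() for w in stop_words):
        return "idle"                  # an explicit voice command takes priority
    if any(x * x + y * y < 0.25 for x, y in snap.obstacles):
        return "avoid_obstacle"        # an object closer than 0.5 m
    if snap.person_visible:
        return "interact"              # face the person and output speech
    return "patrol"
```

A real controller would of course blend these inputs continuously rather than pick one winner; the sketch only shows that each input can independently influence the robot's state.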
Please refer to Fig. 2. As a further improvement, the robot system 1 also comprises an image recognition module 70, which identifies specific image information contained in the images captured by the acquisition module 40; the controller 60 then adjusts the interactive content output by the voice output module 50 according to the identified specific image information. In this embodiment, the specific image information may be a person's facial-expression information and/or predefined object information. By recognizing a person's expression, the robot system 1 can perceive the person's current mood and adjust the output interactive content accordingly: for example, when the person is perceived to be sad, comforting content can be output, whereas when the person is perceived to be happy, cheerful and relaxed content can be output. The interactive content may be pre-stored in a cloud server or in a memory (not shown) included in the robot.
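The mood-dependent adjustment described above can be sketched as a simple lookup; the emotion labels and canned replies below are illustrative assumptions rather than anything specified by the patent:

```python
# Hypothetical mapping from a recognized expression to interaction content.
RESPONSES = {
    "sad": "comforting content",            # e.g. soothing speech
    "happy": "cheerful, relaxed content",
    "angry": "calming content",
}

def pick_interaction(expression: str, default: str = "neutral content") -> str:
    """Return the interaction content matching the recognized expression."""
    return RESPONSES.get(expression, default)
```

In practice the table would likely live in the cloud server or on-robot memory mentioned above, so that content can be updated without changing the controller.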
The controller 60 also obtains the position information of the robot 2 from the identified predefined object information, enabling the robot 2 to further realize functions such as path planning, navigation, cruising and obstacle avoidance. By obtaining the robot's position through recognition of predefined objects, the robot system 1 saves positioning time and reduces computational complexity.
In an embodiment of the present invention, the controller 60 also performs map construction according to the object position information obtained by the detection module 30 and/or the image information captured by the image acquisition module 40, so as to obtain the position of the robot 2 and thereby enable functions such as path planning, navigation, cruising and obstacle avoidance.
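A minimal sketch of the map-construction step, assuming a fixed-size occupancy grid indexed from detected object positions; the grid size, resolution and centred coordinate convention are all assumptions made for illustration:

```python
class OccupancyGrid:
    """Occupancy grid centred on the robot's start pose; 0 = free, 1 = occupied."""

    def __init__(self, size: int = 100, resolution: float = 0.1):
        self.size = size
        self.res = resolution                       # metres per cell
        self.cells = [[0] * size for _ in range(size)]

    def _index(self, x: float, y: float):
        # World coordinates (metres) to grid indices, origin at grid centre.
        return int(x / self.res) + self.size // 2, int(y / self.res) + self.size // 2

    def mark(self, x: float, y: float) -> None:
        """Record a detected object at world position (x, y)."""
        i, j = self._index(x, y)
        if 0 <= i < self.size and 0 <= j < self.size:
            self.cells[j][i] = 1

    def occupied(self, x: float, y: float) -> bool:
        i, j = self._index(x, y)
        return 0 <= i < self.size and 0 <= j < self.size and self.cells[j][i] == 1
```

Each range reading from the detection module would be converted to a world position and passed to `mark`; a path planner then queries `occupied` when avoiding obstacles.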
As a further improvement, the robot system 1 also comprises a walking module 80, which controls the motion state of the robot 2 according to one or more of the external voice information, the object position information and the image information of the current environment.
In an embodiment of the present invention, the voice input module 20 comprises an array of multiple microphones. It also detects the sound-source position in the current environment and filters the environmental noise contained in the received external voice information. By detecting and judging the sound-source position, the robot 2 can adjust the angle at which it receives sound while receiving external voice information, making the received voice information clearer and, at the same time, determining the direction of the sound source.
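The patent does not detail how the microphone array derives a direction; a common far-field approach, shown here with an assumed microphone spacing and sign convention, estimates the bearing from the time difference of arrival between a microphone pair:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly room temperature

def bearing_from_tdoa(delay_s: float, mic_spacing_m: float) -> float:
    """Bearing (degrees) of a far-field source from the inter-microphone delay.

    delay_s > 0 means the sound reached the reference microphone first.
    0 degrees is broadside to the pair; +/-90 degrees is along the pair's axis.
    """
    ratio = delay_s * SPEED_OF_SOUND / mic_spacing_m
    ratio = max(-1.0, min(1.0, ratio))  # clamp measurement noise into asin's domain
    return math.degrees(math.asin(ratio))
```

With more than two microphones, bearings from several pairs can be combined for a full direction estimate; the noise filtering mentioned above would typically run as a separate step (e.g. spectral subtraction) before recognition.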
As a further improvement, the robot system 1 also comprises a display module 90. The controller 60 also controls the display module 90 to output corresponding interactive picture content according to one or more of the external voice information received by the voice input module 20, the object position information obtained by the detection module 30 and the image information captured by the image acquisition module 40, further improving the diversity of the robot's interaction. In this embodiment, the display module 90 may be a liquid-crystal display, a projection device or another module for displaying images.
As a further improvement, the robot system 1 also comprises a wireless transceiver module 90 for transmitting and receiving wireless signals, so that the robot 2 can communicate wirelessly with an external device. The wireless transceiver module 90 may comprise one or more of WiFi, Bluetooth, ZigBee and Z-Wave.
As a further improvement, the robot system 1 also comprises a charging module 100, which judges the remaining power of the power supply module 10 and, when the power is insufficient, detects whether the current environment has a charging device, so as to control the robot 2 to approach the charging device and charge. For example, wireless positioning devices may be installed in the charging device and inside the robot 2 to determine their relative position, so that the robot can navigate to the charging device when its power is low. In addition, the robot 2 can also build a map and localize itself through the detection module 30, locate the charging device, and automatically navigate to it to charge.
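The charging behaviour can be sketched as one decision step per control cycle; the low-power threshold, docking radius and planar coordinates below are assumptions for illustration only:

```python
from typing import Optional, Tuple

def charging_step(battery_pct: float,
                  charger_pos: Optional[Tuple[float, float]],
                  robot_pos: Tuple[float, float],
                  low_threshold: float = 20.0,
                  dock_radius: float = 0.3) -> str:
    """One decision step of the charging behaviour described above."""
    if battery_pct >= low_threshold:
        return "continue_task"          # power sufficient, no charging needed
    if charger_pos is None:
        return "search_charger"         # scan the environment/map for a dock
    dx = charger_pos[0] - robot_pos[0]
    dy = charger_pos[1] - robot_pos[1]
    if (dx * dx + dy * dy) ** 0.5 <= dock_radius:
        return "dock_and_charge"        # close enough to engage the charger
    return "navigate_to_charger"        # plan a path toward the dock
```

Whether `charger_pos` comes from the wireless positioning devices or from map-based localization is interchangeable here; only the decision logic is sketched.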
In an embodiment of the present invention, the detection module 30 may use one or more of a laser radar, a depth camera, an ultrasonic sensor and an infrared switch. The image acquisition module 40 may combine an RGB camera, which captures planar image information, with a depth camera, which captures depth image information. The voice output module 50 may use speech synthesis to convert the content to be expressed into sound and play it through a loudspeaker; it can play various alerts, reminders, music and any other content conveyed by voice interaction, thereby supporting interaction based on recognized facial expressions.
Fig. 3 is a flow chart of the robot control method in an embodiment of the present invention. The method can be implemented by the functional modules of Fig. 1 or Fig. 2 and comprises the following steps:
S300: receive external voice information;
S301: obtain object position information for the current environment in which the robot is located and image information of the current environment, the image information comprising planar image information and depth image information;
S302: control the state of the robot, or output corresponding interactive content, according to one or more of the external voice information, the object position information and the image information of the current environment.
In step S300, the received external voice information comprises a person's voice and environmental sound. In step S301, the object position information of the current environment can be obtained by the detection module 30 for map construction, yielding map information and obstacle information for the current environment, and the image information of the current environment can be captured by the image acquisition module 40, comprising planar image information and depth image information. In step S302, the controller 60 can control the walking state of the robot 2, or control the voice output module 50 to output corresponding interactive content, according to one or more of these three kinds of information.
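Steps S300 to S302 amount to one pass of a sense-decide-act loop. As a hedged sketch (the callables and their return shapes are assumptions, not part of the method as claimed), the cycle could be wired as:

```python
def control_cycle(receive_voice, sense_environment, act):
    """One pass of the control method: S300 receive voice, S301 sense
    object positions plus planar/depth images, S302 act on the inputs."""
    voice = receive_voice()                    # S300: external voice information
    positions, images = sense_environment()    # S301: object positions + image info
    return act(voice, positions, images)       # S302: robot state or interaction output

# Stub callables standing in for the hardware modules of Fig. 1 / Fig. 2.
result = control_cycle(
    lambda: "hello robot",
    lambda: ([(1.0, 2.0)], {"plane": "...", "depth": "..."}),
    lambda v, p, i: f"greet (heard {v!r}, {len(p)} object(s) detected)",
)
```

Running the cycle repeatedly at a fixed rate gives the real-time behaviour the method describes; each stub would be replaced by the corresponding module's driver.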
As a further improvement to step S300, the voice input module 20 detects the sound-source position of the current environment while receiving the external voice information, and the environmental noise contained in the received information is filtered out. By detecting and judging the sound-source position, the robot 2 can adjust the angle at which it receives sound, making the received voice information clearer while also determining the direction of the sound source.
As a further improvement to step S301, after the image information of the current environment in which the robot 2 is located is obtained, the specific image information it contains is identified so as to adjust the output interactive content. In this embodiment, the specific image information may be a person's facial-expression information and/or predefined object information. By recognizing a person's expression, the person's current mood can be perceived and the output interactive content adjusted accordingly: for example, comforting content can be output when the person is perceived to be sad, and cheerful, relaxed content when the person is perceived to be happy. The output interactive content may be pre-stored in a cloud server or in a memory included in the robot 2.
As another improvement to step S301, after the image information of the current environment in which the robot 2 is located is obtained, the specific image information it contains is identified so as to obtain the position of the robot from it, thereby enabling path planning, navigation, cruising and obstacle avoidance. In this embodiment, the specific image information comprises predefined object information. Obtaining the robot's own position by recognizing predefined objects saves positioning time and reduces computational complexity.
As another improvement to step S301, the object position information and the image information of the current environment are obtained and map construction is performed according to one or both of them, so as to obtain the position of the robot 2 and thereby enable path planning, navigation, cruising and obstacle avoidance.
Fig. 4 is a structural diagram of the robot in an embodiment of the present invention. In this embodiment, the robot 2 comprises a housing 21, a voice output component 22, a display component 23, a voice input component 24 and an image acquisition component 25. The voice output component 22 is arranged on the housing 21. The display component 23 is rotatably arranged on the housing 21, captures human-body information, and adjusts its orientation toward the human body according to the captured information; for example, it can capture a person's face and then turn toward it so that the person directly faces the display component 23. The voice input component 24 is rotatably arranged on the housing, detects the direction of a sound source, and turns toward it according to the detected direction, so that it can clearly receive the voice information the source emits. The image acquisition component 25 is rotatably arranged on the housing 21 and can capture image information of the current environment through 360 degrees. The image information comprises planar image information and depth image information, and the robot 2 controls the voice output component to output corresponding interactive content according to one or more of the sound source's voice information, the image information of the current environment and the human-body information. In this embodiment, the display component 23, the voice input component 24 and the image acquisition component 25 can all translate and/or rotate relative to the housing, much as a human head turns. In other embodiments of the present invention, the image acquisition component 25 may also be arranged on top of the housing 21.
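The patent does not specify how the rotatable components track a face or a sound source; one common assumption is a bounded turn toward the detected bearing on each control cycle, which is what this sketch shows (the step limit is an illustrative parameter):

```python
def turn_toward(current_deg: float, target_deg: float,
                max_step_deg: float = 15.0) -> float:
    """Rotate a component's heading one bounded step toward a detected bearing."""
    # Shortest signed angular difference in (-180, 180].
    error = (target_deg - current_deg + 180.0) % 360.0 - 180.0
    step = max(-max_step_deg, min(max_step_deg, error))
    return (current_deg + step) % 360.0
```

Calling this once per cycle for the display, voice input and image acquisition components gives the smooth "head turning" behaviour described above, and the wrap-around handling keeps the turn direction correct across the 0/360 degree boundary.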
Referring to Fig. 5, as a further improvement, the robot 2 also comprises a drive component 26 and a distance-measurement component 27. The drive component 26 is arranged below the housing 21 and drives the robot 2 to move, so that the robot can move automatically, enhancing its interactivity. The distance-measurement component 27 is rotatably arranged on the housing 21 and obtains object position information for the current environment, from which the robot 2 can derive its own position as well as the map information and obstacle information of the current environment, and thereby further realize path planning, navigation, cruising and obstacle avoidance. The distance-measurement component 27 can translate and/or rotate relative to the housing and may comprise one or more of a laser radar, a depth camera, an ultrasonic sensor and an infrared switch.
In an embodiment of the present invention, the robot 2 also performs map construction and path planning according to one or more of the sound source's voice information, the image information of the current environment and the human-body information, so as to approach the sound source or the human body until the distance between them is no greater than a preset distance threshold. This yields better interaction: a person can interact with the robot 2 at close range, or the robot 2 can automatically follow the person. The robot 2 can realize these functions through its internal controller.
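The approach behaviour, moving toward the sound source or person until the distance is no greater than the preset threshold, can be sketched as repeated bounded steps in the plane; the step size and threshold values are illustrative assumptions:

```python
from typing import Tuple

def step_toward(robot: Tuple[float, float], target: Tuple[float, float],
                threshold: float = 1.0, step: float = 0.25) -> Tuple[float, float]:
    """Move the robot one step toward the target unless already close enough."""
    dx, dy = target[0] - robot[0], target[1] - robot[1]
    dist = (dx * dx + dy * dy) ** 0.5
    if dist <= threshold:
        return robot                     # within the preset distance: stop approaching
    scale = min(step, dist - threshold) / dist
    return (robot[0] + dx * scale, robot[1] + dy * scale)
```

Calling `step_toward` once per control cycle with a freshly sensed target position also gives the person-following behaviour, since the target is re-read each cycle while the stopping threshold is preserved.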
In an embodiment of the present invention, the image acquisition component 25 is also used for identifying specific image information contained in the acquired image information, and the robot 2 can also perform positioning and navigation according to the identified specific image information. In this embodiment, the specific image information comprises predefined object information. By recognizing predefined objects, the robot 2 obtains its own position information, which saves positioning time and reduces computational complexity. The image acquisition component 25 may comprise an RGB camera for acquiring planar image information and a depth camera for acquiring depth image information.
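Positioning from recognized predefined objects can be illustrated as follows: each recognized landmark with a known map position, together with its observed offset from the robot, yields one position estimate, and the estimates are averaged. This is only a sketch of the idea; the data layout and names are assumptions:

```python
def locate_from_landmarks(landmark_map, observations):
    """Estimate the robot position from recognized predefined objects.
    landmark_map: {name: (x, y)} known world positions of landmarks.
    observations: {name: (dx, dy)} offset of each landmark as seen from
    the robot. Averages the per-landmark estimates; names and the data
    layout are assumptions, not the patent's representation."""
    xs = ys = 0.0
    n = 0
    for name, (dx, dy) in observations.items():
        if name not in landmark_map:
            continue                          # unrecognized object: skip
        lx, ly = landmark_map[name]
        xs += lx - dx                         # robot = landmark - offset
        ys += ly - dy
        n += 1
    return (xs / n, ys / n) if n else None
```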
As a further improvement of the present invention, the robot 2 also comprises a charging unit 28 (not shown). The charging unit 28 monitors the charge state of the robot 2 and judges whether a charging device exists in the current environment. When the charging unit 28 judges that the power of the robot 2 is insufficient, it searches for a charging device in the current environment, so that the robot 2 automatically navigates to the vicinity of the charging device and obtains power from it through the charging unit 28. For example, wireless positioning devices can be installed in the charging device and inside the robot 2 to determine their relative position, so that the robot can navigate to the charging device when its power is low. In addition, the robot 2 can also perform map building and positioning through the distance measurement parts 27, so as to locate the charging device and navigate automatically to it for charging.
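The charging logic described above amounts to a small decision rule: keep working while the battery is healthy, search for the charging device when power is low, and navigate to it once it has been located. A hedged sketch, with the threshold and action names assumed:

```python
def charging_action(battery_pct, charger_visible, low=20):
    """Decide the charging behaviour sketched above: keep working while
    the battery is healthy, search for the charging device when power
    is low, and navigate to it once it has been located. The threshold
    and action names are assumptions for illustration."""
    if battery_pct > low:
        return "continue_task"
    return "navigate_to_charger" if charger_visible else "search_charger"
```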
By providing multiple sensing modules, the above robot, robot system and robot control method obtain images of the surrounding environment, sounds, facial expressions of people and other information so as to interact with people in real time, while selecting an optimal path and avoiding obstacles in real time during movement, thereby improving the user experience.
Specific examples are used herein to set forth the principle and embodiments of the present invention; the above description of the embodiments is only intended to help understand the method of the present invention and its core idea. Meanwhile, those skilled in the art may, according to the idea of the embodiments of the present invention, make changes to the specific embodiments and the scope of application. In summary, this description should not be construed as limiting the present invention.

Claims (22)

1. A robot system, used in a robot, characterized by comprising:
a power module, for supplying power to the robot;
a voice input module, for receiving external voice information;
a detecting module, for obtaining object position information of the current environment in which the robot is located;
an image capture module, for capturing image information of the current environment;
a voice output module; and
a controller, for controlling the state of the robot according to one or more of the external voice information, the object position information of the current environment, and the image information of the current environment, or controlling the voice output module to output corresponding interaction content;
wherein the image information of the current environment comprises planar image information and depth image information.
2. The robot system as claimed in claim 1, characterized by further comprising:
a picture recognition module, for identifying specific image information contained in the image information, the specific image information comprising expression information and/or predefined object information, wherein the controller also adjusts the interaction content output by the voice output module according to the specific image information.
3. The robot system as claimed in claim 2, characterized in that the controller also obtains position information of the robot according to the specific image information.
4. The robot system as claimed in claim 1, characterized in that the controller also performs map construction according to the object position information of the current environment and/or the image information of the current environment, to obtain position information of the robot.
5. The robot system as claimed in claim 1, characterized by further comprising a display module, wherein the controller also controls the display module to output corresponding interactive picture content according to one or more of the external voice information, the object position information of the current environment, and the image information of the current environment.
6. The robot system as claimed in claim 1, characterized in that the voice input module comprises a plurality of microphone arrays, and is also used for detecting the sound source position of the current environment and filtering environmental noise contained in the received external voice information.
7. The robot system as claimed in claim 1, characterized in that the detecting module comprises one or more of a laser radar, a depth camera, an ultrasonic sensor, and an infrared switch.
8. The robot system as claimed in claim 1, characterized by further comprising:
a wireless transceiver module, for transmitting and receiving wireless signals so as to communicate wirelessly with an external device.
9. The robot system as claimed in claim 1, characterized by further comprising:
a charging module, for judging the power level of the power module and, when the power is insufficient, detecting whether the current environment has a charging device, so as to control the robot to approach the charging device for charging.
10. The robot system as claimed in claim 1, characterized by further comprising:
a walking module, for controlling the motion state of the robot according to one or more of the external voice information, the object position information of the current environment, and the image information of the current environment.
11. A robot control method, applied to a robot, characterized by comprising:
receiving external voice information;
obtaining object position information of the current environment in which the robot is located and image information of the current environment;
controlling the state of the robot according to one or more of the external voice information, the object position information of the current environment, and the image information of the current environment, or outputting corresponding interaction content;
wherein the image information of the current environment comprises planar image information and depth image information.
12. The robot control method as claimed in claim 11, characterized in that obtaining the image information of the current environment in which the robot is located comprises:
obtaining the image information of the current environment in which the robot is located, and identifying specific image information contained in the image information, the specific image information comprising expression information and/or predefined object information, so as to adjust the output interaction content.
13. The robot control method as claimed in claim 11, characterized in that obtaining the image information of the current environment in which the robot is located comprises:
obtaining the image information of the current environment in which the robot is located, and identifying specific image information contained in the image information, the specific image information comprising predefined object information, so as to obtain position information of the robot according to the specific image information.
14. The robot control method as claimed in claim 11, characterized in that obtaining the object position information of the current environment in which the robot is located and the image information of the current environment comprises:
obtaining the object position information of the current environment in which the robot is located and the image information of the current environment, and performing map construction according to the object position information of the current environment and/or the image information of the current environment, to obtain position information of the robot.
15. The robot control method as claimed in claim 11, characterized in that receiving external voice information comprises:
detecting the sound source position of the current environment to receive external voice information, and filtering environmental noise contained in the received external voice information.
16. A robot, characterized by comprising:
a shell;
voice output parts, arranged on the shell;
a display unit, rotatably arranged on the shell, for capturing human body information and adjusting its orientation toward the human body according to the captured human body information;
a voice-input component, rotatably arranged on the shell, for detecting the direction of a sound source and adjusting its orientation toward the sound source according to the detected direction;
an image acquisition component, rotatably arranged on the shell, for acquiring image information of the current environment;
wherein the image information of the current environment comprises planar image information and depth image information, and the robot controls the voice output parts to output corresponding interaction content according to one or more of the voice information of the sound source, the image information of the current environment, and the human body information.
17. The robot as claimed in claim 16, characterized by further comprising a driver part, arranged below the shell, for driving the robot to move.
18. The robot as claimed in claim 17, characterized in that the robot also performs map construction and path planning according to one or more of the voice information of the sound source, the image information of the current environment, and the human body information, so as to approach the sound source or the human body until the distance is no greater than a preset distance threshold.
19. The robot as claimed in claim 17, characterized by further comprising distance measurement parts, rotatably arranged on the shell, for obtaining object position information of the current environment, wherein the robot also performs map construction according to the object position information of the current environment.
20. The robot as claimed in claim 19, characterized in that the display unit, the voice-input component, the image acquisition component and the distance measurement parts can all translate and/or rotate.
21. The robot as claimed in claim 17, characterized in that the image acquisition component is also used for identifying specific image information contained in the acquired image information, the robot also performs positioning and navigation according to the specific image information, and the specific image information comprises predefined object information.
22. The robot as claimed in claim 17, characterized by further comprising a charging unit, for judging whether a charging device exists in the current environment, wherein the robot also approaches the charging device when its power is insufficient, so as to charge from the charging device through the charging unit.
CN201510417388.XA 2015-07-15 2015-07-15 Robot system, robot control method, and robot Pending CN105058389A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510417388.XA CN105058389A (en) 2015-07-15 2015-07-15 Robot system, robot control method, and robot

Publications (1)

Publication Number Publication Date
CN105058389A true CN105058389A (en) 2015-11-18

Family

ID=54488013

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510417388.XA Pending CN105058389A (en) 2015-07-15 2015-07-15 Robot system, robot control method, and robot

Country Status (1)

Country Link
CN (1) CN105058389A (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1799786A (en) * 2006-01-06 2006-07-12 华南理工大学 Housekeeping service robot
US20080215184A1 (en) * 2006-12-07 2008-09-04 Electronics And Telecommunications Research Institute Method for searching target object and following motion thereof through stereo vision processing and home intelligent service robot using the same
CN101436037A (en) * 2008-11-28 2009-05-20 深圳先进技术研究院 Dining room service robot system
CN102176222A (en) * 2011-03-18 2011-09-07 北京科技大学 Multi-sensor information collection analyzing system and autism children monitoring auxiliary system
CN102323817A (en) * 2011-06-07 2012-01-18 上海大学 Service robot control platform system and multimode intelligent interaction and intelligent behavior realizing method thereof
CN103240749A (en) * 2013-05-10 2013-08-14 广州博斯特智能科技有限公司 Service robot
CN103699126A (en) * 2013-12-23 2014-04-02 中国矿业大学 Intelligent tour guide robot
CN104102346A (en) * 2014-07-01 2014-10-15 华中科技大学 Household information acquisition and user emotion recognition equipment and working method thereof
CN104493827A (en) * 2014-11-17 2015-04-08 福建省泉州市第七中学 Intelligent cognitive robot and cognitive system thereof
CN104742139A (en) * 2015-03-23 2015-07-01 长源动力(北京)科技有限公司 Tele medicine auxiliary robot

Cited By (51)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105375897A (en) * 2015-11-30 2016-03-02 北京光年无限科技有限公司 Intelligent-robot-oriented environmental information processing method and device
CN105345822A (en) * 2015-12-17 2016-02-24 成都英博格科技有限公司 Intelligent robot control method and device
CN105345822B (en) * 2015-12-17 2017-05-10 成都英博格科技有限公司 Intelligent robot control method and device
CN105700528A (en) * 2016-02-19 2016-06-22 深圳前海勇艺达机器人有限公司 Autonomous navigation and obstacle avoidance system and method for robot
CN105825213A (en) * 2016-03-14 2016-08-03 深圳市华讯方舟科技有限公司 Human body identification and positioning method, robot and characteristic clothing
CN105825213B (en) * 2016-03-14 2020-09-11 华讯方舟科技有限公司 Human body identification and positioning method, robot and characteristic clothes
CN105700438A (en) * 2016-03-18 2016-06-22 北京光年无限科技有限公司 Electronic control system for multi-joint small robot
CN105843118A (en) * 2016-03-25 2016-08-10 北京光年无限科技有限公司 Robot interacting method and robot system
CN105843118B (en) * 2016-03-25 2018-07-27 北京光年无限科技有限公司 A kind of robot interactive method and robot system
CN105910599A (en) * 2016-04-15 2016-08-31 深圳乐行天下科技有限公司 Robot device and method for locating target
CN105929827A (en) * 2016-05-20 2016-09-07 北京地平线机器人技术研发有限公司 Mobile robot and positioning method thereof
CN107452381B (en) * 2016-05-30 2020-12-29 中国移动通信有限公司研究院 Multimedia voice recognition device and method
CN107452381A (en) * 2016-05-30 2017-12-08 中国移动通信有限公司研究院 A kind of multi-media voice identification device and method
CN106078755A (en) * 2016-06-21 2016-11-09 昆明理工大学 A kind of multifunctional medical service robot
CN106078755B (en) * 2016-06-21 2019-02-05 昆明理工大学 A kind of multifunctional medical service robot
CN105856260A (en) * 2016-06-24 2016-08-17 深圳市鑫益嘉科技股份有限公司 On-call robot
CN105856243A (en) * 2016-06-28 2016-08-17 湖南科瑞特科技股份有限公司 Movable intelligent robot
WO2018000268A1 (en) * 2016-06-29 2018-01-04 深圳狗尾草智能科技有限公司 Method and system for generating robot interaction content, and robot
WO2018000258A1 (en) * 2016-06-29 2018-01-04 深圳狗尾草智能科技有限公司 Method and system for generating robot interaction content, and robot
CN106205106A (en) * 2016-06-29 2016-12-07 北京智能管家科技有限公司 Intelligent mobile device based on acoustics and moving method, location moving method
CN107643509B (en) * 2016-07-22 2019-01-11 腾讯科技(深圳)有限公司 Localization method, positioning system and terminal device
CN107643509A (en) * 2016-07-22 2018-01-30 腾讯科技(深圳)有限公司 Localization method, alignment system and terminal device
CN106303476A (en) * 2016-08-03 2017-01-04 纳恩博(北京)科技有限公司 The control method of robot and device
CN108115695A (en) * 2016-11-28 2018-06-05 沈阳新松机器人自动化股份有限公司 A kind of emotional color expression system and robot
WO2018113263A1 (en) * 2016-12-22 2018-06-28 深圳光启合众科技有限公司 Method, system and apparatus for controlling robot, and robot
CN106774325A (en) * 2016-12-23 2017-05-31 湖南晖龙股份有限公司 Robot is followed based on ultrasonic wave, bluetooth and vision
CN108237547A (en) * 2016-12-27 2018-07-03 发那科株式会社 Industrial robot control device
US10456921B2 (en) 2016-12-27 2019-10-29 Fanuc Corporation Industrial-robot control device
CN106881716A (en) * 2017-02-21 2017-06-23 深圳市锐曼智能装备有限公司 Human body follower method and system based on 3D cameras robot
CN109605363B (en) * 2017-10-05 2021-10-26 财团法人交大思源基金会 Robot voice control system and method
CN109605363A (en) * 2017-10-05 2019-04-12 财团法人交大思源基金会 Robot voice control system and method
US10984816B2 (en) 2017-10-13 2021-04-20 Goertek Inc. Voice enhancement using depth image and beamforming
CN107680593A (en) * 2017-10-13 2018-02-09 歌尔股份有限公司 The sound enhancement method and device of a kind of smart machine
CN109703607B (en) * 2017-10-25 2020-06-23 北京眸视科技有限公司 Intelligent luggage van
CN109703607A (en) * 2017-10-25 2019-05-03 北京眸视科技有限公司 A kind of Intelligent baggage car
CN109991969A (en) * 2017-12-29 2019-07-09 周秦娜 A kind of control method and device that the robot based on depth transducer makes a return voyage automatically
CN108241311A (en) * 2018-02-05 2018-07-03 安徽微泰导航电子科技有限公司 A kind of microrobot electronics disability system
CN108241311B (en) * 2018-02-05 2024-03-19 安徽微泰导航电子科技有限公司 Micro-robot electronic disabling system
CN108687775A (en) * 2018-07-12 2018-10-23 上海常仁信息科技有限公司 Robot movable regional planning system based on robot identity card
CN108985667A (en) * 2018-10-25 2018-12-11 重庆鲁班机器人技术研究院有限公司 Home education auxiliary robot and home education auxiliary system
CN111515946A (en) * 2018-10-31 2020-08-11 杭州程天科技发展有限公司 Control method and device for human body auxiliary robot
CN111515946B (en) * 2018-10-31 2021-07-20 杭州程天科技发展有限公司 Control method and device for human body auxiliary robot
CN109771163A (en) * 2019-03-01 2019-05-21 弗徕威智能机器人科技(上海)有限公司 A kind of wheelchair automatic control system
CN112017661A (en) * 2019-05-31 2020-12-01 江苏美的清洁电器股份有限公司 Voice control system and method of sweeping robot and sweeping robot
CN112140118A (en) * 2019-06-28 2020-12-29 北京百度网讯科技有限公司 Interaction method, device, robot and medium
CN112140118B (en) * 2019-06-28 2022-05-31 北京百度网讯科技有限公司 Interaction method, device, robot and medium
CN110936383A (en) * 2019-12-20 2020-03-31 上海有个机器人有限公司 Obstacle avoiding method, medium, terminal and device for robot
CN111300429A (en) * 2020-03-25 2020-06-19 深圳市天博智科技有限公司 Robot control system, method and readable storage medium
CN112230652A (en) * 2020-09-04 2021-01-15 安克创新科技股份有限公司 Walking robot, method of controlling movement of walking robot, and computer storage medium
CN113084796A (en) * 2021-03-03 2021-07-09 广东理工学院 Control method and control device for intelligent interactive guidance robot
CN113739322A (en) * 2021-08-20 2021-12-03 科沃斯机器人股份有限公司 Purifier and control method thereof

Similar Documents

Publication Publication Date Title
CN105058389A (en) Robot system, robot control method, and robot
EP3463244B1 (en) Guide robot and method and system for operating the same
CN204814723U (en) Lead blind system
CN103699126B (en) The guidance method of intelligent guide robot
US9316502B2 (en) Intelligent mobility aid device and method of navigating and providing assistance to a user thereof
CN104772748B (en) A kind of social robot
AU2011256720B2 (en) Mobile human interface robot
CN105058393A (en) Guest greeting robot
US11330951B2 (en) Robot cleaner and method of operating the same
US11806862B2 (en) Robots, methods, computer programs, computer-readable media, arrays of microphones and controllers
CN104965426A (en) Intelligent robot control system, method and device based on artificial intelligence
CN106074096A (en) A kind of blind person's portable navigating instrument based on computer vision
CN112135553B (en) Method and apparatus for performing cleaning operations
CN109062201B (en) ROS-based intelligent navigation microsystem and control method thereof
CN107160403A (en) A kind of intelligent robot system with multi-functional human-machine interface module
KR102331672B1 (en) Artificial intelligence device and method for determining user's location
CN211022482U (en) Cleaning robot
CN110686694A (en) Navigation method, navigation device, wearable electronic equipment and computer readable storage medium
CN204423154U (en) A kind of automatic charging toy robot based on independent navigation
KR20190104488A (en) Artificial intelligence robot for managing movement of object using artificial intelligence and operating method thereof
KR20190098102A (en) Artificial intelligence device for controlling external device
US20180203515A1 (en) Monitoring
Narayani et al. Design of Smart Cane with integrated camera module for visually impaired people
CN110871446A (en) Vehicle-mounted robot, control method and system thereof, vehicle and storage medium
US20230050825A1 (en) Hands-Free Crowd Sourced Indoor Navigation System and Method for Guiding Blind and Visually Impaired Persons

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
CB02 Change of applicant information

Address after: 18th floor, Building B1, Nanshan Chi Park, No. 1001 Xueyuan Road, Nanshan District, Shenzhen, Guangdong 518055

Applicant after: INMOTION TECHNOLOGIES, INC.

Address before: 2nd and 6th floors, Building 8, Tongfuyu Industrial City, Tongle, Xili, Nanshan District, Shenzhen, Guangdong 518055

Applicant before: INMOTION TECHNOLOGIES, INC.

COR Change of bibliographic data
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right
TA01 Transfer of patent application right

Effective date of registration: 20171219

Address after: 16th floor, Building B1, Nanshan Chi Park, No. 1001 Xueyuan Road, Taoyuan Street, Nanshan District, Shenzhen, Guangdong 518000

Applicant after: Shenzhen LD Robot Co., Ltd.

Address before: 18th floor, Building B1, Nanshan Chi Park, No. 1001 Xueyuan Road, Nanshan District, Shenzhen, Guangdong 518055

Applicant before: INMOTION TECHNOLOGIES, INC.

RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20151118