WO2018113263A1 - Control method, system and apparatus for robot, and robot - Google Patents

Control method, system and apparatus for robot, and robot

Info

Publication number
WO2018113263A1
Authority
WO
WIPO (PCT)
Prior art keywords
robot
information
task
control information
environment
Prior art date
Application number
PCT/CN2017/092047
Other languages
English (en)
French (fr)
Inventor
刘若鹏
刘忠银
欧阳一村
Original Assignee
深圳光启合众科技有限公司
深圳光启创新技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳光启合众科技有限公司 and 深圳光启创新技术有限公司
Publication of WO2018113263A1 publication Critical patent/WO2018113263A1/zh

Classifications

    • G - PHYSICS
    • G05 - CONTROLLING; REGULATING
    • G05D - SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 - Control of position or course in two dimensions
    • G05D1/021 - Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212 - Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0217 - Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory in accordance with energy consumption, time reduction or distance reduction criteria
    • G - PHYSICS
    • G05 - CONTROLLING; REGULATING
    • G05D - SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 - Control of position or course in two dimensions
    • G - PHYSICS
    • G05 - CONTROLLING; REGULATING
    • G05D - SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 - Control of position or course in two dimensions
    • G05D1/021 - Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231 - Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246 - Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G - PHYSICS
    • G05 - CONTROLLING; REGULATING
    • G05D - SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D3/00 - Control of position or direction
    • G05D3/12 - Control of position or direction using feedback

Definitions

  • the present invention relates to the field of robots, and in particular to a method, system and device for controlling a robot and a robot.
  • At present, an intelligent robot can perform some simple tasks according to instructions issued by a user. For example, a home intelligent robot performs household cleaning according to instructions from a remote controller, and an industrial robot performs production-line operations according to instructions. Because an intelligent robot has various sensors analogous to human vision, hearing, and touch, as well as a central processor analogous to the human brain, it can carry out various preset instructions. However, after receiving a task, an intelligent robot usually executes it according to the originally set plan; once an abnormal situation occurs during execution, it may affect the task, so that the robot cannot complete it or is even damaged. Moreover, the types of sensors currently carried by intelligent robots are still limited, the reliability of their data and the stability of the system are poor, and the detected information is not accurate, which affects the decision process of the processor.
  • Embodiments of the present invention provide a method, system, and apparatus for controlling a robot and a robot to solve at least the technical problem of inaccurate control of the robot due to environmental changes in the prior art.
  • According to one aspect, a control method for a robot is provided, including: generating control information according to a collected task instruction, wherein the robot executes a task according to the control information; detecting, while the robot executes the control information, environment information of the environment in which the robot is located; and adjusting the control information according to the detected environment information of the environment in which the robot is located.
  • According to another aspect, a control system for a robot is provided, including: a data collection device configured to collect a task instruction; a controller connected to the data collection device and configured to obtain control information according to the task information, wherein the robot executes the task according to the control information; and an execution device that, while the robot executes the control information, detects environment information of the environment in which the robot is located; wherein the controller is further configured to adjust the control information according to the detected environment information of the environment in which the robot is located.
  • According to another aspect, a control apparatus for a robot is provided, including: an acquiring module configured to generate control information according to a collected task instruction, wherein the robot executes a task according to the control information; a detecting module configured to detect, while the robot executes the control information, environment information of the environment in which the robot is located; and an adjusting module configured to adjust the control information according to the detected environment information.
  • a robot comprising the control system of the above robot.
  • In the embodiments of the present invention, control information is generated according to the task information collected by the data collection device, environment information of the environment in which the robot is located is detected while the robot executes the control information, and the control information is adjusted according to the detected environment information, so that the robot can adapt to changes in its environment during task execution.
  • FIG. 1 is a flowchart of a control method of a robot according to an embodiment of the present invention;
  • FIG. 2 is a schematic structural diagram of a control system of a robot according to an embodiment of the present application;
  • FIG. 3 is a schematic structural diagram of an optional robot control system according to an embodiment of the present application;
  • FIG. 4 is a schematic diagram of a system configuration of a robot performing a task of fetching a water cup according to an embodiment of the present application;
  • FIG. 5 is a schematic structural diagram of a control apparatus for a robot according to an embodiment of the present application.
  • An embodiment of a control method for a robot is provided. It should be noted that the steps shown in the flowcharts of the accompanying drawings may be executed in a computer system, such as a set of computer-executable instructions, and that, although logical orders are shown in the flowcharts, in some cases the steps shown or described may be performed in an order different from that described herein.
  • FIG. 1 is a flowchart of a control method of a robot according to an embodiment of the present invention. As shown in FIG. 1, the method includes the following steps:
  • Step S102 Generate control information according to the collected task instruction, where the robot executes the task according to the control information.
  • The task instruction may be collected by a data collection device, which may be a sensor disposed at any part of the robot for acquiring environment information of the environment in which the robot is located and information about the robot itself, for example: an image sensor, an attitude sensor, a tactile sensor, a distance sensor, a color sensor, or a sound sensor.
  • The task instruction carries a task in an instruction received by the robot, and the control information is generated to control the robot's execution of that task.
  • For example, if the task is to fetch a water cup from a table, the task information may include the specific position of the cup and the distance between the cup and the robot, and the control information may include a planned optimal path and the force to apply to the cup.
  • Step S104 Detecting environmental information of the environment in which the robot is located in the process of executing the control information by the robot.
  • The foregoing environment information may be detected by various acquisition devices such as the robot's image sensor and infrared sensor.
  • Taking the task of fetching a water cup from the table as an example, the robot detects the distance between each obstacle and itself through the infrared sensor, and acquires an image of its environment through the image sensor.
  • Step S106 Adjust the control information according to the environment information of the detected environment where the robot is located.
  • While the controller controls the robot to execute the control information, if the infrared sensor detects an obstacle within a preset range of the robot, the robot starts the image sensor to obtain a picture of the current obstacle's position, determines the characteristics of the obstacle, and re-plans the route so that it can bypass the obstacle.
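The obstacle-avoidance step above (detect an obstacle, then re-plan the route around it) can be sketched as a search over an occupancy grid. This is a minimal illustration and not the patent's actual planner; the grid, start, and goal below are hypothetical values.

```python
from collections import deque

def replan_path(grid, start, goal):
    """Breadth-first search over an occupancy grid (1 = obstacle).

    Returns a list of (row, col) cells from start to goal, or None
    if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}          # also serves as the visited set
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:          # reconstruct the path backwards
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return None

# A newly detected obstacle blocks the direct route; re-plan around it.
grid = [[0, 0, 0],
        [0, 1, 0],   # obstacle reported by the infrared sensor
        [0, 0, 0]]
path = replan_path(grid, (0, 0), (2, 2))
```

Any shortest-path method (A*, Dijkstra) could stand in for the BFS; the point is only that the planner consumes the updated obstacle map and emits a detour.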
  • Through the above steps, control information is generated according to the task instruction collected by the data collection device, environment information of the environment in which the robot is located is detected while the robot executes the control information, and the control information is adjusted according to the detected environment information.
  • By continuously detecting environment information during task execution and adjusting the control information accordingly, the robot can respond to changes in its environment and avoid being disturbed by them, which solves the technical problem in the prior art of inaccurate control caused by environmental changes and achieves the technical effect of making the robot adaptive to its environment.
  • Step S102, generating control information according to the collected task instruction, includes:
  • S1021: Detect, by a plurality of sensors and according to the task instruction, the task information corresponding to the task instruction.
  • S1023: Perform fusion processing on the task information detected by the multiple sensors.
  • The foregoing fusion processing may use any one or more of the following models or algorithms: Kalman filtering, weighted-average fusion, Bayes estimation, statistical decision theory, probability-theory methods, fuzzy logic reasoning, and the like.
  • The above procedure detects task information with a plurality of sensors, fuses the detected task information, and performs control based on the fusion result.
  • Fusing the information detected by multiple sensors improves the accuracy of the detection result and solves the technical problem of inaccurate detection caused by reliance on a single sensor.
  • S1023, performing fusion processing on the task information detected by the multiple sensors, includes:
  • S10231: Obtain the credibility of a plurality of sensors of the same category with the same detection target, wherein the detection target is the task information corresponding to any one task instruction.
  • S10233: Determine the fusion result of the task information according to the credibility of each sensor and the task information detected by each sensor.
  • The weight of the information detected by each sensor may be determined according to that sensor's credibility, and the task information is then obtained from the weighted information.
  • For example, a plurality of position sensors may be used to detect the position of a target object: the most credible sensor is assigned the highest weight and the remaining sensors share the rest equally; after the coordinates of the target object detected by each position sensor in the world coordinate system are obtained, the values on each coordinate axis are weighted to obtain the final position of the target object.
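The credibility-weighted fusion just described reduces to a per-axis weighted average. The sketch below illustrates the idea; the readings and weights are hypothetical values, not figures from the patent.

```python
def fuse_positions(readings, weights):
    """Weighted average of 3-D positions, computed axis by axis."""
    total = sum(weights)
    return tuple(
        sum(w * p[axis] for p, w in zip(readings, weights)) / total
        for axis in range(3)
    )

# Three position sensors report the target's world coordinates.
# The most credible sensor gets the highest weight; the remaining
# sensors share the rest equally.
readings = [(1.0, 2.0, 0.0), (1.2, 2.1, 0.0), (0.9, 1.9, 0.1)]
weights = [0.6, 0.2, 0.2]
position = fuse_positions(readings, weights)
```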
  • S1023, performing fusion processing on the task information detected by the multiple sensors, may also include:
  • S10235: Obtain the task information corresponding to the task instruction detected by a plurality of sensors of different categories but with the same detection target, wherein the detection target is the task information corresponding to any one task instruction.
  • For example, the robot can obtain the distance between an obstacle and itself from the infrared sensor, and can also obtain that distance by analyzing the image acquired by the image sensor; the robot may therefore average the distance acquired by the infrared sensor and the distance obtained from the image analysis, and take the mean as the final distance.
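The cross-category fusion above is a plain arithmetic mean over heterogeneous estimates of the same quantity. The two sample distances below are hypothetical.

```python
def fuse_distances(estimates):
    """Mean of distance estimates from different sensor types."""
    return sum(estimates) / len(estimates)

infrared_distance = 1.48   # metres, from the infrared sensor
image_distance = 1.52      # metres, from image analysis
distance = fuse_distances([infrared_distance, image_distance])
```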
  • the method further includes:
  • Step S108: Detect execution information of the robot, and compare the execution information with the control information.
  • The execution information describes how the robot operates according to the control information, such as the robot's posture, walking path, and gait.
  • Step S1010: When the execution information differs from the control information, adjust the execution mechanism corresponding to the execution information.
  • Taking the task of fetching a water cup from the desktop as an example, if the posture acquired by the robot's posture sensor does not match the posture in the control information, each mechanism of the robot is adjusted to perform the posture in the control information.
  • The above steps not only detect the environment information of the robot's surroundings while it executes a task, but also detect information about the robot itself to ensure that the robot executes the task according to the control information; when the execution information is inconsistent with the control information, the robot is adjusted to ensure that it performs its task accurately.
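The comparison of execution information against control information in steps S108 and S1010 can be sketched as a per-joint check that emits a correction wherever the measured posture drifts from the commanded one. Joint names, angles, and the tolerance are hypothetical.

```python
def correct_posture(commanded, measured, tolerance=0.01):
    """Return per-joint corrections (radians) where the measured posture
    deviates from the commanded posture by more than `tolerance`."""
    return {
        joint: commanded[joint] - angle
        for joint, angle in measured.items()
        if abs(commanded[joint] - angle) > tolerance
    }

commanded = {"shoulder": 0.50, "elbow": 1.20}
measured = {"shoulder": 0.50, "elbow": 1.35}   # elbow has drifted
corrections = correct_posture(commanded, measured)
```

The execution mechanism would then apply each correction; joints within tolerance are left alone.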
  • Step S106, adjusting the control information according to the environment information detected while executing the control information, includes adjusting the control information according to changes in the environment, which in turn includes any one or more of the following:
  • S1061: Perform path adjustment when an obstacle is detected in the environment.
  • Still taking the task of fetching a water cup from the desktop as an example, when an obstacle exists on the path determined in the control information, the path is re-planned and the control information is adjusted.
  • S1063: Perform force adjustment when it is detected that the force applied by the robot while executing the control information does not satisfy a preset condition.
  • Taking the cup task as an example again, the robot grips the cup with the force specified in the control information while the force sensor detects the cup's reaction force on the robot; when the reaction force is insufficient to hold the cup, the force is adjusted to increase the force the robot applies to the cup.
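The force-adjustment loop of S1063 can be sketched as follows. The sensor model (reaction force proportional to applied force), the step size, and the safety limit are all hypothetical assumptions for illustration, not details from the patent.

```python
def adjust_grip(applied, required, read_feedback, step=0.1, max_force=5.0):
    """Raise the applied force until the tactile feedback reaches the
    threshold needed to hold the cup, or a safety limit is reached."""
    while read_feedback(applied) < required and applied < max_force:
        applied += step            # increase grip force incrementally
    return min(applied, max_force)

# Hypothetical sensor model: reaction force is 80 % of the applied force.
feedback = lambda f: 0.8 * f
force = adjust_grip(applied=1.0, required=1.2, read_feedback=feedback)
```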
  • The method further includes issuing an alarm after the control information is adjusted.
  • For example, the laser device and the camera determine the robot's own position and the target position;
  • the robot automatically adjusts its posture according to the output of its pose sensor; when an obstacle is encountered, the module sends an alarm message and the robot re-plans the optimal path;
  • the decision module continuously evaluates changes in the environment and outputs correct decisions until the specified task is completed.
  • FIG. 2 is a schematic structural diagram of a control system of a robot according to an embodiment of the present application. As shown in FIG. 2, the system includes: a data collection device 20 configured to collect a task instruction;
  • a controller 22 connected to the data collection device and configured to generate control information according to the task instruction, wherein the robot executes the task according to the control information;
  • and a detecting device 24 that, while the robot executes the control information, detects environment information of the environment in which the robot is located; wherein the controller is further configured to adjust the control information according to the detected environment information of the environment in which the robot is located.
  • The system generates control information according to the task instruction collected by the data collection device, detects environment information of the environment in which the robot is located while the robot executes the control information, and adjusts the control information according to the detected environment information.
  • By continuously detecting environment information during task execution and adjusting the control information accordingly, the robot can respond to changes in its environment and avoid being disturbed by them, which solves the technical problem of inaccurate control caused by environmental changes and achieves the technical effect of making the robot adaptive to its environment.
  • the data collection device comprises any one or more of the following: a camera, an attitude sensor, a tactile sensor, a distance sensor, a color sensor, and an acoustic sensor.
  • the foregoing system further includes:
  • a communication device configured to transmit the information collected by the data collection device to the controller, wherein the communication device is any one or a combination of the following: a Wi-Fi transmission device, a serial-port transmission device, and a Bluetooth transmission device.
  • The communication device is mainly used to send the collected data to the controller for processing; the communication manners it implements include but are not limited to Wi-Fi transmission, serial-port transmission, and Bluetooth transmission, and the appropriate transmission manner is selected according to the size of the data to be sent and received and the security requirements.
  • a power source for powering the data collection device, the controller, the execution device, and the communication device.
  • the power source may be composed of one or two lithium batteries.
  • FIG. 3 is a schematic structural diagram of an optional robot control system according to an embodiment of the present application. As shown in FIG. 3, the decision device is a controller for generating control information; the power supply is connected to the execution device, the decision device, the communication device, and the data collection device to power them; the decision device is also connected to the execution device to output control information to it and to receive the execution information it returns; the decision device is further connected to the communication device to transmit information through it; and the data collection device is also connected to the communication device to send the collected information to the decision device through it.
  • the robot is any one of the following: a biped robot, a multi-legged robot, a wheeled robot, and a crawler robot.
  • In an optional embodiment in which the decision device is a controller, the robot acquires voice information including the task instruction through the sound sensor, and then determines the environment of the task and the position of the water cup through the laser, radar, and camera. After the control information is determined, the robot executes the task and uses the posture sensor for self-orientation recognition, correcting its posture if it does not conform to the control information. While performing the task, the robot also uses the distance sensor to detect the distance between obstacles and itself.
  • When the detected distance to an obstacle is smaller than the distance between the target cup and the robot, it is determined that another obstacle lies between the robot and the cup, so the path is re-planned; the robot follows the re-planned path to the position of the cup, grasps the cup, uses the tactile sensor to detect the force the cup feeds back against the grasp, judges whether the fed-back force is sufficient to hold the cup, and adjusts the force on the cup according to the judgment.
  • The information detected by the plurality of sensors is fed back to the decision device, which generates control information based on the detected information and adjusts that control information.
  • FIG. 5 is a schematic structural diagram of a control device for a robot according to an embodiment of the present application. As shown in FIG. 5, the device includes:
  • the obtaining module 50 is configured to generate control information according to the collected task instruction, where the robot performs the task according to the control information.
  • the data collection device may be a sensor disposed on any part of the robot for acquiring environmental information of the environment in which the robot is located and information of the robot itself, such as: an image sensor, an attitude sensor, a tactile sensor, and a distance sensor. , color sensors and sound sensors.
  • the above tasks may be tasks carried in instructions received by the robot, and the control information is information generated to control the robot to perform tasks.
  • the task included in the instruction is to take a water cup on the table.
  • the task information may include: a specific location of the water cup, a distance between the water cup and the robot, and the like, and the control information may include: a planned optimal path, a strength of the water cup, and the like.
  • the first detecting module 52 is configured to detect environment information of an environment in which the robot is located in the process of executing the control information.
  • The foregoing environment information may be detected by various acquisition devices such as the robot's image sensor and infrared sensor.
  • Taking the task of fetching a water cup from the table as an example, the robot detects the distance between each obstacle and itself through the infrared sensor, and acquires an image of its environment through the image sensor.
  • the adjustment module 54 is configured to adjust the control information according to the detected environment information of the environment in which the robot is located.
  • the above task is still taken as an example of taking a water cup on the table.
  • While the controller controls the robot to execute the control information, if the infrared sensor detects an obstacle within a preset range of the robot, the robot starts the image sensor to obtain a picture of the current obstacle's position, determines the characteristics of the obstacle, and re-plans the route so that it can bypass the obstacle.
  • The apparatus generates control information through the acquiring module according to the task instruction collected by the data collection device, continuously detects the environment information of the robot's environment through the first detecting module while the robot executes the control information, and adjusts the control information through the adjusting module according to the detected environment information.
  • The foregoing acquiring module includes:
  • a detecting submodule configured to detect, by a plurality of sensors and according to the task instruction, the task information corresponding to the task instruction;
  • a fusion submodule configured to perform fusion processing on the task information detected by the multiple sensors;
  • a control submodule configured to obtain control information according to the result of the fusion processing.
  • The fusion submodule includes:
  • a first obtaining unit configured to acquire the credibility of a plurality of sensors of the same category with the same detection target, wherein the detection target is the task information corresponding to any one task instruction;
  • a first determining unit configured to determine the fusion result of the task information according to the credibility of each sensor and the task information detected by each sensor.
  • The foregoing fusion submodule may further include:
  • a second acquiring unit configured to acquire the task information corresponding to the task instruction detected by a plurality of sensors of different categories but with the same detection target, wherein the detection target is the task information corresponding to any one task instruction;
  • a second determining unit configured to take the mean of the task information detected by the plurality of sensors of different categories but with the same detection target as the fusion result of the task information.
  • The foregoing apparatus further includes:
  • a second detecting module configured to detect the execution information of the robot while the robot executes the control information, and to compare the execution information with the control information;
  • wherein the adjustment module is further configured to adjust the execution mechanism corresponding to the execution information when the execution information differs from the control information.
  • a robot comprising the control system of any one of the robots of Embodiment 2.
  • The robot body mainly serves as a platform for the control method described in the embodiments; the robot includes but is not limited to: a biped robot, a multi-legged robot (three or more legs), a wheeled robot, and a crawler robot.
  • The control system included in the robot generates control information according to the task instruction collected by the data collection device, detects environment information of the environment in which the robot is located while the robot executes the control information, and adjusts the control information according to the detected environment information.
  • By continuously detecting environment information during task execution and adjusting the control information accordingly, the robot can respond to changes in its environment and avoid being disturbed by them, thereby solving the technical problem in the prior art of inaccurate control caused by environmental changes and achieving adaptability of the robot to its environment.
  • the disclosed technical content may be implemented in other manners.
  • The device embodiments described above are merely illustrative. The division of the units may be a logical functional division, and an actual implementation may use another division manner; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • The mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, units, or modules, and may be electrical or in other forms.
  • The units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
  • the integrated unit if implemented in the form of a software functional unit and sold or used as a standalone product, may be stored in a computer readable storage medium.
  • based on this understanding, the part of the technical solution of the present invention that in essence contributes over the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium.
  • a number of instructions are included to cause a computer device (which may be a personal computer, server or network device, etc.) to perform all or part of the steps of the methods of the various embodiments of the present invention.
  • the foregoing storage medium includes various media that can store program code, such as a USB flash drive, read-only memory (ROM), random access memory (RAM), removable hard disk, magnetic disk or optical disk.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Electromagnetism (AREA)
  • Manipulator (AREA)
  • Toys (AREA)

Abstract

A control method, system and device for a robot, and a robot. The method includes: generating control information according to a collected task instruction (S102), wherein the robot executes the task according to the control information; detecting environmental information of the environment in which the robot is located while the robot executes the control information (S104); and adjusting the control information according to the detected environmental information of the environment in which the robot is located (S106). The method solves the prior-art technical problem of inaccurate control of a robot caused by environmental changes.

Description

Control Method, System and Device for a Robot, and Robot. Technical Field
[0001] The present invention relates to the field of robots, and in particular to a control method, system and device for a robot, and to a robot.
Background
[0002] At present, intelligent robots can perform simple tasks according to instructions issued by a user. For example, a household intelligent robot performs household cleaning according to instructions from a remote controller, and an industrial robot performs assembly-line operations according to instructions. Because intelligent robots are equipped with various sensors that resemble human vision, hearing and touch, together with a central processor that resembles the human brain, they can carry out various preset instructions. However, after receiving a task, an intelligent robot usually executes it according to the initially set plan; once an abnormal situation arises during execution, the execution of the task may be affected, so that the robot fails to complete the task or is even damaged. Moreover, current intelligent robots carry relatively few types of sensors, the reliability of their data and the stability of the system are poor, and the detected information is not accurate, which in turn affects the processor's decision-making. Technical Problem
[0003] No effective solution has yet been proposed for the prior-art problem of inaccurate control of a robot caused by environmental changes. Solution to the Problem
Technical Solution
[0004] Embodiments of the present invention provide a control method, system and device for a robot, and a robot, so as to at least solve the prior-art technical problem of inaccurate control of a robot caused by environmental changes.
[0005] According to one aspect of the embodiments of the present invention, a control method for a robot is provided, including: generating control information according to a collected task instruction, wherein the robot executes the task according to the control information; detecting environmental information of the environment in which the robot is located while the robot executes the control information; and adjusting the control information according to the detected environmental information of the environment in which the robot is located.
[0006] According to another aspect of the embodiments of the present invention, a control system for a robot is further provided, including: a data collection device configured to collect a task instruction; a controller connected to the data collection device and configured to obtain control information according to the task information, wherein the robot executes the task according to the control information; and an execution device configured to detect environmental information of the environment in which the robot is located while the robot executes the control information, wherein the controller is further configured to adjust the control information according to the detected environmental information of the environment in which the robot is located.
[0007] According to another aspect of the embodiments of the present invention, a control device for a robot is further provided, including: an acquisition module configured to generate control information according to a collected task instruction, wherein the robot executes the task according to the control information; a first detection module configured to detect environmental information of the environment in which the robot is located while the robot executes the control information; and an adjustment module configured to adjust the control information according to the detected environmental information of the environment in which the robot is located.
[0008] According to another aspect of the embodiments of the present invention, a robot is further provided, including the above control system for a robot.
Beneficial Effects of the Invention
Beneficial Effects
[0009] In the embodiments of the present invention, control information is generated according to the task information collected by the data collection device, environmental information of the environment in which the robot is located is detected while the robot executes the control information, and the control information is adjusted according to the detected environmental information. By continuing to detect environmental information while the robot executes its task and adjusting the control information accordingly, the above solution enables the robot to adapt its control information to environmental changes in a timely manner and prevents such changes from interfering with the robot, thereby solving the prior-art technical problem of inaccurate control caused by environmental changes and achieving the technical effect of a robot that responds to its environment.
Brief Description of the Drawings
Description of the Drawings
[0010] The drawings described here are provided for a further understanding of the present invention and constitute a part of this application; the exemplary embodiments of the present invention and their description serve to explain the invention and do not unduly limit it. In the drawings:
[0011] Fig. 1 is a flowchart of a control method for a robot according to an embodiment of the present invention;
[0012] Fig. 2 is a schematic structural diagram of a control system for a robot according to an embodiment of this application; [0013] Fig. 3 is a schematic structural diagram of an optional control system for a robot according to an embodiment of this application;
[0014] Fig. 4 is a schematic structural diagram of the system of a robot executing the task of fetching a cup of water according to an embodiment of this application; and
[0015] Fig. 5 is a schematic structural diagram of a control device for a robot according to an embodiment of this application.
Embodiments of the Invention
[0016] To help those skilled in the art better understand the solutions of the present invention, the technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the drawings of the embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the scope of protection of the present invention.
[0017] It should be noted that the terms "first", "second" and the like in the description, the claims and the above drawings of the present invention are used to distinguish similar objects, and are not necessarily used to describe a specific order or sequence. It should be understood that data so used are interchangeable where appropriate, so that the embodiments of the present invention described here can be implemented in orders other than those illustrated or described. In addition, the terms "comprise" and "have" and any variations thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product or device that includes a series of steps or units is not necessarily limited to the steps or units explicitly listed, but may include other steps or units that are not explicitly listed or that are inherent to the process, method, product or device.
[0018] Embodiment 1
[0019] According to an embodiment of the present invention, an embodiment of a control method for a robot is provided. It should be noted that the steps shown in the flowchart of the drawings may be executed in a computer system such as a set of computer-executable instructions, and, although a logical order is shown in the flowchart, in some cases the steps shown or described may be executed in an order different from the one here.
[0020] Fig. 1 is a flowchart of a control method for a robot according to an embodiment of the present invention. As shown in Fig. 1, the method includes the following steps:
[0021] Step S102: generate control information according to a collected task instruction, wherein the robot executes the task according to the control information. [0022] Specifically, the task instruction may be collected by a data collection device, which may be a sensor arranged at any position on the robot and used to obtain environmental information of the environment in which the robot is located as well as information about the robot itself, for example an image sensor, posture sensor, tactile sensor, distance sensor, color sensor or sound sensor. The task instruction may carry the task the robot receives, and the control information is generated in order to control the robot to execute the task.
[0023] In an optional embodiment, where the task contained in the instruction is to fetch a cup from a table, the task information may include the specific position of the cup, the distance between the cup and the robot, and so on, and the control information may include the planned optimal path, the grip force on the cup, and similar information.
[0024] Step S104: detect environmental information of the environment in which the robot is located while the robot executes the control information.
[0025] Specifically, the environmental information may be detected by various data collection devices of the robot, such as an image sensor and an infrared sensor.
[0026] In an optional embodiment, still taking fetching a cup from a table as an example, the robot detects the distance between each obstacle and itself through an infrared sensor, and obtains an image of its environment through an image sensor.
[0027] Step S106: adjust the control information according to the detected environmental information of the environment in which the robot is located.
[0028] In an optional embodiment, still taking fetching a cup from a table as an example, after the robot determines the optimal path from the image sensor and the position sensor, the controller controls the robot to execute the control information. While the robot executes the control information, the infrared sensor detects an obstacle within a preset range of the robot and determines its specific position; the robot then starts the image sensor to capture a picture of the obstacle's location, and if image analysis confirms the obstacle, the route is re-planned so that the robot moves around it.
[0029] As can be seen from the above, these steps obtain control information according to the task information collected by the data collection device, detect environmental information of the environment in which the robot is located while the robot executes the control information, and adjust the control information according to the detected environmental information. By continuing to detect environmental information while the robot executes its task and adjusting the control information accordingly, the robot can adapt its control information to environmental changes in a timely manner and prevent such changes from interfering with it, which solves the prior-art technical problem of inaccurate control caused by environmental changes and achieves the technical effect of a robot that responds to its environment. [0030] Optionally, according to the above embodiments of this application, step S102 of generating control information according to the collected task instruction includes:
[0031] S1021: detect the task instruction through multiple sensors, and detect the task information corresponding to the task instruction through multiple sensors according to the task instruction.
[0032] S1023: fuse the task information detected by the multiple sensors.
[0033] S1025: obtain the control information according to the result of the fusion.
[0034] Specifically, the fusion may apply any one or more of the following models or algorithms: Kalman filtering, weighted-average fusion, Bayesian estimation, statistical decision theory, probability-theoretic methods, fuzzy logic inference, and so on.
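As a rough illustration of the first technique named above, a scalar Kalman measurement update can be sketched as follows; the variable names and numeric values are illustrative assumptions, not values from the patent.

```python
def kalman_update(x, p, z, r):
    """Scalar Kalman measurement update: fuse the prior estimate x
    (with variance p) with a new sensor measurement z (noise variance r)."""
    k = p / (p + r)                  # Kalman gain: how much to trust the measurement
    return x + k * (z - x), (1.0 - k) * p

# A prior distance estimate of 1.0 m (variance 0.5) updated with a reading of 1.4 m:
x, p = kalman_update(1.0, 0.5, z=1.4, r=0.5)  # -> x = 1.2, p = 0.25
```

With equal prior and measurement variances the gain is 0.5, so the fused estimate lands halfway between the prior and the reading, and the variance halves.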
[0035] As can be seen from the above, these steps detect the task information through multiple sensors, fuse the task information detected by the multiple sensors, and perform control according to the fusion result. Fusing the information detected by multiple sensors improves the accuracy of the detection result and solves the technical problem of inaccurate detection results caused by relying on a single sensor.
[0036] It should be noted here that humans instinctively combine the information (scenes, sounds, smells, touch, etc.) detected by the body's various organs (eyes, ears, nose, limbs, etc.) with prior knowledge in order to assess the surrounding environment and the events taking place. The robot's multi-sensor data fusion above likewise judges and processes the data collected by multiple sensors jointly, so as to improve its decision-making and fault tolerance.
[0037] Optionally, according to the above embodiments of this application, S1023, fusing the task information detected by the multiple sensors, includes:
[0038] S10231: obtain the credibility of multiple sensors of the same type with the same detection target, where the detection target is the task information corresponding to any one task instruction.
[0039] S10233: determine the fusion result of the task information according to the credibility of each sensor and the task information detected by each sensor.
[0040] In the above steps, the weight of the information detected by each sensor may be determined according to the sensor's credibility, and the task information is then obtained by weighting on the basis of each sensor's weight.
[0041] In an optional case, multiple position sensors may be used to detect the position of a target object, with the most expensive sensor given the highest weight and the remaining sensors given equal weights. After the coordinates of the target object in the world coordinate system detected by the multiple position sensors are obtained, the values on each coordinate axis are weighted to obtain the final position of the target object.
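The credibility-weighted, per-axis fusion described in this example might be sketched as below; the credibility values and coordinates are made-up numbers for illustration.

```python
def fuse_positions(positions, credibilities):
    """Fuse 3-D position estimates axis by axis, weighting each sensor's
    reading by its credibility (a normalized weighted average)."""
    total = sum(credibilities)
    return tuple(
        sum(pos[axis] * c for pos, c in zip(positions, credibilities)) / total
        for axis in range(3)
    )

# One high-credibility sensor (weight 0.6) and two equal lower-credibility ones:
estimates = [(1.0, 2.0, 0.5), (1.2, 1.8, 0.5), (0.8, 2.2, 0.5)]
target = fuse_positions(estimates, [0.6, 0.2, 0.2])  # close to (1.0, 2.0, 0.5)
```

Because the two cheaper sensors deviate symmetrically here, the weighted result coincides with the high-credibility reading.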
[0042] Optionally, according to the above embodiments of this application, S1023, fusing the task information detected by the multiple sensors, includes:
[0043] S10235: obtain the task information corresponding to the task instruction detected by multiple sensors of different types but with the same detection target, where the detection target is the task information corresponding to any one task instruction.
[0044] S10237: determine the mean of the task information detected by the multiple sensors of different types but with the same detection target as the fusion result of the task information.
[0045] In an optional embodiment, the robot may obtain the distance between an obstacle and itself from an infrared sensor, and may also obtain that distance by analyzing the image captured by an image sensor; the robot can therefore average the distance obtained from the infrared sensor and the distance derived from the image, and take the mean as the final distance information.
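The averaging of the infrared-based and image-based estimates described above amounts to a simple arithmetic mean; a minimal sketch, with made-up readings:

```python
def fuse_mean(estimates):
    """Fuse estimates of the same quantity from different sensor types
    by taking their arithmetic mean."""
    return sum(estimates) / len(estimates)

infrared_m, image_m = 0.92, 0.98       # two distance estimates in meters
distance = fuse_mean([infrared_m, image_m])  # the mean of the two estimates
```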
[0046] Optionally, according to the above embodiments of this application, while the robot executes the control information, the method further includes:
[0047] Step S108: detect the execution information of the robot, and compare the execution information with the control information.
[0048] Specifically, the execution information is information about the robot itself while it acts according to the control information, for example the robot's posture, its walking path and its gait.
[0049] Step S1010: where the execution information differs from the control information, adjust the actuator corresponding to the execution information.
[0050] In an optional embodiment, still taking fetching a cup from a table as an example, when the robot's posture obtained by its posture sensor does not match the posture in the control information, the robot is controlled to adjust its mechanisms and execute the posture in the control information.
[0051] As can be seen from the above, these steps not only detect environmental information of the robot's environment while the robot executes a task, but also detect information about the robot itself, to ensure that the robot executes the task according to the control information; and when the robot's execution information does not match the control information, the robot is adjusted, ensuring the accuracy with which the robot executes the task.
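The compare-and-adjust loop of steps S108 and S1010 can be sketched as follows; the joint names, angles and tolerance are illustrative assumptions rather than values from the patent.

```python
def posture_corrections(commanded, measured, tolerance=0.02):
    """Compare commanded joint angles (the control information) with measured
    ones (the execution information); return a correction for each joint
    whose deviation exceeds the tolerance."""
    return {
        joint: commanded[joint] - measured[joint]
        for joint in commanded
        if abs(commanded[joint] - measured[joint]) > tolerance
    }

# The knee deviates by 0.05 rad, beyond the tolerance; the hip matches:
fix = posture_corrections({"hip": 0.10, "knee": -0.25}, {"hip": 0.10, "knee": -0.30})
```

The returned dictionary names only the actuators that need adjusting, mirroring the idea of correcting the mechanism that corresponds to the mismatched execution information.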
[0052] Optionally, according to the above embodiments of this application, S106, adjusting the control information according to the information detected during execution of the control information, includes adjusting the control information according to environmental changes, which includes any one or more of the following:
[0053] S1061: adjust the path when an obstacle is detected in the environment.
[0054] In an optional embodiment, still taking fetching a cup from a table as an example, when an obstacle is detected on the path determined in the control information, moving around the obstacle is taken as a subtask and the control information is adjusted.
[0055] S1063: adjust the force when it is detected that the force applied by the robot while executing the control information does not satisfy a preset condition.
[0056] In an optional embodiment, still taking fetching a cup from a table as an example, after the robot reaches the vicinity of the table and detects the position of the cup, when the robot grasps the cup with the force specified in the control information, a force sensor detects the cup's reaction force on the robot; if the reaction force is insufficient for grasping the cup, the force is adjusted and the force the robot applies to the cup is increased.
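The force adjustment in this example could be sketched as a simple feedback loop; the toy feedback model, step size and safety cap below are illustrative assumptions, not the patent's method.

```python
def adjust_grip(force, feedback, required, step=0.5, max_force=20.0):
    """Increase the applied grip force until the measured reaction force
    reaches the required level, without exceeding a safety cap."""
    while feedback(force) < required and force + step <= max_force:
        force += step
    return force

# Toy tactile model: the reaction force is 80% of the applied force.
final = adjust_grip(force=2.0, feedback=lambda f: 0.8 * f, required=4.0)
```

With this model the loop raises the force in 0.5 N steps until the 80% reaction reaches 4.0 N, i.e. at an applied force of 5.0 N; a real controller would read the tactile sensor instead of a lambda.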
[0057] Optionally, according to the above embodiments of this application, the method further includes: issuing an alarm message when the control information is adjusted.
[0058] This step serves to notify the user that the robot has changed the initial control information.
[0059] A complete embodiment is described below according to the above control method for a robot; in this embodiment, assume again that the scenario is a biped robot fetching a cup of water:
[0060] 1. When the robot's sound sensor receives the voice information, the robot determines its own position and the target position through lidar and a camera, and once its position is determined it walks toward the target position;
[0061] 2. While walking, it autonomously adjusts its posture according to the output of its posture sensor; meanwhile, when an obstacle is encountered, the distance module issues an alarm and the robot must re-plan the optimal path;
[0062] 3. After reaching the target position, the robot determines the position of the cup and reaches out to take it; to ensure that the cup does not fall, a tactile sensor on the hand monitors the applied force in real time, and the task of taking the cup is finally completed;
[0063] 4. During task execution, the decision module continuously evaluates environmental changes and outputs correct decisions until the specified task is completed.
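The re-planning in step 2 of the walk-through above can be sketched as a breadth-first search over a small occupancy grid; the grid, coordinates and obstacle position are illustrative assumptions, and a real planner would use the map built from the robot's sensors.

```python
from collections import deque

def plan_path(grid, start, goal):
    """Shortest path on a 2-D occupancy grid (0 = free, 1 = obstacle) via BFS;
    returns the list of cells from start to goal, or None if the goal is blocked."""
    rows, cols = len(grid), len(grid[0])
    parents = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:      # walk the parent links back to the start
                path.append(cell)
                cell = parents[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0 and nxt not in parents):
                parents[nxt] = cell
                queue.append(nxt)
    return None

grid = [[0, 0, 0],
        [0, 1, 0],   # 1 marks an obstacle detected during execution
        [0, 0, 0]]
path = plan_path(grid, (0, 0), (2, 2))  # a shortest route that skirts the obstacle
```

When a new obstacle is detected, the corresponding cell is marked 1 and `plan_path` is simply called again, which is the essence of the re-planning step.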
[0064] Embodiment 2
[0065] According to an embodiment of the present invention, an embodiment of a control system for a robot is provided. Fig. 2 is a schematic structural diagram of a control system for a robot according to an embodiment of this application. As shown in Fig. 2, the system includes: [0066] a data collection device 20 configured to collect a task instruction;
[0067] a controller 22 connected to the data collection device and configured to generate control information according to the task instruction, wherein the robot executes the task according to the control information; and
[0068] a detection device 24 configured to detect environmental information of the environment in which the robot is located while the robot executes the control information, wherein the controller is further configured to adjust the control information according to the detected environmental information.
[0069] As can be seen from the above, the system generates control information according to the task information collected by the data collection device, detects environmental information of the robot's environment while the robot executes the control information, and adjusts the control information according to the detected environmental information. By continuing to detect environmental information while the robot executes its task and adjusting the control information accordingly, the robot can adapt its control information to environmental changes in a timely manner and prevent such changes from interfering with it, which solves the prior-art technical problem of inaccurate control caused by environmental changes and achieves the technical effect of a robot that responds to its environment.
[0070] Optionally, according to the above embodiments of this application, the data collection device includes any one or more of the following: a camera, a posture sensor, a tactile sensor, a distance sensor, a color sensor and a sound sensor.
[0071] Optionally, according to the above embodiments of this application, the system further includes:
[0072] a communication device configured to transmit the information collected by the data collection device to the controller, the communication device being any one or a combination of the following: a wireless-fidelity (Wi-Fi) transmission device, a serial-port transmission device and a Bluetooth transmission device.
[0073] Specifically, the communication module is mainly used to send the collected data to the controller for data processing; the communication modes realized by the communication device include but are not limited to Wi-Fi transmission, serial-port transmission and Bluetooth transmission, and an appropriate mode is selected according to the size of the data to be sent and received and the security requirements.
[0074] A power supply is configured to supply power to the data collection device, the controller, the execution device and the communication device.
[0075] Specifically, the power supply may consist of one or two lithium batteries.
[0076] Fig. 3 is a schematic structural diagram of an optional control system for a robot according to an embodiment of this application. As shown in Fig. 3, the decision device is the controller and is used to generate control information. The power supply is connected to the execution device, the decision device, the communication device and the data collection device respectively, to supply power to them; the decision device is also connected to the execution device, to output control information to it and to receive the execution information it returns; the decision device is further connected to the communication device, to transmit information through it; and the data collection device is also connected to the communication device, to send the collected information to the decision device through it.
[0077] Optionally, according to the above embodiments of this application, where the controller is applied to a robot, the robot is any one of the following: a biped robot, a multi-legged robot, a wheeled robot and a tracked robot.
[0078] Fig. 4 is a schematic structural diagram of the system of a robot executing the task of fetching a cup of water according to an embodiment of this application. As shown in Fig. 4, in this example the decision device is the controller. The robot obtains voice information including the task instruction through a sound sensor, and then determines its environment and the position of the cup through laser, radar and a camera. After the control information is determined, the robot executes the task and uses a posture sensor to recognize its own posture, correcting the posture when it does not match the control information. During task execution it also uses a distance sensor to detect the distance between obstacles and itself; if the detected distance to an obstacle is smaller than the distance to the target cup, it determines that another obstacle lies between itself and the cup, re-plans the path and, upon reaching the cup's position along the re-planned path, grasps the cup. At the same time it uses a tactile sensor to detect the force feedback of the cup on the grasping part, judges whether the fed-back force is sufficient for grasping the cup, and adjusts the force applied to the cup according to the judgment. Throughout this process, the information detected by the various sensors is fed back to the decision device, which generates the control information according to the detected information and adjusts it.
[0079] Embodiment 3
[0080] According to an embodiment of the present invention, an embodiment of a control device for a robot is provided. Fig. 5 is a schematic structural diagram of a control device for a robot according to an embodiment of this application. As shown in Fig. 5, the device includes:
[0081] an acquisition module 50 configured to generate control information according to a collected task instruction, wherein the robot executes the task according to the control information.
[0082] Specifically, the data collection device may be a sensor arranged at any position on the robot and used to obtain environmental information of the environment in which the robot is located as well as information about the robot itself, for example an image sensor, posture sensor, tactile sensor, distance sensor, color sensor or sound sensor. The task may be the task carried in the instruction received by the robot, and the control information is generated in order to control the robot to execute the task.
[0083] In an optional embodiment, where the task contained in the instruction is to fetch a cup from a table, the task information may include the specific position of the cup, the distance between the cup and the robot, and so on, and the control information may include the planned optimal path, the grip force on the cup, and similar information.
[0084] a first detection module 52 configured to detect environmental information of the environment in which the robot is located while the control information is executed.
[0085] Specifically, the environmental information may be detected by various data collection devices of the robot, such as an image sensor and an infrared sensor.
[0086] In an optional embodiment, still taking fetching a cup from a table as an example, the robot detects the distance between each obstacle and itself through an infrared sensor, and obtains an image of its environment through an image sensor.
[0087] an adjustment module 54 configured to adjust the control information according to the detected environmental information of the environment in which the robot is located.
[0088] In an optional embodiment, still taking fetching a cup from a table as an example, after the robot determines the optimal path from the image sensor and the position sensor, the controller controls the robot to execute the control information. While the robot executes the control information, the infrared sensor detects an obstacle within a preset range of the robot and determines its specific position; the robot then starts the image sensor to capture a picture of the obstacle's location, and if image analysis confirms the obstacle, the route is re-planned so that the robot moves around it.
[0089] As can be seen from the above, in the device of this application the acquisition module generates control information according to the task information collected by the data collection device, the first detection module continues to detect environmental information of the robot's environment while the robot executes the control information, and the adjustment module adjusts the control information according to the detected environmental information. By continuing to detect environmental information while the robot executes its task and adjusting the control information accordingly, the robot can adapt its control information to environmental changes in a timely manner and prevent such changes from interfering with it, which solves the prior-art technical problem of inaccurate control caused by environmental changes and achieves the technical effect of a robot that responds to its environment.
[0090] Optionally, according to the above embodiments of this application, the acquisition module includes:
[0091] a detection submodule configured to detect the task instruction through multiple sensors, and to detect the task information corresponding to the task instruction through multiple sensors according to the task instruction; [0092] a fusion submodule configured to fuse the task information detected by the multiple sensors; and
[0093] a control submodule configured to obtain the control information according to the result of the fusion.
[0094] Optionally, according to the above embodiments of this application, the fusion submodule includes:
[0095] a first obtaining unit configured to obtain the credibility of multiple sensors of the same type with the same detection target, where the detection target is the task information corresponding to any one task instruction; and
[0096] a first determining unit configured to determine the fusion result of the task information according to the credibility of each sensor and the task information detected by each sensor.
[0097] Optionally, according to the above embodiments of this application, the fusion submodule includes:
[0098] a second obtaining unit configured to obtain the task information corresponding to the task instruction detected by multiple sensors of different types but with the same detection target, where the detection target is the task information corresponding to any one task instruction; and
[0099] a second determining unit configured to determine the mean of the task information detected by the multiple sensors of different types but with the same detection target as the fusion result of the task information.
[0100] Optionally, according to the above embodiments of this application, the control device further includes:
[0101] a second detection module configured to detect the execution information of the robot while the robot executes the control information, and to compare the execution information with the control information; and
[0102] an adjustment module configured to adjust, where the execution information differs from the control information, the actuator corresponding to the execution information.
[0103] Embodiment 4
[0104] According to an embodiment of the present invention, a robot is provided, including any one of the control systems for a robot in Embodiment 2.
[0105] Specifically, the robot body mainly serves as the platform for the control method described in the embodiments; the robot includes but is not limited to a biped robot, a multi-legged (three or more legs) robot, a wheeled robot and a tracked robot.
[0106] The control system included in the robot obtains control information according to the task information collected by the data collection device, detects environmental information of the robot's environment while the robot executes the control information, and adjusts the control information according to the detected environmental information. By continuing to detect environmental information while the robot executes its task and adjusting the control information accordingly, the robot can adapt its control information to environmental changes in a timely manner and prevent such changes from interfering with it, which solves the prior-art technical problem of inaccurate control caused by environmental changes and achieves the technical effect of a robot that responds to its environment.
[0107] The serial numbers of the above embodiments of the present invention are for description only and do not represent the merits of the embodiments.
[0108] In the above embodiments of the present invention, the description of each embodiment has its own emphasis; for parts not detailed in one embodiment, reference may be made to the relevant description of other embodiments.
[0109] In the several embodiments provided in this application, it should be understood that the disclosed technical content may be implemented in other ways. The device embodiments described above are merely illustrative; for example, the division into units may be only a logical functional division, and an actual implementation may divide them differently: multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, units or modules, and may be electrical or take other forms.
[0110] Units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solution of this embodiment.
[0111] In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist physically on its own, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
[0112] The integrated unit, if implemented in the form of a software functional unit and sold or used as an independent product, may be stored in a computer-readable storage medium. Based on this understanding, the part of the technical solution of the present invention that in essence contributes over the prior art, or all or part of the technical solution, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes a number of instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods described in the embodiments of the present invention. The foregoing storage medium includes various media that can store program code, such as a USB flash drive, read-only memory (ROM), random access memory (RAM), removable hard disk, magnetic disk or optical disk.
The above are only preferred embodiments of the present invention. It should be noted that a person of ordinary skill in the art may make several improvements and refinements without departing from the principles of the present invention, and these improvements and refinements shall also be regarded as falling within the scope of protection of the present invention.

Claims

Claims
[Claim 1] A control method for a robot, characterized by including:
generating control information according to a collected task instruction, wherein the robot executes the task according to the control information;
detecting environmental information of the environment in which the robot is located while the robot executes the control information; and
adjusting the control information according to the detected environmental information of the environment in which the robot is located.
[Claim 2] The method according to claim 1, characterized in that generating control information according to the collected task instruction includes:
detecting the task instruction through multiple sensors, and detecting the task information corresponding to the task instruction through multiple sensors according to the task instruction;
fusing the task information detected by the multiple sensors; and
obtaining the control information according to the result of the fusion.
[Claim 3] The method according to claim 2, characterized in that fusing the task information detected by the multiple sensors includes:
obtaining the credibility of multiple sensors of the same type with the same detection target, wherein the detection target is the task information corresponding to any one of the task instructions; and
determining the fusion result of the task information according to the credibility of each sensor and the task information detected by each sensor.
[Claim 4] The method according to claim 2, characterized in that fusing the task information detected by the multiple sensors includes:
obtaining the task information corresponding to the task instruction detected by multiple sensors of different types but with the same detection target, wherein the detection target is the task information corresponding to any one of the task instructions; and
determining the mean of the task information detected by the multiple sensors of different types but with the same detection target as the fusion result of the task information.
[Claim 5] The method according to any one of claims 1 to 4, characterized in that, while the robot executes the control information, the method further includes: detecting execution information of the robot, and comparing the execution information with the control information; and
where the execution information differs from the control information, adjusting the actuator corresponding to the execution information.
[Claim 6] The method according to any one of claims 1 to 4, characterized in that adjusting the control information according to the detected environmental information of the environment in which the robot is located includes adjusting the control information according to environmental changes, which includes any one or more of the following:
adjusting the path when an obstacle is detected in the environment; and
adjusting the force when it is detected that the force applied by the robot while executing the control information does not satisfy a preset condition.
[Claim 7] The method according to claim 6, characterized in that an alarm message is issued when the control information is adjusted.
[Claim 8] A control system for a robot, characterized by including:
a data collection device configured to collect a task instruction;
a controller connected to the data collection device and configured to generate control information according to the task instruction, wherein the robot executes the task according to the control information; and a detection device configured to detect environmental information of the environment in which the robot is located while the robot executes the control information;
wherein the controller is further configured to adjust the control information according to the detected environmental information of the environment in which the robot is located.
[Claim 9] The system according to claim 8, characterized in that the data collection device includes any one or more of the following: a camera, a posture sensor, a tactile sensor, a distance sensor, a color sensor and a sound sensor.
[Claim 10] The system according to claim 9, characterized in that the system further includes: a communication device configured to transmit the information collected by the data collection device to the controller, the communication device being any one or a combination of the following: a wireless-fidelity transmission device, a serial-port transmission device and a Bluetooth transmission device; and a power supply configured to supply power to the data collection device, the controller, the detection device and the communication device.
[Claim 11] The system according to any one of claims 8 to 10, characterized in that, where the controller is applied to a robot, the robot is any one of the following: a biped robot, a multi-legged robot, a wheeled robot and a tracked robot.
[Claim 12] A control device for a robot, characterized by including:
an acquisition module configured to generate control information according to a collected task instruction;
a first detection module configured to detect environmental information of the environment in which the robot is located while the robot executes the control information; and
an adjustment module configured to adjust the control information according to the detected environmental information of the environment in which the robot is located.
[Claim 13] The device according to claim 12, characterized in that the acquisition module includes: a detection submodule configured to detect the task instruction through multiple sensors, and to detect the task information corresponding to the task instruction through multiple sensors according to the task instruction; a fusion submodule configured to fuse the task information detected by the multiple sensors; and
a control submodule configured to obtain the control information according to the result of the fusion.
[Claim 14] The device according to claim 13, characterized in that the fusion submodule includes: a first obtaining unit configured to obtain the credibility of multiple sensors of the same type with the same detection target, wherein the detection target is the task information corresponding to any one of the task instructions; and
a first determining unit configured to determine the fusion result of the task information according to the credibility of each sensor and the task information detected by each sensor.
[Claim 15] The device according to claim 13, characterized in that the fusion submodule includes: a second obtaining unit configured to obtain the task information corresponding to the task instruction detected by multiple sensors of different types but with the same detection target, wherein the detection target is the task information corresponding to any one of the task instructions; and
a second determining unit configured to determine the mean of the task information detected by the multiple sensors of different types but with the same detection target as the fusion result of the task information.
[Claim 16] The device according to any one of claims 12 to 15, characterized in that the device further includes:
a second detection module configured to detect execution information of the robot while the robot executes the control information, and to compare the execution information with the control information; and an adjustment module configured to adjust, where the execution information differs from the control information, the actuator corresponding to the execution information.
[Claim 17] A robot, characterized by including the control system for a robot according to any one of claims 8 to 11 or the control device for a robot according to any one of claims 12 to 16.
PCT/CN2017/092047 2016-12-22 2017-07-06 Control method, system and device for robot, and robot WO2018113263A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201611199120.4A CN108227691A (zh) 2016-12-22 2016-12-22 Control method, system and device for robot, and robot
CN201611199120.4 2016-12-22

Publications (1)

Publication Number Publication Date
WO2018113263A1 true WO2018113263A1 (zh) 2018-06-28

Family

ID=62624334

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/092047 WO2018113263A1 (zh) 2016-12-22 2017-07-06 Control method, system and device for robot, and robot

Country Status (2)

Country Link
CN (1) CN108227691A (zh)
WO (1) WO2018113263A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114275676A (zh) * 2021-12-24 2022-04-05 广东省特种设备检测研究院(广东省特种设备事故调查中心) Crane structure safety assessment system and method

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109460030A (zh) * 2018-11-29 2019-03-12 广东电网有限责任公司 Robot obstacle avoidance system
CN110238879B (zh) * 2019-05-22 2022-09-23 菜鸟智能物流控股有限公司 Positioning method and device, and robot
CN111474935B (zh) * 2020-04-27 2023-05-23 华中科技大学无锡研究院 Path planning and positioning method, device and system for a mobile robot
CN112859851B (zh) * 2021-01-08 2023-02-21 广州视源电子科技股份有限公司 Multi-legged robot control system and multi-legged robot
CN117697769B (zh) * 2024-02-06 2024-04-30 成都威世通智能科技有限公司 Deep-learning-based robot control system and method

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1394660A (zh) * 2002-08-06 2003-02-05 哈尔滨工业大学 Fully autonomous soccer robot and intelligent control system thereof
JP2005111654A (ja) * 2003-09-19 2005-04-28 Sony Corp Robot device and walking control method for robot device
CN101251756A (zh) * 2007-12-21 2008-08-27 西北工业大学 Control device for a quadruped bionic robot
US7840308B2 (en) * 2004-09-10 2010-11-23 Honda Motor Co., Ltd. Robot device control based on environment and position of a movable robot
CN101943916A (zh) * 2010-09-07 2011-01-12 陕西科技大学 Robot obstacle avoidance method based on Kalman filter prediction
CN103413313A (zh) * 2013-08-19 2013-11-27 国家电网公司 Binocular vision navigation system and method based on an electric power robot
CN105058389A (zh) * 2015-07-15 2015-11-18 深圳乐行天下科技有限公司 Robot system, robot control method and robot
CN105116785A (zh) * 2015-06-26 2015-12-02 北京航空航天大学 Universal control system for multi-platform remote robots

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7689321B2 * 2004-02-13 2010-03-30 Evolution Robotics, Inc. Robust sensor fusion for mapping and localization in a simultaneous localization and mapping (SLAM) system
CN100528492C (zh) * 2007-08-16 2009-08-19 上海交通大学 Precision assembly manipulator with parallel-structure six-dimensional force sensing
CN101612733B (zh) * 2008-06-25 2013-07-31 中国科学院自动化研究所 Distributed multi-sensor mobile robot system
CN101356877B (zh) * 2008-09-19 2012-06-20 中国农业大学 Cucumber-picking robot system and picking method in a greenhouse environment
CN102175774B (zh) * 2011-01-26 2013-05-01 北京主导时代科技有限公司 Manipulator-based probe positioning device and method for a rim and spoke flaw-detection system
CN103412490B (zh) * 2013-08-14 2015-09-16 山东大学 Polyclonal artificial immune network algorithm for dynamic path planning of multiple robots
CN104199454A (zh) * 2014-09-27 2014-12-10 江苏华宏实业集团有限公司 Inspection robot control system for high-voltage lines
CN104898662B (зh) * 2014-11-27 2017-04-26 宁波市智能制造产业研究院 Service robot realizing intelligent obstacle crossing
CN105706637A (zh) * 2016-03-10 2016-06-29 西北农林科技大学 Tracked multi-manipulator apple-picking robot capable of autonomous navigation
CN106054829B (зh) * 2016-05-27 2018-12-21 山东建筑大学 Operation method of a household water-delivery service robot system


Also Published As

Publication number Publication date
CN108227691A (zh) 2018-06-29

Similar Documents

Publication Publication Date Title
WO2018113263A1 (zh) Control method, system and device for robot, and robot
JP6927938B2 (ja) クラウドサービスシステムを組み込んだロボットシステム
CN109571468A (zh) 安防巡检机器人及安防巡检方法
US20050149227A1 (en) Architecture for robot intelligence
Spaan Cooperative active perception using POMDPs
EP2690582B1 (en) System for controlling an automated device
US20210208595A1 (en) User recognition-based stroller robot and method for controlling the same
US10490039B2 (en) Sensors for detecting and monitoring user interaction with a device or product and systems for analyzing sensor data
US11971709B2 (en) Learning device, control device, learning method, and recording medium
US20210339392A1 (en) Robot control system and robot control method
US20220250247A1 (en) Remote controlled device, remote control system and remote control device
Carreto et al. An eye-gaze tracking system for teleoperation of a mobile robot
US20180268280A1 (en) Information processing apparatus, information processing system, and non-transitory computer readable medium
CN111300429A (zh) Robot control system and method, and readable storage medium
KR102503757B1 (ko) Robot system including multiple robots each equipped with artificial intelligence
WO2019010612A1 (zh) Robot joint anti-collision protection system and method using sensor fusion technology
Alam et al. A smart approach for human rescue and environment monitoring autonomous robot
Shiarlis et al. Acquiring social interaction behaviours for telepresence robots via deep learning from demonstration
Felip et al. Multi-sensor and prediction fusion for contact detection and localization
Chatzithanos et al. Fessonia: a method for real-time estimation of human operator workload using behavioural entropy
Panagopoulos et al. A bayesian-based approach to human operator intent recognition in remote mobile robot navigation
Fan et al. Learning motion predictors for smart wheelchair using autoregressive sparse Gaussian process
US20220016761A1 (en) Robot control device, robot system, and robot control method
US20230141359A1 (en) Robot process
WO2015060182A1 (ja) Autonomous search system, operation terminal, mobile search device and search control method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17883601

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 15/10/2019)

122 Ep: pct application non-entry in european phase

Ref document number: 17883601

Country of ref document: EP

Kind code of ref document: A1