WO2023143408A1 - Robot object grasping method and apparatus, robot, program, and storage medium - Google Patents

Robot object grasping method and apparatus, robot, program, and storage medium

Info

Publication number
WO2023143408A1
WO2023143408A1 PCT/CN2023/073269 CN2023073269W WO2023143408A1 WO 2023143408 A1 WO2023143408 A1 WO 2023143408A1 CN 2023073269 W CN2023073269 W CN 2023073269W WO 2023143408 A1 WO2023143408 A1 WO 2023143408A1
Authority
WO
WIPO (PCT)
Prior art keywords
robot
grasping
target item
current
posture
Prior art date
Application number
PCT/CN2023/073269
Other languages
English (en)
French (fr)
Inventor
黄晓庆
张站朝
马世奎
Original Assignee
达闼机器人股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 达闼机器人股份有限公司 filed Critical 达闼机器人股份有限公司
Publication of WO2023143408A1

Links

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls

Definitions

  • The embodiments of the present application relate to the field of robotics, and in particular to a robot object grasping method and device, a robot, a program, and a storage medium.
  • At present, robots are mainly divided into industrial robots and service robots.
  • Industrial robots are mainly used in industrial manufacturing scenarios, such as automobile manufacturing and parts processing; their working environment is structured, usually a fixed space suited to robot-arm operation.
  • Service robots are mainly used in people's work and daily-life scenarios, where the working environment is relatively complex, such as hotels, restaurants, and office buildings, and their control accuracy and repeatability are lower than those of industrial robots. It is therefore difficult for a service robot to grasp items in complex scenes.
  • The commonly used robotic object grasping method is based on a dynamics model and the forward/inverse kinematics solution: the robot arm is modeled, and the joint angle of each arm joint is obtained from the Cartesian coordinates of the end effector at the object to be grasped, so that the arm can grasp the target object.
  • However, the calculation process of the whole method is complicated, it is not suitable for complex environments, and it depends on high-precision control of every part of the robot.
  • The purpose of the embodiments of the present application is to provide a robot object grasping method and device, a robot, a program, and a storage medium, which grasp objects dynamically using feedback from a visual sensor, so that a robot with low-precision, low-cost joints can grasp objects accurately in complex scenes.
  • To this end, an embodiment of the present application provides a robot object grasping method, including: acquiring the 3D position and posture of a target item through a visual sensor at a preset frequency; obtaining the optimal grasping point of the robot according to the 3D position and posture of the target item; and, when the robot has moved to the optimal grasping point and the target item is within the optimal grasping range, obtaining the current posture of the robot, and dynamically planning and executing the grasping operation based on the current posture of the robot, the current 3D position of the target item fed back by the visual sensor, and the current posture of the target item, wherein the optimal grasping range is the area that is centered on the optimal grasping point and can be grasped by the robot arm.
  • The embodiment of the present application also provides an object grasping device, including:
  • an information acquisition module, configured to acquire the 3D position and posture of the target item through a visual sensor at a preset frequency, and to obtain the optimal grasping point of the robot according to the 3D position and posture of the target item; and
  • a dynamic grasping module, configured to obtain the current posture of the robot when the robot has moved to the optimal grasping point and the target item is within the optimal grasping range, and to dynamically plan and execute the grasping operation based on the current posture of the robot, the current 3D position of the target item fed back by the visual sensor, and the current posture of the target item, wherein the optimal grasping range is the area that is centered on the optimal grasping point and can be grasped by the robot arm.
  • The embodiment of the present application also provides a robot, including: at least one processor; and a memory communicatively connected to the at least one processor, wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the robot object grasping method described above.
  • the embodiment of the present application also provides a computer program, which implements the above-mentioned method for grasping objects by a robot when the computer program is executed by a processor.
  • the embodiment of the present application also provides a computer-readable storage medium storing a computer program, and when the computer program is executed by a processor, the above-mentioned robot object grasping method is realized.
  • The robot object grasping method, device, robot, program, and storage medium provided in the embodiments of the present application acquire the 3D position and posture of the target item through a visual sensor at a preset frequency, so that whether the target item moves can be monitored and its latest position and posture obtained; the optimal grasping point of the robot is obtained according to the 3D position and posture of the target item; and when the robot has moved to the optimal grasping point and the target item is within the optimal grasping range, the current posture of the robot is obtained, and the grasping operation is dynamically planned and executed based on the current posture of the robot, the current 3D position of the target item fed back by the visual sensor, and the current posture of the target item.
  • On the one hand, the target item is monitored through the visual sensor, so that the grasping operation is performed only when the target item is within the optimal grasping range, which ensures accurate grasping of the target item. On the other hand, based on the data fed back by the visual sensor about the target item and other items, the grasping operation can be adjusted in real time, three-dimensional obstacle avoidance can be performed in space, and the grasping of the target item can be dynamically planned, which reduces the dependence on high control accuracy of the robot arm and is better suited to object grasping in complex scenes.
  • FIG. 1 is a first flowchart of a robot object grasping method according to an embodiment of the present application;
  • FIG. 2 is a second flowchart of a robot object grasping method according to an embodiment of the present application;
  • FIG. 3 is a schematic structural diagram of a robot object grasping device provided in an embodiment of the present application;
  • FIG. 4 is a schematic structural diagram of a robot provided in an embodiment of the present application;
  • FIG. 5 is a schematic structural diagram of a cloud server provided according to an embodiment of the present application.
  • the embodiment of the present application relates to a method for grasping objects by a robot.
  • the method for grasping objects by a robot includes the following steps: Step 101: Obtain the 3D position and posture of the target item through the visual sensor at a preset frequency.
  • the visual sensor can be set on the robot body, or can be set at any position in the environment where the target item is located.
  • The visual sensor includes any one or more of devices such as a lidar, an inertial measurement unit (IMU), a depth camera, an RGB camera, and a sonar sensor.
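  • As an illustration of the fixed-frequency acquisition described above, the following minimal Python sketch polls a visual sensor at a preset frequency and keeps the latest 3D position and posture of the target item; the sensor interface (read_rgbd, estimate_pose) and the 10 Hz rate are assumptions standing in for whichever device and perception algorithm are actually used.

```python
import time

PRESET_FREQUENCY_HZ = 10.0          # assumed acquisition frequency

def track_target(sensor, perception, stop_event):
    """Poll the visual sensor at a preset frequency and keep the
    latest 3D position and posture (pose) of the target item."""
    latest_pose = None
    period = 1.0 / PRESET_FREQUENCY_HZ
    while not stop_event.is_set():
        t0 = time.monotonic()
        frame = sensor.read_rgbd()               # hypothetical sensor call
        pose = perception.estimate_pose(frame)   # hypothetical recognition call
        if pose is not None:
            latest_pose = pose                   # (x, y, z, roll, pitch, yaw)
        # sleep out the remainder of the period so the loop runs at ~10 Hz
        time.sleep(max(0.0, period - (time.monotonic() - t0)))
    return latest_pose
```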
  • step 101 includes: acquiring an image of a target item through a visual sensor, and the image of the target item is a 2D image or a 3D image; detecting the image of the target item through a visual perception recognition algorithm, and acquiring the 3D position and posture of the target item.
  • For example, an RGB image of the target item can be acquired, feature extraction and classification can be performed on the RGB image, and the result can be matched against a preset original image of the target item to recognize the target object; on this basis, the position and posture of the target item are further determined.
  • For another example, a 3D image of the target item can be acquired, and RGB-D data of the target item can be obtained from the 3D image, where D represents the depth value, that is, the distance between the target item and the imaging device.
  • From the RGB-D data, point cloud data of the target item can be obtained, and the position and posture of the target item can be obtained by performing 3D volume recognition directly on the point cloud data.
  • It is worth mentioning that determining the posture of the target item helps the robot determine its own posture and grasping action when grasping. For example, when grasping a can placed on a table, if it is recognized that there is an occluding item right next to the left side of the can, it is determined that it is easier for the robot to grasp the can from another direction.
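  • A minimal sketch of the RGB-D branch described above, assuming a pinhole camera model with known intrinsics: depth pixels belonging to the detected object are back-projected into 3D camera-frame points, and their centroid serves as a rough 3D position of the target item (a full implementation would also estimate orientation, for example by fitting the point cloud to a model). The function and variable names are illustrative.

```python
import numpy as np

def backproject(depth, mask, fx, fy, cx, cy):
    """Convert masked depth pixels (metres) into 3D camera-frame points."""
    v, u = np.nonzero(mask)                 # pixel rows/cols belonging to the object
    z = depth[v, u]
    valid = z > 0
    u, v, z = u[valid], v[valid], z[valid]
    x = (u - cx) * z / fx                   # pinhole back-projection
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)      # N x 3 point cloud

def rough_object_position(depth, mask, intrinsics):
    """Centroid of the object's point cloud as a rough 3D position."""
    pts = backproject(depth, mask, *intrinsics)   # intrinsics = (fx, fy, cx, cy)
    return pts.mean(axis=0) if len(pts) else None
```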
  • Step 102 Obtain the optimal grasping point of the robot according to the 3D position and posture of the target item.
  • the optimal grasping point means that after the robot moves from the current position to this position, the robot arm can easily grasp the target object without the robot moving.
  • the optimal grasping point can be determined according to the 3D position and posture of the target object, combined with the parameters of the robot arm.
  • the parameters of the robot arm can include the number of joints, the length of joints, the angle of joint movement, etc.
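  • The text above only states that the optimal grasping point is derived from the target pose combined with the arm parameters; one simple way to realise that, sketched below under the assumption of a planar mobile base and a fixed standoff within the arm's reach, is to place the robot a reach-dependent distance from the item along the chosen approach direction. The names and the standoff heuristic are illustrative assumptions, not the patent's prescribed formula.

```python
import math

def optimal_grasp_point(target_xy, approach_yaw, arm_reach, margin=0.10):
    """Place the robot base so the target sits comfortably inside the
    arm's reach: stand `arm_reach - margin` away from the item,
    opposite to the approach direction, facing the item."""
    standoff = max(0.0, arm_reach - margin)
    bx = target_xy[0] - standoff * math.cos(approach_yaw)
    by = target_xy[1] - standoff * math.sin(approach_yaw)
    return (bx, by, approach_yaw)   # base x, y and grasping posture (heading)

# e.g. a can at (2.0, 1.5), approached from the west, with a 0.6 m arm:
# optimal_grasp_point((2.0, 1.5), 0.0, 0.6) -> (1.5, 1.5, 0.0)
```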
  • It should be noted that once the posture of the target item is obtained, the grasping posture of the robot can be determined. The grasping posture of the robot indicates with what posture the robot faces the target item at the optimal grasping point. For example, the robot faces the target item head-on at the optimal grasping point (that is, with the optimal grasping point as the origin and the target item due north, the robot is face to face with the target item), or the robot faces the target item from a direction offset 30 degrees to the left at the optimal grasping point (that is, with the optimal grasping point as the origin and the target item due north, the grasping posture of the robot is 30 degrees west of north). In other words, when the robot reaches the optimal grasping point, its posture has already been adjusted to a grasping posture suitable for grasping.
  • In addition, during the calculation of the optimal grasping point, the visual sensor still acquires the 3D position and posture of the target item in real time or at the preset frequency to monitor whether the target item moves, so as to ensure that the position of the optimal grasping point is up to date and adapted to the target item.
  • Step 103: When the robot has moved to the optimal grasping point and the target item is within the optimal grasping range, obtain the current posture of the robot, and dynamically plan and execute the grasping operation based on the current posture of the robot, the current 3D position of the target item fed back by the visual sensor, and the current posture of the target item, wherein the optimal grasping range is the area that is centered on the optimal grasping point and can be grasped by the robot arm.
  • It should be noted that once the posture of the target item is obtained, the grasping posture of the robot can be determined, so the posture of the robot when it reaches the optimal grasping point is, in theory, the already-determined grasping posture. However, when the present application is applied to a robot with low control precision, such as a service robot, there is a certain deviation between the actual posture of the robot at the optimal grasping point and the determined grasping posture, which may ultimately affect whether the target item can be grasped accurately. Therefore, it is necessary to obtain the current posture of the robot, adjust the grasping operation in real time based on the current posture of the robot, the current 3D position of the target item, and the current posture of the target item, and grasp the target item dynamically.
  • In other words, from the moment the robot reaches the optimal grasping point and starts the grasping operation until the target item is finally grasped, the grasping path and grasping action are not determined once and for all. Instead, the grasping operation is continuously adjusted according to the robot posture and the position and posture of the target item fed back by the visual sensor, and the adjustment of the grasping path and grasping action also includes analyzing the three-dimensional space in which the target item is located and avoiding obstacles in it.
  • Such repeated, short-range adjustments reduce the dependence on high-precision control of each part of the robot, imitate the grasping process of a human hand, and enable precise grasping of complex objects in complex environments.
  • In addition, in this embodiment, the current posture of the robot may refer either to the grasping posture of the robot, or to both the grasping posture of the robot and the posture of the robot arm.
  • In an embodiment, after step 102, the method further includes: while the robot is moving to the optimal grasping point, when it is determined through the visual sensor that the target item has left the optimal grasping range but is still within a preset effective grasping range, re-acquiring a new 3D position and new posture of the target item and recalculating the optimal grasping point based on the new 3D position and new posture; and, while the robot is moving to the optimal grasping point, when it is determined through the visual sensor that the target item has left the preset effective grasping range, feeding back a reminder message to the user.
  • In this embodiment, while the robot is moving from its current position to the optimal grasping point, the visual sensor acquires the 3D position of the target item at the preset frequency to determine whether the target item is still within the optimal grasping range. If it is within the optimal grasping range, the grasping operation is performed. If it is not within the optimal grasping range but is within the effective grasping range, the optimal grasping point is re-determined; if it is not within the effective grasping range, a reminder message is fed back to the user. It should be noted that the effective grasping range is preset; once the target item leaves the effective grasping range, it can no longer be detected even by turning or adjusting the visual sensor, that is, the target item cannot be grasped.
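  • The decision logic in the two preceding paragraphs can be summarised in a small dispatcher; the radius values, helper names, and planar distance check below are illustrative assumptions rather than the patent's exact criteria.

```python
from enum import Enum

class Action(Enum):
    GRASP = 1          # target still in the optimal grasping range
    REPLAN_POINT = 2   # left the optimal range but still in the effective range
    REMIND_USER = 3    # left the effective grasping range entirely

def check_target(target_pos, grasp_point, optimal_radius, effective_radius):
    """Decide what to do while moving toward the optimal grasping point."""
    dx = target_pos[0] - grasp_point[0]
    dy = target_pos[1] - grasp_point[1]
    dist = (dx * dx + dy * dy) ** 0.5
    if dist <= optimal_radius:
        return Action.GRASP
    if dist <= effective_radius:
        return Action.REPLAN_POINT   # recompute the optimal grasping point
    return Action.REMIND_USER        # feed a reminder message back to the user
```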
  • The robot object grasping method provided in the embodiment of the present application acquires the 3D position and posture of the target item through the visual sensor at a preset frequency, so it can monitor whether the target item moves and obtain its latest position and posture, and it obtains the optimal grasping point of the robot according to the 3D position and posture of the target item.
  • When the robot has moved to the optimal grasping point and the target item is within the optimal grasping range,
  • the current posture of the robot is obtained, and the grasping operation is dynamically planned and executed based on the current posture of the robot,
  • the current 3D position of the target item fed back by the visual sensor, and the current posture of the target item.
  • On the one hand, the target item is monitored through the visual sensor, so that the grasping operation is performed only when the target item is within the optimal grasping range, which ensures accurate grasping of the target item. On the other hand, based on the data fed back by the visual sensor about the target item and other items, the grasping operation can be adjusted in real time, three-dimensional obstacle avoidance can be performed in space, and the grasping of the target item can be dynamically planned, which reduces the dependence on high control accuracy of the robot arm and is better suited to object grasping in complex scenes.
  • the embodiment of the present application relates to a method for grasping objects by a robot.
  • the method for grasping objects by a robot includes the following steps: Step 201: Obtain the 3D position and posture of the target item through the visual sensor at a preset frequency.
  • Specifically, the implementation details of step 201 in this embodiment are basically the same as those of step 101 and will not be repeated here.
  • Step 202 Obtain the optimal grasping point of the robot according to the 3D position and posture of the target item.
  • Step 203: Based on the optimal grasping point, obtain a navigation path of the robot from its current position to the optimal grasping point, wherein the navigation path includes the moving path of the robot from the current position to the optimal grasping point, and the moving speed and steering curvature of the robot along the moving path.
  • In some embodiments, step 203 specifically includes: receiving a moving path calculated and delivered by a cloud server connected to the robot according to a preset three-dimensional map, wherein the three-dimensional map includes the three-dimensional coordinates of each item in the environment where the target item is located;
  • obtaining the moving speed and steering curvature of the robot along the moving path according to the moving path and the three-dimensional coordinates of each item in the preset three-dimensional map; and combining the moving path with the moving speed and steering curvature of the robot along the initial path to obtain the navigation path.
  • In this embodiment, the cloud server connected to the robot also provides corresponding support while the robot grasps objects.
  • After the robot acquires the 2D or 3D image of the target item through the visual sensor, it synchronously transmits the image to the cloud server; the cloud server can likewise recognize and obtain the 3D position and posture of the target item and synchronize them into the digital twin environment in the cloud server, and when the 3D position and posture of the target item change, the digital twin environment is updated synchronously.
  • Of course, the cloud server also contains a robot digital twin: after obtaining the relevant data of the robot body (such as chassis parameters and robot arm parameters), the robot digital twin can simulate the motion state of the real robot based on the data synchronized by the robot. In addition, the calculation of the optimal grasping point can be performed by the robot body or by the cloud server.
  • After the optimal grasping point is determined, the cloud server performs road network planning according to the preset three-dimensional map to obtain the moving path. It should be noted that there may be one moving path or several. When there are several moving paths, one of them can be selected and delivered to the robot, or several can be delivered to the robot, which then presents them to the user for selection.
  • In addition, the three-dimensional map includes the three-dimensional coordinates of each item in the environment where the target item is located, and the robot obtains its moving speed and steering curvature along the moving path according to the three-dimensional coordinates of each item in the preset three-dimensional map.
  • For example, if the moving path is A—B—C—D, the robot needs to calculate its moving speed in each segment of the path, the steering curvature at each turn, and the corresponding steering speed. In this way, the robot can be prevented from colliding with other items on the moving path while it moves.
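  • As a sketch of the per-segment computation in the A—B—C—D example, the following assumes a piecewise-linear moving path and derives, for each segment, the heading, an approximate steering curvature for the turn that ends it, and a speed capped by how sharp that turn is; the speed limits and the curvature approximation are illustrative, not values taken from the patent.

```python
import math

def segment_plan(waypoints, v_max=0.8, v_turn=0.3):
    """For a piecewise-linear path, compute per-segment speed and the
    turn curvature at each intermediate waypoint."""
    plan = []
    for i in range(len(waypoints) - 1):
        (x0, y0), (x1, y1) = waypoints[i], waypoints[i + 1]
        length = math.hypot(x1 - x0, y1 - y0)
        heading = math.atan2(y1 - y0, x1 - x0)
        if i + 2 < len(waypoints):                       # a turn follows this segment
            x2, y2 = waypoints[i + 2]
            next_heading = math.atan2(y2 - y1, x2 - x1)
            dtheta = math.atan2(math.sin(next_heading - heading),
                                math.cos(next_heading - heading))
            curvature = abs(dtheta) / max(length, 1e-6)  # rough 1 / turn-radius
            speed = v_turn if abs(dtheta) > math.pi / 6 else v_max
        else:
            curvature, speed = 0.0, v_max                # final straight segment
        plan.append({"from": (x0, y0), "to": (x1, y1),
                     "speed": speed, "curvature": curvature})
    return plan

# e.g. segment_plan([(0, 0), (2, 0), (2, 2), (4, 2)]) gives slower speeds and
# non-zero curvature on the two segments that end in 90-degree turns.
```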
  • Step 204 When the robot moves to the optimal grasping point according to the navigation path, and the target item is within the optimal grasping range, obtain the current pose of the robot.
  • Step 205: Receive the initial grasping path and initial grasping action delivered by the cloud server.
  • In some embodiments, after step 204 and before step 205, the method further includes: acquiring the three-dimensional environment information near the target item fed back by the visual sensor;
  • and synchronously transmitting the current posture of the robot, the current 3D position of the target item, the current posture of the target item, and the three-dimensional environment information near the target item fed back by the visual sensor to the cloud server connected to the robot, so that the robot digital twin in the cloud server performs three-dimensional obstacle-avoidance planning according to the current posture of the robot, the current 3D position of the target item, the current posture of the target item, and the three-dimensional environment information near the target item, and obtains the initial grasping path and initial grasping action.
  • It should be noted that the initial grasping path is a three-dimensional path,
  • and the initial grasping action is also a three-dimensional action.
  • When planning the initial grasping path and initial grasping action, the cloud server also analyzes the three-dimensional environment information near the target item and plans a reasonable grasping path that avoids spatial obstacles.
  • The three-dimensional environment information near the target item includes current position information and/or current posture information of items other than the target item within the optimal grasping range.
  • In another embodiment, after the cloud server obtains the initial grasping path and initial grasping action, and before the robot dynamically plans and executes the grasping operation based on the current posture of the robot, the current 3D position of the target item fed back by the visual sensor, and the current posture of the target item, the method further includes: when the current position information and/or current posture information of the target item fed back by the visual sensor changes, synchronously transmitting the current position information and/or current posture information of the target item to the cloud server, so that the robot digital twin in the cloud server re-plans the three-dimensional obstacle avoidance and updates the initial grasping path and initial grasping action; and/or, when the three-dimensional environment information near the target item fed back by the visual sensor changes, synchronously transmitting the three-dimensional environment information near the target item to the cloud server, so that the robot digital twin in the cloud server re-plans the three-dimensional obstacle avoidance and updates the initial grasping path and initial grasping action.
  • In this embodiment, between the time when the cloud server obtains the initial grasping path and grasping action and the time when the robot starts to perform the grasping operation, if the data of the target item and/or other items (that is, their position information and/or posture information) change, the initial grasping path and initial grasping action need to be updated according to the latest data. For example, if the robot has received the initial grasping path and initial grasping action from the cloud server but has not yet started grasping, and the visual sensor detects that a new item has appeared in front of the target item and blocks part of it, the robot synchronizes the latest three-dimensional environment information near the target item to the cloud server; after receiving this information, the cloud server re-plans the three-dimensional obstacle avoidance, updates the initial grasping path and initial grasping action, and delivers them to the robot again.
  • For another example, if, while the cloud server is still calculating the initial grasping path and initial grasping action, the visual sensor detects that the target item has moved and that a new item has been placed on its left side, the robot synchronizes the latest position information of the target item and the latest three-dimensional environment information near the target item to the cloud server; after receiving this information, the cloud server re-plans the three-dimensional obstacle avoidance, obtains the latest initial grasping path and initial grasping action, and delivers them to the robot.
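  • A minimal sketch of the resynchronisation behaviour described in these examples, with a hypothetical cloud-client interface (push_scene, request_grasp_plan): whenever the fed-back target pose or the nearby environment changes beyond a threshold, the latest data are pushed to the cloud server and an updated initial grasping path and action are fetched. Thresholds and names are assumptions.

```python
import numpy as np

class GraspPlanCache:
    """Keep the current initial grasping plan in sync with the cloud server."""

    def __init__(self, cloud, pose_tol=0.02):
        self.cloud = cloud                 # hypothetical cloud-server client
        self.pose_tol = pose_tol           # metres; resync threshold for the target
        self.last_target_pose = None
        self.last_env_key = None
        self.plan = None

    def update(self, target_pose, nearby_env):
        env_key = repr(sorted(nearby_env))  # crude change detector for nearby items
        pose_changed = (
            self.last_target_pose is None
            or np.linalg.norm(np.asarray(target_pose[:3]) -
                              np.asarray(self.last_target_pose[:3])) > self.pose_tol)
        if pose_changed or env_key != self.last_env_key:
            # push the latest scene to the digital twin and re-plan in the cloud
            self.cloud.push_scene(target_pose, nearby_env)   # hypothetical call
            self.plan = self.cloud.request_grasp_plan()      # updated path + action
            self.last_target_pose, self.last_env_key = target_pose, env_key
        return self.plan
```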
  • In this embodiment, the robot can synchronize its own relevant data to the robot digital twin in the cloud server and synchronize the relevant data of the target item to the digital twin environment in the cloud server.
  • The robot digital twin simulates the grasping process according to the current posture of the robot, the current 3D position of the target item, and the current posture of the target item, obtains multiple candidate grasping paths and grasping actions, and selects the one with the best grasping effect as the initial grasping path and initial grasping action.
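  • The selection step described above can be sketched as simulating each candidate grasp in the twin and keeping the highest-scoring one; the simulator and scoring interface below are assumptions, not the patent's actual digital-twin implementation.

```python
def choose_initial_grasp(twin, robot_pose, target_pose, candidates):
    """Simulate every candidate (path, action) in the digital twin and
    return the one with the best predicted grasping effect."""
    best, best_score = None, float("-inf")
    for path, action in candidates:
        result = twin.simulate(robot_pose, target_pose, path, action)  # hypothetical
        if result.success and result.score > best_score:
            best, best_score = (path, action), result.score
    return best  # used as the initial grasping path and initial grasping action
```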
  • Step 206: While approaching the target item according to the initial grasping path and initial grasping action, continuously adjust the initial grasping path and initial grasping action according to the distance between the robot arm and the target item, the current 3D position of the target item, the current posture of the target item, and the three-dimensional environment information near the target item, until the target item is grasped.
  • In this embodiment, while the robot approaches the target item according to the initial grasping path and initial grasping action, its visual sensor keeps running; whether the target item and other items near the initial grasping path have moved is determined from the fed-back data, the current grasping state of the robot is analyzed on the basis of these data, and the initial grasping path and initial grasping action are continuously adjusted according to the analysis results, so that the grasping path and grasping action are dynamically planned until the target item is grasped.
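  • The continuous adjustment during the final approach can be pictured as a sensing-and-replanning loop like the sketch below; the controller, perception, and planner objects are hypothetical placeholders for the robot's own modules, and the 2 cm grasp distance is an assumed threshold.

```python
def approach_and_grasp(arm, perception, planner, plan, grasp_dist=0.02):
    """Approach the target while replanning from visual feedback until grasped."""
    path, action = plan
    while True:
        obs = perception.observe()                    # target pose + nearby items
        dist = arm.distance_to(obs.target_position)   # arm-to-target distance
        if dist <= grasp_dist:
            arm.execute(action)                       # close the gripper / grasp
            return True
        if obs.scene_changed:                         # target or obstacles moved
            path, action = planner.replan(obs, arm.state())
        arm.step_along(path)                          # advance one short increment
```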
  • In other words, the entire process of the robot object grasping method of the present application imitates the process in which human eyes and hands cooperate to grasp an object, and dynamic grasping is realized based on a double closed-loop control mechanism consisting of closed-loop control of the robot body and collaborative closed-loop control between the robot and the cloud server.
  • The method improves the robustness of the robot's grasping control and the stability of its motion behavior.
  • The closed-loop control of the robot body means that the visual sensor of the robot body perceives the target item and transmits the perception results to the path planning module, which adjusts the moving speed, steering curvature, and grasping path; the path planning module delivers the planning results to the chassis and limb motion control module, which carries out the specific operations; and the visual sensor then perceives the current grasping state of the robot after those operations are performed, thus closing the control loop.
  • The collaborative closed-loop control between the robot and the cloud server is similar to the above.
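  • One way to picture the body-level closed loop described above is a single control cycle that wires the modules together: perceive, plan, actuate, then perceive the resulting state again on the next cycle. The module interfaces are illustrative assumptions; the cloud loop would wrap the same cycle with twin-based replanning.

```python
def body_control_cycle(vision, path_planner, chassis, limbs):
    """One iteration of the robot-body closed loop: sense -> plan -> act."""
    perception = vision.sense()               # target item + current grasp state
    plan = path_planner.update(perception)    # moving speed, curvature, grasp path
    chassis.apply(plan.base_command)          # move the base
    limbs.apply(plan.arm_command)             # move the arm / hand
    # the next call to vision.sense() observes the effect, closing the loop

def run_closed_loop(vision, path_planner, chassis, limbs, done):
    while not done():
        body_control_cycle(vision, path_planner, chassis, limbs)
```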
  • The robot object grasping method provided in the embodiment of the present application acquires the 3D position and posture of the target item through the visual sensor at a preset frequency, so it can monitor whether the target item moves and obtain its latest position and posture, and it obtains the optimal grasping point of the robot according to the 3D position and posture of the target item.
  • When the robot has moved to the optimal grasping point and the target item is within the optimal grasping range,
  • the current posture of the robot is obtained, and the grasping operation is dynamically planned and executed based on the current posture of the robot,
  • the current 3D position of the target item fed back by the visual sensor, and the current posture of the target item.
  • On the one hand, the target item is monitored through the visual sensor, so that the grasping operation is performed only when the target item is within the optimal grasping range, which ensures accurate grasping of the target item; on the other hand, the grasping operation can be adjusted in real time according to the feedback of the visual sensor to grasp dynamically, which reduces the dependence on high control accuracy of the robot arm and is better suited to object grasping in complex scenes.
  • In addition, the present application realizes dynamic grasping based on the double closed-loop control mechanism of the robot body and the cloud server, which improves the robustness of the robot's grasping control and the stability of its motion behavior.
  • The embodiment of the present application relates to an object grasping device, as shown in FIG. 3, including:
  • an information acquisition module 301, configured to acquire the 3D position and posture of the target item through a visual sensor at a preset frequency, and to obtain the optimal grasping point of the robot according to the 3D position and posture of the target item; and
  • a dynamic grasping module 302, configured to obtain the current posture of the robot when the robot has moved to the optimal grasping point and the target item is within the optimal grasping range, and to dynamically plan and execute the grasping operation based on the current posture of the robot, the current 3D position of the target item fed back by the visual sensor, and the current posture of the target item, wherein the optimal grasping range is the area that is centered on the optimal grasping point and can be grasped by the robot arm.
  • It is worth mentioning that the modules involved in this embodiment are logical modules; a logical unit may be a physical unit, a part of a physical unit, or a combination of multiple physical units.
  • In addition, in order to highlight the innovative part of the present application, units that are not closely related to solving the technical problem proposed in the present application are not introduced in this embodiment, but this does not mean that no other units exist in this embodiment.
  • this embodiment is a device embodiment corresponding to the embodiment of the method for grasping objects by a robot, and this embodiment can be implemented in cooperation with the above-mentioned embodiments.
  • the relevant technical details mentioned in the foregoing embodiments are still valid in this embodiment, and will not be repeated here in order to reduce repetition.
  • the relevant technical details mentioned in this embodiment can also be applied to the above method embodiments.
  • The embodiments of the present application relate to a robot, as shown in FIG. 4, including at least one processor 402 and a memory 401 communicatively connected to the at least one processor 402, wherein the memory 401 stores instructions executable by the at least one processor 402, and the instructions are executed by the at least one processor 402 so that the at least one processor 402 can execute the robot object grasping method described in any one of the above embodiments.
  • the robot may also include other devices such as vision sensors and positioning devices.
  • It should be noted that the robot object grasping method of the present application is implemented by the robot in cooperation with a cloud server. Specifically, as shown in FIG. 5, the cloud server includes a road network planning module 501, a robot digital twin module 502, a digital twin environment module 503, a robot simulation grasping module 504, and a robot task planning module 505.
  • The road network planning module 501 is configured to obtain the navigation path along which the robot moves from its current position to the optimal grasping point;
  • the robot digital twin module 502 is configured to simulate the real robot;
  • the digital twin environment module 503 is configured to simulate the working environment of the real robot;
  • the robot task planning module 505 is configured to train the robot digital twin in the digital twin environment and obtain all the tasks required to complete the grasping of the target item; and
  • the robot simulation grasping module 504 is configured to simulate the robot's grasping operation.
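  • The five cloud-side modules can be summarised as a simple composition like the sketch below; the class and method names are invented for illustration and only mirror the roles listed above, not an API disclosed by the patent.

```python
from dataclasses import dataclass

@dataclass
class CloudServer:
    road_network_planner: object   # module 501: navigation path to the grasping point
    robot_twin: object             # module 502: simulates the real robot
    twin_environment: object       # module 503: simulates the robot's workspace
    simulation_grasper: object     # module 504: simulates grasping operations
    task_planner: object           # module 505: trains the twin and plans grasp tasks

    def plan_grasp(self, robot_state, target_pose, environment):
        """Sync the twins with the latest data and return an initial grasp plan."""
        self.twin_environment.sync(target_pose, environment)
        self.robot_twin.sync(robot_state)
        tasks = self.task_planner.plan(self.robot_twin, self.twin_environment)
        return self.simulation_grasper.best_plan(tasks)   # initial path and action
```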
  • In this embodiment, dynamic grasping of the target item is realized based on the double closed-loop control mechanism of closed-loop control of the robot body and collaborative closed-loop control between the robot and the cloud server, which improves the robustness of the robot's grasping control and the stability of its motion behavior.
  • The closed-loop control of the robot body means that the visual sensor module of the robot body perceives the target item and transmits the perception results to the path planning module in the robot, which adjusts the moving speed, steering curvature, and grasping path; the planning results are delivered to the chassis and limb motion control module in the robot, which carries out the specific operations; and the visual sensor then perceives the current grasping state of the robot after those operations are performed, thus closing the control loop.
  • The collaborative closed-loop control between the robot and the cloud server is similar to the above.
  • the memory 401 and the processor 402 are connected by a bus, and the bus may include any number of interconnected buses and bridges, and the bus connects one or more processors 402 and various circuits of the memory 401 together.
  • the bus may also connect together various other circuits such as peripherals, voltage regulators, and power management circuits, all of which are well known in the art and therefore will not be further described herein.
  • the bus interface provides an interface between the bus and the transceivers.
  • a transceiver may be a single element or multiple elements, such as multiple receivers and transmitters, providing means for communicating with various other devices over a transmission medium.
  • the data processed by the processor 402 is transmitted on the wireless medium through the antenna, and further, the antenna also receives the data and transmits the data to the processor 402 .
  • Processor 402 is responsible for managing the bus and general processing, and may also provide various functions including timing, peripheral interfacing, voltage regulation, power management, and other control functions. And the memory 401 may be used to store data used by the processor 402 when performing operations.
  • Embodiments of the present application relate to a computer program.
  • the computer program is executed by a processor, the robot object grasping method described in any one of the above method embodiments is implemented.
  • Embodiments of the present application relate to a computer-readable storage medium storing a computer program.
  • the computer program is executed by the processor, the robot object grasping method described in any of the above embodiments is realized.
  • That is, those skilled in the art can understand that all or part of the steps of the methods in the above embodiments can be completed by instructing the relevant hardware through a program; the program is stored in a storage medium and includes several instructions for causing a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods described in the various embodiments of the present application.
  • The aforementioned storage media include: a USB flash drive, a portable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, and other media that can store program code.

Abstract

A robot object grasping method and apparatus, a robot, a program, and a storage medium. The robot object grasping method includes: acquiring the 3D position and posture of a target item through a visual sensor at a preset frequency; obtaining the optimal grasping point of the robot according to the 3D position and posture of the target item; and, when the robot has moved to the optimal grasping point and the target item is within the optimal grasping range, obtaining the current posture of the robot, and dynamically planning and executing the grasping operation based on the current posture of the robot, the current 3D position of the target item fed back by the visual sensor, and the current posture of the target item, wherein the optimal grasping range is the area that is centered on the optimal grasping point and can be grasped by the robot arm.

Description

Robot object grasping method and apparatus, robot, program, and storage medium
This application is filed on the basis of, and claims priority to, the Chinese patent application with application number 202210103212.7 filed on January 27, 2022, the entire contents of which are incorporated herein by reference.
Technical Field
The embodiments of the present application relate to the field of robotics, and in particular to a robot object grasping method and apparatus, a robot, a program, and a storage medium.
Background
At present, robots in the field of robotics are mainly divided into industrial robots and service robots. Industrial robots are mainly used in industrial manufacturing scenarios such as automobile manufacturing and parts processing; their working environment is structured, usually a fixed space suited to robot-arm operation. Service robots are mainly used in people's work and daily-life scenarios, where the working environment is relatively complex, such as hotels, restaurants, and office buildings, and their control accuracy and repeatability are lower than those of industrial robots. It is therefore difficult for a service robot to grasp items in complex scenes.
The commonly used robot object grasping method is based on a dynamics model and the forward/inverse kinematics solution: the robot arm is modeled, and the joint angle of each arm joint is obtained from the Cartesian coordinates of the end effector at the object to be grasped, so that the arm can grasp the target object. However, the calculation process of the whole method is complicated, it is not suitable for complex environments, and it depends on high-precision control of every part of the robot.
Technical Solution
The purpose of the embodiments of the present application is to provide a robot object grasping method and apparatus, a robot, a program, and a storage medium, which grasp objects dynamically using feedback from a visual sensor, so that a robot with low-precision, low-cost joints can grasp objects accurately in complex scenes.
To solve the above technical problem, an embodiment of the present application provides a robot object grasping method, including: acquiring the 3D position and posture of a target item through a visual sensor at a preset frequency; obtaining the optimal grasping point of the robot according to the 3D position and posture of the target item; and, when the robot has moved to the optimal grasping point and the target item is within the optimal grasping range, obtaining the current posture of the robot, and dynamically planning and executing the grasping operation based on the current posture of the robot, the current 3D position of the target item fed back by the visual sensor, and the current posture of the target item, wherein the optimal grasping range is the area that is centered on the optimal grasping point and can be grasped by the robot arm.
To solve the above technical problem, an embodiment of the present application also provides an object grasping apparatus, including:
an information acquisition module, configured to acquire the 3D position and posture of the target item through a visual sensor at a preset frequency, and to obtain the optimal grasping point of the robot according to the 3D position and posture of the target item; and
a dynamic grasping module, configured to obtain the current posture of the robot when the robot has moved to the optimal grasping point and the target item is within the optimal grasping range, and to dynamically plan and execute the grasping operation based on the current posture of the robot, the current 3D position of the target item fed back by the visual sensor, and the current posture of the target item, wherein the optimal grasping range is the area that is centered on the optimal grasping point and can be grasped by the robot arm.
To solve the above technical problem, an embodiment of the present application also provides a robot, including:
at least one processor; and
a memory communicatively connected to the at least one processor, wherein
the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the robot object grasping method described above.
An embodiment of the present application also provides a computer program which, when executed by a processor, implements the robot object grasping method described above.
An embodiment of the present application also provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the robot object grasping method described above.
The robot object grasping method and apparatus, robot, program, and storage medium provided in the embodiments of the present application acquire the 3D position and posture of the target item through a visual sensor at a preset frequency, so that whether the target item moves can be monitored and its latest position and posture obtained; the optimal grasping point of the robot is obtained according to the 3D position and posture of the target item; and when the robot has moved to the optimal grasping point and the target item is within the optimal grasping range, the current posture of the robot is obtained, and the grasping operation is dynamically planned and executed based on the current posture of the robot, the current 3D position of the target item fed back by the visual sensor, and the current posture of the target item. On the one hand, the target item is monitored through the visual sensor, so that the grasping operation is performed only when the target item is within the optimal grasping range, which ensures accurate grasping of the target item. On the other hand, based on the data fed back by the visual sensor about the target item and other items, the grasping operation can be adjusted in real time, three-dimensional obstacle avoidance can be performed in space, and the grasping of the target item can be dynamically planned, which reduces the dependence on high control accuracy of the robot arm and is better suited to object grasping in complex scenes.
Brief Description of the Drawings
One or more embodiments are exemplified by the figures in the corresponding drawings. These exemplary illustrations do not limit the embodiments. Elements with the same reference numerals in the drawings represent similar elements, and unless otherwise stated, the figures in the drawings are not drawn to scale.
FIG. 1 is a first flowchart of a robot object grasping method provided according to an embodiment of the present application;
FIG. 2 is a second flowchart of a robot object grasping method provided according to an embodiment of the present application;
FIG. 3 is a schematic structural diagram of a robot object grasping apparatus provided in an embodiment of the present application;
FIG. 4 is a schematic structural diagram of a robot provided in an embodiment of the present application;
FIG. 5 is a schematic structural diagram of a cloud server provided according to an embodiment of the present application.
Embodiments of the Present Invention
To make the purpose, technical solutions, and advantages of the embodiments of the present application clearer, the embodiments of the present application are described in detail below with reference to the drawings. However, those of ordinary skill in the art can understand that many technical details are given in the embodiments of the present application so that the reader can better understand the present application; even without these technical details and the various changes and modifications based on the following embodiments, the technical solutions claimed in the present application can still be realized.
The embodiment of the present application relates to a robot object grasping method. As shown in FIG. 1, the robot object grasping method includes the following steps:
Step 101: Acquire the 3D position and posture of the target item through a visual sensor at a preset frequency.
In this embodiment, the visual sensor may be mounted on the robot body, or it may be placed at any position in the environment where the target item is located. The visual sensor includes any one or more of devices such as a lidar, an inertial measurement unit (IMU), a depth camera, an RGB camera, and a sonar sensor.
In some embodiments, step 101 includes: acquiring an image of the target item through the visual sensor, the image being a 2D image or a 3D image; and detecting the image of the target item through a visual perception recognition algorithm to acquire the 3D position and posture of the target item.
In this embodiment, for example, an RGB image of the target item can be acquired, feature extraction and classification can be performed on the RGB image, and the result can be matched against a preset original image of the target item to recognize the target object; on this basis, the position and posture of the target item are further determined. For another example, a 3D image of the target item can be acquired, and RGB-D data of the target item can be obtained from the 3D image, where D represents the depth value, that is, the distance between the target item and the imaging device. Point cloud data of the target item can be obtained from the RGB-D data, and the position and posture of the target item can be obtained by performing 3D volume recognition directly on the point cloud data. In addition, after the RGB-D data are obtained, a trained deep neural network can also be used directly to obtain the 3D position and posture of the target object.
Of course, whichever method is used to obtain the position and posture of the target item, various image processing algorithms can be applied to the image of the target item to improve the accuracy of the result, such as filtering, binarization, and color-space conversion.
In addition, it is worth mentioning that determining the posture of the target item helps the robot determine its own posture and grasping action when grasping. For example, when grasping a can placed on a table, if it is recognized that there is an occluding item right next to the left side of the can, it is determined that it is easier for the robot to grasp the can from another direction. For another example, when grasping a roll of tape placed on a table, the posture of the robot and the grasping action are determined according to the placement posture of the tape roll, that is, which side of the tape the robot should move to and with what action the robot arm should grasp; the grasping action may be grasping the smooth curved outer surface of the whole tape roll, or grasping its inner and outer walls.
Step 102: Obtain the optimal grasping point of the robot according to the 3D position and posture of the target item.
In this embodiment, the optimal grasping point is a position such that, after the robot moves from its current position to this position, the robot arm can easily grasp the target item without the robot moving further. Specifically, the optimal grasping point can be determined according to the 3D position and posture of the target item combined with the parameters of the robot arm. The parameters of the robot arm may include the number of joints, the joint lengths, the joint motion angles, and so on.
It should be noted that once the posture of the target item is obtained, the grasping posture of the robot can be determined. The grasping posture of the robot indicates with what posture the robot faces the target item at the optimal grasping point, for example: the robot faces the target item head-on at the optimal grasping point (that is, with the optimal grasping point as the origin and the target item due north, the robot is face to face with the target item), or the robot faces the target item from a direction offset 30 degrees to the left at the optimal grasping point (that is, with the optimal grasping point as the origin and the target item due north, the grasping posture of the robot is 30 degrees west of north). In other words, when the robot reaches the optimal grasping point, its posture has already been adjusted to a grasping posture suitable for grasping.
In addition, during the calculation of the optimal grasping point, the visual sensor still acquires the 3D position and posture of the target item in real time or at the preset frequency to monitor whether the target item moves, so as to ensure that the position of the optimal grasping point is up to date and adapted to the target item.
Step 103: When the robot has moved to the optimal grasping point and the target item is within the optimal grasping range, obtain the current posture of the robot, and dynamically plan and execute the grasping operation based on the current posture of the robot, the current 3D position of the target item fed back by the visual sensor, and the current posture of the target item, wherein the optimal grasping range is the area that is centered on the optimal grasping point and can be grasped by the robot arm.
It should be noted that once the posture of the target item is obtained, the grasping posture of the robot can be determined, so the posture of the robot when it reaches the optimal grasping point is, in theory, the already-determined grasping posture. However, when the present application is applied to a robot with low control precision, such as a service robot, there is a certain deviation between the actual posture of the robot at the optimal grasping point and the determined grasping posture, which may ultimately affect whether the target item can be grasped accurately. Therefore, it is necessary to obtain the current posture of the robot, adjust the grasping operation in real time based on the current posture of the robot, the current 3D position of the target item, and the current posture of the target item, and grasp the target item dynamically.
In other words, from the moment the robot reaches the optimal grasping point and starts the grasping operation until the target item is finally grasped, the grasping path and grasping action are not determined once and for all. Instead, the grasping operation is continuously adjusted according to the robot posture and the position and posture of the target item fed back by the visual sensor, and the adjustment of the grasping path and grasping action also includes analyzing the three-dimensional space in which the target item is located and avoiding obstacles in it. Such repeated, short-range adjustments reduce the dependence on high-precision control of each part of the robot, imitate the grasping process of a human hand, and enable precise grasping of complex objects in complex environments.
In addition, in this embodiment, the current posture of the robot may refer either to the grasping posture of the robot, or to both the grasping posture of the robot and the posture of the robot arm.
In an embodiment, after step 102, the method further includes: while the robot is moving to the optimal grasping point, when it is determined through the visual sensor that the target item has left the optimal grasping range but is still within a preset effective grasping range, re-acquiring a new 3D position and new posture of the target item and recalculating the optimal grasping point based on the new 3D position and new posture; and, while the robot is moving to the optimal grasping point, when it is determined through the visual sensor that the target item has left the preset effective grasping range, feeding back a reminder message to the user.
In this embodiment, while the robot is moving from its current position to the optimal grasping point, the visual sensor acquires the 3D position of the target item at the preset frequency to determine whether the target item is still within the optimal grasping range. If it is within the optimal grasping range, the grasping operation is performed. If it is not within the optimal grasping range but is within the effective grasping range, the optimal grasping point is re-determined; if it is not within the effective grasping range, a reminder message is fed back to the user. It should be noted that the effective grasping range is preset; once the target item leaves the effective grasping range, it can no longer be detected even by turning or adjusting the visual sensor, that is, the target item cannot be grasped.
The robot object grasping method provided in the embodiment of the present application acquires the 3D position and posture of the target item through the visual sensor at a preset frequency, so it can monitor whether the target item moves and obtain its latest position and posture; it obtains the optimal grasping point of the robot according to the 3D position and posture of the target item; and when the robot has moved to the optimal grasping point and the target item is within the optimal grasping range, it obtains the current posture of the robot and dynamically plans and executes the grasping operation based on the current posture of the robot, the current 3D position of the target item fed back by the visual sensor, and the current posture of the target item. On the one hand, the target item is monitored through the visual sensor, so that the grasping operation is performed only when the target item is within the optimal grasping range, which ensures accurate grasping of the target item. On the other hand, based on the data fed back by the visual sensor about the target item and other items, the grasping operation can be adjusted in real time, three-dimensional obstacle avoidance can be performed in space, and the grasping of the target item can be dynamically planned, which reduces the dependence on high control accuracy of the robot arm and is better suited to object grasping in complex scenes.
The embodiment of the present application relates to a robot object grasping method. As shown in FIG. 2, the robot object grasping method includes the following steps:
Step 201: Acquire the 3D position and posture of the target item through a visual sensor at a preset frequency.
Specifically, the implementation details of step 201 in this embodiment are basically the same as those of step 101 and will not be repeated here.
Step 202: Obtain the optimal grasping point of the robot according to the 3D position and posture of the target item.
Step 203: Based on the optimal grasping point, obtain a navigation path of the robot from its current position to the optimal grasping point, wherein the navigation path includes the moving path of the robot from the current position to the optimal grasping point, and the moving speed and steering curvature of the robot along the moving path.
In some embodiments, step 203 specifically includes: receiving a moving path calculated and delivered by a cloud server connected to the robot according to a preset three-dimensional map, wherein the three-dimensional map includes the three-dimensional coordinates of each item in the environment where the target item is located; obtaining the moving speed and steering curvature of the robot along the moving path according to the moving path and the three-dimensional coordinates of each item in the preset three-dimensional map; and combining the moving path with the moving speed and steering curvature of the robot along the initial path to obtain the navigation path.
In this embodiment, the cloud server connected to the robot also provides corresponding support while the robot grasps objects. After the robot acquires the 2D or 3D image of the target item through the visual sensor, it synchronously transmits the image to the cloud server; the cloud server can likewise recognize and obtain the 3D position and posture of the target item and synchronize them into the digital twin environment in the cloud server, and when the 3D position and posture of the target item change, the digital twin environment is updated synchronously. Of course, the cloud server also contains a robot digital twin: after obtaining the relevant data of the robot body (such as chassis parameters and robot arm parameters), the robot digital twin can simulate the motion state of the real robot based on the data synchronized by the robot. In addition, the calculation of the optimal grasping point can be performed by the robot body or by the cloud server.
After the optimal grasping point is determined, the cloud server performs road network planning according to the preset three-dimensional map to obtain the moving path. It should be noted that there may be one moving path or several. When there are several moving paths, one of them can be selected and delivered to the robot, or several can be delivered to the robot, which then presents them to the user for selection.
In addition, the three-dimensional map includes the three-dimensional coordinates of each item in the environment where the target item is located, and the robot obtains its moving speed and steering curvature along the moving path according to the three-dimensional coordinates of each item in the preset three-dimensional map. For example, if the moving path is A—B—C—D, the robot needs to calculate its moving speed in each segment of the path, the steering curvature at each turn, and the corresponding steering speed. In this way, the robot can be prevented from colliding with other items on the moving path while it moves.
Step 204: When the robot has moved to the optimal grasping point along the navigation path and the target item is within the optimal grasping range, obtain the current posture of the robot.
Step 205: Receive the initial grasping path and initial grasping action delivered by the cloud server.
In some embodiments, after step 204 and before step 205, the method further includes: acquiring the three-dimensional environment information near the target item fed back by the visual sensor; and synchronously transmitting the current posture of the robot, the current 3D position of the target item, the current posture of the target item, and the three-dimensional environment information near the target item fed back by the visual sensor to the cloud server connected to the robot, so that the robot digital twin in the cloud server performs three-dimensional obstacle-avoidance planning according to the current posture of the robot, the current 3D position of the target item, the current posture of the target item, and the three-dimensional environment information near the target item, and obtains the initial grasping path and initial grasping action.
It should be noted that the initial grasping path is a three-dimensional path, and the initial grasping action is also a three-dimensional action. When planning the initial grasping path and initial grasping action, the cloud server also analyzes the three-dimensional environment information near the target item and plans a reasonable grasping path that avoids spatial obstacles. The three-dimensional environment information near the target item includes current position information and/or current posture information of items other than the target item within the optimal grasping range.
In another embodiment, after the cloud server obtains the initial grasping path and initial grasping action, and before the robot dynamically plans and executes the grasping operation based on the current posture of the robot, the current 3D position of the target item fed back by the visual sensor, and the current posture of the target item, the method further includes: when the current position information and/or current posture information of the target item fed back by the visual sensor changes, synchronously transmitting the current position information and/or current posture information of the target item to the cloud server, so that the robot digital twin in the cloud server re-plans the three-dimensional obstacle avoidance and updates the initial grasping path and initial grasping action; and/or, when the three-dimensional environment information near the target item fed back by the visual sensor changes, synchronously transmitting the three-dimensional environment information near the target item to the cloud server, so that the robot digital twin in the cloud server re-plans the three-dimensional obstacle avoidance and updates the initial grasping path and initial grasping action.
In this embodiment, between the time when the cloud server obtains the initial grasping path and grasping action and the time when the robot starts to perform the grasping operation, if the data of the target item and/or other items (that is, their position information and/or posture information) change, the initial grasping path and initial grasping action need to be updated according to the latest data. For example, if the robot has received the initial grasping path and initial grasping action from the cloud server but has not yet started grasping, and the visual sensor detects that a new item has appeared in front of the target item and blocks part of it, the robot synchronizes the latest three-dimensional environment information near the target item to the cloud server; after receiving this information, the cloud server re-plans the three-dimensional obstacle avoidance, updates the initial grasping path and initial grasping action, and delivers them to the robot again. For another example, if, while the cloud server is still calculating the initial grasping path and initial grasping action, the visual sensor detects that the target item has moved and that a new item has been placed on its left side, the robot synchronizes the latest position information of the target item and the latest three-dimensional environment information near the target item to the cloud server; after receiving this information, the cloud server re-plans the three-dimensional obstacle avoidance, obtains the latest initial grasping path and initial grasping action, and delivers them to the robot.
In this embodiment, the robot can synchronize its own relevant data to the robot digital twin in the cloud server and synchronize the relevant data of the target item to the digital twin environment in the cloud server. The robot digital twin simulates the grasping process according to the current posture of the robot, the current 3D position of the target item, and the current posture of the target item, obtains multiple candidate grasping paths and grasping actions, and selects the one with the best grasping effect as the initial grasping path and initial grasping action.
Step 206: While approaching the target item according to the initial grasping path and initial grasping action, continuously adjust the initial grasping path and initial grasping action according to the distance between the robot arm and the target item, the current 3D position of the target item, the current posture of the target item, and the three-dimensional environment information near the target item, until the target item is grasped.
In this embodiment, while the robot approaches the target item according to the initial grasping path and initial grasping action, its visual sensor keeps running; whether the target item and other items near the initial grasping path have moved is determined from the fed-back data, the current grasping state of the robot is analyzed on the basis of these data, and the initial grasping path and initial grasping action are continuously adjusted according to the analysis results, so that the grasping path and grasping action are dynamically planned until the target item is grasped.
In other words, the entire process of the robot object grasping method of the present application imitates the process in which human eyes and hands cooperate to grasp an object, and dynamic grasping is realized based on a double closed-loop control mechanism consisting of closed-loop control of the robot body and collaborative closed-loop control between the robot and the cloud server, which improves the robustness of the robot's grasping control and the stability of its motion behavior. The closed-loop control of the robot body means that the visual sensor of the robot body perceives the target item and transmits the perception results to the path planning module, which adjusts the moving speed, steering curvature, and grasping path; the path planning module delivers the planning results to the chassis and limb motion control module, which carries out the specific operations; and the visual sensor then perceives the current grasping state of the robot after those operations are performed, thus closing the control loop. The collaborative closed-loop control between the robot and the cloud server is similar to the above.
The robot object grasping method provided in the embodiment of the present application acquires the 3D position and posture of the target item through the visual sensor at a preset frequency, so it can monitor whether the target item moves and obtain its latest position and posture; it obtains the optimal grasping point of the robot according to the 3D position and posture of the target item; and when the robot has moved to the optimal grasping point and the target item is within the optimal grasping range, it obtains the current posture of the robot and dynamically plans and executes the grasping operation based on the current posture of the robot, the current 3D position of the target item fed back by the visual sensor, and the current posture of the target item. On the one hand, the target item is monitored through the visual sensor, so that the grasping operation is performed only when the target item is within the optimal grasping range, which ensures accurate grasping of the target item; on the other hand, the grasping operation can be adjusted in real time according to the feedback of the visual sensor to grasp dynamically, which reduces the dependence on high control accuracy of the robot arm and is better suited to object grasping in complex scenes. In addition, the present application realizes dynamic grasping based on the double closed-loop control mechanism of the robot body and the cloud server, which improves the robustness of the robot's grasping control and the stability of its motion behavior.
In addition, it should be understood that the division of the steps of the above methods is only for clarity of description; when implemented, the steps may be combined into one step, or a step may be split into several steps, and all such divisions fall within the protection scope of this patent as long as the same logical relationship is included; adding insignificant modifications to the flow or introducing insignificant designs without changing the core design of the flow also falls within the protection scope of this patent.
The embodiment of the present application relates to an object grasping apparatus, as shown in FIG. 3, including:
an information acquisition module 301, configured to acquire the 3D position and posture of the target item through a visual sensor at a preset frequency, and to obtain the optimal grasping point of the robot according to the 3D position and posture of the target item; and
a dynamic grasping module 302, configured to obtain the current posture of the robot when the robot has moved to the optimal grasping point and the target item is within the optimal grasping range, and to dynamically plan and execute the grasping operation based on the current posture of the robot, the current 3D position of the target item fed back by the visual sensor, and the current posture of the target item, wherein the optimal grasping range is the area that is centered on the optimal grasping point and can be grasped by the robot arm.
It is worth mentioning that the modules involved in this embodiment are logical modules; a logical unit may be a physical unit, a part of a physical unit, or a combination of multiple physical units. In addition, in order to highlight the innovative part of the present application, units that are not closely related to solving the technical problem proposed in the present application are not introduced in this embodiment, but this does not mean that no other units exist in this embodiment.
It is easy to see that this embodiment is an apparatus embodiment corresponding to the robot object grasping method embodiments, and this embodiment can be implemented in cooperation with the above embodiments. The relevant technical details mentioned in the above embodiments are still valid in this embodiment and are not repeated here in order to reduce repetition. Correspondingly, the relevant technical details mentioned in this embodiment can also be applied to the above method embodiments.
The embodiment of the present application relates to a robot, as shown in FIG. 4, including at least one processor 402 and a memory 401 communicatively connected to the at least one processor 402, wherein the memory 401 stores instructions executable by the at least one processor 402, and the instructions are executed by the at least one processor 402 so that the at least one processor 402 can execute the robot object grasping method described in any one of the above embodiments.
In addition, the robot may also include other devices such as a visual sensor and a positioning device.
It should be noted that the robot object grasping method of the present application is implemented by the robot in cooperation with a cloud server. Specifically, a schematic structural diagram of the cloud server is shown in FIG. 5; the cloud server includes a road network planning module 501, a robot digital twin module 502, a digital twin environment module 503, a robot simulation grasping module 504, and a robot task planning module 505. Among them, the road network planning module 501 is configured to obtain the navigation path along which the robot moves from its current position to the optimal grasping point; the robot digital twin module 502 is configured to simulate the real robot; the digital twin environment module 503 is configured to simulate the working environment of the real robot; the robot task planning module 505 is configured to train the robot digital twin in the digital twin environment and obtain all the tasks required to complete the grasping of the target item; and the robot simulation grasping module 504 is configured to simulate the robot's grasping operation.
In this embodiment, dynamic grasping of the target item is realized based on the double closed-loop control mechanism of closed-loop control of the robot body and collaborative closed-loop control between the robot and the cloud server, which improves the robustness of the robot's grasping control and the stability of its motion behavior. The closed-loop control of the robot body means that the visual sensor module of the robot body perceives the target item and transmits the perception results to the path planning module in the robot, which adjusts the moving speed, steering curvature, and grasping path; the planning results are delivered to the chassis and limb motion control module in the robot, which carries out the specific operations; and the visual sensor then perceives the current grasping state of the robot after those operations are performed, thus closing the control loop. The collaborative closed-loop control between the robot and the cloud server is similar to the above.
The memory 401 and the processor 402 are connected by a bus. The bus may include any number of interconnected buses and bridges, and connects the various circuits of the one or more processors 402 and the memory 401 together. The bus may also connect together various other circuits such as peripherals, voltage regulators, and power management circuits, all of which are well known in the art and therefore are not described further herein. A bus interface provides an interface between the bus and the transceiver. The transceiver may be a single element or multiple elements, such as multiple receivers and transmitters, providing a unit for communicating with various other devices over a transmission medium. Data processed by the processor 402 are transmitted over the wireless medium through an antenna, and the antenna also receives data and transfers them to the processor 402.
The processor 402 is responsible for managing the bus and general processing, and may also provide various functions, including timing, peripheral interfaces, voltage regulation, power management, and other control functions; the memory 401 may be used to store data used by the processor 402 when performing operations.
The embodiment of the present application relates to a computer program which, when executed by a processor, implements the robot object grasping method described in any one of the above method embodiments.
The embodiment of the present application relates to a computer-readable storage medium storing a computer program which, when executed by a processor, implements the robot object grasping method described in any one of the above embodiments.
That is, those skilled in the art can understand that all or part of the steps of the methods in the above embodiments can be completed by instructing the relevant hardware through a program; the program is stored in a storage medium and includes several instructions for causing a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage media include: a USB flash drive, a portable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, and other media that can store program code.
Those of ordinary skill in the art can understand that the above embodiments are specific embodiments for realizing the present application, and that in practical applications various changes may be made to them in form and detail without departing from the spirit and scope of the present application.

Claims (14)

  1. A robot object grasping method, comprising:
    acquiring the 3D position and posture of a target item through a visual sensor at a preset frequency;
    obtaining the optimal grasping point of a robot according to the 3D position and posture of the target item; and
    when the robot has moved to the optimal grasping point and the target item is within the optimal grasping range, obtaining the current posture of the robot, and dynamically planning and executing a grasping operation based on the current posture of the robot, the current 3D position of the target item fed back by the visual sensor, and the current posture of the target item, wherein the optimal grasping range is the area that is centered on the optimal grasping point and can be grasped by the robot arm.
  2. The robot object grasping method according to claim 1, wherein after obtaining the optimal grasping point of the robot according to the 3D position and posture of the target item, the method further comprises:
    while the robot is moving to the optimal grasping point, when it is determined through the visual sensor that the target item has left the optimal grasping range but is within a preset effective grasping range, re-acquiring a new 3D position and a new posture of the target item, and recalculating the optimal grasping point based on the new 3D position and the new posture; and
    while the robot is moving to the optimal grasping point, when it is determined through the visual sensor that the target item has left the preset effective grasping range, feeding back a reminder message to the user.
  3. The robot object grasping method according to claim 1 or 2, wherein after obtaining the current posture of the robot and before dynamically planning and executing the grasping operation based on the current posture of the robot, the current 3D position of the target item fed back by the visual sensor, and the current posture of the target item, the method further comprises:
    acquiring three-dimensional environment information near the target item fed back by the visual sensor; and
    synchronously transmitting the current posture of the robot, the current 3D position of the target item, the current posture of the target item, and the three-dimensional environment information near the target item to a cloud server connected to the robot, so that a robot digital twin in the cloud server performs three-dimensional obstacle-avoidance planning according to the current posture of the robot, the current 3D position of the target item, the current posture of the target item, and the three-dimensional environment information near the target item, and obtains an initial grasping path and an initial grasping action.
  4. The robot object grasping method according to claim 3, wherein dynamically planning and executing the grasping operation based on the current posture of the robot, the current 3D position of the target item fed back by the visual sensor, and the current posture of the target item comprises:
    receiving the initial grasping path and the initial grasping action delivered by the cloud server; and
    while approaching the target item according to the initial grasping path and the initial grasping action, continuously adjusting the initial grasping path and the initial grasping action according to the distance between the robot arm and the target item, the current 3D position of the target item, the current posture of the target item, and the three-dimensional environment information near the target item, until the target item is grasped.
  5. The robot object grasping method according to claim 3, wherein the three-dimensional environment information near the target item comprises: current position information and/or current posture information of items other than the target item within the optimal grasping range.
  6. The robot object grasping method according to claim 3, wherein after obtaining the initial grasping path and the initial grasping action and before dynamically planning and executing the grasping operation based on the current posture of the robot, the current 3D position of the target item fed back by the visual sensor, and the current posture of the target item, the method further comprises:
    when the current position information and/or current posture information of the target item fed back by the visual sensor changes, synchronously transmitting the current position information and/or current posture information of the target item to the cloud server, so that the robot digital twin in the cloud server re-plans the three-dimensional obstacle avoidance and updates the initial grasping path and the initial grasping action;
    and/or,
    when the three-dimensional environment information near the target item fed back by the visual sensor changes, synchronously transmitting the three-dimensional environment information near the target item to the cloud server, so that the robot digital twin in the cloud server re-plans the three-dimensional obstacle avoidance and updates the initial grasping path and the initial grasping action.
  7. The robot object grasping method according to any one of claims 1 to 6, wherein, when the robot has moved to the optimal grasping point and the target item is within the optimal grasping range, before dynamically planning and executing the grasping operation based on the current 3D position and current posture of the target item fed back by the visual sensor, the method further comprises:
    obtaining, based on the optimal grasping point, a navigation path of the robot from its current position to the optimal grasping point;
    wherein the navigation path comprises the moving path of the robot from the current position to the optimal grasping point, and the moving speed and steering curvature of the robot along the moving path.
  8. The robot object grasping method according to claim 7, wherein the robot moves to the optimal grasping point according to the navigation path.
  9. The robot object grasping method according to claim 7, wherein obtaining, based on the optimal grasping point, the navigation path of the robot from its current position to the optimal grasping point comprises:
    receiving a moving path calculated and delivered by a cloud server connected to the robot according to a preset three-dimensional map, wherein the three-dimensional map comprises the three-dimensional coordinates of each item in the environment where the target item is located;
    obtaining the moving speed and steering curvature of the robot along the moving path according to the moving path and the three-dimensional coordinates of each item in the preset three-dimensional map; and
    combining the moving path with the moving speed and steering curvature of the robot along the initial path to obtain the navigation path.
  10. The robot object grasping method according to any one of claims 1 to 9, wherein acquiring the 3D position and posture of the target item through the visual sensor comprises:
    acquiring an image of the target item through the visual sensor, the image of the target item being a 2D image or a 3D image; and
    detecting the image of the target item through a visual perception recognition algorithm to acquire the 3D position and posture of the target item.
  11. A robot object grasping apparatus, comprising:
    an information acquisition module, configured to acquire the 3D position and posture of a target item through a visual sensor at a preset frequency, and to obtain the optimal grasping point of the robot according to the 3D position and posture of the target item; and
    a dynamic grasping module, configured to obtain the current posture of the robot when the robot has moved to the optimal grasping point and the target item is within the optimal grasping range, and to dynamically plan and execute a grasping operation based on the current posture of the robot, the current 3D position of the target item fed back by the visual sensor, and the current posture of the target item, wherein the optimal grasping range is the area that is centered on the optimal grasping point and can be grasped by the robot arm.
  12. A robot, comprising:
    at least one processor; and
    a memory communicatively connected to the at least one processor, wherein
    the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the robot object grasping method according to any one of claims 1 to 10.
  13. A computer program which, when executed by a processor, implements the robot object grasping method according to any one of claims 1 to 10.
  14. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the robot object grasping method according to any one of claims 1 to 10.
PCT/CN2023/073269 2022-01-27 2023-01-20 机器人物品抓取方法、装置、机器人、程序及存储介质 WO2023143408A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210103212.7 2022-01-27
CN202210103212.7A CN114347033B (zh) 2022-01-27 2022-01-27 机器人物品抓取方法、装置、机器人及存储介质

Publications (1)

Publication Number Publication Date
WO2023143408A1 true WO2023143408A1 (zh) 2023-08-03

Family

ID=81092720

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/073269 WO2023143408A1 (zh) 2022-01-27 2023-01-20 机器人物品抓取方法、装置、机器人、程序及存储介质

Country Status (2)

Country Link
CN (1) CN114347033B (zh)
WO (1) WO2023143408A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116725730A (zh) * 2023-08-11 2023-09-12 北京市农林科学院智能装备技术研究中心 一种基于视觉引导的猪只疫苗注射方法、系统及存储介质

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114347033B (zh) * 2022-01-27 2023-12-08 达闼机器人股份有限公司 机器人物品抓取方法、装置、机器人及存储介质
CN114905513A (zh) * 2022-05-17 2022-08-16 安徽果力智能科技有限公司 一种基于软体手的复合机器人的抓取方法及系统
CN114932554B (zh) * 2022-06-06 2023-12-01 北京钢铁侠科技有限公司 抓取机器人的自主移动方法、装置、存储介质及设备

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007290056A (ja) * 2006-04-24 2007-11-08 Yaskawa Electric Corp ロボットおよびその物体把持方法
CN108161931A (zh) * 2016-12-07 2018-06-15 广州映博智能科技有限公司 基于视觉的工件自动识别及智能抓取系统
CN111015655A (zh) * 2019-12-18 2020-04-17 深圳市优必选科技股份有限公司 机械臂抓取方法、装置、计算机可读存储介质及机器人
CN111015662A (zh) * 2019-12-25 2020-04-17 深圳蓝胖子机器人有限公司 一种动态抓取物体方法、系统、设备和动态抓取垃圾方法、系统、设备
CN111823223A (zh) * 2019-08-19 2020-10-27 北京伟景智能科技有限公司 一种基于智能立体视觉的机器人手臂抓取控制系统及方法
CN113547521A (zh) * 2021-07-29 2021-10-26 中国科学技术大学 视觉引导的移动机器人自主抓取与精确移动方法及系统
CN114347033A (zh) * 2022-01-27 2022-04-15 达闼机器人有限公司 机器人物品抓取方法、装置、机器人及存储介质

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106965180A (zh) * 2017-04-13 2017-07-21 北京理工大学 流水线上瓶子的机械臂抓取装置与方法
CN110281231B (zh) * 2019-03-01 2020-09-29 浙江大学 无人化fdm增材制造的移动机器人三维视觉抓取方法
US11312581B2 (en) * 2019-04-16 2022-04-26 Abb Schweiz Ag Object grasp system and method
CN112936275B (zh) * 2021-02-05 2023-03-21 华南理工大学 一种基于深度相机的机械臂抓取系统和控制方法

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007290056A (ja) * 2006-04-24 2007-11-08 Yaskawa Electric Corp ロボットおよびその物体把持方法
CN108161931A (zh) * 2016-12-07 2018-06-15 广州映博智能科技有限公司 基于视觉的工件自动识别及智能抓取系统
CN111823223A (zh) * 2019-08-19 2020-10-27 北京伟景智能科技有限公司 一种基于智能立体视觉的机器人手臂抓取控制系统及方法
CN111015655A (zh) * 2019-12-18 2020-04-17 深圳市优必选科技股份有限公司 机械臂抓取方法、装置、计算机可读存储介质及机器人
CN111015662A (zh) * 2019-12-25 2020-04-17 深圳蓝胖子机器人有限公司 一种动态抓取物体方法、系统、设备和动态抓取垃圾方法、系统、设备
CN113547521A (zh) * 2021-07-29 2021-10-26 中国科学技术大学 视觉引导的移动机器人自主抓取与精确移动方法及系统
CN114347033A (zh) * 2022-01-27 2022-04-15 达闼机器人有限公司 机器人物品抓取方法、装置、机器人及存储介质

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116725730A (zh) * 2023-08-11 2023-09-12 北京市农林科学院智能装备技术研究中心 一种基于视觉引导的猪只疫苗注射方法、系统及存储介质
CN116725730B (zh) * 2023-08-11 2023-12-05 北京市农林科学院智能装备技术研究中心 一种基于视觉引导的猪只疫苗注射方法、系统及存储介质

Also Published As

Publication number Publication date
CN114347033B (zh) 2023-12-08
CN114347033A (zh) 2022-04-15

Similar Documents

Publication Publication Date Title
WO2023143408A1 (zh) 机器人物品抓取方法、装置、机器人、程序及存储介质
WO2022021739A1 (zh) 一种语义智能变电站机器人仿人巡视作业方法及系统
US11640517B2 (en) Update of local features model based on correction to robot action
US20230405812A1 (en) Determining and utilizing corrections to robot actions
CN108283021B (zh) 机器人和用于机器人定位的方法
Snape et al. Independent navigation of multiple mobile robots with hybrid reciprocal velocity obstacles
US20230154015A1 (en) Virtual teach and repeat mobile manipulation system
AU2018252237B2 (en) Co-localisation
US8355816B2 (en) Action teaching system and action teaching method
WO2012153629A1 (ja) 運動予測制御装置と方法
WO2018026836A1 (en) Generating a model for an object encountered by a robot
CN110170995A (zh) 一种基于立体视觉的机器人快速示教方法
US11254003B1 (en) Enhanced robot path planning
WO2017166767A1 (zh) 一种信息处理方法和移动装置、计算机存储介质
US20220382282A1 (en) Mobility aid robot navigating method and mobility aid robot using the same
TWI555524B (zh) 機器人的行動輔助系統
US20220366725A1 (en) Engagement Detection and Attention Estimation for Human-Robot Interaction
WO2020024150A1 (zh) 地图处理方法、设备、计算机可读存储介质
US11436063B1 (en) Systems and methods for inter-process communication within a robot
WO2018133074A1 (zh) 一种基于大数据及人工智能的智能轮椅系统
US11865724B2 (en) Movement control method, mobile machine and non-transitory computer readable storage medium
CN115922731B (zh) 一种机器人的控制方法以及机器人
WO2018133076A1 (zh) 一种智能轮椅的机械传动控制方法与系统
Buys et al. Haptic coupling with augmented feedback between two KUKA Light-Weight Robots and the PR2 robot arms
TWI788217B (zh) 立體空間中定位方法與定位系統

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23746287

Country of ref document: EP

Kind code of ref document: A1