CN113538459A - Multi-mode grabbing obstacle avoidance detection optimization method based on drop point area detection - Google Patents

Multi-mode grabbing obstacle avoidance detection optimization method based on drop point area detection

Info

Publication number
CN113538459A
CN113538459A
Authority
CN
China
Prior art keywords
target
grabbing
coordinate system
area
detection
Prior art date
Legal status
Granted
Application number
CN202110766116.6A
Other languages
Chinese (zh)
Other versions
CN113538459B (en)
Inventor
陈锐
刘道会
朱信宇
王慧港
李洋
蒲华燕
罗均
Current Assignee
Chongqing University
Original Assignee
Chongqing University
Priority date
Filing date
Publication date
Application filed by Chongqing University
Priority to CN202110766116.6A
Publication of CN113538459A
Application granted
Publication of CN113538459B
Status: Expired - Fee Related

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • G06T2207/30164Workpiece; Machine component
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Manipulator (AREA)

Abstract

The invention discloses a multi-mode grabbing obstacle avoidance detection optimization method based on drop point area detection. The main steps are: design grabs for each target under its typical modes to obtain the gripper's drop point areas in the different grabbing modes; establish a multi-mode grabbing planning database; acquire target scene information and detect the type, posture, and position of the target part in the camera coordinate system; index the grabbing planning database to obtain the grabbing plan for that mode; compute the gripper pose in the camera coordinate system and the drop point area in the workpiece coordinate system for grabbing the target; check the drop point area, executing the grab if there is no collision and optimizing and re-detecting if there is; transform to obtain the gripper pose in the robot coordinate system and execute the grab. The invention can accurately collision-check a planned grab, can be used for grabbing planning with multiple objects and multiple modes, is simple to apply, suits a wide range of grabbing scenarios, and improves the safety of automated grabbing.

Description

Multi-mode grabbing obstacle avoidance detection optimization method based on drop point area detection
Technical Field
The invention relates to the technical field of robots, in particular to an obstacle avoidance detection and optimization method for grabbing planning in an autonomous grabbing and sorting process of a robot.
Background
With the development of automatic control and machine vision detection technologies, robots have fully replaced humans in many industrial settings. They offer advantages humans cannot match in working continuity, stability, and precision, have helped many production lines achieve full automation, and have greatly improved production efficiency. Meanwhile, with the development of modern control theory and artificial intelligence, robots are becoming ever more intelligent, and the tasks they can undertake are increasingly complex and varied. Grabbing, one of the basic actions through which humans interact with the real world, is likewise a capability a robot must possess to accomplish numerous tasks. Most existing robot grabbing technology is deployed in structured scenes: the robot only needs to perform a fixed grabbing action triggered by certain sensor signals, achieving fixed-point grabbing, carrying, and placing. This is stable but limited to simple tasks. Robot technology, and household service robots in particular, will inevitably move toward more complex scenes that fit the actual environment. For the grabbing requirements of such unstructured scenes, current mainstream target detection methods, such as deep-learning-based YOLO or template matching based on traditional image processing, cannot reach 100% detection accuracy and carry a risk of false detection. This may cause unstable grabs, dropped objects, wrong grabbing positions, or failed grabs; the most serious consequence is that interference at the grabbing position causes a collision, with severe consequences for the target object, the gripper, the mechanical arm, and even the whole robot system. Most existing robot obstacle avoidance algorithms must acquire and map three-dimensional scene information and then avoid obstacles through path planning; they are complex to deploy and apply, require additional equipment, are difficult to retrofit onto existing production lines, and can still collide when detection fails. In view of these problems, how to design a simple and reliable grabbing obstacle avoidance method that allows rapid deployment of new systems and convenient modification of existing production lines, improving the safety and reliability of robot grabbing technology, is an important research topic.
Disclosure of Invention
The invention discloses a multi-mode grabbing obstacle avoidance detection optimization method based on drop point area detection. It aims to solve the obstacle avoidance detection problem in robot grabbing and, in a simple and convenient way, to improve the safety and reliability of existing robot grabbing scenarios and reduce the risk of accidents.
In order to achieve the purpose, the invention adopts the following technical scheme.
The invention provides a multi-mode grabbing obstacle avoidance detection optimization method based on drop point area detection, oriented to a robot grabbing system composed of an upper computer, a robot, a camera, and a number of target objects. The system works in an unstructured scene: the target objects are randomly scattered within the robot's working range, possibly stacked and occluding one another. The system's main function is to collect scene information through the camera, which the upper computer processes and transmits to the robot to complete the grabbing of the target objects. The upper computer is a computer or another type of processor with the required image processing capability. The robot comprises a robot body, a controller, an end gripper, and so on; the body has multi-degree-of-freedom motion capability, such as an articulated or Cartesian mechanical arm, and can accurately execute the motion instructions sent by the upper computer; the end gripper is selected according to the target objects and must be capable of grabbing them, and its 3-dimensional model is known so that its drop point areas can be segmented and extracted. The cameras include, but are not limited to, 2D plane cameras, depth cameras, and 3D cameras, arranged at reasonable positions where they can provide the upper computer with scene information covering the target objects and the robot's working space. The target objects are objects from the actual application scene with known 3-dimensional models.
The construction and operation of the robot gripping system adopting the obstacle avoidance detection optimization method provided by the invention comprise the following steps:
S1, determine the working scene and the type and geometry of the target parts to be grabbed;
S2, complete hardware selection (mainly the robot, camera, gripper, and working platform) and scene construction;
S3, comprehensively considering the targets to be grabbed and the geometry of the gripper used, design grabbing plans for each target in its different typical modes to obtain the gripper's drop point areas in the different grabbing modes;
S4, establish a database of the targets to be grabbed and their multi-mode grabbing plans;
S5, calibrate the positional relation between the camera and the gripper to obtain the hand-eye transformation matrix;
S6, acquire target scene information with the camera and obtain the type, posture, and position of the target part in the camera coordinate system with a detection algorithm;
S7, index the grabbing planning database with the identified target part information to obtain the gripper pose in the camera coordinate system and the drop point area in the workpiece coordinate system for grabbing the target object;
S8, perform obstacle avoidance detection on the obtained drop point area;
S9, if there is no collision, execute the grab;
S10, if there is a collision, replace the target or repeat steps S8-S10 after optimization;
S11, obtain the gripper pose in the robot coordinate system through the hand-eye transformation matrix and execute the grab.
In step S3, grabbing plans are designed for each target in its different typical modes, comprehensively considering the target to be grabbed and the geometry of the gripper used, to obtain the gripper's drop point areas in the different grabbing modes. This specifically includes the following steps:
S3-1, establish a workpiece coordinate system Coor_obj for each target part, determining the origin and the direction of each axis;
S3-2, analyze the placement states of each target part and determine its possible typical postures, such as lying flat, standing on a side, leaning, or inverted;
S3-3, determine the angle range of each typical posture in the camera coordinate system Coor_cam and set angle thresholds for subsequent identification and classification. For example, for a square thin-plate part the possible postures are face-up or face-down; when a 3D camera is used, a point cloud matching algorithm can obtain the part's pose in the camera coordinate system Coor_cam as p_cam = (x_cam, y_cam, z_cam, rx_cam, ry_cam, rz_cam). If -90° ≤ rx_cam ≤ 90°, the front face is judged to be approximately upward and the part is grabbed according to the face-up plan; if 90° ≤ rx_cam ≤ 270°, the reverse face is judged to be approximately upward and the part is grabbed according to the inverted plan. More complex rules can be set for objects with complex shapes (a minimal sketch of this angle test is given after this list).
S3-4, selecting a graspable area on the target object for each typical posture;
S3-5, design the gripper center and finger drop point areas according to the selected grippable area and the gripper shape, and calculate the coordinate ranges of the drop point areas in the object coordinate system Coor_obj.
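As a concrete illustration of the S3-3 rule for the thin-plate example, the following is a minimal sketch, assuming the detection step already returns rx_cam in degrees; the function name and mode labels are illustrative, not part of the patent.

```python
def classify_plate_mode(rx_cam_deg: float) -> str:
    """Classify a square thin-plate part as face-up or face-down from the
    rx angle of its detected pose in the camera coordinate system."""
    # Normalize the angle into [-180, 180) so one threshold test suffices;
    # the [90, 270] range in the text maps to |rx| > 90 after normalization.
    rx = (rx_cam_deg + 180.0) % 360.0 - 180.0
    if -90.0 <= rx <= 90.0:
        return "face_up"    # grab with the face-up (forward placement) plan
    return "face_down"      # grab with the inverted plan
```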
Step S4 establishes the targets to be grabbed and the multi-mode grabbing plan database. The constructed grabbing plan database at least includes the following (an illustrative entry layout is sketched after this list):
(1) the type, 3-dimensional model, and other information of each target object to be grabbed;
(2) the gripper's grabbing mode for each target object in each placement pose, i.e., the gripper pose in the target object's workpiece coordinate system;
(3) the optimization rules for each grabbing mode when its drop point area contains a collision;
(4) the end shape and size parameters of the gripper used;
(5) the key position parameters of the gripper-end drop point area in the target object's local coordinate system, such as the corner coordinates of a rectangular area or the center coordinates and radius of a circular area.
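One possible in-memory layout for a database entry covering fields (1)-(5), written as a plain Python dictionary; every field name, unit, and value below is an illustrative assumption, not taken from the patent.

```python
# Illustrative grabbing-plan database entry; lengths in millimetres.
grab_plan_db = {
    "part_A": {
        "model": "part_A.stl",                          # (1) type and 3D model
        "gripper": {"finger_width": 10.0,               # (4) end shape and size
                    "finger_thickness": 5.0},
        "modes": {
            "flat": {
                # (2) gripper pose in the workpiece frame Coor_obj:
                # (x, y, z, rx, ry, rz)
                "gripper_pose_obj": (0.0, 0.0, 20.0, 0.0, 180.0, 0.0),
                # (5) key drop point area parameters in Coor_obj
                "drop_areas": [
                    {"type": "rect", "x": (-20.0, 20.0), "y": (15.0, 25.0)},
                    {"type": "rect", "x": (-20.0, 20.0), "y": (-25.0, -15.0)},
                ],
                # (3) optimization rule when the drop point area collides
                "optimize": {"op": "translate", "axis": "x",
                             "step": 5.0, "max_iter": 8},
            },
        },
    },
}
```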
In the detection and obstacle avoidance process of steps S6 to S8, the specific algorithm and processing strategy depend on the camera and detection method used, including but not limited to the following (an image-based sketch follows this list):
(1) If an ordinary 2D camera provides an RGB or grayscale image, a template matching or target detection method can extract the target from the scene; the image is binarized, edges are extracted from the original image with the Canny operator or the like, then the number of non-zero pixels or the pixel values in the gripper drop point area are counted, and the area is considered to contain a collision once a set threshold is reached.
(2) If a depth camera provides a target depth image, a surface matching method can extract the target from the scene; the image is binarized, edges are extracted with the Canny operator or the like, then the number of non-zero pixels or the pixel values in the gripper drop point area are counted, and the area is considered to contain a collision once a set threshold is reached.
(3) If a structured light, binocular, or other stereo camera provides point cloud information, a matching algorithm such as ICP (Iterative Closest Point) or point cloud surface matching can extract the target from the scene; the gripper drop point area is then cropped out, the points inside it are counted, and the area is considered to contain a collision once a set threshold is reached.
In the obstacle avoidance detection process, the parts of the grabbing system operate in different coordinate systems, mainly the target workpiece coordinate system Coor_obj, the camera coordinate system Coor_cam, the robot base coordinate system Coor_base, and the robot end tool coordinate system Coor_tool. The workpiece coordinate system Coor_obj is determined by the user from the geometry of the target part; grabbing planning and collision detection optimization are mainly performed in this coordinate system. The camera coordinate system Coor_cam is fixed by the camera at the factory; image processing, segmentation, and target posture determination are mainly performed in this coordinate system. The robot base coordinate system Coor_base is fixed by the robot at the factory; robot motion and grab execution are mainly performed in it. Coordinate transformations between the coordinate systems are realized through transformation matrices: the hand-eye transformation matrix between the camera coordinate system and the robot base coordinate system is obtained through calibration, and the transformation matrix between the workpiece coordinate system and the camera coordinate system is obtained by inverse solution of the workpiece origin coordinates in the camera coordinate system.
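The chain of transforms described here reduces to a single matrix product; a minimal numpy sketch, assuming all poses are given as 4x4 homogeneous matrices.

```python
import numpy as np

def gripper_pose_in_base(T_base_cam, T_cam_obj, T_obj_grip):
    """Chain Coor_obj -> Coor_cam -> Coor_base.

    T_base_cam: hand-eye matrix from calibration (camera frame in base frame).
    T_cam_obj:  detected object pose in the camera frame.
    T_obj_grip: planned gripper pose in the workpiece frame (from the database).
    Returns the 4x4 gripper pose in the robot base frame.
    """
    return T_base_cam @ T_cam_obj @ T_obj_grip
```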
Compared with the prior art, the invention has the beneficial effects that:
1. the multi-mode grabbing obstacle avoidance detection optimization method based on drop point area detection can effectively detect collisions in an autonomous robot grabbing system and ensure the safety of a system working in complex unstructured scenes;
2. the method is simple in structure, requires no complex algorithms and no additional hardware, and can be deployed quickly into an existing grabbing system with only an analysis of the target parts and a few algorithm-level changes;
3. the method is highly adaptable: it suits a wide range of 2D and 3D cameras and many different types of grippers, requiring only that grabbing be designed for the target object according to the method, the drop point areas be calculated, and reasonable detection thresholds be set.
Drawings
FIG. 1 is a schematic diagram of a typical application scenario of the present invention;
FIG. 2 is a schematic diagram of the location of coordinate systems involved in the workflow of the present invention;
FIG. 3 is a preparation workflow diagram of the present invention;
FIG. 4 is a flow chart of an exemplary operation of a robotic grasping system applying the present invention;
FIG. 5 is a schematic view of a target object grabbing mode plan in a lay-flat position;
FIG. 6 is a schematic view of a target object grabbing mode plan in a side-up position;
FIG. 7 is a schematic view of a target object grabbing mode plan in an upright position;
FIG. 8 is a schematic diagram of an original cuboid grabbing plan with collision in the drop point area;
FIG. 9 is a schematic diagram of a cuboid grabbing plan optimized by the default upward translation;
FIG. 10 is a schematic diagram of a cuboid grabbing plan optimized by the default downward translation;
FIG. 11 is a schematic diagram of an original grabbing plan of a cylindrical object with collision in a landing point area;
FIG. 12 is a schematic diagram of a default rotation optimized cylindrical object grabbing plan;
FIG. 13 is a schematic diagram of the directed obstacle avoidance optimization strategy based on the analysis of the drop point sub-regions in the present invention;
FIG. 14 is a diagram illustrating the result of the directed obstacle avoidance optimization.
Detailed Description
Specific embodiments of the present invention will be described in further detail below with reference to examples and drawings, but the present invention is not limited thereto.
The implementation of the invention first requires determining and constructing the grabbing scene. A typical grabbing system scene is shown in FIG. 1: it comprises a camera 1, a grabbing gripper 2, a robot 3, a grabbing platform 4, a target part 5 to be grabbed, a camera support 6, and an upper computer, sensors, and other related equipment not shown in the figure. The type of the camera 1 is determined by the actual imaging requirements and includes 2D plane cameras, depth cameras, and 3D cameras such as structured light or binocular cameras; the camera is arranged at a reasonable position where it can provide the upper computer with scene information covering the target objects and the robot's working space. The grabbing gripper is selected according to the target object, must be capable of grabbing it, and has a known 3-dimensional model so that its drop point areas can be segmented and extracted. The robot 3 should have multi-degree-of-freedom motion capability, such as an articulated or Cartesian mechanical arm, and accurately execute the action instructions sent by the upper computer. The grabbing platform 4 can be fixed or movable, such as a conveyor belt, and lies within the optimal imaging range of the camera. The target part 5 to be grabbed is an object from the actual application scene with a known 3-dimensional model; in actual grabbing, several scattered target parts of different types may be present simultaneously, and only one is drawn in the figure for illustration.
The work of the parts of the invention involves several different coordinate systems; the position and definition of each is shown in FIG. 2, mainly comprising the camera coordinate system C1 (Coor_cam), the robot end tool coordinate system C2 (Coor_tool), the robot base coordinate system C3 (Coor_base), and the target object workpiece coordinate system C4 (Coor_obj). The workpiece coordinate system Coor_obj is determined by the user from the geometry of the target part; grabbing planning and collision detection optimization are mainly performed in this coordinate system. The camera coordinate system Coor_cam is fixed by the camera at the factory; image processing, segmentation, and target posture determination are mainly performed in this coordinate system. The robot base coordinate system Coor_base is fixed by the robot at the factory; robot motion and grab execution are mainly performed in it. Coordinate transformations between the coordinate systems are realized through transformation matrices: the hand-eye transformation matrix between the camera coordinate system and the robot base coordinate system is obtained through calibration, and the transformation matrix between the workpiece coordinate system and the camera coordinate system is obtained by inverse solution of the workpiece origin coordinates in the camera coordinate system.
The implementation of the present invention also requires completing various preparation tasks, as shown in FIG. 3, mainly including:
S101, determine the target: determine the type, geometry, and other information of the objects to be grabbed and obtain their 3-dimensional models;
S102, analyze the target: analyze the geometry and outline of each target to be grabbed and the different typical postures it may take in the scene, and determine the grippable area in each posture;
S103, multi-mode grabbing planning: based on the target analysis results and the geometric parameters of the gripper used, determine the grabbing mode for each mode of each target; each grabbing mode records the gripper's coordinates in the workpiece coordinate system Coor_obj;
S104, calculate the drop point areas: once a grabbing mode is determined, determine the finger drop point areas of the gripper in that mode from the grabbing position and the gripper shape;
S105, design the optimization rules: design a grabbing optimization method, applied when a collision is present, according to the target geometry and the selected grabbing mode;
S107, establish the grabbing plan database: summarize and integrate the data per grabbing target and build an indexable grabbing planning database;
S108, robot-camera hand-eye calibration: complete the hand-eye calibration between the robot and the camera with a calibration board to obtain the transformation matrix between the two coordinate systems (a sketch follows below).
Step S106 is reserved: it can be added for a specific working scene to store additional work-related information in the database.
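Step S108 can be sketched with OpenCV's calibrateHandEye; the pose lists are assumed to have been collected by moving the robot over a calibration board (board poses e.g. from cv2.solvePnP), and the choice of the Tsai method is an assumption.

```python
import cv2
import numpy as np

def hand_eye_matrix(R_gripper2base, t_gripper2base, R_target2cam, t_target2cam):
    """Return a 4x4 hand-eye transformation matrix from per-shot pose lists.

    For a camera fixed on a stand (eye-to-hand), as in FIG. 1, pass the
    inverted gripper poses (base-to-gripper) in place of gripper-to-base.
    """
    R, t = cv2.calibrateHandEye(R_gripper2base, t_gripper2base,
                                R_target2cam, t_target2cam,
                                method=cv2.CALIB_HAND_EYE_TSAI)
    T = np.eye(4)                 # assemble the homogeneous matrix
    T[:3, :3] = R
    T[:3, 3] = t.ravel()
    return T
```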
The main workflow of a robot grabbing system applying the invention is shown in FIG. 4 and mainly comprises the following steps:
S201, collect an image. The upper computer or another sensing signal triggers the camera to take a picture and collect images, obtaining grabbing scene information. The cameras include, but are not limited to, 2D plane cameras, depth cameras, and 3D cameras such as structured light or binocular cameras;
S202, identify the target. Acquire the target type, pose, and other information from the raw image with an image processing algorithm. The specific recognition algorithm depends on the camera used and the image information obtained;
S204, plan the grab. Index the identified target information in the grabbing plan database to plan a grab for the current target;
S205, extract the drop point region. Obtain the key position parameters of the drop point area from the grabbing plan and extract the local image of the gripper's drop point area from the scene image;
S206, drop point area collision detection. Process and analyze the local drop point area image to obtain a data index of whether a collision exists and compare it with a set threshold: if the index exceeds the threshold, a collision is present and S207 is executed; otherwise no collision is present and S208 is executed. The specific detection index, the threshold setting, and the recognition method of step S202 are determined by the selected camera type, the processing method, and actual experiments;
S207, a collision is detected in the gripper's drop point area. Abandon the target, as the working requirements dictate, or execute the set optimization rule to optimize the grab and repeat S205-S207 until a feasible collision-free grab is obtained; if a collision remains after all optimization positions have been iterated, abandon the target and execute a preset exit routine;
S208, a feasible collision-free grab is obtained. Transform it to the robot base coordinate system Coor_base with the transformation matrix and execute the grab.
The preparation work and overall implementation flow of the invention have now been described; the following details the implementation of the obstacle avoidance detection optimization method through the drawings and examples.
Example one
This example uses a two-finger parallel jaw gripper for the grabbing operation.
First, the multi-mode grabbing planning design and obstacle avoidance optimization method of the present invention are described with reference to FIGS. 5 to 7:
In each figure, the left part is the grabbing work scene in the camera coordinate system and the right part is a top view in the target object's workpiece coordinate system; 4 is the grabbing platform, 51 is an example cuboid target object, and 61 and 62 are the drop point areas of the two gripper fingers.
The left sides of FIGS. 5-7 show three typical postures the target object can take when randomly placed, namely flat, on its side, and upright, referred to as the 3 typical modes of the object, together with the three pose angles of the object in the camera coordinate system. The grabbing modes of the object in the three modes are mutually exclusive and must be designed and planned independently; other possible postures, such as leaning, are assigned to one of the typical modes by angle range and grabbed with that mode's plan.
The right sides of FIGS. 5-7 show examples of the grabbing mode designed for each corresponding mode in the target object's workpiece coordinate system. After the grabbing plan is completed, the grabbing drop point areas denoted 61 and 62 are computed from the shape and size of the clamping jaws and the grabbing parameters and stored in the grabbing plan database in advance, to be indexed and read directly during operation.
The obstacle avoidance detection and default-rule optimization method of the present invention is described below with reference to FIGS. 8 to 12:
In the figures, 51 is an example cuboid target object, 52 is an example cylindrical target object, 6 is a grabbing drop point area, and 7 denotes obstacles distributed on both sides of the target object.
As shown in FIG. 8, the drop point area and an obstacle overlap; the collision can be recognized by cropping the drop point area and detecting the overlap. The specific algorithm and processing strategy depend on the camera and detection method used, including but not limited to the following (a point cloud sketch follows this list):
(1) If an ordinary 2D camera provides an RGB or grayscale image, a template matching or target detection method can extract the target from the scene; the image is binarized, edges are extracted from the original image with the Canny operator or the like, then the number of non-zero pixels or the pixel values in the gripper drop point area are counted, and the area is considered to contain a collision once a set threshold is reached.
(2) If a depth camera provides a target depth image, a surface matching method can extract the target from the scene; the image is binarized, edges are extracted with the Canny operator or the like, then the number of non-zero pixels or the pixel values in the gripper drop point area are counted, and the area is considered to contain a collision once a set threshold is reached.
(3) If a structured light, binocular, or other stereo camera provides point cloud information, a point cloud matching algorithm such as ICP (Iterative Closest Point) can extract the target from the scene; the gripper drop point area is then cropped out, the points inside it are counted, and the area is considered to contain a collision once a set threshold is reached.
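Variant (3) amounts to a crop-and-count over the cloud; a minimal numpy sketch, assuming the scene cloud is already expressed in the workpiece frame with the matched target points removed, and count_thresh is an assumption to be set by experiment.

```python
import numpy as np

def drop_area_collides_3d(points_obj, box_min, box_max, count_thresh=30):
    """Count scene points inside the gripper drop point box (variant (3)).

    points_obj: (N, 3) array of scene points in the workpiece frame Coor_obj.
    box_min, box_max: opposite corners of the drop point box, shape (3,).
    """
    inside = np.all((points_obj >= box_min) & (points_obj <= box_max), axis=1)
    return int(inside.sum()) > count_thresh
```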
For objects with simple shapes like 51 and 52, when a collision exists in the grabbing drop point area, rules such as small translations along an axis or small rotations about the center can be set in the workpiece coordinate system for iterative optimization; a collision-free area can usually be found quickly and the grab optimized. As shown in FIG. 8, when the original grabbing plan for 51 collides, simple operations such as iteratively moving the grabbing position upward (FIG. 9) or downward (FIG. 10) quickly yield an optimized grab. As shown in FIG. 11, the original grabbing plan for 52 collides, and a collision-free region can be found by iterative rotation (FIG. 12). The default translation and rotation optimization rules are only examples; in use, default optimization rules can be designed specifically for the grabbed target object.
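The default translate-and-retest rule can be sketched as a small search loop; the offset schedule and pose representation are assumptions, and `collides` stands for whichever drop point check from above is in use.

```python
def optimize_by_translation(grasp_x, collides, steps=(0, 5, -5, 10, -10, 15, -15)):
    """Default rule for objects like 51: slide the grasp along one axis of
    the workpiece frame in small steps until the drop point check passes.

    grasp_x:  nominal grasp offset along the axis (mm).
    collides: callable(x) -> bool, re-running the drop point area detection.
    Returns the first collision-free offset, or None if every step collides
    (the caller then abandons the target, as in step S207).
    """
    for dx in steps:
        if not collides(grasp_x + dx):
            return grasp_x + dx
    return None
```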
The directed obstacle avoidance optimization strategy based on drop point sub-region analysis is described in detail below with reference to FIG. 13:
In FIG. 13, the two drop point areas are further divided to obtain four drop point sub-areas 61, 62, 63, and 64; 51 is an example cuboid target object and 7 denotes obstacles distributed on both sides of it. By further analyzing the grabbing conditions of these sub-areas and establishing a corresponding directed optimization strategy for each condition, an index table such as Table 1 is built. Analyzing the situation shown, collisions exist in areas 61 and 64, so the directed optimization strategy is to rotate the grab counter-clockwise.
Table 1 is the drop point sub-region directed optimization strategy table; it is only schematic and does not list all cases.
TABLE 1
[Table 1 is reproduced as an image in the original publication.]
The optimized collision-free grab for this target, rotated counter-clockwise, is shown in FIG. 14.
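The Table 1 lookup can be sketched as a mapping from the collision pattern over sub-areas 61-64 to a directed adjustment; only the {61, 64} -> counter-clockwise row is stated in the text, and the remaining rows are illustrative assumptions.

```python
def directed_adjustment(hits):
    """Map the set of colliding sub-areas to a directed optimization.

    hits: set of colliding sub-area labels, e.g. {61, 64}.
    Only the {61, 64} row is from the text; the others are assumed examples.
    """
    table = {
        frozenset({61, 64}): ("rotate", "counter-clockwise"),
        frozenset({62, 63}): ("rotate", "clockwise"),
        frozenset({61, 62}): ("translate", "-y"),
        frozenset({63, 64}): ("translate", "+y"),
        frozenset():         ("none", None),
    }
    # Fall back to the default rule (or abandon) for unlisted patterns.
    return table.get(frozenset(hits), ("default-rule", None))
```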

Claims (6)

1. A multi-mode grabbing obstacle avoidance detection optimization method based on drop point area detection, characterized in that a grabbing implementation applying the method mainly comprises the following steps:
S1. combining the target to be grabbed and the geometry of the gripper used, design grabs for the target under its typical modes to obtain the gripper's drop point areas in the different grabbing modes;
S2. establish a database of the targets to be grabbed and their multi-mode grabbing plans;
S3. calibrate the positional relation between the camera and the robot end gripper to obtain the hand-eye transformation matrix;
S4. acquire target scene information with the camera and obtain the type, posture, and position of the target part in the camera coordinate system with a detection algorithm;
S5. index the grabbing planning database with the identified target part information to obtain the grabbing plan and drop point area for that mode;
S6. compute the gripper pose in the camera coordinate system and the drop point area in the workpiece coordinate system for grabbing the target object;
S7. check the obtained drop point area: if there is no collision, execute the grab; if there is a collision, replace the target or detect again after optimization;
S8. obtain the actual gripper pose from the hand-eye transformation matrix and execute the grab.
2. The multi-mode grabbing obstacle avoidance detection optimization method based on drop point area detection of claim 1, characterized in that the grabbing drop point area detection obstacle avoidance method comprises the following steps:
S1. combining the target to be grabbed and the geometry of the gripper used, design grabs for the target under its typical modes to obtain the gripper's drop point areas in the different grabbing modes;
S2. establish a database of the targets to be grabbed and their multi-mode grabbing plans;
S3. perform a collision check on the drop point area of a planned grab;
S4. decide the next action according to the result of the collision check.
3. The multi-mode grabbing obstacle avoidance detection optimization method based on drop point area detection of claim 1, characterized in that the grabbing plan database constructed in step S2 at least includes: (1) the type, 3-dimensional model, and other information of each target object to be grabbed; (2) the gripper's grabbing mode for each target object in each placement pose, i.e., the gripper pose in the target object's workpiece coordinate system; (3) the optimization rules for each grabbing mode when its drop point area contains a collision; (4) the end shape and size parameters of the gripper used; (5) the key position parameters of the gripper-end drop point area in the target object's local coordinate system, such as the corner coordinates of a rectangular area or the center coordinates and radius of a circular area;
the different grabbing modes are defined as follows: for the same grabbing target, designing a different relative position for the robot according to the posture the target is in, so as to achieve the optimal grab of the target in its current posture, constitutes one mode.
4. The multi-mode grabbing obstacle avoidance detection optimization method based on drop point area detection of claim 1, characterized in that the grabbing drop point area collision detection method comprises the following steps:
S1. acquire a scene image with the camera;
S2. crop out the drop point area image according to the key position parameters of the gripper's drop point area;
S3. set a collision threshold for an image feature index;
S4. compare the index computed within the area against the set threshold to obtain the collision detection result;
the collision check method is determined by the camera and detection approach used, including but not limited to the following:
(1) if an ordinary 2D camera provides an RGB or grayscale image, a template matching or target detection method can extract the target from the scene; the image is binarized, edges are extracted from the original image with the Canny operator or the like, then the number of non-zero pixels or the pixel values in the gripper drop point area are counted, and the area is considered to contain a collision once a set threshold is reached;
(2) if a depth camera provides a target depth image, a surface matching method can extract the target from the scene; the image is binarized, edges are extracted from the original image with the Canny operator or the like, then the number of non-zero pixels or the pixel values in the gripper drop point area are counted, and the area is considered to contain a collision once a set threshold is reached;
(3) if a structured light, binocular, or other stereo camera provides point cloud information, a point cloud matching algorithm such as ICP (Iterative Closest Point) can extract the target from the scene; the gripper drop point area is then cropped out, the points inside it are counted, and the area is considered to contain a collision once a set threshold is reached.
5. The multi-mode grabbing obstacle avoidance detection optimization method based on drop point area detection of claim 1, characterized in that different obstacle avoidance optimization methods can be designed for different targets or grabbing plans:
optimization method 1: default optimization rules can be designed for the grabbing positions of different targets; after a collision is detected in the drop point area, the existing avoidance rule is executed to shift the grabbing position and determine the optimized grabbing area;
optimization method 2: the drop point areas are further divided into sub-areas and analyzed to obtain a directed optimization strategy:
(1) each drop point area of the gripper fingers is divided into n sub-areas, where the number n and the division form depend on the target and the gripper geometry;
(2) the obstacle avoidance detection of claim 3 is performed on each drop point sub-area R_i (1 ≤ i ≤ n), directed optimization parameters are then obtained from the sub-area detection results, and directed obstacle avoidance optimization is achieved through transformations such as translating, rotating, or scaling the gripper size.
6. The multi-mode grabbing obstacle avoidance detection optimization method based on drop point area detection of claim 1, characterized in that the parts of a grabbing system using the method operate in different coordinate systems, mainly comprising the target workpiece coordinate system Coor_obj, the camera coordinate system Coor_cam, the robot base coordinate system Coor_base, and the robot end tool coordinate system Coor_tool, wherein the workpiece coordinate system Coor_obj is determined by the user from the geometry of the target part, and grabbing planning and collision detection optimization are mainly performed in this coordinate system; the camera coordinate system Coor_cam is fixed by the camera at the factory, and image processing, segmentation, and target posture determination are mainly performed in this coordinate system; the robot base coordinate system Coor_base is fixed by the robot at the factory, and robot motion and grab execution are mainly performed in it; coordinate transformations between the coordinate systems are realized through transformation matrices, wherein the hand-eye transformation matrix between the camera coordinate system and the robot base coordinate system is obtained through calibration, and the transformation matrix between the workpiece coordinate system and the camera coordinate system is obtained by inverse solution of the workpiece's pose coordinates in the camera coordinate system.
CN202110766116.6A 2021-07-07 2021-07-07 Multimode grabbing obstacle avoidance detection optimization method based on drop point area detection Expired - Fee Related CN113538459B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110766116.6A CN113538459B (en) 2021-07-07 2021-07-07 Multimode grabbing obstacle avoidance detection optimization method based on drop point area detection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110766116.6A CN113538459B (en) 2021-07-07 2021-07-07 Multimode grabbing obstacle avoidance detection optimization method based on drop point area detection

Publications (2)

Publication Number Publication Date
CN113538459A 2021-10-22
CN113538459B CN113538459B (en) 2023-08-11

Family

ID=78097991

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110766116.6A Expired - Fee Related CN113538459B (en) 2021-07-07 2021-07-07 Multimode grabbing obstacle avoidance detection optimization method based on drop point area detection

Country Status (1)

Country Link
CN (1) CN113538459B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114299116A (en) * 2021-12-29 2022-04-08 伯朗特机器人股份有限公司 Dynamic object grabbing method, device and storage medium
CN114494949A (en) * 2022-01-10 2022-05-13 深圳市菲普莱体育发展有限公司 Floor point area detection method and device for space moving object
CN115056215A (en) * 2022-05-20 2022-09-16 梅卡曼德(北京)机器人科技有限公司 Collision detection method, control method, capture system and computer storage medium
CN115837363A (en) * 2023-02-20 2023-03-24 成都河狸智能科技有限责任公司 Shared bicycle sorting system and method
WO2024183490A1 (en) * 2023-03-06 2024-09-12 赛那德科技有限公司 Method, system and device for generating image of disorderly stacked packages


Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018161305A1 (en) * 2017-03-09 2018-09-13 深圳蓝胖子机器人有限公司 Grasp quality detection method, and method and system employing same
WO2018193130A1 (en) * 2017-04-21 2018-10-25 Roboception Gmbh Method for creating a database of gripper poses, method for controlling a robot, computer-readable storage medium and materials handling system
CN108196453A (en) * 2018-01-24 2018-06-22 中南大学 A kind of manipulator motion planning Swarm Intelligent Computation method
EP3623115A1 (en) * 2018-09-06 2020-03-18 Kabushiki Kaisha Toshiba Hand control device
CN109816730A (en) * 2018-12-20 2019-05-28 先临三维科技股份有限公司 Workpiece grabbing method, apparatus, computer equipment and storage medium
US20200391385A1 (en) * 2019-06-17 2020-12-17 Kabushiki Kaisha Toshiba Object handling control device, object handling device, object handling method, and computer program product
CN111558940A (en) * 2020-05-27 2020-08-21 佛山隆深机器人有限公司 Robot material frame grabbing planning and collision detection method
CN112060087A (en) * 2020-08-28 2020-12-11 佛山隆深机器人有限公司 Point cloud collision detection method for robot to grab scene
CN112476434A (en) * 2020-11-24 2021-03-12 新拓三维技术(深圳)有限公司 Visual 3D pick-and-place method and system based on cooperative robot
CN112847374A (en) * 2021-01-20 2021-05-28 湖北师范大学 Parabolic-object receiving robot system
CN112847375A (en) * 2021-01-22 2021-05-28 熵智科技(深圳)有限公司 Workpiece grabbing method and device, computer equipment and storage medium

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
JIAHAO ZHANG et al.: "Grasping Novel Objects with Real-Time Obstacle Avoidance", 《INTERNATIONAL CONFERENCE ON SOCIAL ROBOTICS》, 27 November 2018 (2018-11-27), pages 160-169, XP047495696, DOI: 10.1007/978-3-030-05204-1_16 *
KE REN et al.: "Target Grasping and Obstacle Avoidance Motion Planning of Humanoid Robot", 《2018 IEEE INTERNATIONAL CONFERENCE ON INTELLIGENCE AND SAFETY FOR ROBOTICS (ISR)》, 15 November 2018 (2018-11-15), pages 250-255 *
刁琛桃: "家庭服务机器人物体识别与抓取方法研究", 《中国优秀硕士学位论文全文数据库 信息科技辑》, no. 03, 15 March 2020 (2020-03-15), pages 140-500 *
王正万: "基于模糊控制系统的采摘机器人避障系统研究分析", 《农机化研究》, no. 1, 31 January 2019 (2019-01-31), pages 230-233 *


Also Published As

Publication number Publication date
CN113538459B (en) 2023-08-11

Similar Documents

Publication Publication Date Title
CN113538459A (en) Multi-mode grabbing obstacle avoidance detection optimization method based on drop point area detection
CN105729468B (en) A kind of robotic workstation based on the enhancing of more depth cameras
CN104842361B (en) Robotic system with 3d box location functionality
US9259844B2 (en) Vision-guided electromagnetic robotic system
WO2017015898A1 (en) Control system for robotic unstacking equipment and method for controlling robotic unstacking
CN113420746B (en) Robot visual sorting method and device, electronic equipment and storage medium
JP2004050390A (en) Work taking out device
CN113284178B (en) Object stacking method, device, computing equipment and computer storage medium
CN113386122B (en) Method and device for optimizing measurement parameters and computer-readable storage medium
US12172303B2 (en) Robot teaching by demonstration with visual servoing
CN112828892B (en) Workpiece grabbing method and device, computer equipment and storage medium
CN116529760A (en) Grabbing control method, grabbing control device, electronic equipment and storage medium
CN114078162B (en) Truss sorting method and system for workpiece after steel plate cutting
CN117325170A (en) Method for grasping hard disk rack by robotic arm guided by depth vision
KR102267514B1 (en) Method for picking and place object
CN115797332B (en) Object grabbing method and device based on instance segmentation
Xu et al. A vision-guided robot manipulator for surgical instrument singulation in a cluttered environment
Lin et al. Vision based object grasping of industrial manipulator
CN112338922B (en) Five-axis mechanical arm grabbing and placing method and related device
CN116197885B (en) Image data filtering method, device, equipment and medium based on press-fit detection
CN116175542B (en) Method, device, electronic equipment and storage medium for determining clamp grabbing sequence
CN117036470A (en) Object identification and pose estimation method of grabbing robot
US12097627B2 (en) Control apparatus for robotic system, control method for robotic system, computer-readable storage medium storing a computer control program, and robotic system
CN114782535A (en) Workpiece pose identification method and device, computer equipment and storage medium
CN116188559A (en) Image data processing method, device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20230811