CN113538459A - Multi-mode grabbing obstacle avoidance detection optimization method based on drop point area detection

Multi-mode grabbing obstacle avoidance detection optimization method based on drop point area detection

Info

Publication number
CN113538459A
CN113538459A
Authority
CN
China
Prior art keywords
grabbing
target
coordinate system
detection
mode
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110766116.6A
Other languages
Chinese (zh)
Other versions
CN113538459B (en)
Inventor
陈锐
刘道会
朱信宇
王慧港
李洋
蒲华燕
罗均
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University
Original Assignee
Chongqing University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University filed Critical Chongqing University
Priority to CN202110766116.6A priority Critical patent/CN113538459B/en
Publication of CN113538459A publication Critical patent/CN113538459A/en
Application granted granted Critical
Publication of CN113538459B publication Critical patent/CN113538459B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • G06T2207/30164Workpiece; Machine component
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Manipulator (AREA)

Abstract

The invention discloses a multi-mode grabbing obstacle avoidance detection optimization method based on drop point area detection, which mainly comprises the following steps: designing grabs for each target in its typical modes and obtaining the drop point areas of the gripper in the different grabbing modes; establishing a multi-mode grabbing planning database; acquiring target scene information and detecting the type, posture and position of the target part in the camera coordinate system; indexing the grabbing planning database to obtain the grabbing plan for the current mode; obtaining the gripper pose in the camera coordinate system and the drop point area in the workpiece coordinate system; checking the drop point area, executing the grab if there is no collision, and detecting again after optimization if there is a collision; and transforming to obtain the gripper pose in the robot coordinate system and executing the grab. The method can accurately perform collision checks on planned grabs, can be used for grabbing planning with multiple objects and multiple modes, is simple to apply, is suitable for various grabbing scenes, and improves the safety of automatic grabbing.

Description

Multi-mode grabbing obstacle avoidance detection optimization method based on drop point area detection
Technical Field
The invention relates to the technical field of robots, in particular to an obstacle avoidance detection and optimization method for grabbing planning in an autonomous grabbing and sorting process of a robot.
Background
With the development of automatic control and machine vision detection technologies, robots have fully replaced humans in many industrial scenes. They hold advantages humans cannot match in work continuity, stability and precision, have helped many production lines achieve fully automatic transformation, and have greatly improved production efficiency. Meanwhile, with the development of modern control theory and artificial intelligence technology, the intelligence level of robots keeps rising, and the tasks they can undertake are increasingly complex and varied. Grabbing, as one of the important basic actions for human interaction with the real world, is also a capability a robot must possess to accomplish numerous tasks. Most existing robot grabbing technologies are deployed in structured scenes: the robot only needs to complete a fixed grabbing action triggered by certain sensor signals, achieving fixed-point grabbing, carrying and placing. Although stable, such systems can only complete simple tasks. The development of robot technology, especially household service robots, is bound to move toward scenes that are more complex and closer to the actual environment. For the grasping requirements of such unstructured scenes, the current mainstream target detection methods, such as YOLO based on deep learning or template matching algorithms based on traditional image processing, cannot achieve 100% detection precision and carry a risk of false detection. This may cause unstable grasping, dropped objects, wrong grasping positions or failure to grasp at all; the most serious consequence is interference at the grasping position leading to collision, which can seriously damage the target object, the gripper, the mechanical arm and even the whole robot system. Most existing robot obstacle avoidance algorithms need to acquire and map three-dimensional scene information and then avoid obstacles through path planning. Their deployment and application are complex, additional devices must be installed, retrofitting an existing production line is difficult, and a path planning obstacle avoidance strategy can still collide when detection fails. In view of the above problems, how to design a simple and reliable grabbing obstacle avoidance method that enables rapid deployment of new systems and improves the safety and reliability of robot grabbing through convenient modification of existing production lines is an important research topic.
Disclosure of Invention
The invention discloses a multi-mode grabbing obstacle avoidance detection optimization method based on drop point area detection, which solves the obstacle avoidance detection problem in robot grabbing and, in a simple and convenient way, improves the safety and reliability of existing robot grabbing scenes and reduces the risk of accidents.
In order to achieve the purpose, the invention adopts the following technical scheme.
The invention provides a multi-mode grabbing obstacle avoidance detection optimization method based on drop point area detection, oriented to a robot grabbing system composed of an upper computer, a robot, a camera and a number of target objects. The system works in an unstructured scene: the target objects are randomly distributed in the working range of the robot and may be stacked and occluded. The main function of the system is to collect scene information through the camera, which the upper computer processes and transmits to the robot to complete the grabbing of the target objects. The upper computer is a computer or another type of processor with the required image processing capability. The robot comprises a robot body, a controller and an end gripper: the robot body has multi-degree-of-freedom motion capability, such as an articulated or Cartesian mechanical arm, and can accurately execute the motion instructions sent by the upper computer; the end gripper is selected according to the target object, is capable of grabbing it, and has a known 3-dimensional model so that its drop point region can be segmented and extracted. The cameras include but are not limited to 2D plane cameras, depth cameras and 3D cameras, are arranged at reasonable positions, and can provide the upper computer with scene information covering the target objects and the robot working space. The target object is an object from the actual application scene, and its 3-dimensional model is known.
The construction and operation of the robot gripping system adopting the obstacle avoidance detection optimization method provided by the invention comprise the following steps:
S1, determining the working scene and the type and geometric shape of the target parts to be grabbed;
S2, completing hardware selection (mainly the robot, camera, gripper and working platform) and scene construction;
S3, comprehensively considering the targets to be grabbed and the geometry of the gripper used, designing a grabbing plan for each target in each typical mode and obtaining the gripper drop point areas of the different grabbing modes;
S4, establishing the targets to be grabbed and the multi-mode grabbing planning database;
S5, calibrating the positional relation between the camera and the gripper to obtain the hand-eye transformation matrix;
S6, acquiring target scene information with the camera and obtaining the type, posture and position of the target part in the camera coordinate system with a detection algorithm;
S7, indexing the grabbing planning database with the identified target information to obtain the gripper pose in the camera coordinate system and the drop point area in the workpiece coordinate system for grabbing the target object;
S8, carrying out obstacle avoidance detection on the obtained drop point area;
S9, if there is no collision, executing the grab;
S10, if there is a collision, replacing the target or repeating steps S8-S10 after optimization;
S11, obtaining the gripper pose in the robot coordinate system through the hand-eye transformation matrix and executing the grab; a sketch of this runtime loop follows.
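To make steps S6-S11 concrete, the following is a minimal Python sketch of the runtime loop. Every helper name here (detect_targets, region_has_collision, the plan_db interface, the pose fields) is a hypothetical placeholder for whatever camera and detection stack is used, not an interface defined by the patent.

```python
# Minimal sketch of the runtime loop (steps S6-S11). All helper names are
# hypothetical placeholders, not interfaces specified by the invention.
def grasp_cycle(camera, robot, plan_db, T_base_cam, max_attempts=10):
    scene = camera.acquire()                                  # S6: capture the scene
    for target in detect_targets(scene):                      # type + pose in Coor_cam
        plan = plan_db.lookup(target.part_type, target.mode)  # S7: index the database
        region = plan.drop_region_in_camera(target.pose)
        for _ in range(max_attempts):                         # S8-S10: check / optimize
            if not region_has_collision(scene, region):
                pose_base = T_base_cam @ plan.gripper_pose_cam(target.pose)  # S11
                robot.execute_grasp(pose_base)
                break
            plan, region = plan.optimize(target.pose)         # stored avoidance rule
        # if no attempt cleared, the target is skipped (or a preset exit runs)
```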
Step S3, designing a grabbing plan for each target in each typical mode by comprehensively considering the target to be grabbed and the geometry of the gripper used, and obtaining the gripper drop point areas of the different grabbing modes, specifically includes the following steps:
S3-1, establishing a workpiece coordinate system Coor_obj for each target part and determining the origin and axis directions of the coordinate system;
S3-2, analyzing the placing state of each target part and determining its possible typical postures, such as flat, side-standing, inclined or inverted;
S3-3, determining the angle range of each typical posture in the camera coordinate system Coor_cam and setting angle thresholds for the subsequent identification and classification. For example, for a square thin-plate part the possible postures are front-up or inverted; when a 3D camera is used, the pose of the part in the camera coordinate system Coor_cam can be obtained through a point cloud matching algorithm as p_cam = (x_cam, y_cam, z_cam, rx_cam, ry_cam, rz_cam). If -90° ≤ rx_cam ≤ 90°, the front surface is judged to be approximately upward and the part is grabbed according to the flat-placement plan; if 90° ≤ rx_cam ≤ 270°, the reverse side is judged to be approximately upward and the part is grabbed according to the inverted plan. More complex rules can be set for objects with complex shapes (see the classification sketch after this list);
S3-4, selecting a graspable area on the target object for each typical posture;
S3-5, designing the gripper center and the finger drop point areas according to the selected grippable area and the gripper shape, and calculating the coordinate ranges of the drop point areas in the workpiece coordinate system Coor_obj.
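For illustration, here is a minimal Python sketch of the S3-3 classification rule and the S3-5 drop region calculation for a two-finger parallel gripper. The rectangular region layout and all parameter names are illustrative assumptions, not values prescribed by the patent.

```python
def classify_thin_plate_mode(rx_cam_deg):
    """S3-3 rule for the square thin-plate example: the rotation rx_cam about
    the camera x-axis decides front-up (-90..90 deg) vs. inverted (90..270)."""
    rx = rx_cam_deg % 360.0            # maps -90..0 onto 270..360
    return "front_up" if (rx <= 90.0 or rx >= 270.0) else "inverted"

def finger_drop_regions(grasp_center, grasp_width, finger_w, finger_d):
    """S3-5: (xmin, ymin, xmax, ymax) rectangles of the two finger drop
    regions in Coor_obj for a parallel-jaw gripper; geometry parameters are
    illustrative assumptions."""
    cx, cy = grasp_center
    half = grasp_width / 2.0
    left = (cx - half - finger_d, cy - finger_w / 2.0,
            cx - half, cy + finger_w / 2.0)
    right = (cx + half, cy - finger_w / 2.0,
             cx + half + finger_d, cy + finger_w / 2.0)
    return left, right
```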
The step S4 establishes the targets to be grabbed and the multi-mode grabbing plan database. The constructed grabbing plan database at least includes:
(1) the type, 3-dimensional model and related information of each target object to be grabbed;
(2) the gripper grabbing mode of each target object in each placing pose, i.e. the gripper pose in the target object workpiece coordinate system;
(3) the optimization rules to apply when a collision exists in the drop point area of each grabbing mode;
(4) the end shape and size parameters of the gripper used;
(5) the key position parameters of the gripper end drop point areas in the local coordinate system of the target object, such as the corner coordinates of a rectangular area or the center coordinates and radius of a circular area. A sketch of one possible entry layout follows this list.
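One possible in-memory layout of such a database entry, sketched with Python dataclasses; all field names are illustrative assumptions rather than a structure mandated by the patent.

```python
from dataclasses import dataclass, field

@dataclass
class GraspMode:
    mode_name: str            # e.g. "flat", "side", "upright"      -- item (2)
    angle_range: tuple        # Coor_cam angle window selecting this mode
    gripper_pose_obj: tuple   # gripper pose in Coor_obj            -- item (2)
    drop_regions_obj: list    # key region coordinates in Coor_obj  -- item (5)
    optimization_rules: list  # ordered collision fallback rules    -- item (3)

@dataclass
class TargetEntry:
    part_type: str            # item (1)
    model_path: str           # path to the 3D model                -- item (1)
    gripper_params: dict      # end shape/size of the gripper       -- item (4)
    modes: dict = field(default_factory=dict)   # mode_name -> GraspMode
```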
in the detection and obstacle avoidance process described in steps S6 to S8, the specific algorithm and processing strategy used should be determined by the camera and detection method used, including but not limited to the following:
if an ordinary 2D camera is used for obtaining an RGB image or a gray image, a template matching or target detection method can be used for extracting a target from a scene, the image is binarized, edge extraction is carried out on an original image by using a canny operator and the like, then non-zero pixel point number or pixel value statistics is carried out on a clamp drop point area, and when a certain set threshold value is reached, the area is considered to have collision.
If a depth camera is used for obtaining a target depth image, a surface matching method can be used for extracting a target from a scene, the image is binarized, edge extraction is carried out on an original image by using a canny operator and the like, then the number of non-zero pixel points or the pixel value statistics is carried out on a clamp drop point area, and when a certain set threshold value is reached, the area is considered to have collision;
if structured light, binocular and other stereo cameras are used for obtaining point cloud information, an ICP (inductively coupled plasma) or point cloud surface matching algorithm can be used for extracting a target from a scene, then a point drop area of a holder is cut out, the number of point clouds in the area is counted, and when a certain set threshold value is reached, the area is considered to have collision.
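A minimal OpenCV sketch of strategies (1) and (2), reduced to the edge-statistics core; the target extraction step described above is assumed to have been done already, and the concrete thresholds are placeholders to be tuned experimentally, as the text notes.

```python
import cv2

def region_has_collision_2d(gray, region, canny_lo=50, canny_hi=150,
                            pixel_thresh=30):
    """Canny edge extraction over the image, then a non-zero pixel count
    inside the gripper drop region; a collision is flagged once the count
    reaches the experimentally tuned threshold (placeholder values here)."""
    edges = cv2.Canny(gray, canny_lo, canny_hi)
    x0, y0, x1, y1 = region            # drop region in image coordinates
    return cv2.countNonZero(edges[y0:y1, x0:x1]) >= pixel_thresh
```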
In the obstacle avoidance detection process, the parts of the grabbing system work in different coordinate systems, mainly the target workpiece coordinate system Coor_obj, the camera coordinate system Coor_cam, the robot base coordinate system Coor_base and the robot end tool coordinate system Coor_tool. The workpiece coordinate system Coor_obj is defined by the user according to the geometry of the target part; grabbing planning and collision detection optimization are mainly performed in it. The camera coordinate system Coor_cam is determined when the camera leaves the factory; image processing, segmentation and target posture judgment are mainly performed in it. The robot base coordinate system Coor_base is determined when the robot leaves the factory; robot motion and grab execution are mainly performed in it. Coordinate transformation between different coordinate systems is realized by transformation matrices: the hand-eye transformation matrix between the camera coordinate system and the robot base coordinate system is obtained by calibration, and the transformation matrix between the workpiece coordinate system and the camera coordinate system is obtained by solving from the pose of the workpiece origin in the camera coordinate system.
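In matrix form, this chaining of frames can be sketched as follows, assuming 4x4 homogeneous transforms; the function name and argument names are illustrative.

```python
import numpy as np

# The planned gripper pose lives in Coor_obj; execution needs Coor_base:
#   T_base_tool = T_base_cam @ T_cam_obj @ T_obj_tool
def gripper_pose_in_base(T_base_cam: np.ndarray, T_cam_obj: np.ndarray,
                         T_obj_tool: np.ndarray) -> np.ndarray:
    """Chain 4x4 homogeneous transforms: hand-eye calibration gives
    T_base_cam, the detected workpiece origin pose gives T_cam_obj, and the
    grabbing plan stored in the database gives T_obj_tool."""
    return T_base_cam @ T_cam_obj @ T_obj_tool
```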
Compared with the prior art, the invention has the beneficial effects that:
1. The multi-mode grabbing obstacle avoidance detection optimization method based on drop point area detection can effectively detect collisions in an autonomous robot grabbing system and ensures the safety of systems working in complex unstructured scenes;
2. The method has a simple structure, requires no complex algorithms and no additional hardware, and can be quickly deployed into an existing grabbing system after analyzing the target parts and making a small number of algorithm-level changes;
3. The method is highly adaptable: it suits a variety of 2D and 3D cameras and many different types of grippers, and only requires designing the grabs for the target objects according to the method, calculating the drop point areas and setting reasonable detection thresholds.
Drawings
FIG. 1 is a schematic diagram of a typical application scenario of the present invention
FIG. 2 is a schematic diagram of the location of coordinate systems involved in the workflow of the present invention;
FIG. 3 is a preparation workflow diagram of the present invention;
FIG. 4 is a flow chart of an exemplary operation of a robotic grasping system applying the present invention;
FIG. 5 is a schematic view of a target object grabbing mode plan in a lay-flat position;
FIG. 6 is a schematic view of a target object grabbing mode plan in a side-up position;
FIG. 7 is a schematic view of a target object grabbing mode plan in an upright position;
FIG. 8 is a schematic diagram of an original cuboid-object grabbing plan with collision in the drop point area;
FIG. 9 is a schematic diagram of the cuboid-object grabbing plan optimized by the default upward translation rule;
FIG. 10 is a schematic diagram of the cuboid-object grabbing plan optimized by the default downward translation rule;
FIG. 11 is a schematic diagram of an original grabbing plan of a cylindrical object with collision in a landing point area;
FIG. 12 is a schematic diagram of a default rotation optimized cylindrical object grabbing plan;
FIG. 13 is a schematic diagram of the directed obstacle avoidance optimization strategy based on the analysis of the drop point sub-regions in the present invention;
fig. 14 is a diagram illustrating the result of the directional obstacle avoidance optimization.
Detailed Description
Specific embodiments of the present invention will be described in further detail below with reference to examples and drawings, but the present invention is not limited thereto.
The implementation of the invention first requires determining and constructing the grabbing scene. A typical grabbing system scene is shown in fig. 1: it comprises a camera 1, a grabbing gripper 2, a robot 3, a grabbing platform 4, a target part 5 to be grabbed and a camera support 6; the upper computer, sensors and other related equipment are not shown in the figure. The type of camera 1 is determined by the actual imaging requirements and includes 2D plane cameras, depth cameras and 3D cameras, such as structured light or binocular cameras; the camera is arranged at a reasonable position and can provide the upper computer with scene information covering the target object and the robot working space. The grabbing gripper is selected according to the target object, is capable of grabbing it, and has a known 3-dimensional model so that its drop point area can be conveniently segmented and extracted. The robot 3 should have multi-degree-of-freedom motion capability, such as an articulated or Cartesian mechanical arm, and can accurately execute the action instructions sent by the upper computer. The grabbing platform 4 can be fixed or movable, such as a conveyor belt, and lies within the optimal imaging range of the camera. The target part 5 to be grabbed is an object from the actual application scene with a known 3-dimensional model; during actual grabbing several scattered target parts of different types can exist simultaneously, and only one is drawn in the figure for illustration.
The work of the parts of the invention involves several different coordinate systems, whose positions and definitions are shown in fig. 2: the camera coordinate system C1 (Coor_cam), the robot end tool coordinate system C2 (Coor_tool), the robot base coordinate system C3 (Coor_base) and the target object workpiece coordinate system C4 (Coor_obj). The workpiece coordinate system Coor_obj is defined by the user according to the geometry of the target part; grabbing planning and collision detection optimization are mainly performed in it. The camera coordinate system Coor_cam is determined when the camera leaves the factory; image processing, segmentation and target posture judgment are mainly performed in it. The robot base coordinate system Coor_base is determined when the robot leaves the factory; robot motion and grab execution are mainly performed in it. Coordinate transformation between different coordinate systems is realized by transformation matrices: the hand-eye transformation matrix between the camera coordinate system and the robot base coordinate system is obtained by calibration, and the transformation matrix between the workpiece coordinate system and the camera coordinate system is obtained by solving from the pose of the workpiece origin in the camera coordinate system.
The implementation of the present invention also requires to complete various preparation tasks, as shown in fig. 3, which mainly includes:
S101, determining the targets. Determining the type, geometric shape and other information of the objects to be grabbed and obtaining their 3-dimensional models;
S102, analyzing the targets. Analyzing the geometry and outline of each target to be grabbed and the different typical postures it may take in the scene, and determining the grippable area in each posture;
S103, multi-mode grabbing planning. For each target analysis result, determining the grabbing mode of each mode of the target in combination with the geometric parameters of the gripper used; each grabbing mode is recorded as the gripper pose coordinates in the workpiece coordinate system Coor_obj;
S104, calculating the drop point areas. After a grabbing mode is determined, determining the finger drop point areas of the gripper in that mode from the grabbing position and the gripper shape;
S105, designing the optimization rules. Designing a grab optimization method, applied when a collision is present, according to the target geometry and the chosen grabbing mode;
S107, establishing the grabbing plan database. Summarizing and integrating the data per grabbing target and establishing a grabbing planning index database;
S108, robot-camera hand-eye calibration. Completing the hand-eye calibration between the robot and the camera with a calibration plate to obtain the transformation matrix between the two coordinate systems (a calibration sketch follows this list);
an optional step S106 can be inserted according to the specific working scene to add further job-related information to the database.
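A sketch of step S108 using OpenCV's built-in solver. The pose lists must be collected by moving the robot through several configurations while imaging the calibration plate; this is shown for the eye-in-hand case, while a fixed camera as in fig. 1 passes base-to-gripper transforms instead. Apart from cv2.calibrateHandEye itself, the function and variable names are assumptions.

```python
import cv2

def solve_hand_eye(R_gripper2base, t_gripper2base, R_target2cam, t_target2cam):
    """Wraps cv2.calibrateHandEye: each list holds one rotation/translation
    per calibration pose (robot forward kinematics, and board poses from
    e.g. solvePnP, respectively). Returns the camera-to-gripper transform."""
    R, t = cv2.calibrateHandEye(R_gripper2base, t_gripper2base,
                                R_target2cam, t_target2cam,
                                method=cv2.CALIB_HAND_EYE_TSAI)
    return R, t
```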
The main flow of the robot gripping system working by applying the invention is shown in fig. 4, and mainly comprises the following steps:
S201, collecting an image. The upper computer or another sensing signal triggers the camera to take a picture and collect images, obtaining the grabbing scene information. The cameras include, but are not limited to, 2D plane cameras, depth cameras and 3D cameras, such as structured light or binocular cameras;
S202, identifying the target. Acquiring the target type, pose and other information from the raw image with an image processing algorithm. The specific recognition algorithm depends on the camera used and the image information obtained;
S204, grab planning. Indexing the identified target information in the grabbing plan database to plan a grab for the current target;
S205, extracting the drop point region. Obtaining the key position parameters of the drop point region from the planned grab and extracting the local image of the gripper drop point region from the image;
S206, drop point region collision detection. Processing and analyzing the local image of the drop point region to obtain a data index indicating whether a collision exists, and comparing it with a set threshold: if the index exceeds the threshold a collision exists and S207 is executed; otherwise there is no collision and S208 is executed. The specific detection index, the threshold setting and the recognition method of step S202 are determined by the selected camera type, the processing method and actual experiments.
S207, a collision is detected in the gripper drop point region. Depending on the working requirements, the target is abandoned, or the set optimization rule is executed to optimize the grab and S205-S207 are executed again until a collision-free feasible grab is obtained; if a collision remains after iterating through all optimization positions, the target is abandoned and a preset exit routine is executed.
S208, a collision-free feasible grab is obtained; it is transformed to the robot base coordinate system Coor_base with the transformation matrix, and the grab is executed.
The preparation work and the overall implementation flow of the invention have been specifically described, and the following is a detailed description of the implementation of the obstacle avoidance detection optimization method of the invention through the drawings and examples.
Example one
This example uses a two-finger parallel jaw as the gripper for the grabbing operation.
Firstly, the multi-mode grabbing planning design and obstacle avoidance optimization method of the present invention will be specifically described with reference to fig. 5 to 7:
the left part of the figure is a grabbing work scene under a camera coordinate system, the right part of the figure is a top view under a target object workpiece coordinate system, 4 is a grabbing platform, 51 is an example of a cuboid target object, and 61 and 62 are the falling point areas of two-finger clampers.
The left sides of figs. 5-7 show three typical postures that the target object can take when randomly placed, namely flat, side-standing and upright, referred to as the 3 typical modes of the object, together with the angles of each posture in the camera coordinate system. The grabbing modes of the object in the three modes are mutually exclusive and must be designed and planned independently; other possible postures, such as inclined ones, are assigned to one of the typical modes according to the angle ranges and grabbed with that mode's plan.
The right sides of figs. 5-7 show the grabbing mode designed for each corresponding mode in the target object workpiece coordinate system. After the grabbing plan is completed, the grabbing drop point areas 61 and 62 are computed from the jaw shape and size and the grabbing parameters, stored in the grabbing plan database in advance, and directly indexed and read during operation.
The obstacle avoidance detection and default rule optimization method of the present invention will be specifically described below with reference to fig. 8 to 12:
In the figures, 51 is an example cuboid target object, 52 is an example cylindrical target object, 6 is a grabbing drop point area, and 7 denotes obstacles distributed on both sides of the target object.
As shown in fig. 8, the drop point area overlaps an obstacle; the collision can be recognized by cropping the drop point area and detecting the overlap. The specific algorithm and processing strategy should be determined by the camera and detection method used, including but not limited to the following (a point cloud sketch follows this list):
(1) If an ordinary 2D camera provides an RGB or grayscale image, a template matching or target detection method can extract the target from the scene; the image is binarized, edges are extracted from the original image with a Canny operator or similar, then the non-zero pixel count or pixel value statistics are computed over the gripper drop point area, and a collision is considered to exist in the area when a set threshold is reached.
(2) If a depth camera provides a target depth image, a surface matching method can extract the target from the scene; the image is binarized, edges are extracted from the original image with a Canny operator or similar, then the non-zero pixel count or pixel value statistics are computed over the gripper drop point area, and a collision is considered to exist in the area when a set threshold is reached.
(3) If a structured light, binocular or other stereo camera provides point cloud information, an ICP (Iterative Closest Point) or point cloud surface matching algorithm can extract the target from the scene; the gripper drop point area is then cut out, the number of points inside it is counted, and a collision is considered to exist in the area when a set threshold is reached.
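A minimal NumPy sketch of strategy (3); modelling the gripper drop volume as an axis-aligned box in the workpiece frame and the count threshold are illustrative assumptions.

```python
import numpy as np

def region_has_collision_3d(points_obj, box_min, box_max, count_thresh=20):
    """After matching and removing the target from the scene cloud, count the
    residual points that fall inside the gripper drop volume (an axis-aligned
    box in Coor_obj here) and flag a collision past the experimentally tuned
    threshold (placeholder value)."""
    pts = np.asarray(points_obj)                       # (N, 3), workpiece frame
    inside = np.all((pts >= box_min) & (pts <= box_max), axis=1)
    return int(np.count_nonzero(inside)) >= count_thresh
```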
For objects with simple shapes like 51 and 52, when a collision exists in the grabbing drop point area, rules such as small translations along an axis or small rotations around the center can be set in the workpiece coordinate system for iterative optimization, quickly finding a collision-free area and optimizing the grab. As shown in fig. 8, when the original grabbing plan for 51 collides, simple operations such as iteratively moving the grabbing position upwards (fig. 9) or downwards (fig. 10) quickly yield an optimized grab. As shown in fig. 11, the original grabbing plan for 52 collides, and a collision-free region is found by iterative rotation (fig. 12). These default translation and rotation rules are only examples and can be designed specifically for the target object being grabbed; a sketch of the iteration follows.
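A sketch of the default translational rule of figs. 8-10, iterating growing offsets along the workpiece y-axis until the drop regions clear. The step size, pose layout and the collides callback are assumptions; a rotational variant (figs. 11-12) would perturb the rotation angle instead.

```python
def optimize_by_translation(grasp_pose, collides, step=0.005, max_steps=8):
    """grasp_pose = [x, y, z, rx, ry, rz] in Coor_obj; collides(pose) -> bool
    runs the drop region collision check. Tries up then down with growing
    offsets; returns the first collision-free pose, or None if all fail."""
    for k in range(1, max_steps + 1):
        for sign in (+1.0, -1.0):
            candidate = list(grasp_pose)
            candidate[1] += sign * k * step     # translate along the y-axis
            if not collides(candidate):
                return candidate
    return None
```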
The following describes the directional obstacle avoidance optimization strategy based on the analysis of the drop point sub-regions in detail with reference to fig. 13:
In fig. 13 the two drop point areas are further divided into four drop point sub-areas 61, 62, 63 and 64; 51 is an example cuboid target object and 7 denotes obstacles distributed on both sides of it. By further analyzing the grabbing conditions of these sub-areas, an index table such as Table 1 is built, with a corresponding directional optimization strategy for each condition. Further analysis of the situation shown reveals collisions in areas 61 and 64, and the resulting directional optimization strategy is to rotate the grab counterclockwise (a lookup sketch follows fig. 14).
Table 1 is the drop point sub-region directional optimization strategy table; it is only schematic and does not list all cases.
TABLE 1 (reproduced as an image in the original publication)
The optimized collision-free grab for this target, rotated counterclockwise, is shown in fig. 14.
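In the spirit of Table 1, such a directional strategy can be expressed as a lookup from the collision pattern over the four sub-regions. Only the counterclockwise case of fig. 13 is taken from the text; the remaining rows are illustrative assumptions, since the full table is not reproduced here.

```python
def directional_strategy(colliding_subregions):
    """colliding_subregions: set of sub-region labels (61, 62, 63, 64) that
    failed the drop point collision check of the previous section."""
    rules = {
        frozenset({61, 64}): "rotate counterclockwise",  # case shown in fig. 13
        frozenset({62, 63}): "rotate clockwise",         # assumed mirror case
        frozenset(): "no optimization needed",
    }
    return rules.get(frozenset(colliding_subregions),
                     "fall back to the default translation/rotation rules")
```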

Claims (6)

1. A multi-mode grabbing obstacle avoidance detection optimization method based on drop point area detection, characterized in that a grabbing technology applying the method mainly comprises the following steps:
S1, designing grabs for the object in each typical mode in combination with the object to be grabbed and the geometry of the gripper used, obtaining the gripper drop point areas of the different grabbing modes;
S2, establishing the targets to be grabbed and the multi-mode grabbing planning database;
S3, calibrating the positional relation between the camera and the robot end gripper to obtain the hand-eye transformation matrix;
S4, acquiring target scene information with a camera and obtaining the type, posture and position of the target part in the camera coordinate system with a detection algorithm;
S5, indexing the grabbing planning database with the identified target part information to obtain the grabbing plan and drop point area of the current mode;
S6, solving the gripper pose in the camera coordinate system and the drop point area in the workpiece coordinate system when the target object is grabbed;
S7, checking the obtained drop point area: executing the grab if there is no collision, and replacing the target or re-detecting after optimization if there is a collision;
S8, obtaining the actual gripper pose from the hand-eye transformation matrix and executing the grab.
2. The multi-mode grabbing obstacle avoidance detection optimization method based on drop point area detection as claimed in claim 1, wherein the drop point area detection obstacle avoidance method is implemented by the following steps:
S1, designing grabs for the object in each typical mode in combination with the object to be grabbed and the geometry of the gripper used, obtaining the gripper drop point areas of the different grabbing modes;
S2, establishing the targets to be grabbed and the multi-mode grabbing planning database;
S3, performing a collision check on the drop point area of a planned grab;
S4, determining the next action to execute according to the collision check result.
3. The multi-mode grabbing obstacle avoidance detection optimization method based on drop point area detection as claimed in claim 1, wherein the grabbing plan database constructed in step S2 at least includes: (1) the type, 3-dimensional model and related information of each target object to be grabbed; (2) the gripper grabbing mode of each target object in each placing pose, i.e. the gripper pose in the target object workpiece coordinate system; (3) the optimization rules applied when a collision exists in the drop point area of each grabbing mode; (4) the end shape and size parameters of the gripper used; (5) the key position parameters of the gripper end drop point areas in the local coordinate system of the target object, such as the corner coordinates of a rectangular area or the center coordinates and radius of a circular area;
the different grabbing modes are defined as follows: for the same grabbing target, a different relative gripper pose is designed according to the target's posture so as to complete the optimal grab of the target in that posture; each such design is one mode.
4. The multi-mode grabbing obstacle avoidance detection optimization method based on the drop point area detection as claimed in claim 1, wherein the grabbing drop point area collision detection method comprises the following steps:
S1, acquiring a scene image with the camera;
S2, cropping according to the key position parameters of the gripper drop point area to obtain the drop point area image;
S3, setting a collision threshold for a chosen image feature index;
S4, comparing the index calculated over the area with the set threshold to obtain the collision detection result;
the collision check method is determined by the camera and detection mode used, and includes but is not limited to the following:
(1) if an ordinary 2D camera provides an RGB or grayscale image, a template matching or target detection method can extract the target from the scene; the image is binarized, edges are extracted from the original image with a Canny operator or similar, then the non-zero pixel count or pixel value statistics are computed over the gripper drop point area, and a collision is considered to exist in the area when a set threshold is reached;
(2) if a depth camera provides a target depth image, a surface matching method can extract the target from the scene; the image is binarized, edges are extracted from the original image with a Canny operator or similar, then the non-zero pixel count or pixel value statistics are computed over the gripper drop point area, and a collision is considered to exist in the area when a set threshold is reached;
(3) if a structured light, binocular or other stereo camera provides point cloud information, a point cloud matching algorithm such as ICP (Iterative Closest Point) can extract the target from the scene; the gripper drop point area is then cut out, the number of points inside it is counted, and a collision is considered to exist in the area when a set threshold is reached.
5. The multi-mode grabbing obstacle avoidance detection optimization method based on drop point area detection as claimed in claim 1, wherein different obstacle avoidance optimization methods can be designed for different targets or grabbing plans:
Optimization mode 1: default optimization rules can be designed for the grabbing positions of different targets; after a drop point region collision is detected, the grabbing position is transformed by executing the preset avoidance rules to determine an optimized grabbing region;
Optimization mode 2: the drop point area is further divided into sub-areas and analyzed to obtain a directional optimization strategy:
(1) dividing each finger drop point area of the gripper into n sub-areas, where the number n of sub-areas and the division form are determined by the geometries of the target and the gripper;
(2) performing on each drop point sub-region R_i (1 ≤ i ≤ n) the obstacle avoidance detection of the method as claimed in claim 3, obtaining directional optimization parameters from the sub-region detection results, and realizing directional obstacle avoidance optimization through transformations such as translation, rotation and gripper size scaling.
6. The multi-mode grabbing obstacle avoidance detection optimization method based on drop point area detection as claimed in claim 1, wherein the parts of a grabbing system adopting the method work in different coordinate systems, mainly the target workpiece coordinate system Coor_obj, the camera coordinate system Coor_cam, the robot base coordinate system Coor_base and the robot end tool coordinate system Coor_tool; the workpiece coordinate system Coor_obj is determined by the user according to the geometric shape of the target part, and grabbing planning and collision detection optimization are mainly performed in it; the camera coordinate system Coor_cam is determined when the camera leaves the factory, and image processing, segmentation and target posture judgment are mainly performed in it; the robot base coordinate system Coor_base is determined when the robot leaves the factory, and robot motion and grab execution are mainly performed in it; coordinate transformation between different coordinate systems can be realized through transformation matrices, wherein the hand-eye transformation matrix between the camera coordinate system and the robot base coordinate system is obtained through calibration, and the transformation matrix between the workpiece coordinate system and the camera coordinate system is obtained through inverse solution of the workpiece pose coordinates in the camera coordinate system.
CN202110766116.6A 2021-07-07 2021-07-07 Multimode grabbing obstacle avoidance detection optimization method based on drop point area detection Active CN113538459B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110766116.6A CN113538459B (en) 2021-07-07 2021-07-07 Multimode grabbing obstacle avoidance detection optimization method based on drop point area detection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110766116.6A CN113538459B (en) 2021-07-07 2021-07-07 Multimode grabbing obstacle avoidance detection optimization method based on drop point area detection

Publications (2)

Publication Number Publication Date
CN113538459A (en) 2021-10-22
CN113538459B CN113538459B (en) 2023-08-11

Family

ID=78097991

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110766116.6A Active CN113538459B (en) 2021-07-07 2021-07-07 Multimode grabbing obstacle avoidance detection optimization method based on drop point area detection

Country Status (1)

Country Link
CN (1) CN113538459B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114494949A (en) * 2022-01-10 2022-05-13 深圳市菲普莱体育发展有限公司 Floor point area detection method and device for space moving object
CN115056215A (en) * 2022-05-20 2022-09-16 梅卡曼德(北京)机器人科技有限公司 Collision detection method, control method, capture system and computer storage medium
CN115837363A (en) * 2023-02-20 2023-03-24 成都河狸智能科技有限责任公司 Shared bicycle sorting system and method
WO2024183490A1 (en) * 2023-03-06 2024-09-12 赛那德科技有限公司 Method, system and device for generating image of disorderly stacked packages

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018161305A1 (en) * 2017-03-09 2018-09-13 深圳蓝胖子机器人有限公司 Grasp quality detection method, and method and system employing same
WO2018193130A1 (en) * 2017-04-21 2018-10-25 Roboception Gmbh Method for creating a database of gripper poses, method for controlling a robot, computer-readable storage medium and materials handling system
CN108196453A (en) * 2018-01-24 2018-06-22 中南大学 A kind of manipulator motion planning Swarm Intelligent Computation method
EP3623115A1 (en) * 2018-09-06 2020-03-18 Kabushiki Kaisha Toshiba Hand control device
CN109816730A (en) * 2018-12-20 2019-05-28 先临三维科技股份有限公司 Workpiece grabbing method, apparatus, computer equipment and storage medium
US20200391385A1 (en) * 2019-06-17 2020-12-17 Kabushiki Kaisha Toshiba Object handling control device, object handling device, object handling method, and computer program product
CN111558940A (en) * 2020-05-27 2020-08-21 佛山隆深机器人有限公司 Robot material frame grabbing planning and collision detection method
CN112060087A (en) * 2020-08-28 2020-12-11 佛山隆深机器人有限公司 Point cloud collision detection method for robot to grab scene
CN112476434A (en) * 2020-11-24 2021-03-12 新拓三维技术(深圳)有限公司 Visual 3D pick-and-place method and system based on cooperative robot
CN112847374A (en) * 2021-01-20 2021-05-28 湖北师范大学 Parabolic-object receiving robot system
CN112847375A (en) * 2021-01-22 2021-05-28 熵智科技(深圳)有限公司 Workpiece grabbing method and device, computer equipment and storage medium

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
JIAHAO ZHANG et al.: "Grasping Novel Objects with Real-Time Obstacle Avoidance", International Conference on Social Robotics, 27 November 2018, pages 160-169, XP047495696, DOI: 10.1007/978-3-030-05204-1_16 *
KE REN et al.: "Target Grasping and Obstacle Avoidance Motion Planning of Humanoid Robot", 2018 IEEE International Conference on Intelligence and Safety for Robotics (ISR), 15 November 2018, pages 250-255 *
刁琛桃: "Research on object recognition and grasping methods for home service robots", China Master's Theses Full-text Database (Information Science and Technology), no. 03, 15 March 2020, pages 140-500 *
王正万: "Research and analysis of the obstacle avoidance system of a picking robot based on a fuzzy control system", Journal of Agricultural Mechanization Research, no. 1, 31 January 2019, pages 230-233 *

Also Published As

Publication number Publication date
CN113538459B (en) 2023-08-11

Similar Documents

Publication Publication Date Title
CN113538459B (en) Multimode grabbing obstacle avoidance detection optimization method based on drop point area detection
CN109483554B (en) Robot dynamic grabbing method and system based on global and local visual semantics
CN105729468B (en) A kind of robotic workstation based on the enhancing of more depth cameras
Domae et al. Fast graspability evaluation on single depth maps for bin picking with general grippers
US9802317B1 (en) Methods and systems for remote perception assistance to facilitate robotic object manipulation
WO2017015898A1 (en) Control system for robotic unstacking equipment and method for controlling robotic unstacking
JP2004050390A (en) Work taking out device
CN112828892B (en) Workpiece grabbing method and device, computer equipment and storage medium
CN112847375B (en) Workpiece grabbing method and device, computer equipment and storage medium
CN113386122B (en) Method and device for optimizing measurement parameters and computer-readable storage medium
CN113284178B (en) Object stacking method, device, computing equipment and computer storage medium
CN113858188A (en) Industrial robot gripping method and apparatus, computer storage medium, and industrial robot
CN111390910A (en) Manipulator target grabbing and positioning method, computer readable storage medium and manipulator
CN114078162B (en) Truss sorting method and system for workpiece after steel plate cutting
US20230173660A1 (en) Robot teaching by demonstration with visual servoing
CN112338922B (en) Five-axis mechanical arm grabbing and placing method and related device
Lin et al. Vision based object grasping of industrial manipulator
CN117036470A (en) Object identification and pose estimation method of grabbing robot
KR102267514B1 (en) Method for picking and place object
CN116197885B (en) Image data filtering method, device, equipment and medium based on press-fit detection
CN116175542B (en) Method, device, electronic equipment and storage medium for determining clamp grabbing sequence
US12097627B2 (en) Control apparatus for robotic system, control method for robotic system, computer-readable storage medium storing a computer control program, and robotic system
WO2004052596A1 (en) Method and arrangement to avoid collision between a robot and its surroundings while picking details including a sensorsystem
CN116197888B (en) Method and device for determining position of article, electronic equipment and storage medium
CN115797332B (en) Object grabbing method and device based on instance segmentation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant