CN111136656B - Method for automatically identifying and grabbing three-dimensional irregular object of robot - Google Patents
Method for automatically identifying and grabbing three-dimensional irregular object of robot
- Publication number
- CN111136656B (grant) · CN201911346789.5A (application)
- Authority
- CN
- China
- Prior art keywords
- robot
- camera
- point
- coordinate system
- grabbing
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1602—Programme controls characterised by the control system, structure, architecture
- B25J9/161—Hardware, e.g. neural networks, fuzzy logic, interfaces, processor
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J19/00—Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
- B25J19/02—Sensing devices
- B25J19/021—Optical sensing devices
- B25J19/023—Optical sensing devices including video camera means
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1656—Programme controls characterised by programming, planning systems for manipulators
- B25J9/1661—Programme controls characterised by programming, planning systems for manipulators characterised by task planning, object-oriented languages
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1694—Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
- B25J9/1697—Vision controlled systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
Landscapes
- Engineering & Computer Science (AREA)
- Mechanical Engineering (AREA)
- Robotics (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Artificial Intelligence (AREA)
- Automation & Control Theory (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Computation (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Computational Biology (AREA)
- Fuzzy Systems (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Multimedia (AREA)
- Manipulator (AREA)
Abstract
The invention provides a method for a robot to automatically identify and grab a three-dimensional irregular object, comprising the following operations: preliminary data collection, data testing, data analysis, moving the robot to the grabbing pose, and grabbing the lock. The robot of the invention adds automatic identification and grabbing of three-dimensional irregular objects: surface information of the three-dimensional irregular object is obtained through a line laser 3D camera mounted on the flange at the end of the robot. When a workpiece reaches the scanning area of the robot, the robot carries the camera in a linear motion to scan the product surface and acquire its surface information. Products of different specifications are scanned one by one to build a point cloud library of all products, and for each product specification a grabbing point is taught with the robot and recorded. The information of the currently scanned workpiece is then converted into the current robot grabbing position point through formula (2-5), so that workpiece grabbing is realized.
Description
Technical Field
The invention relates to the field of industrial automation, in particular to a method for automatically identifying and grabbing a three-dimensional irregular object by a robot.
Background
Against the background of the national intelligent manufacturing strategy, and particularly with the rapid development of the logistics field in recent years, there is a growing demand for automatic sorting, identification and grabbing of objects. The mainstream automatic identification and grabbing robot systems currently on the market mainly extract two-dimensional plane features from the object surface and perform template matching on those features; as a result, they can only automatically identify and grab regular objects, such as express boxes, and cannot identify and grab irregular three-dimensional objects. Based on customer requirements for a robot that automatically identifies and grabs irregular objects (such as locks for containers), the present method for automatic identification and grabbing of three-dimensional irregular objects by a robot is provided.
Disclosure of Invention
In view of the above technical problem, the invention provides a method for a robot to automatically identify and grab a three-dimensional irregular object, which comprises the following specific operations:
firstly, preliminary data collection: the client software acquires images of the locks and extracts their key features, and at the same time establishes a point cloud library for each product, which is stored in the client software;
secondly, data testing: install the camera on the flange at the end of the robot and verify that the camera is mounted at the specified position; place a workpiece on the workbench; when the workpiece reaches the scanning area of the robot, the robot carries the camera in a linear motion to scan the product surface and acquire its surface information;
thirdly, data analysis: when the robot starts to identify and grab the lock, the line laser 3D camera mounted at the robot end scans the lock from the designated position to acquire an image of the lock, which is stored in the client software; the client software matches the image against its stored point cloud library to determine the product model; if the product model exists, the client software matches the key features in the image against the point cloud library to determine the pose of the lock grabbing point, and the client sends the pose to the robot through Socket communication;
and fourthly, the robot moves to the grabbing point position, and the grabbing of the lock is realized.
The camera is a line laser 3-dimensional camera.
The method for determining the specified position of the camera comprises the following steps:
Robot camera calibration is mainly used to confirm the relation between the camera coordinate system and the flange at the end of the robot, i.e., to obtain the transformation from the camera coordinate system to the flange coordinate system (the camera tool coordinate system).
Step1, printing a visual calibration plate, wherein the calibration plate is generally given by a camera manufacturer and is printed according to the use requirement;
step2, placing the calibration plate at a flat and wide position and ensuring that the position and posture of the robot can be reached;
Step3, establish the relation between the tool coordinate system and the robot world coordinate system by mounting a needle-point tool on the end of the robot and teaching the tool; this tool is mainly used to calibrate the calibration plate;
Step4, teach the workpiece coordinate system of the calibration plate using the needle tip mounted on the robot end; the center point of the coordinate axes is the origin of the workpiece coordinate system, and the coordinate directions are as shown in FIG. 1; record the current workpiece coordinate system as the calibration plate coordinate system;
Step5, move the camera to the position directly above the calibration plate so that the laser line projected by the camera is aligned with the X axis of the calibration plate; the distance from the camera to the calibration plate is consistent along the direction of the laser line, and is also consistent along the length direction of the camera, i.e., the direction perpendicular to the laser line. In other words, the mounting plane of the camera is parallel to the plane of the calibration plate; at this point the camera coordinate system is aligned with the calibration plate coordinate system;
Step6, record the point of the robot flange (tool0) aligned in Step5, expressed in the workpiece coordinate system, as pAlignMiddle; from it, compute the pre-start position pAlignPreStart (pAlignPreStart.x = pAlignMiddle.x, pAlignPreStart.y = pAlignMiddle.y - 250) and the end position pAlignEnd (pAlignEnd.x = pAlignMiddle.x, pAlignEnd.y = pAlignMiddle.y + 250) of the next camera movement;
Step7, move the robot so that the camera laser line is at the pAlignPreStart point (expressed in the calibration plate workpiece coordinate system for reference, the same applies below); start the robot moving in a straight line from pAlignPreStart to pAlignEnd at a speed of 100 mm/s. Note that the robot has an acceleration phase from rest up to the constant speed of 100 mm/s, so the robot sends a signal to trigger the camera to take an image only after it has travelled 50 mm in a straight line (by which time acceleration is complete). The trigger point of this signal is the start position of the Y coordinate of the acquired image; record the flange (tool0) position at this point as pAlignStart, with pAlignStart.y = pAlignMiddle.y - 200. The Y-direction image length of the camera is set to 350 mm, as shown in fig. 2;
Step8, after the image is taken, check through the software provided with the line laser camera whether the Z values at several arbitrary positions of the scanned image are consistent, so as to confirm whether the imaging plane of the camera is parallel to the plane of the calibration plate (if the difference is too large, e.g. close to 1 mm or more, adjust the plane to be parallel and repeat from Step5). Record the point of the coordinate origin of the calibration plate in the image coordinate system at this moment, and record its coordinate values as X, Y and Z; because the camera was aligned with the calibration plate coordinate directions when the image was taken, the Euler angles of rotation of the camera around the X, Y and Z axes are all 0, thereby obtaining the coordinate values of the calibration plate in the image coordinate system;
Step9, express the point locations and coordinate systems above in homogeneous-coordinate (transformation matrix) form, respectively:
a matrix representing the relation between the center point of the calibration plate and the robot world coordinate system;
a matrix representing the relation between the flange at the end of the robot and the center point of the calibration plate;
the transformation composed from the above is the tool coordinate system of the camera, i.e., the conversion relation between the camera coordinate system and the flange center-point coordinate system.
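The transformation matrices referred to in Step9 are not reproduced in this text. The following is a hedged reconstruction from the surrounding steps; the frame symbols (world W, calibration plate B, flange F, camera C) are chosen here for illustration and are not notation taken from the patent.

```latex
% Assumed notation: {}^{A}T_{B} is the homogeneous transform of frame B expressed in frame A.
% {}^{W}T_{B} : calibration plate (workpiece) frame taught in the robot world frame (Step4)
% {}^{W}T_{F} : flange (tool0) pose recorded at the image trigger point (Step7)
% {}^{C}T_{B} : plate origin measured in the image/camera frame, translation (X, Y, Z), zero Euler angles (Step8)
\[
  {}^{F}T_{C} \;=\; \left({}^{W}T_{F}\right)^{-1}\,{}^{W}T_{B}\,\left({}^{C}T_{B}\right)^{-1}
\]
```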
The method for determining the position and posture of the robot grabbing point mainly obtains the conversion matrix between the image grabbing point and the template grabbing point, and converts the robot grabbing point taught under the template into the robot grabbing point under the currently scanned image. The robot scans the target workpiece to obtain the target features of the required image; comparing these features with the features in the template gives a conversion matrix H, and together with the grabbing point taught at the template position, the robot grabbing point for the current image is established. The calculation is outlined as follows:
step1: the pose from the origin of the camera to the base coordinate is obtained, and the formula is obtained as follows:
step2: and (3) solving the grabbing pose of the robot at the position 2, and obtaining a formula (2-5) by combining the following formulas (2-1), (2-2) and (2-3), namely the grabbing point of the robot at the position 2:
the invention has the beneficial effects that: the robot is different from the automatic identification and grabbing technology of a robot on the existing market for a two-dimensional plane object, the automatic identification and grabbing of a three-dimensional irregular object are added, the surface information of the three-dimensional irregular object is obtained through a line laser 3D camera, the camera is installed on a flange plate at the tail end of the robot, when a workpiece reaches a scanning area of the robot, the robot carries the camera to perform linear motion to scan the surface of a product so as to obtain the surface information of the product, the products of different specifications are scanned one by one to obtain a point cloud library of the whole product, for each specification type of product, grabbing points are taught through the robot and recorded, and then current scanned workpiece information is converted into current robot grabbing position points through formulas 2-5, so that the workpiece is grabbed.
Drawings
Fig. 1 is a diagram of an automatic recognition and grabbing framework of a three-dimensional irregular object of the robot.
FIG. 2 is a calibration plate workpiece coordinate system setting according to the present invention.
FIG. 3 is a schematic view of camera calibration according to the present invention.
FIG. 4 is a schematic diagram of the present invention for establishing a robot grasp point.
FIG. 5 is a schematic flow chart of the present invention.
Fig. 6 illustrates several lock images and features of the present invention.
Detailed Description
The invention will be further explained with reference to the figures:
example 1
A method for automatically identifying and grabbing a three-dimensional irregular object of a robot comprises the following specific operation methods:
1. preliminary data collection: the client software acquires images of the locks and extracts their key features, and at the same time establishes a point cloud library for each product, which is stored in the client software;
2. data testing: install the camera on the flange at the end of the robot and verify that the camera is mounted at the specified position; place a workpiece on the workbench; when the workpiece reaches the scanning area of the robot, the robot carries the camera in a linear motion to scan the product surface and acquire its surface information;
3. data analysis: when the robot starts to identify and grab the lock, the line laser 3D camera mounted at the robot end scans the lock from the designated position to acquire an image of the lock, which is stored in the client software; the client software matches the image against its stored point cloud library to determine the product model; if the product model exists, the client software matches the key features in the image against the point cloud library to determine the pose of the lock grabbing point, and the client sends the pose to the robot through Socket communication;
4. the robot moves to the grabbing point position, and the lock is grabbed.
Example 2
Robot camera calibration is mainly used to confirm the relation between the camera coordinate system and the flange at the end of the robot, i.e., to obtain the transformation from the camera coordinate system to the flange coordinate system (the camera tool coordinate system).
Step1, printing a visual calibration plate, wherein the calibration plate is generally given by a camera manufacturer and is printed according to the use requirement;
step2, placing the calibration plate at a flat and wide position and ensuring that the position and posture of the robot can be reached;
Step3, mount a needle-point tool at the end of the robot and teach the tool to establish the relation between the tool coordinate system and the robot world coordinate system; this tool is mainly used to calibrate the calibration plate;
Step4, teach the workpiece coordinate system of the calibration plate using the needle tip mounted on the robot end; the center point of the coordinate axes is the origin of the workpiece coordinate system, and the coordinate directions are as shown in FIG. 1; record the current workpiece coordinate system as the calibration plate coordinate system;
Step5, move the camera to the position directly above the calibration plate so that the laser line projected by the camera is aligned with the X axis of the calibration plate; the distance from the camera to the calibration plate is consistent along the direction of the laser line, and is also consistent along the length direction of the camera, i.e., the direction perpendicular to the laser line. In other words, the mounting plane of the camera is parallel to the plane of the calibration plate; at this point the camera coordinate system is aligned with the calibration plate coordinate system;
Step6, record the point of the robot flange (tool0) aligned in Step5, expressed in the workpiece coordinate system, as pAlignMiddle; from it, compute the pre-start position pAlignPreStart (pAlignPreStart.x = pAlignMiddle.x, pAlignPreStart.y = pAlignMiddle.y - 250) and the end position pAlignEnd (pAlignEnd.x = pAlignMiddle.x, pAlignEnd.y = pAlignMiddle.y + 250) of the next camera movement;
Step7, move the robot so that the camera laser line is at the pAlignPreStart point (expressed in the calibration plate workpiece coordinate system for reference, the same applies below); start the robot moving in a straight line from pAlignPreStart to pAlignEnd at a speed of 100 mm/s. Note that the robot has an acceleration phase from rest up to the constant speed of 100 mm/s, so the robot sends a signal to trigger the camera to take an image only after it has travelled 50 mm in a straight line (by which time acceleration is complete); the trigger point of this signal is the start position of the Y coordinate of the acquired image, and the flange (tool0) position at this point is recorded as pAlignStart, with pAlignStart.y = pAlignMiddle.y - 200. The Y-direction image length of the camera is set to 350 mm, as shown in fig. 2;
Step8, after the image is taken, check through the software provided with the line laser camera whether the Z values at several arbitrary positions of the scanned image are consistent, so as to confirm whether the imaging plane of the camera is parallel to the plane of the calibration plate (if the difference is too large, e.g. close to 1 mm or more, adjust the plane to be parallel and repeat from Step5); record the point of the coordinate origin of the calibration plate in the image coordinate system at this moment, and record its coordinate values as X, Y and Z; because the camera was aligned with the calibration plate coordinate directions when the image was taken, the Euler angles of rotation of the camera around the X, Y and Z axes are all 0, thereby obtaining the coordinate values of the calibration plate in the image coordinate system;
Step9, express the point locations and coordinate systems above in homogeneous-coordinate (transformation matrix) form, respectively:
a matrix representing the relation between the center point of the calibration plate and the robot world coordinate system;
a matrix representing the relation between the flange at the end of the robot and the center point of the calibration plate.
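As a worked illustration of Steps 5 to 9, the sketch below computes the scan pre-start and end points from pAlignMiddle and composes the camera tool transform from the recorded poses. The helper names, the example numeric values and the use of 4x4 homogeneous matrices via numpy are choices made here for illustration, not part of the patent.

```python
import numpy as np

def translation(x, y, z):
    """4x4 homogeneous transform with identity rotation (Euler angles all zero)."""
    T = np.eye(4)
    T[:3, 3] = (x, y, z)
    return T

# Step6: scan path derived from the aligned middle point (values in mm, assumed example pose).
pAlignMiddle   = np.array([350.0, 120.0, 400.0])
pAlignPreStart = pAlignMiddle + np.array([0.0, -250.0, 0.0])
pAlignEnd      = pAlignMiddle + np.array([0.0,  250.0, 0.0])
pAlignStart_y  = pAlignMiddle[1] - 200.0   # trigger point after 50 mm of travel at 100 mm/s

# Step9 (assumed composition): camera tool frame from the three recorded transforms.
T_world_board  = translation(600.0, 0.0, 0.0)                  # plate frame taught in Step4 (example)
T_world_flange = translation(600.0, pAlignStart_y, 500.0)      # flange pose at the trigger point (example)
T_cam_board    = translation(5.0, -200.0, 100.0)               # plate origin measured in the image (Step8)

T_flange_cam = np.linalg.inv(T_world_flange) @ T_world_board @ np.linalg.inv(T_cam_board)
print(np.round(T_flange_cam, 3))
```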
Example 3
The method for determining the position of the robot grabbing point mainly obtains the conversion matrix between the image grabbing point and the template grabbing point, and converts the robot grabbing point taught under the template into the robot grabbing point under the currently scanned image. The robot scans the target workpiece to obtain the target features of the required image; comparing these features with the features in the template gives a conversion matrix H, and together with the grabbing point taught at the template position, the robot grabbing point for the current image is established. The calculation is outlined as follows:
step1: the pose from the origin of the camera to the base coordinate is obtained, and the formula is obtained as follows:
step2: and (3) solving the grabbing pose of the robot at the position 2, and obtaining a formula (2-5) by combining the following formulas (2-1), (2-2) and (2-3), namely the grabbing point of the robot at the position 2:
example 4
When the invention is used for the lock for the container, the detailed description of the specific operation steps is as follows:
step1 preparation 1:
the preparation 1 is mainly used for calibrating the relation between the camera coordinate system and the position of the flange plate at the tail end of the robot, and the using and operating steps of the algorithm are described in detail in the algorithm 1.
Step2 preparation 2:
the preparation 2 is mainly used for obtaining the surface information of each product, so that an irregular product surface information template library and a robot grabbing point table are formed. The specific operation process is as follows:
The robot receives the product arrival information and then moves to the designated position with the camera; it scans the designated work area through linear motion to acquire the product surface information, and extracts the characteristic line and its midpoint from the product surface according to the surface features to form a product template, which is used to determine the robot grabbing position. Move the robot to the midpoint of the characteristic line on the product surface, ensure that the posture of the robot gripper is consistent with the direction of the characteristic line, determine the robot grabbing point for the current product, and record it. Repeat these steps for products of different specifications and types, so as to establish the irregular-product surface information template library and the robot grab point table. The robot grab point calculation is given in Algorithm 2.
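The product surface information template library and robot grab point table described above can be pictured as a simple keyed store. The sketch below is only an assumed data layout (the dataclass fields and dictionary keys are chosen here), not a structure defined by the patent.

```python
from dataclasses import dataclass, field
from typing import Dict, Optional
import numpy as np

@dataclass
class ProductTemplate:
    """Surface template for one product specification (field names are assumptions)."""
    spec_id: str
    point_cloud: np.ndarray      # Nx3 surface points from the line-laser scan
    feature_line: np.ndarray     # 2x3 endpoints of the extracted characteristic line
    grab_pose: np.ndarray        # 4x4 homogeneous matrix of the taught robot grab point

@dataclass
class TemplateLibrary:
    """Irregular-product surface information template library plus grab point table."""
    templates: Dict[str, ProductTemplate] = field(default_factory=dict)

    def add(self, tpl: ProductTemplate) -> None:
        self.templates[tpl.spec_id] = tpl

    def lookup(self, spec_id: str) -> Optional[ProductTemplate]:
        return self.templates.get(spec_id)
```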
Step3:
After Step1 and Step2 are completed, when a new product moves into the designated work area and an in-place signal is sent to the robot, the robot receives the arrival information and moves to the designated position with the camera. It scans the designated work area by linear motion to acquire the surface information of the product and performs template matching against the product surface information template library to determine whether the current product exists in the library. If the current product is not in the library, a prompt is issued to manually add a new product template to the template library and, at the same time, the robot grab point information is recorded. If the current product is already stored in the template library, it is matched with the library to establish the product type; after the product type is confirmed, the characteristic line on the surface of that product type is extracted and matched with the template to establish the conversion matrix for the current product grabbing point.
Step4, establish the robot grabbing point pose under the current image according to Algorithm 2.
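The recognition flow of Step3 and Step4 can be sketched as follows. The matching routine here is a simple centroid/radius-signature placeholder standing in for the patent's point cloud and characteristic-line matching, and every function name and threshold is an assumption made for illustration.

```python
from typing import Dict, Optional
import numpy as np

def match_template(scan_cloud: np.ndarray, library: Dict[str, np.ndarray],
                   max_rms: float = 2.0) -> Optional[str]:
    """Rough stand-in for template matching: compares centroid-centred clouds by the
    RMS difference of their sorted point radii; returns the best spec_id or None."""
    def signature(cloud: np.ndarray) -> np.ndarray:
        centered = cloud - cloud.mean(axis=0)
        return np.sort(np.linalg.norm(centered, axis=1))

    scan_sig = signature(scan_cloud)
    best_id, best_err = None, np.inf
    for spec_id, tpl_cloud in library.items():
        tpl_sig = signature(tpl_cloud)
        n = min(len(scan_sig), len(tpl_sig))
        err = float(np.sqrt(np.mean((scan_sig[:n] - tpl_sig[:n]) ** 2)))
        if err < best_err:
            best_id, best_err = spec_id, err
    return best_id if best_err <= max_rms else None
```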
The foregoing is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, several modifications can be made without departing from the principle of the present invention, and these modifications should also be construed as the protection scope of the present invention.
Claims (2)
1. A method for automatically identifying and grabbing a three-dimensional irregular object by a robot is characterized by comprising the following operation steps:
firstly, early data collection: the client software firstly extracts key features in the lockset by acquiring images of the locksets, and simultaneously establishes a point cloud library of each product and stores the point cloud library in the client software;
secondly, data testing: installing a camera on a flange plate at the tail end of a robot, determining that the camera is installed at a designated position, placing a workpiece on a workbench, and when the workpiece reaches a scanning area of the robot, the robot carries the camera to perform linear motion to scan the surface of a product so as to acquire the surface information of the product, wherein the designated position determining method comprises the following steps:
the robot camera is calibrated to confirm the relation between the camera coordinate system and the flange at the end of the robot, namely obtaining the transformation from the camera coordinate system to the flange coordinate system (the camera tool coordinate system);
Step1, printing a visual calibration board;
step2, placing the calibration plate at a flat and wide position and ensuring that the position and posture of the robot can be reached;
step3, mount a tool with a needle point at the end of the robot and teach the tool to establish the relation between the tool coordinate system and the robot world coordinate system; the needle-point tool is used for calibrating the calibration plate;
step4, teach the workpiece coordinate system of the calibration plate using the needle-point tool mounted on the robot end, wherein the center point of the coordinate axes is the origin of the workpiece coordinate system, and the current workpiece coordinate system is recorded as the calibration plate coordinate system;
Step5, moving the camera to the position right above the calibration plate to align the laser line shot by the camera with the X axis of the calibration plate, wherein the distance from the camera to the calibration plate is consistent along the direction of the laser line, and the distance from the camera to the calibration plate is also consistent along the length direction of the camera, namely the direction vertical to the laser line, so that the alignment of the coordinate system of the camera and the coordinate system of the calibration plate is realized;
step6, record the point position of the robot flange aligned in Step5, expressed in the workpiece coordinate system, as pAlignMiddle, from which the pre-start position and the end position of the next camera movement are calculated;
step7, move the camera laser line of the robot to the pre-start position point; start the robot moving in a straight line from the pre-start position point to the end position point at a speed of 100 mm/s; the robot undergoes an acceleration phase from rest before reaching the uniform speed of 100 mm/s, so the robot sends a signal to trigger the camera to take an image after it has run 50 mm in a straight line; the trigger point of the signal is the start position of the Y coordinate of the acquired image, the flange position at this point is recorded as pAlignStart, and pAlignStart.y = pAlignMiddle.y - 200; the Y-direction image length of the camera is set to 350 mm;
step8, after the image acquisition is finished, check through the software provided with the camera whether the Z values at several arbitrary positions of the scanned image are consistent, thereby confirming whether the imaging plane of the camera is parallel to the plane of the calibration plate; record the point of the coordinate origin of the calibration plate in the image coordinate system at this moment, and record its coordinate values as X, Y and Z; because the camera is aligned with the calibration plate coordinate directions when the image is taken, the Euler angles of rotation of the camera around the X, Y and Z axes are all 0, thereby obtaining the coordinate values of the calibration plate in the image coordinate system;
Step9, express the point locations and coordinate systems above in homogeneous-coordinate (transformation matrix) form, respectively:
a matrix representing the relation between the center point of the calibration plate and the robot world coordinate system;
a matrix representing the relation between the flange at the end of the robot and the center point of the calibration plate;
the transformation composed from the above is the tool coordinate system of the camera, namely the conversion relation from the camera coordinate system to the flange center-point coordinate system;
thirdly, data analysis: when the robot starts to identify and grab the lock, the camera mounted at the robot end scans the lock from the designated position to acquire an image of the lock, which is stored in the client software; the client software matches the image against its stored point cloud library to determine the product model; if the product model exists, the client software matches the key features in the image against the point cloud library to determine the pose of the lock grabbing point, and the client sends the pose to the robot through Socket communication;
and fourthly, the robot moves to the grabbing point position, and the grabbing of the lock is realized.
2. The method as claimed in claim 1, wherein the grabbing point pose is determined by obtaining a conversion matrix between the image grabbing point and the template grabbing point and converting the robot grabbing point under the template into the robot grabbing point under the currently scanned image; the robot obtains the required image target features by scanning the target workpiece, the comparison of these features with the features in the template gives a conversion matrix H, and together with the grabbing point taught at the template position the robot grabbing point for the current image is established; the calculation is as follows:
step1: the pose from the origin of the camera to the base coordinate is obtained, and the formula is obtained as follows:
step2: and (3) solving the grabbing pose of the robot at the position 2, and simultaneously obtaining a formula 2-5 by the following formulas 2-1, 2-2, 2-3 and 2-4, namely the grabbing point of the robot at the position 2:
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911346789.5A CN111136656B (en) | 2019-12-24 | 2019-12-24 | Method for automatically identifying and grabbing three-dimensional irregular object of robot |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911346789.5A CN111136656B (en) | 2019-12-24 | 2019-12-24 | Method for automatically identifying and grabbing three-dimensional irregular object of robot |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111136656A CN111136656A (en) | 2020-05-12 |
CN111136656B true CN111136656B (en) | 2020-12-08 |
Family
ID=70519705
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911346789.5A Active CN111136656B (en) | 2019-12-24 | 2019-12-24 | Method for automatically identifying and grabbing three-dimensional irregular object of robot |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111136656B (en) |
Families Citing this family (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111595266A (en) * | 2020-06-02 | 2020-08-28 | 西安航天发动机有限公司 | Spatial complex trend catheter visual identification method |
CN112549021B (en) * | 2020-11-16 | 2022-06-14 | 北京配天技术有限公司 | Robot control method, robot and storage device |
CN112873205A (en) * | 2021-01-15 | 2021-06-01 | 陕西工业职业技术学院 | Industrial robot disordered grabbing method based on real-time switching of double clamps |
CN113114766B (en) * | 2021-04-13 | 2024-03-08 | 江苏大学 | Potted plant information detection method based on ZED camera |
CN113510697B (en) * | 2021-04-23 | 2023-02-14 | 知守科技(杭州)有限公司 | Manipulator positioning method, device, system, electronic device and storage medium |
CN113770059A (en) * | 2021-09-16 | 2021-12-10 | 中冶东方工程技术有限公司 | Intelligent sorting system and method for steel structure parts |
CN115810052A (en) * | 2021-09-16 | 2023-03-17 | 梅卡曼德(北京)机器人科技有限公司 | Camera calibration method and device, electronic equipment and storage medium |
CN114113163B (en) * | 2021-12-01 | 2023-12-08 | 北京航星机器制造有限公司 | Automatic digital ray detection device and method based on intelligent robot |
CN114074331A (en) * | 2022-01-19 | 2022-02-22 | 成都考拉悠然科技有限公司 | Disordered grabbing method based on vision and robot |
CN115070779B (en) * | 2022-08-22 | 2023-03-24 | 菲特(天津)检测技术有限公司 | Robot grabbing control method and system and electronic equipment |
CN115446392B (en) * | 2022-10-13 | 2023-08-04 | 芜湖行健智能机器人有限公司 | Intelligent chamfering system and method for unordered plates |
CN116330306B (en) * | 2023-05-31 | 2023-08-15 | 之江实验室 | Object grabbing method and device, storage medium and electronic equipment |
CN117621092A (en) * | 2023-10-24 | 2024-03-01 | 上海奔曜科技有限公司 | Teaching system, teaching method and teaching-free automatic device |
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1293752A (en) * | 1999-03-19 | 2001-05-02 | 松下电工株式会社 | Three-D object recognition method and pin picking system using the method |
EP2455198A1 (en) * | 2010-11-17 | 2012-05-23 | Samsung Electronics Co., Ltd. | Control of robot hand to contact an object |
US9403278B1 (en) * | 2015-03-19 | 2016-08-02 | Waterloo Controls Inc. | Systems and methods for detecting and picking up a waste receptacle |
CN105014667A (en) * | 2015-08-06 | 2015-11-04 | 浙江大学 | Camera and robot relative pose calibration method based on pixel space optimization |
CN106041937A (en) * | 2016-08-16 | 2016-10-26 | 河南埃尔森智能科技有限公司 | Control method of manipulator grabbing control system based on binocular stereoscopic vision |
CN106778790A (en) * | 2017-02-15 | 2017-05-31 | 苏州博众精工科技有限公司 | A kind of target identification based on three-dimensional point cloud and localization method and system |
CN108827154A (en) * | 2018-07-09 | 2018-11-16 | 深圳辰视智能科技有限公司 | A kind of robot is without teaching grasping means, device and computer readable storage medium |
CN109493384A (en) * | 2018-09-20 | 2019-03-19 | 顺丰科技有限公司 | Camera position and orientation estimation method, system, equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN111136656A (en) | 2020-05-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111136656B (en) | Method for automatically identifying and grabbing three-dimensional irregular object of robot | |
CN111791239B (en) | Method for realizing accurate grabbing by combining three-dimensional visual recognition | |
US20170249729A1 (en) | Automated optical metrology computer aided inspection station and method of operation | |
CN110370286A (en) | Dead axle motion rigid body spatial position recognition methods based on industrial robot and monocular camera | |
US11972589B2 (en) | Image processing device, work robot, substrate inspection device, and specimen inspection device | |
US7283661B2 (en) | Image processing apparatus | |
US20040172164A1 (en) | Method and apparatus for single image 3D vision guided robotics | |
US8379224B1 (en) | Prismatic alignment artifact | |
CN110146017B (en) | Industrial robot repeated positioning precision measuring method | |
CN110980276B (en) | Method for implementing automatic casting blanking by three-dimensional vision in cooperation with robot | |
CN106269548A (en) | A kind of object automatic sorting method and device thereof | |
CN112518748A (en) | Automatic grabbing method and system of vision mechanical arm for moving object | |
CN113532277B (en) | Method and system for detecting plate-shaped irregular curved surface workpiece | |
CN105478363A (en) | Defective product detection and classification method and system based on three-dimensional figures | |
CN112361958B (en) | Line laser and mechanical arm calibration method | |
KR20110095700A (en) | Industrial robot control method for workpiece object pickup | |
CN115063670A (en) | Automatic sorting method, device and system | |
US20240003675A1 (en) | Measurement system, measurement device, measurement method, and measurement program | |
CN114913346A (en) | Intelligent sorting system and method based on product color and shape recognition | |
KR20230128862A (en) | Method and system for auto calibration of robot workcells | |
JP2011093058A (en) | Target object holding area extraction apparatus and robot system using the same | |
CN114918723B (en) | Workpiece positioning control system and method based on surface detection | |
CN116465335A (en) | Automatic thickness measurement method and system based on point cloud matching | |
CN111259928A (en) | Rapid and automatic stacking and stacking method for parts based on machine learning | |
Cheng | Design of Visual Search and Positioning System Based on Labview+ PLC |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right | ||
TR01 | Transfer of patent right |
Effective date of registration: 20210330 Address after: 315400 Chengdong new district, Ningbo Economic Development Zone, Zhejiang Province Patentee after: Zhichang Technology Group Co.,Ltd. Address before: Room 320, building 1, 358 Huayan village, Nanqiao Town, Fengxian District, Shanghai Patentee before: SHANGHAI GENE AUTOMATION TECHNOLOGY Co.,Ltd. |