CN114742883A - Automatic assembly method and system based on plane type workpiece positioning algorithm - Google Patents

Automatic assembly method and system based on plane type workpiece positioning algorithm

Info

Publication number
CN114742883A
CN114742883A (Application CN202210332377.1A)
Authority
CN
China
Prior art keywords
plane
coordinate system
point cloud
pose
assembly
Prior art date
Legal status
Pending
Application number
CN202210332377.1A
Other languages
Chinese (zh)
Inventor
李中伟
贾若愚
钟凯
吴浪
李蹊
何文韬
Current Assignee
Huazhong University of Science and Technology
Shenzhen Huazhong University of Science and Technology Research Institute
Original Assignee
Huazhong University of Science and Technology
Shenzhen Huazhong University of Science and Technology Research Institute
Priority date
Filing date
Publication date
Application filed by Huazhong University of Science and Technology, Shenzhen Huazhong University of Science and Technology Research Institute filed Critical Huazhong University of Science and Technology

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/60 Analysis of geometric attributes
    • G06T 7/66 Analysis of geometric attributes of image moments or centre of gravity
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B23 MACHINE TOOLS; METAL-WORKING NOT OTHERWISE PROVIDED FOR
    • B23P METAL-WORKING NOT OTHERWISE PROVIDED FOR; COMBINED OPERATIONS; UNIVERSAL MACHINE TOOLS
    • B23P 19/00 Machines for simply fitting together or separating metal parts or objects, or metal and non-metal parts, whether or not involving some deformation; Tools or devices therefor so far as not provided for in other classes
    • B23P 19/04 Machines for simply fitting together or separating metal parts or objects, or metal and non-metal parts, whether or not involving some deformation; Tools or devices therefor so far as not provided for in other classes for assembling or disassembling parts
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/02 Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Abstract

The invention discloses an automatic assembly method and system based on a plane type workpiece positioning algorithm, belonging to the field of robot vision. The method uses plane detection and outlier removal to extract several candidate planes, determines each plane's point cloud pose through a minimum bounding box, selects the required plane using the planes' length and width constraints, and guides the industrial robot to the correct assembly position. Compared with image-based positioning algorithms, the method achieves higher recognition accuracy in highly dynamic, highly reflective scenes; compared with conventional point-cloud-based positioning algorithms, it avoids a complex template matching process, is computationally efficient and robust, and is applicable to planar workpieces of any shape. By combining three-dimensional vision with an industrial robot, the invention provides a robust solution for scenes in which the workpiece position may deviate.

Description

Automatic assembly method and system based on plane type workpiece positioning algorithm
Technical Field
The invention belongs to the field of robot vision, and particularly relates to an automatic assembly method and system based on a plane type workpiece positioning algorithm.
Background
Assembly is a downstream stage of product production and plays an important role in manufacturing. On a traditional assembly line, all actions of an industrial robot are preset; when the position of a workpiece deviates, the robot cannot complete the assembly task. With the development of machine vision technology, combining a machine vision positioning algorithm with an industrial robot has gradually become mainstream, markedly improving the robot's working flexibility and level of automation.
Current machine vision positioning algorithms fall mainly into two categories: those based on two-dimensional images and those based on three-dimensional point clouds. Two-dimensional image algorithms are affected by factors such as illumination, rotation and scale, and therefore lack accuracy and robustness; methods based on three-dimensional point cloud registration perform poorly in efficiency because matching and related steps demand a large amount of computation.
In actual assembly, many parts contain one or several planes as assembly surfaces, and extracting planes from a three-dimensional point cloud is computationally cheap. A positioning algorithm designed for such scenes can therefore be robust, accurate and efficient at the same time.
Disclosure of Invention
In view of the above defects or improvement needs of the prior art, the present invention provides an automatic assembly method and system for planar workpieces, aiming to achieve robotic automatic assembly of planar parts while ensuring robustness, accuracy and efficiency.
In order to achieve the above object, the present invention provides a robot automatic assembly method for a planar workpiece, comprising the following steps:
S1, extracting plane point clouds in the assembled parts;
S2, creating a bounding box according to the plane point cloud obtained in S1, and calculating the pose of the assembly position of the assembled part relative to a camera coordinate system;
S3, calculating the pose of the assembly position of the assembled plane part relative to the robot base coordinate system according to the pose of the assembly position of the assembled part relative to the camera coordinate system;
and S4, controlling the robot to complete assembly according to the pose obtained in S3.
Preferably, the S1 specifically includes:
S11, acquiring a three-dimensional point cloud to be processed;
S12, randomly selecting three points from the three-dimensional point cloud, and calculating the plane formed by the three points;
S13, traversing all points in the three-dimensional point cloud, calculating the distance from each point to the plane obtained in step S12, and counting, for a preset threshold, the number of points whose distance to the plane is smaller than the threshold, namely the number of inner points;
S14, repeating S12-S13 within the preset number of iterations, and taking the plane with the largest number of inner points as the optimal plane;
and S15, traversing all the points in the three-dimensional point cloud, and keeping the points whose distance to the optimal plane is smaller than the threshold to form a plane point cloud.
Preferably, the S2 specifically includes:
S21, reading the plane point cloud obtained through the S1 processing;
S22, calculating the centroid of the plane point cloud;
S23, constructing a covariance matrix from the centroid and the plane point cloud, and performing QR decomposition to obtain three real eigenvalues;
S24, arranging the eigenvalues from large to small, the three corresponding eigenvectors being the directions of the xyz axes of the plane point cloud pose coordinate system;
S25, establishing the pose coordinate system of the plane point cloud by taking the centroid as the origin and the coordinate axes generated in S24 as the directions;
S26, acquiring the maximum and minimum values of the plane point cloud in the xyz directions under the pose coordinate system of the plane point cloud, and creating a bounding box;
S27, calculating the pose of the plane of the assembled part relative to a camera coordinate system through the vertex information and the center information of the bounding box;
and S28, shifting the pose obtained in S27 along a direction parallel to the plane according to the specific position of the assembly position on the plane, to obtain the pose of the assembly position relative to the camera coordinate system.
Preferably, the S3 specifically includes:
S31, performing hand-eye calibration on the camera and the robot to obtain the pose of the camera coordinate system in the flange plate coordinate system or the base coordinate system of the robot;
and S32, calculating the pose of the workpiece relative to the robot base coordinate system from the pose of the assembly position relative to the camera coordinate system obtained in S28 and the pose obtained in S31.
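As a minimal illustration of this S31-S32 transformation chain for an eye-in-hand configuration, the sketch below composes the three homogeneous transforms; the variable names are assumptions for illustration, not the patent's notation:

```python
import numpy as np

def assembly_pose_in_base(T_flange_base, T_cam_flange, T_assembly_cam):
    """Chain base <- flange (robot readout) <- camera (hand-eye result)
    <- assembly position (vision result); all inputs are 4x4 homogeneous."""
    return T_flange_base @ T_cam_flange @ T_assembly_cam
```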
Preferably, the S4 specifically includes:
S41, moving the robot so that the workpiece coordinate system of the assembling part coincides with the coordinate system of the assembly-position pose of the assembled part in the robot base coordinate system;
and S42, in order to prevent the assembling part and the assembled part from colliding during assembly, first moving the workpiece coordinate system of the assembling part to a position offset by a certain distance from the assembly-position coordinate system of the assembled part along the axis of the assembly direction, and then moving linearly along the assembly direction so that the two coordinate systems coincide.
In another aspect, the present invention provides an automatic assembling system based on a plane-like workpiece positioning algorithm, including: a computer-readable storage medium and a processor;
the computer readable storage medium is used for storing executable instructions;
the processor is used for reading the executable instructions stored in the computer readable storage medium and executing the automatic assembly method based on the plane type workpiece positioning algorithm.
Through the technical scheme of the invention, the labor cost of the assembly link in the production process can be reduced. Compared with existing image-recognition-based assembly techniques, the three-dimensional point cloud data used by the method are unaffected by factors such as rotation, illumination and scale; compared with conventional point-cloud-based positioning algorithms, the method avoids a complex template matching process, is computationally efficient, and achieves a high recognition rate and strong robustness.
Drawings
FIG. 1 is a schematic flow chart of an automated assembly method based on a planar workpiece positioning algorithm according to the present invention;
FIG. 2 is a schematic view of the robot at the photographing position;
FIG. 3 is a schematic view of the robot at the assembly position.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. In addition, the technical features involved in the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
The invention provides a robot automatic assembly method for planar workpieces, which comprises the following steps:
S1, extracting plane point clouds in the assembled parts;
S2, creating a bounding box according to the plane point cloud obtained in S1, and calculating the pose of the assembly position of the assembled part relative to a camera coordinate system;
S3, calculating the pose of the assembly position of the assembled plane part relative to the robot base coordinate system according to the pose of the assembly position of the assembled part relative to the camera coordinate system;
and S4, controlling the robot to complete assembly according to the pose obtained in S3.
Examples
As shown in fig. 1, the present invention provides a robot automatic assembly method for planar parts which, following the procedure above, automates assembly processes that involve a planar assembly surface on the basis of robot vision. The method specifically comprises the following steps:
Deploying the system: the system comprises a camera system with three-dimensional measurement capability, a robot and a computer. Deployment involves TCP (tool center point) calibration of the male-head tool and robot-camera hand-eye calibration, with the following specific steps:
1) Calibrating the male-head tool coordinate system: this embodiment uses a four-point method, driving the robot so that the TCP coincides with a fixed point in space from four different directions. Since the coordinates of the TCP in the robot base coordinate system are equal across the four poses, a system of equations can be built to solve the translation component between the tool coordinate system and the end-flange coordinate system. The construction starts from:

$$T_{tcp}^{base} = T_{flange}^{base} \, T_{tcp}^{flange}$$

where $T_{tcp}^{base}$ is the pose of the TCP in the robot base coordinate system, $T_{flange}^{base}$ is the pose of the end flange in the robot base coordinate system, and $T_{tcp}^{flange}$ is the pose of the TCP in the end-flange coordinate system. Writing each $T$ as a combination of a rotation $R$ and a translation $t$ gives:

$$\begin{bmatrix} R_{tcp}^{base} & t_{tcp}^{base} \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} R_{flange}^{base} & t_{flange}^{base} \\ 0 & 1 \end{bmatrix} \begin{bmatrix} R_{tcp}^{flange} & t_{tcp}^{flange} \\ 0 & 1 \end{bmatrix}$$

Expanding the translation part yields:

$$t_{tcp}^{base} = R_{flange}^{base} \, t_{tcp}^{flange} + t_{flange}^{base}$$

Because the TCP position in the base coordinate system, $t_{tcp}^{base}$, is fixed, equating it across the four robot poses $(R_i, t_i)$, $i = 1, \dots, 4$, gives three independent equations of the form:

$$R_i \, t_{tcp}^{flange} + t_i = R_j \, t_{tcp}^{flange} + t_j, \qquad i \neq j$$

Writing $x = t_{tcp}^{flange}$ for the translation component to be calibrated, this rearranges to:

$$(R_i - R_j) \, x = t_j - t_i$$

Stacking the pose pairs yields a linear system of the form $Ax = b$, from which the translation component $t_{tcp}^{flange}$ can be solved by singular value decomposition.
It is worth noting that the male-head tool used in this embodiment is parallel to the robot flange plate, so the TCP has only a translation component and no rotation component. For embodiments with a rotation component, an eight-point method may be adopted to obtain the TCP; since TCP calibration is not the focus of protection of the present invention, the details of the eight-point method are not repeated here.
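A minimal numerical sketch of the four-point solve is given below, assuming the four flange poses are available as rotation matrices Rs and translations ts in the base frame; it is an illustration under those assumptions, not the patent's implementation:

```python
import numpy as np

def solve_tcp_translation(Rs, ts):
    """Stack (R_i - R_j) t = t_j - t_i over pose pairs and solve A t = b
    in the least-squares sense (SVD-based) for the TCP translation in the
    flange frame."""
    A_rows, b_rows = [], []
    for i in range(len(Rs)):
        for j in range(i + 1, len(Rs)):
            A_rows.append(Rs[i] - Rs[j])      # 3x3 block per pose pair
            b_rows.append(ts[j] - ts[i])      # matching 3-vector
    A = np.vstack(A_rows)
    b = np.concatenate(b_rows)
    t_tcp, *_ = np.linalg.lstsq(A, b, rcond=None)
    return t_tcp
```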
2) Hand-eye calibration: this embodiment adopts an eye-in-hand robot-camera system, i.e., the camera is fixed on the robot flange plate, and solves the pose of the camera coordinate system in the robot flange coordinate system by a matrix vectorization operator method. Specifically, the robot drives the camera to several positions and captures images of a calibration board fixed in place; from the robot poses read at each station and the camera extrinsic parameters computed from the calibration images, a series of nonlinear equations can be constructed.
These nonlinear equations are linearized by the matrix vectorization operator method and then solved by least squares. Because a least-squares solution generally does not satisfy the orthogonality constraint of a transformation matrix, the solution is further refined by ‖R‖ = 1 normalization and QR decomposition, yielding the pose of the camera coordinate system in the robot flange coordinate system.
The system adopted in this embodiment is eye-in-hand; there is also the eye-to-hand configuration, in which the camera sits on a fixed support and does not move with the robot, and its calibration is similar. Besides the matrix vectorization operator method, other hand-eye calibration algorithms exist; as hand-eye calibration is not the focus of the present invention, it is not described in further detail here.
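As a hedged sketch of one vectorization-based solve: the classical AX = XB hand-eye equation (A, B being relative flange and camera motions) linearizes with Kronecker products. The orthogonality repair here projects onto SO(3) with an SVD, standing in for the ‖R‖ = 1 and QR refinement described above; all names are assumptions:

```python
import numpy as np

def solve_hand_eye(As, Bs):
    """Solve AX = XB for X, the camera pose in the flange frame.
    As/Bs: lists of 4x4 relative flange/camera motions.
    Uses column-major vec, so vec(M N P) = kron(P.T, M) @ vec(N)."""
    I3 = np.eye(3)
    rows = []
    for A, B in zip(As, Bs):
        RA, RB = A[:3, :3], B[:3, :3]
        # RA RX = RX RB  =>  (kron(I, RA) - kron(RB.T, I)) vec(RX) = 0
        rows.append(np.kron(I3, RA) - np.kron(RB.T, I3))
    _, _, Vt = np.linalg.svd(np.vstack(rows))
    RX = Vt[-1].reshape(3, 3, order='F')          # null-space vector
    U, _, Vt2 = np.linalg.svd(RX)                 # project to nearest rotation
    RX = U @ Vt2
    if np.linalg.det(RX) < 0:
        RX = -RX
    # Translation: (RA - I) tX = RX tB - tA, stacked over all motions
    C = np.vstack([A[:3, :3] - I3 for A in As])
    d = np.concatenate([RX @ B[:3, 3] - A[:3, 3] for A, B in zip(As, Bs)])
    tX, *_ = np.linalg.lstsq(C, d, rcond=None)
    X = np.eye(4)
    X[:3, :3], X[:3, 3] = RX, tX
    return X
```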
S1, moving the robot to the photographing position, shown in fig. 2, and acquiring a three-dimensional point cloud of the assembled part with the three-dimensional camera;
S2, extracting the plane point cloud of the assembled part and removing noise points: since the workpiece to be identified is planar, the target point cloud can be extracted quickly by a plane detection algorithm. The specific flow of the plane detection algorithm is as follows:
1) Read the measurement point cloud $P_{data}$.
2) Randomly select three points from $P_{data}$ and calculate the plane formed by them.
3) Traverse each point $p_i$ in $P_{data}$, calculate its distance $d_i$ to the plane, and, given a preset distance threshold $d_{thres}$, count the number $N_i$ of points satisfying $d_i < d_{thres}$ (the inliers).
4) Given an iteration count $Iter_{num}$, repeat steps 2)-3) and keep the plane $Plane_{best}$ with the largest inlier count $N_i$.
5) Traverse each point $p_i$ in $P_{data}$ again; the points satisfying $d_i < d_{thres}$ with respect to $Plane_{best}$ form the plane point cloud $P_{plane}$, and the remaining points form $P_{retain}$, which serves as the input point cloud $P_{data}$ for the next round of plane detection.
In actual operation, because the plane to be located may not be the plane with the largest number of points, plane detection must be run several times to extract multiple planes, and the required plane is finally selected using the planes' length and width constraints; a code sketch is given below.
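The following is a minimal RANSAC-style sketch of steps 1)-5); d_thres and iter_num mirror the text's parameters, while the default values and function names are assumptions:

```python
import numpy as np

def detect_plane(P, d_thres=1.0, iter_num=1000, rng=None):
    """One round of plane detection: return (plane cloud, remaining cloud).
    Run repeatedly on the remainder to extract several planes."""
    rng = rng or np.random.default_rng()
    best = None
    for _ in range(iter_num):
        p0, p1, p2 = P[rng.choice(len(P), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-9:               # degenerate (collinear) sample
            continue
        n /= np.linalg.norm(n)
        inliers = np.abs((P - p0) @ n) < d_thres   # point-to-plane distances
        if best is None or inliers.sum() > best.sum():
            best = inliers
    if best is None:
        raise ValueError("no valid plane hypothesis found")
    return P[best], P[~best]
```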
For the extracted plane point cloud, outliers that may remain in the cloud would corrupt the subsequent minimum-bounding-box computation, so they must be removed. Outlier removal is placed after plane detection because, in heavily noisy scenes, an outlier-removal algorithm alone struggles to remove all outliers, whereas plane detection filters out most of them in advance and hands the outlier-removal stage a much cleaner cloud. The specific process is as follows:
1) Read the plane point cloud $P_{plane}$.
2) Build a kd-tree on $P_{plane}$.
3) Traverse each point $p_i$ in $P_{plane}$, find its $K$ nearest neighbor points $p_j$, and calculate the judgment distance from $p_i$ to its neighbors:

$$d_i = \frac{1}{K} \sum_{j=1}^{K} \left\| p_i - p_j \right\|$$

4) Compute the average judgment distance $\bar{d}$ and the standard deviation $\sigma$ over all points, and from them the distance threshold $d_{thres} = \bar{d} + \alpha \sigma$, where $\alpha$ is a threshold coefficient.
5) Traverse $P_{plane}$ again; points satisfying $d_i > d_{thres}$ are judged to be outliers and removed.
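A short sketch of this statistical removal, assuming SciPy's cKDTree for the neighbor search; the default values for K and α are illustrative:

```python
import numpy as np
from scipy.spatial import cKDTree

def remove_outliers(P, K=30, alpha=1.0):
    """Drop points whose mean distance to their K nearest neighbors
    exceeds d_mean + alpha * sigma."""
    dists, _ = cKDTree(P).query(P, k=K + 1)   # column 0 is the point itself
    d = dists[:, 1:].mean(axis=1)             # mean distance to K neighbors
    return P[d <= d.mean() + alpha * d.std()]
```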
S3, minimum bounding box algorithm: for the denoised plane point cloud, in order to determine its pose relative to the camera coordinate system, the invention uses an OBB (oriented bounding box) to calculate the local reference frame of the point cloud under the camera. The specific flow is as follows:
1) Read the plane point cloud $P_{plane}$.
2) Calculate the point cloud centroid $p_{centroid}$, where $N$ is the number of points in the cloud:

$$p_{centroid} = \frac{1}{N} \sum_{i=1}^{N} p_i$$

3) From $p_{centroid}$ and the point cloud, construct the covariance matrix $C(p_{centroid})$:

$$C(p_{centroid}) = \frac{1}{N} \sum_{i=1}^{N} (p_i - p_{centroid})(p_i - p_{centroid})^{T}$$

4) Perform QR decomposition on the covariance matrix $C(p_{centroid})$ to obtain three real eigenvalues $\lambda_1 \geq \lambda_2 \geq \lambda_3$ (from large to small) and the corresponding eigenvectors $e_1, e_2, e_3$; the eigenvectors are the directions of the x, y, z axes of the point cloud coordinate system.
5) Using the centroid $p_{centroid}$ and the point cloud coordinate axes, transform the plane point cloud to the origin by the inverse transformation so that the point cloud coordinate system coincides with the origin coordinate system; denote the transformed cloud $P_{plane}'$.
6) Obtain the maxima and minima of $P_{plane}'$ in the x, y, z directions, $X_{max}, X_{min}, Y_{max}, Y_{min}, Z_{max}, Z_{min}$, from which the eight vertex coordinates of the bounding box, the coordinates of the center point $p_{center}'$, and the length, width and height information follow.
7) Transform the bounding-box vertices and center back to the original measurement coordinate system; denote the bounding-box center $p_{center}$.
By calculating the minimum bounding box of the measured cloud, the translation component $t_O^C$ and rotation component $R_O^C$ of the point cloud coordinate system $O$ relative to the camera coordinate system $C$ are obtained, i.e., the transformation matrix

$$T_O^C = \begin{bmatrix} R_O^C & t_O^C \\ 0 & 1 \end{bmatrix}$$

In addition, the length and width of the bounding box allow the plane required by the invention to be screened out from several candidate plane point clouds. For any detected plane, the box length and width are $l_i = X_{max} - X_{min}$ and $w_i = Y_{max} - Y_{min}$. Given length and width thresholds $l_{thres}, w_{thres}$, the plane satisfying the constraints $|l_i - l_{truth}| < l_{thres}$ and $|w_i - w_{truth}| < w_{thres}$ is the plane required by the invention, where the true length and width $l_{truth}, w_{truth}$ of the required plane can be obtained by measurement.
8) The coordinate system solved above lies at the center of the assembled part's plane. For parts whose assembly position is not at the plane center, an offset is applied along the plane according to the actual assembly position, giving the pose of the assembly position relative to the camera coordinate system.
9) From the pose of the camera coordinate system in the robot flange coordinate system, obtained in the hand-eye calibration step, and the pose of the assembly position in the camera coordinate system from the previous step, calculate the pose of the assembly position relative to the robot base coordinate system, $T_{assembly}^{base}$. A code sketch of steps 1)-7) above is given below.
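A compact sketch of steps 1)-7) follows, using an eigen-decomposition of the covariance matrix in place of the QR route named above; variable names and the right-handedness fix are assumptions:

```python
import numpy as np

def obb_pose(P_plane):
    """Return (T, length, width): the 4x4 pose of the plane cloud's OBB frame
    in the camera frame, plus the box extents used for plane screening."""
    centroid = P_plane.mean(axis=0)
    C = np.cov((P_plane - centroid).T)           # 3x3 covariance matrix
    evals, evecs = np.linalg.eigh(C)             # eigenvalues in ascending order
    R = evecs[:, ::-1]                           # columns e1,e2,e3 for l1>=l2>=l3
    if np.linalg.det(R) < 0:
        R[:, 2] = -R[:, 2]                       # keep a right-handed frame
    Q = (P_plane - centroid) @ R                 # cloud expressed in the box frame
    mins, maxs = Q.min(axis=0), Q.max(axis=0)
    center = centroid + R @ ((mins + maxs) / 2)  # box center back in camera frame
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, center
    return T, maxs[0] - mins[0], maxs[1] - mins[1]

# Screening: keep the plane with abs(length - l_truth) < l_thres and
# abs(width - w_truth) < w_thres, with l_truth/w_truth measured beforehand.
```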
S4, controlling the robot to complete the assembly: during assembly, the male-head TCP pose $T_{tcp}^{base}$ is made to coincide with $T_{assembly}^{base}$, which geometrically means that the male head and the female head are mated together, as shown in fig. 3. However, if the robot were commanded to move directly from the photographing station to $T_{assembly}^{base}$, the male head and the female head would collide during the motion. Therefore, when designing the path program, the male workpiece coordinate system is first moved to a position offset from the female assembly coordinate system by a certain distance along the axis of the assembly direction, and then moved linearly along the assembly direction until the male and female assembly coordinate systems coincide. In this embodiment, the assembly direction is perpendicular to the detected plane, so it suffices to take the assembly direction as the z-axis of $T_{assembly}^{base}$.
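The sketch below derives the two flange targets implied by this path design, assuming the TCP pose in the flange frame, T_tcp_flange, from the earlier calibration; the back-off distance and names are illustrative:

```python
import numpy as np

def flange_targets(T_assembly_base, T_tcp_flange, back_off=0.05):
    """Return (approach, final) flange poses: back off along the assembly
    pose's own z-axis first, then close linearly along that axis."""
    pre = np.eye(4)
    pre[2, 3] = -back_off                      # retreat along local z (meters)
    inv_tcp = np.linalg.inv(T_tcp_flange)      # flange = tcp_target @ inv(tcp)
    return (T_assembly_base @ pre) @ inv_tcp, T_assembly_base @ inv_tcp
```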
It will be understood by those skilled in the art that the foregoing is only a preferred embodiment of the present invention, and is not intended to limit the invention, and that any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (6)

1. A robot automatic assembly method for plane type workpieces, characterized by comprising the following steps:
S1, extracting plane point clouds in the assembled parts;
S2, creating a bounding box according to the plane point cloud obtained in S1, and calculating the pose of the assembly position of the assembled part relative to a camera coordinate system;
S3, calculating the pose of the assembly position of the assembled plane part relative to the base coordinate system of the robot according to the pose of the assembly position of the assembled part relative to the camera coordinate system;
and S4, controlling the robot to complete assembly according to the pose obtained in S3.
2. The method according to claim 1, wherein the S1 specifically includes:
S11, acquiring a three-dimensional point cloud to be processed;
S12, randomly selecting three points from the three-dimensional point cloud, and calculating the plane formed by the three points;
S13, traversing all points in the three-dimensional point cloud, calculating the distance from each point to the plane obtained in step S12, and counting, for a preset threshold, the number of points whose distance to the plane is smaller than the threshold, namely the number of inner points;
S14, repeating S12-S13 within the preset number of iterations, and taking the plane with the largest number of inner points as the optimal plane;
and S15, traversing all the points in the three-dimensional point cloud, and keeping the points whose distance to the optimal plane is smaller than the threshold to form a plane point cloud.
3. The method according to claim 1, wherein the S2 specifically includes:
S21, reading the plane point cloud obtained through the S1 processing;
S22, calculating the centroid of the plane point cloud;
S23, constructing a covariance matrix from the centroid and the plane point cloud, and performing QR decomposition to obtain three real eigenvalues;
S24, arranging the eigenvalues from large to small, the three corresponding eigenvectors being the directions of the xyz axes of the plane point cloud pose coordinate system;
S25, establishing the pose coordinate system of the plane point cloud by taking the centroid as the origin and the coordinate axes generated in S24 as the directions;
S26, acquiring the maximum and minimum values of the plane point cloud in the xyz directions under the pose coordinate system of the plane point cloud, and creating a bounding box;
S27, calculating the pose of the plane of the assembled part relative to a camera coordinate system through the bounding box vertex information and center information;
and S28, shifting the pose obtained in S27 along a direction parallel to the plane according to the specific position of the assembly position on the plane, to obtain the pose of the assembly position relative to the camera coordinate system.
4. The method according to claim 3, wherein the S3 specifically includes:
S31, performing hand-eye calibration on the camera and the robot to obtain the pose of the camera coordinate system in the flange plate coordinate system or the base coordinate system of the robot;
and S32, calculating the pose of the workpiece relative to the robot base coordinate system from the pose of the assembly position relative to the camera coordinate system obtained in S28 and the pose obtained in S31.
5. The method according to claim 1, wherein the S4 specifically includes:
S41, moving the robot so that the workpiece coordinate system of the assembling part coincides with the coordinate system of the assembly-position pose of the assembled part in the robot base coordinate system;
and S42, moving the workpiece coordinate system of the assembling part to a position offset by a certain distance from the assembly-position coordinate system of the assembled part along the axis of the assembly direction, and then moving linearly along the assembly direction so that the coordinate systems of the assembling part and the assembled part coincide.
6. An automated assembly system based on a planar workpiece positioning algorithm, comprising: a computer-readable storage medium and a processor;
the computer-readable storage medium is used for storing executable instructions;
the processor is used for reading executable instructions stored in the computer-readable storage medium and executing the automatic assembly method based on the plane-type workpiece positioning algorithm according to any one of claims 1 to 5.
CN202210332377.1A 2022-03-30 2022-03-30 Automatic assembly method and system based on plane type workpiece positioning algorithm Pending CN114742883A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210332377.1A CN114742883A (en) 2022-03-30 2022-03-30 Automatic assembly method and system based on plane type workpiece positioning algorithm

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210332377.1A CN114742883A (en) 2022-03-30 2022-03-30 Automatic assembly method and system based on plane type workpiece positioning algorithm

Publications (1)

Publication Number Publication Date
CN114742883A 2022-07-12

Family

ID=82280162

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210332377.1A Pending CN114742883A (en) 2022-03-30 2022-03-30 Automatic assembly method and system based on plane type workpiece positioning algorithm

Country Status (1)

Country Link
CN (1) CN114742883A (en)


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115922404A (en) * 2023-01-28 2023-04-07 中冶赛迪技术研究中心有限公司 Disassembling method, disassembling system, electronic equipment and storage medium
CN115922404B (en) * 2023-01-28 2024-04-12 中冶赛迪技术研究中心有限公司 Disassembling method, disassembling system, electronic equipment and storage medium
CN117523206A (en) * 2024-01-04 2024-02-06 南京航空航天大学 Automatic assembly method based on cross-source point cloud and multi-mode information
CN117523206B (en) * 2024-01-04 2024-03-29 南京航空航天大学 Automatic assembly method based on cross-source point cloud and multi-mode information

Similar Documents

Publication Publication Date Title
CN114742883A (en) Automatic assembly method and system based on plane type workpiece positioning algorithm
CN111805051B (en) Groove cutting method, device, electronic equipment and system
CN111775146A (en) Visual alignment method under industrial mechanical arm multi-station operation
CN112070818A (en) Robot disordered grabbing method and system based on machine vision and storage medium
CN106651894B (en) Automatic spraying system coordinate transformation method based on point cloud and image matching
CN113042939B (en) Workpiece weld joint positioning method and system based on three-dimensional visual information
CN112669385B (en) Industrial robot part identification and pose estimation method based on three-dimensional point cloud features
CN115147437B (en) Intelligent robot guiding machining method and system
CN111476841A (en) Point cloud and image-based identification and positioning method and system
CN110555878B (en) Method and device for determining object space position form, storage medium and robot
Lin et al. Robotic grasping with multi-view image acquisition and model-based pose estimation
CN113894481B (en) Welding pose adjusting method and device for complex space curve welding seam
CN111598946B (en) Object pose measuring method and device and storage medium
CN111360821A (en) Picking control method, device and equipment and computer scale storage medium
CN111815706A (en) Visual identification method, device, equipment and medium for single-article unstacking
CN112109072B (en) Accurate 6D pose measurement and grabbing method for large sparse feature tray
CN110992416A (en) High-reflection-surface metal part pose measurement method based on binocular vision and CAD model
CN112720449A (en) Robot positioning device and control system thereof
CN116604212A (en) Robot weld joint identification method and system based on area array structured light
Liang et al. Rgb-d camera based 3d object pose estimation and grasping
CN114939891B (en) 3D grabbing method and system for composite robot based on object plane characteristics
CN113927606B (en) Robot 3D vision grabbing method and system
CN116012442A (en) Visual servo method for automatically correcting pose of tail end of mechanical arm
CN113963129A (en) Point cloud-based ship small component template matching and online identification method
CN113932712A (en) Melon and fruit vegetable size measuring method based on depth camera and key points

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination