CN112862878A - Mechanical arm trimming method based on 3D vision - Google Patents
- Publication number
- CN112862878A (application CN202110168422.XA)
- Authority
- CN
- China
- Prior art keywords
- point cloud
- model
- mechanical arm
- data
- target
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/521—Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J11/00—Manipulators not otherwise provided for
- B25J11/005—Manipulators for mechanical processing tasks
- B25J11/0065—Polishing or grinding
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1656—Programme controls characterised by programming, planning systems for manipulators
- B25J9/1664—Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1679—Programme controls characterised by the tasks executed
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1694—Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
- B25J9/1697—Vision controlled systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/55—Depth or shape recovery from multiple images
- G06T7/593—Depth or shape recovery from multiple images from stereo images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
Abstract
The invention discloses a 3D-vision-based mechanical arm fettling (trimming) method comprising the following steps: point cloud data acquisition, target point cloud extraction, point cloud registration, surface reconstruction, and mechanical arm trajectory planning. In the acquisition step, a fused depth camera collects multi-angle scene point cloud data containing the target workpiece, and the target is then extracted from the scene point clouds. The multi-view target point clouds are registered to build a full-view point cloud model, whose surface is then reconstructed to obtain a more complete model close to the real object. In the trajectory-planning step, trajectory points are planned from the operation-point coordinates on the reconstructed model and the corresponding objective function, then verified and executed in a simulation environment; after the simulated execution finishes, the trajectory data are sent to the mechanical arm controller for the fettling operation. The invention improves the degree of automation of system operation as well as its stability and reliability.
Description
Technical Field
The invention relates to the technical field of machine vision and industrial mechanical arms, in particular to a mechanical arm trimming method based on 3D vision.
Background
With the transformation and upgrading of manufacturing and continued advances in robot control and perception technology, more and more robotic equipment is being applied in industrial manufacturing, greatly improving working efficiency while also replacing humans for work in dangerous environments. The industrial mechanical arm is one of the main robotic devices on industrial production lines: a multi-degree-of-freedom device built on electronics, mechanics, control, and other technologies. Mechanical arms are widely used in sorting, palletizing, handling, paint spraying, welding, and similar work.
Machine vision is one of the most important perception technologies applied in industry. Early monocular-vision-based perception was limited to simple recognition and tracking. Today, depth cameras based on binocular stereo imaging, infrared structured light, and TOF (Time of Flight) technologies have sprung up like bamboo shoots after spring rain and entered many fields, raising robot visual perception to another level.
Before robotic automation, production lines in the traditional ceramic sanitary-ware industry required workers to grind and polish the biscuit by hand with a grinding tool according to strict measurement data. The work is difficult and inefficient, and because machining requirements differ greatly between batches of ceramic biscuits, the difficulty and intensity of manual operation are further increased. Even under ideal conditions, the effect and quality of manual grinding are not uniform within the same batch of workpieces. Moreover, the robotic fettling schemes introduced in most current factories obtain the robot's grinding trajectory by teaching, which has two defects. First, a taught grinding trajectory is relatively fixed; with no room for adjustment, the final grinding quality suffers greatly. Second, the required trajectories differ greatly between products and between batches of the same product, so the teaching scheme must be changed constantly, which is very tedious.
Therefore, the invention provides a 3D-vision-based robot fettling method that adds 3D visual perception, combines it with a mechanical arm to raise the degree of automation on the production line, and plans paths automatically by building a workpiece model.
Disclosure of Invention
The invention provides a mechanical arm fettling method based on 3D vision, aiming at overcoming the defects in the prior art.
The method performs multi-view 3D data acquisition and three-dimensional reconstruction of different kinds of workpieces to obtain a reconstructed model. It adapts well to varied targets, plans a reasonable mechanical arm trajectory according to the reconstructed model and the machining target, and, after simulation verification, transmits the trajectory to the physical mechanical arm for machining execution, improving the safety and reliability of the fettling process.
A mechanical arm fettling method based on 3D vision comprises the following steps:
step 1: and collecting scene point cloud data containing multi-angle target biscuit workpieces. The method comprises the following steps of collecting point cloud data by using a fusion binocular depth camera, and obtaining depth data by the sensor based on a binocular stereo imaging principle and an infrared structured light distance measuring principle. The method comprises the steps of placing a target workpiece to be scanned on a rotary platform with a controllable rotation angle, forming a fixed relative position between a depth camera and the rotary platform, placing the workpiece on the rotary platform at a certain initial position, and performing stepping rotation by taking the fixed angle as increment to obtain multi-angle information of a target biscuit. The depth camera scans and records each angle in the scene and then transmits the angle back to the computer, and the point cloud data is stored in a PCD file form according to the time sequence, named as View1, View2 and View3
Step 2: and extracting target point cloud. The method comprises the steps of 1, acquiring multi-View scene point clouds View1, View2 and View 3. The method comprises the following specific steps:
Step 2-1: set the ROI parameters according to the relative positions of the depth camera and the rotating platform in step 1, and perform ROI region segmentation and screening of the spatial points in the complex scene point clouds View1, View2, View3, ..., ViewN. This preliminarily segments a small scene point cloud containing only three parts: the ground, the rotating platform, and the workpiece.
Step 2-2: and expressing the point cloud data set in the small scene point cloud obtained in the last step as follows:
A{a1,a2,a3,...an};
Plane fitting is performed in point cloud set A with the random sample consensus (RANSAC) algorithm: RANSAC fits the plane in the small scene and its plane parameters, dividing the points of A into points on the plane and points not on the plane. The indices of both groups are recorded, and de-planing removes the points belonging to the plane. Once RANSAC yields the plane parameters, the position of the plane in the small scene point cloud is determined; the height of the rotating platform, measured as H, then allows all points within height H of the plane to be removed, eliminating the rotating platform data and leaving the preliminary target workpiece data.
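The RANSAC de-planing step can be sketched as follows. This is an illustrative NumPy implementation, not the patent's PCL pipeline; the synthetic scene, distance threshold, and iteration count are assumed values:

```python
import numpy as np

def ransac_plane(points, iters=200, tol=0.01, seed=None):
    """Fit a plane n.x + d = 0 to a cloud with RANSAC.
    Returns (n, d, inlier_mask); `tol` is the inlier distance threshold."""
    rng = np.random.default_rng(seed)
    best_count, best = -1, (None, None, None)
    for _ in range(iters):
        # Sample 3 distinct points and form a candidate plane.
        p0, p1, p2 = points[rng.choice(len(points), size=3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-12:          # degenerate (collinear) sample
            continue
        n = n / norm
        d = -n @ p0
        mask = np.abs(points @ n + d) < tol   # point-to-plane distances
        if mask.sum() > best_count:
            best_count, best = mask.sum(), (n, d, mask)
    return best

# Synthetic scene: a ground plane z = 0 plus an off-plane "workpiece" cluster.
rng = np.random.default_rng(0)
ground = np.c_[rng.uniform(-1, 1, (500, 2)), np.zeros(500)]
workpiece = rng.normal([0.0, 0.0, 0.5], 0.05, (100, 3))
scene = np.vstack([ground, workpiece])

n, d, inliers = ransac_plane(scene, seed=1)
target = scene[~inliers]      # de-planed cloud: essentially the workpiece
```

Removing points within the measured platform height H of the fitted plane (not shown) would then strip the platform the same way.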
Step 2-3: the preliminary target workpiece data obtained in the step 2-2 has outliers generated by insufficient efficiency in the algorithm and surface burr noise and edge noise left when the 3D vision sensor acquires data, so that errors are generated in the subsequent steps. Therefore, a statistical analysis filter (statistical outlierremove) is used, and the above results are used as input to carry out filtering to remove outlier and surface outlier noise. Finally, multi-View target workpiece point cloud data Obj1, Obj2, Obj3,. ObjN are extracted from View1, View2 and View 3.
Step 3: perform multi-view target point cloud registration. The process registers Obj1, Obj2, Obj3, ..., ObjN obtained in step 2-3 pairwise and splices them globally to obtain a complete model.
Step 3-1: and establishing a Gaussian mixture model for the point clouds of two adjacent visual angles. Selecting two adjacent view point clouds needing to be registered from Obj1, Obj2 and Obj 3. The gaussian continuous probability density distribution function is known as:
where μ is the mean vector, Σ is the covariance matrix, and d is the dimensionality of the data.
The Gaussian mixture model is established according to the following criteria:
1) The number of Gaussian components in the Gaussian mixture model equals the number of points in the point cloud data set.
2) The mean vector of each Gaussian component in the mixture is set to the spatial position of its point.
3) All Gaussian components in the Gaussian mixture model share the same covariance matrix.
Finally, all the Gaussian components described above are combined with the same weight, which gives:

gmm(x) = Σ_{i=1}^{N} w_i f(x; μ_i, Σ)    (2)

where w_i are the (equal) weight coefficients of the Gaussian mixture model, the sum runs over the N points, and Σ inside f denotes the shared covariance matrix. Gaussian mixture models gmm(S) and gmm(M) are established for the Scene and the Model according to the above criteria, where gmm denotes the functional relation of formula (2), the input S denotes the Scene point cloud, and the input M denotes the Model point cloud.
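Criteria 1)-3) can be sketched numerically. The following is a hypothetical NumPy version assuming uniform weights w_i = 1/N and an isotropic shared covariance σ²I; σ is an assumed parameter, not taken from the patent:

```python
import numpy as np

def gmm_density(x, points, sigma=0.05):
    """Evaluate the mixture density at x: one isotropic Gaussian per cloud
    point (mean = the point, shared covariance sigma^2 * I, weights 1/N)."""
    d = points.shape[1]
    diff = points - x                         # (N, d) differences
    sq = np.einsum('nd,nd->n', diff, diff)    # squared distances
    norm_const = (2 * np.pi * sigma**2) ** (d / 2)
    return np.mean(np.exp(-sq / (2 * sigma**2))) / norm_const

# The density peaks at cloud points and decays away from them.
pts = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]])
near = gmm_density(np.array([0., 0., 0.]), pts)
far = gmm_density(np.array([5., 5., 5.]), pts)
```

This turns each discrete point cloud into a smooth density, which is what makes the continuous objective in step 3-3 differentiable.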
Step 3-2: establishing a transformation matrix between two point clouds with a parameter theta, wherein the Model point cloud after parameter transformation is expressed as Transform (M, theta), and a Gaussian mixture Model of the Model point cloud can be expressed as gmm (M, theta), wherein the Transform represents a function for performing corresponding rigid body transformation according to the transformation matrix with the parameter theta;
Step 3-3: integrate the squared difference of the two Gaussian mixture densities to establish the discrepancy objective function:

∫ (gmm(S) - gmm(Transform(M, θ)))² dx    (3)
Take the fixed rotation parameter used in step 1 as the initial value of the parameter θ, and perform iterative optimization with the Gauss-Newton algorithm to obtain the parameter value at which the objective function is minimal; the transformation matrix T is then computed from this parameter value.
Step 3-4: Obj1 is taken as the registration reference, and the coordinate system of Obj1 as the reference coordinate system. Following step 3-3, registering Obj1 and Obj2 (Obj1 as Scene, Obj2 as Model) yields the transformation matrix T12, and Obj2 is transformed into the reference coordinate system by T12. Gaussian-mixture-model registration of Obj2 and Obj3 yields the transformation matrix T23, and Obj3 is transformed into the reference coordinate system by T12*T23. Registration is computed for each adjacent pair of viewing angles in turn, each view is transformed into the reference coordinate system by its accumulated transformation matrix, and the multi-view point clouds are spliced to obtain the full-view point cloud model.
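The chaining of pairwise transformations into the reference frame can be illustrated with homogeneous 4x4 matrices. A minimal NumPy sketch, assuming (for illustration only) pure z-axis rotations of 45 degrees as in the stepwise scan:

```python
import numpy as np

def rot_z(deg):
    """4x4 homogeneous rotation about the z axis by `deg` degrees."""
    t = np.radians(deg)
    c, s = np.cos(t), np.sin(t)
    T = np.eye(4)
    T[:2, :2] = [[c, -s], [s, c]]
    return T

def apply(T, pts):
    """Apply a 4x4 homogeneous transform to an (N, 3) cloud."""
    homo = np.c_[pts, np.ones(len(pts))]
    return (homo @ T.T)[:, :3]

# Pairwise results: T12 maps Obj2 into Obj1's frame, T23 maps Obj3 into Obj2's.
T12, T23 = rot_z(45), rot_z(45)
T13 = T12 @ T23          # Obj3 reaches the reference frame via T12 * T23

pts3 = np.array([[1.0, 0.0, 0.0]])
in_ref = apply(T13, pts3)   # a 90-degree rotation of pts3 about z
```

Each further view ObjK accumulates one more factor on the left-multiplied chain, exactly as described for T12*T23.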
Step 4: on the basis of the full-view point cloud model established in step 3, resample the workpiece surface and repair its holes: hole regions of the surface are reconstructed by high-order polynomial interpolation of the surrounding point data, and distortions of surface normal vectors and curvature features caused by registration are corrected. Point cloud surface reconstruction with Delaunay triangulation then fits the originally scattered point cloud to approximate the real surface, giving the reconstructed model.
Step 5: plan the trajectory of the mechanical arm. Based on the ROS operating system and the MoveIt! framework, a corresponding simulation environment is established in RViz to verify the correctness of the trajectory. The specific steps are as follows:
Step 5-1: the ROS MoveIt Setup Assistant is used to build the function packages required for mechanical arm programming and motion planning. Import the URDF structure description model file of the mechanical arm used in the fettling scene, configure the self-collision matrix (used to detect potential self-collisions during motion), create a planning group, select a kinematics solver, define the robot's initial default pose, and configure the robot end tool (e.g., a grinding tool).
Step 5-2: and introducing a reconstruction model of the target biscuit and a model file of the mechanical arm in an ROS-Rviz environment. And obtaining discrete non-connected processing Target points Target1, Target2 and Target3.. to avoid joint collision, avoid collision with a biscuit and the like, calculating a motion track point by using a Moveit frame according to the Target point, obtaining joint data of the mechanical arm by using a kinematic solver to obtain an inverse solution, and executing a blank repairing path in a simulated environment. And after the correctness is confirmed, the final joint point and the normal vector of the machining path of the mechanical arm are sent to a bottom controller of the mechanical arm to execute the operation.
Preferably, the end tool of step 5-1 is a grinding tool, a cutting tool, a welding tool, or a clamp.
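The machining-path normal vectors mentioned in step 5-2 are commonly estimated by local PCA of the reconstructed cloud; the patent does not name a specific estimator, so the following NumPy sketch (brute-force k-nearest neighbours, assumed k) is an illustration of one standard choice:

```python
import numpy as np

def estimate_normal(points, idx, k=12):
    """Estimate the surface normal at points[idx] as the eigenvector of the
    k-neighbourhood covariance with the smallest eigenvalue (local PCA)."""
    d2 = np.sum((points - points[idx]) ** 2, axis=1)
    nbrs = points[np.argsort(d2)[:k]]     # k nearest neighbours (incl. self)
    cov = np.cov(nbrs.T)
    eigvals, eigvecs = np.linalg.eigh(cov)
    return eigvecs[:, 0]                  # eigh sorts eigenvalues ascending

# Points sampled from the plane z = 0: the estimated normal should be +/- z.
rng = np.random.default_rng(0)
plane = np.c_[rng.uniform(-1, 1, (200, 2)), np.zeros(200)]
n = estimate_normal(plane, 0)
```

The sign of the normal is ambiguous from PCA alone; in practice it would be oriented toward the sensor or outward from the workpiece before being sent with the joint points.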
The invention has the advantages and positive effects that:
1. The invention has low use cost and good compatibility and can be deployed flexibly: models can be built for workpieces of different types and sizes and planning performed accordingly, meeting the needs of different kinds of users.
2. The invention performs trajectory planning on a three-dimensionally reconstructed model, generalizes well over modeling target data, and can plan machining trajectories for workpieces of different structures, types, and materials. This solves the problems of taught machining paths, which can only be adjusted for one specific product type, require manual operation, and have low precision.
3. The invention obtains real-time data from 3D visual sensing, using a fused depth camera and an active three-dimensional reconstruction algorithm; compared with the algorithmic defects and precision problems of passive image-based three-dimensional modeling, it offers higher precision, reliability, stability, and robustness.
4. The invention introduces the MoveIt! motion framework into mechanical arm motion planning. The framework adapts to arms of various models and brands, supports hot-plug use, and improves flexibility and convenience; combined with the simulation environment, it verifies the correctness of the algorithm and avoids unnecessary economic loss in machining.
Drawings
FIG. 1 is a block flow diagram of the present invention.
Fig. 2 shows the collected scene point cloud data.
FIG. 3-a is a small scene point cloud for ROI preliminary segmentation.
FIG. 3-b is a point cloud data of the target extraction result.
Fig. 4 shows the working process of the point cloud registration step.
FIG. 5 shows the three-dimensional reconstruction results.
Fig. 6 is an example of a two-dimensional code used for calibration.
FIG. 7 shows the moveit_setup_assistant operation interface.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings.
A 3D-vision-based mechanical arm fettling method, whose working flow is shown in FIG. 1, comprises five steps: data acquisition, target extraction, point cloud registration, surface reconstruction, and trajectory planning. FIG. 2 shows scene point cloud data captured during acquisition, containing the target workpiece, the working environment, and other clutter; the point cloud is huge and riddled with noise. FIG. 3 shows the target workpiece data at one viewing angle after target extraction. FIG. 4 shows the working process of the point cloud registration step. FIG. 5 is a three-view drawing of the reconstructed bathroom workpiece model.
With reference to fig. 1-5, the embodiments of the present invention are as follows:
step 1: and collecting scene point cloud data. Working scene point cloud data was acquired using an Intel Realsense D435 depth camera. The depth camera is placed on a tripod with the height of 1.8m, and is connected with a USB3.0 port of a computer by using an elongated USB3.0 transmission data line. The ceramic bathroom biscuit workpiece is placed on a rotating platform (the initial pose can be just opposite to a depth camera), ROI parameters between the rotating platform and the depth camera are measured and recorded, the rotating angle (taking 45 degrees as an example) of the rotating platform is set, the rotating angle is controlled by the rotating platform through a motor, the rotating angle is controlled by the rotating platform, every time the rotating platform rotates 45 degrees, the depth camera sends acquired data to a computer, and the workpiece rotates 360 degrees. The scene point cloud data collected at one of the angles is shown in fig. 2.
The depth camera and the computer exchange images and point cloud data over the USB cable. The computer uses Ubuntu 16.04 + ROS Kinetic + PCL as the software platform to collect, process, and present the data.
The realsense-ros function package supported by the ROS community publishes the point cloud information transmitted from the RealSense depth camera to the computer. The topic name corresponding to the point cloud can be looked up with rostopic and subscribed to; the published sensor_msgs/PointCloud2 messages are converted with the fromROSMsg conversion function in pcl_conversions of the PCL library into corresponding PCL-format point cloud data, which are exported through IO operations as PCD files stored on disk, named View1, View2, View3, ..., ViewN.
Step 2: and extracting target point cloud.
Step 2-1: ROI parameters are set according to the relative positions of the depth camera and the rotating platform in step 1,
performing ROI region segmentation screening on spatial points in complex scene point clouds View1, View2 and View3, namely selecting three-dimensional points of which x, y and z (three-dimensional space coordinates of each point) are located in an ROI region. And preliminarily segmenting small scene point clouds only comprising three parts of the ground, a rotating platform and a workpiece. The segmentation effect is shown in fig. 3-a.
Step 2-2: perform de-planing on the basis of step 2-1 to further remove the ground plane and the rotating platform. Express the point cloud data set of the small scene point cloud obtained in step 2-1 as:

A = {a1, a2, a3, ..., an};
Plane fitting is performed in point cloud set A with the random sample consensus (RANSAC) algorithm: RANSAC fits the plane in the small scene and its plane parameters, dividing the points of A into points on the plane and points not on the plane. The indices of both groups are recorded, and de-planing removes the points belonging to the plane. Once RANSAC yields the plane parameters, the position of the plane in the small scene point cloud is determined; the height of the rotating platform, measured as H, then allows all points within height H of the plane to be removed, eliminating the rotating platform data and leaving the preliminary target workpiece data.
Step 2-3: the preliminary target workpiece data obtained in the step 2-2 has outliers generated by insufficient efficiency in the algorithm and surface burr noise and edge noise left when the 3D vision sensor acquires data, so that errors are generated in the subsequent steps. Therefore, a statistical analysis filter (statistical outlierremove) is used, and the above results are used as input to carry out filtering to remove outlier and surface outlier noise. Finally, multi-View target workpiece point cloud data Obj1, Obj2, Obj3,. ObjN are extracted from View1, View2 and View 3. The extracted target workpiece data is shown in fig. 3-b.
Step 3: perform multi-view target point cloud registration. The process registers Obj1, Obj2, Obj3, ..., ObjN obtained in step 2-3 pairwise and splices them globally to obtain a complete model.
Step 3-1: and the point clouds of two adjacent visual angles of Obj1 and Obj1 are registered, wherein Obj1 is target point cloud Scene, and Obj2 is a point cloud Model to be registered. The gaussian continuous probability density distribution function is known as:
where μ is the mean vector, Σ is the covariance matrix, and d is the dimensionality of the data.
The Gaussian mixture models of Obj1 and Obj2 were established according to the following criteria:
1) The number of Gaussian components in the Gaussian mixture model equals the number of points in the point cloud data set.
2) The mean vector of each Gaussian component in the mixture is set to the spatial position of its point.
3) All Gaussian components in the Gaussian mixture model share the same covariance matrix.
Finally, all the Gaussian components described above are combined with the same weight, which gives:

gmm(x) = Σ_{i=1}^{N} w_i f(x; μ_i, Σ)    (2)

where w_i is the (equal) weight coefficient of each Gaussian component, the sum runs over the N points, and Σ inside f denotes the shared covariance matrix. Gaussian mixture models gmm(S) and gmm(M) are established for the Scene and the Model according to the above criteria, where gmm denotes the functional relation of formula (2), the input S denotes the Scene point cloud, and the input M denotes the Model point cloud.
Step 3-2: establishing a transformation matrix between two point clouds with a parameter theta, wherein the Model point cloud after parameter transformation is expressed as Transform (M, theta), and a Gaussian mixture Model of the Model point cloud can be expressed as gmm (M, theta), wherein the Transform represents a function for performing corresponding rigid body transformation according to the transformation matrix with the parameter theta;
Step 3-3: integrate the squared difference of the two Gaussian mixture densities to establish the discrepancy objective function:

∫ (gmm(S) - gmm(Transform(M, θ)))² dx    (3)
Take the fixed rotation parameter used in step 1 (45 degrees in this example) as the initial value of the parameter θ, and perform iterative optimization with the Gauss-Newton algorithm to obtain the parameter value at which the objective function is minimal; the transformation matrix T is then computed from this parameter value.
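The behaviour of objective (3) can be demonstrated by a Monte-Carlo estimate of the L2 discrepancy at candidate rotation angles: with the Model generated 45 degrees away from the Scene, the discrepancy should bottom out at the step-1 increment. This illustrative NumPy sketch evaluates the objective on sample locations rather than running Gauss-Newton itself; the cloud shape, σ, and angle grid are assumptions:

```python
import numpy as np

def gmm(x, pts, sigma=0.1):
    """Isotropic GMM density at x (uniform weights, shared covariance)."""
    sq = np.sum((pts - x) ** 2, axis=1)
    return np.mean(np.exp(-sq / (2 * sigma**2)))

def l2_discrepancy(scene, model, eval_pts, sigma=0.1):
    """Monte-Carlo estimate of integral (gmm_S - gmm_M)^2 over eval_pts."""
    return sum((gmm(x, scene, sigma) - gmm(x, model, sigma)) ** 2
               for x in eval_pts)

def rot_z(deg):
    t = np.radians(deg)
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# Scene: an L-shaped cloud; Model: the scene rotated by -45 degrees.
rng = np.random.default_rng(0)
arm1 = np.c_[rng.uniform(0, 1, 60), np.zeros(60), np.zeros(60)]
arm2 = np.c_[np.zeros(60), rng.uniform(0, 1, 60), np.zeros(60)]
scene = np.vstack([arm1, arm2])
model = scene @ rot_z(-45).T

grid = scene[::5]                     # evaluation locations
errs = {a: l2_discrepancy(scene, model @ rot_z(a).T, grid)
        for a in (30, 45, 60)}
best = min(errs, key=errs.get)        # expected: the 45-degree increment
```

Starting Gauss-Newton from the known platform increment, as the patent does, places the initial θ near this minimum and away from spurious local optima.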
Step 3-4: Obj1 is taken as the registration reference, and the coordinate system of Obj1 as the reference coordinate system. The registration yields the transformation matrix T12 (the transformation from Obj2 to Obj1), and Obj2 is transformed into the reference coordinate system by T12. Gaussian-mixture-model registration of Obj2 and Obj3 yields the transformation matrix T23, and Obj3 is transformed into the reference coordinate system by T12*T23. Registration is computed for each adjacent pair of viewing angles in turn, each view is transformed into the reference coordinate system by its accumulated transformation matrix, and the multi-view point clouds are spliced to obtain the full-view point cloud model. The optimization process and final effect of point cloud registration are shown in fig. 4.
And 4, step 4: on the basis of the full-view point cloud model established in step 3, the workpiece surface is resampled and its holes are repaired: high-order polynomial interpolation over the surrounding point data reconstructs the hole regions of the surface, and the surface normal vectors and curvature features distorted by registration are corrected. Point cloud surface reconstruction is then performed with Delaunay triangulation, fitting the originally scattered point cloud so that it approximates the true surface and obtaining the reconstruction model. Three views of the reconstructed model are shown in fig. 5.
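One way to realize the Delaunay step, sketched under the simplifying assumption that the local patch can be projected onto the x-y plane as a height field; scipy's `Delaunay` is used here as a stand-in for whatever implementation the authors employed, and the synthetic surface is invented for the example:

```python
import numpy as np
from scipy.spatial import Delaunay

# Scattered samples of a smooth synthetic surface patch (assumed test data).
rng = np.random.default_rng(0)
xy = rng.uniform(0, 1, size=(200, 2))
z = 0.1 * np.sin(2 * np.pi * xy[:, 0]) * np.cos(2 * np.pi * xy[:, 1])
points = np.column_stack([xy, z])   # the "originally scattered point cloud"

# Triangulate the (x, y) projection; lifting each triangle back to the 3D
# points gives a mesh that approximates the scanned surface.
tri = Delaunay(points[:, :2])
faces = tri.simplices               # (n_triangles, 3) array of vertex indices
print(len(faces), "triangles over", len(points), "points")
```

For genuinely closed 3D workpieces the projection assumption fails and a volumetric or ball-pivoting variant of the idea is needed; the patent does not specify which variant is used.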
And 5: planning the trajectory of the mechanical arm. Before executing the task, the mechanical arm requires hand-eye calibration: the coordinate data points on the surface of the reconstructed model lie in the depth camera coordinate system, so hand-eye calibration between the mechanical arm and the RealSense depth camera is needed to convert the data into the robot coordinate system. Using the ArUco marker method in the opencv_contrib module, a two-dimensional code is attached to the end of the mechanical arm, and the transformation matrix between the RealSense color image coordinate system and the mechanical arm base is obtained from the two-dimensional code; an example is shown in fig. 6. The transformation matrix between the color image coordinate system and the point cloud data coordinate system is read from the tf information published by the RealSense function package, and the transformation relation between the point cloud data coordinate system and the mechanical arm base follows from the rigid body transformation principle, so the coordinates of the processing points on the workpiece model can be converted into the base coordinate system of the mechanical arm.
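The "rigid body transformation principle" invoked here is plain matrix chaining plus the closed-form rigid inverse. A sketch with placeholder matrices — neither matrix below is a real calibration result, they only stand in for the ArUco and tf outputs:

```python
import numpy as np

def rigid_inverse(T):
    """Closed-form inverse of a 4x4 rigid transform: [R t]^-1 = [R^T  -R^T t]."""
    R, t = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ t
    return Ti

# Placeholder calibration results (assumed values for illustration):
# T_base_color  : color-image frame -> arm base, from the ArUco hand-eye step
# T_color_cloud : point-cloud frame -> color frame, from the RealSense tf tree
c, s = np.cos(0.3), np.sin(0.3)
T_base_color = np.array([[c, -s, 0, 0.4],
                         [s,  c, 0, 0.1],
                         [0,  0, 1, 0.5],
                         [0,  0, 0, 1.0]])
T_color_cloud = np.eye(4)
T_color_cloud[:3, 3] = [0.015, 0.0, 0.0]   # small color/depth baseline offset

T_base_cloud = T_base_color @ T_color_cloud   # chain the two calibrations
p_cloud = np.array([0.2, 0.0, 0.6, 1.0])      # a machining point in the cloud frame
p_base = T_base_cloud @ p_cloud               # same point in the base frame
print(p_base[:3])
```

Every processing point on the reconstructed model can be pushed through `T_base_cloud` in this way before being handed to the motion planner.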
Step 5-1: the ROS MoveIt Setup Assistant is used to build the function packages required for mechanical arm programming and motion planning; the moveit_setup_assistant operation interface is shown in FIG. 7. The URDF structure description model file of the mechanical arm used in the fettling scene is imported step by step, the self-collision matrix is configured (used for judging potential self-collision during motion), a planning group is created, a kinematics solver is selected, the initial default pose of the robot is defined, and the robot end tool is configured, which can be a grinding tool, a cutting tool, a welding tool or a clamp.
A corresponding function package can be established by following the prompts in the ROS MoveIt Setup Assistant interface.
Step 5-2: the simulation models of the bathroom workpiece and the mechanical arm are imported into the ROS-Kinetic-RViz simulation environment, and the MoveIt! motion planner plugin generates an expected trajectory under constraints such as avoiding joint collision within the limited motion area. Each path point in the trajectory is solved with the inverse kinematics plugin to generate mechanical arm joint point data. The data are first sent to the mechanical arm in the simulation model for execution and verification; after the data are confirmed to be reliable and correct, the joint point data are sent to the bottom controller of the mechanical arm through ROS action communication to execute the machining trajectory.
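Before inverse kinematics can run, each machining point and its surface normal must be turned into a full end-effector pose. A possible sketch of that upstream conversion — the choice of the tool x/y axes is an assumption that holds for an axially symmetric grinding tool, and the sample point/normal are invented values:

```python
import numpy as np

def tool_pose_from_normal(p, n):
    """Build a 4x4 end-effector pose whose z-axis points against the surface
    normal n at machining point p, so the tool presses onto the surface.
    The x/y axes are an arbitrary orthonormal completion (assumed valid for
    a tool that is symmetric about its own axis)."""
    z = -np.asarray(n, dtype=float)
    z /= np.linalg.norm(z)
    helper = np.array([1.0, 0.0, 0.0])
    if abs(z @ helper) > 0.9:          # avoid a degenerate cross product
        helper = np.array([0.0, 1.0, 0.0])
    x = np.cross(helper, z)
    x /= np.linalg.norm(x)
    y = np.cross(z, x)
    T = np.eye(4)
    T[:3, 0], T[:3, 1], T[:3, 2], T[:3, 3] = x, y, z, p
    return T

# One machining point on an upward-facing patch of the workpiece:
pose = tool_pose_from_normal([0.3, 0.1, 0.25], [0.0, 0.0, 1.0])
print(pose)
```

A list of such poses, one per planned path point, is what a MoveIt-style planner consumes before the joint data are produced.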
It should be emphasized that the embodiments described herein are illustrative rather than restrictive; the present invention therefore includes, but is not limited to, the embodiments described in this detailed description, and other embodiments derived by those skilled in the art without departing from the scope of the present invention likewise fall within its protection.
Claims (5)
1. A mechanical arm fettling method based on 3D vision comprises the following specific steps:
step 1: collecting scene point cloud data containing the target biscuit workpiece from multiple angles; the method specifically comprises the following steps:
acquiring point cloud data by using a fused binocular depth camera, wherein the fused binocular depth camera acquires depth data based on the binocular stereo imaging principle and the infrared structured light ranging principle; placing the target workpiece to be scanned on a rotating platform with a controllable rotation angle, the fused binocular depth camera and the rotating platform forming a fixed relative position; placing the workpiece on the rotating platform at a certain initial position and rotating it stepwise with a fixed angle as the increment to obtain multi-angle information of the target biscuit; the depth camera scans and records each angle in the scene, transmits the data back to a computer, and stores the point cloud data in PCD file form in time order, named as the N view scene point cloud files View1, View2, View3, ..., ViewN;
step 2: extracting the target point cloud; rejecting irrelevant data points in the multi-view scene point clouds View1, View2, View3, ..., ViewN acquired in step 1 and extracting the target workpiece data; the method comprises the following specific steps:
step 2-1: setting ROI parameters according to the relative positions of the depth camera and the rotating platform in step 1, and performing ROI region segmentation screening on the spatial points in the complex scene point clouds View1, View2, View3, ..., ViewN, namely selecting each point whose three-dimensional space coordinates x, y and z are located in the ROI region; preliminarily segmenting a small scene point cloud comprising only three parts: the ground, the rotating platform and the workpiece;
step 2-2: expressing the point cloud data set in the small scene point cloud obtained in the step 2-1 as follows:
A = {a1, a2, a3, ..., an};
performing plane fitting on the point cloud set A by using the random sample consensus (RANSAC) algorithm: the RANSAC algorithm fits the plane in the small scene and its plane parameters, and divides the points in the set A into points on the plane and points not belonging to the plane; the subscripts of the data points belonging to the plane and of those not belonging to the plane are recorded, and plane removal is performed according to these subscripts, eliminating the points belonging to the plane; once the RANSAC algorithm obtains the plane parameters, the position of the plane in the small scene point cloud can be determined; the height of the rotating platform, obtained by measurement, is H, and all points whose height above the plane is within H can be removed, thereby removing the data of the rotating platform and obtaining preliminary target workpiece data;
step 2-3: the preliminary target workpiece data obtained in step 2-2 contain outliers caused by insufficient accuracy of the algorithm, as well as surface burr noise and edge noise left when the 3D vision sensor acquires data, which would introduce errors in the subsequent steps; therefore a statistical analysis filter (StatisticalOutlierRemoval) is used, taking the above result as its input, to remove the outliers and surface noise; the multi-view target workpiece point cloud data Obj1, Obj2, Obj3, ..., ObjN are thus extracted from View1, View2, View3, ..., ViewN;
and step 3: registering the multi-view target point clouds, wherein the registration is performed pairwise on Obj1, Obj2, Obj3, ..., ObjN obtained in step 2-3, followed by global splicing to obtain a complete model;
step 3-1: establishing a Gaussian mixture model for the point clouds of two adjacent viewing angles; selecting the two adjacent view point clouds to be registered from Obj1, Obj2, Obj3, ..., ObjN, and setting the target point cloud Scene and the point cloud Model to be registered; the Gaussian continuous probability density distribution function is known as:
f(x) = (2π)^(−d/2) |Σ|^(−1/2) exp(−(1/2)(x−μ)^T Σ^(−1) (x−μ))   (1)
wherein μ is the mean vector, Σ is the covariance matrix, and d is the data dimension;
the Gaussian mixture model is established according to the following criteria:
1) the number of Gaussian components in the Gaussian mixture model is equal to the number of points in each point cloud data set;
2) for each Gaussian component in the Gaussian mixture model, its mean vector is set according to the spatial position of the corresponding point;
3) all Gaussian components in the Gaussian mixture model share the same covariance matrix;
finally, all the Gaussian components described above are summed with the same weight, which results in:
gmm(P)(x) = Σ_{i=1..n} w_i f(x; μ_i, Σ)   (2)
wherein w_i is the weighting coefficient of each Gaussian component in the Gaussian mixture model; Gaussian mixture models gmm(S) and gmm(M) are established for Scene and Model according to the above criteria, where gmm denotes the functional relation in equation (2), the input S denotes the point cloud Scene, and M denotes the point cloud Model;
step 3-2: establishing a rigid transformation with parameter θ between the two point clouds, wherein the Model point cloud after the parameter transformation is expressed as Transform(M, θ), and its Gaussian mixture model can be expressed as gmm(M, θ), where Transform denotes the function performing the corresponding rigid body transformation according to the transformation matrix with parameter θ;
step 3-3: integrating the squared difference of the two Gaussian mixture models to establish the difference objective function:
∫ (gmm(S) − gmm(Transform(M, θ)))² dx   (3)
taking the fixed rotation parameter used in step 1 as the initial value of the parameter θ, and performing iterative optimization with the Gauss-Newton algorithm to obtain the parameter value that minimizes the objective function; the transformation matrix T is calculated from this parameter value;
step 3-4: taking Obj1 as the reference for registration, and the coordinate system of Obj1 as the reference coordinate system; according to step 3-3, Obj1 is regarded as Scene and Obj2 as Model, and registering Obj1 and Obj2 yields the transformation matrix T12, through which Obj2 can be transformed into the reference coordinate system; Gaussian-mixture-model point cloud registration of Obj2 and Obj3 yields the transformation matrix T23, and Obj3 can be transformed into the reference coordinate system through T12*T23; registration calculation is performed pairwise on successive viewing angles in turn, each view is converted into the reference coordinate system according to its transformation matrix, and the multi-view point clouds are spliced to obtain the full-view point cloud model;
and 4, step 4: on the basis of the full-view point cloud model established in step 3, resampling and hole repair are performed on the workpiece surface: high-order polynomial interpolation over the surrounding point data reconstructs the hole regions of the surface, and the surface normal vectors and curvature features distorted by registration are corrected; point cloud surface reconstruction is performed with Delaunay triangulation, and the originally scattered point cloud is fitted to approximate the true surface, obtaining the reconstruction model;
and 5: planning the trajectory of the mechanical arm; based on the ROS operating system and MoveIt!, acquiring the target points, solving inverse kinematics, planning the trajectory in the RViz simulation environment, verifying the correctness, and finally executing the operation task; the method specifically comprises the following steps:
step 5-1: establishing the function packages required for mechanical arm programming and motion planning by using the ROS MoveIt Setup Assistant; importing the URDF structure description model file of the mechanical arm used in the fettling scene step by step, configuring the self-collision matrix to judge potential self-collision during motion, creating a planning group, selecting a kinematics solver, defining the initial default pose of the robot, and configuring the robot end tool;
step 5-2: importing the reconstruction model of the target biscuit and the model file of the mechanical arm in the ROS-RViz environment; obtaining the discrete, non-connected processing target points Target1, Target2, Target3, ..., TargetN; with avoiding joint collision and collision with the biscuit as the objectives, calculating motion trajectory points from the target points with the MoveIt framework, obtaining the joint data of the mechanical arm with the kinematics solver, and executing the fettling path in the simulation environment; after the correctness is confirmed, the final machining path points and normal vectors are sent to the bottom controller of the mechanical arm to execute the operation.
2. The method for trimming a mechanical arm based on 3D vision as claimed in claim 1, wherein the end tool in step 5-1 is a grinding tool.
3. The method for trimming a mechanical arm based on 3D vision as claimed in claim 1, wherein the end tool in step 5-1 is a cutting tool.
4. The method for trimming a mechanical arm based on 3D vision as claimed in claim 1, wherein the end tool in step 5-1 is a jig.
5. The method for trimming a mechanical arm based on 3D vision as claimed in claim 1, wherein the end tool in step 5-1 is a welding tool.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110168422.XA CN112862878B (en) | 2021-02-07 | 2021-02-07 | Mechanical arm blank repairing method based on 3D vision |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110168422.XA CN112862878B (en) | 2021-02-07 | 2021-02-07 | Mechanical arm blank repairing method based on 3D vision |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112862878A true CN112862878A (en) | 2021-05-28 |
CN112862878B CN112862878B (en) | 2024-02-13 |
Family
ID=75988953
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110168422.XA Active CN112862878B (en) | 2021-02-07 | 2021-02-07 | Mechanical arm blank repairing method based on 3D vision |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112862878B (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113408916A (en) * | 2021-06-28 | 2021-09-17 | 河南唐都科技有限公司 | Fire-fighting equipment detection and on-site acceptance evaluation system based on intelligent AI and mobile APP |
CN114055781A (en) * | 2021-10-24 | 2022-02-18 | 扬州大学 | Self-adaptive correction method for fuel tank welding mechanical arm based on point voxel correlation field |
CN114102274A (en) * | 2021-11-12 | 2022-03-01 | 苏州大学 | 3D printing part processing method |
CN114407015A (en) * | 2022-01-28 | 2022-04-29 | 青岛理工大学 | Teleoperation robot online teaching system and method based on digital twins |
CN115255806A (en) * | 2022-07-21 | 2022-11-01 | 北京化工大学 | Industrial robot steel billet crack grinding system and method based on 3D attitude information |
CN116394235A (en) * | 2023-03-16 | 2023-07-07 | 中国长江电力股份有限公司 | Dry ice cleaning track planning system and method for large part robot based on three-dimensional measurement |
CN117162098A (en) * | 2023-10-07 | 2023-12-05 | 合肥市普适数孪科技有限公司 | Autonomous planning system and method for robot gesture in narrow space |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107886528A (en) * | 2017-11-30 | 2018-04-06 | 南京理工大学 | Distribution line working scene three-dimensional rebuilding method based on a cloud |
CN110264567A (en) * | 2019-06-19 | 2019-09-20 | 南京邮电大学 | A kind of real-time three-dimensional modeling method based on mark point |
CN110977982A (en) * | 2019-12-19 | 2020-04-10 | 南京理工大学 | Depth vision-based double-mechanical-arm control method |
CN111251295A (en) * | 2020-01-16 | 2020-06-09 | 清华大学深圳国际研究生院 | Visual mechanical arm grabbing method and device applied to parameterized parts |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113408916A (en) * | 2021-06-28 | 2021-09-17 | 河南唐都科技有限公司 | Fire-fighting equipment detection and on-site acceptance evaluation system based on intelligent AI and mobile APP |
CN113408916B (en) * | 2021-06-28 | 2023-12-29 | 河南唐都科技有限公司 | Fire-fighting facility detection and field acceptance assessment system based on intelligent AI and mobile APP |
CN114055781A (en) * | 2021-10-24 | 2022-02-18 | 扬州大学 | Self-adaptive correction method for fuel tank welding mechanical arm based on point voxel correlation field |
CN114055781B (en) * | 2021-10-24 | 2023-12-29 | 扬州大学 | Self-adaptive correction method for fuel tank welding mechanical arm based on point voxel correlation field |
CN114102274A (en) * | 2021-11-12 | 2022-03-01 | 苏州大学 | 3D printing part processing method |
CN114407015A (en) * | 2022-01-28 | 2022-04-29 | 青岛理工大学 | Teleoperation robot online teaching system and method based on digital twins |
CN115255806A (en) * | 2022-07-21 | 2022-11-01 | 北京化工大学 | Industrial robot steel billet crack grinding system and method based on 3D attitude information |
CN115255806B (en) * | 2022-07-21 | 2024-03-26 | 北京化工大学 | Industrial robot billet crack repairing and grinding system and method based on 3D attitude information |
CN116394235A (en) * | 2023-03-16 | 2023-07-07 | 中国长江电力股份有限公司 | Dry ice cleaning track planning system and method for large part robot based on three-dimensional measurement |
CN116394235B (en) * | 2023-03-16 | 2023-11-21 | 中国长江电力股份有限公司 | Dry ice cleaning track planning system and method for large part robot based on three-dimensional measurement |
CN117162098A (en) * | 2023-10-07 | 2023-12-05 | 合肥市普适数孪科技有限公司 | Autonomous planning system and method for robot gesture in narrow space |
CN117162098B (en) * | 2023-10-07 | 2024-05-03 | 合肥市普适数孪科技有限公司 | Autonomous planning system and method for robot gesture in narrow space |
Also Published As
Publication number | Publication date |
---|---|
CN112862878B (en) | 2024-02-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112862878B (en) | Mechanical arm blank repairing method based on 3D vision | |
CN111775146B (en) | Visual alignment method under industrial mechanical arm multi-station operation | |
CN109202912B (en) | Method for registering target contour point cloud based on monocular depth sensor and mechanical arm | |
JP6004809B2 (en) | Position / orientation estimation apparatus, information processing apparatus, and information processing method | |
CN111644935A (en) | Robot three-dimensional scanning measuring device and working method | |
Motai et al. | Hand–eye calibration applied to viewpoint selection for robotic vision | |
Zou et al. | Fault-tolerant design of a limited universal fruit-picking end-effector based on vision-positioning error | |
CN111476841B (en) | Point cloud and image-based identification and positioning method and system | |
Melchiorre et al. | Collison avoidance using point cloud data fusion from multiple depth sensors: a practical approach | |
CN110450163A (en) | The general hand and eye calibrating method based on 3D vision without scaling board | |
CN110065068B (en) | Robot assembly operation demonstration programming method and device based on reverse engineering | |
CN109940626B (en) | Control method of eyebrow drawing robot system based on robot vision | |
CN113751981B (en) | Space high-precision assembling method and system based on binocular vision servo | |
CN113172659B (en) | Flexible robot arm shape measuring method and system based on equivalent center point identification | |
CN112686950A (en) | Pose estimation method and device, terminal equipment and computer readable storage medium | |
CN116766194A (en) | Binocular vision-based disc workpiece positioning and grabbing system and method | |
CN114407015A (en) | Teleoperation robot online teaching system and method based on digital twins | |
CN111583342A (en) | Target rapid positioning method and device based on binocular vision | |
CN110992416A (en) | High-reflection-surface metal part pose measurement method based on binocular vision and CAD model | |
CN113793383A (en) | 3D visual identification taking and placing system and method | |
CN112907682B (en) | Hand-eye calibration method and device for five-axis motion platform and related equipment | |
Borangiu et al. | Robot arms with 3D vision capabilities | |
Seçil et al. | 3-d visualization system for geometric parts using a laser profile sensor and an industrial robot | |
Gruen | Digital close-range photogrammetry: progress through automation | |
CN116372938A (en) | Surface sampling mechanical arm fine adjustment method and device based on binocular stereoscopic vision three-dimensional reconstruction |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||