CN112862878B - Mechanical arm blank repairing method based on 3D vision - Google Patents

Mechanical arm blank repairing method based on 3D vision

Info

Publication number
CN112862878B
CN112862878B · CN202110168422.XA
Authority
CN
China
Prior art keywords
point cloud
model
mechanical arm
data
scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110168422.XA
Other languages
Chinese (zh)
Other versions
CN112862878A (en
Inventor
禹鑫燚
张毅凯
仇翔
欧林林
程兆赢
许成军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University of Technology ZJUT
Original Assignee
Zhejiang University of Technology ZJUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University of Technology ZJUT filed Critical Zhejiang University of Technology ZJUT
Priority to CN202110168422.XA priority Critical patent/CN112862878B/en
Publication of CN112862878A publication Critical patent/CN112862878A/en
Application granted granted Critical
Publication of CN112862878B publication Critical patent/CN112862878B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/521 Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light
    • G06T7/55 Depth or shape recovery from multiple images
    • G06T7/593 Depth or shape recovery from multiple images from stereo images
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J11/00 Manipulators not otherwise provided for
    • B25J11/005 Manipulators for mechanical processing tasks
    • B25J11/0065 Polishing or grinding
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1656 Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664 Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
    • B25J9/1679 Programme controls characterised by the tasks executed
    • B25J9/1694 Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697 Vision controlled systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Optics & Photonics (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a mechanical arm blank repairing method based on 3D vision, which comprises the following steps: point cloud data acquisition, target point cloud extraction, point cloud registration, surface reconstruction, and mechanical arm trajectory planning. In the point cloud data acquisition step, a fusion-type depth camera collects multi-angle scene point cloud data containing the target workpiece, and the target is then extracted from the scene point cloud. On this basis, the multi-view target point clouds are registered to build a point cloud model covering all viewing angles, and the surface of this model is reconstructed to obtain a complete reconstruction model that closely approximates the physical object. In the mechanical arm trajectory planning step, trajectory points are planned on the reconstruction model according to the machining point coordinates and the corresponding objective function, verified and executed in a simulation environment, and, once the simulated execution is completed, the trajectory data are sent to the mechanical arm controller to carry out the blank repairing operation. The invention improves the degree of automation of the system as well as its stability and reliability.

Description

Mechanical arm blank repairing method based on 3D vision
Technical Field
The invention relates to the technical field of machine vision and industrial mechanical arms, in particular to a mechanical arm blank repairing method based on 3D vision.
Background
With the transformation and upgrading of the manufacturing industry and progress in robot control and sensing technology, more and more robotic equipment is being applied in industrial manufacturing. This greatly improves the original working efficiency of industrial manufacturing and also allows robots to replace humans in dangerous working environments. The industrial mechanical arm is one of the main robotic devices used on industrial production lines; it is a multi-degree-of-freedom robotic device built on electronics, mechanics, control and other technologies, and is widely used in sorting, palletizing, handling, paint spraying, welding and similar work.
Machine vision is one of the most important sensing technologies applied in the industrial field. Early monocular-vision-based perception techniques were applied to simple recognition and tracking. Nowadays, depth cameras based on binocular stereo imaging, infrared structured light and TOF (Time of Flight) technology have sprung up and entered many fields, raising the visual perception of robots to another level.
Before robot automation was applied to traditional ceramic sanitary-ware production lines, workers had to grind and polish the biscuit with handheld polishing tools according to strict measurement data. The work is difficult and inefficient, and there are large differences between the machining requirements of different batches of ceramic biscuit, which further increases the difficulty and intensity of manual operation. Even in the ideal case, the effect and quality of manual polishing vary within the same batch of workpieces. In addition, the technical scheme for robotic ceramic sanitary-ware trimming introduced by most factories at present acquires the robot polishing trajectory by teaching, which has the following shortcomings. First, the polishing trajectory obtained by teaching is relatively fixed and leaves no room for adjustment, which has a large influence on the final polishing quality. Second, the required polishing trajectories differ greatly between different products and between different batches of the same product, so the teaching scheme must be changed continuously for each situation, which is very cumbersome.
Therefore, the invention provides a robot blank repairing method based on 3D vision, which adds 3D visual perception, combines it with a mechanical arm to improve the degree of automation on the production line, and automatically plans the machining path by building a workpiece model.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a mechanical arm blank repairing method based on 3D vision.
The method can acquire multi-view 3D data of different types of workpieces and reconstruct them three-dimensionally to obtain a reconstruction model. It has strong adaptability: it can reasonably plan the trajectory of the mechanical arm according to the reconstructed model and the machining target, and after simulation verification the trajectory can be transmitted to the physical mechanical arm for execution, thereby improving the safety and reliability of the blank repairing process.
A 3D vision-based mechanical arm blank repairing method comprises the following steps:
Step 1: collect scene point cloud data containing the target biscuit workpiece from multiple angles. Point cloud data are acquired with a fusion-type binocular depth camera, a sensor that obtains depth data based on both the binocular stereo imaging principle and the infrared structured-light ranging principle. The target workpiece to be scanned is placed on a rotating platform with a controllable rotation angle, the depth camera and the rotating platform are kept in a fixed relative position, the workpiece is placed on the rotating platform in a certain initial pose, and the platform is rotated step by step in fixed angular increments to obtain multi-angle information of the target biscuit. The depth camera scans and records the scene at each angle and transmits it back to the computer; the point cloud data are stored as PCD files in time order and named View1, View2, View3, ..., ViewN (N viewing-angle scene point cloud files).
Step 2: extract the target point cloud. From the multi-view scene point clouds View1, View2, View3, ..., ViewN obtained in step 1, eliminate irrelevant data points and extract the target workpiece data. The specific steps are as follows:
Step 2-1: set the ROI parameters according to the relative position of the depth camera and the rotating platform in step 1, and perform ROI region segmentation and screening on the spatial points in the complex scene point clouds View1, View2, View3, ..., ViewN. This yields a preliminary segmentation into small scene point clouds containing only three parts: the ground, the rotating platform and the workpiece.
Step 2-2: the point cloud data set in the small scene point cloud obtained in the previous step is denoted as:
A = {a1, a2, a3, ..., an};
Plane fitting is performed in the point cloud set A with the random sample consensus (RANSAC) algorithm, which fits the plane in the small scene and its plane parameters and divides the points in set A into two classes: points on the plane and points not belonging to the plane. The indices of the in-plane data points and of the remaining data points are recorded, and the in-plane points are removed according to these indices. Once the plane parameters have been obtained by RANSAC, the position of the plane in the small scene point cloud is known; the height of the rotating platform is measured to be H, so all points lying within height H above the plane can also be removed. This eliminates the rotating-platform data and yields the preliminary target workpiece data.
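As a concrete illustration of this plane-removal step, the following PCL-based sketch fits the dominant plane with RANSAC, discards its inliers, and then drops the points lying within the platform height H above the plane. The function name, point type and numeric thresholds are assumptions made for illustration, not values fixed by the method.

```cpp
// Sketch of the plane removal in step 2-2 using PCL (illustrative parameters).
#include <cmath>
#include <pcl/ModelCoefficients.h>
#include <pcl/point_types.h>
#include <pcl/segmentation/sac_segmentation.h>
#include <pcl/filters/extract_indices.h>

pcl::PointCloud<pcl::PointXYZ>::Ptr
removePlaneAndPlatform(const pcl::PointCloud<pcl::PointXYZ>::Ptr& small_scene,
                       double platform_height_H)
{
  // 1) Fit the dominant plane (the ground) with RANSAC.
  pcl::ModelCoefficients::Ptr coeffs(new pcl::ModelCoefficients);
  pcl::PointIndices::Ptr inliers(new pcl::PointIndices);
  pcl::SACSegmentation<pcl::PointXYZ> seg;
  seg.setModelType(pcl::SACMODEL_PLANE);
  seg.setMethodType(pcl::SAC_RANSAC);
  seg.setDistanceThreshold(0.01);                  // assumed 1 cm inlier threshold
  seg.setInputCloud(small_scene);
  seg.segment(*inliers, *coeffs);

  // 2) Remove the plane inliers according to the recorded indices.
  pcl::PointCloud<pcl::PointXYZ>::Ptr no_plane(new pcl::PointCloud<pcl::PointXYZ>);
  pcl::ExtractIndices<pcl::PointXYZ> extract;
  extract.setInputCloud(small_scene);
  extract.setIndices(inliers);
  extract.setNegative(true);                       // keep points NOT on the plane
  extract.filter(*no_plane);

  // 3) Remove points lying within height H above the plane (the rotating platform).
  const double a = coeffs->values[0], b = coeffs->values[1],
               c = coeffs->values[2], d = coeffs->values[3];
  const double norm = std::sqrt(a * a + b * b + c * c);
  pcl::PointCloud<pcl::PointXYZ>::Ptr target(new pcl::PointCloud<pcl::PointXYZ>);
  for (const auto& p : no_plane->points) {
    const double dist = std::fabs(a * p.x + b * p.y + c * p.z + d) / norm;
    if (dist > platform_height_H)                  // keep only points above the platform
      target->points.push_back(p);
  }
  target->width = static_cast<uint32_t>(target->points.size());
  target->height = 1;
  return target;                                   // preliminary target workpiece data
}
```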
Step 2-3: the preliminary target workpiece data obtained in step 2-2 may have outliers generated by the algorithmic inefficiency, and surface burr noise and edge noise left when the 3D vision sensor collects data, so that errors are generated in subsequent steps. Therefore, a statistical analysis filter (statisticcaloutlierremotion) is used to filter the above result as input to remove outliers and surface outlier noise. Finally, multi-View target workpiece point cloud data Obj1, obj2, obj3, obj n are extracted from View1, view2, view 3.
Step 3: the multi-view target point Yun Peizhun is obtained by registering Obj1, obj2, obj3 obtained in step 2-3, and performing global stitching on the two-by-two object points.
Step 3-1: and establishing a Gaussian mixture model for the point clouds of two adjacent view angles. Selecting two adjacent point-of-view clouds to be registered from Obj1, obj2, obj3, and then setting a target point cloud Scene and a point cloud Model to be registered. Gaussian continuous probability density distribution functions are known as:
N(x; μ, Σ) = (2π)^(−d/2) |Σ|^(−1/2) exp(−(1/2)(x − μ)^T Σ^(−1) (x − μ))   (1)

where μ is the mean vector, Σ is the covariance matrix, and d is the dimension of the data.
The gaussian mixture model is built according to the following criteria:
1) The number of Gaussian components in the Gaussian mixture model is equal to the number of points in each point cloud data set.
2) For each Gaussian component in the Gaussian mixture model, its mean vector is set according to the spatial position of the corresponding point.
3) All gaussian components in the gaussian mixture model share the same covariance matrix.
Finally, all gaussian components as described above are added with the same weight, and it is possible to obtain:
gmm(x) = Σ_i w_i · N(x; μ_i, Σ)   (2)

where w_i are the weighting coefficients of the Gaussian mixture model. Gaussian mixture models gmm(S) and gmm(M) are established for Scene and Model according to the above rules, where gmm(·) denotes the functional relation of equation (2), S denotes the point cloud Scene, and M denotes the point cloud Model.
Step 3-2: establishing a transformation matrix between two point clouds with parameters theta, wherein the Model point clouds subjected to parameter transformation are expressed as Transform (M, theta), and the Gaussian mixture Model can be expressed as gmm (M, theta)), wherein the Transform represents a function of performing corresponding rigid transformation according to the transformation matrix with the parameters theta;
step 3-3: and (3) carrying out difference square integration on the two Gaussian mixture models to establish a differential objective function:
∫ (gmm(S) − gmm(Transform(M, θ)))^2 dx   (3)
The fixed rotation increment used in step 1 is taken as the initial value of the parameter θ, and an iterative optimization is carried out with the Gauss-Newton algorithm to obtain the parameter value that minimizes the objective function. The transformation matrix T is then computed from this parameter value.
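For intuition, the objective of equation (3) can be evaluated in closed form when the shared covariance is isotropic (σ²·I) and the weights are equal, using the standard identity ∫ N(x; a, Σ) N(x; b, Σ) dx = N(a; b, 2Σ). The sketch below only evaluates the objective for one candidate rigid transform; it is an illustrative aid under these assumptions, not the Gauss-Newton optimizer itself.

```cpp
// Closed-form evaluation of the L2 objective in equation (3) for one candidate
// rigid transform, assuming isotropic shared covariance sigma^2 * I and equal weights.
#include <Eigen/Dense>
#include <vector>
#include <cmath>

using Points = std::vector<Eigen::Vector3d>;

// integral N(x; a, sigma^2 I) N(x; b, sigma^2 I) dx = N(a; b, 2 sigma^2 I)
static double kernel(const Eigen::Vector3d& a, const Eigen::Vector3d& b, double sigma)
{
  const double var2 = 4.0 * sigma * sigma;              // covariance 2*sigma^2 per axis
  const double norm = std::pow(M_PI * var2, -1.5);      // (2*pi*2*sigma^2)^(-3/2)
  return norm * std::exp(-(a - b).squaredNorm() / var2);
}

double gmmL2Objective(const Points& scene, const Points& model,
                      const Eigen::Matrix3d& R, const Eigen::Vector3d& t, double sigma)
{
  Points moved(model.size());
  for (size_t i = 0; i < model.size(); ++i) moved[i] = R * model[i] + t;

  const double ws = 1.0 / scene.size(), wm = 1.0 / moved.size();
  double ss = 0.0, mm = 0.0, sm = 0.0;
  for (const auto& a : scene) for (const auto& b : scene) ss += ws * ws * kernel(a, b, sigma);
  for (const auto& a : moved) for (const auto& b : moved) mm += wm * wm * kernel(a, b, sigma);
  for (const auto& a : scene) for (const auto& b : moved) sm += ws * wm * kernel(a, b, sigma);
  return ss + mm - 2.0 * sm;   // integral of (gmm(S) - gmm(Transform(M, theta)))^2
}
```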
Step 3-4: the Obj1 is used as a reference for registration, and the coordinate system in which Obj1 is located is used as a reference coordinate system. According to step 3-3, the transformation matrix T is obtained by performing registration processing on Obj1 and Obj2 (Obj 1 is regarded as Scene and Obj2 is regarded as Model) 12 Obj2 may pass through T 12 The matrix is transformed into a reference coordinate system. Performing Gaussian mixture model point cloud registration processing on Obj2 and Obj3 to obtain a transformation matrix T 23 Obj3 can pass through T 12 *T 23 The matrix is transformed into a reference coordinate system. Sequentially performing registration calculation of two visual angles, transforming to a reference coordinate system according to a transformation matrix, splicing multi-view point clouds,and obtaining the full view point cloud model.
Step 4: and (3) resampling and bug repairing are carried out on the surface of the workpiece on the basis of the full view point cloud model established in the point cloud step 3, the bug part of the surface is reconstructed by carrying out high-order polynomial interpolation on surrounding point data, and the normal vector and curvature characteristics of the surface caused by registration are corrected. And (3) performing point cloud surface reconstruction by using a Delaunay triangulation method, and fitting and approximating originally scattered point clouds to a real surface to obtain a reconstruction model.
Step 5: and planning a mechanical arm track. Based on ROS operating system and Moveit-! The framework establishes the corresponding simulation environment in the Rviz to verify the track correctness, and the specific steps are as follows:
step 5-1: the ROS Moveit SetupAssistant is used to build the functional packages required for the robotic arm engineering and motion planning. And importing a mechanical arm URDF structure description model file used in a blank repairing scene according to the steps, configuring a self-collision matrix (used for judging potential self-collision in the motion process), creating a planning group, selecting a kinematics analyzer, defining an initial default pose of the robot, and configuring a tail end tool (such as a polishing tool) of the robot.
Step 5-2: and importing a reconstructed model of the target biscuit into the ROS-Rviz environment and a model file of the mechanical arm. Obtaining discrete non-communication processing Target points Target1, target2 and Target3 on the surface of the reconstructed model, calculating a motion track point according to the Target points by using a movit frame to avoid joint collision, avoid collision with a biscuit and other targets, obtaining joint data of the mechanical arm by using a kinematic solver through inverse solution, and executing a repairing path in an imitation environment. After confirming the error, the final mechanical arm processing path joint point and normal vector are sent to a bottom layer controller of the mechanical arm to execute the operation.
Preferably, the end-effector tool in step 5-1 is a grinding tool, a cutting tool, a welding tool, or a fixture.
The invention has the advantages and positive effects that:
1. The invention has low cost and good compatibility, and can be used flexibly: it builds models for workpieces of different types and sizes and plans according to the model, meeting the needs of different kinds of users.
2. The invention performs trajectory planning on a three-dimensional reconstruction model, generalizes well across modeling targets, and can adapt the machining trajectory planning to workpieces of different structures, types and materials. It solves the problem that a manually taught mechanical arm machining trajectory must be prepared in advance, cannot be adjusted, is not tailored to the workpiece, and has low precision.
3. The invention acquires real-time data with 3D vision sensing, using a fusion-type depth camera and an active three-dimensional reconstruction algorithm. Compared with the algorithmic weaknesses and precision problems of passive image-based three-dimensional modeling, it offers higher precision, reliability, stability and robustness.
4. The mechanical arm motion planning introduces the MoveIt! motion framework, which adapts to mechanical arms of various models and brands and supports hot-swap use, improving flexibility and convenience; combined with the simulation environment, the correctness of the algorithm can be verified, avoiding unnecessary economic losses during machining.
Drawings
FIG. 1 is a flow chart of the present invention.
Fig. 2 is acquired scene point cloud data.
Fig. 3 a) is a small scene point cloud for initial ROI segmentation.
Fig. 3 b) is a target extraction result point cloud data.
Fig. 4 is a working process of the point cloud registration step.
FIG. 5 is the three-dimensional reconstruction result.
Fig. 6 is an example of a two-dimensional code for calibration.
FIG. 7 is the MoveIt! Setup Assistant (moveit_setup_assistant) operation interface.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings.
A mechanical arm blank repairing method based on 3D vision, whose working flow is shown in Fig. 1, comprises five steps: data acquisition, target extraction, point cloud registration, surface reconstruction and trajectory planning. Fig. 2 shows the scene point cloud data captured during data acquisition, which include the target workpiece, the working environment and other clutter; the number of points is huge and noise is spread throughout. Figs. 3 a) and 3 b) show single-view target workpiece data after target extraction. Fig. 4 shows the operation of the point cloud registration step. Fig. 5 shows three views of the reconstructed bathroom workpiece model.
In connection with fig. 1-5, embodiments of the present invention are as follows:
step 1: and (5) scene point cloud data acquisition. Work scene point cloud data was acquired using a Intel Realsense D435 depth camera. The depth camera was placed on a 1.8m high tripod and connected to the USB3.0 port of the computer using an elongated USB3.0 transmission data line. The ceramic bathroom biscuit workpiece is placed on a rotary platform (the initial pose can be right opposite to the depth camera), the ROI parameter between the rotary platform and the depth camera is measured and recorded, the rotation angle of the rotary platform is set (45 degrees for example), the rotary platform controls the rotation angle through a motor, and the depth camera transmits collected data to a computer every 45 degrees until the workpiece rotates by 360 degrees. Scene point cloud data of one angle is collected as shown in fig. 2.
The depth camera and the computer exchange images and point cloud data over the USB connection. The computer uses Ubuntu 16.04 with ROS Kinetic and PCL as the software platform for data collection, processing and presentation.
The realsense-ros function package maintained by the ROS community publishes the point cloud information transmitted from the RealSense depth camera to the computer. The topic name corresponding to the point cloud information can be looked up with rostopic and then subscribed to; the published sensor_msgs/PointCloud2 data are converted with the fromROSMsg function provided by pcl_conversions in the PCL library into the corresponding PCL-format point clouds, which are exported by IO operations and saved to disk as the PCD files View1, View2, View3, ..., ViewN.
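A minimal sketch of such an acquisition node is shown below, assuming the default realsense2_camera point cloud topic; in practice the callback would be gated by the rotary-platform step signal so that exactly one PCD file is written per viewing angle.

```cpp
// Sketch: subscribe to the RealSense point cloud, convert to PCL, save one PCD per view.
#include <ros/ros.h>
#include <sensor_msgs/PointCloud2.h>
#include <pcl_conversions/pcl_conversions.h>
#include <pcl/point_types.h>
#include <pcl/io/pcd_io.h>
#include <string>

static int view_count = 0;

void cloudCallback(const sensor_msgs::PointCloud2ConstPtr& msg)
{
  pcl::PointCloud<pcl::PointXYZ> cloud;
  pcl::fromROSMsg(*msg, cloud);                        // ROS message -> PCL cloud
  const std::string file = "View" + std::to_string(++view_count) + ".pcd";
  pcl::io::savePCDFileBinary(file, cloud);             // store scene point cloud to disk
  ROS_INFO("Saved %s with %zu points", file.c_str(), cloud.size());
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "scene_capture");
  ros::NodeHandle nh;
  // Default realsense2_camera topic name (assumed); adjust to the actual launch file.
  ros::Subscriber sub = nh.subscribe("/camera/depth/color/points", 1, cloudCallback);
  ros::spin();
  return 0;
}
```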
Step 2: and extracting a target point cloud.
Step 2-1: setting ROI parameters according to the relative positions of the depth camera and the rotating platform in step 1,
and performing ROI region segmentation and screening on the spatial points in the complex scene point clouds View1, View2, View3, ..., ViewN. This yields a preliminary segmentation into small scene point clouds containing only three parts: the ground, the rotating platform and the workpiece. The segmentation effect is shown in Fig. 3 a).
Step 2-2: and (3) carrying out plane removal processing on the basis of 2-1, and further removing the plane and the rotary platform. The point cloud data set in the small scene point cloud obtained in the step 2-1 is expressed as:
A = {a1, a2, a3, ..., an};
Plane fitting is performed in the point cloud set A with the random sample consensus (RANSAC) algorithm, which fits the plane in the small scene and its plane parameters and divides the points in set A into two classes: points on the plane and points not belonging to the plane. The indices of the in-plane data points and of the remaining data points are recorded, and the in-plane points are removed according to these indices. Once the plane parameters have been obtained by RANSAC, the position of the plane in the small scene point cloud is known; the height of the rotating platform is measured to be H, so all points lying within height H above the plane can also be removed. This eliminates the rotating-platform data and yields the preliminary target workpiece data.
Step 2-3: the preliminary target workpiece data obtained in step 2-2 may have outliers generated by the algorithmic inefficiency, and surface burr noise and edge noise left when the 3D vision sensor collects data, so that errors are generated in subsequent steps. Therefore, a statistical analysis filter (statisticcaloutlierremotion) is used to filter the above result as input to remove outliers and surface outlier noise. Finally, multi-View target workpiece point cloud data Obj1, obj2, obj3, obj n are extracted from View1, view2, view 3. The extracted target workpiece data is shown in fig. 3 b).
Step 3: the multi-view target point Yun Peizhun is obtained by registering Obj1, obj2, obj3 obtained in step 2-3, and performing global stitching on the two-by-two object points.
Step 3-1: and (3) performing registration processing on the point clouds of the two frames of adjacent view angles of Obj1, wherein Obj1 is a target point cloud Scene, and Obj2 is a point cloud Model to be registered. Gaussian continuous probability density distribution functions are known as:
the form of equation (1), where μ is the mean vector, Σ is the covariance matrix, and d is the dimension of the data.
Establishing a Gaussian mixture model of Obj1 and Obj2 according to the following criteria:
1) The number of Gaussian components in the Gaussian mixture model is equal to the number of points in each point cloud data set.
2) For each Gaussian component in the Gaussian mixture model, its mean vector is set according to the spatial position of the corresponding point.
3) All gaussian components in the gaussian mixture model share the same covariance matrix.
Finally, all gaussian components as described above are added with the same weight, and it is possible to obtain:
equation (2), where w_i are the weighting coefficients of each Gaussian component in the Gaussian mixture model. Gaussian mixture models gmm(S) and gmm(M) are established for Scene and Model according to the above rules, where gmm(·) denotes the functional relation of equation (2), S denotes the point cloud Scene, and M denotes the point cloud Model.
Step 3-2: establishing a transformation matrix between two point clouds with parameters theta, wherein the Model point clouds subjected to parameter transformation are expressed as Transform (M, theta), and the Gaussian mixture Model can be expressed as gmm (M, theta)), wherein the Transform represents a function for performing corresponding rigid transformation according to the transformation matrix with the parameters theta;
step 3-3: and (3) carrying out difference square integration on the two Gaussian mixture models to establish a differential objective function:
∫ (gmm(S) − gmm(Transform(M, θ)))^2 dx   (3)
the fixed rotation parameter used in step 1 is used as an initial value (for example, 45 °) of the parameter θ, and iterative optimization operation is performed using a gaussian newton algorithm, so as to obtain a parameter value at which the objective function is minimized. And calculates the transformation matrix T from the parameter values.
Step 3-4: the Obj1 is used as a reference for registration, and the coordinate system in which Obj1 is located is used as a reference coordinate system. Registering to obtain a transformation matrix T 12 (transformation matrix from Obj2 to Obj 2), obj2 may be represented by T 12 The matrix is transformed into a reference coordinate system. Performing Gaussian mixture model point cloud registration processing on Obj2 and Obj3 to obtain a transformation matrix T 23 Obj3 can pass through T 12 *T 23 The matrix is transformed into a reference coordinate system. And sequentially carrying out registration calculation of every two visual angles, transforming the registration calculation to a reference coordinate system according to a transformation matrix, and splicing multi-view point cloud to obtain a full-view point cloud model. The optimization process and final effect of the point cloud registration is shown in figure 4.
Step 4: and (3) resampling and bug repairing are carried out on the surface of the workpiece on the basis of the full view point cloud model established in the point cloud step 3, the bug part of the surface is reconstructed by carrying out high-order polynomial interpolation on surrounding point data, and the normal vector and curvature characteristics of the surface caused by registration are corrected. And (3) performing point cloud surface reconstruction by using a Delaunay triangulation method, and fitting and approximating originally scattered point clouds to a real surface to obtain a reconstruction model. Three views of the reconstructed model are shown in fig. 5.
Step 5: and planning a mechanical arm track. Before the mechanical arm performs the task, the hand-eye calibration is needed. The coordinate data points of the reconstructed model surface are positioned under the depth camera coordinate system, and the manipulator and the realsense depth camera are required to be calibrated by hand and eye when the data are converted into the robot coordinate system through hand and eye calibration. And (3) attaching a two-dimensional code to the tail end of the mechanical arm by using an Aruco Marker method in an opencv_confrib module, wherein a transformation matrix between a Realsense color image coordinate system and a mechanical arm base is obtained by a two-dimensional code example as shown in a figure 6. And reading a transformation matrix between the color image coordinate system and the point cloud data coordinate from tf information issued by the RealSense functional package, and obtaining a transformation relation between the point cloud data coordinate system and the mechanical arm base according to a rigid transformation principle, so that the processing point coordinate on the workpiece model can be converted into the base coordinate system of the mechanical arm.
Step 5-1: the function package required for the mechanical arm engineering and motion planning is established by using ROS Moveit SetupAssistant, and the motion_setup_assistant operation interface is shown in fig. 7. And importing a mechanical arm URDF structure description model file used in a blank repairing scene according to the steps, configuring a self-collision matrix (used for judging potential self-collision in the motion process), creating a planning group, selecting a kinematic resolver, defining an initial default pose of the robot, and configuring a tail end tool of the robot, wherein the tail end tool can be a polishing tool, a cutting basis, a welding tool or a clamp.
The corresponding functional packages can be built by following the prompts of the MoveIt! Setup Assistant.
Step 5-2: introducing a bathroom workpiece model and a simulation model of a mechanical arm into an ROS-Kinetic-RViz simulation environment, and using a Movet-! The method comprises the steps that a motion planner plug-in generates a desired track in a limited motion area under various constraints such as collision of joints, a kinematic inverse solution plug-in is used for solving each path point in the track to generate mechanical arm joint point data, the data are firstly sent to a mechanical arm in a simulation model, execution and verification are carried out, after reliability and error free are confirmed, the joint point data are sent to a bottom controller of the mechanical arm in a ROS action communication mode, and the machining track is executed.
It should be emphasized that the embodiments described herein are illustrative rather than limiting; the invention includes, but is not limited to, the specific embodiments described herein, and other similar embodiments that occur to those skilled in the art based on the teachings herein also fall within the scope of this invention.

Claims (5)

1. A mechanical arm blank repairing method based on 3D vision comprises the following specific steps:
step 1: acquiring scene point cloud data of a workpiece containing a multi-angle target biscuit; the method specifically comprises the following steps:
acquiring point cloud data by using a fusion-type binocular depth camera, wherein the fusion-type binocular depth camera acquires depth data based on the binocular stereoscopic imaging principle and the infrared structured-light ranging principle; placing the target workpiece to be scanned on a rotating platform with a controllable rotation angle, keeping a fixed relative position between the fusion-type binocular depth camera and the rotating platform, placing the workpiece on the rotating platform in a certain initial pose, and performing stepping rotation with a fixed angle as the increment so as to obtain multi-angle information of the target biscuit; the depth camera scans and records each angle in the scene and then transmits the data back to the computer, and the point cloud data are stored as PCD files in time order and named View1, View2, View3, ..., ViewN;
step 2: extracting a target point cloud; collecting the scene point clouds View1, view2 and View3 with multiple views obtained in the step 1, eliminating irrelevant data points in View N, and extracting target workpiece data; the method comprises the following specific steps:
step 2-1: setting ROI parameters according to the relative positions of the depth camera and the rotary platform in the step 1, and carrying out ROI region segmentation screening on space points in the complex scene point clouds View1, view2 and View 3; primarily dividing small scene point clouds only comprising three parts of ground, a rotary platform and a workpiece;
step 2-2: and (3) representing the point cloud data set in the small scene point cloud obtained in the step (2-1) as:
A = {a1, a2, a3, ..., an};
performing plane fitting in the point cloud set A by using the random sample consensus (RANSAC) algorithm, wherein the RANSAC algorithm fits the plane in the small scene and its plane parameters, and the points in the set A are divided into two classes: points on the plane and points not belonging to the plane; recording the indices of the data points in the plane and of the data points not in the plane, carrying out plane removal according to these indices, and eliminating the points in the plane; after the plane parameters are obtained by the RANSAC algorithm, the position of the plane in the small scene point cloud can be determined; the height of the rotating platform is measured to be H, so all points lying within the height H above the plane can also be removed, thereby removing the rotating-platform data and obtaining the preliminary target workpiece data;
step 2-3: the preliminary target workpiece data obtained in the step 2-2 have outliers generated by the lack of algorithmic efficiency, and surface burr noise and edge noise left when the 3D vision sensor collects data, so that errors are generated in the subsequent steps; filtering the result as input by using a statistical analysis filter statisticcaloutlierremotion to remove outlier and surface outlier noise; finally, extracting multi-View target workpiece point cloud data Obj1, obj2, obj3 from View1, view2, view3, view n;
step 3: the multi-view target point Yun Peizhun is obtained by registering Obj1, obj2, obj3 obtained in step 2-3 in pairs, performing global stitching on the object n, and obtaining a complete model;
step 3-1: establishing a Gaussian mixture model for point clouds of two adjacent view angles; selecting two adjacent view point clouds to be registered from Obj1, obj2, obj3, & gt, obj n, setting a target point cloud Scene and a point cloud Model to be registered; gaussian continuous probability density distribution functions are known as:
N(x; μ, Σ) = (2π)^(−d/2) |Σ|^(−1/2) exp(−(1/2)(x − μ)^T Σ^(−1) (x − μ))   (1)

wherein μ is the mean vector, Σ is the covariance matrix, and d is the dimension of the data;
the gaussian mixture model is built according to the following criteria:
1) The number of Gaussian components in the Gaussian mixture model is equal to the number of points in each point cloud data set;
2) For each Gaussian component in the Gaussian mixture model, its mean vector is set according to the spatial position of the corresponding point;
3) All Gaussian components in the Gaussian mixture model share the same covariance matrix;
finally, all gaussian components as described above are added with the same weight, and it is possible to obtain:
gmm(x) = Σ_i w_i · N(x; μ_i, Σ)   (2)

wherein w_i are the weighting coefficients of each Gaussian component in the Gaussian mixture model; Gaussian mixture models gmm(S) and gmm(M) are established for Scene and Model according to the above rules, wherein gmm(·) denotes the functional relation of equation (2), S denotes the point cloud Scene, and M denotes the point cloud Model;
step 3-2: establishing a transformation matrix between two point clouds with parameters theta, wherein the Model point clouds subjected to parameter transformation are expressed as Transform (M, theta), and the Gaussian mixture Model can be expressed as gmm (M, theta)), wherein the Transform represents a function for performing corresponding rigid transformation according to the transformation matrix with the parameters theta;
step 3-3: and (3) carrying out difference square integration on the two Gaussian mixture models to establish a differential objective function:
∫ (gmm(S) − gmm(Transform(M, θ)))^2 dx   (3)
taking the fixed rotation increment used in step 1 as the initial value of the parameter θ, and performing an iterative optimization with the Gauss-Newton algorithm to obtain the parameter value that minimizes the objective function; the transformation matrix T is then computed from this parameter value;
step 3-4: taking Obj1 as a reference of registration, and taking a coordinate system in which Obj1 is positioned as a reference coordinate system; according to step 3-3, obj1 is regarded as Scene, obj2 is regarded as Model, and Obj1 and Obj2 are registered to obtain a transformation matrix T 12 Obj2 may pass through T 12 Matrix transformation is carried out under a reference coordinate system; performing Gaussian mixture model point cloud registration processing on Obj2 and Obj3 to obtain a transformation matrix T 23 Obj3 can pass through T 12 *T 23 Matrix transformation is carried out under a reference coordinate system; sequentially carrying out registration calculation of every two visual angles, transforming the registration calculation into a reference coordinate system according to a transformation matrix, and splicing multi-view point clouds to obtain a full-view point cloud model;
step 4: resampling and bug repairing are carried out on the surface of the workpiece on the basis of the full view point cloud model established in the point cloud step 3, the bug part of the surface is reconstructed by carrying out high-order polynomial interpolation on surrounding point data, and the normal vector and curvature characteristics of the surface caused by registration are corrected; performing point cloud surface reconstruction by using a Delaunay triangular subdivision method, and fitting and approximating originally scattered point clouds to a real surface to obtain a reconstruction model;
step 5: planning a track of the mechanical arm; based on ROS operating system and Moveit-! The method comprises the following steps of obtaining target points, solving inverse kinematics, planning tracks in a simulated Rviz environment, verifying correctness, and finally executing a work task: the method comprises the steps of carrying out a first treatment on the surface of the
Step 5-1: using ROS Moveit Setup Assistant to establish a functional package required by mechanical arm engineering and motion planning; importing a mechanical arm URDF structure description model file used in a blank repairing scene according to the steps, configuring a self-collision matrix to judge potential self-collision in the motion process, creating a planning group, selecting a kinematics analyzer, defining an initial default pose of the robot, and configuring a robot tail end tool;
step 5-2: introducing a reconstructed model of the target biscuit into an ROS-Rviz environment and a model file of the mechanical arm; obtaining discrete non-communication processing Target points Target1, target2 and Target3 on the surface of the reconstructed model, calculating a motion track point by using a movit frame with the aim of avoiding joint collision and biscuit collision as targets according to the Target points, obtaining joint data of the mechanical arm by using a kinematic solver through inverse solution, and executing a biscuit repairing path in an imitation environment; after confirming the error, the final mechanical arm processing path point and normal vector are sent to a bottom layer controller of the mechanical arm to execute the operation.
2. The mechanical arm blank repairing method based on 3D vision of claim 1, wherein the end-effector tool in step 5-1 is a grinding tool.
3. The mechanical arm blank repairing method based on 3D vision of claim 1, wherein the end-effector tool in step 5-1 is a cutting tool.
4. The mechanical arm blank repairing method based on 3D vision of claim 1, wherein the end-effector tool in step 5-1 is a fixture.
5. The mechanical arm blank repairing method based on 3D vision of claim 1, wherein the end-effector tool in step 5-1 is a welding tool.
CN202110168422.XA 2021-02-07 2021-02-07 Mechanical arm blank repairing method based on 3D vision Active CN112862878B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110168422.XA CN112862878B (en) 2021-02-07 2021-02-07 Mechanical arm blank repairing method based on 3D vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110168422.XA CN112862878B (en) 2021-02-07 2021-02-07 Mechanical arm blank repairing method based on 3D vision

Publications (2)

Publication Number Publication Date
CN112862878A CN112862878A (en) 2021-05-28
CN112862878B true CN112862878B (en) 2024-02-13

Family

ID=75988953

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110168422.XA Active CN112862878B (en) 2021-02-07 2021-02-07 Mechanical arm blank repairing method based on 3D vision

Country Status (1)

Country Link
CN (1) CN112862878B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113408916B (en) * 2021-06-28 2023-12-29 河南唐都科技有限公司 Fire-fighting facility detection and field acceptance assessment system based on intelligent AI and mobile APP
CN114055781B (en) * 2021-10-24 2023-12-29 扬州大学 Self-adaptive correction method for fuel tank welding mechanical arm based on point voxel correlation field
CN114102274A (en) * 2021-11-12 2022-03-01 苏州大学 3D printing part processing method
CN114407015A (en) * 2022-01-28 2022-04-29 青岛理工大学 Teleoperation robot online teaching system and method based on digital twins
CN115255806B (en) * 2022-07-21 2024-03-26 北京化工大学 Industrial robot billet crack repairing and grinding system and method based on 3D attitude information
CN116394235B (en) * 2023-03-16 2023-11-21 中国长江电力股份有限公司 Dry ice cleaning track planning system and method for large part robot based on three-dimensional measurement
CN117162098B (en) * 2023-10-07 2024-05-03 合肥市普适数孪科技有限公司 Autonomous planning system and method for robot gesture in narrow space

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107886528A (en) * 2017-11-30 2018-04-06 南京理工大学 Distribution line working scene three-dimensional rebuilding method based on a cloud
CN110264567A (en) * 2019-06-19 2019-09-20 南京邮电大学 A kind of real-time three-dimensional modeling method based on mark point
CN110977982A (en) * 2019-12-19 2020-04-10 南京理工大学 Depth vision-based double-mechanical-arm control method
CN111251295A (en) * 2020-01-16 2020-06-09 清华大学深圳国际研究生院 Visual mechanical arm grabbing method and device applied to parameterized parts

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107886528A (en) * 2017-11-30 2018-04-06 南京理工大学 Distribution line working scene three-dimensional rebuilding method based on a cloud
CN110264567A (en) * 2019-06-19 2019-09-20 南京邮电大学 A kind of real-time three-dimensional modeling method based on mark point
CN110977982A (en) * 2019-12-19 2020-04-10 南京理工大学 Depth vision-based double-mechanical-arm control method
CN111251295A (en) * 2020-01-16 2020-06-09 清华大学深圳国际研究生院 Visual mechanical arm grabbing method and device applied to parameterized parts

Also Published As

Publication number Publication date
CN112862878A (en) 2021-05-28

Similar Documents

Publication Publication Date Title
CN112862878B (en) Mechanical arm blank repairing method based on 3D vision
CN107871328B (en) Machine vision system and calibration method implemented by machine vision system
CN109202912B (en) Method for registering target contour point cloud based on monocular depth sensor and mechanical arm
JP6004809B2 (en) Position / orientation estimation apparatus, information processing apparatus, and information processing method
CN111644935A (en) Robot three-dimensional scanning measuring device and working method
Zou et al. Fault-tolerant design of a limited universal fruit-picking end-effector based on vision-positioning error
CN106272424A (en) A kind of industrial robot grasping means based on monocular camera and three-dimensional force sensor
CN111476841B (en) Point cloud and image-based identification and positioning method and system
Melchiorre et al. Collison avoidance using point cloud data fusion from multiple depth sensors: a practical approach
CN110065068B (en) Robot assembly operation demonstration programming method and device based on reverse engineering
CN113751981B (en) Space high-precision assembling method and system based on binocular vision servo
CN113327281A (en) Motion capture method and device, electronic equipment and flower drawing system
CN113172659B (en) Flexible robot arm shape measuring method and system based on equivalent center point identification
CN114407015A (en) Teleoperation robot online teaching system and method based on digital twins
CN116276328A (en) Robot polishing track optimization method based on digital twin and visual transmission technology
CN113362463A (en) Workpiece three-dimensional reconstruction method based on Gaussian mixture model
CN113793383A (en) 3D visual identification taking and placing system and method
CN112907682B (en) Hand-eye calibration method and device for five-axis motion platform and related equipment
Borangiu et al. Robot arms with 3D vision capabilities
Biqing et al. Research on Picking Identification and Positioning System Based on IOT.
Seçil et al. 3-d visualization system for geometric parts using a laser profile sensor and an industrial robot
CN114463495A (en) Intelligent spraying method and system based on machine vision technology
CN117576227B (en) Hand-eye calibration method, device and storage medium
CN116872216B (en) Robot vision servo operation method based on finite time control
CN116494248B (en) Visual positioning method of industrial robot

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant