CN116012442A - Visual servo method for automatically correcting pose of tail end of mechanical arm - Google Patents
Visual servo method for automatically correcting pose of tail end of mechanical arm
- Publication number
- CN116012442A (application CN202211170736.4A)
- Authority
- CN
- China
- Prior art keywords
- pose
- mechanical arm
- target
- point cloud
- matching
- Prior art date
- 2022-09-23
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Length Measuring Devices By Optical Means (AREA)
- Manipulator (AREA)
Abstract
In the visual servo method for automatically correcting the pose of the tail end of the mechanical arm, a depth camera collects 3D point cloud data and a 2D color image of the target position. SIFT matching is performed between the 2D color image at the current position and the 2D color image of the target to obtain the pixel information of n matching points. According to the pixel coordinates of the matching points, the corresponding 3D point cloud coordinates are found and used as the input to ICP, which yields the point cloud conversion matrix; this matrix is then converted by calculation into the transformation matrix of the tail end of the mechanical arm. The method combines the SIFT and ICP algorithms in a plane-first, stereo-second manner: planar image feature points are extracted by SIFT, ICP matching is performed on the point cloud coordinates corresponding to the extracted pixel features, and the pose transformation matrix between the target point cloud and the current point cloud is calculated. The SIFT matching is further optimized: the mean value and standard deviation of the Euclidean distance differences of the corresponding feature points over the original matching points are found, the extraction interval of the matching point differences is narrowed, and a secondary screening is carried out.
Description
Technical Field
The invention belongs to the field of computer vision and automatic operation of industrial robots, and relates to a visual servo method for automatically correcting the pose of the tail end of a mechanical arm.
Background
With the deepening of research on intelligent robots, automatic operation robots for inspecting the tracks and underbody of subway trains are gradually emerging. During operation, the motion paths of the chassis and the mechanical arm are planned in advance, and the robot only needs to move to a specified place and photograph the underbody components to complete its task. However, a train cannot stop at exactly the same position every time it enters the station, and acceleration during braking also affects the spacing between cars. The working environment of the inspection robot is therefore not fixed, and the high-precision requirements of image acquisition cannot be met by relying solely on fixed motions of the chassis and the mechanical arm.
Disclosure of Invention
In view of the problems in the prior art, the invention provides a visual servo method, based on a monocular depth camera, for automatically correcting the pose of the tail end of a mechanical arm.
The invention is realized by the following technical scheme:
a visual servo method for automatically correcting the pose of the tail end of a mechanical arm comprises the following steps:
1) The inspection robot moves to a preset position, the mechanical arm moves to a preset pose, a monocular depth camera is started, and RGB images and depth images of the bottom of the train under the current pose are collected;
2) Performing SIFT feature detection and matching on the current RGB image and the actual target RGB image, filtering the matching result, and removing matching features with large errors; the actual target RGB image is stored in the inspection robot's image library;
3) Converting the matched plane feature points to pixel coordinates on a corresponding target depth map, and converting the matched points on the target depth map to three-dimensional point cloud coordinates under a camera coordinate system according to the internal parameters of the monocular depth camera; the target depth map data are stored in the inspection robot memory;
4) Calculating a pose conversion matrix between the point clouds from the coordinates of the current point cloud and the target point cloud by using the iterative closest point (ICP) algorithm;
5) Converting the point cloud pose conversion matrix into a conversion matrix between the current camera pose and the target camera pose through the camera mapping relation, and calculating the rotation matrix and translation vector for moving the mechanical arm from the current pose to the target pose through the pose relation between the mechanical arm and the camera.
Preferably, the monocular depth camera is an Intel RealSense D455 monocular depth camera, and the mechanical arm is an AUBO i5 six-degree-of-freedom mechanical arm.
Preferably, the specific steps of performing SIFT feature detection and matching on the current RGB image and the actual target RGB image are as follows:
Let there be n pairs of matched feature points in total, with the pixel coordinates of the matching points on the current RGB image being (x_{2i}, y_{2i}) and the pixel coordinates of the matching points on the actual target RGB image being (x_{1i}, y_{1i}). Then

Δx_i = x_{1i} − x_{2i}    (1)
Δy_i = y_{1i} − y_{2i}    (2)

Let μ_x, σ_x and μ_y, σ_y denote the mean and standard deviation of Δx_i and Δy_i over the n matched pairs. A matching point satisfies the required feature point condition when Δx_i ∈ (μ_x − 0.5σ_x, μ_x + 0.5σ_x) and Δy_i ∈ (μ_y − 0.5σ_y, μ_y + 0.5σ_y).
The invention adopts a plane-first, stereo-second approach that combines the SIFT and ICP algorithms: before the three-dimensional point cloud spatial positions are matched, SIFT is first used to extract feature points from the planar images. ICP matching is then performed on the point cloud coordinates corresponding to the extracted pixel features, and the pose transformation matrix between the target point cloud and the current point cloud is calculated. The SIFT matching itself is also optimized. For feature point filtering, the SIFT algorithm already requires the ratio of the Euclidean distance to the nearest neighbour and to the second-nearest neighbour to lie within a certain threshold interval, which improves the matching accuracy. However, the ICP algorithm is strongly affected by its initial value, and 2D images taken from different angles often yield similar variations of the pixel feature descriptors. To collect more effective and accurate feature points, a statistical step is therefore added on top of the SIFT algorithm: the mean value and standard deviation of the Euclidean distance differences of the corresponding feature points over the original matching points are found, the extraction interval of the matching point differences is narrowed, and a secondary screening is carried out.
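As an illustration of this secondary screening, the following Python sketch (assuming the matched pixel coordinates are already available as NumPy arrays; the function name and interface are illustrative, not part of the invention) keeps only the matches whose coordinate differences fall within the narrowed interval:

```python
import numpy as np

def secondary_screen(pts_target, pts_current, k=0.5):
    """Secondary screening of SIFT matches: keep pairs whose coordinate
    differences lie within mean +/- k*std, per the interval above.

    pts_target, pts_current: (n, 2) arrays of matched pixel coordinates
    (x_1i, y_1i) and (x_2i, y_2i). Returns a boolean mask of kept pairs.
    """
    diff = pts_target - pts_current            # rows are (dx_i, dy_i)
    mu = diff.mean(axis=0)                     # (mu_x, mu_y)
    sigma = diff.std(axis=0)                   # (sigma_x, sigma_y)
    keep = np.all((diff > mu - k * sigma) & (diff < mu + k * sigma), axis=1)
    return keep
```

The surviving pairs are then passed on to the depth-map lookup and the point cloud matching stage.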
Drawings
Fig. 1 is a simplified monocular visual servoing schematic.
Detailed Description
The present invention is described in further detail below in conjunction with specific embodiments to provide a better understanding of the present technical solution.
The invention is implemented using an Intel RealSense D455 monocular depth camera and an AUBO i5 six-degree-of-freedom mechanical arm. The method mainly comprises three steps: data acquisition and target determination, feature matching, and adjustment of the pose of the tail end of the mechanical arm.
Step one, data acquisition and target determination: since the target position must be known in advance, an RGB image (color1) and a depth image (depth1) of the target position are acquired before the servo experiment. After the target position is determined and the data are acquired, the mechanical arm enters force control mode and is moved manually by an arbitrary distance (including translation and rotation), simulating the displacement error of the detection target relative to the robot's stopping position during inspection.
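A sketch of this acquisition step with the pyrealsense2 SDK is shown below for reference; the stream resolution and frame rate are assumed values, and the depth frame is aligned to the color frame so that RGB pixels and depth pixels correspond:

```python
import numpy as np
import pyrealsense2 as rs

def capture_rgbd():
    """Capture one aligned RGB image and depth image from the D455."""
    pipeline, config = rs.pipeline(), rs.config()
    config.enable_stream(rs.stream.color, 1280, 720, rs.format.bgr8, 30)
    config.enable_stream(rs.stream.depth, 1280, 720, rs.format.z16, 30)
    profile = pipeline.start(config)
    align = rs.align(rs.stream.color)          # register depth to the color view
    try:
        frames = align.process(pipeline.wait_for_frames())
        color = np.asanyarray(frames.get_color_frame().get_data())
        depth = np.asanyarray(frames.get_depth_frame().get_data())
        # scale factor that converts raw depth units to metres
        depth_scale = profile.get_device().first_depth_sensor().get_depth_scale()
    finally:
        pipeline.stop()
    return color, depth, depth_scale
```

Calling this once at the target position would yield (color1, depth1); calling it again after the manual displacement would yield (color2, depth2).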
Step two, feature matching: the pose of the tail end of the mechanical arm after the manual movement is taken as the current pose. The camera captures an RGB image (color2) and a depth image (depth2) in the current pose. SIFT feature detection and matching are performed on color1 and color2, giving n pairs of matched feature points; the pixel coordinates of the matching points on color1 are (x_{1i}, y_{1i}) and those on color2 are (x_{2i}, y_{2i}).
Δx_i = x_{1i} − x_{2i}    (1)
Δy_i = y_{1i} − y_{2i}    (2)

With μ_x, σ_x and μ_y, σ_y the mean and standard deviation of Δx_i and Δy_i over the n pairs, a matching point satisfies the required feature point condition when Δx_i ∈ (μ_x − 0.5σ_x, μ_x + 0.5σ_x) and Δy_i ∈ (μ_y − 0.5σ_y, μ_y + 0.5σ_y).
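The SIFT detection, matching and ratio test used in this step could be realized with OpenCV as sketched below (the 0.7 ratio threshold is an assumed value; only the screening interval above is prescribed by the method):

```python
import cv2
import numpy as np

def sift_match(color1, color2, ratio=0.7):
    """Detect and match SIFT features between the target image (color1)
    and the current image (color2); return matched pixel coordinates."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(cv2.cvtColor(color1, cv2.COLOR_BGR2GRAY), None)
    kp2, des2 = sift.detectAndCompute(cv2.cvtColor(color2, cv2.COLOR_BGR2GRAY), None)
    knn = cv2.BFMatcher().knnMatch(des1, des2, k=2)
    good = []
    for pair in knn:
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])               # Lowe's ratio test
    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])   # (x_1i, y_1i)
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])   # (x_2i, y_2i)
    return pts1, pts2
```

The secondary screening sketched earlier is then applied to pts1 and pts2 before the depth-map lookup.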
The filtered matching feature points on the RGB images are mapped one by one into the corresponding depth maps. After filtering there are m pairs of matched feature points, with depth-map pixel coordinates depth1: (u_{1i}, v_{1i}) and depth2: (u_{2i}, v_{2i}).
According to the conversion relation between pixel coordinates and camera coordinates (for a pinhole camera with intrinsics f_x, f_y, c_x, c_y and depth value z: x = (u − c_x)·z / f_x, y = (v − c_y)·z / f_y), the matched depth-map pixel coordinates are converted to camera coordinates: depth1: (u_{1i}, v_{1i}) → P_1(x_{1i}, y_{1i}, z_{1i}); depth2: (u_{2i}, v_{2i}) → P_2(x_{2i}, y_{2i}, z_{2i}).
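A sketch of this back-projection under the standard pinhole model is given below; the intrinsics (f_x, f_y, c_x, c_y) would be read from the camera calibration (for a RealSense camera they can be queried from the color stream profile), and a pair is dropped when either depth reading is missing so that the two point sets stay in correspondence:

```python
import numpy as np

def matched_points(pix1, pix2, depth1, depth2, intrinsics, depth_scale):
    """Back-project paired depth-map pixels into two corresponding 3D
    point sets P1 (target) and P2 (current) in camera coordinates."""
    fx, fy, cx, cy = intrinsics
    P1, P2 = [], []
    for (u1, v1), (u2, v2) in zip(pix1.astype(int), pix2.astype(int)):
        z1 = depth1[v1, u1] * depth_scale      # note: row index is v, column is u
        z2 = depth2[v2, u2] * depth_scale
        if z1 <= 0 or z2 <= 0:                 # skip pairs with no depth reading
            continue
        P1.append(((u1 - cx) * z1 / fx, (v1 - cy) * z1 / fy, z1))
        P2.append(((u2 - cx) * z2 / fx, (v2 - cy) * z2 / fy, z2))
    return np.asarray(P1), np.asarray(P2)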
The pose conversion matrix T_x(R_x, t_x) between P_1 and P_2 is then obtained by the ICP point cloud matching algorithm.
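Because the point correspondences are already fixed by the SIFT matching, the transform can be obtained in closed form by the SVD (Kabsch) step that sits at the core of each ICP iteration; the sketch below assumes P1 and P2 are the corresponding point sets from the previous step, and the convention that T_x maps the current cloud onto the target cloud is an assumption:

```python
import numpy as np

def rigid_transform(P2, P1):
    """Closed-form estimate of R, t with R @ p2 + t ~= p1 for corresponding
    points, returned as a 4x4 homogeneous matrix T_x."""
    c1, c2 = P1.mean(axis=0), P2.mean(axis=0)
    H = (P2 - c2).T @ (P1 - c1)                # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                   # reject a reflection solution
        Vt[2, :] *= -1
        R = Vt.T @ U.T
    t = c1 - R @ c2
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T
```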
As shown in fig. 1, a simplified monocular visual servo configuration is considered. Suppose the mechanical arm is to be driven from the pose T_1(R_1, t_1) (where the first element in brackets denotes the rotation matrix and the second the translation vector) to the pose T_2(R_2, t_2). The poses observed by the monocular depth camera at these two arm poses are T_c1(R_c1, t_c1) and T_c2(R_c2, t_c2), respectively. The motion that the tail end of the mechanical arm has to perform is T_x(R_x, t_x), and the pose transformation between the camera and the tail end of the mechanical arm is T'(R', t').
From the geometric relationships, a system of equations can be established:

T' · T_c1 = T_x · T' · T_c2    (7)
T_1 = T_x · T_2    (8)

Since T' is known from the camera-arm calibration and the product T_c1 · T_c2^{−1} is exactly the point cloud conversion matrix obtained by ICP, equation (7) gives:

T_x = T' · T_c1 · T_c2^{−1} · T'^{−1}

and substituting into equation (8) yields:

T_2 = T_x^{−1} · T_1

The calculated T_2(R_2, t_2) is the pose matrix with which the tail end of the mechanical arm reaches the target.
Step three, adjusting the pose of the tail end of the mechanical arm. To move the mechanical arm into the T_2 pose state, the rotation matrix R_2 must be converted into a quaternion q.
In this way, the target pose (t_2, q) of the mechanical arm, i.e. (X, Y, Z, w, x, y, z), is obtained.
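The final conversion could be done with SciPy as sketched below; SciPy's as_quat() returns the quaternion as (x, y, z, w), so it is reordered to the (w, x, y, z) layout of the target tuple above:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def pose_to_xyz_quat(T_2):
    """Convert the 4x4 target pose into (X, Y, Z, w, x, y, z) for the
    move command of the mechanical arm."""
    t = T_2[:3, 3]
    qx, qy, qz, qw = Rotation.from_matrix(T_2[:3, :3]).as_quat()
    return np.concatenate([t, [qw, qx, qy, qz]])
```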
The results of multiple experiments are as follows:
Claims (3)
1. A visual servo method for automatically correcting the pose of the tail end of a mechanical arm, characterized by comprising the following steps:
1) The inspection robot moves to a preset position, the mechanical arm moves to a preset pose, a monocular depth camera is started, and RGB images and depth images of the bottom of the train under the current pose are collected;
2) Performing SIFT feature detection and matching on the current RGB image and the actual target RGB image, filtering the matching result, and removing matching features with large errors; the actual target RGB image is stored in the inspection robot's image library;
3) Converting the matched plane feature points to pixel coordinates on a corresponding target depth map, and converting the matched points on the target depth map to three-dimensional point cloud coordinates under a camera coordinate system according to the internal parameters of the monocular depth camera; the target depth map data are stored in the inspection robot memory;
4) Calculating a pose conversion matrix between the point clouds from the coordinates of the current point cloud and the target point cloud by using the iterative closest point (ICP) algorithm;
5) Converting the point cloud pose conversion matrix into a conversion matrix between the current camera pose and the target camera pose through the camera mapping relation, and calculating the rotation matrix and translation vector for moving the mechanical arm from the current pose to the target pose through the pose relation between the mechanical arm and the camera.
2. The visual servo method for automatically correcting the pose of the tail end of the mechanical arm according to claim 1, wherein the monocular depth camera is an Intel RealSense D455 monocular depth camera, and the mechanical arm is an AUBO i5 six-degree-of-freedom mechanical arm.
3. The visual servo method for automatically correcting the pose of the tail end of the mechanical arm according to claim 1, wherein the specific steps of performing SIFT feature detection and matching on a current RGB image and an actual target RGB image are as follows:
wherein there are n pairs of matched feature points in total, the pixel coordinates of the matching points on the current RGB image are (x_{2i}, y_{2i}), and the pixel coordinates of the matching points on the actual target RGB image are (x_{1i}, y_{1i});

Δx_i = x_{1i} − x_{2i}    (1)
Δy_i = y_{1i} − y_{2i}    (2)

then, with μ_x, σ_x and μ_y, σ_y the mean and standard deviation of Δx_i and Δy_i over the n pairs, a matching point satisfies the required feature point condition when Δx_i ∈ (μ_x − 0.5σ_x, μ_x + 0.5σ_x) and Δy_i ∈ (μ_y − 0.5σ_y, μ_y + 0.5σ_y).
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202211170736.4A | 2022-09-23 | 2022-09-23 | Visual servo method for automatically correcting pose of tail end of mechanical arm |
Publications (1)

| Publication Number | Publication Date |
|---|---|
| CN116012442A | 2023-04-25 |
Family
ID=86030544
Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202211170736.4A (Pending) | Visual servo method for automatically correcting pose of tail end of mechanical arm | 2022-09-23 | 2022-09-23 |
Country Status (1)

| Country | Link |
|---|---|
| CN | CN116012442A |
Cited By (2)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN117140539A | 2023-11-01 | 2023-12-01 | 成都交大光芒科技股份有限公司 | Three-dimensional collaborative inspection method for robot based on space coordinate transformation matrix |
| CN117140539B | 2023-11-01 | 2024-01-23 | 成都交大光芒科技股份有限公司 | Three-dimensional collaborative inspection method for robot based on space coordinate transformation matrix |
Legal Events

| Code | Title |
|---|---|
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |