CN109676604A - Robot non-plane motion localization method and its motion locating system - Google Patents
- Publication number: CN109676604A
- Application number: CN201811604262.3A
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1656—Programme controls characterised by programming, planning systems for manipulators
- B25J9/1664—Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J5/00—Manipulators mounted on wheels or on carriages
- B25J5/007—Manipulators mounted on wheels or on carriages mounted on wheels
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1694—Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1694—Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
- B25J9/1697—Vision controlled systems
Abstract
The present invention relates to a robot non-planar (curved-surface) motion localization method and its motion localization system. An RGB-D depth camera captures color and depth images of the curved surface ahead of the moving robot, and the robot's rotation matrix and translation vector are computed by feature matching. The pitch and roll angles of the robot are optimized with the three-axis acceleration data of the inertial measurement unit IMU, while the heading angle and translation vector are corrected with the reprojection error of the feature points; the surface point clouds obtained from the depth images are then stitched together. Nonlinear optimization of position and attitude, subject to the positional constraint between the robot and the surface point cloud and a minimum-IMU-error constraint, yields the final pose information. Feature points of key frames are compiled into a dictionary for loop-closure detection. The invention can localize both the position and the attitude of a robot in non-planar motion; the system is simple in structure, high in localization accuracy, and capable of reliable long-term operation.
Description
Technical Field
The invention relates to computer vision technology, and in particular to a robot curved-surface motion positioning method and a motion positioning system thereof.
Background
With the rapid development of robot technology, industrial mobile robots are widely used. The wall-climbing robot, one type of industrial mobile robot, adheres to the surface of large structures or equipment by magnetic or air-pressure adsorption and performs work in place of manual labor. Autonomous position localization and attitude localization are prerequisites for automatic or semi-automatic operation of a wall-climbing robot.
In a typical working environment, a closed space with strong magnetic interference and few distinctive structural features, GPS signals are difficult to receive, a magnetometer is easily disturbed by the surrounding magnetic field, and odometry is limited by slip of the robot's driving wheels. A single navigation system can hardly satisfy the navigation requirements of such varied environments. Combining two or more devices with different characteristics into an integrated system allows the respective advantages of each to be fully exploited for the best overall performance. The combination of a visual sensor and an inertial sensor is inexpensive, structurally simple, and information-rich, and has therefore attracted wide attention and application. Among existing visual-inertial positioning methods: with a monocular camera as the visual sensor, motion parameters must be obtained by optical flow under a constant-brightness assumption, so ambient illumination has a large influence; with a binocular camera, the three-dimensional coordinates of feature points are triangulated and optimized, which places high demands on the number and distinctiveness of feature points, and since feature points on the surface of large equipment or structural members are highly similar, mismatches occur easily; with an RGB-D depth camera as the visual sensor, Chinese patent document CN201711077826 discloses a positioning method and system based on visual-inertial information fusion that uses Harris corner descriptors as feature points, but it discards the depth information of non-corner pixels, giving large errors and significant information loss in position localization for curved-surface motion.
Disclosure of Invention
The invention aims to provide a robot curved surface motion positioning method and a motion positioning system thereof, which have the advantages of simple structure, strong anti-interference capability, high positioning precision and the like, thereby further improving the autonomous positioning level of the robot.
The technical scheme of the invention is as follows:
the robot curved surface motion positioning method comprises the following steps:
1) establishing a robot curved surface motion positioning system, and defining a coordinate system: the robot curved surface motion positioning system comprises a robot, wherein an RGB-D camera, an inertia measurement unit IMU, an adsorption device and a motion mechanism are arranged on the robot, the adsorption device is used for enabling the robot to be adsorbed on the surface of an object, and the motion mechanism is used for driving the robot to move;
the coordinate system comprises a space inertia coordinate system, an RGB-D camera coordinate system, an inertia measurement unit IMU coordinate system and a robot coordinate system;
registering a color image and a depth image of the RGB-D camera;
2) the robot moves; an RGB-D camera fixed at the front of the robot collects color images and depth images of the object surface in real time, and the inertial measurement unit IMU collects the three-axis angular velocity Ω_k = (ω_x, ω_y, ω_z) and three-axis acceleration A_k = (a_x, a_y, a_z) of the robot moving on the curved surface;
3) extracting feature points from the color image acquired in step 2) with the ORB feature extraction method, calculating descriptors with the BRIEF algorithm, completing feature matching by nearest-neighbor search, and calculating the rotation matrix R_k and translation vector t_k of the robot in the spatial inertial coordinate system;
4) between every two frames of color images, from the Euler angles (α_{k-1}, β_{k-1}, γ_{k-1}) of the (k-1)-th frame image, calculating the Euler angles of the k-th frame according to the output values of the inertial measurement unit IMU; taking the output of the IMU three-axis accelerometer as the components A_k = (a_x, a_y, a_z) of gravitational acceleration on the three axes and using these components to update the roll angle β_k and pitch angle γ_k, thereby correcting the Euler angles of the k-th frame; using the updated β_k and γ_k to correct the rotation matrix of step 3); calculating the reprojection error of the previous frame's feature points in the k-th frame and correcting the heading angle α_k and translation vector t_k by least squares; calculating the robot's three-dimensional coordinates P_t from the corrected Euler angles and translation vector of the k-th frame; and updating the IMU three-axis angular-velocity error b_k = (b_x, b_y, b_z) by least squares and returning b_k to the inertial measurement unit IMU to correct its measurement values;
5) splicing the RGB-D depth images by using the rotation matrix and the translation vector corrected in the step 4) to generate a robot motion curved surface point cloud, and performing voxel grid filtering on the curved surface point cloud;
6) according to the principle that the robot's attitude conforms to the surface it moves on, optimizing the corrected translation vector by constructing a nonlinear equation from the robot's motion constraints on the curved surface, obtaining the translation vector t_k of the robot at frame k;
7) performing closed-loop detection: comparing the BRIEF descriptors of the feature points in the k-th frame with the descriptors in the BoW bag-of-words model; if the similarity exceeds a set threshold, updating the robot's pose information from the three-dimensional coordinates of the feature points corresponding to the matched descriptors in the BoW model; otherwise, adding the feature points of the k-th frame to the BoW model and returning to step 3).
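The descriptor comparison in step 7) can be sketched as follows. BRIEF descriptors are binary strings, so similarity reduces to Hamming distance; the function names, the packed-integer descriptor representation, and the `max_dist` threshold below are illustrative assumptions, not values taken from the patent.

```python
# Sketch of the loop-closure check: compare a binary BRIEF descriptor of
# the current frame against stored bag-of-words entries by Hamming distance.

def hamming(d1: int, d2: int) -> int:
    """Hamming distance between two binary descriptors packed as ints."""
    return bin(d1 ^ d2).count("1")

def loop_closure(desc: int, vocab: list, max_dist: int = 8):
    """Return index of the best vocabulary match, or None (desc is then added)."""
    best_i, best_d = None, max_dist + 1
    for i, v in enumerate(vocab):
        d = hamming(desc, v)
        if d < best_d:
            best_i, best_d = i, d
    if best_d <= max_dist:
        return best_i        # loop closure: reuse the stored 3-D coordinates
    vocab.append(desc)       # otherwise extend the vocabulary
    return None

vocab = [0b10110010, 0b01001101]
assert loop_closure(0b10110011, vocab, max_dist=2) == 0   # one bit differs
assert loop_closure(0b11111111, vocab, max_dist=2) is None
assert len(vocab) == 3
```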
Preferably, calculating the Euler angles of the k-th frame from the Euler angles (α_{k-1}, β_{k-1}, γ_{k-1}) of the (k-1)-th frame image and the output values of the inertial measurement unit IMU in step 4) is given by formulas (1) and (2):
R_{k-1} = DCMFromEuler(α_{k-1}, β_{k-1}, γ_{k-1})   (1)

(α_k, β_k, γ_k) = EulerFromDCM( R_{k-1} · exp( [(Ω_k - b_{k-1}) Δt]_× ) )   (2)

where DCMFromEuler converts Euler angles into a rotation matrix;

EulerFromDCM converts a rotation matrix into Euler angles;

Ω_k is the three-axis angular velocity of the robot moving on the motion surface, acquired by the inertial measurement unit IMU at frame k;

b_{k-1} is the three-axis angular-velocity error of the inertial measurement unit IMU at frame k-1;

R_{k-1} is the rotation matrix of the robot in the spatial inertial coordinate system at frame k-1;

Δt is the time difference between frame k and frame k-1.
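Formulas (1) and (2) can be sketched in code under the assumption of a Z-Y-X (heading, pitch, roll) Euler convention, which the patent does not spell out. The propagation step applies the matrix exponential of the bias-corrected gyro increment via Rodrigues' formula:

```python
import numpy as np

# Hedged sketch: Euler <-> DCM conversion and one IMU propagation step,
# R_k = R_{k-1} * exp([(omega - bias) * dt]_x). Convention is an assumption.

def dcm_from_euler(yaw, pitch, roll):
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

def euler_from_dcm(R):
    yaw = np.arctan2(R[1, 0], R[0, 0])
    pitch = np.arcsin(-R[2, 0])
    roll = np.arctan2(R[2, 1], R[2, 2])
    return yaw, pitch, roll

def skew(w):
    return np.array([[0, -w[2], w[1]], [w[2], 0, -w[0]], [-w[1], w[0], 0]])

def propagate(R_prev, omega, bias, dt):
    """One step of formula (2): rotate R_prev by the corrected gyro increment."""
    phi = (np.asarray(omega, float) - np.asarray(bias, float)) * dt
    th = np.linalg.norm(phi)
    if th < 1e-12:
        return R_prev
    K = skew(phi / th)
    exp_phi = np.eye(3) + np.sin(th) * K + (1 - np.cos(th)) * (K @ K)
    return R_prev @ exp_phi

R0 = dcm_from_euler(0.3, -0.1, 0.2)
assert np.allclose(euler_from_dcm(R0), [0.3, -0.1, 0.2])
```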
Preferably, the heading angle α_k and translation vector t_k are corrected by constructing a nonlinear equation from the reprojection error of the previous frame's feature points in the k-th frame, as shown in formula (3):

min over (α_k, t_k) of Σ_i || P_k^i - ( R_k(α_k) · P_{k-1}^i + t_k ) ||²   (3)

where P_k^i and P_{k-1}^i respectively denote the three-dimensional coordinates of feature point i in the k-th frame and the (k-1)-th frame;

α_k is the robot heading angle;

R_k(α_k) is the rotation matrix from the (k-1)-th frame to the k-th frame.
Preferably, the three-axis angular-velocity error b_k = (b_x, b_y, b_z) of the inertial measurement unit IMU is calculated as follows:

First, the rotation matrix from frame k-m to frame k is calculated from the IMU output values using formula (4):

R~_k(b_k) = R_{k-m} · Π_{j=k-m+1..k} ( exp( [Ω_j Δt]_× ) · exp( -[ J_r(Ω_j Δt) · b_k Δt ]_× ) )   (4)

where J_r(Ω_j Δt) is the right Jacobian of Ω_j Δt on the Lie group;

k is the current frame;

Ω_j = (ω_x, ω_y, ω_z) is the three-axis angular velocity of the robot moving on the motion surface, acquired by the inertial measurement unit IMU at frame j;

[Ω_j]_× is the antisymmetric (skew) matrix of Ω_j;

R_{k-m} is the rotation matrix of the robot in the spatial inertial coordinate system at frame k-m.

Then, the rotation matrix from frame k-m to frame k is calculated from image feature matching using formula (5):

R^_k = R_CB · ( R_{k-m} )^T · R_k · R_CB^T   (5)

where R_j is the rotation matrix of the robot in the spatial inertial coordinate system at frame j;

R_CB is the rotation matrix from the RGB-D camera coordinate system to the inertial measurement unit IMU coordinate system.

Using formulas (4) and (5), a nonlinear equation in the IMU three-axis angular-velocity error b_k is constructed as formula (6):

b_k* = argmin over b_k of || Log( ( R^_k )^T · R~_k(b_k) ) ||²   (6)

The optimal b_k is obtained by iteratively solving formula (6) with the Gauss-Newton method.
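The structure of this bias estimation can be illustrated with a deliberately simplified single-axis version: estimate a constant gyro bias b by Gauss-Newton so that the yaw integrated from the biased gyro rates matches the yaw recovered from image feature matching. The real method works on SO(3) with right-Jacobian terms; everything below is an assumption-level toy, not the patent's exact formula.

```python
# Toy 1-D analogue of the bias estimation: Gauss-Newton on
# r(b) = yaw_vision - sum((w_j - b) * dt). The residual is linear in b,
# so the iteration converges in one step, but the update has the same
# shape as the full Gauss-Newton used in the patent.

def estimate_bias(rates, dt, yaw_vision, iters=5):
    b = 0.0
    for _ in range(iters):
        r = yaw_vision - sum((w - b) * dt for w in rates)
        J = len(rates) * dt               # dr/db
        b -= r / J                        # Gauss-Newton step for a scalar
    return b

true_rates = [0.10, 0.12, 0.08, 0.11]   # illustrative true yaw rates
bias_true = 0.05                        # constant bias on the measurements
dt = 0.02
measured = [w + bias_true for w in true_rates]
yaw_vis = sum(w * dt for w in true_rates)   # yaw the vision pipeline would see
assert abs(estimate_bias(measured, dt, yaw_vis) - bias_true) < 1e-9
```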
Preferably, in step 6), the method for optimizing the corrected translation vector is as follows:

Let n^b be the unit vector in the robot coordinate system perpendicular to the motion surface and pointing upward. Preliminarily estimate the robot position P_t from the Euler angles and translation vector of frame k obtained in step 4); select discrete points near P_t on the motion surface Σ, fit the tangent plane at P_t, and obtain the corresponding normal vector n_t. The residual r_1 is calculated as formula (7):

r_1 = n_t - R_k · n^b   (7)

where R_k is the rotation matrix of the robot in the spatial inertial coordinate system at frame k;

n^b is the unit vector in the robot coordinate system perpendicular to the motion surface, pointing upward.

The distance r_2 from the robot's position coordinates to the motion surface is calculated as formula (8):

r_2 = || P_t - M_t ||   (8)

where P_t denotes the calculated robot position, M_t denotes the point on the robot-motion-surface point cloud of step 5) closest to the calculated robot position, and t_k is the robot translation vector to be optimized;

the nonlinear equation constructed to optimize the translation vector is:

t_k* = argmin over t_k of ( || r_1 ||² + r_2² )

and the optimal translation vector between every two frames of images is solved iteratively by the Gauss-Newton method.
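The tangent-plane fit and the two residuals above can be sketched as follows; the PCA-based normal fit and all names are assumptions standing in for whatever fitting method the patent intends.

```python
import numpy as np

# Sketch: fit a tangent plane to the point-cloud neighbourhood of the
# estimated position by PCA (the eigenvector of the local covariance with
# the smallest eigenvalue is the surface normal), then form the residuals:
# r1 compares the fitted normal with the robot's body "up" axis R_k @ n_b,
# r2 is the distance from P_t to the nearest cloud point M_t.

def fit_normal(points):
    pts = np.asarray(points, float)
    centered = pts - pts.mean(axis=0)
    cov = centered.T @ centered
    w, v = np.linalg.eigh(cov)       # eigenvalues ascending
    n = v[:, 0]                      # smallest eigenvalue -> plane normal
    return n if n[2] >= 0 else -n    # orient "up" for comparability

def residuals(P_t, R_k, cloud, n_b=np.array([0.0, 0.0, 1.0])):
    cloud = np.asarray(cloud, float)
    n_fit = fit_normal(cloud)
    r1 = n_fit - R_k @ n_b                    # normal-alignment residual
    d = np.linalg.norm(cloud - P_t, axis=1)
    M_t = cloud[np.argmin(d)]                 # nearest cloud point
    r2 = np.linalg.norm(P_t - M_t)            # distance-to-surface residual
    return r1, r2

# Flat z = 0 patch: fitted normal is +z, so with R_k = I both residuals
# vanish for a point lying on the patch.
patch = [[x, y, 0.0] for x in range(4) for y in range(4)]
r1, r2 = residuals(np.array([1.0, 1.0, 0.0]), np.eye(3), patch)
assert np.allclose(r1, 0.0, atol=1e-9) and r2 < 1e-9
```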
Preferably, the inertial measurement unit IMU includes a gyroscope and a three-axis accelerometer.
Preferably, the radius of curvature of the surface of the object is 5 m or more.
Preferably, the components A_k = (a_x, a_y, a_z) of gravitational acceleration along the three accelerometer axes are used to update the roll angle β_k and pitch angle γ_k as follows: first convert the rotation matrix of step 3) into its Euler-angle representation, update the roll angle to β_k and the pitch angle to γ_k, and then convert the Euler angles into a new rotation matrix R_k.
The invention also provides a robot curved-surface motion positioning system that positions the robot's curved-surface motion using the above method. It comprises an RGB-D depth camera, an inertial measurement unit IMU, a wall-climbing robot, a coordinate-system construction module, a registration module, a rotation-matrix and translation-vector acquisition module, a rotation-matrix and translation-vector correction module, an RGB-D depth-image stitching module, a translation-vector optimization module, and a similarity comparison module, wherein:
the RGB-D camera is used for acquiring a color image and a depth image of the surface of an object in real time;
the inertial measurement unit IMU is used for acquiring the three-axis angular velocity Ω_k = (ω_x, ω_y, ω_z) and three-axis acceleration A_k = (a_x, a_y, a_z) of the robot moving on the curved surface;
The coordinate system construction module is used for constructing a space inertia coordinate system, an RGB-D camera coordinate system, an inertia measurement unit IMU coordinate system and a robot coordinate system;
the registration module is used for registering the color image and the depth image of the RGB-D camera;
the rotation-matrix and translation-vector acquisition module is used for extracting feature points from the color image acquired in step 2) with the ORB feature extraction method, calculating descriptors with the BRIEF algorithm, completing feature matching by nearest-neighbor search, and calculating the rotation matrix R_k and translation vector t_k of the robot in the spatial inertial coordinate system;
the rotation-matrix and translation-vector correction module is used for, between every two frames of color images, calculating the Euler angles of the k-th frame from the Euler angles (α_{k-1}, β_{k-1}, γ_{k-1}) of the (k-1)-th frame image and the output values of the inertial measurement unit IMU; taking the output of the IMU three-axis accelerometer as the components A_k = (a_x, a_y, a_z) of gravitational acceleration on the three axes and using these components to update the roll angle β_k and pitch angle γ_k, thereby correcting the Euler angles of the k-th frame; using the updated β_k and γ_k to correct the rotation matrix of step 3); calculating the reprojection error of the previous frame's feature points in the k-th frame and correcting the heading angle α_k and translation vector t_k by least squares; calculating the robot's three-dimensional coordinates P_t from the corrected Euler angles and translation vector of the k-th frame; and updating the IMU three-axis angular-velocity error b_k = (b_x, b_y, b_z) by least squares and returning b_k to the inertial measurement unit IMU to correct its measurement values;
the RGB-D depth image splicing module splices the RGB-D depth images by using the rotation matrix and the translation vector corrected in the step 4) to generate a robot motion curved surface point cloud, and performs voxel grid filtering on the curved surface point cloud;
the translation-vector optimization module is used for constructing a nonlinear equation from the robot's motion constraints on the curved surface, according to the principle that the robot's attitude conforms to its motion surface, and optimizing the corrected translation vector to obtain the translation vector t_k of the robot at frame k;
the similarity comparison module is used for performing closed-loop detection: comparing the BRIEF descriptors of the feature points in the k-th frame with the descriptors in the BoW bag-of-words model; if the similarity exceeds a set threshold, updating the robot's position and attitude information from the three-dimensional coordinates of the feature points corresponding to the matched descriptors in the BoW model; otherwise, adding the feature points of the k-th frame to the BoW model.
The beneficial effects of the invention are as follows:
1) an RGB-D depth camera directly obtains the three-dimensional coordinates of feature points, eliminating the depth calculation of triangulation;
2) an IMU-assisted measurement method avoids the monocular-vision problem that depth cannot be computed while the robot rotates;
3) stitching RGB-D depth images yields the point cloud of the robot's motion surface, which is used for a secondary optimization of the robot's attitude and position parameters, increasing positioning accuracy.
Drawings
FIG. 1 is a schematic flow chart of a robot curved-surface motion positioning method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a method for acquiring a curved surface motion and an RGB-D camera of a robot according to an embodiment of the present invention;
FIG. 3 is a schematic diagram illustrating generation of a robot motion surface point cloud by stitching RGB-D depth images by using a rotation matrix and a translation vector according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of the method for optimizing position location and orientation parameters by using a curved point cloud according to an embodiment of the present invention.
In the figure: 1-RGB-D depth camera; 2-an inertial measurement unit IMU; 3-a wall climbing robot; 4-large structural members; 5-RGB-D depth camera field of view region; 6-depth point cloud curved surface; 7-depth point cloud tiles; 8-depth point cloud splicing repetition part.
Detailed Description
The following describes embodiments of a robot curved surface motion positioning method and a motion positioning system thereof according to the present invention with reference to the accompanying drawings. Those of ordinary skill in the art will recognize that the described embodiments can be modified in various different ways, or combinations thereof, without departing from the spirit and scope of the present invention. Accordingly, the drawings and description are illustrative in nature and not intended to limit the scope of the claims. Furthermore, in the present description, the drawings are not to scale and like reference numerals refer to like parts.
Fig. 1 shows a schematic flow chart of a robot curved surface motion positioning method, which includes the following steps:
1) establishing a robot curved-surface motion positioning system and defining the coordinate systems: the positioning system comprises a robot on which an RGB-D camera, an inertial measurement unit IMU, an adsorption device, and a motion mechanism are mounted; the adsorption device enables the robot to adhere to the surface of an object, and the motion mechanism drives the robot to move. The coordinate systems comprise a spatial inertial coordinate system, an RGB-D camera coordinate system, an inertial measurement unit IMU coordinate system, and a robot coordinate system. The color image and the depth image of the RGB-D camera are registered. As shown in FIG. 2, the wall-climbing robot 3 moves on the surface of a large structure 4; an RGB-D depth camera 1 is provided at the front end of the wall-climbing robot 3, and an IMU inertial measurement unit 2 is also provided on the wall-climbing robot 3. Reference numeral 5 denotes the field-of-view region of the RGB-D depth camera 1.
2) The robot moves on the surface of large equipment or a structural member (radius of curvature 5 m or more), as shown in FIG. 2. An RGB-D camera fixed at the front of the robot, shooting downward, collects color and depth images in real time, while the inertial measurement unit IMU collects the three-axis angular velocity Ω_k = (ω_x, ω_y, ω_z) and three-axis acceleration A_k = (a_x, a_y, a_z) of the robot moving on the curved surface.
3) Feature points are extracted from the color image collected in step 2) using ORB (Oriented FAST and Rotated BRIEF, an algorithm for rapidly extracting feature points and descriptors). The feature-point extraction examines a circle of pixel values around a candidate point based on the surrounding gray values: if enough pixels in the neighborhood differ from the candidate's gray value by more than a set threshold, the candidate is accepted as a feature point. Descriptors are computed with the BRIEF feature-descriptor extraction algorithm, and feature matching is completed by nearest-neighbor search, i.e. finding the descriptor closest to that of a given feature point. From the matched image feature points of two consecutive frames, the robot's rotation matrix and translation vector are solved by least squares, specifically by minimizing the sum of squared differences of the three-dimensional coordinates of the matched feature points, thereby calculating the rotation matrix R_k and translation vector t_k of the robot in the spatial inertial coordinate system.
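The least-squares solution named above, minimizing the sum of squared differences of matched three-dimensional feature coordinates over two consecutive frames, is the classic Kabsch/Umeyama alignment. A minimal sketch with synthetic data in place of real ORB matches:

```python
import numpy as np

# Kabsch alignment: find R, t minimising sum ||Q_i - (R @ P_i + t)||^2
# for matched 3-D point sets P (previous frame) and Q (current frame).

def rigid_transform(P_prev, P_curr):
    A = np.asarray(P_prev, float)
    B = np.asarray(P_curr, float)
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)                # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cb - R @ ca
    return R, t

# Synthetic check: recover a known yaw rotation and translation.
yaw = 0.2
R_true = np.array([[np.cos(yaw), -np.sin(yaw), 0],
                   [np.sin(yaw),  np.cos(yaw), 0],
                   [0, 0, 1]])
t_true = np.array([0.1, -0.3, 0.05])
P = np.random.default_rng(0).normal(size=(20, 3))
Q = P @ R_true.T + t_true
R, t = rigid_transform(P, Q)
assert np.allclose(R, R_true, atol=1e-8) and np.allclose(t, t_true, atol=1e-8)
```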
4) Between every two frames of images, the Euler angles of the current frame are calculated from the robot's Euler angles (α_{k-1}, β_{k-1}, γ_{k-1}) of the previous frame and the output values of the inertial measurement unit IMU. Formula (1) converts the previous frame's Euler angles into the rotation matrix R_{k-1} for convenience of calculation; formula (2) recursively solves for the rotation matrix at the next instant from the IMU output values and converts the robot's rotation matrix back into an Euler-angle representation:
R_{k-1} = DCMFromEuler(α_{k-1}, β_{k-1}, γ_{k-1})   (1)

(α_k', β_k', γ_k') = EulerFromDCM( R_{k-1} · exp( [(Ω_k - b_{k-1}) Δt]_× ) )   (2)
The robot moves at low speed on a surface of large curvature, so its acceleration is approximately 0 and the output of the IMU three-axis accelerometer is approximately the projection of gravitational acceleration on the three axes. The components A_k = (a_x, a_y, a_z) of gravity along the three accelerometer axes are used to calculate the robot's roll angle β_k by formula (3) and pitch angle γ_k by formula (4), replacing the roll angle β_k' and pitch angle γ_k' calculated from the inertial measurement unit IMU in formula (2). Specifically, the rotation matrix of step 3) is first converted into its Euler-angle representation; then:
β_k = atan2(a_x, a_z)   (3)

γ_k = arcsin( a_y / sqrt(a_x² + a_y² + a_z²) )   (4)
where a_x, a_y, a_z are the output values of the inertial measurement unit IMU three-axis accelerometer;

atan2(a_x, a_z) is the four-quadrant arctangent of the ratio of a_x to a_z;

arcsin is the arcsine function.
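Formulas (3) and (4) in code form. The argument of formula (4) is not fully legible in the source; the form γ_k = arcsin(a_y / |A_k|) used below is an assumption consistent with the arcsine named in the symbol list.

```python
import math

# Roll and pitch from the gravity components measured by a static
# (or slowly moving) three-axis accelerometer.

def roll_pitch_from_accel(ax, ay, az):
    beta = math.atan2(ax, az)                 # roll, formula (3)
    g = math.sqrt(ax * ax + ay * ay + az * az)
    gamma = math.asin(ay / g)                 # pitch, assumed form of (4)
    return beta, gamma

# Robot level on a horizontal surface: gravity lies entirely on z.
beta, gamma = roll_pitch_from_accel(0.0, 0.0, 9.81)
assert beta == 0.0 and gamma == 0.0
```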
The roll angle β_k' calculated from the inertial measurement unit IMU in formula (2) is updated to β_k, and the pitch angle γ_k' is updated to γ_k; the updated Euler angles are then converted into a new rotation matrix R_k. Treating R_k as a function of α_k, written R_k(α_k): in the spatial inertial coordinate system, the feature points P_{k-1}^i of the previous frame are projected into the current frame by the rotation matrix R_k(α_k) and translation vector t_k and subtracted from the corresponding feature points P_k^i of the current frame to obtain the reprojection error. The reprojection errors of all feature points are summed, and the heading angle α_k and translation vector t_k are corrected by least squares, as shown in formula (5):

min over (α_k, t_k) of Σ_i || P_k^i - ( R_k(α_k) · P_{k-1}^i + t_k ) ||²   (5)
where P_k^i and P_{k-1}^i respectively denote the three-dimensional coordinates of a feature point in the current frame and the previous frame;
The robot's three-dimensional coordinates P_t = (x_t, y_t, z_t) are calculated from the corrected Euler angles and translation vector of the current frame. A nonlinear equation in the three-axis angular-velocity error of the inertial measurement unit IMU is then constructed, and the error b_k = (b_x, b_y, b_z) is calculated. The error b_k is returned to the inertial measurement unit IMU to correct its measurement values.
Specifically, the rotation matrix from frame k-m to the current frame (frame k) is first calculated from the output values of the inertial measurement unit IMU by formula (6):

R~_k(b_k) = R_{k-m} · Π_{j=k-m+1..k} ( exp( [Ω_j Δt]_× ) · exp( -[ J_r(Ω_j Δt) · b_k Δt ]_× ) )   (6)
where J_r(Ω_j Δt) is the right Jacobian of Ω_j Δt on the Lie group;

[Ω_j]_× is the antisymmetric (skew) matrix of Ω_j.
Then, the rotation matrix from frame k-m to the current frame (frame k) is calculated from image feature matching by formula (7):

R^_k = R_CB · ( R_{k-m} )^T · R_k · R_CB^T   (7)
where R_CB is the rotation matrix from the RGB-D camera coordinate system to the inertial measurement unit IMU coordinate system.
By the orthogonality of rotation matrices, in the error-free case R~_k and R^_k should be identical, i.e. the product (R^_k)^T · R~_k should be the identity matrix. The nonlinear equation in the three-axis angular-velocity error of the inertial measurement unit IMU is therefore constructed as formula (8):

b_k* = argmin over b_k of || Log( ( R^_k )^T · R~_k(b_k) ) ||²   (8)
5) The RGB-D depth images are stitched using the rotation matrix and translation vector corrected in step 4), as shown in FIG. 3: FIG. 3-1 shows the existing depth point-cloud curved surface 6, and FIG. 3-2 shows a depth point-cloud patch 7, in which the shaded part is the region repeated in stitching. The new point-cloud coordinates are added to the existing depth point-cloud curved surface 6 and the repeated stitching region 8 is removed, generating the new robot-motion-surface point cloud shown in FIG. 3-3. Voxel-grid filtering is then applied to the surface point cloud.
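The voxel-grid filtering applied to the stitched cloud can be sketched with a pure-Python stand-in for a library filter such as PCL's VoxelGrid; the leaf size and names below are illustrative.

```python
from collections import defaultdict

# Voxel-grid downsampling: bucket points into cubic voxels of side `leaf`
# and keep one averaged point per occupied voxel.

def voxel_filter(points, leaf=0.05):
    buckets = defaultdict(list)
    for p in points:
        key = tuple(int(c // leaf) for c in p)   # voxel index of the point
        buckets[key].append(p)
    out = []
    for pts in buckets.values():
        n = len(pts)
        out.append(tuple(sum(c[i] for c in pts) / n for i in range(3)))
    return out

dense = [(0.01, 0.01, 0.0), (0.02, 0.02, 0.0),   # land in the same voxel
         (0.30, 0.00, 0.0)]                      # a different voxel
sparse = voxel_filter(dense, leaf=0.05)
assert len(sparse) == 2
```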
6) Following the principle that the robot's attitude should conform to its motion surface, a nonlinear equation is constructed from the robot's motion constraints on the curved surface and the corrected translation vector is optimized, yielding the translation vector of the robot at time k.
In detail: as the robot moves on the curved surface, position obtained by double integration of acceleration accumulates error readily, so optimizing the translation vector obtained in step 4) gives a more reliable position. At any moment the robot's velocity lies in the tangent plane of the motion surface at the corresponding position. This embodiment uses a wheeled wall-climbing robot; the curvature of large equipment or structural parts is small, so the motion can be approximated as instantaneously planar. A unit vector perpendicular to the motion surface and pointing upward is defined in the robot coordinate system, and the robot's velocity along that vector is 0. The robot position P_t is preliminarily estimated from the Euler angles and translation vector of the current frame obtained in step 4); discrete points near P_t are selected on the motion surface Σ, the tangent plane at P_t is fitted from them, and the corresponding normal vector is found. Ideally the deviation between the two vectors is 0, so that deviation is chosen as the residual to be optimized, as shown in formula (9):
where R_k is the rotation matrix of the robot in the spatial inertial coordinate system at time k;
is the unit vector in the robot coordinate system perpendicular to the motion surface and pointing upward;
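Fitting the tangent plane from the nearby discrete points can be sketched with a small SVD step. This is a sketch under our own assumptions: the surface normal is taken as the singular vector with the smallest singular value of the centered neighbor set, and, since formula (9) is not reproduced in the text, the residual below (misalignment between the rotated body-frame up vector and the fitted normal) is one natural choice, not necessarily the patent's exact form.

```python
import numpy as np

def fit_tangent_plane_normal(neighbors):
    """Fit a plane to discrete surface points near P_t; the normal is the
    right singular vector with the smallest singular value."""
    centered = neighbors - neighbors.mean(axis=0)
    _, _, vt = np.linalg.svd(centered)
    n = vt[-1]
    return n / np.linalg.norm(n)

def attitude_residual(R_k, n_body, neighbors):
    # misalignment between the body up axis rotated into the inertial frame
    # and the fitted surface normal: 0 when the attitude matches the surface
    n_surf = fit_tangent_plane_normal(neighbors)
    v = R_k @ n_body
    # fitted normals are sign-ambiguous, so compare up to sign
    return 1.0 - abs(float(v @ n_surf))
```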
In the ideal case the robot's coordinates lie on the motion surface obtained in step 5); however, owing to the three-axis angular-velocity and acceleration errors output by the inertial measurement unit IMU, image feature-matching errors, and limited calculation precision, the three-dimensional point corresponding to the robot's position coordinates can deviate from the motion surface. As shown in Fig. 4, points a, b and c are the robot's positions on the point cloud surface in the ideal case, while a*, b* and c* are its positions when errors are present. Keeping the sum of squared distances from the robot's position coordinates to the motion surface minimal effectively reduces the positioning error; the translation vector optimization is computed as shown in formula (10):
where P_t denotes the calculated robot position, M_t denotes the point on the robot motion-surface point cloud nearest to the calculated position, and the remaining symbol in (10) is the robot translation vector to be optimized;
the nonlinear equation constructed for optimizing the translation vector is:
the optimal value of the translation vector between each pair of image frames is solved iteratively by the Gauss-Newton method.
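The iteration on the distance-to-surface residual can be sketched as below. This is a simplified stand-in for the patent's solver, under our own assumptions: the residual is the distance from the shifted position to its nearest cloud point, the Jacobian is formed by forward differences, and a small damping term keeps the rank-one normal matrix invertible (making the step Levenberg-Marquardt-like); all function names are ours.

```python
import numpy as np

def nearest_point(cloud, p):
    # M_t: the cloud point nearest the current position estimate
    return cloud[np.argmin(np.linalg.norm(cloud - p, axis=1))]

def residual(t, p0, cloud):
    # distance from the shifted position P_t = p0 + t to the surface cloud
    return np.linalg.norm(p0 + t - nearest_point(cloud, p0 + t))

def gauss_newton_translation(p0, cloud, t0, iters=20, eps=1e-6):
    """Damped Gauss-Newton on the scalar distance residual, with a
    numerically differenced Jacobian (a sketch of the iteration in the text)."""
    t = np.asarray(t0, dtype=float)
    for _ in range(iters):
        r = residual(t, p0, cloud)
        J = np.array([(residual(t + eps * e, p0, cloud) - r) / eps
                      for e in np.eye(3)])
        JtJ = np.outer(J, J) + 1e-9 * np.eye(3)  # damping for rank-1 J^T J
        t = t - np.linalg.solve(JtJ, J * r)
        if r < 1e-8:
            break
    return t
```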
7) Perform closed-loop detection: compare the BRIEF descriptor of each feature point in the current frame with the descriptors in the BoW bag-of-words model. When the similarity exceeds a set value, a match is declared and the robot's pose information is updated from the three-dimensional coordinates of the feature point corresponding to the matched descriptor in the BoW bag-of-words model; otherwise the new feature point is added to the BoW bag-of-words model.
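The descriptor comparison for loop closure can be sketched with binary BRIEF descriptors and a Hamming-distance similarity. The vocabulary structure below (a plain dict from descriptor bytes to stored 3-D coordinates) is a deliberately simplified stand-in for a real BoW model, and the threshold value is illustrative only.

```python
import numpy as np

def hamming_similarity(d1, d2):
    """Similarity of two binary descriptors given as 0/1 bit arrays:
    the fraction of matching bits (1.0 for identical descriptors)."""
    d1 = np.asarray(d1, dtype=np.uint8)
    d2 = np.asarray(d2, dtype=np.uint8)
    return 1.0 - np.count_nonzero(d1 ^ d2) / d1.size

def loop_closure_update(desc, coords, vocab, threshold=0.9):
    """vocab: dict mapping descriptor bytes -> stored 3-D feature coordinates.
    On a similarity match return the stored coordinates (used to update the
    pose); otherwise register the new descriptor with its coordinates."""
    desc = np.asarray(desc, dtype=np.uint8)
    for key, stored in vocab.items():
        if hamming_similarity(desc, np.frombuffer(key, dtype=np.uint8)) > threshold:
            return stored
    vocab[desc.tobytes()] = coords
    return None
```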
In an alternative embodiment, the inertial measurement unit IMU comprises a gyroscope and a three-axis accelerometer.
The invention also provides a robot curved-surface motion positioning system that performs curved-surface motion positioning by the above method and comprises an RGB-D depth camera 1, an inertial measurement unit IMU 2 and a wall-climbing robot 3. As shown in Fig. 2, the wall-climbing robot 3 moves on the surface of a large structure 4; the RGB-D depth camera 1 is mounted at the front end of the wall-climbing robot 3, which also carries the IMU inertial measurement unit 2, an adsorption device (not shown) and a motion mechanism (not shown). The adsorption device holds the robot on the object surface and the motion mechanism drives the robot. Reference 5 denotes the imaging area of the RGB-D depth camera 1. The RGB-D camera, fixed at the front of the robot and aimed downward, collects color images and depth images in real time, while the inertial measurement unit IMU collects the three-axis angular velocity Ω_k(ω_x, ω_y, ω_z) and three-axis acceleration A_k(a_x, a_y, a_z) of the robot moving on the curved surface.
Further, the robot curved-surface motion positioning system also comprises a coordinate system construction module for building the positioning system and defining the coordinate systems: a spatial inertial coordinate system, an RGB-D camera coordinate system, an inertial measurement unit IMU coordinate system and a robot coordinate system.
Further, the system comprises a registration module for registering the color image and the depth image of the RGB-D camera.
Further, the system includes a rotation matrix and translation vector acquisition module that extracts feature points from the color image collected in step 2) using ORB (Oriented FAST and Rotated BRIEF, an algorithm for fast feature-point and descriptor extraction). Based on the gray values of the image around a candidate feature point, a circle of surrounding pixels is examined; if enough pixels in that neighborhood differ from the candidate's gray value by more than a set threshold, the gray-value difference is considered large enough and the candidate is accepted as a feature point. Descriptors are computed with the BRIEF feature-descriptor extraction algorithm, and feature matching is completed by the nearest-neighbor search method, i.e., finding the descriptor closest to that of a given feature point. From the matched image feature points of two consecutive frames, the robot's rotation matrix and translation vector are solved by the least squares method, specifically by minimizing the sum of squared differences of the three-dimensional coordinates of the matched feature points of the two frames, yielding the rotation matrix and translation vector of the robot in the spatial inertial coordinate system.
Further, the system comprises a rotation matrix and translation vector correction module that, between every two frames of images, computes the Euler angles of the current frame from the Euler angles (α_{k-1}, β_{k-1}, γ_{k-1}) of the previous image and the output of the inertial measurement unit IMU. In formula (1) the Euler angles of the previous frame are converted to the rotation matrix R_{k-1} for convenience of calculation; formula (2) recursively propagates the rotation matrix to the next instant from the IMU output and converts the robot's rotation matrix back into Euler-angle form:
R_{k-1} = DCMFromEuler(α_{k-1}, β_{k-1}, γ_{k-1}) (1)
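DCMFromEuler in formula (1), and its inverse EulerFromDCM used in formula (2), can be sketched as follows. The patent does not state the rotation order, so the Z-Y-X sequence below (heading α about z, pitch γ about y, roll β about x) is an assumption; the function names mirror the text rather than any particular library.

```python
import numpy as np

def dcm_from_euler(alpha, beta, gamma):
    """Compose a direction cosine matrix from Euler angles.
    Assumed convention (not stated in the text): alpha about z,
    gamma about y, beta about x, applied in Z-Y-X order."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cg, sg = np.cos(gamma), np.sin(gamma)
    cb, sb = np.cos(beta), np.sin(beta)
    Rz = np.array([[ca, -sa, 0], [sa, ca, 0], [0, 0, 1]])
    Ry = np.array([[cg, 0, sg], [0, 1, 0], [-sg, 0, cg]])
    Rx = np.array([[1, 0, 0], [0, cb, -sb], [0, sb, cb]])
    return Rz @ Ry @ Rx

def euler_from_dcm(R):
    # inverse of dcm_from_euler for the same Z-Y-X convention
    alpha = np.arctan2(R[1, 0], R[0, 0])                     # heading
    gamma = np.arcsin(np.clip(-R[2, 0], -1.0, 1.0))          # pitch
    beta = np.arctan2(R[2, 1], R[2, 2])                      # roll
    return alpha, beta, gamma
```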
The robot moves at low speed on a surface with a large radius of curvature, so its acceleration is approximately 0 and the output of the inertial measurement unit IMU three-axis accelerometer approximates the projection of gravitational acceleration onto the three axes. Using the components A_k(a_x, a_y, a_z) of gravitational acceleration along the three accelerometer axes, the robot roll angle β_k and pitch angle γ_k are computed and substituted for the roll and pitch angles calculated by the inertial measurement unit IMU in formula (2). The specific method is to first convert the rotation matrix of step 3) into Euler-angle form, then compute the robot roll angle β_k by formula (3) and the robot pitch angle γ_k by formula (4):
β_k = atan2(a_x, a_z) (3)
where a_x, a_y, a_z are the output values of the inertial measurement unit IMU three-axis accelerometer;
atan2(a_x, a_z) is the four-quadrant arctangent of the ratio of a_x to a_z;
arcsin is an arcsine function.
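The gravity-referenced attitude computation can be sketched as below. Roll follows formula (3) directly; formula (4) is not reproduced in the text, so the arcsin form used here (normalizing a_y by the measured gravity magnitude) is our assumption, chosen only to match the arcsin stated above.

```python
import numpy as np

def roll_pitch_from_accel(ax, ay, az):
    """Attitude from a quasi-static accelerometer reading.
    Roll follows formula (3): beta = atan2(ax, az). The pitch form is an
    assumed reconstruction of formula (4): gamma = arcsin(ay / |A|)."""
    beta = np.arctan2(ax, az)                       # roll, formula (3)
    g = np.sqrt(ax**2 + ay**2 + az**2)              # measured gravity magnitude
    gamma = np.arcsin(np.clip(ay / g, -1.0, 1.0))   # pitch (assumed form)
    return beta, gamma
```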
The roll angle of the robot calculated by the inertial measurement unit IMU in formula (2) is updated to β_k and the pitch angle to γ_k, and the updated Euler angles are then converted into a new rotation matrix R_k. R_k is regarded as a function of α_k, written R_k(α_k). In the spatial inertial coordinate system, the feature points of the previous frame are projected into the current frame through the rotation matrix R_k(α_k) and the translation vector, and subtracted from the corresponding feature points of the current frame to obtain the reprojection errors; summing the reprojection errors over all feature points, the heading angle α_k and the translation vector are corrected by the least squares method, as shown in formula (5):
where the two symbols are the three-dimensional coordinates of a feature point in the current frame and the previous frame, respectively;
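Formula (5) minimizes the summed squared reprojection errors over the heading angle α_k and the translation. With roll and pitch already fixed by the accelerometer, the problem reduces (assuming known 3-D correspondences) to a planar Procrustes fit, for which a closed form exists; the sketch below uses that closed form and is our illustration, not the patent's exact least-squares solver.

```python
import numpy as np

def rot_z(alpha):
    c, s = np.cos(alpha), np.sin(alpha)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def refine_heading_translation(prev_pts, cur_pts, R_fixed):
    """Closed-form least squares for the heading correction: roll/pitch are
    already fixed in R_fixed, so only a z-rotation and a translation remain."""
    q = (R_fixed @ prev_pts.T).T              # roll/pitch-corrected points
    qc = q - q.mean(axis=0)
    pc = cur_pts - cur_pts.mean(axis=0)
    # optimal yaw from the x-y cross/dot sums (planar Procrustes)
    num = np.sum(qc[:, 0] * pc[:, 1] - qc[:, 1] * pc[:, 0])
    den = np.sum(qc[:, 0] * pc[:, 0] + qc[:, 1] * pc[:, 1])
    alpha = np.arctan2(num, den)
    # translation is the centroid difference once the rotation is fixed
    t = cur_pts.mean(axis=0) - (rot_z(alpha) @ q.T).T.mean(axis=0)
    return alpha, t
```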
calculating the three-dimensional coordinate P_t(x_t, y_t, z_t) of the robot from the corrected Euler angles and translation vector of the current frame, then constructing a nonlinear equation in the three-axis angular-velocity error of the inertial measurement unit IMU and solving for that error b_k(b_x, b_y, b_z). The three-axis angular-velocity error b_k(b_x, b_y, b_z) is fed back to the inertial measurement unit IMU to correct its measurements.
The specific method is as follows: first, the rotation matrix from frame k-m to the current frame (frame k) is computed from the output of the inertial measurement unit IMU, as in formula (6):
where J_r(Ω_j Δt) is the right Jacobian of Ω_j Δt on the Lie group;
[Ω_j]_× is the skew-symmetric matrix of Ω_j;
next, the rotation matrix from frame k-m to the current frame (frame k) is computed from image feature matching, as in formula (7):
where R_CB is the rotation matrix from the RGB-D camera coordinate system to the inertial measurement unit IMU coordinate system;
by the orthogonality of rotation matrices, the two rotation matrices should be identical in the error-free case, and the product of one with the transpose of the other should be the identity matrix; the nonlinear equation for the three-axis angular-velocity error of the inertial measurement unit IMU is therefore constructed as shown in formula (8):
Further, the system also comprises an RGB-D depth image stitching module for stitching the RGB-D depth images with the rotation matrix and translation vector corrected in step 4). As shown in Fig. 3, Fig. 3-1 shows the existing depth point cloud surface 6 and Fig. 3-2 the depth point cloud stitching block 7, where the shaded part is the overlap produced by stitching. The new point cloud coordinates are added to the existing depth point cloud surface 6 and the overlapping stitched part 8 is removed, generating a new robot motion-surface point cloud as shown in Fig. 3-3; voxel grid filtering is then applied to the surface point cloud.
Further, the system also comprises a translation vector optimization module for optimizing the corrected translation vector, following the principle that the robot's attitude should conform to its motion surface, by constructing a nonlinear equation from the robot's motion constraints on the curved surface, yielding the translation vector of the robot at time k.
In detail: as the robot moves on the curved surface, position obtained by double integration of acceleration accumulates error readily, so optimizing the translation vector obtained in step 4) gives a more reliable position. At any moment the robot's velocity lies in the tangent plane of the motion surface at the corresponding position. In the present case a wheeled wall-climbing robot is used; the curvature of large equipment or structural parts is small, so the motion can be approximated as instantaneously planar. A unit vector perpendicular to the motion surface and pointing upward is defined in the robot coordinate system, and the robot's velocity along that vector is 0. The robot position P_t is preliminarily estimated from the Euler angles and translation vector of the current frame obtained in step 4); discrete points near P_t are selected on the motion surface Σ, the tangent plane at P_t is fitted from them, and the corresponding normal vector is found. Ideally the deviation between the two vectors is 0, so that deviation is chosen as the residual to be optimized, as shown in formula (9):
where R_k is the rotation matrix of the robot in the spatial inertial coordinate system at time k;
is the unit vector in the robot coordinate system perpendicular to the motion surface and pointing upward;
In the ideal case the robot's coordinates lie on the motion surface obtained in step 5); however, owing to the three-axis angular-velocity and acceleration errors output by the inertial measurement unit IMU, image feature-matching errors, and limited calculation precision, the three-dimensional point corresponding to the robot's position coordinates can deviate from the motion surface. As shown in Fig. 4, points a, b and c are the robot's positions on the point cloud surface in the ideal case, while a*, b* and c* are its positions when errors are present. Keeping the sum of squared distances from the robot's position coordinates to the motion surface minimal effectively reduces the positioning error; the translation vector optimization is computed as shown in formula (10):
where P_t denotes the calculated robot position, M_t denotes the point on the robot motion-surface point cloud nearest to the calculated position, and the remaining symbol in (10) is the robot translation vector to be optimized;
the nonlinear equation constructed for optimizing the translation vector is:
the optimal value of the translation vector between each pair of image frames is solved iteratively by the Gauss-Newton method.
Further, the system also comprises a similarity comparison module for performing closed-loop detection: the BRIEF descriptor of each feature point in the current frame is compared with the descriptors in the BoW bag-of-words model; when the similarity exceeds a set value a match is declared, and the robot's pose information is updated from the three-dimensional coordinates of the feature point corresponding to the matched descriptor in the BoW bag-of-words model; otherwise the new feature point is added to the BoW bag-of-words model and the process returns to step 3.
Claims (9)
1. The robot curved surface motion positioning method is characterized by comprising the following steps:
1) establishing a robot curved surface motion positioning system, and defining a coordinate system: the robot curved surface motion positioning system comprises a robot, wherein an RGB-D camera, an inertia measurement unit IMU, an adsorption device and a motion mechanism are arranged on the robot, the adsorption device is used for enabling the robot to be adsorbed on the surface of an object, and the motion mechanism is used for driving the robot to move;
the coordinate system comprises a space inertia coordinate system, an RGB-D camera coordinate system, an inertia measurement unit IMU coordinate system and a robot coordinate system;
registering a color image and a depth image of the RGB-D camera;
2) the robot moves; the RGB-D camera fixed at the front of the robot collects color images and depth images of the object surface in real time, and the inertial measurement unit IMU collects the three-axis angular velocity Ω_k(ω_x, ω_y, ω_z) and the three-axis acceleration A_k(a_x, a_y, a_z) of the robot moving on the curved surface;
3) extracting feature points from the color image acquired in step 2) with the ORB feature extraction method, computing descriptors with the BRIEF algorithm, completing feature matching by the nearest-neighbor search method, and computing the rotation matrix and translation vector of the robot in the spatial inertial coordinate system;
4) between every two frames of color images, computing the Euler angles of frame k from the Euler angles (α_{k-1}, β_{k-1}, γ_{k-1}) corresponding to the frame k-1 image and the output of the inertial measurement unit IMU; taking the output of the IMU three-axis accelerometer as the components A_k(a_x, a_y, a_z) of gravitational acceleration on the three axes and using said components to update the roll angle β_k and pitch angle γ_k, thereby correcting the Euler angles of frame k; using the updated roll angle β_k and pitch angle γ_k to correct the rotation matrix of step 3); then computing the reprojection error of the previous frame's feature points in frame k and correcting the heading angle α_k and the translation vector by the least squares method; computing the three-dimensional coordinate P_t of the robot from the corrected Euler angles and translation vector of frame k; updating the three-axis angular-velocity error b_k(b_x, b_y, b_z) of the inertial measurement unit IMU by the least squares method and returning the error b_k(b_x, b_y, b_z) to the inertial measurement unit IMU to correct its measurements;
5) stitching the RGB-D depth images with the rotation matrix and translation vector corrected in step 4) to generate a robot motion-surface point cloud, and applying voxel grid filtering to the surface point cloud;
6) following the principle that the robot's attitude should conform to its motion surface, constructing a nonlinear equation from the robot's motion constraints on the curved surface and optimizing the corrected translation vector to obtain the translation vector of the robot at frame k;
7) performing closed-loop detection: comparing the BRIEF descriptor of each feature point in frame k with the descriptors in the BoW bag-of-words model; when the similarity exceeds a set value, declaring a match and updating the pose information of the robot from the three-dimensional coordinates of the feature point corresponding to the matched descriptor in the BoW bag-of-words model; otherwise adding the feature points of frame k to the BoW bag-of-words model and returning to step 3).
2. The robot curved motion positioning method according to claim 1, characterized in that:
the Euler angles of frame k are computed in step 4) from the Euler angles (α_{k-1}, β_{k-1}, γ_{k-1}) corresponding to the frame k-1 image and the output of the inertial measurement unit IMU according to formula (1) and formula (2):
R_{k-1} = DCMFromEuler(α_{k-1}, β_{k-1}, γ_{k-1}) (1)
wherein DCMFromEuler converts Euler angles into a rotation matrix;
EulerFromDCM converts a rotation matrix into Euler angles;
Ω_k is the three-axis angular velocity, collected by the inertial measurement unit IMU, of the robot moving on the motion surface at frame k;
b_{k-1} is the three-axis angular-velocity error of the inertial measurement unit IMU at frame k-1;
R_{k-1} is the rotation matrix of the robot in the spatial inertial coordinate system at frame k-1;
Δt is the time difference between frame k and frame k-1.
3. The method as claimed in claim 2, wherein the heading angle α_k and the translation vector are corrected by constructing a nonlinear equation of the reprojection error of the frame k feature points in frame k-1, as shown in formula (3):
where the two symbols respectively represent the three-dimensional coordinates of the feature points in frame k and frame k-1;
α_k is the robot heading angle;
R_k(α_k) is the rotation matrix from frame k-1 to frame k.
4. The robot curved surface motion positioning method of claim 3, wherein the three-axis angular-velocity error b_k(b_x, b_y, b_z) of the inertial measurement unit IMU is calculated as follows:
first, the rotation matrix from frame k-m to frame k is calculated from the output of the inertial measurement unit IMU using formula (4):
wherein J_r(Ω_j Δt) is the right Jacobian of Ω_j Δt on the Lie group;
k is the current frame;
Ω_j is the three-axis angular velocity (ω_x, ω_y, ω_z), collected by the inertial measurement unit IMU, of the robot moving on the motion surface at frame j;
[Ω_j]_× is the skew-symmetric matrix of Ω_j;
R_{k-m} is the rotation matrix in the spatial inertial coordinate system corresponding to the robot at frame k-m;
then, the rotation matrix from frame k-m to frame k is calculated from image feature matching using formula (5):
where the first symbol is the rotation matrix of the robot in the spatial inertial coordinate system at frame j;
R_CB is the rotation matrix from the RGB-D camera coordinate system to the inertial measurement unit IMU coordinate system;
using formulas (4) and (5), a nonlinear equation in the three-axis angular-velocity error b_k of the inertial measurement unit IMU is constructed as shown in formula (6);
the optimal b_k is obtained by iteratively solving formula (6) with the Gauss-Newton method.
5. The robot curved motion positioning method of claim 4, wherein:
in step 6), the method for optimizing the corrected translation vector includes:
a unit vector perpendicular to the motion surface and pointing upward is defined in the robot coordinate system; the robot position P_t is preliminarily estimated from the Euler angles and translation vector of frame k obtained in step 4); discrete points near P_t are selected on the motion surface Σ, the tangent plane at P_t is fitted from them, and the corresponding normal vector is found; the residual r_1 is calculated as in formula (7):
wherein R_k is the rotation matrix of the robot in the spatial inertial coordinate system at frame k;
is the unit vector in the robot coordinate system perpendicular to the motion surface and pointing upward;
the distance r_2 from the robot's position coordinates to the motion surface is calculated as shown in formula (8):
wherein P_t denotes the calculated robot position, M_t denotes the point on the robot motion-surface point cloud of step 5) nearest to the calculated robot position, and the remaining symbol in (8) is the robot translation vector to be optimized;
the nonlinear equation constructed for optimizing the translation vector is:
the optimal value of the translation vector between every two frames of images of the robot is solved iteratively by the Gauss-Newton method.
6. The robot curved motion positioning method according to claim 1, characterized in that: the inertial measurement unit IMU comprises a gyroscope and a three-axis accelerometer.
7. The robot curved motion positioning method according to claim 1, characterized in that:
the curvature radius of the surface of the object is greater than or equal to 5 meters.
8. The robot curved motion positioning method according to claim 1, characterized in that:
utilizing the components A_k(a_x, a_y, a_z) of gravitational acceleration along the three accelerometer axes to update the roll angle β_k and pitch angle γ_k as follows: the rotation matrix of step 3) is first converted into Euler-angle form, the roll angle is updated to β_k and the pitch angle to γ_k, and the Euler angles are then converted into a new rotation matrix R_k.
9. A robot curved surface motion positioning system, which is characterized in that the robot curved surface motion positioning method of any one of claims 1 to 8 is adopted for robot curved surface motion positioning, and the motion positioning system comprises: the system comprises an RGB-D depth camera, an inertial measurement unit IMU, a wall-climbing robot, a coordinate system building module, a registration module, a rotation matrix and translation vector acquisition module, a rotation matrix and translation vector correction module, an RGB-D depth image splicing module, a translation vector optimization module and a similarity comparison module, wherein:
the RGB-D camera is used for acquiring a color image and a depth image of the surface of an object in real time;
the inertial measurement unit IMU is used for acquiring the three-axis angular velocity Ω_k(ω_x, ω_y, ω_z) and the three-axis acceleration A_k(a_x, a_y, a_z) of the robot moving on the curved surface;
The coordinate system construction module is used for constructing a space inertia coordinate system, an RGB-D camera coordinate system, an inertia measurement unit IMU coordinate system and a robot coordinate system;
the registration module is used for registering the color image and the depth image of the RGB-D camera;
the rotation matrix and translation vector acquisition module is used for extracting feature points from the color image acquired in step 2) with the ORB feature extraction method, computing descriptors with the BRIEF algorithm, completing feature matching by the nearest-neighbor search method, and computing the rotation matrix and translation vector of the robot in the spatial inertial coordinate system;
the rotation matrix and translation vector correction module is used for, between every two frames of color images, computing the Euler angles of frame k from the Euler angles (α_{k-1}, β_{k-1}, γ_{k-1}) corresponding to the frame k-1 image and the output of the inertial measurement unit IMU; taking the output of the IMU three-axis accelerometer as the components A_k(a_x, a_y, a_z) of gravitational acceleration on the three axes and using said components to update the roll angle β_k and pitch angle γ_k, thereby correcting the Euler angles of frame k; using the updated roll angle β_k and pitch angle γ_k to correct the rotation matrix of step 3); computing the reprojection error of the previous frame's feature points in frame k and correcting the heading angle α_k and the translation vector by the least squares method; computing the three-dimensional coordinate P_t of the robot from the corrected Euler angles and translation vector of frame k; and updating the three-axis angular-velocity error b_k(b_x, b_y, b_z) of the inertial measurement unit IMU by the least squares method and returning the error b_k(b_x, b_y, b_z) to the inertial measurement unit IMU to correct its measurements;
the RGB-D depth image stitching module stitches the RGB-D depth images with the rotation matrix and translation vector corrected in step 4) to generate a robot motion-surface point cloud, and applies voxel grid filtering to the surface point cloud;
the translation vector optimization module is used for, following the principle that the robot's attitude should conform to its motion surface, constructing a nonlinear equation from the robot's motion constraints on the curved surface and optimizing the corrected translation vector to obtain the translation vector of the robot at frame k;
and the similarity comparison module is used for performing closed-loop detection: the BRIEF descriptor of each feature point in frame k is compared with the descriptors in the BoW bag-of-words model; when the similarity exceeds a set value a match is declared and the pose information of the robot is updated from the three-dimensional coordinates of the feature point corresponding to the matched descriptor in the BoW bag-of-words model; otherwise the feature points of frame k are added to the BoW bag-of-words model.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811604262.3A CN109676604B (en) | 2018-12-26 | 2018-12-26 | Robot curved surface motion positioning method and motion positioning system thereof |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109676604A true CN109676604A (en) | 2019-04-26 |
CN109676604B CN109676604B (en) | 2020-09-22 |
Family
ID=66189823
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811604262.3A Active CN109676604B (en) | 2018-12-26 | 2018-12-26 | Robot curved surface motion positioning method and motion positioning system thereof |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109676604B (en) |
- 2018-12-26: Application CN201811604262.3A filed in China (CN); patent granted as CN109676604B, status Active
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120078510A1 (en) * | 2010-09-24 | 2012-03-29 | Honeywell International Inc. | Camera and inertial measurement unit integration with navigation data feedback for feature tracking |
CN102162738A (en) * | 2010-12-08 | 2011-08-24 | 中国科学院自动化研究所 | Calibration method of camera and inertial sensor integrated positioning and attitude determining system |
CN102538781A (en) * | 2011-12-14 | 2012-07-04 | 浙江大学 | Machine vision and inertial navigation fusion-based mobile robot motion attitude estimation method |
CN107680133A (en) * | 2017-09-15 | 2018-02-09 | 重庆邮电大学 | Mobile robot visual SLAM method based on an improved loop-closure detection algorithm |
CN107869989A (en) * | 2017-11-06 | 2018-04-03 | 东北大学 | Positioning method and system based on visual-inertial navigation information fusion |
CN108981687A (en) * | 2018-05-07 | 2018-12-11 | 清华大学 | Indoor positioning method fusing vision and inertia |
Cited By (29)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020220616A1 (en) * | 2019-04-27 | 2020-11-05 | 魔门塔(苏州)科技有限公司 | Vehicle pose correction method and apparatus |
CN110202575A (en) * | 2019-06-05 | 2019-09-06 | 武汉工程大学 | Robot target path accuracy compensation method for commercial measurement |
CN110497727B (en) * | 2019-08-28 | 2020-12-01 | 华侨大学 | Optimal processing space selection method for three-dimensional stone carving processing |
CN110497727A (en) * | 2019-08-28 | 2019-11-26 | 华侨大学 | Optimal processing space selection method for three-dimensional stone carving processing |
WO2021042374A1 (en) * | 2019-09-06 | 2021-03-11 | 罗伯特·博世有限公司 | Three-dimensional environment modeling method and device for industrial robot, computer storage medium and industrial robot operating platform |
CN112710308A (en) * | 2019-10-25 | 2021-04-27 | 阿里巴巴集团控股有限公司 | Positioning method, device and system of robot |
CN112710308B (en) * | 2019-10-25 | 2024-05-31 | 阿里巴巴集团控股有限公司 | Positioning method, device and system of robot |
CN111105881A (en) * | 2019-12-26 | 2020-05-05 | 昆山杜克大学 | Database system for 3D measurement of human phenotype |
CN111983620B (en) * | 2020-03-04 | 2024-02-20 | 武汉理工大学 | Target positioning method for underwater robot searching and exploring |
CN111983620A (en) * | 2020-03-04 | 2020-11-24 | 武汉理工大学 | Target positioning method for underwater robot searching and exploring |
CN112053383A (en) * | 2020-08-18 | 2020-12-08 | 东北大学 | Method and device for real-time positioning of robot |
CN112053383B (en) * | 2020-08-18 | 2024-04-26 | 东北大学 | Method and device for positioning robot in real time |
CN112284380A (en) * | 2020-09-23 | 2021-01-29 | 深圳市富临通实业股份有限公司 | Nonlinear estimation method and system based on fusion of optical flow and IMU (inertial measurement unit) |
CN112643207A (en) * | 2020-12-12 | 2021-04-13 | 南京理工大学 | Laser automatic derusting system and method based on computer vision |
CN112643207B (en) * | 2020-12-12 | 2022-09-30 | 南京理工大学 | Laser automatic derusting system and method based on computer vision |
CN114789439A (en) * | 2021-01-26 | 2022-07-26 | 深圳市普渡科技有限公司 | Slope positioning correction method and device, robot and readable storage medium |
CN114789439B (en) * | 2021-01-26 | 2024-03-19 | 深圳市普渡科技有限公司 | Slope positioning correction method, device, robot and readable storage medium |
CN113160075A (en) * | 2021-03-30 | 2021-07-23 | 武汉数字化设计与制造创新中心有限公司 | Processing method and system for Apriltag visual positioning, wall-climbing robot and storage medium |
CN113660395A (en) * | 2021-08-06 | 2021-11-16 | 海信视像科技股份有限公司 | Safety prompting method and equipment based on target identification |
CN114199235A (en) * | 2021-11-29 | 2022-03-18 | 珠海一微半导体股份有限公司 | Positioning system and positioning method based on sector depth camera |
WO2023093515A1 (en) * | 2021-11-29 | 2023-06-01 | 珠海一微半导体股份有限公司 | Positioning system and positioning method based on sector depth camera |
CN114199235B (en) * | 2021-11-29 | 2023-11-03 | 珠海一微半导体股份有限公司 | Positioning system and positioning method based on sector depth camera |
CN114623819A (en) * | 2022-03-21 | 2022-06-14 | 中国电信股份有限公司 | Robot positioning method, system, device, equipment and storage medium |
CN114794992B (en) * | 2022-06-07 | 2024-01-09 | 深圳甲壳虫智能有限公司 | Charging seat, recharging method of robot and sweeping robot |
CN114794992A (en) * | 2022-06-07 | 2022-07-29 | 深圳甲壳虫智能有限公司 | Charging seat, robot recharging method and sweeping robot |
CN115829833A (en) * | 2022-08-02 | 2023-03-21 | 爱芯元智半导体(上海)有限公司 | Image generation method and mobile device |
CN115829833B (en) * | 2022-08-02 | 2024-04-26 | 爱芯元智半导体(上海)有限公司 | Image generation method and mobile device |
CN118357932A (en) * | 2024-06-20 | 2024-07-19 | 苏州艾利特机器人有限公司 | Mechanical arm positioning method and device, electronic equipment and storage medium |
CN118357932B (en) * | 2024-06-20 | 2024-09-24 | 苏州艾利特机器人有限公司 | Mechanical arm positioning method and device, electronic equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN109676604B (en) | 2020-09-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109676604B (en) | Robot curved surface motion positioning method and motion positioning system thereof | |
CN111024066B (en) | Unmanned aerial vehicle vision-inertia fusion indoor positioning method | |
CN109307508B (en) | Panoramic inertial navigation SLAM method based on multiple key frames | |
CN111780754B (en) | Visual inertial odometer pose estimation method based on sparse direct method | |
WO2021180128A1 (en) | Rgbd sensor and imu sensor-based positioning method | |
CN110345944A (en) | Robot localization method fusing visual features and IMU information | |
CN109993113A (en) | Pose estimation method based on fusion of RGB-D and IMU information | |
CN108648215B (en) | SLAM motion blur pose tracking algorithm based on IMU | |
CN110044354A (en) | Binocular vision indoor positioning and mapping method and device | |
CN109509230A (en) | SLAM method for multi-lens combined panoramic cameras | |
CN110726406A (en) | Improved nonlinear optimization monocular inertial navigation SLAM method | |
CN109579838A (en) | Localization method and positioning system for an AGV | |
CN112734765A (en) | Mobile robot positioning method, system and medium based on example segmentation and multi-sensor fusion | |
CN110455301A (en) | Dynamic scene SLAM method based on an inertial measurement unit | |
CN107909614A (en) | Inspection robot localization method in GPS-failure environments | |
CN111932674A (en) | Optimization method of line laser vision inertial system | |
CN115574816B (en) | Bionic vision multi-source information intelligent perception unmanned platform | |
CN109648558A (en) | Robot non-plane motion localization method and its motion locating system | |
CN111862316A (en) | IMU tight coupling dense direct RGBD three-dimensional reconstruction method based on optimization | |
CN110675455A (en) | Self-calibration method and system for car body all-around camera based on natural scene | |
CN110929402A (en) | Probabilistic terrain estimation method based on uncertainty analysis | |
CN111504323A (en) | Unmanned aerial vehicle autonomous positioning method based on heterogeneous image matching and inertial navigation fusion | |
CN108253962A (en) | Localization method for new-energy driverless vehicles in low-light environments | |
CN114234967A (en) | Hexapod robot positioning method based on multi-sensor fusion | |
CN117367427A (en) | Multi-modal SLAM method fusing vision-assisted laser and IMU for indoor environments | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||