CN117893610B - Aviation assembly robot pose measurement system based on zoom monocular vision - Google Patents


Info

Publication number: CN117893610B (application CN202410290179.2A)
Authority: CN (China)
Prior art keywords: coordinate system, target, camera, matrix, algorithm
Legal status: Active (granted)
Other versions: CN117893610A (application publication)
Other languages: Chinese (zh)
Inventors: Yin Ming (殷鸣), Yang Bowen (杨博文), Xie Luofeng (谢罗峰), Cen Xuexiang (岑学祥), Sun Hao (孙浩)
Assignee (current and original): Sichuan University
Application filed by Sichuan University; priority application: CN202410290179.2A
Application publication: CN117893610A; grant publication: CN117893610B

Classifications

    • Y: General tagging of new technological developments; general tagging of cross-sectional technologies spanning over several sections of the IPC; technical subjects covered by former USPC cross-reference art collections [XRACs] and digests
    • Y02: Technologies or applications for mitigation or adaptation against climate change
    • Y02P: Climate change mitigation technologies in the production or processing of goods
    • Y02P 90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30: Computing systems specially adapted for manufacturing

Landscapes

  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention belongs to the technical field of pose measurement and discloses an aviation assembly robot pose measurement system based on zoom monocular vision, comprising a three-dimensional target, a laser tracker, a zoom industrial camera, a feature point extraction module and a pose solving module.

Description

Aviation assembly robot pose measurement system based on zoom monocular vision
Technical Field
The invention relates to the field of spatial pose measurement in robotic hole-making, and in particular to an aviation assembly robot pose measurement system based on zoom monocular vision.
Background
In the aerospace field, aircraft structures make extensive use of thin-walled, low-stiffness laminated components whose surfaces carry large numbers of connection holes. These holes are mainly drilled by hand, with a small fraction drilled by machine tools, robots and similar equipment. Compared with traditional manual and machine-tool drilling, robotic hole-making offers better machining accessibility, environmental adaptability and manufacturing flexibility, and enables in-situ machining of large components; however, robot accuracy is relatively low, and robotic hole-position accuracy, normal-adjustment accuracy and the like cannot yet meet the machining requirements of large integral structures.
Disclosure of Invention
The invention aims to provide an aviation assembly robot pose measurement system based on zoom monocular vision that measures the robot pose externally and feeds it back in real time, so that the pose of the aviation assembly robot can be corrected promptly, solving the problem of the robot's insufficient intrinsic hole-making accuracy.
In order to achieve the above purpose, the invention adopts the following technical scheme:
An aviation assembly robot pose measurement system based on zoom monocular vision, comprising:
a three-dimensional target mounted on the execution end of the aviation assembly robot under test, comprising a base and a plurality of non-coplanar reflective points mounted on the base; the base is arch-shaped as a whole and comprises a first mounting surface, a second mounting surface and a third mounting surface, the first and second mounting surfaces lying coplanar on the two sides of the arch and the third mounting surface at the top of the arch; the first and second mounting surfaces each carry the same number of reflective points evenly spaced along the length direction, and a reflective point is mounted at each of the four corners of the third mounting surface; a magnetic base is provided in the middle of the third mounting surface, and a target ball is mounted on it;
a laser tracker for tracking the target ball and acquiring the position information of the three-dimensional target; the 3D feature point coordinates in the target coordinate system are obtained from this position information and the dimensional information of the three-dimensional target;
a zoom industrial camera located in front of the execution end of the aviation assembly robot, for acquiring images of the three-dimensional target at every pose of the execution end; the field of view is changed by adjusting the camera's focal length to handle target image acquisition at different distances;
a feature point extraction module that acquires and preprocesses the target image, obtains the contour information of the target feature points with the Canny operator, and computes the centre point coordinates of each feature point contour from that contour information, i.e. the 2D feature point coordinates in the camera coordinate system;
a pose solving module that acquires the 3D feature point coordinates in the target coordinate system and the 2D feature point coordinates in the camera coordinate system and computes the pose information of the target relative to the camera;
specifically: first, given the 3D feature point coordinates, the 2D feature point coordinates and an initial translation matrix under the current pose, the SoftPOSIT algorithm matches the feature point correspondence to obtain the 3D-2D feature point correspondence matrix; then, from the 3D-2D feature point correspondence, the pose information of the three-dimensional target is obtained with the EPNP algorithm and the orthogonal iteration algorithm.
Further, preprocessing the target image in the feature point extraction module means first applying median filtering to the image and then binarizing it.
Further, in the pose solving module, given the 3D feature point coordinates, the 2D feature point coordinates and the initial translation matrix under the current pose, the SoftPOSIT algorithm matches the feature point correspondence to obtain the 3D-2D feature point correspondence matrix, as follows:
N initial iteration rotation matrices are set, with iterations k = 1, …, N; given the 3D and 2D feature point coordinates under the current pose and the initial translation matrix, the SoftPOSIT algorithm matches the feature point correspondence; whether the match is correct is judged: if not, k = k + 1 and the SoftPOSIT algorithm is restarted to solve the feature point correspondence; if correct, the 3D-2D feature point correspondence matrix is output.
Further, the initial translation matrix acquisition method is as follows:
The laser tracker acquires the accurate position $P^l$ of the three-dimensional target relative to the laser tracker coordinate system; the rotation matrix $R_{lc}$ and translation matrix $t_{lc}$ of the laser tracker coordinate system relative to the camera coordinate system are obtained by offline calibration; the target position in the camera coordinate system, $P^c = R_{lc} P^l + t_{lc}$, is taken as the initial translation matrix $t_0$.
Further, obtaining the pose information of the three-dimensional target with the EPNP algorithm and the orthogonal iteration algorithm in the pose solving module means taking the pose information obtained by the EPNP algorithm as the initial value of the orthogonal iteration algorithm, which then iteratively computes an accurate rotation matrix and thereby the pose information of the three-dimensional target.
Further, obtaining the pose information with the EPNP algorithm comprises the following steps:
First, the virtual control point coordinates $c_j^w\ (j = 1, 2, 3, 4)$ in the target coordinate system are selected by principal component analysis, and all 3D feature points of the target coordinate system are represented by the 4 control points: $p_i^w = \sum_{j=1}^{4} \alpha_{ij} c_j^w$, where $p_i^w$ is the 3D feature point coordinate in the target coordinate system, $\alpha_{ij}$ is the control point weight, and $\sum_{j=1}^{4} \alpha_{ij} = 1$.
Combining the corresponding-point transformation between the target and camera coordinate systems, $p_i^c = R\,p_i^w + t$, the 3D feature points in the camera coordinate system are expressed linearly by the control point coordinates in the camera coordinate system: $p_i^c = \sum_{j=1}^{4} \alpha_{ij} c_j^c$, where $R$ is the rotation matrix from the target coordinate system to the camera coordinate system and $t$ is the translation matrix; $p_i^c$ is the 3D feature point coordinate in the camera coordinate system, so once the control point coordinates $c_j^c$ in the camera coordinate system are obtained, the 3D feature point coordinates in the camera coordinate system follow, and from them the initial pose of the cooperative target.
Further, the virtual control point coordinates $c_j^c$ in the camera coordinate system are solved as follows:
Define $K$ as the camera intrinsic parameter matrix and write $c_j^c = [x_j^c, y_j^c, z_j^c]^T$; the camera imaging model is $s_i [u_i, v_i, 1]^T = K \sum_{j=1}^{4} \alpha_{ij} c_j^c$, where $[u_i, v_i]^T$ is the 2D feature point, $s_i$ is a projective scale factor, $\alpha_{ij}$ is the control point weight, and $\sum_{j=1}^{4} \alpha_{ij} = 1$.
Eliminating $s_i$, each feature point yields two linear equations: $\sum_{j=1}^{4} \alpha_{ij}\left(f_u x_j^c + (u_c - u_i) z_j^c\right) = 0$ and $\sum_{j=1}^{4} \alpha_{ij}\left(f_v y_j^c + (v_c - v_i) z_j^c\right) = 0$, where $f_u, f_v$ are the focal lengths and $(u_c, v_c)$ is the principal point of $K$.
With n corresponding point pairs, a linear system of 2n equations is obtained, written $Mx = 0$, where $M$ is a $2n \times 12$ matrix and $x$ is the $12 \times 1$ virtual control point coordinate vector; by the invariance of distances between the three-dimensional virtual control points under the Euclidean transformation between the target and camera coordinate systems, the 3D control point coordinates in the camera coordinate system are obtained; $p_i^c = \sum_{j=1}^{4} \alpha_{ij} c_j^c$ then gives the 3D feature point coordinates in the camera coordinate system, from which the initial pose of the target is solved.
Further, the orthogonal iteration algorithm solves a globally convergent rotation matrix by minimizing the collinearity error in the target's three-dimensional space, comprising the following steps:
First, the collinearity error of the orthographic projection of the target coordinate system is defined as $e_i = (I - \hat{V}_i)(R\,p_i^w + t)$, where $R\,p_i^w + t$ is the feature point coordinate in the camera coordinate system, $\hat{V}_i = \frac{v_i v_i^T}{v_i^T v_i}$ is the line-of-sight projection matrix of the image feature point $v_i$, and $e_i$ is the collinearity error in the target coordinate system; the rotation matrix R and translation matrix t that minimize the sum of squared collinearity errors are solved iteratively.
The objective function is: $E(R, t) = \sum_{i=1}^{n} \left\| (I - \hat{V}_i)(R\,p_i^w + t) \right\|^2$.
When the rotation matrix R is fixed, the translation matrix is $t(R) = \frac{1}{n}\left(I - \frac{1}{n}\sum_{j=1}^{n} \hat{V}_j\right)^{-1} \sum_{i=1}^{n} (\hat{V}_i - I) R\,p_i^w$.
Substituting back gives the collinearity error as a function of the rotation matrix alone: $E(R) = \sum_{i=1}^{n} \left\| (I - \hat{V}_i)\left(R\,p_i^w + t(R)\right) \right\|^2$.
Computing $R^* = \arg\min_R E(R)$,
the optimal pose information $(R^*, t(R^*))$ is obtained by iterative solution.
The beneficial effects of the invention are as follows: first, a high-resolution zoom industrial camera collects images of the cooperative target, and after image processing such as binarization and filtering, the centre coordinates of the two-dimensional feature points of the target image are extracted; the SoftPOSIT algorithm then determines the 3D-2D feature point correspondence, and the matched 3D-2D feature point pair coordinates are passed to the pose solving algorithm to compute the pose of the cooperative target relative to the camera.
Drawings
Fig. 1 is a schematic diagram of a system structure according to the present invention.
Fig. 2 is a schematic diagram of a three-dimensional target structure according to the present invention.
Fig. 3 is a schematic diagram of the pose solving process of the present invention.
Fig. 4 is a schematic diagram of a 3D-2D feature point matching process according to the present invention.
Fig. 5 is a schematic diagram of initial translation matrix acquisition according to the present invention.
Fig. 6 is a schematic diagram of a camera imaging model according to the present invention.
Reference numerals: 100. aviation assembly robot; 101. execution end; 200. stereoscopic target; 210. base; 211. first mounting surface; 212. second mounting surface; 213. third mounting surface; 220. reflective point; 300. camera; 400. laser tracker; 401. target ball.
Detailed Description
As shown in fig. 1, the aviation assembly robot pose measurement system based on zoom monocular vision provided in this embodiment comprises a stereoscopic target 200, a zoom industrial camera 300, a laser tracker 400, a feature point coordinate extraction module and a pose solving module.
The stereoscopic target 200 is mounted on the execution end 101 of the aviation assembly robot 100 under test. As shown in fig. 2, the stereoscopic target 200 comprises a base 210 and a plurality of reflective points 220 mounted on it. The base 210 is arch-shaped as a whole and comprises a first mounting surface 211, a second mounting surface 212 and a third mounting surface 213; the first mounting surface 211 and the second mounting surface 212 lie in the same horizontal plane on the two sides of the arch, and the third mounting surface 213 sits between them at the top of the arch. Four reflective points 220 are evenly mounted along the length direction of each of the first mounting surface 211 and the second mounting surface 212, and one reflective point 220 is mounted at each of the four corners of the third mounting surface 213; the 12 reflective points 220 of this embodiment are non-coplanar. A magnetic base in the middle of the third mounting surface 213 carries a target ball 401 that works with the laser tracker 400: the target ball 401 moves with the pose of the execution end 101 of the aviation assembly robot 100, and the laser tracker 400 acquires its 3D coordinates, i.e. the position information of the stereoscopic target 200; the 3D feature point coordinates in the target coordinate system are then obtained from this position information and the dimensional information of the stereoscopic target 200.
When measuring the robot pose, the stereoscopic target 200 is mounted on the execution end 101 of the aviation assembly robot 100 under test, with the middle of the bottom of the third mounting surface 213 connected to the execution end 101. The laser tracker 400 stands on the ground where it can track the target ball 401 on the third mounting surface 213; it acquires the position of the laser target ball 401 on the stereoscopic target 200, which is the position of the target itself, derives the 3D feature point coordinates in the target coordinate system from this position and the dimensional information of the stereoscopic target 200, and sends them to the pose solving module. The zoom industrial camera 300 is mounted on a camera bracket in front of the execution end 101, acquires images of the stereoscopic target 200 at every pose, and sends them to the feature point coordinate extraction module.
The feature point coordinate extraction module and the pose solving module are both implemented as software programs running on a computer.
The feature point coordinate extraction module acquires the target image captured by the zoom industrial camera 300 and extracts the 2D coordinate information of the feature point centres; specifically:
First, the target image is preprocessed to reduce its information redundancy and highlight the geometric features of the target feature points. In this embodiment, preprocessing means applying median filtering and then binarization: median filtering improves the imaging quality, and binarization highlights the feature point boundary contours. To counter interference spots in the image, low-brightness interference spots are filtered out by raising the binarization threshold, while the few high-brightness noise sources of small spot area are deleted by small-area identification and elimination, ensuring accurate extraction of the feature point image.
Then the feature point regions are accurately segmented and their contours extracted with the Canny operator to obtain the outline information of the target feature points; the centre pixel coordinates of each feature point contour are computed from the contour information and taken as the 2D feature point coordinates in the camera coordinate system.
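The extraction pipeline just described can be sketched in a few lines of OpenCV; the kernel size, binarization threshold and minimum contour area below are illustrative assumptions rather than values given by this embodiment.

```python
import cv2
import numpy as np

def extract_feature_centers(image_gray, bin_thresh=200, min_area=30.0):
    """Median filter -> binarize -> Canny contours -> contour centres.

    bin_thresh and min_area are assumed values: a raised threshold
    suppresses dim interference spots, and min_area removes the few
    bright small-area noise blobs, as described above.
    """
    filtered = cv2.medianBlur(image_gray, 5)          # improve imaging quality
    _, binary = cv2.threshold(filtered, bin_thresh, 255, cv2.THRESH_BINARY)
    edges = cv2.Canny(binary, 50, 150)                # feature point contours
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    centers = []
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] < min_area:                       # small-area elimination
            continue
        centers.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return np.array(centers, dtype=np.float64)        # 2D feature point coords
```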
After the 2D feature point coordinates are obtained, the pose solving module uses the 3D feature point coordinates in the target coordinate system and the 2D feature point coordinates in the camera coordinate system to solve the pose of the target relative to the camera. As shown in fig. 3, the pose solving process of this embodiment comprises solving the feature point correspondence and then solving the target pose from the corresponding 3D-2D feature point coordinate information.
Feature-point-based pose solving algorithms, whether iterative or non-iterative, require n pairs of 3D-2D feature point coordinates whose matching relationship is known, so the correspondence between the 3D and 2D feature point coordinates must be determined before pose solving. This embodiment solves the feature point correspondence with the SoftPOSIT algorithm and addresses the matching failures caused by SoftPOSIT divergence by providing a set of initial iteration poses.
The SoftPOSIT algorithm takes the 3D and 2D feature point coordinates as input and outputs the correspondence between them. Because the algorithm can diverge and fail to match under some initial conditions, this embodiment derives an initial translation matrix from the position information acquired by the laser tracker 400 and establishes a set of initial pose matrices as initial conditions to improve convergence; whenever an iteration fails, the algorithm automatically supplies a new initial value for the next attempt, so the correct point matching relation is obtained under different target poses. The calculation flow of the single-pose SoftPOSIT algorithm is shown in fig. 4: N initial iteration rotation matrices are preset, with iterations k = 1, …, N; given the 3D and 2D feature point coordinates under the current pose and the initial translation matrix, the SoftPOSIT algorithm matches the feature point correspondence and the result is checked; if the match is incorrect, the SoftPOSIT algorithm is restarted with the next initial value, and if correct, the 3D-2D feature point correspondence matrix is output.
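The restart logic of fig. 4 amounts to a small wrapper around the matcher. In the sketch below, `soft_posit` stands in for a SoftPOSIT implementation (the function and its return convention are hypothetical, since SoftPOSIT ships with neither OpenCV nor numpy) and `match_is_correct` stands in for the embodiment's match-validity test.

```python
import numpy as np

def match_with_restarts(points_3d, points_2d, t_init, rotation_set,
                        soft_posit, match_is_correct):
    """Run SoftPOSIT from N preset initial rotations (k = 1..N) until a
    correct 3D-2D correspondence matrix is found, as in fig. 4.

    Assumed interface: soft_posit(points_3d, points_2d, R0, t0) returns
    (assign_matrix, pose); match_is_correct(assign_matrix) returns bool.
    """
    for k, r0 in enumerate(rotation_set, start=1):
        assign_matrix, pose = soft_posit(points_3d, points_2d, r0, t_init)
        if match_is_correct(assign_matrix):
            return assign_matrix, pose, k   # correspondence found at trial k
    raise RuntimeError("SoftPOSIT did not converge for any initial rotation")
```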
The initial iteration pose set of this embodiment consists of 729 rotation matrices, corresponding to azimuth, pitch and roll angles each ranging over [-π/2, π/2] at intervals of π/8 (9 values per angle, 9³ = 729). As shown in fig. 5, the initial translation matrix is obtained as follows: by measuring the position of the laser target ball on the three-dimensional target, the laser tracker acquires the accurate position $P^l$ of the target relative to the laser tracker coordinate system; the rotation matrix $R_{lc}$ and translation matrix $t_{lc}$ of the laser tracker coordinate system relative to the camera coordinate system are obtained by offline calibration; the target position in the camera coordinate system, $P^c = R_{lc} P^l + t_{lc}$, is then used as the initial translation matrix $t_0$.
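A sketch of building the 729-element initial rotation set and of the tracker-derived initial translation follows; the symbols $R_{lc}$, $t_{lc}$ and $P^l$ match the reconstruction above, while the ZYX Euler convention used to enumerate azimuth, pitch and roll is an assumption.

```python
import numpy as np
from itertools import product
from scipy.spatial.transform import Rotation

# Azimuth, pitch and roll each sampled over [-pi/2, pi/2] at pi/8 steps:
# 9 values per angle, hence 9**3 = 729 initial rotation matrices.
angles = np.arange(-np.pi / 2, np.pi / 2 + 1e-9, np.pi / 8)
rotation_set = [Rotation.from_euler("ZYX", [az, el, ro]).as_matrix()
                for az, el, ro in product(angles, angles, angles)]

def initial_translation(R_lc, t_lc, P_l):
    """t0 = R_lc @ P_l + t_lc: the target position measured by the laser
    tracker (P_l), mapped into the camera frame by the offline
    tracker-to-camera calibration (R_lc, t_lc)."""
    return R_lc @ P_l + t_lc
```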
After the 3D-2D feature point correspondence has been matched, pose solving proceeds with the proposed method: the EPNP algorithm first provides an initial estimate of the target pose, and this estimate is used as the initial value of the orthogonal iteration algorithm, whose iterative solution yields the final target pose information.
The goal of pose solving is to obtain the pose of the target relative to the camera, i.e. the three angles of azimuth, pitch and roll. The target coordinate system, camera coordinate system and image coordinate system of the camera imaging model are shown in fig. 6. The feature point coordinates in the target coordinate system are $p_i^w$, those in the camera coordinate system are $p_i^c$, and corresponding feature points of the two coordinate systems are related by $p_i^c = R\,p_i^w + t$, where $R$ is the rotation matrix and $t$ is the translation matrix.
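Once $R$ is known, the three angles follow from an Euler decomposition; a minimal sketch under an assumed ZYX (azimuth-pitch-roll) convention, which the patent does not itself specify, is:

```python
import numpy as np

def rotation_to_angles(R):
    """Decompose R = Rz(azimuth) @ Ry(pitch) @ Rx(roll), assuming a
    ZYX Euler convention (gimbal-lock handling omitted for brevity)."""
    pitch = np.arcsin(-R[2, 0])
    azimuth = np.arctan2(R[1, 0], R[0, 0])
    roll = np.arctan2(R[2, 1], R[2, 2])
    return azimuth, pitch, roll
```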
The pose solving module obtains the pose of the three-dimensional target by combining the EPNP algorithm with the orthogonal iteration algorithm. EPNP, a non-iterative algorithm, quickly solves a preliminary estimated pose; the orthogonal iteration algorithm is an iterative pose solver, characterized by the need for iterative computation but with high accuracy. Taking the pose information obtained by EPNP as the initial rotation and translation matrices for the orthogonal iteration lets the iteration converge quickly to the correct pose and improves the robustness and accuracy of the result.
The EPNP algorithm estimates the relative pose of the camera and target coordinate systems from n feature point pairs of known correspondence, each with 3D and 2D coordinate information. It represents each of the n 3D feature points in the target coordinate system as a weighted sum of four virtual control points and, by solving for the virtual control point coordinates in the camera coordinate system, converts the problem into a 3D-to-3D point set pose estimation problem; specifically:
First, the virtual control point coordinates $c_j^w\ (j = 1, 2, 3, 4)$ in the target coordinate system are selected by principal component analysis, and all 3D feature points of the target coordinate system are represented by the 4 control points: $p_i^w = \sum_{j=1}^{4} \alpha_{ij} c_j^w$, where $p_i^w$ is the 3D feature point coordinate in the target coordinate system, $\alpha_{ij}$ is the control point weight, and $\sum_{j=1}^{4} \alpha_{ij} = 1$.
Combining the corresponding-point transformation between the target and camera coordinate systems, $p_i^c = R\,p_i^w + t$, the 3D feature points in the camera coordinate system are expressed linearly by the control point coordinates in the camera coordinate system: $p_i^c = \sum_{j=1}^{4} \alpha_{ij} c_j^c$, where $R$ is the rotation matrix from the target coordinate system to the camera coordinate system and $t$ is the translation matrix; $p_i^c$ denotes the feature point coordinates in the camera coordinate system, so only the control point coordinates $c_j^c$ in the camera coordinate system are needed to obtain the 3D feature point coordinates.
The virtual control point coordinates $c_j^c$ in the camera coordinate system are solved as follows:
Define $K$ as the camera intrinsic parameter matrix and write $c_j^c = [x_j^c, y_j^c, z_j^c]^T$; the camera imaging model is $s_i [u_i, v_i, 1]^T = K \sum_{j=1}^{4} \alpha_{ij} c_j^c$, where $[u_i, v_i]^T$ is the 2D feature point and $s_i$ is a projective scale factor.
For each feature point, eliminating $s_i$ from this coordinate conversion relation yields two equations; with n corresponding point pairs, a linear system of 2n equations is obtained, written $Mx = 0$, where $M$ is a $2n \times 12$ matrix and $x$ is the $12 \times 1$ virtual control point coordinate vector. The control point coordinates in the camera coordinate system follow from the invariance of distances between the three-dimensional virtual control points under the Euclidean transformation between the target and camera coordinate systems; once the three-dimensional feature point coordinates in the camera coordinate system are obtained, the pose information of the target follows from the 3D-to-3D point set pose estimation method.
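In practice this EPNP step need not be re-implemented: OpenCV's solvePnP exposes the same control-point formulation, so a minimal sketch of obtaining the initial pose is shown below, with the intrinsic matrix K assumed to come from an offline camera calibration.

```python
import cv2
import numpy as np

def epnp_initial_pose(points_3d, points_2d, K, dist_coeffs=None):
    """Initial pose (target -> camera) via the EPNP algorithm.

    points_3d: n x 3 feature points in the target coordinate system.
    points_2d: n x 2 matched feature point centres in the image.
    K: 3 x 3 camera intrinsic parameter matrix.
    """
    ok, rvec, tvec = cv2.solvePnP(
        points_3d.astype(np.float64), points_2d.astype(np.float64),
        K, dist_coeffs, flags=cv2.SOLVEPNP_EPNP)
    if not ok:
        raise RuntimeError("EPNP pose estimation failed")
    R, _ = cv2.Rodrigues(rvec)       # rotation vector -> rotation matrix
    return R, tvec.ravel()           # initial value for orthogonal iteration
```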
The orthogonal iteration algorithm directly solves a globally convergent rotation matrix by minimizing the collinearity error in the target's three-dimensional space; specifically:
First, the collinearity error (projection error) of the orthographic projection of the target coordinate system is defined: $e_i = (I - \hat{V}_i)(R\,p_i^w + t)$, where $\hat{V}_i = \frac{v_i v_i^T}{v_i^T v_i}$ is the line-of-sight projection matrix of the image-space feature point $v_i$, and $e_i$ is the collinearity error in the target coordinate system; the rotation matrix R and translation matrix t that minimize the sum of squared collinearity errors are solved iteratively.
The objective function is: $E(R, t) = \sum_{i=1}^{n} \left\| (I - \hat{V}_i)(R\,p_i^w + t) \right\|^2$.
When the rotation matrix R is fixed, the translation matrix is $t(R) = \frac{1}{n}\left(I - \frac{1}{n}\sum_{j=1}^{n} \hat{V}_j\right)^{-1} \sum_{i=1}^{n} (\hat{V}_i - I) R\,p_i^w$.
Substituting back gives the collinearity error as a function of the rotation matrix alone: $E(R) = \sum_{i=1}^{n} \left\| (I - \hat{V}_i)\left(R\,p_i^w + t(R)\right) \right\|^2$.
By computing $R^* = \arg\min_R E(R)$,
the optimal pose information $(R^*, t(R^*))$ is obtained by iterative solution. The initial iteration matrix of this algorithm is usually constructed by minimizing the image reprojection error under the camera's weak-perspective projection assumption, but an initial matrix constructed that way can differ considerably from the true pose, giving low iteration efficiency and insufficient accuracy. Here the EPNP algorithm solves the initial pose of the target, and because that initial pose is close to the true pose, the orthogonal iteration algorithm converges quickly to the accurate pose.
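A compact numpy sketch of this orthogonal iteration (the formulas above correspond to the Lu-Hager-Mjolsness scheme) is given below; it expects normalized image points (intrinsics already removed with $K^{-1}$), is seeded with the EPNP rotation, and uses an assumed convergence tolerance. Only the initial rotation is needed, since the optimal translation follows from R in closed form.

```python
import numpy as np

def orthogonal_iteration(points_3d, points_2d_norm, R0, n_iter=100, tol=1e-12):
    """Minimize E(R, t) = sum_i ||(I - V_i)(R p_i + t)||^2.

    points_3d: n x 3 target-frame points; points_2d_norm: n x 2
    normalized image points; R0: EPNP initial rotation.
    """
    n = len(points_3d)
    v = np.hstack([points_2d_norm, np.ones((n, 1))])          # lines of sight
    V = np.stack([np.outer(vi, vi) / (vi @ vi) for vi in v])  # projections V_i
    F = np.linalg.inv(np.eye(3) - V.mean(axis=0)) / n         # factor of t(R)

    def t_of(R):  # closed-form optimal translation for a fixed R
        s = np.sum((V - np.eye(3)) @ (points_3d @ R.T)[..., None], axis=0)
        return (F @ s).ravel()

    R, err_prev = R0, np.inf
    for _ in range(n_iter):
        t = t_of(R)
        q = (V @ (points_3d @ R.T + t)[..., None]).squeeze(-1)  # V_i(R p_i + t)
        # 3D-to-3D (absolute orientation) step: best R aligning p_i with q_i.
        pc, qc = points_3d - points_3d.mean(0), q - q.mean(0)
        U, _, Vt = np.linalg.svd(qc.T @ pc)
        R = U @ np.diag([1.0, 1.0, np.linalg.det(U @ Vt)]) @ Vt
        t = t_of(R)
        err = np.sum(((np.eye(3) - V) @ (points_3d @ R.T + t)[..., None]) ** 2)
        if abs(err_prev - err) < tol:                         # converged
            break
        err_prev = err
    return R, t
```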
The foregoing is merely a preferred embodiment of the present invention; the scope of the invention is not limited thereto, and any modification or substitution based on the technical scheme and inventive concept provided by the invention shall fall within its scope of protection.

Claims (5)

1. An aviation assembly robot pose measurement system based on zoom monocular vision, comprising:
a three-dimensional target mounted on the execution end of the aviation assembly robot under test, comprising a base and a plurality of non-coplanar reflective points mounted on the base; the base is arch-shaped as a whole and comprises a first mounting surface, a second mounting surface and a third mounting surface, the first and second mounting surfaces lying coplanar on the two sides of the arch and the third mounting surface at the top of the arch; the first and second mounting surfaces each carry the same number of reflective points evenly spaced along the length direction, and a reflective point is mounted at each of the four corners of the third mounting surface; a magnetic base is provided in the middle of the third mounting surface, and a target ball is mounted on it;
a laser tracker for tracking the target ball and acquiring the position information of the three-dimensional target; the 3D feature point coordinates in the target coordinate system are obtained from this position information and the dimensional information of the three-dimensional target;
a zoom industrial camera located in front of the execution end of the aviation assembly robot, for acquiring images of the three-dimensional target at every pose of the execution end; the field of view is changed by adjusting the camera's focal length to handle target image acquisition at different distances;
a feature point extraction module that acquires and preprocesses the target image, obtains the contour information of the target feature points with the Canny operator, and computes the centre point coordinates of each feature point contour from that contour information, i.e. the 2D feature point coordinates in the camera coordinate system;
a pose solving module that acquires the 3D feature point coordinates in the target coordinate system and the 2D feature point coordinates in the camera coordinate system and computes the pose information of the target relative to the camera;
specifically: first, given the 3D feature point coordinates, the 2D feature point coordinates and an initial translation matrix under the current pose, the SoftPOSIT algorithm matches the feature point correspondence to obtain the 3D-2D feature point correspondence matrix;
N initial iteration rotation matrices are set, with iterations k = 1, …, N; given the 3D and 2D feature point coordinates under the current pose and the initial translation matrix, the SoftPOSIT algorithm matches the feature point correspondence; whether the match is correct is judged: if not, k = k + 1 and the SoftPOSIT algorithm is restarted to solve the feature point correspondence; if correct, the 3D-2D feature point correspondence matrix is output;
then, from the 3D-2D feature point correspondence, the pose information of the three-dimensional target is obtained with the EPNP algorithm and the orthogonal iteration algorithm;
obtaining the pose information of the three-dimensional target with the EPNP algorithm combined with the orthogonal iteration algorithm in the pose solving module means taking the pose information obtained by the EPNP algorithm as the initial value of the orthogonal iteration algorithm, which then iteratively computes an accurate rotation matrix and thereby the pose information of the three-dimensional target;
obtaining the pose information with the EPNP algorithm comprises the following steps:
first, the virtual control point coordinates $c_j^w\ (j = 1, 2, 3, 4)$ in the target coordinate system are selected by principal component analysis, and all 3D feature points of the target coordinate system are represented by the 4 control points: $p_i^w = \sum_{j=1}^{4} \alpha_{ij} c_j^w$, where $p_i^w$ is the 3D feature point coordinate in the target coordinate system, $\alpha_{ij}$ is the control point weight, and $\sum_{j=1}^{4} \alpha_{ij} = 1$;
combining the corresponding-point transformation between the target and camera coordinate systems, $p_i^c = R\,p_i^w + t$, the 3D feature points in the camera coordinate system are expressed linearly by the control point coordinates in the camera coordinate system: $p_i^c = \sum_{j=1}^{4} \alpha_{ij} c_j^c$, where $R$ is the rotation matrix from the target coordinate system to the camera coordinate system and $t$ is the translation matrix; $p_i^c$ is the 3D feature point coordinate in the camera coordinate system, so once the control point coordinates $c_j^c$ in the camera coordinate system are obtained, the 3D feature point coordinates in the camera coordinate system follow, and from them the initial pose of the cooperative target.
2. The zoom monocular vision-based aviation assembly robot pose measurement system according to claim 1, wherein preprocessing the target image in the feature point extraction module means first applying median filtering to the image and then binarizing it.
3. The zoom monocular vision-based aviation assembly robot pose measurement system according to claim 1, wherein the initial translation matrix is obtained as follows:
the laser tracker acquires the accurate position $P^l$ of the three-dimensional target relative to the laser tracker coordinate system; the rotation matrix $R_{lc}$ and translation matrix $t_{lc}$ of the laser tracker coordinate system relative to the camera coordinate system are obtained by offline calibration; the target position in the camera coordinate system, $P^c = R_{lc} P^l + t_{lc}$, is taken as the initial translation matrix $t_0$.
4. The zoom monocular vision-based aviation assembly robot pose measurement system according to claim 1, wherein the virtual control point coordinates $c_j^c$ in the camera coordinate system are solved as follows:
define $K$ as the camera intrinsic parameter matrix and write $c_j^c = [x_j^c, y_j^c, z_j^c]^T$; the camera imaging model is $s_i [u_i, v_i, 1]^T = K \sum_{j=1}^{4} \alpha_{ij} c_j^c$, where $[u_i, v_i]^T$ is the 2D feature point, $s_i$ is a projective scale factor, $\alpha_{ij}$ is the control point weight, and $\sum_{j=1}^{4} \alpha_{ij} = 1$;
eliminating $s_i$, each feature point yields two linear equations: $\sum_{j=1}^{4} \alpha_{ij}\left(f_u x_j^c + (u_c - u_i) z_j^c\right) = 0$ and $\sum_{j=1}^{4} \alpha_{ij}\left(f_v y_j^c + (v_c - v_i) z_j^c\right) = 0$, where $f_u, f_v$ are the focal lengths and $(u_c, v_c)$ is the principal point of $K$;
with n corresponding point pairs, a linear system of 2n equations is obtained, written $Mx = 0$, where $M$ is a $2n \times 12$ matrix and $x$ is the $12 \times 1$ virtual control point coordinate vector; by the invariance of distances between the three-dimensional virtual control points under the Euclidean transformation between the target and camera coordinate systems, the 3D control point coordinates in the camera coordinate system are obtained; $p_i^c = \sum_{j=1}^{4} \alpha_{ij} c_j^c$ then gives the 3D feature point coordinates in the camera coordinate system, from which the initial pose of the target is solved.
5. The zoom monocular vision-based aviation assembly robot pose measurement system according to claim 1, wherein the orthogonal iteration algorithm solves a globally convergent rotation matrix by minimizing the collinearity error in the target's three-dimensional space, comprising the following steps:
first, the collinearity error of the orthographic projection of the target coordinate system is defined: $e_i = (I - \hat{V}_i)(R\,p_i^w + t)$, where $R\,p_i^w + t$ is the feature point coordinate in the camera coordinate system, $\hat{V}_i = \frac{v_i v_i^T}{v_i^T v_i}$ is the line-of-sight projection matrix of the image feature point $v_i$, and $e_i$ is the collinearity error in the target coordinate system; the rotation matrix R and translation matrix t that minimize the sum of squared collinearity errors are solved iteratively;
the objective function is: $E(R, t) = \sum_{i=1}^{n} \left\| (I - \hat{V}_i)(R\,p_i^w + t) \right\|^2$;
when the rotation matrix R is fixed, the translation matrix is $t(R) = \frac{1}{n}\left(I - \frac{1}{n}\sum_{j=1}^{n} \hat{V}_j\right)^{-1} \sum_{i=1}^{n} (\hat{V}_i - I) R\,p_i^w$;
substituting back gives the collinearity error as a function of the rotation matrix alone: $E(R) = \sum_{i=1}^{n} \left\| (I - \hat{V}_i)\left(R\,p_i^w + t(R)\right) \right\|^2$;
computing $R^* = \arg\min_R E(R)$,
the optimal pose information $(R^*, t(R^*))$ is obtained by iterative solution.
CN202410290179.2A 2024-03-14 2024-03-14 Aviation assembly robot pose measurement system based on zoom monocular vision. Active, granted as CN117893610B.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410290179.2A CN117893610B (en) 2024-03-14 2024-03-14 Aviation assembly robot pose measurement system based on zoom monocular vision


Publications (2)

Publication Number Publication Date
CN117893610A (en) 2024-04-16
CN117893610B (en) 2024-05-28

Family

ID=90639757

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410290179.2A Active CN117893610B (en) 2024-03-14 2024-03-14 Aviation assembly robot pose measurement system based on zoom monocular vision

Country Status (1)

Country Link
CN (1) CN117893610B (en)


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113096185B (en) * 2021-03-29 2023-06-06 Oppo广东移动通信有限公司 Visual positioning method, visual positioning device, storage medium and electronic equipment

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108375382A (en) * 2018-02-22 2018-08-07 北京航空航天大学 Position and attitude measuring system precision calibration method based on monocular vision and device
CN109448055A (en) * 2018-09-20 2019-03-08 中国科学院光电研究院 Monocular vision attitude determination method and system
CN110146038A (en) * 2019-06-08 2019-08-20 西安电子科技大学 The distributed monocular camera laser measuring device for measuring and method of cylindrical member assembly corner
CN111681279A (en) * 2020-04-17 2020-09-18 东南大学 Driving suspension arm space pose measurement method based on improved lie group nonlinear optimization
CN113340198A (en) * 2021-06-09 2021-09-03 配天机器人技术有限公司 Robot attitude measurement method and robot attitude measurement system
CN113409384A (en) * 2021-08-17 2021-09-17 深圳市华汉伟业科技有限公司 Pose estimation method and system of target object and robot
CN113776523A (en) * 2021-08-24 2021-12-10 武汉第二船舶设计研究所 Low-cost navigation positioning method and system for robot and application
WO2023060511A1 (en) * 2021-10-14 2023-04-20 宁德时代新能源科技股份有限公司 Method for determining position of battery box and method for disassembling and replacing battery box
CN114880888A (en) * 2022-07-08 2022-08-09 四川大学 Multi-rotary-joint robot end effector pose correlation dynamics prediction method
CN115638726A (en) * 2022-10-27 2023-01-24 天津大学 Fixed sweep pendulum type multi-camera vision measurement method
CN115625709A (en) * 2022-10-31 2023-01-20 深圳市凌云视迅科技有限责任公司 Hand and eye calibration method and device and computer equipment
CN116429162A (en) * 2023-03-07 2023-07-14 之江实验室 Multi-sensor calibration method and device and computer equipment
CN116608937A (en) * 2023-05-18 2023-08-18 哈尔滨工业大学 Large flexible structure vibration mode identification method and test device based on computer vision
CN117506918A (en) * 2023-11-30 2024-02-06 中国航空工业集团公司北京长城计量测试技术研究所 Industrial robot tail end pose calibration method

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Alberico Menozzi, "Development of vision-aided navigation for a wearable outdoor augmented reality system," IEEE Xplore, 2014-07-10, full text. *
Zhang Huijuan, "Research on automatic monocular vision attitude measurement methods" (单目视觉姿态自动测量方法研究), CNKI, 2019-09-15, sections 2.2-4.2. *
Chen Jinping, "Robust camera pose estimation method under coplanar reference points" (参考点共面条件下的稳健相机位姿估计方法), Laser & Optoelectronics Progress, vol. 61, no. 4, 2024-02-25, sections 1-2. *

Also Published As

Publication number Publication date
CN117893610A (en) 2024-04-16

Similar Documents

Publication Publication Date Title
CN109029299B (en) Dual-camera measuring device and method for butt joint corner of cabin pin hole
CN110146038B (en) Distributed monocular camera laser measuring device and method for assembly corner of cylindrical part
US8600192B2 (en) System and method for finding correspondence between cameras in a three-dimensional vision system
CN112325796A (en) Large-scale workpiece profile measuring method based on auxiliary positioning multi-view point cloud splicing
CN112700501B (en) Underwater monocular subpixel relative pose estimation method
CN109767416B (en) Positioning system and method for mechanical equipment
CN111220126A (en) Space object pose measurement method based on point features and monocular camera
CN111415391A (en) Multi-view camera external orientation parameter calibration method adopting inter-shooting method
CN104537707A (en) Image space type stereo vision on-line movement real-time measurement system
CN112362034B (en) Solid engine multi-cylinder section butt joint guiding measurement method based on binocular vision
CN113884002B (en) Pantograph slide plate upper surface detection system and method based on two-dimensional and three-dimensional information fusion
CN111879354A (en) Unmanned aerial vehicle measurement system that becomes more meticulous
CN113172659B (en) Flexible robot arm shape measuring method and system based on equivalent center point identification
CN113421291A (en) Workpiece position alignment method using point cloud registration technology and three-dimensional reconstruction technology
CN114001651B (en) Large-scale slender barrel type component pose in-situ measurement method based on binocular vision measurement and priori detection data
CN113160335A (en) Model point cloud and three-dimensional surface reconstruction method based on binocular vision
CN115284292A (en) Mechanical arm hand-eye calibration method and device based on laser camera
CN115546289A (en) Robot-based three-dimensional shape measurement method for complex structural part
CN116563377A (en) Mars rock measurement method based on hemispherical projection model
Jaw et al. Feature-based registration of terrestrial lidar point clouds
CN117893610B (en) Aviation assembly robot pose measurement system based on zoom monocular vision
CN114581515A (en) Multi-camera calibration parameter optimization method based on optimal path conversion
CN112700505A (en) Binocular three-dimensional tracking-based hand-eye calibration method, equipment and storage medium
CN115131208A (en) Structured light 3D scanning measurement method and system
CN114463495A (en) Intelligent spraying method and system based on machine vision technology

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant