CN114001651A - Large-scale long and thin cylinder type component pose in-situ measurement method based on binocular vision measurement and prior detection data - Google Patents


Info

Publication number
CN114001651A
Authority
CN
China
Prior art keywords
measurement
gcs
key feature
pose
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111224890.0A
Other languages
Chinese (zh)
Other versions
CN114001651B (en)
Inventor
樊伟
郑联语
付强
曹彦生
刘新玉
张学鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beihang University
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University filed Critical Beihang University
Priority to CN202111224890.0A priority Critical patent/CN114001651B/en
Publication of CN114001651A publication Critical patent/CN114001651A/en
Application granted granted Critical
Publication of CN114001651B publication Critical patent/CN114001651B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G — PHYSICS
    • G01 — MEASURING; TESTING
    • G01B — MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 — Measuring arrangements characterised by the use of optical techniques

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses an in-situ measurement method for the pose of a large slender cylinder member based on binocular vision measurement and prior detection data. First, the method builds a global coordinate system for the entire large-component measurement system based on the laser tracker and measurement aids (i.e., customized reference plates). Then, the relative positions of the target holes to be measured on the large component and the visual targets on the reference plates are quickly obtained through binocular vision measurement (the target holes and the visual targets serve as key positioning features of the measurement system); at this point, the prior detection data of the end-face flanges of the cylinder sections at the two ends of the large component are introduced to reconstruct the end-face shapes at both ends. Finally, the global centroid coordinates of the flange end faces of the cylinder sections at the two ends and the circumferential deflection angle of the large member are calculated using the transformation relationships among the coordinate systems in the measurement system, from which the current pose of the large member is computed. The method breaks through key technologies such as global coordinate system construction in a large-size measurement scene, deep-learning-based target positioning of key features of large components in complex industrial scenes, and reduction of error accumulation in the conversion of local coordinates into global coordinates. The method can effectively solve problems such as difficult manual scribing and low production efficiency in the multi-robot collaborative processing of large slender cylinder components, and can effectively improve the processing quality and efficiency of such components.

Description

Large-scale long and thin cylinder type component pose in-situ measurement method based on binocular vision measurement and prior detection data
Technical Field
The invention relates to the field of in-situ pose measurement for multi-robot cooperative adaptive processing of large slender barrel-type members (hereinafter referred to as large members), and in particular to: a method for constructing a global coordinate system for large-member measurement based on laser tracking measurement and a customized auxiliary measurement device (customized reference plate); a method for identifying the key positioning features of large members based on deep learning and digital image processing; and a method for computing the current pose of a large member from the key-positioning-feature identification results combined with the prior detection data of the large member's flanges.
Background
With the rapid development and application of large-size digital measurement technology, the method provides technical support for high-precision and high-efficiency in-situ pose perception of large-size components. The currently common large-size digital measurement methods mainly include laser tracking measurement, iGPS measurement, laser radar measurement, vision measurement, and the like.
However, in the face of a complex measurement scenario, sometimes a single measurement method is too limited to complete a given measurement task with high precision and efficiency. For example, due to the influence of the measurement distance and the shooting environment, the monocular vision measurement system sometimes has difficulty in ensuring the stability of the measurement accuracy, resulting in poor measurement effect. Although the laser tracking measurement has a large measurement range and high measurement accuracy, each laser tracker can only measure the coordinates of one spatial measurement point at the same time, and the dynamic measurement efficiency is low.
The vision measurement method has the advantages of high precision and non-contact operation, and with the development of research and technology, the accuracy of automatic identification of targets to be measured keeps improving. The laser tracking method has the advantages of large measurement range, high measurement precision and good stability, and is widely applied to digital assembly and inspection of products. Therefore, to solve the problem of high-precision and high-efficiency pose perception of large cylinder-type components, the invention provides an in-situ pose measurement method for large slender cylinder members based on binocular vision measurement and prior detection data.
The measurement system constructed for in-situ pose perception of the large component mainly comprises a laser tracker, two groups of binocular vision measurement systems A and B, two groups of customized measurement auxiliary devices (namely customized reference plates A and B) and a server (mainly used for installing and running the measurement software). On the basis of this measurement system:
first, the method establishes a Global Coordinate System (GCS) of the entire large-scale component measurement System based on the laser tracker and the measurement assistance device (i.e., the customized reference plate).
Then, the relative position of the target hole to be measured of the large component and the visual target on the reference plate is quickly obtained through a binocular vision measurement method (wherein the target hole to be measured and the visual target can be used as key positioning characteristics of the measurement system), and at the moment, the prior detection data of the large component end face flange is introduced to construct the end face shape of the large component.
And finally, calculating the global coordinate value of the centroid of the flanges on the two end faces of the large member and the circumferential deflection angle of the large member by utilizing the interoperation relationship among the coordinate systems in the measuring system, thereby calculating the current pose of the large member.
Disclosure of Invention
The invention provides a binocular vision measurement and prior detection data-based pose in-situ measurement method for a large long and thin barrel type component, aiming at the problems of low manual scribing alignment efficiency and poor pose adjustment precision of the large component in the processing process of a large component in-situ robot. The method breaks through key technologies such as global coordinate system GCS construction in a large-size measurement scene, large-size component key feature target positioning in a complex industrial scene based on deep learning, reduction of error accumulation in the process of converting local coordinates into global coordinates and the like.
Simultaneously, the following scientific problems are solved:
(1) how to quickly construct a measurement field of a unified reference coordinate system under a large-scale component multi-robot cooperative intelligent processing mode;
(2) how to quickly and accurately position key feature points of a large component based on an image acquired in a large scene in a complex environment;
(3) how to reduce the measurement error of the vision measurement system by means of fusing external data and measurement data.
The formed large-scale component in-situ pose measuring system based on binocular vision measurement can realize the functions of camera automatic calibration, barrel section pose automatic measurement, barrel section error model analysis, visual display and derivation of measuring results and the like, can achieve the effect of large-scale component automatic measurement, outputs key pose parameters such as large-scale component axis position, quadrant deflection angle and the like, and provides data support for large-scale component multi-robot collaborative intelligent in-situ processing.
In order to achieve the purpose, the in-situ measurement method for the pose of the large-sized long and thin barrel type component based on binocular vision measurement and prior detection data specifically comprises the following steps:
S1, after the position relationship of the two groups of reference plates in the large-member measurement field is determined, binocular camera calibration is completed before measurement based on a checkerboard calibration board, by controlling the two groups of binocular vision measurement systems A and B arranged at the two ends of the large component to be measured.
S2, the construction of the global coordinate system GCS is completed by importing the measurement data of the laser tracker to the reference plate.
S3, importing and analyzing the prior detection data file of the end face flange of the large component to be measured to obtain partial size parameters of the large component (namely size parameters of end frame hole positions of the end face flange of the cylinder sections at two ends of the large component).
And S4, after the initialization of the measuring system is finished, respectively controlling two groups of binocular camera systems A, B to acquire images of the end faces of the cylinder sections at the two ends of the large-scale component, and ensuring that each image simultaneously shoots all the visual targets on the end face of the cylinder section at the corresponding end and the datum plate.
And S5, identifying key feature central points (namely the central points of the visual target and the large-scale component end face flange end frame hole feature) in each image shot in the step S4 based on a deep learning method and combined with a digital image processing technology.
And S6, matching key features of left and right images in the binocular camera and three-dimensional reconstruction are completed based on the binocular calibration data in the S1.
S7, fit the three-dimensional coordinates of the end-frame centroids of the end-face flanges at the two ends of the large member in the global coordinate system, based on the three-dimensional coordinates of the key feature center points solved in S6 and the prior detection data parsed in S3, and finally solve the current pose of the large member in the global coordinate system GCS. The current pose of the large member can be represented by the cylinder-section axis vector and the cylinder-section circumferential quadrant deflection angle under the global coordinate system GCS.
Step S1 includes:
s1.1, determining the size of a view field and detection precision according to the optimal shooting angle of the large component barrel section and the reference plate, and selecting the corresponding resolution of a camera.
S1.2, controlling and switching the cameras arranged at the two ends of the large-scale component through measurement software, respectively completing binocular calibration of the cameras at the two ends by means of a calibration board checkerboard, and acquiring internal and external parameters of a camera group of A, B groups of binocular vision measurement systems.
Step S2 includes:
and S2.1, respectively measuring three-dimensional coordinates of specified reference points on the reference plates A and B by using a laser tracker, establishing X, Y, Z axes of a global coordinate system GCS by means of the reference points on the reference plates A, and obtaining a coordinate conversion relation between a laser tracking measurement coordinate system LCS and the GCS based on the X, Y, Z axes, thereby completing establishment of the GCS.
And S2.2, according to the coordinate conversion relation between the LCS and the GCS obtained in the S2.1, the coordinate value of the reference point on the reference plate B measured under the LCS can be converted into the coordinate value of the reference plate B measured under the GCS, so that the relative pose relation of the reference plates A and B under the GCS is obtained.
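Once the LCS-to-GCS transformation is expressed as a rotation matrix R and translation vector t, converting the reference-plate-B points measured under the LCS is a single rigid map. A minimal NumPy sketch (the example R and t are illustrative, not values from the patent):

```python
import numpy as np

def lcs_to_gcs(points_lcs, R, t):
    """Map Nx3 points from the laser tracker frame (LCS) to the
    global frame (GCS) via p_gcs = R @ p_lcs + t."""
    points_lcs = np.asarray(points_lcs, dtype=float)
    return points_lcs @ R.T + t

# Example: a 90-degree rotation about Z plus a translation.
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
t = np.array([100.0, 0.0, 50.0])
plate_b_lcs = np.array([[1.0, 0.0, 0.0],
                        [0.0, 2.0, 0.0]])
plate_b_gcs = lcs_to_gcs(plate_b_lcs, R, t)
```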
S2.3, the reference plate is a square finish-machined metal plate, 400 mm on a side and 15 mm thick. The plate surface carries uniformly distributed finish-machined through holes, each the same size as the pin shaft of a laser tracker target ball seat and of a photogrammetric target seat tool. The line connecting the centers of the laser tracking target ball seats and the vision measurement target tools on the two sides of the reference plate is perpendicular to the plate, and its length is known. From this relationship the relative coordinate transformation of the reference points on the two sides of the plate can be calculated, so the mutual transformation of the vision measurement coordinate systems of the two binocular vision systems A and B in the GCS can be determined, yielding the transformation between each vision measurement coordinate system CCS and the GCS.
Step S3 includes:
S3.1, parse the measurement and inspection data file of the flange dimensions and hole positions on the end faces of the cylinder sections at the two ends of the large component, obtain the actual machined positions of the feature holes on the flange end face (i.e., the hole-center coordinates of the feature holes), fit a flange end-face circle using the feature hole-center coordinates, and regard the center of that end-face circle as the centroid of the flange.
And S3.2, regarding the hole center of the key characteristic hole on the end surface of the large-scale component and the circle center of the end surface of the large-scale component as a group of plane points, reserving the relative position relation between the plane points, and converting the two-dimensional points into three-dimensional points to form a group of point sets distributed on a space plane.
Step S4 includes:
and S4.1, finely adjusting the positions of the binocular cameras in the two groups of vision measurement systems A and B, and ensuring that the end surface of the whole large-scale component and the reference plate are enveloped in the field of view of the cameras.
Step S5 includes:
S5.1, a Faster R-CNN neural network with stable performance is selected as the target detection network; key feature points in each image are identified and located with a pre-trained model. Each key feature is located as a bounding box with parameters (x, y, w, h), and the feature category ('hole' or 'target') is output at the same time, where x and y denote the abscissa and ordinate of the top-left corner of the bounding box and w and h its width and height; 'hole' and 'target' denote the hole-type feature and the visual target, respectively.
S5.2, the center of the key feature within each bounding-box image is located as follows: first the edges of the key feature are found with the Canny operator, then the optimal contour is fitted to those edges with a Hough circle-fitting algorithm, which locates the position of the key feature's center point.
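As a simplified, dependency-free stand-in for the Canny + Hough pipeline described in S5.2 (which in practice would use OpenCV's `cv2.Canny` and `cv2.HoughCircles`), the sketch below localizes the center of a circular feature inside a detection box by an intensity-weighted centroid over a synthetic binary disk; the function name and the synthetic patch are illustrative only:

```python
import numpy as np

def feature_center(patch):
    """Estimate the center of a bright circular feature in a 2-D
    patch as the intensity-weighted centroid (row, col)."""
    patch = np.asarray(patch, dtype=float)
    total = patch.sum()
    rows, cols = np.indices(patch.shape)
    return (rows * patch).sum() / total, (cols * patch).sum() / total

# Synthetic 64x64 patch with a disk of radius 10 centered at (30, 25).
rr, cc = np.indices((64, 64))
patch = ((rr - 30) ** 2 + (cc - 25) ** 2 <= 10 ** 2).astype(float)
center = feature_center(patch)
```

For a symmetric disk the centroid coincides with the true center; for real hole images the Hough fit is more robust to partial occlusion and non-uniform lighting.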
Step S6 includes:
S6.1, the binocular-vision fundamental matrix F is obtained from the camera intrinsic and extrinsic parameters obtained in S1. Each key feature center point in the left image is multiplied by F to obtain the theoretical position of the corresponding right-image key feature center under the epipolar constraint, and the key feature center point in the right image closest to that theoretical position is found; repeating this operation completes the matching of all key feature center points in the left image.
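The matching step in S6.1 can be sketched as follows. Strictly, multiplying a left-image point x by F yields an epipolar line l = F·x in the right image rather than a single point, so the sketch below picks the right-image candidate nearest that line; the fundamental matrix shown is the one for an ideal rectified stereo pair and is illustrative:

```python
import numpy as np

def match_epipolar(left_pt, right_candidates, F):
    """Pick the index of the right-image candidate closest to the
    epipolar line l = F @ x of a left-image point (pixel coords)."""
    x = np.array([left_pt[0], left_pt[1], 1.0])
    a, b, c = F @ x                              # line: a*u + b*v + c = 0
    best, best_d = None, np.inf
    for idx, (u, v) in enumerate(right_candidates):
        d = abs(a * u + b * v + c) / np.hypot(a, b)  # point-line distance
        if d < best_d:
            best, best_d = idx, d
    return best

# Rectified pair: the epipolar line of (u, v) is the same-row line v' = v.
F = np.array([[0.0, 0.0, 0.0],
              [0.0, 0.0, -1.0],
              [0.0, 1.0, 0.0]])
candidates = [(310.0, 95.0), (305.0, 120.3), (290.0, 150.0)]
match = match_epipolar((300.0, 120.0), candidates, F)
```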
S6.2, using the matched key feature center points of the left and right images and the binocular camera calibration results, the three-dimensional coordinates of the large member's key feature center points in the current binocular camera coordinate system are solved, completing the three-dimensional reconstruction of the key feature center points.
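The reconstruction in S6.2 (in OpenCV this is typically `cv2.triangulatePoints`) amounts to linear triangulation from the two camera projection matrices. A self-contained DLT sketch with assumed projection matrices (identity intrinsics, unit baseline — illustrative values only):

```python
import numpy as np

def triangulate(P1, P2, pt1, pt2):
    """Linear (DLT) triangulation of one matched point pair from two
    3x4 projection matrices; returns the 3-D point."""
    A = np.vstack([
        pt1[0] * P1[2] - P1[0],
        pt1[1] * P1[2] - P1[1],
        pt2[0] * P2[2] - P2[0],
        pt2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                  # null vector of A, homogeneous 3-D point
    return X[:3] / X[3]

# Two assumed cameras: identity pose and a 1-unit baseline along X.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])
pt1 = X_true[:2] / X_true[2]                          # image in camera 1
pt2 = (X_true - [1.0, 0.0, 0.0])[:2] / X_true[2]      # image in camera 2
X_rec = triangulate(P1, P2, pt1, pt2)
```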
And S6.3, converting the three-dimensional coordinates of the key feature points under the CCS into the GCS by using the coordinate mutual conversion relation between the GCS and the CCS established in the step S2.
Step S7 includes:
and S7.1, matching the point set obtained in the step S3 with the central three-dimensional coordinates of the key features obtained in the step S6 based on the relative position relation of the key features of the large-scale component on the end face flange.
And S7.2, after the key positioning features of the large member are matched, the three-dimensional coordinates of the center of shape and the quadrant holes of the end face flanges of the cylinder sections at the two ends of the large member under the global coordinate system GCS can be obtained through the mutual conversion relation of the coordinate systems in the measuring system.
S7.3, the current pose of the large member under the global coordinate system GCS is calculated from the solved data.
Compared with the prior art, the invention has the following advantages and beneficial effects:
(1) the invention provides an in-situ automatic measurement method for the pose of a large-sized long and thin barrel type component based on binocular vision measurement and prior detection data, aiming at the problems that the manual marking and aligning method in the current large-sized component machining process is low in efficiency and cannot guarantee the precision. The method can greatly improve the measurement efficiency, precision and stability of the in-situ attitude of the large-scale component, and can effectively reduce errors caused by improper operation.
(2) The laser tracker can complete the measurement task under a large-scale scene, and is mainly used for construction of GCS and periodic calibration and calibration of the installation position of the customized reference plate. Therefore, the utilization rate of the equipment can be effectively ensured, and the maintenance cost of the measuring system is reduced.
(3) Because the calculation of the measurement data and the intermediate data is completed by the computer, the output form of the measurement result is very flexible, the output form and the output format of the data can be adjusted according to the field requirement, the real-time communication can be carried out with the large-scale member multi-robot cooperative intelligent processing system, and the measurement result of the current pose of the large-scale member is output to the process control system of the large-scale member multi-robot cooperative intelligent processing system for process adjustment or optimization.
(4) The method provided by the invention can be operated by only one person without strong professional knowledge of operators, and can effectively save the labor cost of multi-person cooperation, professional training and the like.
FIG. 1 is a layout diagram of a hardware device and a distribution diagram of a key coordinate system according to the present invention.
FIG. 2 is a logic relationship diagram of the main functional modules of the measurement software developed by the present invention.
FIG. 3 is a schematic diagram and an object diagram of a customized reference plate, a special auxiliary measuring device designed and manufactured in the invention.
FIG. 4 is a flow chart of the present invention for locating key feature centers in a single image using deep learning and digital image processing algorithms.
FIG. 5 is a schematic diagram of a left and right image key feature center matching and three-dimensional reconstruction storage format according to the present invention.
FIG. 6 is the human-computer interface of the measurement software developed by the present invention.
FIG. 7 is a schematic diagram of the steps of constructing a global coordinate system according to the present invention.
Detailed description of the invention
The present invention will be described in further detail with reference to examples and drawings, but the present invention is not limited thereto.
In S1.1, following Zhang Zhengyou's calibration method and implemented with OpenCV, the monocular calibration of each binocular camera solves the camera intrinsic parameter matrix mtx, the distortion parameters dist, and the rotation rvecs and translation tvecs between the camera and the calibration-board plane at each shot; the binocular calibration then solves the stereo rectification parameters and the rotation matrix R and translation matrix t between the left and right cameras.
To ensure real-time switching and synchronized shooting, the camera output ports are connected to the PCI-E bus interface of the server host over high-speed USB 3.0.
As shown in fig. 1, the coordinate systems in the large component pose measurement system mainly include a global coordinate system GCS, a large component coordinate system TCS, a laser tracking measurement coordinate system LCS, reference plate coordinate systems RCS1 and RCS2, binocular vision measurement coordinate systems CCS1 and CCS2, and the like. Before the posture of the large member is adjusted, a unified coordinate system (namely a global coordinate system GCS) is established to realize the interoperation of each coordinate system in the large member posture measuring system, which is the basis of the posture perception of the large member. The global coordinate system GCS establishing process mainly comprises the following steps:
step 1: GCS settings for laser tracker
The key measurement points K_Mi (i = 1, 2, 3) set on each reference plate are measured with the laser tracker, giving the coordinate values of the points K_Mi in the laser tracking measurement coordinate system. As shown in FIG. 3, the spatial relationship of the measurement points K_M1, K_M2 and K_M3 on reference plate A is known. Let K_M2 be the origin of the global coordinate system GCS; the X, Y, Z axes of the GCS can then be determined from the spatial position relationships between the measurement points. At the same time, the points K_M1, K_M2 and K_M3 are assigned their coordinate values under the global coordinate system GCS in the laser tracking measurement software. At this point, the measurement coordinate system of the laser tracker is set as the global coordinate system GCS. The remaining key measurement points are then measured with the laser tracker to obtain the global coordinate value of each key point.
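One common way to realize Step 1 — hedged, since the patent does not spell out the exact axis construction — is to place the origin at K_M2, aim the X axis toward K_M1, and complete a right-handed frame by orthogonalization against K_M3:

```python
import numpy as np

def build_gcs(k_m1, k_m2, k_m3):
    """Build a right-handed frame from three non-collinear points:
    origin at k_m2, X axis toward k_m1, Z axis normal to the plate.
    Returns (origin, R) where the columns of R are the GCS axes
    expressed in the tracker's coordinate system."""
    k1, k2, k3 = (np.asarray(p, float) for p in (k_m1, k_m2, k_m3))
    x = (k1 - k2) / np.linalg.norm(k1 - k2)
    z = np.cross(x, k3 - k2)
    z /= np.linalg.norm(z)
    y = np.cross(z, x)          # completes the right-handed triad
    return k2, np.column_stack([x, y, z])

origin, R = build_gcs([1.0, 0.0, 0.0], [0.0, 0.0, 0.0], [0.0, 1.0, 0.0])
```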
Step 2: GCS measurement of visual targets
As shown in FIG. 3, the line connecting each vision measurement (VM) target installed on the two sides of the reference plate with the center of the corresponding laser tracking target ball is perpendicular to the reference plate plane, and the distance between them is fixed. Therefore, the translation vector Q from a target ball to its corresponding vision measurement target can be obtained from this relationship, and the global coordinate value of the vision measurement target follows from Q and the global coordinate value of the key measurement point where the target ball is located.
Step 3: Acquire images of the reference plate and of the reference holes on the end faces of the large component with the binocular measurement systems, obtaining the coordinate values of the end-face reference holes and of the vision measurement targets on the reference plate under the binocular vision camera coordinate system. Since the global coordinate values of the vision measurement targets were obtained in Step 2, the coordinate transformation between the vision measurement coordinate system and the global coordinate system can be computed, and the position data of the reference holes can be converted into the global coordinate system, completing the GCS construction of the large-component measurement system.
In S2.1, the measurement data of the laser tracker can be imported in two ways: in the first, the data are entered manually in real time during measurement; in the second, a measurement result file in a fixed format is exported by the laser tracker's control software, and the measurement system reads and parses the file directly.
In S2.2, to solve the transformation from the laser tracker coordinate system to the reference plate A coordinate system, the coordinates of the same set of points in the two coordinate systems are abstracted into two point sets {x_i} and {y_i} (i = 1, ..., n), and the model

    y_i = R x_i + t    (1)

is established, where the rotation matrix R and translation vector t minimize the registration error

    E(R, t) = Σ_i || y_i − (R x_i + t) ||²    (2)

Omitting the intermediate derivation: centering each point set on its centroid (x̄ = (1/n) Σ_i x_i, ȳ = (1/n) Σ_i y_i, with X_i = x_i − x̄ and Y_i = y_i − ȳ) reduces the problem to maximizing tr(R S), where the cross-covariance matrix is

    S = X Y^T = Σ_i X_i Y_i^T    (3)

Let the singular value decomposition of S be S = U Σ_s V^T. Then

    tr(R S) = tr(R U Σ_s V^T) = tr(V^T R U Σ_s) = tr(M Σ_s),  where M = V^T R U.

V, R, U are all orthogonal matrices, so M = V^T R U is also an orthogonal matrix; every column vector m_j of M therefore has unit norm, so each diagonal element satisfies |m_jj| ≤ 1, and

    tr(M Σ_s) = Σ_j σ_j m_jj ≤ Σ_j σ_j    (4)

The trace is thus maximized, yielding the optimal R, when every diagonal element of M equals 1, i.e. when

    I = M = V^T R U    (5)
    V = R U    (6)
    R = V U^T    (7)

After the rotation matrix R is calculated, the translation vector t is obtained by substituting R back into the original model: t = ȳ − R x̄.
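The SVD solution ending in equations (5)–(7) is the standard point-set registration (Kabsch) procedure; a direct NumPy sketch, with a reflection guard added for the degenerate case det(V U^T) = −1:

```python
import numpy as np

def rigid_fit(X, Y):
    """Find R, t minimizing sum ||y_i - (R x_i + t)||^2 for Nx3
    corresponding point sets X (source) and Y (target)."""
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    xm, ym = X.mean(axis=0), Y.mean(axis=0)
    S = (X - xm).T @ (Y - ym)            # cross-covariance, eq. (3)
    U, _, Vt = np.linalg.svd(S)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                   # R = V U^T, eq. (7)
    t = ym - R @ xm
    return R, t

# Verify recovery of a known transform.
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 3))
a = 0.3
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
t_true = np.array([2.0, -1.0, 0.5])
Y = X @ R_true.T + t_true
R_est, t_est = rigid_fit(X, Y)
```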
The current pose of the large member comprises the axis pose data and the circumferential roll deflection angle of the member under the global coordinate system GCS. The key is to solve the three-dimensional coordinates of each hole site at the two ends of the barrel section and of the end-face centroids under the global coordinate system; the centroid coordinates are computed by combining the solved hole-site coordinates with the barrel-section flange inspection data.
The calculation of the three-dimensional coordinates of the key feature points can be divided into positioning the key feature points on the images acquired by each camera and reconstructing the three-dimensional coordinates of the key feature points (the center of the reference hole on the end face of the large-scale component and the center of the visual measurement target) according to the positioning of the key feature points in the left camera and the right camera.
Fig. 4 shows a single image identification and key feature point location process that incorporates a deep learning algorithm, an edge detection algorithm, and a circle-fitting algorithm.
After the measuring system obtains the original image, the target detection is carried out by adopting a deep learning algorithm, the image is input into a pre-trained neural network model, the output result is the position and the size of a target frame and the category of key features in the target frame, and the detection result is written into a data file according to a certain format.
And separating the image in the target frame, and performing edge detection on the image by using a Canny operator to obtain an edge image. And finally, fitting a circle closest to the edge shape according to the edge image by using a hough circle fitting algorithm, and taking the circle center of the circle as a central positioning point of the key feature in the target frame.
It should be noted that not all key features in the images captured by the binocular cameras are close to perfect circles. Therefore, a score is introduced measuring how closely each key feature matches its fitted shape, and it is stored in the data file together with the feature's position information for subsequent reading. At this point, the identification and localization of the key feature center points in a single image is complete.
After the key feature points in the left and right images have been matched using the epipolar constraint, the three-dimensional coordinates of each key feature point in the binocular camera coordinate system can be calculated by least squares from the triangulation model of the binocular camera, combined with the camera intrinsic, extrinsic, and distortion parameters recorded during binocular calibration. The results are stored in the key-feature data file in the format shown in FIG. 5. The solution of the three-dimensional coordinates of the key feature points is then complete.
In order to reduce the influence of the three-dimensional reconstruction effect on the precision of the measurement system, the prior detection data of the flange end face of the large member is introduced to solve the current pose of the large member.
S3.1, the position of the end-face centroid in the flange plane coordinate system can be found by fitting an end-face circle through the actual machined positions U_i = [u_i, v_i]^T of the mounting-hole centers on the flange plane. If the current end-face mounting-hole set has n elements, the simplified model of the error f of the fitted circle on the two-dimensional plane is given by formula (8):

f(u_c, v_c, R) = Σ_{i=1}^{n} ((u_i − u_c)² + (v_i − v_c)² − R²)²    (8)

Following the least-squares idea, the fitted circle center U_c* = [u_c*, v_c*]^T and the radius R* are taken as the values at which this error model attains its minimum.
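Minimizing formula (8) looks nonlinear, but the standard change of variables c = R² − u_c² − v_c² (the Kåsa fit) turns it into an ordinary linear least-squares problem. The sketch below illustrates that substitution; the names are illustrative, not from the patent:

```python
import numpy as np

def fit_circle(points):
    """Least-squares circle fit to 2D hole-centre positions U_i = [u_i, v_i].
    (u-uc)^2 + (v-vc)^2 = R^2 is rewritten as
    u^2 + v^2 = 2*uc*u + 2*vc*v + c  with  c = R^2 - uc^2 - vc^2,
    which is linear in (uc, vc, c)."""
    pts = np.asarray(points, dtype=float)
    u, v = pts[:, 0], pts[:, 1]
    A = np.column_stack([2 * u, 2 * v, np.ones(len(pts))])
    b = u ** 2 + v ** 2
    (uc, vc, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    R = np.sqrt(c + uc ** 2 + vc ** 2)
    return np.array([uc, vc]), R
```

For noise-free hole centres this recovers the exact centre and radius; with measurement noise it gives the least-squares minimizer of the linearized model.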
The measurement system reads the computed three-dimensional coordinate information of the key feature points and decides, from the estimated diameter at each point, whether it is a mounting-hole center; if so, the point is added to the mounting-hole center set. The k mounting holes with the highest localization scores, P_k = [x_k, y_k, z_k]^T (k ≥ 3), are picked from the set, and a one-to-one correspondence is established between these k mounting-hole centers and the flange detection data of the large-member cylinder section. When the flange detection data are converted from the two-dimensional plane into three-dimensional space, the z-axis value is uniformly set to 0, i.e. the mounting-hole center coordinates in the detection data become U_k = [u_k, v_k, 0]^T and the fitted circle center becomes U_c = [u_c, v_c, 0]^T.
Let the coordinates of the fitted circle center in the global coordinate system be P_c = [x_c, y_c, z_c]^T. The vector between the ith mounting hole and the circle center in the prior detection data can be expressed as shown in formula (9).
a_i = U_i − U_c = [u_i − u_c, v_i − v_c, 0]^T    (9)
The vector between the ith mounting hole and the center of the circle in the global coordinate system can be expressed as shown in formula (10).
b_i = P_i − P_c = [x_i − x_c, y_i − y_c, z_i − z_c]^T    (10)
Because the included angle between the vectors from the fitted circle center to any two mounting-hole centers is numerically unchanged by the conversion from the two-dimensional plane into three-dimensional space, a set of relational expressions linking the mounting-hole center points and the fitted circle center coordinates can be listed. Substituting the global coordinates of the picked k mounting holes into these expressions yields x_c, y_c and z_c, i.e. the three-dimensional coordinates of the end-face circle center of the large member in the global coordinate system; the line connecting the circle centers of the two end faces in space is the axis of the cylinder-section member.
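Since the plane-to-space conversion described here is rigid, not only the included angles but also the distances from each hole center to the fitted circle center are preserved. One way to sketch the recovery of the global center coordinates x_c, y_c, z_c is therefore to fit the plane of the picked holes and solve the preserved-distance constraints as a small linear system; this is a hypothetical sketch of the idea, not the patent's exact derivation:

```python
import numpy as np

def solve_centre_3d(holes_3d, dists):
    """Recover the end-face circle centre P_c in the global frame from
    k >= 3 coplanar hole centres and their in-plane distances to the
    centre (known from the prior 2D detection data); distances are
    preserved because the plane-to-space mapping is rigid."""
    P = np.asarray(holes_3d, dtype=float)
    d = np.asarray(dists, dtype=float)
    centroid = P.mean(axis=0)
    # Orthonormal basis (e1, e2) of the hole plane via SVD.
    _, _, Vt = np.linalg.svd(P - centroid)
    e1, e2 = Vt[0], Vt[1]
    q = np.column_stack([(P - centroid) @ e1, (P - centroid) @ e2])
    # Difference the circle equations |q_k - c|^2 = d_k^2 pairwise to
    # get a linear system in the in-plane centre c = (alpha, beta).
    A = 2 * (q[1:] - q[0])
    b = (np.sum(q[1:] ** 2, axis=1) - np.sum(q[0] ** 2)) - (d[1:] ** 2 - d[0] ** 2)
    c, *_ = np.linalg.lstsq(A, b, rcond=None)
    return centroid + c[0] * e1 + c[1] * e2
```

With k ≥ 3 holes at distinct angular positions the differenced system has full rank, and the recovered centre lies exactly in the fitted hole plane.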
Meanwhile, the three-dimensional coordinate data of the quadrant hole are identified in the key feature point storage file according to the detection data of the large-member end-face flange, and the circumferential rolling angle of the large member is solved from these data, finally completing the solution of the current pose of the large member.
In S7.3, the current pose of the large member comprises the member axis vector and the circumferential rolling quadrant deflection angle in the global coordinate system. The axis vector is the vector connecting the centroids of the two end faces of the member; the circumferential rolling quadrant deflection angle is the included angle between the YOZ plane of the global coordinate system and the line from the quadrant hole, on whichever flange end face (end A or end B) contains it, to the centroid of the end face closest to that quadrant hole.
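Under these definitions the pose solution reduces to two small vector computations. The sketch below assumes the angle to the YOZ plane is taken as the arcsine of the normalized X-component of the quadrant-hole-to-centroid line (an interpretation of the claim; the names are illustrative):

```python
import numpy as np

def member_pose(centroid_a, centroid_b, quadrant_hole):
    """Current pose of the member: unit axis vector (the centroid line)
    and the circumferential roll angle, taken here as the angle between
    the quadrant-hole-to-centroid line and the GCS YOZ plane (whose
    normal is the X axis)."""
    a, b, q = (np.asarray(p, dtype=float)
               for p in (centroid_a, centroid_b, quadrant_hole))
    axis = (b - a) / np.linalg.norm(b - a)
    r = a - q  # centroid_a is assumed to be the end nearest the quadrant hole
    # Angle between a vector and the YOZ plane = arcsin of its
    # normalized component along the plane normal (the X axis).
    roll = np.degrees(np.arcsin(r[0] / np.linalg.norm(r)))
    return axis, roll
```

A vector lying in the YOZ plane gives a roll of 0 degrees; one along the X axis gives 90 degrees.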

Claims (7)

1. A binocular vision measurement and prior detection data-based in-situ pose measurement method for a large elongated cylinder-type member, characterized by comprising the following steps:
and S1, after the position relation of the two groups of reference plates in the large component measuring field is determined, calibrating the plates based on the checkerboard, and controlling two groups of binocular vision measuring systems A and B arranged at two ends of the large component to be measured to finish binocular camera calibration before measurement.
S2, the construction of the global coordinate system GCS is completed by importing the measurement data of the laser tracker to the reference plate.
S3, importing and analyzing the prior detection data file of the end face flange of the large component to be measured to obtain partial size parameters of the large component (namely size parameters of end frame hole positions of the end face flange of the cylinder sections at two ends of the large component).
And S4, after the initialization of the measuring system is finished, respectively controlling two groups of binocular camera systems A, B to acquire images of the end faces of the cylinder sections at the two ends of the large-scale member, and ensuring that each image simultaneously shoots all the visual targets on the end face of the cylinder section at the corresponding end and the reference plate.
And S5, identifying key feature central points (namely the central points of the visual target and the large-scale component end face flange end frame hole features) in each image shot in the step S4 based on a deep learning method and combined with a digital image processing technology.
And S6, matching key features of left and right images in the binocular camera and three-dimensional reconstruction are completed based on the binocular calibration data in the S1.
And S7, fitting the three-dimensional coordinates of the end-face flange centroids at the two ends of the large member in the global coordinate system based on the three-dimensional coordinates of the key feature center points solved in S6 and the prior detection data parsed in S3, and finally solving the current pose of the large member in the global coordinate system GCS; the current pose of the large member can be represented by the cylinder-section axis vector and the cylinder-section circumferential quadrant deflection angle in the global coordinate system GCS.
2. The in-situ measurement method for the pose of the large-sized long and thin barrel type component based on binocular vision measurement and prior detection data, as recited in claim 1, is characterized in that: the step S2 includes the steps of,
and S2.1, respectively measuring three-dimensional coordinates of specified reference points on the reference plates A and B by using a laser tracker, establishing X, Y, Z axes of a global coordinate system GCS by means of the reference points on the reference plates A, and obtaining a coordinate conversion relation between a laser tracking measurement coordinate system LCS and the GCS based on the X, Y, Z axes, thereby completing the establishment of the GCS.
And S2.2, according to the coordinate conversion relation between the LCS and the GCS obtained in the S2.1, the coordinate value of the reference point on the reference plate B measured under the LCS can be converted into the coordinate value of the reference plate B measured under the GCS, so that the relative pose relation of the reference plates A and B under the GCS is obtained.
S2.3, the reference plate is a square finish-machined metal plate, 400 mm on a side and 15 mm thick. Finely machined through holes are evenly distributed over the plate surface, each with the same size as the pin shaft of a laser tracker target ball seat and the pin shaft of a photogrammetric target seat tool. The line connecting the centers of the laser tracking measurement target ball seats and the vision measurement target tools on the two sides of the reference plate is perpendicular to the reference plate and of known length, so the relative coordinate conversion of the reference points on the two sides of the plate can be calculated. From this, the mutual conversion of the vision measurement coordinate systems of the two binocular vision systems A and B in the GCS can be determined, giving the mutual conversion between each vision measurement coordinate system CCS and the GCS.
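The construction of the GCS from measured reference points (S2.1) can be sketched as building an orthonormal frame from three of them. The snippet assumes the conventional origin / +X point / XY-plane point scheme, which the patent does not spell out, so the exact choice of reference points is an assumption:

```python
import numpy as np

def build_gcs(p0, px, pxy):
    """Construct the GCS from three reference points measured in the
    laser tracker frame LCS: p0 is the GCS origin, px lies on the +X
    axis, pxy lies in the XY plane. Returns a function mapping LCS
    coordinates to GCS coordinates."""
    p0, px, pxy = (np.asarray(p, dtype=float) for p in (p0, px, pxy))
    x = (px - p0) / np.linalg.norm(px - p0)
    z = np.cross(x, pxy - p0)          # plane normal = Z axis
    z = z / np.linalg.norm(z)
    y = np.cross(z, x)                 # completes the right-handed frame
    R = np.vstack([x, y, z])           # rows: GCS axes expressed in LCS
    return lambda p: R @ (np.asarray(p, dtype=float) - p0)
```

The same rotation-plus-translation form would then convert the reference-plate-B points of S2.2 into the GCS.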
3. The in-situ measurement method for the pose of the large-sized elongated barrel-like member based on the vision and the prior detection data as claimed in claim 1 is characterized in that: the step S3 includes the steps of,
s3.1, analyzing the size of the flange on the end face of the cylinder section at the two ends of the large-scale component and a measurement detection data file of the hole position, obtaining the actual processing position of the characteristic hole of the end face of the flange of the large-scale component on the end face (namely the hole center coordinate of the characteristic hole), fitting out a flange end face circle by using the characteristic hole center coordinate, and taking the center of the end face circle as the centroid of the flange.
And S3.2, regarding the characteristic holes and the circle center of the end face as a group of plane points, reserving the relative position relationship between the characteristic holes and the circle center of the end face, and converting the two-dimensional points into three-dimensional points to form a group of point sets distributed on a space plane.
4. The in-situ measurement method for the pose of the large-sized elongated barrel-like member based on the vision and the prior detection data as claimed in claim 1 is characterized in that: the step S4 includes the steps of,
and S4.1, finely adjusting the positions of the binocular cameras in the two groups of vision measurement systems A and B, and ensuring that the end surface of the whole large-scale component and the reference plate are enveloped in the field of view of the cameras.
5. The in-situ measurement method for the pose of the large-sized elongated barrel-like member based on the vision and the prior detection data as claimed in claim 1 is characterized in that: step S5 includes:
S5.1, selecting the Faster R-CNN neural network, which has stable performance, as the target detection network; identifying and locating the key feature points in each image with a pre-trained model, locating each key feature point as Box parameters (x, y, w, h) and simultaneously giving the category (hole or target) of the feature point, wherein x, y, w and h respectively represent the horizontal and vertical coordinates of the upper left corner of the target positioning box and its width and height; hole and target represent the hole-type feature and the visual target, respectively.
S5.2, positioning the center of the key feature in each Box image, wherein the specific process comprises the steps of firstly finding out the edge of the key feature by using a Canny operator, and fitting out the optimal contour based on the edge by using a Hough circle fitting algorithm, so as to position the position of the center point of the key feature.
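The center localization of S5.2 (edge detection followed by a Hough circle fit) can be illustrated with a minimal Hough-style center vote. This sketch replaces the Canny stage with an already-extracted edge point list and assumes a known radius, so it is a simplification of the claimed pipeline, not its full implementation:

```python
import numpy as np
from collections import Counter

def hough_circle_centre(edge_pts, radius, step=1.0, n_angles=90):
    """Hough-style centre vote for a known radius: each edge point
    casts votes for every candidate centre lying at `radius` from it;
    the accumulator peak is the circle centre (to resolution `step`)."""
    acc = Counter()
    thetas = np.linspace(0.0, 2 * np.pi, n_angles, endpoint=False)
    for x, y in edge_pts:
        for t in thetas:
            a = round((x - radius * np.cos(t)) / step) * step
            b = round((y - radius * np.sin(t)) / step) * step
            acc[(a, b)] += 1
    return max(acc, key=acc.get)
```

All vote rings of points on one circle intersect at its true centre, so that accumulator cell dominates; a production pipeline would instead use an optimized implementation that also estimates the radius.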
6. The in-situ measurement method for the pose of the large-sized elongated barrel-like member based on the vision and the prior detection data as claimed in claim 1 is characterized in that: step S6 includes:
S6.1, obtaining the binocular vision fundamental matrix F from the camera intrinsic and extrinsic parameters obtained in S1; multiplying each key feature center point in the left image by F to obtain the theoretical position of the corresponding right-image key feature center under the epipolar constraint; finding the key feature center point in the right image closest to this theoretical position; and repeating this operation until all key feature center points in the left image are matched.
And S6.2, solving the three-dimensional coordinates of the key feature center points of the large member in the current binocular camera coordinate system from the matched key feature center points of the left and right images and the binocular camera calibration result, thereby completing the three-dimensional reconstruction of the key feature center points of the large member.
And S6.3, converting the three-dimensional coordinates of the key feature points under the CCS into the GCS by using the coordinate mutual conversion relation between the GCS and the CCS established in the step S2.
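The epipolar matching of S6.1 can be sketched as a nearest-to-epipolar-line search. The snippet assumes the fundamental matrix F is already available from calibration; the names and the pixel threshold are illustrative:

```python
import numpy as np

def match_by_epipolar(F, left_pts, right_pts, max_dist=2.0):
    """Match key-feature centres: for each left-image point, F gives its
    epipolar line in the right image; pick the right-image point closest
    to that line (within max_dist pixels)."""
    right_pts = np.asarray(right_pts, dtype=float)
    matches = []
    for i, (u, v) in enumerate(left_pts):
        l = F @ np.array([u, v, 1.0])   # epipolar line a*x + b*y + c = 0
        # Point-to-line distances for all right-image candidates.
        d = np.abs(right_pts @ l[:2] + l[2]) / np.hypot(l[0], l[1])
        j = int(np.argmin(d))
        if d[j] <= max_dist:
            matches.append((i, j))
    return matches
```

For a rectified stereo pair the epipolar lines are image rows, so this reduces to matching points with nearly equal vertical coordinates.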
7. The in-situ measurement method for the pose of the large-sized elongated barrel-like member based on the vision and the prior detection data as claimed in claim 1 is characterized in that: step S7 includes:
and S7.1, matching the point set obtained in the step S3 with the three-dimensional coordinates of the center of the key positioning feature of the large-scale component obtained in the step S6 based on the relative position relation of the key feature on the flange.
And S7.2, after matching is finished, the three-dimensional coordinates, in the GCS, of the flange centroids and the quadrant holes of the cylinder sections at the two ends of the large member can be obtained through the mutual conversion matrices among the coordinate systems within the measurement system.
And S7.3, calculating the current pose of the large member under the global coordinate system GCS based on the solved data.
CN202111224890.0A 2021-10-21 2021-10-21 Large-scale slender barrel type component pose in-situ measurement method based on binocular vision measurement and priori detection data Active CN114001651B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111224890.0A CN114001651B (en) 2021-10-21 2021-10-21 Large-scale slender barrel type component pose in-situ measurement method based on binocular vision measurement and priori detection data

Publications (2)

Publication Number Publication Date
CN114001651A true CN114001651A (en) 2022-02-01
CN114001651B CN114001651B (en) 2023-05-23

Family

ID=79923358

Country Status (1)

Country Link
CN (1) CN114001651B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114905511A (en) * 2022-05-12 2022-08-16 南京航空航天大学 Industrial robot assembly error detection and precision compensation system calibration method
CN115179065A (en) * 2022-06-20 2022-10-14 成都飞机工业(集团)有限责任公司 Air inlet channel type composite material tooling template machining support structure and allowance adjusting method

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107421465A (en) * 2017-08-18 2017-12-01 大连理工大学 A kind of binocular vision joining method based on laser tracker
CN107883870A (en) * 2017-10-24 2018-04-06 四川雷得兴业信息科技有限公司 Overall calibration method based on binocular vision system and laser tracker measuring system
DE102017110816A1 (en) * 2017-05-18 2018-07-12 Carl Zeiss Meditec Ag An optical observation apparatus and method for efficiently performing an automatic focusing algorithm
CN109297413A (en) * 2018-11-30 2019-02-01 中国科学院沈阳自动化研究所 A kind of large-size cylinder body Structural visual measurement method


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
GUO Qingda; QUAN Yanming; YU Guangping; WU Yanlin: "Research on an improved binocular calibration method based on the ICP algorithm", Acta Optica Sinica *


Also Published As

Publication number Publication date
CN114001651B (en) 2023-05-23

Similar Documents

Publication Publication Date Title
CN110296691B (en) IMU calibration-fused binocular stereo vision measurement method and system
CN109598762B (en) High-precision binocular camera calibration method
CN107214703B (en) Robot self-calibration method based on vision-assisted positioning
CN109859272B (en) Automatic focusing binocular camera calibration method and device
CN110728715A (en) Camera angle self-adaptive adjusting method of intelligent inspection robot
CN105716542B (en) A kind of three-dimensional data joining method based on flexible characteristic point
CN111415391B (en) External azimuth parameter calibration method for multi-camera by adopting mutual shooting method
CN109323650B (en) Unified method for measuring coordinate system by visual image sensor and light spot distance measuring sensor in measuring system
CN109297436B (en) Binocular line laser stereo measurement reference calibration method
CN103278138A (en) Method for measuring three-dimensional position and posture of thin component with complex structure
CN112700501B (en) Underwater monocular subpixel relative pose estimation method
CN114001651B (en) Large-scale slender barrel type component pose in-situ measurement method based on binocular vision measurement and priori detection data
CN109272555B (en) External parameter obtaining and calibrating method for RGB-D camera
US20220230348A1 (en) Method and apparatus for determining a three-dimensional position and pose of a fiducial marker
CN112288826A (en) Calibration method and device of binocular camera and terminal
CN111207670A (en) Line structured light calibration device and method
CN113724337B (en) Camera dynamic external parameter calibration method and device without depending on tripod head angle
CN112362034B (en) Solid engine multi-cylinder section butt joint guiding measurement method based on binocular vision
Wang et al. Error analysis and improved calibration algorithm for LED chip localization system based on visual feedback
CN113119129A (en) Monocular distance measurement positioning method based on standard ball
Ding et al. A robust detection method of control points for calibration and measurement with defocused images
CN115187612A (en) Plane area measuring method, device and system based on machine vision
CN109773589B (en) Method, device and equipment for online measurement and machining guidance of workpiece surface
CN113012238B (en) Method for quick calibration and data fusion of multi-depth camera
Fan et al. High-precision external parameter calibration method for camera and LiDAR based on a calibration device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant