CN111981982B - Multi-directional cooperative target optical measurement method based on weighted SFM algorithm - Google Patents
- Publication number: CN111981982B (application CN202010850055.7A)
- Authority: CN (China)
- Legal status: Active
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B11/00—Measuring arrangements characterised by the use of optical techniques
Abstract
The invention relates to a multi-directional cooperative target optical measurement method based on a weighted SFM algorithm. To address the problems that high-precision cooperative targets in complex measurement environments are difficult to machine, difficult to store, and easily occluded, a multidirectional cooperative target is constructed from planar feature targets, and high-precision optical measurement of the target is achieved by combining it with a weighted SFM algorithm. The designed multidirectional cooperative target is placed arbitrarily in the camera field of view and photographed from different viewpoints. From the acquired image information, a three-dimensional reconstruction algorithm based on the pose of the planar feature target's feature points is developed, a global coordinate system is established, a topological correspondence matching and optimization method between the planar feature target coordinate systems and the three-dimensional coordinate system is proposed, and a weighted SFM algorithm is established. By stitching the feature-point three-dimensional data obtained from multiple angles and viewpoints with high precision, the global unification of the multi-viewpoint locally measured three-dimensional data of the multidirectional cooperative target is completed.
Description
Technical Field
The invention belongs to the technical field of measurement. Aiming at the problems that high-precision cooperative targets in complex measurement environments are difficult to machine, difficult to store, and easily occluded, it constructs a multidirectional cooperative target from planar feature targets and, combined with a weighted SFM (Structure From Motion) algorithm, realizes high-precision optical measurement of the multidirectional cooperative target.
Background
With the rapid development of machine vision, three-dimensional digital point-cloud reconstruction based on non-contact visual measurement plays an increasingly important role in fields such as medicine, machinery, multimedia, criminal investigation, industrial flaw detection, and aerospace. For overall 3D reconstruction of a target object in Euclidean space, a 3D scanner is usually used to scan the object from different angles to obtain multiple sets of local data, which are then stitched together by a global unification method to recover the overall shape of the object.
At present, there are three main methods for stitching the local scan data produced by a 3D scanner: stitching based on natural feature points, stitching based on artificial marker points, and tracking-based stitching using a cooperative target. Stitching based on natural feature points matches features with algorithms such as ORB, SURF, or SIFT and then completes the stitching according to the ICP (Iterative Closest Point) principle; this method has low precision and is unstable. Mainstream products currently adopt stitching based on artificial marker points; however, this method requires sticking marker points onto the surface of the target object, which contaminates complex surfaces, is unsuitable for internal structures and for objects prone to expansion or corrosion, and accumulates error as the scanning range grows. In the tracking-based stitching method using a cooperative target, the 3D scanner reconstructs several cross-sectional profiles of a local area of the scanned object from a single image based on the triangulation principle. The cooperative target is rigidly connected to the scanning head and dynamically marks the spatial pose of the scanning head; the global camera has a large field of view and dynamically tracks the scanning head over a large-scale space, thus realizing global stitching of the local scan results. When scanning large parts, global stitching of local scan results can be achieved without marker points, giving the method high flexibility in application.
With the development of higher-resolution global cameras and higher-precision calibration technologies, tracking laser 3D scanners based on cooperative targets and global cameras have become an important development trend. Before tracking and scanning, the spatial coordinates of every feature point on the cooperative target must be accurately known, so that the spatial pose of the 3D scanner can then be solved. A cooperative target produced by traditional machining is hard to store, easily occluded, and of low machining precision, and thus can hardly meet the precision requirements of global stitching; in non-mechanical measurement, optical methods have proven more flexible than magnetic, acoustic, or inertial sensing. Therefore, designing a high-precision cooperative target and measuring its feature points in three dimensions by optical means is an important problem to be solved urgently.
Disclosure of Invention
The technical problem to be solved by the invention is as follows. Aiming at the problems that traditional cooperative targets are easily affected by noise such as external illumination and vibration, have a fixed appearance and a complex structure, and are difficult to machine, a multidirectional cooperative target with multidirectional visibility is provided. It is formed by rigidly connecting specially designed planar feature targets to a three-dimensional framework. Combined with non-contact measurement technology, a corresponding algorithm is proposed so that the coordinates of the feature points of the multidirectional cooperative target in a world coordinate system can be obtained without complex machining; owing to its multidirectional visibility, the occlusion of the target by external interference factors in complex measurement environments is also overcome.
For the designed multidirectional cooperative target, a novel weighted SFM algorithm is proposed. The camera photographs the multidirectional cooperative target from multiple viewpoints and angles to obtain a series of feature images. By correctly identifying and processing the collected feature images, the coordinates of the feature points in each image under the corresponding camera coordinate system are obtained directly, eliminating the fundamental-matrix estimation and triangulation steps. A weight function is set according to the angle between the camera's optical axis and the plane normal vector of each planar feature target in the multidirectional cooperative target, the main algorithm is optimized nonlinearly, and high-precision global unification of all feature points of the multidirectional cooperative target is realized.
The technical scheme of the invention is as follows: a multi-directional cooperative target optical measurement method based on a weighted SFM algorithm comprises the following steps:
step 1, designing a multidirectional cooperative target;
and step 2, using a weighted SFM algorithm to globally unify the feature points on the multidirectional cooperative target, so as to obtain the feature point coordinates and perform optical measurement.
Further, in the step 1, designing a multidirectional cooperative target specifically includes the following steps:
1.1 design of planar feature targets
The designed planar feature target consists of a Marker from AprilTags and four black squares; identical black squares are added at the four corners of the Marker to form a checkerboard pattern;
1.2 polyhedral framework design
A polyhedral framework is designed. Planar feature targets with different internal coding structures are placed arbitrarily on each face of the framework and rigidly connected to it, constructing the multidirectional cooperative target. Through the internal coding of each planar feature target, its unique coding information can be identified automatically and the plane on which it lies can be determined.
Further, in step 2, the weighted SFM algorithm is used to globally unify the feature points on the multidirectional cooperative target and obtain high-precision feature point coordinates. The specific steps are as follows:
2.1, identifying a plane feature target and extracting feature points:
The pixel coordinates of the four corner points of the planar feature target are obtained by a quad detection method, their sub-pixel coordinates are then solved by a sub-pixel extraction method, and their three-dimensional coordinates in the planar feature target coordinate system (with the target center as origin) are computed from the known side length of the target. Using a camera extrinsic-parameter estimation algorithm, the rotation vector R and translation vector T transforming each planar feature target coordinate system into the corresponding camera coordinate system are solved. The number of the current planar feature target is obtained by an internal payload decoding method, which proceeds as follows: first, the coordinates of each bit field on the Marker are converted into the image coordinate system through a homography matrix; then, a light-intensity function model is built to threshold the pixels, so that the correct value of each bit can be read from the payload fields even under changing ambient illumination, completing the decoding of the Marker's internal payload.
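As a hedged illustration of the extrinsic-parameter step above, the rotation vector R produced by a PnP-style solver can be converted into a rotation matrix with the Rodrigues formula. The helper below is a minimal pure-Python sketch (the function name and the test vector are illustrative, not from the patent):

```python
import math

def rodrigues(r):
    """Convert a rotation vector r (axis * angle) to a 3x3 rotation matrix
    via the Rodrigues formula: R = I + sin(t)*K + (1 - cos(t))*K^2."""
    t = math.sqrt(sum(c * c for c in r))
    if t < 1e-12:
        return [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
    kx, ky, kz = (c / t for c in r)
    K = [[0.0, -kz, ky], [kz, 0.0, -kx], [-ky, kx, 0.0]]
    K2 = [[sum(K[i][m] * K[m][j] for m in range(3)) for j in range(3)]
          for i in range(3)]
    s, c = math.sin(t), math.cos(t)
    return [[(1.0 if i == j else 0.0) + s * K[i][j] + (1 - c) * K2[i][j]
             for j in range(3)] for i in range(3)]

# A rotation of pi/2 about the z axis maps the x axis onto the y axis.
R = rodrigues([0.0, 0.0, math.pi / 2])
x_axis = [1.0, 0.0, 0.0]
mapped = [sum(R[i][j] * x_axis[j] for j in range(3)) for i in range(3)]
```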
2.2, reconstructing a multidirectional cooperative target initial model by utilizing an SFM algorithm:
The camera photographs the multidirectional cooperative target from multiple viewpoints and angles to obtain a series of feature images. By identifying and processing the collected feature images, the three-dimensional coordinates of the feature points under each camera viewpoint, together with the rotation matrix R_j and translation vector t_j from each feature image to the corresponding camera coordinate system, are obtained directly, where j denotes the j-th camera coordinate system. A global coordinate system O_G-X_GY_GZ_G is established and all feature points are converted from the camera coordinate systems into O_G-X_GY_GZ_G. During the conversion, the reconstructed feature points are added directly into the global coordinate system and used as global points for reconstructing subsequent feature points, and so on, until the reconstruction of the initial model of the multidirectional cooperative target is completed;
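A minimal sketch of this chaining, under the assumption that each camera pose is expressed as a rotation matrix and translation mapping camera coordinates into the global frame (all names and numbers below are illustrative, not from the patent):

```python
def transform(R, t, p):
    """Apply a rigid transform p' = R p + t to a 3D point."""
    return [sum(R[i][j] * p[j] for j in range(3)) + t[i] for i in range(3)]

# Camera 1 is chosen as the global frame: identity rotation, zero translation.
R1 = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
t1 = [0.0, 0.0, 0.0]
# Hypothetical pose of camera 2 in the global frame: shifted 100 mm along X.
R2 = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
t2 = [100.0, 0.0, 0.0]

# A feature point reconstructed at (10, 20, 500) in camera 2's frame
# is added to the global model by one rigid transform.
p_cam2 = [10.0, 20.0, 500.0]
p_global = transform(R2, t2, p_cam2)   # [110.0, 20.0, 500.0]
```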
2.3, carrying out weighted optimization on the SFM algorithm:
According to the principle of the camera perspective projection model, the rotation matrix R_j^G and translation vector t_j^G from the global coordinate system O_G-X_GY_GZ_G to each corresponding camera coordinate system are obtained, and the feature points in the initial model of the multidirectional cooperative target are reprojected onto the corresponding feature images. A weight coefficient function f(θ) is solved according to the angle θ between the camera's optical axis and the plane normal vector of the planar feature target in the multidirectional cooperative target, and a target optimization function is established, as shown in formula (2):

min Σ_{j=1}^{L(j)} Σ_{i_j ∈ I_j} Σ_n f(θ) · ‖ p_j(i_j, n) − p̂_j(i_j, n) ‖²,  p̂_j(i_j, n) = K · [Rod(r_j^G) | t_j^G] · P(i_j, n)   (2)

where L(j) denotes the total number of feature images, L(I_j) denotes the total number of feature codes identified in the j-th feature image, p_j(i_j, n) is the sub-pixel coordinate of the feature point extracted in the j-th image, p̂_j(i_j, n) is the coordinate of the corresponding reprojected point, K is the camera intrinsic matrix, and Rod(·) denotes the Rodrigues conversion of the rotation vector r_j^G into a rotation matrix. The SFM algorithm is optimized nonlinearly by minimizing this objective function, realizing high-precision global unification of all feature points of the multidirectional cooperative target.
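A toy sketch of such a weighted reprojection objective, assuming purely for illustration the weight f(θ) = cos θ (the patent derives its own weight function; the point lists here are made up):

```python
import math

def weighted_reprojection_error(observed, reprojected, thetas):
    """Sum of f(theta) * squared pixel error over corresponding points,
    with the illustrative weight f(theta) = cos(theta)."""
    total = 0.0
    for (u, v), (ur, vr), theta in zip(observed, reprojected, thetas):
        w = math.cos(theta)            # down-weight oblique views
        total += w * ((u - ur) ** 2 + (v - vr) ** 2)
    return total

obs = [(100.0, 100.0), (200.0, 150.0)]
rep = [(101.0, 100.0), (200.0, 152.0)]
# First point seen head-on (theta = 0), second at 60 degrees obliquity.
err = weighted_reprojection_error(obs, rep, [0.0, math.pi / 3])
# err is approximately 1.0 * 1 + 0.5 * 4 = 3.0
```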
Further, in step 2.1, the camera moves around the multidirectional cooperative target; several shooting positions are selected and target images are acquired from different viewpoints and angles, ensuring that every two adjacent images contain at least one identical planar feature target.
Further, in step 1, the designed planar feature target consists of a Marker from AprilTags and four black squares. The Marker is the code of the planar feature target; identical black squares are added at its four corners to construct a checkerboard pattern, enabling sub-pixel extraction of the four corners of the planar feature target.
The invention has the advantages that:
1. The invention designs a multidirectional cooperative target with multidirectional visibility, formed by rigidly connecting specially designed planar feature targets to a three-dimensional framework. It solves the problems that traditional cooperative targets are easily affected by noise such as external illumination and vibration, have a fixed appearance and complex structure, and are difficult to machine, and it reduces the measurement cost.
2. Combined with non-contact measurement technology, the invention provides a new weighted SFM algorithm, so that the coordinates of the feature points of the multidirectional cooperative target in a world coordinate system can be obtained without complex machining, completing high-precision optical measurement of the multidirectional cooperative target.
3. The invention applies the designed multidirectional cooperative target and the corresponding algorithm to 3D scanning scenes, realizing three-dimensional reconstruction and positioning of target objects with complex internal structures, with advantages such as occlusion resistance, simple processing, and low cost.
Drawings
FIG. 1 is a planar feature target topography;
FIG. 2 is a representation of a polyhedral skeleton and a multidirectional cooperative target topography;
FIG. 3 shows the camera shooting the multidirectional cooperative target from different angles;
FIG. 4 is a transformation of feature points in a single image from a planar feature target coordinate system to a corresponding camera coordinate system;
FIG. 5 is a diagram showing the sequential transformation of all feature points from the camera coordinate system to the global coordinate system;
FIG. 6 is a back projection of feature points from a global coordinate system to a planar feature target coordinate system;
FIG. 7 is an overall flow chart of the weighted SFM algorithm of the present invention;
FIG. 8 is a diagram of several functions that satisfy the trend of the angle and error relationships;
FIG. 9 is a three-dimensional multi-directional target model;
FIG. 10 is a three-dimensional model of the reconstructed three-dimensional multi-directional target.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely with reference to the accompanying drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, rather than all embodiments, and all other embodiments obtained by a person skilled in the art based on the embodiments of the present invention belong to the protection scope of the present invention without creative efforts.
The invention designs a multidirectional cooperative target with multidirectional visibility based on computer vision and image processing technology, and provides a new weighted SFM algorithm to realize high-precision global unification of all feature points of the multidirectional cooperative target according to a camera perspective projection model principle.
Planar targets have advantages such as easy fabrication, low cost, and high precision, so a high-precision special planar feature target is designed as a component of the multidirectional cooperative target. In machine vision, common planar targets are dot targets and checkerboard targets. In a complex, large-field-of-view environment, noise such as surrounding dots and corner points interferes with feature-point extraction on these targets; moreover, neither target type supports automatic identification and localization, so their specific spatial positions cannot be determined. Aiming at the poor anti-interference capability and lack of automatic identification and localization of traditional planar targets, a planar feature target carrying coded feature information and strong anti-interference capability is designed.
In recent years, visual fiducial systems have been widely used for object recognition, tracking, and positioning. In the field of machine vision, the visual fiducial libraries commonly used for identification and localization are AprilTags, ARTags, BinARyID, and ArUco. A Marker in these libraries is a black-and-white square containing unique binary coding information. By correctly extracting the information of its four corner points with a corresponding algorithm and decoding its internal coding information, the Marker's self-identification and localization functions are realized. Among these, the Marker in AprilTags contains the richest coding information for verification, has the strongest anti-interference and anti-occlusion capability, adapts flexibly to changes in light intensity and angle, tolerates image distortion, and runs in real time, making it highly suitable for high-precision identification and localization. Therefore, several Markers with different codes are selected from the AprilTags library and improved, and the improved Markers are used as planar feature targets.
According to one embodiment of the invention, the designed planar feature target consists of a Marker from AprilTags and four black squares. The Marker is the code of the planar feature target; identical black squares are added at its four corners to construct a checkerboard pattern, enabling sub-pixel extraction of the four corners of the planar feature target and further improving the precision of the weighted SFM algorithm proposed later.
To ensure multidirectional visibility of the multidirectional cooperative target and solve the occlusion problem, the invention designs a polyhedral framework with multidirectional visibility. The exact pose relationships among the faces of the framework need not be determined by machining; the faces only need to be connected arbitrarily. Planar feature targets with different internal coding structures are placed arbitrarily on each face and rigidly connected to the framework, constructing the multidirectional cooperative target. The unique coding information of each planar feature target can be identified automatically by an algorithm, determining the plane on which it lies and providing data support for the subsequent high-precision optical measurement of the multidirectional cooperative target.
According to an embodiment of the invention, the polyhedral framework and the multidirectional cooperative target are shown in FIG. 2 of the accompanying drawings. The more faces the polyhedral framework has, the stronger its occlusion resistance (e.g. a C60 structure); a regular dodecahedron is used for the experiments, with each of its faces measured in turn. To realize high-precision optical measurement of the multidirectional cooperative target, the feature points on the target must be identified and processed effectively. The camera moves around the target; several shooting positions are selected and target images are collected from multiple viewpoints and angles, ensuring that every two adjacent images contain at least one identical planar feature target, as shown in FIG. 3. A mathematical model is established and an SFM algorithm is proposed to process the obtained images and determine the coordinate transformation between the planar feature target on each face of the multidirectional cooperative target and the camera, as shown in FIG. 4. A global coordinate system is established, and the transformations from the camera coordinate systems to the global coordinate system are realized, giving the transformation between each planar feature target coordinate system and the global coordinate system, so that all feature points on the multidirectional cooperative target are unified into the global coordinate system and the global unification of the multi-viewpoint locally measured three-dimensional data is completed, as shown in FIG. 5. Next, the feature points in the global coordinate system are back-projected into the corresponding planar feature target coordinate systems, as shown in FIG. 6.
Finally, an optimization objective function is constructed and optimized by a weighted BA (Bundle Adjustment) method, realizing high-precision optical measurement of the multidirectional cooperative target with high detection efficiency and strong universality. The specific implementation process is shown in FIG. 7 of the accompanying drawings.
The invention provides a multi-directional cooperative target optical measurement method based on a weighted SFM algorithm, which comprises the following concrete implementation steps:
1. planar feature target identification and feature point extraction
The pixel coordinates of the four corner points of a planar feature target are obtained by a quad detection method, their sub-pixel coordinates are solved according to the sub-pixel extraction method proposed by Chu et al., and their coordinates in the planar feature target coordinate system (with the target center as origin) are computed from the known side length of the target. Using a camera extrinsic-parameter estimation algorithm, the rotation vector R and translation vector T transforming each planar feature target coordinate system into the corresponding camera coordinate system are solved, and the number of the current planar feature target is obtained by the internal payload decoding method.
2. SFM algorithm design
All the feature images acquired by the camera from different angles at a single viewpoint are processed by identifying the planar feature targets and extracting the feature points. A camera coordinate system O_cj-X_cjY_cjZ_cj is established for each image, and a planar feature target coordinate system O_ti-X_tiY_tiZ_ti is established on each face of the multidirectional cooperative target, where j denotes the camera coordinate system (or image) serial number and i denotes the planar feature target code. Each planar feature target code in an image, together with the coordinates of its four corresponding feature points in O_ti-X_tiY_tiZ_ti, is correctly identified. A set T_i is established to represent all the planar feature target information on the multidirectional cooperative target, as shown in (1):

T_i = { P_i^n | i ∈ N+, n ∈ {0, 1, 2, 3} }   (1)

where P_i^n (n denotes the serial number of a feature point within the current planar feature target, a natural number ranging from 0 to 3) denotes the 3D coordinates of the four feature points corresponding to i in O_ti-X_tiY_tiZ_ti, and N+ denotes the positive natural numbers.

A set I_j is established to represent all the planar feature target codes identified in the j-th image, as shown in (2):

I_j = { i_j | i_j ∈ N+, j ∈ N+ }   (2)

where i_j denotes a single planar feature target code in the j-th image. Through equation (3), the coordinates P_i^n corresponding to every i_j in I_j are converted in turn from O_ti-X_tiY_tiZ_ti to O_cj-X_cjY_cjZ_cj:

P_cj^{i,n} = Rod(r_i^j) · P_i^n + t_i^j   (3)

where r_i^j and t_i^j denote the rotation vector and translation vector from O_ti-X_tiY_tiZ_ti to O_cj-X_cjY_cjZ_cj, and Rod(·) denotes the Rodrigues conversion of a rotation vector into a rotation matrix. A set A_j is established to represent the coordinates in O_cj-X_cjY_cjZ_cj of the feature points corresponding to the elements of I_j; each element of A_j is a dictionary entry with i_j as key and the four point coordinates P_cj^{i,n} as value, as shown in (4):

A_j = { i_j : { P_cj^{i,n} | n ∈ {0, 1, 2, 3} } | i_j ∈ I_j }   (4)

Through this process, the feature points corresponding to all the feature codes identified in every feature image are converted into the corresponding camera coordinate systems, yielding, for each image, the identified feature codes, their coordinates in O_cj-X_cjY_cjZ_cj, and the transformation relationships between the coordinate systems.
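The per-image bookkeeping above maps naturally onto plain dictionaries. The sketch below is illustrative only, with hypothetical target codes and made-up coordinates:

```python
# Per-image bookkeeping: I_j is the set of target codes seen in image j,
# and A_j maps each code to its four corner points expressed in that
# image's camera coordinate system (coordinates made up, units mm).
A_2 = {
    5: [(0.0, 0.0, 500.0), (30.0, 0.0, 500.0),
        (30.0, 30.0, 500.0), (0.0, 30.0, 500.0)],
    7: [(60.0, 0.0, 510.0), (90.0, 0.0, 512.0),
        (90.0, 30.0, 512.0), (60.0, 30.0, 510.0)],
}
I_2 = set(A_2)   # the codes identified in image 2
```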
The first camera coordinate system is selected arbitrarily as the global coordinate system O_G-X_GY_GZ_G, and the transformations between the other camera coordinate systems and O_G-X_GY_GZ_G are solved, from which the transformation between each planar feature target coordinate system and O_G-X_GY_GZ_G is obtained. Finally, the 3D coordinates of all feature points on the multidirectional cooperative target in O_G-X_GY_GZ_G are obtained, realizing the global unification of the multidirectional cooperative target. In the algorithm of the invention, the first camera coordinate system O_c1-X_c1Y_c1Z_c1 is selected as the global coordinate system O_G-X_GY_GZ_G, and sets I_G, I_2, and I_2G are established, as shown in (5):

I_2G = I_2 ∩ I_G   (5)

where I_G denotes all the feature codes already unified into O_G-X_GY_GZ_G, I_2 denotes the planar feature target codes identified in the second camera coordinate system, and I_2G denotes the identical codes identified in both O_c2-X_c2Y_c2Z_c2 and O_G-X_GY_GZ_G. Sets A_2G and A'_2G are then established, as shown in (6):

A_2G = { i_2G : { P_G^{i,n} } | i_2G ∈ I_2G },  A'_2G = { i_2G : { P_c2^{i,n} } | i_2G ∈ I_2G }   (6)

where the elements of A_2G and A'_2G are dictionary entries with i_2G as key; the values of A_2G are the 3D coordinates P_G^{i,n} of the corresponding feature points in O_G-X_GY_GZ_G, the values of A'_2G are the 3D coordinates P_c2^{i,n} of the corresponding feature points in O_c2-X_c2Y_c2Z_c2, and n denotes the serial number of the feature point within the current target, a natural number ranging from 0 to 3.
Through equations (7) and (8), the centroids P̄_G and P̄_c2 of the corresponding point sets in A_2G and A'_2G are solved:

P̄_G = (1 / (4·L(I_2G))) Σ_{i_2G ∈ I_2G} Σ_n P_G^{i,n}   (7)

P̄_c2 = (1 / (4·L(I_2G))) Σ_{i_2G ∈ I_2G} Σ_n P_c2^{i,n}   (8)

where L(I_2G) denotes the total number of elements in the set I_2G.
Through equation (9), an optimization objective function is constructed to obtain the maximum likelihood estimate R̂* of R*, the rotation from O_c2-X_c2Y_c2Z_c2 to O_G-X_GY_GZ_G:

R̂* = argmin_R Σ_{i_2G ∈ I_2G} Σ_n ‖ (P_G^{i,n} − P̄_G) − R · (P_c2^{i,n} − P̄_c2) ‖²   (9)

and the corresponding translation estimate is obtained through (10):

t̂* = P̄_G − R̂* · P̄_c2   (10)
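One standard closed-form solution for this centered least-squares rotation fit is the Kabsch (SVD) method; the patent does not state which solver it uses, so the following is an assumption. Writing P̄ for the centroids of the two corresponding point sets:

```latex
H = \sum_{i,n} \left(P^{i,n}_{c2} - \bar{P}_{c2}\right)\left(P^{i,n}_{G} - \bar{P}_{G}\right)^{\mathsf{T}},
\qquad H = U \Sigma V^{\mathsf{T}} \ \text{(SVD)},
\qquad \hat{R}^{*} = V \,\mathrm{diag}\!\left(1,\, 1,\, \det(V U^{\mathsf{T}})\right) U^{\mathsf{T}},
\qquad \hat{t}^{*} = \bar{P}_{G} - \hat{R}^{*}\, \bar{P}_{c2}.
```

The determinant correction guards against reflections when the point sets are noisy or degenerate.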
Through (11), sets I_2-G, A″_2-G, and A_2 are established:

I_2-G = I_2 − I_G   (11)

where I_2-G denotes the difference set of I_2 and I_G; each element of A″_2-G is a dictionary entry with i_2-G as key and the 3D coordinates P_c2^{i,n} of the corresponding feature points in O_c2-X_c2Y_c2Z_c2 as value; and A_2 is given by (4). The points of A_2 need not all be converted into O_G-X_GY_GZ_G; only the points of A″_2-G corresponding to i_2-G need to be converted, which reduces the number of reconstructions and improves the global unification precision. Through equation (12), these points are converted into their 3D coordinates in O_G-X_GY_GZ_G:

P_G^{i,n} = R̂* · P_c2^{i,n} + t̂*   (12)

where t̂* is the translation estimate from (10). A set Â_2-G is established, as shown in (13), whose elements are dictionary entries with i_2-G as key and the converted coordinates P_G^{i,n} as value. According to (14), the data of I_G and I_2-G are fused into a set I′_G, increasing the number of feature codes in O_G-X_GY_GZ_G; at the same time, the data of A_1 and Â_2-G are fused into a set A′_G, giving the feature-point data of all feature codes present in O_G-X_GY_GZ_G:

I′_G = I_G ∪ I_2-G,  A′_G = A_1 ∪ Â_2-G   (14)

At this point, the global unification of O_c2-X_c2Y_c2Z_c2 into O_G-X_GY_GZ_G is completed.
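The incremental fusion step can be sketched with plain set and dictionary operations (the codes and points below are hypothetical, not the patent's data):

```python
# Global model before fusing image 2 (hypothetical codes and 3D points).
I_G = {1, 3}
A_G = {1: [(0.0, 0.0, 0.0)], 3: [(50.0, 0.0, 0.0)]}

# Points of image 2 whose codes are new, already transformed into the
# global frame by the estimated rigid motion (R*, t*).
A_hat_2_minus_G = {4: [(120.0, 5.0, 2.0)]}

I_G_prime = I_G | set(A_hat_2_minus_G)      # fused code set
A_G_prime = {**A_G, **A_hat_2_minus_G}      # fused feature-point data
```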
Similarly, following the unification algorithm from O_c2-X_c2Y_c2Z_c2 to O_G-X_GY_GZ_G, the feature codes and corresponding feature points of the remaining camera coordinate systems are unified in turn into O_G-X_GY_GZ_G. A set B_G is established, as shown in (15):

B_G = { i : { P_G^{i,n} | n ∈ {0, 1, 2, 3} } | i ∈ N+ }   (15)

where each element of B_G is a dictionary entry with i as key and the 3D coordinates P_G^{i,n} of the feature points corresponding to i in O_G-X_GY_GZ_G as value. At this point, the global unification algorithm from all camera coordinate systems to O_G-X_GY_GZ_G is completed, and the three-dimensional reconstruction of the multidirectional cooperative target can be realized experimentally.
3. Weighted BA optimization
An accuracy evaluation standard is established for the SFM main algorithm designed in the previous step, and a new weighted BA optimization method is proposed to optimize the main algorithm with high precision, finally realizing high-precision optical measurement of the multidirectional cooperative target.
First, equation (16) is established from (4), (9), (10), and (15), giving the transformation relationship between the feature points P_cj^{i,n} in each camera coordinate system and the feature points P_G^{i,n} in O_G-X_GY_GZ_G:

P_cj^{i,n} = R_j^G · P_G^{i,n} + t_j^G   (16)

From equation (16), the rotation matrix R_j^G and translation vector t_j^G from O_G-X_GY_GZ_G to each corresponding camera coordinate system are solved, as shown in (17); the rotation matrix and translation vector from O_G-X_GY_GZ_G to itself are the identity matrix I and 0, respectively.
aiming at the j characteristic image collected, an image coordinate system is establishedMixing O withG-XGYGZGAs a world coordinate system OW-XWYWZWBased on the perspective projection model of the camera, OW-XWYWZWNeutralization ofSame ijCorresponding toReproject to correspondingCalculating the 2D coordinates of the reprojected pointsAs shown in the equation (18),
where s is a scale factor, K is the camera intrinsic matrix, f_x and f_y are the normalized focal lengths on the u and v axes respectively, and (u_0, v_0) denotes the principal point coordinates. The set C_j is established, as shown in (19); the elements of C_j take dictionary form, with i_j as the key and the reprojected 2D coordinates as the value,
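Equation (18) is the standard pinhole reprojection s·[u, v, 1]^T = K(R·P + T); a minimal sketch, ignoring lens distortion (all names are illustrative):

```python
import numpy as np

def reproject(K, R, T, P):
    """Reproject a 3D point P from the world frame O_W-X_WY_WZ_W into
    image coordinates, following the pinhole model of eq. (18).
    Lens distortion is ignored in this sketch."""
    p = K @ (R @ P + T)     # homogeneous image point s*[u, v, 1]
    return p[:2] / p[2]     # divide out the scale factor s
```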
Through this process, all feature images are handled and all feature points under O_G-X_GY_GZ_G are reprojected into their corresponding image coordinate systems. Then, the quad detection method is used to extract the pixel-level 2D coordinates of the feature points corresponding to i_j in the j-th feature image, and their sub-pixel-level 2D coordinates are solved according to the sub-pixel corner extraction algorithm proposed by Chu et al.
The root mean square error E_RMS between the extracted points and the reprojected points is constructed to judge the accuracy of the three-dimensional model of the multidirectional cooperative target after the feature-point three-dimensional data are globally unified, as shown in equation (20),
where N denotes the total number of feature points over all feature images, the two coordinate terms denote the extracted and reprojected points respectively, and L(I_j) denotes the total number of feature codes contained in I_j. According to the elements contained in B_G in (15), equation (21) is established to solve the root mean square error D_RMS between the side length of two adjacent feature points of a planar feature target in the reconstructed three-dimensional model and its true value, which likewise judges the accuracy of the globally unified model,
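Equation (20) can be read as the usual 2D reprojection RMSE; a minimal Python sketch (the array layout is an assumption):

```python
import numpy as np

def reprojection_rmse(extracted, reprojected):
    """E_RMS of eq. (20): the root of the mean squared 2D distance
    between each sub-pixel extracted point and its reprojection.
    extracted / reprojected: N x 2 arrays of (u, v) coordinates."""
    d = np.asarray(extracted, dtype=float) - np.asarray(reprojected, dtype=float)
    return np.sqrt(np.mean(np.sum(d ** 2, axis=1)))
```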
where L(i) denotes the total number of planar feature targets on the multidirectional cooperative target and D is the true distance between two adjacent feature points on a planar feature target.

After the accuracy evaluation criteria are determined, weighted BA optimization is applied to the SFM algorithm. When the camera performs single-viewpoint multi-angle shooting of the multidirectional cooperative target, let θ be the included angle between the camera optical axis and the normal vector of the plane containing the planar feature target. Within a certain measurement range, the smaller θ is, the clearer the acquired feature image and the higher the accuracy of the feature points extracted from it; the larger θ is, the blurrier the acquired feature image and the lower the extraction accuracy. When θ exceeds a certain threshold θ', the coordinates of the feature points extracted from the feature images deviate too far from their actual positions, feature points are extracted erroneously, and the overall accuracy of the global unification of the multidirectional cooperative target three-dimensional data degrades sharply. Therefore, after deriving an expression for θ, the threshold θ' is first determined, the erroneous feature points caused by excessive θ are rejected, and only feature points with higher extraction accuracy are retained; second, an optimization objective function is established in which feature points with excessive θ receive a reduced weight and feature points with smaller θ receive an increased weight, thereby completing the high-precision optimization of the globally unified three-dimensional data.
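The side-length criterion D_RMS of eq. (21) admits a similarly small sketch. It assumes the four corner points of each planar target are ordered around the square so that consecutive corners are adjacent (an assumption, since the ordering convention is not stated):

```python
import numpy as np

def side_length_rmse(corner_sets, D):
    """D_RMS of eq. (21): RMS deviation of the reconstructed side length
    between adjacent corner points of each planar target from the true
    side length D (30 mm in the experiment below).
    corner_sets: list of 4x3 arrays, corners ordered around the square."""
    errs = []
    for pts in corner_sets:
        pts = np.asarray(pts, dtype=float)
        for k in range(4):
            side = np.linalg.norm(pts[k] - pts[(k + 1) % 4])
            errs.append(side - D)
    return np.sqrt(np.mean(np.square(errs)))
```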
The expression for θ is solved as follows:
According to equation (22), the unit vector of the Z-axis direction of the camera coordinate system and the unit vector of the Z-axis direction of the planar-feature-target coordinate system are obtained,
The solution of θ is completed according to equation (24),
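Under the common convention that a target's plane normal, expressed in the camera frame, is the third column of the target-to-camera rotation matrix (an assumption; the patent derives θ via eq. (22)-(24)), θ can be computed as:

```python
import numpy as np

def view_angle(R_target_to_cam):
    """Angle θ (degrees) between the camera optical axis e_z = (0, 0, 1)
    and the planar target's normal, both expressed in the camera frame.
    The normal is taken as the third column of the rotation matrix."""
    n = R_target_to_cam[:, 2]                       # target Z axis in camera frame
    cos_t = abs(n @ np.array([0.0, 0.0, 1.0]))      # |cos θ|, sign-insensitive
    return np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0)))
```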
According to the perspective projection model of the camera, E_RMS is maximal when θ = 90° and approximately 0 when θ = 0°. Therefore, summarizing the trends of several candidate functions f(θ) passing through (0, 1) and (1, 0) (with θ normalized), as shown in FIG. 8, a weighting function f(θ) is established and the optimization objective function is obtained, as shown in equation (25),
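The exact weighting functions compared in FIG. 8 and Table 1 are not reproduced here, so the two candidates below are hypothetical shapes that merely satisfy f(0) = 1 and f(90°) = 0; the patent ultimately selects a Tanh-based function:

```python
import numpy as np

def f_cos(theta_deg):
    """Simplest candidate: f(θ) = cos θ, with f(0) = 1 and f(90°) = 0."""
    return np.cos(np.radians(theta_deg))

def f_tanh(theta_deg, s=3.0):
    """Tanh-shaped candidate normalized so f(0) = 1 and f(90°) = 0.
    The steepness s is a free parameter (hypothetical value)."""
    t = theta_deg / 90.0
    return 1.0 - np.tanh(s * t) / np.tanh(s)
```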
where L(j) denotes the total number of feature images and L(i_j) denotes the total number of feature codes identified in the j-th feature image. A suitable f(θ) is selected through comparison experiments, and the LM algorithm is adopted to perform BA optimization on the objective function, thereby improving the three-dimensional reconstruction accuracy of the multidirectional cooperative target.
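The full objective (25) is optimized over poses and points with LM; as a reduced illustration of how the weights f(θ) enter the residuals, the sketch below refines a single 3D point with the camera poses held fixed, using plain Gauss-Newton with a numerical Jacobian (all names are hypothetical; a full BA would also optimize the poses):

```python
import numpy as np

def weighted_refine_point(p0, observations, K, weights, iters=20):
    """Weighted refinement of one 3D point against fixed camera poses.

    observations: list of (R, T, uv) per view; weights: per-view f(θ).
    Residuals are sqrt(f(θ)) * (reprojection - measurement), so squaring
    them reproduces the weighted sum of eq. (25)."""
    def residuals(p):
        res = []
        for (R, T, uv), w in zip(observations, weights):
            q = K @ (R @ p + T)
            res.extend(np.sqrt(w) * (q[:2] / q[2] - uv))
        return np.asarray(res)

    p = np.asarray(p0, dtype=float)
    for _ in range(iters):                      # Gauss-Newton iterations
        r = residuals(p)
        J = np.empty((r.size, 3))
        for k in range(3):                      # forward-difference Jacobian
            dp = np.zeros(3); dp[k] = 1e-6
            J[:, k] = (residuals(p + dp) - r) / 1e-6
        p = p - np.linalg.lstsq(J, r, rcond=None)[0]
    return p
```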
According to an embodiment of the present invention, the following experiment was performed:
The camera used to collect pictures is a Daheng Imaging MER-301-125U3M with an M1214-MP2 lens. The image resolution is 2048 × 1536 pixels, the principal point coordinates are (u_0, v_0) = (1018.946, 776.791) pixels, and the effective focal lengths (in pixels) in the x and y directions are f_x = 3523.7 pixels and f_y = 3523.281 pixels. The radial distortion coefficients are (k_1, k_2, k_3) = (-0.05255, -0.30788, 0.00000), the tangential distortion coefficients are (p_1, p_2) = (0.00027, 0.00064), and the working distance is about 1 m. The camera is fixed at an arbitrary position on the optical platform, and the multidirectional cooperative target is placed on the platform about 1 m from the camera. The position of the multidirectional cooperative target is rotated so that the camera continuously shoots it at different angles, ensuring that every two adjacent collected pictures contain at least one identical planar feature target; 50 pictures are collected in total to verify the feasibility of the algorithm.
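For reference, the reported calibration can be packed into the intrinsic matrix K used by the projection model above; the OpenCV-style ordering (k1, k2, p1, p2, k3) of the distortion vector is an assumption, not stated in the patent:

```python
import numpy as np

# Intrinsic matrix K assembled from the calibration values reported above
fx, fy = 3523.7, 3523.281           # normalized focal lengths (pixels)
u0, v0 = 1018.946, 776.791          # principal point (pixels)
K = np.array([[fx, 0.0, u0],
              [0.0, fy, v0],
              [0.0, 0.0, 1.0]])
# Distortion vector in the common OpenCV ordering (k1, k2, p1, p2, k3)
dist = np.array([-0.05255, -0.30788, 0.00027, 0.00064, 0.00000])
```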
This section mainly verifies the feasibility of the proposed weighted-SFM-based optical measurement body algorithm for multidirectional cooperative targets. The regular-dodecahedron skeleton of the multidirectional cooperative target is made of gypsum, the processed planar feature targets are made of film, the machining precision of each planar feature target is 0.005 mm, and D = 30 mm. The planar feature targets are rigidly connected to the skeleton to construct the multidirectional cooperative target; the physical target is shown in fig. 9.
The camera is fixed at an arbitrary position on the optical platform, the multidirectional cooperative target is placed on the platform 1 m from the camera, and the target's position is rotated so that the camera continuously shoots it at different angles, ensuring that every two adjacent collected pictures contain at least one identical planar feature target; 50 feature pictures are collected in total. The feature images are processed with the algorithm described above to obtain the initial model of the multidirectional cooperative target; different weighting functions are tested experimentally to obtain the weighting function and a threshold within the allowable error range, after which the high-precision model of the multidirectional cooperative target is obtained. The solved model is shown in fig. 10 of the accompanying drawings. The experimental results are shown in Table 1.
TABLE 1: E_RMS and D_RMS under different weighting functions
Several weighting functions are applied in turn, the whole algorithm is optimized with each, and the corresponding E_RMS and D_RMS values are calculated and compared overall; Tanh(s) is selected as the weighting function, and after overall optimization of the algorithm, E_RMS is 0.032 pixels and D_RMS is 0.130 mm. The experimental results prove that the proposed algorithm has strong robustness and high precision, and that it solves the problems of global unification of three-dimensional data and reduced camera calibration accuracy caused by occlusion and visual blind zones when traditional three-dimensional targets are used in complex scenes. The proposed multidirectional target model is simple to process, convenient to store and carry, and can be directly applied to subsequent research on global unification and positioning in three-dimensional scanning.
Although illustrative embodiments of the present invention have been described above to facilitate understanding by those skilled in the art, it should be understood that the invention is not limited in scope to these embodiments; various changes will be apparent to those skilled in the art, and all inventions utilizing the inventive concept are protected, provided they do not depart from the spirit and scope of the invention as defined by the appended claims.
Claims (4)
1. A multi-directional cooperative target optical measurement method based on a weighted SFM algorithm is characterized by comprising the following steps:
step 1, designing a multidirectional cooperation target, wherein the multidirectional cooperation target comprises a polyhedral framework and plane feature targets arranged on all surfaces;
step 2, utilizing a weighted SFM algorithm to carry out global unification on the feature points on the multidirectional cooperation targets so as to obtain the coordinates of the feature points and carry out optical measurement;
in the step 2, the weighted SFM algorithm is used for carrying out global unification on the feature points on the multidirectional cooperative target so as to obtain high-precision feature point coordinates, and the specific steps are as follows:
2.1, identifying a plane feature target and extracting feature points:
obtaining the pixel coordinates of the four corner points of the planar feature target by a quad detection method, resolving the sub-pixel coordinates of the four corner points according to a sub-pixel extraction method, and solving the three-dimensional coordinates of the four corner points in a planar-feature-target coordinate system whose origin is the center of the planar feature target, according to the known side length of the planar feature target; according to an external parameter estimation algorithm of the camera, resolving the rotation vector R and translation vector T converting each planar-feature-target coordinate system into the corresponding camera coordinate system; acquiring the number of the current planar feature target according to an inner-payload decoding method; the inner-payload decoding process is as follows: first, the coordinates of each bit field on the Marker are converted into the image coordinate system through a homography matrix; then, threshold processing is performed on the pixels by establishing a light intensity function model, so that the correct values of the corresponding bits can be read from the payload fields under changing ambient illumination, completing the decoding of the inner payload of the Marker;
2.2, reconstructing a multidirectional cooperative target initial model by utilizing an SFM algorithm:
shooting the multidirectional cooperative target from multiple viewpoints at different angles with a camera to obtain a series of feature images; by identifying and processing the collected feature images, directly obtaining the three-dimensional coordinates of the feature points in the images under each camera viewpoint and the rotation matrix and translation vector from each feature image to the corresponding camera coordinate system, where j denotes the j-th camera coordinate system; establishing a global coordinate system O_G-X_GY_GZ_G and converting all feature points from the camera coordinate systems into O_G-X_GY_GZ_G; in the conversion process, reconstructed feature points are added directly into the global coordinate system and used as points in the global coordinate system for the reconstruction of subsequent feature points, and so on, completing the reconstruction of the initial model of the multidirectional cooperative target;
2.3, carrying out weighted optimization on the SFM algorithm:
according to the principle of a perspective projection model of a camera, the method comprises the following steps:
obtaining the rotation matrix and translation vector from the global coordinate system O_G-X_GY_GZ_G to each corresponding camera coordinate system, reprojecting the feature points in the initial model of the multidirectional cooperative target onto the corresponding feature images, resolving a weight coefficient function f(θ) from the included angle between the camera optical axis and the normal vector of the plane of each planar feature target in the multidirectional cooperative target, and establishing a target optimization function, as shown in formula (2):
wherein L(j) represents the total number of feature images, L(i_j) represents the total number of feature codes identified in the j-th feature image, the first coordinate term represents the sub-pixel coordinates of the feature points extracted in the image, the second represents the coordinates of the corresponding reprojected points, K is the camera intrinsic matrix, and the remaining operator denotes the Rodrigues conversion of the rotation vector into a rotation matrix; nonlinear optimization of the SFM algorithm through this objective function realizes high-precision global unification of all feature points of the multidirectional cooperative target.
2. The method for optical measurement of a multidirectional cooperative target based on a weighted SFM algorithm as claimed in claim 1, wherein designing the multidirectional cooperative target in step 1 specifically comprises:
1.1 design of planar feature targets
The designed planar feature target consists of a Marker from AprilTags and four black squares; identical black squares are added at the four corners of the Marker to form a checkerboard pattern;
1.2 polyhedral framework design
A polyhedral framework is designed, planar feature targets with different internal coding structures are randomly placed on each surface of the framework, and all planar feature targets are rigidly connected with the framework to construct the multidirectional cooperative target; the unique coding information of each planar feature target is automatically identified through its internal coding, determining the plane on which it lies.
3. The method as claimed in claim 1, wherein in step 2.1 the camera moves around the multidirectional cooperative target, a plurality of shooting positions are selected, target images are acquired from multiple viewpoints at different angles, and every two adjacent images are ensured to contain at least one identical planar feature target.
4. The method as claimed in claim 1, wherein in step 1 the designed planar feature target consists of a Marker from AprilTags and four black squares, the Marker being the code of the planar feature target; identical black squares are added at the four corners of the Marker to construct a checkerboard pattern, realizing sub-pixel extraction of the four corner points of the planar feature target.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010850055.7A CN111981982B (en) | 2020-08-21 | 2020-08-21 | Multi-directional cooperative target optical measurement method based on weighted SFM algorithm |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111981982A CN111981982A (en) | 2020-11-24 |
CN111981982B true CN111981982B (en) | 2021-07-06 |
Family
ID=73444027
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112734844B (en) * | 2021-01-08 | 2022-11-08 | 河北工业大学 | Monocular 6D pose estimation method based on octahedron |
CN112734843B (en) * | 2021-01-08 | 2023-03-21 | 河北工业大学 | Monocular 6D pose estimation method based on regular dodecahedron |
CN112381893B (en) * | 2021-01-13 | 2021-04-20 | 中国人民解放军国防科技大学 | Three-dimensional calibration plate calibration method for annular multi-camera system |
CN113340234B (en) * | 2021-06-30 | 2022-12-27 | 杭州思锐迪科技有限公司 | Adapter, three-dimensional scanning system, data processing method and data processing system |
CN116862999B (en) * | 2023-09-04 | 2023-12-08 | 华东交通大学 | Calibration method, system, equipment and medium for three-dimensional measurement of double cameras |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104299261A (en) * | 2014-09-10 | 2015-01-21 | 深圳大学 | Three-dimensional imaging method and system for human body |
CN104374338A (en) * | 2014-09-28 | 2015-02-25 | 北京航空航天大学 | Single-axis rotation angle vision measurement method based on fixed camera and single target |
CN104851104A (en) * | 2015-05-29 | 2015-08-19 | 大连理工大学 | Flexible-target-based close-range large-field-of-view calibrate method of high-speed camera |
CN109242915A (en) * | 2018-09-29 | 2019-01-18 | 合肥工业大学 | Multicamera system scaling method based on multi-face solid target |
CN110230979A (en) * | 2019-04-15 | 2019-09-13 | 深圳市易尚展示股份有限公司 | A kind of solid target and its demarcating three-dimensional colourful digital system method |
CN110487213A (en) * | 2019-08-19 | 2019-11-22 | 杭州电子科技大学 | Full view line laser structured light three-dimensional image forming apparatus and method based on spatial offset |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP7057220B2 (en) * | 2018-05-24 | 2022-04-19 | 株式会社ニューフレアテクノロジー | Positioning method for multi-electron beam image acquisition device and multi-electron beam optical system |
2020-08-21 — CN CN202010850055.7A patent/CN111981982B/en active Active
Non-Patent Citations (2)
Title |
---|
MODELING MULTI-TARGET ESTIMATION IN NOISE AND CLUTTER; Michael B. Malyutov; Simulation in Industry'2000; 2001-02-01 *
Research on multi-camera calibration methods for multi-face stereo targets (多面立体靶标的多相机标定方法研究); Yu Huan (余寰); China Master's Theses Full-text Database, Information Science and Technology; 2019-01-15 *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||