CN111637850A - Self-splicing surface point cloud measuring method without active visual marker - Google Patents

Self-splicing surface point cloud measuring method without active visual marker

Info

Publication number
CN111637850A
Authority
CN
China
Prior art keywords
projector
camera
pose
structured light
point cloud
Prior art date
Legal status
Granted
Application number
CN202010475819.9A
Other languages
Chinese (zh)
Other versions
CN111637850B (en)
Inventor
汪鹏 (Wang Peng)
张丽艳 (Zhang Liyan)
Current Assignee
Nanjing University of Aeronautics and Astronautics
Original Assignee
Nanjing University of Aeronautics and Astronautics
Priority date
Filing date
Publication date
Application filed by Nanjing University of Aeronautics and Astronautics
Priority to CN202010475819.9A
Publication of CN111637850A
Application granted
Publication of CN111637850B
Legal status: Active (current)


Classifications

    • G01B 11/2513 — Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern on the object, with several lines being projected in more than one direction, e.g. grids, patterns
    • G01B 11/2545 — Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern on the object, with one projection direction and several detection directions, e.g. stereo
    • G06T 17/20 — Finite element generation, e.g. wire-frame surface description, tesselation
    • G06T 3/4038 — Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
    • G06T 7/0002 — Image analysis; inspection of images, e.g. flaw detection
    • G06V 10/751 — Image or video pattern matching; comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G06T 2207/10028 — Image acquisition modality: range image; depth image; 3D point clouds


Abstract

The invention relates to a self-splicing surface point cloud measuring method that requires no active visual markers. A camera and a projector move independently and freely to acquire a series of modulated structured light images that jointly cover the entire surface to be measured. The images are decoded to recover the embedded coding information, from which dense pixel matching is established across the image series together with the spatial geometric constraints linking images taken at different poses. The global poses of the camera and the projector for each image, and the spatial coordinates of the reconstructed three-dimensional points, are then computed and optimized within a structure-from-motion framework. Finally, point cloud data covering the whole measured surface are output in a unified world coordinate system. No markers need to be laid out in advance, no separate post-stitching algorithm is required, the operation is flexible, and the method is suitable for accurate measurement of objects of different sizes and shapes.

Description

Self-splicing surface point cloud measuring method without active visual marker
Technical Field
The invention belongs to the technical field of vision measurement, and in particular relates to a self-splicing surface point cloud measuring method without active visual markers.
Background
The structured light measurement method is widely used for point cloud measurement of object surfaces owing to its high precision, non-contact nature, and low cost. A typical structured light measurement system consists of a computer, an industrial camera, and a projector, where the camera and the projector are fixed together so that their relative pose remains unchanged. Before actual measurement, the system must be calibrated so that a matching relationship between the projector and camera image planes can be established through the coded information in the structured light images. During measurement, the camera captures the structured light patterns projected by the projector and modulated by the object surface, and the computer decodes and resolves the captured images to obtain a dense three-dimensional point cloud.
In actual measurement, owing to the limited fields of view of the camera and the projector and the self-occlusion of the object, a single measurement by a conventional structured light system can only acquire a local point cloud of the object surface. The system therefore has to be moved around the object for multiple measurements, with the relative pose of the camera and the projector kept unchanged throughout. Each measurement, however, is performed in a different coordinate system, so to obtain the complete three-dimensional shape of the surface, the local measurement data must be stitched into a unified coordinate system.
A great deal of research has been devoted to the point cloud stitching problem, but several issues remain in practical applications. A common industrial method is to attach visual markers to the surface of, or near, the object to be measured, and to stitch two point clouds together by aligning the spatial coordinates of three or more common visual markers contained in the two adjacent point clouds. As the number of stitching operations grows, however, stitching errors accumulate. Dalian University of Technology proposed a marker-based three-dimensional data stitching method (CN201610221163.1) that reconstructs the three-dimensional coordinates of all visual markers in the global coordinate system in advance; the spatial coordinates of the marker points then serve as a reference for fusing the local point cloud data into the global coordinate system, reducing stitching errors. This method, however, requires an additional camera to acquire images of the marker points, and the coordinates of the global reference points must be computed in advance.
Stitching multi-station point cloud data with visual markers requires marker points to be pasted one by one onto the surface of the measured object: the preparation before measurement is tedious and time-consuming, the markers must be laboriously removed afterwards, and in some cases pasting markers on the measured surface is not permitted at all. A further disadvantage of using markers is that the surface point cloud in the areas covered by the markers cannot be accurately obtained.
Nanjing University of Aeronautics and Astronautics proposed an industrial photogrammetry method without coded points (CN201910202543.4), which uses a projector to project speckle images onto the surface of the measured object while a camera photographs the speckle-covered object from multiple poses; matching relations among the different images are established from the speckle texture, from which the three-dimensional point cloud is reconstructed. In this method, however, only the camera may shoot from different poses, while the projector must remain stationary in a single pose. For most objects, self-occlusion and the limited field of view of the projector mean that a projector fixed in one pose cannot support point cloud measurement of the entire surface to be measured.
In addition, there are methods that stitch the point cloud data measured by a structured light system at different stations through software. Such stitching algorithms rely on extracting common features in the overlapping region of two point clouds and generally comprise two steps: first, a rough coordinate transformation between the two point clouds is computed from the extracted common features; the result is then refined with the Iterative Closest Point (ICP) algorithm. The stitching quality of such software-based methods depends heavily on the shape of the measured object and on whether common features can actually be extracted from the different partial point clouds, a requirement many industrial parts do not satisfy. This stitching approach is therefore unsuitable for many industrial measurement problems.
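As a concrete illustration of this prior-art software stitching pipeline (not part of the present invention), the following is a minimal sketch of coarse feature-based alignment followed by ICP refinement; it assumes the Open3D library, and the file names, voxel size, and thresholds are placeholder values chosen for illustration.

```python
import open3d as o3d

# Load two partial point clouds from adjacent stations (hypothetical files).
source = o3d.io.read_point_cloud("scan_station_1.ply")
target = o3d.io.read_point_cloud("scan_station_2.ply")

# Step 1: coarse transform from common features (FPFH descriptors + RANSAC).
voxel = 2.0  # assumed units of mm
src = source.voxel_down_sample(voxel)
tgt = target.voxel_down_sample(voxel)
for pc in (src, tgt):
    pc.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=2 * voxel, max_nn=30))
param = o3d.geometry.KDTreeSearchParamHybrid(radius=5 * voxel, max_nn=100)
src_f = o3d.pipelines.registration.compute_fpfh_feature(src, param)
tgt_f = o3d.pipelines.registration.compute_fpfh_feature(tgt, param)
coarse = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
    src, tgt, src_f, tgt_f, True, 1.5 * voxel,
    o3d.pipelines.registration.TransformationEstimationPointToPoint(False),
    3, [], o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))

# Step 2: refine the coarse transform with the Iterative Closest Point method.
fine = o3d.pipelines.registration.registration_icp(
    source, target, 0.5 * voxel, coarse.transformation,
    o3d.pipelines.registration.TransformationEstimationPointToPoint())
print("ICP fitness:", fine.fitness)  # degrades when the overlap lacks features
```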
Disclosure of Invention
The purpose of the invention is to provide a self-splicing surface point cloud measuring method without active visual markers, addressing the problems identified in the background above. Unlike the traditional structured light measurement method in which the camera and the projector are rigidly fixed together, in the method of the present invention both the camera and the projector move independently and freely to acquire a series of modulated structured light images that collectively cover the entire surface to be measured. From this image series alone, the measurement method provided by the invention directly outputs point cloud data of the whole measured surface in a unified world coordinate system. The method requires no marker points laid out in advance and no separate point cloud stitching post-processing algorithm, is flexible to operate, and is suitable for accurate measurement of objects of different sizes and shapes.
In order to achieve the above technical purpose, the technical scheme adopted by the invention is as follows:
A self-splicing surface point cloud measuring method without active visual markers, comprising the following steps:
Step one, prepare a projector that can move independently and freely and project structured light onto the measured object, an industrial camera that can move independently and freely and photograph the measured object, and a computer; the projector projects structured light carrying coded information, and the computer controls the projection by the projector, the shooting by the camera, and the analysis and calculation required for three-dimensional point cloud measurement; calibrate the intrinsic parameters of the camera and the projector, both of which adopt an intrinsic parameter model based on perspective projection;
Step two, adjust the relative positions and attitudes of the measured object, the projector, and the camera so that the surface to be measured lies within the common field of view of the projector and the camera;
Step three, keeping the relative poses among the measured object, the projector, and the camera unchanged, project a group of structured light patterns P from the projector onto the surface of the measured object; as each pattern is projected, the camera simultaneously captures the structured light image modulated by the object surface and adds it to the object structured light image set S; judge whether S covers the complete object surface: if not, execute step four; if so, execute step five;
Step four, keeping the pose of the measured object unchanged, hold the pose of either the camera or the projector fixed and flexibly change the pose of the other, so as to enlarge the measured area while ensuring that the measured area still lies within the common field of view of the camera and the projector after the pose change; then return to step three;
Step five, decode all images in the object structured light image set S to obtain the coding information corresponding to each pixel; for the image groups captured with the projector pose fixed and the camera moving, the coded structured light is stationary on the object surface, so pixels are matched across the structured light images according to the coding information; for the two image groups captured by the camera at a fixed pose before and after the projector pose changes, the pixels photographed by the camera are stationary, so the images are matched directly pixel by pixel; the spatial geometric constraint relations between the images corresponding to the camera and the projector at all poses are thereby constructed;
Step six, establish a unified coordinate system and solve all poses of the camera and the projector in the unified coordinate system within a structure-from-motion framework;
Step seven, reconstruct the complete point cloud data of the measured surface in the unified coordinate system from the dense pixel matching pairs and the poses of the camera and the projector at all viewing angles;
Step eight, taking the pre-calibrated camera and projector intrinsic parameters and all the computed pose parameters and three-dimensional coordinates of the spatial points as optimization variables, perform global optimization by bundle adjustment (a minimal sketch of this optimization follows these steps), and finally reconstruct the complete point cloud data of the measured object from the optimized parameters.
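The global optimization of step eight is a bundle adjustment over all poses and points. The sketch below is an illustrative assumption, not the patent's own implementation: it expresses the reprojection residual with OpenCV's projectPoints over Rodrigues rotation vectors and hands it to SciPy's least_squares; the parameterization, variable names, and solver choice are assumptions made here.

```python
import numpy as np
import cv2
from scipy.optimize import least_squares

def reprojection_residuals(params, n_views, K_list, obs_view, obs_pt, obs_uv):
    """Bundle adjustment residuals.

    params stacks, per view, a Rodrigues rotation vector and a translation
    (6 values), followed by the flattened 3D point coordinates.
    obs_view[k] / obs_pt[k] give the view and point index of observation k,
    and obs_uv[k] is the measured pixel.
    """
    poses = params[:6 * n_views].reshape(n_views, 6)
    points = params[6 * n_views:].reshape(-1, 3)
    res = np.empty((len(obs_uv), 2))
    for k in range(len(obs_uv)):
        vi, pi = obs_view[k], obs_pt[k]
        proj, _ = cv2.projectPoints(points[pi].reshape(1, 3),
                                    poses[vi, :3], poses[vi, 3:],
                                    K_list[vi], None)
        res[k] = proj.ravel() - obs_uv[k]
    return res.ravel()

# x0 stacks the initial poses from the incremental SfM stage and the
# triangulated points; the refined parameters come back in result.x.
# result = least_squares(reprojection_residuals, x0, method="trf",
#                        args=(n_views, K_list, obs_view, obs_pt, obs_uv))
```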
To further refine the above technical solution, the specific measures adopted additionally include the following.
In step five, the specific method for constructing the spatial geometric constraint relations between the images corresponding to the camera and the projector at all poses is as follows: let $N_P$ denote the total number of projector stations and $P_i$ the pose of the projector after the $i$-th move, $i = 1, 2, \ldots, N_P$; let $N_i$ denote the total number of camera moves while the projector stays at pose $P_i$, and $C_i^j$ the pose of the camera after its $j$-th move with the projector held at $P_i$, $j = 1, 2, \ldots, N_i$; let $S_i^j$ denote the set of modulated images captured at camera pose $C_i^j$, and $P$ the structured light pattern set projected at projector pose $P_i$. For $j, k \in \{1, 2, \ldots, N_i\}$ with $j \neq k$ and $i \in \{1, 2, \ldots, N_P\}$, any two of the image sets $S_i^j$, $P$, and $S_i^k$ are matched between images through the same coded information; for $i \in \{1, 2, \ldots, N_P - 1\}$, the two modulated image sets captured from the same, unchanged camera pose before and after the projector moves from $P_i$ to $P_{i+1}$ are matched by directly taking identical pixels as matching pixel pairs.
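A minimal sketch of this matching construction is given below; it is an assumed implementation, not code from the patent. Camera pixels are mapped to projector pixels by converting the decoded horizontal and vertical absolute phases into projector coordinates (the linear phase-to-pixel scaling and the function names are hypothetical), and two image groups taken from the same camera pose are matched trivially by pixel identity.

```python
import numpy as np

def match_camera_to_projector(phase_u, phase_v, proj_w, proj_h,
                              phase_span_u, phase_span_v):
    """Map every validly decoded camera pixel to a projector pixel.

    phase_u / phase_v: absolute phase maps (HxW) from the vertical and
    horizontal fringe sequences; phase_span_*: total absolute phase across
    the projector image (an assumed linear phase-to-pixel model).
    Returns (cam_uv, proj_uv), two Nx2 arrays of matched pixel pairs.
    """
    valid = np.isfinite(phase_u) & np.isfinite(phase_v)
    ys, xs = np.nonzero(valid)
    px = phase_u[ys, xs] / phase_span_u * (proj_w - 1)
    py = phase_v[ys, xs] / phase_span_v * (proj_h - 1)
    cam_uv = np.stack([xs, ys], axis=1).astype(np.float64)
    proj_uv = np.stack([px, py], axis=1)
    return cam_uv, proj_uv

def match_same_camera_pose(valid_before, valid_after):
    """Before and after a projector move the camera is fixed, so identical
    pixel locations decoded validly in both groups form matching pairs."""
    ys, xs = np.nonzero(valid_before & valid_after)
    uv = np.stack([xs, ys], axis=1).astype(np.float64)
    return uv, uv.copy()
```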
In step six, the camera poses $C_i^j$ and the projector poses $P_i$ are solved within the structure-from-motion framework, where $j \in \{1, 2, \ldots, N_i\}$ and $i \in \{1, 2, \ldots, N_P\}$: first, discrete sampling over all matched pixels yields a relatively sparse matching point set; the spatial geometric constraints are established from this sparse set, and the structure-from-motion framework is then used to solve the poses of projector $P_i$ and camera $C_i^j$ in the unified coordinate system.
The structure-from-motion framework described above is an incremental three-dimensional reconstruction framework.
In step six, the coordinate system of projector pose $P_1$ is taken as the unified coordinate system. The poses of camera $C_1^1$ and projector $P_1$ are recovered first: the fundamental matrix $F$ between the projector and the camera is estimated from the matched pixel pairs between them, the essential matrix $E$ is further computed, and $E$ is decomposed to obtain the poses of camera $C_1^1$ and projector $P_1$; the poses of the camera and the projector at each shooting station in the unified coordinate system are then solved incrementally through the spatial geometric constraints.
In step six, after the fundamental matrix $F$ of the projector and the camera has been estimated, the essential matrix is obtained from

$$E = K_2^{\top} F K_1,$$

where $F$ satisfies the epipolar constraint $X_1^{\top} F X_2 = 0$; the camera pose $C_1^1$ and the projector pose $P_1$ are then obtained by SVD decomposition of $E$.
Here $X_1$ and $X_2$ are the homogeneous coordinates of matched points on camera $C_1^1$ and projector $P_1$ respectively, $F$ is the fundamental matrix, $E$ is the essential matrix, $K_1$ is the projector intrinsic parameter matrix, and $K_2$ is the camera intrinsic parameter matrix.
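Consistent with the formula above, the initialization can be sketched with OpenCV as follows (an assumed implementation; the function and variable names are hypothetical): the fundamental matrix is estimated from the sampled matches, the essential matrix is formed as $E = K_2^{\top} F K_1$, and the relative pose is recovered from the SVD-based decomposition with a cheirality check.

```python
import numpy as np
import cv2

def relative_pose_from_matches(cam_uv, proj_uv, K_cam, K_proj):
    """Estimate the camera pose relative to the projector from matched pixels.

    cam_uv, proj_uv: Nx2 float arrays of matched pixel coordinates (N >= 8).
    Returns (R, t) taking projector-frame coordinates into the camera frame.
    """
    # Fundamental matrix from the sparsely sampled matching pixel pairs,
    # satisfying cam_uv^T * F * proj_uv = 0.
    F, inlier_mask = cv2.findFundamentalMat(proj_uv, cam_uv, cv2.FM_RANSAC, 1.0)

    # Essential matrix: E = K_2^T * F * K_1 (K_2 camera, K_1 projector).
    E = K_cam.T @ F @ K_proj

    # Normalize the pixel coordinates, then decompose E (SVD inside
    # recoverPose) and keep the (R, t) that passes the cheirality check,
    # i.e. places triangulated points in front of both views.
    pts_p = cv2.undistortPoints(proj_uv.reshape(-1, 1, 2), K_proj, None)
    pts_c = cv2.undistortPoints(cam_uv.reshape(-1, 1, 2), K_cam, None)
    _, R, t, _ = cv2.recoverPose(E, pts_p, pts_c)
    return R, t
```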
The encoded information is phase.
Unlike the traditional structured light measurement method in which the camera and the projector are rigidly fixed together, in the method of the present invention both the camera and the projector move independently and freely to acquire a series of modulated structured light images that collectively cover the entire surface to be measured. From this image series alone, the measurement method provided by the invention directly outputs point cloud data of the whole measured surface in a unified world coordinate system. The method projects structured light carrying coded information directly from the projector, requires no marker points laid out in advance and no separate point cloud stitching post-processing algorithm, is flexible to operate, and is suitable for accurate measurement of objects of different sizes and shapes.
The invention has the following advantages:
(1) The three-dimensional point cloud measuring method provided by the invention allows the relative poses of the projector and the camera to be adjusted independently, under the given shooting rule, according to the measured object, avoiding the incomplete measurement data caused by a conventional fixed camera-projector structure, and is suitable for accurate measurement of objects of different sizes and shapes;
(2) The self-splicing point cloud measuring method provided by the invention requires no visual marker points on the object surface and no additional steps or equipment for data stitching; it is convenient and fast, and achieves stable and reliable self-splicing of point cloud data even for measured objects without texture or distinctive features;
(3) The point cloud measuring method provided by the invention directly outputs point cloud data in a unified world coordinate system through global optimization, effectively reducing the accumulated errors caused by sequentially stitching fragmentary point cloud data.
Drawings
FIG. 1 is a schematic view of the measurement process of the method of the present invention;
FIG. 2 is a flow chart of the method of the present invention;
FIG. 3 shows part of the images of the measured object acquired in an embodiment of the method of the present invention;
FIG. 4 is a visualization of the poses of the camera and the projector in an embodiment of the method of the present invention;
FIG. 5 shows reconstructed point cloud data and its surface reconstruction result in an embodiment of the method of the present invention.
Detailed Description
Embodiments of the present invention are described in further detail below with reference to the accompanying drawings.
The self-splicing surface point cloud measuring method without active visual markers of this embodiment comprises the following steps:
Step one, prepare a projector that can move independently and freely and project structured light onto the measured object, an industrial camera that can move independently and freely and photograph the measured object, and a computer; the projector projects structured light carrying coded information, and the computer controls the projection by the projector, the shooting by the camera, and the analysis and calculation required for three-dimensional point cloud measurement; calibrate the intrinsic parameters of the camera and the projector, both of which adopt an intrinsic parameter model based on perspective projection;
Step two, adjust the relative positions and attitudes of the measured object, the projector, and the camera so that the surface to be measured lies within the common field of view of the projector and the camera;
Step three, keeping the relative poses among the measured object, the projector, and the camera unchanged, project a group of structured light patterns P from the projector onto the surface of the measured object; as each pattern is projected, the camera simultaneously captures the structured light image modulated by the object surface and adds it to the object structured light image set S; judge whether S covers the complete object surface: if not, execute step four; if so, execute step five;
Step four, keeping the pose of the measured object unchanged, hold the pose of either the camera or the projector fixed and flexibly change the pose of the other, so as to enlarge the measured area while ensuring that the measured area still lies within the common field of view of the camera and the projector after the pose change; then return to step three;
Step five, decode all images in the object structured light image set S to obtain the coding information corresponding to each pixel; for the image groups captured with the projector pose fixed and the camera moving, the coded structured light is stationary on the object surface, so pixels are matched across the structured light images according to the coding information; for the two image groups captured by the camera at a fixed pose before and after the projector pose changes, the pixels photographed by the camera are stationary, so the images are matched directly pixel by pixel; the spatial geometric constraint relations between the images corresponding to the camera and the projector at all poses are thereby constructed;
Step six, establish a unified coordinate system and solve all poses of the camera and the projector in the unified coordinate system within a structure-from-motion framework;
Step seven, reconstruct the complete point cloud data of the measured surface in the unified coordinate system from the dense pixel matching pairs and the poses of the camera and the projector at all viewing angles;
Step eight, taking the pre-calibrated camera and projector intrinsic parameters and all the computed pose parameters and three-dimensional coordinates of the spatial points as optimization variables, perform global optimization by bundle adjustment, and finally reconstruct the complete point cloud data of the measured object from the optimized parameters.
In the fifth step, the specific method for constructing the spatial geometric constraint relations between the images corresponding to the camera and the projector at all poses is as follows: let $N_P$ denote the total number of projector stations and $P_i$ the pose of the projector after the $i$-th move, $i = 1, 2, \ldots, N_P$; let $N_i$ denote the total number of camera moves while the projector stays at pose $P_i$, and $C_i^j$ the pose of the camera after its $j$-th move with the projector held at $P_i$, $j = 1, 2, \ldots, N_i$; let $S_i^j$ denote the set of modulated images captured at camera pose $C_i^j$, and $P$ the structured light pattern set projected at projector pose $P_i$. For $j, k \in \{1, 2, \ldots, N_i\}$ with $j \neq k$ and $i \in \{1, 2, \ldots, N_P\}$, any two of the image sets $S_i^j$, $P$, and $S_i^k$ are matched between images through the same coded information; for $i \in \{1, 2, \ldots, N_P - 1\}$, the two modulated image sets captured from the same, unchanged camera pose before and after the projector moves from $P_i$ to $P_{i+1}$ are matched by directly taking identical pixels as matching pixel pairs.
In the sixth step, the camera poses $C_i^j$ and the projector poses $P_i$ are solved within the structure-from-motion framework, where $j \in \{1, 2, \ldots, N_i\}$ and $i \in \{1, 2, \ldots, N_P\}$: first, discrete sampling over all matched pixels yields a relatively sparse matching point set; the spatial geometric constraints are established from this sparse set, and the structure-from-motion framework is then used to solve the poses of projector $P_i$ and camera $C_i^j$ in the unified coordinate system.
The structure-from-motion framework is an incremental three-dimensional reconstruction framework.
In the sixth step, the coordinate system of projector pose $P_1$ is taken as the unified coordinate system. The poses of camera $C_1^1$ and projector $P_1$ are recovered first: the fundamental matrix $F$ between the projector and the camera is estimated from the matched pixel pairs between them, the essential matrix $E$ is further computed, and $E$ is decomposed to obtain the poses of camera $C_1^1$ and projector $P_1$; the poses of the camera and the projector at each shooting station in the unified coordinate system are then solved incrementally through the spatial geometric constraints.
In the sixth step, after the fundamental matrix $F$ of the projector and the camera has been estimated, the essential matrix is obtained from

$$E = K_2^{\top} F K_1,$$

where $F$ satisfies the epipolar constraint $X_1^{\top} F X_2 = 0$; the camera pose $C_1^1$ and the projector pose $P_1$ are then obtained by SVD decomposition of $E$;
wherein $X_1$ and $X_2$ are the homogeneous coordinates of matched points on camera $C_1^1$ and projector $P_1$ respectively, $F$ is the fundamental matrix, $E$ is the essential matrix, $K_1$ is the projector intrinsic parameter matrix, and $K_2$ is the camera intrinsic parameter matrix.
The encoded information is the phase.
A specific example of the method is given below:
The measured object is a porcelain vase whose shape is closed all the way around, so complete surface shape data cannot be obtained from a single-pose measurement. The structured light projected in this example consists of horizontal and vertical three-frequency four-step phase-shifted fringes, whose coded information is the phase. The wrapped phase of each single frequency is solved first, the wrapped phases of different frequencies are differenced, and finally the horizontal and vertical absolute phases in the structured light are extracted.
In this embodiment, an AVT Mako G-158B PoE camera with an imaging resolution of 2045 × 2045 pixels is used together with a Schneider Kreuznach industrial lens with a focal length of 35 mm; the projector is a Texas Instruments DLP4500 with a resolution of 1240 × 912 pixels. The perspective projection parameters of the camera and the projector are calibrated with a plane-based calibration method.
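For the camera side, such a plane-based calibration can be sketched with OpenCV as below; this is an assumed illustration (the board geometry and image folder are hypothetical), and calibrating the projector would additionally require mapping the board points into projector pixel coordinates through the projected code before applying the same routine.

```python
import glob
import numpy as np
import cv2

# Plane-based (checkerboard) intrinsic calibration for the camera.
# Board geometry is a hypothetical 9x6 pattern with 10 mm squares.
cols, rows, square = 9, 6, 10.0
objp = np.zeros((rows * cols, 3), np.float32)
objp[:, :2] = np.mgrid[0:cols, 0:rows].T.reshape(-1, 2) * square

obj_pts, img_pts = [], []
for path in glob.glob("calib/*.png"):  # hypothetical image folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, (cols, rows))
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_pts.append(objp)
        img_pts.append(corners)

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, gray.shape[::-1], None, None)
print("RMS reprojection error:", rms)
```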
In this embodiment the measured object is closed through 360 degrees, so to reconstruct the complete object the camera pose has to be changed while the projector pose is guaranteed to remain unchanged during the change; likewise, the projector pose also has to be changed while the camera pose is guaranteed to remain unchanged, and a group of fringe images is captured before and after each projector move. Part of the resulting acquired images is shown in FIG. 3.
After complete image data of the measured surface have been acquired, the images are decoded to obtain the absolute phase corresponding to each pixel. Under each fixed projector pose, the pixels of the structured light images taken by the camera at all of its poses are matched according to the phase information; across each projector pose change, the identical pixels of the two image groups taken by the camera at its unchanged pose are matched directly. The spatial geometric constraints among the images at all viewing angles are thereby constructed.
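A minimal sketch of the phase decoding described above is given below; the patent gives no code, so the four-step shift convention (0, π/2, π, 3π/2), the heterodyne formulation, and the function names are assumptions.

```python
import numpy as np

def wrapped_phase(I1, I2, I3, I4):
    """Four-step phase shift (assumed shifts 0, pi/2, pi, 3*pi/2):
    I_n = A + B*cos(phi + delta_n), so phi = atan2(I4 - I2, I1 - I3)."""
    return np.arctan2(I4 - I2, I1 - I3)

def heterodyne(phi_a, phi_b):
    """Two-frequency heterodyne: the difference of wrapped phases at two
    nearby frequencies is the wrapped phase of the longer beat period."""
    return np.mod(phi_a - phi_b, 2 * np.pi)

def unwrap_with_reference(phi_high, phi_ref, period_ratio):
    """Unwrap a high-frequency wrapped phase with a coarse absolute
    reference; period_ratio = high-frequency periods per reference period."""
    k = np.round((phi_ref * period_ratio - phi_high) / (2 * np.pi))
    return phi_high + 2 * np.pi * k
```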
The poses of the cameras $C_i^j$ and the projectors $P_i$ are then solved within the structure-from-motion framework, where $j \in \{1, 2, \ldots, N_i\}$ and $i \in \{1, 2, \ldots, N_P\}$. The result is shown in FIG. 4, where the small spatial polygons represent the industrial camera and the large spatial polygons represent the projector. Specifically, the embodiment adopts an incremental three-dimensional reconstruction framework. The poses of camera $C_1^1$ and projector $P_1$ are recovered first, with the coordinate system of projector $P_1$ taken as the unified coordinate system: the fundamental matrix $F$ of the projector and the camera is estimated from the matched pixel pairs between them, the essential matrix $E$ is further computed, and $E$ is decomposed to obtain the poses of camera $C_1^1$ and projector $P_1$. The poses of the camera and the projector in the unified coordinate system are solved incrementally through the spatial geometric constraints; the intrinsic parameters of the camera and the projector, all of their pose parameters, and the coordinates of all spatial points are optimized by bundle adjustment; and the complete point cloud data of the object are finally reconstructed from the optimized parameters, as shown in FIG. 5. The reconstruction result shows that the self-splicing surface point cloud measuring method without active visual marker points is feasible in practical application, convenient to operate, and yields complete reconstruction data.
The above is only a preferred embodiment of the present invention, and the protection scope of the invention is not limited to the embodiment described above; all technical solutions under the concept of the invention belong to its protection scope. It should be noted that modifications and embellishments made by those skilled in the art without departing from the principle of the invention shall also be regarded as within the protection scope of the invention.

Claims (6)

1. A self-splicing surface point cloud measuring method without active visual markers, characterized in that the method comprises the following steps:
step one, prepare a projector that can move independently and freely and project structured light onto the measured object, an industrial camera that can move independently and freely and photograph the measured object, and a computer; the projector projects structured light carrying coded information, and the computer controls the projection by the projector, the shooting by the camera, and the analysis and calculation required for three-dimensional point cloud measurement; calibrate the intrinsic parameters of the camera and the projector, both of which adopt an intrinsic parameter model based on perspective projection;
step two, adjust the relative positions and attitudes of the measured object, the projector, and the camera so that the surface to be measured lies within the common field of view of the projector and the camera;
step three, keeping the relative poses among the measured object, the projector, and the camera unchanged, project a group of structured light patterns P from the projector onto the surface of the measured object; as each pattern is projected, the camera simultaneously captures the structured light image modulated by the object surface and adds it to the object structured light image set S; judge whether S covers the complete object surface: if not, execute step four; if so, execute step five;
step four, keeping the pose of the measured object unchanged, hold the pose of either the camera or the projector fixed and flexibly change the pose of the other, so as to enlarge the measured area while ensuring that the measured area still lies within the common field of view of the camera and the projector after the pose change; then return to step three;
step five, decode all images in the object structured light image set S to obtain the coding information corresponding to each pixel; for the image groups of the modulated structured light on the object surface captured with the projector pose fixed and the camera moving, match the pixels of the camera images with the pixels of the projector image according to the coding information; for the two groups of modulated structured light images captured by the camera at a fixed pose before and after the projector pose changes, match the pixels at identical positions directly; the spatial geometric constraint relations between the images corresponding to the camera and the projector at all poses are thereby constructed;
step six, establish a unified coordinate system and solve all poses of the camera and the projector in the unified coordinate system within a structure-from-motion framework;
step seven, reconstruct the complete point cloud data of the measured surface in the unified coordinate system from the dense pixel matching pairs and the poses of the camera and the projector at all viewing angles;
step eight, taking the pre-calibrated camera and projector intrinsic parameters and all the computed pose parameters and three-dimensional coordinates of the spatial points as optimization variables, perform global optimization by bundle adjustment, and finally reconstruct the complete point cloud data of the measured object from the optimized parameters.
2. The self-splicing surface point cloud measuring method without active visual markers according to claim 1, characterized in that: in the fifth step, the specific method for constructing the spatial geometric constraint relations between the images corresponding to the camera and the projector at all poses is as follows: let $N_P$ denote the total number of projector stations and $P_i$ the pose of the projector after the $i$-th move, $i = 1, 2, \ldots, N_P$; let $N_i$ denote the total number of camera moves while the projector stays at pose $P_i$, and $C_i^j$ the pose of the camera after its $j$-th move with the projector held at $P_i$, $j = 1, 2, \ldots, N_i$; let $S_i^j$ denote the set of modulated images captured at camera pose $C_i^j$, and $P$ the structured light pattern set projected at projector pose $P_i$; for $j, k \in \{1, 2, \ldots, N_i\}$ with $j \neq k$ and $i \in \{1, 2, \ldots, N_P\}$, any two of the image sets $S_i^j$, $P$, and $S_i^k$ are matched between images through the same coded information; for $i \in \{1, 2, \ldots, N_P - 1\}$, the two modulated image sets captured from the same, unchanged camera pose before and after the projector moves from $P_i$ to $P_{i+1}$ are matched by directly taking identical pixels as matching pixel pairs.
3. The self-splicing surface point cloud measuring method without active visual markers according to claim 2, characterized in that: in the sixth step, the camera poses $C_i^j$ and the projector poses $P_i$ are solved within the structure-from-motion framework, where $j \in \{1, 2, \ldots, N_i\}$ and $i \in \{1, 2, \ldots, N_P\}$: first, discrete sampling over all matched pixels yields a relatively sparse matching point set containing no fewer than 8 matching points; the spatial geometric constraints are established from this sparse set, and the structure-from-motion framework is then used to solve the poses of projector $P_i$ and camera $C_i^j$ in the unified coordinate system.
4. The self-splicing surface point cloud measuring method without active visual markers according to claim 3, characterized in that: the structure-from-motion framework is an incremental three-dimensional reconstruction framework.
5. The self-splicing surface point cloud measuring method without active visual markers according to claim 4, characterized in that: in the sixth step, the coordinate system of projector pose $P_1$ is taken as the unified coordinate system; the fundamental matrix $F$ between projector $P_1$ and camera $C_1^1$ is estimated from the discretely sampled matched pixel pairs, the essential matrix $E$ is further computed, and $E$ is decomposed to obtain the poses of camera $C_1^1$ and projector $P_1$ in the unified coordinate system; the poses of the camera and the projector at each shooting station in the unified coordinate system are then solved incrementally in sequence through the spatial geometric constraints.
6. The self-splicing surface point cloud measuring method without active visual markers according to claim 5, characterized in that: in the sixth step, after the fundamental matrix $F$ of the projector and the camera has been estimated, the essential matrix is obtained from $E = K_2^{\top} F K_1$, where $F$ satisfies the epipolar constraint $X_1^{\top} F X_2 = 0$, and the poses of camera $C_1^1$ and projector $P_1$ are obtained by SVD decomposition of $E$;
wherein $X_1$ and $X_2$ are the homogeneous coordinates of matched points on camera $C_1^1$ and projector $P_1$ respectively, $F$ is the fundamental matrix, $E$ is the essential matrix, $K_1$ is the projector intrinsic parameter matrix, and $K_2$ is the camera intrinsic parameter matrix.
CN202010475819.9A 2020-05-29 2020-05-29 Self-splicing surface point cloud measuring method without active visual marker Active CN111637850B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010475819.9A CN111637850B (en) 2020-05-29 2020-05-29 Self-splicing surface point cloud measuring method without active visual marker


Publications (2)

Publication Number — Publication Date
CN111637850A — 2020-09-08
CN111637850B — 2021-10-26

Family

ID=72326861

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010475819.9A Active CN111637850B (en) 2020-05-29 2020-05-29 Self-splicing surface point cloud measuring method without active visual marker

Country Status (1)

Country Link
CN (1) CN111637850B (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102483319A (en) * 2009-09-11 2012-05-30 瑞尼斯豪公司 Non-contact object inspection
CN104299211A (en) * 2014-09-25 2015-01-21 周翔 Free-moving type three-dimensional scanning method
US9952036B2 (en) * 2015-11-06 2018-04-24 Intel Corporation Systems, methods, and apparatuses for implementing maximum likelihood image binarization in a coded light range camera
WO2018171851A1 (en) * 2017-03-20 2018-09-27 3Dintegrated Aps A 3d reconstruction system
CN206596100U (en) * 2017-03-29 2017-10-27 武汉嫦娥医学抗衰机器人股份有限公司 A kind of high definition polyphaser full-view stereo imaging system
CN109727277A (en) * 2018-12-28 2019-05-07 江苏瑞尔医疗科技有限公司 The body surface of multi-view stereo vision puts position tracking
CN109945841A (en) * 2019-03-11 2019-06-28 南京航空航天大学 A kind of industrial photogrammetry method of no encoded point
CN111189416A (en) * 2020-01-13 2020-05-22 四川大学 Structural light 360-degree three-dimensional surface shape measuring method based on characteristic phase constraint

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112733641A (en) * 2020-12-29 2021-04-30 深圳依时货拉拉科技有限公司 Object size measuring method, device, equipment and storage medium
CN113140042A (en) * 2021-04-19 2021-07-20 杭州思看科技有限公司 Three-dimensional scanning splicing method and device, electronic device and computer equipment
CN113140042B (en) * 2021-04-19 2023-07-25 思看科技(杭州)股份有限公司 Three-dimensional scanning splicing method and device, electronic device and computer equipment
CN113432550A (en) * 2021-06-22 2021-09-24 北京航空航天大学 Large-size part three-dimensional measurement splicing method based on phase matching
CN113838266A (en) * 2021-09-23 2021-12-24 广东中星电子有限公司 Drowning alarm method and device, electronic equipment and computer readable medium
CN114092335B (en) * 2021-11-30 2023-03-10 群滨智造科技(苏州)有限公司 Image splicing method, device and equipment based on robot calibration and storage medium
CN114092335A (en) * 2021-11-30 2022-02-25 深圳群宾精密工业有限公司 Image splicing method, device and equipment based on robot calibration and storage medium
CN114166146A (en) * 2021-12-03 2022-03-11 香港理工大学深圳研究院 Three-dimensional measurement method and equipment based on construction of encoded image projection
CN114279326A (en) * 2021-12-22 2022-04-05 易思维(天津)科技有限公司 Global positioning method of three-dimensional scanning equipment
CN115330885A (en) * 2022-08-30 2022-11-11 中国传媒大学 Special-shaped surface dynamic projection method based on camera feedback
CN115442584A (en) * 2022-08-30 2022-12-06 中国传媒大学 Multi-sensor fusion irregular surface dynamic projection method
CN115442584B (en) * 2022-08-30 2023-08-18 中国传媒大学 Multi-sensor fusion type special-shaped surface dynamic projection method
CN116934871A (en) * 2023-07-27 2023-10-24 湖南视比特机器人有限公司 Multi-objective system calibration method, system and storage medium based on calibration object
CN116934871B (en) * 2023-07-27 2024-03-26 湖南视比特机器人有限公司 Multi-objective system calibration method, system and storage medium based on calibration object

Also Published As

Publication number Publication date
CN111637850B (en) 2021-10-26


Legal Events

Code — Title
PB01 — Publication
SE01 — Entry into force of request for substantive examination
GR01 — Patent grant