CN113506348B - Gray code-assisted three-dimensional coordinate calculation method - Google Patents


Info

Publication number
CN113506348B
CN113506348B (application CN202110798464.1A)
Authority
CN
China
Prior art keywords: phase, gray code, point, formula, decoding
Prior art date
Legal status: Active
Application number
CN202110798464.1A
Other languages
Chinese (zh)
Other versions
CN113506348A (en)
Inventor
李岩
崔振丰
胡成威
周晓伟
Current Assignee
Jilin Kaidi Technology Co ltd
Changchun University of Technology
Original Assignee
Jilin Kaidi Technology Co ltd
Changchun University of Technology
Application filed by Jilin Kaidi Technology Co ltd and Changchun University of Technology
Priority to CN202110798464.1A. Publication of CN113506348A; application granted; publication of CN113506348B.

Classifications

    • G PHYSICS; G06 COMPUTING, CALCULATING OR COUNTING
    • G06T 7/85: Analysis of captured images to determine intrinsic or extrinsic camera parameters; stereo camera calibration
    • G06F 18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06T 7/136: Segmentation; edge detection involving thresholding
    • G06T 7/194: Segmentation involving foreground-background segmentation
    • G06T 2207/10028: Range image; depth image; 3D point clouds
    • G06T 2207/20032: Median filtering

Abstract

The invention belongs to the field of three-dimensional reconstruction, and in particular relates to a Gray-code-assisted three-dimensional coordinate calculation method. The method comprises the following steps: 1. calibrate the internal and external parameters of a binocular camera and capture the structured-light stripe patterns; 2. compute a mask image from the stripe patterns captured by the binocular camera and use it to perform fast phase unwrapping, obtaining the absolute phase; 3. apply a transverse constraint to the absolute phase; 4. search for equivalent phase points with Gray-code assistance; 5. after the equivalent phase points are found, calculate the three-dimensional coordinates of the object by the triangulation principle. The invention obtains the three-dimensional coordinates of an object with a Gray-code-assisted fast equivalent-phase search, which is faster than global search and the bidirectional stripe-constraint method.

Description

Gray code-assisted three-dimensional coordinate calculation method
Technical Field
The invention belongs to the field of three-dimensional reconstruction, and particularly relates to a three-dimensional coordinate calculation method based on Gray code assistance.
Background
The stereoscopic vision measuring method based on phase shift combined with Gray code has the advantages of being non-contact, high-precision, high-resolution and low-cost. It is widely applied in biomedicine, machine vision, reverse engineering and other fields, and has long been an active research topic. The method projects a series of phase-shift stripe patterns onto the surface of an object through a projector; a camera captures the stripe patterns modulated by the object, which are decoded to obtain the wrapped phase, and the wrapped phase is unwrapped to obtain a continuous absolute phase value. Equivalent phase point pairs are then searched in the left and right absolute phase images, and finally the three-dimensional coordinates of the object are obtained with the binocular triangulation principle.
At present, common methods for solving the phase include the four-step phase-shift method and Fourier transform profilometry (FTP). Although the two methods solve the phase differently, both use an arctangent function and obtain wrapped phases distributed in [−π, π], which must be unwrapped to obtain the true full-field phase. Phase unwrapping is divided into spatial and temporal phase unwrapping. Spatial phase unwrapping analyses the phase values between adjacent elements of the wrapped phase and adjusts them according to phase continuity, recovering a continuous absolute phase; when it is applied, the unwrapping results of adjacent pixels affect each other, and errors easily occur when measuring objects with discontinuous surfaces. Temporal phase unwrapping determines the phase value of each point by projecting a series of fringes and unwrapping along the time axis. Phase shift combined with Gray code is a temporal unwrapping method whose unwrapping is not affected by adjacent pixels; it can measure objects with complex surfaces and achieves high precision. For searching equivalent phase points, the traditional global search strategy applied directly to the phase information is slow and error-prone; the bidirectional projection grating method uses stripes in the horizontal and vertical directions to constrain the search area and reduce errors, but it must project stripe patterns in both directions, so the computation is large and the measurement speed suffers.
Disclosure of Invention
The invention provides a Gray-code-assisted three-dimensional coordinate calculation method. First, fast phase unwrapping is realized: a mask image eliminates pixels in the non-stripe region to reduce the computation, and the pixels in the stripe region are then unwrapped. To address period misalignment during unwrapping, a correction based on staggered Gray codes and median filtering is applied, yielding an absolute phase whose intensity varies continuously and linearly. A transverse constraint is applied to the absolute phase and the Gray-code decoding sequence, and on that basis a further longitudinal constraint using the Gray-code decoding value is proposed. Finally, the three-dimensional coordinates of the object are obtained according to the binocular triangulation principle, overcoming the shortcomings of existing equivalent-phase search methods for obtaining three-dimensional coordinates.
The technical scheme of the invention is described as follows by combining the attached drawings:
a three-dimensional coordinate calculation method based on Gray code assistance comprises the following steps:
step one, calibrating the internal and external parameters of a binocular camera and capturing the structured-light stripe patterns;
step two, calculating a mask image from the structured-light stripe patterns captured by the binocular camera, and using the mask image to realize fast phase unwrapping to obtain the absolute phase;
step three, transversely constraining the absolute phase;
step four, Gray-code-assisted searching of equivalent phase points;
step five, after the equivalent phase points are found, calculating the three-dimensional coordinates of the object using the triangulation principle.
The specific method of the first step is as follows:
establishing the imaging geometric model of the binocular camera and calibrating it to obtain the internal and external parameters of the cameras, including: the lens center position, the focal length, the distortion coefficients and the rotation-translation matrix; after calibration, the structured-light stripe patterns are captured.
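As a reading aid only (the patent gives no code), the pinhole model that these internal and external parameters define can be written out in a few lines. The `project_point` helper and all numeric values below are illustrative assumptions, and lens distortion is ignored:

```python
def project_point(X, K, R, t):
    """Project a 3-D world point with the pinhole model x = K(R*X + t)
    defined by the internal (K) and external (R, t) parameters.
    Lens distortion is ignored in this sketch."""
    # Camera-frame coordinates: Xc = R*X + t
    Xc = [sum(R[i][j] * X[j] for j in range(3)) + t[i] for i in range(3)]
    # Perspective division, then intrinsics (fx, fy and principal point cx, cy)
    u = K[0][0] * Xc[0] / Xc[2] + K[0][2]
    v = K[1][1] * Xc[1] / Xc[2] + K[1][2]
    return (u, v)

# With identity rotation and zero translation, a point on the optical
# axis lands on the principal point (cx, cy).
K = [[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]]
R = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
t = [0.0, 0.0, 0.0]
uv = project_point([0.0, 0.0, 2.0], K, R, t)   # -> (320.0, 240.0)
```

Calibration recovers K, R and t for both cameras; the rotation-translation matrix between the two cameras is what later allows epipolar alignment.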
The specific method of the second step is as follows:
21) Additionally project a fully bright pattern onto the object surface to enhance the contrast between the object region and the non-object region. Define the object region as the foreground and the non-object region as the background according to a segmentation threshold T, and compute the between-class variance σ² of the two regions with formula (1). When σ² reaches its maximum, the corresponding threshold is T; segmenting the image with this threshold yields a mask image containing only the object region;
σ² = p_1(m_t − m_g)² + p_2(m_b − m_g)²  (1)
where p_1 is the probability that a pixel is classified as foreground; p_2 is the probability that a pixel is classified as background; m_t is the mean of the foreground pixels; m_g is the mean of the entire pattern; m_b is the mean of the background pixels;
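A minimal sketch of the formula (1) threshold selection, assuming 8-bit gray levels; `otsu_threshold` and the six-pixel example are illustrative, not from the patent:

```python
def otsu_threshold(pixels):
    """Choose the threshold T that maximizes the between-class variance
    of formula (1): sigma^2 = p1*(mt - mg)^2 + p2*(mb - mg)^2."""
    n = len(pixels)
    mg = sum(pixels) / n                      # mean of the whole pattern
    best_t, best_var = 0, -1.0
    for T in range(1, 256):                   # assume 8-bit gray levels
        fg = [p for p in pixels if p >= T]    # candidate foreground (object)
        bg = [p for p in pixels if p < T]     # candidate background
        if not fg or not bg:
            continue
        p1, p2 = len(fg) / n, len(bg) / n
        mt, mb = sum(fg) / len(fg), sum(bg) / len(bg)
        var = p1 * (mt - mg) ** 2 + p2 * (mb - mg) ** 2
        if var > best_var:
            best_var, best_t = var, T
    return best_t

# Two well-separated intensity clusters: the threshold lands between
# them, and the mask keeps only the bright (object) pixels.
pixels = [10, 12, 11, 200, 210, 205]
T = otsu_threshold(pixels)
mask = [1 if p >= T else 0 for p in pixels]
```

In practice the same per-pixel comparison is applied to the fully bright image, and the resulting mask restricts all later decoding to the object region.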
22) On the basis of the mask image, decode the Gray-code stripe patterns and the phase-shift stripe patterns: decode the phase-shift stripes with formula (2) to obtain the wrapped phase, decode the Gray-code stripe patterns with formula (3) to obtain a decimal sequence, and perform phase unwrapping with formula (4) by combining the Gray-code sequence with the wrapped phase;
φ(x,y) = arctan[ Σ_{n=1..4} I_n(x,y)·sin(2πn/4) / Σ_{n=1..4} I_n(x,y)·cos(2πn/4) ]  (2)
k(x,y) = Σ_{i=1..N} B_i(x,y)·2^(N−i)  (3)
Φ(x,y)=φ(x,y)+2πk(x,y) (4)
where I_n(x,y) is the n-th phase-shift fringe pattern; k(x,y) is the Gray-code decoding sequence; B_i(x,y) is the binary sequence; N is the number of Gray-code bits; i is the i-th Gray-code bit; Φ(x,y) is the absolute phase; φ(x,y) is the wrapped phase;
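A compact sketch of formulas (2) to (4), assuming four-step fringes of the form I_n = A + B·cos(φ − 2πn/4) with n indexed from 0, and a 3-bit Gray code word; all names and values are illustrative:

```python
import math

def wrapped_phase(I):
    """Formula (2): wrapped phase from a phase-shift sequence, assuming
    fringes I_n = A + B*cos(phi - 2*pi*n/N), n = 0..N-1."""
    N = len(I)
    num = sum(I[n] * math.sin(2 * math.pi * n / N) for n in range(N))
    den = sum(I[n] * math.cos(2 * math.pi * n / N) for n in range(N))
    return math.atan2(num, den)          # wrapped into (-pi, pi]

def gray_to_k(gray_bits):
    """Formula (3): Gray bits -> binary bits (B_i = B_{i-1} XOR G_i)
    -> decimal decoding sequence k."""
    k, prev = 0, 0
    for g in gray_bits:
        prev ^= g                        # current binary bit
        k = 2 * k + prev
    return k

def unwrap(phi, k):
    """Formula (4): absolute phase Phi = phi + 2*pi*k."""
    return phi + 2 * math.pi * k

# Synthetic pixel: true wrapped phase pi/3, background A = 128, modulation B = 100.
phi_true = math.pi / 3
I = [128 + 100 * math.cos(phi_true - 2 * math.pi * n / 4) for n in range(4)]
phi = wrapped_phase(I)                   # recovers pi/3
k = gray_to_k([0, 1, 1])                 # Gray 011 -> binary 010 -> k = 2
Phi = unwrap(phi, k)                     # pi/3 + 2*pi*2
```

The sign convention in `wrapped_phase` matches the cosine fringe assumed in the comment; a fringe defined with the opposite phase-shift sign flips the numerator's sign.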
23) Correct the unwrapping period. For isolated error points, an extra staggered Gray-code pattern whose finest stripe width is half that of the N-bit Gray code is projected in addition to the N traditional Gray-code patterns. The decoding sequence of the first N Gray codes is k_1, and the decoding sequence of the (N+1)-th Gray-code stripe is k_2; the boundaries of the k_2 sequence are staggered relative to those of the traditional decoding value k_1, so errors produced by boundary decoding are avoided, and formula (5) performs the period correction. If adjacent misaligned points remain, a median filter is applied with formula (6) to eliminate them;
Φ(x,y) = φ(x,y) + 2π·INT[(k_2(x,y)+1)/2],        φ(x,y) ≤ −π/2
Φ(x,y) = φ(x,y) + 2π·k_1(x,y),                   −π/2 < φ(x,y) < π/2
Φ(x,y) = φ(x,y) + 2π·{INT[(k_2(x,y)+1)/2] − 1},  φ(x,y) ≥ π/2    (5)
Φ_m(x,y) = medfilt2[Φ(x,y); s_x × s_y]  (6)
where Φ_m(x,y) is the absolute phase after median filtering; Φ(x,y) is the absolute phase before median filtering; s_x × s_y is the filter size; medfilt2[·] is the median-filter operator.
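The median filtering of formula (6) can be sketched in pure Python; `medfilt2` here is a minimal stand-in for the MATLAB-style operator named in the text, the 3×3 window and the toy phase map are illustrative, and windows are simply clipped at the border (one of several possible border conventions):

```python
import math
import statistics

def medfilt2(phase, s=3):
    """Formula (6): s-by-s median filter over a 2-D phase map;
    windows are clipped at the image border."""
    h, w, r = len(phase), len(phase[0]), s // 2
    out = [row[:] for row in phase]
    for y in range(h):
        for x in range(w):
            win = [phase[j][i]
                   for j in range(max(0, y - r), min(h, y + r + 1))
                   for i in range(max(0, x - r), min(w, x + r + 1))]
            out[y][x] = statistics.median(win)
    return out

# A single pixel unwrapped one period (2*pi) too high is pulled back
# to the level of its neighbours.
phase = [[0.5] * 3 for _ in range(3)]
phase[1][1] = 0.5 + 2 * math.pi
filtered = medfilt2(phase)
```

This removes exactly the isolated period-misalignment points that survive the staggered-Gray-code correction of formula (5).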
The concrete method of the third step is as follows:
transversely constraining the absolute phase means aligning the abscissae of the left and right absolute phase images to the epipolar lines by applying a series of position matrices and interpolation to the original absolute phase images; the required position matrices are obtained from the binocular camera calibration.
The concrete method of the fourth step is as follows:
41) Use the Gray-code decoding sequence to apply a longitudinal constraint to the left and right phase images so that equivalent phase points are searched more accurately. An N-bit Gray code coarsely divides the measured object into 2^N regions; after decoding, a sequence value k in the range 0 to 2^N − 1 is obtained. Within one period, the phase-shift stripe and the finest Gray-code stripe have the same number of pixels, so the phase-shift stripes further subdivide each of the 2^N regions;
42) The Gray-code value of each region is different, so on the basis of the transverse constraint a longitudinal constraint is formed using the uniqueness of the Gray-code decoding value, as shown in formula (7):
C = C_1 ∩ C_2,  C_1|(x_r, y_r): a·x_r + b·y_r + c < δ,  C_2: k_r(x_r, y_r) = k_l(x_l, y_l)  (7)
where C_1 is the transverse constraint region; C_2 is the longitudinal constraint region determined by the Gray-code decoding sequence; C is the common region of C_1 and C_2, i.e. the region where the initial search point lies; C_1|(x_r, y_r) denotes the points on the right epipolar line, with x_r the horizontal coordinate and y_r the vertical coordinate.
The epipolar-line condition is:
a·x_r + b·y_r + c < δ  (8)
where a and b are the coefficients of x_r and y_r in the epipolar-line equation and c is its constant term; δ is a small constant; k_r(C_2) is the region whose right Gray-code decoding value is k_r; k_l(x_l, y_l) is the Gray-code decoding value of the left-image point (x_l, y_l);
43) After the initial search point is determined, a search window is defined to reduce the noise introduced by the image-acquisition process; the window size is determined from the width of the finest Gray-code stripe. When searching for an equivalent phase point, the first pixel whose gray value is non-zero is selected in the left absolute phase map, the window is placed in the right absolute phase map, and the operation of formula (9) is applied between that point and the phase value at each position in the search window; a position satisfying the condition is the equivalent phase point. Each position in the matching window is given a different weight to reduce errors caused by phase jumps at edges;
|Φ_l(x,y) − Φ_r(x,y)| < I_set  (9)
where Φ_l(x,y) is the left absolute phase map; Φ_r(x,y) is the right absolute phase map; I_set is a set threshold. An absolute-phase pair satisfying the above formula is an equivalent phase point pair, denoted P_l(x_l, y_l), P_r(x_r, y_r).
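The constrained search of steps 41) to 43) can be illustrated with a deliberately simplified sketch: it applies only the longitudinal Gray-code constraint and the formula (9) threshold along one rectified row, omitting the weighting scheme; `find_equivalent_point` and the toy data are hypothetical:

```python
def find_equivalent_point(phi_left, left_pt, phi_right, k_right, k_val,
                          row, I_set=0.05):
    """Scan one rectified right-image row: keep only pixels whose
    Gray-code decode value equals k_val (longitudinal constraint C_2),
    then accept the first pixel satisfying |Phi_l - Phi_r| < I_set
    (formula 9)."""
    xl, yl = left_pt
    target = phi_left[yl][xl]
    for x, phi_r in enumerate(phi_right[row]):
        if k_right[row][x] != k_val:       # Gray-code value must match
            continue
        if abs(target - phi_r) < I_set:    # formula (9)
            return (x, row)
    return None

# Toy rectified row: the left point (x=2, y=0) has phase 3.20; only the
# pixels with Gray code k=1 are candidates, and x=1 matches within I_set.
phi_left = [[0.0, 0.0, 3.20, 0.0]]
phi_right = [[1.00, 3.19, 3.50, 5.00]]
k_right = [[0, 1, 1, 2]]
match = find_equivalent_point(phi_left, (2, 0), phi_right, k_right, 1, 0)
```

Restricting the scan to the k_val region is what makes the search faster than a global comparison of phase values along the whole row.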
The concrete method of the fifth step is as follows:
after the equivalent phase point pair P_l(x_l, y_l), P_r(x_r, y_r) has been found, the three-dimensional coordinates P(x, y, z) of the object are calculated using the binocular triangulation principle, where x = x_l, y = y_l, b and f are obtained from camera calibration, and z is obtained from formula (10):
z = b·f / (x_l − x_r)  (10)
where z is the distance from the object to the camera plane; x_l is the abscissa in the left absolute phase map; y_l is the ordinate in the left absolute phase map; x_r is the abscissa of the corresponding point in the right absolute phase map; b is the distance between the two camera lenses; f is the focal length.
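Under the assumption of rectified images (so the ordinates agree) and a disparity expressed in the same units as f, formula (10) is a one-liner; the numbers below are illustrative:

```python
def triangulate(xl, yl, xr, b, f):
    """Formula (10): z = b*f / (xl - xr) on rectified images; the
    reconstructed point keeps x = xl and y = yl as in the text."""
    z = b * f / (xl - xr)
    return (xl, yl, z)

# Illustrative numbers: baseline b = 100, focal length f = 8, and a
# disparity of 4 (all in consistent units) give z = 200.
P = triangulate(10.0, 5.0, 6.0, 100.0, 8.0)
```

In a real system the pixel disparity must first be converted to metric units (or f expressed in pixels) before this ratio is meaningful.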
The invention has the beneficial effects that:
the invention first uses a mask image to eliminate the non-stripe regions and reduce the computation; it then applies the phase-unwrapping method of phase shift combined with Gray code, after which the absolute phase can be obtained quickly and accurately; finally, equivalent phase point pairs are obtained with the Gray-code-assisted fast equivalent-phase search, which is faster than global search and the bidirectional stripe-constraint method.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained according to the drawings without inventive efforts.
FIG. 1 is a block diagram of the system of the present invention;
FIG. 2 is a schematic diagram of binocular triangulation distance measurement;
FIG. 3 is a schematic diagram of a phase unwrapping process;
FIG. 4 is a schematic diagram of the Gray code longitudinal constraint principle;
FIG. 5 is a schematic diagram of a left camera capture pattern;
FIG. 6 is a schematic diagram of a right camera capturing a pattern;
FIG. 7 is a schematic diagram of a left mask image;
FIG. 8 is a schematic diagram of a right mask image;
FIG. 9 is a schematic diagram of the left absolute phase;
FIG. 10 is a right absolute phase diagram;
FIG. 11 is a schematic view of a point cloud on the front side of a brake disc;
fig. 12 is a schematic diagram of a point cloud on the side surface of the brake disc.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, a gray code-assisted three-dimensional coordinate calculation method includes the following steps:
calibrating internal and external parameters of a binocular camera and capturing a structured light stripe pattern;
the specific method of the first step is as follows:
referring to fig. 2, the imaging geometric model of the binocular camera is established, and the binocular camera is calibrated to obtain its internal and external parameters, including: the lens center position, the focal length, the distortion coefficients and the rotation-translation matrix; after calibration, the structured-light stripe patterns are captured.
In the figure, P is a point on the object; x is the number of l Is the abscissa of P at the left camera; x is a radical of a fluorine atom r P is the abscissa of the right camera; z is the distance from the point P to the left and right camera planes; o is l Is the optical center of the left camera; o is r Is the optical center of the right camera; f is the focal length; and b is the distance between the two lenses.
Referring to fig. 3, in the second step, a mask image is calculated for the structured light stripe pattern captured by the binocular camera, and the mask image is used to realize fast phase expansion to obtain an absolute phase.
The method uses phase shift combined with Gray code to realize fast phase unwrapping, in the following steps: generate a mask image containing only the region of the measured object, perform phase unwrapping on the basis of the mask image, and correct pixels with period misalignment using staggered Gray codes and median filtering. The method specifically comprises the following steps:
1) Additionally project a fully bright pattern onto the object surface to enhance the contrast between the object and the background. Define the stripe region as the foreground and the non-stripe region as the background according to a segmentation threshold T, and compute the between-class variance σ² of the two regions with formula (11). When σ² reaches its maximum, the corresponding threshold is T; segmenting the image with this threshold yields a mask image containing only the object region;
σ² = p_1(m_t − m_g)² + p_2(m_b − m_g)²  (11)
where p_1 is the probability that a pixel is classified as foreground; p_2 is the probability that a pixel is classified as background; m_t is the mean of the foreground pixels; m_g is the mean of the entire pattern; m_b is the mean of the background pixels;
2) On the basis of the mask image, decode the Gray-code stripe patterns and the phase-shift stripe patterns: decode the phase-shift stripe patterns with formula (12) to obtain the wrapped phase, decode the Gray-code stripe patterns with formula (13) to obtain a decimal sequence, and perform phase unwrapping with formula (14) by combining the Gray-code sequence with the wrapped phase;
φ(x,y) = arctan[ Σ_{n=1..4} I_n(x,y)·sin(2πn/4) / Σ_{n=1..4} I_n(x,y)·cos(2πn/4) ]  (12)
k(x,y) = Σ_{i=1..N} B_i(x,y)·2^(N−i)  (13)
Φ(x,y)=φ(x,y)+2πk(x,y) (14)
where I_n(x,y) is the n-th phase-shift fringe pattern; k(x,y) is the Gray-code decoding sequence; B_i(x,y) is the binary sequence; N is the number of Gray-code bits; i is the i-th Gray-code bit; Φ(x,y) is the absolute phase; φ(x,y) is the wrapped phase;
3) Correct the unwrapping period. For isolated error points, a staggered Gray-code stripe pattern whose finest stripe width is half that of the N-bit Gray code is additionally projected on the basis of the N traditional Gray-code patterns. The decoding sequence of the first N Gray codes is k_1, and the decoding sequence of the (N+1)-th Gray-code stripe is k_2; the boundaries of the k_2 sequence are staggered relative to those of the traditional decoding value k_1, so errors caused by boundary decoding are avoided, and formula (15) performs the period correction. Adjacent error points arise because the finer staggered Gray-code stripes are more prone to decoding errors at the black-white edges; when the error width exceeds half the phase-shift stripe period, the complementary Gray code fails. If adjacent misaligned points remain, a median filter is applied with formula (16) to eliminate them;
Φ(x,y) = φ(x,y) + 2π·INT[(k_2(x,y)+1)/2],        φ(x,y) ≤ −π/2
Φ(x,y) = φ(x,y) + 2π·k_1(x,y),                   −π/2 < φ(x,y) < π/2
Φ(x,y) = φ(x,y) + 2π·{INT[(k_2(x,y)+1)/2] − 1},  φ(x,y) ≥ π/2    (15)
Φ_m(x,y) = medfilt2[Φ(x,y); s_x × s_y]  (16)
where Φ_m(x,y) is the absolute phase after median filtering; Φ(x,y) is the absolute phase before median filtering; s_x × s_y is the filter size; medfilt2[·] is the median-filter operator.
Through the above processing, the absolute phase map containing only the region of the measured object and the Gray-code decoding sequence are finally obtained, as shown by "final absolute phase" in fig. 3.
Thirdly, carrying out transverse constraint on the absolute phase;
in order to search for equivalent phase points quickly and accurately, the left and right absolute phases must be transversely constrained: the abscissae of the left and right absolute phase images are aligned to the epipolar lines by applying a series of position matrices and interpolation to the original absolute phase images; the required position matrices are obtained from the binocular camera calibration.
Step four, gray code assisted searching of the equivalent phase point;
4) The Gray-code decoding sequence is used to apply a longitudinal constraint to the left and right phase images so that equivalent phase points are searched more accurately. The N-bit Gray code coarsely divides the measured object into 2^N regions; after decoding, a sequence value k in the range 0 to 2^N − 1 is obtained. Within one period, the phase-shift stripe and the finest Gray-code stripe have the same number of pixels, so the phase-shift stripes further subdivide each of the 2^N regions;
5) The Gray-code value of each region is different, so on the basis of the transverse constraint a longitudinal constraint is formed using the uniqueness of the Gray-code decoding value, as shown by the dotted-line region in fig. 4; the absolute phase and Gray-code decoding sequence in fig. 4 satisfy formula (17):
C = C_1 ∩ C_2,  C_1|(x_r, y_r): a·x_r + b·y_r + c < δ,  C_2: k_r(x_r, y_r) = k_l(x_l, y_l)  (17)
where C_1 is the transverse constraint region; C_2 is the longitudinal constraint region determined by the Gray-code decoding sequence; C is the common region of C_1 and C_2, i.e. the region where the initial search point lies; C_1|(x_r, y_r) denotes the points on the right epipolar line, with x_r the horizontal coordinate and y_r the vertical coordinate. The epipolar-line condition is:
a·x_r + b·y_r + c < δ  (18)
where a and b are the coefficients of x_r and y_r in the epipolar-line equation and c is its constant term; δ is a small constant; k_r(C_2) is the region whose right Gray-code decoding value is k_r; k_l(x_l, y_l) is the Gray-code decoding value of the left-image point (x_l, y_l).
6) After the initial search point is determined, a search window is defined to reduce the noise introduced by the image-acquisition process; the window size is determined from the width of the finest Gray-code stripe. When searching for an equivalent phase point, taking the left absolute phase map as an example, the first pixel whose gray value is non-zero is selected, the window is placed in the right phase map, and the operation of formula (19) is applied between that point and the phase value at each position in the search window; a position satisfying the condition is the equivalent phase point. Each position in the matching window is given a different weight to reduce errors caused by phase jumps at edges;
|Φ_l(x,y) − Φ_r(x,y)| < I_set  (19)
where Φ_l(x,y) is the left absolute phase map; Φ_r(x,y) is the right absolute phase map; I_set is a set threshold. An absolute-phase pair satisfying the above formula is an equivalent phase point pair, denoted P_l(x_l, y_l), P_r(x_r, y_r).
And step five, calculating the three-dimensional coordinates of the object by utilizing a triangulation distance measuring principle after the equivalent phase point is searched.
After the equivalent phase point pair P_l(x_l, y_l), P_r(x_r, y_r) has been found, the three-dimensional coordinates P(x, y, z) of the object are calculated using the binocular triangulation principle, where x = x_l, y = y_l, b and f are obtained from camera calibration, and z is obtained from formula (20):
because the triangles are similar, we have:
(b − (x_l − x_r)) / b = (z − f) / z
therefore the distance z of the object from the camera plane is:
z = b·f / (x_l − x_r)  (20)
where z is the distance from the point P to the left and right camera planes; x_l is the abscissa of P in the left camera; y_l is the ordinate of P in the left camera; b is the distance between the two lenses; f is the focal length.
Examples
The embodiment combines an automobile brake disc to carry out example verification on the method:
referring to fig. 5 and 6, the partial stripe patterns captured by the left and right cameras include phase shift stripe patterns and gray code stripe patterns.
The method uses 12 stripe patterns in total: 4 phase-shift stripe patterns, 6 traditional Gray-code patterns, 1 staggered Gray-code pattern and 1 fully bright pattern.
In order to better distinguish the brake disc from the background pixels, 1 fully bright pattern is used to light up the brake-disc area; the mask image is then generated with the threshold-segmentation method provided by the invention. The mask image contains only the pixels of the brake-disc area, which reduces the computation and provides the conditions for the subsequent fast phase unwrapping.
Gray code decoding, phase shift decoding, and period correction are performed on pixels in the mask image, and the obtained absolute phase is as shown in fig. 9 and 10.
First, the average gray value of the 4 phase-shift patterns is calculated and set as the binarization threshold for the Gray-code stripes; then the binarized Gray-code stripes are combined by XOR operations to obtain a binary value, which is converted to decimal to complete the Gray-code decoding.
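The decoding pipeline just described (mean of the four phase-shift images as threshold, XOR to binary, binary combined into decimal) might look like the following sketch; `decode_gray_stack` and the 1×2 toy images are illustrative, not from the patent:

```python
def decode_gray_stack(gray_imgs, phase_imgs):
    """Per-pixel decoding as described above: the mean of the four
    phase-shift images serves as the binarization threshold, successive
    XORs turn the Gray bits into binary bits, and the bits are combined
    into a decimal fringe order."""
    h, w = len(gray_imgs[0]), len(gray_imgs[0][0])
    k = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            thr = sum(img[y][x] for img in phase_imgs) / len(phase_imgs)
            b = 0
            for img in gray_imgs:                  # most significant bit first
                g = 1 if img[y][x] > thr else 0    # binarize this Gray stripe
                b = (b << 1) | ((b & 1) ^ g)       # B_i = B_{i-1} XOR G_i
            k[y][x] = b
    return k

# Toy 1x2 image, two Gray-code patterns, four flat phase-shift images
# whose mean (100) acts as the threshold.
phase_imgs = [[[100, 100]] for _ in range(4)]
gray_imgs = [[[150, 40]], [[150, 150]]]
orders = decode_gray_stack(gray_imgs, phase_imgs)   # Gray 11 -> 2, Gray 01 -> 1
```

Using the phase-shift average as the threshold is attractive because it adapts to the local albedo of the brake disc without extra reference images.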
the wrapping phase is obtained by adopting a formula (12) for the 4 phase-shift fringe patterns, then the gray code and the wrong gray code decoding sequence are calculated by utilizing a formula (13), and absolute phases which are in linear change are obtained by utilizing formulas (14), (15) and (16).
Using the absolute phases shown in fig. 9 and 10, the Gray-code decoding sequence is used to perform the longitudinally constrained search for equivalent phase points; the searched equivalent phase point pairs are combined with the triangulation principle to obtain the three-dimensional coordinates of the brake disc, displayed as the point clouds shown in fig. 11 and 12.
As can be seen from the point cloud in fig. 11, the point cloud is dense and complete, indicating that the method described here recovers the three-dimensional coordinates of the object well; the height of the object can be read from the side point cloud in fig. 12, and it differs from the actual height by only 0.1 mm, demonstrating the accuracy of the method.
Although the preferred embodiments of the present invention have been described in detail with reference to the accompanying drawings, the scope of the present invention is not limited to the specific details of the above embodiments, and any person skilled in the art can substitute or change the technical solution of the present invention and its inventive concept within the technical scope of the present invention, and these simple modifications belong to the scope of the present invention.
It should be noted that, in the above embodiments, the various features described in the above embodiments may be combined in any suitable manner, and in order to avoid unnecessary repetition, the present invention does not separately describe various possible combinations.
In addition, any combination of the various embodiments of the present invention is also possible, and the same should be considered as the disclosure of the present invention as long as it does not depart from the spirit of the present invention.

Claims (5)

1. A three-dimensional coordinate calculation method based on Gray code assistance is characterized by comprising the following steps:
step one, calibrating the internal and external parameters of a binocular camera and capturing the structured-light stripe patterns;
step two, calculating a mask image from the structured-light stripe patterns captured by the binocular camera, and using the mask image to realize fast phase unwrapping to obtain the absolute phase;
step three, transversely constraining the absolute phase;
step four, Gray-code-assisted searching of equivalent phase points;
step five, after the equivalent phase points are found, calculating the three-dimensional coordinates of the object using the triangulation principle;
the specific method of the second step is as follows:
21) additionally projecting a fully illuminated pattern onto the surface of the object to enhance the contrast between the object region and the non-object region; defining the object region as the foreground and the non-object region as the background according to a segmentation threshold T, and calculating the between-class variance σ^2 of the two regions using formula (1); the threshold at which σ^2 reaches its maximum is the segmentation threshold T, and the image is segmented with this threshold to obtain a mask image containing only the object region;
σ^2 = p_1(m_t − m_g)^2 + p_2(m_b − m_g)^2 (1)
wherein p_1 is the probability that a pixel is classified as foreground; p_2 is the probability that a pixel is classified as background; m_t is the mean of the foreground pixels; m_g is the mean of the entire pattern; m_b is the mean of the background pixels;
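For illustration (not part of the claims), the between-class variance maximization of step 21) is the classical Otsu method. A minimal sketch, assuming 8-bit images; the function name and the synthetic image are hypothetical:

```python
import numpy as np

def otsu_threshold(image):
    """Return the threshold T that maximizes formula (1):
    sigma^2 = p_1*(m_t - m_g)^2 + p_2*(m_b - m_g)^2."""
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    prob = hist / hist.sum()
    levels = np.arange(256)
    m_g = (levels * prob).sum()            # mean of the entire pattern
    best_t, best_var = 0, 0.0
    for t in range(1, 256):
        p2 = prob[:t].sum()                # background probability
        p1 = 1.0 - p2                      # foreground probability
        if p1 == 0 or p2 == 0:
            continue
        m_b = (levels[:t] * prob[:t]).sum() / p2   # background mean
        m_t = (levels[t:] * prob[t:]).sum() / p1   # foreground mean
        var = p1 * (m_t - m_g) ** 2 + p2 * (m_b - m_g) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Bimodal synthetic image: dark background (gray 20), bright object (gray 200).
img = np.concatenate([np.full(300, 20), np.full(100, 200)]).astype(np.uint8)
T = otsu_threshold(img)
mask = (img >= T).astype(np.uint8)   # 1 = object region, 0 = background
```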
22) guiding the decoding of the Gray code stripe patterns and the phase shift stripe patterns with the non-zero region of the mask image; decoding the phase shift stripes using formula (2) to obtain the wrapped phase, decoding the Gray code stripe patterns using formula (3) to obtain a decimal sequence, and performing phase unwrapping by combining the Gray code sequence with the wrapped phase using formula (4);
φ(x, y) = arctan[ Σ_{n=1}^{N} I_n(x, y) sin(2πn/N) / Σ_{n=1}^{N} I_n(x, y) cos(2πn/N) ] (2)
k(x, y) = Σ_{i=1}^{N} B_i(x, y) · 2^{N−i} (3)
Φ(x,y)=φ(x,y)+2πk(x,y) (4)
wherein I_n(x, y) is the n-th of the N phase-shifted fringe patterns; k(x, y) is the Gray code decoding value; B_i(x, y) is the binary sequence converted from the Gray code, i denoting the i-th Gray code bit; Φ(x, y) is the absolute phase; φ(x, y) is the wrapped phase;
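A compact sketch of step 22) under common conventions; the exact forms of formulas (2) and (3) are not legible in this text, so the N-step arctangent and the Gray-to-binary prefix-XOR used below are assumptions, and all names are illustrative:

```python
import numpy as np

def wrapped_phase(fringes):
    """Formula (2)-style wrapped phase from N phase-shifted fringe images,
    assuming I_n(x, y) = a + b*cos(phi - 2*pi*n/N), n = 1..N."""
    N = len(fringes)
    n = np.arange(1, N + 1).reshape(-1, 1, 1)
    num = (fringes * np.sin(2 * np.pi * n / N)).sum(axis=0)
    den = (fringes * np.cos(2 * np.pi * n / N)).sum(axis=0)
    return np.arctan2(num, den)          # phi(x, y) in (-pi, pi]

def gray_to_decimal(bits):
    """Formula (3)-style decoding: stacked Gray code bit planes B_i
    -> decimal sequence k, via prefix XOR (Gray -> plain binary)."""
    b = np.zeros(bits.shape, dtype=int)
    b[0] = bits[0]
    for i in range(1, len(bits)):
        b[i] = np.logical_xor(b[i - 1], bits[i])
    weights = 2 ** np.arange(len(bits) - 1, -1, -1)
    return np.tensordot(weights, b, axes=1)

def unwrap(phi, k):
    """Formula (4): Phi(x, y) = phi(x, y) + 2*pi*k(x, y)."""
    return phi + 2 * np.pi * k
```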
23) correcting the unwrapping period; for isolated error points, an additional complementary Gray code stripe pattern, whose finest stripe width is half that of the N-bit Gray code, is projected on top of the N-bit conventional Gray code; the decoding sequence of the first N Gray codes is k_1 and that of the (N+1)-th Gray code stripe is k_2; the codewords of k_2 are staggered relative to the conventional decoding value k_1 at the period boundaries, so that errors produced by boundary decoding are avoided, and the period is corrected using formula (5); if adjacent misordered points remain, a median filter, formula (6), is used to eliminate the residual misordered points;
Φ(x, y) = φ(x, y) + 2π·floor((k_2 + 1)/2), φ(x, y) ≤ π/2
Φ(x, y) = φ(x, y) + 2π·k_1, π/2 < φ(x, y) < 3π/2
Φ(x, y) = φ(x, y) + 2π·(floor((k_2 + 1)/2) − 1), φ(x, y) ≥ 3π/2 (5)
Φ_m(x, y) = medfilt2[Φ(x, y); s_x × s_y] (6)
wherein Φ_m(x, y) is the absolute phase after median filtering; Φ(x, y) is the absolute phase before median filtering; s_x × s_y is the filter size; medfilt2[·] is the median filtering operator.
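As an illustration (not part of the claims): the formula (5) image is not legible here, so the three-segment complementary Gray code rule below follows the form in the cited non-patent reference by Zheng et al. and is an assumption; it presumes the wrapped phase has been mapped to [0, 2π). Formula (6) is a plain 2-D median filter:

```python
import numpy as np

def correct_period(phi, k1, k2):
    """Period correction in the spirit of formula (5), assuming phi in [0, 2*pi):
    near the period boundaries the staggered order k2 is trusted instead of k1."""
    v2 = np.floor((k2 + 1) / 2)
    return np.where(phi <= np.pi / 2, phi + 2 * np.pi * v2,
           np.where(phi >= 3 * np.pi / 2, phi + 2 * np.pi * (v2 - 1),
                    phi + 2 * np.pi * k1))

def medfilt2(a, sx=3, sy=3):
    """Formula (6): s_x-by-s_y median filter removing residual misordered points."""
    px, py = sx // 2, sy // 2
    padded = np.pad(a, ((px, px), (py, py)), mode='edge')
    out = np.empty_like(a, dtype=float)
    for i in range(a.shape[0]):
        for j in range(a.shape[1]):
            out[i, j] = np.median(padded[i:i + sx, j:j + sy])
    return out
```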
2. The method for calculating three-dimensional coordinates based on gray code assistance according to claim 1, wherein the specific method of the first step is as follows:
establishing an imaging geometric model of the binocular camera and calibrating it to obtain the internal and external parameters of the cameras, including: the optical center position, focal length and distortion coefficients of each camera lens, and the rotation-translation matrix; after calibration, capturing the structured light stripe patterns.
3. The method for calculating three-dimensional coordinates based on gray code assistance according to claim 1, wherein the specific method in the third step is as follows:
constraining the absolute phase transversely, that is, aligning the abscissas of the left and right absolute phase maps to the epipolar lines by applying a series of position matrices and interpolation to the original absolute phase maps; the required position matrices are obtained from the binocular camera calibration.
4. The method for calculating three-dimensional coordinates based on gray code assistance according to claim 1, wherein the specific method in the fourth step is as follows:
41) constraining the left and right absolute phase maps longitudinally with the Gray code decoding sequence and accurately searching for the equivalent phase points; since an N-bit Gray code coarsely divides the measured object into 2^N regions, decoding yields a sequence value k in the range 0 to 2^N − 1; within one period, the phase shift stripe has the same number of pixel points as the finest Gray code stripe, so the phase shift stripes subdivide each of the 2^N regions;
42) since the Gray code value of each region is different, the uniqueness of the Gray code decoding value is used to impose a longitudinal constraint on top of the transverse constraint, as shown in formula (7):
C = C_1 ∩ C_2, with C_2 determined by k_r(x_r, y_r) = k_l(x_l, y_l) (7)
wherein C_1 is the transverse constraint region; C_2 is the longitudinal constraint region determined by the Gray code decoding sequence; C is the common region of C_1 and C_2, i.e. the region containing the initial search point; C_1 | (x_r, y_r) denotes the points on the right epipolar line, with abscissa x_r and ordinate y_r;
the epipolar line satisfies the equation:
a·x_r + b·y_r + c < δ (8)
wherein a and b are the coefficients of the epipolar line equation and c is its constant term; δ is a small constant; k_r(C_2) is the region of the right image whose Gray code decoding value is k_r; k_l(x_l, y_l) is the Gray code decoding value of the left image point (x_l, y_l);
43) after the initial search point is determined, a search window is defined to reduce the noise introduced during image acquisition, the window size dividing the width of the finest Gray code stripe; when searching for the equivalent phase point, the first pixel whose gray value is non-zero is selected and the window is placed in the relative phase map; the operation of formula (9) is performed between this point and the phase value at each position in the search window, and when the condition is met the equivalent phase point is obtained; each position in the matching window is given a different weight to reduce the error caused by phase jumps at edges;
|Φ_l(x, y) − Φ_r(x, y)| < I_set (9)
wherein Φ_l(x, y) is the left absolute phase map; Φ_r(x, y) is the right absolute phase map; I_set is a set threshold; an absolute phase pair satisfying the above formula is an equivalent phase point pair, denoted P_l(x_l, y_l), P_r(x_r, y_r).
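An illustrative sketch (not part of the claims) of the constrained search of steps 41)-43), simplified to rectified images, so the transverse constraint reduces to searching one row, and omitting the per-position weighting; all names and the synthetic data are hypothetical:

```python
import numpy as np

def find_equivalent_point(phi_l, phi_r, k_l, k_r, pt, i_set=0.05):
    """Search the right image for the equivalent phase point of left point pt.

    Rectification (the transverse constraint) limits the search to the same
    row; the Gray code order k (the longitudinal constraint, formula (7))
    limits it to one stripe period; formula (9) then compares absolute phases.
    """
    y, x = pt
    candidates = np.where(k_r[y] == k_l[y, x])[0]     # region C = C_1 ∩ C_2
    for xr in candidates:
        if abs(phi_l[y, x] - phi_r[y, xr]) < i_set:   # formula (9)
            return (y, int(xr))
    return None

# Synthetic rectified phase maps with a uniform disparity of 2 pixels.
base = np.arange(10, dtype=float)
phi_l = np.tile(base, (3, 1))
phi_r = np.tile(base + 2, (3, 1))      # right column xr sees left column xr + 2
k_l = (phi_l // 2).astype(int)         # stand-in for the Gray code order
k_r = (phi_r // 2).astype(int)
match = find_equivalent_point(phi_l, phi_r, k_l, k_r, (0, 5))
```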
5. The method for calculating three-dimensional coordinates based on gray code assistance according to claim 1, wherein the specific method in the fifth step is as follows:
after the equivalent phase point pair P_l(x_l, y_l), P_r(x_r, y_r) has been found, the three-dimensional coordinates P(x, y, z) of the object are calculated using the binocular triangulation principle, where x = x_l, y = y_l, b and f are obtained from the camera calibration, and z is obtained from formula (10):
z = b·f / (x_l − x_r) (10)
wherein z is the distance from the object to the camera plane; x_l is the abscissa of the left absolute phase map; y_l is the ordinate of the left absolute phase map; x_r is the abscissa of the matched point in the right absolute phase map; b is the baseline distance between the two camera lenses; f is the focal length.
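Assuming formula (10) is the standard disparity relation z = b·f/(x_l − x_r), which is consistent with the symbols defined above although the formula image itself is not legible here, step five reduces to a one-line computation per matched pair; the function name and numbers are illustrative:

```python
def triangulate(p_l, p_r, b, f):
    """Binocular triangulation of a matched pair: x = x_l, y = y_l,
    z = b * f / (x_l - x_r), with baseline b and focal length f
    expressed in consistent units."""
    (x_l, y_l), (x_r, _) = p_l, p_r
    disparity = x_l - x_r
    if disparity == 0:
        raise ValueError("zero disparity: point at infinity")
    return (x_l, y_l, b * f / disparity)

# Example: baseline 60, focal length 8, disparity 10 pixels.
x, y, z = triangulate((100.0, 50.0), (90.0, 50.0), b=60.0, f=8.0)
```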
CN202110798464.1A 2021-07-15 2021-07-15 Gray code-assisted three-dimensional coordinate calculation method Active CN113506348B (en)


Publications (2)

Publication Number Publication Date
CN113506348A CN113506348A (en) 2021-10-15
CN113506348B true CN113506348B (en) 2023-02-28

Family

ID=78013355


Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114739322B (en) * 2022-06-09 2022-09-16 广东工业大学 Three-dimensional measurement method, equipment and storage medium
CN116320357A (en) * 2023-05-17 2023-06-23 浙江视觉智能创新中心有限公司 3D structured light camera system, method, electronic device and readable storage medium
CN117523106A (en) * 2023-11-24 2024-02-06 广州市斯睿特智能科技有限公司 Three-dimensional reconstruction method, system, equipment and medium for monocular structured light

Citations (6)

Publication number Priority date Publication date Assignee Title
CN104835158A (en) * 2015-05-05 2015-08-12 中国人民解放军国防科学技术大学 3D point cloud acquisition method based on Gray code structure light and polar constraints
CN107607040A (en) * 2017-08-11 2018-01-19 天津大学 A kind of three-dimensional scanning measurement device and method suitable for High Reflective Surface
CN110285775A (en) * 2019-08-02 2019-09-27 四川大学 Three-dimensional rebuilding method and system based on structure photoperiod coding pattern
WO2020168094A1 (en) * 2019-02-15 2020-08-20 Nikon Corporation Simultaneous depth profile and spectral measurement
CN112347882A (en) * 2020-10-27 2021-02-09 中德(珠海)人工智能研究院有限公司 Intelligent sorting control method and intelligent sorting control system
CN112967205A (en) * 2021-03-25 2021-06-15 苏州天准科技股份有限公司 Gray code filter-based outlier correction method, storage medium, and system

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
US10360693B2 (en) * 2017-03-01 2019-07-23 Cognex Corporation High speed structured light system
CN106931910B (en) * 2017-03-24 2019-03-05 南京理工大学 A kind of efficient acquiring three-dimensional images method based on multi-modal composite coding and epipolar-line constraint
CN110686599B (en) * 2019-10-31 2020-07-03 中国科学院自动化研究所 Three-dimensional measurement method, system and device based on colored Gray code structured light


Non-Patent Citations (4)

Title
Period-Wise Phase Unwrapping Method With Two Gray Level Coding Patterns; Bolin Cai et al.; IEEE Photonics Journal; April 2021; vol. 13, no. 2; pp. 1-14 *
Phase-shifting profilometry combined with Gray-code patterns projection: unwrapping error removal by an adaptive median filter; Dongliang Zheng et al.; Optics Express; 6 March 2017; vol. 25, no. 5; pp. 4700-4713 *
Fast phase unwrapping with four-step phase shifting combined with complementary Gray code; Li Yang et al.; Laser Journal; 2 September 2021; vol. 43, no. 02; pp. 36-41 *
Research on binocular stereo vision measurement based on the combination of Gray code and phase shifting; Liu Xiaohui; China Master's Theses Full-text Database, Information Science and Technology; 15 April 2012; I138-2256 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant