CN111582310B - Decoding method and device of implicit structured light


Info

Publication number
CN111582310B
Authority
CN
China
Prior art keywords
stripe
point
pixel point
output image
fringe
Prior art date
Legal status
Active
Application number
CN202010268291.8A
Other languages
Chinese (zh)
Other versions
CN111582310A (en)
Inventor
谢翔
薛嘉雯
李国林
麦宋平
王志华
Current Assignee
Tsinghua University
Shenzhen International Graduate School of Tsinghua University
Original Assignee
Tsinghua University
Shenzhen International Graduate School of Tsinghua University
Application filed by Tsinghua University and Shenzhen International Graduate School of Tsinghua University
Priority to CN202010268291.8A
Priority to PCT/CN2020/086096 (published as WO2021203488A1)
Priority to US16/860,737 (published as US11238620B2)
Publication of CN111582310A
Application granted
Publication of CN111582310B


Classifications

    • G06F18/2415: Classification techniques relating to the classification model based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate (G PHYSICS > G06 COMPUTING; CALCULATING OR COUNTING > G06F ELECTRIC DIGITAL DATA PROCESSING > G06F18/00 Pattern recognition > G06F18/20 Analysing > G06F18/24 Classification techniques)
    • G06V40/28: Recognition of hand or arm movements, e.g. recognition of deaf sign language (G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING > G06V40/00 Recognition of biometric, human-related or animal-related patterns > G06V40/20 Movements or behaviour, e.g. gesture recognition)
    • G06F18/24: Classification techniques (G06F18/00 Pattern recognition > G06F18/20 Analysing)
    • G06T9/20: Contour coding, e.g. using detection of edges (G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL > G06T9/00 Image coding)
    • G06V10/145: Illumination specially adapted for pattern recognition, e.g. using gratings (G06V10/00 Arrangements for image or video recognition or understanding > G06V10/14 Optical characteristics of the device performing the acquisition or on the illumination arrangements)
    • G06V10/40: Extraction of image or video features (G06V10/00 Arrangements for image or video recognition or understanding)

Abstract

The invention provides a decoding method and device for implicit structured light. The method comprises: traversing the image captured by the camera to obtain the gray value and ideal neighborhood gray distribution of each pixel point; completing stripe extraction from the gray values and ideal neighborhood gray distributions in combination with a preset output image, and outputting the updated output image; classifying the stripe center points in the updated output image to obtain a stripe classification result; determining, from the stripe classification result, the correspondence between the stripes in the updated output image and the stripes in the structured-light image; and decoding all stripe center points with an epipolar constraint method combined with that correspondence. The scheme decodes the implicit structured-light image efficiently and robustly while maintaining accuracy.

Description

Decoding method and device of implicit structured light
Technical Field
The present invention relates to the field of decoding technologies, and in particular to a method and an apparatus for decoding implicit structured light.
Background
Projection-based human-computer interaction technology generally relies on a projector and a camera to recognize hand movements in the projection area and thereby exchange information. A projection-based human-computer interaction system exploits the persistence of vision of the human eye: a projector projects implicit structured light at high speed (generally above 120 Hz). Structured light is a technique that simplifies the search for matching pixels by projecting a specific texture into the measured space; here it is hidden in the odd and even frames of the projection interface. A camera synchronously captures images of the odd and even frames, the two frames are differenced to recover the structured-light information (i.e., to extract the stripes), and this information is finally used to measure, accurately and in real time, the positions of the hand and fingertips and of the projection surface during interaction, enabling accurate judgment of gestures and fingertip touches in projection-based human-computer interaction.
Because of its real-time requirement, such a projection-based human-computer interaction system cannot use time-coded structured-light images and must adopt spatially coded structured light (structured light whose spatial pattern distribution is mathematically encoded). Among spatial coding schemes, common color coding makes the projection interface flicker because of the projector's time-division projection principle, while other coding schemes weaken the implicit effect because of their pattern complexity. Given the harsh requirements of the implicit structured-light technique itself, decoding must therefore use patterns that are as simple as possible, such as unidirectional stripes. However, such simple patterns carry less information than general patterns and are difficult to decode accurately.
Against this background, the prior art offers several methods for extracting the stripes of a structured-light image: the gravity center method (finding the center of gravity of the gray distribution), the geometric center method (finding the boundary points of a stripe and taking their midpoint as the geometric center), Gaussian fitting (taking the extreme point of the Gaussian fit), the Steger algorithm (using the eigenvector of the Hessian eigenvalue with the largest absolute value to give the normal direction at each point, then performing a second-order Taylor expansion of the gray distribution along that normal to obtain the stripe center), and so on. The gravity center and geometric center methods use only a simple gray relation and are very easily affected by noise. Gaussian fitting uses the gray distribution characteristic of the stripe center, but the fitting is not robust and is extremely sensitive to noise. The Steger algorithm is robust, but it handles stripe edges poorly (leaving many noise points), and experiments show that its proposed threshold calculation is not ideal when extracting noisy implicit structured-light stripes, losing a large number of stripes. For non-offset stripes with little noise these methods perform similarly; but for implicit stripes that are projected onto the interacting hand and become offset, the differing reflection characteristics of the hand introduce a great deal of noise, so extracting the centers of such implicit stripes incurs large errors. These errors propagate into the position measurement of the hand and fingertips and degrade the system's judgment of gestures and fingertip touches. In addition, because the projection-based human-computer interaction system uses stripes as the structured-light image, the stripes projected onto the hand shift by different amounts, and may even become misaligned or lost, as the height of the hand varies, which makes determining the correspondence between the offset stripes and the stripes in the structured-light image especially difficult.
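For concreteness, here is a minimal sketch of the simplest of these baselines, the gravity center method, applied to one image column crossing a horizontal stripe; the function name, threshold parameter, and NumPy realization are illustrative, not taken from the patent:

```python
import numpy as np

def gravity_center(column: np.ndarray, y_th: int = 30) -> float | None:
    """Stripe center in one image column, taken as the gray-weighted
    centroid of the pixels brighter than a background threshold."""
    idx = np.flatnonzero(column > y_th)          # candidate stripe pixels
    if idx.size == 0:
        return None                              # no stripe crosses this column
    w = column[idx].astype(np.float64)           # gray values as weights
    return float(np.sum(idx * w) / np.sum(w))    # center of gravity (row index)
```

Because every bright pixel contributes to the centroid, a single noise pixel shifts the result, which is exactly the noise sensitivity criticized above.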
Therefore, at present, there is no mature robust solution for decoding such simple stripes.
Disclosure of Invention
The embodiments of the present invention provide a decoding method and a decoding device for implicit structured light, which solve the technical problems that stripe extraction in the existing implicit structured-light technology is not robust and that no mature, efficient stripe pattern matching method exists.
The embodiment of the invention provides a decoding method of implicit structured light, which comprises the following steps:
traversing the shot image, acquiring the gray value of each pixel point, and constructing ideal neighborhood gray distribution of each pixel point in the direction perpendicular to the stripes according to the gray value of each pixel point;
traversing the shot image, searching all fringe central points in the shot image according to the gray value of each pixel point and the ideal neighborhood gray distribution in combination with a preset output image, and updating the preset output image according to the searched fringe central point when one fringe central point is searched;
classifying the stripe center points in the updated output image to obtain a stripe classification result;
determining the corresponding relation between the stripes in the updated output image and the stripes in the structured light image according to the stripe classification result;
and decoding the central point of the stripe in the updated output image by using an epipolar constraint method and combining the corresponding relation between the stripe in the updated output image and the stripe in the structured light image.
The embodiment of the invention also provides a decoding device for the hidden structured light, which comprises:
the traversal module is used for traversing the shot image, acquiring the gray value of each pixel point, and constructing ideal neighborhood gray distribution of each pixel point in the direction perpendicular to the stripes according to the gray value of each pixel point;
the stripe extraction module is used for traversing the shot image, searching all stripe central points in the shot image according to the gray value of each pixel point and ideal neighborhood gray level distribution in combination with a preset output image, and updating the preset output image according to the searched stripe central point when one stripe central point is searched;
the stripe classification module is used for classifying stripe central points in the updated output image to obtain a stripe classification result;
the corresponding relation determining module is used for determining the corresponding relation between the stripes in the updated output image and the stripes in the structured light image according to the stripe classification result;
and the decoding module is used for decoding the central point of the stripe in the updated output image by utilizing an epipolar constraint method and combining the corresponding relation between the stripe in the updated output image and the stripe in the structured light image.
The embodiment of the present invention further provides a computer device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and the processor implements the method when executing the computer program.
The embodiment of the invention also provides a computer readable storage medium, and the computer readable storage medium stores a computer program for executing the method.
In one embodiment, an ideal neighborhood gray distribution is established according to the gray distribution characteristics of stripe center points in the captured image; based on the gray value of each pixel point in each row or column and the ideal neighborhood gray distribution, combined with a preset output image, it is judged whether the current pixel point is a stripe center point, and all stripes in the output image after stripe extraction are classified; the correspondence between all stripes in the updated output image and the stripes in the structured-light image is determined from the classification result; and all stripe center points are decoded with an epipolar constraint method combined with that correspondence. The invention decodes the implicit structured-light image efficiently and robustly while maintaining accuracy.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a flowchart of a decoding method for implicit structured light according to an embodiment of the present invention;
FIG. 2 is a partial flowchart of an image update process for an output image according to an embodiment of the present invention;
fig. 3 is a flowchart of judging whether a pixel point is a stripe center point according to an embodiment of the present invention;
FIG. 4 is a graph of a stripe extraction effect provided by an embodiment of the present invention;
FIG. 5 is a flow chart of classifying and matching output images according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a decoding principle using corresponding stripe and epipolar constraints according to an embodiment of the present invention;
FIG. 7 is a flowchart of a decoding method using corresponding stripe and epipolar constraints according to an embodiment of the present invention;
fig. 8 is a block diagram of a decoding apparatus for implicit structured light according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In an embodiment of the present invention, a method for decoding implicit structured light is provided, as shown in fig. 1, the method includes:
s1: image I shot by camerarawTraversing to obtain the gray value of each pixel point, and constructing ideal neighborhood gray distribution of each pixel point in the direction perpendicular to the stripes according to the gray value of each pixel point; traversing line by line when the input image is a transverse stripe pattern, traversing line by line when the input image is a longitudinal stripe pattern, and setting the number of rows or columns h as 1 when the number of rows or columns is the first row or column;
s2: traversing the shot image, searching all fringe central points in the shot image according to the gray value of each pixel point and the ideal neighborhood gray distribution in combination with a preset output image, and updating the preset output image according to the searched fringe central point when one fringe central point is searched;
s3: classifying the stripe center points in the updated output image to obtain a stripe classification result (namely obtaining a plurality of stripes);
s4: determining the corresponding relation between the stripes in the updated output image and the stripes in the structured light image according to the stripe classification result;
s5: and decoding the central point of the stripe in the updated output image by using an epipolar constraint method and combining the corresponding relation between the stripe in the updated output image and the stripe in the structured light image.
In the embodiment of the present invention, considering that the image sensor has a sampling blur effect that is approximately Gaussian and that the captured image contains a large amount of noise, the characteristics of the stripe center point must be analyzed and a probability model established, so that stripe center points can be extracted from the original image by searching for local probability maxima. The center-point probability model is constructed along the following lines. Let the ideal stripe center point be X_P, with an m-point neighborhood sequence {…, X_{P-1}, X_P, X_{P+1}, …}, and let the current pixel point be x_i, with neighborhood sequence {…, x_{i-1}, x_i, x_{i+1}, …}. If x_i is a stripe center point, its sequence should correspond one-to-one with the ideal stripe center sequence, which can be expressed in the following probability form:

$$P(x_i)=P(\dots,\,x_{i-1}=X_{P-1},\,x_i=X_P,\,x_{i+1}=X_{P+1},\,\dots).$$

By the conditional probability formula, the above decomposes into P(…, x_{i-1}=X_{P-1}, x_i=X_P, x_{i+1}=X_{P+1}, … | x_i=X_P) and P(x_i=X_P), which are constructed separately. In the embodiment of the invention, the first term is the probability that, given the current point x_i is a center point, each point in its neighborhood corresponds one-to-one with the neighborhood points of the ideal center; it can be expressed as the degree of fit between the ideal neighborhood gray distribution constructed from the current point and the actual distribution. The ideal gray distribution can be constructed in several ways, for example as a Gaussian: when x_i is an ideal stripe center point with gray value y_i, the point x_{i+d} at distance d from x_i has a gray value y_{i+d} satisfying

$$y_{i+d}=y_i\exp\left(-\frac{d^2}{2\sigma_W^2}\right),$$

where W denotes the stripe width and the standard deviation σ_W is set by W. Considering that the larger the accumulated error between the actual neighborhood gray distribution and the ideal distribution, the lower the degree of fit, the first term can be constructed as a fitting degree P_fit(x_i) that decreases monotonically with this accumulated error, e.g.

$$P_{\mathrm{fit}}(x_i)=-\sum_{k}\left(y_{i-k}-y_i\exp\left(-\frac{k^2}{2\sigma_W^2}\right)\right)^2.$$

This term accounts for the probability that the current point x_i is a center point in the direction perpendicular to the stripes. The second term, P(x_i=X_P), accounts for the probability in the direction parallel to the stripes. Let the current point x_i have coordinates (a, h_i), and let the center point already extracted in column a-1 of I_extract have coordinates (a-1, h_prev). By the continuity of horizontal stripes, the probability that x_i is a stripe center point can be considered to satisfy a Gaussian distribution with parameters (h_prev, σ²):

$$P_{\mathrm{cont}}(x_i)=P(x_i=X_P)\propto\exp\left(-\frac{(h_i-h_{\mathrm{prev}})^2}{2\sigma^2}\right).$$

Note the special cases: when the initial search has no previously extracted result, when |h_i - h_prev| > W/2, or when the current point is more than one pixel away from the abscissa of the last extraction result, the stripe continuity probability is considered the same everywhere in the neighborhood; since the center-point search is based on the maximum probability value, the term then does not affect the probability distribution within the neighborhood and is no longer considered. Summarizing the above analysis, the optimization function used in the stripe extraction process is the sum of the gray-distribution fitting degree and the stripe continuity probability,

$$P(x_i)=P_{\mathrm{fit}}(x_i)+P_{\mathrm{cont}}(x_i);$$

its value is computed for every point in the image, and each local probability maximum gives the position of a stripe center point.
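A minimal sketch of this score, under the following stated assumptions: the ideal profile is Gaussian with σ_W = W/2, the fitting degree is the negative squared error, and the two terms are summed without weighting; the helper name center_probability and the parameter names are illustrative, not from the patent:

```python
import numpy as np

def center_probability(gray: np.ndarray, i: int, h_prev: float | None,
                       W: float, sigma_cont: float, m: int) -> float:
    """Score P(x_i) = P_fit + P_cont for pixel i of the 1-D profile `gray`
    taken perpendicular to the stripes."""
    sigma_w = W / 2.0                        # assumption: profile spread set by stripe width
    ks = np.arange(-(m // 2), m // 2 + 1)    # symmetric neighborhood offsets
    lo, hi = i + ks[0], i + ks[-1]
    if lo < 0 or hi >= gray.size:
        return -np.inf                       # neighborhood leaves the image
    ideal = gray[i] * np.exp(-ks**2 / (2.0 * sigma_w**2))   # ideal Gaussian profile
    actual = gray[lo:hi + 1].astype(np.float64)
    p_fit = -np.sum((actual - ideal) ** 2)   # fit degree: higher means smaller error
    if h_prev is None or abs(i - h_prev) > W / 2:
        p_cont = 0.0                         # special cases: continuity term uninformative
    else:
        p_cont = np.exp(-(i - h_prev) ** 2 / (2.0 * sigma_cont**2))
    # in practice the two terms need consistent scaling before being summed
    return p_fit + p_cont
```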
According to the above analysis, constructing the ideal neighborhood gray distribution perpendicular to the stripe direction from the gray value of each pixel point in each row or column in step S1 comprises:
comparing the gray value of each pixel point with a preset threshold y_th, and constructing the ideal neighborhood gray distribution perpendicular to the stripes only for pixel points whose gray value is greater than y_th. This eliminates the interference of background pixel points and reduces the amount of computation.
As shown in fig. 2, judging in step S2 whether the current pixel point is a stripe center point according to the gray value of each pixel point and the ideal neighborhood gray distribution, combined with the preset output image, and updating the preset output image according to the current pixel point if it is a stripe center point, comprises:
Step S21: from the ideal neighborhood gray distribution and the actual neighborhood gray distribution of the pixel points whose gray value is greater than the preset threshold, calculate the ideal-center gray-distribution fitting degree P_fit(x_i) of those pixel points;
Step S22: based on the preset output image I_extract, calculate the stripe continuity probability P_cont(x_i) of the current pixel point with respect to its neighborhood;
Step S23: from the fitting degree of the pixel points whose gray value is greater than the preset threshold and the stripe continuity probability of the current pixel point and its neighborhood, calculate the stripe center point probability P(x_i) = P_fit(x_i) + P_cont(x_i) of the current pixel point x_i and the neighborhood probability sequence {P(x_k)}, where k = i-1, i-2, …, i-m and m is the total width of the neighborhood;
Step S24: judge whether the stripe center point probability P(x_i) of the current pixel point x_i is the maximum of the neighborhood probability sequence {P(x_k)}; if so, update the preset output image. For example, the gray value of point x_i in the output image I_extract is recorded as 0, and the gray values of the pixels at the neighborhood positions are recorded as 255. Updating the gray values in the neighborhood of the newly found center point eliminates noise points near previously found stripes.
Specifically, as shown in fig. 3, the pixel points are traversed and judged according to the following flow:
according to the gray value and the ideal neighborhood gray distribution of each pixel point x_i of each row (column), combined with the preset output image I_extract, judge whether the current pixel point is a stripe center point. If it is, update the preset output image I_extract according to the current pixel point. If it is not, judge whether the traversal of the row (column) of the current pixel is finished; if not, set i = i + 1 and return to step S22 to process the next point. If the row (column) is finished, judge whether all rows (columns) have been traversed; if not, move to the next row (column) (h = h + 1) and continue judging whether its pixel points are stripe center points; if all rows (columns) are finished, stripe extraction is complete and the updated output image is output. The gray values of all pixel points of the preset output image are 0 (all black) or 255 (all white), and the updated output image contains all the extracted stripes; x_i is the i-th pixel in each row, with i ranging from 1 to the number of pixels in the row, which is determined by the actual image.
Fig. 4 shows the stripe extraction effect; it can be seen that the center points of the stripes on the projection plane and on the finger are extracted relatively completely. The extracted image I_extract is then used in steps S3-S5 to decode all extracted points.
In the embodiment of the present invention, when determining the stripe correspondence between the captured image and the structured-light image, it is considered that in the ideal case the stripes after homography transformation should correspond completely to the structured-light image. The structured-light image P can therefore be used as a template against the homography transformation result I_homo obtained in step S3: the stripes that correspond completely to P (i.e., the non-offset stripes) and those that do not (i.e., the offset stripes) are found, and the correspondence between the two kinds of stripes is then determined using their structural similarity. The flow is shown in FIG. 5, i.e., steps S3 and S4 described above.
Step S3 includes:
step S31: for the updated output image IextractPerforming homographic transformation to output homographic transformation image Ihomo
The homography transformation describes the position mapping between the pixel coordinate system of the camera and that of the projector; the corresponding transformation matrix is the homography matrix, whose parameters are obtained during the calibration of the camera and the projector. During the homography transformation, the position of each pixel point in the camera image is multiplied by the homography matrix to obtain its corresponding position on the projection pattern. When no hand or other object occludes the scene, the homography transformation of the stripe center positions in the camera image is identical to the stripe distribution in the projection pattern.
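A minimal sketch of this mapping with OpenCV, assuming the calibrated camera-to-projector homography is available as a 3x3 matrix; the function and variable names are illustrative:

```python
import cv2
import numpy as np

def to_projector_plane(centers: np.ndarray, H_cam2proj: np.ndarray) -> np.ndarray:
    """Map extracted stripe center points (N x 2, camera pixels) onto the
    projector pattern plane with the calibrated 3x3 homography; without
    occlusion the result coincides with the projected stripe layout."""
    pts = centers.reshape(-1, 1, 2).astype(np.float64)
    return cv2.perspectiveTransform(pts, H_cam2proj).reshape(-1, 2)
```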
Step S32: based on the structured-light image P, find in the homography-transformed image I_homo the set U1 of non-offset stripe center points, and determine from U1 the non-offset stripe set T1 (i.e., the connected stripe center points form stripes); subtract U1 from the set of all stripe center points of I_homo to obtain the offset stripe center point set U2; and classify U2 with the FloodFill algorithm to obtain the offset stripe classification set T2.
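A sketch of the classification step; it uses connected-component labeling, which is equivalent to repeatedly flood-filling the rasterized point set, and all names are illustrative:

```python
import cv2
import numpy as np

def classify_offset_stripes(U2: np.ndarray, shape: tuple) -> list:
    """Group the offset stripe center points U2 (N x 2 integer (x, y)
    positions) into stripes: rasterize the point set and label its
    connected regions."""
    mask = np.zeros(shape, np.uint8)
    mask[U2[:, 1], U2[:, 0]] = 255                         # rasterize the point set
    n, labels = cv2.connectedComponents(mask, connectivity=8)
    # each label k >= 1 is one offset stripe; return its (x, y) points
    return [np.column_stack(np.where(labels == k)[::-1]) for k in range(1, n)]
```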
Because of the offset, the pattern structure formed at the broken positions should ideally be the same as the structure of the offset stripes: a non-offset stripe breaks at the corresponding position while a new, offset stripe is formed. This embodiment uses this structural similarity to perform graph matching for stripe correspondence; that is, step S4 comprises:
Step S41: find the breakpoints of the stripes in the non-offset stripe set T1 and construct the non-offset stripe bipartite graph structure G1 from these breakpoints (for example, two breakpoints correspond to one broken stripe, and four breakpoints to two); find the endpoints of the stripes in the offset stripe classification set T2 and construct the offset stripe bipartite graph structure G2 from these endpoints;
step S42: and matching the stripe bipartite graph structure G1 without deviation with the stripe bipartite graph structure G2 with the deviation by using a graph matching algorithm to obtain the corresponding relation between the deviation stripes and the deviation stripes.
Preferably, various graph matching methods can be used in step S42, such as methods based on the spectral decomposition principle or on the random walk principle; considering the computational complexity of these algorithms, the present invention uses a graph matching algorithm based on the spectral decomposition principle.
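A minimal sketch of classic spectral matching (leading eigenvector of a pairwise affinity matrix, then greedy discretization), using pairwise node distances as the affinity; the affinity choice, sigma, and the greedy step are illustrative assumptions, not the patent's exact formulation:

```python
import numpy as np

def spectral_match(d1: np.ndarray, d2: np.ndarray, sigma: float = 10.0) -> list:
    """Match the n1 nodes of G1 to the n2 nodes of G2 from their pairwise
    distance matrices d1 (n1 x n1) and d2 (n2 x n2)."""
    n1, n2 = d1.shape[0], d2.shape[0]
    A = np.zeros((n1 * n2, n1 * n2))    # O((n1*n2)^2): fine for a few dozen endpoints
    for i in range(n1):
        for a in range(n2):
            for j in range(n1):
                for b in range(n2):
                    if i != j and a != b:   # consistency of assignments i->a and j->b
                        A[i * n2 + a, j * n2 + b] = np.exp(-abs(d1[i, j] - d2[a, b]) / sigma)
    _, v = np.linalg.eigh(A)
    x = np.abs(v[:, -1])                    # leading eigenvector of the affinity matrix
    matches, used1, used2 = [], set(), set()
    for idx in np.argsort(-x):              # greedy one-to-one discretization
        i, a = divmod(idx, n2)
        if i not in used1 and a not in used2 and x[idx] > 0:
            matches.append((i, a))
            used1.add(i)
            used2.add(a)
    return matches
```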
In the embodiment of the present invention, all stripe center points can be decoded from the above stripe matching result combined with the epipolar constraint principle. Fig. 6 is a schematic diagram of the decoding principle using corresponding stripes and epipolar constraints. Without a stripe correspondence it is difficult to determine, for a point in I_extract, which point in the structured-light image P corresponds to it. From the stripe matching relationship obtained in step S42, if stripe b in I_extract corresponds to the position of stripe e in the structured-light image P, the decoding of a point on b can be completed by solving for the intersection of stripe e and the epipolar line, as shown in fig. 7; that is, step S5 comprises:
step S51: determining an updated output image I according to the corresponding relation between the offset stripes and the non-offset stripesextractThe middle stripe center point corresponds to a stripe position in the structured light image P;
step S52: according toNewly completed output image IextractThe central point of the middle stripe corresponds to the position of the stripe in the structured light image P, and an updated output image I is obtained by utilizing an epipolar constraint methodextractThe polar line position of the central point of the middle stripe in the structured light image P;
step S53: according to the updated output image IextractCalculating the polar line position of the central point of the middle stripe in the structured light image P to obtain an updated output image IextractDecoding position of middle stripe center point in structured light image P, the decoding position being output image IextractThe center point of the middle stripe corresponds to the intersection point of the polar line and the corresponding stripe in the structured light image P.
In the embodiment of the present invention, the method further includes:
and acquiring point cloud data of all fringe central points according to the decoded output image by combining internal and external parameters of a camera and a projector, establishing a point cloud model of the gesture and the fingertip in a three-dimensional space according to the point cloud data of all the fringe central points, and judging the touch of the gesture and the fingertip according to the point cloud model.
The intrinsic parameters comprise the focal length, principal point coordinates, and distortion parameters of the camera or projector and are inherent to the device. The extrinsic parameters describe the pose of the camera or projector in the world coordinate system and comprise a rotation matrix and a translation matrix; the world coordinate system is a three-dimensional coordinate system set by the user before use. From the intrinsic and extrinsic parameters of the camera and the projector, the positions of all stripe center points in the world coordinate system can be calculated to form the point cloud.
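A minimal sketch of this triangulation with OpenCV, assuming the calibrated 3x4 projection matrices of camera and projector are available; the names are illustrative:

```python
import cv2
import numpy as np

def to_point_cloud(cam_pts: np.ndarray, proj_pts: np.ndarray,
                   P_cam: np.ndarray, P_proj: np.ndarray) -> np.ndarray:
    """Triangulate matched camera/projector points (each N x 2) into world
    coordinates. P_cam and P_proj are the 3x4 projection matrices K[R|t]
    built from the calibrated intrinsic and extrinsic parameters."""
    X = cv2.triangulatePoints(P_cam, P_proj,
                              cam_pts.T.astype(np.float64),
                              proj_pts.T.astype(np.float64))
    return (X[:3] / X[3]).T                 # N x 3 point cloud of stripe centers
```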
The scheme is also applicable to the case that the input image is a projected visible structured light pattern.
Based on the same inventive concept, the embodiment of the present invention further provides a decoding apparatus for implicit structured light, as described in the following embodiments. Because the principle of the decoding device for implicit structured light to solve the problem is similar to the decoding method for implicit structured light, the implementation of the decoding device for implicit structured light can refer to the implementation of the decoding method for implicit structured light, and repeated details are not repeated. As used hereinafter, the term "unit" or "module" may be a combination of software and/or hardware that implements a predetermined function. Although the means described in the embodiments below are preferably implemented in software, an implementation in hardware, or a combination of software and hardware is also possible and contemplated.
Fig. 8 is a block diagram of a decoding apparatus for implicit structured light according to an embodiment of the present invention; as shown in fig. 8, the apparatus includes:
the traversal module 02 is used for traversing the shot image, acquiring the gray value of each pixel point, and constructing ideal neighborhood gray distribution of each pixel point in the direction perpendicular to the stripes according to the gray value of each pixel point;
the stripe extraction module 04 is configured to traverse the photographed image, find all stripe center points in the photographed image according to the gray value of each pixel point and the ideal neighborhood gray level distribution in combination with a preset output image, and update the preset output image according to the found stripe center point when one stripe center point is found;
the stripe classification module 06 is configured to classify stripe center points in the updated output image to obtain a stripe classification result;
a corresponding relation determining module 08 for determining the corresponding relation between the stripes in the updated output image and the stripes in the structured light image according to the stripe classification result;
and the decoding module 10 is configured to decode a central point of a stripe in the updated output image by using an epipolar constraint method and combining a corresponding relationship between the stripe in the updated output image and the stripe in the structured light image.
In the embodiment of the present invention, the traversal module 02 is specifically configured to:
and comparing the gray value of the pixel point with a preset threshold value, and constructing ideal neighborhood gray distribution in the direction vertical to the stripes for the pixel point meeting the condition that the gray value is greater than the preset threshold value.
In the embodiment of the present invention, the traversal module 02 is specifically configured to construct the ideal neighborhood gray distribution perpendicular to the stripes according to the Gaussian model

$$y_{i+d}=y_i\exp\left(-\frac{d^2}{2\sigma_W^2}\right),$$

wherein y_i is the gray value corresponding to the ideal stripe center point x_i, y_{i+d} is the gray value of the point x_{i+d} at distance d from x_i, and W, which sets the standard deviation σ_W, represents the stripe width.
In the embodiment of the present invention, the stripe extraction module 04 is specifically configured to:
calculating the fitting degree of the ideal central point gray scale distribution of the pixel points with the gray scale values larger than the preset threshold according to the ideal neighborhood gray scale distribution and the actual neighborhood gray scale distribution of the pixel points with the gray scale values larger than the preset threshold;
calculating the fringe continuity probability of the current pixel point and the neighborhood based on a preset output image;
calculating the fringe central point probability and the neighborhood probability sequence of the current pixel point according to the fitting degree of the ideal central point gray distribution of the pixel point with the gray value larger than the preset threshold value and the fringe continuity probabilities of the current pixel point and the neighborhood;
and judging whether the fringe central point probability of the current pixel point meets the maximum value of the neighborhood probability sequence, and if so, updating the image of the preset output image.
In the embodiment of the present invention, the stripe extraction module 04 is specifically configured to:
if the current pixel point is judged not to be the fringe central point, whether the traversal of the row or the column of the current pixel point is completed or not is judged, if the traversal of the row or the column of the current pixel point is not completed, whether the next pixel point of the current pixel point is the fringe central point or not is continuously judged according to the gray value of the next pixel point of the current pixel point and the gray distribution of the neighborhood inner point of the next pixel point of the current pixel point by combining the updated output image, if the traversal of the row or the column of the current pixel point is completed, whether the traversal of all rows or columns is completed or not is judged, if the traversal of all rows or columns is not completed, whether the pixel point of the downlink or the columns is the fringe central point or not is determined, and if the traversal of all rows or columns is completed, the extraction of fringes is completed, and the updated output image is output.
In the embodiment of the present invention, the stripe classification module 06 is specifically configured to:
performing homography transformation on the updated output image, and outputting a homography transformation image;
based on the structured light image, searching a fringe central point set which is not subjected to offset from the homography conversion image, and determining the fringe set which is not subjected to offset according to the fringe central point set which is not subjected to offset;
determining a fringe central point set which is subjected to offset according to the fringe central point set which is not subjected to offset;
and classifying the set of the center points of the stripes with the deviation by using a FloodFill algorithm to obtain a classified set of the stripes with the deviation.
In this embodiment of the present invention, the correspondence determining module 08 is specifically configured to:
searching the breakpoint of the stripe from the non-offset stripe set, and constructing a non-offset stripe bipartite graph structure according to the breakpoint;
searching the endpoint of the stripe from the generated offset stripe classification set, and constructing a binary graph structure of the generated offset stripe according to the endpoint;
and matching the fringe bipartite graph structure which is not subjected to deviation with the fringe bipartite graph structure which is subjected to deviation by using a graph matching algorithm to obtain the corresponding relation between the deviation fringes and the non-deviation fringes.
In this embodiment of the present invention, the decoding module 10 is specifically configured to:
determining the position of the central point of the stripe in the updated output image corresponding to the stripe in the structured light image according to the corresponding relation between the offset stripe and the non-offset stripe;
obtaining an epipolar line position of the central point of the stripe in the updated output image in the structured light image by utilizing an epipolar line constraint method according to the position of the central point of the stripe in the updated output image corresponding to the stripe in the structured light image;
and calculating a decoding position of the central point of the stripe in the updated output image in the structured light image according to the polar line position of the central point of the stripe in the updated output image in the structured light image, wherein the decoding position is an intersection point of a corresponding polar line and a corresponding stripe of the central point of the stripe in the output image in the structured light image.
In the embodiment of the present invention, the apparatus further includes:
and the judging module is used for acquiring point cloud data of all the fringe central points according to the decoded output image by combining internal and external parameters of the camera and the projector, establishing a point cloud model of the gesture and the fingertip in a three-dimensional space according to the point cloud data of all the fringe central points, and judging the touch of the gesture and the fingertip according to the point cloud model.
The embodiment of the present invention further provides a computer device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and the processor implements the method when executing the computer program.
The embodiment of the invention also provides a computer readable storage medium, and the computer readable storage medium stores a computer program for executing the method.
In summary, the decoding method and apparatus for implicit structured light provided by the present invention extract stripe center points robustly by searching for local maxima of a center-point probability model, and are the first to use the similarity between the graph structures formed by the breakpoints of non-offset stripes and the endpoints of offset stripes to complete fast and accurate decoding, which promotes the development and application of the implicit structured-light technique in stereoscopic vision systems. The invention solves the problems that stripe extraction in the current implicit structured-light technology is not robust and that no mature, efficient stripe pattern matching method exists.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only a preferred embodiment of the present invention, and is not intended to limit the present invention, and various modifications and changes may be made to the embodiment of the present invention by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A method for decoding implicit structured light, comprising:
traversing the shot image, acquiring the gray value of each pixel point, and constructing ideal neighborhood gray distribution of each pixel point in the direction perpendicular to the stripes according to the gray value of each pixel point;
traversing the shot image, searching all fringe central points in the shot image according to the gray value of each pixel point and the ideal neighborhood gray distribution in combination with a preset output image, and updating the preset output image according to the searched fringe central point when one fringe central point is searched;
classifying the stripe central points in the updated output image to obtain a stripe classification result;
determining the corresponding relation between the stripes in the updated output image and the stripes in the structured light image according to the stripe classification result;
decoding the central point of the stripe in the updated output image by using an epipolar constraint method and combining the corresponding relation between the stripe in the updated output image and the stripe in the structured light image;
constructing ideal neighborhood gray distribution of each pixel point in the direction vertical to the stripes according to the gray value of each pixel point, comprising the following steps:
comparing the gray value of the pixel point with a preset threshold value, and constructing ideal neighborhood gray distribution in the direction vertical to the stripe for the pixel point meeting the condition that the gray value is greater than the preset threshold value;
traversing the shot image, searching all fringe central points in the shot image by combining a preset output image according to the gray value of each pixel point and the ideal neighborhood gray distribution, and updating the preset output image according to the searched fringe central points when one fringe central point is searched, wherein the method comprises the following steps:
calculating the fitting degree of the ideal central point gray scale distribution of the pixel points with the gray scale values larger than the preset threshold according to the ideal neighborhood gray scale distribution and the actual neighborhood gray scale distribution of the pixel points with the gray scale values larger than the preset threshold;
calculating the fringe continuity probability of the current pixel point and the neighborhood based on a preset output image;
calculating the fringe central point probability and the neighborhood probability sequence of the current pixel point according to the fitting degree of the ideal central point gray distribution of the pixel point with the gray value larger than the preset threshold value and the fringe continuity probabilities of the current pixel point and the neighborhood;
judging whether the fringe central point probability of the current pixel point meets the maximum value of the neighborhood probability sequence, and if so, updating the image of a preset output image;
$$P(x_i)=P_{\mathrm{fit}}(x_i)+P_{\mathrm{cont}}(x_i)$$
wherein the function P(x_i) comprises the sum of the gray-distribution fitting degree and the stripe continuity probability of the pixel point, wherein P(x_i) is the stripe center point probability of the current pixel point x_i; y_i is the gray value corresponding to the current pixel point x_i; y_{i-k} is the gray value of the point x_{i-k} at distance k from x_i; W represents the stripe width; P_fit(x_i) represents the ideal-center gray-distribution fitting degree of the current pixel point x_i; h_i is the ordinate of the current pixel point x_i; h_prev is the ordinate of the center point extracted in the previous row or column of the current pixel point x_i; σ represents the Gaussian distribution parameter satisfied by the stripe continuity; x_i is the i-th pixel in each row, with i ranging from 1 to the number of pixels in the row; and m is the total width of the neighborhood.
2. A method for decoding implicit structured light as claimed in claim 1, characterized in that the ideal neighborhood gray distribution perpendicular to the stripe direction is constructed according to the Gaussian model
$$y_{i+d}=y_i\exp\left(-\frac{d^2}{2\sigma_W^2}\right),$$
wherein y_i is the gray value corresponding to the ideal stripe center point x_i, y_{i+d} is the gray value of the point x_{i+d} at distance d from x_i, and W, which sets the standard deviation σ_W, represents the stripe width.
3. A method for decoding of implicit structured light as claimed in claim 1, wherein the traversal is a row-by-row traversal or a column-by-column traversal;
traversing the shot image, searching all fringe central points in the shot image by combining a preset output image according to the gray value of each pixel point and the ideal neighborhood gray distribution, and updating the preset output image according to the searched fringe central point when one fringe central point is searched, and the method also comprises the following steps:
if the current pixel point is judged not to be the fringe central point, whether the traversal of the row or the column of the current pixel point is completed or not is judged, if the traversal of the row or the column of the current pixel point is not completed, whether the next pixel point of the current pixel point is the fringe central point or not is continuously judged according to the gray value of the next pixel point of the current pixel point and the gray distribution of the neighborhood inner point of the next pixel point of the current pixel point by combining the updated output image, if the traversal of the row or the column of the current pixel point is completed, whether the traversal of all rows or columns is completed or not is judged, if the traversal of all rows or columns is not completed, whether the pixel point of the downlink or the columns is the fringe central point or not is determined, and if the traversal of all rows or columns is completed, the extraction of fringes is completed, and the updated output image is output.
4. A method for decoding implicit structured light as claimed in claim 1, wherein the step of classifying the central point of the stripe in the updated output image to obtain the stripe classification result comprises:
performing homography transformation on the updated output image, and outputting a homography transformation image;
based on the structured light image, searching a fringe central point set which is not subjected to offset from the homography conversion image, and determining the fringe set which is not subjected to offset according to the fringe central point set which is not subjected to offset;
determining a fringe central point set which is subjected to offset according to the fringe central point set which is not subjected to offset;
and classifying the set of the center points of the stripes with the deviation by using a FloodFill algorithm to obtain a classified set of the stripes with the deviation.
5. An implicit structured light decoding method as claimed in claim 4, wherein determining the corresponding relationship between the stripes in the updated output image and the stripes in the structured light image according to the stripe classification result comprises:
searching the breakpoint of the stripe from the non-offset stripe set, and constructing a non-offset stripe bipartite graph structure according to the breakpoint;
searching the endpoint of the stripe from the generated offset stripe classification set, and constructing a binary graph structure of the generated offset stripe according to the endpoint;
and matching the fringe bipartite graph structure which is not subjected to deviation with the fringe bipartite graph structure which is subjected to deviation by using a graph matching algorithm to obtain the corresponding relation between the deviation fringes and the non-deviation fringes.
6. A method for decoding implicit structured light as in claim 5, wherein the decoding of the center point of the stripe in the updated output image by using the epipolar constraint method in combination with the corresponding relationship between the stripe in the updated output image and the stripe in the structured light image comprises:
determining the position of the central point of the stripe in the updated output image corresponding to the stripe in the structured light image according to the corresponding relation between the offset stripe and the non-offset stripe;
obtaining an epipolar line position of the central point of the stripe in the updated output image in the structured light image by utilizing an epipolar line constraint method according to the position of the central point of the stripe in the updated output image corresponding to the stripe in the structured light image;
and calculating a decoding position of the central point of the stripe in the updated output image in the structured light image according to the polar line position of the central point of the stripe in the updated output image in the structured light image, wherein the decoding position is an intersection point of a corresponding polar line and a corresponding stripe of the central point of the stripe in the output image in the structured light image.
7. A method for decoding implicit structured light as recited in claim 1, further comprising:
and acquiring point cloud data of all fringe central points according to the decoded output image by combining internal and external parameters of a camera and a projector, establishing a point cloud model of the gesture and the fingertip in a three-dimensional space according to the point cloud data of all the fringe central points, and judging the touch of the gesture and the fingertip according to the point cloud model.
8. An implicit structured light decoding apparatus, comprising:
a traversal module, configured to traverse the captured image, acquire the gray value of each pixel point, and construct the ideal neighborhood gray distribution of each pixel point in the direction perpendicular to the stripes according to the gray value of each pixel point;
a stripe extraction module, configured to traverse the captured image, search for all stripe center points in the captured image according to the gray value of each pixel point and the ideal neighborhood gray distribution in combination with a preset output image, and update the preset output image according to each stripe center point as it is found;
a stripe classification module, configured to classify the stripe center points in the updated output image to obtain a stripe classification result;
a correspondence determining module, configured to determine the correspondence between the stripes in the updated output image and the stripes in the structured light image according to the stripe classification result;
a decoding module, configured to decode the stripe center points in the updated output image by using the epipolar constraint method in combination with the correspondence between the stripes in the updated output image and the stripes in the structured light image;
wherein constructing the ideal neighborhood gray distribution of each pixel point in the direction perpendicular to the stripes according to the gray value of each pixel point comprises:
comparing the gray value of each pixel point with a preset threshold, and constructing the ideal neighborhood gray distribution in the direction perpendicular to the stripes for each pixel point whose gray value is greater than the preset threshold;
and wherein traversing the captured image, searching for all stripe center points in the captured image according to the gray value of each pixel point and the ideal neighborhood gray distribution in combination with the preset output image, and updating the preset output image according to each stripe center point as it is found, comprises:
calculating, for each pixel point whose gray value is greater than the preset threshold, the degree of fit to the ideal center point gray distribution according to the ideal neighborhood gray distribution and the actual neighborhood gray distribution of that pixel point;
calculating the stripe continuity probability of the current pixel point and its neighborhood based on the preset output image;
calculating the stripe center point probability of the current pixel point and the neighborhood probability sequence according to the degree of fit to the ideal center point gray distribution and the stripe continuity probabilities of the current pixel point and its neighborhood;
and judging whether the stripe center point probability of the current pixel point attains the maximum value of the neighborhood probability sequence, and if so, updating the preset output image:
P(x_i) = max{ f(x_{i-m/2}), ..., f(x_i), ..., f(x_{i+m/2}) }
wherein the function f(x_i) = g(x_i) + c(x_i) comprises the sum of the gray distribution fitting degree g(x_i) and the stripe continuity probability c(x_i) of the pixel point; P(x_i) is the stripe center point probability of the current pixel point x_i; y_i is the gray value corresponding to the current pixel point x_i, and y_{i-k} is the gray value of the point x_{i-k} at a distance k from x_i; w represents the stripe width; g(x_i) represents the degree of fit of the current pixel point x_i to the ideal center point gray distribution; h_i is the ordinate of the current pixel point x_i, h_prev is the ordinate of the center point extracted in the previous row of x_i, and σ represents the Gaussian distribution parameter satisfied by the stripe continuity; x_i is the i-th pixel point in each row, where i ranges from 1 to the total number of pixel points in the row; and m is the total width of the neighborhood.
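The exact expressions for g(x_i) and c(x_i) appear only as formula images in the original publication; the sketch below therefore substitutes an assumed peak-contrast fit term and a Gaussian continuity term (consistent with the σ parameter defined above), so it illustrates the neighborhood-maximum criterion rather than the patented formulas:

    import numpy as np

    def center_probability(y, i, w, h_i, h_prev, sigma):
        """f(x_i) = g(x_i) + c(x_i): assumed gray-distribution fit term
        plus a Gaussian continuity term. `y` is one image row of gray
        values; assumes i is at least w//2 pixels from either edge."""
        # g(x_i): how strongly the gray profile peaks at x_i over half a
        # stripe width on each side (stand-in for the fitting-degree formula).
        ks = np.arange(1, w // 2 + 1)
        g = np.mean((y[i] - y[i - ks]) + (y[i] - y[i + ks])) / 255.0
        # c(x_i): continuity with the center extracted in the previous row,
        # modeled as a Gaussian in the ordinate difference.
        c = np.exp(-((h_i - h_prev) ** 2) / (2 * sigma ** 2))
        return g + c

    def is_center(probs, i, m):
        """Neighborhood-maximum criterion: x_i is kept as a stripe center
        only if its probability attains the maximum of the m-wide
        neighborhood probability sequence."""
        lo, hi = max(0, i - m // 2), min(len(probs), i + m // 2 + 1)
        return probs[i] >= np.max(probs[lo:hi])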
9. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the method of any of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program for executing the method of any one of claims 1 to 7.
CN202010268291.8A 2020-04-08 2020-04-08 Decoding method and device of implicit structured light Active CN111582310B (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN202010268291.8A CN111582310B (en) 2020-04-08 2020-04-08 Decoding method and device of implicit structured light
PCT/CN2020/086096 WO2021203488A1 (en) 2020-04-08 2020-04-22 Method and apparatus for decoding implicit structured light
US16/860,737 US11238620B2 (en) 2020-04-08 2020-04-28 Implicit structured light decoding method, computer equipment and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010268291.8A CN111582310B (en) 2020-04-08 2020-04-08 Decoding method and device of implicit structured light

Publications (2)

Publication Number Publication Date
CN111582310A CN111582310A (en) 2020-08-25
CN111582310B true CN111582310B (en) 2022-05-06

Family

ID=72113537

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010268291.8A Active CN111582310B (en) 2020-04-08 2020-04-08 Decoding method and device of implicit structured light

Country Status (3)

Country Link
US (1) US11238620B2 (en)
CN (1) CN111582310B (en)
WO (1) WO2021203488A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220230335A1 (en) * 2021-01-20 2022-07-21 Nicolae Paul Teodorescu One-shot high-accuracy geometric modeling of three-dimensional scenes
CN114252027B (en) * 2021-12-22 2023-07-14 深圳市响西科技有限公司 Continuous playing method of structured light stripe graph and 3D structured light machine

Citations (4)

Publication number Priority date Publication date Assignee Title
WO2014169273A1 (en) * 2013-04-12 2014-10-16 The Trustees Of Columbia University In The City Of New York Systems, methods, and media for generating structured light
CN107798698A (en) * 2017-09-25 2018-03-13 西安交通大学 Structured light strip center extracting method based on gray-level correction and adaptive threshold
CN109556533A (en) * 2018-06-13 2019-04-02 中国人民解放军陆军工程大学 A kind of extraction method for multi-line structured light stripe pattern
CN110599539A (en) * 2019-09-17 2019-12-20 广东奥普特科技股份有限公司 Stripe center extraction method of structured light stripe image

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
CA2796519A1 (en) * 2010-04-16 2011-10-20 Flex Lighting Ii, Llc Illumination device comprising a film-based lightguide
CN103809880B (en) * 2014-02-24 2017-02-08 清华大学 Man-machine interaction system and method
GB201511334D0 (en) * 2015-06-29 2015-08-12 Nokia Technologies Oy A method, apparatus, computer and system for image analysis
CN108108744B (en) * 2016-11-25 2021-03-02 同方威视技术股份有限公司 Method and system for radiation image auxiliary analysis

Non-Patent Citations (1)

Title
Adaptive center extraction optimization algorithm for line structured light stripes; Zhang Jia et al.; 《应用激光》 (Applied Laser); 2019-12-31; Vol. 39, No. 6; full text *

Also Published As

Publication number Publication date
CN111582310A (en) 2020-08-25
US11238620B2 (en) 2022-02-01
US20210319594A1 (en) 2021-10-14
WO2021203488A1 (en) 2021-10-14

Similar Documents

Publication Publication Date Title
Simo-Serra et al. A joint model for 2d and 3d pose estimation from a single image
CN114782691B (en) Robot target identification and motion detection method based on deep learning, storage medium and equipment
CN109934847B (en) Method and device for estimating posture of weak texture three-dimensional object
CN109658515A (en) Point cloud gridding method, device, equipment and computer storage medium
CN105654492A (en) Robust real-time three-dimensional (3D) reconstruction method based on consumer camera
US11568601B2 (en) Real-time hand modeling and tracking using convolution models
CN106155299B (en) A kind of pair of smart machine carries out the method and device of gesture control
CN111582310B (en) Decoding method and device of implicit structured light
US11651581B2 (en) System and method for correspondence map determination
CN113312973B (en) Gesture recognition key point feature extraction method and system
CN114152217B (en) Binocular phase expansion method based on supervised learning
Chen et al. 3D reconstruction of unstructured objects using information from multiple sensors
Bleiweiss et al. Robust head pose estimation by fusing time-of-flight depth and color
Wang et al. Deep nrsfm++: Towards unsupervised 2d-3d lifting in the wild
JP4850768B2 (en) Apparatus and program for reconstructing 3D human face surface data
Jorstad et al. A deformation and lighting insensitive metric for face recognition based on dense correspondences
Zheng et al. Self-expressive dictionary learning for dynamic 3d reconstruction
Taheri et al. Joint albedo estimation and pose tracking from video
CN115908202A (en) ToF depth image denoising method based on expansion modeling and multi-mode fusion
Jia et al. One-shot m-array pattern based on coded structured light for three-dimensional object reconstruction
JP2008261756A (en) Device and program for presuming three-dimensional head posture in real time from stereo image pair
CN114863036B (en) Data processing method and device based on structured light, electronic equipment and storage medium
CN117710603B (en) Unmanned aerial vehicle image three-dimensional building modeling method under constraint of linear geometry
Nadar et al. Sensor simulation for monocular depth estimation using deep neural networks
CN117315018B (en) User plane pose detection method, equipment and medium based on improved PnP

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant