CN112771573B - Depth estimation method and device based on speckle images and face recognition system - Google Patents

Depth estimation method and device based on speckle images and face recognition system

Info

Publication number: CN112771573B (grant; earlier publication CN112771573A)
Application number: CN201980000582.4A
Authority: CN (China)
Original language: Chinese (zh)
Legal status: Active (granted)
Inventors: 吴勇辉 (Wu Yonghui), 刘川熙 (Liu Chuanxi), 詹洁琼 (Zhan Jieqiong)
Assignee (original and current): Shenzhen Goodix Technology Co Ltd

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis

Abstract

A depth estimation method and device based on speckle images, and a face recognition system, are provided. The method comprises the following steps: performing speckle point detection on an input speckle image to determine target speckle points (101); performing stereo correction on the input speckle image and a reference speckle image to align the two images row by row (102); constructing a target topology of the target speckle points in the input speckle image (103); using a window matching algorithm to find, on the reference speckle image, the best-matching reference speckle point located in the same row as each target speckle point (104); determining the disparity value between each target speckle point and its matched reference speckle point (105); interpolating the disparity values based on the target topology of the target speckle points to obtain a disparity map of the input speckle image (106); and converting the disparity map into a depth image of the input speckle image (107). The method addresses two problems of the prior art: low speckle point detection accuracy, which causes large errors during speckle point matching, and sparse speckle disparity maps, which degrade the accuracy of the depth image.

Description

Depth estimation method and device based on speckle images and face recognition system
Technical Field
The application relates to the technical field of computer vision, in particular to a depth estimation method and device based on speckle images and a face recognition system.
Background
Recovering depth images from 2D images is a fundamental problem in computer vision, and it has received growing attention as computer vision technology has developed.
Active depth estimation using laser speckle has gradually attracted researchers' interest. Because the projected laser pattern is essentially unchanged at different depths, an image captured by a separate camera can be matched against a pre-stored reference speckle image to obtain a disparity map, which is then converted into a depth image. Existing laser-speckle depth estimation methods, however, suffer from two shortcomings. First, speckle point detection in the captured speckle image is not accurate, which causes large errors in the subsequent speckle point matching. Second, the speckle points projected onto the image are unevenly distributed and relatively dispersed, so the computed disparity map is sparse and of poor quality, which degrades the accuracy of the depth image.
Summary
In view of this, the embodiments of the present application provide a depth estimation method and apparatus based on speckle images, and a face recognition system, so as to solve the prior-art problems that low speckle point detection accuracy in the captured speckle image causes large errors in subsequent speckle point matching, and that the sparse speckle disparity map degrades the accuracy of the depth image.
In one aspect, an embodiment of the present application provides a depth estimation method based on speckle images, including: performing speckle point detection on an input speckle image to determine target speckle points; performing stereo correction on the input speckle image and a reference speckle image to align the two images row by row; constructing a target topology of the target speckle points in the input speckle image; using a window matching algorithm to find, on the reference speckle image, the matching reference speckle point located in the same row as each target speckle point; determining the disparity value between each target speckle point and its matched reference speckle point; interpolating the disparity values based on the target topology of the target speckle points to obtain a disparity map of the input speckle image; and converting the disparity map into a depth image of the input speckle image.
Optionally, performing speckle point detection on the input speckle image includes: detecting the input speckle image based on the gray gradients of its pixels to determine preliminary speckle points in the input speckle image; and determining the sub-pixel center point of each preliminary speckle point using a quadratic paraboloid fitting algorithm, taking the sub-pixel center point as a target speckle point.
Optionally, before performing speckle point detection on the input speckle image to determine the target speckle points, the method further includes: preprocessing the captured initial speckle image to obtain the input speckle image.
Optionally, detecting the input speckle image based on the gray gradients of its pixels to determine preliminary speckle points includes: taking each pixel in the input speckle image in turn as a center point, and determining the gray gradient of each pixel in a first neighborhood of that center point; and if the gray gradients of these pixels satisfy a preset gradient distribution, determining the center point to be a preliminary speckle point in the input speckle image.
Optionally, determining the center point to be a preliminary speckle point if the gray gradients satisfy the preset gradient distribution includes: determining the center point to be a preliminary speckle point in the input speckle image if the gray values of the pixels in the first neighborhood are inversely proportional to their distances from the center point, and the number of pixels in the first neighborhood whose gradients satisfy a preset gradient direction is greater than a preset pixel-count threshold.
Optionally, after detecting the input speckle image based on pixel gray gradients to determine the preliminary speckle points, the method further includes: if several of the determined preliminary speckle points are adjacent in position, forming a connected region from those adjacent preliminary speckle points and keeping only the center point of the connected region as a preliminary speckle point.
Optionally, determining the sub-pixel center point of each preliminary speckle point using a quadratic paraboloid fitting algorithm, and taking the sub-pixel center point as a target speckle point, includes: establishing a second neighborhood around the preliminary speckle point; constructing a quadratic function from the position coordinates and gray values of the pixels in the second neighborhood; obtaining a fitted surface under the constraint that the quadratic function is a quadratic paraboloid; and taking the pixel corresponding to the projected position coordinates of the highest point of the fitted surface as the sub-pixel center point of the preliminary speckle point.
Optionally, constructing the target topology of the target speckle points in the input speckle image includes: constructing a plurality of target triangular faces with the target speckle points in the input speckle image as vertices, the target triangular faces not overlapping one another and forming a target triangular mesh.
Optionally, after constructing the target topology of the target speckle points in the input speckle image, the method further includes: constructing a plurality of reference triangular faces with the reference speckle points in the reference speckle image as vertices, the reference triangular faces not overlapping one another and forming a reference triangular mesh.
In that case, using a window matching algorithm to find the matching reference speckle point located in the same row as each target speckle point includes: establishing a first window centered on the target speckle point; establishing a second window centered on each reference speckle point located in the same row as the target speckle point on the reference speckle image; evaluating the similarity between the target triangular mesh in the first window and the reference triangular mesh in each second window; and taking the window center of the second window with the highest similarity as the reference speckle point matching the target speckle point.
Optionally, interpolating the disparity values based on the target topology of the target speckle points to obtain the disparity map of the input speckle image includes: taking the center point of each target triangular face as an interpolation point, the disparity value of the interpolation point being determined from the disparity values of the three vertices of that triangular face; and generating the disparity map of the input speckle image from the disparity value of each target speckle point and the disparity value of each interpolation point.
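A minimal sketch of this centroid interpolation, assuming the interpolation point's disparity is the mean of the three vertex disparities (the text says it is "determined from" the three vertices; the mean is one natural choice, not a value fixed by the patent):

```python
import numpy as np

def interpolate_triangle_centroids(vertices, disparities, triangles):
    """For each target triangular face, produce its centroid as an
    interpolation point whose disparity is the mean of the three vertex
    disparities.

    vertices: (N, 2) speckle coordinates; disparities: (N,) vertex
    disparities; triangles: (M, 3) vertex indices per face.
    """
    vertices = np.asarray(vertices, dtype=np.float64)
    disparities = np.asarray(disparities, dtype=np.float64)
    tri = np.asarray(triangles)
    centroids = vertices[tri].mean(axis=1)         # (M, 2) centroid coords
    centroid_disp = disparities[tri].mean(axis=1)  # (M,) interpolated values
    return centroids, centroid_disp

verts = [(0.0, 0.0), (6.0, 0.0), (0.0, 6.0)]
disp = [9.0, 12.0, 15.0]
c, d = interpolate_triangle_centroids(verts, disp, [(0, 1, 2)])
# one interpolation point at the centroid (2, 2) with disparity 12.0
```

Densifying the sparse speckle disparities this way is what turns the scattered per-speckle disparities into a usable disparity map.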
In another aspect, an embodiment of the present application further provides a depth estimation apparatus based on speckle images, including: a speckle point detection module, configured to perform speckle point detection on an input speckle image to determine target speckle points; a stereo correction module, configured to perform stereo correction on the input speckle image and a reference speckle image to align the two images row by row; a topology construction module, configured to construct a target topology of the target speckle points in the input speckle image; a window matching module, configured to use a window matching algorithm to find, on the reference speckle image, the matching reference speckle point located in the same row as each target speckle point; a disparity determination module, configured to determine the disparity value of each target speckle point from its matched reference speckle point; an interpolation module, configured to interpolate the disparity values based on the target topology of the target speckle points to obtain a disparity map of the input speckle image; and a depth image generation module, configured to convert the disparity map into a depth image of the input speckle image.
In another aspect, an embodiment of the present application further provides a face recognition system, which includes the above depth estimation device based on speckle images.
Compared with the prior art, the technical solutions above have at least the following beneficial effects:
In the depth estimation method based on speckle images provided by the embodiments of the present application, speckle point detection is first performed on an input speckle image to determine target speckle points; stereo correction is then performed on the input speckle image and a reference speckle image to align the rows of the two images, and a target topology of the target speckle points in the input speckle image is constructed. A window matching algorithm is then used to find, on the reference speckle image, the matching reference speckle point located in the same row as each target speckle point, and the disparity value of each target speckle point is determined from its matched reference speckle point. Because the speckle points projected by the speckle projector are usually dispersed, the disparity map obtained from the disparity values of the target speckle points alone is sparse; interpolating the disparity values with the previously constructed target topology yields a denser disparity map and improves the accuracy of the depth image.
Further, because mobile applications place high demands on matching speed and computational cost, a reference topology of the reference speckle points in the reference speckle image is also constructed before window matching. The search on the reference speckle image for the reference speckle point matching each target speckle point in the same row can then take both the reference topology and the target topology into account, which improves matching speed and accuracy.
In the process of performing speckle point detection on the input speckle image, the target speckle points are determined based first on pixel gray gradients and then on a quadratic paraboloid fitting algorithm: candidate speckle points (that is, preliminary speckle points) are detected according to the required gradient distribution, the sub-pixel center point of each preliminary speckle point (a more precise estimate of the speckle position) is determined with the quadratic paraboloid fitting algorithm, and the sub-pixel center point is taken as the target speckle point. This improves the accuracy of speckle point detection in the input speckle image.
Drawings
To illustrate the technical solutions of the embodiments of the present application more clearly, the drawings used in the embodiments are briefly described below. The drawings described here cover only some embodiments of the present application; those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic flow chart of an embodiment of the depth estimation method based on speckle images of the present application;
FIG. 2a is a schematic diagram of the gray gradient distribution used to detect a pixel in the depth estimation method based on speckle images of the present application;
FIG. 2b is a schematic diagram of a fitted surface according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a target topology constructed in the depth estimation method based on speckle images of the present application;
FIG. 4 is a schematic diagram of the disparity values of the target speckle points of an input speckle image before and after interpolation in the depth estimation method based on speckle images of the present application;
FIG. 5 is a schematic structural diagram of an embodiment of the depth estimation apparatus based on speckle images of the present application;
FIG. 6 is a schematic structural diagram of a computer device of the present application;
FIG. 7 is a schematic structural diagram of an embodiment of a face recognition system of the present application.
Detailed Description
For better understanding of the technical solutions of the present application, the following detailed descriptions of the embodiments of the present application are provided with reference to the accompanying drawings.
It should be understood that the embodiments described are only a few embodiments of the present application, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Fig. 1 is a schematic flow chart of an embodiment of a depth estimation method based on a speckle image according to the present application. Referring to fig. 1, the method includes:
Step 101: perform speckle point detection on an input speckle image to determine target speckle points.
Step 102: perform stereo correction on the input speckle image and a reference speckle image to align the two images row by row.
Step 103: construct a target topology of the target speckle points in the input speckle image.
Step 104: use a window matching algorithm to find, on the reference speckle image, the matching reference speckle point located in the same row as each target speckle point.
Step 105: determine the disparity value between each target speckle point and its matched reference speckle point.
Step 106: interpolate the disparity values based on the target topology of the target speckle points to obtain a disparity map of the input speckle image.
Step 107: convert the disparity map into a depth image of the input speckle image.
In this embodiment, the reference speckle image is a pre-stored speckle image formed by the diffuse reflection of a whiteboard irradiated by the speckle projector and captured by the image collector.
Depth estimation of an object surface uses the position distribution and pixel gray values of the speckle points on the reference speckle image as the comparison standard. An input speckle image, obtained by preprocessing the initial speckle image captured from the object surface, is matched against the reference speckle image to determine the disparity value of each target speckle point in the input speckle image; the disparity values of the target speckle points are then interpolated to obtain a denser disparity map, and the disparity map is converted into a depth image of the input speckle image. A depth image, also called a range image, is an image whose pixel values are the distances (i.e., depths) from the image collector to the points on the surface of the object to be recognized; it reflects the geometry of that surface.
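The final conversion from disparity to depth (step 107) follows the standard stereo triangulation relation depth = f * B / d, with focal length f in pixels, baseline B between projector and camera, and disparity d in pixels. A minimal sketch in Python; the focal length and baseline values are illustrative, not taken from the patent:

```python
import numpy as np

def disparity_to_depth(disparity, focal_px, baseline_mm):
    """Convert a disparity map (pixels) to a depth map (mm) via
    depth = f * B / d; pixels with zero disparity stay at depth 0."""
    disparity = np.asarray(disparity, dtype=np.float64)
    depth = np.zeros_like(disparity)
    valid = disparity > 0              # zero disparity means unknown depth
    depth[valid] = focal_px * baseline_mm / disparity[valid]
    return depth

# Illustrative values: 400 px focal length, 40 mm projector-camera baseline
d = np.array([[8.0, 10.0], [0.0, 16.0]])
z = disparity_to_depth(d, focal_px=400.0, baseline_mm=40.0)
# z[0, 0] = 400 * 40 / 8 = 2000.0 mm
```

Note the inverse relation: larger disparity means a closer surface, which is why a dense, accurate disparity map translates directly into a dense, accurate depth image.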
Specifically, as described in step 101, speckle point detection is performed on the input speckle image to determine the target speckle points.
In this embodiment, this step includes:
Step 1011: detect the input speckle image based on pixel gray gradients to determine preliminary speckle points in the input speckle image.
Step 1012: determine the sub-pixel center point of each preliminary speckle point using a quadratic paraboloid fitting algorithm, and take the sub-pixel center point as a target speckle point.
Specifically, step 1011 includes:
Step 10111: take each pixel in the input speckle image in turn as a center point, and determine the gray gradient of each pixel in a first neighborhood of that center point.
Step 10112: if the gray gradient directions of these pixels satisfy the preset gradient direction distribution, determine the center point to be a preliminary speckle point in the input speckle image.
Fig. 2a is a schematic diagram of gray gradient distribution of a detection pixel point in the depth estimation method based on the speckle image according to the present application.
Referring to fig. 2a, a pixel in the input speckle image is taken as the center point (as shown in fig. 2a), and the gray gradient of each pixel in a first neighborhood of the center point is determined from that pixel's gray value and its rate of gray-value change with respect to its two adjacent pixels in the X direction and its two adjacent pixels in the Y direction (in fig. 2a, the first neighborhood comprises the 24 pixels of a 5 × 5 pixel matrix excluding the center point). The input speckle image is a digital image; as those skilled in the art know, a digital image can be regarded as a two-dimensional discrete function, the gray gradient is the derivative of this function, and differences can replace derivatives to obtain the gray gradient of each pixel in the input speckle image.
In this embodiment, if the gray gradient directions of the pixels satisfy the preset gradient direction distribution, the center point is determined to be a preliminary speckle point in the input speckle image. Satisfying the preset gradient distribution requires both of the following conditions:
1) The gray values of the pixels in the first neighborhood are inversely proportional to their distances from the center point. In other words, within the first neighborhood, the farther a pixel is from the center point, the smaller its gray value.
2) The number of pixels in the first neighborhood whose gradients satisfy the preset gradient direction is greater than a preset pixel-count threshold.
Specifically, the gradient direction of a pixel is obtained by taking the arc tangent of its gray gradient in the Y direction over its gray gradient in the X direction.
The arrow directions of the pixels in fig. 2a show the ideal gradient direction distribution, in which the gray gradient of every pixel in the first neighborhood satisfies the preset gradient direction at its position. In practice this ideal distribution rarely occurs, so in this embodiment a preset pixel-count threshold is set: when the number of pixels in the first neighborhood satisfying the preset gradient direction exceeds this threshold, the gray gradients of the pixels in the first neighborhood are considered to satisfy the preset gradient direction.
When the gray gradients of the pixels in the first neighborhood satisfy both conditions, the center point is determined to be a preliminary speckle point in the input speckle image.
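The two conditions can be sketched as follows. The neighborhood radius, the pixel-count threshold, and the concrete test for the "preset gradient direction" (the gradient of a neighbor should point back toward the candidate center, as the arrows in fig. 2a suggest) are illustrative assumptions, not values from the patent:

```python
import numpy as np

def is_preliminary_speckle(img, y, x, radius=2, min_count=18):
    """Check a candidate center against the two conditions: the center must
    be the brightest pixel of its neighborhood (gray value falls with
    distance, condition 1), and more than min_count neighbors must have
    gray gradients pointing toward the center (condition 2)."""
    gy, gx = np.gradient(img.astype(np.float64))  # differences for derivatives
    count = 0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dx == 0 and dy == 0:
                continue
            ny, nx = y + dy, x + dx
            # dot the neighbor's gradient with the vector back to the center
            if gx[ny, nx] * (-dx) + gy[ny, nx] * (-dy) > 0:
                count += 1
    window = img[y - radius:y + radius + 1, x - radius:x + radius + 1]
    return count > min_count and img[y, x] == window.max()

# Synthetic Gaussian-like speckle centered at (5, 5)
yy, xx = np.mgrid[0:11, 0:11]
img = np.exp(-((yy - 5.0) ** 2 + (xx - 5.0) ** 2) / 4.0) * 255.0
# is_preliminary_speckle(img, 5, 5) passes; an off-center pixel fails
```

The gradient direction proper (the arc tangent mentioned above) would be `np.arctan2(gy, gx)`; the dot-product test here is an equivalent way to check that a neighbor's gradient points toward the center without computing angles.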
Further, if several of the determined preliminary speckle points are adjacent in position, further processing is required.
Specifically, because the speckle points projected by the speckle projector are all dispersed, two or more position-adjacent pixels should not all be kept as preliminary speckle points. When several position-adjacent pixels have been detected, a connected region is formed from them and only the center point of the region is kept as the preliminary speckle point: for a regularly shaped connected region, the pixel at its geometric center is taken as the preliminary speckle point; for an irregularly shaped region, the pixel at its center of gravity is taken instead.
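This connected-region rule can be sketched with a small flood fill. The 8-connectivity and the single rounded centroid are assumptions for illustration (the text distinguishes geometric center from center of gravity by region shape; both collapse to the centroid here):

```python
from collections import deque

def merge_adjacent_points(points):
    """Merge position-adjacent preliminary speckle points into one point
    per connected region (8-connectivity), keeping the region's rounded
    centroid as the surviving preliminary speckle point."""
    points = set(map(tuple, points))
    merged = []
    while points:
        seed = points.pop()
        region, queue = [seed], deque([seed])
        while queue:                      # flood fill one connected region
            y, x = queue.popleft()
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    nb = (y + dy, x + dx)
                    if nb in points:
                        points.remove(nb)
                        region.append(nb)
                        queue.append(nb)
        ys, xs = zip(*region)
        merged.append((round(sum(ys) / len(ys)), round(sum(xs) / len(xs))))
    return sorted(merged)

pts = [(10, 10), (10, 11), (11, 10), (30, 5)]   # first three are adjacent
print(merge_adjacent_points(pts))               # -> [(10, 10), (30, 5)]
```

Without this merge, one physical speckle could yield several nearby candidates and later produce conflicting matches in the same window.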
Step 1012 includes:
Step 10121: establish a second neighborhood around the preliminary speckle point.
Step 10122: construct a quadratic function from the position coordinates and gray values of the pixels in the second neighborhood.
Step 10123: obtain a fitted surface under the constraint that the quadratic function is a quadratic paraboloid.
Step 10124: take the pixel corresponding to the projected position coordinates of the highest point of the fitted surface as the sub-pixel center point of the preliminary speckle point.
Refer to the second-neighborhood pixel matrix shown in Table 1 below and the schematic fitted surface of fig. 2b.
Specifically, as shown in Table 1, a second neighborhood (a 7 × 7 pixel matrix whose entries are pixel gray values) is established around a preliminary speckle point (the pixel with gray value 234).
Table 1
89 82 95 130 140 114 92
103 125 96 165 155 139 103
112 164 180 197 176 165 113
145 165 201 234 180 181 145
152 196 198 221 210 199 175
171 213 216 203 196 176 130
199 176 181 176 154 148 155
A quadratic function is then constructed from the position coordinates and gray values of the pixels in the second neighborhood: quadratic paraboloid fitting treats the gray value of each pixel in the second neighborhood as Z and its position coordinates as (X, Y), and constructs the quadratic function Z = f(X, Y). The fitted surface obtained under the constraint that f(x, y) is a quadratic paraboloid resembles the one shown in fig. 2b. The pixel corresponding to the projected position coordinates of the highest point of the fitted surface is then taken as the sub-pixel center point of the preliminary speckle point.
For example, establish an XOY coordinate system with the top-left pixel of Table 1 (gray value 89) as the origin, the row direction as the X axis, and the column direction as the Y axis; the position coordinates of the preliminary speckle point (gray value 234) are then (3, 3). If the projected position coordinates of the highest point of the fitted surface are (4.32, 4.98), the point at those coordinates is taken as the sub-pixel center point of the preliminary speckle point.
The sub-pixel center point determined by the quadratic paraboloid fitting algorithm is thus more accurate than the original preliminary speckle point, and taking the sub-pixel center points as the target speckle points improves the accuracy of speckle detection.
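Steps 10121-10124 can be sketched with an ordinary least-squares fit of a general quadratic surface followed by solving for its stationary point. The 6-parameter surface form and the normal-equation solve are a standard formulation, assumed here rather than taken from the patent:

```python
import numpy as np

def subpixel_peak(patch):
    """Fit z = a*x^2 + b*y^2 + c*x*y + d*x + e*y + f to a gray-value patch
    by least squares and return the stationary point of the fitted
    paraboloid. Coordinates are (x, y), top-left pixel of the patch at (0, 0)."""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    x, y, z = xs.ravel(), ys.ravel(), patch.ravel().astype(np.float64)
    A = np.column_stack([x * x, y * y, x * y, x, y, np.ones_like(x)])
    a, b, c, d, e, _ = np.linalg.lstsq(A, z, rcond=None)[0]
    # the surface gradient vanishes at the peak: solve the 2x2 linear system
    M = np.array([[2 * a, c], [c, 2 * b]])
    px, py = np.linalg.solve(M, [-d, -e])
    return px, py

# Synthetic 7x7 patch with an exact paraboloid peaking at (3.4, 2.6)
ys, xs = np.mgrid[0:7, 0:7].astype(np.float64)
patch = 200.0 - 5.0 * (xs - 3.4) ** 2 - 5.0 * (ys - 2.6) ** 2
px, py = subpixel_peak(patch)
# recovers (3.4, 2.6) up to floating-point error
```

On real gray values like those of Table 1 the fit is only approximate, but the stationary point still lands between pixel centers, which is exactly the sub-pixel refinement the step is after.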
As described in step 102, stereo correction is performed on the input speckle image and the reference speckle image to align the rows of the two images.
The disparity values of the target speckle points on the input speckle image are later computed from the matched reference speckle points. Window matching over a two-dimensional search space is time-consuming, however, so to narrow the search range, the epipolar constraint can be used to reduce the matching of corresponding speckle points from a two-dimensional search to a one-dimensional search.
The purpose of the stereo correction in this step is therefore to align the rows of the two images so that their epipolar lines lie on the same horizontal lines; any point on one image and its corresponding point on the other image then share the same row number, and the corresponding point can be found with a one-dimensional search along that row. The reference speckle image serves as the reference standard for the stereo correction, and the corrected input speckle image is row-aligned with it.
In practice, after stereo correction of the input and reference speckle images, a residual error of 3-5 rows may remain (i.e., the rows are not perfectly aligned); a larger window can be set during the subsequent window matching to absorb this error.
As described in step 103, a target topology of the target speckle points in the input speckle image is constructed.
In this embodiment, the target topology of the target speckle points serves two purposes:
On the one hand, after the disparity value between each target speckle point and its matched reference speckle point is determined, the interpolation points are determined from the target topology when the disparity values are interpolated.
On the other hand, once the target topology is constructed, the subsequent window matching can judge how well the speckle points in two windows match from the similarity between the target topology on the input speckle image and the reference topology on the reference speckle image.
In particular, the target topology may be a target triangular mesh (e.g., a Delaunay triangular mesh). The target triangular mesh is constructed by building a plurality of target triangular faces with the target speckle points of the input speckle image as vertices; the faces do not overlap one another and together form the mesh.
Fig. 3 is a schematic diagram of a target topology constructed in the depth estimation method based on speckle images of the present application. Referring to fig. 3, a plurality of target triangular faces (such as those shown in fig. 3) are constructed with the target speckle points of the input speckle image as vertices; the faces do not overlap, forming the target triangular mesh (the mesh formed by all the target triangular faces in fig. 3).
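A Delaunay triangular mesh of this kind can be built directly with SciPy; the speckle coordinates below are illustrative, not from the patent:

```python
import numpy as np
from scipy.spatial import Delaunay

# Illustrative target speckle point coordinates (x, y) on the input image:
# four hull points and one interior point
points = np.array([[0.0, 0.0], [4.0, 0.5], [0.5, 4.0], [4.5, 4.5], [2.0, 2.2]])
tri = Delaunay(points)         # non-overlapping triangular faces
# each row of tri.simplices holds the three vertex indices of one face;
# here: 2 * (1 interior point) + 4 hull points - 2 = 4 faces
print(len(tri.simplices))
```

`tri.simplices` is exactly the `(M, 3)` face-index array that the later centroid interpolation consumes, so one triangulation serves both the matching similarity check and the densification step.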
A window matching algorithm is used to find matching reference speckle points on the reference speckle image that are in the same row as each of the target speckle points, as described in step 104.
In one embodiment, specifically, the present step includes:
Step 1041a, establishing a first window with the target speckle point as the window center;
Step 1042a, establishing a second window with each reference speckle point on the reference speckle image that lies in the same row as the target speckle point as the window center;
Step 1043a, performing a correlation operation on the pixel gray values in the first window and the second window to obtain a matching cost;
Step 1044a, taking the window center of the second window corresponding to the extreme value of the matching cost as the reference speckle point matched with the target speckle point.
After stereo correction, the input speckle image and the reference speckle image are row-aligned. That is, when searching the reference speckle image for a reference speckle point matching a target speckle point on the input speckle image, only the same row needs to be searched.
Specifically, a first window (for example, of size 4 × 5) is established with a target speckle point on the input speckle image as its center; a second window (for example, also of size 4 × 5) is then established with each reference speckle point on the reference speckle image that lies in the same row as the target speckle point as its center; finally, a correlation operation is performed on the pixel gray values in the first window and the second window to obtain a matching cost.
The correlation operation proceeds as follows: first, the pixel gray values of all pixels in the second window are inverted; then each pixel gray value in the first window is AND-ed with the (inverted) gray value of the corresponding pixel in the second window; finally, the number of overlapping scattered spots in the two windows is counted, and the ratio of this number to the total number of scattered spots in the window is taken as the matching cost.
And then, taking the window center of the second window corresponding to the extreme value in the matching cost as a reference scattered spot matched with the target scattered spot.
In another embodiment, specifically, this step comprises:
Step 1041b, establishing a first window with the target speckle point as the window center;
Step 1042b, establishing a second window with each reference speckle point on the reference speckle image that lies in the same row as the target speckle point as the window center;
Step 1043b, performing a similarity evaluation between each target triangular mesh in the first window and each reference triangular mesh in the second window;
Step 1044b, taking the window center of the second window with the highest similarity evaluation as the reference speckle point matched with the target speckle point.
Unlike the foregoing embodiment, in this embodiment, after step 103 is executed (i.e., after the target topology of the target speckle points in the input speckle image is constructed), the method further includes: constructing a plurality of reference triangular faces with the reference speckle points in the reference speckle image as vertices, where the reference triangular faces do not overlap one another and together form a reference triangular mesh. In other words, a reference triangular mesh is also constructed over the reference speckle points in the reference speckle image; for the construction method, refer to the description of building the target triangular mesh in step 103, which is not repeated here.
Further, a similarity evaluation is performed between each target triangular mesh in the first window and each reference triangular mesh in the second window, and the window center of the second window with the highest similarity evaluation is taken as the reference speckle point matched with the target speckle point. That is, in this embodiment, whether the speckle points in the two windows match (i.e., the reference speckle point and the target speckle point) is judged from the similarity of the triangular meshes constructed in the two windows (i.e., the reference triangular mesh and the target triangular mesh). In practice, topology-based window matching can improve both the speed and the accuracy of matching.
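The patent does not prescribe a similarity measure for the two triangular meshes. One possible sketch, entirely an assumption of this illustration, compares triangles by their sorted side lengths (a translation-invariant shape signature) and scores a mesh pair by how closely each target triangle can be matched in shape among the reference triangles:

```python
import numpy as np
from itertools import combinations

def triangle_signature(p0, p1, p2):
    """Shape descriptor of a triangle: its sorted side lengths
    (invariant to where the triangle sits in the image)."""
    pts = [np.asarray(p, dtype=float) for p in (p0, p1, p2)]
    sides = sorted(np.linalg.norm(a - b) for a, b in combinations(pts, 2))
    return np.array(sides)

def mesh_similarity(tris_a, tris_b):
    """Illustrative similarity between two triangle sets: for each triangle
    in set A, take the closest-shaped triangle in set B, then average the
    negated signature distances. Higher (closer to 0) is more similar."""
    dists = []
    for ta in tris_a:
        sig_a = triangle_signature(*ta)
        best = min(np.linalg.norm(sig_a - triangle_signature(*tb))
                   for tb in tris_b)
        dists.append(best)
    return -float(np.mean(dists))

tri_a = ((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))
tri_b = ((5.0, 5.0), (6.0, 5.0), (5.0, 6.0))   # same shape, translated
sim = mesh_similarity([tri_a], [tri_b])        # perfect shape match -> 0.0
```

Under this choice, the candidate second window whose reference mesh yields the highest `mesh_similarity` against the target mesh in the first window would be selected in step 1044b.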
The disparity values for the target speckle point and the matching reference speckle point are determined from the reference speckle points that match the target speckle point, as described in step 105.
As is known to those skilled in the art, disparity refers to the horizontal distance between two matched pixels in two images. Thus, once the reference speckle point matching a target speckle point on the input speckle image has been found on the reference speckle image, the disparity value between the target speckle point and its matched reference speckle point can be determined from the horizontal distance between the two points.
The disparity value is interpolated based on the target topology of the target speckle to obtain a disparity map of the input speckle image, as described in step 106.
Because the speckle points projected by the speckle projector are usually sparse, a disparity map obtained from the disparity values of the target speckle points alone is sparse as well. The disparity values therefore need to be interpolated using the previously constructed target topology of the target speckle points, yielding a denser disparity map and improving the accuracy of the depth image.
Specifically, the method comprises the following steps:
Step 1061, taking the center point of each target triangular face as an interpolation point, where the disparity value of the interpolation point is determined from the disparity values of the three vertices of the target triangular face.
Step 1062, generating a disparity map of the input speckle image from the disparity value of each target speckle point and the disparity value of each interpolation point.
In step 1061, the target topology is a target triangular mesh, and the center point of each target triangular face in the mesh (for example, the geometric center of the face) is taken as an interpolation point; the disparity value of the interpolation point is obtained by a linear operation on the disparity values of the three vertices of the face (for example, averaging them). In other embodiments, the position of the interpolation point and its disparity value may be determined in other ways.
A disparity map of the input speckle image is then generated from the disparity value of each target speckle point (i.e., its real disparity value) and the disparity value of each interpolation point; after the interpolation operation, the resulting disparity map is denser. Fig. 4 is a schematic diagram of the disparity values of the target speckle points in the input speckle image before and after interpolation in the speckle-image-based depth estimation method of the present application.
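Steps 1061-1062 can be sketched as follows, assuming the example choices named above (geometric center as the interpolation point, vertex averaging as the linear operation); array shapes and names are illustrative:

```python
import numpy as np

def interpolate_disparities(points, disparities, simplices):
    """For each triangular face, place an interpolation point at its
    centroid; the centroid's disparity is the mean of the disparities
    of the face's three vertices (step 1061)."""
    points = np.asarray(points, dtype=float)        # (N, 2) speckle centers
    disparities = np.asarray(disparities, float)    # (N,) vertex disparities
    centroids = points[simplices].mean(axis=1)      # (T, 2) face centers
    interp_vals = disparities[simplices].mean(axis=1)  # (T,) interpolated
    return centroids, interp_vals

# One triangular face over three target speckle points.
pts = np.array([[0.0, 0.0], [6.0, 0.0], [0.0, 6.0]])
disp = np.array([3.0, 6.0, 9.0])
simplices = np.array([[0, 1, 2]])
centers, vals = interpolate_disparities(pts, disp, simplices)
```

The dense disparity map of step 1062 is then the union of the original vertex disparities and these interpolated values; with more faces, the map densifies accordingly.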
The disparity map is converted to a depth image of the input speckle image, as set forth in step 107.
As is known to those skilled in the art, the depth value and the disparity value are generally inversely related: the larger the disparity value, the smaller the depth value, and vice versa. Different conversion formulas may be used in specific applications; this embodiment does not limit the conversion formula, and any existing formula may be used to convert disparity values into depth values. The disparity map obtained in step 106 (containing the disparity values of the target speckle points and of the interpolation points) is thus converted by such a formula into the depth image of the input speckle image.
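Since the embodiment leaves the conversion formula open, one common choice (an assumption of this sketch, not the patent's formula) is the rectified-stereo relation depth = f * B / d, where f is the focal length in pixels, B the baseline, and d the disparity; it exhibits exactly the inverse relation described above. The numeric values below are illustrative:

```python
def disparity_to_depth(disparity, focal_px, baseline_mm):
    """Classic rectified-stereo conversion: depth = f * B / d.
    Larger disparity -> smaller depth, matching the inverse relation."""
    if disparity <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_mm / disparity

# Illustrative parameters: 500 px focal length, 50 mm baseline.
depth_near = disparity_to_depth(40.0, focal_px=500.0, baseline_mm=50.0)
depth_far = disparity_to_depth(10.0, focal_px=500.0, baseline_mm=50.0)
```

Applying such a function pixel-wise to the dense disparity map of step 106 yields the depth image of step 107.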
In this embodiment, the input speckle image is obtained by preprocessing the initial speckle image acquired by the image acquisition device. The preprocessing includes noise reduction, background removal, and similar operations on the acquired initial speckle image: noise reduction eliminates the influence of factors such as ambient light, and background removal strips away everything in the initial speckle image other than the object to be identified (i.e., the image background). Both operations can be implemented with existing techniques and are not described further here.
It should be noted that the initial speckle image acquired by the image acquisition device is an image containing the scattered spots projected onto the object to be identified by the speckle projector. That is to say, the initial speckle image contains not only speckle pixels but also other image pixels, and the speckle pixels account for only a small proportion of the whole initial speckle image.
Based on the method embodiment, the application also provides a depth estimation device based on the speckle image. Fig. 5 is a schematic structural diagram of an embodiment of the depth estimation device based on speckle images according to the present application. Referring to fig. 5, the apparatus 5 includes:
a speckle point detection module 51, configured to perform speckle detection on the input speckle image to determine a target speckle point; a stereo correction module 52, configured to perform stereo correction on the input speckle image and the reference speckle image, so that the input speckle image and the reference speckle image are aligned in a row; a topology construction module 53, configured to construct a target topology of the target speckle in the input speckle image; a window matching module 54, configured to find, on the reference speckle image, a matched reference speckle that is in the same row as each of the target speckle by using a window matching algorithm; a disparity value determining module 55, configured to determine, according to the reference speckle point matched with the target speckle point, a disparity value between the target speckle point and the matched reference speckle point; an interpolation processing module 56, configured to interpolate the disparity value based on a target topology of the target speckle to obtain a disparity map of the input speckle image; a depth image generation module 57, configured to convert the disparity map into a depth image of the input speckle image.
The speckle detection module 51 is configured to detect the input speckle image based on a gray gradient of a pixel point to determine a preliminary speckle in the input speckle image; and determining the sub-pixel central point of each preliminary scattered spot by utilizing a quadratic paraboloid fitting algorithm, and taking the sub-pixel central point as a target scattered spot.
The speckle point detection module 51 is further configured to determine a gray scale gradient of each pixel point in a first neighborhood of the central point by using each pixel point in the input speckle image as the central point; and if the direction of the gray gradient of each pixel point meets the preset gradient direction distribution, determining the central point as a preliminary scattered spot in the input speckle image.
The speckle point detecting module 51 is further configured to determine that the central point is a preliminary speckle point in the input speckle image if the gray-level value of the pixel point in the first neighborhood is inversely proportional to the distance between the pixel point and the central point, and the number of the pixel points in the first neighborhood that satisfy the preset gradient direction is greater than a preset threshold of the number of pixels.
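The gradient-direction test performed by the detection module can be sketched as below. Everything concrete here is an assumption of the illustration: the neighborhood radius, the pixel-count threshold, the cosine threshold standing in for the "preset gradient direction distribution", and the synthetic test image; the local-maximum check stands in for the condition that gray value falls off with distance from the center:

```python
import numpy as np

def is_preliminary_speckle(img, cx, cy, radius=2, min_count=8, cos_thresh=0.5):
    """Gradient-direction test for a bright speckle center at (cx, cy):
    the candidate must be a local intensity maximum, and in its first
    neighborhood the gray gradients must point back toward the center."""
    gy, gx = np.gradient(img.astype(float))
    patch = img[cy - radius:cy + radius + 1, cx - radius:cx + radius + 1]
    if img[cy, cx] < patch.max():       # intensity must peak at the center
        return False
    count = 0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dx == 0 and dy == 0:
                continue
            x, y = cx + dx, cy + dy
            g = np.array([gx[y, x], gy[y, x]])          # gray gradient
            to_center = np.array([-dx, -dy], dtype=float)
            denom = np.linalg.norm(g) * np.linalg.norm(to_center)
            if denom > 0 and g @ to_center / denom > cos_thresh:
                count += 1                              # gradient points inward
    return count >= min_count

# Synthetic Gaussian-like bright spot centered at (5, 5) in an 11x11 image.
yy, xx = np.mgrid[0:11, 0:11]
img = np.exp(-((xx - 5) ** 2 + (yy - 5) ** 2) / 4.0) * 255
```

At the true spot center the gradients of all neighborhood pixels point inward, so the count clears the threshold; an off-center candidate fails the local-maximum check.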
The device 5 further includes a connected-region processing module (not shown in fig. 5) configured to, if several mutually adjacent preliminary scattered spots exist among the determined preliminary scattered spots, form a connected region from those adjacent preliminary scattered spots and take only the center point of the connected region as the preliminary scattered spot.
The speckle point detection module 51 is further configured to establish a second neighborhood based on the preliminary speckle points; constructing a quadratic function according to the position coordinates of each pixel point in the second neighborhood and the gray value of the pixel point; obtaining a fitting curved surface under the condition that the quadratic function meets the constraint of a quadratic paraboloid; and taking a pixel point corresponding to the position coordinate of the highest point projection of the fitting curved surface as the sub-pixel central point of the preliminary scattered spot.
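The quadratic paraboloid fit can be sketched as a least-squares problem: fit z = a·x² + b·y² + c·x + d·y + e to the gray values in the second neighborhood, then take the projection of the fitted surface's highest point, (-c/2a, -d/2b), as the sub-pixel center. The neighborhood size and the synthetic image are assumptions of this sketch:

```python
import numpy as np

def subpixel_center(img, cx, cy, radius=2):
    """Fit z = a*x^2 + b*y^2 + c*x + d*y + e to the gray values around
    (cx, cy) by least squares; return the (x, y) of the fitted surface's
    peak as the sub-pixel center of the preliminary speckle point."""
    xs, ys, zs = [], [], []
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            xs.append(cx + dx)
            ys.append(cy + dy)
            zs.append(float(img[cy + dy, cx + dx]))
    xs, ys, zs = np.array(xs, float), np.array(ys, float), np.array(zs)
    A = np.column_stack([xs ** 2, ys ** 2, xs, ys, np.ones_like(xs)])
    a, b, c, d, _ = np.linalg.lstsq(A, zs, rcond=None)[0]
    return -c / (2 * a), -d / (2 * b)   # vertex of the fitted paraboloid

# Sample a paraboloid whose true peak lies at (5.3, 4.8) on an 11x11 grid.
yy, xx = np.mgrid[0:11, 0:11]
img = 200 - 3 * (xx - 5.3) ** 2 - 3 * (yy - 4.8) ** 2
x0, y0 = subpixel_center(img, 5, 5)
```

Starting from the integer-pixel preliminary spot (5, 5), the fit recovers the sub-pixel peak (5.3, 4.8), which then serves as the target speckle point.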
The window matching module 54 is configured to establish a first window with the target speckle as a window center; establishing a second window by taking each reference scattered spot on the reference speckle image in the same line with the target scattered spot as a window center; performing correlation operation on the pixel gray values in the first window and the second window to obtain matching cost; and taking the window center of the second window corresponding to the extreme value in the matching cost as a reference scattered spot matched with the target scattered spot.
The topological structure building module 53 is configured to build a plurality of target triangular surfaces based on each target speckle in the input speckle image as a vertex, where the target triangular surfaces are not overlapped with each other to form a target triangular mesh.
The topological structure building module 53 is further configured to build a plurality of reference triangular surfaces based on the reference scattered spots in the reference speckle image as vertices, and the reference triangular surfaces are not overlapped with each other to form a reference triangular mesh. The window matching module 54 is further configured to establish a first window with the target speckle as a window center; establishing a second window by taking each reference scattered spot on the reference speckle image in the same line with the target scattered spot as a window center; carrying out similarity evaluation on each target triangular grid in the first window and each reference triangular grid in the second window; and taking the window center of the second window with the highest similarity evaluation as a reference scattered spot matched with the target scattered spot.
The interpolation processing module 56 is configured to use a central point of each of the target triangular surfaces as an interpolation point, where a disparity value of the interpolation point is determined based on disparity values of three vertices of the target triangular surface; and generating a disparity map of the input speckle image based on the disparity value of each target speckle and the disparity value of each interpolation point.
For the specific processing procedure of each module in the depth estimation device 5 based on the speckle image according to this embodiment, reference may be made to the above method embodiments, which are not described herein again.
FIG. 6 is a schematic block diagram of an embodiment of a computer apparatus of the present application.
The computer device may include a memory, a processor, and a computer program stored on the memory and executable on the processor, and when the processor executes the computer program, the depth estimation method based on the speckle image provided by the embodiment of the present application may be implemented.
The computer device may be a server (for example, a cloud server) or an electronic device (for example, a smartphone, a smart watch, or a tablet computer); this embodiment does not limit the specific form of the computer device.
FIG. 6 illustrates a block diagram of an exemplary computer device 12 suitable for use in implementing embodiments of the present application. The computer device 12 shown in fig. 6 is only an example and should not bring any limitation to the function and scope of use of the embodiments of the present application.
As shown in FIG. 6, computer device 12 is in the form of a general purpose computing device. The components of computer device 12 may include, but are not limited to: one or more processors or processing units 16, a system memory 28, and a bus 18 that couples various system components including the system memory 28 and the processing unit 16.
Bus 18 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus.
Computer device 12 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by computer device 12 and includes both volatile and nonvolatile media, removable and non-removable media.
The system memory 28 may include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM) 30 and/or cache memory 32. Computer device 12 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 34 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 6, and commonly referred to as a "hard drive"). Although not shown in FIG. 6, a disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a Compact Disc Read-Only Memory (CD-ROM), a Digital Versatile Disc Read-Only Memory (DVD-ROM), or other optical media) may be provided. In these cases, each drive may be connected to bus 18 by one or more data media interfaces. Memory 28 may include at least one program product having a set (e.g., at least one) of program modules configured to carry out the functions of embodiments of the application.
Program/utility 40 having a set (at least one) of program modules 62 may be stored, for example, in memory 28, such program modules 62 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each of which examples or some combination thereof may comprise an implementation of a network environment. Program modules 62 generally perform the functions and/or methodologies of the embodiments described herein.
Computer device 12 may also communicate with one or more external devices 14 (e.g., keyboard, pointing device, display 24, etc.), with one or more devices that enable a user to interact with computer device 12, and/or with any devices (e.g., network card, modem, etc.) that enable computer device 12 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 22. Moreover, computer device 12 may also communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the Internet) via network adapter 20. As shown in FIG. 6, the network adapter 20 communicates with the other modules of computer device 12 via the bus 18. It should be appreciated that although not shown in FIG. 6, other hardware and/or software modules may be used in conjunction with computer device 12, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems.
The processing unit 16 executes programs stored in the system memory 28 to perform various functional applications and data processing, such as implementing the depth estimation method based on speckle images provided by the embodiments of the present application.
An embodiment of the present application further provides a face recognition system, and fig. 7 is a schematic structural diagram of an embodiment of the face recognition system of the present application.
Referring to fig. 7, the face recognition system 7 includes a speckle projector 71, an image acquisition device 72, and the speckle-image-based depth estimation device 73 provided in the above-described embodiments.
the speckle projector 71 is used for generating laser speckle to be projected to a human face; the image collecting device 72 is configured to collect an optical signal formed by reflecting the laser speckle on the face to obtain an initial speckle image; the speckle image based depth estimation device 73 is configured to process the initial speckle image to obtain a depth image. The face recognition system 7 can determine the distance between the face and the image acquisition device 72 according to the obtained depth image, and further recognize the face.
Embodiments of the present application further provide a non-transitory computer-readable storage medium, on which a computer program is stored, where when the computer program is executed by a processor, the method for depth estimation based on a speckle image provided in an embodiment of the present application may be implemented.
The non-transitory computer readable storage medium described above may take any combination of one or more computer readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read Only Memory (ROM), an Erasable Programmable Read Only Memory (EPROM), a flash Memory, an optical fiber, a portable compact disc Read Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present application may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, or C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the scope of protection of the present application.

Claims (23)

1. A depth estimation method based on speckle images is characterized by comprising the following steps:
detecting an input speckle image based on the gray gradient of a pixel point to determine a preliminary scattered spot in the input speckle image;
determining a sub-pixel central point of each preliminary scattered spot by utilizing a quadratic paraboloid fitting algorithm, and taking the sub-pixel central point as a target scattered spot;
performing stereo correction on the input speckle image and the reference speckle image to align the input speckle image and the reference speckle image;
constructing a target topological structure of the target scattered spots in the input speckle image;
searching matched reference scattered spots which are positioned in the same line with each target scattered spot on the reference speckle image by using a window matching algorithm;
determining a parallax value of the target speckle according to the reference speckle point matched with the target speckle;
interpolating the parallax value based on the target topological structure of the target scattered spots to obtain a parallax image of the input speckle image;
converting the disparity map into a depth image of the input speckle image.
2. The method of claim 1, further comprising, prior to performing the pixel-point based gray gradient detection on the input speckle image to determine preliminary speckle in the input speckle image: and preprocessing the acquired initial speckle image to obtain the input speckle image.
3. The method of claim 1, wherein the detecting the input speckle image based on the gray scale gradients of the pixel points to determine preliminary speckle points in the input speckle image comprises:
respectively taking each pixel point in the input speckle image as a central point, and determining the gray gradient of each pixel point in a first neighborhood of the central point;
and if the direction of the gray gradient of each pixel point meets the preset gradient direction distribution, determining the central point as a preliminary scattered spot in the input speckle image.
4. The method of claim 3, wherein if the direction of the gray gradient of each pixel point satisfies a preset gradient direction distribution, determining the center point as a preliminary speckle point in the input speckle image comprises:
and if the pixel gray value of the pixel point in the first neighborhood is inversely proportional to the distance between the pixel point and the central point, and the number of the pixel points meeting the preset gradient direction in the first neighborhood is greater than a preset pixel number threshold, determining that the central point is a preliminary scattered spot in the input speckle image.
5. The method of claim 1, further comprising, after performing the pixel-point-based gray scale gradient detection on the input speckle image to determine preliminary speckle points in the input speckle image:
and if a plurality of mutually adjacent preliminary scattered spots exist among the determined preliminary scattered spots, forming a connected region based on the mutually adjacent preliminary scattered spots, and taking only the central point of the connected region as the preliminary scattered spot.
6. The method of claim 1, wherein the determining the sub-pixel center point of each of the preliminary speckle points using a quadratic parabolic fit algorithm, the using the sub-pixel center point as a target speckle point comprises:
establishing a second neighborhood based on the preliminary speckle points;
constructing a quadratic function according to the position coordinates of each pixel point in the second neighborhood and the gray value of the pixel point;
obtaining a fitting curved surface under the condition that the quadratic function meets the constraint of a quadratic paraboloid;
and taking a pixel point corresponding to the position coordinate of the highest point projection of the fitting curved surface as the sub-pixel central point of the preliminary scattered spot.
7. The method of claim 1, wherein finding a matching reference speckle point on the reference speckle image that is in the same row as each of the target speckle spots using a window matching algorithm comprises:
establishing a first window by taking the target scattered spot as a window center;
establishing a second window by taking each reference scattered spot on the reference speckle image in the same line with the target scattered spot as a window center;
performing correlation operation on the pixel gray values in the first window and the second window to obtain matching cost;
and taking the window center of the second window corresponding to the extreme value in the matching cost as a reference scattered spot matched with the target scattered spot.
8. The method of claim 1, wherein said constructing the target topology of the target speckle in the input speckle image comprises:
and constructing a plurality of target triangular surfaces based on the target scattered spots in the input speckle image as vertexes, wherein the target triangular surfaces are not overlapped with each other to form a target triangular mesh.
9. The method of claim 8, further comprising, after performing said constructing the target topology of the target speckle in the input speckle image: constructing a plurality of reference triangular surfaces based on the reference scattered spots in the reference speckle image as vertexes, wherein the reference triangular surfaces are not overlapped with each other to form a reference triangular mesh;
the finding, by using a window matching algorithm, a matched reference speckle point on the reference speckle image in the same row as each of the target scattered spots includes:
establishing a first window by taking the target scattered spot as a window center;
establishing a second window by taking each reference scattered spot on the reference speckle image in the same line with the target scattered spot as a window center;
carrying out similarity evaluation on each target triangular grid in the first window and each reference triangular grid in the second window;
and taking the window center of the second window with the highest similarity evaluation as a reference scattered spot matched with the target scattered spot.
10. The method of claim 8, wherein said interpolating the disparity values based on the target topology of the target speckle points to obtain the disparity map of the input speckle image comprises:
taking the center point of each target triangular face as an interpolation point, wherein the disparity value of the interpolation point is determined based on the disparity values of the three vertices of the target triangular face;
and generating the disparity map of the input speckle image based on the disparity value of each target speckle point and the disparity value of each interpolation point.
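A minimal sketch of this interpolation step: the claim only requires the interpolation point's disparity to be determined from the three vertex disparities, so the arithmetic mean at the centroid is used here as one simple choice (an assumption, not the patent's prescribed formula).

```python
import numpy as np

def interpolate_disparity(vertices, disparities, faces):
    """For each triangular face, take its centroid as an interpolation
    point and assign it the mean of the three vertex disparities."""
    centers, values = [], []
    for (i, j, k) in faces:
        centers.append((vertices[i] + vertices[j] + vertices[k]) / 3.0)
        values.append((disparities[i] + disparities[j] + disparities[k]) / 3.0)
    return np.array(centers), np.array(values)
```

Repeating this over the whole mesh densifies the sparse per-speckle disparities into the disparity map.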
11. A speckle-image-based depth estimation apparatus, comprising:
a speckle point detection module, configured to detect an input speckle image based on the gray gradients of its pixel points so as to determine preliminary speckle points in the input speckle image, determine the sub-pixel center point of each preliminary speckle point by using a quadratic paraboloid fitting algorithm, and take the sub-pixel center points as target speckle points;
a stereo correction module, configured to perform stereo correction on the input speckle image and a reference speckle image so as to align the input speckle image with the reference speckle image;
a topology construction module, configured to construct a target topology of the target speckle points in the input speckle image;
a window matching module, configured to find, by using a window matching algorithm, the matched reference speckle point on the reference speckle image in the same row as each target speckle point;
a disparity value determination module, configured to determine the disparity value between the target speckle point and the matched reference speckle point according to the reference speckle point matched with the target speckle point;
an interpolation processing module, configured to interpolate the disparity values based on the target topology of the target speckle points so as to obtain a disparity map of the input speckle image;
and a depth image generation module, configured to convert the disparity map into a depth image of the input speckle image.
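The final module's disparity-to-depth conversion is not spelled out in the claims; for a rectified stereo pair it is conventionally the pinhole relation depth = f * B / d. The sketch below assumes that standard relation, with focal length in pixels and baseline in millimetres as illustrative parameters.

```python
import numpy as np

def disparity_to_depth(disparity, focal_px, baseline_mm, eps=1e-6):
    """Convert a disparity map to a depth image via depth = f * B / d.
    Pixels with (near-)zero disparity have undefined depth and are
    left at 0 as an invalid-value convention."""
    d = np.asarray(disparity, dtype=np.float64)
    depth = np.zeros_like(d)
    valid = d > eps
    depth[valid] = focal_px * baseline_mm / d[valid]
    return depth
```

Note the inverse relationship: halving the disparity doubles the estimated depth, which is why sub-pixel disparity accuracy (claims 13-16) matters most for distant points.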
12. The apparatus of claim 11, further comprising a pre-processing module configured to pre-process an acquired initial speckle image to obtain the input speckle image.
13. The apparatus of claim 11, wherein the speckle point detection module is further configured to: take each pixel point in the input speckle image in turn as a center point and determine the gray gradient of each pixel point in a first neighborhood of that center point; and if the directions of the gray gradients of those pixel points satisfy a predetermined gradient direction distribution, determine the center point as a preliminary speckle point in the input speckle image.
14. The apparatus of claim 13, wherein the speckle point detection module is further configured to determine the center point as a preliminary speckle point in the input speckle image if the gray values of the pixel points in the first neighborhood are inversely proportional to their distances from the center point, and the number of pixel points in the first neighborhood satisfying the predetermined gradient direction is greater than a predetermined pixel number threshold.
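The gradient-direction criterion of claims 13 and 14 can be sketched as follows: for a bright speckle, the gray gradient at each neighbor should point back toward the center. The exact distribution test, neighborhood radius, and count threshold are illustrative assumptions here, not values from the patent.

```python
import numpy as np

def is_preliminary_speckle(img, cy, cx, radius=2, min_count=12):
    """Return True if, in a (2r+1)x(2r+1) neighborhood of (cy, cx),
    at least `min_count` pixels have a gray gradient pointing toward
    the center -- a rough form of the gradient-direction criterion."""
    gy, gx = np.gradient(img.astype(np.float64))  # d/dy, d/dx
    count = 0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dy == 0 and dx == 0:
                continue
            y, x = cy + dy, cx + dx
            # The gradient points uphill; for a bright spot it should
            # point from the neighbor back toward the center (-dx, -dy).
            if gx[y, x] * (-dx) + gy[y, x] * (-dy) > 0:
                count += 1
    return count >= min_count
```

A count threshold rather than an all-pixels test makes the detector tolerant of sensor noise on individual neighbors.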
15. The apparatus of claim 11, further comprising a connected component processing module configured to, if a plurality of adjacently located pixel points exist among the preliminary speckle points, form a connected component from the plurality of adjacently located pixel points and take only the center point of the connected component as a preliminary speckle point.
16. The apparatus of claim 11, wherein the speckle point detection module is further configured to: establish a second neighborhood based on each preliminary speckle point; construct a quadratic function from the position coordinates and gray values of the pixel points in the second neighborhood; obtain a fitted surface under the constraint that the quadratic function is a quadratic paraboloid; and take the position onto which the highest point of the fitted surface projects as the sub-pixel center point of the preliminary speckle point.
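The paraboloid-fitting step can be sketched as a least-squares fit of a quadratic surface to the neighborhood gray values, with the sub-pixel center taken where the fitted surface's gradient vanishes. The specific basis (with cross term) and solver below are assumptions; the patent only requires a quadratic paraboloid fit.

```python
import numpy as np

def subpixel_peak(patch):
    """Fit z = a*x^2 + b*y^2 + c*x*y + d*x + e*y + f to the gray values
    of a small neighborhood by least squares, and return the (x, y)
    offset of the surface peak relative to the patch center."""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    xs = xs - w // 2
    ys = ys - h // 2
    x, y, z = xs.ravel(), ys.ravel(), patch.ravel().astype(np.float64)
    A = np.column_stack([x * x, y * y, x * y, x, y, np.ones_like(z)])
    a, b, c, d, e, _ = np.linalg.lstsq(A, z, rcond=None)[0]
    # Peak: the gradient of the fitted surface is zero.
    M = np.array([[2 * a, c], [c, 2 * b]])
    px, py = np.linalg.solve(M, [-d, -e])
    return px, py
```

Because the speckle intensity profile varies smoothly near its maximum, this refinement recovers the center to a fraction of a pixel even on a coarse grid.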
17. The apparatus of claim 11, wherein the window matching module is configured to: establish a first window with the target speckle point as the window center; establish a second window with each reference speckle point on the reference speckle image in the same row as the target speckle point as the window center; perform a correlation operation on the pixel gray values in the first window and the second window to obtain a matching cost; and take the window center of the second window corresponding to the extreme value of the matching cost as the reference speckle point matched with the target speckle point.
18. The apparatus of claim 11, wherein the topology construction module is configured to construct a plurality of target triangular faces with the target speckle points in the input speckle image as vertices, the target triangular faces not overlapping one another and together forming a target triangular mesh.
19. The apparatus of claim 18, wherein the topology construction module is further configured to construct a plurality of reference triangular faces with the reference speckle points in the reference speckle image as vertices, the reference triangular faces not overlapping one another and together forming a reference triangular mesh;
and the window matching module is further configured to: establish a first window with the target speckle point as the window center; establish a second window with each reference speckle point on the reference speckle image in the same row as the target speckle point as the window center; evaluate the similarity between each target triangular mesh in the first window and each reference triangular mesh in the second window; and take the window center of the second window with the highest similarity evaluation as the reference speckle point matched with the target speckle point.
20. The apparatus of claim 18, wherein the interpolation processing module is configured to take the center point of each target triangular face as an interpolation point, the disparity value of the interpolation point being determined based on the disparity values of the three vertices of the target triangular face; and generate the disparity map of the input speckle image based on the disparity value of each target speckle point and the disparity value of each interpolation point.
21. A computer device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the method of any one of claims 1 to 10 when executing the computer program.
22. A non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method of any one of claims 1 to 10.
23. A face recognition system, comprising: a speckle projector, an image acquisition device, and the speckle-image-based depth estimation apparatus of any one of claims 11 to 20; wherein:
the speckle projector is configured to generate laser speckles projected onto a human face;
the image acquisition device is configured to acquire the optical signal formed by the laser speckles reflected from the face, so as to obtain an initial speckle image;
and the speckle-image-based depth estimation apparatus is configured to process the initial speckle image to obtain a depth image.
CN201980000582.4A 2019-04-12 2019-04-12 Depth estimation method and device based on speckle images and face recognition system Active CN112771573B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/082373 WO2020206666A1 (en) 2019-04-12 2019-04-12 Depth estimation method and apparatus employing speckle image and face recognition system

Publications (2)

Publication Number Publication Date
CN112771573A CN112771573A (en) 2021-05-07
CN112771573B true CN112771573B (en) 2023-01-20

Family

ID=72750798

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201980000582.4A Active CN112771573B (en) 2019-04-12 2019-04-12 Depth estimation method and device based on speckle images and face recognition system

Country Status (2)

Country Link
CN (1) CN112771573B (en)
WO (1) WO2020206666A1 (en)

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112598717A (en) * 2020-12-14 2021-04-02 珠海欧比特宇航科技股份有限公司 Full-spectrum registration method and medium for hyperspectral satellite images
CN112669362B (en) * 2021-01-12 2024-03-29 四川深瑞视科技有限公司 Depth information acquisition method, device and system based on speckles
CN112861764B (en) * 2021-02-25 2023-12-08 广州图语信息科技有限公司 Face recognition living body judging method
CN113298785A (en) * 2021-05-25 2021-08-24 Oppo广东移动通信有限公司 Correction method, electronic device, and computer-readable storage medium
CN113409404B (en) * 2021-06-29 2023-06-16 常熟理工学院 CUDA architecture parallel optimization three-dimensional deformation measurement method based on novel correlation function constraint
CN113658241B (en) * 2021-08-16 2022-12-16 合肥的卢深视科技有限公司 Monocular structured light depth recovery method, electronic device and storage medium
CN113888614B (en) * 2021-09-23 2022-05-31 合肥的卢深视科技有限公司 Depth recovery method, electronic device, and computer-readable storage medium
CN113936050B (en) * 2021-10-21 2022-08-12 合肥的卢深视科技有限公司 Speckle image generation method, electronic device, and storage medium
CN113936049A (en) * 2021-10-21 2022-01-14 北京的卢深视科技有限公司 Monocular structured light speckle image depth recovery method, electronic device and storage medium
CN114387324A (en) * 2021-12-22 2022-04-22 北京的卢深视科技有限公司 Depth imaging method, depth imaging device, electronic equipment and computer readable storage medium
CN114283089B (en) * 2021-12-24 2023-01-31 合肥的卢深视科技有限公司 Jump acceleration based depth recovery method, electronic device, and storage medium
CN114332014A (en) * 2021-12-29 2022-04-12 合肥瑞识智能科技有限公司 Projector quality evaluation method, device, equipment and storage medium
CN114299129B (en) * 2021-12-31 2023-01-31 合肥的卢深视科技有限公司 Depth recovery method, electronic device, and computer-readable storage medium
CN116067305A (en) * 2023-02-09 2023-05-05 深圳市安思疆科技有限公司 Structured light measurement system and measurement method
CN115861308B (en) * 2023-02-22 2023-05-12 山东省林草种质资源中心(山东省药乡林场) Acer truncatum disease detection method
CN116823809B (en) * 2023-08-23 2023-11-24 威海迈尼生物科技有限公司 Visual detection method for speckle reduction effect of microneedle patch technology
CN117409174B (en) * 2023-12-14 2024-03-15 南昌虚拟现实研究院股份有限公司 Speckle image temperature compensation method and device, readable medium and electronic equipment


Family Cites Families (7)

Publication number Priority date Publication date Assignee Title
US9514522B2 (en) * 2012-08-24 2016-12-06 Microsoft Technology Licensing, Llc Depth data processing and compression
CN103268608B (en) * 2013-05-17 2015-12-02 清华大学 Based on depth estimation method and the device of near-infrared laser speckle
CN103279982B (en) * 2013-05-24 2016-06-22 中国科学院自动化研究所 The speckle three-dimensional rebuilding method of the quick high depth resolution of robust
US20160245641A1 (en) * 2015-02-19 2016-08-25 Microsoft Technology Licensing, Llc Projection transformations for depth estimation
US9959455B2 (en) * 2016-06-30 2018-05-01 The United States Of America As Represented By The Secretary Of The Army System and method for face recognition using three dimensions
CN108734776B (en) * 2018-05-23 2022-03-25 四川川大智胜软件股份有限公司 Speckle-based three-dimensional face reconstruction method and equipment
CN109461181B (en) * 2018-10-17 2020-10-27 北京华捷艾米科技有限公司 Depth image acquisition method and system based on speckle structured light

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
CN105160680A (en) * 2015-09-08 2015-12-16 北京航空航天大学 Design method of camera with no interference depth based on structured light
CN106954058A (en) * 2017-03-09 2017-07-14 深圳奥比中光科技有限公司 Depth image obtains system and method

Non-Patent Citations (1)

Title
Real-time scene depth recovery based on projected speckle (基于投影散斑的实时场景深度恢复); Wang Mengwei et al.; Journal of Computer-Aided Design & Computer Graphics (计算机辅助设计与图形学学报); Aug. 31, 2014; Vol. 26, No. 8; pp. 1304-1313 *

Also Published As

Publication number Publication date
WO2020206666A1 (en) 2020-10-15
CN112771573A (en) 2021-05-07

Similar Documents

Publication Publication Date Title
CN112771573B (en) Depth estimation method and device based on speckle images and face recognition system
CN113486797B (en) Unmanned vehicle position detection method, unmanned vehicle position detection device, unmanned vehicle position detection equipment, storage medium and vehicle
US10880541B2 (en) Stereo correspondence and depth sensors
CN110009727B (en) Automatic reconstruction method and system for indoor three-dimensional model with structural semantics
CN109300190B (en) Three-dimensional data processing method, device, equipment and storage medium
US10127685B2 (en) Profile matching of buildings and urban structures
CN109493375B (en) Data matching and merging method and device for three-dimensional point cloud and readable medium
US9135710B2 (en) Depth map stereo correspondence techniques
JP2013004088A (en) Image processing method, image processing device, scanner and computer program
WO2022227489A1 (en) Collision detection method and apparatus for objects, and device and storage medium
CN112764004A (en) Point cloud processing method, device, equipment and storage medium
US20220198743A1 (en) Method for generating location information, related apparatus and computer program product
CN113538555B (en) Volume measurement method, system, equipment and storage medium based on rule box
CN115346020A (en) Point cloud processing method, obstacle avoidance method, device, robot and storage medium
CN111815748A (en) Animation processing method and device, storage medium and electronic equipment
CN116721230A (en) Method, device, equipment and storage medium for constructing three-dimensional live-action model
CN114565721A (en) Object determination method, device, equipment, storage medium and program product
CN113658203A (en) Method and device for extracting three-dimensional outline of building and training neural network
CN113763468A (en) Positioning method, device, system and storage medium
CN112465692A (en) Image processing method, device, equipment and storage medium
CN113793370B (en) Three-dimensional point cloud registration method and device, electronic equipment and readable medium
US11783501B2 (en) Method and apparatus for determining image depth information, electronic device, and media
CN116246038B (en) Multi-view three-dimensional line segment reconstruction method, system, electronic equipment and medium
CN113673286B (en) Depth reconstruction method, system, equipment and medium based on target area
CN115760588A (en) Point cloud correction method and three-dimensional model generation method, device and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant