CN111080689B - Method and device for determining face depth map

Publication number: CN111080689B (application number CN201811231805.1A; earlier publication CN111080689A)
Inventors: 杨宏伟, 李�杰, 夏循龙, 毛慧, 浦世亮
Assignee: Hangzhou Hikvision Digital Technology Co Ltd
Legal status: Active

Classifications

    • G06T7/50 Image analysis - Depth or shape recovery
    • G06T7/55 Depth or shape recovery from multiple images
    • G06T2207/10004 Still image; Photographic image
    • G06T2207/30201 Human being; Person - Face

Abstract

The application discloses a method and a device for determining a face depth map, and belongs to the field of computer vision. The method comprises the following steps: acquiring a first two-dimensional face image and a second two-dimensional face image of a target user, and determining an initial face depth map of the target user based on the first two-dimensional face image and the second two-dimensional face image; determining normal information of face pixel points in the first two-dimensional face image based on the initial face depth map; for each face pixel point, determining, according to the normal information of the face pixel point, a target matching position in the second two-dimensional face image whose corresponding pixel value has the highest similarity with the pixel value of the face pixel point, and determining the depth value corresponding to the face pixel point according to the target matching position; and determining a corrected face depth map of the target user based on the depth value corresponding to each face pixel point in the first two-dimensional face image. With the method and the device, a more accurate face depth map can be obtained.

Description

Method and device for determining face depth map
Technical Field
The present application relates to the field of computer vision technologies, and in particular, to a method and an apparatus for determining a face depth map.
Background
With the development of image processing techniques, face image processing techniques have also been widely used, such as face recognition, face tracking, face alignment, and the like. Many face image processes require the use of face depth maps, which are composed of depth values corresponding to face pixel points in a face image.
There are two main types of methods for determining face depth maps: one is acquisition by high precision optical measurement equipment, for example, the determination of facial depth maps using 3D scanners based on structured light technology and line laser technology; the other is a binocular stereo vision method.
In the binocular stereo vision method, two shooting components photograph the face of a target user from different viewing angles to obtain two face images. Then, for each pixel point in one of the face images, the position information of the corresponding pixel point in the other face image is determined as follows: a pixel block centered on the pixel point is determined based on the position of the pixel point, where a commonly used pixel block size is 5 × 5; the 5 × 5 pixel block with the highest similarity to this pixel block is then determined in the other face image; the difference between the position information of the central pixel points of the two pixel blocks is computed to obtain the disparity value of the two central pixel points; and the depth value corresponding to the pixel point is calculated from the disparity value. Performing this processing on each pixel point yields the depth value corresponding to each pixel point in the face image, from which the face depth map of the face image is determined.
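For concreteness, the following is a minimal sketch of this related-art block matching, assuming two rectified grayscale images given as numpy arrays and treating the baseline B as a scalar length; the function and parameter names are illustrative, not taken from the patent:

```python
import numpy as np

def block_matching_depth(left, right, f, B, block=5, max_disp=64):
    """Naive fixed-window stereo matching: for each pixel in the left image,
    slide a block x block window along the same scanline in the right image,
    take the disparity with the lowest sum of absolute differences (SAD),
    and convert it to depth with z = f * B / d."""
    h, w = left.shape
    r = block // 2
    depth = np.zeros((h, w), dtype=np.float32)
    for y in range(r, h - r):
        for x in range(r, w - r):
            ref = left[y - r:y + r + 1, x - r:x + r + 1].astype(np.float32)
            best_d, best_cost = 0, np.inf
            for d in range(0, min(max_disp, x - r) + 1):
                cand = right[y - r:y + r + 1,
                             x - d - r:x - d + r + 1].astype(np.float32)
                cost = np.abs(ref - cand).sum()  # SAD similarity measure
                if cost < best_cost:
                    best_cost, best_d = cost, d
            if best_d > 0:
                depth[y, x] = f * B / best_d  # z = f * B / d
    return depth
```

Because the window is always kept square, this sketch exhibits exactly the weakness described next: when the surface is slanted relative to the cameras, the true corresponding region is no longer a square block.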
In the course of implementing the present application, the inventors found that the related art has at least the following problems:
in the related art, because the two images are captured from different shooting angles, the region in the other image that actually corresponds to a 5 × 5 pixel block in one face image is in general no longer a 5 × 5 square, so the determined position information of the central pixel point is inaccurate, and the depth values in the finally obtained face depth map are inaccurate.
Disclosure of Invention
In order to solve the problems in the prior art, embodiments of the present application provide a method and an apparatus for determining a face depth map. The technical scheme is as follows:
in a first aspect, a method of determining a face depth map is provided, the method comprising:
acquiring a first two-dimensional face image and a second two-dimensional face image of a target user, and determining an initial face depth map of the target user based on the first two-dimensional face image and the second two-dimensional face image;
determining normal information of face pixel points in the first two-dimensional face image based on the initial face depth map;
for each face pixel point in the first two-dimensional face image, according to the normal information of the face pixel point, determining a target matching position with the highest similarity between the corresponding pixel value and the pixel value of the face pixel point in the second two-dimensional face image, and according to the target matching position, determining a depth value corresponding to the face pixel point;
determining a corrected facial depth map for the target user based on the depth value corresponding to each facial pixel point in the first two-dimensional facial image.
Optionally, the determining, according to the normal information of the facial pixel point, a target matching position where a corresponding pixel value and a pixel value of the facial pixel point have a highest similarity in the second two-dimensional facial image includes:
determining a functional relationship between the depth value of the facial pixel point and the matching position according to the normal information of the facial pixel point, determining a set of admissible values of the matching position based on the functional relationship, and determining, in the set of admissible values, a target matching position whose corresponding pixel value has the highest similarity with the pixel value of the facial pixel point, wherein the matching position is the position, in the second two-dimensional face image, of the face pixel point corresponding to the facial pixel point;
the determining the depth value corresponding to the face pixel point according to the target matching position comprises:
and determining the depth value corresponding to the target matching position in the functional relation.
Optionally, the determining a functional relationship between the depth value of the facial pixel point and the matching position according to the normal information of the facial pixel point includes:

determining, according to the normal information of the facial pixel point, that the functional relationship between the depth value of the facial pixel point and the matching position is

P_r = K_r (R_lr - t_lr · n^T / z) · K_l^{-1} · P_l

wherein P_l is the position information of any facial pixel point in the first two-dimensional face image, P_r is the position information of the facial pixel point in the second two-dimensional face image corresponding to that facial pixel point, K_l is the internal reference matrix of the first image shooting component that captures the first two-dimensional face image, K_r is the internal reference matrix of the second image shooting component that captures the second two-dimensional face image, R_lr is the rotation parameter matrix from the first image shooting component to the second image shooting component, t_lr is the translation parameter matrix from the first image shooting component to the second image shooting component, n is the normal information of that facial pixel point, the normal information being a normal vector, and z is the depth value of that facial pixel point.
Optionally, the determining a set of admissible values for the matching locations based on the functional relationship includes:
determining a set of admissible values for disparity values based on resolutions of the first and second two-dimensional face images;
determining a set of admissible values for the depth values based on the set of admissible values for the disparity values;
determining a set of admissible values for the matching location based on the functional relationship and the set of admissible values for the depth values.
Optionally, the determining a set of admissible values for the depth values based on the set of admissible values for the disparity values includes:

determining the set of admissible values for the depth values based on the formula z = f · B/d and the set of admissible values for the disparity values;

wherein f is the focal length of the shooting component that captures the first two-dimensional image, d is a disparity value in the set of admissible values for the disparity values, and B is the baseline matrix between the two shooting components.
Optionally, the obtaining a first two-dimensional face image and a second two-dimensional face image of the target user includes:
acquiring a first two-dimensional image of a target user shot by a first infrared shooting component, a second two-dimensional image of the target user shot by a second infrared shooting component and a third two-dimensional image of the target user shot by a visible light shooting component;
determining the position information of the face key points in the third two-dimensional image, and determining the position information of the face key points in the first two-dimensional image and the second two-dimensional image based on the position information of the face key points in the third two-dimensional image;
and determining a first two-dimensional face image in the first two-dimensional image based on the position information of the face key points in the first two-dimensional image, and determining a second two-dimensional face image in the second two-dimensional image based on the position information of the face key points in the second two-dimensional image.
Optionally, the obtaining a first two-dimensional face image and a second two-dimensional face image of a target user, and determining an initial face depth map of the target user based on the first two-dimensional face image and the second two-dimensional face image include:
acquiring a first two-dimensional face image and a second two-dimensional face image of a target user, and determining an initial face depth map of the target user based on a first reduced-resolution face image corresponding to the first two-dimensional face image and a second reduced-resolution face image corresponding to the second two-dimensional face image;
the determining normal information of face pixel points in the first two-dimensional face image based on the initial face depth map comprises:
determining normal information of face pixel points in the first reduced-resolution face image based on the initial face depth map;
and carrying out up-sampling interpolation on the normal information of the face pixel points in the first reduced-resolution face image to obtain the normal information of the face pixel points in the first two-dimensional face image.
In a second aspect, there is provided an apparatus for determining a face depth map, the apparatus comprising:
the system comprises an acquisition module, a display module and a display module, wherein the acquisition module is used for acquiring a first two-dimensional face image and a second two-dimensional face image of a target user and determining an initial face depth map of the target user based on the first two-dimensional face image and the second two-dimensional face image;
the calculation module is used for determining normal information of face pixel points in the first two-dimensional face image based on the initial face depth map;
the determining module is used for determining a target matching position with the highest similarity between the corresponding pixel value and the pixel value of the facial pixel point in the second two-dimensional facial image according to the normal information of the facial pixel point for each facial pixel point in the first two-dimensional facial image, and determining a depth value corresponding to the facial pixel point according to the target matching position;
and the correcting module is used for determining a corrected face depth map of the target user based on the depth value corresponding to each face pixel point in the first two-dimensional face image.
Optionally, the determining module is configured to:
determining a functional relationship between the depth value of the facial pixel point and a matching position according to the normal information of the facial pixel point, determining a set of admissible values of the matching position based on the functional relationship, determining, in the set of admissible values, a target matching position whose corresponding pixel value has the highest similarity with the pixel value of the facial pixel point, and determining the depth value corresponding to the target matching position in the functional relationship, wherein the matching position is the position, in the second two-dimensional face image, of the face pixel point corresponding to the facial pixel point;
Optionally, the determining module is configured to:
determine, according to the normal information of the facial pixel point, that the functional relationship between the depth value of the facial pixel point and the matching position is

P_r = K_r (R_lr - t_lr · n^T / z) · K_l^{-1} · P_l

wherein P_l is the position information of any facial pixel point in the first two-dimensional face image, P_r is the position information of the facial pixel point in the second two-dimensional face image corresponding to that facial pixel point, K_l is the internal reference matrix of the first image shooting component that captures the first two-dimensional face image, K_r is the internal reference matrix of the second image shooting component that captures the second two-dimensional face image, R_lr is the rotation parameter matrix from the first image shooting component to the second image shooting component, t_lr is the translation parameter matrix from the first image shooting component to the second image shooting component, n is the normal information of that facial pixel point, the normal information being a normal vector, and z is the depth value of that facial pixel point.
Optionally, the determining module is configured to:
determining a set of admissible values for disparity values based on resolutions of the first and second two-dimensional facial images;
determining a set of admissible values for the depth values based on the set of admissible values for the disparity values;
based on the functional relationship and a set of admissible values for the depth values, a set of admissible values for the matching locations is determined.
Optionally, the determining module is configured to:
determining the set of admissible values for the depth values based on the formula z = f · B/d and the set of admissible values for the disparity values;

wherein f is the focal length of the shooting component that captures the first two-dimensional image, d is a disparity value in the set of admissible values for the disparity values, and B is the baseline matrix between the two shooting components.
Optionally, the obtaining module is configured to:
acquiring a first two-dimensional image of a target user shot by a first infrared shooting component, a second two-dimensional image of the target user shot by a second infrared shooting component and a third two-dimensional image of the target user shot by a visible light shooting component;
determining position information of the face key points in the third two-dimensional image, and determining the position information of the face key points in the first two-dimensional image and the second two-dimensional image based on the position information of the face key points in the third two-dimensional image;
and determining a first two-dimensional face image in the first two-dimensional image based on the position information of the face key points in the first two-dimensional image, and determining a second two-dimensional face image in the second two-dimensional image based on the position information of the face key points in the second two-dimensional image.
Optionally, the obtaining module is configured to:
acquiring a first two-dimensional face image and a second two-dimensional face image of a target user, and determining an initial face depth map of the target user based on a first reduced-resolution face image corresponding to the first two-dimensional face image and a second reduced-resolution face image corresponding to the second two-dimensional face image;
the calculation module is configured to:
determining normal information of face pixel points in the first reduced-resolution face image based on the initial face depth map;
and performing up-sampling interpolation on the normal information of the face pixel points in the first reduced-resolution face image to obtain the normal information of the face pixel points in the first two-dimensional face image.
In a third aspect, a computer device is provided, which includes a processor, a memory, a display, and an image collector, where the memory stores at least one instruction, and the at least one instruction is loaded and executed by the processor to implement the method for determining a face depth map according to the first aspect.
In a fourth aspect, there is provided a computer-readable storage medium having stored therein at least one instruction, which is loaded and executed by a processor to implement the method of determining a face depth map as described in the first aspect above.
The technical solutions provided in the embodiments of the present application bring at least the following beneficial effects:
in the embodiment of the application, an initial face depth map of a target user is determined based on a first two-dimensional face image and a second two-dimensional face image of the target user, and normal information of face pixel points in the first two-dimensional face image is obtained according to the initial face depth map. Although the initial face depth map has errors, the relative relationship between the depth values is relatively accurate, and therefore, the normal information determined by the relative relationship is approximately considered to be accurate. And determining a target matching position with the highest similarity between the corresponding pixel value and the pixel value of the face pixel point in the second two-dimensional face image according to the determined normal information, and determining the depth value corresponding to the face pixel point according to the target matching position. Therefore, through pixel value matching, for the facial pixel points in the first two-dimensional facial image, the matched facial pixel points can be found in the second two-dimensional facial image more accurately, and correspondingly, the obtained depth value is more accurate, so that the accuracy of the facial depth map can be improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings required to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the description below are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings without creative efforts.
Fig. 1 is a flowchart of a method for determining a face depth map according to an embodiment of the present disclosure;
fig. 2 is a schematic structural diagram of an apparatus for determining a face depth map according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of an apparatus for determining a face depth map according to an embodiment of the present application;
FIG. 4 is a schematic structural diagram of a computer device according to an embodiment of the present application;
fig. 5 is a flowchart of a method for determining a face depth map according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
The embodiment of the application provides a method for determining a face depth map, which may be implemented by a terminal. The terminal may have a plurality of image capturing components, and an image capturing component may be an infrared capturing component, a visible light capturing component, or the like. The terminal may include at least two image capturing components, and the following method flow is performed based on the two-dimensional images captured by the two image capturing components to obtain a face depth map. The terminal may also include more image capturing components; for example, if the terminal includes 4 image capturing components, the images captured by each pair of image capturing components may be subjected to the following method flow to obtain one face depth map, each obtained face depth map is then converted into a face point cloud through the internal reference matrix, and the two face point clouds are matched based on the ICP (Iterative Closest Point) algorithm. The image capturing components in this embodiment may also be provided independently outside the terminal.
The above terminal may be deployed at an access control point. When a person walks toward the access control point, the terminal can photograph the person's face and, based on the captured face image, perform identity verification on the person.
As shown in fig. 1, the processing flow of the method may include the following steps:
in step 101, a first two-dimensional face image and a second two-dimensional face image of a target user are obtained, and an initial face depth map of the target user is determined based on the first two-dimensional face image and the second two-dimensional face image.
The first two-dimensional face image and the second two-dimensional face image refer to images of a face area of a target user.
In implementation, when the target user is located within the effective face recognition range of the terminal, the terminal captures a first two-dimensional image through the first image capturing component and a second two-dimensional image through the second image capturing component, determines the face region in the first two-dimensional image to obtain a first two-dimensional face image, and determines the face region in the second two-dimensional image to obtain a second two-dimensional face image. Then, an initial face depth map of the target user may be obtained using a stereo matching algorithm, which may be the SGM (Semi-Global Matching) binocular stereo matching algorithm. The corresponding processing may be as follows: first, a pixel block with a face pixel point as its central pixel point is determined based on the position of the face pixel point in the first two-dimensional face image, where a commonly used pixel block size is 5 × 5. Then the 5 × 5 pixel block with the highest similarity to this pixel block is determined in the second two-dimensional face image, the difference between the position information of the central pixel points of the two pixel blocks is computed to obtain the disparity value of the face pixel point, the depth value z corresponding to the face pixel point is obtained from the disparity value using the formula z = f · B/d, the depth values corresponding to the other pixel points in the first two-dimensional face image are obtained in the same way, and finally the initial face depth map of the target user is obtained.
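As a concrete illustration, here is a minimal sketch of computing such an initial depth map with OpenCV's semi-global matcher, assuming two rectified 8-bit grayscale face images and a scalar baseline; all parameter values are illustrative, not taken from the patent:

```python
import cv2
import numpy as np

def initial_depth_map(left_face, right_face, f, B, max_disp=64):
    """Compute a coarse initial depth map from two rectified face images
    using semi-global block matching, then convert disparity to depth."""
    sgbm = cv2.StereoSGBM_create(
        minDisparity=0,
        numDisparities=max_disp,   # must be a multiple of 16
        blockSize=5,               # 5x5 matching window, as in the text
        P1=8 * 5 * 5,              # smoothness penalty, small disparity step
        P2=32 * 5 * 5,             # smoothness penalty, large disparity step
    )
    # StereoSGBM returns fixed-point disparities scaled by 16
    disp = sgbm.compute(left_face, right_face).astype(np.float32) / 16.0
    depth = np.zeros_like(disp)
    valid = disp > 0
    depth[valid] = f * B / disp[valid]  # z = f * B / d
    return depth
```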
Optionally, when the Infrared (IR) shooting component is used to determine the face depth map, a visible light shooting component may be used to assist in determining the face region in the Infrared two-dimensional image, so as to obtain the two-dimensional face image of the target user, and accordingly, the processing in step 101 may be as follows: acquiring a first two-dimensional image of a target user shot by a first infrared shooting component, a second two-dimensional image of the target user shot by a second infrared shooting component and a third two-dimensional image of the target user shot by a visible light shooting component; determining position information of the face key points in the third two-dimensional image, and determining the position information of the face key points in the first two-dimensional image and the second two-dimensional image based on the position information of the face key points in the third two-dimensional image; a first two-dimensional face image is determined in the first two-dimensional image based on position information of face key points in the first two-dimensional image, and a second two-dimensional face image is determined in the second two-dimensional image based on position information of face key points in the second two-dimensional image.
The visible light imaging component may be referred to as an RGB (Red, Green, Blue) imaging component. In addition, when infrared shooting components are arranged on the terminal, an infrared speckle device may be arranged beside each infrared shooting component to assist the infrared shooting components in imaging.
In practice, before the target user is photographed, the internal and external parameters of the shooting components may be calibrated in advance using a calibration algorithm. Calibration yields the internal reference matrices K_l, K_r and K_m, and the external parameter matrices R_lr, t_lr, B, R_ml and t_ml, where K_l is the internal reference matrix of the first infrared shooting component, K_r is the internal reference matrix of the second infrared shooting component, K_m is the internal reference matrix of the visible light shooting component, R_lr is the rotation parameter matrix from the first infrared shooting component to the second infrared shooting component, t_lr is the translation parameter matrix from the first infrared shooting component to the second infrared shooting component, R_ml is the rotation parameter matrix from the visible light shooting component to the first infrared shooting component, t_ml is the translation parameter matrix from the visible light shooting component to the first infrared shooting component, and B is the baseline matrix between the first infrared shooting component and the second infrared shooting component. The calibration algorithm may be the Zhang Zhengyou calibration algorithm, or other calibration methods may be used, which is not limited in the embodiments of the present application.
The first infrared shooting component, the second infrared shooting component and the visible light shooting component may be placed at the positions shown in fig. 2. Upon receiving an image shooting instruction, the first infrared shooting component, the second infrared shooting component and the visible light shooting component are triggered synchronously to photograph the same target user, obtaining a group of two-dimensional images at different shooting viewing angles, where the group includes a first two-dimensional image captured by the first infrared shooting component, a second two-dimensional image captured by the second infrared shooting component and a third two-dimensional image captured by the visible light shooting component; the first and second two-dimensional images are IR two-dimensional images, and the third two-dimensional image is an RGB two-dimensional image. Epipolar line correction is performed on the captured first and second two-dimensional images based on the internal and external reference matrices obtained by calibration; after epipolar line correction, corresponding pixel points in the two images can be approximately considered to lie on the same scanline. The epipolar line correction may be performed using the Bouguet rectification algorithm, or other correction algorithms may be used, which is not limited in the embodiments of the present application.
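A minimal sketch of this rectification step with OpenCV's implementation of Bouguet's method, assuming the calibrated parameters above are available; the distortion coefficient arrays dist_l and dist_r are additional assumptions, since the text does not mention lens distortion:

```python
import cv2

def rectify_pair(img_l, img_r, K_l, dist_l, K_r, dist_r, R_lr, t_lr):
    """Epipolar rectification of the two infrared images so that corresponding
    pixel points lie (approximately) on the same scanline."""
    size = img_l.shape[1], img_l.shape[0]  # (width, height)
    R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(
        K_l, dist_l, K_r, dist_r, size, R_lr, t_lr)
    map_l = cv2.initUndistortRectifyMap(K_l, dist_l, R1, P1, size, cv2.CV_32FC1)
    map_r = cv2.initUndistortRectifyMap(K_r, dist_r, R2, P2, size, cv2.CV_32FC1)
    rect_l = cv2.remap(img_l, *map_l, cv2.INTER_LINEAR)
    rect_r = cv2.remap(img_r, *map_r, cv2.INTER_LINEAR)
    return rect_l, rect_r
```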
Face key point detection is performed on the corrected third two-dimensional image using a face key point detection algorithm, obtaining the position information P_m of the face key points of the third two-dimensional image. Then, according to the position information of the face key points of the third two-dimensional image, the internal and external reference matrices of the first infrared shooting component and the internal and external reference matrices of the visible light shooting component, the position information P_l of the face key points of the first two-dimensional image is determined. The specific calculation method is as follows:

P_l = K_l (R_ml · K_m^{-1} · Z_m · P_m + t_ml) / Z_l

where Z_m is the average depth value of the third two-dimensional image estimated in advance by a technician, and Z_l is the average depth value of the first two-dimensional image estimated in advance. Since the first infrared shooting component and the visible light shooting component can be approximately regarded as being at the same distance from the face of the target user, Z_m = Z_l, and the formula can be written as:

P_l = K_l (R_ml · K_m^{-1} · P_m + t_ml / Z_l)
Since Z_m and Z_l in the above formula are estimated values, the position information of the face key points of the first two-dimensional image obtained from the average depth value is inaccurate. Therefore, alternate iteration is performed on the average depth value and the position information of the face key points of the first two-dimensional image. The iteration processing flow is shown in fig. 5 and may be as follows:
step 1011, a depth map of the first two-dimensional image is obtained by using a stereo matching algorithm.
It should be noted that the first two-dimensional image includes a face region and a non-face region, and the depth map also includes a face region and a non-face region.
Step 1012, substituting the pre-estimated average depth value of the first two-dimensional face image and the position information of the face key points of the third two-dimensional image into the formula P_l = K_l (R_ml · K_m^{-1} · P_m + t_ml / Z_l), obtaining the position information of each face key point of the first two-dimensional image, and calculating the minimum circumscribed rectangle of the face key points based on the obtained position information of all the face key points of the first two-dimensional image.

Step 1013, calculating the average depth value of the pixel points within the minimum circumscribed rectangle of the face key points based on the depth map of the first two-dimensional image.

Step 1014, substituting the average depth value obtained by the statistical calculation and the position information of the face key points of the third two-dimensional image into the formula P_l = K_l (R_ml · K_m^{-1} · P_m + t_ml / Z_l), obtaining the position information of the face key points of the first two-dimensional image, and calculating the minimum circumscribed rectangle of the face key points based on this position information.
Step 1015, calculating a variation of the position information of the facial key point of the first two-dimensional image obtained this time compared with the position information of the facial key point of the first two-dimensional image obtained last time.
Step 1016, if the variation is smaller than a preset threshold, executing step 1017; otherwise, going to step 1013.

Step 1017, outputting the minimum circumscribed rectangle of the face key points obtained by the latest calculation.
It should be noted that, as a determination condition in the iterative processing flow, whether a variation of the average depth value obtained by the current statistical calculation compared to the average depth obtained by the previous statistical calculation is smaller than a preset threshold may be determined, and if yes, the minimum circumscribed rectangle of the face key points calculated based on the average depth value obtained by the current statistical calculation is output.
In addition, if the iteration processing flow reaches the preset maximum iteration times before the first two-dimensional face image corresponding to the first two-dimensional image is determined, the iteration is stopped, and the minimum circumscribed rectangle of the face key point obtained by the last calculation is output.
Then, according to the obtained position information and average depth value of the face key points of the first two-dimensional image, the position information P_r of the face key points of the second two-dimensional image is obtained using the formula P_r = K_r (R_lr · K_l^{-1} · P_l + t_lr / Z_l), and the minimum circumscribed rectangle of the face key points is then calculated from their position information.
Then, the union of the pixel row range and pixel column range of the minimum circumscribed rectangle of the face key points of the first two-dimensional image and those of the second two-dimensional image is taken as the pixel row range and pixel column range of the final minimum circumscribed rectangle. The image within the final minimum circumscribed rectangle of the face key points of the first two-dimensional image is determined as the first two-dimensional face image, and the image within the final minimum circumscribed rectangle of the face key points of the second two-dimensional image is determined as the second two-dimensional face image. A code sketch of the iterative procedure of steps 1011 to 1017 is given below.
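A compact sketch of the alternating iteration in steps 1011 to 1017, assuming the depth map of the first two-dimensional image and the homogeneous keypoints P_m (a 3×N array, rows [u, v, 1]) are available, with t_ml a 3×1 translation vector; all function and variable names are illustrative:

```python
import numpy as np

def project_keypoints(P_m, K_l, K_m, R_ml, t_ml, Z_l):
    """P_l = K_l (R_ml K_m^{-1} P_m + t_ml / Z_l), then normalize the
    homogeneous coordinates."""
    P_l = K_l @ (R_ml @ np.linalg.inv(K_m) @ P_m + t_ml / Z_l)
    return P_l / P_l[2]

def locate_face_rect(P_m, depth_l, K_l, K_m, R_ml, t_ml, z0,
                     thresh=1.0, max_iter=10):
    """Alternately refine the average depth Z_l and the projected keypoint
    positions until the keypoints move less than `thresh` pixels
    (steps 1012 to 1016); assumes the rectangle contains valid depths."""
    Z_l, prev = z0, None
    for _ in range(max_iter):
        P_l = project_keypoints(P_m, K_l, K_m, R_ml, t_ml, Z_l)
        x0, y0 = P_l[0].min(), P_l[1].min()  # minimum circumscribed
        x1, y1 = P_l[0].max(), P_l[1].max()  # rectangle of the keypoints
        if prev is not None and np.abs(P_l[:2] - prev).max() < thresh:
            break                            # step 1016 -> step 1017
        prev = P_l[:2].copy()
        roi = depth_l[int(y0):int(y1) + 1, int(x0):int(x1) + 1]
        Z_l = roi[roi > 0].mean()            # step 1013: new average depth
    return (x0, y0, x1, y1), Z_l             # step 1017: output rectangle
```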
In step 102, normal information of face pixels in the first two-dimensional face image is determined based on the initial face depth map.
The normal information may be a normal vector of the face pixel point in the three-dimensional space.
In implementation, a normal vector is calculated for each face pixel point in the obtained initial face depth map. That is, with the face pixel point P_l as the central pixel point, its four neighboring pixel points above, below, to the left and to the right, P_up, P_down, P_left and P_right, are combined using n = (P_left - P_right) × (P_up - P_down), where the symbol × denotes the cross product between vectors, so the normal vector n of the central pixel point P_l can be obtained. The normal vector of each face pixel point in the first two-dimensional face image is solved in this way. Finally, the normal vectors of all face pixel points in the first two-dimensional face image are obtained, and the map formed by the normal vectors of the face pixel points may be called a normal map.
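A minimal sketch of this computation, under the interpretation that P_up, P_down, P_left and P_right are the 3D points obtained by back-projecting the neighboring depth values through the internal reference matrix (the text does not spell this out, so this is an assumption):

```python
import numpy as np

def normal_map(depth, K):
    """Back-project the depth map to 3D points and estimate a per-pixel
    normal as n = (P_left - P_right) x (P_up - P_down), unit-normalized."""
    h, w = depth.shape
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    # 3D point for every pixel: P = z * K^{-1} [u, v, 1]^T
    P = np.dstack(((u - cx) * depth / fx, (v - cy) * depth / fy, depth))
    n = np.zeros_like(P)
    n[1:-1, 1:-1] = np.cross(
        P[1:-1, :-2] - P[1:-1, 2:],   # P_left - P_right
        P[:-2, 1:-1] - P[2:, 1:-1])   # P_up   - P_down
    norm = np.linalg.norm(n, axis=2, keepdims=True)
    return np.divide(n, norm, out=np.zeros_like(n), where=norm > 0)
```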
Alternatively, in order to reduce the computational complexity, the initial face depth map may be determined using the reduced resolution face images corresponding to the first two-dimensional face image and the second two-dimensional face image, and accordingly, the processing in step 101 may be as follows: the method comprises the steps of obtaining a first two-dimensional face image and a second two-dimensional face image of a target user, and determining an initial face depth map of the target user based on a first reduced-resolution face image corresponding to the first two-dimensional face image and a second reduced-resolution face image corresponding to the second two-dimensional face image.
In implementation, resolution reduction processing may be performed on the captured first and second two-dimensional images; the reduced-resolution first and second two-dimensional images are then subjected to the above processing for obtaining the two-dimensional face images, yielding the first reduced-resolution face image and the second reduced-resolution face image, and a stereo matching algorithm is applied to the first and second reduced-resolution face images to obtain the initial face depth map of the target user.
Accordingly, for the case of reduced resolution, the processing in step 102 may be as follows: determining normal information of face pixel points in the first reduced-resolution face image based on the initial face depth map; and carrying out up-sampling interpolation on the normal information of the face pixel points in the first reduced-resolution face image to obtain the normal information of the face pixel points in the first two-dimensional face image.
In implementation, the normal information of the face pixel points in the first reduced-resolution face image is obtained using the method for obtaining the normal information of face pixel points described above. Then, to obtain the normal information of the face pixel points in the first two-dimensional face image at the original resolution, up-sampling interpolation is required. That is, it is determined how many face pixel points between two face pixel points with known normal information require normal information, and the average of the normal information of those two face pixel points is used as the normal information of the intermediate face pixel points. In this way, the normal information of each face pixel point in the first two-dimensional face image at the original resolution can be obtained.
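A short sketch of such an up-sampling step, here using bilinear interpolation of the low-resolution normal map followed by re-normalization (a common choice; the text itself only requires averaging between known neighbors):

```python
import cv2
import numpy as np

def upsample_normals(normals_lowres, out_w, out_h):
    """Interpolate a low-resolution normal map up to the original resolution
    and renormalize, since interpolated normals lose unit length."""
    up = cv2.resize(normals_lowres, (out_w, out_h),
                    interpolation=cv2.INTER_LINEAR)
    norm = np.linalg.norm(up, axis=2, keepdims=True)
    return np.divide(up, norm, out=np.zeros_like(up), where=norm > 0)
```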
In step 103, for each face pixel point in the first two-dimensional face image, a functional relationship between the depth value of the face pixel point and the matching position is determined according to the normal information of the face pixel point. A set of admissible values of the matching position is determined based on the functional relationship, a target matching position whose corresponding pixel value has the highest similarity with the pixel value of the face pixel point is determined in the set of admissible values, and the depth value corresponding to the target matching position in the functional relationship is determined as the depth value corresponding to the face pixel point.

The matching position is the position, in the second two-dimensional face image, of the face pixel point corresponding to the face pixel point in the first two-dimensional face image. According to the normal information of the face pixel point, the determined functional relationship between the depth value of the face pixel point and the matching position may be: P_r = K_r (R_lr - t_lr · n^T / z) · K_l^{-1} · P_l, where P_l is the position information of any face pixel point in the first two-dimensional face image, P_r is the position information of the corresponding face pixel point in the second two-dimensional face image, n is the normal information of that face pixel point, and z is its depth value. The formula for determining the set of admissible values of the depth value from the set of admissible values of the disparity value may be z = f · B/d, where f is the focal length of the shooting component that captured the first two-dimensional image, d is a disparity value in the set of admissible values of the disparity value, and B is the baseline matrix between the two shooting components. The set of admissible values of the disparity value is the set of all possible values of the disparity value, the set of admissible values of the depth value is the set of all possible values of the depth value corresponding to the face pixel point, and the set of admissible values of the matching position is the set of all possible values of the matching position of the face pixel point.
In implementation, a 5 × 5 pixel block is determined with any face pixel point in the first two-dimensional face image as its central pixel point, where the position information of this face pixel point is P_l. Based on the plane homography principle, the functional relationship between the depth value of the face pixel point and the position information of the corresponding face pixel point in the second two-dimensional face image can be obtained: P_r = K_r (R_lr - t_lr · n^T / z) · K_l^{-1} · P_l. From the resolutions of the first and second two-dimensional face images, the set of admissible values of the disparity value can be determined. Since epipolar line correction has been performed on the first and second two-dimensional images, the first and second two-dimensional face images obtained from them are also epipolar-corrected, and the disparity value may be the difference between the pixel column number of a face pixel point in the first two-dimensional face image and the pixel column number of the corresponding face pixel point in the second two-dimensional face image. For example, for an image with a resolution of 60 × 50, the set of admissible values of the disparity value may be (0, 1, 2, …, 60). The disparity values in the above set of admissible values may be referred to as candidate matching disparity values.
Each candidate matching disparity value is substituted into the formula z = f · B/d to obtain the depth value z corresponding to that candidate matching disparity value, thereby obtaining the set of admissible values of the depth value. Each depth value z in this set is then substituted into the formula P_r = K_r (R_lr - t_lr · n^T / z) · K_l^{-1} · P_l to obtain the set of admissible values of the matching position corresponding to the set of admissible values of the depth value. Then, each face pixel point in the set of admissible values of the matching position is interpolated to obtain the corresponding gray value, and the face pixel point with the highest matching degree with the face pixel point in the first two-dimensional face image can be determined in the second two-dimensional face image using the gray-level consistency similarity measurement principle. Further, the depth value used to obtain the position information of the face pixel point with the highest matching degree may be taken as the depth value corresponding to the face pixel point in the first two-dimensional face image.
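A minimal sketch of this normal-guided search for one face pixel point, assuming the calibrated matrices, the normal map from step 102, and integer pixel coordinates; nearest-neighbor sampling and a 5 × 5 SAD cost stand in here for the interpolation and gray-consistency measurement, and all names are illustrative:

```python
import numpy as np

def refine_depth(p_l, n, K_l, K_r, R_lr, t_lr, f, B, left, right, disparities):
    """For a left-image pixel p_l = (u, v) with unit normal n, try every
    candidate matching disparity, map the pixel through the induced homography
    H(z) = K_r (R_lr - t_lr n^T / z) K_l^{-1}, and keep the depth whose
    matching position has the most similar pixel values."""
    u, v = p_l
    P_l = np.array([u, v, 1.0])
    best_z, best_cost = None, np.inf
    for d in disparities:
        if d <= 0:
            continue
        z = f * B / d                              # z = f * B / d
        H = K_r @ (R_lr - np.outer(t_lr, n) / z) @ np.linalg.inv(K_l)
        P_r = H @ P_l
        ur, vr = P_r[0] / P_r[2], P_r[1] / P_r[2]  # candidate matching position
        cost = sad_5x5(left, u, v, right, ur, vr)  # gray-consistency cost
        if cost < best_cost:
            best_cost, best_z = cost, z
    return best_z

def sad_5x5(left, u, v, right, ur, vr):
    """SAD between the 5x5 block around (u, v) in the left image and the 5x5
    block around the rounded matching position in the right image."""
    iu, iv = int(round(ur)), int(round(vr))  # nearest sample, for brevity
    lb = left[v - 2:v + 3, u - 2:u + 3].astype(np.float32)
    rb = right[iv - 2:iv + 3, iu - 2:iu + 3].astype(np.float32)
    if lb.shape != (5, 5) or rb.shape != (5, 5):
        return np.inf                        # matching position out of bounds
    return np.abs(lb - rb).sum()
```

Because the homography tilts with the normal n, the matching position follows the slanted facial surface instead of assuming a frontal square block, which is exactly what corrects the error described in the Background section.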
In step 104, a corrected face depth map of the target user is determined based on the depth value corresponding to each face pixel point in the first two-dimensional face image.
The corrected face depth map of the target user may include a depth value corresponding to each face pixel point in the first two-dimensional face image.
Based on the same technical concept, an embodiment of the present application further provides an apparatus for determining a face depth map, where the apparatus may be a terminal in the foregoing embodiment, as shown in fig. 3, and the apparatus includes: an acquisition module 310, a calculation module 320, a determination module 330, and a correction module 340.
An obtaining module 310, configured to obtain a first two-dimensional face image and a second two-dimensional face image of a target user, and determine an initial face depth map of the target user based on the first two-dimensional face image and the second two-dimensional face image;
a calculating module 320, configured to determine normal information of a face pixel point in the first two-dimensional face image based on the initial face depth map;
a determining module 330, configured to determine, for each face pixel point in the first two-dimensional face image, a target matching position where a corresponding pixel value has a highest similarity with a pixel value of the face pixel point in the second two-dimensional face image according to normal information of the face pixel point, and determine, according to the target matching position, a depth value corresponding to the face pixel point;
a correction module 340 for determining a corrected face depth map of the target user based on the depth value corresponding to each face pixel point in the first two-dimensional face image.
Optionally, the determining module 330 is configured to:
determining a functional relationship between the depth value of the facial pixel point and a matching position according to the normal information of the facial pixel point, determining a set of admissible values of the matching position based on the functional relationship, determining, in the set of admissible values, a target matching position whose corresponding pixel value has the highest similarity with the pixel value of the facial pixel point, and determining the depth value corresponding to the target matching position in the functional relationship, wherein the matching position is the position, in the second two-dimensional face image, of the face pixel point corresponding to the facial pixel point;
Optionally, the determining module 330 is configured to:
determine, according to the normal information of the facial pixel point, that the functional relationship between the depth value of the facial pixel point and the matching position is

P_r = K_r (R_lr - t_lr · n^T / z) · K_l^{-1} · P_l

wherein P_l is the position information of any facial pixel point in the first two-dimensional face image, P_r is the position information of the facial pixel point in the second two-dimensional face image corresponding to that facial pixel point, K_l is the internal reference matrix of the first image shooting component that captures the first two-dimensional face image, K_r is the internal reference matrix of the second image shooting component that captures the second two-dimensional face image, R_lr is the rotation parameter matrix from the first image shooting component to the second image shooting component, t_lr is the translation parameter matrix from the first image shooting component to the second image shooting component, n is the normal information of that facial pixel point, the normal information being a normal vector, and z is the depth value of that facial pixel point.
Optionally, the determining module 330 is configured to:
determining a set of admissible values for disparity values based on resolutions of the first and second two-dimensional facial images;
determining a set of admissible values for the depth values based on the set of admissible values for the disparity values;
determining a set of admissible values for the matching location based on the functional relationship and the set of admissible values for the depth values.
Optionally, the determining module 330 is configured to:
determining the set of admissible values for the depth values based on the formula z = f · B/d and the set of admissible values for the disparity values;

wherein f is the focal length of the shooting component that captures the first two-dimensional image, d is a disparity value in the set of admissible values for the disparity values, and B is the baseline matrix between the two shooting components.
Optionally, the obtaining module 310 is configured to:
acquiring a first two-dimensional image of a target user shot by a first infrared shooting component, a second two-dimensional image of the target user shot by a second infrared shooting component and a third two-dimensional image of the target user shot by a visible light shooting component;
determining position information of the face key points in the third two-dimensional image, and determining the position information of the face key points in the first two-dimensional image and the second two-dimensional image based on the position information of the face key points in the third two-dimensional image;
determining a first two-dimensional face image in the first two-dimensional image based on the position information of the face key points in the first two-dimensional image, and determining a second two-dimensional face image in the second two-dimensional image based on the position information of the face key points in the second two-dimensional image.
Optionally, the obtaining module 310 is configured to:
acquiring a first two-dimensional face image and a second two-dimensional face image of a target user, and determining an initial face depth map of the target user based on a first reduced-resolution face image corresponding to the first two-dimensional face image and a second reduced-resolution face image corresponding to the second two-dimensional face image;
the calculating module 320 is configured to:
determining normal information of face pixel points in the first reduced-resolution face image based on the initial face depth map;
and performing up-sampling interpolation on the normal information of the face pixel points in the first reduced-resolution face image to obtain the normal information of the face pixel points in the first two-dimensional face image.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
It should be noted that: the apparatus for determining a face depth map provided in the above embodiment is only illustrated by the above division of each functional module when determining the face depth map, and in practical applications, the above function allocation may be performed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules, so as to perform all or part of the above described functions. In addition, the apparatus for determining a face depth map and the method for determining a face depth map provided in the foregoing embodiments belong to the same concept, and specific implementation processes thereof are described in detail in the method embodiments and are not described herein again.
In an exemplary embodiment, a computer-readable storage medium is further provided, in which at least one instruction is stored, and the at least one instruction is loaded and executed by a processor to implement the method for determining a face depth map in the above embodiments. For example, the computer-readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
Fig. 4 is a schematic structural diagram of a computer device 400 according to an embodiment of the present disclosure, where the computer device 400 may have a relatively large difference due to different configurations or performances, and may include one or more processors 401, one or more memories 402, one or more displays 403, and a plurality of image capturing components 404. Wherein the memory 402 has at least one instruction stored therein, which is loaded and executed by the processor 401 to implement the above-mentioned method for determining a face depth map.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (14)

1. A method of determining a facial depth map, the method comprising:
acquiring a first two-dimensional face image and a second two-dimensional face image of a target user, and determining an initial face depth map of the target user based on the first two-dimensional face image and the second two-dimensional face image;
determining normal information of face pixel points in the first two-dimensional face image based on the initial face depth map;
for each face pixel point in the first two-dimensional face image, according to the normal information of the face pixel point, determining a target matching position with the highest similarity between the corresponding pixel value and the pixel value of the face pixel point in the second two-dimensional face image, and according to the target matching position, determining a depth value corresponding to the face pixel point;
determining a corrected facial depth map for the target user based on the depth value corresponding to each facial pixel point in the first two-dimensional facial image.
2. The method according to claim 1, wherein the determining, according to the normal information of the facial pixel point, a target matching position in the second two-dimensional facial image where a corresponding pixel value has a highest similarity with the pixel value of the facial pixel point comprises:
determining a functional relationship between the depth value of the facial pixel point and the matching position according to the normal information of the facial pixel point, determining a set of admissible values of the matching position based on the functional relationship, and determining, in the set of admissible values, a target matching position whose corresponding pixel value has the highest similarity with the pixel value of the facial pixel point, wherein the matching position is the position, in the second two-dimensional face image, of the face pixel point corresponding to the facial pixel point;
the determining the depth value corresponding to the face pixel point according to the target matching position comprises:
determining, from the functional relation, the depth value corresponding to the target matching position.
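Claim 2 leaves the similarity measure unspecified; one plausible instantiation (our choice, purely for illustration) is zero-mean normalized cross-correlation over small patches, searched over the admissible matching positions:

```python
import numpy as np

def best_match(patch_l, candidates, image_r, half=3):
    """Return the candidate position in image_r whose surrounding patch is most
    similar (ZNCC) to patch_l, a (2*half+1) x (2*half+1) patch from the first image."""
    def zncc(a, b):
        a = a - a.mean()
        b = b - b.mean()
        return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    best_pos, best_score = None, -np.inf
    for u, v in candidates:                        # admissible matching positions
        u, v = int(round(u)), int(round(v))
        patch_r = image_r[v - half:v + half + 1, u - half:u + half + 1]
        if patch_r.shape != patch_l.shape:         # skip candidates too close to the border
            continue
        score = zncc(patch_l.astype(np.float64), patch_r.astype(np.float64))
        if score > best_score:
            best_pos, best_score = (u, v), score
    return best_pos
```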
3. The method of claim 2, wherein determining a functional relationship between the depth value and the matching position of the facial pixel according to the normal information of the facial pixel comprises:
according to the normal information of the facial pixel point, determining the functional relation between the depth value of the facial pixel point and the matching position as

$P_r = K_r \left( R_{lr} - \frac{t_{lr} \cdot n^T}{z} \right) K_l^{-1} P_l$

wherein $P_l$ is the position information of any facial pixel point in the first two-dimensional face image, $P_r$ is the position information of the pixel point in the second two-dimensional face image corresponding to that facial pixel point, $K_l$ is the internal reference matrix of the first image capturing component used to capture the first two-dimensional face image, $K_r$ is the internal reference matrix of the second image capturing component used to capture the second two-dimensional face image, $R_{lr}$ is the rotation parameter matrix from the first image capturing component to the second image capturing component, $t_{lr}$ is the translation parameter matrix from the first image capturing component to the second image capturing component, $n$ is the normal information of the facial pixel point (a normal vector), and $z$ is the depth value of the facial pixel point.
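As a minimal NumPy sketch of this relation (function and variable names are ours): for a fixed pixel, sweeping z over its admissible depth values slides the predicted position along a one-dimensional locus in the second image, which is what reduces the match search to a depth search.

```python
import numpy as np

def predict_match(P_l, z, n, K_l, K_r, R_lr, t_lr):
    """Predicted matching position for homogeneous pixel P_l (shape (3,)) of the
    first image with depth z and unit normal n, per
    P_r = K_r (R_lr - t_lr n^T / z) K_l^{-1} P_l."""
    H = K_r @ (R_lr - np.outer(t_lr, n) / z) @ np.linalg.inv(K_l)
    P_r = H @ P_l
    return P_r[:2] / P_r[2]        # back to inhomogeneous pixel coordinates
```

Evaluating predict_match once per admissible depth value yields the set of admissible matching positions searched in claim 2.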
4. The method according to claim 2 or 3, wherein the determining a set of admissible values for the matching position based on the functional relation comprises:
determining a set of admissible values for disparity values based on resolutions of the first and second two-dimensional face images;
determining a set of admissible values for the depth values based on the set of admissible values for the disparity values;
determining a set of admissible values for the matching position based on the functional relation and the set of admissible values for the depth values.
5. The method of claim 4, wherein the determining a set of admissible values for the depth values based on the set of admissible values for the disparity values comprises:
determining the set of admissible values for the depth values based on the formula $z = f \cdot B / d$ and the set of admissible values for the disparity values;
wherein f is the focal length of the image capturing component used to capture the first two-dimensional face image, d is a disparity value in the set of admissible values for the disparity values, and B is the baseline between the two image capturing components.
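Concretely, with made-up calibration numbers (both values below are assumptions for illustration, not from the patent):

```python
f, B = 1000.0, 0.06                    # assumed focal length (pixels) and baseline (metres)
disparities = range(1, 129)            # an assumed admissible disparity set
admissible_depths = [f * B / d for d in disparities]   # z = f * B / d for each disparity
```

With these numbers, disparity 1 maps to a depth of 60 m and disparity 128 to roughly 0.47 m, bracketing the depth search.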
6. The method of claim 1, wherein obtaining a first two-dimensional face image and a second two-dimensional face image of a target user comprises:
acquiring a first two-dimensional image of a target user captured by a first infrared image capturing component, a second two-dimensional image of the target user captured by a second infrared image capturing component, and a third two-dimensional image of the target user captured by a visible-light image capturing component;
determining position information of the face key points in the third two-dimensional image, and determining the position information of the face key points in the first two-dimensional image and the second two-dimensional image based on the position information of the face key points in the third two-dimensional image;
and determining a first two-dimensional face image in the first two-dimensional image based on the position information of the face key points in the first two-dimensional image, and determining a second two-dimensional face image in the second two-dimensional image based on the position information of the face key points in the second two-dimensional image.
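A hedged sketch of the final cropping step (the transfer of key points between views is assumed already done; the margin parameter is our own addition):

```python
import numpy as np

def crop_face(image, keypoints, margin=0.2):
    """Crop the face region of `image` from an (N, 2) array of (u, v) key points,
    padding the key-point bounding box by `margin` on each side."""
    kp = np.asarray(keypoints, dtype=np.float64)
    (u0, v0), (u1, v1) = kp.min(axis=0), kp.max(axis=0)
    du, dv = (u1 - u0) * margin, (v1 - v0) * margin
    h, w = image.shape[:2]
    u0, v0 = max(int(u0 - du), 0), max(int(v0 - dv), 0)
    u1, v1 = min(int(u1 + du) + 1, w), min(int(v1 + dv) + 1, h)
    return image[v0:v1, u0:u1]
```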
7. The method of claim 1, wherein the obtaining a first two-dimensional face image and a second two-dimensional face image of a target user, determining an initial facial depth map of the target user based on the first two-dimensional face image and the second two-dimensional face image, comprises:
acquiring a first two-dimensional face image and a second two-dimensional face image of a target user, and determining an initial face depth map of the target user based on a first reduced-resolution face image corresponding to the first two-dimensional face image and a second reduced-resolution face image corresponding to the second two-dimensional face image;
the determining normal information of face pixel points in the first two-dimensional face image based on the initial face depth map comprises:
determining normal information of face pixel points in the first reduced-resolution face image based on the initial face depth map;
and carrying out up-sampling interpolation on the normal information of the face pixel points in the first reduced-resolution face image to obtain the normal information of the face pixel points in the first two-dimensional face image.
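One common realisation (not mandated by the claim) estimates normals on the reduced-resolution depth map by finite differences and lifts them to full resolution by interpolation; the SciPy dependency is our assumption:

```python
import numpy as np
from scipy.ndimage import zoom   # assumed available for the interpolation step

def normals_from_depth(depth):
    """Per-pixel unit normals from a depth map via finite differences."""
    dz_dv, dz_du = np.gradient(depth)                    # row (v) and column (u) slopes
    n = np.dstack([-dz_du, -dz_dv, np.ones_like(depth)])
    return n / np.linalg.norm(n, axis=2, keepdims=True)

def upsample_normals(normals_low, shape_high):
    """Bilinearly up-sample a low-resolution normal field and renormalise."""
    fy = shape_high[0] / normals_low.shape[0]
    fx = shape_high[1] / normals_low.shape[1]
    n = zoom(normals_low, (fy, fx, 1), order=1)          # order=1: bilinear interpolation
    return n / np.linalg.norm(n, axis=2, keepdims=True)
```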
8. An apparatus for determining a face depth map, the apparatus comprising:
the system comprises an acquisition module, a display module and a display module, wherein the acquisition module is used for acquiring a first two-dimensional face image and a second two-dimensional face image of a target user and determining an initial face depth map of the target user based on the first two-dimensional face image and the second two-dimensional face image;
the calculation module is used for determining normal information of face pixel points in the first two-dimensional face image based on the initial face depth map;
the determining module is used for determining a target matching position with the highest similarity between the corresponding pixel value and the pixel value of the face pixel point in the second two-dimensional face image according to the normal information of the face pixel point for each face pixel point in the first two-dimensional face image, and determining a depth value corresponding to the face pixel point according to the target matching position;
a correction module that determines a corrected face depth map of the target user based on a depth value corresponding to each face pixel point in the first two-dimensional face image.
9. The apparatus of claim 8, wherein the determining module is configured to:
determine a functional relation between the depth value of the facial pixel point and the matching position according to the normal information of the facial pixel point, determine a set of admissible values for the matching position based on the functional relation, determine, in the set of admissible values, a target matching position with the highest similarity between the corresponding pixel value and the pixel value of the facial pixel point, and determine, from the functional relation, the depth value corresponding to the target matching position, wherein the matching position is the position, in the second two-dimensional facial image, of the pixel point corresponding to the facial pixel point.
10. The apparatus of claim 9, wherein the determining module is configured to:
according to the normal information of the facial pixel point, determine the functional relation between the depth value of the facial pixel point and the matching position as

$P_r = K_r \left( R_{lr} - \frac{t_{lr} \cdot n^T}{z} \right) K_l^{-1} P_l$

wherein $P_l$ is the position information of any facial pixel point in the first two-dimensional face image, $P_r$ is the position information of the pixel point in the second two-dimensional face image corresponding to that facial pixel point, $K_l$ is the internal reference matrix of the first image capturing component used to capture the first two-dimensional face image, $K_r$ is the internal reference matrix of the second image capturing component used to capture the second two-dimensional face image, $R_{lr}$ is the rotation parameter matrix from the first image capturing component to the second image capturing component, $t_{lr}$ is the translation parameter matrix from the first image capturing component to the second image capturing component, $n$ is the normal information of the facial pixel point (a normal vector), and $z$ is the depth value of the facial pixel point.
11. The apparatus of claim 9 or 10, wherein the determining module is configured to:
determine a set of admissible values for disparity values based on resolutions of the first and second two-dimensional face images;
determine a set of admissible values for the depth values based on the set of admissible values for the disparity values; and
determine a set of admissible values for the matching position based on the functional relation and the set of admissible values for the depth values.
12. The apparatus of claim 11, wherein the determining module is configured to:
determine the set of admissible values for the depth values based on the formula $z = f \cdot B / d$ and the set of admissible values for the disparity values;
wherein f is the focal length of the image capturing component used to capture the first two-dimensional face image, d is a disparity value in the set of admissible values for the disparity values, and B is the baseline between the two image capturing components.
13. The apparatus of claim 8, wherein the obtaining module is configured to:
acquire a first two-dimensional image of the target user captured by a first infrared image capturing component, a second two-dimensional image of the target user captured by a second infrared image capturing component, and a third two-dimensional image of the target user captured by a visible-light image capturing component;
determine position information of the face key points in the third two-dimensional image, and determine the position information of the face key points in the first two-dimensional image and the second two-dimensional image based on the position information of the face key points in the third two-dimensional image; and
determine a first two-dimensional face image in the first two-dimensional image based on the position information of the face key points in the first two-dimensional image, and determine a second two-dimensional face image in the second two-dimensional image based on the position information of the face key points in the second two-dimensional image.
14. The apparatus of claim 8, wherein the obtaining module is configured to:
acquire a first two-dimensional face image and a second two-dimensional face image of the target user, and determine an initial face depth map of the target user based on a first reduced-resolution face image corresponding to the first two-dimensional face image and a second reduced-resolution face image corresponding to the second two-dimensional face image;
the calculation module is configured to:
determine normal information of face pixel points in the first reduced-resolution face image based on the initial face depth map; and
perform up-sampling interpolation on the normal information of the face pixel points in the first reduced-resolution face image to obtain the normal information of the face pixel points in the first two-dimensional face image.
CN201811231805.1A 2018-10-22 2018-10-22 Method and device for determining face depth map Active CN111080689B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811231805.1A CN111080689B (en) 2018-10-22 2018-10-22 Method and device for determining face depth map


Publications (2)

Publication Number Publication Date
CN111080689A CN111080689A (en) 2020-04-28
CN111080689B true CN111080689B (en) 2023-04-14

Family

ID=70309915

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811231805.1A Active CN111080689B (en) 2018-10-22 2018-10-22 Method and device for determining face depth map

Country Status (1)

Country Link
CN (1) CN111080689B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2386998A1 (en) * 2010-05-14 2011-11-16 Honda Research Institute Europe GmbH A Two-Stage Correlation Method for Correspondence Search
CN101908230A (en) * 2010-07-23 2010-12-08 东南大学 Regional depth edge detection and binocular stereo matching-based three-dimensional reconstruction method
CN103438834A (en) * 2013-09-17 2013-12-11 清华大学深圳研究生院 Hierarchy-type rapid three-dimensional measuring device and method based on structured light projection
CN105718853A (en) * 2014-12-22 2016-06-29 现代摩比斯株式会社 Obstacle detecting apparatus and obstacle detecting method
CN106228507A (en) * 2016-07-11 2016-12-14 天津中科智能识别产业技术研究院有限公司 A kind of depth image processing method based on light field
CN107247834A (en) * 2017-05-31 2017-10-13 华中科技大学 A kind of three dimensional environmental model reconstructing method, equipment and system based on image recognition
CN108288286A (en) * 2018-01-17 2018-07-17 视缘(上海)智能科技有限公司 A kind of half global solid matching method preferential based on surface orientation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant