CN109377551B - Three-dimensional face reconstruction method and device and storage medium thereof


Publication number
CN109377551B
CN109377551B (granted publication of application CN201811207981.1A)
Authority
CN
China
Prior art keywords
depth data
point cloud
under
angle
face
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811207981.1A
Other languages
Chinese (zh)
Other versions
CN109377551A (en)
Inventor
廖声洋 (Liao Shengyang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Kuangshi Technology Co Ltd
Original Assignee
Beijing Kuangshi Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Kuangshi Technology Co Ltd filed Critical Beijing Kuangshi Technology Co Ltd
Priority to CN201811207981.1A priority Critical patent/CN109377551B/en
Publication of CN109377551A publication Critical patent/CN109377551A/en
Application granted granted Critical
Publication of CN109377551B publication Critical patent/CN109377551B/en

Classifications

    • G PHYSICS
        • G06 COMPUTING; CALCULATING OR COUNTING
            • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
                • G06T17/00 Three-dimensional [3D] modelling, e.g. data description of 3D objects
                • G06T7/00 Image analysis
                    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
                        • G06T7/33 using feature-based methods
                    • G06T7/50 Depth or shape recovery
                        • G06T7/55 from multiple images
                            • G06T7/593 from stereo images
                • G06T2200/00 Indexing scheme for image data processing or generation, in general
                    • G06T2200/08 involving all processing steps from image acquisition to 3D model generation
                • G06T2207/00 Indexing scheme for image analysis or image enhancement
                    • G06T2207/10 Image acquisition modality
                        • G06T2207/10016 Video; Image sequence
                        • G06T2207/10028 Range image; Depth image; 3D point clouds
                    • G06T2207/30 Subject of image; Context of image processing
                        • G06T2207/30196 Human being; Person
                            • G06T2207/30201 Face

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a three-dimensional face reconstruction method, a corresponding device, and a storage medium, and relates to the technical field of image processing. The three-dimensional face reconstruction method comprises the following steps: acquiring depth data of a target face under a plurality of view angles; judging whether the depth data under each view angle contains local depth data abnormal points; if so, re-acquiring the depth data under the view angles corresponding to the local depth data abnormal points, thereby updating the depth data under the multiple view angles; and obtaining a three-dimensional model of the target face from the depth data under the multiple view angles. When local depth data abnormal points exist, the method performs supplementary acquisition of depth data through the depth camera, which improves the error-correction capability of face modeling and yields a more accurate three-dimensional face model.

Description

Three-dimensional face reconstruction method and device and storage medium thereof
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a three-dimensional face reconstruction method, apparatus, and storage medium thereof.
Background
With the rapid development of computing devices, networks, and image processing technologies, traditional manual visual image recognition has gradually been replaced by automated computer image recognition, greatly improving the efficiency and accuracy of image recognition. Automated three-dimensional face modeling based on face depth data is a common task in the field of computer vision and is widely applied.
However, in existing three-dimensional face modeling approaches, point cloud matching between depth data and planar face images often produces local depth data abnormal points due to calculation errors, data acquisition errors, and the like, and these abnormal points greatly affect the accuracy of the resulting three-dimensional face model. In the prior art, the depth data of the complete face at different angles must be scanned in a single pass before point cloud registration is performed; there is no real-time feedback logic, so local depth data abnormal points appear easily, and the acquisition quality of the depth data is therefore difficult to control effectively.
Disclosure of Invention
In view of the above, an objective of the embodiments of the present invention is to provide a three-dimensional face reconstruction method, apparatus and storage medium thereof, so as to solve the above-mentioned problems.
In a first aspect, an embodiment of the present invention provides a three-dimensional face reconstruction method, where the three-dimensional face reconstruction method includes: acquiring depth data of a target face under a plurality of view angles; judging whether the depth data under each view angle has local depth data abnormal points or not; if yes, the depth data under the view angles corresponding to the local depth data abnormal points are re-acquired, so that the depth data under the multiple view angles are updated; and obtaining the three-dimensional model of the target face according to the depth data under the multiple view angles.
In combination with the first aspect, the acquiring depth data of the target face under a plurality of view angles comprises: driving the depth camera to photograph the target face at each preset shooting angle as it rotates, thereby obtaining depth data of the target face under a plurality of view angles.
In combination with the first aspect, the judging whether the depth data under each view angle contains local depth data abnormal points comprises: performing point cloud matching between the depth data under each view angle and the planar face image acquired under the same view angle, and judging, based on the point cloud matching result, whether local depth data abnormal points exist in the depth data under each view angle.
In combination with the first aspect, the performing point cloud matching between the depth data under each view angle and the planar face image acquired under the same view angle comprises: obtaining a key point set of the planar face image under each view angle through face key point detection, taking the key point set as the target point cloud P_t, and taking the point set of the depth data as the source point cloud P_s; determining, through coarse matching, an approximate rotation matrix R and translation matrix T in the point cloud registration equation P_t = R*P_s + T; determining, through fine registration based on the approximate rotation matrix R and translation matrix T, an accurate rotation matrix R and translation matrix T in the point cloud registration equation; and substituting the accurate rotation matrix R and translation matrix T into the point cloud registration equation to obtain the transformation result.
In combination with the first aspect, the determining, through coarse matching, of the approximate rotation matrix R and translation matrix T in the point cloud registration equation P_t = R*P_s + T comprises: determining, through a four-point congruent sets (4PCS) search strategy, an approximate rotation matrix R and translation matrix T for which the overlap degree between the target point cloud P_t and the source point cloud P_s exceeds a preset overlap threshold, wherein a point of the source point cloud P_s that, after transformation by the approximate R and T, lies within a tolerance of some point of the target point cloud P_t is an overlapping point, and the ratio of the number of overlapping points to the total number of points is the overlap degree.
In combination with the first aspect, the determining, through fine registration based on the approximate rotation matrix R and translation matrix T, of the accurate rotation matrix R and translation matrix T in the point cloud registration equation comprises: transforming the source point cloud P_s into the coordinate system of the target point cloud P_t using the approximate rotation matrix R and translation matrix T, so as to determine, as corresponding points P_i^t and P_i^s, those pairs of points in the source point cloud P_s and the target point cloud P_t whose distance is smaller than a corresponding-point threshold; and iteratively optimizing the rotation matrix R and translation matrix T based on the approximate R and T and the corresponding points P_i^t and P_i^s to obtain the accurate rotation matrix R and translation matrix T.
In combination with the first aspect, the judging, based on the point cloud matching result, whether local depth data abnormal points exist in the depth data under each view angle comprises: determining that the planar face image corresponding to a first view angle has been acquired; judging whether the overlap degree between the point sets of the depth data under the first view angle and the depth data under an adjacent view angle is greater than a preset adjacency overlap threshold; if so, determining that no local depth data abnormal point exists in the depth data under the first view angle; if not, determining that local depth data abnormal points exist in the depth data under the first view angle.
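As a hedged illustration of this adjacency check, the sketch below flags a view as abnormal when too few of its points lie near the neighbouring view's point set; the tolerance and threshold values, and the assumption that both point sets already share a common coordinate system, are illustrative only and not specified by the patent:

```python
# Illustrative sketch of the adjacent-view overlap check (values assumed).
import numpy as np

def overlap_with_neighbor(cloud_a, cloud_b, tol=0.1):
    """Fraction of points in cloud_a within `tol` of some point in cloud_b."""
    d = np.linalg.norm(cloud_a[:, None, :] - cloud_b[None, :, :], axis=2)
    return float(np.mean(d.min(axis=1) < tol))

def has_local_abnormal_points(cloud, neighbor_cloud, adjacency_threshold=0.5):
    """True when the overlap degree with the adjacent view is too low."""
    return overlap_with_neighbor(cloud, neighbor_cloud) <= adjacency_threshold

a = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
print(has_local_abnormal_points(a, a))        # False: full overlap
print(has_local_abnormal_points(a, a + 5.0))  # True: the views disagree
```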
In combination with the first aspect, the three-dimensional face reconstruction method further includes: and acquiring a preview video stream of the target face through a planar camera, determining the position of the target face under each view angle based on the preview video stream, and determining a planar face image under each view angle based on the position of the target face in the preview video stream under each view angle.
In combination with the first aspect, the three-dimensional face reconstruction method further includes: and adjusting an acquisition area of the depth camera under each view angle based on the position, so that the depth camera can acquire the depth data of the target face in the acquisition area.
In a second aspect, an embodiment of the present invention provides a three-dimensional face reconstruction device, including: the depth data acquisition module is used for acquiring depth data of the target face under a plurality of view angles; the abnormal point judging module is used for judging whether the depth data under each view angle has local depth data abnormal points or not; the supplementary acquisition module is used for re-acquiring depth data under the view angles corresponding to the local depth data abnormal points when the local depth data abnormal points exist, so as to update the depth data under the multiple view angles; and the model acquisition module is used for acquiring a three-dimensional model of the target face according to the depth data under the multiple view angles.
In a third aspect, an embodiment of the present invention further provides an adjustable image capturing apparatus, where the adjustable image capturing apparatus includes an image capturing component and a processor, where the image capturing component includes a depth camera and a planar camera, and the depth camera and the planar camera may rotate and translate based on a control signal of the processor.
In a fourth aspect, an embodiment of the present invention further provides an electronic device, where the electronic device includes a processor, a memory, and a bus, where the processor is connected to the memory through the bus, and the processor may execute a program stored in the memory to perform steps in the method in any one of the foregoing aspects.
In a fifth aspect, embodiments of the present invention also provide a computer readable storage medium having stored therein computer program instructions which, when read and executed by a processor, perform the steps of the method of any of the above aspects.
The beneficial effects provided by the invention are as follows:
the invention provides a three-dimensional face reconstruction method, a device and a storage medium thereof, wherein the three-dimensional face reconstruction method carries out depth data supplementary acquisition of local depth data abnormal points when the local depth data abnormal points exist in the depth data, so that immediate system feedback and supplementary acquisition of the depth data are carried out when the local depth data abnormal points appear in the depth data, the problem that the local depth data abnormal points easily appear in the depth data is solved, and the acquisition quality of the depth data is improved; according to the method, the three-dimensional modeling of the target face is performed based on the depth data after the supplementary acquisition, and the accuracy of the three-dimensional model of the target face obtained by reconstructing the three-dimensional face is improved.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be apparent from the description, or may be learned by practice of the embodiments of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims thereof as well as the appended drawings.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the embodiments will be briefly described below, it being understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and other related drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic flow chart of a three-dimensional face reconstruction method according to the first embodiment of the present invention;
Fig. 2 is a schematic flow chart of the planar face image acquisition and acquisition position determining steps according to the first embodiment of the present invention;
Fig. 3 is a flowchart illustrating the position determining step according to the first embodiment of the present invention;
Fig. 4 is a flowchart illustrating the steps of establishing a face key point detection model according to the first embodiment of the present invention;
Fig. 5 is a schematic flow chart of the point cloud matching step according to the first embodiment of the present invention;
Fig. 6 is a flowchart illustrating the local depth data abnormal point determining step according to the first embodiment of the present invention;
Fig. 7 is a schematic block diagram of a three-dimensional face reconstruction device according to the second embodiment of the present invention;
Fig. 8 is a block diagram of an electronic device according to the third embodiment of the present invention.
Icon: 100-a three-dimensional face reconstruction device; 110-a depth data acquisition module; 120-an abnormal point judging module; 130-a supplemental acquisition module; 140-a model acquisition module; 200-an electronic device; 201-an adjustable camera device; 202-a processor; 203-memory; 204-a memory controller; 205-display.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. The components of the embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the invention, as presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be made by a person skilled in the art without making any inventive effort, are intended to be within the scope of the present invention.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures. Meanwhile, in the description of the present invention, the terms "first", "second", and the like are used only to distinguish the description, and are not to be construed as indicating or implying relative importance.
Terms that may appear in the embodiments of the present invention will be explained first:
the depth camera is used for acquiring stereoscopic depth data of an object, and the existing depth camera generally adopts a Time of flight (TOF) or a structured light ranging method. Wherein the time-of-flight ranging 3D imaging is performed by continuously transmitting light pulses to the target, then receiving light returned from the object with a sensor, and obtaining the target distance by detecting the flight (round trip) time of the light pulses. This technique is basically similar to the principle of a 3D laser sensor, except that the 3D laser sensor scans point by point, while the TOF camera obtains depth information of the entire image at the same time. The TOF camera is similar to the common machine vision imaging process, and consists of a light source, an optical component, a sensor, a control circuit, a processing circuit and the like. Compared with a binocular measuring system which belongs to the non-invasive three-dimensional detection and is very similar to the application field, a TOF camera has a fundamentally different 3D imaging mechanism, binocular stereo measurement is carried out by performing stereo detection through a triangulation method after left and right stereo pair matching, and the TOF camera is used for acquiring a target distance acquired through incident and reflected light detection. The principle of the structured light ranging method is to avoid the complex algorithm design in binocular matching, and change one camera into an infrared projector which actively projects complex light spots outwards, and the camera at the other parallel position also becomes an infrared camera, so that all the light spots projected by the projector can be clearly seen. The infrared light spots cannot be seen by the human eyes, and the texture is very complex, so that the binocular matching algorithm is very beneficial, and the depth information can be identified by using a very concise algorithm.
For example, based on the texture information of a planar face image, the coordinates of key points in the corresponding depth data can be transformed into the coordinate system of the planar face image's key points. In fields such as reverse engineering, computer vision, and the digitization of cultural relics, point clouds acquired from different view angles suffer from rotational and translational misalignment, so the acquired point clouds must be registered: to obtain a complete data model of the measured object, an appropriate coordinate system is determined and the point sets obtained from all view angles are merged into a complete point cloud in the coordinate system of the target point cloud, after which visualization and further operations become convenient.
First embodiment
The applicant has found that 3D face reconstruction rebuilds a 3D model of a face from one or more pictures. The traditional approach to this problem is model matching: identifying key points in several 2D face pictures, matching a model based on those key points, and adjusting the three-dimensional feature points of the 3D model until they match the corresponding feature points in all 2D pictures, thereby obtaining the reconstructed 3D face model. The three-dimensional shape can be obtained from face depth data acquired by a depth camera, which improves the accuracy of 3D face reconstruction. However, due to problems in the acquisition of the depth data or in matching the depth data to the planar 2D pictures, local depth data abnormal points often appear in the point-cloud-matched face data; and because 3D face reconstruction scans the whole scene once before performing point cloud registration, these local abnormal points are hard to correct and greatly degrade the quality of the reconstructed face model. To solve these problems, the first embodiment of the present invention provides a three-dimensional face reconstruction method, which may be executed by a computer, a smartphone, a cloud server, or any other processing device capable of logic operations.
Referring to fig. 1, fig. 1 is a flow chart of a three-dimensional face reconstruction method according to a first embodiment of the present invention. The specific steps of the three-dimensional face reconstruction method can be as follows:
step S10: and acquiring depth data of the target face under a plurality of view angles.
In this embodiment, the depth data may be obtained by a depth camera, which may be based on structured light, TOF, or another depth ranging method. The different view angles in this step may vary in the horizontal direction, in the vertical direction, or along any direction on a gimbal sphere facing the face, which increases the completeness of the face image samples and improves the accuracy of the subsequently reconstructed model.
Step S20: judging whether the depth data under each view angle contains local depth data abnormal points.
In this embodiment, local depth data abnormal points may be determined based on the point cloud matching result of depth data under adjacent view angles. It should be understood that in other embodiments, the determination may instead, or additionally, be based on matching the depth image with the planar face image acquired under the same view angle.
Step S30: and if so, re-acquiring the depth data under the view angles corresponding to the local depth data abnormal points so as to update the depth data under the multiple view angles.
The three-dimensional face reconstruction method provided by this embodiment may tag the depth data acquired at each preset angle with that angle, establishing a mapping table between the depth data and its acquisition angle, so that when a local depth data abnormal point is found at some angle, the depth camera is driven back to the corresponding angle according to the mapping table to re-acquire the depth data.
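The feedback loop described above can be sketched as follows; `capture_depth` and `has_local_outliers` are hypothetical stand-ins for the depth camera driver and the abnormal-point check, which the patent does not specify at this level of detail:

```python
# Hypothetical sketch of the angle-to-depth-data mapping table with
# re-acquisition feedback; the callables are assumed placeholders.
def acquire_with_feedback(angles, capture_depth, has_local_outliers, max_retries=3):
    """Capture depth data per angle, re-capturing any view flagged as abnormal."""
    depth_by_angle = {a: capture_depth(a) for a in angles}  # the mapping table
    for angle in angles:
        retries = 0
        while has_local_outliers(depth_by_angle[angle]) and retries < max_retries:
            depth_by_angle[angle] = capture_depth(angle)  # drive camera back
            retries += 1
    return depth_by_angle
```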
Step S40: and obtaining the three-dimensional model of the target face according to the depth data under the multiple view angles.
Through steps S10 to S40, the embodiment of the present invention performs supplementary acquisition of depth data when local depth data abnormal points exist, so that the system gives immediate feedback and re-acquires the depth data as soon as such abnormal points appear; this alleviates the problem that local depth data abnormal points easily appear in the depth data and improves the acquisition quality of the depth data. The three-dimensional modeling of the target face is then performed on the supplemented depth data, which improves the accuracy of the three-dimensional model of the target face obtained by the reconstruction.
As an optional implementation manner, the present embodiment may further include a step of planar face image acquisition and acquisition position determination before step S10, please refer to fig. 2, fig. 2 is a schematic flow chart of a planar face image acquisition and acquisition position determination step provided in the first embodiment of the present invention. The planar face image acquisition and acquisition position determination steps can be specifically as follows:
step S2: and acquiring a preview video stream of the target face through a planar camera, determining the position of the target face under each view angle based on the preview video stream, and determining a planar face image under each view angle based on the position of the target face in the preview video stream under each view angle.
In this embodiment, the preview video stream may be displayed to the user on a screen while it is collected, so that the user can adjust the face position; alternatively, the preview video stream may not be displayed, which simplifies the user's operation flow.
Step S4: and adjusting an acquisition area of the depth camera under each view angle based on the position, so that the depth camera can acquire the depth data of the target face in the acquisition area.
The adjustment of the acquisition area of the depth camera in this embodiment may include adjustment of basic parameters of the camera such as a viewing angle, a lens focal length, and an aperture.
Alternatively, the determination of the face position may be achieved by determining a face key point in the planar face image, specifically referring to fig. 3, step S2 in this embodiment may include the following specific steps:
step S2.1: and loading a face key point detection model constructed based on the neural network.
Step S2.2: and acquiring a preview image corresponding to the preview data frame of the face in each view angle in the video stream.
Step S2.3: and determining the position of the human face in the preview image through the human face key point detection model.
It should be understood that before the face key point detection model is loaded in step S2.1, the model needs to be built and trained on a face image library containing face key point labels. Referring to fig. 4, fig. 4 is a flowchart illustrating the steps of establishing a face key point detection model according to the first embodiment of the present invention. The steps of establishing the face key point detection model can be as follows:
step S1.1: and acquiring or directly acquiring a plurality of face images.
Step S1.2: and carrying out face key point accurate labeling on the face image.
Step S1.3: and dividing the marked face image into a training set and a verification set.
Step S1.4: training the training set based on the neural network model, simultaneously verifying the training intermediate result by using the verification set, adjusting training parameters in real time, and stopping training when the training precision and the verification precision reach the corresponding threshold values to obtain the face key point detection model.
Optionally, in step S1.3 in this embodiment, a test set may be further divided while the training set and the verification set are divided, and then the face key point detection model is tested by using the test set after step S1.4, so as to measure performance and accuracy of the model.
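Steps S1.1 to S1.4 and the note above can be illustrated with a minimal dataset split; the 80/10/10 ratio and the fixed seed are assumptions for illustration, not values given by the patent:

```python
# Minimal sketch (assumed, not from the patent) of the train/validation/test
# split of the labelled face images described in step S1.3 and the note above.
import random

def split_dataset(samples, train=0.8, val=0.1, seed=42):
    """Shuffle labelled face images and split into train/val/test subsets."""
    items = list(samples)
    random.Random(seed).shuffle(items)
    n = len(items)
    n_train = int(n * train)
    n_val = int(n * val)
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])

train_set, val_set, test_set = split_dataset(range(100))
print(len(train_set), len(val_set), len(test_set))  # 80 10 10
```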
For step S10, the "obtaining depth data of the target face under multiple views" may specifically be: and driving the depth camera to shoot the target face when rotating to each preset shooting angle, and obtaining depth data of the target face under a plurality of view angles.
The preset shooting angles may start from the depth camera's maximum rotation angle as the preset starting angle, with one depth data capture performed for every 1° of rotation. It should be understood that the interval between successive depth data captures can be set to values other than 1° according to user needs or depth data acquisition requirements.
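A capture schedule of this kind can be sketched as below; the symmetric sweep from the maximum rotation angle on one side to the other is an illustrative assumption, since the patent only fixes the starting angle and the step:

```python
# Hypothetical sketch of the preset shooting angles: start at the camera's
# maximum rotation angle and capture every `step` degrees across the sweep.
def shooting_angles(max_angle: float, step: float = 1.0):
    """Preset shooting angles from +max_angle down to -max_angle."""
    angles = []
    a = max_angle
    while a >= -max_angle:
        angles.append(a)
        a -= step
    return angles

print(shooting_angles(2.0, 1.0))  # [2.0, 1.0, 0.0, -1.0, -2.0]
```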
For step S20, the step of "determining whether the depth data under each view angle has the local depth data abnormal point" may specifically be: and carrying out point cloud matching on the depth data under each view angle and the planar face image acquired under the same view angle, and judging whether local depth data abnormal points exist in the depth data under each view angle or not based on the result of the point cloud matching.
As an optional implementation manner, the specific steps of performing the point cloud matching on the depth data under each view angle and the planar face image acquired under the same view angle are shown in fig. 5, and fig. 5 is a schematic flow chart of a point cloud matching step provided in the first embodiment of the present invention, where the point cloud matching step specifically may be as follows:
step S21: acquiring a key point set of a planar face image under each view angle through face key point detection, and taking the key point set as a target point cloud P t Taking the point set of the depth data as a source point cloud P s
Step S22: determining the approximate rotation matrix R and translation matrix T in the point cloud registration equation P_t = R*P_s + T through rough matching.
In this embodiment, step S22 may specifically be: determining, through a four-point coincidence search strategy, an approximate rotation matrix R and translation matrix T for the point cloud registration equation P_t = R*P_s + T under which the overlap degree between the target point cloud P_t and the source point cloud P_s exceeds a preset overlap threshold. Here, a point of the source point cloud P_s that, after transformation based on the approximate rotation matrix R and translation matrix T, lies within a tolerance of some point of the target point cloud P_t is an overlapping point, and the ratio of the number of overlapping points to the total number of points is the overlap degree.
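The overlap-degree criterion can be illustrated with a small numpy sketch. The function name, brute-force nearest-neighbour search, and tolerance handling are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

def overlap_degree(R, T, source, target, tol):
    """Fraction of source points that, after the transform P_t = R*P_s + T,
    lie within `tol` of some target point (the 'overlapping points')."""
    moved = source @ R.T + T
    # brute-force pairwise distances; fine for small demo clouds
    d = np.linalg.norm(moved[:, None, :] - target[None, :, :], axis=2)
    return float((d.min(axis=1) <= tol).mean())
```

With the identity transform and identical clouds the overlap degree is 1.0; a candidate (R, T) from the rough search would be accepted once this value exceeds the preset overlap threshold.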
Step S23: determining, based on the approximate rotation matrix R and translation matrix T, the accurate rotation matrix R and translation matrix T in the point cloud registration equation through fine registration.
In this embodiment, step S23 may specifically be: using the approximate rotation matrix R and translation matrix T to transform the source point cloud P_s into the coordinate system of the target point cloud P_t, and determining as corresponding points P_i^t and P_i^s those pairs of points in the source point cloud P_s and the target point cloud P_t whose distance is smaller than a corresponding-point threshold; then, based on the approximate rotation matrix R and translation matrix T and the corresponding points P_i^t and P_i^s, iteratively optimizing the rotation matrix R and the translation matrix T to obtain the accurate rotation matrix R and translation matrix T.
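The correspondence-then-optimize loop of step S23 is essentially iterative closest point (ICP) refinement. Below is a minimal numpy sketch assuming an SVD-based (Kabsch) solve for each rigid update; the function names, brute-force matching, and fixed iteration count are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

def rigid_from_correspondences(src, dst):
    """Least-squares rigid transform (Kabsch/SVD) mapping src points onto dst."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    # reflection guard: force det(R) = +1
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cd - R @ cs

def icp_refine(R, T, source, target, dist_thresh, iters=20):
    """Refine an approximate (R, T): find corresponding points closer than
    `dist_thresh`, re-estimate the transform, and repeat."""
    for _ in range(iters):
        moved = source @ R.T + T
        d = np.linalg.norm(moved[:, None, :] - target[None, :, :], axis=2)
        nn = d.argmin(axis=1)
        keep = d[np.arange(len(source)), nn] < dist_thresh  # corresponding pairs
        if keep.sum() < 3:
            break
        R, T = rigid_from_correspondences(source[keep], target[nn[keep]])
    return R, T
```

Starting from the approximate (R, T) of step S22, the loop tightens the registration until the correspondences stabilize.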
Step S24: substituting the accurate rotation matrix R and the translation matrix T into the point cloud registration equation to obtain a transformation result.
In this embodiment, the above point cloud matching and the depth data acquisition proceed simultaneously, so as to speed up feedback on local depth data abnormal points. For example, while the depth camera acquires depth data of the face at a third angle, the processing device may perform point cloud matching between the planar face image and the depth data at a second angle, and judge from that matching result whether local depth data abnormal points exist, rather than discovering the abnormal points only after point cloud registration of the whole face is complete, when re-acquisition of depth data and correction of the abnormal points are difficult.
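The acquire-while-matching pipeline described in this paragraph can be sketched with a worker thread and a FIFO queue. All names here are hypothetical; the patent does not prescribe a particular concurrency model:

```python
import queue
import threading

def acquisition_pipeline(capture, match, angles):
    """While the camera captures angle k, the frame from the previous angle
    is point-cloud-matched on a worker thread."""
    frames = queue.Queue()
    results = []

    def matcher():
        while True:
            item = frames.get()
            if item is None:          # sentinel: acquisition finished
                break
            angle, depth, image = item
            results.append((angle, match(depth, image)))  # per-view anomaly check

    worker = threading.Thread(target=matcher)
    worker.start()
    for angle in angles:
        depth, image = capture(angle)      # blocks while the camera rotates/shoots
        frames.put((angle, depth, image))  # hand off to the matching thread
    frames.put(None)
    worker.join()
    return results
```

Because matching of angle k-1 overlaps with capture of angle k, an anomalous view can trigger supplementary acquisition while the rig is still in position.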
Further, in step S20, the specific steps of determining, based on the point cloud matching result, whether local depth data abnormal points exist in the depth data under each view angle are shown in fig. 6. Fig. 6 is a schematic flow chart of the local depth data abnormal point determining step provided in the first embodiment of the present invention; the step may specifically be as follows:
Step S25: determining that the planar face image corresponding to the first view angle has been acquired.
Wherein the planar face image is typically an RGB image.
Step S26: judging whether the overlap degree between the point sets of the depth data under the first view angle and the depth data under an adjacent view angle is greater than a preset adjacent overlap threshold.
Step S27: if so, determining that the depth data under the first view angle does not have local depth data abnormal points.
Step S28: if not, determining that local depth data abnormal points exist in the depth data under the first view angle.
Through steps S21-S28, this embodiment performs point cloud matching between the depth data under adjacent view angles and the planar face image acquired at the same angle as the depth data, accurately determines whether the depth data acquired under each angle contains local depth data abnormal points, improves the accuracy of the depth data, and thereby improves the accuracy of three-dimensional face modeling.
As an optional implementation, after step S20 determines that local depth data abnormal points exist in the depth data under a certain view angle, this embodiment may further determine the number of local depth data abnormal points under that view angle, that is, judge whether the number of local depth data abnormal points in the depth data under the first view angle is greater than a preset abnormal-point-number threshold. When the number of local depth data abnormal points is below the threshold, the subsequent step S30 is executed; when it is above the threshold, the depth data under that view angle is discarded to ensure the accuracy of the depth data.
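The per-view decision logic of steps S26-S28 combined with the outlier-count policy above can be summarised in a small sketch. The function and return-value names are illustrative, not terminology from the patent:

```python
def handle_view(overlap, adjacent_threshold, outlier_count, max_outliers):
    """Decide what to do with one view's depth data.

    A view is anomalous when its overlap with the adjacent view's point set
    does not exceed the preset adjacent-overlap threshold (steps S26-S28);
    an anomalous view is re-captured only if its outlier count stays below
    the preset abnormal-point-number threshold, otherwise it is discarded.
    """
    if overlap > adjacent_threshold:
        return "keep"        # no local depth data abnormal points
    if outlier_count > max_outliers:
        return "discard"     # too many outliers: drop this view's data
    return "recapture"       # step S30: supplementary acquisition
```

The same routine can be re-applied to supplementarily acquired depth data, matching the follow-up check described in the next paragraph.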
As an optional implementation, after the "performing depth data supplementary collection on the area of the local depth data abnormal points" of step S30 is executed, the determination of local depth data abnormal points may further be performed on the supplementarily acquired depth data, that is: judging whether the number of local depth data abnormal points in the supplementarily acquired depth data under that view angle exceeds a preset threshold; if so, discarding the supplementarily acquired depth data under that view angle. The preset threshold may be adjusted according to the specific situation and may be a value including zero. Optionally, in other embodiments, after the supplementarily acquired depth data under that view angle is discarded, supplementary acquisition may be performed again after slightly adjusting the view angle.
For step S40, the three-dimensional model in this embodiment is obtained by performing point cloud matching on the depth data under the multiple view angles and then performing surface fitting; for the specific manner of point cloud matching, refer to steps S21-S24.
As an optional implementation, considering that the user may have aesthetic or other requirements on the three-dimensional face model, the three-dimensional model may need the user's approval. After step S40, this embodiment may further include the following steps: transmitting the three-dimensional model to a display so that the user returns model confirmation information according to what is displayed, the model confirmation information indicating whether the user is satisfied with the three-dimensional model; if so, storing the three-dimensional model; if not, executing steps S10-S40 again.
Second embodiment
In order to cooperate with the three-dimensional face reconstruction method provided in the first embodiment of the present invention, the second embodiment of the present invention further provides a three-dimensional face reconstruction device 100.
Referring to fig. 7, fig. 7 is a schematic block diagram of a three-dimensional face reconstruction device according to a second embodiment of the present invention.
The three-dimensional face reconstruction device 100 includes a depth data acquisition module 110, an abnormal point judgment module 120, a supplementary acquisition module 130, and a model acquisition module 140.
The depth data obtaining module 110 is configured to obtain depth data of the target face under multiple angles of view.
The abnormal point judging module 120 is configured to judge whether the depth data at each view angle has a local depth data abnormal point.
The supplementary acquisition module 130 is configured to, when a local depth data abnormal point exists, re-acquire depth data under the view angle corresponding to the local depth data abnormal point, so as to update the depth data under the multiple view angles.
The model obtaining module 140 is configured to obtain a three-dimensional model of the target face according to the depth data under the multiple perspectives.
As an optional implementation, the three-dimensional face reconstruction device 100 in this embodiment may further include: a face position determining module, configured to acquire a preview video stream of the face through a planar camera, determine the position of the face under each view angle based on the preview video stream, and determine the planar face image under each view angle based on that position; and an adjusting module, configured to adjust the acquisition area of the depth camera under each view angle based on the position, so that the depth camera acquires the depth data of the target face within the acquisition area.
It will be clear to those skilled in the art that, for convenience and brevity of description, reference may be made to the corresponding procedure in the foregoing method for the specific working procedure of the apparatus described above, and this will not be repeated here.
Third embodiment
Referring to fig. 8, fig. 8 is a block diagram of an electronic device according to a third embodiment of the present invention. The electronic device 200 provided in the present embodiment may include the three-dimensional face reconstruction apparatus 100, the adjustable image capturing apparatus 201, and the processor 202. Optionally, the electronic device 200 may also include a memory 203, a memory controller 204, and a display 205.
The adjustable camera device 201 includes a camera assembly and a processor. The camera assembly includes a depth camera and a planar camera, both of which can rotate and translate based on control signals from the processor of the adjustable camera device 201. It should be understood that the processor of the adjustable camera device 201 may be the same as the processor 202.
The adjustable camera device 201, the processor 202, the memory 203, and the memory controller 204 are electrically connected to each other, directly or indirectly, to realize data transmission or interaction. For example, these components may be electrically connected to each other via one or more communication buses or signal lines. The three-dimensional face reconstruction device 100 includes at least one software functional module that may be stored in the memory 203 in the form of software or firmware, or built into the operating system (OS) of the three-dimensional face reconstruction device 100. The processor 202 is configured to execute executable modules stored in the memory 203, such as the software functional modules or computer programs included in the three-dimensional face reconstruction device 100.
The processor 202 may be an integrated circuit chip with signal processing capability. The processor 202 may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), etc.; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, which can implement or execute the methods, steps, and logic blocks disclosed in the embodiments of the present invention. A general-purpose processor may be a microprocessor, or the processor 202 may be any conventional processor, etc.
The memory 203 may be, but is not limited to, random access memory (RAM), read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), etc. The memory 203 is configured to store a program; the processor 202 executes the program after receiving an execution instruction, and the method executed by the server defined by the flow disclosed in any of the foregoing embodiments of the present invention may be applied to, or implemented by, the processor 202.
The display 205 provides an interactive interface (e.g., a user interface) between the electronic device 200 and the user, or is used to display image data for the user's reference, such as displaying the generated three-dimensional face model to the user. In this embodiment, the display 205 may be a liquid crystal display or a touch display. In the case of a touch display, it may be a capacitive or resistive touch screen supporting single-point and multi-point touch operations, meaning that the touch display can sense touch operations generated simultaneously at one or more locations on it and hand the sensed touch operations to the processor 202 for computation and processing.
It is to be understood that the configuration shown in fig. 8 is merely illustrative, and that the electronic device 200 may also include more or fewer components than those shown in fig. 8, or have a different configuration than that shown in fig. 8. The components shown in fig. 8 may be implemented in hardware, software, or a combination thereof.
It will be clear to those skilled in the art that, for convenience and brevity of description, reference may be made to the corresponding procedure in the foregoing method for the specific working procedure of the apparatus described above, and this will not be repeated here.
In summary, the embodiments of the present invention provide a three-dimensional face reconstruction method and device and a storage medium thereof. The method performs supplementary acquisition of depth data for the areas of local depth data abnormal points as soon as such points appear in the depth data, providing immediate system feedback and supplementary acquisition; this solves the problem that local depth data abnormal points readily appear in depth data and improves the acquisition quality of the depth data. The method then performs three-dimensional modeling of the target face based on the supplementarily acquired depth data, improving the accuracy of the three-dimensional model of the target face obtained by the reconstruction.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other manners as well. The apparatus embodiments described above are merely illustrative, for example, of the flowcharts and block diagrams in the figures that illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present invention may be integrated together to form a single part, or each module may exist alone, or two or more modules may be integrated to form a single part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
The above description is only of the preferred embodiments of the present invention and is not intended to limit the present invention, but various modifications and variations can be made to the present invention by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the protection scope of the present invention. It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures.
The foregoing is merely illustrative of the present invention, and the present invention is not limited thereto, and any person skilled in the art will readily recognize that variations or substitutions are within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
It is noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.

Claims (12)

1. The three-dimensional face reconstruction method is characterized by comprising the following steps of:
acquiring depth data of a target face under a plurality of view angles;
judging whether the depth data under each view angle has local depth data abnormal points or not;
if yes, the depth data under the view angles corresponding to the local depth data abnormal points are re-acquired, so that the depth data under the multiple view angles are updated;
obtaining a three-dimensional model of the target face according to the depth data under the multiple view angles;
judging whether the depth data under each view angle has local depth data abnormal points or not comprises the following steps:
performing point cloud matching on the depth data under each view angle and the planar face image acquired under the same view angle, and judging whether local depth data abnormal points exist in the depth data under each view angle or not based on the result of the point cloud matching;
the process of judging whether the local depth data abnormal points exist or not through the point cloud matching and the depth data acquisition process are performed simultaneously, the point cloud matching is performed on the plane face image of the second angle and the depth data of the second angle while the depth data of the target face is acquired at the third angle, and whether the local depth data abnormal points exist or not is judged according to the point cloud matching result of the second angle.
2. The method for reconstructing a three-dimensional face according to claim 1, wherein the acquiring depth data of the target face at a plurality of view angles comprises:
and driving the depth camera to shoot the target face when rotating to each preset shooting angle, and obtaining depth data of the target face under a plurality of view angles.
3. The three-dimensional face reconstruction method according to claim 1, wherein the performing point cloud matching on the depth data under each view angle and the planar face image acquired under the same view angle comprises:
acquiring a key point set of the planar face image under each view angle through face key point detection, taking the key point set as a target point cloud P_t, and taking the point set of the depth data as a source point cloud P_s;
determining an approximate rotation matrix R and translation matrix T in the point cloud registration equation P_t = R*P_s + T through rough matching;
determining an accurate rotation matrix R and translation matrix T in the point cloud registration equation through fine registration based on the approximate rotation matrix R and translation matrix T;
substituting the accurate rotation matrix R and translation matrix T into the point cloud registration equation to obtain a transformation result.
4. The three-dimensional face reconstruction method according to claim 3, wherein the determining the approximate rotation matrix R and translation matrix T in the point cloud registration equation P_t = R*P_s + T through rough matching comprises:
determining, through a four-point coincidence search strategy, an approximate rotation matrix R and translation matrix T for the point cloud registration equation P_t = R*P_s + T under which the overlap degree between the target point cloud P_t and the source point cloud P_s exceeds a preset overlap threshold, wherein a point of the source point cloud P_s that, after transformation based on the approximate rotation matrix R and translation matrix T, lies within a tolerance of any point of the target point cloud P_t is an overlapping point, and the ratio of the number of overlapping points to the total number of points is the overlap degree.
5. The three-dimensional face reconstruction method according to claim 3, wherein the determining the accurate rotation matrix R and translation matrix T in the point cloud registration equation through fine registration based on the approximate rotation matrix R and translation matrix T comprises:
transforming the source point cloud P_s into the coordinate system of the target point cloud P_t using the approximate rotation matrix R and translation matrix T, and determining as corresponding points P_i^t and P_i^s those pairs of points in the source point cloud P_s and the target point cloud P_t whose distance is smaller than a corresponding-point threshold;
iteratively optimizing the rotation matrix R and the translation matrix T based on the approximate rotation matrix R and translation matrix T and the corresponding points P_i^t and P_i^s, so as to obtain the accurate rotation matrix R and translation matrix T.
6. The three-dimensional face reconstruction method according to claim 1, wherein the determining whether the depth data at each view angle has the local depth data abnormal point based on the result of the point cloud matching comprises:
determining that the planar face image corresponding to a first view angle has been acquired;
judging whether the overlap degree between the point sets of the depth data under the first view angle and the depth data under an adjacent view angle is greater than a preset adjacent overlap threshold;
if yes, determining that no local depth data abnormal point exists in the depth data under the first view angle; if not, determining that local depth data abnormal points exist in the depth data under the first view angle.
7. The three-dimensional face reconstruction method according to claim 1, further comprising:
and acquiring a preview video stream of the target face through a planar camera, determining the position of the target face under each view angle based on the preview video stream, and determining a planar face image under each view angle based on the position of the target face in the preview video stream under each view angle.
8. The three-dimensional face reconstruction method according to claim 7, further comprising:
adjusting an acquisition area of the depth camera under each view angle based on the position, so that the depth camera can acquire the depth data of the target face in the acquisition area.
9. A three-dimensional face reconstruction device, characterized in that the three-dimensional face reconstruction device comprises:
the depth data acquisition module is used for acquiring depth data of the target face under a plurality of view angles;
the abnormal point judging module is used for judging whether the depth data under each view angle has local depth data abnormal points or not;
the supplementary acquisition module is used for re-acquiring depth data under the view angles corresponding to the local depth data abnormal points when the local depth data abnormal points exist, so as to update the depth data under the multiple view angles;
the model acquisition module is used for acquiring a three-dimensional model of the target face according to the depth data under the multiple view angles;
the abnormal point judging module judging whether the depth data under each view angle has local depth data abnormal points comprises:
performing point cloud matching on the depth data under each view angle and the planar face image acquired under the same view angle, and judging whether local depth data abnormal points exist in the depth data under each view angle or not based on the result of the point cloud matching;
The process of judging whether the local depth data abnormal points exist or not through the point cloud matching and the depth data acquisition process are performed simultaneously, the point cloud matching is performed on the plane face image of the second angle and the depth data of the second angle while the depth data of the target face is acquired at the third angle, and whether the local depth data abnormal points exist or not is judged according to the point cloud matching result of the second angle.
10. An adjustable camera device, characterized in that the adjustable camera device comprises a camera component and a processor;
the camera assembly comprises a depth camera and a plane camera, and the depth camera and the plane camera can rotate and translate based on control signals of the processor;
the depth camera is used for acquiring depth data of a target face under a plurality of view angles, and whether the depth data under each view angle has local depth data abnormal points or not is judged by the following modes:
performing point cloud matching on the depth data under each view angle and the planar face image acquired under the same view angle, and judging whether local depth data abnormal points exist in the depth data under each view angle or not based on the result of the point cloud matching;
The process of judging whether the local depth data abnormal points exist or not through the point cloud matching and the depth data acquisition process are performed simultaneously, the point cloud matching is performed on the plane face image of the second angle and the depth data of the second angle while the depth data of the target face is acquired at the third angle, and whether the local depth data abnormal points exist or not is judged according to the point cloud matching result of the second angle.
11. An electronic device comprising a processor, a memory and a bus, the processor being connected to the memory via the bus, the processor being operable to execute a program stored in the memory to perform the steps of the method of any of claims 1-8.
12. A computer readable storage medium, characterized in that the computer readable storage medium has stored therein computer program instructions which, when read and executed by a processor, perform the steps of the method of any of claims 1-8.
CN201811207981.1A 2018-10-16 2018-10-16 Three-dimensional face reconstruction method and device and storage medium thereof Active CN109377551B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811207981.1A CN109377551B (en) 2018-10-16 2018-10-16 Three-dimensional face reconstruction method and device and storage medium thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811207981.1A CN109377551B (en) 2018-10-16 2018-10-16 Three-dimensional face reconstruction method and device and storage medium thereof

Publications (2)

Publication Number Publication Date
CN109377551A CN109377551A (en) 2019-02-22
CN109377551B true CN109377551B (en) 2023-06-27

Family

ID=65400799

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811207981.1A Active CN109377551B (en) 2018-10-16 2018-10-16 Three-dimensional face reconstruction method and device and storage medium thereof

Country Status (1)

Country Link
CN (1) CN109377551B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109816784B (en) * 2019-02-25 2021-02-23 盾钰(上海)互联网科技有限公司 Method and system for three-dimensional reconstruction of human body and medium
CN110188604A (en) * 2019-04-18 2019-08-30 盎锐(上海)信息科技有限公司 Face identification method and device based on 2D and 3D image
CN110188616B (en) * 2019-05-05 2023-02-28 上海盎维信息技术有限公司 Space modeling method and device based on 2D and 3D images
CN110363858B (en) * 2019-06-18 2022-07-01 新拓三维技术(深圳)有限公司 Three-dimensional face reconstruction method and system
CN112446961A (en) * 2019-08-30 2021-03-05 中兴通讯股份有限公司 Scene reconstruction system and method
CN111127639A (en) * 2019-12-30 2020-05-08 深圳小佳科技有限公司 Cloud-based face 3D model construction method, storage medium and system
CN111199579B (en) * 2020-01-02 2023-01-24 腾讯科技(深圳)有限公司 Method, device, equipment and medium for building three-dimensional model of target object
CN111367485B (en) * 2020-03-16 2023-04-18 安博思华智能科技有限责任公司 Method, device, medium and electronic equipment for controlling combined multimedia blackboard
CN113538649B (en) * 2021-07-14 2022-09-16 深圳信息职业技术学院 Super-resolution three-dimensional texture reconstruction method, device and equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106780459A (en) * 2016-12-12 2017-05-31 华中科技大学 Three-dimensional point cloud data automatic registration method
CN106780618A (en) * 2016-11-24 2017-05-31 周超艳 Three-dimensional information acquisition method and device based on heterogeneous depth cameras
CN107767456A (en) * 2017-09-22 2018-03-06 福州大学 Object three-dimensional reconstruction method based on an RGB-D camera
CN108389260A (en) * 2018-03-19 2018-08-10 中国计量大学 Three-dimensional reconstruction method based on a Kinect sensor
CN108537876A (en) * 2018-03-05 2018-09-14 清华-伯克利深圳学院筹备办公室 Depth-camera-based three-dimensional reconstruction method, device, equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
weixin_30882895. Smart3D Tutorial Series, Part 4: "Case Study 1: Photo-Based 3D Reconstruction of Small Objects". CSDN, 2016. https://blog.csdn.net/weixin_30882895/article/details/96008372 *

Also Published As

Publication number Publication date
CN109377551A (en) 2019-02-22

Similar Documents

Publication Publication Date Title
CN109377551B (en) Three-dimensional face reconstruction method and device and storage medium thereof
JP5580164B2 (en) Optical information processing apparatus, optical information processing method, optical information processing system, and optical information processing program
CN111060023B (en) High-precision 3D information acquisition equipment and method
TWI555379B (en) An image calibrating, composing and depth rebuilding method of a panoramic fish-eye camera and a system thereof
CN111292364B (en) Method for rapidly matching images in three-dimensional model construction process
Bonfort et al. General specular surface triangulation
US8718326B2 (en) System and method for extracting three-dimensional coordinates
CA3068659A1 (en) Augmented reality displays with active alignment and corresponding methods
US20110249117A1 (en) Imaging device, distance measuring method, and non-transitory computer-readable recording medium storing a program
US20130335535A1 (en) Digital 3d camera using periodic illumination
JP2003130621A (en) Method and system for measuring three-dimensional shape
CN113034612B (en) Calibration device, method and depth camera
JP6580761B1 (en) Depth acquisition apparatus and method using polarization stereo camera
CN111028205B (en) Eye pupil positioning method and device based on binocular distance measurement
CN102997891A (en) Device and method for measuring scene depth
CN112254670B (en) 3D information acquisition equipment based on optical scanning and intelligent vision integration
CN112254680B (en) Multi freedom's intelligent vision 3D information acquisition equipment
CN111340959B (en) Three-dimensional model seamless texture mapping method based on histogram matching
JP2008275366A (en) Stereoscopic 3-d measurement system
Furferi et al. A RGB-D based instant body-scanning solution for compact box installation
CN112254638B (en) Intelligent visual 3D information acquisition equipment that every single move was adjusted
WO2019087253A1 (en) Stereo camera calibration method
JP5727969B2 (en) Position estimation apparatus, method, and program
JP3637416B2 (en) Three-dimensional measurement method, three-dimensional measurement system, image processing apparatus, and computer program
Li et al. Two-phase approach—Calibration and iris contour estimation—For gaze tracking of head-mounted eye camera

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant