CN109377551A - Three-dimensional face reconstruction method and device, and storage medium thereof - Google Patents

Three-dimensional face reconstruction method and device, and storage medium thereof

Info

Publication number
CN109377551A
CN109377551A (application CN201811207981.1A); granted as CN109377551B
Authority
CN
China
Prior art keywords
depth data
visual angle
point
point cloud
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811207981.1A
Other languages
Chinese (zh)
Other versions
CN109377551B (en)
Inventor
廖声洋 (Liao Shengyang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Megvii Technology Co Ltd
Original Assignee
Beijing Megvii Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Megvii Technology Co Ltd filed Critical Beijing Megvii Technology Co Ltd
Priority to CN201811207981.1A priority Critical patent/CN109377551B/en
Publication of CN109377551A publication Critical patent/CN109377551A/en
Application granted granted Critical
Publication of CN109377551B publication Critical patent/CN109377551B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T7/00 - Image analysis
    • G06T7/30 - Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 - Image registration using feature-based methods
    • G06T7/50 - Depth or shape recovery
    • G06T7/55 - Depth or shape recovery from multiple images
    • G06T7/593 - Depth or shape recovery from stereo images
    • G06T2200/00 - Indexing scheme for image data processing or generation, in general
    • G06T2200/08 - Indexing scheme involving all processing steps from image acquisition to 3D model generation
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10016 - Video; image sequence
    • G06T2207/10028 - Range image; depth image; 3D point clouds
    • G06T2207/30 - Subject of image; context of image processing
    • G06T2207/30196 - Human being; person
    • G06T2207/30201 - Face

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The present invention provides a three-dimensional face reconstruction method and device, and a storage medium thereof, relating to the technical field of image processing. The three-dimensional face reconstruction method includes: obtaining depth data of a target face at multiple viewing angles; judging whether the depth data at each viewing angle contains local depth-data anomaly points; if so, re-acquiring the depth data at the viewing angle corresponding to the anomaly points, so as to update the depth data at the multiple viewing angles; and obtaining a three-dimensional model of the target face according to the depth data at the multiple viewing angles. When local depth-data anomalies occur, the method performs supplementary acquisition of depth data with the depth camera, which improves the error-correcting capability of face modelling and yields a more accurate three-dimensional face model.

Description

Three-dimensional face reconstruction method and device, and storage medium thereof
Technical field
The present invention relates to the technical field of image processing, and in particular to a three-dimensional face reconstruction method and device, and a storage medium thereof.
Background technique
With the rapid development of computer equipment, networks and image processing technology, traditional naked-eye image identification has gradually been replaced by automatic, computer-based image recognition, which greatly improves the efficiency and accuracy of image recognition. Automatic three-dimensional face modelling by computer based on the depth data of a face is a common computer-vision task with many applications.
However, when existing three-dimensional face modelling approaches perform point-cloud matching between depth data and planar face images, local depth-data anomaly points frequently appear because of calculation mistakes, data acquisition errors and similar causes, and these anomaly points greatly reduce the accuracy of the three-dimensional face model. Moreover, the prior art must first scan the complete depth data of a face at different angles before point-cloud registration is carried out; it has no immediate feedback logic, local depth-data anomaly points easily arise, and the acquisition quality of the depth data is therefore difficult to control effectively.
Summary of the invention
In view of this, embodiments of the present invention aim to provide a three-dimensional face reconstruction method and device, and a storage medium thereof, so as to solve the above problems.
In a first aspect, an embodiment of the present invention provides a three-dimensional face reconstruction method, which includes: obtaining depth data of a target face at multiple viewing angles; judging whether the depth data at each viewing angle contains local depth-data anomaly points; if so, re-acquiring the depth data at the viewing angle corresponding to the anomaly points, so as to update the depth data at the multiple viewing angles; and obtaining a three-dimensional model of the target face according to the depth data at the multiple viewing angles.
With reference to the first aspect, the obtaining of depth data of a target face at multiple viewing angles includes: driving a depth camera to photograph the target face each time it rotates to a preset shooting angle, so as to obtain the depth data of the target face at the multiple viewing angles.
With reference to the first aspect, the judging of whether the depth data at each viewing angle contains local depth-data anomaly points includes: performing point-cloud matching between the depth data at each viewing angle and the planar face image acquired at the same viewing angle, and judging, based on the result of the point-cloud matching, whether the depth data at each viewing angle contains local depth-data anomaly points.
With reference to the first aspect, the performing of point-cloud matching between the depth data at each viewing angle and the planar face image acquired at the same viewing angle includes: obtaining a key-point set of the planar face image at each viewing angle by face key-point detection, and taking the key-point set as a target point cloud Pt and the point set of the depth data as a source point cloud Ps; determining, by coarse registration, an approximate rotation matrix R and translation matrix T in the point-cloud registration equation Pt = R*Ps + T; determining, based on the approximate rotation matrix R and translation matrix T, an accurate rotation matrix R and translation matrix T in the point-cloud registration equation by fine registration; and substituting the accurate rotation matrix R and translation matrix T into the point-cloud registration equation to obtain a transformation result.
With reference to the first aspect, the determining, by coarse registration, of the approximate rotation matrix R and translation matrix T in the point-cloud registration equation Pt = R*Ps + T includes: determining, by a four-point congruent set search strategy, the approximate rotation matrix R and translation matrix T that make the degree of overlap between the target point cloud Pt and the source point cloud Ps exceed a preset overlap threshold, where a point of the source point cloud Ps which, after transformation by the approximate rotation matrix R and translation matrix T, has a point of the target point cloud Pt within a tolerance range is a coincidence point, and the ratio of the coincidence points to the total number of points is the degree of overlap.
With reference to the first aspect, the determining, based on the approximate rotation matrix R and translation matrix T, of the accurate rotation matrix R and translation matrix T in the point-cloud registration equation includes: transforming the source point cloud Ps into the coordinate frame of the target point cloud Pt with the approximate rotation matrix R and translation matrix T, and determining two points of the source point cloud Ps and the target point cloud Pt whose distance is smaller than a corresponding-point threshold to be corresponding points Pt_i and Ps_i; and iteratively optimizing the rotation matrix R and translation matrix T based on the approximate rotation matrix R, the translation matrix T and the corresponding points Pt_i and Ps_i, so as to obtain the accurate rotation matrix R and translation matrix T.
With reference to the first aspect, the judging, based on the result of the point-cloud matching, of whether the depth data at each viewing angle contains local depth-data anomaly points includes: determining that the planar face image corresponding to a first viewing angle has been acquired; judging whether the degree of overlap between the point set of the depth data at the first viewing angle and the point set of the depth data at an adjacent viewing angle is greater than a preset adjacent overlap threshold; if so, determining that the depth data at the first viewing angle contains no local depth-data anomaly points; if not, determining that the depth data at the first viewing angle contains local depth-data anomaly points.
With reference to the first aspect, the three-dimensional face reconstruction method further includes: acquiring a preview video stream of the target face through a planar camera, determining the position of the target face at each viewing angle based on the preview video stream, and determining the planar face image at each viewing angle based on the position of the target face in the preview video stream at that viewing angle.
With reference to the first aspect, the three-dimensional face reconstruction method further includes: adjusting, based on the position, the acquisition area of the depth camera at each viewing angle, so that the depth camera can collect the depth data of the target face within the acquisition area.
In a second aspect, an embodiment of the present invention provides a three-dimensional face reconstruction device, which includes: a depth-data obtaining module, configured to obtain depth data of a target face at multiple viewing angles; an anomaly judgment module, configured to judge whether the depth data at each viewing angle contains local depth-data anomaly points; a supplementary acquisition module, configured to re-acquire, when local depth-data anomaly points exist, the depth data at the viewing angle corresponding to the anomaly points, so as to update the depth data at the multiple viewing angles; and a model obtaining module, configured to obtain a three-dimensional model of the target face according to the depth data at the multiple viewing angles.
In a third aspect, an embodiment of the present invention further provides an adjustable camera device, which includes a camera assembly and a processor; the camera assembly includes a depth camera and a planar camera, and the depth camera and the planar camera can be rotated and translated based on control signals from the processor.
In a fourth aspect, an embodiment of the present invention further provides an electronic device, which includes a processor, a memory and a bus; the processor is connected to the memory through the bus, and the processor can run a program stored in the memory to perform the steps of the method in any of the above aspects.
In a fifth aspect, an embodiment of the present invention further provides a computer-readable storage medium in which computer program instructions are stored; when the computer program instructions are read and executed by a processor, the steps of the method in any of the above aspects are performed.
The beneficial effects provided by the present invention are as follows:
The present invention provides a three-dimensional face reconstruction method and device, and a storage medium thereof. When local anomalies exist in the depth data, the method performs supplementary acquisition of the depth data at the anomaly points, so that immediate system feedback and supplementary acquisition of depth data take place as soon as local depth-data anomalies occur. This alleviates the tendency of depth data to develop local anomaly points and improves the acquisition quality of the depth data. The method then performs three-dimensional modelling of the target face based on the supplemented depth data, which improves the accuracy of the three-dimensional model of the target face obtained by the reconstruction.
Other features and advantages of the present invention will be set forth in the following description, and will in part become apparent from the description or be understood by implementing the embodiments of the present invention. The objectives and other advantages of the invention can be realized and obtained by the structures particularly pointed out in the written description, the claims and the accompanying drawings.
Detailed description of the invention
To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings required by the embodiments are briefly introduced below. It should be understood that the following drawings show only some embodiments of the present invention and should therefore not be regarded as limiting the scope; a person of ordinary skill in the art can also derive other related drawings from these drawings without creative effort.
Fig. 1 is a schematic flow chart of a three-dimensional face reconstruction method provided by the first embodiment of the present invention;
Fig. 2 is a schematic flow chart of the planar face image acquisition and acquisition-position determination steps provided by the first embodiment of the present invention;
Fig. 3 is a schematic flow chart of the position determination step provided by the first embodiment of the present invention;
Fig. 4 is a schematic flow chart of the steps for establishing a face key-point detection model provided by the first embodiment of the present invention;
Fig. 5 is a schematic flow chart of the point-cloud matching step provided by the first embodiment of the present invention;
Fig. 6 is a schematic flow chart of the local depth-data anomaly judgment step provided by the first embodiment of the present invention;
Fig. 7 is a schematic module diagram of a three-dimensional face reconstruction device provided by the second embodiment of the present invention;
Fig. 8 is a structural block diagram of an electronic device applicable to the embodiments of the present application, provided by the third embodiment of the present invention.
Reference numerals: 100 - three-dimensional face reconstruction device; 110 - depth-data obtaining module; 120 - anomaly judgment module; 130 - supplementary acquisition module; 140 - model obtaining module; 200 - electronic device; 201 - adjustable camera device; 202 - processor; 203 - memory; 204 - storage controller; 205 - display.
Specific embodiment
The technical solutions in the embodiments of the present invention will be described below clearly and completely with reference to the accompanying drawings. Apparently, the described embodiments are only some, rather than all, of the embodiments of the present invention. The components of the embodiments of the present invention, as generally described and illustrated in the drawings herein, can be arranged and designed in a variety of different configurations. The following detailed description of the embodiments provided in the drawings is therefore not intended to limit the scope of the claimed invention, but merely represents selected embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
It should also be noted that similar reference numerals and letters denote similar items in the following drawings; therefore, once an item is defined in one drawing, it does not need to be further defined and explained in subsequent drawings. Meanwhile, in the description of the present invention, the terms "first", "second" and the like are used only to distinguish the descriptions and are not to be understood as indicating or implying relative importance.
Terms that may appear in the embodiments of the present invention are first explained:
Depth camera: a camera used to collect three-dimensional depth data of an object. Existing depth cameras generally use time-of-flight (TOF) ranging or structured-light measurement. In TOF 3D imaging, light pulses are continuously emitted toward the target and the light returned from the object is received by a sensor; the object distance is obtained by detecting the round-trip flight time of the light pulses. The technique is essentially similar in principle to a 3D laser sensor, except that the laser sensor scans point by point while a TOF camera obtains the depth information of the whole image at once. The imaging process of a TOF camera also resembles ordinary machine-vision imaging in that both consist of a light source, optical components, a sensor, a control circuit and a processing circuit. Compared with binocular stereo systems, which likewise belong to non-intrusive three-dimensional measurement and have similar application areas, the TOF camera has a different 3D imaging mechanism: binocular measurement performs left-right stereo matching and then triangulation, whereas the TOF camera obtains the target distance directly from the emitted and reflected light. Structured-light measurement avoids the complex algorithm design of binocular ranging by replacing one camera with an infrared projector that actively projects a complicated speckle pattern, while the camera in the other, parallel position becomes an infrared camera that can clearly see all the speckles projected by the projector. Because the human eye cannot see the infrared speckles and their texture is extremely complex, this greatly benefits the matching algorithm, and depth information can be recovered with a very concise algorithm.
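The time-of-flight principle above reduces to a one-line relation between the measured round-trip time and the object distance. The sketch below is purely illustrative and is not taken from the patent; the function name is ours.

```python
# Illustrative sketch (not from the patent): converting a TOF round-trip time to distance.
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def tof_to_distance(round_trip_time_s: float) -> float:
    """Distance = c * t / 2, since the pulse travels to the object and back."""
    return SPEED_OF_LIGHT * round_trip_time_s / 2.0

print(tof_to_distance(6.67e-9))  # about 1 metre
```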
Point-cloud matching: finding the rigid (Euclidean) transform, i.e. the rotation-translation matrices, between two point clouds so that the source cloud is transformed into the same coordinate system as the target cloud, for example transforming the coordinates of key points in the corresponding depth data into the coordinate system of the key points of the planar face image based on its texture information. In reverse engineering, computer vision, cultural-relic digitization and similar fields, point clouds are often incomplete, rotated out of place or translated out of place, so partial point clouds must be matched in order to obtain a complete data model of the measured object. A suitable coordinate system has to be determined so that the point sets obtained from each viewing angle can be merged into a single complete point cloud under a unified coordinate system, which can then be processed conveniently and visually; this is point-cloud registration. Automatic registration of point clouds uses an algorithm or statistical rule to let the computer compute the misalignment between two point clouds so that they register automatically; its essence is to transform point-cloud data measured in different coordinate systems into one coordinate system so as to obtain a whole data model. The key of the problem is how to obtain the coordinate-transform parameters R (rotation matrix) and T (translation vector) such that the distance between the three-dimensional data measured at the two viewing angles is minimal after the transform. Current registration algorithms can be divided into global registration and local registration according to the process.
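As a minimal illustration of the relation just described, the sketch below applies a candidate (R, T) to a source cloud and scores the alignment; it assumes NumPy/SciPy arrays of shape (N, 3) and is our own sketch, not code from the patent.

```python
import numpy as np
from scipy.spatial import cKDTree

def apply_registration(source: np.ndarray, R: np.ndarray, T: np.ndarray) -> np.ndarray:
    """Apply the registration Pt = R * Ps + T to an (N, 3) source cloud Ps."""
    return source @ R.T + T

def mean_alignment_error(source, target, R, T) -> float:
    """Mean nearest-neighbour distance after transforming the source into the target frame."""
    distances, _ = cKDTree(target).query(apply_registration(source, R, T))
    return float(distances.mean())
```

A smaller mean error indicates a better choice of R and T, which is exactly what the registration steps discussed below try to achieve.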
First embodiment
Through the applicant's research it has been found that 3D face reconstruction is the task of rebuilding a 3D model of a face from one or more photographs. The traditional solution to this reconstruction problem is model fitting: key points are identified in several 2D planar face pictures, model fitting is performed on the basis of these key points, and the three-dimensional feature points on the 3D model are adjusted until they match the corresponding feature points in all of the 2D planar pictures, which yields the reconstructed 3D face model. The three-dimensional coordinates can also be obtained from the face depth data collected by a depth camera, so as to improve the accuracy of the 3D face reconstruction. However, owing to acquisition problems of the depth data or to problems in matching the depth data with the planar 2D pictures, local depth-data anomaly points often appear in the point-cloud-matched face data; and because 3D face reconstruction requires the complete scene to be scanned once before point-cloud registration is performed, such local anomaly points are not easy to correct, which greatly affects the quality of the reconstructed face model. To solve these problems, the first embodiment of the present invention provides a three-dimensional face reconstruction method, which can be executed by a computer, a smartphone, a cloud server or any other device capable of logical processing.
Please refer to Fig. 1, which is a schematic flow chart of a three-dimensional face reconstruction method provided by the first embodiment of the present invention. The specific steps of the three-dimensional face reconstruction method may be as follows:
Step S10: obtain depth data of a target face at multiple viewing angles.
In this embodiment the depth data can be obtained by a depth camera, which may be based on structured-light ranging, time-of-flight ranging or another ranging principle. The different angles in this step may vary in the horizontal direction, in the vertical direction, or in any direction on a sphere around the face, so as to increase the completeness of the face image samples and improve the accuracy of the subsequent face reconstruction model.
Step S20: judge whether the depth data at each viewing angle contains local depth-data anomaly points.
In this embodiment the local depth-data anomaly points can be judged from the result of point-cloud matching between the depth data at adjacent viewing angles. It should be understood that in other embodiments the judgment can also be made, in addition to the adjacent-view depth images, from the depth image and the two-dimensional face image acquired at the same viewing angle.
Step S30: if so, re-acquire the depth data at the viewing angle corresponding to the local depth-data anomaly points, so as to update the depth data at the multiple viewing angles.
The three-dimensional face reconstruction method provided by this embodiment can label the depth data acquired at each preset angle with that angle, thereby building a mapping table between the depth data and its acquisition angle; when local depth-data anomaly points exist at a certain angle, the depth camera is controlled, according to the mapping table, to return to the corresponding angle and re-collect the depth data, as sketched below.
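A minimal sketch of such an angle-to-depth mapping table follows; `capture_depth` stands in for the actual depth-camera driver call, which the patent does not name, and the angle range is an assumed example.

```python
import numpy as np

def capture_depth(angle_deg: int) -> np.ndarray:
    """Placeholder for the depth-camera driver; returns an (N, 3) point set for the view."""
    return np.random.rand(5000, 3)

preset_angles = range(-30, 31)                                  # assumed preset shooting angles
depth_by_angle = {a: capture_depth(a) for a in preset_angles}   # mapping table: angle -> depth data

def reacquire(anomalous_angles):
    """Drive the camera back only to the flagged angles and refresh their depth data."""
    for a in anomalous_angles:
        depth_by_angle[a] = capture_depth(a)
```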
Step S40: obtain a three-dimensional model of the target face according to the depth data at the multiple viewing angles.
Through steps S10-S40, this embodiment of the present invention performs supplementary acquisition of depth data at the local anomaly points whenever local anomalies exist in the depth data, so that immediate system feedback and supplementary acquisition of depth data occur as soon as local depth-data anomalies appear; this alleviates the tendency of depth data to develop local anomaly points and improves the acquisition quality of the depth data. The method then performs three-dimensional modelling of the target face based on the supplemented depth data, which improves the accuracy of the three-dimensional model of the target face obtained by the reconstruction.
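Steps S10-S40 can be summarized as the acquire / check / re-acquire / model loop sketched below; the callables are placeholders for the concrete implementations described in the rest of this embodiment, and the retry limit is our own assumption.

```python
def reconstruct_face(angles, capture, find_anomalous_views, build_model, max_retries=3):
    """Acquire depth data (S10), check it (S20), re-acquire flagged views (S30), model (S40)."""
    clouds = {a: capture(a) for a in angles}            # S10
    for _ in range(max_retries):
        bad_views = find_anomalous_views(clouds)        # S20
        if not bad_views:
            break
        for a in bad_views:                             # S30: supplementary acquisition
            clouds[a] = capture(a)
    return build_model(clouds)                          # S40
```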
As an optional embodiment, this embodiment may further include, before step S10, planar face image acquisition and acquisition-position determination steps. Please refer to Fig. 2, which is a schematic flow chart of the planar face image acquisition and acquisition-position determination steps provided by the first embodiment of the present invention. These steps may specifically be as follows:
Step S2: acquire a preview video stream of the target face through a planar camera, determine the position of the target face at each viewing angle based on the preview video stream, and determine the planar face image at each viewing angle based on the position of the target face in the preview video stream at that viewing angle.
In this embodiment, the preview video stream may be displayed to the user on a screen while it is being acquired, so that the user can adjust the face position; alternatively, the preview video stream may not be displayed, which simplifies the user's operation.
Step S4: based on the position, adjust the acquisition area of the depth camera at each viewing angle, so that the depth camera can collect the depth data of the target face within the acquisition area.
In this embodiment, adjusting the acquisition area of the depth camera may include adjusting basic camera parameters such as the viewing angle, the lens focal length and the aperture.
Optionally, the face position can be determined from the positions of the face key points detected in the planar face image. Specifically, please refer to Fig. 3; in this embodiment step S2 may include the following specific steps:
Step S2.1: load a face key-point detection model built on a neural network.
Step S2.2: obtain the preview image corresponding to the preview data frame of the face in the video stream at each viewing angle.
Step S2.3: determine the position of the face in the preview image with the face key-point detection model.
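One plausible way of turning the detected key points into a face position, for example a centre and bounding box in the preview image, is sketched below; the patent does not prescribe this particular formula, so treat it as an assumption.

```python
import numpy as np

def face_position(keypoints: np.ndarray):
    """Face centre and bounding box from an array of detected 2D key points, shape (K, 2)."""
    x_min, y_min = keypoints.min(axis=0)
    x_max, y_max = keypoints.max(axis=0)
    centre = keypoints.mean(axis=0)
    return centre, (x_min, y_min, x_max, y_max)
```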
It should be understood that, before the face key-point detection model is loaded in step S2.1, the model must first be established and trained on a face image database containing face key-point annotations. Please refer to Fig. 4, which is a schematic flow chart of the steps for establishing a face key-point detection model provided by the first embodiment of the present invention. The establishment steps of the face key-point detection model may be as follows:
Step S1.1: collect, or directly obtain, multiple face images.
Step S1.2: accurately annotate the face key points on the face images.
Step S1.3: divide the annotated face images into a training set and a verification set.
Step S1.4: train a neural network model on the training set while using the verification set to verify the intermediate results and adjust the training parameters in real time; stop training when the training precision and the verification precision reach their corresponding thresholds, thereby obtaining the face key-point detection model.
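The stop-when-both-precisions-reach-their-thresholds logic of step S1.4 can be expressed as a simple training loop. The sketch below uses placeholder train/validate functions and illustrative threshold values, since the patent specifies neither the network nor the thresholds.

```python
import random

def train_one_epoch() -> float:      # placeholder: one epoch of training, returns training precision
    return random.uniform(0.90, 1.00)

def validate() -> float:             # placeholder: evaluation on the verification set
    return random.uniform(0.90, 1.00)

TRAIN_THRESHOLD, VAL_THRESHOLD, MAX_EPOCHS = 0.98, 0.95, 100   # illustrative values only

for epoch in range(MAX_EPOCHS):
    train_precision = train_one_epoch()
    val_precision = validate()       # verification feedback used to tune the training parameters
    if train_precision >= TRAIN_THRESHOLD and val_precision >= VAL_THRESHOLD:
        break                        # stop once both precisions reach their thresholds
```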
Optionally, step S1.3 in this embodiment may also split off a test set while dividing the training set and the verification set, and then use the test set to test the face key-point detection model after step S1.4, so as to measure the performance and accuracy of the model.
For step S10, "obtaining depth data of a target face at multiple viewing angles" may specifically be: driving the depth camera to photograph the target face each time it rotates to a preset shooting angle, so as to obtain the depth data of the target face at the multiple viewing angles.
The preset shooting angles may take the maximum rotation angle of the depth camera as the preset starting angle, with a depth-data capture performed for every 1° of rotation. It should be understood that the angular interval between two successive depth-data captures can also be set to values other than 1°, according to the user's requirements or the depth-data acquisition requirements.
For step S20, the step of "judging whether the depth data at each viewing angle contains local depth-data anomaly points" may specifically be: performing point-cloud matching between the depth data at each viewing angle and the planar face image acquired at the same viewing angle, and judging, based on the result of the point-cloud matching, whether the depth data at each viewing angle contains local depth-data anomaly points.
As an optional embodiment, the specific steps of the above "performing point-cloud matching between the depth data at each viewing angle and the planar face image acquired at the same viewing angle" are shown in Fig. 5, which is a schematic flow chart of the point-cloud matching step provided by the first embodiment of the present invention. The point-cloud matching step may specifically be as follows:
Step S21: obtain the key-point set of the planar face image at each viewing angle by face key-point detection, take the key-point set as the target point cloud Pt, and take the point set of the depth data as the source point cloud Ps.
Step S22: determine, by coarse registration, the approximate rotation matrix R and translation matrix T in the point-cloud registration equation Pt = R*Ps + T.
In this embodiment, step S22 may specifically be: determine, by a four-point congruent set search strategy, the approximate rotation matrix R and translation matrix T of the point-cloud registration equation Pt = R*Ps + T that make the degree of overlap between the target point cloud Pt and the source point cloud Ps exceed a preset overlap threshold, where a point of the source point cloud Ps which, after transformation by the approximate rotation matrix R and translation matrix T, has a point of the target point cloud Pt within a tolerance range is a coincidence point, and the ratio of the coincidence points to the total number of points is the degree of overlap.
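The degree-of-overlap test used to accept a coarse (R, T) candidate can be computed as below. The tolerance and threshold values are placeholders, and the nearest-neighbour search via a k-d tree is our implementation choice rather than something the patent mandates.

```python
import numpy as np
from scipy.spatial import cKDTree

def overlap_degree(source, target, R, T, tolerance=1e-3):
    """Fraction of transformed source points that have a target point within `tolerance`."""
    moved = source @ R.T + T
    distances, _ = cKDTree(target).query(moved)
    return float(np.mean(distances <= tolerance))

def accept_coarse_candidate(source, target, R, T, tolerance, overlap_threshold):
    """Accept (R, T) only if the coincidence-point ratio exceeds the preset overlap threshold."""
    return overlap_degree(source, target, R, T, tolerance) > overlap_threshold
```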
Step S23: based on the approximate rotation matrix R and translation matrix T, determine the accurate rotation matrix R and translation matrix T in the point-cloud registration equation by fine registration.
In this embodiment, step S23 may specifically be: transform the source point cloud Ps into the coordinate frame of the target point cloud Pt with the approximate rotation matrix R and translation matrix T; determine two points of the source point cloud Ps and the target point cloud Pt whose distance is smaller than a corresponding-point threshold to be corresponding points Pt_i and Ps_i; and iteratively optimize the rotation matrix R and translation matrix T based on the approximate rotation matrix R, the translation matrix T and the corresponding points Pt_i and Ps_i, so as to obtain the accurate rotation matrix R and translation matrix T.
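Step S23 amounts to an iterative-closest-point style refinement. The sketch below is one standard way to realize it (SVD-based rigid-transform estimation over thresholded correspondences); the parameter values are assumptions, not values given in the patent.

```python
import numpy as np
from scipy.spatial import cKDTree

def refine_registration(source, target, R, T, corr_threshold=0.01, iterations=30):
    """Refine a coarse (R, T) for Pt = R * Ps + T using point-to-point ICP on (N, 3) clouds."""
    for _ in range(iterations):
        moved = source @ R.T + T
        distances, indices = cKDTree(target).query(moved)
        keep = distances < corr_threshold               # corresponding-point threshold
        if keep.sum() < 3:
            break
        src, tgt = source[keep], target[indices[keep]]
        src_centroid, tgt_centroid = src.mean(axis=0), tgt.mean(axis=0)
        H = (src - src_centroid).T @ (tgt - tgt_centroid)
        U, _, Vt = np.linalg.svd(H)
        R_new = Vt.T @ U.T
        if np.linalg.det(R_new) < 0:                    # guard against a reflection
            Vt[-1] *= -1
            R_new = Vt.T @ U.T
        R, T = R_new, tgt_centroid - R_new @ src_centroid
    return R, T
```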
Step S24: substitute the accurate rotation matrix R and translation matrix T into the point-cloud registration equation to obtain the transformation result.
In this embodiment, the point-cloud matching proceeds concurrently with the acquisition of the depth data, so as to speed up the feedback on local depth-data anomaly points. For example, while the depth camera is acquiring the depth data of the face at a third angle, the processing device can at the same time perform point-cloud matching between the planar face image and the depth data of the second angle and judge whether the matching result of the second angle contains local depth-data anomaly points, rather than discovering the anomaly points only after the point-cloud matching of the depth data of the entire face has been completed, by which time re-acquisition of the depth data and correction of the anomaly points would no longer be easy.
Further, the specific steps of "judging, based on the result of the point-cloud matching, whether the depth data at each viewing angle contains local depth-data anomaly points" in step S20 are shown in Fig. 6, which is a schematic flow chart of the local depth-data anomaly judgment step provided by the first embodiment of the present invention. The specific steps may be as follows:
Step S25: determine that the planar face image corresponding to a first viewing angle has been acquired.
The planar face image here is usually an RGB image.
Step S26: judge whether the degree of overlap between the point set of the depth data at the first viewing angle and the point set of the depth data at an adjacent viewing angle is greater than a preset adjacent overlap threshold.
Step S27: if so, determine that the depth data at the first viewing angle contains no local depth-data anomaly points.
Step S28: if not, determine that the depth data at the first viewing angle contains local depth-data anomaly points.
In steps S21 to S28, this embodiment performs point-cloud matching between the depth data at adjacent viewing angles and with the planar face image acquired at the same angle as the depth data, accurately determines whether the depth data acquired at each angle contains local depth-data anomaly points, and thereby improves the accuracy of the depth data and, in turn, the accuracy of the three-dimensional face modelling.
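Steps S26-S28 reduce to a threshold test on the overlap with neighbouring views, for example as below; the threshold value is an assumed placeholder, `overlap_degree` is the helper sketched earlier, and using the best-overlapping neighbour is our own design choice (a stricter variant could require every neighbour to exceed the threshold).

```python
def view_has_anomalies(neighbour_overlaps, adjacent_overlap_threshold=0.6):
    """Flag a view whose depth data overlaps too little with its adjacent views.

    `neighbour_overlaps` holds overlap degrees between this view's registered depth
    points and those of its neighbouring angles; 0.6 is an illustrative threshold,
    not a value stated in the patent.
    """
    return max(neighbour_overlaps, default=0.0) <= adjacent_overlap_threshold
```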
As an optional embodiment, after step S20 has judged that the depth data at a certain viewing angle contains local depth-data anomaly points, this embodiment may further judge the number of local anomaly points at that viewing angle, that is, judge whether the number of local depth-data anomaly points in the depth data at the first viewing angle exceeds a preset anomaly-quantity threshold. If the number of anomaly points is below the preset threshold, the subsequent step S30 is executed; if it is above the preset threshold, the depth data at that viewing angle is discarded, so as to ensure the accuracy of the depth data.
As an optional embodiment, after the step of "performing supplementary acquisition of depth data for the region of the local depth-data anomaly points" described in step S30, the supplementarily acquired depth data may again be judged for local anomaly points; that is, it is judged whether the number of local depth-data anomaly points in the supplementarily acquired depth data at that viewing angle exceeds a preset threshold, and if so, the supplementarily acquired depth data at that viewing angle is discarded. The preset threshold can be adjusted as the case may be and may be any value including zero. Optionally, in other embodiments, after the supplementarily acquired depth data at that viewing angle is discarded, the viewing angle can also be slightly adjusted and the supplementary acquisition of depth data performed again.
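Combining the two optional checks above (gating re-acquisition on the anomaly count, then re-checking the supplementary data) gives a small decision routine like the one below; both thresholds and the `recapture` callable are placeholders we introduce for illustration.

```python
def handle_anomalous_view(anomaly_count, view_angle, recapture,
                          max_anomalies_for_retry=50, max_anomalies_after_retry=0):
    """Return fresh depth data for the view, or None if its depth data should be discarded."""
    if anomaly_count >= max_anomalies_for_retry:
        return None                                   # too many anomalies: give up this view
    new_depth, new_count = recapture(view_angle)      # supplementary acquisition (step S30)
    if new_count > max_anomalies_after_retry:
        return None                                   # re-acquired data is rejected as well
    return new_depth
```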
For step S40, the three-dimensional model in this embodiment is obtained by performing point-cloud matching on the depth data at the multiple viewing angles and then carrying out surface fitting; the specific manner of the point-cloud matching can refer to steps S21-S24.
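Before surface fitting, the per-view depth data are brought into one common coordinate frame with the (R, T) found for each view, roughly as sketched below; the surface-fitting step itself (for example meshing the merged cloud) is outside this sketch.

```python
import numpy as np

def merge_views(clouds_by_angle, transforms_by_angle):
    """Merge every view's (N, 3) depth points into a single cloud in the common frame."""
    merged = []
    for angle, points in clouds_by_angle.items():
        R, T = transforms_by_angle[angle]             # registration result for this view
        merged.append(points @ R.T + T)
    return np.vstack(merged)                          # the merged cloud goes on to surface fitting
```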
As an optional embodiment, considering that the user may have aesthetic or other requirements on the three-dimensional face model, and that obtaining a model the user is satisfied with requires the user's own approval, this embodiment may further include, after step S40, the following steps: send the three-dimensional model to a display, so that the user returns model confirmation information according to what is displayed, the model confirmation information indicating whether the user is satisfied with the three-dimensional model; if so, save the three-dimensional model; if not, execute steps S10-S40 again.
Second embodiment
To cooperate with the three-dimensional face reconstruction method provided by the first embodiment of the present invention, the second embodiment of the present invention further provides a three-dimensional face reconstruction device 100.
Please refer to Fig. 7, which is a schematic module diagram of a three-dimensional face reconstruction device provided by the second embodiment of the present invention.
The three-dimensional face reconstruction device 100 includes a depth-data obtaining module 110, an anomaly judgment module 120, a supplementary acquisition module 130 and a model obtaining module 140.
The depth-data obtaining module 110 is configured to obtain depth data of a target face at multiple viewing angles.
The anomaly judgment module 120 is configured to judge whether the depth data at each viewing angle contains local depth-data anomaly points.
The supplementary acquisition module 130 is configured to re-acquire, when local depth-data anomaly points exist, the depth data at the viewing angle corresponding to the anomaly points, so as to update the depth data at the multiple viewing angles.
The model obtaining module 140 is configured to obtain a three-dimensional model of the target face according to the depth data at the multiple viewing angles.
As an optional embodiment, the three-dimensional face reconstruction device 100 in this embodiment may further include: a face position determination module, configured to acquire a preview video stream of the face through a planar camera, determine the position of the face at each viewing angle based on the preview video stream, and determine the planar face image at each viewing angle based on the position of the face in the preview video stream at that viewing angle; and an adjustment module, configured to adjust, based on the position, the acquisition area of the depth camera at each viewing angle, so that the depth camera can collect the depth data of the target face within the acquisition area.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process of the device described above may refer to the corresponding process in the foregoing method and will not be repeated here.
3rd embodiment
Please refer to Fig. 8, which is a structural block diagram of an electronic device applicable to the embodiments of the present application, provided by the third embodiment of the present invention. The electronic device 200 provided by this embodiment may include the three-dimensional face reconstruction device 100, an adjustable camera device 201 and a processor 202. Optionally, the electronic device 200 may further include a memory 203, a storage controller 204 and a display 205.
The adjustable camera device 201 includes a camera assembly and a processor; the camera assembly includes a depth camera and a planar camera, and the depth camera and the planar camera can be rotated and translated based on control signals from the processor of the adjustable camera device 201. It should be understood that the processor of the adjustable camera device 201 may also be the same processor as the processor 202.
The adjustable camera device 201, the processor 202, the memory 203 and the storage controller 204 are electrically connected to one another, directly or indirectly, so as to realize the transmission or interaction of data; for example, these elements can be electrically connected to one another through one or more communication buses or signal lines. The three-dimensional face reconstruction device 100 includes at least one software functional module that can be stored in the memory 203 in the form of software or firmware or solidified in the operating system (OS) of the three-dimensional face reconstruction device 100. The processor 202 is used to execute the executable modules stored in the memory 203, such as the software functional modules or computer programs included in the three-dimensional face reconstruction device 100.
The processor 202 may be an integrated-circuit chip with signal processing capability. The processor 202 may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP) and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and it can implement or execute the methods, steps and logical block diagrams disclosed in the embodiments of the present invention. The general-purpose processor may be a microprocessor, or the processor 202 may be any conventional processor or the like.
The memory 203 may be, but is not limited to, a random access memory (RAM), a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), and so on. The memory 203 is used to store a program, and the processor 202 executes the program after receiving an execution instruction. The method performed by the server defined by the flow disclosed in any of the foregoing embodiments of the present invention may be applied to the processor 202 or implemented by the processor 202.
The display 205 provides an interactive interface (for example a user interface) between the electronic device 200 and the user, or is used to display image data to the user, for example to show the user the generated three-dimensional face model. In this embodiment the display 205 may be a liquid-crystal display or a touch display. If it is a touch display, it may be a capacitive or resistive touch screen supporting single-point and multi-point touch operation; this means that the touch display can sense touch operations generated simultaneously at one or more positions on the touch display and hand the sensed touch operations over to the processor 202 for calculation and processing.
It can be understood that the structure shown in Fig. 8 is only schematic; the electronic device 200 may include more or fewer components than those shown in Fig. 8, or have a configuration different from that shown in Fig. 8. Each component shown in Fig. 8 can be implemented by hardware, software or a combination thereof.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process of the device described above may refer to the corresponding process in the foregoing method and will not be repeated here.
In conclusion the embodiment of the invention provides a kind of three-dimensional facial reconstruction method, device and its storage medium, it is described There are the depth datas that the partial-depth data abnormal point is carried out when abnormal of local data in depth data for three-dimensional facial reconstruction method Supplement acquisition, to carry out the benefit of instant system feedback and depth data when depth data occurs at partial-depth data abnormal Acquisition is filled, the problem of depth data is easy to appear at partial-depth data abnormal is improved, improves the acquisition quality of depth data; This method carries out the three-dimensional modeling of target face based on the depth data after supplement acquisition, improves three-dimensional facial reconstruction acquisition The accuracy of the threedimensional model of target face.
In the several embodiments provided by the present application, it should be understood that the disclosed device and method may also be implemented in other ways. The device embodiments described above are merely illustrative. For example, the flow charts and block diagrams in the drawings show the possible architectures, functions and operations of devices, methods and computer program products according to multiple embodiments of the present invention. In this regard, each block in a flow chart or block diagram may represent a module, a program segment or a part of code, which contains one or more executable instructions for implementing the specified logical function. It should also be noted that, in some alternative implementations, the functions marked in the blocks may occur in an order different from that marked in the drawings; for example, two consecutive blocks may in fact be executed substantially in parallel, and they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each block in the block diagrams and/or flow charts, and combinations of blocks therein, can be implemented by a dedicated hardware-based system that performs the specified functions or actions, or by a combination of dedicated hardware and computer instructions.
In addition, the functional modules in the embodiments of the present invention may be integrated together to form an independent part, each module may exist alone, or two or more modules may be integrated to form an independent part.
If the functions are implemented in the form of software functional modules and sold or used as independent products, they can be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or a part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for enabling a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk.
The above are only preferred embodiments of the present invention and are not intended to limit the present invention; for those skilled in the art, the present invention may have various modifications and changes. Any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present invention shall be included in the protection scope of the present invention.
The above are only specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any person familiar with the technical field can easily conceive of changes or substitutions within the technical scope disclosed by the present invention, which shall all be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
It should be noted that, in this document, relational terms such as "first" and "second" are used only to distinguish one entity or operation from another and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise" or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article or device including a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article or device. Without further limitation, an element defined by the phrase "including a ..." does not exclude the presence of other identical elements in the process, method, article or device including that element.

Claims (13)

1. A three-dimensional face reconstruction method, characterized in that the three-dimensional face reconstruction method comprises:
obtaining depth data of a target face at multiple viewing angles;
judging whether the depth data at each viewing angle contains local depth-data anomaly points;
if so, re-acquiring the depth data at the viewing angle corresponding to the local depth-data anomaly points, so as to update the depth data at the multiple viewing angles; and
obtaining a three-dimensional model of the target face according to the depth data at the multiple viewing angles.
2. The three-dimensional face reconstruction method according to claim 1, characterized in that the obtaining of depth data of a target face at multiple viewing angles comprises:
driving a depth camera to photograph the target face each time it rotates to a preset shooting angle, so as to obtain the depth data of the target face at the multiple viewing angles.
3. The three-dimensional face reconstruction method according to claim 1, characterized in that the judging of whether the depth data at each viewing angle contains local depth-data anomaly points comprises:
performing point-cloud matching between the depth data at each viewing angle and the planar face image acquired at the same viewing angle, and judging, based on the result of the point-cloud matching, whether the depth data at each viewing angle contains local depth-data anomaly points.
4. The three-dimensional face reconstruction method according to claim 3, characterized in that the performing of point-cloud matching between the depth data at each viewing angle and the planar face image acquired at the same viewing angle comprises:
obtaining a key-point set of the planar face image at each viewing angle by face key-point detection, and taking the key-point set as a target point cloud Pt and the point set of the depth data as a source point cloud Ps;
determining, by coarse registration, an approximate rotation matrix R and translation matrix T in the point-cloud registration equation Pt = R*Ps + T;
determining, based on the approximate rotation matrix R and translation matrix T, an accurate rotation matrix R and translation matrix T in the point-cloud registration equation by fine registration; and
substituting the accurate rotation matrix R and translation matrix T into the point-cloud registration equation to obtain a transformation result.
5. The three-dimensional face reconstruction method according to claim 4, characterized in that the determining, by coarse registration, of the approximate rotation matrix R and translation matrix T in the point-cloud registration equation Pt = R*Ps + T comprises:
determining, by a four-point congruent set search strategy, the approximate rotation matrix R and translation matrix T in the point-cloud registration equation Pt = R*Ps + T that make the degree of overlap between the target point cloud Pt and the source point cloud Ps exceed a preset overlap threshold, wherein a point of the source point cloud Ps which, after transformation by the approximate rotation matrix R and translation matrix T, has a point of the target point cloud Pt within a tolerance range is a coincidence point, and the ratio of the coincidence points to the total number of points is the degree of overlap.
6. The three-dimensional face reconstruction method according to claim 4, characterized in that the determining, based on the approximate rotation matrix R and translation matrix T, of the accurate rotation matrix R and translation matrix T in the point-cloud registration equation by fine registration comprises:
transforming the source point cloud Ps into the coordinate frame of the target point cloud Pt with the approximate rotation matrix R and translation matrix T, and determining two points of the source point cloud Ps and the target point cloud Pt whose distance is smaller than a corresponding-point threshold to be corresponding points Pt_i and Ps_i; and
iteratively optimizing the rotation matrix R and translation matrix T based on the approximate rotation matrix R, the translation matrix T and the corresponding points Pt_i and Ps_i, so as to obtain the accurate rotation matrix R and translation matrix T.
7. The three-dimensional face reconstruction method according to claim 3, characterized in that judging, based on the result of the point cloud matching, whether the depth data under each view angle contains abnormal points of partial depth data comprises:
determining that the planar face image corresponding to a first view angle has been acquired;
judging whether the overlap ratio between the point set of the depth data under the first view angle and the point set of the depth data under an adjacent view angle is greater than a preset adjacent overlap threshold;
if so, determining that the depth data under the first view angle contains no abnormal points of partial depth data; if not, determining that the depth data under the first view angle contains abnormal points of partial depth data.
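A minimal sketch of this adjacent-view check follows; the adjacency relation, the threshold value and the `registration` callable returning an overlap ratio are all assumptions, not part of the claim.

```python
def first_view_is_abnormal(depth_by_view, first_view, adjacent_view,
                           registration, adjacent_threshold=0.6):
    """Flag the depth data of the first view as containing abnormal points when
    its point set overlaps too little with that of an adjacent view.

    `registration` is an assumed callable returning (R, T, overlap_ratio)
    between two depth point clouds, e.g. built from the sketches above."""
    _, _, overlap = registration(depth_by_view[first_view],
                                 depth_by_view[adjacent_view])
    # Overlap above the preset adjacent threshold means no abnormal points.
    return overlap <= adjacent_threshold
```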
8. The three-dimensional face reconstruction method according to claim 1, characterized in that the three-dimensional face reconstruction method further comprises:
acquiring a preview video stream of the target face through a planar camera, determining the position of the target face under each view angle based on the preview video stream, and determining the planar face image under each view angle based on the position of the target face in the preview video stream under that view angle.
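One way to obtain the face position from a preview stream is sketched below with OpenCV's bundled Haar cascade; the patent does not prescribe a particular detector, so the detector choice and the parameter values are assumptions.

```python
import cv2

def face_positions_from_preview(camera_index=0):
    """Yield (frame, face_centre) pairs from the planar camera's preview stream."""
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(camera_index)
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            faces = detector.detectMultiScale(gray, scaleFactor=1.2, minNeighbors=5)
            if len(faces) > 0:
                x, y, w, h = faces[0]
                yield frame, (x + w // 2, y + h // 2)   # face centre in pixels
    finally:
        cap.release()
```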
9. The three-dimensional face reconstruction method according to claim 8, characterized in that the three-dimensional face reconstruction method further comprises:
adjusting, based on the position, the capture region of the depth camera under each view angle, so that the depth camera can acquire the depth data of the target face within the capture region.
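As one illustration of turning that position into a capture-region adjustment, the sketch below maps the face centre's offset from the preview image centre to pan/tilt corrections; the fields of view and the pan/tilt interface are assumptions, not taken from the patent.

```python
def aim_depth_camera(face_centre, frame_size, fov_deg=(60.0, 45.0)):
    """Convert the face centre in the preview frame into pan/tilt corrections
    for the depth camera (positive pan turns right, positive tilt up)."""
    (x, y), (w, h) = face_centre, frame_size
    pan = (x - w / 2) / w * fov_deg[0]    # assumed horizontal field of view
    tilt = -(y - h / 2) / h * fov_deg[1]  # assumed vertical field of view
    return pan, tilt

# A controller would then rotate/translate the depth camera by (pan, tilt) so
# that the target face stays inside its capture region.
```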
10. A three-dimensional face reconstruction device, characterized in that the three-dimensional face reconstruction device comprises:
a depth data acquisition module, configured to acquire depth data of a target face under a plurality of view angles;
an abnormality judgment module, configured to judge whether the depth data under each view angle contains abnormal points of partial depth data;
a supplementary acquisition module, configured to, when abnormal points of partial depth data exist, reacquire the depth data under the view angle corresponding to the abnormal points of partial depth data, so as to update the depth data under the plurality of view angles;
a model acquisition module, configured to obtain a three-dimensional model of the target face according to the depth data under the plurality of view angles.
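The four modules of this apparatus claim mirror the method steps; a minimal, purely illustrative composition (all class and method names are hypothetical, not taken from the patent) might look like this:

```python
class FaceReconstructor:
    """Illustrative composition of the four modules of claim 10."""

    def __init__(self, acquire, is_abnormal, reacquire, build_model):
        self.acquire = acquire          # depth data acquisition module
        self.is_abnormal = is_abnormal  # abnormality judgment module
        self.reacquire = reacquire      # supplementary acquisition module
        self.build_model = build_model  # model acquisition module

    def reconstruct(self, view_angles):
        depth = {v: self.acquire(v) for v in view_angles}
        for v in view_angles:
            if self.is_abnormal(depth[v]):
                depth[v] = self.reacquire(v)   # update the multi-view depth data
        return self.build_model(depth)
```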
11. An adjustable camera device, characterized in that the adjustable camera device comprises a camera assembly and a processor;
the camera assembly comprises a depth camera and a planar camera, and the depth camera and the planar camera can be rotated and translated based on control signals from the processor.
12. An electronic device, characterized in that the electronic device comprises a processor, a memory and a bus, the processor being connected to the memory through the bus, and the processor being able to run a program stored in the memory to perform the steps of the method according to any one of claims 1-9.
13. A computer-readable storage medium, characterized in that computer program instructions are stored in the computer-readable storage medium, and when the computer program instructions are read and run by a processor, the steps of the method according to any one of claims 1-9 are performed.
CN201811207981.1A 2018-10-16 2018-10-16 Three-dimensional face reconstruction method and device and storage medium thereof Active CN109377551B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811207981.1A CN109377551B (en) 2018-10-16 2018-10-16 Three-dimensional face reconstruction method and device and storage medium thereof

Publications (2)

Publication Number Publication Date
CN109377551A true CN109377551A (en) 2019-02-22
CN109377551B CN109377551B (en) 2023-06-27

Family

ID=65400799

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811207981.1A Active CN109377551B (en) 2018-10-16 2018-10-16 Three-dimensional face reconstruction method and device and storage medium thereof

Country Status (1)

Country Link
CN (1) CN109377551B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106780618A (en) * 2016-11-24 2017-05-31 周超艳 3 D information obtaining method and its device based on isomery depth camera
CN106780459A (en) * 2016-12-12 2017-05-31 华中科技大学 A kind of three dimensional point cloud autoegistration method
CN107767456A (en) * 2017-09-22 2018-03-06 福州大学 A kind of object dimensional method for reconstructing based on RGB D cameras
CN108537876A (en) * 2018-03-05 2018-09-14 清华-伯克利深圳学院筹备办公室 Three-dimensional rebuilding method, device, equipment based on depth camera and storage medium
CN108389260A (en) * 2018-03-19 2018-08-10 中国计量大学 A kind of three-dimensional rebuilding method based on Kinect sensor

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WEIXIN_30882895: "《CSDN》", 8 December 2016, https://blog.csdn.net/weixin_30882895/article/details/96008372 *

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109816784A (en) * 2019-02-25 2019-05-28 盾钰(上海)互联网科技有限公司 The method and system and medium of three-dimensionalreconstruction human body
CN110188604A (en) * 2019-04-18 2019-08-30 盎锐(上海)信息科技有限公司 Face identification method and device based on 2D and 3D image
CN110188616A (en) * 2019-05-05 2019-08-30 盎锐(上海)信息科技有限公司 Space modeling method and device based on 2D and 3D image
CN110188616B (en) * 2019-05-05 2023-02-28 上海盎维信息技术有限公司 Space modeling method and device based on 2D and 3D images
CN110363858A (en) * 2019-06-18 2019-10-22 新拓三维技术(深圳)有限公司 A kind of three-dimensional facial reconstruction method and system
CN110363858B (en) * 2019-06-18 2022-07-01 新拓三维技术(深圳)有限公司 Three-dimensional face reconstruction method and system
WO2021037257A1 (en) * 2019-08-30 2021-03-04 中兴通讯股份有限公司 Scene reconstruction system and method
CN112634427A (en) * 2019-09-24 2021-04-09 中国移动通信有限公司研究院 Three-dimensional modeling method, three-dimensional modeling device, network equipment and computer-readable storage medium
CN111127639A (en) * 2019-12-30 2020-05-08 深圳小佳科技有限公司 Cloud-based face 3D model construction method, storage medium and system
WO2021135627A1 (en) * 2020-01-02 2021-07-08 腾讯科技(深圳)有限公司 Method for constructing three-dimensional model of target object, and related apparatus
CN111199579B (en) * 2020-01-02 2023-01-24 腾讯科技(深圳)有限公司 Method, device, equipment and medium for building three-dimensional model of target object
CN111199579A (en) * 2020-01-02 2020-05-26 腾讯科技(深圳)有限公司 Method, device, equipment and medium for building three-dimensional model of target object
US12014461B2 (en) 2020-01-02 2024-06-18 Tencent Technology (Shenzhen) Company Limited Method for constructing three-dimensional model of target object and related apparatus
CN111367485A (en) * 2020-03-16 2020-07-03 安博思华智能科技有限责任公司 Method, device, medium and electronic equipment for controlling combined multimedia blackboard
CN111367485B (en) * 2020-03-16 2023-04-18 安博思华智能科技有限责任公司 Method, device, medium and electronic equipment for controlling combined multimedia blackboard
CN113538649A (en) * 2021-07-14 2021-10-22 深圳信息职业技术学院 Super-resolution three-dimensional texture reconstruction method, device and equipment

Also Published As

Publication number Publication date
CN109377551B (en) 2023-06-27

Similar Documents

Publication Publication Date Title
CN109377551A (en) A kind of three-dimensional facial reconstruction method, device and its storage medium
US11126016B2 (en) Method and device for determining parameters for spectacle fitting
RU2668404C2 (en) Device for recording images in three-dimensional scale, method for formation of 3d-image and method for producing device for recording images in three dimensional scale
US9265414B2 (en) Methods and systems for measuring interpupillary distance
CN104173054B (en) Measuring method and measuring device for height of human body based on binocular vision technique
JP4226550B2 (en) Three-dimensional measurement data automatic alignment apparatus and method using optical markers
EP3101624A1 (en) Image processing method and image processing device
CN104154898B (en) A kind of initiative range measurement method and system
CN108389212A (en) To measure the method and computer-readable media of foot size
CN107290854A (en) Virtual implementing helmet interpupillary distance optimizes the method and device of display with the depth of field
CN105354825B (en) The intelligent apparatus of reading matter position and its application in automatic identification read-write scene
CN103959012A (en) Position and orientation determination in 6-dof
CN107003744B (en) Viewpoint determines method, apparatus and electronic equipment
CN105654547B (en) Three-dimensional rebuilding method
JP2003130621A (en) Method and system for measuring three-dimensional shape
CN108830906A (en) A kind of camera parameters automatic calibration method based on virtual Binocular Vision Principle
CN105354822B (en) The intelligent apparatus of read-write element position and application in automatic identification read-write scene
CN105335699B (en) Read-write scene is read and write intelligent identification and the application thereof of element three-dimensional coordinate
CN103229036A (en) Method of determining at least one refraction characteristic of an ophthalmic lens
CN109974659A (en) A kind of embedded range-measurement system based on binocular machine vision
CN110440747A (en) It is assessed using the distance of multiple camera apparatus
TW201017092A (en) Three-dimensional model reconstruction method and system thereof
Langmann Wide area 2D/3D imaging: development, analysis and applications
CN109785375A (en) Distance detection method and device based on 3D modeling
Li et al. Two-phase approach—Calibration and iris contour estimation—For gaze tracking of head-mounted eye camera

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant