CN113610971A - Fine-grained three-dimensional model construction method and device and electronic equipment - Google Patents


Info

Publication number
CN113610971A
CN113610971A (application CN202111069651.2A)
Authority
CN
China
Prior art keywords
model
dimensional
point cloud
fine
grained
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111069651.2A
Other languages
Chinese (zh)
Inventor
颜雪军
程海敬
王春茂
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd filed Critical Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN202111069651.2A
Publication of CN113610971A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20081: Training; Learning
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/30: Subject of image; Context of image processing
    • G06T2207/30196: Human being; Person
    • G06T2207/30201: Face

Abstract

Embodiments of the present application provide a fine-grained three-dimensional model construction method and apparatus, and an electronic device. The method includes: acquiring an original point cloud of an object to be modeled; generating a three-dimensional deformation model based on the original point cloud; and shifting points on at least one patch of the three-dimensional deformation model according to the offset of each point on the surface of the three-dimensional deformation model relative to the points in the original point cloud, to obtain a fine-grained three-dimensional model whose granularity is higher than that of the three-dimensional deformation model. A lower-granularity three-dimensional deformation model can thus be generated from the original point cloud, and its granularity is raised by offsetting points on its patches, yielding a fine-grained three-dimensional model of higher granularity. Detail information that the three-dimensional deformation model cannot represent is recovered during the offsetting, so the precision of the constructed three-dimensional model can be effectively improved.

Description

Fine-grained three-dimensional model construction method and device and electronic equipment
Technical Field
The application relates to the technical field of machine vision, in particular to a fine-grained three-dimensional model construction method and device and electronic equipment.
Background
The three-dimensional information of an object contains more information than its two-dimensional texture information, such as the object's geometric information, and can therefore represent the object more accurately. Objects can thus be identified using three-dimensional information; for example, face recognition can be performed using three-dimensional face information to improve recognition accuracy.
Three-dimensional information of an object is often represented in the form of a three-dimensional deformation model. To generate such a model, a three-dimensional point cloud of the object may be generated from the object's depth image, and point cloud matching may be performed on that point cloud with reference to a specific topology, converting the unordered three-dimensional point cloud into a three-dimensional deformation model having the specific topology.
However, the referenced topology limits the granularity of the generated three-dimensional deformation model, which therefore struggles to reflect the object's detail information, so the generated three-dimensional deformation model has low precision.
Disclosure of Invention
The embodiment of the application aims to provide a fine-grained three-dimensional model construction method and device and electronic equipment so as to improve the accuracy of a generated three-dimensional model. The specific technical scheme is as follows:
in a first aspect of embodiments of the present application, a fine-grained three-dimensional model building method is provided, where the method includes:
acquiring an original point cloud of an object to be modeled;
generating a three-dimensional deformation model based on the original point cloud;
and shifting points on at least one patch of the three-dimensional deformation model based on the offset of each point on the surface of the three-dimensional deformation model relative to each point in the original point cloud to obtain a fine-grained three-dimensional model, wherein the granularity of the fine-grained three-dimensional model is higher than that of the three-dimensional deformation model.
In one possible embodiment, the generating a three-dimensional deformation model based on the original point cloud comprises:
extracting point cloud characteristics of the original point cloud;
and inputting the point cloud characteristics into a point cloud reconstruction model obtained by pre-training to obtain a three-dimensional deformation model output by the point cloud reconstruction model.
In a possible embodiment, the point cloud reconstruction model is trained in advance by:
obtaining a sample fine-grained three-dimensional model with granularity higher than a preset upper-limit granularity threshold;
registering the sample fine-grained three-dimensional model to obtain a sample fine-grained three-dimensional deformation model;
carrying out down-sampling on the sample fine-grained three-dimensional deformation model according to a preset down-sampling mode to obtain a sample coarse-grained three-dimensional deformation model with granularity lower than a preset lower-limit granularity threshold;
and training to obtain a point cloud reconstruction model according to the point cloud characteristics of the sample coarse-grained three-dimensional deformation model, the preset down-sampling mode and the sample fine-grained three-dimensional deformation model.
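The training-data preparation described above (register a fine-grained sample, then down-sample it by a preset mode into a coarse counterpart) can be sketched as follows. This is an illustrative reconstruction only, not the patent's implementation: the fixed index mask standing in for the "preset down-sampling mode" and the vertex counts are assumptions.

```python
import numpy as np

def preset_downsample(fine_vertices: np.ndarray, keep_idx: np.ndarray) -> np.ndarray:
    """Down-sample a registered fine-grained deformation model with a fixed
    ('preset') index mask, so every training sample is reduced the same way."""
    return fine_vertices[keep_idx]

# Hypothetical sample: a registered fine-grained model with 10,000 vertices.
rng = np.random.default_rng(0)
fine_model = rng.normal(size=(10_000, 3))

# The preset mode here keeps every 10th vertex, giving the coarse-grained model.
keep_idx = np.arange(0, 10_000, 10)
coarse_model = preset_downsample(fine_model, keep_idx)

print(coarse_model.shape)  # (1000, 3)
```

Because the mask is fixed, the mapping between coarse and fine vertices is known for every sample, which is what lets a reconstruction model be trained to invert it.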
In one possible embodiment, after extracting the point cloud features of the original point cloud, the method further comprises:
inputting the original point cloud into a pre-trained offset map generation model to obtain an offset map output by the offset map generation model, wherein the offset map is used for representing the offset of each point on the surface of the three-dimensional deformation model relative to each point in the original point cloud;
shifting points on at least one patch of the three-dimensional deformation model based on the offset of the three-dimensional deformation model relative to the original point cloud to obtain a fine-grained three-dimensional model, comprising:
and shifting points on at least one patch of the three-dimensional deformation model based on the offset represented by the shift diagram to obtain a fine-grained three-dimensional model.
In a possible embodiment, the offset map generation model is trained in advance by:
scanning the sample object by using scanning equipment with the precision higher than a preset precision threshold value to obtain a true value three-dimensional model;
carrying out point cloud registration on the true value three-dimensional model to generate a sample three-dimensional deformation model;
calculating the offset of each point on the surface of the sample three-dimensional deformation model relative to each point in the true value three-dimensional model to obtain a sample offset map, wherein the resolution of the sample offset map is higher than a preset resolution threshold;
and training to obtain an offset map generation model according to the sample offset map, the sample three-dimensional deformation model and the true value three-dimensional model.
In a possible embodiment, the object to be modeled is a human face of a person to be identified;
the method further comprises the following steps:
and carrying out face recognition on the fine-grained three-dimensional model, and determining the identity of the person to be recognized.
In a possible embodiment, the object to be modeled is a human face of a person;
and the shifting of points on at least one patch of the three-dimensional deformation model based on the offset of each point on the surface of the three-dimensional deformation model relative to each point in the original point cloud to obtain a fine-grained three-dimensional model comprises:
and offsetting points on at least one patch of the three-dimensional deformation model based on the offset of each point on the surface of the three-dimensional deformation model relative to the detail texture point in the original point cloud to obtain a fine-grained three-dimensional model, wherein the detail texture point is a point in the original point cloud used for representing the facial texture of the human face.
In a second aspect of the embodiments of the present application, a face recognition method is provided, where the method includes:
acquiring point cloud of the face of a person to be identified as original point cloud;
generating a three-dimensional deformation model based on the original point cloud;
shifting points on at least one patch of the three-dimensional deformation model based on the offset of each point on the surface of the three-dimensional deformation model relative to each point in the original point cloud to obtain a fine-grained three-dimensional model, wherein the granularity of the fine-grained three-dimensional model is higher than that of the three-dimensional deformation model;
determining a target three-dimensional model matched with the fine-grained three-dimensional model from preset three-dimensional models corresponding to a plurality of candidate persons;
and determining the person to be identified as a candidate person corresponding to the target three-dimensional model.
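The matching step in the recognition flow above can be sketched as a nearest-neighbour comparison between vertex sets. The symmetric mean-distance criterion below is an illustrative choice, not the patent's actual matching rule, and the toy "candidate" models are fabricated for the example.

```python
import numpy as np

def mean_nn_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Mean distance from each point of `a` to its nearest point in `b`."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # (|a|, |b|)
    return float(d.min(axis=1).mean())

def match_candidate(query: np.ndarray, candidates: list) -> int:
    """Return the index of the preset candidate model closest to the query."""
    scores = [0.5 * (mean_nn_distance(query, c) + mean_nn_distance(c, query))
              for c in candidates]
    return int(np.argmin(scores))

rng = np.random.default_rng(1)
person_a = rng.normal(size=(200, 3))
person_b = rng.normal(size=(200, 3)) + 5.0               # well-separated model
query = person_a + rng.normal(scale=0.01, size=(200, 3))  # noisy scan of A

print(match_candidate(query, [person_a, person_b]))  # 0
```

In practice the comparison would be done on learned features or after rigid alignment; the point here is only the argmin-over-candidates structure of the second aspect.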
In a third aspect of embodiments of the present application, there is provided a fine-grained three-dimensional model building apparatus, including:
the point cloud acquisition module is used for acquiring an original point cloud of an object to be modeled;
the point cloud reconstruction module is used for generating a three-dimensional deformation model based on the original point cloud;
and the shifting module is used for shifting points on at least one patch of the three-dimensional deformation model based on the offset of each point on the surface of the three-dimensional deformation model relative to each point in the original point cloud to obtain a fine-grained three-dimensional model, and the granularity of the fine-grained three-dimensional model is higher than that of the three-dimensional deformation model.
In one possible embodiment, the point cloud reconstruction module generates a three-dimensional deformation model based on the original point cloud, including:
extracting point cloud characteristics of the original point cloud;
and inputting the point cloud characteristics into a point cloud reconstruction model obtained by pre-training to obtain a three-dimensional deformation model output by the point cloud reconstruction model.
In a possible embodiment, the apparatus further includes a first model training module, configured to train a point cloud reconstruction model according to the following manner:
obtaining a sample fine-grained three-dimensional model with granularity higher than a preset upper-limit granularity threshold;
registering the sample fine-grained three-dimensional model to obtain a sample fine-grained three-dimensional deformation model;
carrying out down-sampling on the sample fine-grained three-dimensional deformation model according to a preset down-sampling mode to obtain a sample coarse-grained three-dimensional deformation model with granularity lower than a preset lower-limit granularity threshold;
and training to obtain a point cloud reconstruction model according to the point cloud characteristics of the sample coarse-grained three-dimensional deformation model, the preset down-sampling mode and the sample fine-grained three-dimensional deformation model.
In a possible embodiment, the shifting module is further configured to input the original point cloud into a pre-trained offset map generation model to obtain an offset map output by the offset map generation model, where the offset map is used to represent the offset of each point on the surface of the three-dimensional deformation model relative to each point in the original point cloud;
the shifting module shifts points on at least one patch of the three-dimensional deformation model based on the offset of the three-dimensional deformation model relative to the original point cloud to obtain a fine-grained three-dimensional model, and the shifting module comprises:
and shifting points on at least one patch of the three-dimensional deformation model based on the offset represented by the shift diagram to obtain a fine-grained three-dimensional model.
In a possible embodiment, the apparatus further includes a second model training module, configured to train the offset map generation model according to the following manner:
scanning the sample object by using scanning equipment with the precision higher than a preset precision threshold value to obtain a true value three-dimensional model;
carrying out point cloud registration on the true value three-dimensional model to generate a sample three-dimensional deformation model;
calculating the offset of each point on the surface of the sample three-dimensional deformation model relative to each point in the true value three-dimensional model to obtain a sample offset map, wherein the resolution of the sample offset map is higher than a preset resolution threshold;
and training to obtain an offset map generation model according to the sample offset map, the sample three-dimensional deformation model and the true value three-dimensional model.
In a possible embodiment, the object to be modeled is a human face of a person to be identified;
the device also comprises a face recognition module which is used for carrying out face recognition on the fine-grained three-dimensional model and determining the identity of the person to be recognized.
In a possible embodiment, the object to be modeled is a human face of a person;
the shifting module is specifically configured to shift points on at least one patch of the three-dimensional deformation model based on offsets of points on the surface of the three-dimensional deformation model relative to detail texture points in the original point cloud to obtain a fine-grained three-dimensional model, where the detail texture points are points in the original point cloud used for representing facial textures of the human face.
In a fourth aspect of embodiments of the present application, there is provided an electronic device, including:
a memory for storing a computer program;
a processor adapted to perform the method steps of any of the above first aspects when executing a program stored in the memory.
In a fifth aspect of embodiments of the present application, there is provided a face recognition device, including: a scanning unit, a reconstruction unit and an identification unit;
the scanning unit is used for scanning the face of a person to be identified to obtain an original point cloud;
the reconstruction unit is used for generating a three-dimensional deformation model based on the original point cloud; shifting points on at least one surface patch of the three-dimensional deformation model based on the offset of each point on the surface of the three-dimensional deformation model relative to each point in the original point cloud to obtain a fine-grained three-dimensional model, wherein the granularity of the fine-grained three-dimensional model is higher than that of the three-dimensional deformation model;
the identification unit is used for determining a target three-dimensional model matched with the fine-grained three-dimensional model from preset three-dimensional models corresponding to a plurality of candidate persons.
In a sixth aspect of embodiments of the present application, a computer-readable storage medium is provided, in which a computer program is stored, which, when being executed by a processor, realizes the method steps of any one of the above-mentioned first aspects.
The embodiment of the application has the following beneficial effects:
the fine-grained three-dimensional model construction method and device and the electronic equipment are provided by the embodiment of the application. The three-dimensional deformation model with lower granularity can be generated based on the original point cloud, and the granularity of the three-dimensional deformation model is improved by offsetting the points on the surface patch of the three-dimensional deformation model, so that the fine-granularity three-dimensional model with higher granularity is obtained. And because the deviation is carried out according to the deviation amount of each point in the original point cloud of each point object on the surface of the three-dimensional deformation model during the deviation, the detail information which cannot be represented by the three-dimensional deformation model can be recovered in the deviation process, so that the fine-grained three-dimensional model constructed by the method is more accurate compared with the three-dimensional deformation model directly generated based on the original point cloud, namely the method can effectively improve the accuracy of the constructed three-dimensional model.
Of course, not all advantages described above need to be achieved at the same time in the practice of any one product or method of the present application.
Drawings
To illustrate the embodiments of the present application or the prior-art technical solutions more clearly, the drawings needed in their description are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and a person skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flow chart of a method for constructing a three-dimensional deformation model according to an embodiment of the present application;
fig. 2 is a schematic flow chart of a point cloud reconstruction model training method provided in the embodiment of the present application;
fig. 3 is a schematic flow chart of an offset map generation model training method provided in an embodiment of the present application;
fig. 4 is a schematic flow chart of a face recognition method according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of a three-dimensional deformation model building apparatus according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of a face recognition device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only a part, not all, of the embodiments of the present application. All other embodiments that a person of ordinary skill in the art can derive from this description fall within the scope of the present disclosure.
To describe the fine-grained three-dimensional model construction method provided in the embodiments of the present application more clearly, one possible application scenario is described below by way of example. It can be understood that this is only one possible scenario; in other embodiments the method may also be applied to other scenarios, and the following example does not limit the present invention in any way.
The accuracy of face recognition based on two-dimensional face images is low under large poses, occlusion, poor illumination, and similar conditions. To improve recognition accuracy in these situations, three-dimensional face recognition can be performed based on three-dimensional information of the face.
To obtain three-dimensional information of the face, a depth camera can be used to collect a depth image of the face, and an unordered three-dimensional point cloud is generated from the depth image, each point of which represents the spatial coordinates of a point on the face.
It is understood that although the faces of different persons are different, the geometric structures of the five sense organs of different faces are the same, and can be represented by a uniform topological structure, namely a three-dimensional deformation model. Therefore, in order to better represent the human face, point cloud registration can be performed on the three-dimensional point cloud according to the unified topological structure, so as to convert the three-dimensional point cloud into a three-dimensional point cloud with the unified topological structure, which is hereinafter referred to as a three-dimensional deformation model.
However, due to the uniform topological structure, the granularity of the generated three-dimensional deformation model is often limited, and it is difficult to accurately reflect the detail information of the human face. For example, one patch of the three-dimensional deformation model may be a triangular plane defined by three of its vertices, while the corresponding region of the actual face may be curved; three vertices can hardly represent that curved surface effectively. Because of this granularity limitation, the three-dimensional deformation model cannot represent detail information such as wrinkles and moles.
Based on this, an embodiment of the present application provides a fine-grained three-dimensional model building method, which may be applied to any electronic device with a three-dimensional deformation model building capability, and referring to fig. 1, fig. 1 is a schematic flow diagram of the three-dimensional deformation model building method provided in the embodiment of the present application, and includes:
s101, obtaining an original point cloud of an object to be modeled.
And S102, generating a three-dimensional deformation model based on the original point cloud.
S103, shifting points on at least one patch of the three-dimensional deformation model based on the offset of each point on the surface of the three-dimensional deformation model relative to each point in the original point cloud to obtain a fine-grained three-dimensional model.
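Steps S101–S103 can be sketched end-to-end as below. Everything here is an illustrative stand-in, not the patent's implementation: the "deformation model" is faked as a subsample of the point cloud, and `predict_offsets` would in practice be the offset-map generation model described later in this document.

```python
import numpy as np

def build_fine_grained_model(raw_cloud, fit_deformation_model, predict_offsets):
    """S101-S103: point cloud -> deformation model -> offsets -> fine model."""
    deform = fit_deformation_model(raw_cloud)      # S102: coarse, fixed topology
    offsets = predict_offsets(deform, raw_cloud)   # per-point surface offsets
    return deform + offsets                        # S103: shift surface points

# Stand-in components (assumptions, not the patent's models):
fit = lambda cloud: cloud[::4]                               # "fit" by subsampling
predict = lambda deform, cloud: 0.05 * np.ones_like(deform)  # constant offset

raw = np.random.default_rng(2).normal(size=(400, 3))  # S101: acquired point cloud
fine = build_fine_grained_model(raw, fit, predict)
print(fine.shape)  # (100, 3)
```

The point of the sketch is the data flow: the offsets are computed against the original cloud and applied on top of the coarse model, rather than refitting the model itself.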
With this embodiment, a lower-granularity three-dimensional deformation model can be generated based on the original point cloud, and its granularity is improved by shifting points on its patches, yielding a fine-grained three-dimensional model of higher granularity. Because the shifting is performed according to the offset of each point on the surface of the three-dimensional deformation model relative to each point in the original point cloud, detail information that the three-dimensional deformation model cannot represent is recovered during the shifting. The constructed fine-grained three-dimensional model is therefore more accurate than a three-dimensional deformation model generated directly from the original point cloud; that is, this method can effectively improve the accuracy of the constructed three-dimensional model.
In S101, the object to be modeled may be different types of objects according to different embodiments, and may be, for example, a person, a vehicle, a pet, or the like. The original point cloud is a disordered point cloud used for representing the spatial distribution of each point on the object to be modeled.
The original point cloud can be generated based on the depth image of the object to be modeled, or can be acquired based on other point cloud acquisition modes, which is not limited in this application. And, the original point cloud may be a point cloud that has undergone point cloud preprocessing, which may include point cloud key point estimation, ICP registration, and the like.
The point cloud preprocessing used to obtain the original point cloud can be implemented with any point cloud preprocessing method; since how to perform point cloud preprocessing is not a focus of this application, it is not described in detail here.
In S102, a point cloud feature of the original point cloud may be extracted, and the point cloud feature may be input to a point cloud reconstruction model obtained by pre-training, so as to obtain a three-dimensional deformation model output by the point cloud reconstruction model. The point cloud feature can be regarded as a low-dimensional characterization manner of the original point cloud, wherein the low-dimensional means that the dimension of the dimension is lower than that of the original point cloud. Illustratively, the point cloud features may be represented in the form of one-dimensional feature vectors.
The point cloud reconstruction model may be a model obtained by pre-training and used for mapping point cloud features to a three-dimensional deformation model, and how to train the point cloud reconstruction model will be described in detail below, and will not be described herein again.
The point cloud features may be extracted using any model with feature extraction capability, whether a network model obtained by deep learning or an algorithm model obtained by conventional machine learning; illustratively, the model may be a PointNet network model, a PointNet++ network model, or a PATs network model.
It can be understood that the input lengths of some models are fixed. For convenience of description, assume the model used to extract point cloud features requires the input point cloud to contain N points, while the number of points in the original point cloud (hereinafter M) may differ by application scenario; M may or may not equal N. When M is not equal to N, the original point cloud cannot be directly input to the model to extract its point cloud features.
Based on this, in a possible embodiment, when M is smaller than N, the original point cloud may be upsampled to obtain an upsampled point cloud including N points, and the upsampled point cloud is input to the model to obtain a point cloud feature output by the model as the point cloud feature of the original point cloud.
When M is larger than N, the original point cloud can be subjected to down-sampling to obtain a down-sampling point cloud containing N points, and the down-sampling point cloud is input into the model to obtain point cloud characteristics output by the model and used as the point cloud characteristics of the original point cloud.
By adopting the embodiment, the point cloud characteristics of the original point clouds with different points can be extracted by using the same model, so that the applicability of the scheme is improved.
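The resampling rule above (M < N: up-sample; M > N: down-sample; otherwise pass through) might look like the sketch below. The random-repeat up-sampling and random-subset down-sampling are illustrative choices only; the patent does not specify the sampling strategy.

```python
import numpy as np

def resample_to_n(cloud: np.ndarray, n: int, seed: int = 0) -> np.ndarray:
    """Return a point cloud with exactly n points, as required by a
    fixed-input-length feature extraction model."""
    rng = np.random.default_rng(seed)
    m = cloud.shape[0]
    if m < n:   # up-sample: append randomly repeated points
        extra = rng.integers(0, m, size=n - m)
        return np.concatenate([cloud, cloud[extra]], axis=0)
    if m > n:   # down-sample: keep a random subset
        keep = rng.choice(m, size=n, replace=False)
        return cloud[keep]
    return cloud

small = np.zeros((300, 3))    # M < N
large = np.zeros((5000, 3))   # M > N
print(resample_to_n(small, 1024).shape, resample_to_n(large, 1024).shape)
# (1024, 3) (1024, 3)
```

Either branch yields an N-point cloud, so a single pre-trained feature extractor can serve clouds of any original size.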
In S103, as analyzed above, limited by the granularity of the three-dimensional deformation model, part of the detail of the object to be modeled is lost when the three-dimensional deformation model is generated, so each point on the surface of the three-dimensional deformation model has a certain offset relative to the corresponding point in the original point cloud. How the offset is obtained is described in detail below and not repeated here.
When shifting, vertices of the three-dimensional deformation model may be shifted in addition to points on its patches; that is, only points on patches may be shifted, or both vertices and points on patches may be shifted. When a point on a patch is shifted, the offset should include a component along the patch normal, i.e., after shifting, the point no longer lies in the plane of the original patch. It will be appreciated that after at least one point on a patch is offset, a new vertex is added, and the patch is decomposed into several sub-patches around that new vertex. The granularity (the numbers of vertices and patches) of the fine-grained three-dimensional model will therefore be higher than that of the three-dimensional deformation model.
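Shifting a point on a triangular patch along the patch normal, and the resulting decomposition into sub-patches, can be sketched with the geometry helper below. This is an illustration of the idea, not the patent's code; shifting the centroid specifically is an arbitrary choice for the example.

```python
import numpy as np

def offset_patch_centroid(v0, v1, v2, offset: float):
    """Shift the centroid of triangle (v0, v1, v2) along the patch normal,
    then split the patch into three sub-patches sharing the new vertex."""
    normal = np.cross(v1 - v0, v2 - v0)
    normal = normal / np.linalg.norm(normal)
    new_vertex = (v0 + v1 + v2) / 3.0 + offset * normal
    sub_patches = [(v0, v1, new_vertex), (v1, v2, new_vertex), (v2, v0, new_vertex)]
    return new_vertex, sub_patches

# Unit right triangle in the z = 0 plane; its normal is the z axis, so an
# offset of 0.2 lifts the new vertex out of the original patch plane.
v0, v1, v2 = np.zeros(3), np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
p, subs = offset_patch_centroid(v0, v1, v2, 0.2)
print(len(subs))  # 3 sub-patches replace the one original patch
```

One flat patch becomes three, and the shared vertex sits off the original plane, which is exactly how the offset raises the model's granularity.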
In a possible embodiment, the object to be modeled is a human face of a person, and in the embodiment, to sufficiently improve the accuracy of the generated fine-grained three-dimensional model, the point on at least one patch of the three-dimensional deformation model may be shifted based on the offset of each point on the surface of the three-dimensional deformation model relative to each fine texture point in the original point cloud, so as to obtain the fine-grained three-dimensional model, where the fine texture point is a point in the original point cloud used for representing the facial texture of the human face.
Facial textures include, but are not limited to, wrinkles, scars, moles, bumps, rashes, and the like. With this embodiment, the points on the patches of the three-dimensional deformation model can be shifted so that the facial texture of the human face is presented in the generated fine-grained three-dimensional model. The fine-grained three-dimensional model thus presents more information about the person, further improving the accuracy of the generated fine-grained three-dimensional model.
The representation of the offsets may differ according to the application scenario. Illustratively, in one possible embodiment, the offsets may be represented in the form of a UV offset map. The UV offset map is a two-dimensional image in which the pixel value of any pixel (u, v) represents the offset of the point with UV coordinate (u, v) in the three-dimensional deformation model. When a point in the three-dimensional deformation model is shifted, if the UV coordinate of the point to be shifted coincides with the pixel coordinate of a pixel in the offset map, the point can be shifted according to the pixel value of that pixel; if the UV coordinate of the point to be shifted does not coincide with the pixel coordinate of any pixel in the offset map, interpolation can be performed according to the pixel values of the neighborhood pixels in the offset map, and the point can be shifted according to the interpolation result.
The UV offset map can represent a continuous coordinate offset over the surface of the three-dimensional deformation model, so interpolation at any granularity can be performed on the patches of the three-dimensional deformation model to obtain a three-dimensional model of higher granularity.
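The pixel-or-interpolation lookup described above can be sketched as a bilinear interpolation of the offset map; for brevity the map below stores a single scalar offset per pixel, whereas a real UV offset map would typically store a three-component displacement:

```python
import numpy as np

def sample_offset(offset_map, u, v):
    """Bilinearly interpolate the offset map at continuous UV coordinate
    (u, v) in [0, 1] x [0, 1]. If (u, v) lands exactly on a pixel, this
    degenerates to reading that pixel's value; otherwise the four
    neighborhood pixels are interpolated."""
    h, w = offset_map.shape[:2]
    x, y = u * (w - 1), v * (h - 1)
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    top = (1 - fx) * offset_map[y0, x0] + fx * offset_map[y0, x1]
    bot = (1 - fx) * offset_map[y1, x0] + fx * offset_map[y1, x1]
    return (1 - fy) * top + fy * bot

# tiny 2x2 offset map, just for illustration
offset_map = np.array([[0.0, 1.0],
                       [2.0, 3.0]])
```

Because the interpolation is defined for every continuous (u, v), the map effectively gives a continuous offset field over the surface, which is what allows patch interpolation at arbitrary granularity.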
The offset map may be generated by an offset map generation model. Illustratively, the original point cloud may be input to an offset map generation model obtained by pre-training, to obtain the offset map output by the offset map generation model.
After the fine-grained three-dimensional model is obtained, it can be processed differently according to the application scenario. Illustratively, in some application scenarios the fine-grained three-dimensional model can be used for image rendering to generate a two-dimensional image of the object to be modeled. In other application scenarios, if the object to be modeled is the face of a person to be identified, face recognition may also be performed on the fine-grained three-dimensional model to determine the identity of the person to be identified.
The process of face recognition may be as follows: the fine-grained three-dimensional model is input into a pre-trained feature coding model to obtain the feature code of the fine-grained three-dimensional model; the feature code of the fine-grained three-dimensional model is matched against the feature codes corresponding to a plurality of candidate persons; the candidate person whose corresponding feature code matches the feature code of the fine-grained three-dimensional model is determined; and the person to be recognized is recognized as that candidate person. The feature code corresponding to each candidate person may be extracted from a two-dimensional face image of the candidate person, or from a three-dimensional face model of the candidate person.
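The feature-code matching described above can be sketched as follows; the cosine-similarity criterion, the encoding dimension, and the threshold are illustrative assumptions, not the patent's prescribed choices:

```python
import numpy as np

def match_person(query_code, candidate_codes, threshold=0.8):
    """Match the feature code of the fine-grained 3D model against the
    feature codes of candidate persons. Return the id of the best-matching
    candidate, or None if no similarity exceeds the threshold."""
    best_id, best_sim = None, threshold
    for cid, code in candidate_codes.items():
        sim = np.dot(query_code, code) / (
            np.linalg.norm(query_code) * np.linalg.norm(code))
        if sim > best_sim:
            best_id, best_sim = cid, sim
    return best_id

# hypothetical 2-D feature codes for two candidate persons
candidates = {"A": np.array([1.0, 0.0]),
              "B": np.array([0.0, 1.0])}
```

A query code close to candidate A's code is matched to A; a code far from every candidate yields no match, so an unknown face is not forced onto the nearest candidate.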
To more clearly describe the fine-grained three-dimensional model construction method provided in the embodiment of the present application, how the point cloud reconstruction model is obtained by training is described below. Referring to fig. 2, fig. 2 is a schematic flow diagram of the point cloud reconstruction model training method provided in the embodiment of the present application, which may include:
S201, obtaining a sample fine-grained three-dimensional model with a granularity higher than a preset upper-limit granularity threshold.
The sample fine-grained three-dimensional model may be a real fine-grained point cloud generated from scanning data obtained by scanning a sample object with high-precision scanning equipment.
S202, registering the sample fine-grained three-dimensional model to obtain the sample fine-grained three-dimensional deformation model.
Taking the sample fine-grained three-dimensional model being the real fine-grained point cloud as an example, the real fine-grained point cloud can be registered using algorithms such as NICP (non-rigid iterative closest point) to obtain a corresponding three-dimensional deformation model, which is used as the sample fine-grained three-dimensional deformation model.
S203, down-sampling the sample fine-granularity three-dimensional deformation model according to a preset down-sampling mode to obtain a sample coarse-granularity three-dimensional deformation model with granularity lower than a preset lower-limit granularity threshold value.
The down-sampling process can be regarded as a process of deleting vertices from the three-dimensional deformation model. In order to record the sampling process, the connection relationship of each intermediate mesh and the vertex deletion relationship between adjacent point clouds can be recorded, and the barycentric coordinates of each deleted vertex with respect to its nearest triangular patch can be recorded, so that the vertices to be newly added can be restored through the barycentric coordinates during up-sampling reconstruction of the mesh. The down-sampling may use quadric error minimization, or other down-sampling methods may be used, which is not limited in this embodiment.
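Recording the barycentric coordinates of a deleted vertex with respect to its nearest triangular patch, and later restoring the vertex from them, can be sketched as follows (for brevity the vertex below already lies in the patch plane; a real implementation would first project it onto that plane):

```python
import numpy as np

def barycentric(p, v0, v1, v2):
    """Barycentric coordinates of point p w.r.t. triangle (v0, v1, v2),
    obtained by solving p - v0 = b1*(v1 - v0) + b2*(v2 - v0) in the
    least-squares sense."""
    t = np.column_stack((v1 - v0, v2 - v0))
    b1, b2 = np.linalg.lstsq(t, p - v0, rcond=None)[0]
    return np.array([1.0 - b1 - b2, b1, b2])

def restore(bary, v0, v1, v2):
    """Restore a deleted vertex from its recorded barycentric coordinates."""
    return bary[0] * v0 + bary[1] * v1 + bary[2] * v2

v0 = np.array([0.0, 0.0, 0.0])
v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([0.0, 1.0, 0.0])
deleted = np.array([0.25, 0.25, 0.0])   # vertex removed during down-sampling
bary = barycentric(deleted, v0, v1, v2) # recorded alongside the mesh
```

The recorded coordinates reproduce the deleted vertex exactly as long as the reference patch is kept, which is what makes the down-sampling invertible for training.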
And S204, training to obtain a point cloud reconstruction model according to the point cloud characteristics of the sample coarse-grained three-dimensional deformation model, a preset down-sampling mode and the sample fine-grained three-dimensional deformation model.
The input of the trained point cloud reconstruction model is a point cloud feature, and the point cloud reconstruction model can reconstruct a fine-grained point cloud based on the learned mapping between point cloud features and fine-grained point clouds, i.e., reconstruct a fine-grained three-dimensional deformation model from the point cloud feature.
To more clearly explain the fine-grained three-dimensional model construction method provided in the embodiment of the present application, how the offset map generation model is obtained by training is described below. Referring to fig. 3, fig. 3 is a schematic flow diagram of the offset map generation model training method provided in the embodiment of the present application, which may include:
S301, scanning the sample object with a scanning device whose precision is higher than a preset precision threshold to obtain a true-value three-dimensional model.
Taking the sample object being a face as an example, the true-value three-dimensional model may be obtained by scanning with a high-precision face scanner, stereo vision, or other scanning devices.
And S302, carrying out point cloud registration on the true value three-dimensional model to generate a sample three-dimensional deformation model.
The true-value three-dimensional model can be registered to a three-dimensional deformation model using a point cloud registration method such as NICP (non-rigid iterative closest point), so as to obtain the sample three-dimensional deformation model.
And S303, calculating the offset of each point on the surface of the sample three-dimensional deformation model relative to each point in the true three-dimensional model to obtain a sample offset map.
Wherein the resolution of the sample offset map is higher than a preset resolution threshold. This may be realized by performing UV unwrapping on the sample three-dimensional deformation model using CG software to obtain a UV map of the sample three-dimensional deformation model, calculating the registration error between the sample three-dimensional deformation model and the true-value three-dimensional model, and obtaining the sample offset map based on the registration error and the UV map.
And S304, training to obtain the offset map generation model according to the sample offset map, the sample three-dimensional deformation model and the true-value three-dimensional model.
In the process of training the offset map generation model, the sample three-dimensional deformation model and the true-value three-dimensional model are input into an initial model to obtain a predicted offset map output by the initial model, and the model parameters of the initial model are adjusted according to the difference between the predicted offset map and the sample offset map to obtain the offset map generation model.
In some application scenarios, due to limitations of the acquisition equipment precision, the acquisition environment and other factors, the sample fine-grained three-dimensional model and the true-value three-dimensional model may not be obtainable. In these application scenarios, an unsupervised training mode may be adopted to train the point cloud reconstruction model and the offset map generation model.
The method includes: obtaining a sample point cloud of a sample object and extracting the point cloud features of the sample point cloud; inputting the point cloud features of the sample point cloud to a first initial model to obtain a three-dimensional deformation model output by the first initial model; inputting the sample point cloud to a second initial model to obtain an offset map output by the second initial model; and offsetting points on at least one patch of the three-dimensional deformation model output by the first initial model based on the offsets represented by the offset map to obtain a fine-grained three-dimensional model.
One or more of the following losses are calculated according to the sample point cloud, the three-dimensional deformation model and the fine-grained three-dimensional model; the model parameters of the first initial model are adjusted according to the calculated losses to obtain the point cloud reconstruction model, and the model parameters of the second initial model are adjusted according to the calculated losses to obtain the offset map generation model:
Chamfer distance loss: the chamfer distance between each point of the sample point cloud and the corresponding vertex in the three-dimensional deformation model is calculated. If the distance is smaller than a preset threshold, the distance is taken as the chamfer distance loss, which is used to adjust the model parameters of the first initial model; if the distance is larger than the threshold, the point in the sample point cloud is regarded as a noise point, the calculated distance is not used as the chamfer distance loss, and the model parameters are not adjusted according to it.
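The thresholded chamfer distance loss described above can be sketched as follows; the one-directional form (each sample point to its nearest model vertex) follows the per-point description in the text, and the threshold and the concrete point values are illustrative:

```python
import numpy as np

def chamfer_loss(sample_pts, model_verts, threshold=0.5):
    """Average nearest-vertex distance over the sample point cloud.
    Points whose nearest distance exceeds the threshold are treated as
    noise and excluded from the loss."""
    # pairwise distances, shape (num_sample_points, num_model_vertices)
    d = np.linalg.norm(sample_pts[:, None, :] - model_verts[None, :, :], axis=-1)
    nearest = d.min(axis=1)
    inliers = nearest[nearest < threshold]
    return inliers.mean() if inliers.size else 0.0

pts = np.array([[0.0, 0.0, 0.0],
                [0.1, 0.0, 0.0],
                [5.0, 5.0, 5.0]])   # the last point acts as noise
verts = np.array([[0.0, 0.0, 0.0],
                  [1.0, 0.0, 0.0]])
```

Here the far-away point is excluded by the threshold, so only the two inlier distances (0 and 0.1) contribute to the loss.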
Offset distance loss: for each point in the sample point cloud, the distance to the patch in the three-dimensional deformation model with the minimum average distance over its patch vertices is calculated. If the distance is smaller than a preset threshold, the coordinate position of the point in the offset map is calculated according to the projection coordinate of the point on the patch, the predicted offset value is obtained, and the difference between the predicted value and the point-to-patch distance is calculated as the offset distance loss, which is used to adjust the model parameters of the second initial model.
If the distance between a point in the sample point cloud and its nearest patch is larger than the preset threshold, or the projection coordinate lies outside the patch, the point does not participate in the calculation of the offset distance loss. If the distance is greater than the preset threshold, the point is regarded as a noise point, which prevents the offset map from predicting an excessive value. If the projection coordinate is outside the patch, the calculated offset coordinate does not match the physical meaning of the offset map and therefore does not participate in the loss calculation.
Encoding feature consistency loss: the obtained fine-grained three-dimensional point cloud is input into a point cloud encoding network to obtain the feature code of the fine-grained three-dimensional point cloud, and the difference between the feature code of the fine-grained three-dimensional point cloud and the feature code of the sample point cloud is calculated as the encoding feature consistency loss, which can be used to adjust the model parameters of the first initial model and/or the second initial model. Adjusting the model parameters of the first initial model and/or the second initial model with the encoding feature consistency loss ensures that, when a fine-grained three-dimensional model is constructed with the resulting point cloud reconstruction model and offset map generation model, the constructed fine-grained three-dimensional model carries the same or similar information as the original point cloud, reducing or even completely eliminating the information loss in the construction process.
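The encoding feature consistency loss can be sketched as follows; the stand-in encoder below is purely illustrative (an actual system would use a point cloud encoding network, e.g. a PointNet-style model):

```python
import numpy as np

def encoding_consistency_loss(encode, fine_pts, sample_pts):
    """Difference between the feature code of the reconstructed fine-grained
    point cloud and the feature code of the original sample point cloud."""
    return np.linalg.norm(encode(fine_pts) - encode(sample_pts))

def toy_encode(pts):
    """Stand-in 'feature code': per-axis mean and standard deviation.
    A hypothetical placeholder for a learned point cloud encoder."""
    return np.concatenate([pts.mean(axis=0), pts.std(axis=0)])

# a synthetic 'sample point cloud' of 100 points
a = np.random.default_rng(0).normal(size=(100, 3))
```

If the reconstructed cloud encodes to the same features as the sample cloud, the loss is zero; any drift in the encoded information produces a positive loss that pushes the two models to preserve it.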
In other possible embodiments, other losses may be adopted, and the embodiment does not limit this.
Based on the above fine-grained three-dimensional model construction method, the embodiment of the present application also provides a face recognition method, which can be applied to any electronic equipment with face recognition capability, including but not limited to access control, airport security inspection, and the like. The face recognition method provided by the embodiment of the present application may be as shown in fig. 4 and includes:
S401, obtaining a point cloud of the face of the person to be recognized as the original point cloud.
The point cloud of the face of the person to be identified can be obtained by scanning with an electronic device having scanning capability, or the original point cloud can be constructed from images of the face of the person to be recognized captured from a plurality of different angles.
The original point cloud may be obtained by scanning performed by the execution subject of the face recognition method provided by the present application, or may be obtained by scanning by a device other than the execution subject and then sent to the execution subject.
For example, assuming the face recognition method provided by the present application is applied to a security inspection apparatus with face scanning capability, the original point cloud can be obtained by the apparatus itself scanning the face of the person to be recognized. If the face recognition method provided by the present application is applied to a server in communication connection with a scanner, the original point cloud can be obtained by the scanner scanning the face of the person to be recognized and then sent to the server.
For the description of the original point cloud, reference may be made to the related description of S101, which is not described herein again.
S402, generating a three-dimensional deformation model based on the original point cloud.
The step is the same as the step S102, and reference may be made to the related description about S102, which is not repeated herein.
And S403, offsetting points on at least one patch of the three-dimensional deformation model based on the offset of each point on the surface of the three-dimensional deformation model relative to each point in the original point cloud to obtain a fine-grained three-dimensional model.
This step is the same as S103, and reference may be made to the related description about S103, which is not described herein again.
S404, determining a target three-dimensional model matched with the fine-grained three-dimensional model from the preset three-dimensional models corresponding to the candidate persons.
The preset three-dimensional model corresponding to each candidate person is a three-dimensional model obtained by scanning and modeling the face of that candidate person. The manner of determining whether a preset three-dimensional model matches the fine-grained three-dimensional model may differ according to the application scenario. Illustratively, the encoding features of the preset three-dimensional model and of the fine-grained three-dimensional model may be extracted respectively, and the difference between the extracted encoding features calculated; if the difference is smaller than a preset difference threshold, the preset three-dimensional model is determined to match the fine-grained three-dimensional model, and otherwise it is determined not to match. Alternatively, the similarity between the preset three-dimensional model and the fine-grained three-dimensional model may be calculated according to a preset similarity algorithm; if the calculated similarity is greater than a preset similarity threshold, the preset three-dimensional model is determined to match the fine-grained three-dimensional model, and otherwise it is determined not to match.
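The first matching criterion above (encoding-feature difference below a threshold) can be sketched as follows; the feature values and the threshold are hypothetical:

```python
import numpy as np

def is_match(code_preset, code_fine, diff_threshold=0.1):
    """A preset three-dimensional model matches the fine-grained model when
    the difference between their encoding features is below the threshold."""
    return bool(np.linalg.norm(code_preset - code_fine) < diff_threshold)

# hypothetical encoding feature of one candidate's preset model
code_a = np.array([0.2, 0.5, 0.3])
```

The same structure applies to the similarity-based alternative, with the comparison direction reversed (similarity above a threshold instead of difference below one).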
S405, determining the person to be identified as a candidate person corresponding to the target three-dimensional model.
For example, assuming the candidate person corresponding to the target three-dimensional model is person A, it may be determined that the person to be identified is person A. It can be understood that the faces of different persons differ to some degree, so the three-dimensional models of the faces of different persons also differ; since the target three-dimensional model matches the fine-grained three-dimensional model, the two can be regarded as three-dimensional models of the same face, i.e., the person to be identified is the candidate person corresponding to the target three-dimensional model.
Compared with the traditional three-dimensional deformation model, the fine-grained three-dimensional model constructed by this method contains more detail information and can therefore better represent the facial features of the person to be recognized. Thus, in this embodiment, by constructing a fine-grained three-dimensional model containing more detail information for face recognition, the facial features of the person to be recognized are represented more accurately, so the person to be recognized can be recognized more accurately.
For example, as analyzed above, the fine-grained three-dimensional model can characterize the facial texture of the person to be recognized. If the facial contour of the person to be recognized is highly similar to that of another person, the person to be recognized may be erroneously recognized as that other person when a conventional three-dimensional deformation model is used for face recognition.
By adopting the embodiment, the facial texture of the person to be identified can be represented in the fine-grained three-dimensional model, so that the person to be identified can be distinguished from the other person according to the facial texture represented by the fine-grained three-dimensional model.
Referring to fig. 5, fig. 5 is a schematic structural diagram of a fine-grained three-dimensional model building apparatus provided in an embodiment of the present application, and the schematic structural diagram may include:
a point cloud obtaining module 501, configured to obtain an original point cloud of an object to be modeled;
a point cloud reconstruction module 502, configured to generate a three-dimensional deformation model based on the original point cloud;
a shifting module 503, configured to shift, based on offsets of the points on the surface of the three-dimensional deformation model relative to the points in the original point cloud, points on at least one patch of the three-dimensional deformation model to obtain a fine-grained three-dimensional model, where a granularity of the fine-grained three-dimensional model is higher than that of the three-dimensional deformation model.
With this embodiment, a three-dimensional deformation model of lower granularity can be generated based on the original point cloud, and the granularity of the three-dimensional deformation model is increased by shifting the points on its patches, so as to obtain a fine-grained three-dimensional model of higher granularity. Because the shifting is performed according to the offsets of the points on the surface of the three-dimensional deformation model relative to the points in the original point cloud, detail information that the three-dimensional deformation model cannot represent can be recovered during the shifting. Therefore, compared with a three-dimensional deformation model generated directly from the original point cloud, the fine-grained three-dimensional model constructed by this method is more accurate; that is, the method effectively improves the accuracy of the constructed three-dimensional model.
In one possible embodiment, the point cloud reconstruction module 502 generates a three-dimensional deformation model based on the original point cloud, including:
extracting point cloud characteristics of the original point cloud;
and inputting the point cloud characteristics into a point cloud reconstruction model obtained by pre-training to obtain a three-dimensional deformation model output by the point cloud reconstruction model.
In a possible embodiment, the apparatus further includes a first model training module, configured to train a point cloud reconstruction model according to the following manner:
obtaining a sample fine-grained three-dimensional model with granularity higher than a preset upper-limit granularity threshold;
registering the sample fine-grained three-dimensional model to obtain a sample fine-grained three-dimensional deformation model;
carrying out down-sampling on the sample fine-granularity three-dimensional deformation model according to a preset down-sampling mode to obtain a sample coarse-granularity three-dimensional deformation model with granularity lower than a preset lower-limit granularity threshold;
and training to obtain a point cloud reconstruction model according to the point cloud characteristics of the sample coarse-grained three-dimensional deformation model, the preset down-sampling mode and the sample fine-grained three-dimensional deformation model.
In a possible embodiment, the shifting module 503 is further configured to input the original point cloud into an offset map generation model obtained by pre-training, so as to obtain an offset map output by the offset map generation model, where the offset map is used to represent the offsets of the points on the surface of the three-dimensional deformation model relative to the points in the original point cloud;
the shifting module 503 shifts the point on at least one patch of the three-dimensional deformation model based on the offset of the three-dimensional deformation model relative to the original point cloud to obtain a fine-grained three-dimensional model, which includes:
and shifting points on at least one patch of the three-dimensional deformation model based on the offsets represented by the offset map to obtain a fine-grained three-dimensional model.
In a possible embodiment, the apparatus further includes a second model training module, configured to train the offset map generation model according to the following manner:
scanning the sample object by using scanning equipment with the precision higher than a preset precision threshold value to obtain a true value three-dimensional model;
carrying out point cloud registration on the true value three-dimensional model to generate a sample three-dimensional deformation model;
calculating the offset of each point on the surface of the sample three-dimensional deformation model relative to each point in the true value three-dimensional model to obtain a sample offset map, wherein the resolution of the sample offset map is higher than a preset resolution threshold;
and training to obtain an offset map generation model according to the sample offset map, the sample three-dimensional deformation model and the true value three-dimensional model.
In a possible embodiment, the object to be modeled is a human face of a person to be identified;
the device also comprises a face recognition module which is used for carrying out face recognition on the fine-grained three-dimensional model and determining the identity of the person to be recognized.
In a possible embodiment, the object to be modeled is a human face of a person;
the shifting module 503 is specifically configured to shift points on at least one patch of the three-dimensional deformation model based on offsets of points on the surface of the three-dimensional deformation model relative to detail texture points in the original point cloud to obtain a fine-grained three-dimensional model, where the detail texture points are points in the original point cloud used for representing facial textures of the human face.
An embodiment of the present application further provides an electronic device, as shown in fig. 6, including:
a memory 601 for storing a computer program;
the processor 602 is configured to implement the following steps when executing the program stored in the memory 601:
acquiring an original point cloud of an object to be modeled;
generating a three-dimensional deformation model based on the original point cloud;
and shifting points on at least one patch of the three-dimensional deformation model based on the offset of each point on the surface of the three-dimensional deformation model relative to each point in the original point cloud to obtain a fine-grained three-dimensional model, wherein the granularity of the fine-grained three-dimensional model is higher than that of the three-dimensional deformation model.
An embodiment of the present application further provides a face recognition device, as shown in fig. 7, including: a scanning unit 701, a reconstruction unit 702, and an identification unit 703.
The scanning unit 701 is used for scanning the face of a person to be identified to obtain an original point cloud;
the reconstructing unit 702 is configured to generate a three-dimensional deformation model based on the original point cloud; shifting points on at least one surface patch of the three-dimensional deformation model based on the offset of each point on the surface of the three-dimensional deformation model relative to each point in the original point cloud to obtain a fine-grained three-dimensional model, wherein the granularity of the fine-grained three-dimensional model is higher than that of the three-dimensional deformation model;
the identifying unit 703 is configured to determine a target three-dimensional model matched with the fine-grained three-dimensional model from preset three-dimensional models corresponding to multiple candidate persons.
The face recognition device may be a camera, a tablet computer, or a computer used for face recognition.
The communication bus mentioned in the electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the electronic equipment and other equipment.
The Memory may include a Random Access Memory (RAM) or a Non-Volatile Memory (NVM), such as at least one disk Memory. Optionally, the memory may also be at least one memory device located remotely from the processor.
The Processor may be a general-purpose Processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; but also Digital Signal Processors (DSPs), Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs) or other Programmable logic devices, discrete Gate or transistor logic devices, discrete hardware components.
In another embodiment provided by the present application, a computer-readable storage medium is further provided, in which a computer program is stored; the computer program, when executed by a processor, implements the steps of any one of the fine-grained three-dimensional model construction methods described above.
In yet another embodiment provided by the present application, there is also provided a computer program product containing instructions which, when run on a computer, cause the computer to perform any one of the fine-grained three-dimensional model construction methods in the above embodiments.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When loaded and executed on a computer, cause the processes or functions described in accordance with the embodiments of the application to occur, in whole or in part. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another, for example, from one website site, computer, server, or data center to another website site, computer, server, or data center via wired (e.g., coaxial cable, fiber optic, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, wireless, microwave, etc.). The computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device, such as a server, a data center, etc., that incorporates one or more of the available media. The usable medium may be a magnetic medium (e.g., floppy Disk, hard Disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The embodiments in this specification are described in an interrelated manner: identical or similar parts among the embodiments may be cross-referenced, and each embodiment focuses on its differences from the others. In particular, the embodiments of the apparatus, the electronic device, the computer-readable storage medium, and the computer program product are described relatively briefly because they are substantially similar to the method embodiments; for relevant details, refer to the corresponding parts of the description of the method embodiments.
The above description covers only preferred embodiments of the present application and is not intended to limit its scope. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present application falls within its protection scope.

Claims (11)

1. A fine-grained three-dimensional model construction method is characterized by comprising the following steps:
acquiring an original point cloud of an object to be modeled;
generating a three-dimensional deformation model based on the original point cloud;
and shifting points on at least one patch of the three-dimensional deformation model based on the offset of each point on the surface of the three-dimensional deformation model relative to each point in the original point cloud to obtain a fine-grained three-dimensional model, wherein the granularity of the fine-grained three-dimensional model is higher than that of the three-dimensional deformation model.
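As an illustrative reading of the displacement step in claim 1 (the claim does not fix a concrete scheme), one common choice is to move each vertex of the coarse deformation model along its surface normal by the per-point offset relative to the original point cloud. All names below are hypothetical, not taken from the patent:

```python
import numpy as np

def refine_model(vertices, normals, offsets):
    """Shift each vertex of the coarse three-dimensional deformation model
    along its unit surface normal by that point's signed offset, yielding
    the fine-grained model.

    vertices: (N, 3) vertex positions of the deformation model
    normals:  (N, 3) unit vertex normals
    offsets:  (N,)   signed per-point offsets relative to the point cloud
    """
    return vertices + normals * offsets[:, None]

# A flat 4-vertex patch displaced along +z by per-vertex offsets.
patch = np.zeros((4, 3))
normals = np.tile(np.array([0.0, 0.0, 1.0]), (4, 1))
fine = refine_model(patch, normals, np.array([0.10, -0.05, 0.00, 0.02]))
```

The z-coordinates of `fine` now carry the fine-grained surface detail that the flat coarse patch lacked.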
2. The method of claim 1, wherein generating a three-dimensional deformation model based on the original point cloud comprises:
extracting point cloud features of the original point cloud;
and inputting the point cloud features into a pre-trained point cloud reconstruction model to obtain the three-dimensional deformation model output by the point cloud reconstruction model.
3. The method of claim 2, wherein the point cloud reconstruction model is trained in advance by:
obtaining a sample fine-grained three-dimensional model with granularity higher than a preset upper-limit granularity threshold;
registering the sample fine-grained three-dimensional model to obtain a sample fine-grained three-dimensional deformation model;
down-sampling the sample fine-grained three-dimensional deformation model in a preset down-sampling mode to obtain a sample coarse-grained three-dimensional deformation model with granularity lower than a preset lower-limit granularity threshold;
and training the point cloud reconstruction model according to the point cloud features of the sample coarse-grained three-dimensional deformation model, the preset down-sampling mode, and the sample fine-grained three-dimensional deformation model.
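The "preset down-sampling mode" of claim 3 is left open; a common choice for point-cloud data is voxel-grid down-sampling, which keeps one representative point per occupied voxel. The following is a minimal sketch under that assumption, with hypothetical names:

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Coarsen a fine-grained point set: keep the first point that falls
    into each occupied cubic voxel of edge length voxel_size."""
    keys = np.floor(points / voxel_size).astype(np.int64)  # voxel index per point
    _, first = np.unique(keys, axis=0, return_index=True)  # one index per voxel
    return points[np.sort(first)]

rng = np.random.default_rng(0)
dense = rng.random((1000, 3))           # stand-in for a fine-grained surface
coarse = voxel_downsample(dense, 0.25)  # at most 4*4*4 = 64 points survive
```

The voxel size directly controls the granularity gap between the fine-grained and coarse-grained sample models used for training.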
4. The method of claim 2, wherein, after extracting the point cloud features of the original point cloud, the method further comprises:
inputting the original point cloud into a pre-trained offset map generation model to obtain an offset map output by the offset map generation model, wherein the offset map is used for representing the offset of each point on the surface of the three-dimensional deformation model relative to each point in the original point cloud;
shifting points on at least one patch of the three-dimensional deformation model based on the offset of the three-dimensional deformation model relative to the original point cloud to obtain a fine-grained three-dimensional model, comprising:
and shifting points on at least one patch of the three-dimensional deformation model based on the offsets represented by the offset map to obtain the fine-grained three-dimensional model.
5. The method of claim 4, wherein the offset map generation model is trained in advance by:
scanning a sample object with a scanning device whose precision is higher than a preset precision threshold to obtain a ground-truth three-dimensional model;
performing point cloud registration on the ground-truth three-dimensional model to generate a sample three-dimensional deformation model;
calculating the offset of each point on the surface of the sample three-dimensional deformation model relative to each point in the ground-truth three-dimensional model to obtain a sample offset map, wherein the resolution of the sample offset map is higher than a preset resolution threshold;
and training the offset map generation model according to the sample offset map, the sample three-dimensional deformation model, and the ground-truth three-dimensional model.
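One way to realize the offset-calculation step of claim 5 is a nearest-neighbour search from each surface point of the deformation model into the ground-truth scan. This brute-force sketch is an assumption, not the patent's stated algorithm, and all names are hypothetical:

```python
import numpy as np

def offset_map(model_points, truth_points):
    """For each surface point of the sample three-dimensional deformation
    model, return the offset vector to its nearest point in the
    ground-truth model.  Brute force: O(M * T) memory and time."""
    diff = model_points[:, None, :] - truth_points[None, :, :]  # (M, T, 3)
    nearest = np.linalg.norm(diff, axis=2).argmin(axis=1)       # (M,)
    return truth_points[nearest] - model_points                 # (M, 3)

model = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
truth = np.array([[0.0, 0.0, 0.1], [1.0, 0.0, -0.2]])
offsets = offset_map(model, truth)
```

For large scans a k-d tree (e.g. `scipy.spatial.cKDTree`) would replace the O(M * T) distance matrix.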
6. The method according to any one of claims 1 to 5, wherein the object to be modeled is a human face of a person to be identified;
the method further comprises the following steps:
and performing face recognition on the fine-grained three-dimensional model to determine the identity of the person to be identified.
7. The method according to any one of claims 1 to 5, wherein the object to be modeled is a human face of a person;
wherein shifting points on at least one patch of the three-dimensional deformation model based on the offset of each point on the surface of the three-dimensional deformation model relative to each point in the original point cloud to obtain a fine-grained three-dimensional model comprises:
and offsetting points on at least one patch of the three-dimensional deformation model based on the offset of each point on the surface of the three-dimensional deformation model relative to the detail texture points in the original point cloud to obtain the fine-grained three-dimensional model, wherein the detail texture points are points in the original point cloud that represent the facial texture of the human face.
8. A face recognition method, comprising:
acquiring a point cloud of the face of a person to be identified as an original point cloud;
generating a three-dimensional deformation model based on the original point cloud;
shifting points on at least one patch of the three-dimensional deformation model based on the offset of each point on the surface of the three-dimensional deformation model relative to each point in the original point cloud to obtain a fine-grained three-dimensional model, wherein the granularity of the fine-grained three-dimensional model is higher than that of the three-dimensional deformation model;
determining a target three-dimensional model matched with the fine-grained three-dimensional model from preset three-dimensional models corresponding to a plurality of candidate persons;
and determining the person to be identified as a candidate person corresponding to the target three-dimensional model.
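Claim 8 leaves the matching criterion between the fine-grained model and the preset candidate models open. One plausible choice is the symmetric Chamfer distance, with the smallest-distance candidate winning; this is a sketch under that assumption, with hypothetical names:

```python
import numpy as np

def chamfer(a, b):
    """Symmetric Chamfer distance between point sets a (K, 3) and b (L, 3)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)  # (K, L)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def identify(query, candidates):
    """Index of the candidate three-dimensional model closest to the
    fine-grained query model."""
    return min(range(len(candidates)), key=lambda i: chamfer(query, candidates[i]))

query = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]])
gallery = [query + 0.5, query.copy(), query - 1.0]
best = identify(query, gallery)  # the exact copy at index 1 matches best
```

In practice a learned embedding of each model would likely replace the raw point-set distance, but the selection logic is the same.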
9. A fine-grained three-dimensional model construction apparatus, characterized in that the apparatus comprises:
the point cloud acquisition module is used for acquiring an original point cloud of an object to be modeled;
the point cloud reconstruction module is used for generating a three-dimensional deformation model based on the original point cloud;
and the shifting module is used for shifting points on at least one patch of the three-dimensional deformation model based on the offset of each point on the surface of the three-dimensional deformation model relative to each point in the original point cloud to obtain a fine-grained three-dimensional model, and the granularity of the fine-grained three-dimensional model is higher than that of the three-dimensional deformation model.
10. An electronic device, comprising:
a memory for storing a computer program;
a processor for implementing the method steps of any one of claims 1 to 5 when executing the program stored in the memory.
11. A face recognition device, comprising: a scanning unit, a reconstruction unit and an identification unit;
the scanning unit is used for scanning the face of a person to be identified to obtain an original point cloud;
the reconstruction unit is used for generating a three-dimensional deformation model based on the original point cloud; shifting points on at least one surface patch of the three-dimensional deformation model based on the offset of each point on the surface of the three-dimensional deformation model relative to each point in the original point cloud to obtain a fine-grained three-dimensional model, wherein the granularity of the fine-grained three-dimensional model is higher than that of the three-dimensional deformation model;
the identification unit is used for determining a target three-dimensional model matched with the fine-grained three-dimensional model from preset three-dimensional models corresponding to a plurality of candidate persons.
CN202111069651.2A 2021-09-13 2021-09-13 Fine-grained three-dimensional model construction method and device and electronic equipment Pending CN113610971A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111069651.2A CN113610971A (en) 2021-09-13 2021-09-13 Fine-grained three-dimensional model construction method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN113610971A true CN113610971A (en) 2021-11-05

Family

ID=78310422

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111069651.2A Pending CN113610971A (en) 2021-09-13 2021-09-13 Fine-grained three-dimensional model construction method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN113610971A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116091676A (en) * 2023-04-13 2023-05-09 腾讯科技(深圳)有限公司 Face rendering method of virtual object and training method of point cloud feature extraction model

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109035388A (en) * 2018-06-28 2018-12-18 北京的卢深视科技有限公司 Three-dimensional face model method for reconstructing and device
CN109903368A (en) * 2017-12-08 2019-06-18 浙江舜宇智能光学技术有限公司 Three-dimensional facial reconstruction system and its three-dimensional facial reconstruction method based on depth information
CN111710035A (en) * 2020-07-16 2020-09-25 腾讯科技(深圳)有限公司 Face reconstruction method and device, computer equipment and storage medium
CN112562082A (en) * 2020-08-06 2021-03-26 长春理工大学 Three-dimensional face reconstruction method and system


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
HUANG CHAO: "Research on Creative Design of New Media Three-Dimensional Animation", 31 October 2020, Jilin People's Publishing House, pages: 47 - 50 *


Similar Documents

Publication Publication Date Title
Schnabel et al. Efficient RANSAC for point‐cloud shape detection
US9189862B2 (en) Outline approximation for point cloud of building
Xu et al. Reconstruction of scaffolds from a photogrammetric point cloud of construction sites using a novel 3D local feature descriptor
Kamencay et al. Improved Depth Map Estimation from Stereo Images Based on Hybrid Method.
CN111524168B (en) Point cloud data registration method, system and device and computer storage medium
Sohn et al. An implicit regularization for 3D building rooftop modeling using airborne lidar data
CN112347550A (en) Coupling type indoor three-dimensional semantic graph building and modeling method
US11651581B2 (en) System and method for correspondence map determination
Satari et al. A multi‐resolution hybrid approach for building model reconstruction from lidar data
CN115457492A (en) Target detection method and device, computer equipment and storage medium
CN114782499A (en) Image static area extraction method and device based on optical flow and view geometric constraint
Jung et al. A line-based progressive refinement of 3D rooftop models using airborne LiDAR data with single view imagery
Zheng et al. Accelerated RANSAC for accurate image registration in aerial video surveillance
JP2017151797A (en) Geometry verification device, program and method
Kang et al. An efficient planar feature fitting method using point cloud simplification and threshold-independent BaySAC
CN113610971A (en) Fine-grained three-dimensional model construction method and device and electronic equipment
CN113723294A (en) Data processing method and device and object identification method and device
Carrilho et al. Extraction of building roof planes with stratified random sample consensus
Tang et al. Automatic structural scene digitalization
Huang et al. Robust fundamental matrix estimation with accurate outlier detection
CN114926536A (en) Semantic-based positioning and mapping method and system and intelligent robot
CN115239776A (en) Point cloud registration method, device, equipment and medium
Lopez-Escogido et al. Automatic extraction of geometric models from 3D point cloud datasets
Chu et al. Hole-filling framework by combining structural and textural information for the 3D Terracotta Warriors
Date et al. Object recognition in terrestrial laser scan data using spin images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination