WO2021238126A1 - Method and apparatus for three-dimensional face reconstruction - Google Patents

Method and apparatus for three-dimensional face reconstruction

Info

Publication number
WO2021238126A1
Authority
WO
WIPO (PCT)
Prior art keywords: dimensional face, face model, model, target, dimensional
Application number
PCT/CN2020/132460
Other languages
English (en)
Chinese (zh)
Inventor
金博
张国鑫
马里千
刘晓强
张博宁
孙佳佳
Original Assignee
北京达佳互联信息技术有限公司
Application filed by 北京达佳互联信息技术有限公司
Publication of WO2021238126A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects

Definitions

  • the present disclosure relates to the field of image processing, and in particular to a method, device, electronic device, and storage medium for three-dimensional face reconstruction.
  • 3D face reconstruction technology has gradually developed into an important application branch of Computer Graphics (CG).
  • the three-dimensional face model has a stronger descriptive ability than the two-dimensional face model and can better express the characteristics of a real face; therefore, face recognition based on the three-dimensional face model achieves markedly higher accuracy in both recognition and liveness detection.
  • in the related art, 3D points are usually marked manually on the 3D face model to be reconstructed, supplemented by the deformation transfer algorithm, so that the standard 3D face model and the 3D face model to be reconstructed are made as similar as possible, achieving the purpose of 3D face reconstruction.
  • the present disclosure provides a method, device, electronic equipment, and storage medium for three-dimensional face reconstruction.
  • the technical solutions of the present disclosure are as follows:
  • a three-dimensional face reconstruction method including:
  • a three-dimensional face reconstruction device including:
  • the three-dimensional face model acquisition unit is configured to perform acquisition of the target three-dimensional face model and the standard three-dimensional face model
  • a three-dimensional face model fitting unit configured to perform fitting the standard three-dimensional face model according to the shape of the target three-dimensional face model to obtain a fitted three-dimensional face model
  • a three-dimensional face model reconstruction unit configured to transform the vertices in the fitted three-dimensional face model to the vertices in the target three-dimensional face model to obtain a three-dimensional face reconstruction model corresponding to the target three-dimensional face model.
  • an electronic device including:
  • a processor; a memory for storing instructions executable by the processor;
  • the processor is configured to execute the instructions to implement the three-dimensional face reconstruction method described in any one of the embodiments of the first aspect.
  • a storage medium is provided; when instructions in the storage medium are executed by a processor of an electronic device, the electronic device can execute the three-dimensional face reconstruction method described in the first aspect.
  • a computer program product includes a computer program stored in a readable storage medium; at least one processor of a device reads and executes the computer program from the readable storage medium, so that the device executes the three-dimensional face reconstruction method described in any one of the embodiments of the first aspect.
  • Fig. 1 is a flow chart showing a method for reconstructing a three-dimensional face according to an exemplary embodiment.
  • Fig. 2 is a flowchart showing an implementable manner of step S200 according to an exemplary embodiment.
  • Fig. 3 is a flowchart showing an implementable manner of step S220 according to an exemplary embodiment.
  • Fig. 4(a) shows a target three-dimensional face model according to an exemplary embodiment.
  • Fig. 4(b) shows a two-dimensional face image according to an exemplary embodiment.
  • Fig. 4(c) shows the key points of the target three-dimensional face according to an exemplary embodiment.
  • Fig. 5 is a flowchart showing an implementable manner of step S300 according to an exemplary embodiment.
  • Fig. 6(a) shows a target three-dimensional face model according to an exemplary embodiment.
  • Fig. 6(b) shows a fitting three-dimensional face model according to an exemplary embodiment.
  • Fig. 6(c) is a three-dimensional face reconstruction model after Laplace deformation according to an exemplary embodiment.
  • Fig. 7 is a block diagram showing a device for reconstructing a three-dimensional face according to an exemplary embodiment.
  • Fig. 8 is an internal structure diagram of an electronic device for 3D face reconstruction according to an exemplary embodiment.
  • Fig. 1 is a flow chart showing a method for 3D face reconstruction according to an exemplary embodiment. As shown in Fig. 1, the method includes the following steps:
  • step S100 a target three-dimensional face model and a standard three-dimensional face model are acquired.
  • step S200 according to the shape of the target three-dimensional face model, the standard three-dimensional face model is fitted to obtain the fitted three-dimensional face model.
  • step S300 the vertices in the fitted three-dimensional face model are transformed to the vertices in the target three-dimensional face model to obtain a three-dimensional face reconstruction model corresponding to the target three-dimensional face model.
  • a three-dimensional (3D) face model refers to a three-dimensional model of a human face; a three-dimensional face model has stronger descriptive ability and better expressiveness than a two-dimensional (2D) face model.
  • the target three-dimensional face model refers to the three-dimensional face structure and the three-dimensional face texture to be reconstructed.
  • the standard 3D face model is an ideal 3D face model set in advance.
  • the target three-dimensional face model to be reconstructed and the standard three-dimensional face model are acquired, and the standard three-dimensional face model is fitted to the target three-dimensional face model based on the shape of the target three-dimensional face model, so that the fitted 3D face model is aligned with the target 3D face model as much as possible.
  • the vertices in the fitted 3D face model are transformed to the vertices in the target 3D face model, and a three-dimensional face reconstruction model corresponding to the target three-dimensional face model to be reconstructed is obtained.
  • the standard three-dimensional face model is a three-dimensional face model formed by pre-set facial shape bases and expression base groups.
  • the shape base refers to the shape and size of the organs or regions of the face designed in advance, such as the shape of the eyes, the shape of the mouth, the shape of the facial muscles or the shape of the eyebrows, etc.
  • the shape base is explained by taking the eyebrows as an example.
  • the shape base corresponding to the eyebrows can be willow-leaf eyebrows, arched eyebrows, raised eyebrows, straight eyebrows, and other shapes reflecting eyebrows.
  • the expression base refers to the state or action of the organs or regions of the face, such as the opening and closing state of the eyes, the opening and closing state of the mouth, the facial muscle action form, or the action form of the eyebrows. Taking the eyebrows as an example, the expression base corresponding to the eyebrows can be a state or form reflecting expressions such as raising the eyebrows, frowning, or flashing the eyebrows.
  • the standard three-dimensional face model formed by the pre-set facial shape bases and expression base groups is fitted to the target three-dimensional face model, and the fitted three-dimensional face model is obtained.
  • the fitted three-dimensional face model at this time is similar or identical in shape and expression to the target three-dimensional face model to be reconstructed.
  • the vertices in the fitted 3D face model are transformed to the vertices in the target 3D face model, so that the fitted 3D face model is consistent with the target 3D face model to be reconstructed at the finer vertex level; the error between the 3D face reconstruction model and the target 3D face model to be reconstructed is thereby minimized, and a more natural 3D face reconstruction model is obtained.
  • the target three-dimensional face model and the standard three-dimensional face model are obtained, and the standard three-dimensional face model is fitted based on the shape of the target three-dimensional face model, so that the fitted 3D face model is aligned with the target 3D face model as much as possible; based on this alignment, the vertices in the fitted 3D face model are transformed to the vertices in the target 3D face model. The error between the 3D face reconstruction model and the target 3D face model to be reconstructed is therefore minimized, the finally obtained 3D face reconstruction model is more natural, and the accuracy of 3D face model reconstruction is improved. This can provide a basis for registration and recognition based on the 3D face reconstruction model and improve the success rate of registration and recognition.
  • Fig. 2 is a flowchart of an implementable manner of step S200 according to an exemplary embodiment. As shown in Fig. 2, step S200, fitting a standard three-dimensional face model according to the shape of the target three-dimensional face model to obtain a fitted three-dimensional face model, includes the following steps:
  • step S210 according to the shape of the target three-dimensional face model, the standard three-dimensional face model is fitted to obtain the initial three-dimensional face model; wherein the initial three-dimensional face key points in the initial three-dimensional face model correspond one-to-one with the standard three-dimensional face key points.
  • step S220 the key points of the target three-dimensional face in the target three-dimensional face model are acquired.
  • step S230 a loss function is constructed according to the key points of the initial three-dimensional face and the key points of the target three-dimensional face.
  • step S240 the initial three-dimensional face model corresponding to the loss function that meets the preset conditions is determined as the fitted three-dimensional face model.
  • the standard three-dimensional face model includes key points of the standard three-dimensional face.
  • the three-dimensional face key points on the standard three-dimensional face model are directly obtained from the unified standard three-dimensional face model library.
  • the three-dimensional face key points corresponding to the standard three-dimensional face models in the standard three-dimensional face model library are manually annotated in advance; a standard three-dimensional face model only needs to be annotated once.
  • by fitting the standard three-dimensional face model, for example with the 3DMM algorithm, a fitted 3D face model can be obtained.
  • the initial three-dimensional face key points in the initial three-dimensional face model correspond one-to-one with the standard three-dimensional face key points.
  • the 3DMM (3D Morphable Model) algorithm models a three-dimensional face on the basis of a three-dimensional face database, taking face shape and face texture statistics as constraints and taking into account the pose of the face and the influence of lighting factors.
  • the 3D face model generated by this algorithm model has high accuracy.
  • through fitting, the shape of the standard 3D face model can be made as consistent as possible with the 3D face structure and 3D face texture in the target 3D face model to be reconstructed, and its facial expression can be made similar to that of the target 3D face model to be reconstructed.
  • the implementation of the 3DMM algorithm is as shown in formula (1):
  • where S_model is the fitted 3D face model after fitting; s_i is the i-th shape base of the 3DMM and a_i is the parameter corresponding to that shape base; n is the number of shape bases; e_i is the i-th expression base of the 3DMM and b_i is the parameter corresponding to that expression base; m is the number of expression bases.
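Formula (1) appears only as an image in the source. Based on the variable definitions above, and assuming the conventional linear 3DMM combination with a mean face $\bar{S}$ (the mean-face term is an assumption, not stated in the text), the formula can be reconstructed as:

```latex
S_{\mathrm{model}} = \bar{S} + \sum_{i=1}^{n} a_i\, s_i + \sum_{i=1}^{m} b_i\, e_i
```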
  • where S_landmark represents the key points of the target three-dimensional face to be reconstructed; S_model represents the key points of the three-dimensional face in the fitted three-dimensional face model after fitting; L is the number of three-dimensional key points.
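The loss function of step S230 is likewise an image in the source. From the variables above, a standard keypoint-fitting loss (a hedged reconstruction; the original may add regularization terms) is:

```latex
E = \sum_{l=1}^{L} \left\| S_{\mathrm{landmark}}^{(l)} - S_{\mathrm{model}}^{(l)} \right\|_2^2
```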
  • the fitting process is to find the parameters a_i and b_i in formula (1), using the target three-dimensional face key points as a reference to fit the standard three-dimensional face key points; the initial three-dimensional face model whose loss function satisfies the preset condition is determined to be the fitted three-dimensional face model.
  • the fitting algorithm used may also be a deformation transfer algorithm, a least-squares method, or the like.
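The parameter search described above can be sketched as an unregularized least-squares fit over the keypoint coordinates (a sketch only; the function and variable names are illustrative, and the real fitting also handles pose, texture, and regularization of the parameters):

```python
import numpy as np

def fit_3dmm_params(mean_kp, basis_kp, target_kp):
    """Solve min_p || target_kp - (mean_kp + basis_kp @ p) ||^2.

    basis_kp stacks the shape bases s_i and expression bases e_i of
    formula (1), restricted to the keypoint coordinates, as columns;
    the solution p concatenates the parameters a_i and b_i.
    """
    p, *_ = np.linalg.lstsq(basis_kp, target_kp - mean_kp, rcond=None)
    return p
```

If the target keypoints were actually generated by the linear model, the least-squares solve recovers the generating parameters exactly; in practice the residual of this solve is what the loss function of step S230 measures.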
  • in the above embodiment, the standard three-dimensional face model is fitted based on the shape of the target three-dimensional face model, a loss function is constructed from the initial three-dimensional face key points and the target three-dimensional face key points, and the initial three-dimensional face model whose loss function meets the preset condition is determined as the fitted three-dimensional face model. This aligns the 3D face key points in the fitted 3D face model with those in the target 3D face model as much as possible, reducing the error between the 3D face reconstruction model and the target 3D face model to be reconstructed and improving the accuracy of 3D face model reconstruction.
  • Fig. 3 is a flow chart showing an implementable manner of step S220 according to an exemplary embodiment.
  • step S220, acquiring the key points of the target three-dimensional face in the target three-dimensional face model, includes the following steps:
  • step S221 the target three-dimensional face model is projected onto the two-dimensional image to obtain a two-dimensional face image; wherein there is a vertex correspondence relationship between the vertices in the target three-dimensional face model and the vertices in the two-dimensional face image.
  • step S222 the key points of the two-dimensional face in the two-dimensional face image are detected.
  • step S223 the key points of the target three-dimensional face are determined according to the correspondence between the key points of the two-dimensional face and the vertices.
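Steps S221–S223 can be sketched as follows, assuming a simple orthographic projection and a nearest-projected-vertex lookup (the key point detector itself is external; `detected_kps` stands in for its output, and all names here are illustrative):

```python
import numpy as np

def project_vertices(vertices_3d):
    """Step S221: orthographic projection of the 3D model onto the image
    plane (drop z). A real renderer would also apply pose and camera."""
    return vertices_3d[:, :2]

def keypoints_to_3d(vertices_3d, detected_kps):
    """Step S223: for each 2D key point detected in step S222, find the
    projected vertex nearest to it; the matching 3D vertex is taken as a
    target three-dimensional face key point."""
    proj = project_vertices(vertices_3d)
    idx = [int(np.argmin(np.linalg.norm(proj - kp, axis=1)))
           for kp in detected_kps]
    return vertices_3d[idx], idx
```

Because the 2D image is produced by projecting the 3D model, every detected 2D key point maps back to a vertex index, which is exactly the vertex correspondence the text relies on.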
  • a two-dimensional image refers to a flat image
  • a two-dimensional face image refers to a flat face image
  • the three-dimensional face model of the target to be reconstructed is rendered onto a two-dimensional image to obtain a two-dimensional face image
  • the face key point detector is used to detect the two-dimensional face key points in the two-dimensional face image.
  • since the two-dimensional face image is obtained by projecting the target three-dimensional face model onto the two-dimensional image, there is a vertex correspondence between the vertices in the two-dimensional face image and the vertices in the three-dimensional face model; therefore, every two-dimensional face key point detected on the two-dimensional face image has a corresponding point on the target three-dimensional face model. After the two-dimensional face key points in the two-dimensional face image are detected by the face key point detector, the target three-dimensional face key points on the target three-dimensional face model can be determined according to the above-mentioned vertex correspondence.
  • Fig. 4(a) shows a target three-dimensional face model according to an exemplary embodiment; the target three-dimensional face model includes a corresponding three-dimensional face structure and a three-dimensional face texture.
  • Fig. 4(b) shows a two-dimensional face image according to an exemplary embodiment; it is the image obtained by projecting the target three-dimensional face model onto the two-dimensional image, and the vertices in the figure are the face key points. Fig. 4(c) shows the target three-dimensional face key points according to an exemplary embodiment; they are the three-dimensional face key points determined according to the correspondence between the two-dimensional face key points and the vertices.
  • in the above embodiment, a two-dimensional face image is obtained, the two-dimensional face key points in the two-dimensional face image are detected, and then the target three-dimensional face key points are determined according to the correspondence between the two-dimensional face key points and the vertices.
  • the entire process of determining the target 3D face key points starts from the detection of the 2D face key points by the detector and then gradually obtains the 3D face key points, with no manual labeling work required, which saves considerable labor and time.
  • the obtained 3D face key points come from the computer applying a unified recognition or detection method, which avoids the randomness and inconsistency of manual labeling and improves the detection accuracy of the target 3D face key points. This provides a basis for the subsequent reduction of the error between the 3D face reconstruction model and the target 3D face model to be reconstructed and improves the accuracy of 3D face model reconstruction.
  • Fig. 5 is a flowchart of an implementable manner of step S300 according to an exemplary embodiment. As shown in Fig. 5, step S300, transforming the vertices in the fitted 3D face model to the vertices in the target 3D face model to obtain the 3D face reconstruction model corresponding to the target 3D face model, includes the following steps:
  • step S310 for the vertices in the fitted three-dimensional face model, the corresponding vertices are searched in the target three-dimensional face model to obtain at least one vertex pair.
  • step S320 the vertex corresponding to the fitted three-dimensional face model in the vertex pair is transformed to the vertex corresponding to the target three-dimensional face model to obtain a three-dimensional face reconstruction model.
  • after the fitted three-dimensional face model is obtained in step S200, in order to make it closer to the target three-dimensional face model to be reconstructed and to obtain a more accurate three-dimensional face reconstruction model, deformation processing is performed on the vertices in the fitted three-dimensional face model to obtain a three-dimensional face reconstruction model corresponding to the target three-dimensional face model.
  • after the standard three-dimensional face model is fitted, the three-dimensional face structure in the fitted three-dimensional face model is already aligned with the three-dimensional face structure in the target three-dimensional face model to be reconstructed; even for large expressions such as an open mouth, pursed lips, or closed eyes, a good alignment effect can be achieved.
  • in the target three-dimensional face model, the vertex closest to each vertex in the fitted three-dimensional face model is searched; each vertex in the fitted three-dimensional face model and the closest vertex found in the target three-dimensional face model are determined to be a vertex pair.
  • since the fitted three-dimensional face model is consistent with the target three-dimensional face model in shape and expression, the vertices in the fitted three-dimensional face model basically correspond to the vertices in the target three-dimensional face model to be reconstructed, and the positions of corresponding vertices are the same or differ only slightly. Therefore, the distance between a vertex in the fitted 3D face model and its corresponding vertex in the target 3D face model is smaller than its distance to any non-corresponding vertex; accordingly, each vertex in the fitted three-dimensional face model and the closest vertex found in the target three-dimensional face model are determined as a vertex pair for subsequent transformation processing.
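The nearest-vertex search of step S310 can be sketched with a brute-force distance matrix (fine for small meshes; in practice a KD-tree would be used, and the function and variable names here are illustrative):

```python
import numpy as np

def build_vertex_pairs(fitted_verts, target_verts):
    """For each vertex of the fitted model, pair it with the closest
    vertex of the target model, yielding the vertex pairs of step S310."""
    # (n_fit, n_target) matrix of Euclidean distances
    d = np.linalg.norm(fitted_verts[:, None, :] - target_verts[None, :, :],
                       axis=2)
    nearest = d.argmin(axis=1)
    return [(i, int(j)) for i, j in enumerate(nearest)]
```

Because corresponding vertices are closer than non-corresponding ones after fitting, the nearest-neighbour pairing recovers the intended correspondence.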
  • the Laplacian deformation algorithm is applied to transform the vertices in the vertex pairs corresponding to the fitted 3D face model to the vertices corresponding to the target 3D face model to obtain the 3D face reconstruction model.
  • the implementation of the Laplace deformation algorithm can be described by a loss function, as shown in formula (3):
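Formula (3) appears only as an image in the source. A standard Laplacian-deformation loss (a hedged reconstruction using the variables defined below; $V'$ are the deformed vertices, $C$ the set of constrained vertex pairs with target positions $u_k$, and $\lambda$ a weight, all of which are assumed notation) is:

```latex
\delta_i = v_i - \frac{1}{d_i} \sum_{j \in N_i} v_j, \qquad
E(V') = \sum_i \bigl\| \delta_i' - \delta_i \bigr\|^2
      + \lambda \sum_{k \in C} \bigl\| v_k' - u_k \bigr\|^2
```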
  • the vertices of the non-face area are points outside the face area, such as points corresponding to hair, ears, and neck. The vertices of the non-face area follow the transformation of the corresponding vertex pairs of the fitted three-dimensional face model, which ensures topological integrity and smoothness and makes the obtained three-dimensional face reconstruction model more natural.
  • the vertices in the fitted three-dimensional face model and the target three-dimensional face model do not all correspond: the three-dimensional face vertices in the two models have corresponding relationships, but points such as hair, ears, and neck do not all have corresponding points. Moreover, these points are not the points of interest for face model reconstruction, recognition, or registration, so no special attention needs to be paid to them.
  • specifically: obtain the vertices of the non-face region in the fitted three-dimensional face model; obtain the transformation coefficients with which the vertices in the vertex pairs corresponding to the fitted three-dimensional face model are transformed to the vertices corresponding to the target three-dimensional face model; and, according to the transformation coefficients, transform the vertices of the non-face area in the fitted three-dimensional face model to obtain a three-dimensional head reconstruction model.
  • where v_i is a vertex of the target three-dimensional face model to be reconstructed or of the fitted three-dimensional face model; N_i is the neighborhood of v_i; d_i is the neighborhood weight of each vertex.
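The Laplacian coordinate of a vertex, as used by the deformation above, can be sketched with uniform weights, where the neighborhood lists come from the mesh topology (a minimal illustrative sketch; real implementations often use cotangent weights):

```python
import numpy as np

def laplacian_coords(vertices, neighbors):
    """Compute delta_i = v_i - (1/d_i) * sum of v_i's neighbors,
    with uniform weights d_i = |N_i| (the vertex degree)."""
    return np.array([v - vertices[nbrs].mean(axis=0)
                     for v, nbrs in zip(vertices, neighbors)])
```

Preserving these coordinates while moving the constrained face vertices toward their paired target vertices is what keeps the deformed mesh smooth and topologically intact.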
  • Fig. 6(a) shows a target three-dimensional face model according to an exemplary embodiment; the target three-dimensional face model includes a corresponding three-dimensional face structure and a three-dimensional face texture. Fig. 6(b) shows a fitted three-dimensional face model according to an exemplary embodiment, obtained by fitting the standard three-dimensional face key points with the target three-dimensional face key points as a reference. Fig. 6(c) shows a three-dimensional face reconstruction model after Laplacian deformation according to an exemplary embodiment.
  • in the above embodiment, for the vertices in the fitted three-dimensional face model, the corresponding vertices are searched in the target three-dimensional face model to obtain at least one vertex pair, and the vertices corresponding to the fitted three-dimensional face model in the vertex pairs are transformed to the vertices corresponding to the target three-dimensional face model to obtain a three-dimensional face reconstruction model. Transforming the vertices in the vertex pairs minimizes the error between the three-dimensional face reconstruction model and the target three-dimensional face model to be reconstructed, makes the final model more natural, and improves the accuracy of three-dimensional face model reconstruction. It can also provide a basis for registration and recognition based on the 3D face reconstruction model and improve the success rate of registration and recognition.
  • Fig. 7 is a block diagram showing a device for reconstructing a three-dimensional face according to an exemplary embodiment.
  • the device includes a three-dimensional face model acquisition unit 701, a three-dimensional face model fitting unit 702, and a three-dimensional face model reconstruction unit 703:
  • the three-dimensional face model acquiring unit 701 is configured to perform acquisition of the target three-dimensional face model and the standard three-dimensional face model;
  • the three-dimensional face model fitting unit 702 is configured to perform fitting of the standard three-dimensional face model according to the shape of the target three-dimensional face model to obtain a fitted three-dimensional face model;
  • the three-dimensional face model reconstruction unit 703 is configured to transform the vertices in the fitted three-dimensional face model to the vertices in the target three-dimensional face model to obtain a three-dimensional face reconstruction model corresponding to the target three-dimensional face model.
  • the standard three-dimensional face model includes standard three-dimensional face key points; the three-dimensional face model fitting unit 702 is further configured to perform: fitting the standard three-dimensional face model according to the shape of the target three-dimensional face model to obtain an initial three-dimensional face model, wherein the initial three-dimensional face key points in the initial three-dimensional face model correspond one-to-one with the standard three-dimensional face key points; acquiring the target three-dimensional face key points in the target three-dimensional face model; constructing a loss function based on the initial three-dimensional face key points and the target three-dimensional face key points; and determining the initial three-dimensional face model whose loss function meets the preset condition as the fitted three-dimensional face model.
  • the three-dimensional face model fitting unit 702 is further configured to execute: projecting the target three-dimensional face model onto a two-dimensional image to obtain a two-dimensional face image, wherein there is a vertex correspondence between the vertices in the target three-dimensional face model and the vertices in the two-dimensional face image; detecting the two-dimensional face key points in the two-dimensional face image; and determining the target three-dimensional face key points according to the correspondence between the two-dimensional face key points and the vertices.
  • the three-dimensional face model reconstruction unit 703 is further configured to perform: for the vertices in the fitted three-dimensional face model, searching for the corresponding vertices in the target three-dimensional face model to obtain at least one vertex pair; and transforming the vertices corresponding to the fitted three-dimensional face model in the vertex pairs to the vertices corresponding to the target three-dimensional face model to obtain a three-dimensional face reconstruction model.
  • the three-dimensional face model reconstruction unit 703 is further configured to perform: in the target three-dimensional face model, finding the vertex closest to each vertex in the fitted three-dimensional face model; and determining each vertex in the fitted three-dimensional face model and the closest vertex found in the target three-dimensional face model to be at least one vertex pair.
  • the three-dimensional face model reconstruction unit 703 is further configured to perform: applying the Laplacian deformation algorithm to transform the vertices corresponding to the fitted three-dimensional face model to the vertices corresponding to the target three-dimensional face model to obtain a 3D face reconstruction model.
  • the vertex corresponding to the fitted three-dimensional face model in the vertex pair is located in the face area of the fitted three-dimensional face model;
  • the three-dimensional face model reconstruction unit 703 also includes a three-dimensional head reconstruction unit, configured to execute: obtaining the vertices of the non-face area in the fitted three-dimensional face model; obtaining the transformation coefficients with which the vertices in the vertex pairs corresponding to the fitted three-dimensional face model are transformed to the vertices corresponding to the target three-dimensional face model; and, according to the transformation coefficients, transforming the vertices of the non-face area in the fitted three-dimensional face model to obtain a three-dimensional head reconstruction model.
  • Fig. 8 is a block diagram showing a device 800 for three-dimensional face reconstruction according to an exemplary embodiment.
  • the device 800 may be a mobile phone, a computer, a digital broadcasting terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, and the like.
  • the device 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and Communication component 816.
  • the processing component 802 generally controls the overall operations of the device 800, such as operations associated with display, phone calls, data communications, camera operations, and recording operations.
  • the processing component 802 may include one or more processors 820 to execute instructions to complete all or part of the steps of the foregoing method.
  • the processing component 802 may include one or more modules to facilitate the interaction between the processing component 802 and other components.
  • the processing component 802 may include a multimedia module to facilitate the interaction between the multimedia component 808 and the processing component 802.
  • the memory 804 is configured to store various types of data to support the operation of the device 800. Examples of such data include instructions for any application or method operating on the device 800, contact data, phone book data, messages, pictures, videos, and the like.
  • the memory 804 can be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, or a magnetic or optical disk.
  • the power supply component 806 provides power for various components of the device 800.
  • the power supply component 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device 800.
  • the multimedia component 808 includes a screen that provides an output interface between the device 800 and the user.
  • the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user.
  • the touch panel includes one or more touch sensors to sense touch, sliding, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure related to the touch or slide operation.
  • the multimedia component 808 includes a front camera and/or a rear camera. When the device 800 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each of the front camera and the rear camera can be a fixed optical lens system or have focusing and optical zoom capability.
  • the audio component 810 is configured to output and/or input audio signals.
  • the audio component 810 includes a microphone (MIC), and when the device 800 is in an operation mode, such as a call mode, a recording mode, and a voice recognition mode, the microphone is configured to receive external audio signals.
  • the received audio signal may be further stored in the memory 804 or transmitted via the communication component 816.
  • the audio component 810 further includes a speaker for outputting audio signals.
  • the I/O interface 812 provides an interface between the processing component 802 and a peripheral interface module.
  • the above-mentioned peripheral interface module may be a keyboard, a click wheel, a button, and the like. These buttons may include but are not limited to: home button, volume button, start button, and lock button.
  • the sensor component 814 includes one or more sensors for providing the device 800 with various aspects of status assessment.
  • the sensor component 814 can detect the open/close state of the device 800 and the relative positioning of components, such as the display and keypad of the device 800.
  • the sensor component 814 can also detect a position change of the device 800 or of a component of the device 800, the presence or absence of contact between the user and the device 800, the orientation or acceleration/deceleration of the device 800, and temperature changes of the device 800.
  • the sensor component 814 may include a proximity sensor configured to detect the presence of nearby objects when there is no physical contact.
  • the sensor component 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications.
  • the sensor component 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
  • the communication component 816 is configured to facilitate wired or wireless communication between the device 800 and other devices.
  • the device 800 can access a wireless network based on a communication standard, such as WiFi, an operator network (such as 2G, 3G, 4G, or 5G), or a combination thereof.
  • the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel.
  • the device 800 can be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above methods.
  • an electronic device is provided, including: a processor 820; and a memory 804 for storing instructions executable by the processor 820; wherein the processor 820 is configured to execute the instructions to implement the three-dimensional face reconstruction method in any of the foregoing embodiments.
  • a storage medium is also provided; when instructions in the storage medium are executed by a processor of the electronic device, the electronic device can execute the three-dimensional face reconstruction method in any one of the above embodiments.
  • also provided is a non-transitory computer-readable storage medium including instructions, such as the memory 804 including instructions; the foregoing instructions may be executed by the processor 820 of the device 800 to complete the foregoing method.
  • the non-transitory computer-readable storage medium may be ROM, random access memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage device, etc.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present disclosure relates to a three-dimensional face reconstruction method and apparatus, an electronic device, and a storage medium. The method comprises the steps of: acquiring a target three-dimensional face model and a standard three-dimensional face model; fitting the standard three-dimensional face model according to the shape of the target three-dimensional face model to obtain a fitted three-dimensional face model; and transforming the vertices in the fitted three-dimensional face model to the vertices in the target three-dimensional face model to obtain a three-dimensional face reconstruction model corresponding to the target three-dimensional face model. According to the present disclosure, the standard three-dimensional face model is fitted on the basis of the shape of the target three-dimensional face model, and, on the basis of aligning the fitted three-dimensional face model with the target three-dimensional face model, the vertices in the fitted three-dimensional face model are transformed to the vertices in the target three-dimensional face model, which minimizes the error between the three-dimensional face reconstruction model and the target three-dimensional face model to be constructed, thereby making the finally obtained three-dimensional face reconstruction model more natural and improving the reconstruction accuracy of the three-dimensional face model.
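As a rough illustration of the abstract's first step — adjusting the standard three-dimensional face model to the shape of the target model — one common way to fit one vertex set to another is a similarity transform found by orthogonal Procrustes analysis. This is a hedged sketch that assumes one-to-one vertex correspondence between the two models; it is not asserted to be the exact fitting method claimed in the patent.

```python
import numpy as np

def align_standard_to_target(standard, target):
    """Fit `standard` (N, 3) to `target` (N, 3) with a similarity transform
    (scale, rotation, translation) via orthogonal Procrustes analysis.
    Assumes row i of both arrays is the same anatomical vertex."""
    mu_s, mu_t = standard.mean(axis=0), target.mean(axis=0)
    S, T = standard - mu_s, target - mu_t          # centered vertex sets

    # Optimal rotation from the SVD of the cross-covariance matrix.
    U, _, Vt = np.linalg.svd(T.T @ S)
    R = U @ Vt
    if np.linalg.det(R) < 0:                       # guard against reflection
        U[:, -1] *= -1
        R = U @ Vt

    # Optimal isotropic scale.
    scale = np.trace(R @ S.T @ T) / (S ** 2).sum()

    # Fitted model: rotated and scaled standard vertices moved onto the
    # target's centroid.
    return scale * S @ R.T + mu_t
```

After this fit, the second step of the abstract (transforming fitted vertices to target vertices) operates on models that are already aligned, which is what keeps the residual per-vertex transformation small.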
PCT/CN2020/132460 2020-05-29 2020-11-27 Three-dimensional face reconstruction method and apparatus WO2021238126A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010479102.1A CN113744384B (zh) 2020-05-29 2020-05-29 Three-dimensional face reconstruction method and apparatus, electronic device, and storage medium
CN202010479102.1 2020-05-29

Publications (1)

Publication Number Publication Date
WO2021238126A1 (fr)

Family

ID=78725042

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/132460 WO2021238126A1 (fr) 2020-05-29 2020-11-27 Three-dimensional face reconstruction method and apparatus

Country Status (2)

Country Link
CN (1) CN113744384B (fr)
WO (1) WO2021238126A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023155775A1 (fr) * 2022-02-17 2023-08-24 北京字跳网络技术有限公司 Film generation method and apparatus, computer device, and storage medium

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117523152B (zh) * 2024-01-04 2024-04-12 广州趣丸网络科技有限公司 Three-dimensional face reconstruction method and apparatus, computer device, and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170084026A1 (en) * 2015-09-21 2017-03-23 Korea Institute Of Science And Technology Method for forming 3d maxillofacial model by automatically segmenting medical image, automatic image segmentation and model formation server performing the same, and storage medium storing the same
CN110796719A (zh) * 2018-07-16 2020-02-14 北京奇幻科技有限公司 Real-time facial expression reconstruction method
CN110807836A (zh) * 2020-01-08 2020-02-18 腾讯科技(深圳)有限公司 Three-dimensional face model generation method and apparatus, device, and medium
CN110866864A (zh) * 2018-08-27 2020-03-06 阿里巴巴集团控股有限公司 Face pose estimation / three-dimensional face reconstruction method and apparatus, and electronic device
CN111028343A (zh) * 2019-12-16 2020-04-17 腾讯科技(深圳)有限公司 Three-dimensional face model generation method and apparatus, device, and medium
CN111127668A (zh) * 2019-12-26 2020-05-08 网易(杭州)网络有限公司 Character model generation method and apparatus, electronic device, and storage medium

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104966316B (zh) * 2015-05-22 2019-03-15 腾讯科技(深圳)有限公司 3D face reconstruction method, apparatus, and server
US10572720B2 (en) * 2017-03-01 2020-02-25 Sony Corporation Virtual reality-based apparatus and method to generate a three dimensional (3D) human face model using image and depth data
CN108961149B (zh) * 2017-05-27 2022-01-07 北京旷视科技有限公司 Image processing method, apparatus, and system, and storage medium
CN108510437B (zh) * 2018-04-04 2022-05-17 科大讯飞股份有限公司 Virtual image generation method, apparatus, and device, and readable storage medium
CN109191507B (zh) * 2018-08-24 2019-11-05 北京字节跳动网络技术有限公司 Three-dimensional face image reconstruction method and apparatus, and computer-readable storage medium
CN109754467B (zh) * 2018-12-18 2023-09-22 广州市百果园网络科技有限公司 Three-dimensional face construction method, computer storage medium, and computer device
CN110414394B (zh) * 2019-07-16 2022-12-13 公安部第一研究所 Method for reconstructing occluded face images and model for face occlusion detection



Also Published As

Publication number Publication date
CN113744384A (zh) 2021-12-03
CN113744384B (zh) 2023-11-28

Similar Documents

Publication Publication Date Title
US11575856B2 (en) Virtual 3D communications using models and texture maps of participants
US11856328B2 (en) Virtual 3D video conference environment generation
US11805157B2 (en) Sharing content during a virtual 3D video conference
CN109858524A (zh) Gesture recognition method and apparatus, electronic device, and storage medium
US11308692B2 (en) Method and device for processing image, and storage medium
CN112348933B (zh) Animation generation method and apparatus, electronic device, and storage medium
WO2021238126A1 (fr) Three-dimensional face reconstruction method and apparatus
CN110148191B (zh) Video virtual expression generation method and apparatus, and computer-readable storage medium
WO2022037285A1 (fr) Camera extrinsic calibration method and apparatus
US11765332B2 (en) Virtual 3D communications with participant viewpoint adjustment
WO2022121577A1 (fr) Image processing method and apparatus
US20210392231A1 (en) Audio quality improvement related to a participant of a virtual three dimensional (3d) video conference
CN110728621B (zh) Face swapping method and apparatus for facial images, electronic device, and storage medium
CN110580677A (zh) Data processing method and apparatus, and apparatus for data processing
JP2004326179A (ja) Image processing device, image processing method, image processing program, and recording medium storing the image processing program
WO2022042570A1 (fr) Image processing method and apparatus
WO2022042160A1 (fr) Image processing method and apparatus
TW202226049A Keypoint detection method, electronic device, and storage medium
US20220005266A1 (en) Method for processing two-dimensional image and device for executing method
KR20200071008A Two-dimensional image processing method and device executing the method
WO2024114470A1 (fr) Method for presenting a virtual try-on effect for a product, and electronic device
WO2024114459A1 (fr) 3D hand model generation method and apparatus, and electronic device
JP2024518888A (ja) Method and system for virtual 3D communication
CN117173734A Palm contour extraction and control instruction generation method, apparatus, and computer device
CN114387388A Close-range three-dimensional face reconstruction device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20937314

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 20/03/2023)

122 Ep: pct application non-entry in european phase

Ref document number: 20937314

Country of ref document: EP

Kind code of ref document: A1