WO2020140832A1 - Three-dimensional facial reconstruction method and apparatus, and electronic device and storage medium

Info

Publication number
WO2020140832A1
WO2020140832A1 (PCT/CN2019/128900, CN2019128900W)
Authority
WO
WIPO (PCT)
Prior art keywords
face
dimensional
vertex
model
face image
Prior art date
Application number
PCT/CN2019/128900
Other languages
French (fr)
Chinese (zh)
Inventor
曹占魁
李雅子
王一
Original Assignee
北京达佳互联信息技术有限公司
Priority date
Filing date
Publication date
Application filed by 北京达佳互联信息技术有限公司 (Beijing Dajia Internet Information Technology Co., Ltd.)
Publication of WO2020140832A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00: Three-dimensional [3D] modelling, e.g. data description of 3D objects

Definitions

  • the present application relates to the field of computer technology, and in particular, to a three-dimensional face reconstruction method, device, electronic equipment, and storage medium.
  • the three-dimensional reconstruction of the face refers to generating a three-dimensional face model based on the two-dimensional image containing the face, that is, the face image.
  • In the related art, 3D Morphable Models are generally used to realize three-dimensional reconstruction of human faces. Specifically, a large number of three-dimensional prototype faces are obtained first and subjected to a complex preprocessing process; Principal Components Analysis (PCA) is then used to statistically model the shape, texture, and surface reflectance of the human face to generate a deformable model, and the deformable model is used to synthesize the face image, realizing three-dimensional reconstruction of the face.
  • The present application provides a three-dimensional face reconstruction method, apparatus, electronic device, and storage medium, which address the cumbersome process, heavy computation, and low efficiency of existing three-dimensional face reconstruction.
  • According to a first aspect, a three-dimensional face reconstruction method is provided, including:
  • acquiring an initial three-dimensional face model and a face image;
  • performing face recognition on the face image to obtain face pose data;
  • obtaining, according to the face pose data, the projection parameters of the device that collects the face image, and the three-dimensional coordinates of each vertex of the initial three-dimensional face model, the two-dimensional coordinates of each vertex projected onto the imaging plane of the device when the initial three-dimensional face model is in the pose corresponding to the face pose data;
  • performing texture mapping on the initial three-dimensional face model according to the two-dimensional coordinates of each vertex and the face image, to obtain a three-dimensional face model of the face image.
  • According to a second aspect, a three-dimensional face reconstruction apparatus is provided, including:
  • an acquisition module configured to acquire an initial three-dimensional face model and a face image;
  • a recognition module configured to perform face recognition on the face image to obtain face pose data;
  • the acquisition module being further configured to obtain, according to the face pose data, the projection parameters of the device that collects the face image, and the three-dimensional coordinates of each vertex of the initial three-dimensional face model, the two-dimensional coordinates of each vertex projected onto the imaging plane of the device when the initial three-dimensional face model is in the pose corresponding to the face pose data;
  • a processing module configured to perform texture mapping on the initial three-dimensional face model according to the two-dimensional coordinates of each vertex and the face image, to obtain a three-dimensional face model of the face image.
  • According to a third aspect, an electronic device is provided, including:
  • a processor;
  • a memory for storing instructions executable by the processor;
  • wherein the processor is configured to:
  • acquire an initial three-dimensional face model and a face image;
  • perform face recognition on the face image to obtain face pose data;
  • obtain, according to the face pose data, the projection parameters of the device that collects the face image, and the three-dimensional coordinates of each vertex of the initial three-dimensional face model, the two-dimensional coordinates of each vertex projected onto the imaging plane of the device when the initial three-dimensional face model is in the pose corresponding to the face pose data;
  • perform texture mapping on the initial three-dimensional face model according to the two-dimensional coordinates of each vertex and the face image, to obtain a three-dimensional face model of the face image.
  • According to a fourth aspect, a non-transitory computer-readable storage medium is provided; when instructions in the storage medium are executed by a processor of an electronic device, the electronic device is enabled to perform the three-dimensional face reconstruction method described in the first aspect.
  • According to a fifth aspect, an application program product is provided; when instructions in the application program product are executed by a processor of an electronic device, the electronic device is enabled to perform the three-dimensional face reconstruction method described in the first aspect.
  • Fig. 1 is a flowchart of a three-dimensional face reconstruction method according to an exemplary embodiment;
  • Fig. 2 is a flowchart of another three-dimensional face reconstruction method according to an exemplary embodiment;
  • Fig. 3 is a block diagram of a first three-dimensional face reconstruction apparatus according to an exemplary embodiment;
  • Fig. 4 is a block diagram of a second three-dimensional face reconstruction apparatus according to an exemplary embodiment;
  • Fig. 5 is a block diagram of a third three-dimensional face reconstruction apparatus according to an exemplary embodiment;
  • Fig. 6 is a block diagram of an electronic device 600 according to an exemplary embodiment.
  • Fig. 1 is a flowchart of a method for three-dimensional face reconstruction according to an exemplary embodiment. As shown in Fig. 1, the method for three-dimensional face reconstruction is used in an electronic device and includes the following steps.
  • In step S11, an initial three-dimensional face model and a face image are obtained.
  • In step S12, face recognition is performed on the face image to obtain face pose data.
  • In step S13, according to the face pose data, the projection parameters of the device that collected the face image, and the three-dimensional coordinates of each vertex of the initial three-dimensional face model, the two-dimensional coordinates of each vertex projected onto the imaging plane of the device are obtained for the initial three-dimensional face model in the pose corresponding to the face pose data.
  • In step S14, texture mapping is performed on the initial three-dimensional face model according to the two-dimensional coordinates of each vertex and the face image, to obtain a three-dimensional face model of the face image.
  • In the method provided by the embodiments of the present application, an initial three-dimensional face model is obtained, face recognition is performed on the face image to be reconstructed to obtain face pose data, and then, according to the face pose data and the projection parameters of the device that collected the face image, the three-dimensional coordinates of each vertex of the initial three-dimensional face model are quickly converted into two-dimensional coordinates, that is, the two-dimensional coordinates at which each vertex of the initial three-dimensional face model is projected onto the face image.
  • Texture mapping is then performed according to these two-dimensional coordinates to obtain the reconstructed three-dimensional face model, which simplifies the reconstruction process, requires little computation, and improves the efficiency of three-dimensional face reconstruction.
  • In some embodiments, performing face recognition on the face image to obtain face pose data includes:
  • performing face recognition on the face image, and using the recognized position and orientation of the face in the face image as the face pose data.
  • In some embodiments, using the recognized position and orientation of the face in the face image as the face pose data includes:
  • using a face recognition algorithm to perform face recognition on the face image to obtain a displacement matrix and a rotation matrix,
  • where the displacement matrix is used to represent the position of the face in three-dimensional space when the device collected the face image,
  • and the rotation matrix is used to indicate the orientation of the face in three-dimensional space when the device collected the face image; the matrix obtained by multiplying the displacement matrix and the rotation matrix is used as the face pose data.
  • In some embodiments, the projection parameters include a projection matrix. Accordingly, obtaining, according to the face pose data, the projection parameters of the device that collected the face image, and the three-dimensional coordinates of each vertex of the initial three-dimensional face model, the two-dimensional coordinates of each vertex projected onto the imaging plane of the device when the initial three-dimensional face model is in the pose corresponding to the face pose data, includes:
  • for each vertex, multiplying the three-dimensional coordinates of the vertex by the pose matrix and the projection matrix to obtain the two-dimensional coordinates of the vertex.
  • In some embodiments, performing texture mapping on the initial three-dimensional face model to obtain the three-dimensional face model of the face image includes:
  • performing texture data collection on the face image according to the two-dimensional coordinates of each vertex, and performing texture mapping on the initial three-dimensional face model according to the collected texture data, to obtain the three-dimensional face model.
  • In some embodiments, after texture mapping is performed on the initial three-dimensional face model to obtain the three-dimensional face model of the face image, the method further includes:
  • rendering the three-dimensional face model onto the face image according to the face pose data and the three-dimensional and two-dimensional coordinates of each vertex.
  • In some embodiments, rendering the three-dimensional face model onto the face image includes:
  • rendering the three-dimensional face model according to the face pose data and the three-dimensional and two-dimensional coordinates of each vertex, and overlaying the rendered image on the face image.
  • In some embodiments, before rendering the three-dimensional face model onto the face image according to the face pose data and the three-dimensional and two-dimensional coordinates of each vertex, the method further includes: obtaining, according to animation data and the three-dimensional coordinates of each vertex, the displaced three-dimensional coordinates of each vertex when the three-dimensional face model is driven to make the expression or action corresponding to the animation data;
  • accordingly, rendering the three-dimensional face model onto the face image includes: rendering the three-dimensional face model onto the face image according to the face pose data, the displaced three-dimensional coordinates of each vertex, and the two-dimensional coordinates of each vertex.
  • In some embodiments, after texture mapping is performed on the initial three-dimensional face model to obtain the three-dimensional face model of the face image, the method further includes:
  • driving, according to animation data, the three-dimensional face model to make the expression or action corresponding to the animation data.
  • Fig. 2 is a flowchart of another three-dimensional face reconstruction method according to an exemplary embodiment. As shown in Fig. 2, the three-dimensional face reconstruction method is used in an electronic device and includes the following steps:
  • In step S21, an initial three-dimensional face model and a face image are obtained.
  • the initial three-dimensional face model may be a standard three-dimensional face model (or a general three-dimensional face model), and the face image refers to an image including a face, that is, an image to be three-dimensionally reconstructed.
  • the initial three-dimensional face model may be constructed by an electronic device, or may be constructed by other devices, and then sent to the electronic device, so that the electronic device can acquire the initial three-dimensional face model.
  • the electronic device may pre-construct or obtain the initial three-dimensional face model from another device and store it locally.
  • the electronic device may obtain the initial three-dimensional face model from local storage.
  • the electronic device may also construct or obtain the initial three-dimensional face model from other devices at the current time, which is not limited in this embodiment of the present application.
  • the construction process of the initial three-dimensional face model may include: obtaining a face image from a face image database, extracting face feature points of the face image, and generating the initial three-dimensional based on the face feature points Face model.
  • the face feature points include, but are not limited to, key points in the face that characterize eyebrows, nose, eyes, mouth, and contours of the face. Face feature points can be obtained by performing face detection on face images through a face detection software development kit (Software Development Kit, SDK).
  • A three-dimensional model is usually represented by points in three-dimensional space and the triangles that connect these points; these points are called vertices. Once the initial three-dimensional face model is constructed, the three-dimensional coordinates of each of its vertices can be obtained, that is, the position coordinates (V.POS) of each vertex in three-dimensional space.
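As an illustrative sketch (not part of the patent), the vertex-and-triangle representation described above can be expressed as two arrays: one holding the position coordinates of the vertices, and one holding index triples that name the triangles. All values below are made up for demonstration (a tiny tetrahedron, not a face mesh):

```python
import numpy as np

# Minimal mesh representation: vertices (V.POS) plus triangles given as
# index triples into the vertex array. Values are illustrative only.
vertices = np.array([        # 3D position of each vertex, one row per vertex
    [0.0, 0.0, 0.0],
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
], dtype=np.float64)

triangles = np.array([       # each row indexes the three vertices of one face
    [0, 1, 2],
    [0, 1, 3],
    [0, 2, 3],
    [1, 2, 3],
], dtype=np.int64)

print(vertices.shape, triangles.shape)  # (4, 3) (4, 3)
```

A real face model would simply have many more rows in both arrays; the structure is the same.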
  • The face image may be a face image collected in real time by the camera of the electronic device.
  • The electronic device can collect face images through a camera module (such as a camera); for each collected frame, the subsequent steps S22 to S25 can be performed, so that a real-time three-dimensional reconstruction result is obtained for each frame of face image.
  • The face image may also be a face image collected in advance by the electronic device, or a face image collected by a camera device other than the electronic device.
  • The embodiments of the present application do not specifically limit the source of the face image.
  • In step S22, face recognition is performed on the face image to obtain face pose data.
  • Since the face image includes a face, when the electronic device or another device collects the face image, the face has a corresponding pose.
  • In some embodiments, this step S22 may include: performing face recognition on the face image, and using the recognized position and orientation of the face in the face image as the face pose data.
  • The position and orientation of the face may be the position and orientation of the face in three-dimensional space when the device collected the face image; for example, the face may be located to the left, to the right, or at the center of the device's field of view.
  • The orientation of the face may be facing forward, turned to the left, turned to the right, tilted up, or tilted down.
  • In some embodiments, performing face recognition on the face image may include: using a face recognition algorithm to perform face recognition on the face image to obtain a displacement matrix and a rotation matrix, where the displacement matrix represents the position of the face in three-dimensional space when the device collected the face image, and the rotation matrix represents the orientation of the face in three-dimensional space at that time; the matrix obtained by multiplying the displacement matrix and the rotation matrix is used as the face pose data.
  • The matrix obtained by multiplying the displacement matrix and the rotation matrix (denoted as matrix M) may be a 4 × 4 matrix.
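As a rough sketch (not taken from the patent), composing the 4 × 4 matrix M from a displacement (translation) part and a rotation part in homogeneous coordinates could look like the following; the rotation angle and translation values are made-up examples, and in practice both matrices would come from the face recognition algorithm:

```python
import numpy as np

def pose_matrix(rotation_3x3, translation_xyz):
    """Combine a displacement matrix T and a rotation matrix R into a single
    4x4 pose matrix M = T @ R, in homogeneous coordinates."""
    T = np.eye(4)
    T[:3, 3] = translation_xyz          # displacement part
    R = np.eye(4)
    R[:3, :3] = rotation_3x3            # rotation part
    return T @ R

# Example: face turned 30 degrees about the y axis, shifted 5 units along -z.
theta = np.radians(30.0)
Ry = np.array([[ np.cos(theta), 0.0, np.sin(theta)],
               [ 0.0,           1.0, 0.0          ],
               [-np.sin(theta), 0.0, np.cos(theta)]])
M = pose_matrix(Ry, [0.0, 0.0, -5.0])
print(M.shape)  # (4, 4)
```

The resulting M carries both the position and the orientation of the face, matching the description above.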
  • In step S23, according to the face pose data, the projection parameters of the device that collected the face image, and the three-dimensional coordinates of each vertex of the initial three-dimensional face model, the two-dimensional coordinates of each vertex projected onto the imaging plane of the device are obtained for the initial three-dimensional face model in the pose corresponding to the face pose data.
  • After the electronic device obtains the face pose data from the face image, it can use the face pose data as the pose data of the three-dimensional face model, that is, the position and orientation of the three-dimensional face model.
  • With the model in this pose, the electronic device can obtain the two-dimensional coordinates of each vertex of the three-dimensional face model under projection. Because the face image is collected by the device, that is, obtained by imaging the face on the imaging plane of the device, the two-dimensional coordinates of each vertex projected onto the imaging plane of the device are exactly the two-dimensional coordinates of each vertex projected onto the face image. This realizes a point-to-point mapping between the three-dimensional face model and the face image.
  • The projection parameter includes a projection matrix (denoted as matrix P). If the face image is collected by the electronic device, the projection matrix refers to the projection matrix of the camera module of the electronic device; if the face image is collected by a camera device other than the electronic device, the projection matrix refers to the projection matrix of that camera device.
  • By multiplying the three-dimensional coordinates of each vertex of the initial three-dimensional face model by the matrix that expresses the pose, and then by the projection matrix of the device that collected the face image, the two-dimensional coordinates of each vertex of the initial three-dimensional face model under that pose, as seen by the camera, can be obtained, that is, the coordinates at which the vertices of the initial three-dimensional face model are projected onto the face image.
  • In this way, each vertex of the initial three-dimensional face model can be mapped to a pixel on the face image. Obtaining the two-dimensional coordinates of each vertex of the three-dimensional face model by projection in this way is fast and requires little computation.
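A minimal sketch of this projection step, assuming numpy: each vertex is lifted to homogeneous coordinates, multiplied by the pose matrix M and the projection matrix P, and divided by the resulting w component. The simple perspective matrix below is illustrative only; a real device would use its calibrated projection parameters:

```python
import numpy as np

def project_vertices(vertices, M, P):
    """vertices: (N, 3) array of 3D coordinates. Returns the (N, 2)
    two-dimensional coordinates of each vertex on the imaging plane."""
    n = vertices.shape[0]
    homo = np.hstack([vertices, np.ones((n, 1))])   # to homogeneous (N, 4)
    clip = (P @ M @ homo.T).T                       # apply pose, then project
    return clip[:, :2] / clip[:, 3:4]               # perspective divide

M = np.eye(4)
M[2, 3] = -5.0                                      # move the model 5 units away
f = 2.0                                             # assumed focal length
P = np.array([[f, 0,  0, 0],
              [0, f,  0, 0],
              [0, 0,  1, 0],
              [0, 0, -1, 0]], dtype=np.float64)     # toy perspective matrix

verts = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 0.0]])
print(project_vertices(verts, M, P))
```

Projecting all N vertices is a single batched matrix multiplication, which is why this step is fast and cheap, as the description notes.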
  • In step S24, texture mapping is performed on the initial three-dimensional face model according to the two-dimensional coordinates of each vertex and the face image, to obtain a three-dimensional face model of the face image.
  • The initial three-dimensional face model may be any standard face model, which does not carry the texture information of the face in the face image.
  • After the electronic device obtains the pose data of the initial three-dimensional face model and the two-dimensional coordinates of each vertex, texture mapping can be performed on the initial three-dimensional face model to obtain a three-dimensional face model with texture information.
  • In some embodiments, this step S24 may include: performing texture data collection on the face image according to the two-dimensional coordinates of each vertex, and performing texture mapping on the initial three-dimensional face model according to the collected texture data, to obtain the three-dimensional face model.
  • By sampling the texture data of the face image according to the two-dimensional coordinates of each vertex and using the samples as the texture of the initial three-dimensional face model, the texture mapping is completed; at that point the three-dimensional reconstruction of the face is complete, and the result is used as the three-dimensional face model of the face image.
  • Because the pose data is obtained with a mature face recognition algorithm, the three-dimensional face model can be reconstructed with a simple calculation; the computation is small and fast, which improves the efficiency of three-dimensional face reconstruction.
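As an illustrative sketch of this sampling step (not the patent's implementation), the face image can be read at each vertex's projected two-dimensional coordinates to obtain per-vertex texture data. Nearest-pixel sampling is used here for brevity; a real renderer would typically sample bilinearly on the GPU:

```python
import numpy as np

def sample_texture(face_image, coords_2d):
    """face_image: (H, W, 3) array; coords_2d: (N, 2) pixel coordinates,
    one per vertex. Returns an (N, 3) array of sampled colors."""
    h, w = face_image.shape[:2]
    # round to the nearest pixel and clamp to the image bounds
    xs = np.clip(np.rint(coords_2d[:, 0]), 0, w - 1).astype(int)
    ys = np.clip(np.rint(coords_2d[:, 1]), 0, h - 1).astype(int)
    return face_image[ys, xs]

# Tiny illustrative "image": a 2x2 grid of distinct colors.
img = np.array([[[255, 0, 0], [0, 255, 0]],
                [[0, 0, 255], [255, 255, 0]]], dtype=np.uint8)
coords = np.array([[0.0, 0.0], [1.0, 1.0]])
print(sample_texture(img, coords))   # colors at pixels (0,0) and (1,1)
```

Each row of the result is the texture data attached to the corresponding vertex, which completes the per-vertex mapping between model and image described above.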
  • In step S25, the three-dimensional face model is rendered onto the face image according to the face pose data and the three-dimensional and two-dimensional coordinates of each vertex.
  • After obtaining the reconstructed three-dimensional face model through three-dimensional reconstruction, the electronic device can display the reconstruction result by rendering.
  • In some embodiments, this step S25 may include: rendering the three-dimensional face model according to the face pose data and the three-dimensional and two-dimensional coordinates of each vertex, and overlaying the rendered image on the face image.
  • The electronic device can use three-dimensional rendering technology to render the three-dimensional face model onto the face image based on the face pose data of the model and the three-dimensional and two-dimensional coordinates of each of its vertices.
  • Because of the point-to-point mapping between each vertex and the pixels of the face image, the rendering result of the three-dimensional face model fuses naturally with the face image, without visible boundaries.
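A minimal sketch of the overlay step, assuming the renderer produces an RGBA image: standard "over" alpha compositing blends the rendered model into the face image. Shapes and pixel values below are illustrative:

```python
import numpy as np

def overlay(rendered_rgba, face_rgb):
    """rendered_rgba: (H, W, 4) float array in [0, 1] with an alpha channel;
    face_rgb: (H, W, 3) float array in [0, 1]. Returns the composited image."""
    alpha = rendered_rgba[..., 3:4]                 # (H, W, 1)
    return alpha * rendered_rgba[..., :3] + (1.0 - alpha) * face_rgb

face = np.zeros((2, 2, 3))                          # toy black "face image"
render = np.zeros((2, 2, 4))
render[0, 0] = [1.0, 1.0, 1.0, 1.0]                 # opaque rendered pixel
render[0, 1] = [1.0, 1.0, 1.0, 0.5]                 # half-transparent pixel
out = overlay(render, face)
print(out[0, 0], out[0, 1])
```

Pixels where the model was not drawn keep the original face image, so the composite has no hard boundary between the rendered model and the photo.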
  • the electronic device may drive the three-dimensional face model to make expressions or actions corresponding to the animation data according to the animation data.
  • the animation data is used to show expressions or actions made by the human face, such as making expressions or actions such as mouth opening and tongue extension, the embodiments of the present application do not limit the expressions or actions corresponding to the animation data.
  • The animation data may be animation data acquired in advance by the electronic device, or animation data obtained in other ways.
  • Electronic devices can use techniques such as skeletal animation and vertex animation to create animation data; the basic principle of these techniques is to displace each vertex of the model over time.
  • The pre-acquired animation data is applied directly to the three-dimensional face model to achieve animation retargeting, and can adapt to most people's expressions or actions.
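As a rough sketch of the vertex-animation principle just described (not the patent's animation system), animation data can be modeled as a time-varying per-vertex displacement added to the base vertex positions. The displacement function below is a made-up oscillation purely for illustration:

```python
import numpy as np

def animate(vertices, t):
    """vertices: (N, 3) base positions; t: time in seconds.
    Returns the displaced (N, 3) positions at time t."""
    offsets = np.zeros_like(vertices)
    # illustrative "mouth open" motion: every vertex bobs along y over time
    offsets[:, 1] = -0.1 * (1.0 - np.cos(2.0 * np.pi * t)) / 2.0
    return vertices + offsets

base = np.array([[0.0, 0.0, 0.0], [1.0, 2.0, 0.0]])
print(animate(base, 0.0))   # at t=0 there is no displacement
```

Rendering with these displaced three-dimensional coordinates, while keeping the sampled texture, is what makes the reconstructed face appear to perform the expression or action.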
  • In this case, step S25 may include: rendering the three-dimensional face model onto the face image according to the face pose data, the displaced three-dimensional coordinates of each vertex, and the two-dimensional coordinates of each vertex.
  • When the three-dimensional face model is driven to make an expression or action, each of its vertices is displaced.
  • The electronic device can render the three-dimensional face model according to the displaced three-dimensional coordinates of each vertex, so that the rendering result shows a face making the corresponding expression or action, realizing the retargeting of the expression or action.
  • The technical solution provided by the embodiments of the present application can reconstruct a three-dimensional face model in real time with high performance, and can combine it with pre-made animation data so that the face moves according to the pre-made animation, making expressions and actions such as opening the mouth and sticking out the tongue, while blending well with the face image.
  • Fig. 3 is a block diagram of a device for three-dimensional face reconstruction according to an exemplary embodiment.
  • the device includes an acquisition module 301, an identification module 302 and a processing module 303.
  • The acquisition module 301 is configured to acquire an initial three-dimensional face model and a face image.
  • The recognition module 302 is configured to perform face recognition on the face image to obtain face pose data.
  • The acquisition module 301 is further configured to obtain, according to the face pose data, the projection parameters of the device that collected the face image, and the three-dimensional coordinates of each vertex of the initial three-dimensional face model, the two-dimensional coordinates of each vertex projected onto the imaging plane of the device when the initial three-dimensional face model is in the pose corresponding to the face pose data.
  • The processing module 303 is configured to perform texture mapping on the initial three-dimensional face model according to the two-dimensional coordinates of each vertex and the face image, to obtain a three-dimensional face model of the face image.
  • In some embodiments, the recognition module 302 is configured to perform face recognition on the face image, and to use the recognized position and orientation of the face in the face image as the face pose data.
  • In some embodiments, the recognition module 302 is configured to:
  • use a face recognition algorithm to perform face recognition on the face image to obtain a displacement matrix and a rotation matrix,
  • where the displacement matrix is used to represent the position of the face in three-dimensional space when the device collected the face image,
  • and the rotation matrix is used to indicate the orientation of the face in three-dimensional space when the device collected the face image;
  • the matrix obtained by multiplying the displacement matrix and the rotation matrix is used as the face pose data.
  • the projection parameter includes a projection matrix
  • The acquisition module 301 is configured to, for each vertex, multiply the three-dimensional coordinates of the vertex by the pose matrix and the projection matrix to obtain the two-dimensional coordinates of the vertex.
  • the processing module 303 is configured to execute:
  • perform texture data collection on the face image according to the two-dimensional coordinates of each vertex, and perform texture mapping on the initial three-dimensional face model according to the collected texture data, to obtain the three-dimensional face model.
  • the device further includes:
  • the rendering module 304 is configured to perform rendering of the three-dimensional face model on the face image according to the face pose data, the three-dimensional coordinates and the two-dimensional coordinates of each vertex.
  • In some embodiments, the rendering module 304 is configured to render the three-dimensional face model according to the face pose data and the three-dimensional and two-dimensional coordinates of each vertex, and to overlay the rendered image on the face image.
  • In some embodiments, the acquisition module 301 is further configured to obtain, according to the animation data and the three-dimensional coordinates of each vertex, the displaced three-dimensional coordinates of each vertex when the three-dimensional face model is driven to make the expression or action corresponding to the animation data.
  • the rendering module 304 is configured to perform rendering of the three-dimensional face model onto the face image based on the face pose data, the three-dimensional coordinates after the displacement of each vertex, and the two-dimensional coordinates of each vertex.
  • the device further includes:
  • the driving module 305 is configured to execute driving the three-dimensional face model to make expressions or actions corresponding to the animation data according to the animation data.
  • Fig. 6 is a block diagram of an electronic device 600 according to an exemplary embodiment.
  • The electronic device 600 may be a smartphone, a tablet computer, a Moving Picture Experts Group Audio Layer III (MP3) player, a Moving Picture Experts Group Audio Layer IV (MP4) player, a laptop, or a desktop computer.
  • The electronic device 600 may also be referred to as user equipment, a portable electronic device, a laptop electronic device, a desktop electronic device, or by other names.
  • The electronic device 600 includes:
  • a processor 601;
  • a memory 602 for storing instructions executable by the processor 601;
  • wherein the processor 601 is configured to:
  • acquire an initial three-dimensional face model and a face image;
  • perform face recognition on the face image to obtain face pose data;
  • obtain, according to the face pose data, the projection parameters of the device that collected the face image, and the three-dimensional coordinates of each vertex of the initial three-dimensional face model, the two-dimensional coordinates of each vertex projected onto the imaging plane of the device when the initial three-dimensional face model is in the pose corresponding to the face pose data;
  • perform texture mapping on the initial three-dimensional face model according to the two-dimensional coordinates of each vertex and the face image, to obtain a three-dimensional face model of the face image.
  • the processor 601 is specifically configured to execute:
  • Face recognition is performed on the face image, and the position and orientation of the face in the recognized face image are used as the face pose data.
  • the processor 601 is specifically configured to execute:
  • a face recognition algorithm is used to perform face recognition on the face image to obtain a displacement matrix and a rotation matrix.
  • the displacement matrix is used to represent the position of the face in the three-dimensional space when the device collects the face image.
  • the rotation matrix is used to indicate the orientation of the face in three-dimensional space when the device collected the face image;
  • the matrix obtained by multiplying the displacement matrix and the rotation matrix is used as the face pose data.
  • the projection parameter includes a projection matrix
  • The processor 601 is specifically configured to, for each vertex, multiply the three-dimensional coordinates of the vertex by the pose matrix and the projection matrix to obtain the two-dimensional coordinates of the vertex.
  • the processor 601 is specifically configured to execute:
  • perform texture data collection on the face image according to the two-dimensional coordinates of each vertex, and perform texture mapping on the initial three-dimensional face model according to the collected texture data, to obtain the three-dimensional face model.
  • the processor 601 is further configured to execute:
  • render the three-dimensional face model onto the face image according to the face pose data and the three-dimensional and two-dimensional coordinates of each vertex.
  • the processor 601 is specifically configured to execute:
  • render the three-dimensional face model according to the face pose data and the three-dimensional and two-dimensional coordinates of each vertex, and overlay the rendered image on the face image.
  • the processor 601 is further configured to execute:
  • obtain, according to the animation data and the three-dimensional coordinates of each vertex, the displaced three-dimensional coordinates of each vertex when the three-dimensional face model is driven to make the expression or action corresponding to the animation data;
  • render the three-dimensional face model onto the face image according to the face pose data, the displaced three-dimensional coordinates of each vertex, and the two-dimensional coordinates of each vertex.
  • the processor 601 is further configured to execute:
  • the three-dimensional face model is driven to make expressions or actions corresponding to the animation data.
  • the processor 601 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so on.
  • The processor 601 may be implemented in at least one hardware form of digital signal processing (Digital Signal Processing, DSP), field-programmable gate array (Field-Programmable Gate Array, FPGA), and programmable logic array (Programmable Logic Array, PLA).
  • the processor 601 may also include a main processor and a coprocessor.
  • the main processor is a processor for processing data in a wake-up state, also called a central processing unit (Central Processing Unit, CPU); the coprocessor is A low-power processor for processing data in the standby state.
  • the processor 601 may be integrated with a graphics processing unit (Graphics Processing Unit, GPU), and the GPU is used to render and draw content that needs to be displayed on the display screen.
  • the processor 601 may further include an artificial intelligence (AI) processor, which is used to process computing operations related to machine learning.
  • the memory 602 may include one or more computer-readable storage media, which may be non-transitory.
  • the memory 602 may also include high-speed random access memory, and non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices.
  • the non-transitory computer-readable storage medium in the memory 602 is used to store at least one instruction, and the at least one instruction is executed by the processor 601 to implement the three-dimensional face reconstruction method provided by the method embodiments of the present application.
  • the electronic device 600 may optionally further include: a peripheral device interface 603 and at least one peripheral device.
  • the processor 601, the memory 602, and the peripheral device interface 603 may be connected by a bus or a signal line.
  • Each peripheral device may be connected to the peripheral device interface 603 through a bus, a signal line, or a circuit board.
  • the peripheral device includes: at least one of a radio frequency circuit 604, a display screen 605, a camera 606, an audio circuit 607, a positioning component 608, and a power supply 609.
  • the peripheral device interface 603 may be used to connect at least one peripheral device related to Input/Output (I/O) to the processor 601 and the memory 602.
  • in some embodiments, the processor 601, the memory 602, and the peripheral device interface 603 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 601, the memory 602, and the peripheral device interface 603 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
  • the radio frequency circuit 604 is used to receive and transmit radio frequency (Radio Frequency, RF) signals, also called electromagnetic signals.
  • the radio frequency circuit 604 communicates with a communication network and other communication devices through electromagnetic signals.
  • the radio frequency circuit 604 converts the electrical signal into an electromagnetic signal for transmission, or converts the received electromagnetic signal into an electrical signal.
  • the radio frequency circuit 604 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a user identity module card, and so on.
  • the radio frequency circuit 604 can communicate with other electronic devices through at least one wireless communication protocol.
  • the wireless communication protocol includes but is not limited to: metropolitan area networks, various generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or wireless fidelity (WiFi) networks.
  • the radio frequency circuit 604 may further include a circuit related to short-range wireless communication (Near Field Communication, NFC), which is not limited in this application.
  • the display screen 605 is used to display a user interface (User Interface, UI).
  • the UI may include graphics, text, icons, video, and any combination thereof.
  • the display screen 605 also has the ability to collect touch signals on or above the surface of the display screen 605.
  • the touch signal can be input to the processor 601 as a control signal for processing.
  • the display screen 605 can also be used to provide virtual buttons and/or virtual keyboards, also called soft buttons and/or soft keyboards.
  • in some embodiments, there may be one display screen 605, disposed on the front panel of the electronic device 600; in other embodiments, there may be at least two display screens 605, respectively disposed on different surfaces of the electronic device 600 or in a folded design; in still other embodiments, the display screen 605 may be a flexible display screen, disposed on a curved or folding surface of the electronic device 600. The display screen 605 may even be set as a non-rectangular irregular figure, that is, a special-shaped screen.
  • the display screen 605 can be made of materials such as liquid crystal display (Liquid Crystal Display, LCD) and organic light-emitting diode (Organic Light-Emitting Diode, OLED).
  • the camera component 606 is used to collect images or videos.
  • the camera assembly 606 includes a front camera and a rear camera.
  • the front camera is set on the front panel of the electronic device, and the rear camera is set on the back of the electronic device.
  • there may be at least two rear cameras, each being one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blur function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and virtual reality (Virtual Reality, VR) shooting functions or other fusion shooting functions.
  • the camera assembly 606 may also include a flash.
  • the flash can be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash refers to a combination of a warm-light flash and a cold-light flash, which can be used for light compensation at different color temperatures.
  • the audio circuit 607 may include a microphone and a speaker.
  • the microphone is used to collect sound waves of the user and the environment, and convert the sound waves into electrical signals and input them to the processor 601 for processing, or input them to the radio frequency circuit 604 to implement voice communication.
  • the microphone can also be an array microphone or an omnidirectional acquisition microphone.
  • the speaker is used to convert the electrical signal from the processor 601 or the radio frequency circuit 604 into sound waves.
  • the speaker can be a traditional thin-film speaker or a piezoelectric ceramic speaker.
  • When the speaker is a piezoelectric ceramic speaker, it can not only convert electrical signals into sound waves audible to humans, but also convert electrical signals into sound waves inaudible to humans for ranging purposes.
  • the audio circuit 607 may further include a headphone jack.
  • the positioning component 608 is used to locate the current geographic location of the electronic device 600 to implement navigation or location-based services (Location Based Services, LBS).
  • the positioning component 608 may be a positioning component based on the Global Positioning System (GPS) of the United States, the Beidou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
  • the power supply 609 is used to supply power to various components in the electronic device 600.
  • the power source 609 may be alternating current, direct current, disposable batteries, or rechargeable batteries.
  • the rechargeable battery may support wired charging or wireless charging.
  • the rechargeable battery can also be used to support fast charging technology.
  • the electronic device 600 further includes one or more sensors 610.
  • the one or more sensors 610 include, but are not limited to: an acceleration sensor 611, a gyro sensor 612, a pressure sensor 613, a fingerprint sensor 614, an optical sensor 615, and a proximity sensor 616.
  • the acceleration sensor 611 can detect the magnitude of acceleration on the three coordinate axes of the coordinate system established by the electronic device 600.
  • the acceleration sensor 611 can be used to detect the components of gravity acceleration on three coordinate axes.
  • the processor 601 may control the touch screen 605 to display the user interface in a landscape view or a portrait view according to the gravity acceleration signal collected by the acceleration sensor 611.
  • the acceleration sensor 611 can also be used for game or user movement data collection.
  • the gyro sensor 612 can detect the body direction and the rotation angle of the electronic device 600, and the gyro sensor 612 can cooperate with the acceleration sensor 611 to collect a 3D action of the user on the electronic device 600.
  • the processor 601 can realize the following functions according to the data collected by the gyro sensor 612: motion sensing (such as changing the UI according to the user's tilt operation), image stabilization during shooting, game control, and inertial navigation.
  • the pressure sensor 613 may be disposed on the side frame of the electronic device 600 and/or the lower layer of the touch display 605.
  • the pressure sensor 613 can detect the user's grip signal on the electronic device 600, and the processor 601 can perform left-right hand recognition or shortcut operation according to the grip signal collected by the pressure sensor 613.
  • the processor 601 controls the operability control on the UI interface according to the user's pressure operation on the touch display 605.
  • the operability control includes at least one of a button control, a scroll bar control, an icon control, and a menu control.
  • the fingerprint sensor 614 is used to collect the user's fingerprint, and the processor 601 identifies the user's identity according to the fingerprint collected by the fingerprint sensor 614, or the fingerprint sensor 614 itself identifies the user's identity based on the collected fingerprint. When the user's identity is recognized as a trusted identity, the processor 601 authorizes the user to perform related sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, and changing settings.
  • the fingerprint sensor 614 may be provided on the front, back, or side of the electronic device 600. When a physical button or manufacturer logo is provided on the electronic device 600, the fingerprint sensor 614 may be integrated with the physical button or manufacturer logo.
  • the optical sensor 615 is used to collect the ambient light intensity.
  • the processor 601 can control the display brightness of the touch display 605 according to the ambient light intensity collected by the optical sensor 615. Specifically, when the ambient light intensity is high, the display brightness of the touch display 605 is increased; when the ambient light intensity is low, the display brightness of the touch display 605 is decreased.
  • the processor 601 may also dynamically adjust the shooting parameters of the camera assembly 606 according to the ambient light intensity collected by the optical sensor 615.
  • the proximity sensor 616, also called a distance sensor, is usually provided on the front panel of the electronic device 600.
  • the proximity sensor 616 is used to collect the distance between the user and the front of the electronic device 600.
  • When the proximity sensor 616 detects that the distance between the user and the front of the electronic device 600 gradually becomes smaller, the processor 601 controls the touch display 605 to switch from the bright-screen state to the off-screen state; when the proximity sensor 616 detects that the distance between the user and the front of the electronic device 600 gradually becomes larger, the processor 601 controls the touch display 605 to switch from the off-screen state to the bright-screen state.
  • the structure shown in FIG. 6 does not constitute a limitation on the electronic device 600, which may include more or fewer components than shown, combine certain components, or adopt a different component arrangement.
  • a non-transitory computer-readable storage medium is also provided.
  • When the instructions in the storage medium are executed by the processor of the electronic device, the electronic device can perform any of the above three-dimensional face reconstruction methods, for example, the method shown in FIG. 1 or the method shown in FIG. 2.
  • the non-transitory computer-readable storage medium may be a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a compact disc read-only memory (Compact Disc Read-Only Memory, CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, and so on.
  • an application product is also provided.
  • When the instructions in the application product are executed by the processor of the electronic device, the electronic device can execute any of the three-dimensional face reconstruction methods described above, such as the method shown in FIG. 1 or the method shown in FIG. 2.

Abstract

Disclosed are a three-dimensional facial reconstruction method and apparatus, and an electronic device and a storage medium, wherein same belong to the technical field of computers. The method comprises: acquiring an initial three-dimensional facial model and a facial image (S11); performing facial recognition on the facial image to obtain facial pose data (S12); according to the facial pose data, a projection parameter of a device collecting the facial image and three-dimensional coordinates of each vertex of the initial three-dimensional facial model, acquiring two-dimensional coordinates of each vertex projected to an imaging plane of the device when the initial three-dimensional facial model is in a pose corresponding to the facial pose data (S13); and performing texture mapping processing on the initial three-dimensional facial model according to the two-dimensional coordinates of each vertex and the facial image so as to obtain a three-dimensional facial model of the facial image (S14). According to the method, a reconstruction process is simplified, the calculation burden is small, and the efficiency of three-dimensional facial reconstruction is improved.

Description

Three-dimensional face reconstruction method, apparatus, electronic device and storage medium
Cross-Reference to Related Applications

This application claims priority to the Chinese patent application No. 201910008837.3, entitled "Three-dimensional face reconstruction method, apparatus, electronic device and storage medium" and filed with the China Patent Office on January 4, 2019, the entire contents of which are incorporated herein by reference.
Technical Field

The present application relates to the field of computer technology, and in particular, to a three-dimensional face reconstruction method, apparatus, electronic device, and storage medium.
Background

In recent years, with the development of augmented reality (Augmented Reality, AR) technology, there is a need in AR applications to obtain a face image from a camera and realize three-dimensional face reconstruction. Here, three-dimensional face reconstruction refers to generating a three-dimensional face model from a two-dimensional image containing a face, that is, a face image.

In the related art, a 3D Morphable Model (3DMM) is generally used to realize three-dimensional face reconstruction. Specifically, a large number of three-dimensional prototype faces are first obtained and subjected to a complex preprocessing process; the shape, texture, and surface reflectance of the face are then statistically modeled by Principal Components Analysis (PCA) to generate a morphable model, and the morphable model is used to perform face synthesis on the face image, realizing three-dimensional face reconstruction.

When the above technology uses a three-dimensional morphable model to realize three-dimensional face reconstruction, the inventor realized that statistical modeling by principal component analysis is required; the process is cumbersome, the computation load is heavy, and the efficiency of three-dimensional face reconstruction is low.
Summary

The present application provides a three-dimensional face reconstruction method, apparatus, electronic device, and storage medium, which can overcome the problems of a cumbersome process, a heavy computation load, and low efficiency of three-dimensional face reconstruction.
According to a first aspect of the embodiments of the present application, a three-dimensional face reconstruction method is provided, including:

acquiring an initial three-dimensional face model and a face image;

performing face recognition on the face image to obtain face pose data;

acquiring, according to the face pose data, a projection parameter of the device that collects the face image, and three-dimensional coordinates of each vertex of the initial three-dimensional face model, two-dimensional coordinates of each vertex projected onto an imaging plane of the device when the initial three-dimensional face model is in a pose corresponding to the face pose data; and

performing texture mapping processing on the initial three-dimensional face model according to the two-dimensional coordinates of each vertex and the face image, to obtain a three-dimensional face model of the face image.
According to a second aspect of the embodiments of the present application, a three-dimensional face reconstruction apparatus is provided, including:

an acquisition module configured to acquire an initial three-dimensional face model and a face image;

a recognition module configured to perform face recognition on the face image to obtain face pose data;

the acquisition module being further configured to acquire, according to the face pose data, a projection parameter of the device that collects the face image, and three-dimensional coordinates of each vertex of the initial three-dimensional face model, two-dimensional coordinates of each vertex projected onto an imaging plane of the device when the initial three-dimensional face model is in a pose corresponding to the face pose data; and

a processing module configured to perform texture mapping processing on the initial three-dimensional face model according to the two-dimensional coordinates of each vertex and the face image, to obtain a three-dimensional face model of the face image.
According to a third aspect of the embodiments of the present application, an electronic device is provided, including:

a processor; and

a memory for storing instructions executable by the processor;

wherein the processor is configured to:

acquire an initial three-dimensional face model and a face image;

perform face recognition on the face image to obtain face pose data;

acquire, according to the face pose data, a projection parameter of the device that collects the face image, and three-dimensional coordinates of each vertex of the initial three-dimensional face model, two-dimensional coordinates of each vertex projected onto an imaging plane of the device when the initial three-dimensional face model is in a pose corresponding to the face pose data; and

perform texture mapping processing on the initial three-dimensional face model according to the two-dimensional coordinates of each vertex and the face image, to obtain a three-dimensional face model of the face image.
According to a fourth aspect of the embodiments of the present application, a non-transitory computer-readable storage medium is provided; when the instructions in the storage medium are executed by a processor of an electronic device, the electronic device is enabled to execute the three-dimensional face reconstruction method of the first aspect.

According to a fifth aspect of the embodiments of the present application, an application program product is provided; when the instructions in the application program product are executed by a processor of an electronic device, the electronic device is enabled to execute the three-dimensional face reconstruction method of the first aspect.
Brief Description of the Drawings

The drawings herein are incorporated into and constitute a part of this specification, show embodiments consistent with the present application, and are used together with the specification to explain the principles of the present application.

Fig. 1 is a flowchart of a three-dimensional face reconstruction method according to an exemplary embodiment;

Fig. 2 is a flowchart of another three-dimensional face reconstruction method according to an exemplary embodiment;

Fig. 3 is a block diagram of a first three-dimensional face reconstruction apparatus according to an exemplary embodiment;

Fig. 4 is a block diagram of a second three-dimensional face reconstruction apparatus according to an exemplary embodiment;

Fig. 5 is a block diagram of a third three-dimensional face reconstruction apparatus according to an exemplary embodiment;

Fig. 6 is a block diagram of an electronic device 600 according to an exemplary embodiment.
Detailed Description

Fig. 1 is a flowchart of a three-dimensional face reconstruction method according to an exemplary embodiment. As shown in Fig. 1, the three-dimensional face reconstruction method is used in an electronic device and includes the following steps.

In step S11, an initial three-dimensional face model and a face image are acquired.

In step S12, face recognition is performed on the face image to obtain face pose data.

In step S13, according to the face pose data, a projection parameter of the device that collects the face image, and the three-dimensional coordinates of each vertex of the initial three-dimensional face model, the two-dimensional coordinates of each vertex projected onto the imaging plane of the device are acquired for the initial three-dimensional face model in the pose corresponding to the face pose data.

In step S14, texture mapping processing is performed on the initial three-dimensional face model according to the two-dimensional coordinates of each vertex and the face image, to obtain a three-dimensional face model of the face image.
In the method provided by the embodiments of the present application, an initial three-dimensional face model is acquired, and face recognition is performed on the face image to be reconstructed to obtain face pose data. Then, according to the face pose data and the projection parameter of the device that collects the face image, the three-dimensional coordinates of each vertex of the initial three-dimensional face model can be quickly converted into two-dimensional coordinates, that is, the two-dimensional coordinates of each vertex of the initial three-dimensional face model projected onto the face image are obtained; texture mapping processing is then performed according to the two-dimensional coordinates to obtain the reconstructed three-dimensional face model. This simplifies the reconstruction process, requires little computation, and improves the efficiency of three-dimensional face reconstruction.
In a possible implementation, performing face recognition on the face image to obtain the face pose data includes:

performing face recognition on the face image, and using the recognized position and orientation of the face in the face image as the face pose data.
In a possible implementation, performing face recognition on the face image and using the recognized position and orientation of the face in the face image as the face pose data includes:

performing face recognition on the face image by using a face recognition algorithm to obtain a displacement matrix and a rotation matrix, where the displacement matrix is used to represent the position of the face in three-dimensional space when the device collects the face image, and the rotation matrix is used to represent the orientation of the face in three-dimensional space when the device collects the face image; and using the matrix obtained by multiplying the displacement matrix and the rotation matrix as the face pose data.
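As a concrete illustration of composing the face pose data, the displacement (translation) matrix and the rotation matrix can be multiplied into a single 4x4 pose matrix. The sketch below uses Python with NumPy; the function name `pose_matrix` and the rotate-then-translate composition order are illustrative assumptions, not details fixed by the present application.

```python
import numpy as np

def pose_matrix(rotation, translation):
    """Compose the face pose matrix as the product of the displacement
    (translation) matrix and the rotation matrix."""
    R = np.eye(4)
    R[:3, :3] = rotation       # 3x3 rotation: orientation of the face
    T = np.eye(4)
    T[:3, 3] = translation     # 3-vector: position of the face in 3D space
    return T @ R               # rotate first, then translate into camera space

# Identity orientation, face 50 units in front of the camera along -z
M = pose_matrix(np.eye(3), [0.0, 0.0, -50.0])
```

The resulting matrix `M` plays the role of the face pose data that is later multiplied with each vertex's three-dimensional coordinates.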
In a possible implementation, the projection parameter includes a projection matrix. Accordingly, acquiring, according to the face pose data, the projection parameter of the device that collects the face image, and the three-dimensional coordinates of each vertex of the initial three-dimensional face model, the two-dimensional coordinates of each vertex projected onto the imaging plane of the device when the initial three-dimensional face model is in the pose corresponding to the face pose data includes:

for each of the vertices, multiplying the three-dimensional coordinates of the vertex by the matrix and the projection matrix to obtain the two-dimensional coordinates of the vertex.
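The per-vertex multiplication can be sketched as follows. The sketch assumes homogeneous coordinates, an OpenGL-style perspective projection matrix, and a final viewport transform to pixel coordinates — conventions chosen for illustration, not prescribed by the present application; `perspective` and `project_vertices` are hypothetical names.

```python
import numpy as np

def perspective(fov_y_deg, aspect, near, far):
    """OpenGL-style perspective projection matrix (an assumed convention)."""
    f = 1.0 / np.tan(np.radians(fov_y_deg) / 2.0)
    P = np.zeros((4, 4))
    P[0, 0] = f / aspect
    P[1, 1] = f
    P[2, 2] = (far + near) / (near - far)
    P[2, 3] = 2.0 * far * near / (near - far)
    P[3, 2] = -1.0
    return P

def project_vertices(vertices, pose, projection, width, height):
    """Multiply each vertex's 3D coordinates by the pose matrix and the
    projection matrix, then map clip space to pixel coordinates."""
    n = vertices.shape[0]
    homo = np.hstack([vertices, np.ones((n, 1))])    # homogeneous coordinates
    clip = homo @ pose.T @ projection.T              # pose, then projection
    ndc = clip[:, :2] / clip[:, 3:4]                 # perspective divide
    x = (ndc[:, 0] * 0.5 + 0.5) * width
    y = (1.0 - (ndc[:, 1] * 0.5 + 0.5)) * height     # image y axis points down
    return np.stack([x, y], axis=1)
```

A vertex at the pose origin projects to the image center, which matches the intuition that the pose matrix places the model in front of the camera before the projection matrix maps it onto the imaging plane.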
In a possible implementation, performing texture mapping processing on the initial three-dimensional face model according to the two-dimensional coordinates of each vertex and the face image to obtain the three-dimensional face model of the face image includes:

collecting texture data from the face image according to the two-dimensional coordinates of each vertex; and performing texture mapping processing on the initial three-dimensional face model according to the collected texture data, to obtain the three-dimensional face model.
In a possible implementation, after the texture mapping processing is performed on the initial three-dimensional face model according to the two-dimensional coordinates of each vertex and the face image to obtain the three-dimensional face model of the face image, the method further includes:

rendering the three-dimensional face model onto the face image according to the face pose data and the three-dimensional coordinates and two-dimensional coordinates of each vertex.
In a possible implementation, rendering the three-dimensional face model onto the face image according to the face pose data and the three-dimensional coordinates and two-dimensional coordinates of each vertex includes:

rendering the three-dimensional face model according to the face pose data and the three-dimensional coordinates and two-dimensional coordinates of each vertex; and overlaying the rendered image on the face image.
In a possible implementation, before the three-dimensional face model is rendered onto the face image according to the face pose data and the three-dimensional coordinates and two-dimensional coordinates of each vertex, the method further includes: acquiring, according to animation data and the three-dimensional coordinates of each vertex, the displaced three-dimensional coordinates of each vertex when the three-dimensional face model is driven to make a corresponding expression or action according to the animation data.

Accordingly, rendering the three-dimensional face model onto the face image according to the face pose data and the three-dimensional coordinates and two-dimensional coordinates of each vertex includes: rendering the three-dimensional face model onto the face image according to the face pose data, the displaced three-dimensional coordinates of each vertex, and the two-dimensional coordinates of each vertex.
In a possible implementation, after the texture mapping processing is performed on the initial three-dimensional face model according to the two-dimensional coordinates of each vertex and the face image to obtain the three-dimensional face model of the face image, the method further includes:

driving, according to animation data, the three-dimensional face model to make an expression or action corresponding to the animation data.
All the above optional technical solutions may be combined in any manner to form optional embodiments of the present application, and details are not repeated here.
Fig. 2 is a flowchart of another three-dimensional face reconstruction method according to an exemplary embodiment. As shown in Fig. 2, the three-dimensional face reconstruction method is used in an electronic device and includes the following steps:

In step S21, an initial three-dimensional face model and a face image are acquired.

The initial three-dimensional face model may be a standard three-dimensional face model (or a general three-dimensional face model), and the face image refers to an image including a face, that is, an image on which three-dimensional face reconstruction is to be performed.
As for the acquisition process of the initial three-dimensional face model, the initial three-dimensional face model may be constructed by the electronic device, or may be constructed by another device and then sent to the electronic device, so that the electronic device can acquire the initial three-dimensional face model. For example, the electronic device may construct the initial three-dimensional face model in advance or obtain it from another device and store it locally; in step S21, the electronic device may then acquire the initial three-dimensional face model from local storage. Of course, the electronic device may also construct the initial three-dimensional face model or obtain it from another device at the current time, which is not limited in the embodiments of the present application.
在一种可能实现方式中,该初始三维人脸模型的构建过程可以包括:从人脸图像数据库中获取人脸图像,提取人脸图像的人脸特征点,基于人脸特征点生成该初始三维人脸模型。其中,该人脸特征点包括但不限于人脸中表征眉毛、鼻子、眼睛、嘴巴和脸外轮廓等特征的关键点。人脸特征点可以通过人脸检测软件开发工具包(Software Development Kit,SDK)对人脸图像进行人脸检测得到。In a possible implementation manner, the construction process of the initial three-dimensional face model may include: obtaining a face image from a face image database, extracting face feature points of the face image, and generating the initial three-dimensional based on the face feature points Face model. The face feature points include, but are not limited to, key points in the face that characterize eyebrows, nose, eyes, mouth, and contours of the face. Face feature points can be obtained by performing face detection on face images through a face detection software development kit (Software Development Kit, SDK).
A three-dimensional model is usually represented by a set of points in three-dimensional space together with the triangular faces formed by connecting those points; these points are called vertices. Once the initial three-dimensional face model has been constructed, the three-dimensional coordinates of each of its vertices can be obtained, that is, the position coordinates (V.POS) of the vertices in three-dimensional space.
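The vertices-plus-triangles representation described above can be sketched minimally as follows (the coordinate values and face indices here are placeholders for illustration only, not an actual face model):

```python
import numpy as np

# A minimal triangle-mesh representation: vertex positions (V.POS) plus
# triangular faces given as index triples into the vertex array.
vertices = np.array([
    [0.0, 0.0, 0.0],
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
], dtype=np.float32)          # shape (N, 3): one 3D coordinate per vertex

triangles = np.array([
    [0, 1, 2],
    [0, 1, 3],
], dtype=np.int32)            # shape (M, 3): each row is one triangular face

# Basic sanity checks: 3D positions, and all face indices reference a vertex.
assert vertices.shape[1] == 3
assert triangles.max() < len(vertices)
```

A real initial face model would carry thousands of such vertices; the structure is the same.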
Regarding how the face image is obtained: the face image may be a face image captured in real time by the camera of the electronic device. For example, the electronic device may capture face images through a camera module (such as a camera), and for each captured frame, the subsequent steps S22 to S25 may be performed, so that a real-time three-dimensional reconstruction result is obtained for every frame. Of course, the face image may also be one previously captured by the electronic device, or one captured by a camera device other than the electronic device; the embodiments of the present application do not specifically limit the source of the face image.
In step S22, face recognition is performed on the face image to obtain face pose data.

In the embodiments of the present application, the face image contains a face, and when the electronic device or another device captures the face image, that face has a corresponding pose. In one possible implementation, step S22 may include: performing face recognition on the face image, and using the recognized position and orientation of the face in the face image as the face pose data.

Here, the position and orientation of the face may be the position and orientation of the face in three-dimensional space at the moment the device captured the face image. For example, the position of the face may be toward the left, toward the right, or in the center of the device's field of view, and the orientation of the face may be frontal, left profile, right profile, looking up, looking down, and so on.
In one possible implementation, performing face recognition on the face image may include: using a face recognition algorithm to recognize the face in the face image and obtain a displacement matrix and a rotation matrix, where the displacement matrix represents the position of the face in three-dimensional space when the device captured the face image, and the rotation matrix represents the orientation of the face in three-dimensional space at that moment; and using the matrix obtained by multiplying the displacement matrix and the rotation matrix as the face pose data. The matrix obtained by multiplying the displacement matrix and the rotation matrix (denoted as matrix M) may be a 4×4 matrix.
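The composition of the 4×4 matrix M from a rotation and a displacement can be sketched in homogeneous coordinates as follows (an illustrative sketch only; the face recognition algorithm that actually produces the rotation and displacement is not specified in this application, so the example values are invented):

```python
import numpy as np

def compose_pose_matrix(rotation, translation):
    """Combine a 3x3 rotation matrix and a 3-vector displacement into one
    4x4 homogeneous transform M = T * R, usable as face pose data."""
    R = np.eye(4)
    R[:3, :3] = rotation        # embed the rotation in a 4x4 matrix
    T = np.eye(4)
    T[:3, 3] = translation      # embed the displacement in a 4x4 matrix
    return T @ R                # applying M rotates first, then displaces

# Example: a 90-degree rotation about the z-axis plus a small displacement.
Rz = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])
M = compose_pose_matrix(Rz, [0.1, 0.2, 0.5])
```

With this convention, the upper-left 3×3 block of M is the rotation and the last column carries the displacement, which is the standard homogeneous-coordinate layout.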
In step S23, according to the face pose data, the projection parameters of the device that captured the face image, and the three-dimensional coordinates of each vertex of the initial three-dimensional face model, the two-dimensional coordinates of each vertex projected onto the imaging plane of the device are obtained, with the initial three-dimensional face model in the pose corresponding to the face pose data.

The two-dimensional coordinates of each vertex are also called texture coordinates (V.UV).
In the embodiments of the present application, after the electronic device obtains the face pose data from the face image, it can use that face pose data as the pose data of the three-dimensional face model, including the position and orientation of the model. Accordingly, the electronic device can obtain the two-dimensional coordinates of each vertex after the three-dimensional face model in that pose is projected. Since the face image was captured by the device, that is, formed by imaging the face on the device's imaging plane, the two-dimensional coordinates of each vertex projected onto the imaging plane of the device are also the two-dimensional coordinates of each vertex projected onto the face image. This establishes a point-to-point mapping between the three-dimensional face model and the face image.

In one possible implementation, the projection parameters include a projection matrix (denoted as matrix P). If the face image was captured by the electronic device, the projection matrix refers to the projection matrix of the electronic device's camera module; if the face image was captured by a camera device other than the electronic device, the projection matrix refers to that camera device's projection matrix.

In the case where step S22 uses the matrix obtained by multiplying the displacement matrix and the rotation matrix as the face pose data, step S23 may include: for each of the vertices, multiplying the three-dimensional coordinates of the vertex by that matrix and by the projection matrix to obtain the two-dimensional coordinates of the vertex. That is, V.UV = P*M*V.POS.
By multiplying the three-dimensional coordinates of each vertex of the initial three-dimensional face model by the matrix representing the pose, and then by the projection matrix of the device that captured the face image, the two-dimensional coordinates of each vertex of the initial three-dimensional face model after camera projection in that pose can be obtained, that is, the coordinates at which the vertices of the initial three-dimensional face model project onto the face image; each vertex of the initial three-dimensional face model can thus be mapped to a pixel of the face image. Obtaining the two-dimensional coordinates of each vertex of the three-dimensional face model by projection in this way is fast and computationally cheap.
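The projection V.UV = P*M*V.POS described above can be sketched as follows, using homogeneous coordinates with a perspective divide (the identity matrices used in the example call stand in for a real camera projection matrix and pose matrix, which would come from the capture device and from step S22):

```python
import numpy as np

def project_vertices(P, M, positions):
    """Project 3D vertex positions (V.POS) to 2D coordinates (V.UV)
    via V.UV = P * M * V.POS in homogeneous coordinates.

    P: 4x4 projection matrix of the capture device (assumed given).
    M: 4x4 face pose matrix (displacement * rotation).
    positions: (N, 3) array of vertex coordinates.
    Returns an (N, 2) array of projected coordinates.
    """
    n = positions.shape[0]
    homo = np.hstack([positions, np.ones((n, 1))])  # (N, 4) homogeneous points
    clip = (P @ M @ homo.T).T                       # (N, 4) transformed points
    return clip[:, :2] / clip[:, 3:4]               # perspective divide

# Illustrative call with an identity pose and a trivial projection matrix.
P = np.eye(4)
M = np.eye(4)
uv = project_vertices(P, M, np.array([[0.5, 0.5, 1.0]]))
```

Because the whole operation is two matrix multiplications per frame (batched over all vertices), it matches the low-cost, high-speed character the application attributes to this step.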
In step S24, texture mapping is performed on the initial three-dimensional face model according to the two-dimensional coordinates of each vertex and the face image, to obtain a three-dimensional face model of the face image.

In the embodiments of the present application, the initial three-dimensional face model may be any standard face model and does not carry the texture information of the face in the face image. After the electronic device obtains the pose data and the two-dimensional coordinates of each vertex of the initial three-dimensional face model through the above steps, it can perform texture mapping on the initial model to obtain a three-dimensional face model with texture information.

In one possible implementation, step S24 may include: collecting texture data from the face image according to the two-dimensional coordinates of each vertex; and performing texture mapping on the initial three-dimensional face model according to the collected texture data, to obtain the three-dimensional face model.
The texture data of the face image is sampled according to the two-dimensional coordinates of each vertex and used as the texture map of the initial three-dimensional face model. Once the mapping is complete, the three-dimensional face reconstruction is complete, and the resulting reconstruction is taken as the three-dimensional face model of the face image. By using a mature face recognition algorithm to obtain the pose data, the three-dimensional face model can be reconstructed with simple computation; the computational cost is low and the speed is high, which improves the efficiency of three-dimensional face reconstruction.
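Collecting per-vertex texture data from the face image at the projected coordinates can be sketched as follows (nearest-neighbor sampling is used here for brevity and is an assumption; a real texture-mapping pipeline would typically interpolate across each triangle during rendering):

```python
import numpy as np

def sample_texture(face_image, uv):
    """Sample per-vertex colors from a face image at projected coordinates.

    face_image: (H, W, 3) image array.
    uv: (N, 2) vertex coordinates in pixels as (x, y); nearest-neighbor
        lookup, clamped to the image bounds.
    Returns an (N, 3) array: one sampled color per vertex.
    """
    h, w = face_image.shape[:2]
    x = np.clip(np.round(uv[:, 0]).astype(int), 0, w - 1)
    y = np.clip(np.round(uv[:, 1]).astype(int), 0, h - 1)
    return face_image[y, x]

# Illustrative call on a tiny synthetic "image" with one red pixel.
img = np.zeros((4, 4, 3), dtype=np.uint8)
img[1, 2] = [255, 0, 0]
colors = sample_texture(img, np.array([[2.0, 1.0]]))
```

Each vertex thus picks up the color of the face-image pixel it projects onto, which is exactly the point-to-point mapping the texture mapping step relies on.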
In step S25, the three-dimensional face model is rendered onto the face image according to the face pose data and the three-dimensional and two-dimensional coordinates of each vertex.

In the embodiments of the present application, after obtaining the reconstructed three-dimensional face model, the electronic device can display the reconstruction result by rendering.

In one possible implementation, step S25 may include: rendering the three-dimensional face model according to the face pose data and the three-dimensional and two-dimensional coordinates of each vertex; and overlaying the rendered image onto the face image.

Using three-dimensional rendering techniques, the electronic device can render the three-dimensional face model onto the face image based on the face pose data and the three-dimensional and two-dimensional coordinates of each vertex of the model. The point-to-point mapping between each vertex of the three-dimensional face model and the pixels of the face image allows the rendering of the model to blend naturally into the face image, without a visible seam.
In one possible implementation, after obtaining the three-dimensional face model of the face image, the electronic device may, according to animation data, drive the three-dimensional face model to make the expression or action corresponding to that animation data. The animation data is used to make the face show an expression or perform an action, such as opening the mouth or sticking out the tongue; the embodiments of the present application do not limit the expressions or actions the animation data may correspond to.

The animation data may be obtained by the electronic device in advance; for example, the electronic device may obtain the animation data when it obtains the initial three-dimensional face model. The electronic device may use techniques such as skeletal animation or vertex animation to produce the animation data. The basic principle of skeletal animation, vertex animation, and similar techniques is to displace the vertices of a model over time. By obtaining the animation data before reconstructing the three-dimensional face model, and applying the pre-obtained animation data directly to the model after reconstruction, animation retargeting is achieved and the animation can be adapted to the expressions or actions of most people.

In one possible implementation, before rendering the three-dimensional face model onto the face image, the electronic device may, according to the animation data and the three-dimensional coordinates of each vertex, obtain the displaced three-dimensional coordinates of each vertex when the model is driven by the animation data to make the corresponding expression or action. Accordingly, step S25 may include: rendering the three-dimensional face model onto the face image according to the face pose data, the displaced three-dimensional coordinates of each vertex, and the two-dimensional coordinates of each vertex.
When the electronic device drives the three-dimensional face model according to the animation data, the vertices of the model are displaced. The electronic device can render the three-dimensional face model according to the displaced three-dimensional coordinates of each vertex, so that the rendering result is a face making the corresponding expression or action, thereby achieving retargeting of the expression or action.
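The per-vertex displacement driven by animation data can be sketched as a time-varying offset added to the base vertex positions (a minimal vertex-animation illustration; skeletal animation would instead blend bone transforms, and the keyframe offsets used here are invented for the example):

```python
import numpy as np

def animate_vertices(base_positions, offsets_t0, offsets_t1, t):
    """Displace base vertex positions by linearly interpolating between
    two keyframes of per-vertex offsets, for a time t in [0, 1]."""
    offsets = (1.0 - t) * offsets_t0 + t * offsets_t1
    return base_positions + offsets

# Example: one "jaw" vertex moving downward over the animation, as in a
# mouth-opening expression.
base = np.array([[0.0, 0.0, 0.0]])
k0 = np.zeros((1, 3))                 # keyframe at t=0: no displacement
k1 = np.array([[0.0, -0.3, 0.0]])     # keyframe at t=1: mouth-open offset
mid = animate_vertices(base, k0, k1, 0.5)
```

Because the animation data is expressed as offsets on the shared vertex layout rather than on any particular person's face, applying it to a newly reconstructed model is what makes the retargeting described above possible.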
Taking as an example a face image captured by the camera of the electronic device, the technical solution provided by the embodiments of the present application can reconstruct a three-dimensional face model in real time with high performance and combine it with pre-made animation data, so that the face moves according to the pre-made animation, performing expressions and actions such as opening the mouth or sticking out the tongue, while blending well with the face image.
Fig. 3 is a block diagram of a three-dimensional face reconstruction apparatus according to an exemplary embodiment. Referring to Fig. 3, the apparatus includes an acquisition module 301, a recognition module 302, and a processing module 303.

The acquisition module 301 is configured to obtain an initial three-dimensional face model and a face image;

the recognition module 302 is configured to perform face recognition on the face image to obtain face pose data;

the acquisition module 301 is further configured to obtain, according to the face pose data, the projection parameters of the device that captured the face image, and the three-dimensional coordinates of each vertex of the initial three-dimensional face model, the two-dimensional coordinates of each vertex projected onto the imaging plane of the device, with the initial three-dimensional face model in the pose corresponding to the face pose data;

the processing module 303 is configured to perform texture mapping on the initial three-dimensional face model according to the two-dimensional coordinates of each vertex and the face image, to obtain a three-dimensional face model of the face image.
In one possible implementation, the recognition module 302 is configured to perform face recognition on the face image and use the recognized position and orientation of the face in the face image as the face pose data.

In one possible implementation, the recognition module 302 is configured to:

use a face recognition algorithm to perform face recognition on the face image and obtain a displacement matrix and a rotation matrix, where the displacement matrix represents the position of the face in three-dimensional space when the device captured the face image, and the rotation matrix represents the orientation of the face in three-dimensional space at that moment; and

use the matrix obtained by multiplying the displacement matrix and the rotation matrix as the face pose data.
In one possible implementation, the projection parameters include a projection matrix, and accordingly the acquisition module 301 is configured to, for each of the vertices, multiply the three-dimensional coordinates of the vertex by that matrix and by the projection matrix to obtain the two-dimensional coordinates of the vertex.

In one possible implementation, the processing module 303 is configured to:

collect texture data from the face image according to the two-dimensional coordinates of each vertex; and

perform texture mapping on the initial three-dimensional face model according to the collected texture data, to obtain the three-dimensional face model.
In one possible implementation, referring to Fig. 4, the apparatus further includes:

a rendering module 304, configured to render the three-dimensional face model onto the face image according to the face pose data and the three-dimensional and two-dimensional coordinates of each vertex.

In one possible implementation, the rendering module 304 is configured to render the three-dimensional face model according to the face pose data and the three-dimensional and two-dimensional coordinates of each vertex, and overlay the rendered image onto the face image.

In one possible implementation, the acquisition module 301 is further configured to obtain, according to animation data and the three-dimensional coordinates of each vertex, the displaced three-dimensional coordinates of each vertex when the three-dimensional face model is driven by the animation data to make the corresponding expression or action;

the rendering module 304 is configured to render the three-dimensional face model onto the face image according to the face pose data, the displaced three-dimensional coordinates of each vertex, and the two-dimensional coordinates of each vertex.
In one possible implementation, referring to Fig. 5, the apparatus further includes:

a driving module 305, configured to drive the three-dimensional face model, according to animation data, to make the expression or action corresponding to the animation data.

With regard to the apparatus in the above embodiment, the specific manner in which each module performs its operations has been described in detail in the embodiments of the related method and will not be elaborated here.
Fig. 6 is a block diagram of an electronic device 600 according to an exemplary embodiment. The electronic device 600 may be a smartphone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop computer, or a desktop computer. The electronic device 600 may also be called a user device, a portable electronic device, a laptop electronic device, a desktop electronic device, or other names.
Generally, the electronic device 600 includes:

a processor 601; and

a memory 602 for storing instructions executable by the processor 601;

wherein the processor 601 is configured to:

obtain an initial three-dimensional face model and a face image;

perform face recognition on the face image to obtain face pose data;

obtain, according to the face pose data, the projection parameters of the device that captured the face image, and the three-dimensional coordinates of each vertex of the initial three-dimensional face model, the two-dimensional coordinates of each vertex projected onto the imaging plane of the device, with the initial three-dimensional face model in the pose corresponding to the face pose data; and

perform texture mapping on the initial three-dimensional face model according to the two-dimensional coordinates of each vertex and the face image, to obtain a three-dimensional face model of the face image.
In one possible implementation, the processor 601 is specifically configured to:

perform face recognition on the face image and use the recognized position and orientation of the face in the face image as the face pose data.

In one possible implementation, the processor 601 is specifically configured to:

use a face recognition algorithm to perform face recognition on the face image and obtain a displacement matrix and a rotation matrix, where the displacement matrix represents the position of the face in three-dimensional space when the device captured the face image, and the rotation matrix represents the orientation of the face in three-dimensional space at that moment; and

use the matrix obtained by multiplying the displacement matrix and the rotation matrix as the face pose data.

In one possible implementation, the projection parameters include a projection matrix, and accordingly the processor 601 is specifically configured to, for each of the vertices, multiply the three-dimensional coordinates of the vertex by that matrix and by the projection matrix to obtain the two-dimensional coordinates of the vertex.

In one possible implementation, the processor 601 is specifically configured to:

collect texture data from the face image according to the two-dimensional coordinates of each vertex; and

perform texture mapping on the initial three-dimensional face model according to the collected texture data, to obtain the three-dimensional face model.
In one possible implementation, the processor 601 is further configured to:

render the three-dimensional face model onto the face image according to the face pose data and the three-dimensional and two-dimensional coordinates of each vertex.

In one possible implementation, the processor 601 is specifically configured to:

render the three-dimensional face model according to the face pose data and the three-dimensional and two-dimensional coordinates of each vertex, and overlay the rendered image onto the face image.

In one possible implementation, the processor 601 is further configured to:

obtain, according to animation data and the three-dimensional coordinates of each vertex, the displaced three-dimensional coordinates of each vertex when the three-dimensional face model is driven by the animation data to make the corresponding expression or action; and

render the three-dimensional face model onto the face image according to the face pose data, the displaced three-dimensional coordinates of each vertex, and the two-dimensional coordinates of each vertex.

In one possible implementation, the processor 601 is further configured to:

drive the three-dimensional face model, according to animation data, to make the expression or action corresponding to the animation data.
The processor 601 may include one or more processing cores, for example a 4-core or 8-core processor. The processor 601 may be implemented in at least one of the following hardware forms: digital signal processing (DSP), field-programmable gate array (FPGA), or programmable logic array (PLA). The processor 601 may also include a main processor and a coprocessor. The main processor is a processor for processing data in the awake state, also called a central processing unit (CPU); the coprocessor is a low-power processor for processing data in the standby state. In some embodiments, the processor 601 may be integrated with a graphics processing unit (GPU), which is responsible for rendering and drawing the content that the display screen needs to display. In some embodiments, the processor 601 may further include an artificial intelligence (AI) processor for handling computing operations related to machine learning.
The memory 602 may include one or more computer-readable storage media, which may be non-transitory. The memory 602 may also include high-speed random access memory and non-volatile memory, such as one or more magnetic disk storage devices or flash memory storage devices. In some embodiments, the non-transitory computer-readable storage medium in the memory 602 is used to store at least one instruction, and the at least one instruction is executed by the processor 601 to implement the three-dimensional face reconstruction method provided by the method embodiments of the present application.
In some embodiments, the electronic device 600 may optionally further include a peripheral device interface 603 and at least one peripheral device. The processor 601, the memory 602, and the peripheral device interface 603 may be connected by a bus or signal lines. Each peripheral device may be connected to the peripheral device interface 603 through a bus, a signal line, or a circuit board. Specifically, the peripheral devices include at least one of a radio frequency circuit 604, a display screen 605, a camera 606, an audio circuit 607, a positioning component 608, and a power supply 609.

The peripheral device interface 603 may be used to connect at least one input/output (I/O) related peripheral device to the processor 601 and the memory 602. In some embodiments, the processor 601, the memory 602, and the peripheral device interface 603 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 601, the memory 602, and the peripheral device interface 603 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The radio frequency circuit 604 is used to receive and transmit radio frequency (RF) signals, also called electromagnetic signals. The radio frequency circuit 604 communicates with communication networks and other communication devices through electromagnetic signals. The radio frequency circuit 604 converts electrical signals into electromagnetic signals for transmission, or converts received electromagnetic signals into electrical signals. Optionally, the radio frequency circuit 604 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so on. The radio frequency circuit 604 can communicate with other electronic devices through at least one wireless communication protocol, including but not limited to metropolitan area networks, mobile communication networks of various generations (2G, 3G, 4G, and 5G), wireless local area networks, and/or wireless fidelity (WiFi) networks. In some embodiments, the radio frequency circuit 604 may also include circuits related to near field communication (NFC), which is not limited in the present application.
显示屏605用于显示用户界面(User Interface,UI)。该UI可以包括图形、文本、图标、视频及其它们的任意组合。当显示屏605是触摸显示屏时,显示屏605还具有采集在显示屏605的表面或表面上方的触摸信号的能力。该触摸信号可以作为控制信号输入至处理器601进行处理。此时,显示屏605还可以用于提供虚拟按钮和/或虚拟键盘,也称软按钮和/或软键盘。在一些实施例中,显示屏605可以为一个,设置电子设备600的前面板;在另一些实施例中,显示屏605可以为至少两个,分别设置在电子设备600的不同表面或呈折叠设计;在再一些实施例中,显示屏605可以是柔性显示屏,设置在电子设备600的弯曲表面上或折叠面上。甚至,显示屏605还可以设置成非矩形的不规则图形,也即异形屏。显示屏605可以采用液晶显示屏(Liquid Crystal Display,LCD)、有机发光二极管(Organic Light-Emitting Diode,OLED)等材质制备。The display screen 605 is used to display a user interface (User Interface, UI). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 605 is a touch display screen, the display screen 605 also has the ability to collect touch signals on or above its surface. The touch signal may be input to the processor 601 as a control signal for processing. In this case, the display screen 605 may also be used to provide virtual buttons and/or a virtual keyboard, also called soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 605, disposed on the front panel of the electronic device 600; in other embodiments, there may be at least two display screens 605, respectively disposed on different surfaces of the electronic device 600 or in a folded design; in still other embodiments, the display screen 605 may be a flexible display screen, disposed on a curved or folding surface of the electronic device 600. The display screen 605 may even be set as a non-rectangular irregular shape, that is, a special-shaped screen. The display screen 605 may be made of materials such as a liquid crystal display (Liquid Crystal Display, LCD) or an organic light-emitting diode (Organic Light-Emitting Diode, OLED).
摄像头组件606用于采集图像或视频。可选地,摄像头组件606包括前置摄像头和后置摄像头。通常,前置摄像头设置在电子设备的前面板,后置摄像头设置在电子设备的背面。在一些实施例中,后置摄像头为至少两个,分别为主摄像头、景深摄像头、广角摄像头、长焦摄像头中的任意一种,以实现主摄像头和景深摄像头融合实现背景虚化功能、主摄像头和广角摄像头融合实现全景拍摄以及虚拟现实(Virtual Reality,VR)拍摄功能或者其它融合拍摄功能。在一些实施例中,摄像头组件606还可以包括闪光灯。闪光灯可以是单色温闪光灯,也可以是双色温闪光灯。双色温闪光灯是指暖光闪光灯和冷光闪光灯的组合,可以用于不同色温下的光线补偿。The camera assembly 606 is used to collect images or videos. Optionally, the camera assembly 606 includes a front camera and a rear camera. Generally, the front camera is disposed on the front panel of the electronic device, and the rear camera is disposed on the back of the electronic device. In some embodiments, there are at least two rear cameras, each of which is any one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so as to realize a background blur function by fusing the main camera and the depth-of-field camera, panoramic shooting and virtual reality (Virtual Reality, VR) shooting by fusing the main camera and the wide-angle camera, or other fusion shooting functions. In some embodiments, the camera assembly 606 may further include a flash. The flash may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash refers to a combination of a warm-light flash and a cold-light flash, which can be used for light compensation at different color temperatures.
音频电路607可以包括麦克风和扬声器。麦克风用于采集用户及环境的声波,并将声波转换为电信号输入至处理器601进行处理,或者输入至射频电路604以实现语音通信。出于立体声采集或降噪的目的,麦克风可以为多个,分别设置在电子设备600的不同部位。麦克风还可以是阵列麦克风或全向采集型麦克风。扬声器则用于将来自处理器601或射频电路604的电信号转换为声波。扬声器可以是传统的薄膜扬声器,也可以是压电陶瓷扬声器。当扬声器是压电陶瓷扬声器时,不仅可以将电信号转换为人类可听见的声波,也可以将电信号转换为人类听不见的声波以进行测距等用途。在一些实施例中,音频电路607还可以包括耳机插孔。The audio circuit 607 may include a microphone and a speaker. The microphone is used to collect sound waves of the user and the environment, and convert the sound waves into electrical signals and input them to the processor 601 for processing, or input them to the radio frequency circuit 604 to implement voice communication. For the purpose of stereo sound collection or noise reduction, there may be multiple microphones, which are respectively disposed in different parts of the electronic device 600. The microphone can also be an array microphone or an omnidirectional acquisition microphone. The speaker is used to convert the electrical signal from the processor 601 or the radio frequency circuit 604 into sound waves. The speaker can be a traditional thin-film speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, it can not only convert electrical signals into sound waves audible by humans, but also convert electrical signals into sound waves inaudible to humans for ranging purposes. In some embodiments, the audio circuit 607 may further include a headphone jack.
定位组件608用于定位电子设备600的当前地理位置,以实现导航或基于位置的服务(Location Based Service,LBS)。定位组件608可以是基于美国的全球定位系统(Global Positioning System,GPS)、中国的北斗系统、俄罗斯的格雷纳斯系统或欧盟的伽利略系统的定位组件。The positioning component 608 is used to locate the current geographic location of the electronic device 600 to implement navigation or location-based services (Location Based Service, LBS). The positioning component 608 may be a positioning component based on the Global Positioning System (GPS) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
电源609用于为电子设备600中的各个组件进行供电。电源609可以是交流电、直流电、一次性电池或可充电电池。当电源609包括可充电电池时,该可充电电池可以支持有线充电或无线充电。该可充电电池还可以用于支持快充技术。The power supply 609 is used to supply power to various components in the electronic device 600. The power source 609 may be alternating current, direct current, disposable batteries, or rechargeable batteries. When the power supply 609 includes a rechargeable battery, the rechargeable battery may support wired charging or wireless charging. The rechargeable battery can also be used to support fast charging technology.
在一些实施例中,电子设备600还包括有一个或多个传感器610。该一个或多个传感器610包括但不限于:加速度传感器611、陀螺仪传感器612、压力传感器613、指纹传感器614、光学传感器615以及接近传感器616。In some embodiments, the electronic device 600 further includes one or more sensors 610. The one or more sensors 610 include, but are not limited to: an acceleration sensor 611, a gyro sensor 612, a pressure sensor 613, a fingerprint sensor 614, an optical sensor 615, and a proximity sensor 616.
加速度传感器611可以检测以电子设备600建立的坐标系的三个坐标轴上的加速度大小。比如,加速度传感器611可以用于检测重力加速度在三个坐标轴上的分量。处理器601可以根据加速度传感器611采集的重力加速度信号,控制触摸显示屏605以横向视图或纵向视图进行用户界面的显示。加速度传感器611还可以用于游戏或者用户的运动数据的采集。The acceleration sensor 611 can detect the magnitude of acceleration on the three coordinate axes of the coordinate system established by the electronic device 600. For example, the acceleration sensor 611 can be used to detect the components of gravitational acceleration on the three coordinate axes. The processor 601 may control the touch display 605 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 611. The acceleration sensor 611 can also be used to collect motion data for games or of the user.
陀螺仪传感器612可以检测电子设备600的机体方向及转动角度,陀螺仪传感器612可以与加速度传感器611协同采集用户对电子设备600的3D动作。处理器601根据陀螺仪传感器612采集的数据,可以实现如下功能:动作感应(比如根据用户的倾斜操作来改变UI)、拍摄时的图像稳定、游戏控制以及惯性导航。The gyro sensor 612 can detect the body direction and the rotation angle of the electronic device 600, and the gyro sensor 612 can cooperate with the acceleration sensor 611 to collect a 3D action of the user on the electronic device 600. The processor 601 can realize the following functions according to the data collected by the gyro sensor 612: motion sensing (such as changing the UI according to the user's tilt operation), image stabilization during shooting, game control, and inertial navigation.
压力传感器613可以设置在电子设备600的侧边框和/或触摸显示屏605的下层。当压力传感器613设置在电子设备600的侧边框时,可以检测用户对电子设备600的握持信号,由处理器601根据压力传感器613采集的握持信号进行左右手识别或快捷操作。当压力传感器613设置在触摸显示屏605的下层时,由处理器601根据用户对触摸显示屏605的压力操作,实现对UI界面上的可操作性控件进行控制。可操作性控件包括按钮控件、滚动条控件、图标控件、菜单控件中的至少一种。The pressure sensor 613 may be disposed on the side frame of the electronic device 600 and/or the lower layer of the touch display 605. When the pressure sensor 613 is disposed on the side frame of the electronic device 600, it can detect the user's grip signal on the electronic device 600, and the processor 601 can perform left-right hand recognition or shortcut operation according to the grip signal collected by the pressure sensor 613. When the pressure sensor 613 is disposed on the lower layer of the touch display 605, the processor 601 controls the operability control on the UI interface according to the user's pressure operation on the touch display 605. The operability control includes at least one of a button control, a scroll bar control, an icon control, and a menu control.
指纹传感器614用于采集用户的指纹,由处理器601根据指纹传感器614采集到的指纹识别用户的身份,或者,由指纹传感器614根据采集到的指纹识别用户的身份。在识别出用户的身份为可信身份时,由处理器601授权该用户执行相关的敏感操作,该敏感操作包括解锁屏幕、查看加密信息、下载软件、支付及更改设置等。指纹传感器614可以被设置电子设备600的正面、背面或侧面。当电子设备600上设置有物理按键或厂商Logo时,指纹传感器614可以与物理按键或厂商Logo集成在一起。The fingerprint sensor 614 is used to collect the user's fingerprint, and the processor 601 identifies the user's identity according to the fingerprint collected by the fingerprint sensor 614, or the fingerprint sensor 614 identifies the user's identity based on the collected fingerprint. When the user's identity is recognized as a trusted identity, the processor 601 authorizes the user to perform related sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, paying, and changing settings. The fingerprint sensor 614 may be provided on the front, back, or side of the electronic device 600. When a physical button or manufacturer logo is provided on the electronic device 600, the fingerprint sensor 614 may be integrated with the physical button or manufacturer logo.
光学传感器615用于采集环境光强度。在一个实施例中,处理器601可以根据光学传感器615采集的环境光强度,控制触摸显示屏605的显示亮度。具体地,当环境光强度较高时,调高触摸显示屏605的显示亮度;当环境光强度较低时,调低触摸显示屏605的显示亮度。在另一个实施例中,处理器601还可以根据光学传感器615采集的环境光强度,动态调整摄像头组件606的拍摄参数。The optical sensor 615 is used to collect the ambient light intensity. In one embodiment, the processor 601 can control the display brightness of the touch display 605 according to the ambient light intensity collected by the optical sensor 615. Specifically, when the ambient light intensity is high, the display brightness of the touch display 605 is increased; when the ambient light intensity is low, the display brightness of the touch display 605 is decreased. In another embodiment, the processor 601 may also dynamically adjust the shooting parameters of the camera assembly 606 according to the ambient light intensity collected by the optical sensor 615.
接近传感器616,也称距离传感器,通常设置在电子设备600的前面板。接近传感器616用于采集用户与电子设备600的正面之间的距离。在一个实施例中,当接近传感器616检测到用户与电子设备600的正面之间的距离逐渐变小时,由处理器601控制触摸显示屏605从亮屏状态切换为息屏状态;当接近传感器616检测到用户与电子设备600的正面之间的距离逐渐变大时,由处理器601控制触摸显示屏605从息屏状态切换为亮屏状态。The proximity sensor 616, also called a distance sensor, is usually disposed on the front panel of the electronic device 600. The proximity sensor 616 is used to collect the distance between the user and the front of the electronic device 600. In one embodiment, when the proximity sensor 616 detects that the distance between the user and the front of the electronic device 600 gradually decreases, the processor 601 controls the touch display 605 to switch from the screen-on state to the screen-off state; when the proximity sensor 616 detects that the distance between the user and the front of the electronic device 600 gradually increases, the processor 601 controls the touch display 605 to switch from the screen-off state to the screen-on state.
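As an illustration of the sensor-driven display control described above, the following sketch shows how a controller such as the processor 601 might map readings from the optical sensor 615 and the proximity sensor 616 to display state. This is not code from the patent; the function names, thresholds, and brightness scaling are illustrative assumptions.

```python
# Hedged sketch (assumptions, not the patent's implementation): mapping
# ambient-light and proximity readings to display brightness and screen state.

def display_brightness(ambient_lux: float, max_brightness: int = 255) -> int:
    """Scale display brightness with ambient light intensity (clamped)."""
    level = int(max_brightness * min(ambient_lux / 1000.0, 1.0))
    return max(level, 10)  # keep a minimal readable brightness

def screen_state(prev_distance_cm: float, distance_cm: float, state: str) -> str:
    """Switch the screen off as the user approaches, back on as they move away."""
    if distance_cm < prev_distance_cm and distance_cm < 5.0:
        return "off"   # distance gradually decreasing toward the front panel
    if distance_cm > prev_distance_cm and distance_cm > 10.0:
        return "on"    # distance gradually increasing
    return state       # otherwise keep the current state
```

The hysteresis between the 5 cm and 10 cm thresholds is a design choice made here to avoid flicker when the reading hovers near a single cutoff.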
本领域技术人员可以理解,图6中示出的结构并不构成对电子设备600的限定,可以包括比图示更多或更少的组件,或者组合某些组件,或者采用不同的组件布置。Those skilled in the art may understand that the structure shown in FIG. 6 does not constitute a limitation on the electronic device 600, and may include more or fewer components than shown, or combine certain components, or adopt different component arrangements.
在示例性实施例中,还提供了一种非临时性计算机可读存储介质,当该存储介质中的指令由电子设备的处理器执行时,使得电子设备能够执行上述任意一种人脸三维重建方法,例如图1所示的方法或图2所示的方法等。In an exemplary embodiment, a non-transitory computer-readable storage medium is also provided. When instructions in the storage medium are executed by a processor of an electronic device, the electronic device is enabled to perform any one of the above three-dimensional face reconstruction methods, for example, the method shown in FIG. 1 or the method shown in FIG. 2.
例如,该非临时性计算机可读存储介质可以是只读内存(Read-Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、只读光盘(Compact Disc Read-Only Memory,CD-ROM)、磁带、软盘和光数据存储设备等。For example, the non-transitory computer-readable storage medium may be a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a compact disc read-only memory (Compact Disc Read-Only Memory, CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, or the like.
在示例性实施例中,还提供了一种应用程序产品,当该应用程序产品中的指令由电子设备的处理器执行时,使得电子设备能够执行上述任意一种人脸三维重建方法,例如图1所示的方法或图2所示的方法等。In an exemplary embodiment, an application program product is also provided. When instructions in the application program product are executed by a processor of an electronic device, the electronic device is enabled to perform any one of the above three-dimensional face reconstruction methods, for example, the method shown in FIG. 1 or the method shown in FIG. 2.
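The texture data collection referenced in these methods (sampling the face image at each vertex's projected two-dimensional coordinate) can be sketched as follows. This is an illustrative assumption, not the patent's implementation; nearest-neighbor sampling and the clamping behavior are choices made here for brevity.

```python
import numpy as np

# Hedged sketch: collect per-vertex texture data by indexing the face image at
# each vertex's projected 2D (pixel) coordinate, as in the texture-mapping step.

def sample_texture(face_image: np.ndarray, vertex_uv: np.ndarray) -> np.ndarray:
    """Return an Nx3 array of colors sampled at Nx2 pixel coordinates (x, y)."""
    h, w = face_image.shape[:2]
    xs = np.clip(np.round(vertex_uv[:, 0]).astype(int), 0, w - 1)  # column index
    ys = np.clip(np.round(vertex_uv[:, 1]).astype(int), 0, h - 1)  # row index
    return face_image[ys, xs]

# tiny example: a 4x4 image with one red pixel at (x=2, y=1)
img = np.zeros((4, 4, 3), dtype=np.uint8)
img[1, 2] = [255, 0, 0]
colors = sample_texture(img, np.array([[2.0, 1.0], [10.0, 10.0]]))
```

Out-of-range coordinates are clamped to the image border here; a production implementation might instead use bilinear interpolation or discard occluded vertices.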

Claims (20)

  1. 一种人脸三维重建方法,包括:A three-dimensional face reconstruction method, comprising:
    获取初始三维人脸模型和人脸图像;Obtain the initial three-dimensional face model and face image;
    对所述人脸图像进行人脸识别,得到人脸姿态数据;Performing face recognition on the face image to obtain face pose data;
    根据所述人脸姿态数据、采集所述人脸图像的设备的投影参数以及所述初始三维人脸模型的各个顶点的三维坐标,获取所述初始三维人脸模型在所述人脸姿态数据对应的姿态下,所述各个顶点投影到所述设备的成像平面上的二维坐标;Acquiring, according to the face pose data, projection parameters of a device that collects the face image, and three-dimensional coordinates of each vertex of the initial three-dimensional face model, two-dimensional coordinates of each vertex projected onto an imaging plane of the device when the initial three-dimensional face model is in a pose corresponding to the face pose data;
    根据所述各个顶点的二维坐标和所述人脸图像,对所述初始三维人脸模型进行纹理贴图处理,得到所述人脸图像的三维人脸模型。According to the two-dimensional coordinates of each vertex and the face image, texture mapping processing is performed on the initial three-dimensional face model to obtain a three-dimensional face model of the face image.
  2. 根据权利要求1所述的人脸三维重建方法,所述对所述人脸图像进行人脸识别,得到人脸姿态数据,包括:The three-dimensional face reconstruction method according to claim 1, the performing face recognition on the face image to obtain face pose data includes:
    对所述人脸图像进行人脸识别,将识别得到的所述人脸图像中人脸的位置和朝向作为所述人脸姿态数据。Face recognition is performed on the face image, and the position and orientation of the face in the recognized face image are used as the face pose data.
  3. 根据权利要求2所述的人脸三维重建方法,所述对所述人脸图像进行人脸识别,将识别得到的所述人脸图像中人脸的位置和朝向作为所述人脸姿态数据,包括:The three-dimensional face reconstruction method according to claim 2, wherein performing face recognition on the face image and using the recognized position and orientation of the face in the face image as the face pose data includes:
    采用人脸识别算法,对所述人脸图像进行人脸识别,得到位移矩阵和旋转矩阵,所述位移矩阵用于表示所述设备在采集所述人脸图像时所述人脸在三维空间中的位置,所述旋转矩阵用于表示所述设备在采集所述人脸图像时所述人脸在三维空间中的朝向;A face recognition algorithm is used to perform face recognition on the face image to obtain a displacement matrix and a rotation matrix, where the displacement matrix is used to indicate the position of the face in three-dimensional space when the device collects the face image, and the rotation matrix is used to indicate the orientation of the face in three-dimensional space when the device collects the face image;
    将所述位移矩阵和所述旋转矩阵相乘得到的矩阵作为所述人脸姿态数据。A matrix obtained by multiplying the displacement matrix and the rotation matrix is used as the face pose data.
  4. 根据权利要求3所述的人脸三维重建方法,所述投影参数包括投影矩阵,The three-dimensional face reconstruction method according to claim 3, wherein the projection parameters include a projection matrix,
    相应地,所述根据所述人脸姿态数据、采集所述人脸图像的设备的投影参数以及所述初始三维人脸模型的各个顶点的三维坐标,获取所述初始三维人脸模型在所述人脸姿态数据对应的姿态下,所述各个顶点投影到所述设备的成像平面上的二维坐标,包括:Correspondingly, acquiring, according to the face pose data, the projection parameters of the device that collects the face image, and the three-dimensional coordinates of each vertex of the initial three-dimensional face model, the two-dimensional coordinates of each vertex projected onto the imaging plane of the device when the initial three-dimensional face model is in the pose corresponding to the face pose data includes:
    对于所述各个顶点中的每个顶点,将所述顶点的三维坐标与所述矩阵和所述投影矩阵相乘,得到所述顶点的二维坐标。For each of the vertices, the three-dimensional coordinates of the vertex are multiplied by the matrix and the projection matrix to obtain the two-dimensional coordinates of the vertex.
  5. 根据权利要求1所述的人脸三维重建方法,所述根据所述各个顶点的二维坐标和所述人脸图像,对所述初始三维人脸模型进行纹理贴图处理,得到所述人脸图像的三维人脸模型,包括:The three-dimensional face reconstruction method according to claim 1, wherein performing texture mapping processing on the initial three-dimensional face model according to the two-dimensional coordinates of each vertex and the face image to obtain the three-dimensional face model of the face image includes:
    根据所述各个顶点的二维坐标,对所述人脸图像进行纹理数据采集;Acquire texture data of the face image according to the two-dimensional coordinates of each vertex;
    根据采集到的纹理数据,对所述初始三维人脸模型进行纹理贴图处理,得到所述三维人脸模型。According to the collected texture data, texture mapping processing is performed on the initial three-dimensional face model to obtain the three-dimensional face model.
  6. 根据权利要求1所述的人脸三维重建方法,所述根据所述各个顶点的二维坐标和所述人脸图像,对所述初始三维人脸模型进行纹理贴图处理,得到所述人脸图像的三维人脸模型之后,所述方法还包括:The three-dimensional face reconstruction method according to claim 1, wherein after performing texture mapping processing on the initial three-dimensional face model according to the two-dimensional coordinates of each vertex and the face image to obtain the three-dimensional face model of the face image, the method further includes:
    根据所述人脸姿态数据、所述各个顶点的三维坐标和二维坐标,将所述三维人脸模型渲染到所述人脸图像上。The three-dimensional face model is rendered on the face image according to the face pose data, the three-dimensional coordinates and the two-dimensional coordinates of each vertex.
  7. 根据权利要求6所述的人脸三维重建方法,所述根据所述人脸姿态数据、所述各个顶点的三维坐标和二维坐标,将所述三维人脸模型渲染到所述人脸图像上,包括:The three-dimensional face reconstruction method according to claim 6, wherein rendering the three-dimensional face model onto the face image according to the face pose data, the three-dimensional coordinates, and the two-dimensional coordinates of each vertex includes:
    根据所述人脸姿态数据、所述各个顶点的三维坐标和二维坐标,对所述三维人脸模型进行渲染;Rendering the three-dimensional face model according to the face pose data, the three-dimensional coordinates and the two-dimensional coordinates of each vertex;
    将渲染得到的图像覆盖到所述人脸图像上。Overlaying the rendered image on the face image.
  8. 根据权利要求6所述的人脸三维重建方法,所述根据所述人脸姿态数据、所述各个顶点的三维坐标和二维坐标,将所述三维人脸模型渲染到所述人脸图像上之前,所述方法还包括:The three-dimensional face reconstruction method according to claim 6, wherein before rendering the three-dimensional face model onto the face image according to the face pose data, the three-dimensional coordinates, and the two-dimensional coordinates of each vertex, the method further includes:
    根据动画数据和所述各个顶点的三维坐标,获取根据所述动画数据驱动所述三维人脸模型作出对应的表情或动作时,所述各个顶点发生位移后的三维坐标;Acquiring, according to the animation data and the three-dimensional coordinates of each vertex, the three-dimensional coordinates after the displacement of each vertex when the three-dimensional face model is driven to make a corresponding expression or action according to the animation data;
    相应地,所述根据所述人脸姿态数据、所述各个顶点的三维坐标和二维坐标,将所述三维人脸模型渲染到所述人脸图像上,包括:Correspondingly, the rendering of the three-dimensional face model onto the face image based on the face pose data, the three-dimensional coordinates and the two-dimensional coordinates of each vertex includes:
    根据所述人脸姿态数据、所述各个顶点发生位移后的三维坐标和所述各个顶点的二维坐标,将所述三维人脸模型渲染到所述人脸图像上。The three-dimensional face model is rendered on the face image according to the face pose data, the three-dimensional coordinates of each vertex after displacement and the two-dimensional coordinates of each vertex.
  9. 根据权利要求1所述的人脸三维重建方法,所述根据所述各个顶点的二维坐标和所述人脸图像,对所述初始三维人脸模型进行纹理贴图处理,得到所述人脸图像的三维人脸模型之后,所述方法还包括:The three-dimensional face reconstruction method according to claim 1, wherein after performing texture mapping processing on the initial three-dimensional face model according to the two-dimensional coordinates of each vertex and the face image to obtain the three-dimensional face model of the face image, the method further includes:
    根据动画数据,驱动所述三维人脸模型做出所述动画数据对应的表情或动作。Based on the animation data, the three-dimensional face model is driven to make expressions or actions corresponding to the animation data.
  10. 一种人脸三维重建装置,包括:A three-dimensional face reconstruction apparatus, comprising:
    获取模块,被配置为执行获取初始三维人脸模型和人脸图像;The acquisition module is configured to perform acquisition of the initial three-dimensional face model and face image;
    识别模块,被配置为执行对所述人脸图像进行人脸识别,得到人脸姿态数据;A recognition module configured to perform face recognition on the face image to obtain face pose data;
    所述获取模块还被配置为执行根据所述人脸姿态数据、采集所述人脸图像的设备的投影参数以及所述初始三维人脸模型的各个顶点的三维坐标,获取所述初始三维人脸模型在所述人脸姿态数据对应的姿态下,所述各个顶点投影到所述设备的成像平面上的二维坐标;The acquiring module is further configured to acquire, according to the face pose data, the projection parameters of the device that collects the face image, and the three-dimensional coordinates of each vertex of the initial three-dimensional face model, the two-dimensional coordinates of each vertex projected onto the imaging plane of the device when the initial three-dimensional face model is in the pose corresponding to the face pose data;
    处理模块,被配置为执行根据所述各个顶点的二维坐标和所述人脸图像,对所述初始三维人脸模型进行纹理贴图处理,得到所述人脸图像的三维人脸模型。The processing module is configured to perform texture mapping on the initial three-dimensional face model according to the two-dimensional coordinates of each vertex and the face image to obtain the three-dimensional face model of the face image.
  11. 一种电子设备,包括:An electronic device, including:
    处理器;processor;
    用于存储处理器可执行指令的存储器;Memory for storing processor executable instructions;
    其中,所述处理器被配置为执行:Wherein, the processor is configured to execute:
    获取初始三维人脸模型和人脸图像;Obtain the initial three-dimensional face model and face image;
    对所述人脸图像进行人脸识别,得到人脸姿态数据;Performing face recognition on the face image to obtain face pose data;
    根据所述人脸姿态数据、采集所述人脸图像的设备的投影参数以及所述初始三维人脸模型的各个顶点的三维坐标,获取所述初始三维人脸模型在所述人脸姿态数据对应的姿态下,所述各个顶点投影到所述设备的成像平面上的二维坐标;Acquiring, according to the face pose data, the projection parameters of the device that collects the face image, and the three-dimensional coordinates of each vertex of the initial three-dimensional face model, the two-dimensional coordinates of each vertex projected onto the imaging plane of the device when the initial three-dimensional face model is in the pose corresponding to the face pose data;
    根据所述各个顶点的二维坐标和所述人脸图像,对所述初始三维人脸模型进行纹理贴图处理,得到所述人脸图像的三维人脸模型。According to the two-dimensional coordinates of each vertex and the face image, texture mapping processing is performed on the initial three-dimensional face model to obtain a three-dimensional face model of the face image.
  12. 根据权利要求11所述的电子设备,所述处理器具体被配置为执行:The electronic device of claim 11, the processor is specifically configured to execute:
    对所述人脸图像进行人脸识别,将识别得到的所述人脸图像中人脸的位置和朝向作为所述人脸姿态数据。Face recognition is performed on the face image, and the position and orientation of the face in the recognized face image are used as the face pose data.
  13. 根据权利要求12所述的电子设备,所述处理器具体被配置为执行:The electronic device of claim 12, the processor is specifically configured to execute:
    采用人脸识别算法,对所述人脸图像进行人脸识别,得到位移矩阵和旋转矩阵,所述位移矩阵用于表示所述设备在采集所述人脸图像时所述人脸在三维空间中的位置,所述旋转矩阵用于表示所述设备在采集所述人脸图像时所述人脸在三维空间中的朝向;A face recognition algorithm is used to perform face recognition on the face image to obtain a displacement matrix and a rotation matrix, where the displacement matrix is used to indicate the position of the face in three-dimensional space when the device collects the face image, and the rotation matrix is used to indicate the orientation of the face in three-dimensional space when the device collects the face image;
    将所述位移矩阵和所述旋转矩阵相乘得到的矩阵作为所述人脸姿态数据。A matrix obtained by multiplying the displacement matrix and the rotation matrix is used as the face pose data.
  14. 根据权利要求13所述的电子设备,所述投影参数包括投影矩阵,The electronic device according to claim 13, wherein the projection parameter includes a projection matrix,
    相应地,所述处理器具体被配置为执行对于所述各个顶点中的每个顶点,将所述顶点的三维坐标与所述矩阵和所述投影矩阵相乘,得到所述顶点的二维坐标。Correspondingly, the processor is specifically configured to: for each of the vertices, multiply the three-dimensional coordinates of the vertex by the matrix and the projection matrix to obtain the two-dimensional coordinates of the vertex.
  15. 根据权利要求11所述的电子设备,所述处理器具体被配置为执行:The electronic device of claim 11, the processor is specifically configured to execute:
    根据所述各个顶点的二维坐标,对所述人脸图像进行纹理数据采集;Acquire texture data of the face image according to the two-dimensional coordinates of each vertex;
    根据采集到的纹理数据,对所述初始三维人脸模型进行纹理贴图处理,得到所述三维人脸模型。According to the collected texture data, texture mapping processing is performed on the initial three-dimensional face model to obtain the three-dimensional face model.
  16. 根据权利要求11所述的电子设备,所述处理器还被配置为执行:The electronic device of claim 11, the processor is further configured to execute:
    根据所述人脸姿态数据、所述各个顶点的三维坐标和二维坐标,将所述三维人脸模型渲染到所述人脸图像上。The three-dimensional face model is rendered on the face image according to the face pose data, the three-dimensional coordinates and the two-dimensional coordinates of each vertex.
  17. 根据权利要求16所述的电子设备,所述处理器具体被配置为执行:The electronic device of claim 16, the processor is specifically configured to execute:
    根据所述人脸姿态数据、所述各个顶点的三维坐标和二维坐标,对所述三维人脸模型进行渲染;将渲染得到的图像覆盖到所述人脸图像上。Rendering the three-dimensional face model according to the face pose data, the three-dimensional coordinates and the two-dimensional coordinates of each vertex; and overlaying the rendered image on the face image.
  18. 根据权利要求16所述的电子设备,所述处理器还被配置为执行:The electronic device of claim 16, the processor is further configured to perform:
    根据动画数据和所述各个顶点的三维坐标,获取根据所述动画数据驱动所述三维人脸模型作出对应的表情或动作时,所述各个顶点发生位移后的三维坐标;Acquiring, according to the animation data and the three-dimensional coordinates of each vertex, the three-dimensional coordinates after the displacement of each vertex when the three-dimensional face model is driven to make a corresponding expression or action according to the animation data;
    根据所述人脸姿态数据、所述各个顶点发生位移后的三维坐标和所述各个顶点的二维坐标,将所述三维人脸模型渲染到所述人脸图像上。The three-dimensional face model is rendered on the face image according to the face pose data, the three-dimensional coordinates of each vertex after displacement and the two-dimensional coordinates of each vertex.
  19. 根据权利要求11所述的电子设备,所述处理器还被配置为执行:The electronic device of claim 11, the processor is further configured to perform:
    根据动画数据,驱动所述三维人脸模型做出所述动画数据对应的表情或动作。Based on the animation data, the three-dimensional face model is driven to make expressions or actions corresponding to the animation data.
  20. 一种非临时性计算机可读存储介质,当所述存储介质中的指令由电子设备的处理器执行时,使得电子设备能够执行如权利要求1至权利要求9中任一项所述的人脸三维重建方法。A non-transitory computer-readable storage medium, wherein when instructions in the storage medium are executed by a processor of an electronic device, the electronic device is enabled to execute the three-dimensional face reconstruction method according to any one of claims 1 to 9.
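The projection described in claims 3 and 4 (composing the face pose matrix as the product of the displacement and rotation matrices, then multiplying each vertex's three-dimensional coordinates by that matrix and the projection matrix) can be sketched as follows. This is a hedged illustration, not the patent's implementation; the homogeneous-coordinate convention, the matrix layout, and the perspective divide are assumptions made here.

```python
import numpy as np

# Hedged sketch of the vertex projection in claims 3-4: pose = translation x
# rotation, then 2D coordinate = projection x pose x vertex (with a
# perspective divide in homogeneous coordinates).

def pose_matrix(translation, rotation3x3):
    """Compose a 4x4 pose matrix from a translation vector and a 3x3 rotation."""
    pose = np.eye(4)
    pose[:3, :3] = rotation3x3
    pose[:3, 3] = translation
    return pose

def project_vertices(vertices, pose, projection):
    """Project Nx3 model vertices to Nx2 image-plane coordinates."""
    n = vertices.shape[0]
    homo = np.hstack([vertices, np.ones((n, 1))])   # homogeneous coordinates
    clip = homo @ (projection @ pose).T             # apply pose, then projection
    return clip[:, :2] / clip[:, 3:4]               # perspective divide

# identity pose and trivial projection: vertices map to their own (x, y)
verts = np.array([[0.0, 0.0, 1.0], [0.5, -0.5, 2.0]])
pts = project_vertices(verts, pose_matrix([0, 0, 0], np.eye(3)), np.eye(4))
```

With a real device, `projection` would come from the camera intrinsics of the device that collected the face image, and `pose_matrix` would take the displacement and rotation matrices produced by the face recognition step.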
PCT/CN2019/128900 2019-01-04 2019-12-26 Three-dimensional facial reconstruction method and apparatus, and electronic device and storage medium WO2020140832A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910008837.3 2019-01-04
CN201910008837.3A CN109767487A (en) 2019-01-04 2019-01-04 Face three-dimensional rebuilding method, device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
WO2020140832A1 true WO2020140832A1 (en) 2020-07-09

Family

ID=66453244

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/128900 WO2020140832A1 (en) 2019-01-04 2019-12-26 Three-dimensional facial reconstruction method and apparatus, and electronic device and storage medium

Country Status (2)

Country Link
CN (1) CN109767487A (en)
WO (1) WO2020140832A1 (en)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112037320A (en) * 2020-09-01 2020-12-04 腾讯科技(深圳)有限公司 Image processing method, device, equipment and computer readable storage medium
CN112613357A (en) * 2020-12-08 2021-04-06 深圳数联天下智能科技有限公司 Face measurement method, face measurement device, electronic equipment and medium
CN112652057A (en) * 2020-12-30 2021-04-13 北京百度网讯科技有限公司 Method, device, equipment and storage medium for generating human body three-dimensional model
CN112734890A (en) * 2020-12-22 2021-04-30 上海影谱科技有限公司 Human face replacement method and device based on three-dimensional reconstruction
CN112766215A (en) * 2021-01-29 2021-05-07 北京字跳网络技术有限公司 Face fusion method and device, electronic equipment and storage medium
CN113343879A (en) * 2021-06-18 2021-09-03 厦门美图之家科技有限公司 Method and device for manufacturing panoramic facial image, electronic equipment and storage medium
CN113658313A (en) * 2021-09-09 2021-11-16 北京达佳互联信息技术有限公司 Rendering method and device of face model and electronic equipment
CN113763532A (en) * 2021-04-19 2021-12-07 腾讯科技(深圳)有限公司 Human-computer interaction method, device, equipment and medium based on three-dimensional virtual object
CN115082640A (en) * 2022-08-01 2022-09-20 聚好看科技股份有限公司 Single image-based 3D face model texture reconstruction method and equipment
CN115631285A (en) * 2022-11-25 2023-01-20 北京红棉小冰科技有限公司 Face rendering method, device and equipment based on unified drive and storage medium
CN116978102A (en) * 2023-08-04 2023-10-31 深圳市英锐存储科技有限公司 Face feature modeling and recognition method, chip and terminal
CN117496059A (en) * 2023-11-03 2024-02-02 北京元点未来科技有限公司 Three-dimensional image system based on space algorithm and utilizing AIGC technology
CN117496019A (en) * 2023-12-29 2024-02-02 南昌市小核桃科技有限公司 Image animation processing method and system for driving static image

Families Citing this family (18)

Publication number Priority date Publication date Assignee Title
CN109767487A (en) * 2019-01-04 2019-05-17 北京达佳互联信息技术有限公司 Face three-dimensional rebuilding method, device, electronic equipment and storage medium
CN110309554B (en) * 2019-06-12 2021-01-15 清华大学 Video human body three-dimensional reconstruction method and device based on garment modeling and simulation
CN110533777B (en) * 2019-08-01 2020-09-15 北京达佳互联信息技术有限公司 Three-dimensional face image correction method and device, electronic equipment and storage medium
CN112406608B (en) * 2019-08-23 2022-06-21 国创移动能源创新中心(江苏)有限公司 Charging pile and automatic charging device and method thereof
CN110555815B (en) * 2019-08-30 2022-05-20 维沃移动通信有限公司 Image processing method and electronic equipment
CN110675413B (en) * 2019-09-27 2020-11-13 腾讯科技(深圳)有限公司 Three-dimensional face model construction method and device, computer equipment and storage medium
CN110796083B (en) * 2019-10-29 2023-07-04 腾讯科技(深圳)有限公司 Image display method, device, terminal and storage medium
CN111160278B (en) * 2019-12-31 2023-04-07 陕西西图数联科技有限公司 Face texture structure data acquisition method based on single image sensor
CN111340943B (en) * 2020-02-26 2023-01-03 北京市商汤科技开发有限公司 Image processing method, device, equipment and storage medium
CN111460937B (en) * 2020-03-19 2023-12-19 深圳市新镜介网络有限公司 Facial feature point positioning method and device, terminal equipment and storage medium
CN113643348B (en) * 2020-04-23 2024-02-06 杭州海康威视数字技术股份有限公司 Face attribute analysis method and device
CN111626924B (en) * 2020-05-28 2023-08-15 维沃移动通信有限公司 Image blurring processing method and device, electronic equipment and readable storage medium
CN113763531B (en) * 2020-06-05 2023-11-28 北京达佳互联信息技术有限公司 Three-dimensional face reconstruction method and device, electronic equipment and storage medium
CN111753739B (en) * 2020-06-26 2023-10-31 北京百度网讯科技有限公司 Object detection method, device, equipment and storage medium
CN112883870A (en) * 2021-02-22 2021-06-01 北京中科深智科技有限公司 Face image mapping method and system
CN113129362A (en) * 2021-04-23 2021-07-16 北京地平线机器人技术研发有限公司 Method and device for acquiring three-dimensional coordinate data
CN115019021A (en) * 2022-06-02 2022-09-06 北京字跳网络技术有限公司 Image processing method, device, equipment and storage medium
CN117237204A (en) * 2022-06-15 2023-12-15 荣耀终端有限公司 Image processing method, electronic equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102999942A (en) * 2012-12-13 2013-03-27 清华大学 Three-dimensional face reconstruction method
US20170316598A1 (en) * 2015-05-22 2017-11-02 Tencent Technology (Shenzhen) Company Limited 3d human face reconstruction method, apparatus and server
CN109035394A (en) * 2018-08-22 2018-12-18 广东工业大学 Human face three-dimensional model method for reconstructing, device, equipment, system and mobile terminal
CN109767487A (en) * 2019-01-04 2019-05-17 北京达佳互联信息技术有限公司 Face three-dimensional rebuilding method, device, electronic equipment and storage medium

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0778267A (en) * 1993-07-09 1995-03-20 Silicon Graphics Inc Method for display of shadow and computer-controlled display system
CN101515324A (en) * 2009-01-21 2009-08-26 上海银晨智能识别科技有限公司 Control system applied to multi-pose face recognition and a method thereof
US8933925B2 (en) * 2009-06-15 2015-01-13 Microsoft Corporation Piecewise planar reconstruction of three-dimensional scenes
CN102074015A (en) * 2011-02-24 2011-05-25 哈尔滨工业大学 Two-dimensional image sequence based three-dimensional reconstruction method of target
CN108765550B (en) * 2018-05-09 2021-03-30 华南理工大学 Three-dimensional face reconstruction method based on single picture
CN108921795A (en) * 2018-06-04 2018-11-30 腾讯科技(深圳)有限公司 A kind of image interfusion method, device and storage medium

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112037320B (en) * 2020-09-01 2023-10-20 腾讯科技(深圳)有限公司 Image processing method, device, equipment and computer readable storage medium
CN112037320A (en) * 2020-09-01 2020-12-04 腾讯科技(深圳)有限公司 Image processing method, device, equipment and computer readable storage medium
CN112613357B (en) * 2020-12-08 2024-04-09 深圳数联天下智能科技有限公司 Face measurement method, device, electronic equipment and medium
CN112613357A (en) * 2020-12-08 2021-04-06 深圳数联天下智能科技有限公司 Face measurement method, face measurement device, electronic equipment and medium
CN112734890A (en) * 2020-12-22 2021-04-30 上海影谱科技有限公司 Human face replacement method and device based on three-dimensional reconstruction
CN112734890B (en) * 2020-12-22 2023-11-10 上海影谱科技有限公司 Face replacement method and device based on three-dimensional reconstruction
CN112652057A (en) * 2020-12-30 2021-04-13 北京百度网讯科技有限公司 Method, device, equipment and storage medium for generating human body three-dimensional model
CN112766215A (en) * 2021-01-29 2021-05-07 北京字跳网络技术有限公司 Face fusion method and device, electronic equipment and storage medium
CN113763532A (en) * 2021-04-19 2021-12-07 腾讯科技(深圳)有限公司 Human-computer interaction method, device, equipment and medium based on three-dimensional virtual object
CN113763532B (en) * 2021-04-19 2024-01-19 腾讯科技(深圳)有限公司 Man-machine interaction method, device, equipment and medium based on three-dimensional virtual object
CN113343879A (en) * 2021-06-18 2021-09-03 厦门美图之家科技有限公司 Method and device for manufacturing panoramic facial image, electronic equipment and storage medium
CN113658313A (en) * 2021-09-09 2021-11-16 北京达佳互联信息技术有限公司 Rendering method and device of face model and electronic equipment
CN115082640A (en) * 2022-08-01 2022-09-20 聚好看科技股份有限公司 Single image-based 3D face model texture reconstruction method and equipment
CN115631285B (en) * 2022-11-25 2023-05-02 北京红棉小冰科技有限公司 Face rendering method, device, equipment and storage medium based on unified driving
CN115631285A (en) * 2022-11-25 2023-01-20 北京红棉小冰科技有限公司 Face rendering method, device and equipment based on unified drive and storage medium
CN116978102A (en) * 2023-08-04 2023-10-31 深圳市英锐存储科技有限公司 Face feature modeling and recognition method, chip and terminal
CN117496059A (en) * 2023-11-03 2024-02-02 北京元点未来科技有限公司 Three-dimensional image system based on space algorithm and utilizing AIGC technology
CN117496059B (en) * 2023-11-03 2024-04-12 北京元点未来科技有限公司 Three-dimensional image system based on space algorithm and utilizing AIGC technology
CN117496019A (en) * 2023-12-29 2024-02-02 南昌市小核桃科技有限公司 Image animation processing method and system for driving static image
CN117496019B (en) * 2023-12-29 2024-04-05 南昌市小核桃科技有限公司 Image animation processing method and system for driving static image

Also Published As

Publication number Publication date
CN109767487A (en) 2019-05-17

Similar Documents

Publication Publication Date Title
WO2020140832A1 (en) Three-dimensional facial reconstruction method and apparatus, and electronic device and storage medium
TWI788630B (en) Method, device, computer equipment, and storage medium for generating 3d face model
US20200387698A1 (en) Hand key point recognition model training method, hand key point recognition method and device
US11367307B2 (en) Method for processing images and electronic device
US11436779B2 (en) Image processing method, electronic device, and storage medium
EP3779883A1 (en) Method and device for repositioning in camera orientation tracking process, and storage medium
WO2020125785A1 (en) Hair rendering method, device, electronic apparatus, and storage medium
CN109308727B (en) Virtual image model generation method and device and storage medium
CN111324250B (en) Three-dimensional image adjusting method, device and equipment and readable storage medium
CN112287852B (en) Face image processing method, face image display method, face image processing device and face image display equipment
CN112907725B (en) Image generation, training of image processing model and image processing method and device
CN110427110A (en) A kind of live broadcasting method, device and direct broadcast server
CN109947338B (en) Image switching display method and device, electronic equipment and storage medium
CN112337105B (en) Virtual image generation method, device, terminal and storage medium
WO2022052620A1 (en) Image generation method and electronic device
WO2020233403A1 (en) Personalized face display method and apparatus for three-dimensional character, and device and storage medium
CN110956580B (en) Method, device, computer equipment and storage medium for changing face of image
CN111680758B (en) Image training sample generation method and device
CN110796083B (en) Image display method, device, terminal and storage medium
WO2022199102A1 (en) Image processing method and device
CN111862148A (en) Method, device, electronic equipment and medium for realizing visual tracking
KR20220124432A (en) Mehtod and system for wearing 3d virtual clothing based on 2d images
CN109767482B (en) Image processing method, device, electronic equipment and storage medium
CN112967261B (en) Image fusion method, device, equipment and storage medium
CN109685881B (en) Volume rendering method and device and intelligent equipment

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 19907896; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 19907896; Country of ref document: EP; Kind code of ref document: A1)