WO2020140832A1 - Three-dimensional face reconstruction method and apparatus, electronic device, and storage medium

Three-dimensional face reconstruction method and apparatus, electronic device, and storage medium

Info

Publication number
WO2020140832A1
WO2020140832A1 (PCT/CN2019/128900)
Authority
WO
WIPO (PCT)
Prior art keywords
face
dimensional
vertex
model
face image
Prior art date
Application number
PCT/CN2019/128900
Other languages
English (en)
Chinese (zh)
Inventor
曹占魁
李雅子
王一
Original Assignee
北京达佳互联信息技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京达佳互联信息技术有限公司 (Beijing Dajia Internet Information Technology Co., Ltd.)
Publication of WO2020140832A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects

Definitions

  • The present application relates to the field of computer technology, and in particular to a three-dimensional face reconstruction method and apparatus, an electronic device, and a storage medium.
  • Three-dimensional face reconstruction refers to generating a three-dimensional face model from a two-dimensional image containing a face, that is, a face image.
  • 3D Morphable Models are generally used to realize three-dimensional reconstruction of human faces. Specifically, a large number of three-dimensional prototype faces are obtained first, and a complex preprocessing process is performed on them; Principal Components Analysis (PCA) is then used to statistically model the shape, texture, and surface reflectance of the human face to generate a deformable model, and the deformable model is used to synthesize the face image, realizing three-dimensional reconstruction of the face.
  • The present application provides a three-dimensional face reconstruction method and apparatus, an electronic device, and a storage medium, which can overcome the cumbersome process, heavy computation, and low efficiency of existing three-dimensional face reconstruction.
  • According to a first aspect, a three-dimensional face reconstruction method is provided, including:
  • acquiring an initial three-dimensional face model and a face image, and performing face recognition on the face image to obtain face pose data;
  • acquiring, according to the face pose data, the projection parameters of the device that collects the face image, and the three-dimensional coordinates of each vertex of the initial three-dimensional face model, the two-dimensional coordinates of each vertex projected onto the imaging plane of the device when the initial three-dimensional face model is in a pose corresponding to the face pose data;
  • performing texture mapping processing on the initial three-dimensional face model according to the two-dimensional coordinates of each vertex and the face image, to obtain a three-dimensional face model of the face image.
  • According to a second aspect, a three-dimensional face reconstruction apparatus is provided, including:
  • an acquisition module configured to acquire an initial three-dimensional face model and a face image;
  • a recognition module configured to perform face recognition on the face image to obtain face pose data;
  • the acquisition module being further configured to acquire, according to the face pose data, the projection parameters of the device that collects the face image, and the three-dimensional coordinates of each vertex of the initial three-dimensional face model, the two-dimensional coordinates of each vertex projected onto the imaging plane of the device when the initial three-dimensional face model is in a pose corresponding to the face pose data;
  • a processing module configured to perform texture mapping processing on the initial three-dimensional face model according to the two-dimensional coordinates of each vertex and the face image, to obtain a three-dimensional face model of the face image.
  • According to a third aspect, an electronic device is provided, including:
  • a processor; and a memory for storing instructions executable by the processor;
  • wherein the processor is configured to: acquire an initial three-dimensional face model and a face image; perform face recognition on the face image to obtain face pose data;
  • acquire, according to the face pose data, the projection parameters of the device that collects the face image, and the three-dimensional coordinates of each vertex of the initial three-dimensional face model, the two-dimensional coordinates of each vertex projected onto the imaging plane of the device when the initial three-dimensional face model is in a pose corresponding to the face pose data;
  • and perform texture mapping processing on the initial three-dimensional face model according to the two-dimensional coordinates of each vertex and the face image, to obtain a three-dimensional face model of the face image.
  • According to a fourth aspect, a non-transitory computer-readable storage medium is provided; when instructions in the storage medium are executed by a processor of an electronic device, the electronic device is enabled to execute the three-dimensional face reconstruction method described in the first aspect.
  • According to a further aspect, an application program product is provided; when instructions in the application program product are executed by a processor of an electronic device, the electronic device is enabled to perform the three-dimensional face reconstruction method described in the first aspect.
  • Fig. 1 is a flowchart of a three-dimensional face reconstruction method according to an exemplary embodiment;
  • Fig. 2 is a flowchart of another three-dimensional face reconstruction method according to an exemplary embodiment;
  • Fig. 3 is a block diagram of a first three-dimensional face reconstruction apparatus according to an exemplary embodiment;
  • Fig. 4 is a block diagram of a second three-dimensional face reconstruction apparatus according to an exemplary embodiment;
  • Fig. 5 is a block diagram of a third three-dimensional face reconstruction apparatus according to an exemplary embodiment;
  • Fig. 6 is a block diagram of an electronic device 600 according to an exemplary embodiment.
  • Fig. 1 is a flowchart of a method for three-dimensional face reconstruction according to an exemplary embodiment. As shown in Fig. 1, the method for three-dimensional face reconstruction is used in an electronic device and includes the following steps.
  • In step S11, an initial three-dimensional face model and a face image are obtained.
  • In step S12, face recognition is performed on the face image to obtain face pose data.
  • In step S13, according to the face pose data, the projection parameters of the device that collected the face image, and the three-dimensional coordinates of each vertex of the initial three-dimensional face model, the two-dimensional coordinates of each vertex projected onto the imaging plane of the device are obtained for the initial three-dimensional face model in the pose corresponding to the face pose data.
  • In step S14, texture mapping is performed on the initial three-dimensional face model according to the two-dimensional coordinates of each vertex and the face image, to obtain a three-dimensional face model of the face image.
  • The method provided in the embodiments of the present application obtains an initial three-dimensional face model, performs face recognition on the face image to be reconstructed to obtain face pose data, and then, according to the face pose data and the projection parameters of the device that collected the face image, quickly converts the three-dimensional coordinates of each vertex of the initial three-dimensional face model into two-dimensional coordinates, that is, the two-dimensional coordinates of each vertex projected onto the face image.
  • Texture mapping processing is then performed according to these two-dimensional coordinates to obtain the reconstructed three-dimensional face model, which simplifies the reconstruction process, requires little computation, and improves the efficiency of three-dimensional face reconstruction.
  • In some embodiments, performing face recognition on the face image to obtain face pose data includes:
  • performing face recognition on the face image, and using the recognized position and orientation of the face in the face image as the face pose data.
  • In some embodiments, performing face recognition on the face image and using the recognized position and orientation of the face as the face pose data includes:
  • using a face recognition algorithm to perform face recognition on the face image to obtain a displacement matrix and a rotation matrix, where the displacement matrix represents the position of the face in three-dimensional space when the device collected the face image, and the rotation matrix represents the orientation of the face in three-dimensional space at that time;
  • and using the matrix obtained by multiplying the displacement matrix and the rotation matrix as the face pose data.
  • In some embodiments, the projection parameters include a projection matrix. Accordingly, acquiring, according to the face pose data, the projection parameters of the device that collects the face image, and the three-dimensional coordinates of each vertex of the initial three-dimensional face model, the two-dimensional coordinates of each vertex projected onto the imaging plane of the device when the initial three-dimensional face model is in a pose corresponding to the face pose data includes:
  • for each vertex, multiplying the three-dimensional coordinates of the vertex by the face pose matrix and then by the projection matrix to obtain the two-dimensional coordinates of the vertex.
  • In some embodiments, performing texture mapping processing on the initial three-dimensional face model according to the two-dimensional coordinates of each vertex and the face image to obtain the three-dimensional face model of the face image includes:
  • performing texture data collection on the face image according to the two-dimensional coordinates of each vertex; and performing texture mapping processing on the initial three-dimensional face model according to the collected texture data, to obtain the three-dimensional face model.
  • In some embodiments, after texture mapping processing is performed on the initial three-dimensional face model to obtain the three-dimensional face model of the face image, the method further includes:
  • rendering the three-dimensional face model onto the face image according to the face pose data, the three-dimensional coordinates, and the two-dimensional coordinates of each vertex.
  • In some embodiments, rendering the three-dimensional face model onto the face image includes:
  • rendering the three-dimensional face model according to the face pose data, the three-dimensional coordinates, and the two-dimensional coordinates of each vertex; and overlaying the rendered image on the face image.
  • In some embodiments, before the three-dimensional face model is rendered onto the face image according to the face pose data, the three-dimensional coordinates, and the two-dimensional coordinates of each vertex, the method further includes: acquiring, according to animation data and the three-dimensional coordinates of each vertex, the displaced three-dimensional coordinates of each vertex when the three-dimensional face model is driven to make the expression or action corresponding to the animation data;
  • rendering the three-dimensional face model onto the face image then includes: rendering the three-dimensional face model onto the face image according to the face pose data, the displaced three-dimensional coordinates of each vertex, and the two-dimensional coordinates of each vertex.
  • In some embodiments, after texture mapping processing is performed on the initial three-dimensional face model to obtain the three-dimensional face model of the face image, the method further includes:
  • driving, according to animation data, the three-dimensional face model to make the expressions or actions corresponding to the animation data.
  • Fig. 2 is a flowchart of another three-dimensional face reconstruction method according to an exemplary embodiment. As shown in Fig. 2, the three-dimensional face reconstruction method is used in an electronic device and includes the following steps:
  • In step S21, an initial three-dimensional face model and a face image are obtained.
  • the initial three-dimensional face model may be a standard three-dimensional face model (or a general three-dimensional face model), and the face image refers to an image including a face, that is, an image to be three-dimensionally reconstructed.
  • the initial three-dimensional face model may be constructed by an electronic device, or may be constructed by other devices, and then sent to the electronic device, so that the electronic device can acquire the initial three-dimensional face model.
  • the electronic device may pre-construct or obtain the initial three-dimensional face model from another device and store it locally.
  • the electronic device may obtain the initial three-dimensional face model from local storage.
  • the electronic device may also construct or obtain the initial three-dimensional face model from other devices at the current time, which is not limited in this embodiment of the present application.
  • the construction process of the initial three-dimensional face model may include: obtaining a face image from a face image database, extracting face feature points of the face image, and generating the initial three-dimensional based on the face feature points Face model.
  • the face feature points include, but are not limited to, key points in the face that characterize eyebrows, nose, eyes, mouth, and contours of the face. Face feature points can be obtained by performing face detection on face images through a face detection software development kit (Software Development Kit, SDK).
  • A three-dimensional model is usually represented by points in three-dimensional space together with the triangles connecting those points; these points are called vertices, as sketched below. Once the initial three-dimensional face model is constructed, the three-dimensional coordinates of its vertices can be obtained, that is, the position coordinates (V.POS) of the vertices in three-dimensional space.
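  • To make the vertex-and-triangle representation above concrete, the following is a minimal sketch. It is not part of the patent; the `Mesh` class and array names are illustrative assumptions, with NumPy used for the arrays.

```python
import numpy as np

# A minimal triangle-mesh container of the kind commonly used for 3D face models.
# vertices: (N, 3) array of vertex positions in 3D space (the V.POS data).
# triangles: (M, 3) array of vertex indices, each row describing one triangle.
class Mesh:
    def __init__(self, vertices, triangles):
        self.vertices = np.asarray(vertices, dtype=np.float64)   # (N, 3)
        self.triangles = np.asarray(triangles, dtype=np.int64)   # (M, 3)

# Example: a toy "model" made of a single triangle with three vertices.
face_model = Mesh(
    vertices=[[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]],
    triangles=[[0, 1, 2]],
)
print(face_model.vertices.shape)  # (3, 3): three vertices, xyz each
```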
  • the face image can be a face image collected by the camera of the electronic device in real time.
  • The electronic device can collect face images through a camera module (such as a camera); for each collected frame of the face image, the subsequent steps S22 to S25 can be performed, so that a real-time three-dimensional reconstruction result is obtained for each frame.
  • the face image may also be a face image collected in advance by the electronic device, or may be a face image collected by a camera device other than the electronic device.
  • The embodiments of the present application do not specifically limit the source of the face image.
  • In step S22, face recognition is performed on the face image to obtain face pose data.
  • The face image includes a face; when the electronic device or another device collected the face image, the face had a corresponding pose.
  • In some embodiments, this step S22 may include: performing face recognition on the face image, and using the recognized position and orientation of the face in the face image as the face pose data.
  • The position and orientation of the face may be the position and orientation of the face in three-dimensional space when the device collected the face image. For example, the position of the face may be toward the left, toward the right, or in the center of the device's field of view.
  • The orientation of the face may be facing forward, turned to the left, turned to the right, tilted up, or tilted down.
  • The process of performing face recognition on the face image may include: using a face recognition algorithm to perform face recognition on the face image to obtain a displacement matrix and a rotation matrix, where the displacement matrix indicates the position of the face in three-dimensional space when the device collected the face image, and the rotation matrix indicates the orientation of the face in three-dimensional space at that time; the matrix obtained by multiplying the displacement matrix and the rotation matrix is used as the face pose data.
  • the matrix obtained by multiplying the displacement matrix and the rotation matrix (denoted as matrix M) may be a 4 ⁇ 4 matrix.
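  • As an illustration of how such a 4 × 4 pose matrix can be assembled from the recognizer's output, here is a sketch assuming homogeneous coordinates with column vectors; the patent does not prescribe a specific matrix layout.

```python
import numpy as np

def pose_matrix(rotation, translation):
    """Combine a 3x3 rotation matrix and a 3-vector translation into a 4x4
    homogeneous pose matrix M, i.e. the product of the 4x4 displacement
    matrix and the 4x4 rotation matrix."""
    M = np.eye(4)
    M[:3, :3] = rotation      # orientation of the face
    M[:3, 3] = translation    # position of the face relative to the camera
    return M

# Example: a face turned 30 degrees about the vertical axis, 0.5 m from the camera.
theta = np.deg2rad(30.0)
R = np.array([[ np.cos(theta), 0.0, np.sin(theta)],
              [ 0.0,           1.0, 0.0          ],
              [-np.sin(theta), 0.0, np.cos(theta)]])
M = pose_matrix(R, np.array([0.0, 0.0, 0.5]))
```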
  • In step S23, according to the face pose data, the projection parameters of the device that collected the face image, and the three-dimensional coordinates of each vertex of the initial three-dimensional face model, the two-dimensional coordinates of each vertex projected onto the imaging plane of the device are obtained for the initial three-dimensional face model in the pose corresponding to the face pose data.
  • These two-dimensional coordinates can serve as the texture coordinates (V.UV) of the corresponding vertices.
  • After the electronic device obtains the face pose data from the face image, it can use the face pose data as the pose data of the three-dimensional face model, including the position and orientation of the model.
  • In this pose, the electronic device can obtain the two-dimensional coordinates of each projected vertex of the three-dimensional face model. Because the face image is collected by the device, that is, obtained by imaging the face on the imaging plane of the device, the two-dimensional coordinates of each vertex projected onto the imaging plane of the device are exactly the two-dimensional coordinates of that vertex projected onto the face image. This realizes a point-to-point mapping between the three-dimensional face model and the face image.
  • The projection parameters include a projection matrix (denoted as matrix P). If the face image is collected by the electronic device, the projection matrix refers to the projection matrix of the camera module of the electronic device; if the face image is collected by a camera device other than the electronic device, the projection matrix refers to the projection matrix of that camera device.
  • By multiplying the three-dimensional coordinates of each vertex of the initial three-dimensional face model by the matrix expressing the pose, and then by the projection matrix of the device that collected the face image, the two-dimensional coordinates of each vertex of the initial three-dimensional face model as projected by the camera in this pose can be obtained, that is, the coordinates of the vertices of the initial three-dimensional face model projected onto the face image.
  • In this way, each vertex of the initial three-dimensional face model can be mapped to a pixel on the face image. Obtaining the two-dimensional coordinates of each vertex of the three-dimensional face model by projection in this way is fast and requires little computation, as sketched below.
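  • A minimal sketch of this projection step follows. The pinhole-style projection matrix P, the perspective divide, and the example numbers are illustrative assumptions; the patent only specifies multiplying by the pose matrix and then by the device's projection matrix.

```python
import numpy as np

def project_vertices(vertices, M, P):
    """Project (N, 3) model-space vertices to (N, 2) image coordinates by
    multiplying by the pose matrix M and then by the projection matrix P."""
    n = vertices.shape[0]
    homo = np.hstack([vertices, np.ones((n, 1))])  # (N, 4) homogeneous coords
    cam = M @ homo.T                               # pose: model -> camera space
    img = (P @ cam).T                              # projection onto image plane
    return img[:, :2] / img[:, 2:3]                # perspective divide -> 2D

# Example: an identity pose and a 3x4 pinhole projection with focal length f
# (in pixels) and principal point (cx, cy).
f, cx, cy = 800.0, 320.0, 240.0
P = np.array([[f,   0.0, cx,  0.0],
              [0.0, f,   cy,  0.0],
              [0.0, 0.0, 1.0, 0.0]])
vertices = np.array([[0.00, 0.00, 0.60],
                     [0.05, 0.02, 0.62]])
uv = project_vertices(vertices, np.eye(4), P)      # (N, 2) pixel coordinates
```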
  • In step S24, texture mapping is performed on the initial three-dimensional face model based on the two-dimensional coordinates of each vertex and the face image, to obtain a three-dimensional face model of the face image.
  • The initial three-dimensional face model may be any standard face model, which does not yet have the texture information of the face in the face image.
  • After the electronic device obtains the pose data of the initial three-dimensional face model and the two-dimensional coordinates of each vertex, it can perform texture mapping on the initial three-dimensional face model to obtain a three-dimensional face model with texture information.
  • In some embodiments, this step S24 may include: performing texture data collection on the face image according to the two-dimensional coordinates of each vertex; and performing texture mapping processing on the initial three-dimensional face model based on the collected texture data, to obtain the three-dimensional face model.
  • Texture data is sampled from the face image according to the two-dimensional coordinates of each vertex and used as the texture of the initial three-dimensional face model, as sketched below. Once texturing is completed, the three-dimensional reconstruction of the face is complete, and the result is used as the three-dimensional face model of the face image.
  • By using a mature face recognition algorithm to obtain the pose data, the three-dimensional face model can be reconstructed with simple calculations; the computation is small and fast, which improves the efficiency of three-dimensional face reconstruction.
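  • The sampling described here can be sketched as follows. This is illustrative only: it samples one nearest-pixel color per vertex with row/column image indexing; a real pipeline would more likely assign the two-dimensional coordinates as UVs and let the renderer interpolate across triangles.

```python
import numpy as np

def sample_vertex_colors(image, uv):
    """Collect texture data from the face image at each vertex's projected
    2D coordinates, yielding one color per vertex."""
    h, w = image.shape[:2]
    cols = np.clip(np.round(uv[:, 0]).astype(int), 0, w - 1)  # x -> column
    rows = np.clip(np.round(uv[:, 1]).astype(int), 0, h - 1)  # y -> row
    return image[rows, cols]        # (N, 3) colors, one per vertex

# Example: sample a synthetic 480x640 RGB image at two projected vertices.
image = np.random.randint(0, 256, size=(480, 640, 3), dtype=np.uint8)
uv = np.array([[320.5, 240.2], [100.0, 50.0]])
vertex_colors = sample_vertex_colors(image, uv)
```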
  • In step S25, the three-dimensional face model is rendered onto the face image according to the face pose data, the three-dimensional coordinates, and the two-dimensional coordinates of each vertex.
  • After obtaining the reconstructed three-dimensional face model, the electronic device can display the reconstruction result by rendering.
  • In some embodiments, this step S25 may include: rendering the three-dimensional face model according to the face pose data, the three-dimensional coordinates, and the two-dimensional coordinates of each vertex; and overlaying the rendered image on the face image.
  • The electronic device can use three-dimensional rendering technology to render the three-dimensional face model onto the face image based on the face pose data of the model and the three-dimensional and two-dimensional coordinates of each of its vertices.
  • The point-to-point mapping between each vertex of the model and the pixels of the face image allows the rendering result of the three-dimensional face model to fuse naturally with the face image, without visible boundaries; a simple compositing sketch follows.
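  • The patent does not specify a compositing method; the following sketch assumes, purely for illustration, that the renderer also produces a coverage (alpha) mask, so that pixels not covered by the model keep the original face image.

```python
import numpy as np

def overlay(rendered, alpha, face_image):
    """Overlay the rendered model image on the face image: where alpha is 1
    the rendered pixel is used, where alpha is 0 the face image shows through."""
    a = alpha[..., None].astype(np.float32)  # (H, W, 1), values in [0, 1]
    out = a * rendered.astype(np.float32) + (1.0 - a) * face_image.astype(np.float32)
    return out.astype(np.uint8)

# Example with synthetic data: a render covering a rectangle in the center.
face_image = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
rendered = np.zeros_like(face_image)
alpha = np.zeros((480, 640), dtype=np.float32)
alpha[180:300, 260:380] = 1.0            # pixels covered by the face model
composite = overlay(rendered, alpha, face_image)
```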
  • In some embodiments, the electronic device may drive, according to animation data, the three-dimensional face model to make the expressions or actions corresponding to the animation data.
  • The animation data is used to show expressions or actions made by the human face, such as opening the mouth or sticking out the tongue; the embodiments of the present application do not limit the expressions or actions corresponding to the animation data.
  • The animation data may be animation data acquired in advance by the electronic device, or it may be acquired in other ways.
  • The electronic device can use techniques such as skeletal animation and vertex animation to create the animation data; the basic principle of these techniques is to displace each vertex of the model over time.
  • The pre-acquired animation data is applied directly to the three-dimensional face model to achieve animation retargeting, and can adapt to the expressions or actions of most people.
  • In this case, step S25 may include: rendering the three-dimensional face model onto the face image according to the face pose data, the displaced three-dimensional coordinates of each vertex, and the two-dimensional coordinates of each vertex.
  • When the three-dimensional face model is driven by the animation data, each vertex of the model is displaced. The electronic device can render the three-dimensional face model according to the displaced three-dimensional coordinates of each vertex, so that the rendering result shows the face making the corresponding expression or action, thereby achieving retargeting of the expression or action, as sketched below.
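  • A sketch of the displacement step follows. The animation data format is not specified in the patent; purely for illustration it is assumed here to provide a per-vertex offset for each animation frame.

```python
import numpy as np

def displaced_vertices(vertices, animation_offsets, frame):
    """Apply the animation data for one frame: shift each vertex of the model,
    producing the displaced 3D coordinates that are then rendered with the
    face pose data and the original 2D (texture) coordinates."""
    return vertices + animation_offsets[frame]   # (N, 3) + (N, 3)

# Example: two vertices and a two-frame "open mouth" motion that moves the
# second vertex progressively downward.
vertices = np.array([[0.0,  0.00, 0.5],
                     [0.0, -0.05, 0.5]])
offsets = np.array([
    [[0.0, 0.0, 0.0], [0.0, -0.01, 0.0]],   # frame 0
    [[0.0, 0.0, 0.0], [0.0, -0.02, 0.0]],   # frame 1
])
v_t = displaced_vertices(vertices, offsets, frame=1)
```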
  • The technical solution provided by the embodiments of the present application can reconstruct a three-dimensional face model in real time with high performance, and combine it with pre-made animation data so that the face moves according to the pre-made animation, making expressions and actions such as opening the mouth and sticking out the tongue, while blending well with the face image.
  • Fig. 3 is a block diagram of a device for three-dimensional face reconstruction according to an exemplary embodiment.
  • The device includes an acquisition module 301, a recognition module 302, and a processing module 303.
  • The acquisition module 301 is configured to acquire an initial three-dimensional face model and a face image.
  • The recognition module 302 is configured to perform face recognition on the face image to obtain face pose data.
  • The acquisition module 301 is further configured to acquire, according to the face pose data, the projection parameters of the device that collects the face image, and the three-dimensional coordinates of each vertex of the initial three-dimensional face model, the two-dimensional coordinates of each vertex projected onto the imaging plane of the device when the initial three-dimensional face model is in the pose corresponding to the face pose data.
  • The processing module 303 is configured to perform texture mapping processing on the initial three-dimensional face model according to the two-dimensional coordinates of each vertex and the face image, to obtain a three-dimensional face model of the face image.
  • In some embodiments, the recognition module 302 is configured to perform face recognition on the face image and use the recognized position and orientation of the face in the face image as the face pose data.
  • In some embodiments, the recognition module 302 is configured to:
  • use a face recognition algorithm to perform face recognition on the face image to obtain a displacement matrix and a rotation matrix, where the displacement matrix represents the position of the face in three-dimensional space when the device collected the face image, and the rotation matrix represents the orientation of the face in three-dimensional space at that time;
  • and use the matrix obtained by multiplying the displacement matrix and the rotation matrix as the face pose data.
  • In some embodiments, the projection parameters include a projection matrix, and the acquisition module 301 is configured to multiply, for each vertex, the three-dimensional coordinates of the vertex by the face pose matrix and then by the projection matrix to obtain the two-dimensional coordinates of the vertex.
  • In some embodiments, the processing module 303 is configured to: perform texture data collection on the face image according to the two-dimensional coordinates of each vertex; and perform texture mapping processing on the initial three-dimensional face model according to the collected texture data, to obtain the three-dimensional face model.
  • In some embodiments, the device further includes:
  • a rendering module 304 configured to render the three-dimensional face model onto the face image according to the face pose data, the three-dimensional coordinates, and the two-dimensional coordinates of each vertex.
  • In some embodiments, the rendering module 304 is configured to: render the three-dimensional face model according to the face pose data, the three-dimensional coordinates, and the two-dimensional coordinates of each vertex; and overlay the rendered image on the face image.
  • The acquisition module 301 is further configured to acquire, according to the animation data and the three-dimensional coordinates of each vertex, the displaced three-dimensional coordinates of each vertex when the three-dimensional face model is driven to make the expression or action corresponding to the animation data.
  • The rendering module 304 is then configured to render the three-dimensional face model onto the face image based on the face pose data, the displaced three-dimensional coordinates of each vertex, and the two-dimensional coordinates of each vertex.
  • In some embodiments, the device further includes:
  • a driving module 305 configured to drive, according to the animation data, the three-dimensional face model to make the expressions or actions corresponding to the animation data.
  • Fig. 6 is a block diagram of an electronic device 600 according to an exemplary embodiment.
  • The electronic device 600 may be a smartphone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop computer, or a desktop computer.
  • the electronic device 600 may also be referred to as user equipment, portable electronic device, laptop electronic device, desktop electronic device, and other names.
  • The electronic device 600 includes:
  • a processor 601; and a memory 602 for storing instructions executable by the processor 601;
  • wherein the processor 601 is configured to: obtain an initial three-dimensional face model and a face image; perform face recognition on the face image to obtain face pose data;
  • obtain, according to the face pose data, the projection parameters of the device that collected the face image, and the three-dimensional coordinates of each vertex of the initial three-dimensional face model, the two-dimensional coordinates of each vertex projected onto the imaging plane of the device when the initial three-dimensional face model is in the pose corresponding to the face pose data;
  • and perform texture mapping processing on the initial three-dimensional face model according to the two-dimensional coordinates of each vertex and the face image, to obtain a three-dimensional face model of the face image.
  • the processor 601 is specifically configured to execute:
  • Face recognition is performed on the face image, and the position and orientation of the face in the recognized face image are used as the face pose data.
  • the processor 601 is specifically configured to execute:
  • use a face recognition algorithm to perform face recognition on the face image to obtain a displacement matrix and a rotation matrix, where the displacement matrix represents the position of the face in three-dimensional space when the device collected the face image, and the rotation matrix represents the orientation of the face in three-dimensional space at that time;
  • and use the matrix obtained by multiplying the displacement matrix and the rotation matrix as the face pose data.
  • In some embodiments, the projection parameters include a projection matrix, and the processor 601 is specifically configured to multiply, for each vertex, the three-dimensional coordinates of the vertex by the face pose matrix and then by the projection matrix to obtain the two-dimensional coordinates of the vertex.
  • the processor 601 is specifically configured to execute:
  • perform texture data collection on the face image according to the two-dimensional coordinates of each vertex, and perform texture mapping processing on the initial three-dimensional face model according to the collected texture data, to obtain the three-dimensional face model.
  • the processor 601 is further configured to execute:
  • render the three-dimensional face model onto the face image according to the face pose data, the three-dimensional coordinates, and the two-dimensional coordinates of each vertex.
  • the processor 601 is specifically configured to execute:
  • render the three-dimensional face model according to the face pose data, the three-dimensional coordinates, and the two-dimensional coordinates of each vertex; and overlay the rendered image on the face image.
  • the processor 601 is further configured to execute:
  • acquire, according to the animation data and the three-dimensional coordinates of each vertex, the displaced three-dimensional coordinates of each vertex when the three-dimensional face model is driven to make the corresponding expression or action according to the animation data;
  • and render the three-dimensional face model onto the face image according to the face pose data, the displaced three-dimensional coordinates of each vertex, and the two-dimensional coordinates of each vertex.
  • the processor 601 is further configured to execute:
  • drive, according to the animation data, the three-dimensional face model to make the expressions or actions corresponding to the animation data.
  • the processor 601 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so on.
  • The processor 601 may be implemented in at least one hardware form of a digital signal processor (Digital Signal Processing, DSP), a field-programmable gate array (Field-Programmable Gate Array, FPGA), or a programmable logic array (Programmable Logic Array, PLA).
  • the processor 601 may also include a main processor and a coprocessor.
  • The main processor is a processor for processing data in the awake state, also called a central processing unit (Central Processing Unit, CPU); the coprocessor is a low-power processor for processing data in the standby state.
  • In some embodiments, the processor 601 may be integrated with a graphics processing unit (Graphics Processing Unit, GPU), which is used to render and draw the content that needs to be displayed on the display screen.
  • the processor 601 may further include an artificial intelligence (AI) processor, which is used to process computing operations related to machine learning.
  • the memory 602 may include one or more computer-readable storage media, which may be non-transitory.
  • the memory 602 may also include high-speed random access memory, and non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices.
  • The non-transitory computer-readable storage medium in the memory 602 is used to store at least one instruction, which is executed by the processor 601 to implement the three-dimensional face reconstruction method provided by the method embodiments of the present application.
  • the electronic device 600 may optionally further include: a peripheral device interface 603 and at least one peripheral device.
  • the processor 601, the memory 602, and the peripheral device interface 603 may be connected by a bus or a signal line.
  • Each peripheral device may be connected to the peripheral device interface 603 through a bus, a signal line, or a circuit board.
  • the peripheral device includes: at least one of a radio frequency circuit 604, a display screen 605, a camera 606, an audio circuit 607, a positioning component 608, and a power supply 609.
  • the peripheral device interface 603 may be used to connect at least one peripheral device related to Input/Output (I/O) to the processor 601 and the memory 602.
  • In some embodiments, the processor 601, the memory 602, and the peripheral device interface 603 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 601, the memory 602, and the peripheral device interface 603 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
  • The radio frequency circuit 604 is used to receive and transmit radio frequency (Radio Frequency, RF) signals, also called electromagnetic signals.
  • the radio frequency circuit 604 communicates with a communication network and other communication devices through electromagnetic signals.
  • the radio frequency circuit 604 converts the electrical signal into an electromagnetic signal for transmission, or converts the received electromagnetic signal into an electrical signal.
  • In some embodiments, the radio frequency circuit 604 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so on.
  • the radio frequency circuit 604 can communicate with other electronic devices through at least one wireless communication protocol.
  • the wireless communication protocol includes but is not limited to: metropolitan area networks, various generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or wireless fidelity (WiFi) networks.
  • the radio frequency circuit 604 may further include a circuit related to short-range wireless communication (Near Field Communication, NFC), which is not limited in this application.
  • the display screen 605 is used to display a user interface (User Interface, UI).
  • the UI may include graphics, text, icons, video, and any combination thereof.
  • the display screen 605 also has the ability to collect touch signals on or above the surface of the display screen 605.
  • the touch signal can be input to the processor 601 as a control signal for processing.
  • the display screen 605 can also be used to provide virtual buttons and/or virtual keyboards, also called soft buttons and/or soft keyboards.
  • In some embodiments, there may be one display screen 605, provided on the front panel of the electronic device 600; in other embodiments, there may be at least two display screens 605, respectively disposed on different surfaces of the electronic device 600 or in a folded design; in still other embodiments, the display screen 605 may be a flexible display screen disposed on a curved or folding surface of the electronic device 600. The display screen 605 may even be set as a non-rectangular irregular figure, that is, a special-shaped screen.
  • The display screen 605 can be made of materials such as a liquid crystal display (Liquid Crystal Display, LCD) or an organic light-emitting diode (Organic Light-Emitting Diode, OLED).
  • the camera component 606 is used to collect images or videos.
  • the camera assembly 606 includes a front camera and a rear camera.
  • the front camera is set on the front panel of the electronic device, and the rear camera is set on the back of the electronic device.
  • In some embodiments, there are at least two rear cameras, each being one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so that the main camera and the depth-of-field camera can be fused to realize a background blur function, and the main camera and the wide-angle camera can be fused to realize panoramic shooting, virtual reality (Virtual Reality, VR) shooting, or other fused shooting functions.
  • the camera assembly 606 may also include a flash.
  • The flash can be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash refers to a combination of a warm-light flash and a cold-light flash, which can be used for light compensation at different color temperatures.
  • the audio circuit 607 may include a microphone and a speaker.
  • the microphone is used to collect sound waves of the user and the environment, and convert the sound waves into electrical signals and input them to the processor 601 for processing, or input them to the radio frequency circuit 604 to implement voice communication.
  • the microphone can also be an array microphone or an omnidirectional acquisition microphone.
  • the speaker is used to convert the electrical signal from the processor 601 or the radio frequency circuit 604 into sound waves.
  • the speaker can be a traditional thin-film speaker or a piezoelectric ceramic speaker.
  • When the speaker is a piezoelectric ceramic speaker, it can not only convert electrical signals into sound waves audible to humans, but also convert electrical signals into sound waves inaudible to humans for ranging purposes.
  • the audio circuit 607 may further include a headphone jack.
  • the positioning component 608 is used to locate the current geographic location of the electronic device 600 to implement navigation or location-based services (Location Based Services, LBS).
  • The positioning component 608 may be a positioning component based on the Global Positioning System (GPS) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
  • the power supply 609 is used to supply power to various components in the electronic device 600.
  • the power source 609 may be alternating current, direct current, disposable batteries, or rechargeable batteries.
  • the rechargeable battery may support wired charging or wireless charging.
  • the rechargeable battery can also be used to support fast charging technology.
  • the electronic device 600 further includes one or more sensors 610.
  • the one or more sensors 610 include, but are not limited to: an acceleration sensor 611, a gyro sensor 612, a pressure sensor 613, a fingerprint sensor 614, an optical sensor 615, and a proximity sensor 616.
  • the acceleration sensor 611 can detect the magnitude of acceleration on the three coordinate axes of the coordinate system established by the electronic device 600.
  • the acceleration sensor 611 can be used to detect the components of gravity acceleration on three coordinate axes.
  • the processor 601 may control the touch screen 605 to display the user interface in a landscape view or a portrait view according to the gravity acceleration signal collected by the acceleration sensor 611.
  • the acceleration sensor 611 can also be used for game or user movement data collection.
  • the gyro sensor 612 can detect the body direction and the rotation angle of the electronic device 600, and the gyro sensor 612 can cooperate with the acceleration sensor 611 to collect a 3D action of the user on the electronic device 600.
  • the processor 601 can realize the following functions according to the data collected by the gyro sensor 612: motion sensing (such as changing the UI according to the user's tilt operation), image stabilization during shooting, game control, and inertial navigation.
  • the pressure sensor 613 may be disposed on the side frame of the electronic device 600 and/or the lower layer of the touch display 605.
  • the pressure sensor 613 can detect the user's grip signal on the electronic device 600, and the processor 601 can perform left-right hand recognition or shortcut operation according to the grip signal collected by the pressure sensor 613.
  • the processor 601 controls the operability control on the UI interface according to the user's pressure operation on the touch display 605.
  • the operability control includes at least one of a button control, a scroll bar control, an icon control, and a menu control.
  • the fingerprint sensor 614 is used to collect the user's fingerprint, and the processor 601 identifies the user's identity according to the fingerprint collected by the fingerprint sensor 614, or the fingerprint sensor 614 identifies the user's identity based on the collected fingerprint. When the user's identity is recognized as a trusted identity, the processor 601 authorizes the user to perform related sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, paying, and changing settings.
  • the fingerprint sensor 614 may be provided on the front, back, or side of the electronic device 600. When a physical button or manufacturer logo is provided on the electronic device 600, the fingerprint sensor 614 may be integrated with the physical button or manufacturer logo.
  • the optical sensor 615 is used to collect the ambient light intensity.
  • the processor 601 can control the display brightness of the touch display 605 according to the ambient light intensity collected by the optical sensor 615. Specifically, when the ambient light intensity is high, the display brightness of the touch display 605 is increased; when the ambient light intensity is low, the display brightness of the touch display 605 is decreased.
  • the processor 601 may also dynamically adjust the shooting parameters of the camera assembly 606 according to the ambient light intensity collected by the optical sensor 615.
  • The proximity sensor 616, also called a distance sensor, is usually provided on the front panel of the electronic device 600.
  • the proximity sensor 616 is used to collect the distance between the user and the front of the electronic device 600.
  • When the proximity sensor 616 detects that the distance between the user and the front of the electronic device 600 gradually decreases, the processor 601 controls the touch display 605 to switch from the screen-on state to the screen-off state; when the proximity sensor 616 detects that the distance between the user and the front of the electronic device 600 gradually increases, the processor 601 controls the touch display 605 to switch from the screen-off state to the screen-on state.
  • The structure shown in Fig. 6 does not constitute a limitation on the electronic device 600, which may include more or fewer components than shown, combine certain components, or adopt a different component arrangement.
  • a non-transitory computer-readable storage medium is also provided.
  • When the instructions in the storage medium are executed by the processor of the electronic device, the electronic device can perform any of the above three-dimensional face reconstruction methods, for example, the method shown in Fig. 1 or the method shown in Fig. 2.
  • The non-transitory computer-readable storage medium may be a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a compact disc read-only memory (Compact Disc Read-Only Memory, CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, or the like.
  • An application program product is also provided.
  • When instructions in the application program product are executed by the processor of the electronic device, the electronic device can execute any of the above three-dimensional face reconstruction methods, such as the method shown in Fig. 1 or the method shown in Fig. 2.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)
  • Image Generation (AREA)

Abstract

The present invention relates to a three-dimensional face reconstruction method and apparatus, an electronic device, and a storage medium, belonging to the technical field of computers. The method comprises: acquiring an initial three-dimensional face model and a face image (S11); performing face recognition on the face image to obtain face pose data (S12); acquiring, according to the face pose data, a projection parameter of a device collecting the face image, and three-dimensional coordinates of each vertex of the initial three-dimensional face model, two-dimensional coordinates of each vertex projected onto an imaging plane of the device when the initial three-dimensional face model is in a pose corresponding to the face pose data (S13); and performing texture mapping processing on the initial three-dimensional face model according to the two-dimensional coordinates of each vertex and the face image, so as to obtain a three-dimensional face model of the face image (S14). The method simplifies the reconstruction process, requires little computation, and improves the efficiency of three-dimensional face reconstruction.
PCT/CN2019/128900 2019-01-04 2019-12-26 Three-dimensional face reconstruction method and apparatus, electronic device, and storage medium WO2020140832A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910008837.3A CN109767487A (zh) 2019-01-04 2019-01-04 Three-dimensional face reconstruction method and apparatus, electronic device, and storage medium
CN201910008837.3 2019-01-04

Publications (1)

Publication Number Publication Date
WO2020140832A1 (fr)

Family

ID=66453244

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/128900 WO2020140832A1 (fr) 2019-01-04 2019-12-26 Three-dimensional face reconstruction method and apparatus, electronic device, and storage medium

Country Status (2)

Country Link
CN (1) CN109767487A (fr)
WO (1) WO2020140832A1 (fr)


Families Citing this family (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109767487A (zh) * 2019-01-04 2019-05-17 北京达佳互联信息技术有限公司 Three-dimensional face reconstruction method and apparatus, electronic device, and storage medium
CN110309554B (zh) * 2019-06-12 2021-01-15 清华大学 Three-dimensional human body reconstruction method and apparatus from video based on clothing modeling and simulation
CN110533777B (zh) * 2019-08-01 2020-09-15 北京达佳互联信息技术有限公司 Three-dimensional face image correction method and apparatus, electronic device, and storage medium
CN112406608B (zh) * 2019-08-23 2022-06-21 国创移动能源创新中心(江苏)有限公司 Charging pile and automatic charging device and method thereof
CN110555815B (zh) * 2019-08-30 2022-05-20 维沃移动通信有限公司 Image processing method and electronic device
CN110675413B (zh) * 2019-09-27 2020-11-13 腾讯科技(深圳)有限公司 Three-dimensional face model construction method and apparatus, computer device, and storage medium
CN110796083B (zh) * 2019-10-29 2023-07-04 腾讯科技(深圳)有限公司 Image display method and apparatus, terminal, and storage medium
CN111160278B (zh) * 2019-12-31 2023-04-07 陕西西图数联科技有限公司 Face texture structure data acquisition method based on a single image sensor
CN111340943B (zh) * 2020-02-26 2023-01-03 北京市商汤科技开发有限公司 Image processing method, apparatus, device, and storage medium
CN111460937B (zh) * 2020-03-19 2023-12-19 深圳市新镜介网络有限公司 Facial feature point positioning method and apparatus, terminal device, and storage medium
CN113643348B (zh) * 2020-04-23 2024-02-06 杭州海康威视数字技术股份有限公司 Face attribute analysis method and apparatus
CN111626924B (zh) * 2020-05-28 2023-08-15 维沃移动通信有限公司 Image blurring processing method and apparatus, electronic device, and readable storage medium
CN113763531B (zh) * 2020-06-05 2023-11-28 北京达佳互联信息技术有限公司 Three-dimensional face reconstruction method and apparatus, electronic device, and storage medium
CN111753739B (zh) * 2020-06-26 2023-10-31 北京百度网讯科技有限公司 Object detection method, apparatus, device, and storage medium
CN112883870A (zh) * 2021-02-22 2021-06-01 北京中科深智科技有限公司 Face image mapping method and system
CN113129362B (zh) * 2021-04-23 2024-05-10 北京地平线机器人技术研发有限公司 Method and apparatus for acquiring three-dimensional coordinate data
CN115019021A (zh) * 2022-06-02 2022-09-06 北京字跳网络技术有限公司 Image processing method, apparatus, device, and storage medium
CN117237204A (zh) * 2022-06-15 2023-12-15 荣耀终端有限公司 Image processing method, electronic device, and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102999942A (zh) * 2012-12-13 2013-03-27 清华大学 Three-dimensional face reconstruction method
US20170316598A1 (en) * 2015-05-22 2017-11-02 Tencent Technology (Shenzhen) Company Limited 3d human face reconstruction method, apparatus and server
CN109035394A (zh) * 2018-08-22 2018-12-18 广东工业大学 Face three-dimensional model reconstruction method, apparatus, device, system, and mobile terminal
CN109767487A (zh) * 2019-01-04 2019-05-17 北京达佳互联信息技术有限公司 Three-dimensional face reconstruction method and apparatus, electronic device, and storage medium

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0778267A (ja) * 1993-07-09 1995-03-20 Silicon Graphics Inc Method for displaying shadows and computer-controlled display system
CN101515324A (zh) * 2009-01-21 2009-08-26 上海银晨智能识别科技有限公司 Face recognition surveillance system and method suitable for multiple poses
US8933925B2 (en) * 2009-06-15 2015-01-13 Microsoft Corporation Piecewise planar reconstruction of three-dimensional scenes
CN102074015A (zh) * 2011-02-24 2011-05-25 哈尔滨工业大学 Three-dimensional reconstruction method of a target object based on a two-dimensional image sequence
CN108765550B (zh) * 2018-05-09 2021-03-30 华南理工大学 Three-dimensional face reconstruction method based on a single picture
CN108921795A (zh) * 2018-06-04 2018-11-30 腾讯科技(深圳)有限公司 Image fusion method and apparatus, and storage medium


Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112037320A (zh) * 2020-09-01 2020-12-04 腾讯科技(深圳)有限公司 Image processing method, apparatus and device, and computer-readable storage medium
CN112037320B (zh) * 2020-09-01 2023-10-20 腾讯科技(深圳)有限公司 Image processing method, apparatus and device, and computer-readable storage medium
CN112613357A (zh) * 2020-12-08 2021-04-06 深圳数联天下智能科技有限公司 Face measurement method and apparatus, electronic device, and medium
CN112613357B (zh) * 2020-12-08 2024-04-09 深圳数联天下智能科技有限公司 Face measurement method and apparatus, electronic device, and medium
CN112734890A (zh) * 2020-12-22 2021-04-30 上海影谱科技有限公司 Face replacement method and apparatus based on three-dimensional reconstruction
CN112734890B (zh) * 2020-12-22 2023-11-10 上海影谱科技有限公司 Face replacement method and apparatus based on three-dimensional reconstruction
CN112652057B (zh) * 2020-12-30 2024-05-07 北京百度网讯科技有限公司 Method, apparatus, device, and storage medium for generating a three-dimensional human body model
CN112652057A (zh) * 2020-12-30 2021-04-13 北京百度网讯科技有限公司 Method, apparatus, device, and storage medium for generating a three-dimensional human body model
CN112766215A (zh) * 2021-01-29 2021-05-07 北京字跳网络技术有限公司 Face fusion method and apparatus, electronic device, and storage medium
CN113763532A (zh) * 2021-04-19 2021-12-07 腾讯科技(深圳)有限公司 Human-computer interaction method, apparatus, device, and medium based on a three-dimensional virtual object
CN113763532B (zh) * 2021-04-19 2024-01-19 腾讯科技(深圳)有限公司 Human-computer interaction method, apparatus, device, and medium based on a three-dimensional virtual object
CN113343879A (zh) * 2021-06-18 2021-09-03 厦门美图之家科技有限公司 Panoramic face image production method and apparatus, electronic device, and storage medium
CN113658313A (zh) * 2021-09-09 2021-11-16 北京达佳互联信息技术有限公司 Face model rendering method and apparatus, and electronic device
CN113658313B (zh) * 2021-09-09 2024-05-17 北京达佳互联信息技术有限公司 Face model rendering method and apparatus, and electronic device
CN115082640A (zh) * 2022-08-01 2022-09-20 聚好看科技股份有限公司 Single-image-based 3D face model texture reconstruction method and device
CN115631285B (zh) * 2022-11-25 2023-05-02 北京红棉小冰科技有限公司 Unified-driving-based face rendering method, apparatus, device, and storage medium
CN115631285A (zh) * 2022-11-25 2023-01-20 北京红棉小冰科技有限公司 Unified-driving-based face rendering method, apparatus, device, and storage medium
CN116978102A (zh) * 2023-08-04 2023-10-31 深圳市英锐存储科技有限公司 Face feature modeling and recognition method, chip, and terminal
CN117496059A (zh) * 2023-11-03 2024-02-02 北京元点未来科技有限公司 Three-dimensional imaging system using AIGC technology based on spatial algorithms
CN117496059B (zh) * 2023-11-03 2024-04-12 北京元点未来科技有限公司 Three-dimensional imaging system using AIGC technology based on spatial algorithms
CN117496019B (zh) * 2023-12-29 2024-04-05 南昌市小核桃科技有限公司 Image animation processing method and system for driving static images
CN117496019A (zh) * 2023-12-29 2024-02-02 南昌市小核桃科技有限公司 Image animation processing method and system for driving static images

Also Published As

Publication number Publication date
CN109767487A (zh) 2019-05-17

Similar Documents

Publication Publication Date Title
WO2020140832A1 (fr) Procédé et appareil de reconstruction faciale en trois dimensions et dispositif électronique et support d'informations
US11989350B2 (en) Hand key point recognition model training method, hand key point recognition method and device
US11367307B2 (en) Method for processing images and electronic device
EP3933783A1 (fr) Procédé et appareil d'application informatique destinés à générer un modèle de visage tridimensionnel, dispositif informatique et support d'informations
US11436779B2 (en) Image processing method, electronic device, and storage medium
EP3779883A1 (fr) Procédé et dispositif de repositionnement dans un processus de suivi d'orientation de caméra, et support d'informations
CN109308727B (zh) 虚拟形象模型生成方法、装置及存储介质
WO2020125785A1 (fr) Procédé de rendu capillaire, dispositif, appareil électronique et support de stockage
CN111324250B (zh) 三维形象的调整方法、装置、设备及可读存储介质
CN112907725B (zh) 图像生成、图像处理模型的训练、图像处理方法和装置
CN112287852B (zh) 人脸图像的处理方法、显示方法、装置及设备
CN110427110A (zh) 一种直播方法、装置以及直播服务器
CN109947338B (zh) 图像切换显示方法、装置、电子设备及存储介质
CN111680758B (zh) 图像训练样本生成方法和装置
CN112337105B (zh) 虚拟形象生成方法、装置、终端及存储介质
WO2022052620A1 (fr) Procédé de génération d'image et dispositif électronique
WO2020233403A1 (fr) Procédé et appareil d'affichage de visage personnalisé pour un personnage tridimensionnel et dispositif et support de stockage
CN110956580B (zh) 图像换脸的方法、装置、计算机设备以及存储介质
CN110796083B (zh) 图像显示方法、装置、终端及存储介质
WO2022199102A1 (fr) Procédé et dispositif de traitement d'image
CN111862148A (zh) 实现视觉跟踪的方法、装置、电子设备及介质
KR20220124432A (ko) 2차원 이미지에 기초한 3차원 가상 의류 착용방법 및 그 시스템
CN109767482B (zh) 图像处理方法、装置、电子设备及存储介质
CN112967261B (zh) 图像融合方法、装置、设备及存储介质
CN109685881B (zh) 一种体绘制方法、装置及智能设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19907896

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19907896

Country of ref document: EP

Kind code of ref document: A1