WO2016188318A1 - 3D human face reconstruction method, apparatus and server - Google Patents

3D human face reconstruction method, apparatus and server

Info

Publication number
WO2016188318A1
Authority
WO
WIPO (PCT)
Prior art keywords
face
face model
preliminary
model
points
Prior art date
Application number
PCT/CN2016/081452
Other languages
English (en)
French (fr)
Inventor
汪铖杰
李季檩
黄飞跃
张磊
Original Assignee
腾讯科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司 filed Critical 腾讯科技(深圳)有限公司
Publication of WO2016188318A1
Priority to US15/652,009 (published as US10055879B2)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/04 Texture mapping
    • G06T 15/10 Geometric effects
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/60 Type of objects
    • G06V 20/64 Three-dimensional objects
    • G06V 20/653 Three-dimensional objects by matching three-dimensional models, e.g. conformal mapping of Riemann surfaces
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168 Feature extraction; Face representation
    • G06V 40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30196 Human being; Person
    • G06T 2207/30201 Face
    • G06T 2210/00 Indexing scheme for image generation or computer graphics
    • G06T 2210/44 Morphing
    • G06T 2215/00 Indexing scheme for image rendering
    • G06T 2215/16 Using real world measurements to influence rendering
    • G06T 2219/00 Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T 2219/20 Indexing scheme for editing of 3D models
    • G06T 2219/2021 Shape modification

Definitions

  • the present application relates to the field of computer technology, and in particular, to a 3D face reconstruction method, apparatus, and server.
  • In recent years, with the development of face recognition, face-based video conferencing, 3D facial animation, and virtual technologies, 3D face modeling has attracted more and more attention from researchers.
  • 3D face shape reconstruction is one of the key technologies for 3D face modeling.
  • Most current methods of 3D face reconstruction collect multiple face photos from multiple angles, or a single frontal photo of a face, to obtain a 3D face, and then reconstruct the 3D face from those photos.
  • In real scenarios, however, a frontal face image often cannot be captured; during face recognition, for example, only a side view of the face may be captured. In such cases, the existing 3D face reconstruction methods either cannot build a 3D face at all or build one of extremely poor quality.
  • In view of this, the present application provides a 3D face reconstruction method, device, and server, so that a 3D face can still be constructed from a photo containing only a side view of a human face.
  • A 3D face reconstruction method includes: acquiring a 2D face image for 3D face reconstruction and determining feature points on the 2D face image, the feature points being used to represent the face contour; determining pose parameters of the face using the feature points, and adjusting the pose of a pre-acquired universal three-dimensional face model according to the pose parameters; determining corresponding points of the feature points on the universal three-dimensional face model and adjusting corresponding points in an occluded state, to obtain a preliminary 3D face model; performing deformation adjustment on the preliminary 3D face model so that the positional relationships among the corresponding points are consistent with those among the feature points on the 2D face image, to obtain a deformed 3D face model; and performing texture mapping on the deformed 3D face model to obtain a 3D face.
  • a 3D face reconstruction device includes:
  • an image feature point determining unit, configured to acquire a 2D face image for 3D face reconstruction and determine feature points on the 2D face image, the feature points being used to represent the face contour;
  • a posture adjustment unit, configured to determine pose parameters of the face using the feature points, and to adjust the pose of a pre-acquired universal three-dimensional face model according to the pose parameters;
  • a feature point matching unit, configured to determine corresponding points of the feature points on the universal three-dimensional face model, and to adjust corresponding points in an occluded state, to obtain a preliminary 3D face model;
  • a model deformation unit, configured to perform deformation adjustment on the preliminary 3D face model so that the positional relationships among the corresponding points on the preliminary 3D face model are consistent with the positional relationships among the feature points on the 2D face image, to obtain a deformed 3D face model;
  • a texture mapping unit configured to perform texture mapping on the deformed 3D face model to obtain a 3D face.
  • a server includes the above 3D face reconstruction device.
  • The 3D face reconstruction method provided by the embodiments of the present application first determines feature points on the acquired 2D face image, determines the pose parameters of the face from the feature points, and adjusts the pose of the universal three-dimensional face model according to the pose parameters. It then determines the corresponding points of the feature points on the universal three-dimensional face model and adjusts the corresponding points that are occluded, to obtain a preliminary 3D face model; the preliminary 3D face model is then deformed, and the deformed 3D face model is texture-mapped to obtain the final 3D face.
  • The 2D face image acquired in the present application may be a side view of a face: the pose parameters of the side-view image are determined from the feature points, and the pose of the universal three-dimensional face model is then adjusted so that it is consistent with the pose of the face, after which the 3D face is obtained through subsequent processing. Because the present application does not restrict the shooting angle of the 2D image, the method is more robust, and the accuracy of face recognition is further improved.
  • FIG. 1 is a schematic structural diagram of a server hardware according to an embodiment of the present application.
  • FIG. 2 is a flowchart of a 3D face reconstruction method disclosed in an embodiment of the present application.
  • FIG. 3 is a schematic diagram of a way of selecting feature points on a 2D face image according to an example of an embodiment of the present application.
  • FIG. 4 is a schematic diagram of the rotation of a face image in a spatial rectangular coordinate system according to an example of an embodiment of the present application.
  • FIG. 5 is a flowchart of another 3D face reconstruction method disclosed in an embodiment of the present application.
  • FIG. 6 is a flowchart of still another 3D face reconstruction method disclosed in an embodiment of the present application.
  • FIG. 7 is a schematic structural diagram of a 3D face reconstruction device according to an embodiment of the present application.
  • FIG. 8 is a schematic structural diagram of a feature point matching unit according to an embodiment of the present application.
  • FIG. 9 is a schematic structural diagram of a model deformation unit disclosed in an embodiment of the present application.
  • FIG. 10 is a schematic diagram of a 3D face reconstruction terminal disclosed in an embodiment of the present application.
  • The 3D face reconstruction method provided by the embodiments of the present application is implemented on a server, which may be a computer, a notebook, or the like. Before introducing the 3D face reconstruction method of the present application, we first introduce the hardware structure of the server. As shown in FIG. 1, the server may include:
  • a processor 1, a communication interface 2, a memory 3, a communication bus 4, and a display screen 5;
  • the processor 1, the communication interface 2, the memory 3, and the display screen 5 communicate with one another via the communication bus 4.
  • As shown in FIG. 2, the method includes steps S200 to S240.
  • Step S200: Acquire a 2D face image for 3D face reconstruction, and determine a preset number of feature points on the 2D face image.
  • the 2D face image can be acquired through the communication interface 2.
  • the communication interface 2 can be an interface of the communication module, such as an interface of the GSM module.
  • the acquired 2D face image may be stored in the memory 3, which may include a high speed RAM memory, and may also include a non-volatile memory such as at least one disk memory.
  • The acquired 2D face image may be stored locally or downloaded from the network.
  • The format of the image is not limited in this embodiment; it may be the JPEG format, the BMP format, or the like.
  • The 2D face image acquired in the present application need not be a frontal face image; an image of a face that deviates from the frontal angle is also acceptable.
  • Optionally, in one embodiment, it suffices that both eyes of the face are visible in the 2D face image.
  • The selected feature points may be preset points on the face, and a plurality of feature points may be used.
  • The selected feature points are used to represent the contour of the face. As shown in FIG. 3, the eyebrows, eyes, nose, mouth, chin, and the cheeks on both sides describe a face well, so these positions, or certain points at these positions, can be used as feature points.
  • Feature points can be determined either automatically or manually.
  • For the former, a program implementing a suitable algorithm can automatically read the feature-point coordinates from the image; for the latter, the positions can be specified on the image by hand.
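  • As an illustration of the automated route, the following minimal Python sketch detects feature points with dlib's pretrained 68-point landmark detector. The choice of dlib and the model file shape_predictor_68_face_landmarks.dat are illustrative assumptions; the patent does not name a specific detector.

        import dlib
        import numpy as np

        def detect_feature_points(image_path,
                                  model_path="shape_predictor_68_face_landmarks.dat"):
            """Return (x, y) pixel coordinates of 68 facial landmarks
            (eyebrows, eyes, nose, mouth, jawline)."""
            detector = dlib.get_frontal_face_detector()   # HOG-based face detector
            predictor = dlib.shape_predictor(model_path)  # pretrained landmark model
            img = dlib.load_rgb_image(image_path)
            faces = detector(img, 1)                      # upsample once for small faces
            if len(faces) == 0:
                raise ValueError("no face found in image")
            shape = predictor(img, faces[0])
            return np.array([(shape.part(i).x, shape.part(i).y) for i in range(68)])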
  • Step S210: Determine pose parameters of the face using the feature points, and adjust the pose of the pre-acquired universal three-dimensional face model according to the pose parameters.
  • This step may be performed by the processor 1; the code corresponding to this step may be stored in the memory 3 and is called by the processor 1 when the step is executed.
  • The processor 1 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present application.
  • The processor 1 can determine the pose parameters of the face from the positional relationships among the feature points.
  • The pose parameters can be expressed as the direction and angle of rotation of the face in a spatial rectangular coordinate system. Referring to FIG. 4, the pose parameters can be formed by the angles through which the face image is rotated about the X, Y, and Z axes. The universal three-dimensional face model is then adjusted according to the determined pose parameters so that its pose is consistent with that of the face image.
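  • One common way to recover these three rotation angles from the 2D feature points is a perspective-n-point (PnP) solve against the corresponding points of a generic 3D face template; the sketch below uses OpenCV for illustration. The template coordinates, the crude pinhole intrinsics, and the Euler-angle convention are assumptions, not values fixed by the patent.

        import cv2
        import numpy as np

        def estimate_pose(landmarks_2d, template_3d, image_size):
            """Estimate the face's rotation about the X, Y and Z axes.

            landmarks_2d: (N, 2) pixel coordinates of the feature points
            template_3d:  (N, 3) coordinates of the same points on a generic face model
            image_size:   (height, width) of the 2D face image
            """
            h, w = image_size
            focal = w  # rough pinhole assumption: focal length ~ image width
            camera_matrix = np.array([[focal, 0, w / 2.0],
                                      [0, focal, h / 2.0],
                                      [0, 0, 1]], dtype=np.float64)
            dist_coeffs = np.zeros(4)  # assume no lens distortion
            ok, rvec, _ = cv2.solvePnP(template_3d.astype(np.float64),
                                       landmarks_2d.astype(np.float64),
                                       camera_matrix, dist_coeffs)
            if not ok:
                raise RuntimeError("pose estimation failed")
            R, _ = cv2.Rodrigues(rvec)  # rotation vector -> 3x3 rotation matrix
            # ZYX Euler decomposition: rotations about X (pitch), Y (yaw), Z (roll)
            pitch = np.degrees(np.arctan2(R[2, 1], R[2, 2]))
            yaw = np.degrees(np.arctan2(-R[2, 0], np.hypot(R[2, 1], R[2, 2])))
            roll = np.degrees(np.arctan2(R[1, 0], R[0, 0]))
            return pitch, yaw, roll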
  • Step S220: Determine the corresponding points of the feature points on the universal three-dimensional face model, and adjust corresponding points in the occluded state, to obtain a preliminary 3D face model.
  • the depth information of the corresponding point of the three-dimensional face model may be used as the depth information of the corresponding feature point.
  • This step may be performed by the processor 1; the code corresponding to this step may be stored in the memory 3 and is called by the processor 1 when the step is executed.
  • Because a plurality of feature points is generally used, the processor 1 obtains a series of corresponding points when the feature points on the 2D face image are marked at their corresponding locations on the universal three-dimensional face model. Since the pose of the universal model was adjusted in step S210, when the 2D face image is not frontal the adjusted model is not a frontal model either, so some of the corresponding points marked on it, on one side of the face, may be occluded. Such occluded corresponding points can be translated to make them visible; after this adjustment, the preliminary 3D face model is obtained.
  • Step S230: Perform deformation adjustment on the preliminary 3D face model.
  • This step may be performed by the processor 1; the code corresponding to this step may be stored in the memory 3 and is called by the processor 1 when the step is executed.
  • The deformation adjustment makes the positional relationships among the corresponding points on the preliminary 3D face model consistent with the positional relationships among the feature points on the 2D face image, yielding the deformed 3D face model.
  • For example, with the 2D face image as reference, the preliminary 3D face model is deformed to adjust attributes of the face such as its height and its fullness or thinness.
  • Step S240: Perform texture mapping on the deformed 3D face model to obtain a 3D face.
  • This step may be performed by the processor 1; the code corresponding to this step may be stored in the memory 3 and is called by the processor 1 when the step is executed.
  • the 3D face is mainly composed of two parts: geometric structure information and texture information.
  • Once the deformation yields the subject-specific face model, the geometric structure information of the face has been obtained; texture is then added to the model through texture mapping to obtain a realistic three-dimensional face.
  • The finally obtained 3D face can be displayed on the display screen 5.
  • The 3D face reconstruction method provided by the embodiments of the present application first determines a preset number of feature points on the acquired 2D face image, determines the pose parameters of the face from the feature points, and adjusts the pose of the universal three-dimensional face model according to the pose parameters. It then determines the corresponding points of each feature point on the universal three-dimensional face model and translates the corresponding points that are occluded, to obtain a preliminary 3D face model; the preliminary model is then deformed and the deformed 3D face model is texture-mapped to obtain the final 3D face.
  • The 2D face image acquired in the present application may be a side view of a face: the pose parameters of the side-view image are determined from the feature points, and the pose of the universal three-dimensional face model is then adjusted so that it is consistent with the pose of the face, after which the 3D face is obtained through subsequent processing. Because the present application does not restrict the shooting angle of the 2D image, the method is more robust, and the accuracy of face recognition is further improved.
  • As shown in FIG. 5, the method includes steps S500 to S560.
  • Step S500: Acquire a 2D face image for 3D face reconstruction, and determine feature points on the 2D face image.
  • Step S510: Determine pose parameters of the face using the feature points, and adjust the pose of the pre-acquired universal three-dimensional face model according to the pose parameters.
  • Step S520: Determine the corresponding points of the feature points on the universal three-dimensional face model and, for each corresponding point in the occluded state, determine the plane through it that is perpendicular to the vertical axis of the face. Referring to FIG. 4, when the face rotates about the Y axis, the right or left cheek may become occluded; for each occluded corresponding point, the plane perpendicular to the Y axis in which it lies must be determined.
  • Step S530: Determine the intersection curve of the plane with the universal three-dimensional face model.
  • Based on the plane determined in the previous step, the intersection curve of that plane with the universal three-dimensional face model is determined.
  • Step S540: Move the corresponding point to the outermost end of the intersection curve.
  • An occluded corresponding point is moved, along the intersection curve determined in the previous step, to the outermost end of the curve; the outermost end can also be viewed as the position farthest from the Y axis when the universal three-dimensional face model is projected onto the plane.
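  • A minimal numpy sketch of this translation adjustment, assuming the model is given as a vertex array with Y as the face's vertical axis; the band tolerance eps and the same-side sign convention for "outermost" are assumptions made for illustration:

        import numpy as np

        def move_to_outermost(vertices, occluded_idx, eps=1e-3):
            """Move an occluded corresponding point to the outermost end of the
            intersection of the mesh with the horizontal plane through it.

            vertices:     (V, 3) model vertices; Y is the vertical axis of the face
            occluded_idx: index of the occluded corresponding point
            """
            p = vertices[occluded_idx]
            # Approximate the plane/mesh intersection curve by the vertices whose
            # height is within eps of the occluded point's height.
            band = vertices[np.abs(vertices[:, 1] - p[1]) < eps]
            if band.shape[0] == 0:
                return p
            # Keep candidates on the occluded point's side of the face (same sign of X).
            side = band[np.sign(band[:, 0]) == np.sign(p[0])]
            candidates = side if side.shape[0] else band
            # The outermost end is the candidate farthest from the Y axis.
            radial = np.hypot(candidates[:, 0], candidates[:, 2])
            return candidates[np.argmax(radial)]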
  • Step S550: Perform deformation adjustment on the preliminary 3D face model.
  • Step S560: Perform texture mapping on the deformed 3D face model to obtain a 3D face.
  • As shown in FIG. 6, the method includes steps S600 to S660.
  • Step S600: Acquire a 2D face image for 3D face reconstruction, and determine feature points on the 2D face image.
  • In this embodiment of the present invention, a single 2D face image suffices for 3D face reconstruction, and a plurality of feature points may be selected on it.
  • Step S610: Determine pose parameters of the face using the feature points, and adjust the pose of the pre-acquired universal three-dimensional face model according to the pose parameters.
  • Step S620: Determine the corresponding points of the feature points on the universal three-dimensional face model, and adjust corresponding points in the occluded state, to obtain a preliminary 3D face model.
  • Step S630: With reference to the proportional relationships among the feature points on the 2D face image, compute the displacement of each corresponding point on the preliminary 3D face model relative to its feature point.
  • Step S640: Construct an interpolation function, and use it to compute the displacements of the points on the preliminary 3D face model other than the corresponding points.
  • Radial basis functions may be selected for constructing the interpolation function; the constructed interpolation function then yields the displacement of every point on the preliminary 3D face model other than the corresponding points.
  • Step S650: Adjust the preliminary 3D face model according to the displacements of the corresponding and non-corresponding points.
  • Steps S630 and S640 yield the displacement of every point on the preliminary 3D face model; the model is then deformed according to these displacements to obtain the deformed 3D face model.
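  • A compact sketch of this deformation using scipy's RBFInterpolator (steps S630 to S650); the thin-plate-spline kernel is one possible choice of radial basis function and is an assumption, as the patent does not fix a particular kernel:

        import numpy as np
        from scipy.interpolate import RBFInterpolator

        def deform_model(vertices, corr_idx, corr_displacements):
            """Deform the preliminary 3D face model.

            vertices:           (V, 3) vertices of the preliminary model
            corr_idx:           indices of the corresponding points
            corr_displacements: (K, 3) displacements computed for those points (S630)
            """
            anchors = vertices[corr_idx]  # points whose displacement is known
            # S640: build the interpolation function from the known displacements.
            rbf = RBFInterpolator(anchors, corr_displacements,
                                  kernel="thin_plate_spline")
            all_displacements = rbf(vertices)  # displacement of every model vertex
            all_displacements[corr_idx] = corr_displacements  # keep anchors exact
            # S650: apply the displacements to obtain the deformed model.
            return vertices + all_displacements

  • Because the interpolant is smooth, vertices near a corresponding point move almost as much as that point itself, which keeps the deformation natural.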
  • Step S660: Perform texture mapping on the deformed 3D face model to obtain a 3D face.
  • This embodiment introduces a specific implementation of deforming the preliminary 3D face model; the deformation brings the face model closer to the actual face.
  • The above texture mapping of the deformed 3D face model may specifically be performed using a similarity-preserving mesh parameterization method.
  • The similarity-preserving mesh parameterization method starts from the edge-and-angle relationships of the triangles of the mesh. Using the similarity criterion that two triangles are similar if the ratios of adjacent side lengths and the included angles are equal, it reproduces on the plane the adjacent-edge length ratios and included angles of the corresponding triangles of the three-dimensional mesh, thereby establishing a global system of linear equations. Solving this system yields the parameterized two-dimensional planar mesh, and hence the mapping between the vertices of the model and the triangle vertices in the two-dimensional plane.
  • The method is simple and fast to compute, and the triangle distortion after parameterization is small, so it can produce a good texture mapping result.
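  • Once the parameterization has produced per-vertex UV coordinates, a texture color can be sampled for each model vertex from the 2D face image. The following sketch uses bilinear sampling via scipy; this sampling step and the [0, 1] UV convention are illustrative assumptions rather than details specified by the patent:

        import numpy as np
        from scipy.ndimage import map_coordinates

        def sample_vertex_colors(image, uv):
            """Sample a texture color for every model vertex.

            image: (H, W, 3) 2D face image
            uv:    (V, 2) per-vertex texture coordinates in [0, 1] produced by the
                   mesh parameterization (u along width, v along height)
            """
            h, w = image.shape[:2]
            rows = uv[:, 1] * (h - 1)  # v -> pixel row
            cols = uv[:, 0] * (w - 1)  # u -> pixel column
            # Bilinear interpolation (order=1) per color channel.
            return np.stack([map_coordinates(image[..., c].astype(np.float64),
                                             [rows, cols], order=1)
                             for c in range(3)], axis=-1)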
  • The present application can determine the pose parameters of a single acquired 2D face image from its feature points and then adjust the pose of the universal three-dimensional face model so that it is consistent with the pose of the face, after which the 3D face is obtained through subsequent processing. Because the present application can reconstruct a 3D face from a single 2D image, the speed of 3D face reconstruction is further improved.
  • The 3D face reconstruction device provided by the embodiments of the present application is described below; the 3D face reconstruction device described below and the 3D face reconstruction method described above may be cross-referenced with each other.
  • The 3D face reconstruction device provided by this application can be applied to the server shown in FIG. 1.
  • FIG. 7 is a schematic structural diagram of a 3D face reconstruction device according to an embodiment of the present application.
  • the device includes:
  • the image feature point determining unit 71 is configured to acquire a 2D face image for 3D face reconstruction and determine feature points on the 2D face image, the feature points being used to represent the face contour;
  • the posture adjustment unit 72 is configured to determine pose parameters of the face using the feature points, and to adjust the pose of the pre-acquired universal three-dimensional face model according to the pose parameters;
  • the feature point matching unit 73 is configured to determine the corresponding points of the feature points on the universal three-dimensional face model, and to adjust corresponding points in the occluded state, to obtain a preliminary 3D face model;
  • the model deformation unit 74 is configured to perform deformation adjustment on the preliminary 3D face model, so that the positional relationships among the corresponding points on the preliminary 3D face model are consistent with the positional relationships among the feature points on the 2D face image, to obtain a deformed 3D face model;
  • the texture mapping unit 75 is configured to perform texture mapping on the deformed 3D face model to obtain a 3D face.
  • the embodiment of the present application further discloses an optional structure of the feature point matching unit 73.
  • As shown in FIG. 8, the feature point matching unit 73 may include:
  • a plane determining unit 731, configured to determine, for a corresponding point in the occluded state, the plane through it that is perpendicular to the vertical axis of the face;
  • a trajectory determining unit 732, configured to determine the intersection curve of the plane with the universal three-dimensional face model;
  • a corresponding point translation unit 733, configured to move the corresponding point to the outermost end of the intersection curve.
  • The embodiment of the present application further discloses an optional structure of the above model deformation unit 74. As shown in FIG. 9, the model deformation unit 74 may include:
  • the first displacement amount calculation unit 741 is configured to compute, with reference to the proportional relationships among the feature points on the 2D face image, the displacement of each corresponding point on the preliminary 3D face model relative to its feature point;
  • the second displacement amount calculation unit 742 is configured to construct an interpolation function and to compute, according to the interpolation function, the displacements of the points on the preliminary 3D face model other than the corresponding points;
  • a radial basis function can be selected for the construction of the interpolation function.
  • the displacement amount adjustment unit 743 is configured to adjust the preliminary 3D face model according to the displacement amounts of the corresponding points and the non-corresponding points.
  • Optionally, the texture mapping unit 75 may include: a first texture mapping subunit, configured to perform texture mapping using a similarity-preserving mesh parameterization method.
  • The 3D face reconstruction device provided by the embodiments of the present application first determines feature points on the acquired 2D face image, determines the pose parameters of the face from the feature points, and adjusts the pose of the universal three-dimensional face model according to the pose parameters. It then determines the corresponding points of each feature point on the universal three-dimensional face model and adjusts the corresponding points that are occluded, to obtain a preliminary 3D face model; the preliminary 3D face model is then deformed, and the deformed model is texture-mapped to obtain the final 3D face.
  • The 2D face image acquired in the present application may be a side view of a face: the pose parameters of the side-view image are determined from the feature points, and the pose of the universal three-dimensional face model is then adjusted so that it is consistent with the pose of the face, after which the 3D face is obtained through subsequent processing. Because the present application does not restrict the shooting angle of the 2D image, the method is more robust, and the accuracy of face recognition is further improved.
  • Referring to FIG. 10, another embodiment of the present application provides a terminal 600, which may include a communication unit 610, a memory 620 including one or more non-volatile readable storage media, an input unit 630, a display unit 640, a sensor 650, an audio circuit 660, a WiFi (wireless fidelity) module 670, a processor 680 including one or more processing cores, a power supply 690, and other components.
  • Those skilled in the art will appreciate that the terminal structure shown in FIG. 10 does not constitute a limitation on the terminal; the terminal may include more or fewer components than illustrated, combine certain components, or use a different arrangement of components. Specifically:
  • The communication unit 610 can be used to receive and send signals in the course of sending and receiving information or during a call.
  • The communication unit 610 may be an RF (Radio Frequency) circuit, a router, a modem, or another network communication device. In particular, when the communication unit 610 is an RF circuit, it receives downlink information from a base station and hands it to one or more processors 680 for processing, and it sends uplink data to the base station.
  • Generally, an RF circuit serving as the communication unit includes, but is not limited to, an antenna, at least one amplifier, a tuner, one or more oscillators, a Subscriber Identity Module (SIM) card, a transceiver, a coupler, an LNA (Low Noise Amplifier), a duplexer, and so on.
  • the communication unit 610 can also communicate with the network and other devices through wireless communication.
  • The wireless communication may use any communication standard or protocol, including but not limited to GSM (Global System of Mobile communication), GPRS (General Packet Radio Service), CDMA (Code Division Multiple Access), WCDMA (Wideband Code Division Multiple Access), LTE (Long Term Evolution), e-mail, SMS (Short Messaging Service), and the like.
  • the memory 620 can be used to store software programs and modules, and the processor 680 executes various functional applications and data processing by running software programs and modules stored in the memory 620.
  • The memory 620 may mainly include a program storage area and a data storage area, where the program storage area can store an operating system and the applications required by at least one function (such as a sound playback function or an image playback function), and the data storage area can store data created through use of the terminal 600 (such as audio data and a phone book).
  • In addition, the memory 620 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another solid-state storage device. Accordingly, the memory 620 may also include a memory controller to provide the processor 680 and the input unit 630 with access to the memory 620.
  • Input unit 630 can be used to receive input numeric or character information, as well as to generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function controls.
  • input unit 630 can include touch sensitive surface 630a as well as other input devices 630b.
  • The touch-sensitive surface 630a, also referred to as a touch display screen or a trackpad, can collect touch operations performed by the user on or near it (for example, operations performed on or near the touch-sensitive surface 630a with a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connection apparatus according to a preset program.
  • Optionally, the touch-sensitive surface 630a may include two parts: a touch detection apparatus and a touch controller.
  • The touch detection apparatus detects the position touched by the user, detects the signal produced by the touch operation, and passes the signal to the touch controller; the touch controller receives the touch information from the touch detection apparatus, converts it into contact coordinates, and sends them to the processor 680, and it can also receive commands sent by the processor 680 and execute them.
  • In addition, the touch-sensitive surface 630a can be implemented in multiple types, such as resistive, capacitive, infrared, and surface acoustic wave.
  • input unit 630 can also include other input devices 630b.
  • other input devices 630b may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control buttons, switch buttons, etc.), trackballs, mice, joysticks, and the like.
  • The display unit 640 can be used to display information entered by the user or information provided to the user, as well as the various graphical user interfaces of the terminal 600; these graphical user interfaces may be composed of graphics, text, icons, video, and any combination thereof.
  • the display unit 640 can include a display panel 640a.
  • the display panel 640a can be configured in the form of an LCD (Liquid Crystal Display), an OLED (Organic Light-Emitting Diode), or the like.
  • Further, the touch-sensitive surface 630a may cover the display panel 640a; when the touch-sensitive surface 630a detects a touch operation on or near it, the operation is passed to the processor 680 to determine the type of the touch event, and the processor 680 then provides corresponding visual output on the display panel 640a according to the type of the touch event.
  • Although the touch-sensitive surface 630a and the display panel 640a are shown as two separate components implementing the input and output functions, in some embodiments the touch-sensitive surface 630a may be integrated with the display panel 640a to implement the input and output functions.
  • Terminal 600 may also include at least one type of sensor 650, such as a light sensor, motion sensor, and other sensors.
  • The light sensor may include an ambient light sensor and a proximity sensor; the ambient light sensor can adjust the brightness of the display panel 640a according to the brightness of the ambient light, and the proximity sensor can turn off the display panel 640a and/or the backlight when the terminal 600 is moved to the ear.
  • As one kind of motion sensor, a gravity acceleration sensor can detect the magnitude of acceleration in each direction (generally along three axes) and, when stationary, can detect the magnitude and direction of gravity; it can be used in applications that recognize the phone's attitude (such as portrait/landscape switching, related games, and magnetometer attitude calibration) and in vibration-recognition-related functions (such as a pedometer or tap detection). A gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, and other sensors may also be configured on the terminal 600, and are not described further here.
  • the audio circuit 660, the speaker 660a, and the microphone 660b can provide an audio interface between the user and the terminal 600.
  • The audio circuit 660 can transmit the electrical signal converted from received audio data to the speaker 660a, which converts it into a sound signal for output; conversely, the microphone 660b converts collected sound signals into electrical signals, which the audio circuit 660 receives and converts into audio data. After the audio data is output to the processor 680 for processing, it is sent through the RF circuit 610 to, for example, another terminal, or it is output to the memory 620 for further processing.
  • the audio circuit 660 may also include an earbud jack to provide communication of the peripheral earphones with the terminal 600.
  • the terminal may be configured with a wireless communication unit 670, which may be a WiFi module.
  • WiFi is a short-range wireless transmission technology, and the terminal 600 can help users to send and receive emails, browse web pages, and access streaming media through the wireless communication unit 670, which provides wireless broadband Internet access for users.
  • Although FIG. 10 shows the wireless communication unit 670, it can be understood that it is not an essential part of the terminal 600 and may be omitted as needed without changing the essence of the invention.
  • The processor 680 is the control center of the terminal 600. It connects the various parts of the entire handset using various interfaces and lines, and performs the various functions of the terminal 600 and processes data by running or executing the software programs and/or modules stored in the memory 620 and invoking the data stored in the memory 620, thereby monitoring the handset as a whole.
  • the processor 680 may include one or more processing cores; preferably, the processor 680 may integrate an application processor and a modem processor, where the application processor mainly processes an operating system, a user interface, an application, and the like.
  • the modem processor primarily handles wireless communications. It will be appreciated that the above described modem processor may also not be integrated into the processor 680.
  • The terminal 600 also includes a power supply 690 (such as a battery) that supplies power to the various components. Preferably, the power supply may be logically connected to the processor 680 through a power management system, so that functions such as charging, discharging, and power-consumption management are implemented through the power management system.
  • Power supply 690 may also include any one or more of a DC or AC power source, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and the like.
  • the terminal 600 may further include a camera, a Bluetooth module, and the like, and details are not described herein.
  • An optional structure of the terminal 600 has been presented above in connection with FIG. 10, in which one or more modules are stored in the memory and configured to be executed by the one or more processors, the one or more modules having the following functions:
  • acquiring a 2D face image for 3D face reconstruction and determining feature points on the 2D face image, the feature points being used to represent the face contour;
  • determining pose parameters of the face using the feature points, and adjusting the pose of a pre-acquired universal three-dimensional face model according to the pose parameters;
  • determining corresponding points of the feature points on the universal three-dimensional face model, and adjusting corresponding points in the occluded state, to obtain a preliminary 3D face model;
  • performing deformation adjustment on the preliminary 3D face model so that the positional relationships among the corresponding points on the preliminary 3D face model are consistent with the positional relationships among the feature points on the 2D face image, to obtain a deformed 3D face model;
  • performing texture mapping on the deformed 3D face model to obtain a 3D face.
  • The adjusting of the corresponding points in the occluded state to obtain the preliminary 3D face model includes: determining, for a corresponding point in the occluded state, the plane through it that is perpendicular to the vertical axis of the face; determining the intersection curve of the plane with the universal three-dimensional face model; and moving the corresponding point to the outermost end of the intersection curve.
  • The performing of the deformation adjustment on the preliminary 3D face model includes: computing, with reference to the proportional relationships among the feature points on the 2D face image, the displacement of each corresponding point on the preliminary 3D face model relative to its feature point; constructing an interpolation function and computing, according to the interpolation function, the displacements of the points on the preliminary 3D face model other than the corresponding points; and adjusting the preliminary 3D face model according to the displacements of the corresponding and non-corresponding points.
  • Radial basis functions are used when constructing the interpolation function.
  • The performing of texture mapping on the deformed 3D face model includes: performing texture mapping using a similarity-preserving mesh parameterization method.
  • The 3D face reconstruction method provided by the embodiments of the present application first determines feature points on the acquired 2D face image, determines the pose parameters of the face from the feature points, and adjusts the pose of the universal three-dimensional face model according to the pose parameters. It then determines the corresponding points of the feature points on the universal three-dimensional face model and adjusts the corresponding points that are occluded, to obtain a preliminary 3D face model; the preliminary 3D face model is then deformed, and the deformed 3D face model is texture-mapped to obtain the final 3D face.
  • The 2D face image acquired in the present application may be a side view of a face: the pose parameters of the side-view image are determined from the feature points, and the pose of the universal three-dimensional face model is then adjusted so that it is consistent with the pose of the face, after which the 3D face is obtained through subsequent processing. Because the present application does not restrict the shooting angle of the 2D image, the method is more robust, and the accuracy of face recognition is further improved.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Computer Graphics (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Architecture (AREA)
  • Human Computer Interaction (AREA)
  • Geometry (AREA)
  • Processing Or Creating Images (AREA)
  • Image Analysis (AREA)

Abstract

A 3D human face reconstruction method, apparatus, and server. The method includes: determining feature points on an acquired 2D face image (S200); determining pose parameters of the face from the feature points, and adjusting the pose of a universal three-dimensional face model according to the pose parameters (S210); determining the corresponding points of the feature points on the universal three-dimensional face model, and adjusting corresponding points in an occluded state, to obtain a preliminary 3D face model (S220); and performing deformation adjustment on the preliminary 3D face model (S230) and texture mapping on the deformed 3D face model, to obtain the final 3D face (S240).

Description

3D human face reconstruction method, apparatus and server
This application claims priority to Chinese Patent Application No. 201510268521.X, filed with the Chinese Patent Office on May 22, 2015 and entitled "3D human face reconstruction method, apparatus and server", which is incorporated herein by reference in its entirety.
Technical Field
The present application relates to the field of computer technology, and in particular to a 3D face reconstruction method, apparatus, and server.
Background
In recent years, with the development of face recognition technology, face-based video conferencing, 3D facial animation, and virtual technologies, three-dimensional face modeling has attracted more and more attention from researchers; 3D face shape reconstruction is one of the key technologies of three-dimensional face modeling.
Most current 3D face reconstruction methods collect multiple face photos from multiple angles, or a single frontal face photo, to obtain a 3D face, and then reconstruct the 3D face from those face photos. In real scenarios, however, a frontal face image often cannot be captured; during face recognition, for example, it is quite possible that only a side view of the face is captured. In such cases, existing 3D face reconstruction methods either cannot construct a 3D face at all or construct one of extremely poor quality.
Summary
In view of this, the present application provides a 3D face reconstruction method, apparatus, and server, so that a 3D face can still be constructed from a photo containing only a side view of a face.
To achieve the above objective, the following solutions are proposed:
A 3D face reconstruction method includes:
acquiring a 2D face image for 3D face reconstruction and determining feature points on the 2D face image, the feature points being used to represent the face contour;
determining pose parameters of the face using the feature points, and adjusting the pose of a pre-acquired universal three-dimensional face model according to the pose parameters;
determining corresponding points of the feature points on the universal three-dimensional face model, and adjusting corresponding points in an occluded state, to obtain a preliminary 3D face model;
performing deformation adjustment on the preliminary 3D face model so that the positional relationships among the corresponding points on the preliminary 3D face model are consistent with the positional relationships among the feature points on the 2D face image, to obtain a deformed 3D face model;
performing texture mapping on the deformed 3D face model to obtain a 3D face.
A 3D face reconstruction apparatus includes:
an image feature point determining unit, configured to acquire a 2D face image for 3D face reconstruction and determine feature points on the 2D face image, the feature points being used to represent the face contour;
a posture adjustment unit, configured to determine pose parameters of the face using the feature points, and to adjust the pose of a pre-acquired universal three-dimensional face model according to the pose parameters;
a feature point matching unit, configured to determine corresponding points of the feature points on the universal three-dimensional face model, and to adjust corresponding points in an occluded state, to obtain a preliminary 3D face model;
a model deformation unit, configured to perform deformation adjustment on the preliminary 3D face model so that the positional relationships among the corresponding points on the preliminary 3D face model are consistent with the positional relationships among the feature points on the 2D face image, to obtain a deformed 3D face model;
a texture mapping unit, configured to perform texture mapping on the deformed 3D face model to obtain a 3D face.
A server includes the above 3D face reconstruction apparatus.
It can be seen from the above technical solutions that, in the 3D face reconstruction method provided by the embodiments of the present application, feature points are first determined on the acquired 2D face image; the pose parameters of the face are determined from the feature points, and the pose of the universal three-dimensional face model is adjusted according to the pose parameters; the corresponding points of the feature points on the universal three-dimensional face model are then determined, and the corresponding points in an occluded state are adjusted to obtain a preliminary 3D face model; the preliminary 3D face model is then deformed, and the deformed 3D face model is texture-mapped to obtain the final 3D face. The 2D face image acquired in the present application may be a side view of a face: the pose parameters of the side-view image are determined from the feature points, and the pose of the universal three-dimensional face model is then adjusted so that it is consistent with the pose of the face, after which the 3D face is obtained through subsequent processing. Because the present application does not restrict the shooting angle of the 2D image, the method is more robust, and the accuracy of face recognition is further improved.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of the present application or in the prior art more clearly, the accompanying drawings required for describing the embodiments or the prior art are briefly introduced below. Apparently, the drawings described below are merely embodiments of the present application, and a person of ordinary skill in the art may derive other drawings from the provided drawings without creative effort.
FIG. 1 is a schematic diagram of the hardware structure of a server disclosed in an embodiment of the present application;
FIG. 2 is a flowchart of a 3D face reconstruction method disclosed in an embodiment of the present application;
FIG. 3 is a schematic diagram of a way of selecting feature points on a 2D face image according to an example of an embodiment of the present application;
FIG. 4 is a schematic diagram of the rotation of a face image in a spatial rectangular coordinate system according to an example of an embodiment of the present application;
FIG. 5 is a flowchart of another 3D face reconstruction method disclosed in an embodiment of the present application;
FIG. 6 is a flowchart of yet another 3D face reconstruction method disclosed in an embodiment of the present application;
FIG. 7 is a schematic structural diagram of a 3D face reconstruction apparatus disclosed in an embodiment of the present application;
FIG. 8 is a schematic structural diagram of a feature point matching unit disclosed in an embodiment of the present application;
FIG. 9 is a schematic structural diagram of a model deformation unit disclosed in an embodiment of the present application;
FIG. 10 is a schematic diagram of a 3D face reconstruction terminal disclosed in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings in the embodiments of the present application. Apparently, the described embodiments are merely some rather than all of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the protection scope of the present application.
The 3D face reconstruction method provided by the embodiments of the present application is implemented on a server, which may be a computer, a notebook, or the like. Before introducing the 3D face reconstruction method of the present application, we first introduce the hardware structure of the server. As shown in FIG. 1, the server may include:
a processor 1, a communication interface 2, a memory 3, a communication bus 4, and a display screen 5;
where the processor 1, the communication interface 2, the memory 3, and the display screen 5 communicate with one another via the communication bus 4.
Next, the 3D face reconstruction method of the present application is introduced with reference to the server hardware structure. As shown in FIG. 2, the method includes steps S200 to S240.
Step S200: Acquire a 2D face image for 3D face reconstruction, and determine a preset number of feature points on the 2D face image.
The 2D face image can be acquired through the communication interface 2. Optionally, the communication interface 2 may be the interface of a communication module, such as the interface of a GSM module. The acquired 2D face image may be stored in the memory 3, which may include a high-speed RAM memory and may also include a non-volatile memory, for example at least one disk memory.
Optionally, the acquired 2D face image may be stored locally or downloaded from the network. The format of the image is not limited in this embodiment; it may be the JPEG format, the BMP format, or the like. The 2D face image acquired in the present application need not be a frontal face image; an image of a face that deviates from the frontal angle by some amount is also acceptable. Optionally, in one embodiment, it suffices that both eyes of the face are visible in the 2D face image.
The selected feature points may be preset points on the face, and a plurality of feature points may be used. The selected feature points are used to represent the contour of the face. As shown in FIG. 3, the eyebrows, eyes, nose, mouth, chin, and the cheeks on both sides describe a face well, so these positions, or certain points at these positions, can be used as feature points. Feature points can be determined either automatically or manually. For the former, a program implementing a suitable algorithm can automatically read the feature-point coordinates from the image; for the latter, the positions can be specified on the image by hand.
Step S210: Determine pose parameters of the face using the feature points, and adjust the pose of a pre-acquired universal three-dimensional face model according to the pose parameters.
This step may be performed by the processor 1; the code corresponding to this step may be stored in the memory 3 and is called by the processor 1 when the step is executed. The processor 1 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present application.
The processor 1 can determine the pose parameters of the face from the positional relationships among the feature points. The pose parameters can be expressed as the direction and angle of rotation of the face in a spatial rectangular coordinate system; referring to FIG. 4, the pose parameters can be formed by the angles through which the face image is rotated about the X, Y, and Z axes.
The universal three-dimensional face model is adjusted according to the determined pose parameters of the face image so that its pose remains consistent with that of the face image. Step S220: Determine the corresponding points of the feature points on the universal three-dimensional face model, and adjust corresponding points in an occluded state, to obtain a preliminary 3D face model.
In the embodiments of the present invention, the depth information of a corresponding point on the three-dimensional face model may be used as the depth information of the corresponding feature point.
This step may be performed by the processor 1; the code corresponding to this step may be stored in the memory 3 and is called by the processor 1 when the step is executed.
Because a plurality of feature points is generally used, the processor 1 obtains a series of corresponding points when the feature points on the 2D face image are marked at their corresponding locations on the universal three-dimensional face model. Since the pose of the universal three-dimensional face model was adjusted in step S210, when the 2D face image is not frontal the adjusted model is not a frontal model either, so some of the corresponding points marked on it, on one side of the face, may be occluded. Such occluded corresponding points can be translated to make them visible; after this adjustment, the preliminary 3D face model is obtained.
Step S230: Perform deformation adjustment on the preliminary 3D face model.
This step may be performed by the processor 1; the code corresponding to this step may be stored in the memory 3 and is called by the processor 1 when the step is executed.
The deformation adjustment makes the positional relationships among the corresponding points on the preliminary 3D face model consistent with the positional relationships among the feature points on the 2D face image, yielding the deformed 3D face model. For example, with the 2D face image as reference, the preliminary 3D face model is deformed to adjust attributes such as the height and the fullness or thinness of the face.
Step S240: Perform texture mapping on the deformed 3D face model to obtain a 3D face.
This step may be performed by the processor 1; the code corresponding to this step may be stored in the memory 3 and is called by the processor 1 when the step is executed.
A 3D face consists mainly of two parts: geometric structure information and texture information. Once the deformation yields the subject-specific face model, the geometric structure information of the face has been obtained; texture is then added to the model through texture mapping to obtain a realistic three-dimensional face. The finally obtained 3D face can be displayed on the display screen 5.
In the 3D face reconstruction method provided by the embodiments of the present application, a preset number of feature points is first determined on the acquired 2D face image; the pose parameters of the face are determined from the feature points, and the pose of the universal three-dimensional face model is adjusted according to the pose parameters; the corresponding points of each feature point on the universal three-dimensional face model are then determined, and the corresponding points in an occluded state are translated to obtain a preliminary 3D face model; the preliminary 3D face model is then deformed, and the deformed 3D face model is texture-mapped to obtain the final 3D face.
The 2D face image acquired in the present application may be a side view of a face: the pose parameters of the side-view image are determined from the feature points, and the pose of the universal three-dimensional face model is then adjusted so that it is consistent with the pose of the face, after which the 3D face is obtained through subsequent processing. Because the present application does not restrict the shooting angle of the 2D image, the method is more robust, and the accuracy of face recognition is further improved.
Another embodiment of the present application provides another 3D face reconstruction method; see FIG. 5.
As shown in FIG. 5, the method includes steps S500 to S560.
Step S500: Acquire a 2D face image for 3D face reconstruction, and determine feature points on the 2D face image.
Step S510: Determine pose parameters of the face using the feature points, and adjust the pose of a pre-acquired universal three-dimensional face model according to the pose parameters.
Step S520: Determine the corresponding points of the feature points on the universal three-dimensional face model and, for each corresponding point in an occluded state, determine the plane through it that is perpendicular to the vertical axis of the face.
Referring to FIG. 4, when the face rotates about the Y axis, the right or left cheek may become occluded. For an occluded corresponding point, the plane perpendicular to the Y axis in which it lies must be determined.
Step S530: Determine the intersection curve of the plane with the universal three-dimensional face model.
Based on the plane determined in the previous step, the intersection curve of that plane with the universal three-dimensional face model is determined.
Step S540: Move the corresponding point to the outermost end of the intersection curve.
An occluded corresponding point is moved, along the intersection curve determined in the previous step, to the outermost end of the curve. Here, the outermost end of the intersection curve can also be viewed as the position farthest from the Y axis when the universal three-dimensional face model is projected onto the plane.
Step S550: Perform deformation adjustment on the preliminary 3D face model.
Step S560: Perform texture mapping on the deformed 3D face model to obtain a 3D face.
This embodiment discloses a specific implementation of the translation adjustment of corresponding points in an occluded state; through the translation adjustment, the occluded corresponding points become visible.
Yet another embodiment of the present application provides yet another 3D face reconstruction method; see FIG. 6.
As shown in FIG. 6, the method includes steps S600 to S660.
Step S600: Acquire a 2D face image for 3D face reconstruction, and determine feature points on the 2D face image.
In this embodiment of the present invention, a single 2D face image suffices for 3D face reconstruction, and a plurality of feature points may be selected on it.
Step S610: Determine pose parameters of the face using the feature points, and adjust the pose of a pre-acquired universal three-dimensional face model according to the pose parameters.
Step S620: Determine the corresponding points of the feature points on the universal three-dimensional face model, and adjust corresponding points in an occluded state, to obtain a preliminary 3D face model.
Step S630: With reference to the proportional relationships among the feature points on the 2D face image, compute the displacement of each corresponding point on the preliminary 3D face model relative to its feature point.
For ease of understanding, a concrete example is given. Suppose there are four points A, B, C, and D on the 2D face image: A lies midway between the eyebrows, B on the tip of the nose, C in the middle of the upper lip, and D on the chin. On the preliminary 3D face model, the points corresponding to A, B, C, and D are E, F, G, and H, respectively. Take the distance L_AB between A and B as the yardstick, and suppose the C-D distance is 0.3 L_AB. Suppose it is further determined that on the preliminary 3D face model the E-F distance is 0.9 L_AB and the G-H distance is 0.4 L_AB. It follows that the E-F distance must increase by 0.1 L_AB and the G-H distance must decrease by 0.1 L_AB.
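A quick check of this arithmetic in Python (the quantities are the ratios from the example above; only the ratios matter, so L_AB is taken as the unit length):

    # Check of the displacement example: E<->A, F<->B, G<->C, H<->D.
    L_AB = 1.0                 # eyebrow-midpoint to nose-tip distance (the unit)
    L_CD_image = 0.3 * L_AB    # upper-lip to chin distance on the 2D image

    L_EF_model = 0.9 * L_AB    # model distance corresponding to AB
    L_GH_model = 0.4 * L_AB    # model distance corresponding to CD

    # The model's distances must match the image's: EF should equal L_AB,
    # and GH should equal L_CD_image.
    delta_EF = L_AB - L_EF_model        # +0.1 * L_AB: EF must increase
    delta_GH = L_CD_image - L_GH_model  # -0.1 * L_AB: GH must decrease
    print(delta_EF, delta_GH)           # approximately 0.1 and -0.1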
Step S640: Construct an interpolation function, and use it to compute the displacements of the points on the preliminary 3D face model other than the corresponding points.
When constructing the interpolation function, radial basis functions may be selected; the constructed interpolation function then yields the displacement of every point on the preliminary 3D face model other than the corresponding points.
Step S650: Adjust the preliminary 3D face model according to the displacements of the corresponding and non-corresponding points.
Steps S630 and S640 yield the displacement of every point on the preliminary 3D face model; the face model is then deformed according to these displacements to obtain the deformed 3D face model.
Step S660: Perform texture mapping on the deformed 3D face model to obtain a 3D face.
This embodiment introduces a specific implementation of deforming the preliminary 3D face model; deforming the preliminary 3D face model brings the face model closer to the actual face.
In addition, it should be noted that the above texture mapping of the deformed 3D face model may specifically be performed using a similarity-preserving mesh parameterization method.
The similarity-preserving mesh parameterization method starts from the edge-and-angle relationships of the triangles of the mesh. Using the similarity criterion that two triangles are similar if the ratios of adjacent side lengths and the included angles are equal, it reproduces on the plane the adjacent-edge length ratios and included angles of the corresponding triangles of the three-dimensional mesh, thereby establishing a global system of linear equations; solving this system yields the parameterized two-dimensional planar mesh, and hence the mapping between the vertices of the model and the triangle vertices in the two-dimensional plane. The method is simple and fast to compute, and the triangle distortion after parameterization is small, so it can produce a good texture mapping result.
The present application can determine the pose parameters of a single acquired 2D face image from its feature points and then adjust the pose of the universal three-dimensional face model so that it is consistent with the pose of the face, after which the 3D face is obtained through subsequent processing. Because the present application can reconstruct a 3D face from a single 2D image, the speed of 3D face reconstruction is further improved.
The 3D face reconstruction apparatus provided by the embodiments of the present application is described below; the 3D face reconstruction apparatus described below and the 3D face reconstruction method described above may be cross-referenced with each other. The 3D face reconstruction apparatus provided by this application can be applied to the server shown in FIG. 1.
Referring to FIG. 7, FIG. 7 is a schematic structural diagram of a 3D face reconstruction apparatus disclosed in an embodiment of the present application.
As shown in FIG. 7, the apparatus includes:
an image feature point determining unit 71, configured to acquire a 2D face image for 3D face reconstruction and determine feature points on the 2D face image, the feature points being used to represent the face contour;
a posture adjustment unit 72, configured to determine pose parameters of the face using the feature points, and to adjust the pose of a pre-acquired universal three-dimensional face model according to the pose parameters;
a feature point matching unit 73, configured to determine corresponding points of the feature points on the universal three-dimensional face model, and to adjust corresponding points in an occluded state, to obtain a preliminary 3D face model;
a model deformation unit 74, configured to perform deformation adjustment on the preliminary 3D face model so that the positional relationships among the corresponding points on the preliminary 3D face model are consistent with the positional relationships among the feature points on the 2D face image, to obtain a deformed 3D face model;
a texture mapping unit 75, configured to perform texture mapping on the deformed 3D face model to obtain a 3D face.
Optionally, this embodiment of the present application further discloses an optional structure of the above feature point matching unit 73. As shown in FIG. 8, the feature point matching unit 73 may include:
a plane determining unit 731, configured to determine, for a corresponding point in an occluded state, the plane through it that is perpendicular to the vertical axis of the face;
a trajectory determining unit 732, configured to determine the intersection curve of the plane with the universal three-dimensional face model;
a corresponding point translation unit 733, configured to move the corresponding point to the outermost end of the intersection curve.
Optionally, this embodiment of the present application further discloses an optional structure of the above model deformation unit 74. As shown in FIG. 9, the model deformation unit 74 may include:
a first displacement calculation unit 741, configured to compute, with reference to the proportional relationships among the feature points on the 2D face image, the displacement of each corresponding point on the preliminary 3D face model relative to its feature point;
a second displacement calculation unit 742, configured to construct an interpolation function and to compute, according to the interpolation function, the displacements of the points on the preliminary 3D face model other than the corresponding points;
specifically, radial basis functions may be selected when constructing the interpolation function;
a displacement adjustment unit 743, configured to adjust the preliminary 3D face model according to the displacements of the corresponding and non-corresponding points.
Optionally, the above texture mapping unit 75 may include: a first texture mapping subunit, configured to perform texture mapping using a similarity-preserving mesh parameterization method.
In the 3D face reconstruction apparatus provided by the embodiments of the present application, feature points are first determined on the acquired 2D face image; the pose parameters of the face are determined from the feature points, and the pose of the universal three-dimensional face model is adjusted according to the pose parameters; the corresponding points of each feature point on the universal three-dimensional face model are then determined, and the corresponding points in an occluded state are adjusted to obtain a preliminary 3D face model; the preliminary 3D face model is then deformed, and the deformed 3D face model is texture-mapped to obtain the final 3D face. The 2D face image acquired in the present application may be a side view of a face: the pose parameters of the side-view image are determined from the feature points, and the pose of the universal three-dimensional face model is then adjusted so that it is consistent with the pose of the face, after which the 3D face is obtained through subsequent processing. Because the present application does not restrict the shooting angle of the 2D image, the method is more robust, and the accuracy of face recognition is further improved.
Referring to FIG. 10, another embodiment of the present application provides a terminal 600, which may include a communication unit 610, a memory 620 including one or more non-volatile readable storage media, an input unit 630, a display unit 640, a sensor 650, an audio circuit 660, a WiFi (wireless fidelity) module 670, a processor 680 including one or more processing cores, a power supply 690, and other components.
Those skilled in the art will appreciate that the terminal structure shown in FIG. 10 does not constitute a limitation on the terminal; the terminal may include more or fewer components than illustrated, combine certain components, or use a different arrangement of components. Specifically:
The communication unit 610 can be used to receive and send signals in the course of sending and receiving information or during a call; the communication unit 610 may be an RF (Radio Frequency) circuit, a router, a modem, or another network communication device. In particular, when the communication unit 610 is an RF circuit, it receives downlink information from a base station and hands it to one or more processors 680 for processing, and it sends uplink data to the base station. Generally, an RF circuit serving as the communication unit includes, but is not limited to, an antenna, at least one amplifier, a tuner, one or more oscillators, a Subscriber Identity Module (SIM) card, a transceiver, a coupler, an LNA (Low Noise Amplifier), a duplexer, and so on. In addition, the communication unit 610 can also communicate with networks and other devices through wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to GSM (Global System of Mobile communication), GPRS (General Packet Radio Service), CDMA (Code Division Multiple Access), WCDMA (Wideband Code Division Multiple Access), LTE (Long Term Evolution), e-mail, SMS (Short Messaging Service), and the like. The memory 620 can be used to store software programs and modules; the processor 680 executes various functional applications and data processing by running the software programs and modules stored in the memory 620. The memory 620 may mainly include a program storage area and a data storage area, where the program storage area can store an operating system and the applications required by at least one function (such as a sound playback function or an image playback function), and the data storage area can store data created through use of the terminal 600 (such as audio data and a phone book). In addition, the memory 620 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another solid-state storage device. Accordingly, the memory 620 may also include a memory controller to provide the processor 680 and the input unit 630 with access to the memory 620.
The input unit 630 can be used to receive entered numeric or character information and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control. Optionally, the input unit 630 may include a touch-sensitive surface 630a and other input devices 630b. The touch-sensitive surface 630a, also referred to as a touch display screen or a trackpad, can collect touch operations performed by the user on or near it (for example, operations performed on or near the touch-sensitive surface 630a with a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connection apparatus according to a preset program. Optionally, the touch-sensitive surface 630a may include two parts: a touch detection apparatus and a touch controller. The touch detection apparatus detects the position touched by the user, detects the signal produced by the touch operation, and passes the signal to the touch controller; the touch controller receives the touch information from the touch detection apparatus, converts it into contact coordinates, sends them to the processor 680, and can receive and execute commands sent by the processor 680. In addition, the touch-sensitive surface 630a can be implemented in multiple types, such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch-sensitive surface 630a, the input unit 630 may also include other input devices 630b. Optionally, the other input devices 630b may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and a power key), a trackball, a mouse, a joystick, and the like.
The display unit 640 can be used to display information entered by the user or information provided to the user, as well as the various graphical user interfaces of the terminal 600; these graphical user interfaces may be composed of graphics, text, icons, video, and any combination thereof. The display unit 640 may include a display panel 640a, which may optionally be configured in the form of an LCD (Liquid Crystal Display), an OLED (Organic Light-Emitting Diode), or the like. Further, the touch-sensitive surface 630a may cover the display panel 640a; when the touch-sensitive surface 630a detects a touch operation on or near it, the operation is passed to the processor 680 to determine the type of the touch event, and the processor 680 then provides corresponding visual output on the display panel 640a according to the type of the touch event. Although in FIG. 10 the touch-sensitive surface 630a and the display panel 640a implement the input and output functions as two separate components, in some embodiments the touch-sensitive surface 630a may be integrated with the display panel 640a to implement the input and output functions.
The terminal 600 may also include at least one kind of sensor 650, such as a light sensor, a motion sensor, and other sensors. Optionally, the light sensor may include an ambient light sensor and a proximity sensor; the ambient light sensor can adjust the brightness of the display panel 640a according to the brightness of the ambient light, and the proximity sensor can turn off the display panel 640a and/or the backlight when the terminal 600 is moved to the ear. As one kind of motion sensor, a gravity acceleration sensor can detect the magnitude of acceleration in each direction (generally along three axes) and, when stationary, can detect the magnitude and direction of gravity; it can be used in applications that recognize the phone's attitude (such as portrait/landscape switching, related games, and magnetometer attitude calibration) and in vibration-recognition-related functions (such as a pedometer or tap detection). A gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, and other sensors may also be configured on the terminal 600, and are not described further here.
The audio circuit 660, the speaker 660a, and the microphone 660b can provide an audio interface between the user and the terminal 600. The audio circuit 660 can transmit the electrical signal converted from received audio data to the speaker 660a, which converts it into a sound signal for output; conversely, the microphone 660b converts collected sound signals into electrical signals, which the audio circuit 660 receives and converts into audio data. After the audio data is output to the processor 680 for processing, it is sent through the RF circuit 610 to, for example, another terminal, or it is output to the memory 620 for further processing. The audio circuit 660 may also include an earphone jack to allow a peripheral earphone to communicate with the terminal 600.
To implement wireless communication, the terminal may be configured with a wireless communication unit 670, which may be a WiFi module. WiFi is a short-range wireless transmission technology; through the wireless communication unit 670, the terminal 600 can help the user send and receive e-mail, browse web pages, access streaming media, and so on, providing the user with wireless broadband Internet access. Although FIG. 10 shows the wireless communication unit 670, it can be understood that it is not an essential part of the terminal 600 and may be omitted as needed without changing the essence of the invention.
The processor 680 is the control center of the terminal 600. It connects the various parts of the entire handset using various interfaces and lines, and performs the various functions of the terminal 600 and processes data by running or executing the software programs and/or modules stored in the memory 620 and invoking the data stored in the memory 620, thereby monitoring the handset as a whole. Optionally, the processor 680 may include one or more processing cores; preferably, the processor 680 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interfaces, applications, and the like, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may also not be integrated into the processor 680.
The terminal 600 also includes a power supply 690 (such as a battery) that supplies power to the various components. Preferably, the power supply may be logically connected to the processor 680 through a power management system, so that functions such as charging, discharging, and power-consumption management are implemented through the power management system. The power supply 690 may also include any one or more of a DC or AC power source, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and other components.
Although not shown, the terminal 600 may also include a camera, a Bluetooth module, and the like, which are not described further here.
获取用于进行3D人脸重建的2D人脸图像,并确定所述2D人脸图像上的特征点,所述特征点用于表征人脸轮廓;
利用所述特征点确定人脸的姿态参数,并按照所述姿态参数调整预先获取的通用三维人脸模型的姿态;
确定所述特征点在所述通用三维人脸模型上的对应点,并对处于遮挡状态下的对应点进行调整,得到初步3D人脸模型;
对所述初步3D人脸模型进行变形调整,以使所述初步3D人脸模型上对应点间位置关系与所述2D人脸图像上特征点间的位置关系一致,得到变形后3D人脸模型;
对所述变形后3D人脸模型进行纹理映射,得到3D人脸。
其中,所述对处于遮挡状态下的对应点进行调整,得到初步3D人脸模型,包括:
对于处于遮挡状态下的对应点,确定其所在的垂直于人脸纵轴的平面;
确定所述平面与所述通用三维人脸模型的相交轨迹;
将该对应点移至所述相交轨迹的最外端。
其中,所述对所述初步3D人脸模型进行变形调整,包括:
参考所述2D人脸图像上特征点间的比例关系,计算初步3D人脸模型上各对应点相比于所述特征点的位移量;
构造插值函数,并依据所述插值函数计算初步3D人脸模型上非对应点的 其它点的位移量;
按照各对应点及非对应点的位移量,调整所述初步3D人脸模型。
其中,在构造插值函数时采用径向基函数进行插值函数的构造。
其中,所述对所述变形后3D人脸模型进行纹理映射,包括:
采用保相似的网格参数化方法进行纹理映射。
本申请实施例提供的3D人脸重建方法,对于获取的2D人脸图像,首先确定出其上的特征点,依据特征点确定人脸的姿态参数,并按照姿态参数调整通用三维人脸模型的姿态,然后确定出特征点在通用三维人脸模型上的对应点,并对处于遮挡状态下的对应点进行调整,以得到初步3D人脸模型,接着对初步3D人脸模型进行变形调整,并对变形后的3D人脸模型进行纹理映射,得到最终的3D人脸。本申请获取的2D人脸图像可以是人的侧脸图像,依据特征点确定出该侧脸图像的姿态参数,进而对通用三维人脸模型进行姿态调整,使得通用三维人脸模型的姿态与人脸姿态一致,进而通过后续处理得到3D人脸。由于本申请不限定2D图像的拍摄角度,因而其鲁棒性更高,人脸识别的准确度也进一步得到提高。
Finally, it should also be noted that, in this document, relational terms such as first and second are used only to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise", and any other variants thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or device that includes a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. Unless otherwise restricted, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or device that includes the element.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on what distinguishes it from the other embodiments, and for the parts that the embodiments have in common, reference may be made between them.
The above description of the disclosed embodiments enables a person skilled in the art to implement or use the present application. Various modifications to these embodiments will be apparent to a person skilled in the art, and the general principles defined herein may be implemented in other embodiments without departing from the spirit or scope of the present application. Therefore, the present application is not limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (12)

  1. A 3D face reconstruction method, comprising:
    acquiring a 2D face image for 3D face reconstruction and determining feature points on the 2D face image, the feature points being used to represent the face contour;
    determining pose parameters of the face using the feature points, and adjusting the pose of a pre-acquired universal three-dimensional face model according to the pose parameters;
    determining corresponding points of the feature points on the universal three-dimensional face model, and adjusting corresponding points in an occluded state, to obtain a preliminary 3D face model;
    performing deformation adjustment on the preliminary 3D face model so that the positional relationships among the corresponding points on the preliminary 3D face model are consistent with the positional relationships among the feature points on the 2D face image, to obtain a deformed 3D face model;
    performing texture mapping on the deformed 3D face model to obtain a 3D face.
  2. The method according to claim 1, wherein the adjusting of the corresponding points in an occluded state to obtain the preliminary 3D face model comprises:
    determining, for a corresponding point in an occluded state, the plane through it that is perpendicular to the vertical axis of the face;
    determining the intersection curve of the plane with the universal three-dimensional face model;
    moving the corresponding point to the outermost end of the intersection curve.
  3. The method according to claim 1, wherein the performing of the deformation adjustment on the preliminary 3D face model comprises:
    computing, with reference to the proportional relationships among the feature points on the 2D face image, the displacement of each corresponding point on the preliminary 3D face model relative to its feature point;
    constructing an interpolation function, and computing, according to the interpolation function, the displacements of the points on the preliminary 3D face model other than the corresponding points;
    adjusting the preliminary 3D face model according to the displacements of the corresponding and non-corresponding points.
  4. The method according to claim 3, wherein radial basis functions are used to construct the interpolation function.
  5. The method according to claim 1, wherein the performing of texture mapping on the deformed 3D face model comprises:
    performing texture mapping using a similarity-preserving mesh parameterization method.
  6. A 3D face reconstruction apparatus, comprising:
    an image feature point determining unit, configured to acquire a 2D face image for 3D face reconstruction and determine feature points on the 2D face image, the feature points being used to represent the face contour;
    a posture adjustment unit, configured to determine pose parameters of the face using the feature points, and to adjust the pose of a pre-acquired universal three-dimensional face model according to the pose parameters;
    a feature point matching unit, configured to determine corresponding points of the feature points on the universal three-dimensional face model, and to adjust corresponding points in an occluded state, to obtain a preliminary 3D face model;
    a model deformation unit, configured to perform deformation adjustment on the preliminary 3D face model so that the positional relationships among the corresponding points on the preliminary 3D face model are consistent with the positional relationships among the feature points on the 2D face image, to obtain a deformed 3D face model;
    a texture mapping unit, configured to perform texture mapping on the deformed 3D face model to obtain a 3D face.
  7. The apparatus according to claim 6, wherein the feature point matching unit comprises:
    a plane determining unit, configured to determine, for a corresponding point in an occluded state, the plane through it that is perpendicular to the vertical axis of the face;
    a trajectory determining unit, configured to determine the intersection curve of the plane with the universal three-dimensional face model;
    a corresponding point translation unit, configured to move the corresponding point to the outermost end of the intersection curve.
  8. The apparatus according to claim 6, wherein the model deformation unit comprises:
    a first displacement calculation unit, configured to compute, with reference to the proportional relationships among the feature points on the 2D face image, the displacement of each corresponding point on the preliminary 3D face model relative to its feature point;
    a second displacement calculation unit, configured to construct an interpolation function and to compute, according to the interpolation function, the displacements of the points on the preliminary 3D face model other than the corresponding points;
    a displacement adjustment unit, configured to adjust the preliminary 3D face model according to the displacements of the corresponding and non-corresponding points.
  9. The apparatus according to claim 6, wherein the texture mapping unit comprises:
    a first texture mapping subunit, configured to perform texture mapping using a similarity-preserving mesh parameterization method.
  10. A server, comprising the 3D face reconstruction apparatus according to any one of claims 6 to 9.
  11. A 3D face reconstruction apparatus, comprising:
    one or more processors, configured to execute program instructions stored on a storage medium, to cause the 3D face reconstruction apparatus to perform the method according to any one of claims 1 to 5.
  12. A non-transitory computer-readable storage medium, comprising program instructions that, when run by a processor of a computing apparatus, configure the apparatus to perform the method according to any one of claims 1 to 5.
PCT/CN2016/081452 2015-05-22 2016-05-09 3D human face reconstruction method, apparatus and server WO2016188318A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US15/652,009 US10055879B2 (en) 2015-05-22 2017-07-17 3D human face reconstruction method, apparatus and server

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510268521.XA 2015-05-22 2015-05-22 3D human face reconstruction method, apparatus and server
CN201510268521.X 2015-05-22

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US15/652,009 Continuation US10055879B2 (en) 2015-05-22 2017-07-17 3D human face reconstruction method, apparatus and server

Publications (1)

Publication Number Publication Date
WO2016188318A1 (zh)

Family

ID=54220347

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/081452 WO2016188318A1 (zh) 2015-05-22 2016-05-09 一种3d人脸重建方法、装置及服务器

Country Status (3)

Country Link
US (1) US10055879B2 (zh)
CN (1) CN104966316B (zh)
WO (1) WO2016188318A1 (zh)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101404091A (zh) * 2008-11-07 2009-04-08 重庆邮电大学 Three-dimensional face reconstruction method and system based on two-step shape modeling
US7876931B2 * 2001-12-17 2011-01-25 Technest Holdings, Inc. Face recognition system and method
CN102054291A (zh) * 2009-11-04 2011-05-11 厦门市美亚柏科信息股份有限公司 Method and apparatus for three-dimensional face reconstruction based on a single face image
CN102081733A (zh) * 2011-01-13 2011-06-01 西北工业大学 Multi-pose three-dimensional facial landmark localization method combining multimodal information
CN104036546A (zh) * 2014-06-30 2014-09-10 清华大学 Arbitrary-view-angle three-dimensional face reconstruction method based on an adaptive deformable model
CN104966316A (zh) * 2015-05-22 2015-10-07 腾讯科技(深圳)有限公司 3D human face reconstruction method, apparatus and server

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6806898B1 (en) * 2000-03-20 2004-10-19 Microsoft Corp. System and method for automatically adjusting gaze and head orientation for video conferencing
JP3954484B2 (ja) * 2002-12-12 2007-08-08 株式会社東芝 Image processing apparatus and program
US8090160B2 (en) * 2007-10-12 2012-01-03 The University Of Houston System Automated method for human face modeling and relighting with application to face recognition
US8860795B2 (en) * 2008-10-28 2014-10-14 Nec Corporation Masquerading detection system, masquerading detection method, and computer-readable storage medium
US8861800B2 (en) * 2010-07-19 2014-10-14 Carnegie Mellon University Rapid 3D face reconstruction from a 2D image and methods using such rapid 3D face reconstruction
US8447099B2 (en) * 2011-01-11 2013-05-21 Eastman Kodak Company Forming 3D models using two images
CN103765479A (zh) * 2011-08-09 2014-04-30 英特尔公司 Image-based multi-view 3D face generation
CN103116902A (zh) * 2011-11-16 2013-05-22 华为软件技术有限公司 Three-dimensional virtual human head image generation method, and head image motion tracking method and apparatus
US9286715B2 (en) * 2012-05-23 2016-03-15 Glasses.Com Inc. Systems and methods for adjusting a virtual try-on

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109151540A (zh) * 2017-06-28 2019-01-04 武汉斗鱼网络科技有限公司 Interactive processing method and apparatus for video images
CN109151540B (zh) * 2017-06-28 2021-11-09 武汉斗鱼网络科技有限公司 Interactive processing method and apparatus for video images
CN107958439B (zh) * 2017-11-09 2021-04-27 北京小米移动软件有限公司 Image processing method and apparatus
CN107958439A (zh) * 2017-11-09 2018-04-24 北京小米移动软件有限公司 Image processing method and apparatus
CN109615688A (zh) * 2018-10-23 2019-04-12 杭州趣维科技有限公司 Real-time three-dimensional face reconstruction system and method on a mobile device
CN112070681B (zh) * 2019-05-24 2024-02-13 北京小米移动软件有限公司 Image processing method and apparatus
CN112070681A (zh) * 2019-05-24 2020-12-11 北京小米移动软件有限公司 Image processing method and apparatus
CN110796083B (zh) * 2019-10-29 2023-07-04 腾讯科技(深圳)有限公司 Image display method, apparatus, terminal, and storage medium
CN110796083A (zh) * 2019-10-29 2020-02-14 腾讯科技(深圳)有限公司 Image display method, apparatus, terminal, and storage medium
CN111402401A (zh) * 2020-03-13 2020-07-10 北京华捷艾米科技有限公司 Method for acquiring 3D face data, and face recognition method and apparatus
CN111402401B (zh) * 2020-03-13 2023-08-18 北京华捷艾米科技有限公司 Method for acquiring 3D face data, and face recognition method and apparatus
CN112530003A (zh) * 2020-12-11 2021-03-19 北京奇艺世纪科技有限公司 Three-dimensional human hand reconstruction method and apparatus, and electronic device
CN112530003B (zh) * 2020-12-11 2023-10-27 北京奇艺世纪科技有限公司 Three-dimensional human hand reconstruction method and apparatus, and electronic device

Also Published As

Publication number Publication date
US10055879B2 (en) 2018-08-21
CN104966316A (zh) 2015-10-07
US20170316598A1 (en) 2017-11-02
CN104966316B (zh) 2019-03-15

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16799207

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 11.04.2018)

122 Ep: pct application non-entry in european phase

Ref document number: 16799207

Country of ref document: EP

Kind code of ref document: A1