WO2020019665A1 - Method, apparatus and electronic device for generating face-based three-dimensional special effects

Method, apparatus and electronic device for generating face-based three-dimensional special effects

Info

Publication number: WO2020019665A1 (application PCT/CN2018/123641, CN2018123641W)
Authority: WIPO (PCT)
Prior art keywords: special effect, dimensional, face image, generating, parameters
Application number: PCT/CN2018/123641
Other languages: English (en), French (fr)
Inventors: 王晶, 林鑫
Original Assignee: 北京微播视界科技有限公司
Application filed by 北京微播视界科技有限公司
Priority to GB2100222.5A (publication GB2589505B), JP2020571659A (publication JP7168694B2), and US16/967,962 (publication US11276238B2)
Publication of WO2020019665A1

Classifications

    • G06T19/006: Mixed reality
    • G06T15/005: General purpose rendering architectures
    • G06T15/04: Texture mapping
    • G06T19/20: Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06T5/77: Retouching; Inpainting; Scratch removal
    • G06T7/74: Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • G06V10/141: Control of illumination
    • G06V20/64: Three-dimensional objects
    • G06V40/161: Human faces: Detection; Localisation; Normalisation
    • G06V40/171: Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G06T2207/10004: Still image; Photographic image
    • G06T2207/10012: Stereo images
    • G06T2207/30201: Human being; Face
    • G06T2219/2012: Colour editing, changing, or manipulating; Use of colour codes

Definitions

  • the present disclosure relates to the field of image technology, and in particular, to a method, an apparatus, a hardware device, and a computer-readable storage medium for generating a three-dimensional special effect based on a human face.
  • With the development of computer technology, the range of applications of smart terminals has expanded widely; for example, smart terminals can be used to listen to music, play games, chat online, and take photos.
  • For photography on smart terminals, camera resolution now exceeds 10 million pixels, offering high definition and image quality comparable to professional cameras.
  • However, current special effect production apps come with a set of pre-made effects that cannot be flexibly edited, and the effects can only be placed at a fixed position in the image.
  • According to one aspect, the present disclosure provides a method for generating a 3D special effect based on a face, including: displaying a standard face image; creating a 3D special effect, where the 3D special effect is created by configuring at least a 3D model and the material of the 3D model, and the 3D special effect is located on the standard face image; generating special effect parameters according to the 3D special effect; obtaining a first face image recognized from an image sensor; and generating the 3D special effect on the first face image according to the special effect parameters.
  • the configuring the three-dimensional model includes: acquiring the three-dimensional model and displaying it on a standard face image; and configuring a position and a size of the three-dimensional model.
  • Optionally, configuring the material of the three-dimensional model includes: creating the material and adjusting the parameters of the material; the parameters of the material include one or more of a rendering blend mode, whether to enable depth testing, whether to enable depth writing, and whether to enable culling.
  • the creating a three-dimensional special effect further includes: configuring a light, the light including one of a point light source, a parallel light source, and a spotlight light source.
  • the configuring the light includes: configuring the position, direction, color, and intensity of the light.
  • the creating a three-dimensional special effect further includes: configuring a texture of the three-dimensional model, including: obtaining a map of the texture; and configuring a texture wrapping mode.
  • Optionally, the standard face image includes facial feature points, and after the standard face image is displayed the method includes: receiving a reference point selection command and selecting at least one feature point as a reference point; generating the three-dimensional special effect on the first face image according to the special effect parameters then includes generating the three-dimensional special effect on the first face image according to the special effect parameters and the reference point.
  • According to another aspect, the present disclosure provides a device for generating a three-dimensional special effect based on a face, including: a display module for displaying a standard face image; a three-dimensional special effect creation module for creating a three-dimensional special effect, the three-dimensional special effect being created by configuring at least a three-dimensional model and the material of the three-dimensional model; a special effect parameter generation module for generating special effect parameters according to the three-dimensional special effect; a face image acquisition module for acquiring a first face image recognized from an image sensor; and a three-dimensional special effect generation module configured to generate the three-dimensional special effect on the first face image according to the special effect parameters.
  • configuring the three-dimensional model includes: obtaining the three-dimensional model and displaying it on a standard face image; and configuring a position and a size of the three-dimensional model.
  • Optionally, configuring the material of the three-dimensional model includes: creating the material and adjusting its parameters; the parameters of the material include one or more of a rendering blend mode, whether to enable depth testing, whether to enable depth writing, and whether to enable culling.
  • the three-dimensional special effect creation module is further configured to configure a light, the light including one of a point light source, a parallel light source, and a spotlight light source.
  • the configuring the light includes: configuring the position, direction, color, and intensity of the light.
  • the three-dimensional special effect creation module is further configured to: configure a texture of the three-dimensional model, including: obtaining a map of the texture; and configuring a texture wrapping mode.
  • Optionally, the standard face image includes face feature points, and the face-based 3D special effect generation device further includes a reference point selection module for receiving a reference point selection command and selecting at least one feature point as a reference point; the three-dimensional special effect generation module is then configured to generate the three-dimensional special effect on the first face image according to the special effect parameters and the reference point.
  • According to yet another aspect, the present disclosure provides an electronic device including: a memory for storing non-transitory computer-readable instructions; and a processor for running the computer-readable instructions such that, when they are executed, the processor implements the steps described in any of the above methods.
  • According to yet another aspect, the present disclosure provides a computer-readable storage medium for storing non-transitory computer-readable instructions which, when executed by a computer, cause the computer to perform the steps described in any of the above methods.
  • Embodiments of the present disclosure provide a method, an apparatus, an electronic device, and a computer-readable storage medium for generating a three-dimensional special effect based on a human face.
  • The face-based 3D special effect generation method includes: displaying a standard face image; creating a 3D special effect, where the 3D special effect is created by configuring at least a 3D model and the material of the 3D model, and the 3D special effect is located on the standard face image; generating special effect parameters according to the 3D special effect; obtaining a first face image recognized from an image sensor; and generating the 3D special effect on the first face image according to the special effect parameters.
  • Through the 3D special effect creation operation, the user can conveniently configure and edit the 3D special effect, and 3D special effect parameters can be generated so that a 3D special effect generation algorithm can use them to render the 3D special effect on face images collected in real time.
  • FIG. 1 is a schematic flowchart of a face-based 3D special effect generating method according to an embodiment of the present disclosure
  • FIG. 2 is a schematic flowchart of a method for generating a 3D special effect based on a human face according to another embodiment of the present disclosure
  • FIG. 3 is a schematic structural diagram of a face-based 3D special effect generating device according to an embodiment of the present disclosure
  • FIG. 4 is a schematic structural diagram of a face-based 3D special effect generating device according to another embodiment of the present disclosure.
  • FIG. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
  • FIG. 6 is a schematic structural diagram of a computer-readable storage medium according to an embodiment of the present disclosure.
  • FIG. 7 is a schematic structural diagram of a face-based 3D special effect generating terminal according to an embodiment of the present disclosure.
  • an embodiment of the present disclosure provides a method for generating a three-dimensional special effect based on a human face.
  • As shown in FIG. 1, the face-based 3D special effect generation method mainly includes the following steps S1 to S5:
  • Step S1 Display a standard face image.
  • a standard face image is displayed on the display device.
  • the standard face image is a preset face image.
  • the standard face image is a front face image.
  • the standard face image may have preset feature points, in which the number of feature points can be set, and the user can freely set the required number of feature points.
  • the feature points of an image are points in the image that have distinctive characteristics and can effectively reflect the essential characteristics of the image and can identify the target object in the image. If the target object is a human face, then the key points of the face need to be obtained. If the target image is a house, then the key points of the house need to be obtained. Take the human face as an example to illustrate how to obtain key points.
  • The face contour mainly includes five parts: eyebrows, eyes, nose, mouth, and cheeks, and sometimes also the pupils and nostrils. Generally, a reasonably complete description of the face contour requires about 60 key points. If only the basic structure needs to be described, the details of each part or the cheeks can be omitted and the number of key points reduced accordingly; if the pupils, nostrils, or more detailed facial features are needed, the number of key points can be increased.
  • Extracting face key points from an image is equivalent to finding the position coordinates of each face contour key point in the face image, that is, key point positioning. This process is performed based on the features corresponding to the key points: after obtaining image features that clearly identify a key point, a search and comparison is carried out in the image based on those features to accurately locate the key point's position. Because a feature point occupies only a very small area in the image (usually a few to a few dozen pixels), the region occupied by its corresponding features is also typically very limited and local. Two feature extraction approaches are currently used: (1) one-dimensional range image feature extraction along a direction perpendicular to the contour; (2) two-dimensional range image feature extraction over a square neighborhood of the feature point. There are many ways to implement these approaches, such as ASM and AAM methods, statistical energy function methods, regression analysis methods, deep learning methods, classifier methods, batch extraction methods, and so on. The number of key points, accuracy, and speed of these implementations differ, making them suitable for different application scenarios.
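
As a hedged illustration of key point positioning, the sketch below uses the open-source dlib library and its publicly distributed 68-point landmark model; the choice of dlib, the model file name, and the 68-point scheme (rather than the 106-point layout mentioned later) are assumptions made only for this example.

```python
# Sketch: locating face contour key points with dlib's pre-trained 68-point model.
# Assumes dlib is installed and "shape_predictor_68_face_landmarks.dat" has been
# downloaded separately; the image path is illustrative.
import dlib

detector = dlib.get_frontal_face_detector()     # HOG-based frontal face detector
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

image = dlib.load_rgb_image("face.jpg")
faces = detector(image, 1)                      # upsample once to catch smaller faces

for face_rect in faces:
    shape = predictor(image, face_rect)         # key point positioning on this face
    points = [(shape.part(i).x, shape.part(i).y) for i in range(shape.num_parts)]
    print(f"found {len(points)} key points, first point: {points[0]}")
```
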
  • Step S2 Create a three-dimensional special effect, which is created by configuring at least a three-dimensional model and a material of the three-dimensional model, and the three-dimensional special effect is located on the standard face image.
  • a three-dimensional special effect to be used is created.
  • the three-dimensional special effect is a 3D sticker.
  • When creating the three-dimensional special effect, a three-dimensional model made with third-party software can be imported; the imported 3D model does not carry any color, depth, material, or texture information. After the model is imported, its parameters can be configured, such as its position on the screen or on the standard face image, its size, and its rotation angle. The size of the 3D model can be set with the model's drag box, or by directly entering the length, width, and height of the drag box; the rotation angle includes the rotation angles about the x-, y-, and z-axes and can be set by dragging the drag box or by entering the angles directly.
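
As a minimal sketch of how the placement parameters described above (position on the standard face image, drag-box size, and per-axis rotation) might be held together, the container below uses hypothetical class and field names; the disclosure does not prescribe any particular data layout.

```python
# Sketch: a container for the 3D model placement configured in step S2.
# All names and default values are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class ModelTransform:
    position: tuple = (0.0, 0.0, 0.0)   # location on the standard face image (x, y, z)
    box_size: tuple = (1.0, 1.0, 1.0)   # length, width, height of the drag box
    rotation: tuple = (0.0, 0.0, 0.0)   # rotation angles about the x-, y- and z-axes, in degrees

@dataclass
class Model3D:
    mesh_path: str                      # model imported from third-party software
    transform: ModelTransform = field(default_factory=ModelTransform)

glasses = Model3D(mesh_path="glasses.obj",
                  transform=ModelTransform(position=(0.0, 0.2, 0.0),
                                           rotation=(0.0, 15.0, 0.0)))
print(glasses)
```
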
  • The parameters of the material include one or more of the rendering blend mode, whether depth testing is enabled, whether depth writing is enabled, and whether culling is enabled. The material parameters also include the reflectivity of the object surface to the RGB components of colored light incident on it, specifically the degree to which the ambient, diffuse, specular, and emissive (self-luminous) components of different light types and different color components are reflected.
  • Rendering blending refers to mixing two colors together; in the present disclosure it specifically refers to blending the color already at a pixel position with the color about to be drawn there, so as to achieve a special effect, and the rendering blend mode refers to the method used for the mixing. In general, blending computes the mixed color from the source color and the destination color; in practice, the source color multiplied by a source factor and the destination color multiplied by a destination factor are combined. For example, if the operation is addition, then BLEND_color = SRC_color * SRC_factor + DST_color * DST_factor, where 0 ≤ SRC_factor ≤ 1 and 0 ≤ DST_factor ≤ 1. With the four components of the source color (red, green, blue, and alpha) written as (Rs, Gs, Bs, As), the four components of the destination color as (Rd, Gd, Bd, Ad), the source factor as (Sr, Sg, Sb, Sa), and the destination factor as (Dr, Dg, Db, Da), the new color produced by blending can be expressed as (Rs*Sr + Rd*Dr, Gs*Sg + Gd*Dg, Bs*Sb + Bd*Db, As*Sa + Ad*Da), where the alpha value represents transparency and 0 ≤ alpha ≤ 1.
  • The above blending method is only an example; in practice, the blending method can be defined or selected freely, and the operation can be addition, subtraction, multiplication, division, taking the larger of the two, taking the smaller of the two, or a logical operation (AND, OR, XOR, and so on).
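
A small numpy sketch of the blend equation above; the source-alpha / one-minus-source-alpha factors are only one assumed choice of blend mode, and other modes would simply change the two factor terms.

```python
# Sketch: BLEND_color = SRC_color * SRC_factor + DST_color * DST_factor,
# here with standard alpha-blending factors.
import numpy as np

def blend(src_rgba, dst_rgba):
    src = np.asarray(src_rgba, dtype=float)
    dst = np.asarray(dst_rgba, dtype=float)
    src_factor = src[3]            # alpha of the color being drawn
    dst_factor = 1.0 - src[3]      # remaining weight for the color already at the pixel
    return src * src_factor + dst * dst_factor

# Drawing a half-transparent red over an opaque yellow pixel:
print(blend((1.0, 0.0, 0.0, 0.5), (1.0, 1.0, 0.0, 1.0)))   # -> [1.   0.5  0.   0.75]
```
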
  • The depth test refers to setting up a depth buffer that corresponds to the color buffer: the depth buffer stores per-pixel depth information and the color buffer stores per-pixel color information. When deciding whether to draw a point on an object's surface, the depth value of the corresponding pixel is first compared with the value stored in the depth buffer; if it is greater than or equal to the stored value, that fragment is discarded, otherwise the depth buffer and color buffer are updated with this pixel's depth value and color value, respectively. This process is called depth testing.
  • Before drawing the scene, the depth buffer is cleared together with the color buffer; clearing sets the depth buffer to 1, the maximum depth value. Depth values lie in the range [0, 1], with smaller values closer to the viewer and larger values farther away. When depth testing is enabled, a depth comparison function must also be set; typical functions are:
  • DF_ALWAYS: always passes the test, which behaves the same as disabling depth testing, since the depth and color buffers are always updated with the current pixel's depth and color values;
  • DF_NEVER: never passes the test, so the values in the depth and color buffers are always kept, that is, no pixel is ever drawn to the screen;
  • DF_LESS: passes when the current depth value < the stored depth value;
  • DF_EQUAL: passes when the current depth value = the stored depth value;
  • DF_LEQUAL: passes when the current depth value ≤ the stored depth value;
  • DF_GREATER: passes when the current depth value > the stored depth value;
  • DF_NOTEQUAL: passes when the current depth value ≠ the stored depth value;
  • DF_GEQUAL: passes when the current depth value ≥ the stored depth value.
  • Depth writing is associated with depth testing: in general, if depth testing is enabled and its result may update the value of the depth buffer, depth writing needs to be turned on so that the depth buffer can actually be updated.
  • The following example illustrates the drawing process with depth testing and depth writing enabled. Suppose two color blocks, red and yellow, are to be drawn; in the render queue the red block comes first and the yellow block second, the red block has a depth value of 0.5, the yellow block has a depth value of 0.2, and the depth test comparison function is DF_LEQUAL. The depth buffer is first written with 0.5 and the color buffer with red; then, when the yellow block is rendered, the comparison function yields 0.2 ≤ 0.5, the test passes, the depth buffer is updated to 0.2, and the color buffer is updated to yellow. In other words, because yellow is shallower, it must cover the deeper red.
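
The red/yellow example can be reproduced with a toy z-buffer. The sketch below treats each block as a single pixel purely to show how DF_LEQUAL combined with depth writing decides which color survives; it is not a rendering implementation.

```python
# Sketch: depth testing with DF_LEQUAL and depth writing enabled,
# reproducing the red (depth 0.5) then yellow (depth 0.2) example.
depth_buffer = 1.0       # cleared to the maximum depth value
color_buffer = None      # cleared color

def draw(color, depth, depth_test=True, depth_write=True):
    global depth_buffer, color_buffer
    if depth_test and not (depth <= depth_buffer):   # DF_LEQUAL comparison function
        return                                       # fragment discarded
    color_buffer = color                             # update the color buffer
    if depth_write:
        depth_buffer = depth                         # update the depth buffer

draw("red", 0.5)      # passes: 0.5 <= 1.0
draw("yellow", 0.2)   # passes: 0.2 <= 0.5, shallower yellow covers the deeper red
print(color_buffer, depth_buffer)                    # -> yellow 0.2
```
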
  • Culling refers to the fact that, in three-dimensional space, a polygon has two faces but we cannot see the polygons facing away from us, and some front-facing polygons are occluded by other polygons. Treating invisible polygons the same as visible ones would reduce the efficiency of graphics processing, so unnecessary faces can be culled. When culling is enabled, the faces to be culled can be specified, for example back faces and/or front faces.
  • In this embodiment, the reflectivity of the material to various kinds of light is also set, with a reflectivity specified for each color component of each light type. For example, for ambient light whose color components are red, yellow, and blue, the reflectivity for red could be 0.5, the reflectivity for yellow 0.1, and the reflectivity for blue 0.2; once the ambient light is set, the surface of the 3D model shows a corresponding color and sheen, which exhibits the material's reflection properties for different light.
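
A tiny numeric sketch of this reflectivity idea: the displayed surface color is obtained by scaling each color component of the incoming light by the material's per-component reflectance; the 0.5 / 0.1 / 0.2 values are the ones from the example above, and full-intensity light is assumed.

```python
# Sketch: surface color = per-component light color * material reflectance.
import numpy as np

ambient_light = np.array([1.0, 1.0, 1.0])        # incoming ambient light, one value per color component
ambient_reflectance = np.array([0.5, 0.1, 0.2])  # material reflectivity for the three components

surface_color = ambient_light * ambient_reflectance
print(surface_color)                             # -> [0.5 0.1 0.2]
```
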
  • In one embodiment, creating the three-dimensional special effect further includes configuring a light, where the light is one of a point light source, a parallel (directional) light source, and a spotlight. For each of these light types, the position, direction, color, and intensity of the light need to be configured. Once the light is configured and combined with the previously configured material of the 3D model, different colors can be produced on the model's surface according to the material's light reflectance. For point lights and spotlights, an attenuation radius can also be set to simulate real lighting; for spotlights, the emission angle and falloff angle can also be set, and so on. These are not listed exhaustively here; it should be understood that the light settings are not limited to the above light types, and any light source applicable to the technical solutions of the present disclosure is incorporated into the present disclosure.
  • In one embodiment, creating the three-dimensional special effect further includes configuring the texture of the three-dimensional model, which specifically includes obtaining the texture map and configuring the texture wrapping mode. A map representing the texture must first be obtained, usually by importing it; then the texture wrapping mode can be configured, which determines how the texture is handled when the three-dimensional model is larger than the texture map. The simplest mode is REPEAT, which tiles the texture map until the 3D model is completely covered; this is also the most commonly used mode. Another mode is CLAMP, in which the part of the 3D model not covered by the texture map is filled with the color of the texture map's edge.
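
The two wrapping modes can be sketched in terms of a texture coordinate outside the [0, 1] range: REPEAT tiles the coordinate, while CLAMP pins it to the texture edge. The function names are illustrative only.

```python
# Sketch: REPEAT vs CLAMP handling of a texture coordinate u outside [0, 1].
def wrap_repeat(u):
    return u % 1.0                    # tile the texture map

def wrap_clamp(u):
    return min(max(u, 0.0), 1.0)      # reuse the color at the edge of the map

for u in (0.25, 1.75, -0.3):
    print(u, "repeat ->", round(wrap_repeat(u), 2), "clamp ->", wrap_clamp(u))
```
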
  • Step S3 Generate special effect parameters according to the three-dimensional special effect.
  • In this embodiment, the special effect parameters are the parameters configured when the three-dimensional special effect was created in step S2; a three-dimensional special effect drawing parameter package can be generated from these parameters so that it can be used when the three-dimensional special effect is generated.
  • the purpose of this step is to provide a drawing parameter package for the 3D special effect generation algorithm so as to load the 3D special effect on other face images.
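
One plausible shape for this drawing parameter package is a serialized bundle of everything configured in step S2 plus the chosen reference points; the JSON layout, field names, and values below are assumptions for illustration, not a format defined by the disclosure.

```python
# Sketch: packaging the configured special effect parameters so that a separate
# 3D effect generation algorithm can load them for real-time rendering.
import json

effect_parameters = {
    "model": {"mesh": "glasses.obj",
              "position": [0.0, 0.2, 0.0],
              "size": [1.0, 1.0, 1.0],
              "rotation": [0.0, 15.0, 0.0]},
    "material": {"blend_mode": "alpha",
                 "depth_test": True, "depth_write": True, "cull": "back",
                 "ambient_reflectance": [0.5, 0.1, 0.2]},
    "light": {"type": "point", "position": [0.0, 1.0, 1.0],
              "color": [1.0, 1.0, 1.0], "intensity": 0.8},
    "texture": {"map": "glasses_diffuse.png", "wrap": "REPEAT"},
    "reference_points": [37, 46],     # hypothetical feature point indices chosen in step S11
}

with open("effect_package.json", "w") as f:
    json.dump(effect_parameters, f, indent=2)
```
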
  • Step S4 Acquire a first face image identified from the image sensor.
  • a face image recognized from a camera is obtained.
  • The face image may be a face recognized from a real person, or a face recognized from a picture or video containing a face captured with the camera; the present disclosure does not limit this. In short, the first face image is different from the standard face image.
  • Recognizing a face image mainly means detecting the face in the image. Face detection is the process of searching any given image or image sequence with a certain strategy to determine the positions and regions of all faces, that is, determining whether faces are present and determining their number and spatial distribution. Common face detection methods can be divided into four categories: (1) knowledge-based methods, which encode typical faces into a rule base and locate faces through the relationships between facial features; (2) feature-invariant methods, which aim to find features that remain stable under changes in pose, viewpoint, or lighting and then use these features to determine the face; (3) template matching methods, which store several standard face patterns used to describe the whole face and the facial features respectively, and then compute the correlation between an input image and the stored patterns for detection; (4) appearance-based methods, which, in contrast to template matching, learn models from a set of training images and use these models for detection.
  • One implementation of the fourth category is used here to illustrate the face detection process. Features are first extracted to build a model; this embodiment uses Haar features as the key features for judging a face. Haar features are simple rectangular features that are fast to extract; the feature template used for computing a Haar feature is a simple combination of two or more congruent rectangles, containing black and white rectangles. The AdaBoost algorithm then finds, from a large number of Haar features, the subset of features that plays a key role and uses these features to build an effective classifier. The constructed classifier can detect faces in the image; in this embodiment the image may contain one or more faces.
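
As a hedged sketch of the Haar-feature approach, the snippet below uses OpenCV's bundled pre-trained frontal-face cascade; the disclosure describes Haar features and AdaBoost generically, so the specific OpenCV API and cascade file are assumptions made for illustration.

```python
# Sketch: detecting one or more faces with a pre-trained Haar cascade classifier.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

frame = cv2.imread("camera_frame.jpg")              # image obtained from the image sensor
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

print(f"detected {len(faces)} face(s)")
for (x, y, w, h) in faces:
    print("face region:", x, y, w, h)
```
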
  • It should be understood that, because each face detection algorithm has its own advantages and its own range of applicability, multiple different detection algorithms can be configured and switched automatically for different environments. For example, in images with a relatively simple background an algorithm with a lower detection rate but higher speed can be used; in images with a more complex background an algorithm with a higher detection rate but lower speed can be used; and the same image can also be processed by several algorithms or several passes to improve the detection rate.
  • Step S5 Generate the three-dimensional special effect on the first face image according to the special effect parameters.
  • In this step, according to the special effect parameters generated in step S3, the same three-dimensional special effect as on the standard face image is generated on the first face image recognized from the camera, and the position of the three-dimensional special effect on the first face image is the same as its position on the standard face image. Specifically, the special effect parameters may include the parameters set when the three-dimensional special effect was created in step S2: the position of the three-dimensional special effect on the first face image is determined by the position of the three-dimensional model in the parameters, the other attributes of the three-dimensional special effect are determined by the other parameters, and by loading these parameters into the three-dimensional special effect generation algorithm the three-dimensional special effect is generated automatically on the first face image.
  • It should be understood that, when multiple face images are recognized in the image, the user can select one face image on which to create the 3D special effect, or select several face images for the same or different processing. For example, when creating the 3D special effects, the standard faces can be numbered, such as ID1 and ID2, and 3D special effects can be set on the ID1 and ID2 standard face images respectively; the two 3D special effects can be the same or different. When multiple face images are recognized from the camera, 3D special effects are added to them in the order in which they are recognized: if face No. 1 is recognized first, the 3D special effect of the ID1 standard face image is added to face No. 1, and when face No. 2 is then recognized, the 3D special effect of the ID2 standard face image is added to face No. 2. If only the ID1 standard-face 3D special effect was created, it can be added to both face No. 1 and face No. 2, or only to face No. 1. The multiple faces can also exchange 3D special effects through different actions; for example, face No. 2 can transfer the ID2 standard-face 3D special effect to face No. 1 with a head-flick action. The specific actions are not limited in the present disclosure.
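
A minimal sketch of the ID1/ID2 assignment logic described above: effects configured on numbered standard faces are handed out to detected faces in recognition order. The fallback for extra faces (reusing the first effect) and all names are illustrative assumptions.

```python
# Sketch: assign per-ID 3D effects to faces in the order they are recognized.
effects_by_id = {"ID1": "glasses_effect", "ID2": "tears_effect"}   # configured in step S2

def assign_effects(detected_faces):
    """detected_faces is a list in recognition order; returns a face -> effect mapping."""
    assignments = {}
    ids = list(effects_by_id)
    for index, face in enumerate(detected_faces):
        if index < len(ids):
            assignments[face] = effects_by_id[ids[index]]
        else:
            assignments[face] = effects_by_id[ids[0]]   # illustrative fallback for extra faces
    return assignments

print(assign_effects(["face_1", "face_2", "face_3"]))
```
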
  • In the embodiments of the present disclosure, a three-dimensional special effect is created on a standard face image; the three-dimensional special effect can be configured to generate three-dimensional special effect parameters, and the three-dimensional special effect on an actual face image is then generated according to those parameters. In the prior art, 3D special effects have to be produced with third-party tools, which lack flexibility in use, cannot be configured in real time, and can only be placed into existing images or videos, so 3D special effects cannot be generated on face images collected in real time. In this embodiment, through the three-dimensional special effect creation operation, the user can conveniently configure and edit the 3D special effect, and the generated 3D special effect parameters allow the 3D special effect generation algorithm to render the effect on face images collected in real time.
  • the standard face image includes facial feature points. As shown in FIG. 2, after displaying the standard face image in step S1, the method includes:
  • Step S11 Receive a reference point selection command, and select at least one feature point as a reference point.
  • Step S5, generating the three-dimensional special effect on the first face image according to the special effect parameters, then specifically includes step S12: generating the three-dimensional special effect on the first face image according to the special effect parameters and the reference point.
  • Because a mapping is needed from the three-dimensional special effect on the standard face image to the three-dimensional special effect on the first face image collected by the image sensor, and the mapping can be done in different ways, the behavior can be divided into fixed 3D special effects and tracking 3D special effects. In one embodiment a fixed three-dimensional special effect is used; this kind of effect is relatively simple, requiring only the absolute position of the whole effect range within the image sensor to be set. One implementation is to establish a point-to-point correspondence between the pixels of the display device and the pixels of the image sensor's acquisition window, determine the position of the 3D special effect on the display device, and then apply the corresponding 3D special effect processing at the corresponding position of the image collected through the acquisition window. The advantage of this approach is that it is simple and easy to operate; the parameters it uses are all relative to the position of the acquisition window. In another embodiment, when generating the three-dimensional special effect image, the feature points of the standard face image from step S1 are first obtained and the position of the three-dimensional special effect in the standard face image is determined from those feature points; the first face image corresponding to the standard face image is then identified in the image collected by the image sensor; the position determined in the standard face image is mapped onto the first face image; and the three-dimensional special effect processing is applied to the first face image to generate the three-dimensional special effect image. In this way the relative position of the three-dimensional special effect within the first face image is determined, so that no matter how the first face image moves or changes, the three-dimensional special effect always stays at that relative position, achieving the purpose of tracking the three-dimensional special effect.
  • In a typical application, the standard face image is triangulated and has 106 feature points; the relative position of the effect's action range within the face image is determined using the 3D special effect and the relative positions of the feature points, and the same triangulation is applied to the face image collected by the camera. When the face in the camera moves or turns, the three-dimensional special effect stays fixed at the relative position on the face, achieving the effect of tracking the three-dimensional special effect.
  • The reference point serves to determine the relative position: a reference point on the standard face image has a corresponding feature point on the first face image, and a mapping is generated from the reference points and the corresponding feature points on the first face image. Through this mapping, the 3D special effect can be generated at the corresponding position of the first face image.
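
A sketch of that mapping: an affine transform is fitted by least squares from the reference points on the standard face image to the corresponding feature points on the first face image, and the effect's anchor position is pushed through it. The affine model and all coordinate values are assumptions; the disclosure only requires some mapping between the two point sets.

```python
# Sketch: map an effect anchor from the standard face image to a detected face
# using an affine transform fitted to corresponding reference points.
import numpy as np

def fit_affine(src_pts, dst_pts):
    src = np.asarray(src_pts, dtype=float)
    dst = np.asarray(dst_pts, dtype=float)
    A = np.hstack([src, np.ones((len(src), 1))])     # rows of [x, y, 1]
    coeffs, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return coeffs                                    # 3x2 affine matrix

def apply_affine(coeffs, point):
    x, y = point
    return tuple(np.array([x, y, 1.0]) @ coeffs)

std_refs = [(180, 200), (320, 200), (250, 330)]    # reference points on the standard face image
live_refs = [(402, 410), (538, 418), (470, 545)]   # corresponding feature points on the first face image
anchor_on_standard = (250, 230)                    # where the 3D effect sits on the standard face

coeffs = fit_affine(std_refs, live_refs)
print("anchor on the first face image:", apply_affine(coeffs, anchor_on_standard))
```
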
  • In this optional embodiment, the three-dimensional special effect is displayed only when certain conditions are met; the trigger condition may be a user action, an expression, a sound, a parameter of the terminal, and so on. The action may be a facial action such as blinking, opening the mouth wide, shaking the head, nodding, or raising the eyebrows. For example, for a 3D sticker whose effect is a pair of glasses, the trigger condition may be set to blinking twice quickly; when the user is detected blinking twice quickly, the glasses sticker is displayed over the user's eyes. The expression may be happy, dejected, angry, and so on; for example, for a 3D sticker whose effect is tears, the trigger condition may be set to a dejected expression, and when the user's expression is detected as dejected, the tear sticker is displayed below the user's eyes. When the trigger condition is a sound, the user's voice or the ambient sound can be monitored, and the corresponding 3D special effect is triggered when a predetermined sound is detected. When the trigger condition is a terminal parameter, parameters of the terminal's components, such as its attitude or shaking, can be monitored and used to trigger the corresponding 3D special effect. These are not enumerated one by one here; it should be understood that the trigger condition may be any condition applicable to the technical solutions of the present disclosure, and there may be one or more trigger conditions. A trigger may start an effect or make it disappear: with a start trigger, the corresponding 3D special effect appears when the trigger condition occurs; with a disappearance trigger, the corresponding 3D special effect disappears when the trigger condition occurs. The trigger condition may also include a delay after the trigger, that is, how long after the trigger condition occurs the 3D special effect appears or disappears.
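
A toy sketch of the "blink twice quickly" trigger with an optional delay before the effect appears; the timing thresholds, event names, and class layout are assumptions used only to illustrate how a trigger condition could gate the 3D special effect.

```python
# Sketch: show the glasses effect after two quick blinks, with an optional display delay.
class BlinkTrigger:
    def __init__(self, window=1.0, delay=0.5):
        self.window = window        # both blinks must fall within this many seconds
        self.delay = delay          # seconds to wait after triggering before showing the effect
        self.blink_times = []
        self.show_at = None

    def on_blink(self, t):
        self.blink_times = [b for b in self.blink_times if t - b <= self.window] + [t]
        if len(self.blink_times) >= 2 and self.show_at is None:
            self.show_at = t + self.delay

    def effect_visible(self, t):
        return self.show_at is not None and t >= self.show_at

trigger = BlinkTrigger()
trigger.on_blink(0.2)
trigger.on_blink(0.9)                 # second blink within 1 s, so the trigger fires
print(trigger.effect_visible(1.0))    # False: still inside the 0.5 s delay
print(trigger.effect_visible(1.5))    # True: the glasses sticker is displayed
```
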
  • The following are device embodiments of the present disclosure, which can be used to perform the steps implemented by the method embodiments of the present disclosure. For ease of explanation, only the parts related to the embodiments of the present disclosure are shown; for specific technical details that are not disclosed, reference is made to the method embodiments of the present disclosure.
  • An embodiment of the present disclosure provides a three-dimensional special effect generating device based on a human face.
  • the device can perform the steps described in the embodiment of the method for generating a 3D special effect based on a face.
  • As shown in FIG. 3, the device mainly includes a display module 31, a three-dimensional special effect creation module 32, a special effect parameter generation module 33, a face image acquisition module 34, and a three-dimensional special effect generation module 35.
  • The display module 31 is used to display a standard face image; the three-dimensional special effect creation module 32 is used to create a three-dimensional special effect, the three-dimensional special effect being created by configuring at least a three-dimensional model and the material of the three-dimensional model, with the three-dimensional special effect located on the standard face image; the special effect parameter generation module 33 is configured to generate special effect parameters according to the three-dimensional special effect; the face image acquisition module 34 is configured to obtain a first face image recognized from an image sensor; and the three-dimensional special effect generation module 35 is configured to generate the three-dimensional special effect on the first face image according to the special effect parameters.
  • The above face-based 3D special effect generation device corresponds to the face-based 3D special effect generation method of the embodiment shown in FIG. 1; for specific details, please refer to the description of that method above, which is not repeated here.
  • the face-based 3D special effect generating device further includes:
  • the reference point selection module 36 is configured to receive a reference point selection command and select at least one feature point as a reference point.
  • the three-dimensional special effect generating module 35 is configured to generate the three-dimensional special effect on the first face image according to the special effect parameter and the reference point.
  • The above face-based 3D special effect generation device corresponds to the face-based 3D special effect generation method of the embodiment shown in FIG. 2; for specific details, please refer to the description of that method above, which is not repeated here.
  • In the embodiments of the present disclosure, a three-dimensional special effect is created on a standard face image; the three-dimensional special effect can be configured to generate three-dimensional special effect parameters, and the three-dimensional special effect on an actual face image is then generated according to those parameters. In the prior art, 3D special effects have to be produced with third-party tools, which lack flexibility in use, cannot be configured in real time, and can only be placed into existing images or videos, so 3D special effects cannot be generated on face images collected in real time. In this embodiment, through the three-dimensional special effect creation operation, the user can conveniently configure and edit the 3D special effect, and the generated 3D special effect parameters allow the 3D special effect generation algorithm to render the effect on face images collected in real time.
  • FIG. 5 is a block diagram illustrating an electronic device according to an embodiment of the present disclosure. As shown in FIG. 5, the electronic device 50 according to an embodiment of the present disclosure includes a memory 51 and a processor 52.
  • the memory 51 is configured to store non-transitory computer-readable instructions.
  • the memory 51 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and / or non-volatile memory.
  • the volatile memory may include, for example, a random access memory (RAM) and / or a cache memory.
  • the non-volatile memory may include, for example, a read-only memory (ROM), a hard disk, a flash memory, and the like.
  • the processor 52 may be a central processing unit (CPU) or other form of processing unit having data processing capabilities and / or instruction execution capabilities, and may control other components in the electronic device 50 to perform desired functions.
  • In an embodiment of the present disclosure, the processor 52 is configured to run the computer-readable instructions stored in the memory 51 so that the electronic device 50 performs all or part of the steps of the face-based special effect generation method of the foregoing embodiments of the present disclosure.
  • Those skilled in the art should understand that this embodiment may also include well-known structures such as a communication bus and interfaces, and these well-known structures should also be included within the protection scope of the present disclosure.
  • FIG. 6 is a schematic diagram illustrating a computer-readable storage medium according to an embodiment of the present disclosure.
  • a computer-readable storage medium 60 stores non-transitory computer-readable instructions 61 thereon.
  • When the non-transitory computer-readable instructions 61 are executed by a processor, all or part of the steps of the face-based special effect generation method of the foregoing embodiments of the present disclosure are performed.
  • The computer-readable storage medium 60 includes, but is not limited to, optical storage media (for example, CD-ROM and DVD), magneto-optical storage media (for example, MO), magnetic storage media (for example, magnetic tape or removable hard disks), media with built-in rewritable non-volatile memory (for example, memory cards), and media with built-in ROM (for example, ROM cartridges).
  • FIG. 7 is a schematic diagram illustrating a hardware structure of a terminal device according to an embodiment of the present disclosure.
  • the face-based 3D special effect generating terminal 70 includes the embodiment of the face-based 3D special effect generating device described above.
  • The terminal device may be implemented in various forms; the terminal device in the present disclosure may include, but is not limited to, mobile terminal devices such as mobile phones, smart phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), navigation devices, vehicle-mounted terminal devices, vehicle-mounted display terminals, and vehicle-mounted electronic rear-view mirrors, as well as fixed terminal devices such as digital TVs and desktop computers.
  • the terminal may further include other components.
  • As shown in FIG. 7, the face-based 3D special effect generation terminal 70 may include a power supply unit 71, a wireless communication unit 72, an A/V (audio/video) input unit 73, a user input unit 74, a sensing unit 75, an interface unit 76, a controller 77, an output unit 78, a storage unit 79, and so on.
  • FIG. 7 illustrates a terminal having various components, but it should be understood that it is not required to implement all the illustrated components, and more or fewer components may be implemented instead.
  • the wireless communication unit 72 allows radio communication between the terminal 70 and a wireless communication system or network.
  • the A / V input unit 73 is used to receive audio or video signals.
  • the user input unit 74 may generate key input data according to a command input by the user to control various operations of the terminal device.
  • The sensing unit 75 detects the current state of the terminal 70, the position of the terminal 70, the presence or absence of the user's touch input on the terminal 70, the orientation of the terminal 70, the acceleration or deceleration and direction of movement of the terminal 70, and so on, and generates commands or signals for controlling the operation of the terminal 70.
  • the interface unit 76 functions as an interface through which at least one external device can be connected to the terminal 70.
  • the output unit 78 is configured to provide an output signal in a visual, audio, and / or tactile manner.
  • the storage unit 79 may store software programs and the like for processing and control operations performed by the controller 77, or may temporarily store data that has been output or is to be output.
  • the storage unit 79 may include at least one type of storage medium.
  • the terminal 70 can cooperate with a network storage device that performs a storage function of the storage unit 79 through a network connection.
  • the controller 77 generally controls the overall operation of the terminal device.
  • the controller 77 may include a multimedia module for reproducing or playing back multimedia data.
  • the controller 77 may perform a pattern recognition process to recognize a handwriting input or a picture drawing input performed on the touch screen as characters or images.
  • the power supply unit 71 receives external power or internal power under the control of the controller 77 and provides appropriate power required to operate each element and component.
  • Various embodiments of the face-based three-dimensional special effect generation method proposed by the present disclosure may be implemented in a computer-readable medium using, for example, computer software, hardware, or any combination thereof.
  • For hardware implementation, the various embodiments of the face-based 3D special effect generation method proposed by the present disclosure may be implemented using at least one of an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a digital signal processing device (DSPD), a programmable logic device (PLD), a field-programmable gate array (FPGA), a processor, a controller, a microcontroller, a microprocessor, or an electronic unit designed to perform the functions described herein; in some cases, these embodiments may be implemented in the controller 77.
  • For software implementation, the various embodiments of the face-based 3D special effect generation method proposed by the present disclosure may be implemented with separate software modules that each allow at least one function or operation to be performed.
  • The software code may be implemented as a software application (or program) written in any suitable programming language, and the software code may be stored in the storage unit 79 and executed by the controller 77.
  • an "or” used in an enumeration of items beginning with “at least one” indicates a separate enumeration such that, for example, an "at least one of A, B, or C” enumeration means A or B or C, or AB or AC or BC, or ABC (ie A and B and C).
  • the word "exemplary” does not mean that the described example is preferred or better than other examples.
  • each component or each step can be disassembled and / or recombined.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Computer Graphics (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Architecture (AREA)
  • Processing Or Creating Images (AREA)
  • Image Generation (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A face-based three-dimensional special effect generation method and apparatus, an electronic device, and a computer-readable storage medium. The face-based three-dimensional special effect generation method includes: displaying a standard face image (S1); creating a three-dimensional special effect, where the three-dimensional special effect is created by configuring at least a three-dimensional model and the material of the three-dimensional model, and the three-dimensional special effect is located on the standard face image (S2); generating special effect parameters according to the three-dimensional special effect (S3); obtaining a first face image recognized from an image sensor (S4); and generating the three-dimensional special effect on the first face image according to the special effect parameters (S5). With this method, the three-dimensional special effect can be conveniently configured and edited, and three-dimensional special effect parameters can be generated so that a three-dimensional special effect generation algorithm can use the parameters to generate the three-dimensional special effect on face images collected in real time.

Description

基于人脸的三维特效生成方法、装置和电子设备
交叉引用
本公开引用于2018年07月27日递交的名称为“基于人脸的三维特效生成方法、装置和电子设备”的、申请号为201810838414.X的中国专利申请,其通过引用被全部并入本申请。
技术领域
本公开涉及一种图像技术领域,特别是涉及一种基于人脸的三维特效生成方法、装置、硬件装置和计算机可读存储介质。
背景技术
随着计算机技术的发展,智能终端的应用范围得到了广泛的扩展,例如可以通过智能终端听音乐、玩游戏、上网聊天和拍照等。对于智能终端的拍照技术来说,其拍照像素已经达到千万像素以上,具有较高的清晰度和媲美专业相机的拍照效果。
目前在采用智能终端进行拍照或者拍视频时,不仅可以使用出厂时内置的拍照软件实现传统功能的拍照和视频效果,还可以通过从网络端下载应用程序(Application,简称为:APP)来实现具有附加功能的拍照效果或者视频效果。
目前的特效制作APP都是预制了一些特效,无法灵活编辑,并且所述特效只能固定在图像的固定位置上。
发明内容
根据本公开的一个方面,本公开提供一种基于人脸的三维特效生成方法,包括:显示标准人脸图像;创建三维特效,所述三维特效至少通过配置三维模型和配置三维模型的材质来创建,所述三维特效位于所述标准人脸图像上;根据所述三维特效生成特效参数;获取从图像传感器中识别出的第一人脸图像;根据所述特效参数,在第一人脸图像上生成所述三维特效。
可选的,所述配置三维模型,包括:获取所述三维模型,并将其显示在 标准人脸图像上;配置所述三维模型的位置以及尺寸。
可选的,所述配置三维模型的材质,包括:创建所述材质,并调节所述材质的参数;所述材质的参数包括渲染混合模式参数、是否启用深度测试参数、是否启用深度写参数、是否开启剔除参数中的一个或多个。
可选的,所述创建三维特效,还包括:配置灯光,所述灯光包括点光源、平行光源及聚光灯光源中的一种。
可选的,所述配置灯光,包括:配置灯光的位置、方向、颜色以及强度。
可选的,所述创建三维特效,还包括:配置所述三维模型的纹理,包括:获取所述纹理的贴图;配置纹理的环绕模式。
可选的,所述标准人脸图像上包括人脸特征点,所述显示标准人脸图像之后,包括:接收参考点选择命令,选择至少一个特征点作为参考点;所述根据所述特效参数,在第一人脸图像上生成所述三维特效,包括:根据所述特效参数以及所述参考点,在第一人脸图像上生成所述三维特效。
根据本公开的另一个方面,本公开提供一种基于人脸的三维特效生成装置,包括:显示模块,用于显示标准人脸图像;三维特效创建模块,用于创建三维特效,所述三维特效至少通过配置三维模型和配置三维模型的材质来创建;特效参数生成模块,用于根据所述三维特效生成特效参数;人脸图像获取模块,用于获取从图像传感器中识别出的第一人脸图像;三维特效生成模块,用于根据所述特效参数,在第一人脸图像上生成所述三维特效。
可选的,所述配置三维模型,包括:获取所述三维模型,并将其显示在标准人脸图像上;配置所述三维模型的位置以及尺寸。
可选的,所述配置三维模型的材质,包括:创建所述材质,并调节所述材质的参数;所述材质的参数包括渲染混合模式、是否启用深度测试、是否启用深度写、是否开启剔除中的一个或多个。
可选的,所述三维特效创建模块,还用于:配置灯光,所述灯光包括点 光源、平行光源及聚光灯光源中的一种。
可选的,所述配置灯光,包括:配置灯光的位置、方向、颜色以及强度。
可选的,所述三维特效创建模块,还用于:配置所述三维模型的纹理,包括:获取所述纹理的贴图;配置纹理的环绕模式。
可选的,所述标准人脸图像上包括人脸特征点,所述基于人脸的三维特效生成装置,还包括:参考点选择模块,用于接收参考点选择命令,选择至少一个特征点作为参考点;所述三维特效生成模块,用于:根据所述特效参数以及所述参考点,在第一人脸图像上生成所述三维特效。
根据本公开的又一个方面,本公开提供一种电子设备,包括:存储器,用于存储非暂时性计算机可读指令;以及,处理器,用于运行所述计算机可读指令,使得所述处理器执行时实现上述任一方法所述的步骤。
根据本公开的又一个方面,本公开提供一种计算机可读存储介质,用于存储非暂时性计算机可读指令,当所述非暂时性计算机可读指令由计算机执行时,使得所述计算机执行上述任一方法中所述的步骤。
本公开实施例提供一种基于人脸的三维特效生成方法、装置、电子设备和计算机可读存储介质。其中,该基于人脸的三维特效生成方法包括:显示标准人脸图像;创建三维特效,所述三维特效至少通过配置三维模型和配置三维模型的材质来创建,所述三维特效位于所述标准人脸图像上;根据所述三维特效生成特效参数;获取从图像传感器中识别出的第一人脸图像;根据所述特效参数,在第一人脸图像上生成所述三维特效。而本实施例中,通过三维特效创建操作,用户可以方便的对三维特效进行配置和编辑,并且可以通过生成三维特效参数,使三维特效生成算法可以使用三维特效参数在实时采集的人脸图像上生成三维特效,因此相较于已有技术,大大降低了三维特效的编辑难度以及编辑时间,且三维特效可以同步于任何实时采集到的人脸图像上,从而提高了用户体验效果。
上述说明仅是本公开技术方案的概述,为了能更清楚了解本公开的技术手段,而可依照说明书的内容予以实施,并且为让本公开的上述和其他目的、特征和优点能够更明显易懂,以下特举较佳实施例,并配合附图,详细说明如下。
附图说明
图1为根据本公开一个实施例的基于人脸的三维特效生成方法的流程示意图;
图2为根据本公开又一实施例的基于人脸的三维特效生成方法的流程示意图;
图3为根据本公开一实施例的基于人脸的三维特效生成装置的结构示意图;
图4为根据本公开又一实施例的基于人脸的三维特效生成装置的结构示意图;
图5为根据本公开一个实施例的电子设备的结构示意图;
图6为根据本公开一个实施例的计算机可读存储介质的结构示意图;
图7为根据本公开一个实施例的基于人脸的三维特效生成终端的结构示意图。
具体实施方式
以下通过特定的具体实例说明本公开的实施方式,本领域技术人员可由本说明书所揭露的内容轻易地了解本公开的其他优点与功效。显然,所描述的实施例仅仅是本公开一部分实施例,而不是全部的实施例。本公开还可以通过另外不同的具体实施方式加以实施或应用,本说明书中的各项细节也可以基于不同观点与应用,在没有背离本公开的精神下进行各种修饰或改变。需说明的是,在不冲突的情况下,以下实施例及实施例中的特征可以相互组合。基于本公开中的实施例,本领域普通技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本公开保护的范围。
需要说明的是,下文描述在所附权利要求书的范围内的实施例的各种方面。应显而易见,本文中所描述的方面可体现于广泛多种形式中,且本文中所描述的任何特定结构及/或功能仅为说明性的。基于本公开,所属领域的技术人员应了解,本文中所描述的一个方面可与任何其它方面独立地实施,且可以各种方式组合这些方面中的两者或两者以上。举例来说,可使用 本文中所阐述的任何数目个方面来实施设备及/或实践方法。另外,可使用除了本文中所阐述的方面中的一或多者之外的其它结构及/或功能性实施此设备及/或实践此方法。
还需要说明的是,以下实施例中所提供的图示仅以示意方式说明本公开的基本构想,图式中仅显示与本公开中有关的组件而非按照实际实施时的组件数目、形状及尺寸绘制,其实际实施时各组件的型态、数量及比例可为一种随意的改变,且其组件布局型态也可能更为复杂。
另外,在以下描述中,提供具体细节是为了便于透彻理解实例。然而,所属领域的技术人员将理解,可在没有这些特定细节的情况下实践所述方面。
为了解决如何提高用户体验效果的技术问题,本公开实施例提供一种基于人脸的三维特效生成方法。如图1所示,该基于人脸的三维特效生成方法主要包括如下步骤S1至步骤S5。其中:
步骤S1:显示标准人脸图像。
在显示装置上显示标准人脸图像,所述标准人脸图像是预先设置好的人脸图像,通常来说,该标准人脸图像为正面人脸图像。该标准人脸图像可以带有预先设置好的特征点,其中特征点的数量可以设置,用户可以自由设定所需要的特征点的数量。图像的特征点是指图像中具有鲜明特性并能够有效反映图像本质特征且能够标识图像中目标物体的点。如果目标物体为人脸,那么就需要获取人脸关键点,如果目标图像为一栋房子,那么就需要获取房子的关键点。以人脸为例说明关键点的获取方法,人脸轮廓主要包括眉毛、眼睛、鼻子、嘴巴和脸颊5个部分,有时还会包括瞳孔和鼻孔,一般来说实现对人脸轮廓较为完整的描述,需要关键点的个数在60个左右,如果只描述基本结构,不需要对各部位细节进行详细描述,或不需要描述脸颊,则可以相应降低关键点数目,如果需要描述瞳孔、鼻孔或者需要更细节的五官特征,则可以增加关键点的数目。在图像上进行人脸关键点提取,相当于寻找每个人脸轮廓关键点在人脸图像中的对应位置坐标,即关键点定位,这一过程需要基于关键点对应的特征进行,在获得了能够清晰标识关键点的图像特征之后,依据此特征在图像中进行搜索比对,在图像上精确定位关键点的位置。由于特征点在图像中仅占据非常小的面积(通常只有几个至几十个像素的大小),特征点对应的特征在图像上所占据的区域通常也是非 常有限和局部的,目前用的特征提取方式有两种:(1)沿轮廓垂向的一维范围图像特征提取;(2)特征点方形邻域的二维范围图像特征提取。上述两种方式有很多种实现方法,如ASM和AAM类方法、统计能量函数类方法、回归分析方法、深度学习方法、分类器方法、批量提取方法等等。上述各种实现方法所使用的关键点个数,准确度以及速度各不相同,适用于不同的应用场景。
步骤S2:创建三维特效,所述三维特效至少通过配置三维模型和配置三维模型的材质来创建,所述三维特效位于所述标准人脸图像上。
在该实施例中,创建需要使用的三维特效,在一个典型应用中,所述三维特效是3D贴纸。创建三维特效时,可以导入使用第三方软件制作的三维模型,所述三维模型上不带有任何颜色信息、深度信息、材质信息、纹理信息等,导入三维模型之后,可以对三维模型做参数配置,比如可以配置三维模型在屏幕中的位置或者在标准人脸图像上的位置;可以配置三维模型的大小和旋转角度,所述三维模型的大小可以通过三维模型的拖拽框来设置,或者通过直接输入拖拽框的长宽高来设置,所述旋转角度包括在x轴、y轴、z轴上的旋转角度,可以通过拖动拖拽框或者直接输入角度来设置。
在导入三维模型之后,还需要配置三维模型的材质。在配置材质时,首先需要创建材质,之后调节所述材质的参数。所述材质的参数包括渲染混合模式、是否开启深度测试、是否开启深度写、是否开启剔除中的一个或多个,所述材质的参数还包括体表面对射到表面上的色光的RGB分量的反射率,具体的包括对环境光、漫射光、镜面光和自发光的不同光线、不同颜色分量的反射程度。
具体的,所述渲染混合是指将两种颜色混合在一起,具体到本公开中是指将某一像素位置的颜色与将要画上去的颜色混合在一起,从而实现特殊效果,而渲染混合模式是指混合所使用的方式,一般来说混合方式是指将源颜色和目标颜色做计算,得出混合后的颜色,在实际应用中常常将源颜色乘以源因子得到的结果与目标颜色乘以目标因子得到的结果做计算,得到混合后的颜色,举例来说,所述计算为加,则BLENDcolor=SRC_color*SCR_factor+DST_color*DST_factor,其中0≤SCR_factor≤1,0≤DST_factor≤1。根据上述运算公式,假设源颜色的四个分量(指红色,绿色,蓝色,alpha值)是(Rs,Gs,Bs,As),目标颜色的四个分量是(Rd, Gd,Bd,Ad),又设源因子为(Sr,Sg,Sb,Sa),目标因子为(Dr,Dg,Db,Da)。则混合产生的新颜色可以表示为:(Rs*Sr+Rd*Dr,Gs*Sg+Gd*Dg,Bs*Sb+Bd*Db,As*Sa+Ad*Da),其中alpha值表示透明度,0≤alpha≤1。上述混合方式仅仅是举例,实际应用中,可以自行定义或者选择混合方式,所述计算可以是加、减、乘、除、取两者中较大的、取两者中较小的、逻辑运算(和、或、异或等等)。上述混合方式仅仅是举例,实际应用中,可以自行定义或者选择混合方式,所述计算可以是加、减、乘、除、取两者中较大的、取两者中较小的、逻辑运算(和、或、异或等等)。
所述深度测试,是指设置一个深度缓冲区,该深度缓冲区与颜色缓冲区相对应,深度缓冲区存储像素的深度信息,颜色缓冲区存储的像素的颜色信息,在决定是否绘制一个物体的表面时,首先将表面对应像素的深度值与存储在深度缓冲区中的值进行比较,如果大于等于深度缓冲区中值,则丢弃这部分;否则利用这个像素对应的深度值和颜色值,分别更新深度缓冲区和颜色缓冲区。这一过程称之为深度测试(Depth Testing)。在绘制场景前,清除颜色缓冲区时,清除深度缓冲区,清除深度缓冲区时将深度缓冲区的值设置为1,表示最大的深度值,深度值的范围在[0,1]之间,值越小表示越靠近观察者,值越大表示远离观察者。在开启深度测试时,还需要设置深度测试的比较函数,典型的函数如下:DF_ALWAYS,总是通过测试,此时与不开启深度测试是一样的,总是使用当前像素的深度值和颜色值更新深度缓冲区和颜色缓冲区;DF_NEVER,总是不通过测试,此时会一直保持深度缓冲和颜色缓冲区中的值,就是任何像素点都不会被绘制到屏幕上;DF_LESS,在当前深度值<存储的深度值时通过;DF_EQUAL,在当前深度值=存储的深度值时通过;DF_LEQUAL,在当前深度值≤存储的深度值时通过;DF_GREATER,在当前深度值>存储的深度值时通过;DF_NOTEQUAL,在当前深度值≠存储的深度值时通过;DF_GEQUAL,在当前深度值>=存储的深度值时通过。所述深度写是与深度测试关联的,一般来说如果开启了深度测试,并且深度测试的结果有可能会更新深度缓冲区的值时,需要开启深度写,以便对深度缓冲区的值进行更新。以下举例说明开启深度测试以及深度写时的图像绘制过程,假设要绘制两个色块,分别为红色和黄色,在渲染队列中,红色块在前,黄色块在后,红色块深度值为0.5,黄色块深度值为0.2,使用的深度测试比较函数为DF_LEQUAL,此时深度缓冲区中会先被写入0.5,颜色缓冲区中写入红色,之后在渲染黄色时,通过比较函数得出0.2<0.5, 通过测试,则将深度缓冲区的值更新为0.2,颜色缓冲区更新为黄色,也就是说因为黄色的深度比较浅,因此需要覆盖到深度较深的红色。
所述剔除,是指在三维空间中,一个多边形虽然有两个面,但我们无法看见背面的那些多边形,而一些多边形虽然是正面的,但被其他多边形所遮挡。如果将无法看见的多边形和可见的多边形同等对待,无疑会降低我们处理图形的效率。在这种时候,可以将不必要的面剔除。当开启剔除时,可以设置需要剔除的面,比如设置剔除背面和/或正面。
在该实施例中,还对材质对各种光线的反射率做设置,其中对每种光线的颜色分量设置反射率,比如对环境光,其颜色分量为红、黄、蓝,对红色的反射率为0.5,对黄色的反射率为0.1,对蓝色的反射率为0.2,这样当设置了环境光之后,三维模型的表面会呈现一种颜色和光泽,可以展示材质对不同光线的反射属性。
在一个实施例中,所述创建三维特效,还包括:配置灯光,所述灯光包括点光源、平行光源以及聚光灯光源中的一种,对于这三种光源,均需要配置灯光的位置、灯光的方向、灯光的颜色以及灯光的强度,配置好灯光之后,当与之前配置好的三维模型材质结合,可以根据三维材质的光线反射率结合,在三维模型的表面形成不同的颜色。对于点光源和聚光灯光源,还可以设置灯光的衰减半径,以模拟真实灯光情况,对于聚光灯光源,还可以设置灯光的发射角度以衰减角度等。在此不一一列举,可以理解的是,灯光的设置可以不限于上述灯光类型,对于可以应用于本公开技术方案中的光源都引入到本公开中。
在一个实施例中,所述创建三维特效,还包括:配置所述三维模型的纹理,具体包括:获取所述纹理的贴图;配置纹理的环绕模式。在该实施例中,首先需要获取表示纹理的贴图,通常可以使用导入的方式将纹理贴图导入;之后可以配置纹理的环绕模式,所述环绕模式是指当三维模型大于纹理贴图时,如何处理纹理,最简单的方式是REPEAT模式,就是重复纹理贴图直到三维模型被纹理贴图完全覆盖住,这也是最常用的一种模式,还有一种模式为CLAMP截取模式,纹理贴图覆盖不到的三维模型部分,会使用纹理贴图边缘的颜色覆盖。
步骤S3:根据所述三维特效生成特效参数。
在该实施例中,所述特效参数是指在步骤S2中,创建三维特效之后, 所配置好的特效参数,可以根据所述特效参数生成一个三维特效绘制参数包,以便在生成三维特效时使用,该步骤的目的是为三维特效生成算法提供绘制参数包,以便在其他人脸图像上加载所述三维特效。
步骤S4:获取从图像传感器中识别出的第一人脸图像。
该步骤中,获取从摄像头中识别出来的人脸图像,该人脸图像可以是从真实的人识别出来的人脸,也可以是使用摄像头拍摄包括人脸的图片或者视频所识别出的人脸,本公开不做限制,总之该人脸图像有别于标准人脸图像。
识别人脸图像,主要是在图像中检测出人脸,人脸检测是任意给定一个图像或者一组图像序列,采用一定策略对其进行搜索,以确定所有人脸的位置和区域的一个过程,从各种不同图像或图像序列中确定人脸是否存在,并确定人脸数量和空间分布的过程。通常人脸检测的方法可以分为4类:(1)基于知识的方法,它将典型的人脸形成规则库对人脸进行编码,通过面部特征之间的关系进行人脸定位;(2)特征不变方法,该方法的目的是在姿态、视角或光照条件改变的情况下找到稳定的特征,然后使用这些特征确定人脸;(3)模板匹配方法,存储几种标准的人脸模式,用来分别描述整个人脸和面部特征,然后计算输入图像和存储的模式间的相互关系并用于检测;(4)基于外观的方法,与模板匹配方法相反,从训练图像集中进行学习从而获得模型,并将这些模型用于检测。在此使用第(4)种方法中的一个实现方式来说明人脸检测的过程:首先需要提取特征完成建模,本实施例使用Haar特征作为判断人脸的关键特征,Haar特征是一种简单的矩形特征,提取速度快,一般Haar特征的计算所使用的特征模板采用简单的矩形组合由两个或多个全等的矩形组成,其中特征模板内有黑色和白色两种矩形;之后,使用AdaBoost算法从大量的Haar特征中找到起关键作用的一部分特征,并用这些特征产生有效的分类器,通过构建出的分类器可以对图像中的人脸进行检测。本实施例中图像中的人脸可以是一个或多个。
可以理解的是,由于每种人脸检测算法各有优点,适应范围也不同,因此可以设置多个不同的检测算法,针对不同的环境自动切换不同的算法,比如在背景环境比较简单的图像中,可以使用检出率较差但是速度较快的算法;在背景环境比较复杂的图像中,可以使用检出率较高但是速度较慢的算法;对于同一图像,也可以使用多种算法多次检测以提高检出率。
步骤S5:根据所述特效参数,在第一人脸图像上生成所述三维特效。
在该步骤中,根据步骤S4中生成的特效参数,在从摄像头中识别出的第一人脸图像上生成与标准人脸图像上相同的三维特效,且所述三维特效在第一人脸图像上的位置与其在标准人脸图像上的位置相同。具体的,所述特效参数中可以包括步骤S2中创建三维特效时所设置的参数,通过参数中的三维模型的位置来确定三维特效在第一人脸图像上的位置,根据其他参数确定三维特效的属性,通过将这些参数加载到三维特效生成算法中,在第一人脸图像上自动生成所述三维特效。
可以理解的是,当图像中识别出多个人脸图像时,用户可以选择需要进行三维特效创建的一个人脸图像,也可以选择多个人脸图像做相同或者不同的处理。举例来说,在创建三维特效时,可以对标准人脸进行编号,如ID1和ID2,分别对ID1和ID2标准人脸图像设置三维特效,所述三维特效可以相同也可以不同,当从摄像头中识别出多个人脸图像,根据识别出的顺序对所述多个人脸图像添加三维特效,比如先识别出1号人脸,则在1号人脸上添加ID1的标准人脸图像上的三维特效,之后识别出2号人脸,则在2号人脸上添加ID2的标准人脸图像上的三维特效;如果只创建了ID1的标准人脸图像三维特效,则可以在1号和2号人脸图像上均添加ID1的标准人脸图像上的三维特效,也可以只在1号人脸上添加三维特效;所述多个人脸图像可以通过不同的动作交换三维特效,比如2号人脸通过甩头的动作可以将ID2的标准人脸图像三维特效添加到1号人脸上,具体的动作在本公开中不做限制。
本公开实施例中,在标准人脸图像上创建三维特效,可以对三维特效进行配置,生成三维特效参数,之后根据三维特效参数生成实际人脸图像上的三维特效。已有技术中,三维特效需要通过第三方工具制作,使用时灵活性不足,不能实时对特效进行配置,且只能放置于已有的图像或者视频中,无法在实时采集的人脸图像上生成三维特效。而本实施例中,通过三维特效创建操作,用户可以方便的对三维特效进行配置和编辑,并且可以通过生成三维特效参数,使三维特效生成算法可以使用三维特效参数在实时采集的人脸图像上生成三维特效,因此相较于已有技术,大大降低了三维特效的编辑难度以及编辑时间,且三维特效可以同步于任何实时采集到的人脸图像上,从而提高了用户体验效果。
在一个可选的实施例中,标准人脸图像上包括人脸特征点,如图2所 示,步骤S1即显示标准人脸图像之后,包括:
步骤S11,接收参考点选择命令,选择至少一个特征点作为参考点。
所述步骤S5根据所述特效参数,在第一人脸图像上生成所述三维特效,具体包括S12:
根据所述特效参数以及所述参考点,在第一人脸图像上生成所述三维特效。
由于标准人脸图像上的三维特效到图像传感器所采集到第一人脸图像的三维特效需要有一个映射关系,根据映射的方式不同,跟踪的方式可以分为固定三维特效和跟踪三维特效,在一个实施例中使用固定三维特效,这种三维特效比较简单,只需要设置整个三维特效范围在图像传感器中的绝对位置即可,其实现方式可以是将显示装置与图像传感器的图像采集窗口的像素点一一对应,判断三维特效在显示装置中的位置,之后对图像传感器采集窗口采集到的图像的对应位置进行相应的三维特效处理,这种三维特效处理方式的优点是简单易操作,该实现方式所使用的参数都相对于采集窗口的位置;在另一个实施例中,生成三维特效图像时,先获取步骤S1中的标准人脸图像的特征点,通过所述特征点确定所述三维特效在标准人脸图像中的位置;从通过图像传感器所采集到的图像中识别与标准人脸图像对应的第一人脸图像;将在标准人脸图像中所确定的位置映射到第一人脸图像中;对第一人脸图像做三维特效处理,生成三维特效图像。该方式中,确定三维特效在第一人脸图像中的相对位置,无论第一人脸图像如何移动变化,所述三维特效总位于该相对位置上,达到跟踪三维特效的目的。在一个典型的应用中,所述标准人脸图像经过三角剖分,有106个特征点,利用三维特效和特征点的相对位置确定作用范围在人脸图像中的相对位置,对摄像头采集到的人脸图像做同样的三角剖分,之后当摄像头中的人脸发生移动或转动时,所述三维特效可以一直固定在人脸上的相对位置上,以达到追踪三维特效的效果。
所述参考点起到了确定相对位置的作用,标准人脸图像上的参考点在第一人脸图像上有与之对应的特征点,通过参考点和第一人脸图像上对应的特征点,生成映射关系,通过映射关系,可以将三维特效生成到第一人脸图像的对应位置上。
在该可选实施例中,三维特效只有满足一定的条件才会触发显示,所述 触发的条件可以是用户的动作、表情、声音或者终端的参数等等。所述动作可以是面部动作,比如眨眼、嘴巴大张、摇头、点头、眉毛挑动,比如三维特效为眼镜的3D贴纸,则可以设置触发条件为快速眨眼两次,当检测到用户快速的眨眼两次,在用户的眼睛上显示眼镜的3D贴纸;所述表情可以是高兴、沮丧、愤怒等等,比如三维特效为眼泪的3D贴纸,则可以设置触发条件为沮丧的表情,当检测到用户的表情为沮丧时,在用户的眼睛下方显示眼泪的3D贴纸;当触发条件为声音时,可以检测用户的语音或者环境音,当检测到预定声音时,触发对应的三维特效;当触发条件为终端参数时,可以监控终端中的各部件的参数,比如终端的姿态、晃动等等,通过姿态或者晃动触发对应的三维特效,在此不再一一列举,可以理解的是该触发条件可以是任何适用于本公开技术方案中的触发条件,触发条件可以是1个或多个,在此不做限制。所述触发可以是触发开始或者触发消失,触发开始为出现触发条件时,对应的三维特效出现,触发消失为出现触发条件时,对应的三维特效消失;所述触发条件还可以包括触发后的延迟时间,就是触发条件出现之后,延迟多长时间三维特效出现或者消失。
Although the steps of the above method embodiments have been described in the above order, those skilled in the art should understand that the steps in the embodiments of the present disclosure are not necessarily performed in that order; they may also be performed in reverse, in parallel, interleaved, or in other orders. Moreover, on the basis of the above steps, those skilled in the art may add other steps; these obvious variations or equivalent substitutions should also fall within the protection scope of the present disclosure and are not repeated here.
The following are apparatus embodiments of the present disclosure, which may be used to perform the steps implemented by the method embodiments of the present disclosure. For ease of description, only the parts relevant to the embodiments of the present disclosure are shown; for specific technical details not disclosed, reference is made to the method embodiments of the present disclosure.
An embodiment of the present disclosure provides a face-based three-dimensional special effect generation apparatus. The apparatus may perform the steps described in the above embodiments of the face-based three-dimensional special effect generation method. As shown in FIG. 3, the apparatus mainly includes a display module 31, a three-dimensional special effect creation module 32, a special effect parameter generation module 33, a face image acquisition module 34, and a three-dimensional special effect generation module 35. The display module 31 is configured to display a standard face image; the three-dimensional special effect creation module 32 is configured to create a three-dimensional special effect, the three-dimensional special effect being created at least by configuring a three-dimensional model and configuring a material of the three-dimensional model, and being located on the standard face image; the special effect parameter generation module 33 is configured to generate special effect parameters according to the three-dimensional special effect; the face image acquisition module 34 is configured to obtain a first face image recognized from an image sensor; and the three-dimensional special effect generation module 35 is configured to generate the three-dimensional special effect on the first face image according to the special effect parameters.
The above face-based three-dimensional special effect generation apparatus corresponds to the face-based three-dimensional special effect generation method in the embodiment shown in FIG. 1; for details, reference is made to the above description of that method, which is not repeated here.
As shown in FIG. 4, in an optional embodiment, the face-based three-dimensional special effect generation apparatus further includes:
a reference point selection module 36, configured to receive a reference point selection command and select at least one feature point as a reference point.
The three-dimensional special effect generation module 35 is configured to generate the three-dimensional special effect on the first face image according to the special effect parameters and the reference point.
The above face-based three-dimensional special effect generation apparatus corresponds to the face-based three-dimensional special effect generation method in the embodiment shown in FIG. 2; for details, reference is made to the above description of that method, which is not repeated here.
In the embodiments of the present disclosure, a three-dimensional special effect is created on a standard face image, the effect can be configured to generate three-dimensional special effect parameters, and the effect is then generated on an actual face image according to those parameters. In the prior art, three-dimensional special effects have to be produced with third-party tools, which is inflexible: the effect cannot be configured in real time and can only be placed in existing images or videos, so a three-dimensional special effect cannot be generated on a face image captured in real time. In this embodiment, by contrast, the three-dimensional special effect creation operation allows the user to conveniently configure and edit the effect, and by generating three-dimensional special effect parameters, the three-dimensional special effect generation algorithm can use those parameters to generate the effect on a face image captured in real time. Compared with the prior art, this greatly reduces the difficulty and time of editing three-dimensional special effects, and the effect can be synchronized onto any face image captured in real time, thereby improving the user experience.
FIG. 5 is a block diagram illustrating an electronic device according to an embodiment of the present disclosure. As shown in FIG. 5, the electronic device 50 according to an embodiment of the present disclosure includes a memory 51 and a processor 52.
The memory 51 is configured to store non-transitory computer-readable instructions. Specifically, the memory 51 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory. The non-volatile memory may include, for example, read-only memory (ROM), a hard disk, flash memory, and the like.
The processor 52 may be a central processing unit (CPU) or another form of processing unit with data processing capability and/or instruction execution capability, and may control other components in the electronic device 50 to perform desired functions. In one embodiment of the present disclosure, the processor 52 is configured to run the computer-readable instructions stored in the memory 51, so that the electronic device 50 performs all or some of the steps of the face-based special effect generation methods of the embodiments of the present disclosure described above.
Those skilled in the art should understand that, in order to solve the technical problem of how to obtain a good user experience, this embodiment may also include well-known structures such as a communication bus and interfaces, and these well-known structures should also fall within the protection scope of the present disclosure.
For a detailed description of this embodiment, reference may be made to the corresponding descriptions in the foregoing embodiments, which are not repeated here.
FIG. 6 is a schematic diagram illustrating a computer-readable storage medium according to an embodiment of the present disclosure. As shown in FIG. 6, the computer-readable storage medium 60 according to an embodiment of the present disclosure has non-transitory computer-readable instructions 61 stored thereon. When the non-transitory computer-readable instructions 61 are run by a processor, all or some of the steps of the face-based special effect generation methods of the embodiments of the present disclosure described above are performed.
The computer-readable storage medium 60 includes, but is not limited to, optical storage media (e.g. CD-ROM and DVD), magneto-optical storage media (e.g. MO), magnetic storage media (e.g. magnetic tape or removable hard disk), media with built-in rewritable non-volatile memory (e.g. memory cards), and media with built-in ROM (e.g. ROM cartridges).
For a detailed description of this embodiment, reference may be made to the corresponding descriptions in the foregoing embodiments, which are not repeated here.
FIG. 7 is a schematic diagram illustrating the hardware structure of a terminal device according to an embodiment of the present disclosure. As shown in FIG. 7, the face-based three-dimensional special effect generation terminal 70 includes the apparatus of the above face-based three-dimensional special effect generation apparatus embodiment.
The terminal device may be implemented in various forms. The terminal device in the present disclosure may include, but is not limited to, mobile terminal devices such as mobile phones, smartphones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), navigation devices, in-vehicle terminal devices, in-vehicle display terminals and in-vehicle electronic rear-view mirrors, as well as fixed terminal devices such as digital TVs and desktop computers.
As an equivalent alternative implementation, the terminal may also include other components. As shown in FIG. 7, the face-based three-dimensional special effect generation terminal 70 may include a power supply unit 71, a wireless communication unit 72, an A/V (audio/video) input unit 73, a user input unit 74, a sensing unit 75, an interface unit 76, a controller 77, an output unit 78, a storage unit 79, and so on. FIG. 7 shows a terminal with various components, but it should be understood that not all of the illustrated components are required; more or fewer components may alternatively be implemented.
The wireless communication unit 72 allows radio communication between the terminal 70 and a wireless communication system or network. The A/V input unit 73 is configured to receive audio or video signals. The user input unit 74 may generate key input data according to commands input by the user to control various operations of the terminal device. The sensing unit 75 detects the current state of the terminal 70, the position of the terminal 70, the presence or absence of a user's touch input to the terminal 70, the orientation of the terminal 70, the acceleration or deceleration of the terminal 70 and its direction, and the like, and generates commands or signals for controlling the operation of the terminal 70. The interface unit 76 serves as an interface through which at least one external device can connect to the terminal 70. The output unit 78 is configured to provide output signals in a visual, audio and/or tactile manner. The storage unit 79 may store software programs for the processing and control operations performed by the controller 77, or may temporarily store data that has been output or is to be output; the storage unit 79 may include at least one type of storage medium. Moreover, the terminal 70 may cooperate with a network storage device that performs the storage function of the storage unit 79 via a network connection. The controller 77 generally controls the overall operation of the terminal device. In addition, the controller 77 may include a multimedia module for reproducing or playing back multimedia data. The controller 77 may perform pattern recognition processing to recognize handwriting input or picture drawing input performed on the touch screen as characters or images. The power supply unit 71 receives external power or internal power under the control of the controller 77 and provides the appropriate power required to operate the elements and components.
The various implementations of the face-based three-dimensional special effect generation method proposed by the present disclosure may be implemented in a computer-readable medium using, for example, computer software, hardware, or any combination thereof. For a hardware implementation, the various implementations of the method proposed by the present disclosure may be realized using at least one of an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a digital signal processing device (DSPD), a programmable logic device (PLD), a field-programmable gate array (FPGA), a processor, a controller, a microcontroller, a microprocessor, or an electronic unit designed to perform the functions described herein; in some cases, such implementations may be realized in the controller 77. For a software implementation, the various implementations of the method proposed by the present disclosure may be realized with separate software modules that allow at least one function or operation to be performed. The software code may be implemented by a software application (or program) written in any suitable programming language, and may be stored in the storage unit 79 and executed by the controller 77.
For a detailed description of this embodiment, reference may be made to the corresponding descriptions in the foregoing embodiments, which are not repeated here.
The basic principles of the present disclosure have been described above in connection with specific embodiments. However, it should be pointed out that the advantages, benefits, effects and the like mentioned in the present disclosure are merely examples and not limitations, and it cannot be assumed that each embodiment of the present disclosure must possess them. In addition, the specific details disclosed above are provided only for the purposes of illustration and ease of understanding, not limitation; the above details do not restrict the present disclosure to being implemented with those specific details.
The block diagrams of the devices, apparatuses, equipment and systems involved in the present disclosure are merely illustrative examples and are not intended to require or imply that they must be connected, arranged or configured in the manner shown in the block diagrams. As those skilled in the art will recognize, these devices, apparatuses, equipment and systems may be connected, arranged or configured in any manner. Words such as "include", "comprise" and "have" are open-ended terms meaning "including but not limited to" and may be used interchangeably therewith. The words "or" and "and" as used herein mean "and/or" and may be used interchangeably therewith, unless the context clearly indicates otherwise. The word "such as" as used herein means the phrase "such as but not limited to" and may be used interchangeably therewith.
In addition, as used herein, "or" used in a list of items beginning with "at least one of" indicates a disjunctive list, so that, for example, a list of "at least one of A, B or C" means A or B or C, or AB or AC or BC, or ABC (i.e. A and B and C). Furthermore, the word "exemplary" does not mean that the described example is preferred or better than other examples.
It should also be pointed out that, in the systems and methods of the present disclosure, components or steps may be decomposed and/or recombined. Such decompositions and/or recombinations should be regarded as equivalent solutions of the present disclosure.
Various changes, substitutions and alterations to the techniques described herein may be made without departing from the teachings defined by the appended claims. Furthermore, the scope of the claims of the present disclosure is not limited to the specific aspects of the processes, machines, manufacture, compositions of matter, means, methods and actions described above. Processes, machines, manufacture, compositions of matter, means, methods or actions that currently exist or are later developed and that perform substantially the same function or achieve substantially the same result as the corresponding aspects described herein may be utilized. Accordingly, the appended claims include such processes, machines, manufacture, compositions of matter, means, methods or actions within their scope.
The above description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present disclosure. Various modifications to these aspects will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other aspects without departing from the scope of the present disclosure. Therefore, the present disclosure is not intended to be limited to the aspects shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The above description has been given for the purposes of illustration and description. Furthermore, this description is not intended to limit the embodiments of the present disclosure to the forms disclosed herein. Although a number of example aspects and embodiments have been discussed above, those skilled in the art will recognize certain variations, modifications, changes, additions and sub-combinations thereof.

Claims (10)

  1. A face-based three-dimensional special effect generation method, comprising:
    displaying a standard face image;
    creating a three-dimensional special effect, the three-dimensional special effect being created at least by configuring a three-dimensional model and configuring a material of the three-dimensional model, and being located on the standard face image;
    generating special effect parameters according to the three-dimensional special effect;
    obtaining a first face image recognized from an image sensor; and
    generating the three-dimensional special effect on the first face image according to the special effect parameters.
  2. The method according to claim 1, wherein configuring the three-dimensional model comprises:
    obtaining the three-dimensional model and displaying it on the standard face image; and
    configuring a position and a size of the three-dimensional model.
  3. The method according to claim 1, wherein configuring the material of the three-dimensional model comprises:
    creating the material and adjusting parameters of the material,
    the parameters of the material comprising one or more of a render blend mode parameter, a depth test enable parameter, a depth write enable parameter, and a culling enable parameter.
  4. The method according to claim 1, wherein creating the three-dimensional special effect further comprises:
    configuring a light, the light comprising one of a point light source, a directional light source and a spotlight source.
  5. The method according to claim 4, wherein configuring the light comprises:
    configuring a position, a direction, a color and an intensity of the light.
  6. The method according to claim 1, wherein creating the three-dimensional special effect further comprises:
    configuring a texture of the three-dimensional model, comprising:
    obtaining a map of the texture; and
    configuring a wrap mode of the texture.
  7. The method according to claim 1, wherein:
    the standard face image comprises facial feature points, and after displaying the standard face image, the method comprises:
    receiving a reference point selection command, and selecting at least one feature point as a reference point; and
    generating the three-dimensional special effect on the first face image according to the special effect parameters comprises:
    generating the three-dimensional special effect on the first face image according to the special effect parameters and the reference point.
  8. A face-based three-dimensional special effect generation apparatus, comprising:
    a display module, configured to display a standard face image;
    a three-dimensional special effect creation module, configured to create a three-dimensional special effect, the three-dimensional special effect being created at least by configuring a three-dimensional model and configuring a material of the three-dimensional model, and being located on the standard face image;
    a special effect parameter generation module, configured to generate special effect parameters according to the three-dimensional special effect;
    a face image acquisition module, configured to obtain a first face image recognized from an image sensor; and
    a three-dimensional special effect generation module, configured to generate the three-dimensional special effect on the first face image according to the special effect parameters.
  9. An electronic device, comprising:
    at least one processor; and
    a memory communicatively connected to the at least one processor, wherein
    the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the face-based three-dimensional special effect generation method according to any one of claims 1 to 7.
  10. A non-transitory computer-readable storage medium storing computer instructions, the computer instructions being configured to cause a computer to perform the face-based three-dimensional special effect generation method according to any one of claims 1 to 7.