WO2022057576A1 - Face image display method and apparatus, electronic device, and storage medium - Google Patents

Face image display method and apparatus, electronic device, and storage medium

Info

Publication number
WO2022057576A1
WO2022057576A1 (PCT/CN2021/114237)
Authority
WO
WIPO (PCT)
Prior art keywords
face
face mask
target
preset
displaying
Prior art date
Application number
PCT/CN2021/114237
Other languages
English (en)
French (fr)
Inventor
刘佳成
Original Assignee
北京字节跳动网络技术有限公司 (Beijing ByteDance Network Technology Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京字节跳动网络技术有限公司 (Beijing ByteDance Network Technology Co., Ltd.)
Priority: JP2023507827A (published as JP2023537721A)
Priority: EP21868404.1A (published as EP4177724A4)
Priority: KR1020237003737A (published as KR20230034351A)
Priority: BR112023001930A (published as BR112023001930A2)
Publication of WO2022057576A1
Priority: US18/060,128 (published as US11935176B2)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/005 General purpose rendering architectures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00 Animation
    • G06T 13/20 3D [Three Dimensional] animation
    • G06T 13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04845 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/25 Fusion techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04842 Selection of displayed objects or displayed text elements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06T 3/04
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2219/00 Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T 2219/20 Indexing scheme for editing of 3D models
    • G06T 2219/2016 Rotation, translation, scaling

Definitions

  • the present disclosure relates to the technical field of image processing, and in particular, to a method, device, electronic device and storage medium for displaying a face image.
  • although existing applications can process various types of face images and display the processed results, they cannot satisfy users' need for diverse interaction modes during the display process.
  • the present disclosure provides a face image display method and apparatus, an electronic device, and a storage medium, which are used to solve the technical problem that users' demand for diverse interaction modes during face image display cannot currently be met.
  • an embodiment of the present disclosure provides a method for displaying a face image, including:
  • displaying, at a preset relative position of the subject face, a face mask sequence dynamically according to a preset motion mode, the face mask sequence including a plurality of face masks corresponding to the subject face;
  • in response to a trigger instruction acting on a target face mask, fusing the target face mask to the subject face for display, where the target face mask is any mask in the face mask sequence.
  • an embodiment of the present disclosure provides a device for displaying a face image, including:
  • a display module configured to dynamically display a face mask sequence at a preset relative position of the subject face according to a preset motion mode, where the face mask sequence includes a plurality of face masks corresponding to the subject face;
  • the acquisition module is used to acquire the trigger instruction acting on the target face mask
  • a processing module further configured to fuse the target face mask to the target face, wherein the target face mask is any mask in the face mask sequence;
  • the display module is further configured to display the target face after fusion with the target face mask.
  • embodiments of the present disclosure provide an electronic device, including: at least one processor and a memory;
  • the memory stores computer-executable instructions
  • the at least one processor executes the computer-executable instructions stored in the memory, so that the at least one processor executes the method for displaying a face image as described in the first aspect and various possible designs of the first aspect above.
  • embodiments of the present disclosure provide a computer-readable storage medium in which computer-executable instructions are stored; when a processor executes the computer-executable instructions, the face image display method described in the first aspect and the various possible designs of the first aspect is implemented.
  • embodiments of the present disclosure provide a computer program product, including a computer program that, when executed by a processor, implements the face image display method described in the first aspect and the various possible designs of the first aspect.
  • an embodiment of the present disclosure provides a computer program that, when executed by a processor, implements the method for displaying a face image as described in the first aspect and various possible designs of the first aspect.
  • the face image display method, apparatus, electronic device, and storage medium provided by the embodiments of the present disclosure dynamically display a face mask sequence according to a preset motion mode at a preset relative position of the subject face; after the user triggers a target face mask in the face mask sequence, the target face mask is fused to the subject face for display. This enhances the interactivity of the face image display process, and fusing the target face mask to the subject face upon the user's trigger realizes the effect of displaying a specific face mask on the subject face.
  • FIG. 1 is a schematic flowchart of a method for displaying a face image according to an exemplary embodiment of the present disclosure
  • FIG. 2 is a schematic diagram of generating a single face mask according to an exemplary embodiment of the present disclosure
  • FIG. 3 is a schematic diagram of generating a face mask sequence according to an exemplary embodiment of the present disclosure
  • FIG. 4 is a schematic diagram of a triggering process of a method for displaying a face image according to an exemplary embodiment of the present disclosure
  • FIG. 5 is a schematic flowchart of a post-trigger fusion display step according to an exemplary embodiment of the present disclosure
  • FIG. 6 is a schematic flowchart of a method for displaying a face image according to another exemplary embodiment of the present disclosure
  • FIG. 7 is a schematic flowchart of a rendering step according to an exemplary embodiment of the present disclosure.
  • FIG. 8 is a schematic diagram showing the result after rendering of the rendering step shown in FIG. 7;
  • FIG. 9 is a schematic structural diagram of a face image display device according to an exemplary embodiment of the present disclosure.
  • FIG. 10 is a schematic structural diagram of an electronic device according to an exemplary embodiment of the present disclosure.
  • the term "including" and variations thereof are open-ended inclusions, i.e., "including but not limited to".
  • the term “based on” is “based at least in part on.”
  • the term “one embodiment” means “at least one embodiment”; the term “another embodiment” means “at least one additional embodiment”; the term “some embodiments” means “at least some embodiments”. Relevant definitions of other terms will be given in the description below.
  • facial image processing can be performed on the face image.
  • many application scenarios have also been derived from this, but they all operate directly on the processed face image and cannot satisfy users' need for diverse interaction methods during the display of the face image.
  • a face mask sequence is first generated according to the subject's face.
  • the face mask sequence includes multiple face masks.
  • a face mask in the face mask sequence may be a face mask directly corresponding to the subject face, or a face mask generated after related processing (for example, deformation processing or beautification processing) is performed on the subject face.
  • the face mask sequence may include multiple masks corresponding to different face shapes.
  • the face masks in the face mask sequence may be arranged according to a specific distribution rule.
  • the face masks in the face mask sequence may be arranged along a preset circumferential direction, where the preset circumferential direction may be centered on the subject face with the top-of-head direction of the subject face as the central axis, so that the face mask sequence surrounds the subject face.
  • the face mask sequence can also be dynamically displayed according to a preset motion mode, for example, the face mask sequence can be rotated around the subject's face.
  • the rotation speed of the face mask sequence rotating around the object face may be determined according to the user's physical characteristics, for example, may be determined according to the user's mouth opening, the user's smile and the user's related gestures.
  • it can be illustrated by taking the opening degree of the user's mouth as an example.
  • the rotation speed of the face mask sequence around the subject face can increase with the degree of the user's mouth opening: the wider the mouth opens, the faster the face mask sequence rotates around the subject face. The user can therefore speed up or slow down the rotation of the face mask sequence by adjusting the degree of mouth opening.
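The mouth-openness control can be sketched as a simple mapping. The linear form and the base/gain constants below are illustrative assumptions; the disclosure only requires that a wider mouth produce faster rotation.

```python
def rotation_speed(mouth_openness, base_speed=30.0, gain=90.0):
    """Map mouth openness in [0, 1] to a rotation speed in degrees/second.

    base_speed and gain are hypothetical tuning constants, not values
    from the disclosure.
    """
    openness = max(0.0, min(1.0, mouth_openness))  # clamp sensor noise
    return base_speed + gain * openness
```

A monotonically increasing mapping like this guarantees the described behavior: the wider the mouth, the faster the sequence spins.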
  • when the user triggers any mask in the face mask sequence, for example by tapping the screen to select a mask, that mask becomes the target face mask. After the trigger, the rotation speed of the face mask sequence around the subject face first decreases until the target face mask moves to face the subject face; at that point the rotation of the face mask sequence can be stopped, and the target face mask is fused to the subject face for display. Further, the other face masks in the face mask sequence may gradually disappear according to a preset transparency change rule.
  • in this way, the face mask sequence is dynamically displayed according to the preset motion mode at the preset relative position of the subject face, and after the user triggers the target face mask in the face mask sequence, the target face mask is fused to the subject face for display. Before the trigger, the display mode of the face mask sequence and the subject face produce an interactive effect; after the trigger, fusing the target face mask to the subject face realizes the effect of displaying a specific face mask on the subject face.
  • FIG. 1 is a schematic flowchart of a method for displaying a face image according to an exemplary embodiment of the present disclosure. As shown in FIG. 1 , the method for displaying a face image provided by this embodiment includes:
  • Step 101 Dynamically display a face mask sequence at a preset relative position of the subject's face according to a preset motion mode.
  • the face mask sequence may be dynamically displayed at a preset relative position of the subject face according to a preset motion mode, where the face mask sequence may include multiple face masks corresponding to the subject face.
  • a face mask in the face mask sequence may be generated from the subject face without processing, or generated after related processing (for example, deformation processing or beautification processing) is performed on the subject face.
  • a related 3D face processing tool may be used for the above-mentioned generation of a face mask according to the object face.
  • the face mask sequence may include multiple masks corresponding to different face shapes.
  • the face masks in the face mask sequence may be arranged according to a specific distribution rule.
  • the face masks in the face mask sequence may be arranged along a preset circumferential direction, or arranged sequentially in a preset direction.
  • for example, when arranged along the preset circumferential direction, the arrangement can be centered on the subject face, with the top-of-head direction of the subject face as the central axis.
  • the face mask sequence can also be dynamically displayed according to a preset motion mode, for example, the face mask sequence can be rotated around the subject's face.
  • the face mask sequence can also slide past the front of the subject face in the order in which the face masks are arranged.
  • the specific arrangement of the face mask sequence and the relative motion mode between the face mask sequence and the subject face are not specifically limited in this embodiment, and the specific form can be adapted to specific scene requirements; this embodiment only intends to illustrate that there is a specific relative position relationship and relative motion relationship between the subject face and the face mask sequence.
  • Step 102 In response to the triggering instruction acting on the target face mask, the target face mask is fused to the target face for display.
  • when the user triggers any mask in the face mask sequence, for example by tapping the screen to select a face mask, that mask becomes the target face mask. After the trigger, the rotation speed of the face mask sequence around the subject face first decreases until the target face mask moves to face the subject face; at that point the rotation of the face mask sequence can be stopped, and the target face mask is fused to the subject face for display. Further, the other face masks in the face mask sequence may gradually disappear according to a preset transparency change rule.
  • the face mask sequence is dynamically displayed according to the preset motion mode at the preset relative position of the target face, and after the user triggers the target face mask in the face mask sequence, the target face The mask is fused to the target face for display, thereby enhancing the interactivity in the process of displaying the face image.
  • after the user triggers the target face mask, the target face mask can be fused to the subject face for display, realizing the effect of displaying a specific face mask on the subject face and enhancing the fun and experience of user interaction.
  • the face masks included in the face mask sequence may include the original face mask corresponding to the subject face, and may also include deformed face masks generated by processing the original face mask with a 3D face processing tool.
  • for example, the face mask sequence may include 8 face masks: 2 original face masks and 6 deformed face masks.
  • 8 face mask entities can be created first; then a pre-designed 3D model with deformation effects is imported, and a deformer is added to each face mask entity to adjust the degree of the corresponding deformation. Since the deformation form of each face mask is different, each deformation form needs to be imported separately; multiple deformers can also be applied to the same model.
  • FIG. 2 is a schematic diagram of generating a single face mask according to an exemplary embodiment of the present disclosure. As shown in FIG. 2, the original positions of the vertices before deformation of the face mask in the current model space and their position offsets after deformation can be obtained. Then, since the face mask must always be displayed facing outward relative to the subject face, the displacement operation needs to be performed first, followed by the rotation operation.
  • the original position of each vertex plus its deformation offset may be further offset by a coordinate offset on the Z axis, so as to displace the mask from the original position of the subject face.
  • the generated face mask is also scaled, for example to 90% of its original size, so that each face mask in the face mask sequence is a scaled face mask corresponding to the subject face. In this way, a face mask scaled to a certain ratio can be displayed at a position offset from the subject face by the preset distance; the specific effect can be seen in FIG. 2.
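The vertex placement just described can be sketched as follows. The 90% scale and the Z-axis offset follow the description, but the concrete offset value and the order of scaling relative to the offset are assumptions.

```python
def place_mask_vertex(orig, deform_offset, z_offset=1.0, scale=0.9):
    """Place one mask vertex: original position plus deformation offset,
    pushed away from the face along Z, then uniformly scaled to 90%.

    z_offset is a hypothetical preset distance; only the 0.9 scale is
    stated in the description.
    """
    x = (orig[0] + deform_offset[0]) * scale
    y = (orig[1] + deform_offset[1]) * scale
    z = (orig[2] + deform_offset[2] + z_offset) * scale
    return (x, y, z)
```

Applying this to every vertex yields a mask that floats a preset distance in front of the face at 90% of its original size.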
  • the subject face can be used as the center, with the top-of-head direction of the subject face as the central axis (for example, the Y axis), and the mask can be rotated by the standard rotation matrix about that axis:
  • R_y(θ) = [[cos θ, 0, sin θ], [0, 1, 0], [−sin θ, 0, cos θ]]
  • in this way, a single face mask, scaled to 90% of its original size, moves in a circle of specified radius centered on the subject face, with the top-of-head direction as the central axis and the front of the face mask always facing outward.
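The circular motion about the top-of-head axis reduces to a standard Y-axis rotation applied to each displaced vertex; a minimal sketch:

```python
import math

def rotate_about_y(point, angle_deg):
    """Rotate a 3D point about the Y axis (the subject face's top-of-head
    direction), so a displaced mask orbits the face while facing outward."""
    a = math.radians(angle_deg)
    x, y, z = point
    return (x * math.cos(a) + z * math.sin(a),
            y,
            -x * math.sin(a) + z * math.cos(a))
```

Rotating the offset vertex by the mask's current angle each frame produces the orbiting effect described above.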
  • the face mask sequence can be dynamically displayed according to the preset motion mode at the preset relative position of the object face.
  • the case where the face mask sequence includes 8 face masks can be illustrated as an example:
  • FIG. 3 is a schematic diagram of generating a face mask sequence according to an exemplary embodiment of the present disclosure.
  • each face mask can be assigned the same offset displacement and scaling ratio.
  • the rotation angles are defined in turn according to the initialization order, so that the 8 initialized face masks are placed at intervals of 45 degrees; the face mask sequence formed by the 8 face masks thus forms a complete circle, with the face masks arranged along the preset circumferential direction.
  • 8 face masks can also be assigned different deformation models and configured with algorithms for controlling the selection of face masks, so that the movement of these face masks can be controlled uniformly in a system-level script.
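The initialization of the ring can be sketched as assigning each mask an initial angle at equal intervals (45 degrees for 8 masks):

```python
def init_mask_angles(num_masks=8):
    """Initial rotation angles placing num_masks face masks at equal
    intervals around the subject face, forming a complete circle."""
    step = 360.0 / num_masks
    return [i * step for i in range(num_masks)]
```

With 8 masks this yields the 45-degree spacing described above; a system-level script can then add the same per-frame rotation to every angle to spin the whole ring.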
  • FIG. 4 is a schematic diagram of a triggering process of a method for displaying a face image according to an exemplary embodiment of the present disclosure. As shown in Figure 4, the user can select a mask in the face mask sequence as the target face mask by clicking on the screen.
  • after the trigger, the rotation speed of the face mask sequence around the subject face first decreases until the target face mask moves to face the subject face; at that point the rotation of the face mask sequence can be stopped, and the target face mask is fused to the subject face for display. Further, the other face masks in the face mask sequence may gradually disappear according to a preset transparency change rule.
  • FIG. 5 is a schematic flowchart of a post-trigger fusion display step according to an exemplary embodiment of the present disclosure.
  • step 102 in the above embodiment may specifically include:
  • Step 1020 Obtain a trigger instruction acting on the target face mask.
  • the user can select a mask in the face mask sequence as the target face mask by clicking on the screen.
  • Step 1021 Determine whether the rotation speed is less than the preset target speed. If the judgment result is yes, go to step 1022; if the judgment result is no, go to step 1023.
  • Step 1022 Set the current rotation angle.
  • Step 1023 Reduce the rotation speed.
  • Step 1024 Determine whether the rotation speed is less than the preset target speed. If the judgment result is yes, go to step 1025; if the judgment result is no, go to step 1023.
  • Step 1025 Calculate the target rotation angle.
  • after the target face mask in the face mask sequence is triggered, it is necessary to determine whether the rotation speed is less than the preset target speed, where the preset target speed may be a rotation speed threshold. If the judgment result is yes, the current rotation speed of the face masks is slow, and the target face mask can be controlled to move to the target position, for example the position directly in front of the subject face, by directly calculating the target rotation angle, after which the subsequent fusion display operations are performed. However, if the current rotation speed is greater than the preset target speed and the target face mask is moved directly to the target position by calculating the target rotation angle, the motion of the face mask sequence changes too abruptly, resulting in a poor interactive experience.
  • the rotation speed needs to be reduced first, and after the rotation speed is reduced to less than the preset target speed, the subsequent calculation of the target rotation angle is performed to control the movement of the target face mask to the target position.
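The deceleration step can be sketched as a per-frame update that shrinks the speed until its magnitude falls below the preset target speed. The multiplicative decay factor is an illustrative assumption; the disclosure only requires a gradual reduction.

```python
def decelerated_speed(speed, target_speed, decay=0.85):
    """One frame of post-trigger deceleration.

    Returns the speed unchanged once it is at or below the threshold,
    otherwise shrinks it by a hypothetical decay factor.
    """
    if abs(speed) <= target_speed:
        return speed
    return speed * decay
```

Iterating this each frame brings the ring smoothly below the threshold before the snap-to-target-angle calculation runs.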
  • the target rotation angle is the rotation angle when the movement of the face mask stops.
  • the case where the face mask sequence includes 8 face masks can be continued as an example. Since one face mask must face the user when the motion stops, with a rotation angle of 0 for that mask, the target rotation angle must be an integer multiple of 45 degrees.
  • the target rotation angle of each face mask in the face mask sequence can be calculated by the following formula:
  • θ_target = 45 × (floor(θ_current / 45) + 1)
  • where θ_target is the target rotation angle and θ_current is the current rotation angle, that is, the angle currently corresponding to each face mask;
  • floor denotes the largest integer not exceeding the expression in the brackets, and adding 1 inside the brackets ensures that the direction from the current angle to the target angle is consistent with the direction of the rotation speed.
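The floor-based snap can be written directly as code; the negative-speed branch is an assumption extending the stated rule (next multiple of 45 in the direction of rotation):

```python
import math

def target_rotation_angle(current_angle, rotation_speed, step=45):
    """Next multiple of `step` degrees in the direction of the current
    rotation speed, at which the triggered mask stops facing the user."""
    if rotation_speed >= 0:
        return step * (math.floor(current_angle / step) + 1)
    # assumed mirror rule for rotation in the opposite direction
    return step * (math.ceil(current_angle / step) - 1)
```

For example, a mask at 50 degrees rotating forward stops at 90 degrees, while the same mask rotating backward stops at 45 degrees.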
  • Step 1026 Record the serial number of the face mask rotated to the target position.
  • each face mask in the face mask sequence can be assigned a serial number that uniquely identifies it: the first face mask is assigned the number 1, the second face mask the number 2, the third face mask the number 3, and so on, until the eighth face mask is assigned the number 8.
  • the number of the face mask rotated to the target position is recorded at each moment. For example, at the first moment the first face mask, corresponding to number 1, is rotated to the target position; at the second moment the second face mask, corresponding to number 2, is rotated to the target position.
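The bookkeeping above can be sketched as deriving, from the whole ring's rotation, which serial number is currently at the front position. The assumption that mask 1 starts at the front and that positive rotation advances the numbering is illustrative.

```python
def mask_number_at_front(sequence_rotation, num_masks=8):
    """Serial number (1-based) of the mask at the target (front) position,
    given the ring's rotation in degrees. Assumes mask 1 starts at the
    front and masks are spaced 360/num_masks degrees apart."""
    step = 360.0 / num_masks
    index = round(sequence_rotation / step) % num_masks
    return index + 1
```

Comparing this number against the triggered mask's recorded number tells the system when to stop the rotation.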
  • Step 1027 Determine whether the target face mask is rotated to the target position. If the judgment result is yes, go to step 1025; if the judgment result is no, go to step 1028 and step 1029.
  • when the target face mask faces the subject face, that is, when the rotation angle of the target face mask is 0, the target face mask has reached the target position, and at this point the rotation of the face mask sequence can be stopped.
  • the number corresponding to the target face mask may be recorded, and during the rotation process of the face mask sequence, if the number recorded at the current moment is the number corresponding to the target face mask number, it means that the target face mask is facing the target face, at this time, the rotation movement of the face mask sequence can be stopped.
  • for example, if the target face mask triggered by the user is the third face mask, the number 3 is recorded. During the rotation of the face mask sequence, when the face mask corresponding to the recorded number 3 rotates to the target position, the target face mask is facing the subject face, and the rotation of the face mask sequence is stopped.
  • Step 1028 Display the fused face image on the face of the object.
  • the target face mask may be fitted to the subject face along a preset path, and the target face mask and the subject face are fused to generate a fused face image, where the preset path points from a first position to a second position: the first position is the current position of the target face mask in the face mask sequence, and the second position is the current position of the subject face.
  • during this process, the Z-axis offset of the face masks may be reduced, so that all face masks move toward the subject face and the radius of the circle decreases.
  • the fused face image is displayed on the subject's face.
  • Step 1029 Gradually fade out the other face masks.
  • the alpha channel of the face mask not facing the subject's face may be gradually reduced, wherein the alpha channel is used to adjust the transparency, so that the face mask not facing the subject's face will gradually become transparent.
  • the implementation can refer to the following formula:
  • alpha = max(alpha′ − λ_t · Δt, 0)
  • where alpha is the transparency of the face mask in the current frame,
  • alpha′ is the transparency of the face mask in the previous frame,
  • λ_t is the time coefficient, and
  • Δt is the time difference between the current frame and the previous frame.
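The transparency update can be written directly as code, mirroring the clamped linear fade max(alpha′ − λ_t · Δt, 0):

```python
def faded_alpha(prev_alpha, time_coeff, dt):
    """One frame of the fade-out rule for non-selected masks:
    alpha = max(alpha' - time_coeff * dt, 0), clamped so the mask
    becomes fully transparent and stays there."""
    return max(prev_alpha - time_coeff * dt, 0.0)
```

A larger time coefficient makes the non-selected masks vanish faster; the clamp at 0 prevents negative transparency.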
  • FIG. 6 is a schematic flowchart of a method for displaying a face image according to another exemplary embodiment of the present disclosure. As shown in FIG. 6 , the method for displaying a face image provided by this embodiment includes:
  • Step 201 Determine a rendering area and a non-rendering area according to the current position parameters of each texel on each face mask of the face mask sequence and the position parameters of the target face, and only render the rendering area.
  • each face mask needs to be rendered before the face mask sequence is displayed. If each face mask is rendered indiscriminately and in all directions, it will easily lead to excessive rendering calculations, which in turn takes up too many computing resources. Therefore, the rendering area and the non-rendering area can be determined according to the current position parameters of each texel on each face mask of the face mask sequence and the position parameters of the target face, and only the rendering area is rendered.
  • FIG. 7 is a schematic flowchart of a rendering step according to an exemplary embodiment of the present disclosure. As shown in FIG. 7 , the above rendering steps in this embodiment may include:
  • Step 2012 If it is a frontal area, determine whether the current position of the texel is within the position range corresponding to the target face, and is located behind the feature key point of the target face in the third direction. If the judgment result is yes, execute step 2015; if the judgment result is no, execute step 2014.
  • the third direction may be a direction in which there is a visual space occlusion between the subject's face and the face mask.
  • Step 2013 If it is the back area, determine whether the current position of the texel is within the position range corresponding to the target face, and is located behind the feature key point of the target face in the third direction. If the judgment result is yes, go to step 2015; if the judgement result is no, go to step 2016.
  • rendering of the front area and the back area of the face mask can be completed by corresponding processing passes: the first pass is used only to render the front area, and the second pass is used only to render the back area.
  • the two passes can share a texel shader for rendering; the texel shader samples the current subject face to return a real-time portrait mask, thereby realizing the rendering of the face mask.
  • based on whether the current position of a texel is within the position range corresponding to the subject face, and on its position relative to a feature key point of the subject face (for example, the sideburn position) in the third direction (for example, the Z-axis direction), it is determined whether the texel needs to be rendered.
  • when the current position of a texel is within the position range corresponding to the subject face, the texel may be in front of or behind the subject face. If the current position of the texel is located in front of the sideburn position of the subject face in the Z-axis direction, the texel is in front of the subject face and visible to the user, and therefore needs to be rendered.
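The visibility test can be sketched as a simple occlusion check. The axis-aligned face bounds and a +Z axis pointing toward the viewer are assumptions for illustration; the disclosure only specifies "inside the face region and behind the key point".

```python
def is_texel_occluded(texel_pos, face_bounds, key_point_z):
    """A mask texel is skipped (left transparent) when it lies inside the
    subject face's region AND behind the feature key point (e.g. the
    sideburn position) along the Z axis."""
    x, y, z = texel_pos
    x_min, x_max, y_min, y_max = face_bounds
    inside = x_min <= x <= x_max and y_min <= y <= y_max
    return inside and z < key_point_z  # behind the key-point plane
```

Texels outside the face region, or inside it but in front of the key-point plane, pass the check and are rendered.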
  • Step 2014: Render the front area according to the subject's face.
  • Here, rendering is performed according to the subject's face so that the face mask displays the specific appearance of the corresponding subject face.
  • Step 2015: Do not render the texels of the face mask located in the non-rendering area.
  • Texels of the face mask located in the non-rendering area are not rendered, whether they belong to the front area or the back area; in effect, the texels in this area are set to be transparent.
  • Step 2016: Render the back area with a preset fixed texture.
  • Texels that are in the rendering area but belong to the back area can be rendered with a preset fixed texture, for example, in gray.
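The decision logic of steps 2012 to 2016 above can be sketched as a single per-texel function. This is an illustrative Python sketch, not the patented implementation: the names `texel_z`, `face_z_range`, and `sideburn_z` are assumptions standing in for the texel's third-direction (Z-axis) coordinate, the depth range occupied by the subject face, and the depth of the feature key point (e.g., the sideburns).

```python
def classify_texel(texel_z, face_z_range, sideburn_z, is_front_area):
    """Decide how one face-mask texel is handled (steps 2012-2016).

    Hypothetical sketch. Larger z is assumed to mean farther behind
    the camera-facing side of the subject face.
    """
    lo, hi = face_z_range
    # Steps 2012/2013: within the face's position range AND behind the
    # feature key point in the third direction -> non-rendering area.
    behind_keypoint = lo <= texel_z <= hi and texel_z > sideburn_z
    if behind_keypoint:
        return "transparent"          # step 2015: do not render
    if is_front_area:
        return "render_subject_face"  # step 2014: front area
    return "render_fixed_texture"     # step 2016: back area, e.g. gray
```

In a real implementation this logic would run inside the texel shader shared by the two rendering passes described above.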
  • FIG. 8 is a schematic diagram of a post-rendering display result of the rendering step shown in FIG. 7 .
  • Area A is located in the rendering area and belongs to the front area, so it is rendered according to the subject's face to display the specific appearance of the subject face corresponding to the face mask.
  • Area B is located in the rendering area but belongs to the back area, so it is rendered, for example, in gray.
  • Step 202: At a preset relative position of the subject's face, dynamically display a face mask sequence according to a preset motion mode.
  • For the specific implementation of step 202 in this embodiment, reference may be made to the description of step 101 in the embodiment shown in FIG. 1 .
  • When the face mask sequence is dynamically displayed according to the preset motion mode, the display can be controlled by the user's physical characteristics, for example, the user's mouth opening degree, the user's smile level, or gestures related to the user.
  • Take the opening degree of the user's mouth as an example.
  • The speed at which the face mask sequence rotates around the subject's face can increase with the opening degree of the user's mouth; that is, the wider the user's mouth opens, the faster the face mask sequence rotates around the subject's face. Thus the user can speed up or slow down the rotation of the face mask sequence around the subject's face by adjusting the degree of mouth opening.
  • the characteristic parameters of the target part on the face of the subject may be obtained first, then the rotation speed is determined according to the characteristic parameters, and the face mask sequence is dynamically displayed according to the rotation speed.
  • Specifically, the mouth feature parameters and eye feature parameters of the subject's face can be obtained, wherein the mouth feature parameters include the coordinates of the upper lip key point and the coordinates of the lower lip key point, and the eye feature parameters include the coordinates of the left eye key point and the coordinates of the right eye key point.
  • The first coordinate difference in the first direction (e.g., the Y axis) is determined according to the coordinates of the upper lip key point and the lower lip key point, and the second coordinate difference in the second direction (e.g., the X axis) is determined according to the coordinates of the left eye key point and the right eye key point.
  • The characteristic parameter is determined according to the ratio of the first coordinate difference to the second coordinate difference, and can be used to characterize the degree of mouth opening. It should be noted that determining the mouth opening degree from this ratio avoids fluctuations in the measured opening degree caused by changes in the distance between the subject's face and the camera.
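As a rough illustration of the characteristic parameter described above, the following Python sketch computes the ratio of the lip gap (first direction, Y axis) to the eye span (second direction, X axis). The key-point inputs are hypothetical (x, y) coordinate pairs; they stand in for whatever face-key-point detector the implementation uses.

```python
def mouth_opening_degree(upper_lip, lower_lip, left_eye, right_eye):
    """Characteristic parameter D for the mouth opening degree.

    Dividing the lip gap by the eye span normalizes out overall face
    scale, so D stays stable as the face moves toward or away from
    the camera. Each argument is an (x, y) key-point coordinate.
    """
    first_diff = abs(upper_lip[1] - lower_lip[1])   # lip gap along Y
    second_diff = abs(left_eye[0] - right_eye[0])   # eye span along X
    return first_diff / second_diff
```

For example, if the face doubles in apparent size, both differences double and the ratio is unchanged.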
  • If the characteristic parameter is not greater than the preset first threshold, the rotation speed is the first preset speed. If the characteristic parameter is greater than the preset first threshold, the rotation speed is the sum of the first preset speed and an additional speed, wherein the additional speed is proportional to the characteristic parameter difference, i.e., the difference between the characteristic parameter and the preset first threshold. When the sum of the first preset speed and the additional speed is greater than or equal to a second preset speed, the rotation speed is determined as the second preset speed.
  • the rotation speed of the face mask in the current frame can be calculated according to the degree of mouth opening.
  • the specific calculation formula is as follows:
  • ω = min(ω_min + max(D − d, 0) × ω_Δ, ω_max)
  • where ω is the rotation speed, ω_min is the minimum rotation speed, D is the mouth opening degree, d is the mouth opening detection threshold, ω_Δ is the speed coefficient, and ω_max is the maximum rotation speed.
  • The mouth opening detection threshold is the value that the mouth opening degree must exceed for the mouth to be considered open.
  • the speed coefficient refers to a constant that needs to be multiplied when converting the mouth opening degree parameter into the rotation speed.
  • the above-mentioned preset first threshold is the mouth opening detection threshold d;
  • the first preset speed is the minimum rotation speed ω_min;
  • the additional speed is (D − d) × ω_Δ;
  • the second preset speed is the maximum rotation speed ω_max.
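The rotation speed calculation just described can be rendered directly in code. The Python sketch below is an illustrative transcription of ω = min(ω_min + max(D − d, 0) × ω_Δ, ω_max); the parameter names are chosen for readability and are not from the source.

```python
def rotation_speed(D, d, omega_min, omega_max, omega_delta):
    """Rotation speed of the face mask sequence for the current frame.

    D           : mouth opening degree (characteristic parameter)
    d           : mouth opening detection threshold
    omega_min   : minimum rotation speed (first preset speed)
    omega_max   : maximum rotation speed (second preset speed)
    omega_delta : speed coefficient converting opening degree to speed
    """
    # Below the threshold, max(D - d, 0) is zero and the speed stays
    # at omega_min; above it, the additional speed (D - d) * omega_delta
    # is added, capped at omega_max.
    return min(omega_min + max(D - d, 0.0) * omega_delta, omega_max)
```

So the speed sits at the minimum while the mouth is closed, rises linearly once the opening degree passes the detection threshold, and saturates at the maximum.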
  • Optionally, the rotation angle of each face mask in the current frame can also be set based on the rotation speed determined above.
  • Step 203: In response to a trigger instruction acting on the target face mask, the target face mask is fused to the target face for display.
  • step 203 in this embodiment, reference may be made to the specific description of step 102 in the embodiment shown in FIG. 1 , which will not be repeated here.
  • FIG. 9 is a schematic structural diagram of a face image display device according to an exemplary embodiment of the present disclosure.
  • a human face image display device 300 provided in this embodiment includes:
  • a display module 301 is used to dynamically display a sequence of face masks at a preset relative position of the object's face according to a preset motion mode, and the sequence of face masks includes a plurality of face masks corresponding to the subject's faces;
  • an obtaining module 302 configured to obtain a trigger instruction acting on the target face mask
  • the processing module 303 is further configured to fuse the target face mask to the target face, wherein the target face mask is any mask in the face mask sequence;
  • the display module 301 is further configured to display the target face after fusion with the target face mask.
  • the face mask sequence includes at least one deformed face mask corresponding to the subject face.
  • the display module 301 is specifically used for:
  • the face masks in the face mask sequence are arranged along a preset circumferential direction.
  • the face masks in the face mask sequence are scaled face masks corresponding to the subject face.
  • the display module 301 is specifically used for:
  • the rotation speed of the rotation motion is determined according to the characteristic parameter, and the face mask sequence is dynamically displayed according to the rotation speed.
  • the obtaining module 302 is further configured to obtain mouth feature parameters and eye feature parameters of the subject's face, where the mouth feature parameters include upper lip key point coordinates and lower lip key point coordinates, and the eye feature parameters include left eye key point coordinates and right eye key point coordinates;
  • the processing module 303 is further configured to determine the first coordinate difference in the first direction according to the upper lip key point coordinates and the lower lip key point coordinates, and to determine the second coordinate difference in the second direction according to the left eye key point coordinates and the right eye key point coordinates;
  • the processing module 303 is further configured to determine the characteristic parameter according to the ratio of the first coordinate difference and the second coordinate difference.
  • the processing module 303 is specifically configured to:
  • the rotation speed is the first preset speed
  • the rotation speed is the sum of the first preset speed and an additional speed, wherein the additional speed is proportional to the characteristic parameter difference, and the characteristic parameter difference is the difference between the characteristic parameter and the preset first threshold;
  • the rotation speed is determined as the second preset speed.
  • the processing module 303 is further configured to determine that the target face mask is rotated to a target position, and the target position and the target face conform to a preset positional relationship.
  • the target position is a position directly in front of the subject's face.
  • the rotational speed of the rotational motion is reduced to a preset target speed.
  • the processing module 303 is further configured to fit the target face mask to the subject face according to a preset path, and to fuse the target face mask with the subject face to generate a fused face image, wherein the preset path points from a first position to a second position, the first position being the current position of the target face mask in the face mask sequence and the second position being the current position of the subject's face;
  • the display module 301 is further configured to display the fused face image on the subject's face.
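The description above fixes only the endpoints of the preset path (the mask's current position in the sequence, and the subject face's current position), not its shape. As an assumption for illustration, the Python sketch below uses straight-line interpolation between the two positions; any other curve with the same endpoints would also satisfy the description.

```python
def path_position(first_pos, second_pos, t):
    """Point on a preset path from first_pos to second_pos.

    first_pos  : current position of the target face mask in the sequence
    second_pos : current position of the subject face
    t          : progress along the path, from 0.0 (start) to 1.0 (end)

    Straight-line interpolation is an assumption made here for
    illustration; positions are coordinate tuples of equal length.
    """
    return tuple(a + (b - a) * t for a, b in zip(first_pos, second_pos))
```

Animating t from 0.0 to 1.0 over successive frames moves the mask from its place in the sequence onto the subject face, where it is then fused.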
  • the display module 301 is further configured to gradually fade out, according to a preset transparency change rule, the other face masks in the face mask sequence except the target face mask.
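The preset transparency change rule for the gradual disappearance of the non-selected masks is not specified in detail; a per-frame linear fade is one plausible choice. The Python sketch below assumes such a rule, with `fade_rate` (opacity lost per second) and `dt` (frame time) as hypothetical parameters.

```python
def fade_alpha(alpha, fade_rate, dt):
    """One step of a linear fade-out for a non-selected face mask.

    Reduces the mask's opacity each frame, clamping at fully
    transparent. Assumed rule for illustration only.
    """
    return max(alpha - fade_rate * dt, 0.0)
```

Calling this once per frame until it returns 0.0 makes the remaining masks vanish gradually while the target mask is fused to the subject face.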
  • the processing module 303 is further configured to determine a rendering area and a non-rendering area according to the current position parameter of each texel on each face mask of the face mask sequence and the position parameter of the subject face, and to render only the rendering area.
  • the texel belongs to the non-rendering area if the current position of the texel is within the position range corresponding to the target face, and is located behind the feature key point of the target face in the third direction.
  • the processing module 303 is further configured to render the frontal area according to the face of the object;
  • the processing module 303 is further configured to render the back area into a preset fixed texture.
  • the device for displaying a face image provided by the embodiment shown in FIG. 9 can be used to execute the method provided by any of the above embodiments, and the specific implementation manner and technical effect are similar, and are not repeated here.
  • FIG. 10 is a schematic structural diagram of an electronic device according to an exemplary embodiment of the present disclosure. As shown in FIG. 10 , it shows a schematic structural diagram of an electronic device 400 suitable for implementing an embodiment of the present disclosure.
  • Terminal devices in the embodiments of the present disclosure may include, but are not limited to, such as mobile phones, notebook computers, digital broadcast receivers, personal digital assistants (Personal Digital Assistant, PDA), tablet computers (Portable Android Device, PAD), portable multimedia players (Portable Media Player, PMP), in-vehicle terminals (for example, in-vehicle navigation terminals) and other mobile terminals with image acquisition functions, as well as fixed terminals with image acquisition devices such as digital TVs, desktop computers, and the like.
  • the electronic device 400 may include a processor (e.g., a central processing unit, a graphics processing unit, etc.) 401, which can perform various appropriate actions and processes according to a program stored in a read-only memory (Read-Only Memory, ROM) 402 or a program loaded from a storage device 408 into a random access memory (RAM) 403. The RAM 403 also stores various programs and data required for the operation of the electronic device 400.
  • the processor 401, the ROM 402, and the RAM 403 are connected to each other through a bus 404.
  • An Input/Output (I/O) interface 405 is also connected to the bus 404 .
  • the memory is used to store programs for executing the methods described in the above method embodiments; the processor is configured to execute the programs stored in the memory.
  • the following devices can be connected to the I/O interface 405: input devices 406 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 407 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, etc.; a storage device 408 including, for example, a magnetic tape, a hard disk, etc.; and a communication device 409.
  • Communication means 409 may allow electronic device 400 to communicate wirelessly or by wire with other devices to exchange data.
  • Although FIG. 10 shows the electronic device 400 having various means, it should be understood that not all of the illustrated means are required to be implemented or provided; more or fewer means may alternatively be implemented or provided.
  • embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer-readable medium, the computer program comprising program code for performing the methods shown in the flowcharts of the embodiments of the present disclosure.
  • the computer program may be downloaded and installed from the network via the communication device 409, or from the storage device 408, or from the ROM 402.
  • the processor 401 When the computer program is executed by the processor 401, the above-mentioned functions defined in the methods of the embodiments of the present disclosure are performed.
  • the computer-readable medium mentioned above in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the above two.
  • the computer-readable storage medium can be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus or device, or a combination of any of the above.
  • Computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (Compact Disc Read-Only Memory, CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave with computer-readable program code embodied thereon. Such propagated data signals may take a variety of forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • a computer-readable signal medium can also be any computer-readable medium other than a computer-readable storage medium that can transmit, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
  • the program code embodied on the computer-readable medium may be transmitted by any suitable medium, including but not limited to: electric wire, optical fiber cable, radio frequency (RF), etc., or any suitable combination of the above.
  • the above-mentioned computer-readable medium may be included in the above-mentioned electronic device; or may exist alone without being assembled into the electronic device.
  • the above-mentioned computer-readable medium carries one or more programs, and when the one or more programs are executed by the electronic device, the electronic device: dynamically displays a face mask sequence at a preset relative position of the subject's face according to a preset motion mode, the face mask sequence including a plurality of face masks corresponding to the target face; and, in response to a trigger instruction acting on a target face mask, fuses the target face mask to the target face for display, wherein the target face mask is any mask in the face mask sequence.
  • Computer program code for performing the operations of the present disclosure may be written in one or more programming languages, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server.
  • the remote computer can be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or can be connected to an external computer (for example, through the Internet using an Internet service provider).
  • clients and servers can communicate using any currently known or future developed network protocol, such as HyperText Transfer Protocol (HTTP), and can be interconnected with digital data communication in any form or medium (e.g., a communication network).
  • Examples of communication networks include local area networks (LANs), wide area networks (WANs), the Internet (eg, the Internet), and peer-to-peer networks (eg, ad hoc peer-to-peer networks), as well as any currently known or future developed networks.
  • each block in the flowchart or block diagrams may represent a module, segment, or portion of code that contains one or more executable instructions for implementing the specified logical functions.
  • the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
  • each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented in dedicated hardware-based systems that perform the specified functions or operations, or in a combination of dedicated hardware and computer instructions.
  • the modules involved in the embodiments of the present disclosure may be implemented in software or hardware.
  • the name of the module does not constitute a limitation of the unit itself in some cases, for example, the display module can also be described as "a unit that displays the face of the object and the sequence of face masks".
  • exemplary types of hardware logic components include: field-programmable gate arrays (Field Programmable Gate Arrays, FPGAs), application-specific integrated circuits (Application Specific Integrated Circuits, ASICs), application-specific standard products (Application Specific Standard Products, ASSPs), systems on chip (System on Chip, SOC), complex programmable logic devices (Complex Programmable Logic Device, CPLD), and so on.
  • a machine-readable medium may be a tangible medium that may contain or store a program for use by or in connection with the instruction execution system, apparatus or device.
  • the machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
  • Machine-readable media may include, but are not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, devices, or devices, or any suitable combination of the foregoing.
  • machine-readable storage media would include one or more wire-based electrical connections, portable computer disks, hard disks, random access memory (RAM), read only memory (ROM), erasable programmable read only memory (EPROM or flash memory), fiber optics, compact disk read only memory (CD-ROM), optical storage, magnetic storage, or any suitable combination of the foregoing.
  • a method for displaying a face image including:
  • at a preset relative position of the subject face, dynamically displaying a face mask sequence according to a preset motion mode, the face mask sequence including a plurality of face masks corresponding to the subject face;
  • in response to a trigger instruction acting on a target face mask, fusing the target face mask to the target face for display, wherein the target face mask is any mask in the face mask sequence.
  • the face mask sequence includes at least one deformed face mask corresponding to the subject face.
  • the dynamic display of a face mask sequence at a preset relative position of the subject's face according to a preset motion mode includes:
  • the face masks in the face mask sequence are arranged along a preset circumferential direction.
  • the face masks in the face mask sequence are scaled face masks corresponding to the subject face.
  • the dynamic display of the face mask sequence according to a rotational motion includes:
  • the rotation speed of the rotation motion is determined according to the characteristic parameter, and the face mask sequence is dynamically displayed according to the rotation speed.
  • the acquiring the characteristic parameters of the target part on the face of the subject includes:
  • acquiring mouth feature parameters and eye feature parameters of the subject face, wherein the mouth feature parameters include the upper lip key point coordinates and the lower lip key point coordinates, and the eye feature parameters include the left eye key point coordinates and the right eye key point coordinates;
  • determining the first coordinate difference in the first direction according to the coordinates of the upper lip key point and the lower lip key point, and determining the second coordinate difference in the second direction according to the coordinates of the left eye key point and the right eye key point;
  • the characteristic parameter is determined according to the ratio of the first coordinate difference and the second coordinate difference.
  • the determining the rotation speed according to the characteristic parameter includes:
  • the rotation speed is the first preset speed
  • the rotation speed is the sum of the first preset speed and an additional speed, wherein the additional speed is proportional to the characteristic parameter difference, and the characteristic parameter difference is the difference between the characteristic parameter and the preset first threshold;
  • the rotation speed is determined as the second preset speed.
  • the method before the fusion of the target face mask to the target face for display, the method further includes:
  • the target face mask is rotated to a target position, and the target position and the target face conform to a preset positional relationship.
  • the target position is a position directly in front of the subject's face.
  • the rotational speed of the rotational motion is reduced to a preset target speed.
  • the displaying by fusing the target face mask to the target face includes:
  • fitting the target face mask to the target face according to a preset path, and fusing the target face mask with the target face to generate a fused face image, wherein the preset path points from a first position to a second position, the first position being the current position of the target face mask in the face mask sequence and the second position being the current position of the target face;
  • the fused face image is displayed on the subject's face.
  • the displaying by fusing the target face mask to the target face further includes:
  • the other face masks in the face mask sequence except the target face mask are gradually faded out.
  • the method for displaying a face image further includes:
  • the rendering area and the non-rendering area are determined according to the current position parameter of each texel on each face mask of the face mask sequence and the position parameter of the object face, and only the rendering area is rendered.
  • determining the rendering area and the non-rendering area according to the current position parameter of each texel on each face mask of the face mask sequence and the position parameter of the subject face includes: if the current position of the texel is within the position range corresponding to the target face and is located behind the feature key point of the target face in the third direction, the texel belongs to the non-rendering area.
  • the rendering area includes a front area and a back area
  • the rendering of the rendering area includes: rendering the front area according to the subject face, and rendering the back area with a preset fixed texture.
  • a device for displaying a face image including:
  • a display module configured to dynamically display a face mask sequence at a preset relative position of the subject face according to a preset motion mode, where the face mask sequence includes a plurality of face masks corresponding to the subject face;
  • the acquisition module is used to acquire the trigger instruction acting on the target face mask
  • a processing module further configured to fuse the target face mask to the target face, wherein the target face mask is any mask in the face mask sequence;
  • the display module is further configured to display the target face after fusion with the target face mask.
  • the face mask sequence includes at least one deformed face mask corresponding to the subject face.
  • the display module is specifically used for:
  • the face masks in the face mask sequence are arranged along a preset circumferential direction.
  • the face masks in the face mask sequence are scaled face masks corresponding to the subject face.
  • the display module is specifically used for:
  • the rotation speed of the rotation motion is determined according to the characteristic parameter, and the face mask sequence is dynamically displayed according to the rotation speed.
  • the acquisition module is further configured to acquire mouth feature parameters and eye feature parameters of the subject's face, where the mouth feature parameters include upper lip key point coordinates and lower lip key point coordinates, and the eye feature parameters include left eye key point coordinates and right eye key point coordinates;
  • the processing module is further configured to determine the first coordinate difference in the first direction according to the upper lip key point coordinates and the lower lip key point coordinates, and to determine the second coordinate difference in the second direction according to the left eye key point coordinates and the right eye key point coordinates;
  • the processing module is further configured to determine the characteristic parameter according to the ratio of the first coordinate difference and the second coordinate difference.
  • the processing module is specifically configured to:
  • the rotation speed is the first preset speed
  • the rotation speed is the sum of the first preset speed and an additional speed, wherein the additional speed is proportional to the characteristic parameter difference, and the characteristic parameter difference is the difference between the characteristic parameter and the preset first threshold;
  • the rotation speed is determined as the second preset speed.
  • the processing module is further configured to determine that the target face mask is rotated to a target position, and the target position and the target face conform to a preset positional relationship.
  • the target position is a position directly in front of the subject's face.
  • the rotational speed of the rotational motion is reduced to a preset target speed.
  • the processing module is further configured to fit the target face mask to the subject face according to a preset path, and to fuse the target face mask with the target face to generate a fused face image, wherein the preset path points from a first position to a second position, the first position being the current position of the target face mask in the face mask sequence and the second position being the current position of the subject's face;
  • the display module is further configured to display the fused face image on the subject's face.
  • the display module is further configured to gradually fade out, according to a preset transparency change rule, the other face masks in the face mask sequence except the target face mask.
  • the processing module is further configured to determine a rendering area and a non-rendering area according to the current position parameter of each texel on each face mask of the face mask sequence and the position parameter of the subject face, and to render only the rendering area.
  • the texel belongs to the non-rendering area if the current position of the texel is within the position range corresponding to the target face, and is located behind the feature key point of the target face in the third direction.
  • the processing module is further configured to render the frontal area according to the face of the object
  • the processing module is further configured to render the back area into a preset fixed texture.
  • embodiments of the present disclosure provide an electronic device, including: at least one processor and a memory;
  • the memory stores computer-executable instructions
  • the at least one processor executes the computer-executable instructions stored in the memory, so that the at least one processor executes the method for displaying a face image as described in the first aspect and various possible designs of the first aspect above.
  • embodiments of the present disclosure provide a computer-readable storage medium storing computer-executable instructions which, when executed by a processor, implement the face image display method described in the first aspect and the various possible designs of the first aspect.
  • embodiments of the present disclosure provide a computer program product, including a computer program that, when executed by a processor, implements the method for displaying a face image as described in the first aspect and various possible designs of the first aspect.
  • an embodiment of the present disclosure provides a computer program that, when executed by a processor, implements the method for displaying a face image as described in the first aspect and various possible designs of the first aspect.

Abstract

本公开提供一种人脸图像显示方法、装置、电子设备及存储介质。本公开提供的人脸图像显示方法,通过在对象人脸的预设相对位置,按照预设运动方式动态显示人脸面具序列,并且,在用户触发人脸面具序列中的目标人脸面具之后,将目标人脸面具融合至对象人脸进行显示,从而增强了人脸图像显示过程中的交互性,并且,还可以在用户对目标人脸面具触发后,将目标人脸面具融合至对象人脸进行显示,以实现在对象人脸上显示特定人脸面具的效果。

Description

人脸图像显示方法、装置、电子设备及存储介质
相关申请交叉引用
本申请要求于2020年9月17日提交中国专利局、申请号为202010981627.5、发明名称为“人脸图像显示方法、装置、电子设备及存储介质”的中国专利申请的优先权,其全部内容通过引用并入本文。
技术领域
本公开涉及图像处理技术领域,尤其涉及一种人脸图像显示方法、装置、电子设备及存储介质。
背景技术
随着科技的发展,人脸图像的显示也衍生出了很多应用场景,其中,在某些应用场景下用户希望可以对人脸图像进行处理后显示。
虽然,现有的应用程序可以通过图像处理的方式,对人脸图像进行各类的处理,并将处理后的人脸图像进行显示,但是,现有应用程序并不能满足用户在人脸图像显示过程中对于交互方式多样性的需求。
发明内容
本公开提供一种人脸图像显示方法、装置、电子设备及存储介质,用于解决当前无法满足用户在人脸图像显示过程中对于交互方式多样性的需求的技术问题。
第一方面,本公开实施例提供一种人脸图像显示方法,包括:
在对象人脸的预设相对位置,按照预设运动方式动态显示人脸面具序列,所述人脸面具序列包括多张所述对象人脸对应的人脸面具;
响应于作用在目标人脸面具上的触发指令,将所述目标人脸面具融合至所述对象人脸进行显示,其中,所述目标人脸面具为所述人脸面具序列中的任一面具。
第二方面,本公开实施例提供一种人脸图像显示装置,包括:
显示模块,用于在对象人脸的预设相对位置,按照预设运动方式动态显示人脸面具序列,所述人脸面具序列包括多张所述对象人脸对应的人脸面具;
获取模块,用于获取作用于目标人脸面具上的触发指令;
处理模块,还用于将所述目标人脸面具进行融合至所述对象人脸,其中,所述目标人脸面具为所述人脸面具序列中的任一面具;
所述显示模块,还用于对融合所述目标人脸面具后的所述对象人脸进行显示。
第三方面,本公开实施例提供一种电子设备,包括:至少一个处理器和存储器;
所述存储器存储计算机执行指令;
所述至少一个处理器执行所述存储器存储的计算机执行指令,使得所述至少一个处理器执行如上第一方面以及第一方面各种可能的设计中所述的人脸图像显示方法。
第四方面,本公开实施例提供一种计算机可读存储介质,所述计算机可读存储介质中存储有计算机执行指令,当处理器执行所述计算机执行指令时,实现如上第一方面以及第一方面各种可能的设计中所述的人脸图像显示方法。
第五方面,本公开实施例提供一种计算机程序产品,包括计算机程序,所述计算机程序被处理器执行时,实现如上第一方面以及第一方面各种可能的设计中所述的人脸图像显示方法。
第六方面,本公开实施例提供一种计算机程序,所述计算机程序被处理器执行时,实现如上第一方面以及第一方面各种可能的设计中所述的人脸图像显示方法。
本公开实施例提供的一种人脸图像显示方法、装置、电子设备及存储介质,通过在对象人脸的预设相对位置,按照预设运动方式动态显示人脸面具序列,并且,在用户触发人脸面具序列中的目标人脸面具之后,将目标人脸面具融合至对象人脸进行显示,从而增强了人脸图像显示过程中的交互性,并且,还可以在用户对目标人脸面具触发后,将目标人脸面具融合至对象人脸进行显示,以实现在对象人脸上显示特定人脸面具的效果。
附图说明
为了更清楚地说明本公开实施例或现有技术中的技术方案,下面将对实施例或现有技术描述中所需要使用的附图作一简单地介绍,显而易见地,下面描述中的附图是本公开的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动性的前提下,还可以根据这些附图获得其他的附图。
图1为本公开根据一示例实施例示出的人脸图像显示方法的流程示意图;
图2为本公开根据一示例实施例示出的单张人脸面具的生成示意图;
图3为本公开根据一示例实施例示出的人脸面具序列的生成示意图;
图4为本公开根据一示例实施例示出的人脸图像显示方法的触发过程场景示意图;
图5为本公开根据一示例实施例示出的触发后融合显示步骤的流程示意图;
图6为本公开根据另一示例实施例示出的人脸图像显示方法的流程示意图;
图7为本公开根据一示例实施例示出的渲染步骤的流程示意图;
图8为图7所示的渲染步骤的渲染后显示结果示意图;
图9为本公开根据一示例实施例示出的人脸图像显示装置的结构示意图;
图10为本公开根据一示例实施例示出的电子设备的结构示意图。
具体实施方式
下面将参照附图更详细地描述本公开的实施例。虽然附图中显示了本公开的某些实施例,然而应当理解的是,本公开可以通过各种形式来实现,而且不应该被解释为限于这里阐述的实施例,相反提供这些实施例是为了更加透彻和完整地理解本公开。应当理解的是,本公开的附图及实施例仅用于示例性作用,并非用于限制本公开的保护范围。
应当理解,本公开的方法实施方式中记载的各个步骤可以按照不同的顺序执行,和/或并行执行。此外,方法实施方式可以包括附加的步骤和/或省略执行示出的步骤。本公开的范围在此方面不受限制。
本文使用的术语“包括”及其变形是开放性包括,即“包括但不限于”。术语“基于”是“至少部分地基于”。术语“一个实施例”表示“至少一个实施例”;术语“另一实施例”表示“至少一个另外的实施例”;术语“一些实施例”表示“至少一些实施例”。其他术语的相关定义将在下文描述中给出。
需要注意,本公开中提及的“第一”、“第二”等概念仅用于对不同的装置、模块或单元进行区分,并非用于限定这些装置、模块或单元所执行的功能的顺序或者相互依存关系。
需要注意,本公开中提及的“一个”、“多个”的修饰是示意性而非限制性的,本领域技术人员应当理解,除非在上下文另有明确指出,否则应该理解为“一个或多个”。
目前,针对人脸图像的处理方式多种多样,例如,可以对人脸图像进行美颜处理,以及可以对人脸图像进行变形处理。而针对处理后的人脸图像,也衍生出了很多应用场景,但是,都是基于已经处理完成后的人脸图像进行直接应用,而并不能满足用户在人脸图像显示过程中对于交互方式多样性的需求。
而在本公开所提供的实施例中,用户可以通过终端设备(例如:个人电脑、笔记本电脑、平板电脑以及智能手机等设备)获取到对象人脸之后,先根据对象人脸生成人脸面具序列,其中,该人脸面具序列中包括多张人脸面具。值得说明的,人脸面具序列中的人脸面具可以是对象人脸直接对应的人脸面具,也可以是针对对象人脸进行相关处理(例如:变形处理、美颜处理)后所生成的人脸面具。此外,在一个实施例中,人脸面具序列中可以是包括多个不同的人脸形态所对应的面具。
在生成人脸面具序列之后,人脸面具序列中的人脸面具可以是按照特定的分布规律进行排布,例如,人脸面具序列中的人脸面具可以是以按照预设圆周方向进行排布,其中,该预设圆周方向可以是以对象人脸为中心,以对象人脸的头顶方向为中心轴,从而形成人脸面具序列环绕对象人脸的效果。并且,人脸面具序列还可以按照预设运动方式进行动态显示,例如,可以是人脸面具序列环绕对象人脸进行旋转运动。
此外，人脸面具序列环绕对象人脸进行旋转运动的旋转速度可以根据用户的身体特征进行确定，例如，可以根据用户嘴巴张开程度、用户微笑程度以及用户相关手势来确定。此处，以根据用户嘴巴张开程度为例进行说明：人脸面具序列环绕对象人脸进行旋转运动的旋转速度可以随着用户嘴巴张开程度的变大而变快，即用户嘴巴张得越大，人脸面具序列环绕对象人脸旋转的速度就越快。可见，用户可以通过调整嘴巴张开的程度来对人脸面具序列环绕对象人脸旋转的速度进行加速以及减速。
在用户对人脸面具序列中的任一面具进行触发时,例如,用户通过点击屏幕的方式,选择了人脸面具序列中一张面具,即为目标人脸面具。在触发之后,人脸面具序列环绕对象人脸旋转的速度就会先下降,直至目标人脸面具运动至正对对象人脸,此时,可以停止人脸面具序列的旋转,并将该目标人脸面具融合至对象人脸上进行显示。进一步的,对于人脸面具序列中除所述目标人脸面具之外的其他人脸面具,可以按照预设透明度变化规则进行渐变消失。
在本公开提供的实施例中，通过在对象人脸的预设相对位置，按照预设运动方式动态显示人脸面具序列，并且，在用户触发人脸面具序列中的目标人脸面具之后，将目标人脸面具融合至对象人脸进行显示。从而实现在用户对目标人脸面具触发前，人脸面具序列的显示方式与对象人脸之间具有交互性的效果，并且，在用户对目标人脸面具触发后，将目标人脸面具融合至对象人脸进行显示，可以实现在对象人脸上显示特定人脸面具的效果。下面通过几个具体实现方式对该图像处理方法进行详细说明。
图1为本公开根据一示例实施例示出的人脸图像显示方法的流程示意图。如图1所示,本实施例提供的人脸图像显示方法,包括:
步骤101、在对象人脸的预设相对位置,按照预设运动方式动态显示人脸面具序列。
在本步骤中,当用户通过终端设备获取到对象人脸之后,可以在对象人脸的预设相对位置,按照预设运动方式动态显示人脸面具序列,其中,人脸面具序列可以包括多张对象人脸对应的人脸面具。值得说明的,人脸面具序列中的人脸面具可以是对象人脸未经处理所生成的人脸面具,也可以是针对对象人脸进行相关处理(例如:变形处理、美颜处理)后所生成的人脸面具。其中,对于上述根据对象人脸生成人脸面具可以是利用相关的3D人脸处理工具。此外,在一个实施例中,人脸面具序列中可以是包括多个不同的人脸形态所对应的面具。
在生成人脸面具序列之后,人脸面具序列中的人脸面具可以是按照特定的分布规律进行排布,例如,人脸面具序列中的人脸面具可以是以按照预设圆周方向进行排布,还可以是按照在预设方向上依次排序的方式进行排布。
可选的,当人脸面具序列中的人脸面具可以是以按照预设圆周方向进行排布时,该预设圆周方向可以是以对象人脸为中心,以对象人脸的头顶方向为中心轴,从而形成人脸面具序列环绕对象人脸的效果。并且,人脸面具序列还可以按照预设运动方式进行动态显示,例如,可以是人脸面具序列环绕对象人脸进行旋转运动。
而当人脸面具序列中的人脸面具是按照在预设方向上依次排序的方式进行排布时,人脸面具序列则可以按照人脸面具排布的先后顺序从对象人脸前方进行滑动运动。
值得说明的,对于人脸面具序列的具体排布方式,以及人脸面具序列与对象人脸之间相对运动方式在本实施例中不作具体限定,其具体形式可以根据具体的场景需求进行适配设置,而在本实施例中旨在示例性说明对象人脸与人脸面具序列之间存在特定的相对位置关系以及相对运动关系。
步骤102、响应作用于目标人脸面具上的触发指令,将目标人脸面具融合至对象人脸进行显示。
而在用户对人脸面具序列中的任一面具进行触发时,例如,用户通过点击屏幕的方式,选择了人脸面具序列中一张人脸面具,即为目标人脸面具。在触发之后,人脸面具序列环绕对象人脸旋转的速度就会先下降,直至目标人脸面具运动至正对对象人脸,此时,可以停止人脸面具序列的旋转,并将该目标人脸面具融合至对象人脸上进行显示。进一步的,对于人脸面具序列中除所述目标人脸面具之外的其他人脸面具,可以按照预设透明度变化规则进行渐变消失。
在本实施例中,通过在对象人脸的预设相对位置,按照预设运动方式动态显示人脸面具序列,并且,在用户触发人脸面具序列中的目标人脸面具之后,将目标人脸面具融合至对象人脸进行显示,从而增强了人脸图像显示过程中的交互性,并且,还可以在用户对目标人脸面具触发后,将目标人脸面具融合至对象人脸进行显示,以实现在对象人脸上显示特定人脸面具的效果,增强了用户交互的乐趣和体验。
在上述实施例中，人脸面具序列中所包括的人脸面具可以包括对象人脸所对应的原始人脸面具，还可以包括利用3D人脸处理工具对原始人脸面具进行处理后所生成的变形人脸面具。例如，人脸面具序列可以包括8张人脸面具，其中，可以包括2张原始人脸面具以及6张变形人脸面具。具体的，可以先创建8个人脸面具实体，然后将预先设计好的具备变形效果的3D模型导入，并在每个人脸面具实体中增加变形器来调整对应变形的程度。由于每张人脸面具的变形形式都是不一样的，所以需要将每种变形形式进行单独导入，并且，还可以将多种变形器做到同一个模型中。
通过3D人脸处理工具生成的人脸面具由于底层算法的设置，其默认初始位置与原始人脸相一致。而为了将其显示至对象人脸的预设相对位置，可以在模型空间的人脸坐标系下对所生成的人脸面具进行位移、旋转以及缩放等操作。其中，图2为本公开根据一示例实施例示出的单张人脸面具的生成示意图。如图2所示，可以先获得当前模型空间中人脸面具未变形前的顶点原始位置以及变形后的位置偏移，然后，由于需要保证人脸面具相对于对象人脸始终朝外显示，所以需要先进行位移操作，然后再做旋转操作。
具体的，可以在顶点原始位置加上变形后的位置偏移的基础上，与Z轴上的坐标偏移量相加，从而实现与对象人脸原始位置的偏移。此外，为了使人脸面具在围成圈环绕对象人脸做旋转运动时，不会完全遮挡对象人脸，还需对所生成的人脸面具进行缩放处理，例如可以缩放到原始大小的90%，从而使得人脸面具序列中的各个人脸面具均为对象人脸所对应的缩放人脸面具。可见，通过上述方式，可以实现在对象人脸偏离预设距离的位置显示一张缩放一定比例之后的人脸面具，具体效果可以参见图2所示。
而为了实现人脸面具可以按照预设运动方式动态显示,例如,为了使得人脸面具可以围绕对象人脸进行旋转,可以以对象人脸为中心,对象人脸的头顶方向为中心轴(例如:Y轴),通过以下旋转矩阵进行旋转:
$$R_y(\theta)=\begin{bmatrix}\cos\theta & 0 & \sin\theta\\ 0 & 1 & 0\\ -\sin\theta & 0 & \cos\theta\end{bmatrix}$$
其中,θ为旋转角度。
可见，通过上述方式，可以实现单张人脸面具在缩放到原始大小90%后，以对象人脸为中心、头顶方向为中心轴、沿指定半径的圆形轨迹、且面具正面永远朝外的旋转运动。
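上述"顶点原始位置加变形偏移、缩放、叠加Z轴偏移、再绕Y轴（头顶方向）旋转"的顶点变换，可用如下示意性Python代码表达。需要说明的是，这只是在若干假设下的草图：函数名、参数名（如 `z_offset`、`scale`）以及缩放与偏移的施加顺序均为本示例的假设，并非文中的实际实现。

```python
import math

def rotate_y(theta_deg, p):
    """将点 p = (x, y, z) 绕 Y 轴（头顶方向中心轴）旋转 theta_deg 度。"""
    t = math.radians(theta_deg)
    x, y, z = p
    return (math.cos(t) * x + math.sin(t) * z,
            y,
            -math.sin(t) * x + math.cos(t) * z)

def place_mask_vertex(orig, morph_offset, z_offset, theta_deg, scale=0.9):
    """顶点原始位置加上变形后的位置偏移，缩放到90%后叠加 Z 轴偏移，最后绕 Y 轴旋转。"""
    x = (orig[0] + morph_offset[0]) * scale
    y = (orig[1] + morph_offset[1]) * scale
    z = (orig[2] + morph_offset[2]) * scale + z_offset
    return rotate_y(theta_deg, (x, y, z))
```

将 `theta_deg` 逐帧递增，即可得到面具正面朝外环绕对象人脸的旋转效果。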
此外,在实现单张人脸面具的动态显示之后,还可以进一步实现在对象人脸的预设相对位置,按照预设运动方式动态显示人脸面具序列。其中,可以人脸面具序列包括8张人脸面具的情况进行举例说明:
图3为本公开根据一示例实施例示出的人脸面具序列的生成示意图。如图3所示,在初始化人脸面具序列中各张人脸面具的位置参数时,可以赋予相同的偏离位移以及缩放比例。然后,根据初始化顺序依次定义旋转角度,这样初始化后的8个人脸面具将以每45度为一个间隔放置,从而使得8个人脸面具所形成的人脸面具序列围成一个完整的圆,以使在对人脸面具序列进行显示时,人脸面具序列中的人脸面具按照预设圆周方向进行排布。
此外,还可以将8个人脸面具赋予不同的变形模型和配置好用于控制人脸面具选择的算法,这样就可以实现在一个系统级脚本中统一地控制这些人脸面具的运动。
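上述"以每45度为一个间隔初始化8张面具的旋转角度、围成完整圆周"的做法可示意如下（示意性Python代码，`n_masks` 等命名为说明用途的假设）：

```python
def init_mask_angles(n_masks=8):
    """按初始化顺序为各人脸面具依次分配旋转角度，使其等间隔围成一个完整的圆。"""
    step = 360.0 / n_masks          # 8 张面具时间隔为 45 度
    return [i * step for i in range(n_masks)]
```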
在实现以对象人脸为中心，头顶方向为中心轴，通过旋转运动的方式动态显示人脸面具序列之后，用户可以对人脸面具序列中的任一面具进行触发，从而选定目标人脸面具，以将该目标人脸面具融合显示于对象人脸上。图4为本公开根据一示例实施例示出的人脸图像显示方法的触发过程场景示意图。如图4所示，用户可以通过点击屏幕的方式，选择人脸面具序列中一张面具作为目标人脸面具。
在触发之后,人脸面具序列环绕对象人脸旋转的速度就会先下降,直至目标人脸面具运动至正对对象人脸,此时,可以停止人脸面具序列的旋转,并将该目标人脸面具融合至对象人脸上进行显示。进一步的,对于人脸面具序列中除所述目标人脸面具之外的其他人脸面具,可以按照预设透明度变化规则进行渐变消失。
在一种可能的设计中,图5为本公开根据一示例实施例示出的触发后融合显示步骤的流程示意图。如图5所示,上述实施例中步骤102,具体可以包括:
步骤1020、获取作用在目标人脸面具上的触发指令。
例如,用户可以通过点击屏幕的方式,选择了人脸面具序列中一张面具作为目标人脸面具。
步骤1021、判断旋转速度是否小于预设目标速度。若判断结果为是，则执行步骤1022；若判断结果为否，则执行步骤1023。
步骤1022、设置当前旋转角度。
步骤1023、降低旋转速度。
步骤1024、判断旋转速度是否小于预设目标速度。若判断结果为是，则执行步骤1025；若判断结果为否，则执行步骤1023。
步骤1025、计算目标旋转角度。
在对人脸面具序列中的目标人脸面具进行触发之后，需要判断旋转速度是否小于预设目标速度，其中，预设目标速度可以为一个旋转速度阈值。如果判断结果为是，则说明当前人脸面具的旋转速度较慢，可以通过直接计算目标旋转角度的方式，控制目标人脸面具运动至目标位置，例如，对象人脸正前方的位置，然后进行后续的融合显示操作。但是，如果当前旋转速度大于预设目标速度，直接采用计算目标旋转角度的方式，控制目标人脸面具运动至目标位置，则会导致人脸面具序列的运动状态变化过大，导致交互感受较差。因此，在计算目标旋转角度之前，需要先降低旋转速度，将旋转速度降低至小于预设目标速度之后，再进行后续目标旋转角度的计算，以控制目标人脸面具运动至目标位置。值得说明的，目标旋转角度为人脸面具运动停止时的旋转角度。可以继续以人脸面具序列包括8张人脸面具的情况进行举例说明，由于在停留时，必须有一个人脸面具正对用户，此时该人脸面具的旋转角度为0，所以目标旋转角度必须是45的整数倍。
具体的,对于人脸面具序列中各个人脸面具的目标旋转角度可以通过以下公式进行计算:
$$\delta = 45 \times \left(\left\lfloor \frac{\varphi}{45} \right\rfloor + 1\right)$$
其中，δ为目标旋转角度，φ为当前旋转角度，floor（即⌊·⌋）表示获得小于括号内结果的最大整数，括号内加1能保证目标角度与当前角度之间的关系方向与旋转速度方向一致，而当前旋转角度为各个人脸面具当前所对应的人脸角度。
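按上述描述，目标旋转角度的计算可示意如下（示意性代码，假设旋转方向为正方向且面具间隔为45度，函数命名为本示例的假设）：

```python
import math

def target_rotation_angle(current_angle, step=45.0):
    """返回沿旋转方向上、大于当前角度的下一个 step 整数倍，作为停止时的目标旋转角度。

    先对 current_angle / step 向下取整，再加 1，保证目标角度位于当前角度的旋转方向前方。
    """
    return step * (math.floor(current_angle / step) + 1)
```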
步骤1026、记录旋转至目标位置的人脸面具的编号。
具体的，在生成人脸面具序列时，可以为人脸面具序列中的各个人脸面具分配编号，该编号用于唯一标识各个人脸面具，例如，为第一人脸面具分配编号1，为第二人脸面具分配编号2，为第三人脸面具分配编号3，依次分配编号，直至为第八人脸面具分配编号8。而在人脸面具序列的旋转过程中，记录各个时刻下旋转至目标位置的人脸面具的编号，例如，在第一时刻下，编号1对应的第一人脸面具旋转至目标位置，而在第二时刻，编号2对应的第二人脸面具旋转至目标位置。
步骤1027、判断目标人脸面具是否旋转至目标位置。若判断结果为是，则执行步骤1028以及步骤1029；若判断结果为否，则返回步骤1025。
当目标人脸面具正对对象人脸,即目标人脸面具的目标旋转角度为0时,则说明目标人脸面具到达目标位置,此时,可以停止人脸面具序列的旋转运动。
具体的,可以是在触发目标人脸面具时,记录下目标人脸面具所对应的编号,而在人脸面具序列的旋转过程中,若当前时刻所记录的编号为目标人脸面具所对应的编号,则说明目标人脸面具正对对象人脸,此时,可以停止人脸面具序列的旋转运动。例如,若用户触发的目标人脸面具为第三人脸面具,则记录编号3,在人脸面具序列的旋转过程中,当记录编号3对应的人脸面具旋转至目标位置时,即说明第三人脸面具旋转至目标位置,则停止人脸面具序列的旋转运动。
步骤1028、在对象人脸上显示融合人脸图像。
在一个实施例中,可以按照预设路径将目标人脸面具贴合至对象人脸上,并将目标人脸面具与对象人脸进行融合,以生成融合人脸图像,其中,预设路径从第一位置指向第二位置,第一位置为当前目标人脸面具在人脸面具序列中所处的位置,第二位置为当前对象人脸所处的位置。在本步骤中,可以是通过缩小人脸面具在Z轴方向上的偏离值,从而实现所有人脸面具朝向对象人脸以缩小圆半径运动的效果。最后,在对象人脸上显示该融合人脸图像。
步骤1029、对其他人脸面具进行渐变消失。
此外,还可以按照预设透明度变化规则,对人脸面具序列中除目标人脸面具之外的其他人脸面具进行渐变消失。在本步骤中,可以是逐渐降低不正对对象人脸的人脸面具的alpha通道,其中,alpha通道用于调节透明度,从而让不正对对象人脸的人脸面具逐渐变透明。具体的,实现方式可以参照如下公式:
$$\mathrm{alpha} = \max\left(\mathrm{alpha}' - \sigma_t \times \Delta t,\ 0\right)$$
其中，alpha为当前帧人脸面具的透明度，alpha′为上一帧人脸面具的透明度，σ_t为时间系数，Δt为当前帧与上一帧之间的时间差。
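该逐帧降低非目标面具alpha通道的规则，可用如下示意性代码直接表达：

```python
def fade_alpha(prev_alpha, sigma_t, dt):
    """按预设透明度变化规则逐帧降低非目标面具的透明度，下限为 0（完全透明）。"""
    return max(prev_alpha - sigma_t * dt, 0.0)
```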
图6为本公开根据另一示例实施例示出的人脸图像显示方法的流程示意图。如图6所示,本实施例提供的人脸图像显示方法,包括:
步骤201、根据人脸面具序列的各张人脸面具上各个纹素的当前位置参数以及对象人脸的位置参数确定渲染区域以及非渲染区域,并仅对渲染区域进行渲染。
在本步骤中,在对人脸面具序列进行显示之前,还需对各个人脸面具进行渲染。如果对各个人脸面具进行无差别全方位的渲染,则容易导致渲染计算量过大,进而占用过多的运算资源。因此,可以根据人脸面具序列的各张人脸面具上各个纹素的当前位置参数以及对象人脸的位置参数确定渲染区域以及非渲染区域,并仅对渲染区域进行渲染。
具体的,图7为本公开根据一示例实施例示出的渲染步骤的流程示意图。如图7所示,本实施例中上述渲染步骤,可以包括:
步骤2011、判断是否为人脸面具的正面区域。
步骤2012、若为正面区域，判断纹素当前位置是否位于对象人脸所对应的位置范围之内，且在第三方向上位于对象人脸的特征关键点之后。若判断结果为是，则执行步骤2015；若判断结果为否，则执行步骤2014。
值得理解的,对于第三方向可以是对象人脸与人脸面具之间存在视觉空间遮挡的方向。
步骤2013、若为背面区域,判断纹素当前位置是否位于对象人脸所对应的位置范围之内,且在第三方向上位于对象人脸的特征关键点之后。若判断结果为是,则执行步骤2015;若判断结果为否,则执行步骤2016。
在本实施例中，对于人脸面具的正面区域和背面区域可以分别通过对应的处理进程来完成，其中，第一个处理进程只用于对正面区域进行渲染，而第二个处理进程只用于对背面区域进行渲染。但是，两个处理进程可以共用一个纹素着色引擎进行渲染，其中，纹素着色引擎都会采样当前的对象人脸，以返回实时人像遮罩，从而实现对人脸面具的渲染。
其中,可以通过判断纹素当前位置是否位于对象人脸所对应的位置范围之内,且在第三方向(例如,Z轴方向)上位于对象人脸的特征关键点(例如:对象人脸的鬓角位置)之后,确定该纹素是否需要进行渲染。
具体的,当纹素当前位置位于对象人脸所对应的位置范围之内,则纹素可能是在对象人脸的前方,也可能是在对象人脸的后方。但是,如果纹素当前位置在Z轴方向上位于对象人脸的鬓角位置的前方,则可以确定纹素是在对象人脸的前方,可见,该纹素对于用户为可见的部分,因此,需要对该纹素进行渲染。然而,如果纹素当前位置在Z轴方向上位于对象人脸的鬓角位置的后方,则可以确定纹素是在对象人脸的后方,可见,该纹素对于用户为不可见的部分,因此,无需对该纹素进行渲染,从而节省不必要的渲染过程所导致的计算资源浪费。
步骤2014、根据对象人脸渲染正面区域。
具体的，对于人脸面具上位于渲染区域内的纹素，如果也属于正面区域，则可以根据对象人脸进行渲染，以显示人脸面具所对应的对象人脸的具体外观。
步骤2015、不渲染人脸面具位于非渲染区域内的纹素。
而对于人脸面具位于非渲染区域内的纹素,无论是属于正面区域还是背面区域,均不进行渲染,可以理解为,将该区域的纹素设置为透明。
步骤2016、将背面区域渲染为预设固定纹理。
而对于人脸面具上位于渲染区域内、但属于背面区域的纹素，则可以根据预设固定纹理进行渲染，例如，可以将其渲染为灰色。
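上述按纹素划分渲染区域的判断逻辑可粗略示意如下。这是一个带假设的草图：以二维包围盒近似"对象人脸所对应的位置范围"、以 `temple_z` 表示鬓角等特征关键点的Z坐标、并约定Z值越小越靠后，这些均为本示例的假设，并非文中实现：

```python
def texel_region(texel_pos, face_bbox, temple_z, is_front_face):
    """返回 'skip'（不渲染）、'face'（正面按对象人脸渲染）或 'fixed'（背面固定纹理）。"""
    x, y, z = texel_pos
    (x0, y0), (x1, y1) = face_bbox
    inside = x0 <= x <= x1 and y0 <= y <= y1
    if inside and z < temple_z:      # 位于人脸范围内且在鬓角之后：被遮挡，不渲染
        return 'skip'
    return 'face' if is_front_face else 'fixed'
```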
图8为图7所示的渲染步骤的渲染后显示结果示意图。如图8所示,对于区域A,则是位于渲染区域并且属于正面区域,因此根据对象人脸进行渲染,以显示人脸面具所对应的对象人脸的具体外观。而对于区域B,则是位于渲染区域并且属于背面区域,因此将其渲染为例如灰色。
返回至图6,步骤202、在对象人脸的预设相对位置,按照预设运动方式动态显示人脸面具序列。
值得说明的,本实施例中步骤202的具体实现方式可以参照图1所示实施例中步骤101的具体描述。
此外，在图1所示实施例中步骤101的基础上，在按照预设运动方式动态显示人脸面具序列时，旋转速度可以根据用户的身体特征进行确定，例如，可以根据用户嘴巴张开程度、用户微笑程度以及用户相关手势。此处，以根据用户嘴巴张开程度为例进行说明：人脸面具序列环绕对象人脸进行旋转运动的旋转速度可以随着用户嘴巴张开程度的变大而变快，即用户嘴巴张得越大，人脸面具序列环绕对象人脸旋转的速度就越快。可见，用户可以通过调整嘴巴张开的程度来对人脸面具序列环绕对象人脸旋转的速度进行加速以及减速。
具体的,可以是先获取对象人脸上目标部位的特征参数,然后,根据特征参数确定旋转速度,并按照旋转速度动态显示人脸面具序列。例如,可以获取对象人脸的嘴部特征参数以及眼部特征参数,其中,嘴部特征参数包括上嘴唇关键点坐标以及下嘴唇关键点坐标,眼部特征参数包括左眼关键点坐标以及右眼关键点坐标。然后,根据上嘴唇关键点坐标以及下嘴唇关键点坐标确定在第一方向(例如,Y轴)上的第一坐标差值,并根据左眼关键点坐标以及右眼关键点坐标确定在第二方向(例如,X轴)上的第二坐标差值。最后,根据第一坐标差值以及第二坐标差值的比值确定特征参数,其中,可以将该特征参数用于表征张嘴程度。值得说明的,通过以第一坐标差值以及第二坐标差值之间的比值来确定张嘴程度,可以避免因为对象人脸距离摄像头远近变化而导致张嘴程度的波动。
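上述以第一坐标差值与第二坐标差值之比表征张嘴程度的计算，可示意如下（示意性代码，关键点以 (x, y) 坐标二元组表示为本示例的假设）：

```python
def mouth_open_degree(upper_lip, lower_lip, left_eye, right_eye):
    """张嘴程度 = 上下嘴唇在 Y 方向的差值 / 左右眼在 X 方向的差值。

    采用比值形式，可避免对象人脸距离摄像头远近变化导致的数值波动。
    """
    dy = abs(upper_lip[1] - lower_lip[1])   # 第一方向（Y 轴）上的第一坐标差值
    dx = abs(left_eye[0] - right_eye[0])    # 第二方向（X 轴）上的第二坐标差值
    return dy / dx if dx else 0.0
```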
其中,若特征参数小于或等于预设第一阈值,则旋转速度为第一预设速度。若特征参数大于预设第一阈值,则旋转速度为第一预设速度与附加速度之和,其中,附加速度与特征参数差值成正比,特征参数差值为特征参数与预设第一阈值之差。而当所述第一预设速度与所述附加速度之和大于或等于第二预设速度时,则将所述旋转速度确定为所述第二预设速度。
通过上述方式，可以实现根据张嘴程度计算当前帧人脸面具的旋转速度，具体计算公式如下：
$$\gamma = \min\left(\gamma_{\min} + \max(D - d,\ 0) \times \sigma_\gamma,\ \gamma_{\max}\right)$$
其中，γ为旋转速度，γ_min为最小旋转速度，D为张嘴程度，d为张嘴检测阈值，σ_γ为速度系数，γ_max为最大旋转速度。值得说明的，张嘴检测阈值是指在张嘴程度大于该阈值时才能判定为张嘴，而速度系数则是指将张嘴程度参数转化为旋转速度时需要相乘的一个常数。
值得说明的，上述预设第一阈值即为张嘴检测阈值d，第一预设速度即为最小旋转速度γ_min，附加速度为(D−d)×σ_γ，而第二预设速度为最大旋转速度γ_max。
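上述旋转速度公式的一个直接实现示意如下（参数名为说明用途的假设）：

```python
def rotation_speed(D, d, gamma_min, gamma_max, sigma_gamma):
    """γ = min(γ_min + max(D − d, 0) × σ_γ, γ_max)：张嘴越大转得越快，并以最大速度封顶。"""
    return min(gamma_min + max(D - d, 0.0) * sigma_gamma, gamma_max)
```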
由此可见,通过本步骤,可以通过控制张嘴程度来实现人脸面具序列围绕对象人脸以不同旋转速度进行旋转的效果。
此外，还可以通过上述确定的旋转速度，来设置当前帧中各个人脸面具的旋转角度，具体公式如下：
$$\varphi = \left(\varphi' + \gamma \times \Delta t\right) \bmod 360$$
其中，φ为当前帧中人脸面具的旋转角度，φ′为上一帧中人脸面具的旋转角度，γ为旋转速度，Δt为当前帧与上一帧之间的时间差。
值得说明的，上述公式尾部的取余操作是为了让旋转角度能保证在[0,360]的区间之内，以防止数字太大导致内存溢出。
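该逐帧更新旋转角度并取余的过程，可用如下示意性代码表达：

```python
def update_angle(prev_angle, gamma, dt):
    """当前帧旋转角度 = (上一帧角度 + γ·Δt) mod 360，取余防止角度数值无限增长。"""
    return (prev_angle + gamma * dt) % 360.0
```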
步骤203、响应作用于目标人脸面具上的触发指令,将目标人脸面具融合至对象人脸进行显示。
值得说明的,本实施例中步骤203的具体实现方式可以参照图1所示实施例中步骤102的具体描述,此处不再进行赘述。
图9为本公开根据一示例实施例示出的人脸图像显示装置的结构示意图。如图9所示,本实施例提供的一种人脸图像显示装置300,包括:
显示模块301,用于在对象人脸的预设相对位置,按照预设运动方式动态显示人脸面具序 列,所述人脸面具序列包括多张所述对象人脸对应的人脸面具;
获取模块302,用于获取作用于目标人脸面具上的触发指令;
处理模块303,还用于将所述目标人脸面具进行融合至所述对象人脸,其中,所述目标人脸面具为所述人脸面具序列中的任一面具;
所述显示模块301,还用于对融合所述目标人脸面具后的所述对象人脸进行显示。
根据本公开的一个或多个实施例,所述人脸面具序列包括至少一张所述对象人脸对应的变形人脸面具。
根据本公开的一个或多个实施例,所述显示模块301,具体用于:
以所述对象人脸为中心,所述对象人脸的头顶方向为中心轴,并按照旋转运动的方式动态显示所述人脸面具序列,所述人脸面具序列中的人脸面具按照预设圆周方向进行排布。
根据本公开的一个或多个实施例,所述人脸面具序列中的人脸面具为所述对象人脸对应的缩放人脸面具。
根据本公开的一个或多个实施例,所述显示模块301,具体用于:
获取所述对象人脸上目标部位的特征参数;
根据所述特征参数确定所述旋转运动的旋转速度,并按照所述旋转速度动态显示所述人脸面具序列。
根据本公开的一个或多个实施例,所述获取模块302,还用于获取所述对象人脸的嘴部特征参数以及眼部特征参数,所述嘴部特征参数包括上嘴唇关键点坐标以及下嘴唇关键点坐标,所述眼部特征参数包括左眼关键点坐标以及右眼关键点坐标;
所述处理模块303,还用于根据所述上嘴唇关键点坐标以及所述下嘴唇关键点坐标确定在第一方向上的第一坐标差值,并根据所述左眼关键点坐标以及所述右眼关键点坐标确定在第二方向上的第二坐标差值;
所述处理模块303,还用于根据所述第一坐标差值以及所述第二坐标差值的比值确定所述特征参数。
根据本公开的一个或多个实施例,所述处理模块303,具体用于:
若所述特征参数小于或等于预设第一阈值,则所述旋转速度为第一预设速度;
若所述特征参数大于所述预设第一阈值,则所述旋转速度为所述第一预设速度与附加速度之和,其中,所述附加速度与特征参数差值成正比,所述特征参数差值为所述特征参数与所述预设第一阈值之差;
当所述第一预设速度与所述附加速度之和大于或等于第二预设速度时,则将所述旋转速度确定为所述第二预设速度。
根据本公开的一个或多个实施例,所述处理模块303,还用于确定所述目标人脸面具旋转至目标位置,所述目标位置与所述对象人脸之间符合预设位置关系。
根据本公开的一个或多个实施例,所述目标位置为所述对象人脸正前方的位置。
根据本公开的一个或多个实施例,当所述目标人脸面具旋转至所述目标位置时,将所述旋转运动的旋转速度下降至预设目标速度。
根据本公开的一个或多个实施例,所述处理模块303,还用于按照预设路径将所述目标人脸面具贴合至所述对象人脸上,并将所述目标人脸面具与所述对象人脸进行融合,以生成融合人脸图像,其中,所述预设路径从第一位置指向第二位置,所述第一位置为当前所述目标 人脸面具在所述人脸面具序列中所处的位置,所述第二位置为当前所述对象人脸所处的位置;
所述显示模块301,还用于在所述对象人脸上显示所述融合人脸图像。
根据本公开的一个或多个实施例,所述显示模块301,还用于按照预设透明度变化规则,对所述人脸面具序列中除所述目标人脸面具之外的其他人脸面具进行渐变消失。
根据本公开的一个或多个实施例,所述处理模块303,还用于根据所述人脸面具序列的各张人脸面具上各个纹素的当前位置参数以及所述对象人脸的位置参数确定渲染区域以及非渲染区域,并仅对所述渲染区域进行渲染。
根据本公开的一个或多个实施例,若所述纹素当前位置位于所述对象人脸所对应的位置范围之内,且在第三方向上位于所述对象人脸的特征关键点之后,则所述纹素属于所述非渲染区域。
根据本公开的一个或多个实施例,所述处理模块303,还用于根据所述对象人脸渲染所述正面区域;
所述处理模块303,还用于将所述背面区域渲染为预设固定纹理。
值得说明的,图9所示实施例提供的人脸图像显示装置,可用于执行上述任一实施例提供的方法,具体实现方式和技术效果类似,这里不再赘述。
图10为本公开根据一示例实施例示出的电子设备的结构示意图。如图10所示,其示出了适于用来实现本公开实施例的电子设备400的结构示意图。本公开实施例中的终端设备可以包括但不限于诸如移动电话、笔记本电脑、数字广播接收器、个人数字助理(Personal Digital Assistant,PDA)、平板电脑(Portable Android Device,PAD)、便携式多媒体播放器(Portable Media Player,PMP)、车载终端(例如车载导航终端)等等具有图像获取功能的移动终端以及诸如数字TV、台式计算机等等外接有具有图像获取设备的固定终端。图10示出的电子设备仅仅是一个示例,不应对本公开实施例的功能和使用范围带来任何限制。
如图10所示,电子设备400可以包括处理器(例如中央处理器、图形处理器等)401,其可以根据存储在只读存储器(Read-Only Memory,ROM)402中的程序或者从存储器408加载到随机访问存储器(Random Access Memory,RAM)403中的程序而执行各种适当的动作和处理。在RAM 403中,还存储有电子设备400操作所需的各种程序和数据。处理器401、ROM 402以及RAM 403通过总线404彼此相连。输入/输出(Input/Output,I/O)接口405也连接至总线404。存储器用于存储执行上述各个方法实施例所述方法的程序;处理器被配置为执行存储器中存储的程序。
通常,以下装置可以连接至I/O接口405:包括例如触摸屏、触摸板、键盘、鼠标、摄像头、麦克风、加速度计、陀螺仪等的输入装置406;包括例如液晶显示器(Liquid Crystal Display,LCD)、扬声器、振动器等的输出装置407;包括例如磁带、硬盘等的存储装置408;以及通信装置409。通信装置409可以允许电子设备400与其他设备进行无线或有线通信以交换数据。虽然图10示出了具有各种装置的电子设备400,但是应理解的是,并不要求实施或具备所有示出的装置。可以替代地实施或具备更多或更少的装置。
特别地,根据本公开的实施例,上文参考流程图描述的过程可以被实现为计算机软件程序。例如,本公开的实施例包括一种计算机程序产品,其包括承载在非暂态计算机可读介质上的计算机程序,该计算机程序包含用于执行本公开实施例的流程图所示的方法的程序代码。在这样的实施例中,该计算机程序可以通过通信装置409从网络上被下载和安装,或者从存 储装置408被安装,或者从ROM 402被安装。在该计算机程序被处理器401执行时,执行本公开实施例的方法中限定的上述功能。
需要说明的是,本公开上述的计算机可读介质可以是计算机可读信号介质或者计算机可读存储介质或者是上述两者的任意组合。计算机可读存储介质例如可以是——但不限于——电、磁、光、电磁、红外线、或半导体的系统、装置或器件,或者任意以上的组合。计算机可读存储介质的更具体的例子可以包括但不限于:具有一个或多个导线的电连接、便携式计算机磁盘、硬盘、随机访问存储器(RAM)、只读存储器(ROM)、可擦式可编程只读存储器(Electrically-Erasable Programmable Read-Only Memory,EPROM或闪存)、光纤、便携式紧凑磁盘只读存储器(Compact Disc Read-Only Memory,CD-ROM)、光存储器件、磁存储器件、或者上述的任意合适的组合。在本公开中,计算机可读存储介质可以是任何包含或存储程序的有形介质,该程序可以被指令执行系统、装置或者器件使用或者与其结合使用。而在本公开中,计算机可读信号介质可以包括在基带中或者作为载波一部分传播的数据信号,其中承载了计算机可读的程序代码。这种传播的数据信号可以采用多种形式,包括但不限于电磁信号、光信号或上述的任意合适的组合。计算机可读信号介质还可以是计算机可读存储介质以外的任何计算机可读介质,该计算机可读信号介质可以发送、传播或者传输用于由指令执行系统、装置或者器件使用或者与其结合使用的程序。计算机可读介质上包含的程序代码可以用任何适当的介质传输,包括但不限于:电线、光缆、射频(Radio Frequency,RF)等等,或者上述的任意合适的组合。
上述计算机可读介质可以是上述电子设备中所包含的;也可以是单独存在,而未装配入该电子设备中。
上述计算机可读介质承载有一个或者多个程序,当上述一个或者多个程序被该电子设备执行时,使得该电子设备:在对象人脸的预设相对位置,按照预设运动方式动态显示人脸面具序列,所述人脸面具序列包括多张所述对象人脸对应的人脸面具;响应于作用在目标人脸面具上的触发指令,将所述目标人脸面具融合至所述对象人脸进行显示,其中,所述目标人脸面具为所述人脸面具序列中的任一面具。
可以以一种或多种程序设计语言或其组合来编写用于执行本公开的操作的计算机程序代码,上述程序设计语言包括但不限于面向对象的程序设计语言—诸如Java、Smalltalk、C++,还包括常规的过程式程序设计语言—诸如“C”语言或类似的程序设计语言。程序代码可以完全地在用户计算机上执行、部分地在用户计算机上执行、作为一个独立的软件包执行、部分在用户计算机上部分在远程计算机上执行、或者完全在远程计算机或服务器上执行。在涉及远程计算机的情形中,远程计算机可以通过任意种类的网络——包括局域网(Local Area Network,LAN)或广域网(Wide Area Network,WAN)—连接到用户计算机,或者,可以连接到外部计算机(例如利用因特网服务提供商来通过因特网连接)。
在一些实施方式中,客户端、服务器可以利用诸如超文本传输协议(HyperText Transfer Protocol,HTTP)之类的任何当前已知或未来研发的网络协议进行通信,并且可以与任意形式或介质的数字数据通信(例如,通信网络)互连。通信网络的示例包括局域网(LAN),广域网(WAN),网际网(例如,互联网)以及端对端网络(例如,ad hoc端对端网络),以及任何当前已知或未来研发的网络。
附图中的流程图和框图,图示了按照本公开各种实施例的系统、方法和计算机程序产品 的可能实现的体系架构、功能和操作。在这点上,流程图或框图中的每个方框可以代表一个模块、程序段、或代码的一部分,该模块、程序段、或代码的一部分包含一个或多个用于实现规定的逻辑功能的可执行指令。也应当注意,在有些作为替换的实现中,方框中所标注的功能也可以以不同于附图中所标注的顺序发生。例如,两个接连地表示的方框实际上可以基本并行地执行,它们有时也可以按相反的顺序执行,这依所涉及的功能而定。也要注意的是,框图和/或流程图中的每个方框、以及框图和/或流程图中的方框的组合,可以用执行规定的功能或操作的专用的基于硬件的系统来实现,或者可以用专用硬件与计算机指令的组合来实现。
描述于本公开实施例中所涉及到的模块可以通过软件的方式实现,也可以通过硬件的方式来实现。其中,模块的名称在某种情况下并不构成对该单元本身的限定,例如,显示模块还可以被描述为“显示对象人脸以及人脸面具序列的单元”。
本文中以上描述的功能可以至少部分地由一个或多个硬件逻辑部件来执行。例如,非限制性地,可以使用的示范类型的硬件逻辑部件包括:现场可编程门阵列(Field Programmable Gate Array,FPGA)、专用集成电路(Application Specific Integrated Circuit,ASIC)、专用标准产品(Application Specific Standard Parts,ASSP)、片上系统(System on Chip,SOC)、复杂可编程逻辑设备(Complex Programmable logic device,CPLD)等等。
在本公开的上下文中,机器可读介质可以是有形的介质,其可以包含或存储以供指令执行系统、装置或设备使用或与指令执行系统、装置或设备结合地使用的程序。机器可读介质可以是机器可读信号介质或机器可读储存介质。机器可读介质可以包括但不限于电子的、磁性的、光学的、电磁的、红外的、或半导体系统、装置或设备,或者上述内容的任何合适组合。机器可读存储介质的更具体示例会包括基于一个或多个线的电气连接、便携式计算机盘、硬盘、随机存取存储器(RAM)、只读存储器(ROM)、可擦除可编程只读存储器(EPROM或快闪存储器)、光纤、便捷式紧凑盘只读存储器(CD-ROM)、光学储存设备、磁储存设备、或上述内容的任何合适组合。
第一方面,根据本公开的一个或多个实施例,提供了一种人脸图像显示方法,包括:
在对象人脸的预设相对位置,按照预设运动方式动态显示人脸面具序列,所述人脸面具序列包括多张所述对象人脸对应的人脸面具;
响应于作用在目标人脸面具上的触发指令,将所述目标人脸面具融合至所述对象人脸进行显示,其中,所述目标人脸面具为所述人脸面具序列中的任一面具。
根据本公开的一个或多个实施例,所述人脸面具序列包括至少一张所述对象人脸对应的变形人脸面具。
根据本公开的一个或多个实施例,所述在对象人脸的预设相对位置,按照预设运动方式动态显示人脸面具序列,包括:
以所述对象人脸为中心,所述对象人脸的头顶方向为中心轴,并按照旋转运动的方式动态显示所述人脸面具序列,所述人脸面具序列中的人脸面具按照预设圆周方向进行排布。
根据本公开的一个或多个实施例,所述人脸面具序列中的人脸面具为所述对象人脸对应的缩放人脸面具。
根据本公开的一个或多个实施例,所述按照旋转运动的方式动态显示所述人脸面具序列,包括:
获取所述对象人脸上目标部位的特征参数;
根据所述特征参数确定所述旋转运动的旋转速度,并按照所述旋转速度动态显示所述人脸面具序列。
根据本公开的一个或多个实施例,所述获取所述对象人脸上目标部位的特征参数,包括:
获取所述对象人脸的嘴部特征参数以及眼部特征参数,所述嘴部特征参数包括上嘴唇关键点坐标以及下嘴唇关键点坐标,所述眼部特征参数包括左眼关键点坐标以及右眼关键点坐标;
根据所述上嘴唇关键点坐标以及所述下嘴唇关键点坐标确定在第一方向上的第一坐标差值,并根据所述左眼关键点坐标以及所述右眼关键点坐标确定在第二方向上的第二坐标差值;
根据所述第一坐标差值以及所述第二坐标差值的比值确定所述特征参数。
根据本公开的一个或多个实施例,所述根据所述特征参数确定旋转速度,包括:
若所述特征参数小于或等于预设第一阈值,则所述旋转速度为第一预设速度;
若所述特征参数大于所述预设第一阈值,则所述旋转速度为所述第一预设速度与附加速度之和,其中,所述附加速度与特征参数差值成正比,所述特征参数差值为所述特征参数与所述预设第一阈值之差;
当所述第一预设速度与所述附加速度之和大于或等于第二预设速度时,则将所述旋转速度确定为所述第二预设速度。
根据本公开的一个或多个实施例,在所述将所述目标人脸面具融合至所述对象人脸进行显示之前,还包括:
确定所述目标人脸面具旋转至目标位置,所述目标位置与所述对象人脸之间符合预设位置关系。
根据本公开的一个或多个实施例,所述目标位置为所述对象人脸正前方的位置。
根据本公开的一个或多个实施例,当所述目标人脸面具旋转至所述目标位置时,将所述旋转运动的旋转速度下降至预设目标速度。
根据本公开的一个或多个实施例,所述将所述目标人脸面具融合至所述对象人脸进行显示,包括:
按照预设路径将所述目标人脸面具贴合至所述对象人脸上,并将所述目标人脸面具与所述对象人脸进行融合,以生成融合人脸图像,其中,所述预设路径从第一位置指向第二位置,所述第一位置为当前所述目标人脸面具在所述人脸面具序列中所处的位置,所述第二位置为当前所述对象人脸所处的位置;
在所述对象人脸上显示所述融合人脸图像。
根据本公开的一个或多个实施例,所述将所述目标人脸面具融合至所述对象人脸进行显示,还包括:
按照预设透明度变化规则,对所述人脸面具序列中除所述目标人脸面具之外的其他人脸面具进行渐变消失。
根据本公开的一个或多个实施例,所述的人脸图像显示方法,还包括:
根据所述人脸面具序列的各张人脸面具上各个纹素的当前位置参数以及所述对象人脸的位置参数确定渲染区域以及非渲染区域,并仅对所述渲染区域进行渲染。
根据本公开的一个或多个实施例,所述根据所述人脸面具序列的各张人脸面具上各个纹素的当前位置参数以及所述对象人脸的位置参数确定渲染区域以及非渲染区域,包括:
若所述纹素当前位置位于所述对象人脸所对应的位置范围之内,且在第三方向上位于所述对象人脸的特征关键点之后,则所述纹素属于所述非渲染区域。
根据本公开的一个或多个实施例,所述渲染区域包括正面区域与背面区域,所述对所述渲染区域进行渲染,包括:
根据所述对象人脸渲染所述正面区域;
将所述背面区域渲染为预设固定纹理。
第二方面,根据本公开的一个或多个实施例,提供了一种人脸图像显示装置,包括:
显示模块,用于在对象人脸的预设相对位置,按照预设运动方式动态显示人脸面具序列,所述人脸面具序列包括多张所述对象人脸对应的人脸面具;
获取模块,用于获取作用于目标人脸面具上的触发指令;
处理模块,还用于将所述目标人脸面具进行融合至所述对象人脸,其中,所述目标人脸面具为所述人脸面具序列中的任一面具;
所述显示模块,还用于对融合所述目标人脸面具后的所述对象人脸进行显示。
根据本公开的一个或多个实施例,所述人脸面具序列包括至少一张所述对象人脸对应的变形人脸面具。
根据本公开的一个或多个实施例,所述显示模块,具体用于:
以所述对象人脸为中心,所述对象人脸的头顶方向为中心轴,并按照旋转运动的方式动态显示所述人脸面具序列,所述人脸面具序列中的人脸面具按照预设圆周方向进行排布。
根据本公开的一个或多个实施例,所述人脸面具序列中的人脸面具为所述对象人脸对应的缩放人脸面具。
根据本公开的一个或多个实施例,所述显示模块,具体用于:
获取所述对象人脸上目标部位的特征参数;
根据所述特征参数确定所述旋转运动的旋转速度,并按照所述旋转速度动态显示所述人脸面具序列。
根据本公开的一个或多个实施例,所述获取模块,还用于获取所述对象人脸的嘴部特征参数以及眼部特征参数,所述嘴部特征参数包括上嘴唇关键点坐标以及下嘴唇关键点坐标,所述眼部特征参数包括左眼关键点坐标以及右眼关键点坐标;
所述处理模块,还用于根据所述上嘴唇关键点坐标以及所述下嘴唇关键点坐标确定在第一方向上的第一坐标差值,并根据所述左眼关键点坐标以及所述右眼关键点坐标确定在第二方向上的第二坐标差值;
所述处理模块,还用于根据所述第一坐标差值以及所述第二坐标差值的比值确定所述特征参数。
根据本公开的一个或多个实施例,所述处理模块,具体用于:
若所述特征参数小于或等于预设第一阈值,则所述旋转速度为第一预设速度;
若所述特征参数大于所述预设第一阈值,则所述旋转速度为所述第一预设速度与附加速度之和,其中,所述附加速度与特征参数差值成正比,所述特征参数差值为所述特征参数与所述预设第一阈值之差;
当所述第一预设速度与所述附加速度之和大于或等于第二预设速度时,则将所述旋转速度确定为所述第二预设速度。
根据本公开的一个或多个实施例,所述处理模块,还用于确定所述目标人脸面具旋转至目标位置,所述目标位置与所述对象人脸之间符合预设位置关系。
根据本公开的一个或多个实施例,所述目标位置为所述对象人脸正前方的位置。
根据本公开的一个或多个实施例,当所述目标人脸面具旋转至所述目标位置时,将所述旋转运动的旋转速度下降至预设目标速度。
根据本公开的一个或多个实施例,所述处理模块,还用于按照预设路径将所述目标人脸面具贴合至所述对象人脸上,并将所述目标人脸面具与所述对象人脸进行融合,以生成融合人脸图像,其中,所述预设路径从第一位置指向第二位置,所述第一位置为当前所述目标人脸面具在所述人脸面具序列中所处的位置,所述第二位置为当前所述对象人脸所处的位置;
所述显示模块,还用于在所述对象人脸上显示所述融合人脸图像。
根据本公开的一个或多个实施例,所述显示模块,还用于按照预设透明度变化规则,对所述人脸面具序列中除所述目标人脸面具之外的其他人脸面具进行渐变消失。
根据本公开的一个或多个实施例,所述处理模块,还用于根据所述人脸面具序列的各张人脸面具上各个纹素的当前位置参数以及所述对象人脸的位置参数确定渲染区域以及非渲染区域,并仅对所述渲染区域进行渲染。
根据本公开的一个或多个实施例,若所述纹素当前位置位于所述对象人脸所对应的位置范围之内,且在第三方向上位于所述对象人脸的特征关键点之后,则所述纹素属于所述非渲染区域。
根据本公开的一个或多个实施例,所述处理模块,还用于根据所述对象人脸渲染所述正面区域;
所述处理模块,还用于将所述背面区域渲染为预设固定纹理。
第三方面,本公开实施例提供一种电子设备,包括:至少一个处理器和存储器;
所述存储器存储计算机执行指令;
所述至少一个处理器执行所述存储器存储的计算机执行指令,使得所述至少一个处理器执行如上第一方面以及第一方面各种可能的设计中所述的人脸图像显示方法。
第四方面,本公开实施例提供一种计算机可读存储介质,所述计算机可读存储介质中存储有计算机执行指令,当处理器执行所述计算机执行指令时,实现如上第一方面以及第一方面各种可能的设计中所述的人脸图像显示方法。
第五方面,本公开实施例提供一种计算机程序产品,包括计算机程序,所述计算机程序被处理器执行时,实现如上第一方面以及第一方面各种可能的设计中所述的人脸图像显示方法。
第六方面,本公开实施例提供一种计算机程序,所述计算机程序被处理器执行时,实现如上第一方面以及第一方面各种可能的设计中所述的人脸图像显示方法。
以上描述仅为本公开的较佳实施例以及对所运用技术原理的说明。本领域技术人员应当理解,本公开中所涉及的公开范围,并不限于上述技术特征的特定组合而成的技术方案,同时也应涵盖在不脱离上述公开构思的情况下,由上述技术特征或其等同特征进行任意组合而形成的其它技术方案。例如上述特征与本公开中公开的(但不限于)具有类似功能的技术特征进行互相替换而形成的技术方案。
此外,虽然采用特定次序描绘了各操作,但是这不应当理解为要求这些操作以所示出的 特定次序或以顺序次序执行来执行。在一定环境下,多任务和并行处理可能是有利的。同样地,虽然在上面论述中包含了若干具体实现细节,但是这些不应当被解释为对本公开的范围的限制。在单独的实施例的上下文中描述的某些特征还可以组合地实现在单个实施例中。相反地,在单个实施例的上下文中描述的各种特征也可以单独地或以任何合适的子组合的方式实现在多个实施例中。
尽管已经采用特定于结构特征和/或方法逻辑动作的语言描述了本主题,但是应当理解所附权利要求书中所限定的主题未必局限于上面描述的特定特征或动作。相反,上面所描述的特定特征和动作仅仅是实现权利要求书的示例形式。

Claims (20)

  1. 一种人脸图像显示方法,其特征在于,包括:
    在对象人脸的预设相对位置,按照预设运动方式动态显示人脸面具序列,所述人脸面具序列包括多张所述对象人脸对应的人脸面具;响应于作用在目标人脸面具上的触发指令,将所述目标人脸面具融合至所述对象人脸进行显示,其中,所述目标人脸面具为所述人脸面具序列中的任一面具。
  2. 根据权利要求1所述的人脸图像显示方法,其特征在于,所述人脸面具序列包括至少一张所述对象人脸对应的变形人脸面具。
  3. 根据权利要求1或2所述的人脸图像显示方法,其特征在于,所述在对象人脸的预设相对位置,按照预设运动方式动态显示人脸面具序列,包括:
    以所述对象人脸为中心,所述对象人脸的头顶方向为中心轴,并按照旋转运动的方式动态显示所述人脸面具序列,所述人脸面具序列中的人脸面具按照预设圆周方向进行排布。
  4. 根据权利要求3所述的人脸图像显示方法,其特征在于,所述人脸面具序列中的人脸面具为所述对象人脸对应的缩放人脸面具。
  5. 根据权利要求3或4所述的人脸图像显示方法,其特征在于,所述按照旋转运动的方式动态显示所述人脸面具序列,包括:
    获取所述对象人脸上目标部位的特征参数;
    根据所述特征参数确定所述旋转运动的旋转速度,并按照所述旋转速度动态显示所述人脸面具序列。
  6. 根据权利要求5所述的人脸图像显示方法,其特征在于,所述获取所述对象人脸上目标部位的特征参数,包括:
    获取所述对象人脸的嘴部特征参数以及眼部特征参数,所述嘴部特征参数包括上嘴唇关键点坐标以及下嘴唇关键点坐标,所述眼部特征参数包括左眼关键点坐标以及右眼关键点坐标;
    根据所述上嘴唇关键点坐标以及所述下嘴唇关键点坐标确定在第一方向上的第一坐标差值,并根据所述左眼关键点坐标以及所述右眼关键点坐标确定在第二方向上的第二坐标差值;
    根据所述第一坐标差值以及所述第二坐标差值的比值确定所述特征参数。
  7. 根据权利要求6所述的人脸图像显示方法,其特征在于,所述根据所述特征参数确定所述旋转运动的旋转速度,包括:
    若所述特征参数小于或等于预设第一阈值,则所述旋转速度为第一预设速度;
    若所述特征参数大于所述预设第一阈值,则所述旋转速度为所述第一预设速度与附加速度之和,其中,所述附加速度与特征参数差值成正比,所述特征参数差值为所述特征参数与所述预设第一阈值之差;
    当所述第一预设速度与所述附加速度之和大于或等于第二预设速度时,则将所述旋转速度确定为所述第二预设速度。
  8. 根据权利要求3-7中任意一项所述的人脸图像显示方法,其特征在于,在所述将所述目标人脸面具融合至所述对象人脸进行显示之前,还包括:
    确定所述目标人脸面具旋转至目标位置,所述目标位置与所述对象人脸之间符合预设位置关系。
  9. 根据权利要求8所述的人脸图像显示方法,其特征在于,所述目标位置为所述对象人脸正前方的位置。
  10. 根据权利要求9所述的人脸图像显示方法,其特征在于,当所述目标人脸面具旋转至所述目标位置时,将所述旋转运动的旋转速度下降至预设目标速度。
  11. 根据权利要求1-10中任意一项所述的人脸图像显示方法,其特征在于,所述将所述目标人脸面具融合至所述对象人脸进行显示,包括:
    按照预设路径将所述目标人脸面具贴合至所述对象人脸上,并将所述目标人脸面具与所述对象人脸进行融合,以生成融合人脸图像,其中,所述预设路径从第一位置指向第二位置,所述第一位置为当前所述目标人脸面具在所述人脸面具序列中所处的位置,所述第二位置为当前所述对象人脸所处的位置;
    在所述对象人脸上显示所述融合人脸图像。
  12. 根据权利要求11所述的人脸图像显示方法,其特征在于,所述将所述目标人脸面具融合至所述对象人脸进行显示,还包括:
    按照预设透明度变化规则,对所述人脸面具序列中除所述目标人脸面具之外的其他人脸面具进行渐变消失。
  13. 根据权利要求1-12中任意一项所述的人脸图像显示方法,其特征在于,还包括:
    根据所述人脸面具序列的各张人脸面具上各个纹素的当前位置参数以及所述对象人脸的位置参数确定渲染区域以及非渲染区域,并仅对所述渲染区域进行渲染。
  14. 根据权利要求13所述的人脸图像显示方法,其特征在于,所述根据所述人脸面具序列的各张人脸面具上各个纹素的当前位置参数以及所述对象人脸的位置参数确定渲染区域以及非渲染区域,包括:
    若所述纹素当前位置位于所述对象人脸所对应的位置范围之内,且在第三方向上位于所述对象人脸的特征关键点之后,则所述纹素属于所述非渲染区域。
  15. 根据权利要求14所述的人脸图像显示方法,其特征在于,所述渲染区域包括正面区域与背面区域,所述对所述渲染区域进行渲染,包括:
    根据所述对象人脸渲染所述正面区域;
    将所述背面区域渲染为预设固定纹理。
  16. 一种人脸图像显示装置,其特征在于,包括:
    显示模块,用于在对象人脸的预设相对位置,按照预设运动方式动态显示人脸面具序列,所述人脸面具序列包括多张所述对象人脸对应的人脸面具;
    获取模块,用于获取作用于目标人脸面具上的触发指令;
    处理模块,还用于将所述目标人脸面具进行融合至所述对象人脸,其中,所述目标人脸面具为所述人脸面具序列中的任一面具;
    所述显示模块,还用于对融合所述目标人脸面具后的所述对象人脸进行显示。
  17. 一种电子设备,其特征在于,包括:至少一个处理器和存储器;
    所述存储器存储计算机执行指令;
    所述至少一个处理器执行所述存储器存储的计算机执行指令,使得所述至少一个处 理器执行如权利要求1-15中任意一项所述的人脸图像显示方法。
  18. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质中存储有计算机执行指令,当处理器执行所述计算机执行指令时,实现如权利要求1-15中任意一项所述的人脸图像显示方法。
  19. 一种计算机程序产品,包括计算机程序,其特征在于,所述计算机程序被处理器执行时,实现如权利要求1-15中任意一项所述的人脸图像显示方法。
  20. 一种计算机程序,其特征在于,所述计算机程序被处理器执行时,实现如权利要求1-15中任意一项所述的人脸图像显示方法。
PCT/CN2021/114237 2020-09-17 2021-08-24 人脸图像显示方法、装置、电子设备及存储介质 WO2022057576A1 (zh)

Priority Applications (5)

Application Number Priority Date Filing Date Title
JP2023507827A JP2023537721A (ja) 2020-09-17 2021-08-24 顔画像表示方法、装置、電子機器及び記憶媒体
EP21868404.1A EP4177724A4 (en) 2020-09-17 2021-08-24 FACE IMAGE DISPLAY METHOD AND APPARATUS, ELECTRONIC DEVICE AND STORAGE MEDIUM
KR1020237003737A KR20230034351A (ko) 2020-09-17 2021-08-24 얼굴 이미지 표시 방법, 장치, 전자기기 및 저장매체
BR112023001930A BR112023001930A2 (pt) 2020-09-17 2021-08-24 Método e aparelho de exibição de imagem facial, dispositivo eletrônico e meio de armazenamento
US18/060,128 US11935176B2 (en) 2020-09-17 2022-11-30 Face image displaying method and apparatus, electronic device, and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010981627.5 2020-09-17
CN202010981627.5A CN112099712B (zh) 2020-09-17 2020-09-17 人脸图像显示方法、装置、电子设备及存储介质

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/060,128 Continuation US11935176B2 (en) 2020-09-17 2022-11-30 Face image displaying method and apparatus, electronic device, and storage medium

Publications (1)

Publication Number Publication Date
WO2022057576A1 true WO2022057576A1 (zh) 2022-03-24

Family

ID=73760305

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/114237 WO2022057576A1 (zh) 2020-09-17 2021-08-24 人脸图像显示方法、装置、电子设备及存储介质

Country Status (7)

Country Link
US (1) US11935176B2 (zh)
EP (1) EP4177724A4 (zh)
JP (1) JP2023537721A (zh)
KR (1) KR20230034351A (zh)
CN (1) CN112099712B (zh)
BR (1) BR112023001930A2 (zh)
WO (1) WO2022057576A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112099712B (zh) 2020-09-17 2022-06-07 北京字节跳动网络技术有限公司 人脸图像显示方法、装置、电子设备及存储介质


Family Cites Families (15)

US8028250B2 (en) * 2004-08-31 2011-09-27 Microsoft Corporation User interface having a carousel view for representing structured data
US20130080976A1 (en) * 2011-09-28 2013-03-28 Microsoft Corporation Motion controlled list scrolling
CN107852443B (zh) * 2015-07-21 2020-01-07 索尼公司 信息处理设备、信息处理方法和程序
US10430867B2 (en) * 2015-08-07 2019-10-01 SelfieStyler, Inc. Virtual garment carousel
US20170092002A1 (en) * 2015-09-30 2017-03-30 Daqri, Llc User interface for augmented reality system
CN105357466A (zh) * 2015-11-20 2016-02-24 小米科技有限责任公司 视频通信方法及装置
US20180000179A1 (en) * 2016-06-30 2018-01-04 Alan Jeffrey Simon Dynamic face mask with configurable electronic display
US10534809B2 (en) * 2016-08-10 2020-01-14 Zeekit Online Shopping Ltd. Method, system, and device of virtual dressing utilizing image processing, machine learning, and computer vision
CN107247548B (zh) * 2017-05-31 2018-09-04 腾讯科技(深圳)有限公司 图像显示方法、图像处理方法及装置
CN107820591A (zh) * 2017-06-12 2018-03-20 美的集团股份有限公司 控制方法、控制器、智能镜子和计算机可读存储介质
CN109410119A (zh) * 2017-08-18 2019-03-01 北京凤凰都市互动科技有限公司 面具图像变形方法及其系统
CN109034063A (zh) * 2018-07-27 2018-12-18 北京微播视界科技有限公司 人脸特效的多人脸跟踪方法、装置和电子设备
US11457196B2 (en) * 2019-08-28 2022-09-27 Snap Inc. Effects for 3D data in a messaging system
US11381756B2 (en) * 2020-02-14 2022-07-05 Snap Inc. DIY effects image modification
EP4136623A4 (en) * 2020-04-13 2024-05-01 Snap Inc AUGMENTED REALITY CONTENT GENERATORS WITH 3D DATA IN A MESSAGE DELIVERY SYSTEM

Patent Citations (5)

US20170163958A1 (en) * 2015-12-04 2017-06-08 Le Holdings (Beijing) Co., Ltd. Method and device for image rendering processing
CN110619615A (zh) * 2018-12-29 2019-12-27 北京时光荏苒科技有限公司 用于处理图像方法和装置
CN110322416A (zh) * 2019-07-09 2019-10-11 腾讯科技(深圳)有限公司 图像数据处理方法、装置以及计算机可读存储介质
CN110992493A (zh) * 2019-11-21 2020-04-10 北京达佳互联信息技术有限公司 图像处理方法、装置、电子设备及存储介质
CN112099712A (zh) * 2020-09-17 2020-12-18 北京字节跳动网络技术有限公司 人脸图像显示方法、装置、电子设备及存储介质

Non-Patent Citations (1)

See also references of EP4177724A4 *

Also Published As

Publication number Publication date
US20230090457A1 (en) 2023-03-23
KR20230034351A (ko) 2023-03-09
CN112099712B (zh) 2022-06-07
US11935176B2 (en) 2024-03-19
EP4177724A1 (en) 2023-05-10
EP4177724A4 (en) 2023-12-06
BR112023001930A2 (pt) 2023-03-28
CN112099712A (zh) 2020-12-18
JP2023537721A (ja) 2023-09-05

Similar Documents

Publication Publication Date Title
CN110766777B (zh) 虚拟形象的生成方法、装置、电子设备及存储介质
WO2021139408A1 (zh) 显示特效的方法、装置、存储介质及电子设备
EP3713220A1 (en) Video image processing method and apparatus, and terminal
CN110782515A (zh) 虚拟形象的生成方法、装置、电子设备及存储介质
WO2022007627A1 (zh) 一种图像特效的实现方法、装置、电子设备及存储介质
US11587280B2 (en) Augmented reality-based display method and device, and storage medium
WO2022088928A1 (zh) 弹性对象的渲染方法、装置、设备及存储介质
WO2023179346A1 (zh) 特效图像处理方法、装置、电子设备及存储介质
CN112053449A (zh) 基于增强现实的显示方法、设备及存储介质
CN112470164A (zh) 姿态校正
CN112766215A (zh) 人脸融合方法、装置、电子设备及存储介质
TW201901615A (zh) 改善圖像品質的方法及虛擬實境裝置
JP7467780B2 (ja) 画像処理方法、装置、デバイス及び媒体
WO2022057576A1 (zh) 人脸图像显示方法、装置、电子设备及存储介质
US11494961B2 (en) Sticker generating method and apparatus, and medium and electronic device
US20230298265A1 (en) Dynamic fluid effect processing method and apparatus, and electronic device and readable medium
WO2023207354A1 (zh) 特效视频确定方法、装置、电子设备及存储介质
US20230368422A1 (en) Interactive dynamic fluid effect processing method and device, and electronic device
CN114049403A (zh) 一种多角度三维人脸重建方法、装置及存储介质
CN109685881B (zh) 一种体绘制方法、装置及智能设备
CN109472855B (zh) 一种体绘制方法、装置及智能设备
WO2023025181A1 (zh) 图像识别方法、装置和电子设备
RU2802724C1 (ru) Способ и устройство обработки изображений, электронное устройство и машиночитаемый носитель информации
WO2021218118A1 (zh) 图像处理方法及装置
WO2023020283A1 (zh) 图像处理方法、装置、设备、介质及程序产品

Legal Events

- 121 — EP: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 21868404; Country of ref document: EP; Kind code: A1)
- ENP — Entry into the national phase (Ref document number: 20237003737; Country of ref document: KR; Kind code: A)
- ENP — Entry into the national phase (Ref document number: 2023507827; Country of ref document: JP; Kind code: A; Ref document number: 2021868404; Country of ref document: EP; Effective date: 20230131)
- REG — Reference to national code (Ref country code: BR; Ref legal event code: B01A; Ref document number: 112023001930; Country of ref document: BR)
- ENP — Entry into the national phase (Ref document number: 112023001930; Country of ref document: BR; Kind code: A2; Effective date: 20230201)
- NENP — Non-entry into the national phase (Ref country code: DE)