WO2022057576A1 - Face image display method, apparatus, electronic device and storage medium
- Publication number: WO2022057576A1 (application PCT/CN2021/114237)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- face
- face mask
- target
- preset
- displaying
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/005—General purpose rendering architectures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T13/00—Animation
- G06T13/20—3D [Three Dimensional] animation
- G06T13/40—3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/04845—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/04842—Selection of displayed objects or displayed text elements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/04—Context-preserving transformations, e.g. by using an importance map
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2219/00—Indexing scheme for manipulating 3D models or images for computer graphics
- G06T2219/20—Indexing scheme for editing of 3D models
- G06T2219/2016—Rotation, translation, scaling
Definitions
- the present disclosure relates to the technical field of image processing, and in particular, to a method, device, electronic device and storage medium for displaying a face image.
- although existing applications can process various types of face images and display the processed face images, they cannot meet users' need for diverse interaction methods during the display process.
- the present disclosure provides a method, device, electronic device and storage medium for displaying a face image, which address the current inability to meet users' demand for diverse interaction modes in the process of displaying a face image.
- an embodiment of the present disclosure provides a method for displaying a face image, including:
- at a preset relative position of the subject face, dynamically displaying a face mask sequence according to a preset motion mode, where the face mask sequence includes a plurality of face masks corresponding to the subject face;
- in response to a trigger instruction acting on the target face mask, fusing the target face mask to the subject face for display, where the target face mask is any mask in the face mask sequence.
- an embodiment of the present disclosure provides a device for displaying a face image, including:
- a display module configured to dynamically display a face mask sequence at a preset relative position of the subject face according to a preset motion mode, where the face mask sequence includes a plurality of face masks corresponding to the subject face;
- an acquisition module configured to acquire the trigger instruction acting on the target face mask;
- a processing module configured to fuse the target face mask to the subject face, where the target face mask is any mask in the face mask sequence;
- the display module is further configured to display the subject face after fusion with the target face mask.
- embodiments of the present disclosure provide an electronic device, including: at least one processor and a memory;
- the memory stores computer-executable instructions
- the at least one processor executes the computer-executable instructions stored in the memory, so that the at least one processor executes the method for displaying a face image as described in the first aspect and various possible designs of the first aspect above.
- embodiments of the present disclosure provide a computer-readable storage medium in which computer-executable instructions are stored; when a processor executes the computer-executable instructions, the face image display method described in the first aspect and the various possible designs of the first aspect is implemented.
- embodiments of the present disclosure provide a computer program product, including a computer program that, when executed by a processor, implements the face image display method described in the first aspect and various possible designs of the first aspect.
- an embodiment of the present disclosure provides a computer program that, when executed by a processor, implements the method for displaying a face image as described in the first aspect and various possible designs of the first aspect.
- the face image display method, device, electronic device, and storage medium provided by the embodiments of the present disclosure dynamically display a face mask sequence according to a preset motion mode at a preset relative position of a subject face; after the user triggers a target face mask in the face mask sequence, the target face mask is fused to the subject face for display. This enhances the interactivity of the face image display process, and fusing the triggered target face mask to the subject face achieves the effect of displaying a specific face mask on the subject face.
- FIG. 1 is a schematic flowchart of a method for displaying a face image according to an exemplary embodiment of the present disclosure
- FIG. 2 is a schematic diagram of generating a single face mask according to an exemplary embodiment of the present disclosure
- FIG. 3 is a schematic diagram of generating a face mask sequence according to an exemplary embodiment of the present disclosure
- FIG. 4 is a schematic diagram of a triggering process of a method for displaying a face image according to an exemplary embodiment of the present disclosure
- FIG. 5 is a schematic flowchart of a post-trigger fusion display step according to an exemplary embodiment of the present disclosure
- FIG. 6 is a schematic flowchart of a method for displaying a face image according to another exemplary embodiment of the present disclosure
- FIG. 7 is a schematic flowchart of a rendering step according to an exemplary embodiment of the present disclosure.
- FIG. 8 is a schematic diagram showing the result after rendering of the rendering step shown in FIG. 7;
- FIG. 9 is a schematic structural diagram of a face image display device according to an exemplary embodiment of the present disclosure.
- FIG. 10 is a schematic structural diagram of an electronic device according to an exemplary embodiment of the present disclosure.
- the term "including" and variations thereof are open-ended inclusions, i.e., "including but not limited to".
- the term “based on” is “based at least in part on.”
- the term “one embodiment” means “at least one embodiment”; the term “another embodiment” means “at least one additional embodiment”; the term “some embodiments” means “at least some embodiments”. Relevant definitions of other terms will be given in the description below.
- facial image processing can be performed on the face image
- many application scenarios have also been derived, but they are all applied directly to the processed face image, which cannot satisfy users' need for diverse interaction methods during the display of the face image.
- a face mask sequence is first generated according to the subject's face.
- the face mask sequence includes multiple face masks.
- a face mask in the face mask sequence may be a face mask corresponding directly to the subject face, or a face mask generated after related processing (for example, deformation or beautification) is performed on the subject face.
- the face mask sequence may include multiple masks corresponding to different face shapes.
- the face masks in the face mask sequence may be arranged according to a specific distribution rule.
- the face masks in the face mask sequence may be arranged along a preset circumference centered on the subject face, with the top-of-head direction of the subject face as the central axis, so that the face mask sequence appears to surround the subject face.
- the face mask sequence can also be dynamically displayed according to a preset motion mode, for example, the face mask sequence can be rotated around the subject's face.
- the rotation speed of the face mask sequence around the subject face may be determined according to the user's physical characteristics, for example, the degree of the user's mouth opening, the user's smile, or related gestures.
- taking the degree of the user's mouth opening as an example: the wider the user's mouth is opened, the faster the face mask sequence rotates around the subject face. The user can therefore speed up or slow down the rotation of the face mask sequence by adjusting the degree of mouth opening.
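As a sketch, the mouth-opening control described above might map a normalized openness value to a rotation speed. The function name, base speed, and gain below are illustrative assumptions, not values from the disclosure:

```python
def rotation_speed(mouth_openness: float,
                   base_speed: float = 30.0,
                   max_extra_speed: float = 120.0) -> float:
    """Return a rotation speed in degrees/second.

    mouth_openness is assumed normalized to [0, 1]; a wider opening
    yields a faster rotation, as described in the text.
    """
    openness = min(max(mouth_openness, 0.0), 1.0)  # clamp to valid range
    return base_speed + max_extra_speed * openness
```

A closed mouth would leave the sequence rotating at the base speed rather than stopping it, which is one possible design choice.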
- when the user triggers any mask in the face mask sequence, for example by tapping the screen to select a mask, that mask becomes the target face mask. After triggering, the rotation speed of the face mask sequence around the subject face first decreases until the target face mask moves to face the subject face; at this point the rotation of the face mask sequence can be stopped, and the target face mask is fused to the subject face for display. Further, the other face masks in the face mask sequence may gradually disappear according to a preset transparency change rule.
- the face mask sequence is dynamically displayed at the preset relative position of the subject face according to the preset motion mode, and after the user triggers the target face mask in the face mask sequence, the target face mask is fused to the subject face for display. Thus, before triggering, the motion of the face mask sequence creates an interactive effect with the subject face, and after triggering, fusing the target face mask to the subject face achieves the effect of displaying a specific face mask on the subject face.
- FIG. 1 is a schematic flowchart of a method for displaying a face image according to an exemplary embodiment of the present disclosure. As shown in FIG. 1 , the method for displaying a face image provided by this embodiment includes:
- Step 101 Dynamically display a face mask sequence at a preset relative position of the subject's face according to a preset motion mode.
- the face mask sequence may be dynamically displayed at a preset relative position of the subject face according to a preset motion mode, where the face mask sequence may include multiple face masks corresponding to the subject face.
- a face mask in the face mask sequence may be generated without processing the subject face, or generated after related processing (for example, deformation or beautification) is performed on the subject face.
- a related 3D face processing tool may be used for the above-mentioned generation of a face mask from the subject face.
- the face mask sequence may include multiple masks corresponding to different face shapes.
- the face masks in the face mask sequence may be arranged according to a specific distribution rule.
- the face masks in the face mask sequence may be arranged along a preset circumference, or arranged sequentially in a preset direction.
- the face masks in the face mask sequence can be arranged along a preset circumference centered on the subject face, with the top-of-head direction of the subject face as the central axis.
- the face mask sequence can also be dynamically displayed according to a preset motion mode, for example, the face mask sequence can be rotated around the subject's face.
- the face mask sequence can also slide past the front of the subject face in the order in which the face masks are arranged.
- the specific arrangement of the face mask sequence and the relative motion mode between the face mask sequence and the subject face are not limited in this embodiment, and the specific form can be adapted to specific scene requirements; this embodiment only exemplarily illustrates that there is a specific relative positional relationship and relative motion relationship between the subject face and the face mask sequence.
- Step 102 In response to the triggering instruction acting on the target face mask, the target face mask is fused to the target face for display.
- when the user triggers any mask in the face mask sequence, for example by tapping the screen to select a face mask, that mask becomes the target face mask. After triggering, the rotation speed of the face mask sequence around the subject face first decreases until the target face mask moves to face the subject face; at this point the rotation of the face mask sequence can be stopped, and the target face mask is fused to the subject face for display. Further, the other face masks in the face mask sequence may gradually disappear according to a preset transparency change rule.
- the face mask sequence is dynamically displayed at the preset relative position of the subject face according to the preset motion mode, and after the user triggers the target face mask in the face mask sequence, the target face mask is fused to the subject face for display, thereby enhancing the interactivity of the face image display process.
- the target face mask can be fused to the subject face for display, achieving the effect of displaying a specific face mask on the subject face and enhancing the fun and experience of user interaction.
- the face masks included in the face mask sequence may include the original face mask corresponding to the subject face, and may also include deformed face masks generated by processing the original face mask with a 3D face processing tool. For example, the face mask sequence may include 8 face masks: 2 original face masks and 6 deformed face masks.
- 8 face mask entities can be created first, a pre-designed 3D model with deformation effects is then imported, and a deformer is added to each face mask entity to adjust the degree of the corresponding deformation. Since each face mask deforms differently, each deformation form must be imported separately; multiple deformers may also be applied to the same model.
- FIG. 2 is a schematic diagram of generating a single face mask according to an exemplary embodiment of the present disclosure. As shown in FIG. 2, the original positions of the face mask's vertices in the current model space before deformation, and their position offsets after deformation, can be obtained. Then, since the face mask must always be displayed facing outward relative to the subject face, the displacement operation must be performed first, followed by the rotation operation.
- the original position of each vertex plus its deformation offset may be further offset along the Z axis, so as to displace the mask from the original position of the subject face.
- the generated face mask is also scaled, for example to 90% of its original size, so that each face mask in the face mask sequence is a scaled face mask corresponding to the subject face. In this way, a face mask scaled to a certain ratio can be displayed at a position offset from the subject face by the preset distance; the specific effect can be seen in FIG. 2.
- taking the subject face as the center and the top-of-head direction of the subject face as the central axis (for example, the Y axis), the face mask is rotated by the following rotation matrix:
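The rotation matrix itself is not reproduced in this text; for a rotation by angle θ about the Y (top-of-head) axis, the standard matrix, which matches the operation described, would be:

```latex
R_y(\theta) =
\begin{pmatrix}
\cos\theta & 0 & \sin\theta \\
0 & 1 & 0 \\
-\sin\theta & 0 & \cos\theta
\end{pmatrix}
```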
- in this way, a single face mask, scaled to 90% of its original size, moves along a circle of a specified radius centered on the subject face with the top-of-head direction as the central axis, with the front of the face mask always facing outward.
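The displace-then-rotate transform described above can be sketched as follows. The scale factor, circle radius, and coordinate conventions are illustrative assumptions:

```python
import math

SCALE = 0.9      # scale the mask to 90% of its original size
Z_OFFSET = 1.0   # radius of the circle around the subject face (assumed)

def transform_vertex(pos, deform_offset, angle_deg):
    """Apply deformation, scaling, Z displacement, then Y-axis rotation.

    pos and deform_offset are (x, y, z) tuples in model space; the order
    (displace first, rotate second) keeps the mask front facing outward.
    """
    # 1) apply the deformation offset and scale
    x = (pos[0] + deform_offset[0]) * SCALE
    y = (pos[1] + deform_offset[1]) * SCALE
    z = (pos[2] + deform_offset[2]) * SCALE
    # 2) displace along the Z axis, away from the subject face
    z += Z_OFFSET
    # 3) rotate about the Y (top-of-head) axis
    a = math.radians(angle_deg)
    rx = math.cos(a) * x + math.sin(a) * z
    rz = -math.sin(a) * x + math.cos(a) * z
    return (rx, y, rz)
```

Rotating the origin vertex by 0 degrees leaves it at the Z offset; rotating it by 90 degrees carries it onto the X axis, tracing the circle around the face.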
- the face mask sequence can be dynamically displayed according to the preset motion mode at the preset relative position of the object face.
- the case where the face mask sequence includes 8 face masks can be illustrated as an example:
- FIG. 3 is a schematic diagram of generating a face mask sequence according to an exemplary embodiment of the present disclosure.
- each of the 8 face masks can be assigned the same offset displacement and scaling ratio.
- the rotation angles are assigned in turn according to the initialization order, so that the 8 initialized face masks are placed at 45-degree intervals and the face mask sequence forms a complete circle; the face masks in the face mask sequence are thus arranged along the preset circumference.
- the 8 face masks can also be assigned different deformation models and configured with an algorithm for controlling face mask selection, so that the movement of these face masks can be controlled uniformly in a system-level script.
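The 45-degree initialization described above can be sketched as follows. Mask numbering from 1 to 8 follows the text; the function name is an assumption:

```python
def init_mask_angles(count: int = 8):
    """Assign initial rotation angles at equal intervals.

    With 8 masks the step is 45 degrees, so the sequence forms a
    complete circle around the subject face.
    """
    step = 360.0 / count
    return {number: (number - 1) * step for number in range(1, count + 1)}
```

Each mask then shares the same offset and scale, differing only in its assigned angle (and deformation model).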
- FIG. 4 is a schematic diagram of a triggering process of a method for displaying a face image according to an exemplary embodiment of the present disclosure. As shown in Figure 4, the user can select a mask in the face mask sequence as the target face mask by clicking on the screen.
- after triggering, the rotation speed of the face mask sequence around the subject face first decreases until the target face mask moves to face the subject face; at this point the rotation of the face mask sequence can be stopped, and the target face mask is fused to the subject face for display. Further, the other face masks in the face mask sequence may gradually disappear according to a preset transparency change rule.
- FIG. 5 is a schematic flowchart of a post-trigger fusion display step according to an exemplary embodiment of the present disclosure.
- step 102 in the above embodiment may specifically include:
- Step 1020 Obtain a trigger instruction acting on the target face mask.
- the user can select a mask in the face mask sequence as the target face mask by clicking on the screen.
- Step 1021 Determine whether the rotation speed is less than the preset target speed. If the judgment result is yes, go to step 1022; if the judgment result is no, go to step 1023.
- Step 1022 Set the current rotation angle.
- Step 1023 Reduce the rotation speed.
- Step 1024 Determine whether the rotation speed is less than the preset target speed. If the judgment result is yes, go to step 1025; if the judgment result is no, go to step 1023.
- Step 1025 Calculate the target rotation angle.
- after the target face mask in the face mask sequence is triggered, it is necessary to determine whether the rotation speed is less than the preset target speed, where the preset target speed may be a rotation speed threshold. If so, the current rotation of the face masks is slow, and the target face mask can be controlled to move to the target position (for example, directly in front of the subject face) by directly calculating the target rotation angle, after which the subsequent fusion display operations are performed. If, however, the current rotation speed is greater than the preset target speed and the target face mask is moved directly to the target position by calculating the target rotation angle, the motion of the face mask sequence changes too abruptly, resulting in a poor interactive experience.
- the rotation speed needs to be reduced first, and after the rotation speed is reduced to less than the preset target speed, the subsequent calculation of the target rotation angle is performed to control the movement of the target face mask to the target position.
- the target rotation angle is the rotation angle when the movement of the face mask stops.
- continuing the example of a face mask sequence with 8 face masks: since one face mask must face the user when the rotation stops, with a rotation angle of 0, the target rotation angle must be an integer multiple of 45 degrees.
- the target rotation angle of each face mask in the face mask sequence can be calculated by the following formula:
- θ is the target rotation angle
- θ′ is the current rotation angle
- floor(·) denotes the largest integer not greater than the value in the brackets; adding 1 inside the brackets ensures that the direction from the current angle to the target angle is consistent with the direction of the rotation speed. The current rotation angle is the angle currently corresponding to each face mask.
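A sketch of the target-angle rule described above: the positive-speed branch follows the floor-plus-one description in the text, while the negative-speed branch is an assumed mirror rule, since the original formula is not reproduced here:

```python
import math

def target_rotation_angle(current_deg: float, speed: float) -> float:
    """Return the nearest 45-degree multiple ahead of current_deg
    in the direction of rotation."""
    if speed >= 0:
        # floor(theta'/45 + 1) keeps the target ahead of the current angle
        return 45.0 * math.floor(current_deg / 45.0 + 1)
    # assumed mirror rule when rotating in the negative direction
    return 45.0 * math.ceil(current_deg / 45.0 - 1)
```

For example, a mask at 50 degrees rotating forward stops at 90 degrees; rotating backward, it stops at 45 degrees.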
- Step 1026 Record the serial number of the face mask rotated to the target position.
- each face mask in the face mask sequence can be assigned a serial number, and the serial number is used to uniquely identify each face mask.
- the second face mask is assigned the number 2
- the third face mask is assigned the number 3
- the numbers are assigned in turn until the eighth face mask is assigned the number 8.
- the number of the face mask that has rotated to the target position is recorded at each moment. For example, at the first moment the first face mask (number 1) rotates to the target position, and at the second moment the second face mask (number 2) rotates to the target position.
- Step 1027 Determine whether the target face mask is rotated to the target position. If the judgment result is yes, go to step 1025; if the judgment result is no, go to step 1028 and step 1029.
- when the target face mask is facing the subject face, that is, when the rotation angle of the target face mask is 0, the target face mask has reached the target position, and the rotation of the face mask sequence can be stopped at this time.
- the number corresponding to the target face mask may be recorded, and during the rotation of the face mask sequence, if the number recorded at the current moment is the number corresponding to the target face mask, the target face mask is facing the subject face, and the rotation of the face mask sequence can be stopped at this time.
- for example, if the target face mask triggered by the user is the third face mask, number 3 is recorded; during the rotation of the face mask sequence, when the face mask corresponding to number 3 rotates to the target position, the rotation of the face mask sequence is stopped.
- Step 1028 Display the fused face image on the face of the object.
- the target face mask may be moved to the subject face along a preset path and fused with the subject face to generate a fused face image, where the preset path points from a first position to a second position, the first position being the current position of the target face mask in the face mask sequence and the second position being the current position of the subject face.
- the offset of the face masks in the Z-axis direction may be reduced, so that all face masks move toward the subject face and the radius of the circle shrinks.
- the fused face image is displayed on the subject's face.
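The movement along the preset path from the first position (in the mask sequence) to the second position (on the subject face) can be sketched as a linear interpolation; the interpolation itself is an assumption, since the text only specifies the path endpoints:

```python
def move_along_path(first_pos, second_pos, t: float):
    """Interpolate a mask position along the preset path.

    t in [0, 1]: 0 = the mask's position in the sequence,
    1 = fused onto the subject face. Shrinking t's Z component
    also reduces the circle radius, as described in the text.
    """
    t = min(max(t, 0.0), 1.0)  # clamp progress to the path
    return tuple(a + (b - a) * t for a, b in zip(first_pos, second_pos))
```

Advancing t frame by frame moves the target mask smoothly onto the face before the fusion display.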
- Step 1029 Gradually fade out the other face masks.
- the alpha channel of the face mask not facing the subject's face may be gradually reduced, wherein the alpha channel is used to adjust the transparency, so that the face mask not facing the subject's face will gradually become transparent.
- the implementation can refer to the following formula:
- alpha = max(alpha′ − λ_t · Δt, 0)
- alpha is the transparency of the face mask in the current frame
- alpha′ is the transparency of the face mask in the previous frame
- λ_t is the time coefficient
- Δt is the time difference between the current frame and the previous frame.
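The fade-out rule described above can be sketched directly in code: each frame, the alpha of a non-facing mask decreases by the time coefficient times the frame time, clamped at zero so the mask ends fully transparent:

```python
def fade_alpha(prev_alpha: float, time_coeff: float, dt: float) -> float:
    """Linear alpha fade: alpha = max(alpha' - lambda_t * dt, 0)."""
    return max(prev_alpha - time_coeff * dt, 0.0)
```

Clamping at zero ensures repeated frames cannot drive the transparency negative once the mask has disappeared.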
- FIG. 6 is a schematic flowchart of a method for displaying a face image according to another exemplary embodiment of the present disclosure. As shown in FIG. 6 , the method for displaying a face image provided by this embodiment includes:
- Step 201 Determine a rendering area and a non-rendering area according to the current position parameters of each texel on each face mask of the face mask sequence and the position parameters of the target face, and only render the rendering area.
- each face mask needs to be rendered before the face mask sequence is displayed. If each face mask were rendered indiscriminately in all directions, the rendering calculations would be excessive and take up too many computing resources. Therefore, the rendering area and the non-rendering area can be determined according to the current position parameters of each texel on each face mask of the face mask sequence and the position parameters of the subject face, and only the rendering area is rendered.
- FIG. 7 is a schematic flowchart of a rendering step according to an exemplary embodiment of the present disclosure. As shown in FIG. 7 , the above rendering steps in this embodiment may include:
- Step 2012 If it is a frontal area, determine whether the current position of the texel is within the position range corresponding to the target face, and is located behind the feature key point of the target face in the third direction. If the judgment result is yes, execute step 2015; if the judgment result is no, execute step 2014.
- the third direction may be a direction in which there is a visual space occlusion between the subject's face and the face mask.
- Step 2013 If it is the back area, determine whether the current position of the texel is within the position range corresponding to the target face, and is located behind the feature key point of the target face in the third direction. If the judgment result is yes, go to step 2015; if the judgment result is no, go to step 2016.
- the front area and the back area of the face mask can be rendered through corresponding processing procedures, wherein the first processing procedure is used only to render the front area, and the second processing procedure is used only to render the back area.
- the two processing procedures can share a texel shader for rendering, in which the texel shader samples the current subject face to obtain a real-time portrait mask, so as to render the face mask.
- whether a texel needs to be rendered is determined according to whether the current position of the texel is within the position range corresponding to the target face, and whether it is located, in the third direction (for example, the Z-axis direction), behind a feature key point of the target face (for example, the sideburns position of the target face).
- when the current position of the texel is within the position range corresponding to the subject's face, the texel may be in front of or behind the subject's face. However, if the current position of the texel is located, in the Z-axis direction, in front of the sideburns of the subject's face, it can be determined that the texel is in front of the subject's face and visible to the user; therefore, this texel needs to be rendered.
- Step 2014 Render the frontal area according to the face of the object.
- rendering can be performed according to the object face to display the specific appearance of the object face corresponding to the face mask.
- Step 2015 Do not render the texels of the face mask located in the non-rendering area.
- for texels of the face mask located in the non-rendering area, whether they belong to the front area or the back area, no rendering is performed; it can be understood that the texels in this area are set to be transparent.
- Step 2016 Render the back area to a preset fixed texture.
- for texels that are in the rendering area but belong to the back area, rendering can be performed according to a preset fixed texture; for example, they can be rendered in gray.
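Steps 2012 to 2016 amount to a per-texel decision, sketched below; the face bounding box, the depth convention (larger z means farther from the camera), and the gray fallback color are illustrative assumptions.

```python
GRAY = (128, 128, 128)  # preset fixed texture used for the back area

def shade_texel(x, y, z, is_front, face_bounds, keypoint_z, sample_face):
    """Return a color for the texel, or None if it is not rendered (transparent).

    face_bounds: (x0, y0, x1, y1) range covered by the target face.
    keypoint_z:  depth of the feature key point (e.g., the sideburns).
    """
    x0, y0, x1, y1 = face_bounds
    inside = x0 <= x <= x1 and y0 <= y <= y1
    if inside and z > keypoint_z:      # behind the key point: occluded by the face
        return None                    # step 2015: non-rendering area
    if is_front:
        return sample_face(x, y)       # step 2014: sample the subject face
    return GRAY                        # step 2016: preset fixed texture
```

A texel outside the face's range, or in front of the key point, is rendered: from the live face for the front area (area A in FIG. 8) or with the fixed texture for the back area (area B).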
- FIG. 8 is a schematic diagram of a post-rendering display result of the rendering step shown in FIG. 7 .
- area A is located in the rendering area and belongs to the frontal area, so it is rendered according to the subject face to display the specific appearance of the subject face corresponding to the face mask.
- area B is located in the rendering area and belongs to the back area, so it is rendered, for example, in gray.
- Step 202 At a preset relative position of the subject's face, dynamically display a face mask sequence according to a preset motion mode.
- for the specific implementation of step 202 in this embodiment, reference may be made to the specific description of step 101 in the embodiment shown in FIG. 1.
- when the face mask sequence is dynamically displayed according to the preset motion mode, the display can be controlled according to the user's physical characteristics, for example, the user's degree of mouth opening, the user's degree of smiling, and related user gestures.
- take the opening degree of the user's mouth as an example.
- the rotation speed at which the face mask sequence rotates around the subject's face can increase as the opening degree of the user's mouth increases; that is, the larger the opening of the user's mouth, the faster the face mask sequence rotates around the subject's face. Thus, the user can speed up or slow down the rotation of the face mask sequence around the subject's face by adjusting the degree of mouth opening.
- the characteristic parameters of the target part on the face of the subject may be obtained first, then the rotation speed is determined according to the characteristic parameters, and the face mask sequence is dynamically displayed according to the rotation speed.
- the mouth feature parameters and the eye feature parameters of the subject face can be obtained, wherein the mouth feature parameters include the coordinates of the upper lip key point and the coordinates of the lower lip key point, and the eye feature parameters include the coordinates of the left eye key point and the coordinates of the right eye key point.
- the first coordinate difference in the first direction (e.g., the Y axis) is determined according to the coordinates of the upper lip key point and the coordinates of the lower lip key point, and the second coordinate difference in the second direction (e.g., the X axis) is determined according to the coordinates of the left eye key point and the coordinates of the right eye key point.
- a characteristic parameter is determined according to the ratio of the first coordinate difference value and the second coordinate difference value, wherein the characteristic parameter can be used to characterize the degree of mouth opening. It should be noted that by determining the mouth opening degree by the ratio between the first coordinate difference value and the second coordinate difference value, fluctuations in the mouth opening degree caused by changes in the distance of the subject face from the camera can be avoided.
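Under the assumption that the key points are 2-D pixel coordinates, the distance-invariant characteristic parameter described above can be sketched as:

```python
def mouth_opening(upper_lip, lower_lip, left_eye, right_eye):
    """Ratio of the lip gap (Y axis) to the inter-eye distance (X axis)."""
    first_diff = abs(upper_lip[1] - lower_lip[1])    # first coordinate difference
    second_diff = abs(left_eye[0] - right_eye[0])    # second coordinate difference
    return first_diff / second_diff

# Scaling every key point by the same factor (the face moving closer to or
# farther from the camera) leaves the ratio unchanged.
near = mouth_opening((100, 120), (100, 160), (60, 80), (140, 80))  # 40 / 80
far = mouth_opening((50, 60), (50, 80), (30, 40), (70, 40))        # 20 / 40
```

Because both differences scale together with apparent face size, the ratio reflects only how wide the mouth is open, which is exactly the invariance the paragraph above relies on.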
- if the characteristic parameter is less than or equal to a preset first threshold, the rotation speed is the first preset speed. If the characteristic parameter is greater than the preset first threshold, the rotation speed is the sum of the first preset speed and an additional speed, wherein the additional speed is proportional to the characteristic parameter difference, and the characteristic parameter difference is the difference between the characteristic parameter and the preset first threshold. When the sum of the first preset speed and the additional speed is greater than or equal to a second preset speed, the rotation speed is determined as the second preset speed.
- the rotation speed of the face mask in the current frame can be calculated according to the degree of mouth opening.
- the specific calculation formula is as follows:
- ω = min(ω_min + max(D − d, 0) × λ_ω, ω_max)
- ω is the rotation speed
- ω_min is the minimum rotation speed
- D is the mouth opening degree
- d is the mouth opening detection threshold
- λ_ω is the speed coefficient
- ω_max is the maximum rotation speed.
- the mouth opening detection threshold means that the mouth is determined to be open only when the mouth opening degree is greater than this threshold
- the speed coefficient refers to a constant that needs to be multiplied when converting the mouth opening degree parameter into the rotation speed.
- the above-mentioned preset first threshold is the mouth opening detection threshold d
- the first preset speed is the minimum rotation speed ω_min
- the additional speed is (D − d) × λ_ω
- the second preset speed is the maximum rotation speed ω_max.
- the rotation angle of each face mask in the current frame can also be set through the rotation speed determined above.
- the specific formula is as follows: θ = θ′ + ω × Δt, where θ is the rotation angle of the face mask in the current frame, θ′ is the rotation angle of the face mask in the previous frame, ω is the rotation speed determined above, and Δt is the time difference between the current frame and the previous frame.
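The speed formula and the per-frame angle update it drives can be sketched together; all numeric constants below (threshold, speed limits, coefficient) are illustrative assumptions, not values from the disclosure.

```python
def rotation_speed(D, d, w_min, w_max, speed_coeff):
    """omega = min(omega_min + max(D - d, 0) * lambda_omega, omega_max)."""
    return min(w_min + max(D - d, 0.0) * speed_coeff, w_max)

def advance_angle(prev_angle, D, dt, d=0.2, w_min=30.0, w_max=180.0,
                  speed_coeff=300.0):
    """New angle = previous angle + rotation speed * frame time (degrees)."""
    return prev_angle + rotation_speed(D, d, w_min, w_max, speed_coeff) * dt

# A closed mouth (D <= d) rotates at the minimum speed; a wide-open mouth
# saturates at the maximum speed.
slow = rotation_speed(0.1, 0.2, 30.0, 180.0, 300.0)   # minimum: 30.0 deg/s
fast = rotation_speed(0.9, 0.2, 30.0, 180.0, 300.0)   # capped at 180.0 deg/s
```

The clamp between ω_min and ω_max keeps the masks rotating even with a closed mouth while bounding the speed for very wide openings.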
- Step 203 In response to the triggering instruction acting on the target face mask, the target face mask is fused to the target face for display.
- step 203 in this embodiment, reference may be made to the specific description of step 102 in the embodiment shown in FIG. 1 , which will not be repeated here.
- FIG. 9 is a schematic structural diagram of a face image display device according to an exemplary embodiment of the present disclosure.
- a human face image display device 300 provided in this embodiment includes:
- a display module 301 is used to dynamically display a face mask sequence at a preset relative position of the subject's face according to a preset motion mode, and the face mask sequence includes a plurality of face masks corresponding to the subject face;
- an obtaining module 302 configured to obtain a trigger instruction acting on the target face mask
- the processing module 303 is further configured to fuse the target face mask to the target face, wherein the target face mask is any mask in the face mask sequence;
- the display module 301 is further configured to display the target face after fusion with the target face mask.
- the face mask sequence includes at least one deformed face mask corresponding to the subject face.
- the display module 301 is specifically used for:
- the face masks in the face mask sequence are arranged along a preset circumferential direction.
- the face masks in the face mask sequence are scaled face masks corresponding to the subject face.
- the display module 301 is specifically used for:
- the rotation speed of the rotation motion is determined according to the characteristic parameter, and the face mask sequence is dynamically displayed according to the rotation speed.
- the obtaining module 302 is further configured to obtain mouth feature parameters and eye feature parameters of the subject's face, where the mouth feature parameters include upper lip key point coordinates and lower lip key point coordinates, and the eye feature parameters include left eye key point coordinates and right eye key point coordinates;
- the processing module 303 is further configured to determine the first coordinate difference in the first direction according to the upper lip key point coordinates and the lower lip key point coordinates, and according to the left eye key point coordinates and the The coordinates of the right eye key point determine the second coordinate difference in the second direction;
- the processing module 303 is further configured to determine the characteristic parameter according to the ratio of the first coordinate difference and the second coordinate difference.
- the processing module 303 is specifically configured to:
- if the characteristic parameter is less than or equal to a preset first threshold, the rotation speed is the first preset speed;
- if the characteristic parameter is greater than the preset first threshold, the rotation speed is the sum of the first preset speed and an additional speed, wherein the additional speed is proportional to the characteristic parameter difference, and the characteristic parameter difference is the difference between the characteristic parameter and the preset first threshold;
- when the sum of the first preset speed and the additional speed is greater than or equal to a second preset speed, the rotation speed is determined as the second preset speed.
- the processing module 303 is further configured to determine that the target face mask is rotated to a target position, and the target position and the target face conform to a preset positional relationship.
- the target position is a position directly in front of the subject's face.
- the rotational speed of the rotational motion is reduced to a preset target speed.
- the processing module 303 is further configured to fit the target face mask to the subject face according to a preset path, and fuse the target face mask with the subject face to generate a fused face image, wherein the preset path points from a first position to a second position, the first position is the current position of the target face mask in the face mask sequence, and the second position is the current position of the subject's face;
- the display module 301 is further configured to display the fused face image on the subject's face.
- the display module 301 is further configured to gradually fade out the face masks in the face mask sequence other than the target face mask according to a preset transparency change rule.
- the processing module 303 is further configured to determine a rendering area and a non-rendering area according to the current position parameter of each texel on each face mask of the face mask sequence and the position parameter of the subject face, and to render only the rendering area.
- the texel belongs to the non-rendering area if the current position of the texel is within the position range corresponding to the target face, and is located behind the feature key point of the target face in the third direction.
- the processing module 303 is further configured to render the frontal area according to the face of the object;
- the processing module 303 is further configured to render the back area into a preset fixed texture.
- the device for displaying a face image provided by the embodiment shown in FIG. 9 can be used to execute the method provided by any of the above embodiments, and the specific implementation manner and technical effect are similar, and are not repeated here.
- FIG. 10 is a schematic structural diagram of an electronic device according to an exemplary embodiment of the present disclosure. As shown in FIG. 10 , it shows a schematic structural diagram of an electronic device 400 suitable for implementing an embodiment of the present disclosure.
- Terminal devices in the embodiments of the present disclosure may include, but are not limited to, such as mobile phones, notebook computers, digital broadcast receivers, personal digital assistants (Personal Digital Assistant, PDA), tablet computers (Portable Android Device, PAD), portable multimedia players (Portable Media Player, PMP), in-vehicle terminals (for example, in-vehicle navigation terminals) and other mobile terminals with image acquisition functions, as well as fixed terminals with image acquisition devices such as digital TVs, desktop computers, and the like.
- the electronic device 400 may include a processor (e.g., a central processing unit, a graphics processing unit, etc.) 401, which may perform various appropriate actions and processes based on a program stored in a read-only memory (ROM) 402 or a program loaded from a storage device 408 into a random access memory (RAM) 403. The RAM 403 also stores various programs and data required for the operation of the electronic device 400.
- the processor 401, the ROM 402, and the RAM 403 are connected to each other through a bus 404.
- An Input/Output (I/O) interface 405 is also connected to the bus 404 .
- the memory is used to store programs for executing the methods described in the above method embodiments; the processor is configured to execute the programs stored in the memory.
- the following devices can be connected to the I/O interface 405: input devices 406 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 407 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, etc.; a storage device 408 including, for example, a magnetic tape, a hard disk, etc.; and a communication device 409.
- Communication means 409 may allow electronic device 400 to communicate wirelessly or by wire with other devices to exchange data.
- although FIG. 10 shows the electronic device 400 having various devices, it should be understood that not all of the illustrated devices are required to be implemented or provided; more or fewer devices may alternatively be implemented or provided.
- embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer-readable medium, the computer program containing program code for performing the methods shown in the flowcharts of the embodiments of the present disclosure.
- the computer program may be downloaded and installed from the network via the communication device 409, or from the storage device 408, or from the ROM 402.
- when the computer program is executed by the processor 401, the above-mentioned functions defined in the methods of the embodiments of the present disclosure are performed.
- the computer-readable medium mentioned above in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the above two.
- the computer-readable storage medium can be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus or device, or a combination of any of the above.
- computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), an optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
- a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device.
- a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave with computer-readable program code embodied thereon. Such propagated data signals may take a variety of forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
- a computer-readable signal medium can also be any computer-readable medium other than a computer-readable storage medium that can transmit, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
- the program code embodied on the computer-readable medium may be transmitted by any suitable medium, including but not limited to: electric wire, optical fiber cable, radio frequency (RF), etc., or any suitable combination of the above.
- the above-mentioned computer-readable medium may be included in the above-mentioned electronic device; or may exist alone without being assembled into the electronic device.
- the above-mentioned computer-readable medium carries one or more programs, and when the one or more programs are executed by the electronic device, the electronic device: dynamically displays a face mask sequence at a preset relative position of the subject's face according to a preset motion mode, the face mask sequence including a plurality of face masks corresponding to the target face; and, in response to a triggering instruction acting on a target face mask, fuses the target face mask to the target face for display, wherein the target face mask is any mask in the face mask sequence.
- computer program code for performing the operations of the present disclosure may be written in one or more programming languages or combinations thereof, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
- the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server.
- the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or can be connected to an external computer (for example, through the Internet using an Internet service provider).
- clients and servers can communicate using any currently known or future-developed network protocol, such as HyperText Transfer Protocol (HTTP), and can be interconnected with digital data communication in any form or medium (e.g., a communication network).
- examples of communication networks include local area networks (LANs), wide area networks (WANs), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future-developed network.
- each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which contains one or more executable instructions for implementing the specified logical functions.
- the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
- each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by dedicated hardware-based systems that perform the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
- the modules involved in the embodiments of the present disclosure may be implemented in software or hardware.
- the name of the module does not constitute a limitation of the unit itself in some cases, for example, the display module can also be described as "a unit that displays the face of the object and the sequence of face masks".
- exemplary types of hardware logic components that may be used include: Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), and so on.
- a machine-readable medium may be a tangible medium that may contain or store a program for use by or in connection with the instruction execution system, apparatus or device.
- the machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
- machine-readable media may include, but are not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatuses, or devices, or any suitable combination of the foregoing.
- machine-readable storage media may include one or more wire-based electrical connections, a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, compact disc read-only memory (CD-ROM), optical storage, magnetic storage, or any suitable combination of the foregoing.
- a method for displaying a face image including:
- at a preset relative position of the object face, dynamically display a face mask sequence according to a preset motion mode, the face mask sequence including a plurality of face masks corresponding to the object face;
- in response to a trigger instruction acting on a target face mask, fuse the target face mask to the target face for display, wherein the target face mask is any mask in the face mask sequence.
- the face mask sequence includes at least one deformed face mask corresponding to the subject face.
- the dynamic display of a face mask sequence at a preset relative position of the subject's face according to a preset motion mode includes:
- the face masks in the face mask sequence are arranged along a preset circumferential direction.
- the face masks in the face mask sequence are scaled face masks corresponding to the subject face.
- the dynamic display of the face mask sequence according to a rotational motion includes:
- the rotation speed of the rotation motion is determined according to the characteristic parameter, and the face mask sequence is dynamically displayed according to the rotation speed.
- the acquiring the characteristic parameters of the target part on the face of the subject includes:
- the mouth feature parameters include the coordinates of the upper lip key point and the lower lip key point coordinates
- the eye feature parameters include the left eye key point coordinates and the right eye key point coordinates
- the first coordinate difference in the first direction is determined according to the coordinates of the upper lip key point and the coordinates of the lower lip key point
- the second coordinate difference in the second direction is determined according to the coordinates of the left eye key point and the coordinates of the right eye key point;
- the characteristic parameter is determined according to the ratio of the first coordinate difference and the second coordinate difference.
- the determining the rotation speed according to the characteristic parameter includes:
- if the characteristic parameter is less than or equal to a preset first threshold, the rotation speed is the first preset speed;
- if the characteristic parameter is greater than the preset first threshold, the rotation speed is the sum of the first preset speed and an additional speed, wherein the additional speed is proportional to the characteristic parameter difference, and the characteristic parameter difference is the difference between the characteristic parameter and the preset first threshold;
- when the sum of the first preset speed and the additional speed is greater than or equal to a second preset speed, the rotation speed is determined as the second preset speed.
- before the fusing of the target face mask to the target face for display, the method further includes:
- the target face mask is rotated to a target position, and the target position and the target face conform to a preset positional relationship.
- the target position is a position directly in front of the subject's face.
- the rotational speed of the rotational motion is reduced to a preset target speed.
- the displaying by fusing the target face mask to the target face includes:
- fit the target face mask to the target face according to a preset path, and fuse the target face mask with the target face to generate a fused face image, wherein the preset path points from a first position to a second position, the first position is the current position of the target face mask in the face mask sequence, and the second position is the current position of the target face;
- the fused face image is displayed on the subject's face.
- the displaying by fusing the target face mask to the target face further includes:
- the face masks in the face mask sequence other than the target face mask are gradually faded out.
- the method for displaying a face image further includes:
- the rendering area and the non-rendering area are determined according to the current position parameter of each texel on each face mask of the face mask sequence and the position parameter of the object face, and only the rendering area is rendered.
- the determining of the rendering area and the non-rendering area according to the current position parameter of each texel on each face mask of the face mask sequence and the position parameter of the object face includes:
- if the current position of the texel is within the position range corresponding to the target face, and is located behind the feature key point of the target face in the third direction, the texel belongs to the non-rendering area.
- the rendering area includes a front area and a back area
- the rendering of the rendering area includes:
- a device for displaying a face image including:
- a display module configured to dynamically display a face mask sequence at a preset relative position of the subject face according to a preset motion mode, where the face mask sequence includes a plurality of face masks corresponding to the subject face;
- the acquisition module is used to acquire the trigger instruction acting on the target face mask
- a processing module further configured to fuse the target face mask to the target face, wherein the target face mask is any mask in the face mask sequence;
- the display module is further configured to display the target face after fusion with the target face mask.
- the face mask sequence includes at least one deformed face mask corresponding to the subject face.
- the display module is specifically used for:
- the face masks in the face mask sequence are arranged along a preset circumferential direction.
- the face masks in the face mask sequence are scaled face masks corresponding to the subject face.
- the display module is specifically used for:
- the rotation speed of the rotation motion is determined according to the characteristic parameter, and the face mask sequence is dynamically displayed according to the rotation speed.
- the acquisition module is further configured to acquire mouth feature parameters and eye feature parameters of the subject's face, where the mouth feature parameters include upper lip key point coordinates and lower lip key point coordinates, and the eye feature parameters include left eye key point coordinates and right eye key point coordinates;
- the processing module is further configured to determine the first coordinate difference in the first direction according to the upper lip key point coordinates and the lower lip key point coordinates, and determine the first coordinate difference in the first direction according to the left eye key point coordinates and the right The eye key point coordinates determine the second coordinate difference in the second direction;
- the processing module is further configured to determine the characteristic parameter according to the ratio of the first coordinate difference and the second coordinate difference.
- the processing module is specifically configured to:
- if the characteristic parameter is less than or equal to a preset first threshold, the rotation speed is the first preset speed;
- if the characteristic parameter is greater than the preset first threshold, the rotation speed is the sum of the first preset speed and an additional speed, wherein the additional speed is proportional to the characteristic parameter difference, and the characteristic parameter difference is the difference between the characteristic parameter and the preset first threshold;
- when the sum of the first preset speed and the additional speed is greater than or equal to a second preset speed, the rotation speed is determined as the second preset speed.
- the processing module is further configured to determine that the target face mask is rotated to a target position, and the target position and the target face conform to a preset positional relationship.
- the target position is a position directly in front of the subject's face.
- the rotational speed of the rotational motion is reduced to a preset target speed.
- the processing module is further configured to fit the target face mask to the subject face according to a preset path, and fuse the target face mask with the target face to generate a fused face image, wherein the preset path points from a first position to a second position, the first position is the current position of the target face mask in the face mask sequence, and the second position is the current position of the subject's face;
- the display module is further configured to display the fused face image on the subject's face.
- the display module is further configured to gradually fade out the face masks in the face mask sequence other than the target face mask according to a preset transparency change rule.
- the processing module is further configured to determine a rendering area and a non-rendering area according to the current position parameter of each texel on each face mask in the face mask sequence and the position parameter of the subject face, and render only the rendering area.
- a texel belongs to the non-rendering area if the current position of the texel is within the position range corresponding to the subject face and is located behind the feature key points of the subject face in the third direction.
- the rendering area includes a front area and a back area; the processing module is further configured to render the front area according to the subject face;
- the processing module is further configured to render the back area as a preset fixed texture.
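The per-texel culling rule can be sketched as below: a texel is skipped when it falls inside the subject face's position range and, along the third (depth) direction, lies behind the face's feature key points. The bounding-box representation of the face's position range and the "larger z is farther" depth convention are assumptions, not details from the patent.

```python
def is_rendered(texel_pos, face_bbox, face_key_depth):
    """Return True if the texel belongs to the rendering area.

    texel_pos: (x, y, z) current position of the texel.
    face_bbox: (x_min, y_min, x_max, y_max) position range of the subject face (assumed).
    face_key_depth: depth of the face's feature key points; larger z is farther (assumed).
    """
    x, y, z = texel_pos
    x_min, y_min, x_max, y_max = face_bbox
    inside_face_range = x_min <= x <= x_max and y_min <= y <= y_max
    behind_key_points = z > face_key_depth  # behind the face in the third direction
    # Only texels both inside the face range and behind it are culled.
    return not (inside_face_range and behind_key_points)
```

This matches the stated intent of avoiding wasted work on mask portions hidden behind the subject's head.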
- embodiments of the present disclosure provide an electronic device, including: at least one processor and a memory;
- the memory stores computer-executable instructions
- the at least one processor executes the computer-executable instructions stored in the memory, so that the at least one processor executes the method for displaying a face image as described in the first aspect and various possible designs of the first aspect above.
- embodiments of the present disclosure provide a computer-readable storage medium, where computer-executable instructions are stored in the computer-readable storage medium, and when a processor executes the computer-executable instructions, the face image display method described in the first aspect and the various possible designs of the first aspect is implemented.
- embodiments of the present disclosure provide a computer program product, including a computer program that, when executed by a processor, implements the face image display method described in the first aspect and various possible designs of the first aspect.
- an embodiment of the present disclosure provides a computer program that, when executed by a processor, implements the method for displaying a face image as described in the first aspect and various possible designs of the first aspect.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Human Computer Interaction (AREA)
- Computer Graphics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Software Systems (AREA)
- Computer Hardware Design (AREA)
- Architecture (AREA)
- Processing Or Creating Images (AREA)
- Image Processing (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
Description
Claims (20)
- A face image display method, comprising: dynamically displaying a face mask sequence at a preset relative position of a subject face according to a preset motion mode, the face mask sequence comprising a plurality of face masks corresponding to the subject face; and in response to a trigger instruction acting on a target face mask, fusing the target face mask to the subject face for display, wherein the target face mask is any mask in the face mask sequence.
- The face image display method according to claim 1, wherein the face mask sequence comprises at least one deformed face mask corresponding to the subject face.
- The face image display method according to claim 1 or 2, wherein the dynamically displaying a face mask sequence at a preset relative position of a subject face according to a preset motion mode comprises: dynamically displaying the face mask sequence in a rotational motion with the subject face as the center and the overhead direction of the subject face as the central axis, wherein the face masks in the face mask sequence are arranged along a preset circumferential direction.
- The face image display method according to claim 3, wherein the face masks in the face mask sequence are scaled face masks corresponding to the subject face.
- The face image display method according to claim 3 or 4, wherein the dynamically displaying the face mask sequence in a rotational motion comprises: acquiring a characteristic parameter of a target part on the subject face; and determining a rotation speed of the rotational motion according to the characteristic parameter, and dynamically displaying the face mask sequence at the rotation speed.
- The face image display method according to claim 5, wherein the acquiring a characteristic parameter of a target part on the subject face comprises: acquiring a mouth characteristic parameter and an eye characteristic parameter of the subject face, the mouth characteristic parameter comprising upper lip key point coordinates and lower lip key point coordinates, and the eye characteristic parameter comprising left eye key point coordinates and right eye key point coordinates; determining a first coordinate difference in a first direction according to the upper lip key point coordinates and the lower lip key point coordinates, and determining a second coordinate difference in a second direction according to the left eye key point coordinates and the right eye key point coordinates; and determining the characteristic parameter according to the ratio of the first coordinate difference to the second coordinate difference.
- The face image display method according to claim 6, wherein the determining a rotation speed of the rotational motion according to the characteristic parameter comprises: if the characteristic parameter is less than or equal to a preset first threshold, the rotation speed is a first preset speed; if the characteristic parameter is greater than the preset first threshold, the rotation speed is the sum of the first preset speed and an additional speed, wherein the additional speed is proportional to a characteristic parameter difference, and the characteristic parameter difference is the difference between the characteristic parameter and the preset first threshold; and when the sum of the first preset speed and the additional speed is greater than or equal to a second preset speed, the rotation speed is determined as the second preset speed.
- The face image display method according to any one of claims 3-7, wherein before the fusing the target face mask to the subject face for display, the method further comprises: determining that the target face mask has rotated to a target position, wherein the target position and the subject face conform to a preset positional relationship.
- The face image display method according to claim 8, wherein the target position is a position directly in front of the subject face.
- The face image display method according to claim 9, wherein when the target face mask rotates to the target position, the rotation speed of the rotational motion is reduced to a preset target speed.
- The face image display method according to any one of claims 1-10, wherein the fusing the target face mask to the subject face for display comprises: attaching the target face mask to the subject face along a preset path, and fusing the target face mask with the subject face to generate a fused face image, wherein the preset path points from a first position to a second position, the first position is the current position of the target face mask in the face mask sequence, and the second position is the current position of the subject face; and displaying the fused face image on the subject face.
- The face image display method according to claim 11, wherein the fusing the target face mask to the subject face for display further comprises: gradually fading out, according to a preset transparency change rule, the face masks in the face mask sequence other than the target face mask.
- The face image display method according to any one of claims 1-12, further comprising: determining a rendering area and a non-rendering area according to the current position parameter of each texel on each face mask in the face mask sequence and the position parameter of the subject face, and rendering only the rendering area.
- The face image display method according to claim 13, wherein the determining a rendering area and a non-rendering area according to the current position parameter of each texel on each face mask in the face mask sequence and the position parameter of the subject face comprises: if the current position of a texel is within the position range corresponding to the subject face and is located behind the feature key points of the subject face in a third direction, the texel belongs to the non-rendering area.
- The face image display method according to claim 14, wherein the rendering area comprises a front area and a back area, and the rendering the rendering area comprises: rendering the front area according to the subject face; and rendering the back area as a preset fixed texture.
- A face image display apparatus, comprising: a display module configured to dynamically display a face mask sequence at a preset relative position of a subject face according to a preset motion mode, the face mask sequence comprising a plurality of face masks corresponding to the subject face; an acquisition module configured to acquire a trigger instruction acting on a target face mask; and a processing module configured to fuse the target face mask to the subject face, wherein the target face mask is any mask in the face mask sequence; the display module being further configured to display the subject face after fusion with the target face mask.
- An electronic device, comprising: at least one processor and a memory; wherein the memory stores computer-executable instructions; and the at least one processor executes the computer-executable instructions stored in the memory, so that the at least one processor performs the face image display method according to any one of claims 1-15.
- A computer-readable storage medium, wherein computer-executable instructions are stored in the computer-readable storage medium, and when a processor executes the computer-executable instructions, the face image display method according to any one of claims 1-15 is implemented.
- A computer program product, comprising a computer program, wherein when the computer program is executed by a processor, the face image display method according to any one of claims 1-15 is implemented.
- A computer program, wherein when the computer program is executed by a processor, the face image display method according to any one of claims 1-15 is implemented.
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020237003737A KR20230034351A (ko) | 2020-09-17 | 2021-08-24 | 얼굴 이미지 표시 방법, 장치, 전자기기 및 저장매체 |
JP2023507827A JP7560208B2 (ja) | 2020-09-17 | 2021-08-24 | 顔画像表示方法、装置、電子機器及び記憶媒体 |
EP21868404.1A EP4177724A4 (en) | 2020-09-17 | 2021-08-24 | FACE IMAGE DISPLAY METHOD AND APPARATUS, ELECTRONIC DEVICE AND STORAGE MEDIUM |
BR112023001930A BR112023001930A2 (pt) | 2020-09-17 | 2021-08-24 | Método e aparelho de exibição de imagem facial, dispositivo eletrônico e meio de armazenamento |
US18/060,128 US11935176B2 (en) | 2020-09-17 | 2022-11-30 | Face image displaying method and apparatus, electronic device, and storage medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010981627.5 | 2020-09-17 | ||
CN202010981627.5A CN112099712B (zh) | 2020-09-17 | 2020-09-17 | 人脸图像显示方法、装置、电子设备及存储介质 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US18/060,128 Continuation US11935176B2 (en) | 2020-09-17 | 2022-11-30 | Face image displaying method and apparatus, electronic device, and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022057576A1 true WO2022057576A1 (zh) | 2022-03-24 |
Family
ID=73760305
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/114237 WO2022057576A1 (zh) | 2020-09-17 | 2021-08-24 | 人脸图像显示方法、装置、电子设备及存储介质 |
Country Status (7)
Country | Link |
---|---|
US (1) | US11935176B2 (zh) |
EP (1) | EP4177724A4 (zh) |
JP (1) | JP7560208B2 (zh) |
KR (1) | KR20230034351A (zh) |
CN (1) | CN112099712B (zh) |
BR (1) | BR112023001930A2 (zh) |
WO (1) | WO2022057576A1 (zh) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112099712B (zh) * | 2020-09-17 | 2022-06-07 | 北京字节跳动网络技术有限公司 | 人脸图像显示方法、装置、电子设备及存储介质 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170163958A1 (en) * | 2015-12-04 | 2017-06-08 | Le Holdings (Beijing) Co., Ltd. | Method and device for image rendering processing |
CN110322416A (zh) * | 2019-07-09 | 2019-10-11 | 腾讯科技(深圳)有限公司 | 图像数据处理方法、装置以及计算机可读存储介质 |
CN110619615A (zh) * | 2018-12-29 | 2019-12-27 | 北京时光荏苒科技有限公司 | 用于处理图像方法和装置 |
CN110992493A (zh) * | 2019-11-21 | 2020-04-10 | 北京达佳互联信息技术有限公司 | 图像处理方法、装置、电子设备及存储介质 |
CN112099712A (zh) * | 2020-09-17 | 2020-12-18 | 北京字节跳动网络技术有限公司 | 人脸图像显示方法、装置、电子设备及存储介质 |
Family Cites Families (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030234871A1 (en) | 2002-06-25 | 2003-12-25 | Squilla John R. | Apparatus and method of modifying a portrait image |
US8028250B2 (en) * | 2004-08-31 | 2011-09-27 | Microsoft Corporation | User interface having a carousel view for representing structured data |
JP4655212B2 (ja) | 2005-08-26 | 2011-03-23 | 富士フイルム株式会社 | 画像処理装置、画像処理方法及び画像処理プログラム |
US20130080976A1 (en) * | 2011-09-28 | 2013-03-28 | Microsoft Corporation | Motion controlled list scrolling |
US10916044B2 (en) * | 2015-07-21 | 2021-02-09 | Sony Corporation | Information processing apparatus, information processing method, and program |
US10430867B2 (en) * | 2015-08-07 | 2019-10-01 | SelfieStyler, Inc. | Virtual garment carousel |
US20170092002A1 (en) * | 2015-09-30 | 2017-03-30 | Daqri, Llc | User interface for augmented reality system |
CN105357466A (zh) * | 2015-11-20 | 2016-02-24 | 小米科技有限责任公司 | 视频通信方法及装置 |
US20180000179A1 (en) * | 2016-06-30 | 2018-01-04 | Alan Jeffrey Simon | Dynamic face mask with configurable electronic display |
US10534809B2 (en) * | 2016-08-10 | 2020-01-14 | Zeekit Online Shopping Ltd. | Method, system, and device of virtual dressing utilizing image processing, machine learning, and computer vision |
JP2018152646A (ja) | 2017-03-10 | 2018-09-27 | 株式会社リコー | 撮像装置、画像表示システム、操作方法およびプログラム |
CN107247548B (zh) * | 2017-05-31 | 2018-09-04 | 腾讯科技(深圳)有限公司 | 图像显示方法、图像处理方法及装置 |
JP2019537758A (ja) | 2017-06-12 | 2019-12-26 | 美的集団股▲フン▼有限公司Midea Group Co., Ltd. | 制御方法、コントローラ、スマートミラー及びコンピュータ読み取り可能な記憶媒体 |
CN109410119A (zh) * | 2017-08-18 | 2019-03-01 | 北京凤凰都市互动科技有限公司 | 面具图像变形方法及其系统 |
JP6981106B2 (ja) | 2017-08-29 | 2021-12-15 | 株式会社リコー | 撮像装置、画像表示システム、操作方法、プログラム |
CN109034063A (zh) * | 2018-07-27 | 2018-12-18 | 北京微播视界科技有限公司 | 人脸特效的多人脸跟踪方法、装置和电子设备 |
US11457196B2 (en) * | 2019-08-28 | 2022-09-27 | Snap Inc. | Effects for 3D data in a messaging system |
US11381756B2 (en) * | 2020-02-14 | 2022-07-05 | Snap Inc. | DIY effects image modification |
KR102697772B1 (ko) * | 2020-04-13 | 2024-08-26 | 스냅 인코포레이티드 | 메시징 시스템 내의 3d 데이터를 포함하는 증강 현실 콘텐츠 생성기들 |
CN111494934B (zh) * | 2020-04-16 | 2024-05-17 | 网易(杭州)网络有限公司 | 游戏中虚拟道具展示方法、装置、终端及存储介质 |
2020
- 2020-09-17 CN CN202010981627.5A patent/CN112099712B/zh active Active
2021
- 2021-08-24 BR BR112023001930A patent/BR112023001930A2/pt unknown
- 2021-08-24 WO PCT/CN2021/114237 patent/WO2022057576A1/zh active Application Filing
- 2021-08-24 KR KR1020237003737A patent/KR20230034351A/ko active Search and Examination
- 2021-08-24 JP JP2023507827A patent/JP7560208B2/ja active Active
- 2021-08-24 EP EP21868404.1A patent/EP4177724A4/en active Pending
2022
- 2022-11-30 US US18/060,128 patent/US11935176B2/en active Active
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170163958A1 (en) * | 2015-12-04 | 2017-06-08 | Le Holdings (Beijing) Co., Ltd. | Method and device for image rendering processing |
CN110619615A (zh) * | 2018-12-29 | 2019-12-27 | 北京时光荏苒科技有限公司 | 用于处理图像方法和装置 |
CN110322416A (zh) * | 2019-07-09 | 2019-10-11 | 腾讯科技(深圳)有限公司 | 图像数据处理方法、装置以及计算机可读存储介质 |
CN110992493A (zh) * | 2019-11-21 | 2020-04-10 | 北京达佳互联信息技术有限公司 | 图像处理方法、装置、电子设备及存储介质 |
CN112099712A (zh) * | 2020-09-17 | 2020-12-18 | 北京字节跳动网络技术有限公司 | 人脸图像显示方法、装置、电子设备及存储介质 |
Non-Patent Citations (1)
Title |
---|
See also references of EP4177724A4 * |
Also Published As
Publication number | Publication date |
---|---|
CN112099712A (zh) | 2020-12-18 |
EP4177724A4 (en) | 2023-12-06 |
BR112023001930A2 (pt) | 2023-03-28 |
CN112099712B (zh) | 2022-06-07 |
JP7560208B2 (ja) | 2024-10-02 |
EP4177724A1 (en) | 2023-05-10 |
KR20230034351A (ko) | 2023-03-09 |
US20230090457A1 (en) | 2023-03-23 |
JP2023537721A (ja) | 2023-09-05 |
US11935176B2 (en) | 2024-03-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11393154B2 (en) | Hair rendering method, device, electronic apparatus, and storage medium | |
WO2021139408A1 (zh) | 显示特效的方法、装置、存储介质及电子设备 | |
EP3713220A1 (en) | Video image processing method and apparatus, and terminal | |
CN110766777A (zh) | 虚拟形象的生成方法、装置、电子设备及存储介质 | |
CN110782515A (zh) | 虚拟形象的生成方法、装置、电子设备及存储介质 | |
WO2022088928A1 (zh) | 弹性对象的渲染方法、装置、设备及存储介质 | |
WO2023179346A1 (zh) | 特效图像处理方法、装置、电子设备及存储介质 | |
CN112053449A (zh) | 基于增强现实的显示方法、设备及存储介质 | |
WO2022007627A1 (zh) | 一种图像特效的实现方法、装置、电子设备及存储介质 | |
US11587280B2 (en) | Augmented reality-based display method and device, and storage medium | |
CN112766215B (zh) | 人脸图像处理方法、装置、电子设备及存储介质 | |
WO2023151524A1 (zh) | 图像显示方法、装置、电子设备及存储介质 | |
WO2022057576A1 (zh) | 人脸图像显示方法、装置、电子设备及存储介质 | |
US20230368422A1 (en) | Interactive dynamic fluid effect processing method and device, and electronic device | |
US20230298265A1 (en) | Dynamic fluid effect processing method and apparatus, and electronic device and readable medium | |
CN114049403A (zh) | 一种多角度三维人脸重建方法、装置及存储介质 | |
WO2020259152A1 (zh) | 贴纸生成方法、装置、介质和电子设备 | |
WO2023207354A1 (zh) | 特效视频确定方法、装置、电子设备及存储介质 | |
CN109685881B (zh) | 一种体绘制方法、装置及智能设备 | |
CN109472855B (zh) | 一种体绘制方法、装置及智能设备 | |
WO2023025181A1 (zh) | 图像识别方法、装置和电子设备 | |
RU2802724C1 (ru) | Способ и устройство обработки изображений, электронное устройство и машиночитаемый носитель информации | |
WO2021218118A1 (zh) | 图像处理方法及装置 | |
WO2022135018A1 (zh) | 动态流体显示方法、装置、电子设备和可读介质 | |
US20240346732A1 (en) | Method and apparatus for adding video effect, and device and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 21868404 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 202327006304 Country of ref document: IN |
|
ENP | Entry into the national phase |
Ref document number: 20237003737 Country of ref document: KR Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 2023507827 Country of ref document: JP Kind code of ref document: A Ref document number: 2021868404 Country of ref document: EP Effective date: 20230131 |
|
REG | Reference to national code |
Ref country code: BR Ref legal event code: B01A Ref document number: 112023001930 Country of ref document: BR |
|
ENP | Entry into the national phase |
Ref document number: 112023001930 Country of ref document: BR Kind code of ref document: A2 Effective date: 20230201 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |