CN111240482A - Special effect display method and device - Google Patents

Special effect display method and device

Info

Publication number
CN111240482A
Authority
CN
China
Prior art keywords
user
face image
special effect
virtual object
specified action
Prior art date
Legal status
Granted
Application number
CN202010027410.0A
Other languages
Chinese (zh)
Other versions
CN111240482B (en)
Inventor
叶欣靖
刘佳成
Current Assignee
Douyin Vision Co Ltd
Douyin Vision Beijing Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd
Priority to CN202010027410.0A
Publication of CN111240482A
Application granted
Publication of CN111240482B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/168 Feature extraction; Face representation
    • G06V40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G06V40/20 Movements or behaviour, e.g. gesture recognition

Abstract

The disclosure provides a special effect display method and device, including: acquiring a user face image in response to a task start instruction; detecting, based on the acquired user face image, whether the user performs a specified action, and obtaining state information of a first virtual object displayed in a graphical user interface when it is determined that the user performs the specified action; and adjusting a display special effect of the user face image in the graphical user interface based on the state information of the first virtual object.

Description

Special effect display method and device
Technical Field
The disclosure relates to the field of computer technology, and in particular to a special effect display method and device.
Background
With the development of information technology, the variety of applications keeps growing. To attract users, more and more applications provide various entertaining display special effects. However, most of these special effects are fixed, so the display effect is limited; and because a fixed special effect requires no interaction with the user during display, the presentation is monotonous.
Disclosure of Invention
Embodiments of the present disclosure provide at least a special effect display method and device, which can also improve the security of face recognition.
In a first aspect, an embodiment of the present disclosure provides a special effect display method, including:
acquiring a user face image in response to a task start instruction;
detecting, based on the acquired user face image, whether the user performs a specified action, and obtaining state information of a first virtual object displayed in a graphical user interface when it is determined that the user performs the specified action;
and adjusting a display special effect of the user face image in the graphical user interface based on the state information of the first virtual object.
In a possible implementation, the graphical user interface includes a face image display area, and the method further includes:
displaying the acquired user face image in the face image display area.
In a possible embodiment, a second virtual object is presented on the graphical user interface, and the method further includes:
when it is determined, based on the acquired user face image, that the user performs the specified action, displaying on the graphical user interface an interactive special effect between the second virtual object and the user face image displayed in the face image display area.
In a possible embodiment, adjusting the display special effect of the user face image in the graphical user interface based on the state information of the first virtual object includes:
if it is detected that the state of the first virtual object changes while the user performs the specified action, adjusting the display special effect of the user face image in the graphical user interface to a first special effect, the first special effect indicating that the timing of the specified action meets the requirement;
if it is detected that the state of the first virtual object does not change while the user performs the specified action, adjusting the display special effect sticker of the user face image in the graphical user interface to a second special effect, the second special effect indicating that the timing of the specified action does not meet the requirement.
In one possible embodiment, the display special effect of the user face image in the graphical user interface includes a special effect sticker added to the face area in the user face image.
In one possible implementation, after obtaining the state information of the first virtual object, the method further includes:
determining a face recognition result of the user based on the state information of the first virtual object.
In a possible embodiment, determining the face recognition result of the user based on the state information of the first virtual object includes:
if it is detected that the state of the first virtual object changes while the user performs the specified action, determining that the current face recognition result is recognition failure;
and if it is detected that the state of the first virtual object does not change while the user performs the specified action, determining that the current face recognition result is recognition success.
In a possible implementation, after the face recognition result is determined, the method further includes:
when the current face recognition result is determined to be recognition failure, adding a display special effect representing the recognition failure to the user face image in the face image display area.
In one possible embodiment, the first virtual object is a virtual character, and the state information of the first virtual object includes state information indicating whether a return event occurs to the virtual character.
In one possible embodiment, the second virtual object is a rotating platform on which a plurality of virtual articles are placed, and the interactive special effect includes an interaction between a virtual article and the mouth in the user face image.
In a second aspect, an embodiment of the present disclosure further provides a special effect display device, including:
a collection module, configured to acquire a user face image in response to a task start instruction;
an obtaining module, configured to detect, based on the acquired user face image, whether the user performs a specified action, and to obtain state information of a first virtual object displayed in a graphical user interface when it is determined that the user performs the specified action;
and an adjusting module, configured to adjust a display special effect of the user face image in the graphical user interface based on the state information of the first virtual object.
In a possible implementation, the graphical user interface includes a face image display area, and the device further includes a display module configured to:
display the acquired user face image in the face image display area.
In a possible embodiment, a second virtual object is presented on the graphical user interface, and the display module is further configured to:
when it is determined, based on the acquired user face image, that the user performs the specified action, display on the graphical user interface an interactive special effect between the second virtual object and the user face image displayed in the face image display area.
In a possible implementation, when adjusting the display special effect of the user face image in the graphical user interface based on the state information of the first virtual object, the adjusting module is configured to:
if it is detected that the state of the first virtual object changes while the user performs the specified action, adjust the display special effect of the user face image in the graphical user interface to a first special effect, the first special effect indicating that the timing of the specified action meets the requirement;
if it is detected that the state of the first virtual object does not change while the user performs the specified action, adjust the display special effect sticker of the user face image in the graphical user interface to a second special effect, the second special effect indicating that the timing of the specified action does not meet the requirement.
In one possible embodiment, the display special effect of the user face image in the graphical user interface includes a special effect sticker added to the face area in the user face image.
In a possible implementation, the device further includes a recognition module, configured to determine, after the state information of the first virtual object is obtained, the face recognition result of the user based on the state information of the first virtual object.
In a possible implementation, when determining the face recognition result of the user based on the state information of the first virtual object, the recognition module is configured to:
if it is detected that the state of the first virtual object changes while the user performs the specified action, determine that the current face recognition result is recognition failure;
and if it is detected that the state of the first virtual object does not change while the user performs the specified action, determine that the current face recognition result is recognition success.
In a possible implementation, after the face recognition result is determined, the display module is further configured to:
when the current face recognition result is determined to be recognition failure, add a display special effect representing the recognition failure to the user face image in the face image display area.
In one possible embodiment, the first virtual object is a virtual character, and the state information of the first virtual object includes state information indicating whether a return event occurs to the virtual character.
In one possible embodiment, the second virtual object is a rotating platform on which a plurality of virtual articles are placed, and the interactive special effect includes an interaction between a virtual article and the mouth in the user face image.
In a third aspect, an embodiment of the present disclosure further provides a computer device, including a processor, a memory, and a bus. The memory stores machine-readable instructions executable by the processor; when the computer device runs, the processor and the memory communicate via the bus, and the machine-readable instructions, when executed by the processor, perform the steps of the first aspect described above or of any possible implementation of the first aspect.
In a fourth aspect, an embodiment of the present disclosure further provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program performs the steps of the first aspect or of any possible implementation of the first aspect.
For a description of the effects of the special effect display device, the computer device, and the computer-readable storage medium, reference is made to the description of the special effect display method above; details are not repeated here.
According to the special effect display method and device provided by the embodiments of the disclosure, when it is determined, based on the acquired face image, that the user performs a specified action, the state information of the first virtual object is obtained, and the display special effect of the user face image in the graphical user interface is then adjusted based on that state information. In this process, the type of special effect shown on the user's face depends both on the time at which the user performs the specified action and on the state information of the first virtual object. On the one hand, this yields a variety of special effects and enriches their styles; on the other hand, it increases the interaction between the user and the graphical user interface and enriches the way special effects are presented.
Furthermore, face recognition can be performed on the user based on the state information of the first virtual object: the time at which the user performs the specified action must match the time at which the state information of the first virtual object changes, and since that change occurs at a random time, the current user cannot pass face recognition through illegitimate means such as edited video, which improves the authenticity and reliability of the face recognition process.
In order to make the aforementioned objects, features and advantages of the present disclosure more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
To more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings required by the embodiments are briefly described below. The drawings, which are incorporated in and form a part of the specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain its technical solutions. The following drawings depict only certain embodiments of the disclosure and are therefore not to be considered limiting of its scope; those skilled in the art can derive additional related drawings from them without creative effort.
FIG. 1 is a flow chart illustrating a special effect display method provided by an embodiment of the present disclosure;
FIG. 2 illustrates a graphical user presentation interface schematic provided by an embodiment of the present disclosure;
FIG. 3 is a flow chart illustrating a method for training a motion recognition model provided by an embodiment of the present disclosure;
FIG. 4 is a schematic diagram illustrating an architecture of a special effects display apparatus provided in an embodiment of the present disclosure;
FIG. 5 shows a schematic structural diagram of an electronic device provided in an embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present disclosure more clear, the technical solutions of the embodiments of the present disclosure will be described clearly and completely with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are only a part of the embodiments of the present disclosure, not all of the embodiments. The components of the embodiments of the present disclosure, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present disclosure, presented in the figures, is not intended to limit the scope of the claimed disclosure, but is merely representative of selected embodiments of the disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the disclosure without making creative efforts, shall fall within the protection scope of the disclosure.
In the related art, applications add display special effects to attract users and enrich the interface. However, the displayed special effect is fixed, and there is no interaction between the application and the user during the special effect display, so the presentation is monotonous.
In view of this, the present disclosure provides a special effect display method that determines which display special effect to show based on the time at which the user performs the specified action and the time at which the state information of the first virtual object changes. On the one hand, this yields a variety of display special effects and enriches their styles; on the other hand, it increases the interaction between the user and the graphical user interface and enriches the way special effects are presented.
The drawbacks described above are the result of the inventors' careful practical study; therefore, both the discovery of these problems and the solutions the present disclosure proposes for them should be regarded as the inventors' contribution to the disclosure.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
To facilitate understanding of the embodiments, the special effect display method disclosed in the embodiments of the present disclosure is first described in detail. The execution subject of the method is generally a computer device with a certain computing capability, for example a terminal device, a server, or another processing device, where the terminal device may be user equipment (UE), a mobile device, a user terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, or a wearable device. In some possible implementations, the special effect display method may be implemented by a processor calling computer-readable instructions stored in a memory.
The computer device executing the special effect display method may have a built-in image acquisition apparatus or be connected to an external one. An external connection may be wired or wireless; wireless connections include Bluetooth, wireless fidelity (Wi-Fi), and the like.
The special effect display method provided by the embodiments of the present disclosure is described below taking a terminal device as the execution subject.
Referring to fig. 1, a flowchart of a special effect displaying method provided by the embodiment of the present disclosure includes the following steps:
step 101, responding to a task starting instruction, and collecting a user face image authorized by a user.
Step 102, detecting whether the user performs the specified action or not based on the collected face image of the user authorized by the user, and acquiring the state information of the first virtual object displayed in the graphical user interface when the user is determined to perform the specified action.
And 103, adjusting the display special effect of the user face image in the graphical user interface based on the state information of the first virtual object.
The following is a detailed description of the above steps 101 to 103.
For step 101:
the task start instruction in step 101 may be generated after the user clicks a start button in the screen, and the clicking manner includes, but is not limited to, single-machine, double-click, long-press, and re-press; in another possible embodiment, the task start instruction may also be a voice instruction input by a user, for example, after receiving the voice input by the user, the voice input by the user may be converted into text, and whether the text contains a preset keyword (for example, it may be a start verification) is detected, and if so, the task start instruction is generated.
In one possible embodiment, after the task start instruction is received, the user-authorized face image may be acquired for a preset time period, for example by capturing a video containing the user's face with the user's authorization.
After the task start instruction is received, the display can switch from an initial interface to a graphical user display interface that contains a face image display area; once the user-authorized face image has been acquired, it can be shown in that area in real time.
To increase interest, special effect stickers can be added in the face image display area while the acquired user-authorized face image is displayed there. In one possible application scenario, after the face image is acquired, user attributes such as gender and age may be identified from it, and a special effect sticker matching those attributes is then added to the face image display area.
For example, the positions of the first virtual object, the second virtual object, and the face image display area in the graphical user display interface may be as shown in fig. 2.
In a possible implementation, also to increase interest, a preset music special effect can be played after the user-authorized face image is acquired in response to the task start instruction.
For step 102:
the specified action by the user may be, for example, opening a mouth, blinking, or the like, and the specified action may include one or more.
Whether the user performs the specified action can be detected in either of the following ways:
the method comprises the steps of identifying key points in a collected user face image authorized by a user, and determining whether the user in the user face image makes a specified action or not based on the position coordinates of the key points.
The key points in the face image of the user may include, for example, corners of the mouth, corners of the eyes, and the like.
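As an illustration of the keypoint approach above, the following is a minimal sketch of a mouth-open check based on landmark coordinates; the landmark names and the 0.5 ratio threshold are assumptions for illustration, not values given in the patent.

```python
# Hypothetical sketch: deciding "mouth open" from face key points.
# Landmark names and the threshold are assumed, not taken from the patent.

def euclidean(p, q):
    """Distance between two (x, y) points."""
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def is_mouth_open(landmarks, threshold=0.5):
    """landmarks: dict mapping names like 'mouth_left' to (x, y) tuples."""
    width = euclidean(landmarks["mouth_left"], landmarks["mouth_right"])
    height = euclidean(landmarks["lip_top"], landmarks["lip_bottom"])
    # Mouth aspect ratio: tall relative to wide means the mouth is open.
    return height / width > threshold
```

A blink check could follow the same pattern with eye-corner and eyelid key points.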
In the second way, the acquired user-authorized face image is input into a pre-trained action recognition model, which recognizes whether the user in the image performs the specified action.
During training of the action recognition model, the label added to each sample image is an action label representing the action performed by the user in that image; note that the set of action labels includes the specified action.
Specifically, after the acquired user-authorized face image is input into the pre-trained action recognition model, the model predicts the probability of each action for the input image, and the action with the highest probability is taken as the action performed by the user in the image.
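The prediction step described above, taking the action with the highest predicted probability, can be sketched as follows; the action names are illustrative only.

```python
# Sketch of the final prediction step: the model outputs one probability per
# action, and the highest-probability action is taken as the prediction.
# Action names here are illustrative, not from the patent.

def predict_action(probabilities):
    """probabilities: dict mapping action name -> predicted probability."""
    return max(probabilities, key=probabilities.get)
```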
In a possible implementation, the method for training the action recognition model may be as shown in fig. 3, and includes the following steps:
step 301, sample images are obtained, and each sample image is provided with a corresponding action label.
The action label corresponding to the sample image can be manually added according to the sample image.
Step 302: input the sample images into the action recognition model to be trained, and predict the action corresponding to each sample image.
After a sample image is input into the action recognition model to be trained, the probability of each action for that image is output, and the action with the highest probability is determined as the action corresponding to that sample image.
Step 303: determine the accuracy of the training process from the action predicted for each sample image and its action label, and adjust the model parameters when the accuracy does not meet the requirement.
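A hedged sketch of the accuracy check in step 303: the 0.95 target is an assumed value, and a real training loop would recompute this after each parameter update.

```python
# Sketch of the step-303 accuracy check; the target value is assumed.

def training_accuracy(predicted, labels):
    """Fraction of sample images whose predicted action matches its action label."""
    correct = sum(1 for p, l in zip(predicted, labels) if p == l)
    return correct / len(labels)

def should_stop_training(predicted, labels, target=0.95):
    # Keep adjusting model parameters while accuracy is below the target.
    return training_accuracy(predicted, labels) >= target
```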
In a possible implementation, a second virtual object can be displayed in the graphical user display interface; when it is determined, based on the acquired face image, that the user performs the specified action, an interactive special effect between the second virtual object and the face image displayed in the face image display area can be shown on the graphical user interface.
For example, the second virtual object may be a rotating platform on which a plurality of virtual articles are placed, and the interactive special effect may be one between a virtual article and the mouth in the user face image. For instance, if the specified action is opening the mouth, the virtual article may be virtual food, and the interactive special effect may show the food shrinking as it travels to the mouth position and disappearing there when the user opens the mouth, producing the effect of the user opening the mouth to eat the virtual food.
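A minimal sketch of the "food flies into the mouth" interaction described above: the item's position is linearly interpolated toward the mouth while its scale shrinks to zero, so it disappears exactly at the mouth. The coordinates and the linear easing are assumptions for illustration.

```python
# Hedged sketch of the virtual-food interaction; coordinates, progress
# parameterization, and linear easing are all assumed.

def food_frame(start, mouth, progress):
    """progress in [0, 1]: 0 = at the rotating platform, 1 = vanished at the mouth.
    Returns ((x, y), scale) for one animation frame."""
    x = start[0] + (mouth[0] - start[0]) * progress
    y = start[1] + (mouth[1] - start[1]) * progress
    scale = 1.0 - progress  # "from big to small", gone exactly at the mouth
    return (x, y), scale
```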
In one possible application scenario, the first virtual object may be a virtual character, and the state information of the first virtual object may be state information of whether a return event occurs to the virtual character.
For step 103:
the display special effect of the user face image in the graphical user interface comprises a special effect paster added to a face area in the user face image.
When the display special effect sticker of the user face image in the graphical user interface is adjusted based on the state information of the first virtual object, there are two cases:
In the first case, if it is detected that the state of the first virtual object changes while the user performs the specified action, the display special effect sticker of the user face image in the graphical user interface is adjusted to the first special effect; the first special effect indicates that the timing of the specified action meets the requirement.
In the second case, if it is detected that the state of the first virtual object does not change while the user performs the specified action, the display special effect sticker of the user face image in the graphical user interface is adjusted to the second special effect; the second special effect indicates that the timing of the specified action does not meet the requirement.
The display special effect of the user face in the graphical user interface is adjusted based on the state information of the first virtual object, so that the interaction with the user can be increased in the special effect display process, and the display mode of the display special effect is enriched.
In the related art, face recognition is mainly performed by capturing a face photo with the device's image acquisition apparatus, extracting biometric features from the captured photo, and matching them against biometric features stored in a database to identify the user. However, this approach cannot guarantee that what is captured is a real, live user: for example, a counterfeit photo held in front of the image acquisition apparatus is not a genuine capture of the current user, yet the corresponding biometric features can still be extracted from it, so the reliability of this face recognition method is low.
In addition, to prevent the above situations, some face recognition technologies require the user to perform actions such as closing the eyes or opening the mouth during face recognition. However, because this method requires fixed actions, the user can forge the detection object by means such as editing a video, so its security is also low.
Based on this, in another embodiment of the present disclosure, after the state information of the first virtual object is obtained, the face recognition result of the user may also be determined based on the state information of the first virtual object.
Specifically, when the face recognition result is determined based on the state information of the first virtual object, the following two cases are included:
In the first case, if it is detected that the state of the first virtual object changes when the user performs the specified action, it is determined that the current face recognition result is a recognition failure.
In one embodiment of the present disclosure, when it is determined that the current face recognition result is a recognition failure, a display special effect representing the recognition failure can be added on the user face image in the face image display area; alternatively, the special effect sticker arranged in the face image display area can be replaced with a display special effect representing the recognition failure.
In addition, when it is determined that the current face recognition result is a recognition failure, besides the display special effect representing the recognition failure, a music special effect representing the recognition failure can be played to prompt the user that recognition has failed.
In addition, in order to increase recognition accuracy, if it is detected that the user performs the specified action and the state of the first virtual object changes, the acquisition of the user face image may be stopped for a first preset time after the first virtual object changes; after the first preset time elapses, the acquisition of the user face image is restarted, and the length of time during which the state of the first virtual object does not change is recorded.
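The pause-and-resume capture timing above can be sketched as a small gate object. This is an illustrative sketch: the cooldown length and the injectable clock are assumed parameters, not values from the patent.

```python
import time

class CaptureGate:
    """Illustrative sketch of the pause-and-resume capture timing above:
    face-image acquisition stops for a preset cooldown after the first
    virtual object's state changes, then resumes. The cooldown length and
    clock source are assumed parameters, not values from the patent."""

    def __init__(self, cooldown_seconds, clock=time.monotonic):
        self.cooldown = cooldown_seconds
        self.clock = clock
        self.paused_until = None  # None means capture is allowed

    def on_state_change(self):
        # Stop acquiring face images for the first preset time.
        self.paused_until = self.clock() + self.cooldown

    def may_capture(self):
        # Resume acquisition once the preset time has elapsed.
        if self.paused_until is None:
            return True
        if self.clock() >= self.paused_until:
            self.paused_until = None
            return True
        return False
```

Injecting the clock (instead of calling `time.monotonic` directly) makes the gate testable without real waiting.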
In the second case, if it is detected that the state of the first virtual object does not change when the user performs the specified action, it is determined that the current face recognition result is a recognition success.
When it is determined that the current face recognition result is a recognition success, a display special effect representing the recognition success and a music special effect representing the recognition success can be added to prompt the user that recognition has succeeded.
In a possible implementation manner, in order to increase recognition accuracy, if it is detected that the state of the first virtual object does not change while the user performs the specified action, the duration for which the user performs the specified action is recorded, and a user score is determined according to the duration. Within a preset time from the start of the user-authorized acquisition of the user face image, each duration during which the user performs the specified action while the state of the first virtual object remains unchanged is determined and converted into a score, and the scores are accumulated into a total score. When the total score of the user exceeds a set score threshold, the face recognition result of the user is determined to be a recognition success; otherwise, it is determined to be a recognition failure.
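A minimal sketch of the duration-to-score logic above, assuming that each matched interval (the specified action held while the first virtual object's state stays unchanged) earns points proportional to its duration, and that a total score at or above the threshold indicates recognition success. The points-per-second rate and the threshold are illustrative assumptions.

```python
# Illustrative sketch of the scoring logic described above; the rate and
# threshold values are assumed parameters, not values from the patent.

def recognition_result(match_durations, points_per_second=10, score_threshold=30):
    """Accumulate the user's score over the preset window from the matched
    interval durations (in seconds) and compare the total against the set
    score threshold."""
    total = sum(d * points_per_second for d in match_durations)
    return "success" if total >= score_threshold else "failure"
```

The list of durations would be produced by the timing gate described earlier, restricted to the preset window that starts with the user-authorized face capture.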
In order to improve the security of face recognition, before the face recognition result of the user is determined based on the state information of the first virtual object, the identity of the current user may be verified. Specifically, login verification can be performed based on the collected user-authorized face image, and after the verification is passed, whether the user performs the specified action is detected.
When login verification is performed based on the collected user-authorized face image, facial features in the collected image can be extracted and matched against facial features stored in a database in advance; if the matching succeeds, the verification is determined to have passed.
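The matching step above might look like the following sketch. The patent only says the extracted facial features are "matched" against stored ones; cosine similarity with a fixed threshold is an assumed matching criterion, and the feature vectors are placeholders for whatever a real feature extractor produces.

```python
import math

# Illustrative sketch of the login-verification matching step above.
# Cosine similarity with a threshold is an assumed criterion, not the
# patent's specified method.

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def login_verified(extracted_features, stored_features, threshold=0.9):
    """Verification passes when the similarity between the extracted and
    stored facial feature vectors clears the threshold."""
    return cosine_similarity(extracted_features, stored_features) >= threshold
```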
According to the special effect display method provided by the embodiment of the present disclosure, when it is determined, based on the collected face image, that the user performs the specified action, the state information of the first virtual object is obtained, and the display special effect of the user face image in the graphical user interface is then adjusted based on that state information. In this process, the type of special effect displayed on the user face in the graphical user interface is related both to the time at which the user performs the specified action and to the state information of the first virtual object. On the one hand, this produces a variety of special effects and enriches their styles; on the other hand, it increases the interaction between the user and the graphical user interface and enriches the display modes of the special effects.
Furthermore, face recognition can be performed on the user based on the state information of the first virtual object. The time at which the user performs the specified action needs to match the time at which the state information of the first virtual object changes, and the latter is random, so the current user cannot use illegal means such as video editing to pass face recognition, which improves the reliability of the face recognition process.
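The randomized timing above can be sketched as a scheduler that draws the state-change moments at random, so a pre-recorded or edited video cannot line up with them. The gap bounds are illustrative assumptions.

```python
import random

# Sketch of the randomized state-change timing described above; the
# minimum and maximum gaps between changes are assumed parameters.

def schedule_state_changes(total_seconds, min_gap=1.0, max_gap=4.0, rng=random):
    """Return randomized timestamps (in seconds) at which the first
    virtual object's state changes within the recognition window."""
    times, t = [], 0.0
    while True:
        t += rng.uniform(min_gap, max_gap)
        if t >= total_seconds:
            return times
        times.append(t)
```

Passing an explicit `random.Random` instance makes the schedule reproducible for testing while remaining unpredictable in production.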
It will be understood by those skilled in the art that, in the method of the present disclosure, the order in which the steps are written does not imply a strict order of execution or impose any limitation on the implementation; the specific order of execution of the steps should be determined by their functions and possible internal logic.
Based on the same inventive concept, the embodiment of the present disclosure further provides a special effect display apparatus corresponding to the special effect display method, and since the principle of the apparatus in the embodiment of the present disclosure for solving the problem is similar to the special effect display method described above in the embodiment of the present disclosure, the implementation of the apparatus may refer to the implementation of the method, and repeated details are not described again.
Referring to fig. 4, which is a schematic diagram of an architecture of a special effect display apparatus according to a fifth embodiment of the present disclosure, the apparatus includes: a collection module 401, an acquisition module 402, an adjustment module 403, a display module 404 and a recognition module 405; wherein:
the collection module 401 is configured to collect a user face image in response to a task start instruction;
the acquisition module 402 is configured to detect, based on the collected user face image, whether the user performs a specified action, and to obtain state information of a first virtual object displayed in a graphical user interface when it is determined that the user performs the specified action;
the adjustment module 403 is configured to adjust the display special effect of the user face image in the graphical user interface based on the state information of the first virtual object.
In a possible implementation manner, the graphical user interface includes a face image display area; the apparatus further includes a display module 404, and the display module 404 is configured to:
display the collected user face image in the face image display area.
In a possible embodiment, a second virtual object is presented on the graphical user interface; the display module 404 is further configured to:
when it is determined, based on the collected user face image, that the user performs a specified action, display, on the graphical user interface, an interactive special effect between the second virtual object and the user face image displayed in the face image display area.
In a possible implementation manner, the adjustment module 403, when adjusting the display special effect of the user face image in the graphical user interface based on the state information of the first virtual object, is configured to:
if it is detected that the state of the first virtual object changes when the user performs the specified action, adjust the display special effect of the user face image in the graphical user interface to a first special effect; the first special effect is used for indicating that the time at which the specified action is performed meets the requirement;
if it is detected that the state of the first virtual object does not change when the user performs the specified action, adjust the display special effect sticker of the user face image in the graphical user interface to a second special effect; the second special effect is used for indicating that the time at which the specified action is performed does not meet the requirement.
In one possible embodiment, the display special effect of the user face image in the graphical user interface includes a special effect sticker added to a face area in the user face image.
In a possible implementation, the apparatus further includes a recognition module 405, and the recognition module 405 is configured to: after the state information of the first virtual object is obtained, determine the face recognition result of the user based on the state information of the first virtual object.
In a possible implementation, the recognition module 405, when determining the face recognition result of the user based on the state information of the first virtual object, is configured to:
if it is detected that the state of the first virtual object changes when the user performs the specified action, determine that the current face recognition result is a recognition failure; and
if it is detected that the state of the first virtual object does not change when the user performs the specified action, determine that the current face recognition result is a recognition success.
In a possible implementation, after the face recognition result is determined, the display module 404 is further configured to:
when it is determined that the current face recognition result is a recognition failure, add a display special effect representing the recognition failure on the user face image in the face image display area.
In one possible embodiment, the first virtual object is a virtual character, and the state information of the first virtual object includes: state information indicating whether the virtual character has a return event.
In one possible embodiment, the second virtual object is a rotating table on which a plurality of virtual items are placed; the interactive special effect includes: an interactive special effect between a virtual item and the mouth of the face image.
The description of the processing flow of each module in the device and the interaction flow between the modules may refer to the related description in the above method embodiments, and will not be described in detail here.
Based on the same technical concept, an embodiment of the present disclosure also provides an electronic device. Referring to fig. 5, a schematic structural diagram of an electronic device 500 provided in an embodiment of the present disclosure includes a processor 501, a memory 502 and a bus 503. The memory 502 is used for storing execution instructions and includes an internal memory 5021 and an external memory 5022; the internal memory 5021 is used for temporarily storing operation data in the processor 501 and data exchanged with the external memory 5022, such as a hard disk. The processor 501 exchanges data with the external memory 5022 through the internal memory 5021. When the electronic device 500 runs, the processor 501 communicates with the memory 502 through the bus 503, so that the processor 501 executes the following instructions:
in response to a task start instruction, collecting a user face image;
detecting, based on the collected user face image, whether the user performs a specified action, and obtaining state information of a first virtual object displayed in a graphical user interface when it is determined that the user performs the specified action;
adjusting the display special effect of the user face image in the graphical user interface based on the state information of the first virtual object.
In a possible implementation, in the instructions executed by the processor 501, the graphical user interface includes a face image display area, and the method further includes:
displaying the collected user face image in the face image display area.
In a possible embodiment, in the instructions executed by the processor 501, a second virtual object is presented on the graphical user interface;
the instructions executed by the processor 501 further include:
when it is determined, based on the collected user face image, that the user performs a specified action, displaying, on the graphical user interface, an interactive special effect between the second virtual object and the user face image displayed in the face image display area.
In a possible implementation, in the instructions executed by the processor 501, adjusting the display special effect of the user face image in the graphical user interface based on the state information of the first virtual object includes:
if it is detected that the state of the first virtual object changes when the user performs the specified action, adjusting the display special effect of the user face image in the graphical user interface to a first special effect; the first special effect is used for indicating that the time at which the specified action is performed meets the requirement;
if it is detected that the state of the first virtual object does not change when the user performs the specified action, adjusting the display special effect sticker of the user face image in the graphical user interface to a second special effect; the second special effect is used for indicating that the time at which the specified action is performed does not meet the requirement.
In one possible embodiment, the display special effect of the user face image in the graphical user interface includes a special effect sticker added to a face area in the user face image.
In a possible implementation manner, after the processor 501 executes the instructions to acquire the state information of the first virtual object, the method further includes:
determining a face recognition result of the user based on the state information of the first virtual object.
In a possible implementation, in the instructions executed by the processor 501, determining the face recognition result of the user based on the state information of the first virtual object includes:
if it is detected that the state of the first virtual object changes when the user performs the specified action, determining that the current face recognition result is a recognition failure; and
if it is detected that the state of the first virtual object does not change when the user performs the specified action, determining that the current face recognition result is a recognition success.
In a possible implementation, in the instructions executed by the processor 501, after the face recognition result is determined, the method further includes:
when it is determined that the current face recognition result is a recognition failure, adding a display special effect representing the recognition failure on the user face image in the face image display area.
In a possible implementation manner, in the instructions executed by the processor 501, the first virtual object is a virtual character, and the state information of the first virtual object includes: state information indicating whether the virtual character has a return event.
In one possible embodiment, in the instructions executed by the processor 501, the second virtual object is a rotating table on which a plurality of virtual items are placed; the interactive special effect includes: an interactive special effect between a virtual item and the mouth of the face image.
An embodiment of the present disclosure further provides a computer-readable storage medium on which a computer program is stored, where the computer program, when executed by a processor, performs the steps of the special effect display method in the foregoing method embodiments. The storage medium may be a volatile or non-volatile computer-readable storage medium.
The computer program product of the special effect display method provided in the embodiments of the present disclosure includes a computer-readable storage medium storing program code, and the instructions included in the program code may be used to execute the steps of the special effect display method in the foregoing method embodiments; for details, reference may be made to the foregoing method embodiments, which are not repeated here.
The embodiments of the present disclosure also provide a computer program which, when executed by a processor, implements any one of the methods of the foregoing embodiments. The computer program product may be embodied in hardware, software, or a combination thereof. In an alternative embodiment, the computer program product is embodied in a computer storage medium; in another alternative embodiment, the computer program product is embodied in a software product, such as a Software Development Kit (SDK).
It is clear to those skilled in the art that, for convenience and brevity of description, for the specific working processes of the system and apparatus described above, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here. In the several embodiments provided in the present disclosure, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative; for example, the division of the units is only a logical division, and there may be other divisions in actual implementation; for another example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual couplings, direct couplings or communication connections shown or discussed may be indirect couplings or communication connections of devices or units through some communication interfaces, and may be electrical, mechanical or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as a stand-alone product, they may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods according to the embodiments of the present disclosure. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that the above embodiments are merely specific embodiments of the present disclosure, which are used to illustrate the technical solutions of the present disclosure rather than to limit them, and the protection scope of the present disclosure is not limited thereto. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person skilled in the art can still modify the technical solutions described in the foregoing embodiments, or easily conceive of changes or equivalent substitutions of some technical features, within the technical scope of the present disclosure; such modifications, changes or substitutions do not depart from the spirit and scope of the embodiments of the present disclosure and should be covered within its protection scope. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (13)

1. A special effect display method is characterized by comprising the following steps:
in response to a task start instruction, collecting a user face image;
detecting, based on the collected user face image, whether a user performs a specified action, and obtaining state information of a first virtual object displayed in a graphical user interface when it is determined that the user performs the specified action; and
adjusting a display special effect of the user face image in the graphical user interface based on the state information of the first virtual object.
2. The method of claim 1, wherein the graphical user interface includes a face image display area; the method further comprises the following steps:
and displaying the collected user face image in the face image display area.
3. The method of claim 2, wherein a second virtual object is presented on the graphical user interface; the method further comprises the following steps:
and when the user is determined to make a specified action based on the collected user face image, displaying an interactive special effect between the second virtual object and the user face image displayed in the face image display area on the graphical user interface.
4. The method of claim 1, wherein the adjusting the display special effect of the user face image in the graphical user interface based on the state information of the first virtual object comprises:
if it is detected that the state of the first virtual object changes when the user performs the specified action, adjusting the display special effect of the user face image in the graphical user interface to a first special effect; the first special effect is used for indicating that the time at which the specified action is performed meets the requirement;
if it is detected that the state of the first virtual object does not change when the user performs the specified action, adjusting the display special effect sticker of the user face image in the graphical user interface to a second special effect; the second special effect is used for indicating that the time at which the specified action is performed does not meet the requirement.
5. The method of claim 4, wherein the display special effect of the user face image in the graphical user interface comprises a special effect sticker added to a face area in the user face image.
6. The method of claim 1, wherein after obtaining the state information of the first virtual object, the method further comprises:
determining a face recognition result of the user based on the state information of the first virtual object.
7. The method of claim 1, wherein determining the face recognition result of the user based on the state information of the first virtual object comprises:
if it is detected that the state of the first virtual object changes when the user performs the specified action, determining that the current face recognition result is a recognition failure; and
if it is detected that the state of the first virtual object does not change when the user performs the specified action, determining that the current face recognition result is a recognition success.
8. The method of claim 6 or 7, wherein after determining the face recognition result, the method further comprises:
and under the condition that the current face recognition result is determined to be recognition failure, adding a display special effect representing the recognition failure on the user face image in the face image display area.
9. The method of any of claims 1-8, wherein the first virtual object is a virtual character, and wherein the state information of the first virtual object comprises: state information indicating whether the virtual character has a return event.
10. The method of claim 3, wherein the second virtual object is a rotating table on which a plurality of virtual items are placed; the interactive special effect comprises: an interactive special effect between a virtual item and the mouth of the face image.
11. A special effects display apparatus, comprising:
a collection module, configured to collect a user face image in response to a task start instruction;
an acquisition module, configured to detect, based on the collected user face image, whether the user performs a specified action, and to obtain state information of a first virtual object displayed in a graphical user interface when it is determined that the user performs the specified action; and
an adjustment module, configured to adjust a display special effect of the user face image in the graphical user interface based on the state information of the first virtual object.
12. A computer device, comprising: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor communicating with the memory via the bus when the computer device runs, wherein the machine-readable instructions, when executed by the processor, perform the steps of the special effect display method of any one of claims 1 to 10.
13. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, performs the steps of the special effect display method of any one of claims 1 to 10.
CN202010027410.0A 2020-01-10 2020-01-10 Special effect display method and device Active CN111240482B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010027410.0A CN111240482B (en) 2020-01-10 2020-01-10 Special effect display method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010027410.0A CN111240482B (en) 2020-01-10 2020-01-10 Special effect display method and device

Publications (2)

Publication Number Publication Date
CN111240482A true CN111240482A (en) 2020-06-05
CN111240482B CN111240482B (en) 2023-06-30

Family

ID=70864313

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010027410.0A Active CN111240482B (en) 2020-01-10 2020-01-10 Special effect display method and device

Country Status (1)

Country Link
CN (1) CN111240482B (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109190536A (en) * 2018-08-23 2019-01-11 百度在线网络技术(北京)有限公司 Face image processing process, device and equipment
CN111638798A (en) * 2020-06-07 2020-09-08 上海商汤智能科技有限公司 AR group photo method, AR group photo device, computer equipment and storage medium
CN111773676A (en) * 2020-07-23 2020-10-16 网易(杭州)网络有限公司 Method and device for determining virtual role action
CN111857923A (en) * 2020-07-17 2020-10-30 北京字节跳动网络技术有限公司 Special effect display method and device, electronic equipment and computer readable medium
CN111899192A (en) * 2020-07-23 2020-11-06 北京字节跳动网络技术有限公司 Interaction method, interaction device, electronic equipment and computer-readable storage medium
WO2021258978A1 (en) * 2020-06-24 2021-12-30 北京字节跳动网络技术有限公司 Operation control method and apparatus
WO2022252509A1 (en) * 2021-06-03 2022-12-08 北京市商汤科技开发有限公司 Data display method and apparatus, device, storage medium, and computer program product

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130201105A1 (en) * 2012-02-02 2013-08-08 Raymond William Ptucha Method for controlling interactive display system
US20150370323A1 (en) * 2014-06-19 2015-12-24 Apple Inc. User detection by a computing device
CN105518715A (en) * 2015-06-30 2016-04-20 北京旷视科技有限公司 Living body detection method, equipment and computer program product
WO2016113969A1 (en) * 2015-01-13 2016-07-21 三菱電機株式会社 Gesture recognition device and method, program, and recording medium
CN106341720A (en) * 2016-08-18 2017-01-18 北京奇虎科技有限公司 Method for adding face effects in live video and device thereof
CN107277599A (en) * 2017-05-31 2017-10-20 珠海金山网络游戏科技有限公司 A kind of live broadcasting method of virtual reality, device and system
CN107944542A (en) * 2017-11-21 2018-04-20 北京光年无限科技有限公司 A kind of multi-modal interactive output method and system based on visual human
CN108073669A (en) * 2017-01-12 2018-05-25 北京市商汤科技开发有限公司 Business object methods of exhibiting, device and electronic equipment
CN109190536A (en) * 2018-08-23 2019-01-11 百度在线网络技术(北京)有限公司 Face image processing process, device and equipment
WO2019024750A1 (en) * 2017-08-03 2019-02-07 腾讯科技(深圳)有限公司 Video communications method and apparatus, terminal, and computer readable storage medium
CN109697404A (en) * 2018-09-28 2019-04-30 中国银联股份有限公司 Identification system and method, terminal and computer storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002197376A (en) * 2000-12-27 2002-07-12 Fujitsu Ltd Method and device for providing virtual world customerized according to user
CN103647922A (en) * 2013-12-20 2014-03-19 百度在线网络技术(北京)有限公司 Virtual video call method and terminals
WO2017006872A1 (en) * 2015-07-03 2017-01-12 学校法人慶應義塾 Facial expression identification system, facial expression identification method, and facial expression identification program
CN108109209A (en) * 2017-12-11 2018-06-01 广州市动景计算机科技有限公司 A kind of method for processing video frequency and its device based on augmented reality
JP2019179080A (en) * 2018-03-30 2019-10-17 ソニー株式会社 Information processing apparatus, information processing method, and program

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130201105A1 (en) * 2012-02-02 2013-08-08 Raymond William Ptucha Method for controlling interactive display system
US20150370323A1 (en) * 2014-06-19 2015-12-24 Apple Inc. User detection by a computing device
WO2016113969A1 (en) * 2015-01-13 2016-07-21 Mitsubishi Electric Corporation Gesture recognition device and method, program, and recording medium
CN105518715A (en) * 2015-06-30 2016-04-20 Beijing Megvii Technology Co., Ltd. Living body detection method, equipment and computer program product
CN106341720A (en) * 2016-08-18 2017-01-18 Beijing Qihoo Technology Co., Ltd. Method and device for adding facial effects to live video
CN108073669A (en) * 2017-01-12 2018-05-25 Beijing SenseTime Technology Development Co., Ltd. Business object display method, device and electronic equipment
CN107277599A (en) * 2017-05-31 2017-10-20 Zhuhai Kingsoft Online Game Technology Co., Ltd. Virtual reality live streaming method, device and system
WO2019024750A1 (en) * 2017-08-03 2019-02-07 Tencent Technology (Shenzhen) Co., Ltd. Video communications method and apparatus, terminal, and computer readable storage medium
CN107944542A (en) * 2017-11-21 2018-04-20 Beijing Guangnian Wuxian Technology Co., Ltd. Multi-modal interactive output method and system based on a virtual human
CN109190536A (en) * 2018-08-23 2019-01-11 Baidu Online Network Technology (Beijing) Co., Ltd. Face image processing method, device and equipment
CN109697404A (en) * 2018-09-28 2019-04-30 China UnionPay Co., Ltd. Identity recognition system and method, terminal and computer storage medium

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109190536A (en) * 2018-08-23 2019-01-11 Baidu Online Network Technology (Beijing) Co., Ltd. Face image processing method, device and equipment
CN109190536B (en) * 2018-08-23 2023-12-26 Baidu Online Network Technology (Beijing) Co., Ltd. Face image processing method, device and equipment
CN111638798A (en) * 2020-06-07 2020-09-08 Shanghai SenseTime Intelligent Technology Co., Ltd. AR group photo method and device, computer equipment and storage medium
WO2021258978A1 (en) * 2020-06-24 2021-12-30 Beijing ByteDance Network Technology Co., Ltd. Operation control method and apparatus
CN111857923A (en) * 2020-07-17 2020-10-30 Beijing ByteDance Network Technology Co., Ltd. Special effect display method and device, electronic equipment and computer readable medium
WO2022012182A1 (en) * 2020-07-17 2022-01-20 Beijing ByteDance Network Technology Co., Ltd. Special effect display method and apparatus, electronic device, and computer readable medium
CN111773676A (en) * 2020-07-23 2020-10-16 NetEase (Hangzhou) Network Co., Ltd. Method and device for determining virtual character actions
CN111899192A (en) * 2020-07-23 2020-11-06 Beijing ByteDance Network Technology Co., Ltd. Interaction method, interaction device, electronic equipment and computer-readable storage medium
WO2022017184A1 (en) * 2020-07-23 2022-01-27 Beijing ByteDance Network Technology Co., Ltd. Interaction method and apparatus, and electronic device and computer-readable storage medium
CN111899192B (en) * 2020-07-23 2022-02-01 Beijing ByteDance Network Technology Co., Ltd. Interaction method, interaction device, electronic equipment and computer-readable storage medium
US11842425B2 (en) 2020-07-23 2023-12-12 Beijing Bytedance Network Technology Co., Ltd. Interaction method and apparatus, and electronic device and computer-readable storage medium
WO2022252509A1 (en) * 2021-06-03 2022-12-08 Beijing SenseTime Technology Development Co., Ltd. Data display method and apparatus, device, storage medium, and computer program product

Also Published As

Publication number Publication date
CN111240482B (en) 2023-06-30

Similar Documents

Publication Publication Date Title
CN111240482A (en) Special effect display method and device
US20200412975A1 (en) Content capture with audio input feedback
CN107704834B Micro-expression review assisting method, device and storage medium
CN107886032B (en) Terminal device, smart phone, authentication method and system based on face recognition
CN108009521B (en) Face image matching method, device, terminal and storage medium
CN104838336A (en) Data and user interaction based on device proximity
CN111339420A (en) Image processing method, image processing device, electronic equipment and storage medium
US20230215072A1 (en) Animated expressive icon
US20200412864A1 (en) Modular camera interface
CN109670385B (en) Method and device for updating expression in application program
CN111643900A (en) Display picture control method and device, electronic equipment and storage medium
CN111198724A (en) Application program starting method and device, storage medium and terminal
US20230418910A1 (en) Multimodal sentiment classification
CN112150349A (en) Image processing method and device, computer equipment and storage medium
US20230410222A1 (en) Information processing apparatus, control method, and program
CN111026967A (en) Method, device, equipment and medium for obtaining user interest tag
CN111176533A (en) Wallpaper switching method, device, storage medium and terminal
CN111104923A (en) Face recognition method and device
WO2015118061A1 (en) Method and system for displaying content to a user
CN114898395A (en) Interaction method, device, equipment, storage medium and program product
WO2022212669A1 (en) Determining classification recommendations for user content
CN113486730A (en) Intelligent reminding method based on face recognition and related device
US11210335B2 (en) System and method for judging situation of object
CN111640185A (en) Virtual building display method and device
US11659273B2 (en) Information processing apparatus, information processing method, and non-transitory storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Patentee after: Douyin Vision Co.,Ltd.

Address before: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Patentee before: Tiktok vision (Beijing) Co.,Ltd.

Address after: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Patentee after: Tiktok vision (Beijing) Co.,Ltd.

Address before: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Patentee before: BEIJING BYTEDANCE NETWORK TECHNOLOGY Co.,Ltd.