Special effect display method and device
Technical Field
The disclosure relates to the technical field of computers, in particular to a special effect display method and device.
Background
With the development of information technology, the variety of applications keeps increasing. In order to attract users, more and more application programs provide various interesting display special effects for their users. However, the special effects provided to users are mostly fixed, so the display effect is limited; and because a fixed special effect is simply played to the user, no interaction with the user occurs during display, making the special effect display mode monotonous.
Disclosure of Invention
The embodiments of the disclosure provide at least a special effect display method and device, to enrich special effect display modes and improve the security of face recognition.
In a first aspect, an embodiment of the present disclosure provides a special effect display method, including:
responding to a task starting instruction, and acquiring a user face image;
detecting whether a user makes a specified action or not based on the collected face image of the user, and acquiring state information of a first virtual object displayed in a graphical user interface when the user is determined to make the specified action;
and adjusting the display special effect of the face image of the user in the graphical user interface based on the state information of the first virtual object.
In a possible implementation manner, the graphical user interface comprises a face image display area; the method further comprises the following steps:
and displaying the collected user face image in the face image display area.
In a possible embodiment, a second virtual object is presented on the graphical user interface; the method further comprises the following steps:
and when the user is determined to make a specified action based on the collected user face image, displaying an interactive special effect between the second virtual object and the user face image displayed in the face image display area on the graphical user interface.
In a possible embodiment, the adjusting, based on the state information of the first virtual object, a display special effect of the user face image in the graphical user interface includes:
if it is detected that the state of the first virtual object changes while the user performs the specified action, adjusting the display special effect of the user face image in the graphical user interface to a first special effect; the first special effect is used for indicating that the timing of the specified action meets the requirement;
if it is detected that the state of the first virtual object does not change while the user performs the specified action, adjusting the display special effect sticker of the user face image in the graphical user interface to a second special effect; the second special effect is used for indicating that the timing of the specified action does not meet the requirement.
In one possible embodiment, the display special effect of the user face image in the graphical display interface includes a special effect sticker added to a face area in the user face image.
In one possible implementation, after obtaining the state information of the first virtual object, the method further includes:
determining a face recognition result of the user based on the state information of the first virtual object.
In a possible embodiment, the determining a face recognition result of the user based on the state information of the first virtual object includes:
if it is detected that the state of the first virtual object changes while the user performs the specified action, determining that the current face recognition result is recognition failure;
and if it is detected that the state of the first virtual object does not change while the user performs the specified action, determining that the current face recognition result is recognition success.
In a possible implementation, after determining the face recognition result, the method further includes:
and under the condition that the current face recognition result is determined to be recognition failure, adding a display special effect representing the recognition failure on the user face image in the face image display area.
In one possible embodiment, the first virtual object is a virtual character, and the state information of the first virtual object includes: status information of whether the avatar has a return event.
In one possible embodiment, the second virtual object is a rotating platform on which a plurality of virtual articles are placed; the interactive special effect comprises an interactive special effect between a virtual article and the mouth in the user face image.
In a second aspect, an embodiment of the present disclosure further provides a special effect display device, including:
the collection module is configured to respond to a task start instruction and collect a face image of a user;
the obtaining module is configured to detect whether the user makes a specified action based on the collected face image of the user, and obtain state information of a first virtual object displayed in a graphical user interface when it is determined that the user makes the specified action;
and the adjusting module is used for adjusting the display special effect of the user face image in the graphical user interface based on the state information of the first virtual object.
In a possible implementation manner, the graphical user interface comprises a face image display area; the device also includes a display module for:
and displaying the collected user face image in the face image display area.
In a possible embodiment, a second virtual object is presented on the graphical user interface; the display module is further configured to:
and when the user is determined to make a specified action based on the collected user face image, displaying an interactive special effect between the second virtual object and the user face image displayed in the face image display area on the graphical user interface.
In a possible implementation manner, the adjusting module, when adjusting the display special effect of the user face image in the graphical user interface based on the state information of the first virtual object, is configured to:
if it is detected that the state of the first virtual object changes while the user performs the specified action, adjust the display special effect of the user face image in the graphical user interface to a first special effect; the first special effect is used for indicating that the timing of the specified action meets the requirement;
if it is detected that the state of the first virtual object does not change while the user performs the specified action, adjust the display special effect sticker of the user face image in the graphical user interface to a second special effect; the second special effect is used for indicating that the timing of the specified action does not meet the requirement.
In one possible embodiment, the display special effect of the user face image in the graphical display interface includes a special effect sticker added to a face area in the user face image.
In a possible implementation, the apparatus further includes a recognition module configured to: after the state information of the first virtual object is obtained, determine a face recognition result of the user based on the state information of the first virtual object.
In a possible implementation, the recognition module, when determining the face recognition result of the user based on the state information of the first virtual object, is configured to:
if it is detected that the state of the first virtual object changes while the user performs the specified action, determine that the current face recognition result is recognition failure;
and if it is detected that the state of the first virtual object does not change while the user performs the specified action, determine that the current face recognition result is recognition success.
In a possible implementation manner, after the face recognition result is determined, the display module is further configured to:
and under the condition that the current face recognition result is determined to be recognition failure, adding a display special effect representing the recognition failure on the user face image in the face image display area.
In one possible embodiment, the first virtual object is a virtual character, and the state information of the first virtual object includes: status information of whether the avatar has a return event.
In one possible embodiment, the second virtual object is a rotating platform on which a plurality of virtual articles are placed; the interactive special effect comprises an interactive special effect between a virtual article and the mouth in the user face image.
In a third aspect, an embodiment of the present disclosure further provides a computer device, including: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating via the bus when the computer device is running, the machine-readable instructions when executed by the processor performing the steps of the first aspect described above, or any possible implementation of the first aspect.
In a fourth aspect, an embodiment of the present disclosure further provides a computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, performs the steps in the first aspect or any one of the possible implementations of the first aspect.
For the effect description of the special effect display apparatus, the electronic device, and the computer-readable storage medium, reference is made to the description of the special effect display method, and details are not repeated here.
According to the special effect display method and device provided by the embodiment of the disclosure, when the user is determined to make a specified action based on the collected face image, the state information of the first virtual object is obtained, and then the display special effect of the user face image in the graphical user interface is adjusted based on the state information of the first virtual object; in the process, the type of the special effect displayed by the user face in the graphical user interface is related to the time when the user makes the specified action and the state information of the first virtual object, so that on one hand, various special effects are displayed, the style of the special effects is enriched, on the other hand, the interaction between the user and the graphical user interface is increased, and the display mode of the special effects is enriched.
Furthermore, face recognition can be performed on the user based on the state information of the first virtual object. Because the time at which the user makes the specified action needs to match the time at which the state information of the first virtual object changes, and the time at which the state information changes is random, the current user cannot pass face recognition through illegitimate means such as video clipping, which improves the reliability of the face recognition process.
In order to make the aforementioned objects, features and advantages of the present disclosure more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings required in the embodiments are briefly described below. The drawings, which are incorporated in and form a part of the specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the technical solutions of the present disclosure. It is appreciated that the following drawings depict only certain embodiments of the disclosure and are therefore not to be considered limiting of its scope; those skilled in the art can derive other related drawings from these drawings without inventive effort.
FIG. 1 is a flow chart illustrating a special effect display method provided by an embodiment of the present disclosure;
FIG. 2 illustrates a graphical user presentation interface schematic provided by an embodiment of the present disclosure;
FIG. 3 is a flow chart illustrating a method for training a motion recognition model provided by an embodiment of the present disclosure;
FIG. 4 is a schematic diagram illustrating an architecture of a special effects display apparatus provided in an embodiment of the present disclosure;
fig. 5 shows a schematic structural diagram of an electronic device provided in an embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present disclosure more clear, the technical solutions of the embodiments of the present disclosure will be described clearly and completely with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are only a part of the embodiments of the present disclosure, not all of the embodiments. The components of the embodiments of the present disclosure, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present disclosure, presented in the figures, is not intended to limit the scope of the claimed disclosure, but is merely representative of selected embodiments of the disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the disclosure without making creative efforts, shall fall within the protection scope of the disclosure.
In the related art, in order to attract users, application programs add display special effects to enrich the interface display effect. However, the displayed special effect is fixed, and no interaction occurs between the application program and the user during the special effect display, so the special effect display mode is monotonous.
Based on the above, the present disclosure provides a special effect display method, which can determine what kind of display special effect is displayed based on the time when the user makes the designated action and the time when the state information of the first virtual object changes, so that on one hand, a plurality of display special effects are displayed, the style of the display special effects is enriched, on the other hand, the interaction between the user and the graphical user interface is increased, and the display mode of the display special effects is enriched.
The above drawbacks were identified by the inventors after careful practical study; therefore, both the discovery of the above problems and the solutions proposed by the present disclosure for those problems should be regarded as contributions of the inventors made in the course of this disclosure.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
To facilitate understanding of the embodiments, the special effect display method disclosed in the embodiments of the present disclosure is first described in detail. The execution subject of the special effect display method provided in the embodiments of the present disclosure is generally a computer device with certain computing capability, for example: a terminal device, which may be user equipment (UE), a mobile device, a user terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, a vehicle-mounted device or a wearable device, or a server or other processing device. In some possible implementations, the special effect display method may be implemented by a processor calling computer-readable instructions stored in a memory.
The computer device executing the special effect display method may have a built-in image acquisition apparatus or be externally connected with an image acquisition apparatus. When the computer device is externally connected with the image acquisition apparatus, the connection may be wired or wireless, and the wireless connection may be, for example, a Bluetooth connection or a wireless fidelity (Wi-Fi) connection.
The following describes the special effect display method provided by the embodiment of the present disclosure by taking the execution subject as the terminal device.
Referring to fig. 1, a flowchart of a special effect displaying method provided by the embodiment of the present disclosure includes the following steps:
step 101, responding to a task starting instruction, and collecting a user face image authorized by a user.
Step 102, detecting whether the user performs the specified action or not based on the collected face image of the user authorized by the user, and acquiring the state information of the first virtual object displayed in the graphical user interface when the user is determined to perform the specified action.
And 103, adjusting the display special effect of the user face image in the graphical user interface based on the state information of the first virtual object.
The following is a detailed description of the above steps 101 to 103.
For step 101:
The task start instruction in step 101 may be generated after the user clicks a start button on the screen, and the clicking manner includes, but is not limited to, single click, double click, long press, and hard press. In another possible embodiment, the task start instruction may also be a voice instruction input by the user: for example, after the voice input by the user is received, it may be converted into text, and whether the text contains a preset keyword (for example, "start verification") is detected; if so, the task start instruction is generated.
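The voice-instruction path described above can be sketched as a simple keyword check on the transcribed text. This is a minimal illustration only: the keyword list and function names are assumptions, not part of the disclosure, and a real system would use a speech-to-text service before this step.

```python
# Illustrative sketch: generate a task start instruction when the
# transcription of the user's voice contains a preset keyword.
# The keywords below are hypothetical examples.
START_KEYWORDS = ("start verification", "begin")

def should_start_task(transcribed_text: str) -> bool:
    """Return True if the transcribed voice input contains any preset
    start keyword, i.e. a task start instruction should be generated."""
    text = transcribed_text.lower()
    return any(keyword in text for keyword in START_KEYWORDS)
```

A matching phrase anywhere in the transcription triggers the start instruction; anything else is ignored.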
In one possible embodiment, after responding to the task start instruction, the image of the face of the user authorized by the user may be collected for a preset time period after responding to the task start instruction, for example, a video containing the face of the user authorized by the user may be collected.
After the task start instruction is responded to, the display picture on the screen can be switched from an initial interface to a graphical user display interface that includes a face image display area; after the face image of the user authorized by the user is collected, the collected face image can be displayed in real time in the face image display area of the graphical user display interface.
In order to increase interestingness, when the collected user face image authorized by the user is displayed in the face image display area in the graphical user display interface, special effect stickers can be arranged in the face image display area. In one possible application scenario, after a facial image of a user authorized by the user is acquired, user attributes of the user may be identified through the facial image of the user, for example, the gender, age, and the like of the user may be identified, and then a special effect sticker matching the user attributes is added to the facial image display area based on the identified user attributes.
For example, the positions of the first virtual object, the second virtual object, and the face image display area in the graphical user display interface may be as shown in fig. 2.
In a possible implementation manner, in order to increase the interest, a preset music special effect can be displayed after a user face image authorized by a user is collected in response to a task starting instruction.
With respect to step 102:
The specified action made by the user may be, for example, opening the mouth, blinking, or the like, and one or more actions may be specified.
When detecting whether the user makes a specified action, the method can be any one of the following methods:
the method comprises the steps of identifying key points in a collected user face image authorized by a user, and determining whether the user in the user face image makes a specified action or not based on the position coordinates of the key points.
The key points in the face image of the user may include, for example, corners of the mouth, corners of the eyes, and the like.
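As a hedged sketch of Method 1, a mouth-opening check can be derived from the coordinates of the mouth key points, for example by comparing the vertical lip opening to the mouth width. The aspect-ratio heuristic and the threshold value below are illustrative assumptions, not prescribed by the disclosure.

```python
import math

def euclidean(p, q):
    """Distance between two (x, y) key points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def is_mouth_open(upper_lip, lower_lip, left_corner, right_corner,
                  ratio_threshold=0.5):
    """Treat the mouth as open when the vertical opening (upper to
    lower lip) is large relative to the mouth width (corner to corner).
    The 0.5 threshold is an illustrative choice."""
    opening = euclidean(upper_lip, lower_lip)
    width = euclidean(left_corner, right_corner)
    return width > 0 and opening / width > ratio_threshold
```

The same pattern extends to other specified actions, e.g. blinking detected from eye-corner and eyelid key points.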
Method 2: inputting the collected user face image authorized by the user into a pre-trained action recognition model, and recognizing, based on the action recognition model, whether the user in the user face image makes the specified action.
In the training process of the action recognition model, the label added to a sample image is an action label, which represents the action made by the user in the sample image; it should be noted that the action labels include the specified action.
Specifically, after the collected user face image authorized by the user is input into the pre-trained motion recognition model, the motion recognition model can predict the probability of each motion corresponding to the input user face image, and then determine the motion with the maximum probability as the motion made by the user of the input user face image.
In a possible implementation, the method for training the motion recognition model may be as shown in fig. 3, and includes the following steps:
step 301, sample images are obtained, and each sample image is provided with a corresponding action label.
The action label corresponding to the sample image can be manually added according to the sample image.
Step 302, inputting the sample images into a motion recognition model to be trained, and predicting to obtain a motion corresponding to each sample image.
After a sample image is input into the action recognition model to be trained, the probability that the sample image corresponds to each action is output, and the action with the maximum probability is then determined as the action corresponding to the sample image.
Step 303, determining the accuracy of the training process based on the action predicted for each sample image and the action label corresponding to that sample image, and adjusting the model parameters when the accuracy does not meet the condition.
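Steps 302 and 303 above can be sketched as follows. This is a minimal illustration of the argmax prediction and accuracy computation only; the model itself (which would output the per-action probabilities) is deliberately omitted, and all names are assumptions.

```python
def predict_action(probabilities: dict) -> str:
    """Step 302: the action with the maximum predicted probability is
    taken as the action corresponding to the input image."""
    return max(probabilities, key=probabilities.get)

def training_accuracy(predicted_actions, action_labels) -> float:
    """Step 303: fraction of sample images whose predicted action
    matches the manually added action label."""
    correct = sum(p == l for p, l in zip(predicted_actions, action_labels))
    return correct / len(action_labels)
```

When the returned accuracy falls below the required condition, the model parameters would be adjusted and training repeated.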
In a possible implementation manner, a second virtual object can be displayed in the graphical user display interface; when the user is determined to make a specified action based on the acquired face image, the interactive special effect between the second virtual object and the face image displayed in the face image display area can be displayed on the graphical user interface.
For example, the second virtual object may be a rotating platform on which a plurality of virtual articles are placed, and the interactive special effect between the second virtual object and the user face image displayed in the face image display area may be an interactive special effect between a virtual article and the mouth of the user face image. For example, if the specified action is opening the mouth, the virtual article may be virtual food, and the interactive special effect may be that, when the user opens the mouth, the food moves toward the mouth position while shrinking and disappears at the mouth position, forming an interactive special effect of the user opening the mouth to eat the virtual food.
In one possible application scenario, the first virtual object may be a virtual character, and the state information of the first virtual object may be state information of whether a return event occurs to the virtual character.
For step 103:
the display special effect of the user face image in the graphical user interface comprises a special effect paster added to a face area in the user face image.
When the display special effect sticker of the user face image in the graphical user interface is adjusted based on the state information of the first virtual object, the following two cases are included:
Case 1: if it is detected that the state of the first virtual object changes while the user makes the specified action, the display special effect sticker of the user face image in the graphical user interface is adjusted to a first special effect; the first special effect is used for indicating that the timing of the specified action meets the requirement.
Case 2: if it is detected that the state of the first virtual object does not change while the user makes the specified action, the display special effect sticker of the user face image in the graphical user interface is adjusted to a second special effect; the second special effect is used for indicating that the timing of the specified action does not meet the requirement.
The display special effect of the user face in the graphical user interface is adjusted based on the state information of the first virtual object, so that the interaction with the user can be increased in the special effect display process, and the display mode of the display special effect is enriched.
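The two-case rule above reduces to a single branch on whether the first virtual object's state changed during the action. The sketch below is only an illustration of that selection logic; the sticker identifiers are hypothetical names, not assets defined by the disclosure.

```python
def select_effect_sticker(state_changed_during_action: bool) -> str:
    """Choose the display special effect sticker for the user face image
    based on the first virtual object's state information.
    Case 1: state changed while the user acted -> first special effect.
    Case 2: state unchanged while the user acted -> second special effect."""
    return "first_effect" if state_changed_during_action else "second_effect"
```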
In the related art, when face recognition is performed, a face photo is mainly acquired through an image acquisition apparatus of the device, biological features are then extracted from the acquired face photo, and the extracted biological features are matched with the biological features stored in a database, so as to recognize the identity of the user. However, this method cannot guarantee that the acquired photo is a real-time photo of a live user: through counterfeiting means, the image acquisition apparatus may capture something other than a real photo of the current user, yet the biological features of the corresponding user can still be extracted from the captured photo, so the reliability of this face recognition method is low.
In addition, to prevent the above situations, some face recognition technologies require the user to perform actions such as closing the eyes or opening the mouth during face recognition. However, because this method requires fixed actions, the user can forge the detection object by means of a clipped video, so the security is also low.
Based on this, in another embodiment of the present application, after the state information of the first virtual object is obtained, a face recognition result of the user may also be determined based on the state information of the first virtual object.
Specifically, when determining the face recognition result based on the state information of the first virtual object, the following two cases are included:
in the first situation, if it is detected that the state of the first virtual object changes when the user performs the specified action, it is determined that the current face recognition result is recognition failure.
In one embodiment of the disclosure, under the condition that the current face recognition result is determined to be the recognition failure, a display special effect representing the recognition failure can be added on the user face image in the face image display area; or, the special effect paster arranged in the face image display area can be replaced by the display special effect used for representing the recognition failure under the condition that the current face recognition result is determined to be the recognition failure.
In addition, under the condition that the current face recognition result is determined to be failed in recognition, besides the display special effect for representing the failure in recognition, a music special effect for representing the failure in recognition can be displayed for prompting the failure in recognition of the user.
In addition, in order to improve recognition accuracy, if it is detected that the state of the first virtual object changes while the user makes the specified action, acquisition of the user face image may be suspended for a first preset duration after the first virtual object changes, then resumed after the first preset duration elapses, and the duration during which the state of the first virtual object remains unchanged is recorded.
Case 2: if it is detected that the state of the first virtual object does not change while the user makes the specified action, it is determined that the current face recognition result is recognition success.
And under the condition that the current face recognition result is successful, a display special effect for representing successful recognition and a music special effect for representing successful recognition can be added for prompting the user of successful recognition.
In a possible implementation manner, in order to improve recognition accuracy, if it is detected that the state of the first virtual object does not change while the user makes the specified action, the duration for which the user holds the specified action is recorded, and user points are determined according to that duration. Then, within a preset time from the start of collecting the user face image authorized by the user, the total points accumulated over all intervals during which the user makes the specified action and the state of the first virtual object remains unchanged are determined; when the total points exceed a set point threshold, the face recognition result of the user is determined to be recognition success, otherwise recognition failure.
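A minimal sketch of this point-accumulation rule follows. Two assumptions are made explicit here because the disclosure does not fix them: points are taken as one per second of held action, and the threshold value is illustrative.

```python
def recognition_result(held_durations, point_threshold=10.0):
    """Accumulate user points over the intervals (in seconds) during
    which the user held the specified action while the first virtual
    object's state stayed unchanged; the result is recognition success
    when the total exceeds the threshold, failure otherwise.
    Assumption: one point per second of held action."""
    total_points = sum(held_durations)
    return "success" if total_points > point_threshold else "failure"
```

For instance, two qualifying intervals of 6.0 s and 5.5 s accumulate 11.5 points and pass a 10-point threshold, while a single 3.0 s interval does not.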
In order to improve the safety of face recognition, before determining the face recognition result of the user based on the state information of the first virtual object, the identity of the current user may be verified. Specifically, login verification can be performed based on the collected face image of the user authorized by the user, and after the verification is passed, whether the user performs a specified action or not is detected.
When login verification is performed based on the acquired user-authorized face image, facial features may be extracted from that image and matched against facial features stored in a database in advance; if the matching succeeds, the verification is determined to have passed.
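The feature-matching step above can be sketched with a simple similarity comparison (cosine similarity and the threshold value are illustrative choices; the disclosure does not specify a particular matching metric):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors of equal length."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def login_verify(extracted_features, stored_features, threshold=0.9):
    """Matches the facial features extracted from the acquired image against
    the features stored in the database in advance; verification passes when
    the similarity reaches the threshold (threshold value is illustrative)."""
    return cosine_similarity(extracted_features, stored_features) >= threshold
```

In practice the feature vectors would come from a face-embedding model; here they are plain lists of floats.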
According to the special effect display method provided by the embodiment of the disclosure, when it is determined, based on the collected face image, that the user makes a specified action, the state information of the first virtual object is obtained, and the display special effect of the user face image in the graphical user interface is then adjusted based on that state information. In this process, the type of special effect displayed on the user's face in the graphical user interface depends both on the time at which the user makes the specified action and on the state information of the first virtual object. On one hand, this yields a variety of special effects and enriches their style; on the other hand, it increases the interaction between the user and the graphical user interface and enriches the manner in which special effects are displayed.
Furthermore, face recognition can be performed on the user based on the state information of the first virtual object: the time at which the user performs the specified action needs to match the time at which the state information of the first virtual object changes, and the latter is random. The current user therefore cannot pass face recognition through illegal means such as video editing, which improves the reliability of the face recognition process.
It will be understood by those skilled in the art that, in the method of the present disclosure, the order in which the steps are written does not imply a strict order of execution or any limitation on the implementation; the specific order of execution of the steps should be determined by their function and possible inherent logic.
Based on the same inventive concept, the embodiment of the present disclosure further provides a special effect display apparatus corresponding to the special effect display method. Since the principle by which the apparatus solves the problem is similar to that of the special effect display method described above, the implementation of the apparatus may refer to the implementation of the method, and repeated details are not described again.
Referring to fig. 4, which is a schematic diagram of an architecture of a special effect display apparatus according to a fifth embodiment of the present disclosure, the apparatus includes: an acquisition module 401, an obtaining module 402, an adjusting module 403, a presentation module 404 and a recognition module 405; wherein:
the acquisition module 401 is configured to respond to a task start instruction and acquire a user face image;
an obtaining module 402, configured to detect whether a user makes a specified action based on an acquired face image of the user, and obtain state information of a first virtual object displayed in a graphical user interface when it is determined that the user makes the specified action;
an adjusting module 403, configured to adjust a display special effect of the user face image in the graphical user interface based on the state information of the first virtual object.
In a possible implementation manner, the graphical user interface comprises a face image display area; the device further comprises a presentation module 404, the presentation module 404 being configured to:
and displaying the collected user face image in the face image display area.
In a possible embodiment, a second virtual object is presented on the graphical user interface; the display module 404 is further configured to:
and when the user is determined to make a specified action based on the collected user face image, displaying an interactive special effect between the second virtual object and the user face image displayed in the face image display area on the graphical user interface.
In a possible implementation manner, the adjusting module 403, when adjusting the display special effect of the facial image of the user in the graphical user interface based on the state information of the first virtual object, is configured to:
if it is detected that the state of the first virtual object changes when the user performs the specified action, adjust the display special effect of the user face image in the graphical user interface to a first special effect; the first special effect is used for indicating that the time for making the specified action meets the requirement;
if it is detected that the state of the first virtual object does not change when the user performs the specified action, adjust the display special effect of the user face image in the graphical user interface to a second special effect; the second special effect is used for indicating that the time for making the specified action does not meet the requirement.
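The two-branch effect selection above reduces to a single mapping from the detection outcome to an effect (the effect identifiers are placeholders, not names defined by the disclosure):

```python
def select_display_effect(state_changed_during_action):
    """Maps whether the first virtual object's state changed while the user
    performed the specified action to the effect applied to the user's face
    image in the graphical user interface."""
    if state_changed_during_action:
        return "first_effect"   # time of the specified action meets the requirement
    return "second_effect"      # time of the specified action does not meet it
```

The same mapping would then be rendered by the presentation module, e.g. as a sticker on the face area.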
In one possible embodiment, the display special effect of the user face image in the graphical display interface includes a special effect sticker added to a face area in the user face image.
In a possible implementation, the apparatus further includes a recognition module 405, and the recognition module 405 is configured to: after the state information of the first virtual object is acquired, determine the face recognition result of the user based on the state information of the first virtual object.
In a possible implementation, the recognition module 405, when determining the face recognition result of the user based on the state information of the first virtual object, is configured to:
if it is detected that the state of the first virtual object changes when the user performs the specified action, determine that the current face recognition result is a recognition failure;
if it is detected that the state of the first virtual object does not change when the user performs the specified action, determine that the current face recognition result is a recognition success.
In a possible implementation, after determining the face recognition result, the presentation module 404 is further configured to:
and under the condition that the current face recognition result is determined to be recognition failure, adding a display special effect representing the recognition failure on the user face image in the face image display area.
In one possible embodiment, the first virtual object is a virtual character, and the state information of the first virtual object includes: state information of whether the virtual character turns around.
In one possible embodiment, the second virtual object is a rotating table on which a plurality of virtual articles are placed; the interactive special effect includes an interactive special effect between a virtual article and the mouth in the face image.
The description of the processing flow of each module in the device and the interaction flow between the modules may refer to the related description in the above method embodiments, and will not be described in detail here.
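The module structure of fig. 4 can be sketched as a thin pipeline in which the acquisition/obtaining side, the first virtual object's state, and the adjusting module are plugged in as plain callables (all names and behaviors below are illustrative, not the disclosure's implementation):

```python
class SpecialEffectApparatus:
    """Minimal sketch of the apparatus: detect the specified action on a
    frame, obtain the first virtual object's state, and choose the display
    special effect from that state."""

    def __init__(self, detect_action, get_object_state, choose_effect):
        self.detect_action = detect_action        # acquisition/obtaining modules
        self.get_object_state = get_object_state  # first virtual object state
        self.choose_effect = choose_effect        # adjusting module

    def handle_frame(self, face_image):
        # No effect adjustment when no specified action is detected.
        if not self.detect_action(face_image):
            return None
        state = self.get_object_state()
        return self.choose_effect(state)
```

Wiring in stub callables shows the flow end to end without any real image processing.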
Based on the same technical concept, the embodiment of the disclosure also provides an electronic device. Referring to fig. 5, a schematic structural diagram of an electronic device provided in the embodiment of the present disclosure includes a processor 501, a memory 502, and a bus 503. The memory 502 is used for storing execution instructions and includes an internal memory 5021 and an external storage 5022. The internal memory 5021 is used for temporarily storing operation data in the processor 501 and data exchanged with the external storage 5022, such as a hard disk; the processor 501 exchanges data with the external storage 5022 through the internal memory 5021. When the electronic device 500 operates, the processor 501 communicates with the memory 502 through the bus 503, so that the processor 501 executes the following instructions:
responding to a task starting instruction, and acquiring a user face image;
detecting whether a user makes a specified action or not based on the collected face image of the user, and acquiring state information of a first virtual object displayed in a graphical user interface when the user is determined to make the specified action;
and adjusting the display special effect of the face image of the user in the graphical user interface based on the state information of the first virtual object.
In a possible implementation, in the instructions executed by the processor 501, the graphical user interface includes a face image display area; the method further includes:
and displaying the collected user face image in the face image display area.
In a possible implementation, in the instructions executed by the processor 501, a second virtual object is presented on the graphical user interface; the method further includes:
and when the user is determined to make a specified action based on the collected user face image, displaying an interactive special effect between the second virtual object and the user face image displayed in the face image display area on the graphical user interface.
In a possible implementation, in the instructions executed by the processor 501, adjusting the display special effect of the user face image in the graphical user interface based on the state information of the first virtual object includes:
if it is detected that the state of the first virtual object changes when the user performs the specified action, adjusting the display special effect of the user face image in the graphical user interface to a first special effect; the first special effect is used for indicating that the time for making the specified action meets the requirement;
if it is detected that the state of the first virtual object does not change when the user performs the specified action, adjusting the display special effect of the user face image in the graphical user interface to a second special effect; the second special effect is used for indicating that the time for making the specified action does not meet the requirement.
In one possible embodiment, the display special effect of the user face image in the graphical display interface includes a special effect sticker added to a face area in the user face image.
In a possible implementation manner, after the processor 501 executes the instructions to acquire the state information of the first virtual object, the method further includes:
determining a face recognition result of the user based on the state information of the first virtual object.
In a possible implementation, in the instructions executed by the processor 501, determining the face recognition result of the user based on the state information of the first virtual object includes:
if it is detected that the state of the first virtual object changes when the user performs the specified action, determining that the current face recognition result is a recognition failure;
if it is detected that the state of the first virtual object does not change when the user performs the specified action, determining that the current face recognition result is a recognition success.
In a possible implementation, in the instructions executed by the processor 501, after determining the face recognition result, the method further includes:
and under the condition that the current face recognition result is determined to be recognition failure, adding a display special effect representing the recognition failure on the user face image in the face image display area.
In a possible implementation manner, in the instructions executed by the processor 501, the first virtual object is a virtual character, and the state information of the first virtual object includes: state information of whether the virtual character turns around.
In a possible implementation, in the instructions executed by the processor 501, the second virtual object is a rotating table on which a plurality of virtual articles are placed; the interactive special effect includes an interactive special effect between a virtual article and the mouth in the face image.
The embodiment of the present disclosure further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program performs the steps of the special effect displaying method in the foregoing method embodiment. The storage medium may be a volatile or non-volatile computer-readable storage medium.
The computer program product of the special effect display method provided in the embodiments of the present disclosure includes a computer-readable storage medium storing a program code, where instructions included in the program code may be used to execute steps of the special effect display method in the above method embodiments, which may be referred to specifically for the above method embodiments, and details are not repeated here.
The embodiments of the present disclosure also provide a computer program which, when executed by a processor, implements any one of the methods of the foregoing embodiments. The computer program product may be embodied in hardware, software, or a combination thereof. In an alternative embodiment, the computer program product is embodied in a computer storage medium; in another alternative embodiment, the computer program product is embodied in a software product, such as a Software Development Kit (SDK).
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and the apparatus described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again. In the several embodiments provided in the present disclosure, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present disclosure. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that the above-mentioned embodiments are merely specific embodiments of the present disclosure, used to illustrate the technical solutions of the present disclosure rather than to limit them, and the protection scope of the present disclosure is not limited thereto. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person skilled in the art may still modify the technical solutions described in the foregoing embodiments, readily conceive of changes to them, or make equivalent substitutions for some of their technical features within the technical scope of the present disclosure. Such modifications, changes, or substitutions do not depart from the spirit and scope of the embodiments of the present disclosure and shall be covered within its protection scope. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.