CN111240482B - Special effect display method and device - Google Patents

Special effect display method and device

Info

Publication number
CN111240482B
CN111240482B (application CN202010027410.0A)
Authority
CN
China
Prior art keywords
user
face image
virtual object
special effect
display
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010027410.0A
Other languages
Chinese (zh)
Other versions
CN111240482A (en
Inventor
叶欣靖
刘佳成
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Douyin Vision Co Ltd
Douyin Vision Beijing Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd filed Critical Beijing ByteDance Network Technology Co Ltd
Priority to CN202010027410.0A priority Critical patent/CN111240482B/en
Publication of CN111240482A publication Critical patent/CN111240482A/en
Application granted granted Critical
Publication of CN111240482B publication Critical patent/CN111240482B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G06V40/171Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition

Abstract

The disclosure provides a special effect display method and device, comprising: collecting a face image of a user in response to a task start instruction; detecting, based on the collected face image, whether the user makes a specified action, and acquiring state information of a first virtual object displayed in a graphical user interface when it is determined that the user makes the specified action; and adjusting the display special effect of the user's face image in the graphical user interface based on the state information of the first virtual object.

Description

Special effect display method and device
Technical Field
The disclosure relates to the technical field of computers, in particular to a special effect display method and device.
Background
With the development of information technology, the variety of application programs keeps increasing. To attract users, more and more applications provide various interesting display special effects. However, most of the special effects currently provided are fixed, so the display effect is uniform; and because a fixed special effect requires no interaction from the user during display, the presentation is monotonous.
Disclosure of Invention
Embodiments of the present disclosure provide at least a special effect display method and device for improving the security of face recognition.
In a first aspect, an embodiment of the present disclosure provides a special effect display method, including:
collecting a face image of a user in response to a task start instruction;
detecting, based on the collected face image, whether the user makes a specified action, and acquiring state information of a first virtual object displayed in a graphical user interface when it is determined that the user makes the specified action;
and adjusting the display special effect of the user's face image in the graphical user interface based on the state information of the first virtual object.
In one possible implementation, the graphical user interface includes a face image display area; the method further comprises the steps of:
and displaying the acquired face image of the user in the face image display area.
In a possible implementation manner, a second virtual object is displayed on the graphical user interface; the method further comprises the steps of:
and when the user is determined to make a specified action based on the collected face image of the user, displaying the interactive special effect between the second virtual object and the face image of the user displayed in the face image display area on the graphical user interface.
In a possible implementation manner, the adjusting the display special effects of the user face image in the graphical user interface based on the state information of the first virtual object includes:
if the state of the first virtual object changes when the user is detected making the specified action, adjusting the display special effect of the user's face image in the graphical user interface to a first special effect, where the first special effect indicates that the timing of the specified action meets the requirement;
if the state of the first virtual object does not change when the user is detected making the specified action, adjusting the display special effect of the user's face image in the graphical user interface to a second special effect, where the second special effect indicates that the timing of the specified action does not meet the requirement.
In a possible implementation manner, the display effect of the face image of the user in the graphic display interface includes an effect sticker added to a face region in the face image of the user.
In a possible implementation manner, after acquiring the state information of the first virtual object, the method further includes:
and determining the face recognition result of the user based on the state information of the first virtual object.
In a possible implementation manner, the determining the face recognition result of the user based on the state information of the first virtual object includes:
if the state of the first virtual object changes when the user is detected making the specified action, determining that the current face recognition result is a failure;
and if the state of the first virtual object does not change when the user is detected making the specified action, determining that the current face recognition result is a success.
In a possible implementation manner, after determining the face recognition result, the method further includes:
and when it is determined that the current face recognition result is a failure, adding a display special effect representing recognition failure to the user's face image in the face image display area.
In a possible implementation manner, the first virtual object is a virtual character, and the state information of the first virtual object includes state information indicating whether a turn-around event occurs to the virtual character.
In a possible implementation manner, the second virtual object is a turntable on which a plurality of virtual articles are placed; the interactive special effect includes an interaction special effect between a virtual article and the mouth in the face image.
In a second aspect, an embodiment of the present disclosure further provides a special effect display device, including:
the acquisition module is used for responding to the task starting instruction and acquiring a face image of the user;
the obtaining module is used for detecting, based on the collected face image of the user, whether the user makes a specified action, and acquiring state information of a first virtual object displayed in the graphical user interface when it is determined that the user makes the specified action;
and the adjusting module is used for adjusting the display special effect of the user face image in the graphical user interface based on the state information of the first virtual object.
In one possible implementation, the graphical user interface includes a face image display area; the device also comprises a display module for:
and displaying the acquired face image of the user in the face image display area.
In a possible implementation manner, a second virtual object is displayed on the graphical user interface; the display module is further configured to:
and when the user is determined to make a specified action based on the collected face image of the user, displaying the interactive special effect between the second virtual object and the face image of the user displayed in the face image display area on the graphical user interface.
In a possible implementation manner, the adjusting module is configured to, when adjusting the display special effects of the user face image in the graphical user interface based on the state information of the first virtual object:
if the state of the first virtual object changes when the user is detected making the specified action, adjust the display special effect of the user's face image in the graphical user interface to a first special effect, where the first special effect indicates that the timing of the specified action meets the requirement;
if the state of the first virtual object does not change when the user is detected making the specified action, adjust the display special effect of the user's face image in the graphical user interface to a second special effect, where the second special effect indicates that the timing of the specified action does not meet the requirement.
In a possible implementation manner, the display effect of the face image of the user in the graphic display interface includes an effect sticker added to a face region in the face image of the user.
In a possible embodiment, the device further includes an identification module, where the identification module is configured to: after the state information of the first virtual object is acquired, determining the face recognition result of the user based on the state information of the first virtual object.
In a possible implementation manner, the identification module is configured to, when determining a face recognition result of the user based on the state information of the first virtual object:
if the state of the first virtual object changes when the user is detected making the specified action, determine that the current face recognition result is a failure;
and if the state of the first virtual object does not change when the user is detected making the specified action, determine that the current face recognition result is a success.
In a possible implementation manner, after determining the face recognition result, the display module is further configured to:
and under the condition that the face recognition result of the current time is determined to be recognition failure, adding a display special effect representing recognition failure on the face image of the user in the face image display area.
In a possible implementation manner, the first virtual object is a virtual character, and the state information of the first virtual object includes state information indicating whether a turn-around event occurs to the virtual character.
In a possible implementation manner, the second virtual object is a turntable on which a plurality of virtual articles are placed; the interactive special effect includes an interaction special effect between a virtual article and the mouth in the face image.
In a third aspect, embodiments of the present disclosure further provide a computer device, comprising: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory in communication via the bus when the computer device is running, the machine-readable instructions when executed by the processor performing the steps of the first aspect, or any of the possible implementations of the first aspect.
In a fourth aspect, the presently disclosed embodiments also provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the first aspect, or any of the possible implementations of the first aspect.
The effect descriptions of the special effect display device, the electronic device, and the computer readable storage medium refer to the description of the special effect display method, and are not repeated here.
According to the special effect display method and device provided by the embodiments of the present disclosure, when it is determined from the collected face image that the user makes a specified action, the state information of the first virtual object is acquired, and the display special effect of the user's face image in the graphical user interface is then adjusted based on that state information. In this process, the type of special effect displayed depends on both the time at which the user makes the specified action and the state information of the first virtual object. On one hand, a variety of special effects are displayed, enriching their patterns; on the other hand, interaction between the user and the graphical user interface is increased, enriching the way special effects are displayed.
Furthermore, the user can be recognized based on the state information of the first virtual object: the time at which the user makes the specified action must match the time at which the state information of the first virtual object changes, and the latter is random. In this case, the current user cannot pass face recognition through illegal means such as video editing, which improves the authenticity and reliability of the face recognition process.
The foregoing objects, features and advantages of the disclosure will be more readily apparent from the following detailed description of the preferred embodiments taken in conjunction with the accompanying drawings.
Drawings
To more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings required for the embodiments are briefly described below. These drawings, which are incorporated in and constitute a part of the specification, show embodiments consistent with the present disclosure and, together with the description, serve to explain its technical solutions. It should be understood that the following drawings illustrate only certain embodiments of the present disclosure and are therefore not to be considered limiting of its scope; a person of ordinary skill in the art may derive other related drawings from them without inventive effort.
FIG. 1 shows a flow chart of a special effect display method provided by an embodiment of the present disclosure;
FIG. 2 illustrates a graphical user presentation interface schematic provided by an embodiment of the present disclosure;
FIG. 3 illustrates a flow chart of a method of training an action recognition model provided by an embodiment of the present disclosure;
fig. 4 shows a schematic architecture of a special effect display device according to an embodiment of the disclosure;
fig. 5 shows a schematic structural diagram of an electronic device provided by an embodiment of the disclosure.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present disclosure more apparent, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the drawings in the embodiments of the present disclosure, and it is apparent that the described embodiments are only some embodiments of the present disclosure, but not all embodiments. The components of the embodiments of the present disclosure, which are generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present disclosure provided in the accompanying drawings is not intended to limit the scope of the disclosure, as claimed, but is merely representative of selected embodiments of the disclosure. All other embodiments, which can be made by those skilled in the art based on the embodiments of this disclosure without making any inventive effort, are intended to be within the scope of this disclosure.
In the related art, applications attract users by adding display special effects to enrich the interface. However, most of these special effects are fixed, there is no interaction between the special effect and the user during display, and the display mode is monotonous.
In view of this, the special effect display method provided by the present disclosure determines which special effect to display based on the time at which the user makes a specified action and the time at which the state information of the first virtual object changes. On one hand, this presents a variety of special effects and enriches their patterns; on the other hand, it increases the interaction between the user and the graphical user interface and enriches the way special effects are displayed.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures.
To facilitate understanding of the present embodiment, a special effect display method disclosed in the embodiments of the present disclosure is first described in detail. The execution subject of the method is generally a computer device with a certain computing capability, for example a terminal device, a server, or another processing device. The terminal device may be a user equipment (UE), a mobile device, a user terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, or the like. In some possible implementations, the special effect display method may be implemented by a processor invoking computer-readable instructions stored in a memory.
The computer device executing the special effect display method may itself be equipped with an image acquisition device, or may be connected to an external image acquisition device in a wired or wireless manner; the wireless connection may be, for example, a Bluetooth connection or a wireless local area network (Wi-Fi) connection.
The special effect display method provided by the embodiment of the present disclosure is described below by taking the execution body as a terminal device as an example.
Referring to fig. 1, a flowchart of a special effect display method provided by an embodiment of the disclosure includes the following steps:
and step 101, responding to a task starting instruction, and collecting a user face image authorized by a user.
Step 102, based on the collected user face image authorized by the user, detecting whether the user makes a specified action, and acquiring state information of a first virtual object displayed in a graphical user interface when determining that the user makes the specified action.
And step 103, adjusting the display special effect of the user face image in the graphical user interface based on the state information of the first virtual object.
The following is a detailed description of the above steps 101 to 103.
For step 101:
the task start instruction in step 101 may be generated after the user clicks a start button in the screen, where the clicking manner includes but is not limited to single machine, double click, long press, and double press; in another possible implementation manner, the task start instruction may also be a voice instruction input by the user, for example, after receiving the voice input by the user, the voice input by the user may be converted into text, and whether the text contains a preset keyword (for example, may be start verification) is detected, and if so, the task start instruction is generated.
In one possible embodiment, after the task start instruction is received, the user-authorized face image may be collected for a preset time period, for example by capturing a video that includes the user's face.
After the task start instruction is received, the screen may switch from the initial interface to a graphical user display interface that includes a face image display area; the collected, user-authorized face image can then be displayed in real time in that area.
To increase interest, special effect stickers may be placed in the face image display area while it shows the collected face image. In one possible application scenario, after the user-authorized face image is acquired, a user attribute can be identified from the image, and a special effect sticker matching that attribute can be added to the face image display area.
For example, the locations of the first virtual object, the second virtual object, and the face image presentation area in the graphical user presentation interface may be as shown in fig. 2.
In one possible implementation, to increase interest, a preset music special effect may be played after the user-authorized face image is collected in response to the task start instruction.
For step 102:
the specified action made by the user may be, for example, opening a mouth, blinking a eye, etc., and the specified action may include one or more.
In detecting whether a user makes a specified action, the user may be given any one of the following methods:
the method comprises the steps of firstly, identifying key points in acquired user face images authorized by users, and determining whether the users in the user face images do specified actions or not based on position coordinates of the key points.
The key points in the face image of the user may include, for example, corners of mouth, corners of eyes, and the like.
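Method 1 can be sketched with a simple geometric test on the key-point coordinates. The sketch below detects an open mouth from four lip key points using the ratio of lip opening to mouth width; the 0.35 threshold is an illustrative assumption, not a value from the patent.

```python
import math

def dist(p, q):
    """Euclidean distance between two (x, y) key points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def mouth_open(top_lip, bottom_lip, left_corner, right_corner, thresh=0.35):
    """Decide whether the mouth is open from four face key points.

    Compares the vertical lip distance to the mouth width; the threshold
    is a hypothetical tuning value for illustration only.
    """
    width = dist(left_corner, right_corner)
    if width == 0:
        return False  # degenerate key points; treat as no action
    return dist(top_lip, bottom_lip) / width > thresh
```

An eye-blink test could be built the same way from the eye-corner and eyelid key points.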
Method 2: input the collected, user-authorized face image into a pre-trained action recognition model, and use the model to recognize whether the user in the image makes the specified action.
During training of the action recognition model, the label added to each sample image is an action label that represents the action made by the user in that sample image.
Specifically, after the collected, user-authorized face image is input into the pre-trained action recognition model, the model predicts the probability that the input image corresponds to each action, and the action with the highest probability is taken as the action made by the user in the input image.
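The highest-probability selection described above is a plain argmax over the model's per-action probabilities. A minimal sketch (the action names and probabilities are illustrative):

```python
def predict_action(probs: dict) -> str:
    """Pick the action with the highest predicted probability.

    probs maps each candidate action to the probability the model
    assigned to it for the input face image.
    """
    return max(probs, key=probs.get)
```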
In one possible implementation manner, the training method of the motion recognition model may refer to fig. 3, and includes the following steps:
step 301, acquiring sample images, wherein each sample image is provided with a corresponding action label.
The action labels corresponding to the sample images can be manually added according to the sample images.
Step 302, inputting the sample images into a motion recognition model to be trained, and predicting to obtain a motion corresponding to each sample image.
After a sample image is input into the action recognition model to be trained, the probability of each candidate action for that sample image is output, and the action with the highest probability is determined as the action corresponding to the sample image.
Step 303, determining the accuracy of the training process based on the action predicted for each sample image and the action label corresponding to that sample image, and adjusting the model parameters during training when the accuracy does not meet the condition.
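Steps 301 to 303 can be sketched as a predict-compare-adjust loop. This is a schematic only: `model.predict` and `model.adjust` stand in for a real training framework, and the target accuracy and round limit are assumptions, not values from the patent.

```python
def accuracy(predicted, labels):
    """Fraction of sample images whose predicted action matches its label."""
    correct = sum(p == l for p, l in zip(predicted, labels))
    return correct / len(labels)

def train(model, samples, labels, target_acc=0.95, max_rounds=100):
    """Repeat predict -> compare with labels -> adjust parameters
    until the accuracy meets the condition (step 303)."""
    for _ in range(max_rounds):
        preds = [model.predict(s) for s in samples]
        if accuracy(preds, labels) >= target_acc:
            break
        model.adjust(preds, labels)  # update parameters from the errors
    return model
```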
In one possible implementation, a second virtual object may also be presented in the graphical user presentation interface; when the user is determined to make a specified action based on the collected face image, the interactive special effect between the second virtual object and the face image displayed in the face image display area can be displayed on the graphical user interface.
The second virtual object may be a turntable on which a plurality of virtual articles are placed, and the interaction special effect between the second virtual object and the user's face image displayed in the face image display area may be an interaction between a virtual article and the mouth in that face image. For example, if the specified action is opening the mouth, the virtual article may be virtual food: when the user opens the mouth, the food moves toward the mouth position, shrinking as it approaches, and disappears at the mouth, forming the interactive special effect of the user eating the virtual food.
In one possible application scenario, the first virtual object may be a virtual character, and the state information of the first virtual object may be state information of whether a turn-around event occurs to the virtual character.
For step 103:
the display special effects of the user face image in the graphical user interface comprise special effect stickers added in face areas in the user face image.
Adjusting the display special effect of the user's face image in the graphical user interface based on the state information of the first virtual object covers the following two cases:
if the state of the first virtual object changes when the user is detected making the specified action, the display special effect of the user's face image in the graphical user interface is adjusted to a first special effect, where the first special effect indicates that the timing of the specified action meets the requirement;
if the state of the first virtual object does not change when the user is detected making the specified action, the display special effect of the user's face image in the graphical user interface is adjusted to a second special effect, where the second special effect indicates that the timing of the specified action does not meet the requirement.
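The two cases above reduce to a small decision function. A sketch, with the effect names as illustrative placeholders:

```python
def choose_effect(action_detected: bool, state_changed: bool):
    """Select the sticker per the timing rule above: a state change during
    the specified action selects the first effect (timing meets the
    requirement); no state change selects the second effect."""
    if not action_detected:
        return None  # no adjustment until the specified action is made
    return "first_effect" if state_changed else "second_effect"
```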
Adjusting the display special effect of the user's face in the graphical user interface based on the state information of the first virtual object increases interaction with the user during special effect display and enriches the display modes of the special effects.
In the related art, face recognition mainly collects a face photo through the image acquisition device of the equipment, extracts biological features from the collected photo, and matches them against biological features stored in a database to identify the user. However, this method cannot guarantee that the captured photo is actually of the current live user: through forging means, the image acquisition device may capture something other than the current user, yet the corresponding biological features can still be extracted from the captured photo, so the reliability of this face recognition method is low.
In addition, to prevent this, some face recognition technologies require the user to perform actions such as closing the eyes or opening the mouth during recognition. But because the required actions are fixed, a user can forge the detected object by means such as video editing, so the security is still low.
Based on this, in another embodiment of the present disclosure, after the state information of the first virtual object is acquired, the face recognition result of the user may be determined based on the state information of the first virtual object.
Specifically, determining the face recognition result based on the state information of the first virtual object covers the following two cases.
In the first case, if it is detected that the state of the first virtual object changes when the user makes the specified action, the face recognition result of the current attempt is determined to be recognition failure.
In an embodiment of the present disclosure, when it is determined that the current face recognition result is recognition failure, a display special effect representing recognition failure may further be added to the user's face image in the face image display area; alternatively, a special effect sticker already set in the face image display area may be replaced with the display special effect representing recognition failure.
In addition, when the face recognition result of the current attempt is determined to be recognition failure, a music special effect representing recognition failure may be played in addition to the display special effect, so as to prompt the user that recognition has failed.
In addition, to improve recognition accuracy, if it is detected that the user makes the specified action while the state of the first virtual object changes, acquisition of the user's face image may be stopped for a first preset duration after the change, and then restarted. The duration for which the user makes the specified action while the state of the first virtual object does not change is recorded. Within a second preset duration starting from the beginning of the user-authorized acquisition of the face image, the recorded durations are summed: if the total exceeds a preset length, the face recognition result is determined to be recognition success; otherwise, it is determined to be recognition failure.
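The accumulation rule just described can be sketched as follows. The interval representation, parameter names, and default values are assumptions made for illustration; the disclosure specifies only a "first preset duration" pause and a "second preset duration" window, without concrete values.

```python
def recognition_result(action_intervals, window=10.0, required_total=3.0):
    """Decide recognition success from timed action intervals.

    action_intervals: (start, end, state_changed) tuples, each an interval
    (in seconds since acquisition began) during which the user held the
    specified action; state_changed marks whether the first virtual
    object's state changed during that interval.

    Only 'no-change' intervals ending inside the window count toward the
    total; recognition succeeds when the total exceeds required_total.
    """
    total = sum(end - start
                for start, end, state_changed in action_intervals
                if not state_changed and end <= window)
    return "success" if total > required_total else "failure"
```

For example, two no-change intervals of 2 s and 3 s inside the window accumulate to 5 s, exceeding a 3 s requirement, so recognition would succeed under these assumed parameters.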
In the second case, if it is detected that the state of the first virtual object does not change when the user makes the specified action, the face recognition result of the current attempt is determined to be recognition success.
When the face recognition result of the current attempt is recognition success, a display special effect representing successful recognition and a music special effect representing successful recognition may be added, so as to prompt the user that recognition has succeeded.
In one possible implementation manner, to improve recognition accuracy, if it is detected that the state of the first virtual object does not change when the user makes the specified action, the duration of the specified action is recorded and a score is determined from that duration. A total score is then accumulated within a preset duration starting from the beginning of the user-authorized acquisition of the face image, i.e., over the occasions within that duration on which the user makes the specified action while the state of the first virtual object does not change. If the total score exceeds a set score threshold, the user's face recognition result is determined to be recognition success; otherwise, it is determined to be recognition failure.
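A minimal sketch of this score-based variant follows. The linear per-second scoring function and the threshold are placeholder assumptions, since the disclosure does not specify how a duration maps to a score.

```python
def score_based_result(durations, threshold=50.0, score_per_second=10.0):
    """Score-based recognition decision (illustrative parameters).

    durations: seconds, per occasion, that the user held the specified
    action while the first virtual object's state did not change, all
    occurring within the preset window. Each duration is converted to a
    score (here: linearly, an assumed stand-in) and the scores are summed.
    """
    total_score = sum(d * score_per_second for d in durations)
    return "success" if total_score > threshold else "failure"
```

Under these assumed numbers, two 3-second holds score 60 points in total, which exceeds the 50-point threshold, whereas a single 2-second hold (20 points) does not.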
To improve the security of face recognition, the identity of the current user may be verified before the face recognition result is determined based on the state information of the first virtual object. Specifically, login verification may be performed based on the collected user-authorized face image, and only after verification passes is it detected whether the user makes the specified action.
When login verification is performed based on the collected user-authorized face image, facial features may be extracted from the image and matched against facial features stored in advance in a database; if the matching succeeds, verification is determined to have passed.
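The matching step could look roughly like the following, assuming the extracted facial features are fixed-length numeric vectors. Cosine similarity and the 0.8 threshold are illustrative choices, as the disclosure does not name a matching metric.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def login_verified(live_features, enrolled_feature_sets, threshold=0.8):
    """Login verification passes when the features extracted from the
    collected face image match any enrolled feature vector closely enough.
    The metric and threshold are assumptions for this sketch."""
    return any(cosine_similarity(live_features, enrolled) >= threshold
               for enrolled in enrolled_feature_sets)
```

In practice the feature vectors would come from a face-embedding model; this sketch only shows the "match against stored features, pass on success" logic described above.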
According to the special effect display method provided by the embodiments of the present disclosure, when it is determined, based on the collected face image, that the user makes the specified action, the state information of the first virtual object is acquired, and the display special effect of the user's face image in the graphical user interface is then adjusted based on that state information. In this process, the type of display special effect applied to the user's face is related both to the timing of the specified action and to the state information of the first virtual object. On the one hand, multiple display special effects are presented, enriching the variety of special effects; on the other hand, interaction between the user and the graphical user interface is increased, enriching the ways in which special effects are presented.
Furthermore, the user can be identified based on the state information of the first virtual object: the time at which the user makes the specified action must match the time at which the state information of the first virtual object changes, and the latter is random. In this case, the current user cannot pass face recognition by illegitimate means such as video editing, which improves the reliability of the face recognition process.
It will be appreciated by those skilled in the art that, in the methods of the above specific embodiments, the written order of the steps does not imply a strict order of execution; the actual order of execution should be determined by the functions of the steps and their possible inherent logic.
Based on the same inventive concept, an embodiment of the present disclosure further provides a special effect display device corresponding to the special effect display method. Since the principle by which the device solves the problem is similar to that of the special effect display method, the implementation of the device may refer to the implementation of the method, and repeated description is omitted.
Referring to fig. 4, a schematic architecture diagram of a special effect display device according to a fifth embodiment of the present disclosure is shown. The device includes an acquisition module 401, an acquisition module 402, an adjustment module 403, a display module 404 and an identification module 405, wherein:
the acquisition module 401 is used for responding to a task starting instruction and acquiring a face image of a user;
the acquisition module 402 is configured to detect whether a user makes a specified action based on the acquired face image of the user, and acquire state information of a first virtual object displayed in the graphical user interface when determining that the user makes the specified action;
the adjustment module 403 is configured to adjust the display special effect of the user's face image in the graphical user interface based on the state information of the first virtual object.
In one possible implementation, the graphical user interface includes a face image display area; the apparatus further comprises a display module 404, the display module 404 being configured to:
and displaying the acquired face image of the user in the face image display area.
In a possible implementation manner, a second virtual object is displayed on the graphical user interface; the display module 404 is further configured to:
and when the user is determined to make a specified action based on the collected face image of the user, displaying the interactive special effect between the second virtual object and the face image of the user displayed in the face image display area on the graphical user interface.
In a possible implementation manner, the adjusting module 403 is configured to, when adjusting the display special effects of the user face image in the graphical user interface based on the state information of the first virtual object:
if it is detected that the state of the first virtual object changes when the user makes the specified action, adjust the display special effect of the user's face image in the graphical user interface to a first special effect; the first special effect is used to indicate that the timing of the specified action meets the requirement;
if it is detected that the state of the first virtual object does not change when the user makes the specified action, adjust the display special effect of the user's face image in the graphical user interface to a second special effect; the second special effect is used to indicate that the timing of the specified action does not meet the requirement.
In a possible implementation manner, the display special effect of the user's face image in the graphical user interface includes a special effect sticker added to a face region in the user's face image.
In a possible implementation manner, the device further includes an identification module 405, where the identification module 405 is configured to: after the state information of the first virtual object is acquired, determine the face recognition result of the user based on the state information of the first virtual object.
In a possible implementation manner, the identifying module 405 is configured to, when determining the face recognition result of the user based on the state information of the first virtual object:
if it is detected that the state of the first virtual object changes when the user makes the specified action, determine that the face recognition result of the current attempt is recognition failure;
if it is detected that the state of the first virtual object does not change when the user makes the specified action, determine that the face recognition result of the current attempt is recognition success.
In a possible implementation manner, after determining the face recognition result, the display module 404 is further configured to:
and under the condition that the face recognition result of the current time is determined to be recognition failure, adding a display special effect representing recognition failure on the face image of the user in the face image display area.
In a possible implementation manner, the first virtual object is a virtual character, and the state information of the first virtual object includes state information indicating whether a turn-around event occurs for the virtual character.
In a possible implementation manner, the second virtual object is a rotating table on which a plurality of virtual articles are placed; the interactive special effect includes an interactive special effect between the virtual articles and the mouth in the face image.
The process flow of each module in the apparatus and the interaction flow between the modules may be described with reference to the related descriptions in the above method embodiments, which are not described in detail herein.
Based on the same technical concept, an embodiment of the present disclosure further provides an electronic device. Referring to fig. 5, a schematic structural diagram of an electronic device 500 according to an embodiment of the present disclosure includes a processor 501, a memory 502, and a bus 503. The memory 502 is configured to store execution instructions and includes an internal memory 5021 and an external memory 5022. The internal memory 5021 temporarily stores operation data of the processor 501 and data exchanged with the external memory 5022 (such as a hard disk); the processor 501 exchanges data with the external memory 5022 through the internal memory 5021. When the electronic device 500 runs, the processor 501 and the memory 502 communicate through the bus 503, so that the processor 501 executes the following instructions:
Responding to a task starting instruction, and collecting a face image of a user;
based on the collected face images of the user, detecting whether the user makes a specified action, and acquiring state information of a first virtual object displayed in a graphical user interface when the user is determined to make the specified action;
and adjusting the display special effect of the user face image in the graphical user interface based on the state information of the first virtual object.
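The three instructions above form a capture → detect → adjust loop, which might be wired together as in this sketch. The injected callables (camera, action detector, state source, renderer) and all names are assumed stand-ins for components the disclosure leaves abstract.

```python
class EffectPresenter:
    """Minimal sketch of the pipeline executed by the processor.
    Every collaborator is injected as a callable; none of these names
    come from the disclosure itself."""

    def __init__(self, capture_frame, detect_action, object_state_changed, apply_effect):
        self.capture_frame = capture_frame                # collects the user's face image
        self.detect_action = detect_action                # specified action in the frame?
        self.object_state_changed = object_state_changed  # first virtual object's state
        self.apply_effect = apply_effect                  # updates the GUI special effect

    def step(self):
        frame = self.capture_frame()
        if not self.detect_action(frame):
            return None                                   # no specified action: no adjustment
        effect = "first" if self.object_state_changed() else "second"
        self.apply_effect(effect)
        return effect
```

A real implementation would run `step` per camera frame; the point of the sketch is only the control flow: state information is consulted, and the special effect adjusted, strictly when the specified action is detected.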
In a possible implementation manner, the instructions executed by the processor 501 include a face image display area on the graphical user interface; the method further comprises the steps of:
and displaying the acquired face image of the user in the face image display area.
In a possible implementation manner, in the instructions executed by the processor 501, a second virtual object is shown on the graphical user interface;
the instructions executed by the processor 501 further include:
and when the user is determined to make a specified action based on the collected face image of the user, displaying the interactive special effect between the second virtual object and the face image of the user displayed in the face image display area on the graphical user interface.
In a possible implementation manner, in the instructions executed by the processor 501, the adjusting, based on the state information of the first virtual object, the display special effects of the user face image in the graphical user interface includes:
if it is detected that the state of the first virtual object changes when the user makes the specified action, adjusting the display special effect of the user's face image in the graphical user interface to a first special effect; the first special effect is used to indicate that the timing of the specified action meets the requirement;
if it is detected that the state of the first virtual object does not change when the user makes the specified action, adjusting the display special effect of the user's face image in the graphical user interface to a second special effect; the second special effect is used to indicate that the timing of the specified action does not meet the requirement.
In a possible implementation manner, the display special effect of the user's face image in the graphical user interface includes a special effect sticker added to a face region in the user's face image.
In a possible implementation manner, in an instruction executed by the processor 501, after obtaining the state information of the first virtual object, the method further includes:
and determining the face recognition result of the user based on the state information of the first virtual object.
In a possible implementation manner, in the instructions executed by the processor 501, the determining, based on the state information of the first virtual object, a face recognition result of the user includes:
if it is detected that the state of the first virtual object changes when the user makes the specified action, determining that the face recognition result of the current attempt is recognition failure;
if it is detected that the state of the first virtual object does not change when the user makes the specified action, determining that the face recognition result of the current attempt is recognition success.
In a possible implementation manner, in the instructions executed by the processor 501, after determining the face recognition result, the method further includes:
and under the condition that the face recognition result of the current time is determined to be recognition failure, adding a display special effect representing recognition failure on the face image of the user in the face image display area.
In a possible implementation manner, in the instructions executed by the processor 501, the first virtual object is a virtual character, and the state information of the first virtual object includes state information indicating whether a turn-around event occurs for the virtual character.
In a possible implementation manner, in the instructions executed by the processor 501, the second virtual object is a rotating table on which a plurality of virtual articles are placed; the interactive special effect includes an interactive special effect between the virtual articles and the mouth in the face image.
The disclosed embodiments also provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the special effect presentation method described in the method embodiments above. Wherein the storage medium may be a volatile or nonvolatile computer readable storage medium.
The computer program product of the special effect display method provided by the embodiment of the disclosure includes a computer readable storage medium storing program codes, and the instructions included in the program codes may be used to execute the steps of the special effect display method described in the method embodiment, and specifically, reference may be made to the method embodiment, which is not repeated herein.
The disclosed embodiments also provide a computer program which, when executed by a processor, implements any of the methods of the previous embodiments. The computer program product may be realized in particular by means of hardware, software or a combination thereof. In an alternative embodiment, the computer program product is embodied as a computer storage medium, and in another alternative embodiment, the computer program product is embodied as a software product, such as a software development kit (Software Development Kit, SDK), or the like.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described system and apparatus may refer to corresponding procedures in the foregoing method embodiments, which are not described herein again. In the several embodiments provided in the present disclosure, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. The above-described apparatus embodiments are merely illustrative, for example, the division of the units is merely a logical function division, and there may be other manners of division in actual implementation, and for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be through some communication interface, device or unit indirect coupling or communication connection, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present disclosure may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer readable storage medium executable by a processor. Based on such understanding, the technical solution of the present disclosure may be embodied in essence or a part contributing to the prior art or a part of the technical solution, or in the form of a software product stored in a storage medium, including several instructions to cause a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the method described in the embodiments of the present disclosure. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
Finally, it should be noted that: the foregoing examples are merely specific embodiments of the present disclosure, and are not intended to limit the scope of the disclosure, but the present disclosure is not limited thereto, and those skilled in the art will appreciate that while the foregoing examples are described in detail, it is not limited to the disclosure: any person skilled in the art, within the technical scope of the disclosure of the present disclosure, may modify or easily conceive changes to the technical solutions described in the foregoing embodiments, or make equivalent substitutions for some of the technical features thereof; such modifications, changes or substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the disclosure, and are intended to be included within the scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (13)

1. A special effect display method, comprising:
responding to a task starting instruction, and collecting a face image of a user;
based on the collected face images of the user, detecting whether the user makes a specified action, and acquiring state information of a first virtual object displayed in a graphical user interface when the user is determined to make the specified action;
based on the state information of the first virtual object, adjusting the display special effect of the user face image in the graphical user interface;
wherein the adjusting the display special effect of the user face image in the graphical user interface based on the state information of the first virtual object includes:
if it is detected that the state of the first virtual object changes when the user makes the specified action, adjusting the display special effect of the user's face image in the graphical user interface to a first special effect; the first special effect is used to indicate that the timing of the specified action meets the requirement.
2. The method of claim 1, wherein the graphical user interface includes a face image presentation area thereon; the method further comprises the steps of:
and displaying the acquired face image of the user in the face image display area.
3. The method of claim 2, wherein a second virtual object is presented on the graphical user interface; the method further comprises the steps of:
and when the user is determined to make a specified action based on the collected face image of the user, displaying the interactive special effect between the second virtual object and the face image of the user displayed in the face image display area on the graphical user interface.
4. The method of claim 1, wherein adjusting the presented special effects of the user face image in the graphical user interface based on the status information of the first virtual object further comprises:
if it is detected that the state of the first virtual object does not change when the user makes the specified action, adjusting the display special effect of the user's face image in the graphical user interface to a second special effect; the second special effect is used to indicate that the timing of the specified action does not meet the requirement.
5. The method of claim 4, wherein the display special effect of the user face image in the graphical user interface comprises a special effect sticker added to a face region in the user face image.
6. The method of claim 1, wherein after obtaining the state information of the first virtual object, the method further comprises:
and determining the face recognition result of the user based on the state information of the first virtual object.
7. The method of claim 6, wherein the determining the face recognition result of the user based on the state information of the first virtual object comprises:
if it is detected that the state of the first virtual object changes when the user makes the specified action, determining that the face recognition result of the current attempt is recognition failure;
and if it is detected that the state of the first virtual object does not change when the user makes the specified action, determining that the face recognition result of the current attempt is recognition success.
8. The method according to claim 6 or 7, wherein after determining the face recognition result, the method further comprises:
and under the condition that the face recognition result of the current time is determined to be recognition failure, adding a display special effect representing recognition failure on the face image of the user in the face image display area.
9. The method of any of claims 1-7, wherein the first virtual object is a virtual character, and the state information of the first virtual object comprises state information indicating whether a turn-around event occurs for the virtual character.
10. The method of claim 3, wherein the second virtual object is a rotating table on which a plurality of virtual articles are placed; the interactive special effect comprises an interactive special effect between the virtual articles and the mouth in the face image.
11. A special effect display device, comprising:
the acquisition module is used for responding to the task starting instruction and acquiring a face image of the user;
the acquisition module is used for detecting whether a user makes a specified action or not based on the acquired face image of the user, and acquiring state information of a first virtual object displayed in the graphical user interface when the user is determined to make the specified action;
the adjusting module is used for adjusting the display special effects of the user face image in the graphical user interface based on the state information of the first virtual object;
the adjustment module is configured to, when adjusting the display special effect of the user face image in the graphical user interface based on the state information of the first virtual object:
if it is detected that the state of the first virtual object changes when the user makes the specified action, adjust the display special effect of the user's face image in the graphical user interface to a first special effect; the first special effect is used to indicate that the timing of the specified action meets the requirement.
12. A computer device, comprising: a processor, a memory and a bus, said memory storing machine readable instructions executable by said processor, said processor and said memory communicating via the bus when the computer device is running, said machine readable instructions when executed by said processor performing the steps of the special effect presentation method according to any of claims 1 to 10.
13. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored thereon a computer program which, when executed by a processor, performs the steps of the special effect presentation method of any of claims 1 to 10.
CN202010027410.0A 2020-01-10 2020-01-10 Special effect display method and device Active CN111240482B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010027410.0A CN111240482B (en) 2020-01-10 2020-01-10 Special effect display method and device

Publications (2)

Publication Number Publication Date
CN111240482A CN111240482A (en) 2020-06-05
CN111240482B true CN111240482B (en) 2023-06-30

Family

ID=70864313

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010027410.0A Active CN111240482B (en) 2020-01-10 2020-01-10 Special effect display method and device

Country Status (1)

Country Link
CN (1) CN111240482B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109190536B (en) * 2018-08-23 2023-12-26 百度在线网络技术(北京)有限公司 Face image processing method, device and equipment
CN111638798A (en) * 2020-06-07 2020-09-08 上海商汤智能科技有限公司 AR group photo method, AR group photo device, computer equipment and storage medium
CN111760265B (en) * 2020-06-24 2024-03-22 抖音视界有限公司 Operation control method and device
CN111857923B (en) * 2020-07-17 2022-10-28 北京字节跳动网络技术有限公司 Special effect display method and device, electronic equipment and computer readable medium
CN111773676A (en) * 2020-07-23 2020-10-16 网易(杭州)网络有限公司 Method and device for determining virtual role action
CN111899192B (en) 2020-07-23 2022-02-01 北京字节跳动网络技术有限公司 Interaction method, interaction device, electronic equipment and computer-readable storage medium
CN113360805B (en) * 2021-06-03 2023-06-20 北京市商汤科技开发有限公司 Data display method, device, computer equipment and storage medium

Citations (5)

Publication number Priority date Publication date Assignee Title
JP2002197376A (en) * 2000-12-27 2002-07-12 Fujitsu Ltd Method and device for providing virtual world customerized according to user
WO2015090147A1 (en) * 2013-12-20 2015-06-25 百度在线网络技术(北京)有限公司 Virtual video call method and terminal
WO2017006872A1 (en) * 2015-07-03 2017-01-12 学校法人慶應義塾 Facial expression identification system, facial expression identification method, and facial expression identification program
WO2019114328A1 (en) * 2017-12-11 2019-06-20 广州市动景计算机科技有限公司 Augmented reality-based video processing method and device thereof
WO2019187732A1 (en) * 2018-03-30 2019-10-03 ソニー株式会社 Information processing device, information processing method, and program

Family Cites Families (11)

Publication number Priority date Publication date Assignee Title
US8810513B2 (en) * 2012-02-02 2014-08-19 Kodak Alaris Inc. Method for controlling interactive display system
US9766702B2 (en) * 2014-06-19 2017-09-19 Apple Inc. User detection by a computing device
JP2018032055A (en) * 2015-01-13 2018-03-01 Mitsubishi Electric Corp Gesture recognition device and method, and program and recording medium
WO2017000217A1 (en) * 2015-06-30 2017-01-05 Beijing Megvii Technology Co., Ltd. Living-body detection method and device and computer program product
CN106341720B (en) * 2016-08-18 2019-07-26 Beijing Qihoo Technology Co., Ltd. Method and device for adding facial special effects in live video streaming
CN108073669A (en) * 2017-01-12 2018-05-25 Beijing SenseTime Technology Development Co., Ltd. Business object display method, device and electronic equipment
CN107277599A (en) * 2017-05-31 2017-10-20 Zhuhai Kingsoft Online Game Technology Co., Ltd. Virtual reality live broadcasting method, device and system
CN109391792B (en) * 2017-08-03 2021-10-29 Tencent Technology (Shenzhen) Co., Ltd. Video communication method, device, terminal and computer-readable storage medium
CN107944542A (en) * 2017-11-21 2018-04-20 Beijing Guangnian Wuxian Technology Co., Ltd. Multi-modal interactive output method and system based on a virtual human
CN109190536B (en) * 2018-08-23 2023-12-26 Baidu Online Network Technology (Beijing) Co., Ltd. Face image processing method, device and equipment
CN109697404A (en) * 2018-09-28 2019-04-30 China UnionPay Co., Ltd. Identification system and method, terminal and computer storage medium

Also Published As

Publication number Publication date
CN111240482A (en) 2020-06-05

Similar Documents

Publication Publication Date Title
CN111240482B (en) Special effect display method and device
US11042785B2 (en) Systems and methods for machine learning enhanced by human measurements
JP6878572B2 (en) Authentication based on face recognition
US20200412975A1 (en) Content capture with audio input feedback
US9922239B2 (en) System, method, and program for identifying person in portrait
US10049287B2 (en) Computerized system and method for determining authenticity of users via facial recognition
CN106897658B (en) Face liveness detection method and device
US20170308739A1 (en) Human face recognition method and recognition system
CN108399665A (en) Face recognition-based security monitoring method and device, and storage medium
CN112328999B (en) Double-recording quality inspection method and device, server and storage medium
CN104838336A (en) Data and user interaction based on device proximity
US20200412864A1 (en) Modular camera interface
US11809479B2 (en) Content push method and apparatus, and device
EP3367277A1 (en) Electronic device and method for providing user information
US9424411B2 (en) Authentication of device users by gaze
CN109194689B (en) Abnormal behavior recognition method, device, server and storage medium
CN110476141A (en) Gaze tracking method and user terminal for executing the same
US20230306792A1 (en) Spoof Detection Based on Challenge Response Analysis
US20210304339A1 (en) System and a method for locally assessing a user during a test session
CN110619239A (en) Application interface processing method and device, storage medium and terminal
CN111428570A (en) Detection method and device for non-living human face, computer equipment and storage medium
CN111144266A (en) Facial expression recognition method and device
CN111176533A (en) Wallpaper switching method, device, storage medium and terminal
KR102476619B1 (en) Electronic device and control method thereof
EP2905678A1 (en) Method and system for displaying content to a user

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Patentee after: Douyin Vision Co.,Ltd.

Address before: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Patentee before: Tiktok vision (Beijing) Co.,Ltd.

Address after: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Patentee after: Tiktok vision (Beijing) Co.,Ltd.

Address before: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Patentee before: BEIJING BYTEDANCE NETWORK TECHNOLOGY Co.,Ltd.
