CN111626254A - Display animation triggering method and device - Google Patents

Display animation triggering method and device

Info

Publication number
CN111626254A
CN111626254A (application number CN202010491640.2A)
Authority
CN
China
Prior art keywords
target
display
feature information
face attribute
attribute feature
Prior art date
Legal status
Granted
Application number
CN202010491640.2A
Other languages
Chinese (zh)
Other versions
CN111626254B (en)
Inventor
孙红亮
揭志伟
王子彬
刘小兵
Current Assignee
Shanghai Sensetime Intelligent Technology Co Ltd
Original Assignee
Shanghai Sensetime Intelligent Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Sensetime Intelligent Technology Co Ltd filed Critical Shanghai Sensetime Intelligent Technology Co Ltd
Priority to CN202010491640.2A priority Critical patent/CN111626254B/en
Publication of CN111626254A publication Critical patent/CN111626254A/en
Application granted granted Critical
Publication of CN111626254B publication Critical patent/CN111626254B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/53 Querying
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174 Facial expression recognition

Abstract

The present disclosure provides a display animation triggering method and device, the method including: performing face attribute recognition on a face image of a target user, and determining face attribute feature information of the target user; determining, based on the face attribute feature information of the target user, a target virtual scenery display animation matched with the face attribute feature information; and playing the target virtual scenery display animation through a target display device.

Description

Display animation triggering method and device
Technical Field
The disclosure relates to the technical field of artificial intelligence, in particular to a display animation triggering method and device.
Background
In an exhibition hall, an electronic screen is generally arranged in some areas, and some virtual scenes related to the exhibition hall, such as flowers, trees and the like, are displayed through the electronic screen. A common way to present a virtual scene on an electronic screen is to automatically cycle through one or more pictures or animations.
This display mode is monotonous and its display effect is poor; visitors may even ignore the screen entirely, so that the display resources are wasted.
Disclosure of Invention
The embodiment of the disclosure at least provides a display animation triggering method and device.
In a first aspect, an embodiment of the present disclosure provides a display animation triggering method, including:
carrying out face attribute identification on a face image of a target user, and determining face attribute feature information of the target user;
determining a target virtual scenery displaying animation matched with the face attribute feature information of the target user based on the face attribute feature information of the target user;
and playing the target virtual scenery display animation through a target display device.
According to the method, different target virtual scenery display animations are matched to different target users according to their face attribute feature information, and the corresponding animations are then displayed. Because different target users see different display contents, the display contents are enriched and the display effect is improved.
In a possible embodiment, the face attribute feature information includes at least one of the following information:
gender, age, smile value, attractiveness value, mood, skin color.
In one possible embodiment, the determining, based on the facial attribute feature information of the target user, a target virtual scene representation animation matching the facial attribute feature information of the target user includes:
determining a target virtual scenery type matched with the face attribute feature information of the target user based on the face attribute feature information of the target user;
and based on the target virtual scenery type, selecting the target virtual scenery showing animation corresponding to the target virtual scenery type from virtual scenery showing animations corresponding to a plurality of pre-stored virtual scenery types.
In one possible implementation, determining the target virtual scene type matching with the facial attribute feature information of the target user based on the facial attribute feature information of the target user includes:
calculating the matching degree between the face attribute feature information of the target user and each virtual scenery type based on the plurality of face attribute features indicated by the face attribute feature information of the target user and the plurality of face attribute features matched with each virtual scenery type;
and selecting the virtual scenery type with the highest corresponding matching degree as the target virtual scenery type.
In one possible implementation, after determining the target virtual scene representation animation matched with the facial attribute feature information of the target user, the method further comprises:
determining a target display position of the target virtual scenery display animation corresponding to the target display device according to the relative position information of the target user relative to the target display device;
the playing of the virtual scenery presentation animation by the target display device includes:
and controlling the target display device to display the target virtual scenery display animation at the target display position.
Through this implementation, the target user can control the display position of the target virtual scenery display animation on the target display device by changing his or her position relative to the target display device, which adds interaction between the target user and the target display device and improves the display effect.
In a possible implementation manner, if the face attribute feature information of a plurality of target users is obtained, the controlling the target display device to display the target virtual scenery display animation at the target display position includes:
and when the display positions corresponding to the target virtual scenery display animations matched with the face attribute feature information of the target users do not have an overlapped area, controlling the target display device to synchronously display the corresponding target virtual scenery display animations at the target display positions corresponding to the target virtual scenery display animations respectively.
By synchronously displaying the plurality of target virtual scenery display animations, the display content of the target display device can be enriched, and the display effect is improved.
In a possible implementation manner, if the face attribute feature information of a plurality of target users is obtained, the controlling the target display device to display the target virtual scenery display animation at the target display position includes:
and when the display positions corresponding to the target virtual scenery display animations matched with the face attribute feature information of the target users have the coincidence areas, sequentially playing the target virtual scenery display animations with the coincidence areas through a target display device, or selecting one target virtual scenery display animation from the target virtual scenery display animations with the coincidence areas for playing.
In a possible implementation manner, the performing face attribute recognition on a face image of a target user to determine face attribute feature information of the target user includes:
acquiring a face image of the target user;
inputting the face image into a trained neural network to obtain face attribute feature information of the target user; the neural network is obtained by training based on a sample image marked with face attribute feature information.
In a second aspect, an embodiment of the present disclosure further provides a display animation triggering apparatus, including:
the first determining module is used for carrying out face attribute identification on the face image of the target user and determining face attribute feature information of the target user;
the second determination module is used for determining a target virtual scenery display animation matched with the face attribute feature information of the target user based on the face attribute feature information of the target user;
and the playing module is used for playing the target virtual scenery display animation through the target display device.
In a possible embodiment, the face attribute feature information includes at least one of the following information:
gender, age, smile value, attractiveness value, mood, skin color.
In one possible implementation, the second determining module, when determining the target virtual scene representation animation matched with the facial attribute feature information of the target user based on the facial attribute feature information of the target user, is configured to:
determining a target virtual scenery type matched with the face attribute feature information of the target user based on the face attribute feature information of the target user;
and based on the target virtual scenery type, selecting the target virtual scenery showing animation corresponding to the target virtual scenery type from virtual scenery showing animations corresponding to a plurality of pre-stored virtual scenery types.
In one possible implementation, the second determining module, when determining the target virtual scene category matching the face attribute feature information of the target user based on the face attribute feature information of the target user, is configured to:
calculating the matching degree between the face attribute feature information of the target user and each virtual scenery type based on the plurality of face attribute features indicated by the face attribute feature information of the target user and the plurality of face attribute features matched with each virtual scenery type;
and selecting the virtual scenery type with the highest corresponding matching degree as the target virtual scenery type.
In one possible implementation, after determining the target virtual scene representation animation matched with the facial attribute feature information of the target user, the second determining module is further configured to:
determining a target display position of the target virtual scenery display animation corresponding to the target display device according to the relative position information of the target user relative to the target display device;
the playing module, when playing the virtual scenery displaying animation through the target display device, is configured to:
and controlling the target display device to display the target virtual scenery display animation at the target display position.
In a possible implementation manner, if the face attribute feature information of a plurality of target users is obtained, the playing module, when controlling the target display device to display the target virtual scene display animation at the target display position, is configured to:
and when the display positions corresponding to the target virtual scenery display animations matched with the face attribute feature information of the target users do not have an overlapped area, controlling the target display device to synchronously display the corresponding target virtual scenery display animations at the target display positions corresponding to the target virtual scenery display animations respectively.
In a possible implementation manner, if the face attribute feature information of a plurality of target users is obtained, the playing module, when controlling the target display device to display the target virtual scene display animation at the target display position, is configured to:
and when the display positions corresponding to the target virtual scenery display animations matched with the face attribute feature information of the target users have the coincidence areas, sequentially playing the target virtual scenery display animations with the coincidence areas through a target display device, or selecting one target virtual scenery display animation from the target virtual scenery display animations with the coincidence areas for playing.
In a possible implementation manner, the first determining module, when performing face attribute recognition on a face image of a target user to determine face attribute feature information of the target user, is configured to:
acquiring a face image of the target user;
inputting the face image into a trained neural network to obtain face attribute feature information of the target user; the neural network is obtained by training based on a sample image marked with face attribute feature information.
In a third aspect, an embodiment of the present disclosure further provides a computer device, including: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating via the bus when the computer device is running, the machine-readable instructions when executed by the processor performing the steps of the first aspect described above, or any possible implementation of the first aspect.
In a fourth aspect, this disclosed embodiment also provides a computer-readable storage medium, on which a computer program is stored, where the computer program is executed by a processor to perform the steps in the first aspect or any one of the possible implementation manners of the first aspect.
In order to make the aforementioned objects, features and advantages of the present disclosure more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings required in the embodiments are briefly described below. The drawings herein are incorporated in and form a part of the specification; they illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the technical solutions of the present disclosure. It should be appreciated that the following drawings depict only certain embodiments of the disclosure and are therefore not to be considered limiting of its scope; those skilled in the art can derive additional related drawings from them without inventive effort.
FIG. 1 is a flow chart illustrating a method for triggering presentation animation according to an embodiment of the present disclosure;
FIG. 2 illustrates a flow chart of a neural network training method provided by an embodiment of the present disclosure;
FIG. 3a is a display interface diagram of a target display apparatus provided by an embodiment of the present disclosure;
FIG. 3b illustrates a display interface diagram of another target display apparatus provided by an embodiment of the present disclosure;
FIG. 4 illustrates a display interface diagram of another target display apparatus provided by an embodiment of the present disclosure;
FIG. 5 is a schematic diagram illustrating an architecture of a presentation animation trigger device according to an embodiment of the present disclosure;
fig. 6 shows a schematic structural diagram of a computer device 600 provided by an embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present disclosure more clear, the technical solutions of the embodiments of the present disclosure will be described clearly and completely with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are only a part of the embodiments of the present disclosure, not all of the embodiments. The components of the embodiments of the present disclosure, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present disclosure, presented in the figures, is not intended to limit the scope of the claimed disclosure, but is merely representative of selected embodiments of the disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the disclosure without making creative efforts, shall fall within the protection scope of the disclosure.
When a virtual scene related to an exhibition hall is displayed in the exhibition hall, one or more pictures or animations are automatically played in a loop; this display mode is monotonous and its display effect is poor.
Based on the above, the present disclosure provides a method and an apparatus for triggering display animation, which can match different target virtual scenery display animations for different target users according to the feature information of the target users, and then display corresponding target virtual scenery display animations, wherein the display contents of different target users are different, so that the display contents can be enriched, and the display effect can be improved.
The above drawbacks were identified by the inventor through practice and careful study; therefore, the discovery of the above problems, and the solutions the present disclosure proposes for them, should be regarded as the inventor's contribution in the course of the present disclosure.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
To facilitate understanding of the present embodiment, a display animation triggering method disclosed in an embodiment of the present disclosure is first described in detail. The execution subject of the display animation triggering method provided in the embodiment of the present disclosure is generally a computer device with certain computing capability, for example: a terminal device, a server, or another processing device, where the terminal device may be a User Equipment (UE), a mobile device, a user terminal, a Personal Digital Assistant (PDA), a handheld device, or a computing device.
The display animation triggering method provided by the embodiment of the present disclosure is described below by taking an electronic device as the execution subject.
Referring to fig. 1, a flowchart of a method for triggering display animation provided in the embodiment of the present disclosure is shown, where the method includes steps 101 to 103, where:
step 101, performing face attribute identification on a face image of a target user, and determining face attribute feature information of the target user.
The face attribute feature information comprises at least one of the following information:
gender, age, smile value, attractiveness value, mood, skin color.
The electronic device executing the scheme may be equipped with an image capturing device (such as a camera), or the electronic device executing the scheme may be connected to the image capturing device in a manner including, but not limited to, wired connection and wireless connection, where the wireless connection may include, for example, bluetooth connection, wireless network connection, and the like. The position of the image acquisition device can be fixed, and therefore, the position area corresponding to the image acquired by the image acquisition device is also fixed.
The user entering the target detection area is a target user, and the target detection area is a position area corresponding to the image acquired by the image acquisition device. In specific implementation, the image acquisition device can acquire an image of a target detection area in real time and then transmit the image to the electronic device, and the electronic device can analyze the image acquired by the image acquisition device in real time to detect whether a target user is included in the target detection area.
In another possible implementation, an infrared detection device may be further disposed in the target detection area, and the infrared detection device is connected to the electronic device, and detects whether the target detection area contains the target user through the infrared detection device, and when the infrared device detects that the target detection area contains the target user, the electronic device may further control the image acquisition device to acquire a face image of the target user in the target detection area.
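For illustration only, the following is a minimal sketch of such a detection loop, assuming an OpenCV camera pipeline with a generic face detector as a stand-in; the disclosure does not prescribe any particular detector, and all function and variable names here are hypothetical:

```python
import cv2

# Stand-in detector: the disclosure only requires detecting whether a target
# user has entered the target detection area; a Haar cascade is used here
# purely as an illustrative assumption.
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def poll_target_detection_area(camera_index=0):
    """Yield face crops for users detected in the target detection area."""
    capture = cv2.VideoCapture(camera_index)
    try:
        while True:
            ok, frame = capture.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            faces = face_detector.detectMultiScale(gray, scaleFactor=1.1,
                                                   minNeighbors=5)
            for (x, y, w, h) in faces:
                # Each detected face is treated as a target user; the crop is
                # handed to the attribute recognition step (step 101).
                yield frame[y:y + h, x:x + w], (x, y, w, h)
    finally:
        capture.release()
```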
In specific implementation, when the face attribute recognition is performed on the face image of the target user and the face attribute feature information of the target user is determined, the face image of the target user may be obtained first, and then the face image is input into a trained neural network to obtain the face attribute feature information of the target user, wherein the neural network is obtained by training based on a sample image marked with the face attribute feature information.
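As a minimal sketch of this inference step, assuming a PyTorch classification model with a single attribute head; the helper name and output format are hypothetical, not taken from the disclosure:

```python
import torch

def recognize_face_attributes(model, face_tensor):
    """Run the trained network on a preprocessed (1, 3, H, W) face crop."""
    model.eval()
    with torch.no_grad():
        logits = model(face_tensor)
    probs = torch.softmax(logits, dim=-1)
    confidence, predicted = probs.max(dim=-1)
    # The confidence doubles as the "accuracy" of the face attribute feature
    # information referred to later in this disclosure.
    return predicted.item(), confidence.item()
```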
When the neural network is trained, the neural network training method shown in fig. 2 may be referred to, and includes the following steps:
step 201, obtaining a sample image, wherein the sample image is marked with face attribute feature information.
Step 202, inputting the sample image into a neural network, and outputting to obtain the attribute feature information of the predicted human face.
Step 203, determining a loss value in the training process based on the face attribute feature information marked on the sample image and the predicted face attribute feature information.
Step 204, judging whether the loss value in the training process is smaller than a preset loss value.
If yes, proceed to step 205;
if not, adjust the network parameters of the neural network used in this round of training and return to step 202.
Step 205, determining the neural network used in the training process as the trained neural network.
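A minimal sketch of the fig. 2 loop, assuming a PyTorch model, a dataloader of (image, attribute label) pairs, and a single-attribute cross-entropy loss; the threshold test mirrors step 204, and all names and values are illustrative:

```python
import torch
import torch.nn as nn

def train_attribute_network(model, dataloader, loss_threshold=0.05,
                            lr=1e-3, max_epochs=100):
    criterion = nn.CrossEntropyLoss()               # loss over labeled attributes
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for epoch in range(max_epochs):
        epoch_loss = 0.0
        for images, labels in dataloader:           # step 201: labeled samples
            preds = model(images)                   # step 202: predicted attributes
            loss = criterion(preds, labels)         # step 203: loss value
            optimizer.zero_grad()
            loss.backward()                         # adjust network parameters
            optimizer.step()
            epoch_loss += loss.item()
        if epoch_loss / len(dataloader) < loss_threshold:   # step 204
            break                                   # step 205: network is trained
    return model
```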
Step 102, determining a target virtual scenery display animation matched with the face attribute feature information of the target user based on the face attribute feature information of the target user.
In specific implementation, when the target virtual scenery display animation matched with the face attribute feature information of the target user is determined based on the face attribute feature information of the target user, the target virtual scenery type matched with the face attribute feature information can be determined first; then, based on the target virtual scenery type, the target virtual scenery display animation corresponding to the target virtual scenery type is selected from pre-stored virtual scenery display animations corresponding to a plurality of virtual scenery types.
When determining a target virtual scene type matched with the face attribute feature information of the target user based on the face attribute feature information of the target user, calculating the matching degree between the face attribute feature information of the target user and each virtual scene type based on a plurality of face attribute features indicated by the face attribute feature information of the target user and a plurality of face attribute features matched with each virtual scene type; and then selecting the virtual scene type with the highest corresponding matching degree as the target virtual scene type.
The plurality of face attribute features matched with each virtual scenery type are preset. Illustratively, if the face attribute feature information includes gender and age, and the virtual scenery includes a virtual sunflower, a virtual rose, a virtual cactus, and the like, the face attribute features matched with each virtual scenery may be as shown in Table 1 below:
TABLE 1
Virtual scenery      Face attribute features
Virtual sunflower    Female, aged 0-10
Virtual rose         Female, aged 15-30
Virtual cactus       Male, aged 15-30
When the matching degree between the face attribute feature information of the target user and each virtual scenery type is calculated based on the plurality of face attribute features indicated by the face attribute feature information of the target user and the plurality of face attribute features matched with each virtual scenery type, an intermediate matching degree between the gender and age of the target user and the gender and age matched with each virtual scenery type can be calculated respectively, and the intermediate matching degrees are then weighted and summed according to preset weights to obtain the matching degree between the face attribute feature information of the target user and each virtual scenery type.
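The weighted matching described above can be sketched as follows; the Table 1 profiles, the binary per-attribute scores, and the preset weights are illustrative assumptions rather than values taken from the disclosure:

```python
SCENERY_PROFILES = {
    "virtual sunflower": {"gender": "female", "age_range": (0, 10)},
    "virtual rose":      {"gender": "female", "age_range": (15, 30)},
    "virtual cactus":    {"gender": "male",   "age_range": (15, 30)},
}
WEIGHTS = {"gender": 0.5, "age": 0.5}  # preset weights for the weighted sum

def match_degree(user, profile):
    """Weighted sum of intermediate matching degrees for gender and age."""
    gender_score = 1.0 if user["gender"] == profile["gender"] else 0.0
    low, high = profile["age_range"]
    age_score = 1.0 if low <= user["age"] <= high else 0.0
    return WEIGHTS["gender"] * gender_score + WEIGHTS["age"] * age_score

def select_target_scenery_type(user):
    """Pick the virtual scenery type with the highest matching degree."""
    return max(SCENERY_PROFILES,
               key=lambda name: match_degree(user, SCENERY_PROFILES[name]))

print(select_target_scenery_type({"gender": "female", "age": 22}))  # virtual rose
```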
Here, the virtual scenery type may serve as identification information of the virtual scenery display animation, and the pre-stored virtual scenery display animations corresponding to the various virtual scenery types may be display animations respectively corresponding to different types of virtual scenery.
Step 103, playing the target virtual scenery display animation through the target display device.
In a possible implementation manner, after the target virtual scenery displaying animation matched with the face attribute feature information of the target user is determined, the target displaying position of the target virtual scenery displaying animation corresponding to the target display device can be determined according to the relative position information of the target user relative to the target display device, and when the target virtual scenery displaying animation is played through the target display device, the target display device can be controlled to display the target virtual scenery displaying animation at the target displaying position.
Specifically, when the target display position of the target virtual scenery display animation on the target display device is determined according to the relative position information of the target user with respect to the target display device, the position of the target user in the face image can first be determined based on the face image. Because the position and orientation of the image acquisition device that acquires the face image are fixed, the relative position information of the target user with respect to the target display device can then be determined from the position of the target user in the face image, through a preset correspondence between image positions and display positions of the target display device.
Illustratively, if the position coordinate of a certain position point of the target user in the face image is (x1, y1), and the position coordinate of that point on the target display device, determined through the preset correspondence between image positions and display positions of the target display device, is (x2, y2), then the position point can be shown at (x2, y2) on the target display device.
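A minimal sketch of such a correspondence, assuming a simple linear scaling between the two coordinate systems; the disclosure only states that a preset correspondence exists, so the mapping and the resolutions below are placeholders:

```python
IMAGE_W, IMAGE_H = 1920, 1080    # resolution of the acquired face image
SCREEN_W, SCREEN_H = 3840, 2160  # resolution of the target display device

def image_to_display(x1, y1):
    """Map a point (x1, y1) in the face image to (x2, y2) on the display."""
    x2 = x1 / IMAGE_W * SCREEN_W
    y2 = y1 / IMAGE_H * SCREEN_H
    return x2, y2
```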
For example, as shown in fig. 3a, if the target user stands at a position on the left side of the target display device, the target virtual scenery display animation corresponding to the face attribute feature information of the target user (shown as a flower in fig. 3a, where 301 is the target display device) may be displayed at a position on the left side of the display. If the target user stands at the middle position of the target display device, the animation can be displayed at the middle position of the target display device; a schematic diagram of this effect is shown in fig. 3b, where 301 is the target display device.
When the face attribute feature information of a plurality of target users is acquired and the target virtual scenery display animations are shown at their target display positions by the target display device, overlap between the target display positions may lead to different display effects. This can be divided into the following two cases:
in case 1, there is no overlapping area in the exhibition positions corresponding to the target virtual scenery exhibition animations matched with the face attribute feature information of a plurality of target users.
When the display positions corresponding to the target virtual scenery display animations matched with the face attribute feature information of the plurality of target users have no overlapping area, the target display device can be controlled to synchronously display the corresponding target virtual scenery display animations at their respective target display positions.
Illustratively, target user A stands on the left side of the target display device and target user B stands on the right side. The target virtual scenery display animation matched with the face attribute feature information of target user A is a blooming flower, and the one matched with target user B is a dancing bear; the blooming flower can then be shown on the left side of the target display device while the dancing bear is shown on the right side. An interface diagram is shown in fig. 4, where 401 is the target display device.
It should be noted here that "left side", "right side", and "middle" are simplified descriptions used to explain this step; the actual target display position of a target virtual scenery display animation needs to be determined by calculation.
And 2, overlapping areas exist at display positions corresponding to the target virtual scenery display animations matched with the face attribute feature information of the target users.
In this case, the target virtual scenery display animations whose display areas overlap may be played sequentially through the target display device, or one target virtual scenery display animation may be selected from those with the overlapping area and played.
When the target virtual scenery display animations with the overlapping area are played sequentially through the target display device, they can be played in the order in which the face attribute feature information corresponding to each animation was detected. If the face attribute feature information of a plurality of target users is detected simultaneously and the target display positions of the corresponding target virtual scenery display animations overlap, the target display device can be controlled to randomly select one of those animations to play.
Illustratively, target user A enters the target detection area at 10:00, and based on the face attribute feature information of target user A, the corresponding target virtual scenery display animation is determined to be virtual scenery display animation 1, whose display position is area A. Target user B enters the target detection area at 10:01, and based on the face attribute feature information of target user B, the corresponding target virtual scenery display animation is determined to be virtual scenery display animation 2, whose display position is area B. Area A and area B overlap, but because the face attribute feature information of target user A was detected earlier than that of target user B, virtual scenery display animation 1 is played first, and then virtual scenery display animation 2 is played.
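The two cases can be combined into a simple scheduler, sketched below under the assumption that each display position is an axis-aligned rectangle; the overlap test and the batching strategy are illustrative, not taken from the disclosure:

```python
from dataclasses import dataclass

@dataclass
class Animation:
    name: str
    region: tuple       # (x, y, width, height) target display position
    detected_at: float  # time the user's face attribute information was detected

def regions_overlap(a, b):
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def schedule(animations):
    """Group animations into batches played one after another: animations in
    the same batch have disjoint regions and play synchronously (case 1);
    overlapping ones are deferred to later batches in detection order (case 2)."""
    batches = []
    for anim in sorted(animations, key=lambda a: a.detected_at):
        for batch in batches:
            if all(not regions_overlap(anim.region, other.region)
                   for other in batch):
                batch.append(anim)   # case 1: no overlap, show synchronously
                break
        else:
            batches.append([anim])   # case 2: overlap, play in a later round
    return batches
```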
When the face attribute feature information of the target user is determined based on the neural network, after the face image of the target user is input to the neural network, the neural network outputs the face attribute feature information of the target user and can additionally output the accuracy of that information. When one target virtual scenery display animation is to be selected for playing from the target virtual scenery display animations whose display areas overlap, the selection can be made according to the accuracy of the face attribute feature information of the corresponding target users.
Specifically, if the face attribute feature information of a plurality of target users is detected at the same time, the target display device may be controlled to play the target virtual scenery displaying animation corresponding to the face attribute feature information with the highest accuracy according to the accuracy of the face attribute feature information of the target users.
In a possible implementation manner, while the target virtual scenery display animation corresponding to the face attribute feature information of the current target user is being displayed, the face attribute feature information of another target user may be detected, and the target display position of the target virtual scenery display animation corresponding to that other target user may overlap the target display position of the animation currently being played; in this case, the target display device may continue playing the current animation and play the other target user's animation after the current one finishes.
In another possible implementation manner, the accuracy corresponding to the face attribute feature information of the current target user and the accuracy corresponding to the face attribute feature information of the other target user may also be determined, and the target virtual scenery displaying animation corresponding to the face attribute feature information of the target user with higher accuracy is displayed.
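A sketch of this accuracy-based selection, reusing the confidence returned by the inference helper above; the candidate structure is a hypothetical convenience:

```python
def pick_by_accuracy(candidates):
    """candidates: list of dicts like {"animation": str, "accuracy": float},
    where accuracy is the network's confidence in the face attribute
    feature information. The highest-accuracy animation is played."""
    return max(candidates, key=lambda c: c["accuracy"])

chosen = pick_by_accuracy([
    {"animation": "virtual rose",   "accuracy": 0.91},
    {"animation": "virtual cactus", "accuracy": 0.87},
])
print(chosen["animation"])  # virtual rose
```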
According to the display animation triggering method described above, different target virtual scenery display animations are matched to different target users according to their face attribute feature information, and the corresponding animations are then displayed. Because different target users see different display contents, the display contents are enriched and the display effect is improved.
It will be understood by those skilled in the art that, in the above method of the present disclosure, the order in which the steps are written does not imply a strict execution order or any limitation on the implementation; the specific execution order of the steps should be determined by their functions and possible internal logic.
Based on the same inventive concept, the embodiment of the present disclosure further provides a display animation triggering device corresponding to the display animation triggering method, and as the principle of solving the problem of the device in the embodiment of the present disclosure is similar to that of the display animation triggering method in the embodiment of the present disclosure, the implementation of the device may refer to the implementation of the method, and repeated details are not described again.
Referring to fig. 5, there is shown an architecture diagram of a display animation triggering apparatus according to an embodiment of the present disclosure. The apparatus includes: a first determining module 501, a second determining module 502 and a playing module 503; wherein:
a first determining module 501, configured to perform face attribute identification on a face image of a target user, and determine face attribute feature information of the target user;
a second determining module 502, configured to determine, based on the face attribute feature information of the target user, a target virtual scene display animation that matches the face attribute feature information of the target user;
a playing module 503, configured to play the target virtual scenery displaying animation through the target display device.
In a possible embodiment, the face attribute feature information includes at least one of the following information:
gender, age, smile value, attractiveness value, mood, skin color.
In one possible implementation, the second determining module 502, when determining the target virtual scene representation animation matched with the facial attribute feature information of the target user based on the facial attribute feature information of the target user, is configured to:
determining a target virtual scenery type matched with the face attribute feature information of the target user based on the face attribute feature information of the target user;
and based on the target virtual scenery type, selecting the target virtual scenery showing animation corresponding to the target virtual scenery type from virtual scenery showing animations corresponding to a plurality of pre-stored virtual scenery types.
In one possible implementation, the second determining module 502, when determining the target virtual scene category matching the face attribute feature information of the target user based on the face attribute feature information of the target user, is configured to:
calculating the matching degree between the face attribute feature information of the target user and each virtual scenery type based on the plurality of face attribute features indicated by the face attribute feature information of the target user and the plurality of face attribute features matched with each virtual scenery type;
and selecting the virtual scenery type with the highest corresponding matching degree as the target virtual scenery type.
In one possible implementation, after determining the target virtual scene representation animation matched with the facial attribute feature information of the target user, the second determining module 502 is further configured to:
determining a target display position of the target virtual scenery display animation corresponding to the target display device according to the relative position information of the target user relative to the target display device;
the playing module 503, when playing the virtual scene exhibition animation through the target display device, is configured to:
and controlling the target display device to display the target virtual scenery display animation at the target display position.
In a possible implementation manner, if the face attribute feature information of multiple target users is obtained, the playing module 503, when controlling the target display apparatus to display the target virtual scene display animation at the target display position, is configured to:
and when the display positions corresponding to the target virtual scenery display animations matched with the face attribute feature information of the target users do not have an overlapped area, controlling the target display device to synchronously display the corresponding target virtual scenery display animations at the target display positions corresponding to the target virtual scenery display animations respectively.
In a possible implementation manner, if the face attribute feature information of multiple target users is obtained, the playing module 503, when controlling the target display apparatus to display the target virtual scene display animation at the target display position, is configured to:
and when the display positions corresponding to the target virtual scenery display animations matched with the face attribute feature information of the target users have the coincidence areas, sequentially playing the target virtual scenery display animations with the coincidence areas through a target display device, or selecting one target virtual scenery display animation from the target virtual scenery display animations with the coincidence areas for playing.
In a possible implementation manner, the first determining module 501, when performing face attribute recognition on a face image of a target user to determine face attribute feature information of the target user, is configured to:
acquiring a face image of the target user;
inputting the face image into a trained neural network to obtain face attribute feature information of the target user; the neural network is obtained by training based on a sample image marked with face attribute feature information.
The description of the processing flow of each module in the device and the interaction flow between the modules may refer to the related description in the above method embodiments, and will not be described in detail here.
Based on the same technical concept, an embodiment of the present application further provides a computer device. Referring to fig. 6, a schematic structural diagram of a computer device 600 provided in the embodiment of the present application includes a processor 601, a memory 602, and a bus 603. The memory 602 is used for storing execution instructions and includes a memory 6021 and an external memory 6022. The memory 6021, also referred to as an internal memory, is used for temporarily storing operation data in the processor 601 and data exchanged with the external memory 6022 such as a hard disk; the processor 601 exchanges data with the external memory 6022 through the memory 6021. When the computer device 600 runs, the processor 601 communicates with the memory 602 through the bus 603, so that the processor 601 executes the following instructions:
carrying out face attribute identification on a face image of a target user, and determining face attribute feature information of the target user;
determining a target virtual scenery displaying animation matched with the face attribute feature information of the target user based on the face attribute feature information of the target user;
and playing the target virtual scenery display animation through a target display device.
The embodiment of the present disclosure further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program performs the steps of the method for triggering display of an animation described in the above method embodiment. The storage medium may be a volatile or non-volatile computer-readable storage medium.
The computer program product for the display animation triggering method provided in the embodiments of the present disclosure includes a computer-readable storage medium storing program code. The instructions included in the program code may be used to execute the steps of the display animation triggering method described in the above method embodiments; for details, reference may be made to the above method embodiments, which are not repeated here.
The embodiments of the present disclosure also provide a computer program, which when executed by a processor implements any one of the methods of the foregoing embodiments. The computer program product may be embodied in hardware, software or a combination thereof. In an alternative embodiment, the computer program product is embodied in a computer storage medium, and in another alternative embodiment, the computer program product is embodied in a Software product, such as a Software Development Kit (SDK), or the like.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and the apparatus described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again. In the several embodiments provided in the present disclosure, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present disclosure. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that: the above-mentioned embodiments are merely specific embodiments of the present disclosure, which are used for illustrating the technical solutions of the present disclosure and not for limiting the same, and the scope of the present disclosure is not limited thereto, and although the present disclosure is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that: any person skilled in the art can modify or easily conceive of the technical solutions described in the foregoing embodiments or equivalent technical features thereof within the technical scope of the present disclosure; such modifications, changes or substitutions do not depart from the spirit and scope of the embodiments of the present disclosure, and should be construed as being included therein. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (11)

1. A method for triggering display animation is characterized by comprising the following steps:
carrying out face attribute identification on a face image of a target user, and determining face attribute feature information of the target user;
determining a target virtual scenery displaying animation matched with the face attribute feature information of the target user based on the face attribute feature information of the target user;
and playing the target virtual scenery display animation through a target display device.
2. The method of claim 1, wherein the face attribute feature information comprises at least one of the following information:
gender, age, smile value, attractiveness value, mood, skin color.
3. The method according to claim 1, wherein the determining the target virtual scene exhibition animation matching with the facial attribute feature information of the target user based on the facial attribute feature information of the target user comprises:
determining a target virtual scenery type matched with the face attribute feature information of the target user based on the face attribute feature information of the target user;
and based on the target virtual scenery type, selecting the target virtual scenery showing animation corresponding to the target virtual scenery type from virtual scenery showing animations corresponding to a plurality of pre-stored virtual scenery types.
4. The method of claim 3, wherein determining the target virtual scene type matching the facial attribute feature information of the target user based on the facial attribute feature information of the target user comprises:
calculating the matching degree between the face attribute feature information of the target user and each virtual scenery type based on the plurality of face attribute features indicated by the face attribute feature information of the target user and the plurality of face attribute features matched with each virtual scenery type;
and selecting the virtual scenery type with the highest corresponding matching degree as the target virtual scenery type.
5. The method of claim 1, wherein after determining the target virtual scene rendering animation matching the facial attribute feature information of the target user, the method further comprises:
determining a target display position of the target virtual scenery display animation corresponding to the target display device according to the relative position information of the target user relative to the target display device;
the playing of the virtual scenery presentation animation by the target display device includes:
and controlling the target display device to display the target virtual scenery display animation at the target display position.
6. The method according to claim 5, wherein if the face attribute feature information of a plurality of target users is obtained, the controlling the target display device to display the target virtual scene display animation at the target display position comprises:
and when the display positions corresponding to the target virtual scenery display animations matched with the face attribute feature information of the target users do not have an overlapped area, controlling the target display device to synchronously display the corresponding target virtual scenery display animations at the target display positions corresponding to the target virtual scenery display animations respectively.
7. The method according to claim 5, wherein if the face attribute feature information of a plurality of target users is obtained, the controlling the target display device to display the target virtual scene display animation at the target display position comprises:
and when the display positions corresponding to the target virtual scenery display animations matched with the face attribute feature information of the target users have the coincidence areas, sequentially playing the target virtual scenery display animations with the coincidence areas through a target display device, or selecting one target virtual scenery display animation from the target virtual scenery display animations with the coincidence areas for playing.
8. The method according to claim 1, wherein the performing face attribute recognition on the face image of the target user to determine the face attribute feature information of the target user comprises:
acquiring a face image of the target user;
inputting the face image into a trained neural network to obtain face attribute feature information of the target user; the neural network is obtained by training based on a sample image marked with face attribute feature information.
9. A presentation animation trigger device, comprising:
the first determining module is used for carrying out face attribute identification on the face image of the target user and determining face attribute feature information of the target user;
the second determination module is used for determining a target virtual scenery display animation matched with the face attribute feature information of the target user based on the face attribute feature information of the target user;
and the playing module is used for playing the target virtual scenery display animation through the target display device.
10. A computer device, comprising: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating via the bus when the computer device is running, the machine-readable instructions, when executed by the processor, performing the steps of the display animation triggering method according to any one of claims 1 to 8.
11. A computer-readable storage medium, having stored thereon a computer program which, when executed by a processor, carries out the steps of the display animation triggering method according to any one of claims 1 to 8.
CN202010491640.2A 2020-06-02 2020-06-02 Method and device for triggering display animation Active CN111626254B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010491640.2A CN111626254B (en) 2020-06-02 2020-06-02 Method and device for triggering display animation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010491640.2A CN111626254B (en) 2020-06-02 2020-06-02 Method and device for triggering display animation

Publications (2)

Publication Number Publication Date
CN111626254A true CN111626254A (en) 2020-09-04
CN111626254B CN111626254B (en) 2024-04-16

Family

ID=72270177

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010491640.2A Active CN111626254B (en) 2020-06-02 2020-06-02 Method and device for triggering display animation

Country Status (1)

Country Link
CN (1) CN111626254B (en)


Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010108310A (en) * 2008-10-30 2010-05-13 Koichi Sumida Advertisement matching device and advertisement matching method
WO2014104518A1 (en) * 2012-12-27 2014-07-03 전자부품연구원 System and method for providing target advertisement
CN106973319A (en) * 2017-03-28 2017-07-21 武汉斗鱼网络科技有限公司 A kind of virtual gift display method and system
CN107908281A (en) * 2017-11-06 2018-04-13 北京小米移动软件有限公司 Virtual reality exchange method, device and computer-readable recording medium
CN107708100A (en) * 2017-11-15 2018-02-16 特斯联(北京)科技有限公司 A kind of advertisement broadcast method based on customer position information
CN110633664A (en) * 2019-09-05 2019-12-31 北京大蛋科技有限公司 Method and device for tracking attention of user based on face recognition technology
CN110996148A (en) * 2019-11-27 2020-04-10 重庆特斯联智慧科技股份有限公司 Scenic spot multimedia image flow playing system and method based on face recognition
CN111210258A (en) * 2019-12-23 2020-05-29 北京三快在线科技有限公司 Advertisement putting method and device, electronic equipment and readable storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Mar Gonzalez-Franco et al.: "Using Facial Animation to Increase the Enfacement Illusion and Avatar Self-Identification", IEEE, 13 February 2020 *
洪瑶 (Hong Yao): "Exploring the Development of Online Video Advertising in the New Media Era", 出版广角 (View on Publishing), 8 May 2017 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112967214A (en) * 2021-02-18 2021-06-15 深圳市慧鲤科技有限公司 Image display method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN111626254B (en) 2024-04-16

Similar Documents

Publication Publication Date Title
CN112348969A (en) Display method and device in augmented reality scene, electronic equipment and storage medium
CN111627117B (en) Image display special effect adjusting method and device, electronic equipment and storage medium
CN111638797A (en) Display control method and device
CN108920490A (en) Assist implementation method, device, electronic equipment and the storage medium of makeup
CN111694430A (en) AR scene picture presentation method and device, electronic equipment and storage medium
CN111643900A (en) Display picture control method and device, electronic equipment and storage medium
CN111651047A (en) Virtual object display method and device, electronic equipment and storage medium
CN111639613B (en) Augmented reality AR special effect generation method and device and electronic equipment
CN111667588A (en) Person image processing method, person image processing device, AR device and storage medium
CN111625100A (en) Method and device for presenting picture content, computer equipment and storage medium
CN111862341A (en) Virtual object driving method and device, display equipment and computer storage medium
CN112150349A (en) Image processing method and device, computer equipment and storage medium
CN112632349A (en) Exhibition area indicating method and device, electronic equipment and storage medium
CN111652983A (en) Augmented reality AR special effect generation method, device and equipment
CN111652971A (en) Display control method and device
CN111651058A (en) Historical scene control display method and device, electronic equipment and storage medium
CN111640167A (en) AR group photo method, AR group photo device, computer equipment and storage medium
CN111626254A (en) Display animation triggering method and device
CN105468249B (en) Intelligent interaction system and its control method
CN111639615B (en) Trigger control method and device for virtual building
CN112333498A (en) Display control method and device, computer equipment and storage medium
CN111665942A (en) AR special effect triggering display method and device, electronic equipment and storage medium
CN111640190A (en) AR effect presentation method and apparatus, electronic device and storage medium
CN111625103A (en) Sculpture display method and device, electronic equipment and storage medium
CN113538703A (en) Data display method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant