CN111626254B - Method and device for triggering display animation - Google Patents

Method and device for triggering display animation

Info

Publication number
CN111626254B
CN111626254B
Authority
CN
China
Prior art keywords
target
virtual scene
face attribute
display
target user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010491640.2A
Other languages
Chinese (zh)
Other versions
CN111626254A (en)
Inventor
孙红亮
揭志伟
王子彬
刘小兵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Sensetime Intelligent Technology Co Ltd
Original Assignee
Shanghai Sensetime Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Sensetime Intelligent Technology Co Ltd filed Critical Shanghai Sensetime Intelligent Technology Co Ltd
Priority to CN202010491640.2A
Publication of CN111626254A
Application granted
Publication of CN111626254B
Legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/53 Querying
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174 Facial expression recognition

Abstract

The disclosure provides a method and a device for triggering a display animation. The method includes: performing face attribute recognition on a face image of a target user to determine face attribute feature information of the target user; determining, based on the face attribute feature information, a target virtual scene display animation matched with that information; and playing the target virtual scene display animation through a target display device.

Description

Method and device for triggering display animation
Technical Field
The disclosure relates to the technical field of artificial intelligence, in particular to a display animation triggering method and device.
Background
In an exhibition hall, electronic screens are typically provided in certain areas, through which virtual scenes, such as flowers, trees, etc., associated with the exhibition hall are presented. A typical way to present virtual scenes on an electronic screen is to automatically cycle through one or more pictures or animations.
This display mode is clearly monotonous and its display effect is poor; the screen may even be ignored entirely, so that display resources are wasted.
Disclosure of Invention
The embodiment of the disclosure at least provides a method and a device for triggering a display animation.
In a first aspect, an embodiment of the present disclosure provides a method for triggering a presentation animation, including:
performing face attribute recognition on a face image of a target user, and determining face attribute feature information of the target user;
determining, based on the face attribute feature information of the target user, a target virtual scene display animation matched with the face attribute feature information; and
playing the target virtual scene display animation through a target display device.
According to the method, different target virtual scenery display animations can be matched to different target users according to their feature information, and the corresponding animations are then displayed. Because the display content differs from user to user, the display content is enriched and the display effect is improved.
In a possible implementation manner, the face attribute feature information includes at least one of the following information:
gender, age, smile value, face value, mood, skin tone.
In a possible implementation manner, the determining, based on the face attribute feature information of the target user, a target virtual scene showing animation matched with the face attribute feature information of the target user includes:
determining a target virtual scene type matched with the face attribute characteristic information of the target user based on the face attribute characteristic information of the target user;
and selecting a target virtual scene showing animation corresponding to the target virtual scene type from virtual scene showing animations corresponding to a plurality of virtual scene types stored in advance based on the target virtual scene type.
In a possible implementation manner, determining a target virtual scene category matched with the face attribute feature information of the target user based on the face attribute feature information of the target user includes:
calculating the matching degree between the face attribute characteristic information of the target user and each virtual scene type based on a plurality of face attribute characteristics indicated by the face attribute characteristic information of the target user and a plurality of face attribute characteristics matched with each virtual scene type;
and selecting the virtual scene type with the highest corresponding matching degree as the target virtual scene type.
In a possible implementation manner, after determining a target virtual scene showing animation matched with the face attribute feature information of the target user, the method further comprises:
determining a target display position of the target virtual scene display animation corresponding to the target display device according to the relative position information of the target user relative to the target display device;
the playing of the virtual scenery display animation through the target display device comprises the following steps:
and controlling the target display device to display the target virtual scene display animation on the target display position.
According to this embodiment, the target user can control the display position of the target virtual scenery display animation on the target display device by changing his or her position relative to the target display device, which adds interaction between the target user and the target display device and improves the display effect.
In a possible implementation manner, if face attribute feature information of a plurality of target users is obtained, the controlling the target display device to display the target virtual scene display animation at the target display position includes:
and when the display positions corresponding to the target virtual scene display animations matched with the face attribute characteristic information of the plurality of target users do not have the overlapping areas, controlling the target display device to synchronously display the corresponding target virtual scene display animations on the target display positions corresponding to the target virtual scene display animations respectively.
By synchronously displaying a plurality of target virtual sceneries, the display content of the target display device can be enriched, and the display effect is improved.
In a possible implementation manner, if face attribute feature information of a plurality of target users is obtained, the controlling the target display device to display the target virtual scene display animation at the target display position includes:
when the display positions corresponding to the target virtual scene display animations matched with the face attribute characteristic information of the plurality of target users have the overlapping areas, sequentially playing the target virtual scene display animations with the overlapping areas through a target display device, or selecting one target virtual scene display animation from the target virtual scene display animations with the overlapping areas to play.
In a possible implementation manner, the face attribute recognition is performed on the face image of the target user, and the face attribute feature information of the target user is determined, which includes:
acquiring a face image of the target user;
inputting the face image into a trained neural network to obtain face attribute characteristic information of the target user; the neural network is obtained through training based on sample images marked with face attribute characteristic information.
In a second aspect, an embodiment of the present disclosure further provides a display animation triggering device, including:
the first determining module is used for carrying out face attribute recognition on the face image of the target user and determining face attribute characteristic information of the target user;
the second determining module is used for determining a target virtual scene display animation matched with the face attribute characteristic information of the target user based on the face attribute characteristic information of the target user;
and the playing module is used for playing the target virtual scene display animation through the target display device.
In a possible implementation manner, the face attribute feature information includes at least one of the following information:
gender, age, smile value, face value, mood, skin tone.
In a possible implementation manner, the second determining module is configured to, when determining, based on the face attribute feature information of the target user, a target virtual scene showing animation that matches the face attribute feature information of the target user:
determining a target virtual scene type matched with the face attribute characteristic information of the target user based on the face attribute characteristic information of the target user;
and selecting a target virtual scene showing animation corresponding to the target virtual scene type from virtual scene showing animations corresponding to a plurality of virtual scene types stored in advance based on the target virtual scene type.
In a possible implementation manner, the second determining module is configured to, when determining, based on the face attribute feature information of the target user, a target virtual scene type that matches the face attribute feature information of the target user:
calculating the matching degree between the face attribute characteristic information of the target user and each virtual scene type based on a plurality of face attribute characteristics indicated by the face attribute characteristic information of the target user and a plurality of face attribute characteristics matched with each virtual scene type;
and selecting the virtual scene type with the highest corresponding matching degree as the target virtual scene type.
In a possible implementation manner, after determining the target virtual scene showing animation matched with the face attribute feature information of the target user, the second determining module is further configured to:
determining a target display position of the target virtual scene display animation corresponding to the target display device according to the relative position information of the target user relative to the target display device;
the playing module is configured to, when playing the virtual scenery display animation through the target display device:
and controlling the target display device to display the target virtual scene display animation on the target display position.
In a possible implementation manner, if face attribute feature information of a plurality of target users is obtained, the playing module is configured to, when controlling the target display device to display the target virtual scene display animation at the target display position:
and when the display positions corresponding to the target virtual scene display animations matched with the face attribute characteristic information of the plurality of target users do not have the overlapping areas, controlling the target display device to synchronously display the corresponding target virtual scene display animations on the target display positions corresponding to the target virtual scene display animations respectively.
In a possible implementation manner, if face attribute feature information of a plurality of target users is obtained, the playing module is configured to, when controlling the target display device to display the target virtual scene display animation at the target display position:
when the display positions corresponding to the target virtual scene display animations matched with the face attribute characteristic information of the plurality of target users have the overlapping areas, sequentially playing the target virtual scene display animations with the overlapping areas through a target display device, or selecting one target virtual scene display animation from the target virtual scene display animations with the overlapping areas to play.
In a possible implementation manner, the first determining module is configured to, when performing face attribute recognition on a face image of a target user and determining face attribute feature information of the target user:
acquiring a face image of the target user;
inputting the face image into a trained neural network to obtain face attribute characteristic information of the target user; the neural network is obtained through training based on sample images marked with face attribute characteristic information.
In a third aspect, embodiments of the present disclosure further provide a computer device, comprising: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory in communication via the bus when the computer device is running, the machine-readable instructions when executed by the processor performing the steps of the first aspect, or any of the possible implementations of the first aspect.
In a fourth aspect, the presently disclosed embodiments also provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the first aspect, or any of the possible implementations of the first aspect.
The foregoing objects, features and advantages of the disclosure will be more readily apparent from the following detailed description of the preferred embodiments taken in conjunction with the accompanying drawings.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings required for the embodiments are briefly described below, which are incorporated in and constitute a part of the specification, these drawings showing embodiments consistent with the present disclosure and together with the description serve to illustrate the technical solutions of the present disclosure. It is to be understood that the following drawings illustrate only certain embodiments of the present disclosure and are therefore not to be considered limiting of its scope, for the person of ordinary skill in the art may admit to other equally relevant drawings without inventive effort.
FIG. 1 shows a flow chart illustrating an animation triggering method provided by an embodiment of the present disclosure;
FIG. 2 illustrates a flow chart of a neural network training method provided by an embodiment of the present disclosure;
FIG. 3a illustrates a display interface diagram of a target display device provided by an embodiment of the present disclosure;
FIG. 3b illustrates a display interface diagram of another target display device provided by an embodiment of the present disclosure;
FIG. 4 illustrates a display interface diagram of another target display device provided by an embodiment of the present disclosure;
FIG. 5 shows a schematic diagram of an architecture of a display animation triggering device provided by an embodiment of the present disclosure;
fig. 6 shows a schematic structural diagram of a computer device 600 provided in an embodiment of the disclosure.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present disclosure more apparent, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the drawings in the embodiments of the present disclosure, and it is apparent that the described embodiments are only some embodiments of the present disclosure, but not all embodiments. The components of the embodiments of the present disclosure, which are generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present disclosure provided in the accompanying drawings is not intended to limit the scope of the disclosure, as claimed, but is merely representative of selected embodiments of the disclosure. All other embodiments, which can be made by those skilled in the art based on the embodiments of this disclosure without making any inventive effort, are intended to be within the scope of this disclosure.
When the virtual scenery related to the exhibition hall is displayed in the exhibition hall, one or more pictures or animations are generally played in an automatic circulation mode, and the display mode is monotonous and has poor display effect.
Based on the above, the present disclosure provides a method and an apparatus for triggering a display animation, which can match different target virtual sceneries to different target users according to the users' feature information and then display the corresponding target virtual scenery display animations. Because the display content differs from user to user, the display content is enriched and the display effect is improved.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures.
To facilitate understanding of the present embodiment, a method for triggering a presentation animation according to an embodiment of the present disclosure is first described in detail. The execution subject of the method is generally a computer device with certain computing capability, for example a terminal device, a server, or another processing device; the terminal device may be a user equipment (UE), a mobile device, a user terminal, a personal digital assistant (PDA), a handheld device, a computing device, or the like.
The method for triggering the display animation according to the embodiment of the present disclosure will be described below by taking the executing body as an electronic device as an example.
Referring to fig. 1, a flowchart of a method for triggering an animation display according to an embodiment of the disclosure is shown, where the method includes steps 101 to 103, where:
step 101, face attribute identification is carried out on a face image of a target user, and face attribute characteristic information of the target user is determined.
Wherein the face attribute feature information includes at least one of the following information:
gender, age, smile value, face value, mood, skin tone.
The electronic device performing the present solution may be equipped with an image capturing device (such as a camera), or the electronic device performing the present solution may be connected to the image capturing device in a manner including, but not limited to, a wired connection, a wireless connection, where the wireless connection may include, for example, a bluetooth connection, a wireless network connection, and so on. The position of the image acquisition device may be fixed, and thus, the position area corresponding to the image acquired by the image acquisition device is also fixed.
The user entering the target detection area is a target user, and the target detection area is a position area corresponding to the image acquired by the image acquisition device. In specific implementation, the image acquisition device can acquire the image of the target detection area in real time and then transmit the image to the electronic equipment, and the electronic equipment can analyze the image acquired by the image acquisition device in real time so as to detect whether the target detection area contains a target user or not.
In another possible implementation, an infrared detection device connected to the electronic device may be disposed in the target detection area. The infrared detection device detects whether the target detection area contains a target user; when it detects one, the electronic device controls the image acquisition device to acquire a face image of the target user in the target detection area.
In specific implementation, when face attribute recognition is performed on the face image of the target user to determine the face attribute feature information of the target user, the face image of the target user can be acquired first and then input into a trained neural network to obtain the face attribute feature information, where the neural network is trained on sample images labeled with face attribute feature information.
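For illustration, the recognition step might be sketched as follows in Python. This is a minimal sketch under assumptions: the model file name, the two-head gender/age output, and the preprocessing are all hypothetical, since the disclosure does not prescribe a particular framework or network architecture.

    import torch
    from PIL import Image
    from torchvision import transforms

    # Hypothetical trained network; the disclosure only requires a neural network
    # trained on sample images labeled with face attribute feature information.
    model = torch.load("face_attribute_net.pt")
    model.eval()

    preprocess = transforms.Compose([
        transforms.Resize((224, 224)),  # assumed input size
        transforms.ToTensor(),
    ])

    def recognize_face_attributes(face_image_path):
        # Acquire the face image of the target user.
        image = Image.open(face_image_path).convert("RGB")
        batch = preprocess(image).unsqueeze(0)  # add a batch dimension
        # Input the face image into the trained neural network.
        with torch.no_grad():
            gender_logits, age = model(batch)  # assumed two-head output
        return {
            "gender": "female" if gender_logits.argmax(1).item() == 0 else "male",
            "age": float(age.item()),
        }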
The neural network can be trained with the method shown in fig. 2, which includes the following steps:
step 201, acquiring a sample image, wherein the sample image is marked with face attribute characteristic information.
And 202, inputting the sample image into a neural network, and outputting to obtain the predicted face attribute characteristic information.
Step 203, determining a loss value in the training process based on the face attribute feature information and the predicted face attribute feature information of the sample image mark.
Step 204, judging whether the loss value in the training process is smaller than a preset loss value.
If so, step 205 is executed;
if not, the network parameters of the neural network are adjusted, and the process returns to step 202.
Step 205, determining the neural network used in the training process as a trained neural network.
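A compact sketch of steps 201 to 205 follows; the loss function, optimizer, learning rate, and preset loss value are illustrative assumptions, since the disclosure only specifies training until the loss falls below a preset value.

    import torch

    def train(model, data_loader, preset_loss=0.05, lr=1e-3):
        criterion = torch.nn.MSELoss()  # assumed loss for attribute prediction
        optimizer = torch.optim.Adam(model.parameters(), lr=lr)
        while True:
            # Step 201: acquire sample images labeled with face attribute features.
            for sample_image, labeled_attributes in data_loader:
                # Step 202: input the sample image and obtain predicted attributes.
                predicted = model(sample_image)
                # Step 203: determine the loss value for this training step.
                loss = criterion(predicted, labeled_attributes)
                # Step 204: compare the loss with the preset loss value.
                if loss.item() < preset_loss:
                    # Step 205: the network is considered trained.
                    return model
                # Otherwise adjust the network parameters and return to step 202.
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()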
And 102, determining a target virtual scene display animation matched with the face attribute characteristic information of the target user based on the face attribute characteristic information of the target user.
In the implementation, when the target virtual scene showing animation matched with the face attribute feature information of the target user is determined based on the face attribute feature information of the target user, the target virtual scene type matched with the face attribute feature information of the target user can be determined based on the face attribute feature information of the target user; and selecting the target virtual scene showing animation corresponding to the target virtual scene type from a plurality of virtual scene showing animations corresponding to a plurality of virtual scenes stored in advance based on the target virtual scene type.
When determining the target virtual scene type matched with the face attribute feature information of the target user based on the face attribute feature information of the target user, calculating the matching degree between the face attribute feature information of the target user and each virtual scene type based on various face attribute features indicated by the face attribute feature information of the target user and various face attribute features matched with each virtual scene type; and then selecting the virtual scene type with the highest corresponding matching degree as the target virtual scene type.
The face attribute features matched by each virtual scenery are preset. For example, if the face attribute feature information includes gender and age, and the virtual sceneries include a virtual sunflower, a virtual rose, a virtual cactus, and the like, the face attribute features matched by each virtual scenery may be as shown in Table 1 below:
TABLE 1
Virtual scenery    | Face attribute features
Virtual sunflower  | Female, 0 to 10 years old
Virtual rose       | Female, 15 to 30 years old
Virtual cactus     | Male, 15 to 30 years old
When the matching degree between the face attribute feature information of the target user and each virtual scenery type is calculated based on the face attribute features indicated by that information and the face attribute features matched by each virtual scenery type, an intermediate matching degree between the target user's gender and age and the gender and age matched by each virtual scenery type can be calculated first; the intermediate matching degrees are then weighted and summed according to preset weights to obtain the matching degree between the target user's face attribute feature information and each virtual scenery type.
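Putting Table 1 and the weighted sum together, the selection of the target virtual scenery type might look like the sketch below; the 0/1 intermediate-match scoring and the 0.5/0.5 preset weights are assumptions for illustration, not values fixed by the disclosure.

    # Preset face attribute features matched by each virtual scenery (Table 1).
    SCENERY_PRESETS = {
        "virtual sunflower": {"gender": "female", "age_range": (0, 10)},
        "virtual rose":      {"gender": "female", "age_range": (15, 30)},
        "virtual cactus":    {"gender": "male",   "age_range": (15, 30)},
    }
    WEIGHTS = {"gender": 0.5, "age": 0.5}  # preset weights (assumed values)

    def matching_degree(user, preset):
        # Intermediate matching degree for gender and for age.
        gender_match = 1.0 if user["gender"] == preset["gender"] else 0.0
        low, high = preset["age_range"]
        age_match = 1.0 if low <= user["age"] <= high else 0.0
        # Weighted sum of the intermediate matching degrees.
        return WEIGHTS["gender"] * gender_match + WEIGHTS["age"] * age_match

    def select_target_scenery(user):
        # Pick the virtual scenery type with the highest matching degree.
        return max(SCENERY_PRESETS, key=lambda s: matching_degree(user, SCENERY_PRESETS[s]))

    # Example: a 20-year-old woman matches the virtual rose.
    print(select_target_scenery({"gender": "female", "age": 20}))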
The virtual scenery type may serve as identification information of the virtual scenery display animation, and the pre-stored virtual scenery display animations may correspond respectively to the different types of virtual sceneries.
And step 103, playing the target virtual scene display animation through a target display device.
In one possible implementation, after the target virtual scene display animation matched with the face attribute feature information of the target user is determined, a target display position of the animation on the target display device may further be determined according to the relative position information of the target user with respect to the target display device; when the animation is played by the target display device, the device can be controlled to display it at the target display position.
Specifically, when the target display position of the target virtual scene display animation on the target display device is determined according to the relative position information of the target user with respect to the device, the position of the target user in the face image can first be determined based on the face image of the target user.
For example, if the position coordinates of a certain position point of the target user in the face image are (x₁, y₁), and the position coordinates of that point on the target display device are determined to be (x₂, y₂) through a preset correspondence between image positions and display positions of the target display device, then the target display device can display the animation at the position (x₂, y₂).
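One simple realization of the preset correspondence between image positions and display positions is a proportional mapping; the sketch below assumes such a linear correspondence, which the disclosure does not mandate.

    def image_to_display(x1, y1, image_size, display_size):
        # Map a point (x1, y1) in the captured image to a display
        # position (x2, y2) by simple proportional scaling.
        img_w, img_h = image_size
        disp_w, disp_h = display_size
        x2 = x1 / img_w * disp_w
        y2 = y1 / img_h * disp_h
        return x2, y2

    # A user at the center of a 1920x1080 camera frame maps to the
    # center of a 3840x2160 display.
    print(image_to_display(960, 540, (1920, 1080), (3840, 2160)))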
For example, as shown in fig. 3a, if the target user stands to the left of the target display device, the target virtual scene display animation corresponding to the user's face attribute feature information (a flower in fig. 3a, where 301 is the target display device) can be displayed on the left side of the display; if the target user stands in the middle of the target display device, the animation can be displayed in the middle of the device, as shown in fig. 3b, where 301 is the target display device.
When face attribute feature information of a plurality of target users is acquired and the target display device is controlled to display the target virtual scene display animations at their target display positions, different display effects can arise depending on whether the target display positions overlap. There are two cases:
Case 1: the display positions corresponding to the target virtual scene display animations matched with the face attribute feature information of the plurality of target users have no overlapping area.
In this case, the target display device can be controlled to synchronously display the corresponding target virtual scene display animations at their respective target display positions.
For example, target user A stands on the left side of the target display device and target user B stands on the right side. If the target virtual scenery display animation matched with user A's face attribute feature information is a blooming flower and the one matched with user B's is a little figure dancing, the blooming flower can be displayed on the left side of the target display device while the dancing figure is displayed on the right side, as shown in fig. 4, where 401 is the target display device.
Here, "left side", "right side", and "middle" are simplified descriptions used to explain this step; the actual target display position of a target virtual scene display animation is determined by calculation.
Case 2: overlapping areas exist between the display positions corresponding to the target virtual scene display animations matched with the face attribute feature information of the plurality of target users.
In this case, the target virtual scene display animations whose display positions overlap can be played sequentially by the target display device, or one of them can be selected for playing.
When the overlapping animations are played sequentially by the target display device, they can be played in the order in which the corresponding face attribute feature information was detected. If the face attribute feature information of several target users is detected at the same time and the target display positions of the corresponding animations overlap, the target display device can be controlled to randomly select one of these animations for playing.
For example, suppose target user A enters the target detection area at 10:00 and, based on A's face attribute feature information, the matched animation is determined to be virtual scene display animation 1, whose display position is area a; target user B enters the target detection area at 10:01 and, based on B's face attribute feature information, the matched animation is determined to be virtual scene display animation 2, whose display position is area b; and area a and area b overlap. Because the face attribute feature information of target user A was detected earlier than that of target user B, virtual scene display animation 1 is played first, followed by virtual scene display animation 2.
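A sketch of the overlap test and the detection-order playback queue follows; the rectangular region representation and the field names are assumptions made for illustration.

    from dataclasses import dataclass

    @dataclass
    class PendingAnimation:
        animation_id: int
        region: tuple       # (x, y, width, height) of the target display position
        detected_at: float  # time the user's face attribute information was detected

    def regions_overlap(a, b):
        # Axis-aligned rectangle overlap test for two display regions.
        ax, ay, aw, ah = a.region
        bx, by, bw, bh = b.region
        return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

    def playback_order(pending):
        # Overlapping animations are played one after another in the order
        # in which the corresponding face attribute information was detected.
        return sorted(pending, key=lambda p: p.detected_at)

    # Animation 1 (user A, detected at 10:00) plays before animation 2
    # (user B, detected at 10:01) because their regions overlap.
    a1 = PendingAnimation(1, (0, 0, 100, 100), 10.00)
    a2 = PendingAnimation(2, (50, 50, 100, 100), 10.01)
    if regions_overlap(a1, a2):
        print([p.animation_id for p in playback_order([a1, a2])])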
When the face attribute feature information of the target user is determined by the neural network, the network can output not only the feature information itself but also its accuracy. When one target virtual scene display animation is selected for playing from those whose display positions overlap, the selection can be made according to the accuracy of the target users' face attribute feature information.
Specifically, if the face attribute feature information of a plurality of target users is detected at the same time, the target display device can be controlled to play the target virtual scene display animation corresponding to the feature information with the highest accuracy.
In one possible implementation, while the target virtual scene display animation corresponding to the face attribute feature information of the current target user is being displayed, the face attribute feature information of another target user may be detected, and the target display position of that user's matched animation may overlap with the position of the animation currently being played. In this case, the other user's animation can be displayed after the current user's animation finishes.
In another possible implementation, the accuracies corresponding to the face attribute feature information of the current target user and of the other target user can be compared, and the target virtual scene display animation corresponding to the feature information with the higher accuracy is displayed.
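For the simultaneous-detection case, the confidence-based selection could be sketched as follows; the dictionary layout and the confidence field are hypothetical.

    def choose_by_confidence(candidates):
        # Among overlapping animations detected at the same time, play the one
        # whose face attribute feature information has the highest accuracy
        # (confidence) reported by the neural network.
        return max(candidates, key=lambda c: c["confidence"])

    # Animation 2 is played because its attributes were recognized with
    # higher confidence.
    print(choose_by_confidence([
        {"animation_id": 1, "confidence": 0.81},
        {"animation_id": 2, "confidence": 0.93},
    ])["animation_id"])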
According to the display animation triggering method above, different target virtual sceneries can be matched to different target users according to their feature information, and the corresponding target virtual scenery display animations are then displayed. Because the display content differs from user to user, the display content is enriched and the display effect is improved.
It will be appreciated by those skilled in the art that, in the above method of the specific embodiments, the written order of the steps does not imply a strict execution order; the specific execution order should be determined by the functions and possible internal logic of the steps.
Based on the same inventive concept, the embodiment of the disclosure further provides a display animation triggering device corresponding to the display animation triggering method, and since the principle of solving the problem by the device in the embodiment of the disclosure is similar to that of the display animation triggering method in the embodiment of the disclosure, the implementation of the device can refer to the implementation of the method, and the repetition is omitted.
Referring to fig. 5, an architecture diagram of a display animation triggering device according to an embodiment of the disclosure is provided, where the device includes: a first determining module 501, a second determining module 502, and a playing module 503; wherein,
a first determining module 501, configured to identify a face attribute of a face image of a target user, and determine face attribute feature information of the target user;
a second determining module 502, configured to determine, based on the face attribute feature information of the target user, a target virtual scene display animation that matches the face attribute feature information of the target user;
and the playing module 503 is used for playing the target virtual scene display animation through a target display device.
In a possible implementation manner, the face attribute feature information includes at least one of the following information:
gender, age, smile value, face value, mood, skin tone.
In a possible implementation manner, the second determining module 502 is configured to, when determining, based on the face attribute feature information of the target user, a target virtual scene showing animation that matches the face attribute feature information of the target user:
determining a target virtual scene type matched with the face attribute characteristic information of the target user based on the face attribute characteristic information of the target user;
and selecting a target virtual scene showing animation corresponding to the target virtual scene type from virtual scene showing animations corresponding to a plurality of virtual scene types stored in advance based on the target virtual scene type.
In a possible implementation manner, the second determining module 502 is configured to, when determining, based on the face attribute feature information of the target user, a target virtual scene type that matches the face attribute feature information of the target user:
calculating the matching degree between the face attribute characteristic information of the target user and each virtual scene type based on a plurality of face attribute characteristics indicated by the face attribute characteristic information of the target user and a plurality of face attribute characteristics matched with each virtual scene type;
and selecting the virtual scene type with the highest corresponding matching degree as the target virtual scene type.
In a possible implementation manner, after determining the target virtual scene showing animation matched with the face attribute feature information of the target user, the second determining module 502 is further configured to:
determining a target display position of the target virtual scene display animation corresponding to the target display device according to the relative position information of the target user relative to the target display device;
The playing module 503 is configured to, when playing the virtual scene showing animation through the target display device:
and controlling the target display device to display the target virtual scene display animation on the target display position.
In a possible implementation manner, if face attribute feature information of a plurality of target users is obtained, the playing module 503 is configured to, when controlling the target display device to display the target virtual scene display animation at the target display position:
and when the display positions corresponding to the target virtual scene display animations matched with the face attribute characteristic information of the plurality of target users do not have the overlapping areas, controlling the target display device to synchronously display the corresponding target virtual scene display animations on the target display positions corresponding to the target virtual scene display animations respectively.
In a possible implementation manner, if face attribute feature information of a plurality of target users is obtained, the playing module 503 is configured to, when controlling the target display device to display the target virtual scene display animation at the target display position:
when the display positions corresponding to the target virtual scene display animations matched with the face attribute characteristic information of the plurality of target users have the overlapping areas, sequentially playing the target virtual scene display animations with the overlapping areas through a target display device, or selecting one target virtual scene display animation from the target virtual scene display animations with the overlapping areas to play.
In a possible implementation manner, the first determining module 501 is configured to, when performing face attribute recognition on a face image of a target user, determine face attribute feature information of the target user:
acquiring a face image of the target user;
inputting the face image into a trained neural network to obtain face attribute characteristic information of the target user; the neural network is obtained through training based on sample images marked with face attribute characteristic information.
The process flow of each module in the apparatus and the interaction flow between the modules may be described with reference to the related descriptions in the above method embodiments, which are not described in detail herein.
Based on the same technical conception, the embodiment of the application also provides computer equipment. Referring to fig. 6, a schematic structural diagram of a computer device 600 according to an embodiment of the present application includes a processor 601, a memory 602, and a bus 603. The memory 602 is used for storing execution instructions, including a memory 6021 and an external memory 6022; the memory 6021 is also referred to as an internal memory, and is used for temporarily storing operation data in the processor 601 and data exchanged with the external memory 6022 such as a hard disk, the processor 601 exchanges data with the external memory 6022 through the memory 6021, and when the computer device 600 operates, the processor 601 and the memory 602 communicate through the bus 603, so that the processor 601 executes the following instructions:
performing face attribute recognition on the face image of the target user, and determining face attribute feature information of the target user;
determining, based on the face attribute feature information of the target user, a target virtual scene display animation matched with the face attribute feature information; and
playing the target virtual scene display animation through a target display device.
The disclosed embodiments also provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the display animation triggering method described in the method embodiments above. The storage medium may be a volatile or nonvolatile computer-readable storage medium.
The computer program product of the display animation triggering method provided by the embodiments of the present disclosure includes a computer-readable storage medium storing program code; the program code includes instructions for executing the steps of the display animation triggering method described in the above method embodiments, which are not repeated here.
The disclosed embodiments also provide a computer program which, when executed by a processor, implements any of the methods of the previous embodiments. The computer program product may be realized in particular by means of hardware, software or a combination thereof. In an alternative embodiment, the computer program product is embodied as a computer storage medium, and in another alternative embodiment, the computer program product is embodied as a software product, such as a software development kit (Software Development Kit, SDK), or the like.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described system and apparatus may refer to corresponding procedures in the foregoing method embodiments, which are not described herein again. In the several embodiments provided in the present disclosure, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. The above-described apparatus embodiments are merely illustrative, for example, the division of the units is merely a logical function division, and there may be other manners of division in actual implementation, and for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be through some communication interface, device or unit indirect coupling or communication connection, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present disclosure may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer readable storage medium executable by a processor. Based on such understanding, the technical solution of the present disclosure may be embodied in essence or a part contributing to the prior art or a part of the technical solution, or in the form of a software product stored in a storage medium, including several instructions to cause a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the method described in the embodiments of the present disclosure. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
Finally, it should be noted that: the foregoing examples are merely specific embodiments of the present disclosure, and are not intended to limit the scope of the disclosure, but the present disclosure is not limited thereto, and those skilled in the art will appreciate that while the foregoing examples are described in detail, it is not limited to the disclosure: any person skilled in the art, within the technical scope of the disclosure of the present disclosure, may modify or easily conceive changes to the technical solutions described in the foregoing embodiments, or make equivalent substitutions for some of the technical features thereof; such modifications, changes or substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the disclosure, and are intended to be included within the scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (9)

1. A method for triggering a presentation animation, comprising:
face attribute identification is carried out on the face image of the target user, and face attribute characteristic information of the target user is determined;
determining a target virtual scene display animation matched with the face attribute characteristic information of the target user based on the face attribute characteristic information of the target user;
playing the target virtual scene display animation through a target display device;
the determining, based on the face attribute feature information of the target user, a target virtual scene showing animation matched with the face attribute feature information of the target user includes:
calculating the matching degree between the face attribute characteristic information of the target user and each virtual scene type based on a plurality of face attribute characteristics indicated by the face attribute characteristic information of the target user and a plurality of face attribute characteristics matched with each virtual scene type;
selecting the virtual scene type with the highest corresponding matching degree as a target virtual scene type;
selecting a target virtual scene showing animation corresponding to the target virtual scene type from virtual scene showing animations corresponding to a plurality of virtual scene types stored in advance based on the target virtual scene type; the calculating the matching degree between the face attribute feature information of the target user and each virtual scene type based on the face attribute features indicated by the face attribute feature information of the target user and the face attribute features matched by each virtual scene type comprises the following steps: respectively calculating the intermediate matching degree between the gender and age of the target user and the gender and age matched with each virtual scene type, and then carrying out weighted summation on the intermediate matching degrees according to preset weights to obtain the matching degree between the face attribute feature information of the target user and each virtual scene type.
2. The method of claim 1, wherein the face attribute feature information comprises at least one of:
gender, age, smile value, face value, mood, skin tone.
3. The method of claim 1, wherein after determining a target virtual scene presentation animation that matches the face attribute feature information of the target user, the method further comprises:
determining a target display position of the target virtual scene display animation corresponding to the target display device according to the relative position information of the target user relative to the target display device;
the playing of the virtual scenery display animation through the target display device comprises the following steps:
and controlling the target display device to display the target virtual scene display animation on the target display position.
4. The method of claim 3, wherein if face attribute feature information of a plurality of target users is obtained, the controlling the target display device to display the target virtual scene display animation at the target display position comprises:
and when the display positions corresponding to the target virtual scene display animations matched with the face attribute characteristic information of the plurality of target users do not have the overlapping areas, controlling the target display device to synchronously display the corresponding target virtual scene display animations on the target display positions corresponding to the target virtual scene display animations respectively.
5. The method of claim 3, wherein if face attribute feature information of a plurality of target users is obtained, the controlling the target display device to display the target virtual scene display animation at the target display position comprises:
when the display positions corresponding to the target virtual scene display animations matched with the face attribute characteristic information of the plurality of target users have the overlapping areas, sequentially playing the target virtual scene display animations with the overlapping areas through a target display device, or selecting one target virtual scene display animation from the target virtual scene display animations with the overlapping areas to play.
6. The method according to claim 1, wherein the performing face attribute recognition on the face image of the target user, and determining the face attribute feature information of the target user, includes:
acquiring a face image of the target user;
inputting the face image into a trained neural network to obtain face attribute characteristic information of the target user;
the neural network is obtained through training based on sample images marked with face attribute characteristic information.
7. A display animation triggering device, comprising:
the first determining module is used for carrying out face attribute recognition on the face image of the target user and determining face attribute characteristic information of the target user;
the second determining module is used for determining a target virtual scene display animation matched with the face attribute characteristic information of the target user based on the face attribute characteristic information of the target user;
the playing module is used for playing the target virtual scene display animation through a target display device;
the second determining module is used for determining a target virtual scene display animation matched with the face attribute characteristic information of the target user based on the face attribute characteristic information of the target user, wherein the second determining module is used for:
calculating the matching degree between the face attribute characteristic information of the target user and each virtual scene type based on a plurality of face attribute characteristics indicated by the face attribute characteristic information of the target user and a plurality of face attribute characteristics matched with each virtual scene type;
selecting the virtual scene type with the highest corresponding matching degree as a target virtual scene type;
selecting a target virtual scene showing animation corresponding to the target virtual scene type from virtual scene showing animations corresponding to a plurality of virtual scene types stored in advance based on the target virtual scene type; the second determining module, when calculating the matching degree between the face attribute feature information of the target user and each virtual scene type based on a plurality of face attribute features indicated by the face attribute feature information of the target user and a plurality of face attribute features matched with each virtual scene type, is used for: respectively calculating the intermediate matching degree between the gender and age of the target user and the gender and age matched with each virtual scene type, and then carrying out weighted summation on the intermediate matching degrees according to preset weights to obtain the matching degree between the face attribute feature information of the target user and each virtual scene type.
8. A computer device, comprising: a processor, a memory and a bus, wherein the memory stores machine-readable instructions executable by the processor; when the computer device runs, the processor and the memory communicate with each other via the bus, and the machine-readable instructions, when executed by the processor, perform the steps of the display animation triggering method according to any one of claims 1 to 6.
9. A computer-readable storage medium, having stored thereon a computer program which, when executed by a processor, performs the steps of the display animation triggering method according to any one of claims 1 to 6.
CN202010491640.2A 2020-06-02 2020-06-02 Method and device for triggering display animation Active CN111626254B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010491640.2A CN111626254B (en) 2020-06-02 2020-06-02 Method and device for triggering display animation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010491640.2A CN111626254B (en) 2020-06-02 2020-06-02 Method and device for triggering display animation

Publications (2)

Publication Number Publication Date
CN111626254A CN111626254A (en) 2020-09-04
CN111626254B (en) 2024-04-16

Family

ID=72270177

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010491640.2A Active CN111626254B (en) 2020-06-02 2020-06-02 Method and device for triggering display animation

Country Status (1)

Country Link
CN (1) CN111626254B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112967214A (en) * 2021-02-18 2021-06-15 深圳市慧鲤科技有限公司 Image display method, device, equipment and storage medium

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010108310A (en) * 2008-10-30 2010-05-13 Koichi Sumida Advertisement matching device and advertisement matching method
WO2014104518A1 (en) * 2012-12-27 2014-07-03 전자부품연구원 System and method for providing target advertisement
CN106973319A (en) * 2017-03-28 2017-07-21 武汉斗鱼网络科技有限公司 A kind of virtual gift display method and system
CN107908281A (en) * 2017-11-06 2018-04-13 北京小米移动软件有限公司 Virtual reality exchange method, device and computer-readable recording medium
CN107708100A (en) * 2017-11-15 2018-02-16 特斯联(北京)科技有限公司 A kind of advertisement broadcast method based on customer position information
CN110633664A (en) * 2019-09-05 2019-12-31 北京大蛋科技有限公司 Method and device for tracking attention of user based on face recognition technology
CN110996148A (en) * 2019-11-27 2020-04-10 重庆特斯联智慧科技股份有限公司 Scenic spot multimedia image flow playing system and method based on face recognition
CN111210258A (en) * 2019-12-23 2020-05-29 北京三快在线科技有限公司 Advertisement putting method and device, electronic equipment and readable storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Using Facial Animation to Increase the Enfacement Illusion and Avatar Self-Identification; Mar Gonzalez-Franco et al.; IEEE; 2020-02-13; full text *
Analysis of the Development of Online Video Advertising in the New Media Era (探析新媒体时代网络视频广告的发展); Hong Yao (洪瑶); View on Publishing (出版广角); 2017-05-08; full text *

Also Published As

Publication number Publication date
CN111626254A (en) 2020-09-04

Similar Documents

Publication Publication Date Title
CN112348969B (en) Display method and device in augmented reality scene, electronic equipment and storage medium
US20090251484A1 (en) Avatar for a portable device
CN110390705B (en) Method and device for generating virtual image
CN108830892B (en) Face image processing method and device, electronic equipment and computer readable storage medium
CN111640202B (en) AR scene special effect generation method and device
CN106355153A (en) Virtual object display method, device and system based on augmented reality
CN111643900B (en) Display screen control method and device, electronic equipment and storage medium
CN111627117B (en) Image display special effect adjusting method and device, electronic equipment and storage medium
CN109299658B (en) Face detection method, face image rendering device and storage medium
CN111652987B (en) AR group photo image generation method and device
CN109886223B (en) Face recognition method, bottom library input method and device and electronic equipment
CN108985263B (en) Data acquisition method and device, electronic equipment and computer readable medium
CN109670385B (en) Method and device for updating expression in application program
CN111638797A (en) Display control method and device
CN111639613B (en) Augmented reality AR special effect generation method and device and electronic equipment
CN113487709A (en) Special effect display method and device, computer equipment and storage medium
CN112150349A (en) Image processing method and device, computer equipment and storage medium
CN111640169A (en) Historical event presenting method and device, electronic equipment and storage medium
CN111625100A (en) Method and device for presenting picture content, computer equipment and storage medium
CN111626254B (en) Method and device for triggering display animation
CN108683845A (en) Image processing method, device, storage medium and mobile terminal
KR20140124087A (en) System and method for recommending hair based on face and style recognition
CN111638798A (en) AR group photo method, AR group photo device, computer equipment and storage medium
CN105468249B (en) Intelligent interaction system and its control method
CN111639615B (en) Trigger control method and device for virtual building

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant