CN111625099A - Animation display control method and device - Google Patents

Animation display control method and device

Info

Publication number
CN111625099A
CN111625099A (application CN202010491112.7A)
Authority
CN
China
Prior art keywords
target
display
position information
target user
animation
Prior art date
Legal status: Granted
Application number
CN202010491112.7A
Other languages
Chinese (zh)
Other versions
CN111625099B (en)
Inventor
揭志伟
孙红亮
王子彬
刘小兵
Current Assignee
Shanghai Sensetime Intelligent Technology Co Ltd
Original Assignee
Shanghai Sensetime Intelligent Technology Co Ltd
Application filed by Shanghai Sensetime Intelligent Technology Co Ltd
Priority to CN202010491112.7A
Publication of CN111625099A
Application granted
Publication of CN111625099B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/14 Digital output to display device; cooperation and interconnection of the display device with other functional units
    • G06F 3/1407 General aspects irrespective of display type, e.g. determination of decimal point position, display with fixed or driving decimal point, suppression of non-significant zeros
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G06N 3/08 Learning methods
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions


Abstract

The present disclosure provides an animation display control method and device. The method includes: acquiring updated position information of a target user in real time when it is detected that the target user enters a target detection area; determining, based on the updated position information acquired each time, a target display position on an electronic screen corresponding to that information; and controlling, according to the target display position determined each time, the electronic screen to display a target virtual landscape display animation at the determined target display position.

Description

Animation display control method and device
Technical Field
The disclosure relates to the technical field of information processing, in particular to an animation display control method and device.
Background
In the related art, venues such as exhibition halls that need to introduce their contents often play venue-related animations on an electronic screen to improve the display effect. However, both the content of the played animation and its playing position are preset and fixed, so the display form is monotonous and the display effect is poor.
Disclosure of Invention
The embodiment of the disclosure at least provides an animation display control method and device.
In a first aspect, an embodiment of the present disclosure provides an animation display control method, including:
when it is detected that a target user enters a target detection area, acquiring updated position information of the target user in real time;
determining, based on the updated position information of the target user acquired each time, a target display position on an electronic screen corresponding to the updated position information; and
controlling, according to the target display position determined each time, the electronic screen to display a target virtual landscape display animation at the determined target display position.
According to the method provided by the present disclosure, the target display position of the target virtual landscape display animation is determined from the updated position information of the target user; when that position information changes, the target display position changes with it. This increases the interaction between the target user and the venue: a user can control where the target virtual landscape display animation is displayed by changing his or her own position, which enriches the ways animation display can be controlled and improves the display effect.
In a possible implementation manner, the obtaining of the updated location information of the target user in real time includes:
acquiring a target user image in real time, determining position information of the target user in the acquired target user image, and taking the position information as updated position information of the target user;
the determining a target display position on the electronic screen corresponding to the updated position information based on the updated position information of the target user obtained each time includes:
and determining a target display position on the electronic screen corresponding to the updated position information based on a preset corresponding relation between the user position information in the image and the display position information on the electronic screen.
In a possible implementation manner, the obtaining of the updated location information of the target user in real time includes:
acquiring, in real time, the coordinate information of the target user in a world coordinate system obtained by positioning the target user, and taking that coordinate information as the updated position information of the target user;
the determining a target display position on the electronic screen corresponding to the updated position information based on the updated position information of the target user obtained each time includes:
and determining a target display position on the electronic screen corresponding to the updated position information based on a preset corresponding relation between the position information of the user in the world coordinate system and the display position information on the electronic screen.
In one possible embodiment, the target virtual landscape presentation animation is determined according to the following steps:
and determining a target virtual landscape display animation matched with the updated position information of the target user based on the updated position information of the target user.
In one possible embodiment, the target virtual landscape presentation animation is determined according to the following steps:
acquiring the face attribute information of the target user;
and selecting a target virtual landscape showing animation matched with the face attribute information of the target user from a plurality of target virtual landscape showing animations.
In one possible embodiment, the selecting, from a plurality of target virtual landscape presentation animations, a target virtual landscape presentation animation that matches the facial attribute information of the target user includes:
determining a target virtual landscape type matched with the face attribute information of the target user based on the face attribute information of the target user;
and selecting the target virtual landscape showing animation from a plurality of virtual landscape showing animations corresponding to the plurality of virtual landscape types based on the target virtual landscape type.
In a possible implementation manner, in the case that updated location information of a plurality of target users is obtained, the controlling the electronic screen to display a target virtual landscape display animation at the determined target display location includes:
if the target display positions corresponding to the updated position information of the plurality of target users have no overlapping area, controlling the electronic screen to synchronously display, at each target display position, the target virtual landscape display animation corresponding to the updated position information of the respective target user.
In a possible implementation manner, in the case that updated location information of a plurality of target users is obtained, the controlling the electronic screen to display a target virtual landscape display animation at the determined target display location includes:
if the target display positions corresponding to the updated position information of the plurality of target users have an overlapping area, controlling the electronic screen to sequentially display the target virtual landscape display animations corresponding to the target users, or selecting one target virtual landscape display animation from those corresponding to the target users to play.
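As an illustrative sketch only (the helper names and the rectangle representation are assumptions, not part of the disclosure), the rule above, synchronous display for disjoint target display positions and sequential or single-animation play otherwise, may be expressed as:

```python
def rects_overlap(a, b):
    """Axis-aligned display rectangles as (x, y, w, h); True if a and b intersect."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def schedule_animations(display_rects):
    """Return 'synchronous' when no two target display positions overlap,
    otherwise 'sequential' (play the animations one after another)."""
    n = len(display_rects)
    for i in range(n):
        for j in range(i + 1, n):
            if rects_overlap(display_rects[i], display_rects[j]):
                return "sequential"
    return "synchronous"
```

The choice between sequential play and picking a single animation in the overlapping case is a policy decision left open by the disclosure; the sketch only detects which case applies.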
In a second aspect, an embodiment of the present disclosure further provides an animation display control device, including:
the acquisition module is used for acquiring the updated position information of the target user in real time under the condition that the target user is detected to enter the target detection area;
the determining module is used for determining a target display position on the electronic screen corresponding to the updated position information based on the updated position information of the target user acquired each time;
and the control module is used for controlling the electronic screen to display the target virtual landscape display animation at the determined target display position according to the target display position determined each time.
In a possible implementation manner, the obtaining module, when obtaining the updated location information of the target user in real time, is configured to:
acquiring a target user image in real time, determining position information of the target user in the acquired target user image, and taking the position information as updated position information of the target user;
the determining module, when determining a target display position on the electronic screen corresponding to the updated position information based on the updated position information of the target user acquired each time, is configured to:
and determining a target display position on the electronic screen corresponding to the updated position information based on a preset corresponding relation between the user position information in the image and the display position information on the electronic screen.
In a possible implementation manner, the obtaining module, when obtaining the updated location information of the target user in real time, is configured to:
acquiring, in real time, the coordinate information of the target user in a world coordinate system obtained by positioning the target user, and taking that coordinate information as the updated position information of the target user;
the determining module, when determining a target display position on the electronic screen corresponding to the updated position information based on the updated position information of the target user acquired each time, is configured to:
and determining a target display position on the electronic screen corresponding to the updated position information based on a preset corresponding relation between the position information of the user in the world coordinate system and the display position information on the electronic screen.
In a possible embodiment, the control module is further configured to determine the target virtual landscape presentation animation according to the following steps:
and determining a target virtual landscape display animation matched with the updated position information of the target user based on the updated position information of the target user.
In a possible embodiment, the control module is further configured to determine the target virtual landscape presentation animation according to the following steps:
acquiring the face attribute information of the target user;
and selecting a target virtual landscape showing animation matched with the face attribute information of the target user from a plurality of target virtual landscape showing animations.
In one possible embodiment, the control module, when selecting a target virtual landscape presenting animation matching with the face attribute information of the target user from a plurality of target virtual landscape presenting animations, is configured to:
determining a target virtual landscape type matched with the face attribute information of the target user based on the face attribute information of the target user;
and selecting the target virtual landscape showing animation from a plurality of virtual landscape showing animations corresponding to the plurality of virtual landscape types based on the target virtual landscape type.
In a possible embodiment, in the case that updated location information of a plurality of target users is obtained, the control module, when controlling the electronic screen to display a target virtual landscape display animation at the determined target display location, is configured to:
if the target display positions corresponding to the updated position information of the plurality of target users have no overlapping area, controlling the electronic screen to synchronously display, at each target display position, the target virtual landscape display animation corresponding to the updated position information of the respective target user.
In a possible embodiment, in the case that updated location information of a plurality of target users is obtained, the control module, when controlling the electronic screen to display a target virtual landscape display animation at the determined target display location, is configured to:
if the target display positions corresponding to the updated position information of the plurality of target users have an overlapping area, controlling the electronic screen to sequentially display the target virtual landscape display animations corresponding to the target users, or selecting one target virtual landscape display animation from those corresponding to the target users to play.
In a third aspect, an embodiment of the present disclosure further provides a computer device, including: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating via the bus when the computer device is running, the machine-readable instructions when executed by the processor performing the steps of the first aspect described above, or any possible implementation of the first aspect.
In a fourth aspect, an embodiment of the present disclosure further provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program performs the steps of the first aspect or any possible implementation thereof.
In order to make the aforementioned objects, features and advantages of the present disclosure more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings required by the embodiments are briefly described below. The drawings, which are incorporated in and form a part of the specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain its technical solutions. The following drawings depict only certain embodiments of the disclosure and are therefore not to be considered limiting of its scope; those skilled in the art can derive additional related drawings from them without inventive effort.
FIG. 1 is a flow chart illustrating a method for controlling animation display according to an embodiment of the disclosure;
FIG. 2 is a schematic diagram illustrating an effect of an electronic screen display interface provided by an embodiment of the present disclosure;
FIG. 3 is a flow chart illustrating a method for training a first neural network provided by an embodiment of the present disclosure;
FIG. 4 is a schematic diagram illustrating an architecture of an animation display control apparatus according to an embodiment of the present disclosure;
FIG. 5 is a schematic structural diagram of a computer device 500 provided by an embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions, and advantages of the embodiments of the present disclosure clearer, the technical solutions are described below clearly and completely with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present disclosure. The components of the embodiments, as generally described and illustrated in the figures, can be arranged and designed in a wide variety of configurations. Thus, the following detailed description is not intended to limit the scope of the claimed disclosure but is merely representative of selected embodiments. All other embodiments obtained by a person skilled in the art without creative effort shall fall within the protection scope of the disclosure.
In the related art, when an animation is played on an electronic screen, both its content and its playing position are preset, and the playing process involves no interaction with the user; users may therefore ignore the animation, and the display effect is poor.
Based on this, the embodiments of the present disclosure provide an animation display control method in which the target display position of a target virtual landscape display animation is determined from the updated position information of a target user. When that position information changes, the target display position changes accordingly, which increases the interaction between the target user and the venue: a user can control where the animation is displayed by changing his or her own position. This enriches the ways animation display can be controlled and improves the display effect.
The above drawbacks were identified by the inventors after careful practical study; therefore, the discovery of the above problems and the solutions proposed herein should both be regarded as contributions of the inventors to the present disclosure.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
To facilitate understanding of the present embodiment, the animation display control method disclosed in the embodiments of the present disclosure is first described in detail. The execution subject of the method is generally a computer device with a certain computing capability, for example a terminal device, a server, or another processing device; the terminal device may be a User Equipment (UE), a mobile device, a user terminal, a Personal Digital Assistant (PDA), or a computing device.
Referring to fig. 1, a flowchart of an animation display control method provided in the embodiment of the present disclosure is shown, where the method includes steps 101 to 103, where:
step 101, acquiring the updated position information of the target user in real time under the condition that the target user is detected to enter the target detection area.
The target detection area may be a preset position area, and a user entering it is a target user. An image of the position corresponding to the target detection area may be acquired by an image acquisition device whose installation position and orientation are fixed, so that the position area covered by the images it acquires is also fixed; that fixed area is the target detection area.
The image acquisition device can be connected to the electronic device executing the scheme provided by the present disclosure through a wired or wireless connection; the wireless connection may be, for example, a Bluetooth connection or a wireless local area network connection.
In a possible embodiment, the image capturing device may capture an image of the target detection area in real time and then transmit the image to the electronic device, and the electronic device may analyze the image captured by the image capturing device in real time to detect whether the target user is included in the target detection area.
In another possible implementation, an infrared detection device connected to the electronic device may be disposed in the target detection area to detect whether a target user is present there. When the infrared device detects that the target detection area contains a target user, the electronic device may control the image acquisition device to start capturing, in real time, images of that target user in the target detection area.
When the updated position information of the target user is obtained in real time, any one of the following methods can be used:
the method comprises the steps of acquiring a target user image in real time, determining position information of a target user in the acquired target user image, and taking the position information as updated position information of the target user.
Here, the position information of the target user in the acquired target user image is the position of the target user within that image, expressed as specific pixel positions, for example a certain position area in the target user image; the target user image is an image that includes the target user.
Since the position of the image acquisition device is fixed, when the position of the target user changes, the position of the target user in the acquired image changes as well; therefore the position information of the target user in the target user image can be used directly as the updated position information of the target user.
Method two: acquiring, in real time, the coordinate information of the target user in a world coordinate system obtained by positioning the target user.
A change in the position of the target user means that the coordinates of the target user in the world coordinate system change; therefore, those coordinates can be used as the updated position information of the target user.
Specifically, the target user image can be acquired in real time, and the camera's intrinsic parameters, extrinsic parameters, and distortion parameters can be obtained by calibrating the image acquisition device, e.g., by calibrating the camera; from these, a transformation matrix between the camera coordinate system and the world coordinate system can be determined. After the position coordinates of the target user in the target user image are determined, the coordinate information of the target user in the world coordinate system can be computed through the transformation matrix, and that coordinate information is then used as the updated position information of the target user.
When determining the position coordinates of the target user in the target user image, the coordinates can be identified by an image detection algorithm. These coordinates are expressed in a coordinate system established on the target user image; for example, a two-dimensional rectangular coordinate system can be established with the upper-left corner of the image as the origin and the two edges meeting at that corner as the x axis and y axis.
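The two steps above, detecting the target user's pixel coordinates and converting them to the world coordinate system through the calibration result, can be sketched as follows. Reducing the full camera model (intrinsics, extrinsics, distortion) to a single planar homography H is a simplifying assumption made here for illustration:

```python
def pixel_to_world(H, px, py):
    """Map an image pixel (px, py) to world ground-plane coordinates by
    applying the 3x3 homography H (given as nested lists) in homogeneous form."""
    x = H[0][0] * px + H[0][1] * py + H[0][2]
    y = H[1][0] * px + H[1][1] * py + H[1][2]
    w = H[2][0] * px + H[2][1] * py + H[2][2]
    return (x / w, y / w)  # de-homogenize to obtain world coordinates
```

In practice H would be derived from the calibrated camera parameters; with an identity homography the pixel coordinates pass through unchanged.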
And 102, determining a target display position on the electronic screen corresponding to the updated position information based on the updated position information of the target user acquired each time.
The electronic screen is used to display the target virtual landscape display animation and is connected to the electronic device executing this scheme, which controls the content played on the screen.
When determining the target display position on the electronic screen corresponding to the updated position information based on the updated position information of the target user acquired each time, different determination methods can be selected according to different types of the updated position information, specifically:
1) If the updated position information of the target user is the position information of the target user in the target user image, the target display position on the electronic screen corresponding to the updated position information is determined based on the preset correspondence between user position information in the image and display position information on the electronic screen.
Because the installation position and orientation of the image acquisition device are fixed, a correspondence between positions in the images it acquires and display positions on the electronic screen can be preset. After the position information of the target user in the acquired target user image is determined, the corresponding display position information can be looked up from this correspondence.
For example, suppose that in the preset correspondence the pixel at (x, y) in the image acquired by the image acquisition device corresponds to the display position (a, b) on the electronic screen; then if some position point of the target user in the target user image has coordinates (x, y), that point can be displayed at position (a, b) of the electronic screen.
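A minimal sketch of such a preset correspondence, assuming it is a simple linear scale between the camera image resolution and the screen resolution (the resolutions below are illustrative assumptions):

```python
CAM_W, CAM_H = 1920, 1080        # image acquisition resolution (assumed)
SCREEN_W, SCREEN_H = 3840, 2160  # electronic screen resolution (assumed)

def image_to_screen(x, y):
    """Map a pixel position (x, y) in the captured image to a display
    position (a, b) on the electronic screen by proportional scaling."""
    a = x * SCREEN_W / CAM_W
    b = y * SCREEN_H / CAM_H
    return (a, b)
```

Any other fixed mapping (e.g., a lookup table per pixel region) would serve the same role; the disclosure only requires that the correspondence be preset.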
2) If the updated position information of the target user is the coordinate information of the target user in the world coordinate system, the target display position on the electronic screen corresponding to the updated position information is determined according to the preset correspondence between user position information in the world coordinate system and display position information on the electronic screen.
Here, the coordinate information of the target user in the world coordinate system does not treat the target user as a single point: the target user occupies a region in the target user image, and the coordinate information is obtained by converting each pixel in that region into the world coordinate system, so it comprises a plurality of coordinates.
And 103, controlling the electronic screen to display the target virtual landscape display animation at the determined target display position according to the target display position determined each time.
In one possible implementation, the target virtual landscape display animation may be preset, and the animations corresponding to different target users may be the same.
In another possible implementation, the target virtual landscape display animation may be associated with the updated position information of the target user. In that case, the target virtual landscape display animation matching the updated position information is determined according to the updated position information of the target user.
For example, if the updated position information of target user A falls within area A, the matching target virtual landscape display animation may be an animation of a sunflower blooming; if it falls within area B, the matching animation may be a dancing bear. The display effect may be as shown in fig. 2, where the target users shown represent the positions of the same target user at different times.
In a specific implementation, the target detection area (or the target user image) may be divided into a plurality of regions, with a virtual landscape display animation preset for each region. After the updated position information of the target user is determined, the region in which the target user is located is identified from that position information, and the virtual landscape display animation corresponding to that region is taken as the target virtual landscape display animation.
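The region-based selection can be sketched as a lookup over preset rectangular regions; the region bounds and animation names below are invented for illustration (the patent's own examples are a sunflower and a bear):

```python
# Hypothetical division of the target detection area into rectangular
# regions (x0, y0, x1, y1), each preset with one display animation.
REGION_ANIMATIONS = {
    (0, 0, 500, 500): "sunflower_bloom",    # "area A" in the example above
    (500, 0, 1000, 500): "bear_dance",      # "area B"
}

def pick_region_animation(position, region_animations=REGION_ANIMATIONS):
    """Return the virtual landscape display animation of the region that
    contains the user's updated position, or None if the position lies
    in no preset region."""
    x, y = position
    for (x0, y0, x1, y1), animation in region_animations.items():
        if x0 <= x < x1 and y0 <= y < y1:
            return animation
    return None
```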
In another possible implementation, the target virtual animations corresponding to different target users may be determined from the face attribute information of the target users. Specifically, when determining the target virtual landscape display animation, the face attribute information of the target user is obtained first, and the target virtual landscape display animation matching that face attribute information is then selected from a plurality of target virtual landscape display animations.
The face attribute information of the target user may include any of the following: gender, age, smile value, attractiveness score, mood, and skin color.
When determining the face attribute information of the target user, the acquired target user image can be input into a trained first neural network to obtain the face attribute information, where the first neural network is trained on sample images carrying face attribute information labels.
Specifically, the training process of the first neural network may be as shown in fig. 3, and includes the following steps:
Step 301: obtain a sample image, where the sample image carries a face attribute information label.
Step 302: input the sample image into the first neural network to obtain predicted face attribute information.
Step 303: determine a loss value for this round of training based on the predicted face attribute information and the face attribute information label.
Step 304: judge whether the loss value is smaller than a preset loss value.
If yes, go to step 305;
if not, adjust the network parameters of the first neural network used in this round of training and return to step 302.
Step 305: take the first neural network used in this round of training as the trained neural network.
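The loop in steps 301 to 305 can be sketched framework-agnostically. The toy one-parameter "network", its squared-error loss, the learning rate, and the preset loss value below are all invented for illustration; the patent fixes none of them.

```python
def train_until_loss_small(step_fn, loss_fn, params, preset_loss, max_rounds=10000):
    """Sketch of steps 301-305: evaluate the loss on the labelled samples
    (302-303); if it is below the preset loss value (304), the current
    network is the trained one (305); otherwise adjust the network
    parameters and repeat from step 302."""
    for _ in range(max_rounds):
        loss = loss_fn(params)        # steps 302-303: predict, then measure loss
        if loss < preset_loss:        # step 304: compare with the preset value
            return params             # step 305: training finished
        params = step_fn(params)      # adjust parameters, return to step 302
    return params

# Toy stand-in: a "network" with one parameter w, loss (w - 3)^2, trained
# by gradient descent with learning rate 0.1 until the loss drops below 1e-4.
trained_w = train_until_loss_small(
    step_fn=lambda w: w - 0.1 * 2.0 * (w - 3.0),
    loss_fn=lambda w: (w - 3.0) ** 2,
    params=0.0,
    preset_loss=1e-4,
)
```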
When selecting the target virtual landscape display animation matching the face attribute information of the target user from the plurality of target virtual landscape display animations, the target virtual landscape type matching the face attribute information may first be determined based on that information, and the target virtual landscape display animation may then be selected, based on the target virtual landscape type, from the virtual landscape display animations corresponding to the plurality of virtual landscape types.
When determining the target virtual landscape type matching the face attribute information of the target user, the face attribute information may be input into a trained second neural network to obtain the matching target virtual landscape type. The training process of the second neural network is similar to that of the first neural network and is not repeated here, except that its training samples are sample face attribute information carrying target virtual landscape type labels, and the loss value is calculated from the predicted target virtual landscape type and the target virtual landscape type label.
In a specific implementation, a plurality of virtual landscape types and a plurality of virtual landscape display animations may be stored, where each virtual landscape type corresponds to one virtual landscape display animation; the virtual landscape type can thus serve as the identification information of its display animation.
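A minimal sketch of such a store, in which each landscape type identifies exactly one display animation; the type names and animation paths are hypothetical:

```python
# Hypothetical store: each virtual landscape type identifies exactly one
# virtual landscape display animation (names and paths are invented).
ANIMATION_BY_TYPE = {
    "flower": "animations/sunflower_bloom.anim",
    "animal": "animations/bear_dance.anim",
    "water": "animations/fountain.anim",
}

def select_animation(target_type):
    """Use the target virtual landscape type as identification
    information to select the target display animation."""
    return ANIMATION_BY_TYPE[target_type]
```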
In one possible implementation, when updated position information is obtained for a plurality of target users, different display methods may be used, depending on the determined target display positions, when controlling the electronic screen to display the target virtual landscape display animations. This can be divided into the following two cases:
Case 1: there is no overlapping area among the target display positions corresponding to the updated position information of the plurality of target users.
In this case, the electronic screen may be controlled to synchronously display, at each target display position, the target virtual landscape display animation corresponding to the updated position information of each target user.
Case 2: overlapping areas exist among the target display positions corresponding to the updated position information of the plurality of target users.
In this case, the electronic screen can be controlled to display the target virtual landscape display animations corresponding to the target users in sequence; alternatively, one target virtual landscape display animation is selected from those corresponding to the target users and played.
Specifically, display priorities may be preset for the different virtual landscape display animations. When there is an overlapping area among the target display positions, the display order of the target virtual landscape display animations corresponding to the plurality of target users is determined from their display priorities, and the animations are then displayed in that order.
In another possible implementation, when one target virtual landscape display animation is selected from those corresponding to the target users, it may be selected at random.
Alternatively, when the target virtual landscape type matching the face attribute information of a target user is determined by the second neural network, the network may output not only the target virtual landscape type but also the degree of matching between the face attribute information and that type. In that case, the matching degree between each target user and the corresponding target virtual landscape display animation is determined, and the electronic screen is controlled to play the target virtual landscape display animation with the highest matching degree.
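The two cases can be sketched together. Representing display positions as axis-aligned rectangles, attaching a matching degree to each animation, and resolving overlaps by highest matching degree (rather than by priority order or random choice, which the text equally allows) are all illustrative assumptions:

```python
def positions_overlap(r1, r2):
    """Whether two target display positions, given as axis-aligned
    rectangles (x0, y0, x1, y1) on the electronic screen, overlap."""
    return not (r1[2] <= r2[0] or r2[2] <= r1[0] or
                r1[3] <= r2[1] or r2[3] <= r1[1])

def animations_to_play(entries):
    """entries: list of (display_rect, animation, matching_degree),
    one entry per target user. If no display positions overlap (case 1),
    all animations are played synchronously; if any overlap exists
    (case 2), only the animation with the highest matching degree is
    played -- one of the selection strategies described above."""
    any_overlap = any(
        positions_overlap(a[0], b[0])
        for i, a in enumerate(entries) for b in entries[i + 1:]
    )
    if not any_overlap:
        return [anim for _, anim, _ in entries]      # case 1: synchronous display
    return [max(entries, key=lambda e: e[2])[1]]     # case 2: best-matching only
```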
In a specific implementation, the electronic screen may play a preset virtual landscape display animation, and once the target user is detected entering the target detection area and the target display position is determined, the target virtual landscape display animation is superimposed at that target display position on the electronic screen.
According to the method provided by the present disclosure, the target display position of the target virtual landscape display animation is determined from the updated position information of the target user, so that when the updated position information changes, the target display position can change accordingly. This increases the interaction between the target user and the venue: the user can control where the target virtual landscape display animation is shown by changing his or her own position, which enriches the ways in which animation display can be controlled and improves the display effect.
It will be understood by those skilled in the art that, in the method of the present disclosure, the order in which the steps are written implies neither a strict order of execution nor any limitation on the implementation; the specific order of execution should be determined by the functions of the steps and their possible inherent logic.
Based on the same inventive concept, the embodiment of the present disclosure further provides an animation display control device corresponding to the animation display control method, and as the principle of solving the problem of the device in the embodiment of the present disclosure is similar to that of the animation display control method in the embodiment of the present disclosure, the implementation of the device may refer to the implementation of the method, and repeated details are not described again.
Referring to fig. 4, there is shown a schematic structural diagram of an animation display control apparatus according to an embodiment of the present disclosure. The apparatus includes: an obtaining module 401, a determining module 402, and a control module 403; wherein:
an obtaining module 401, configured to obtain updated location information of a target user in real time when it is detected that the target user enters a target detection area;
a determining module 402, configured to determine, based on the obtained updated location information of the target user each time, a target display location on the electronic screen corresponding to the updated location information;
and a control module 403, configured to control the electronic screen to display the target virtual landscape display animation at the determined target display position according to the target display position determined each time.
In a possible implementation manner, the obtaining module 401, when obtaining the updated location information of the target user in real time, is configured to:
acquiring a target user image in real time, determining position information of the target user in the acquired target user image, and taking the position information as updated position information of the target user;
the determining module 402, when determining a target display position on the electronic screen corresponding to the updated position information based on the updated position information of the target user obtained each time, is configured to:
and determining a target display position on the electronic screen corresponding to the updated position information based on a preset corresponding relation between the user position information in the image and the display position information on the electronic screen.
In a possible implementation manner, the obtaining module 401, when obtaining the updated location information of the target user in real time, is configured to:
acquiring coordinate information of a target user in a world coordinate system, which is obtained after the target user is positioned, in real time, and taking the coordinate information as updated position information of the target user;
the determining module 402, when determining a target display position on the electronic screen corresponding to the updated position information based on the updated position information of the target user obtained each time, is configured to:
and determining a target display position on the electronic screen corresponding to the updated position information based on a preset corresponding relation between the position information of the user in the world coordinate system and the display position information on the electronic screen.
In a possible implementation, the control module 403 is further configured to determine the target virtual landscape presentation animation according to the following steps:
and determining a target virtual landscape display animation matched with the updated position information of the target user based on the updated position information of the target user.
In a possible implementation, the control module 403 is further configured to determine the target virtual landscape presentation animation according to the following steps:
acquiring the face attribute information of the target user;
and selecting a target virtual landscape showing animation matched with the face attribute information of the target user from a plurality of target virtual landscape showing animations.
In one possible embodiment, the control module 403, when selecting a target virtual landscape presenting animation matching with the face attribute information of the target user from a plurality of target virtual landscape presenting animations, is configured to:
determining a target virtual landscape type matched with the face attribute information of the target user based on the face attribute information of the target user;
and selecting the target virtual landscape showing animation from a plurality of virtual landscape showing animations corresponding to the plurality of virtual landscape types based on the target virtual landscape type.
In a possible implementation manner, in the case that updated location information of a plurality of target users is obtained, the control module 403, when controlling the electronic screen to display a target virtual landscape display animation at the determined target display location, is configured to:
and if the target display positions corresponding to the updated position information of the target users do not have the overlapping area, controlling the electronic screen to synchronously display the target virtual landscape display animation corresponding to the updated position information of the target users at each target display position respectively.
In a possible implementation manner, in the case that updated location information of a plurality of target users is obtained, the control module 403, when controlling the electronic screen to display a target virtual landscape display animation at the determined target display location, is configured to:
if the target display positions corresponding to the updated position information of the target users have the overlapped areas, controlling the electronic screen to sequentially display the target virtual landscape display animations corresponding to the target users; or selecting one target virtual landscape showing animation from the target virtual landscape showing animations corresponding to all the target users to play.
The description of the processing flow of each module in the device and the interaction flow between the modules may refer to the related description in the above method embodiments, and will not be described in detail here.
Based on the same technical concept, an embodiment of the present application further provides a computer device. Referring to fig. 5, a schematic structural diagram of a computer device 500 provided in an embodiment of the present application includes a processor 501, a storage 502, and a bus 503. The storage 502 is used for storing execution instructions and includes a memory 5021 and an external storage 5022; the memory 5021, also referred to as internal memory, temporarily stores operation data for the processor 501 and data exchanged with the external storage 5022, such as a hard disk. The processor 501 exchanges data with the external storage 5022 through the memory 5021. When the computer device 500 runs, the processor 501 communicates with the storage 502 through the bus 503, causing the processor 501 to execute the following instructions:
under the condition that the target user is detected to enter a target detection area, acquiring updated position information of the target user in real time;
determining a target display position on an electronic screen corresponding to the updated position information based on the updated position information of the target user acquired each time;
and controlling the electronic screen to display the target virtual landscape display animation at the determined target display position according to the target display position determined each time.
The embodiment of the present disclosure further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the steps of the animation display control method in the above method embodiment are executed. The storage medium may be a volatile or non-volatile computer-readable storage medium.
The computer program product of the animation display control method provided in the embodiments of the present disclosure includes a computer-readable storage medium storing program code, where the instructions included in the program code may be used to execute the steps of the animation display control method described in the above method embodiments; refer to the above method embodiments for details, which are not repeated here.
The embodiments of the present disclosure also provide a computer program which, when executed by a processor, implements any one of the methods of the foregoing embodiments. The computer program product may be embodied in hardware, software, or a combination thereof. In an alternative embodiment, the computer program product is embodied in a computer storage medium; in another alternative embodiment, it is embodied in a software product, such as a Software Development Kit (SDK).
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and apparatus described above may refer to the corresponding processes in the foregoing method embodiments and are not repeated here.

In the several embodiments provided in the present disclosure, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative; for example, the division into units is only one logical division, and other divisions are possible in actual implementation. For example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection of devices or units through communication interfaces, and may be electrical, mechanical, or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present disclosure. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that the above embodiments are merely specific embodiments of the present disclosure, used to illustrate rather than limit its technical solutions, and the protection scope of the present disclosure is not limited thereto. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person familiar with the art may still modify, or readily conceive of changes to, the technical solutions described in the foregoing embodiments, or make equivalent substitutions for some of their technical features, within the technical scope of the present disclosure. Such modifications, changes, or substitutions do not depart from the spirit and scope of the embodiments of the present disclosure and shall be covered by it. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (11)

1. An animation display control method, comprising:
under the condition that the target user is detected to enter a target detection area, acquiring updated position information of the target user in real time;
determining a target display position on an electronic screen corresponding to the updated position information based on the updated position information of the target user acquired each time;
and controlling the electronic screen to display the target virtual landscape display animation at the determined target display position according to the target display position determined each time.
2. The method of claim 1, wherein the obtaining updated location information of the target user in real time comprises:
acquiring a target user image in real time, determining position information of the target user in the acquired target user image, and taking the position information as updated position information of the target user;
the determining a target display position on the electronic screen corresponding to the updated position information based on the updated position information of the target user obtained each time includes:
and determining a target display position on the electronic screen corresponding to the updated position information based on a preset corresponding relation between the user position information in the image and the display position information on the electronic screen.
3. The method of claim 1, wherein the obtaining updated location information of the target user in real time comprises:
acquiring coordinate information of a target user in a world coordinate system, which is obtained after the target user is positioned, in real time, and taking the coordinate information as updated position information of the target user;
the determining a target display position on the electronic screen corresponding to the updated position information based on the updated position information of the target user obtained each time includes:
and determining a target display position on the electronic screen corresponding to the updated position information based on a preset corresponding relation between the position information of the user in the world coordinate system and the display position information on the electronic screen.
4. The method of claim 1, wherein the target virtual landscape presentation animation is determined according to the following steps:
and determining a target virtual landscape display animation matched with the updated position information of the target user based on the updated position information of the target user.
5. The method of claim 1, wherein the target virtual landscape presentation animation is determined according to the following steps:
acquiring the face attribute information of the target user;
and selecting a target virtual landscape showing animation matched with the face attribute information of the target user from a plurality of target virtual landscape showing animations.
6. The method of claim 5, wherein selecting the target virtual landscape presentation animation from the plurality of target virtual landscape presentation animations that matches the facial attribute information of the target user comprises:
determining a target virtual landscape type matched with the face attribute information of the target user based on the face attribute information of the target user;
and selecting the target virtual landscape showing animation from a plurality of virtual landscape showing animations corresponding to the plurality of virtual landscape types based on the target virtual landscape type.
7. The method according to claim 1, wherein in the case of obtaining updated location information of a plurality of target users, the controlling the electronic screen to display a target virtual landscape display animation at the determined target display location comprises:
and if the target display positions corresponding to the updated position information of the target users do not have the overlapping area, controlling the electronic screen to synchronously display the target virtual landscape display animation corresponding to the updated position information of the target users at each target display position respectively.
8. The method according to claim 1, wherein in the case of obtaining updated location information of a plurality of target users, the controlling the electronic screen to display a target virtual landscape display animation at the determined target display location comprises:
if the target display positions corresponding to the updated position information of the target users have the overlapped areas, controlling the electronic screen to sequentially display the target virtual landscape display animations corresponding to the target users; or selecting one target virtual landscape showing animation from the target virtual landscape showing animations corresponding to all the target users to play.
9. An animation display control apparatus, comprising:
the acquisition module is used for acquiring the updated position information of the target user in real time under the condition that the target user is detected to enter the target detection area;
the determining module is used for determining a target display position on the electronic screen corresponding to the updated position information based on the updated position information of the target user acquired each time;
and the control module is used for controlling the electronic screen to display the target virtual landscape display animation at the determined target display position according to the target display position determined each time.
10. A computer device, comprising: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating via the bus when a computer device is running, the machine-readable instructions when executed by the processor performing the steps of the animation display control method according to any one of claims 1 to 8.
11. A computer-readable storage medium, having stored thereon a computer program for executing the steps of the animation display control method according to any one of claims 1 to 8 when the computer program is executed by a processor.
CN202010491112.7A 2020-06-02 2020-06-02 Animation display control method and device Active CN111625099B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010491112.7A CN111625099B (en) 2020-06-02 2020-06-02 Animation display control method and device

Publications (2)

Publication Number Publication Date
CN111625099A true CN111625099A (en) 2020-09-04
CN111625099B CN111625099B (en) 2024-04-16

Family

ID=72259175

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010491112.7A Active CN111625099B (en) 2020-06-02 2020-06-02 Animation display control method and device

Country Status (1)

Country Link
CN (1) CN111625099B (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110145718A1 (en) * 2009-12-11 2011-06-16 Nokia Corporation Method and apparatus for presenting a first-person world view of content
CN102903137A (en) * 2011-07-27 2013-01-30 腾讯科技(深圳)有限公司 Animation playing method and system
CN105451093A (en) * 2015-11-05 2016-03-30 小米科技有限责任公司 Method and apparatus for adjusting visual area of screen
CN105630135A (en) * 2014-10-27 2016-06-01 中兴通讯股份有限公司 Intelligent terminal control method and device
CN106773759A (en) * 2016-12-15 2017-05-31 上海创功通讯技术有限公司 A kind of information displaying method and system
CN108563410A (en) * 2018-01-02 2018-09-21 联想(北京)有限公司 A kind of display control method and electronic equipment
CN109656363A (en) * 2018-09-04 2019-04-19 亮风台(上海)信息科技有限公司 It is a kind of for be arranged enhancing interaction content method and apparatus
CN109753145A (en) * 2018-05-11 2019-05-14 北京字节跳动网络技术有限公司 A kind of methods of exhibiting and relevant apparatus of transition cartoon
WO2019159044A1 (en) * 2018-02-19 2019-08-22 ГИОРГАДЗЕ, Анико Тенгизовна Method for placing a virtual advertising object for display to a user
CN110166842A (en) * 2018-11-19 2019-08-23 深圳市腾讯信息技术有限公司 A kind of video file operation method, apparatus and storage medium
CN110568931A (en) * 2019-09-11 2019-12-13 百度在线网络技术(北京)有限公司 interaction method, device, system, electronic device and storage medium
CN110633664A (en) * 2019-09-05 2019-12-31 北京大蛋科技有限公司 Method and device for tracking attention of user based on face recognition technology

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Anthony Martinet; Géry Casiez; Laurent Grisoni: "The design and evaluation of 3D positioning techniques for multi-touch displays", 2010 IEEE Symposium on 3D User Interfaces (3DUI), 29 April 2010 (2010-04-29) *
Wu Xueling; Ren Fu; Du Qingyun: "Virtual-real registration of spatial information based on hybrid hardware tracking and positioning", Geography and Geo-Information Science, no. 03, 15 May 2010 (2010-05-15) *

Also Published As

Publication number Publication date
CN111625099B (en) 2024-04-16

Similar Documents

Publication Publication Date Title
CN112162930B (en) Control identification method, related device, equipment and storage medium
CN109727303B (en) Video display method, system, computer equipment, storage medium and terminal
US10742900B2 (en) Method and system for providing camera effect
CN110716645A (en) Augmented reality data presentation method and device, electronic equipment and storage medium
CN112348969A (en) Display method and device in augmented reality scene, electronic equipment and storage medium
CN106355153A (en) Virtual object display method, device and system based on augmented reality
CN110888532A (en) Man-machine interaction method and device, mobile terminal and computer readable storage medium
US11308655B2 (en) Image synthesis method and apparatus
CN111638797A (en) Display control method and device
US11164384B2 (en) Mobile device image item replacements
CN112348968A (en) Display method and device in augmented reality scene, electronic equipment and storage medium
CN111862341A (en) Virtual object driving method and device, display equipment and computer storage medium
CN114003160B (en) Data visual display method, device, computer equipment and storage medium
CN110021062B (en) Product characteristic acquisition method, terminal and storage medium
CN113345083A (en) Product display method and device based on virtual reality, electronic equipment and medium
CN111640169A (en) Historical event presenting method and device, electronic equipment and storage medium
CN111651058A (en) Historical scene control display method and device, electronic equipment and storage medium
CN111639613A (en) Augmented reality AR special effect generation method and device and electronic equipment
CN113313066A (en) Image recognition method, image recognition device, storage medium and terminal
CN112333498A (en) Display control method and device, computer equipment and storage medium
CN111744197A (en) Data processing method, device and equipment and readable storage medium
CN109816744B (en) Neural network-based two-dimensional special effect picture generation method and device
CN111625101A (en) Display control method and device
CN110490065A (en) Face identification method and device, storage medium, computer equipment
CN111625099A (en) Animation display control method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant