CN112135161A - Dynamic effect display method and device of virtual gift, storage medium and electronic equipment - Google Patents


Info

Publication number
CN112135161A
CN112135161A
Authority
CN
China
Prior art keywords
animation
image
gift
data
rendering
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011022695.5A
Other languages
Chinese (zh)
Inventor
巫金生
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Cubesili Information Technology Co Ltd
Original Assignee
Guangzhou Huaduo Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Huaduo Network Technology Co Ltd filed Critical Guangzhou Huaduo Network Technology Co Ltd
Priority to CN202011022695.5A priority Critical patent/CN112135161A/en
Publication of CN112135161A publication Critical patent/CN112135161A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/2187Live feed
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00Animation
    • G06T13/802D [Two Dimensional] animation, e.g. using sprites
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/435Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/478Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4788Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81Monomedia components thereof
    • H04N21/8146Monomedia components thereof involving graphical data, e.g. 3D object, 2D graphics

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention provides a method and device for displaying the dynamic effect of a virtual gift, a storage medium, and electronic equipment. The method comprises the following steps: in response to a received gift display instruction, determining the gift animation to be displayed, and creating a video player in the display interface; triggering the video player to sequentially read the image data of each frame of animation image in the gift animation to be displayed; if the image data of the currently read animation image contains both rendering data and interaction data, rendering the rendered image corresponding to the rendering data in a rendering layer, and drawing the interactive image in an interaction layer; displaying, in the display interface, the display image obtained by stacking the rendered image and the interactive image; and, when the display images corresponding to each frame of animation image have been displayed in the display interface in sequence, completing the dynamic effect display of the gift animation to be displayed. Because the special effect of the virtual gift is displayed in the form of an animation video, a complex animation effect can be displayed faithfully without using a game engine, ensuring a good user experience.

Description

Dynamic effect display method and device of virtual gift, storage medium and electronic equipment
Technical Field
The invention relates to the technical field of computers, in particular to a dynamic effect display method and device of a virtual gift, a storage medium and electronic equipment.
Background
With the development of the internet, people use social platforms and social software ever more frequently. To improve the user experience, many platforms and applications now provide virtual gifts: a user can give a virtual gift to an object they interact with on the platform, and the terminal displays the dynamic effect of the virtual gift, realizing the effect of gift-giving in the online world and providing users with high-quality service and interactive experience.
At present, various platforms provide a gift-giving function. Taking a live-streaming platform as an example, a viewer can give a virtual gift to a streamer by paying the amount corresponding to the virtual gift, and the gift then displays its corresponding effect on the terminal, achieving the effect of presenting a gift. Currently, displaying a virtual gift with a complex dynamic effect requires a high-performance terminal equipped with a game engine such as Unity3D or Cocos2d in order to display the gift's advanced dynamic effect faithfully; a low-performance terminal, or one without a game engine, displays the gift's dynamic effect poorly and consumes more of the terminal's computing resources, which directly harms the user's experience of the gift and leaves the user with a bad impression.
Disclosure of Invention
In view of the above, the present invention provides a method and device for displaying the dynamic effect of a virtual gift, a storage medium, and electronic equipment, which display the special-effect animation in the form of a video, so that varied and complex dynamic effects of the virtual gift can be displayed to the user at low power consumption, thereby giving the user a good special-effect experience.
In order to achieve the purpose, the invention provides the following technical scheme:
a dynamic effect display method of a virtual gift comprises the following steps:
responding to the received gift display instruction, determining a to-be-displayed gift animation corresponding to the gift display instruction in preset gift animations, and creating a video player in a current display interface;
triggering the video player to sequentially read the image data of each frame of animation image in the gift animation to be displayed;
if the image data of the currently read animation image comprises rendering data of the animation image and preset interaction data corresponding to the animation image, rendering the rendered image corresponding to the rendering data in a rendering layer of the video player, and drawing the interactive image corresponding to the interaction data in an interaction layer of the video player;
stacking the rendering image and the interactive image to obtain a display image of the animation image, and displaying the display image in the display interface;
and when the display images corresponding to the animation images of each frame are displayed in the display interface in sequence, completing the dynamic effect display process of the gift animation to be displayed.
The above method, optionally, further includes:
and if the image data of the currently read animation image only contains rendering data of the animation image, rendering an rendering image corresponding to the rendering data in a rendering layer of the video player, and taking the rendering image as a display image corresponding to the animation image.
Optionally, the method for determining the to-be-displayed gift animation corresponding to the gift display instruction in each preset gift animation includes:
acquiring the target animation identifier in the gift display instruction;
comparing the target animation identifier with the animation identifier of each gift animation;
and determining the animation identifier consistent with the target animation identifier, and determining the gift animation corresponding to that animation identifier as the gift animation to be displayed.
Optionally, the method for rendering the rendered image corresponding to the rendering data in the rendering layer of the video player includes:
determining display parameters of the display interface;
constructing an image outline corresponding to the animation image on the rendering layer based on the image size parameter in the rendering data and the display parameter of the display interface;
rendering the image outline based on the transparency data and color data in the rendering data, to obtain the rendered image corresponding to the rendering data.
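The rendering flow above (scale the image outline to the display interface, then fill it from the transparency and color data) can be sketched as follows. This is an illustrative Python sketch, not the patent's implementation; all names (`render_frame`, the `size`, `color`, and `alpha` fields) are assumptions about the shape of the rendering data.

```python
def render_frame(rendering_data, display_params):
    """Scale the image outline to the display interface, then fill it
    from the frame's color and transparency data (illustrative only)."""
    src_w, src_h = rendering_data["size"]              # image size parameter
    # Construct the image outline: fit it to the display interface,
    # keeping the aspect ratio of the source frame.
    scale = min(display_params["width"] / src_w,
                display_params["height"] / src_h)
    out_w, out_h = int(src_w * scale), int(src_h * scale)

    # Combine per-pixel color data and transparency data into RGBA pixels.
    pixels = [(r, g, b, a)
              for (r, g, b), a in zip(rendering_data["color"],
                                      rendering_data["alpha"])]
    return {"width": out_w, "height": out_h, "pixels": pixels}
```

The transparency channel is what lets the player stay see-through outside the gift's outline, so the live video underneath remains visible.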
Optionally, the above method, where drawing an interactive image corresponding to the interactive data in an interaction layer of the video player, includes:
determining a drawing position in the interaction layer based on the position parameter in the interaction data and the display parameters of the display interface, and drawing an interaction frame at the drawing position according to the drawing parameters in the interaction data;
and determining display data based on the display attribute parameters in the interaction data, and adding the display data to the interaction frame to obtain the interactive image corresponding to the interaction data.
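The two steps above (position the interaction frame from the position parameter and the display parameters, then fill it with display data from the attribute parameters) might look like the following sketch. All field names (`pos`, `draw_params`, `display_attrs`) are assumptions, as is the choice of relative 0–1 position coordinates.

```python
def draw_interactive_image(interaction_data, display_params):
    """Place an interaction frame and fill it with display data
    (illustrative sketch under assumed data shapes)."""
    # Drawing position: a relative position parameter scaled to the
    # actual display interface dimensions.
    x = int(interaction_data["pos"][0] * display_params["width"])
    y = int(interaction_data["pos"][1] * display_params["height"])
    w, h = interaction_data["draw_params"]["size"]     # frame size to draw
    # Display data derived from the display attribute parameters,
    # e.g. a caption or button label shown inside the frame.
    content = interaction_data["display_attrs"].get("text", "")
    return {"x": x, "y": y, "width": w, "height": h, "content": content}
```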
The method described above, optionally, the process of creating the gift animation includes:
determining an initial gift animation;
coding the initial gift animation by using a preset video coding technology to obtain a coded video;
determining each frame of key animation image in the coded video, and acquiring interactive data corresponding to each frame of key animation image;
and writing each interactive data into the corresponding key animation image by using a preset data processing algorithm so as to complete the construction of the gift animation.
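The construction flow above (encode the initial animation, identify the key animation images, and write each piece of interaction data into its key frame) can be sketched as follows. The patent does not specify the video codec or the data-writing algorithm, so the frame and interaction structures here are assumptions for illustration only.

```python
def build_gift_animation(frames, interactions):
    """Attach interaction data to the key frames of an encoded video.

    `frames` is the encoded frame sequence; `interactions` maps a
    key-frame index to its interaction data, mirroring the write step.
    """
    animation = []
    for i, frame in enumerate(frames):
        record = {"index": i, "rendering_data": frame}
        if i in interactions:                 # this is a key animation image
            record["interaction_data"] = interactions[i]
        animation.append(record)
    return animation
```

A frame without an entry in `interactions` carries only rendering data, which is exactly the fallback case handled above where the rendered image alone becomes the display image.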
Optionally, when the video player is used to read the image data of each frame of animation image of the gift animation to be displayed, the method further includes:
and synchronously reading audio data corresponding to each frame of animation image of the gift animation to be displayed, and playing the audio data.
A dynamic effect display device of a virtual gift, comprising:
the response unit is used for responding to the received gift display instruction, determining the gift animation to be displayed corresponding to the gift display instruction in each preset gift animation, and creating a video player in the current display interface;
the triggering unit is used for triggering the video player to sequentially read the image data of each frame of animation image in the gift animation to be displayed;
the rendering unit is used for, if the image data of the currently read animation image contains the rendering data of the animation image and preset interaction data corresponding to the animation image, rendering the rendered image corresponding to the rendering data in a rendering layer of the video player and drawing the interactive image corresponding to the interaction data in an interaction layer of the video player;
the stacking unit is used for stacking the rendering image and the interactive image to obtain a display image of the animation image and displaying the display image in the display interface;
and the display unit is used for completing the dynamic effect display process of the gift animation to be displayed when the display images corresponding to the animation images of each frame are displayed in the display interface in sequence.
The above apparatus, optionally, further comprises:
and the rendering unit is used for rendering a rendering image corresponding to the rendering data in a rendering layer of the video player if the image data of the currently read animation image only contains the rendering data of the animation image, and taking the rendering image as a display image corresponding to the animation image.
The above apparatus, optionally, the response unit includes:
the obtaining subunit is used for obtaining a target animation identifier in the gift display instruction;
the comparison subunit is used for comparing the target animation identification with the animation identification of each gift animation;
and the first determining subunit is used for determining the animation identification consistent with the target animation identification, and determining the gift animation corresponding to the animation identification as the gift animation to be displayed.
Optionally, in the above apparatus, the drawing unit or the rendering unit, when determining the gift animation to be displayed corresponding to the gift display instruction in each preset gift animation, is configured to:
acquire the target animation identifier in the gift display instruction;
compare the target animation identifier with the animation identifier of each gift animation;
and determine the animation identifier consistent with the target animation identifier, and determine the gift animation corresponding to that animation identifier as the gift animation to be displayed.
The above apparatus, optionally, the drawing unit includes:
a second determining subunit, configured to determine, based on the position parameter in the interaction data and the display parameter of the display interface, a drawing position in the interaction layer, and draw an interaction frame at the drawing position according to the drawing parameter in the interaction data;
and the adding subunit is used for determining display data based on the display attribute parameters in the interaction data and adding the display data to the interaction frame to obtain the interactive image corresponding to the interaction data.
The above apparatus, optionally, further comprises:
a first determination unit for determining an initial gift animation;
the encoding unit is used for encoding the initial gift animation by using a preset video encoding technology to obtain an encoded video;
the second determining unit is used for determining each frame of key animation image in the coded video and acquiring interactive data corresponding to each frame of key animation image;
and the writing unit is used for writing each interactive data into the corresponding key animation image by using a preset data processing algorithm so as to complete the construction of the gift animation.
The above apparatus, optionally, further comprises:
and the playing unit is used for synchronously reading the audio data corresponding to each frame of animation image of the gift animation to be displayed and playing the audio data.
A storage medium comprises a stored program, wherein when the program runs, a device where the storage medium is located is controlled to execute the dynamic effect display method of the virtual gift.
An electronic device includes a memory and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by one or more processors to perform the above dynamic effect display method of the virtual gift.
Compared with the prior art, the invention has the following advantages:
the invention provides a dynamic effect display method of a virtual gift, which comprises the following steps: responding to the received gift display instruction, determining a to-be-displayed gift animation corresponding to the gift display instruction in preset gift animations, and creating a video player in a current display interface; triggering the video player to sequentially read the image data of each frame of animation image in the gift animation to be displayed; if the currently read image data of the animation image comprises rendering data of the animation image and preset interaction data corresponding to the animation image, rendering the rendering image corresponding to the rendering data in a rendering layer of the video player, and drawing the interaction image corresponding to the interaction data in an interaction layer of the video player; stacking the rendering image and the interactive image to obtain a display image of the animation image, and displaying the display image in the display interface; and when the display images corresponding to the animation images of each frame are displayed in the display interface in sequence, completing the dynamic effect display process of the gift animation to be displayed. The invention shows the special effect of the virtual gift in the form of the showing animation, therefore, the to-be-shown gift animation corresponding to the virtual gift is shown under the condition of not using a game engine to finish the dynamic effect showing of the virtual gift, the energy consumption of the process to the terminal is less, a large amount of CPU is not occupied, the complex and high-level dynamic effect of the gift can be perfectly shown, and the good special effect using experience of a user is ensured.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present invention; for those skilled in the art, other drawings can be obtained from the provided drawings without creative effort.
Fig. 1 is a flowchart of a method for displaying a dynamic effect of a virtual gift according to an embodiment of the present invention;
fig. 2 is a flowchart of another method of displaying the dynamic effect of the virtual gift according to the embodiment of the present invention;
fig. 3 is a flowchart of another method of displaying the dynamic effect of the virtual gift according to the embodiment of the present invention;
fig. 4 is a diagram illustrating an example of a scenario of a method for displaying a dynamic effect of a virtual gift according to an embodiment of the present invention;
fig. 5 is a diagram illustrating a special effect of a method for displaying a dynamic effect of a virtual gift according to an embodiment of the present invention;
fig. 6 is a diagram illustrating another effect of the method for displaying the dynamic effect of the virtual gift according to the embodiment of the present invention;
fig. 7 is a schematic structural diagram of a dynamic effect display device for virtual gifts according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In this application, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The invention is operational with numerous general purpose or special purpose computing device environments or configurations. For example: personal computers, server computers, hand-held or portable devices, tablet-type devices, multi-processor apparatus, distributed computing environments that include any of the above devices or equipment, and the like.
The embodiment of the invention provides a dynamic effect display method of a virtual gift, which can be applied to various live-streaming systems or chat systems, as well as to various computer terminals or intelligent terminals; the execution subject of the method can be a processor or server of the computer terminal or intelligent terminal. A flowchart of the method is shown in Fig. 1, and the method specifically includes the following steps:
s101, responding to the received gift display instruction, determining the gift animation to be displayed corresponding to the gift display instruction in each preset gift animation, and creating a video player in the current display interface.
In the method provided by the embodiment of the invention, the intelligent mobile terminal responds upon receiving a gift display instruction. It should be noted that the instruction may be sent by the central server of the live-streaming system, generated by the intelligent mobile terminal itself, or sent by another intelligent mobile terminal; the intelligent mobile terminal may be an intelligent device such as a mobile terminal, an intelligent terminal, or a personal computer. Next, the gift animation to be displayed corresponding to the gift display instruction is determined among the preset gift animations, and a video player is created on the current display interface. The display interface is the display interface of the intelligent mobile terminal. The size of the created video player can be equal to or smaller than the size of the display interface; preferably, the video player created in this scheme completely covers the display interface. The video player is a transparent, specially customized player used specifically for playing gift animations. The gift animation to be displayed consists of multiple frames of animation images, arranged according to their playing order; each gift animation is a special-effect animation video stored locally on the mobile terminal.
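Step S101's lookup of the gift animation to be displayed can be sketched as follows; this is an illustrative Python sketch under assumed data shapes (the `animation_id` and `id` fields are not named in the patent), not the patent's implementation.

```python
def select_gift_animation(instruction, preset_animations):
    """Pick the preset gift animation whose identifier matches the
    target identifier carried in the gift display instruction."""
    target = instruction["animation_id"]
    for animation in preset_animations:
        if animation["id"] == target:
            return animation          # the gift animation to be displayed
    return None                       # no preset animation matches
```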
And S102, triggering the video player to sequentially read the image data of each frame of animation image in the gift animation to be displayed.
In the method provided by the embodiment of the invention, after the gift animation to be displayed corresponding to the gift display instruction is acquired, it is passed to the video player to trigger the player to sequentially read each frame of animation image in the gift animation, and the currently read animation image is parsed to obtain the image data of each frame. It should be noted that the frequency at which the video player reads animation images may be 24 frames/second; therefore the video player displays at most 24 frames of display images per second.
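The paced frame-reading loop described above can be sketched as a generator that never yields faster than the stated 24 frames/second. The clock and sleep hooks are an assumption added here so the pacing is testable; the patent only specifies the frame rate.

```python
import time

def read_frames(animation, fps=24, clock=time.monotonic, sleep=time.sleep):
    """Yield animation frames no faster than `fps` frames per second
    (24 frames/second in the embodiment's description)."""
    interval = 1.0 / fps
    next_due = clock()
    for frame in animation:
        delay = next_due - clock()
        if delay > 0:
            sleep(delay)              # pace playback to the frame rate
        next_due += interval
        yield frame
```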
S103, if the currently read image data of the animation image comprises rendering data of the animation image and preset interaction data corresponding to the animation image, rendering the rendering image corresponding to the rendering data in a rendering layer of the video player, and drawing the interaction image corresponding to the interaction data in an interaction layer of the video player.
In the method provided by the embodiment of the invention, it is determined whether the image data of the currently read animation image contains rendering data of the animation image and preset interaction data corresponding to the animation image. If it contains both, the rendered image corresponding to the rendering data is rendered in the rendering layer of the video player, where the rendered image corresponds to the animation image and is the picture carrying the animation image's special effect; and the interactive image corresponding to the interaction data is drawn in the interaction layer of the video player, where the interactive image also corresponds to the animation image and is an image that increases interactivity.
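The branching in step S103 (and its rendering-data-only fallback) can be sketched as a small dispatch over the fields present in the frame's image data. The field and layer names are assumptions for illustration.

```python
def process_frame(image_data):
    """Decide which layers of the video player a frame touches:
    rendering data goes to the rendering layer, interaction data
    (present only on key frames) goes to the interaction layer."""
    layers = {}
    if "rendering_data" in image_data:
        layers["render_layer"] = ("rendered", image_data["rendering_data"])
    if "interaction_data" in image_data:
        layers["interaction_layer"] = ("drawn", image_data["interaction_data"])
    return layers
```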
S104, stacking the rendering image and the interactive image to obtain a display image of the animation image, and displaying the display image in the display interface.
In the method provided by the embodiment of the invention, the rendered image and the interactive image are stacked, so that the interactive image and the rendered image are merged together to obtain the display image corresponding to the animation image; the display image is used to present the special effect of the animation image, and it is displayed in the display interface to complete the display processing of the animation image.
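The stacking in step S104 amounts to compositing the interaction layer over the rendering layer. The patent does not name a compositing rule; the sketch below assumes standard source-over alpha blending on 0–255 RGBA pixels.

```python
def over(top, bottom):
    """Source-over compositing of two RGBA pixels (0-255 channels)."""
    tr, tg, tb, ta = top
    br, bg, bb, ba = bottom
    a = ta / 255.0
    return (round(tr * a + br * (1 - a)),
            round(tg * a + bg * (1 - a)),
            round(tb * a + bb * (1 - a)),
            max(ta, ba))

def stack_images(rendered, interactive):
    """Overlay the interactive image on the rendered image pixel by
    pixel to obtain the display image for one animation frame."""
    return [over(t, b) for t, b in zip(interactive, rendered)]
```

Where the interactive layer is fully transparent, the rendered special-effect pixel shows through unchanged, which is why the two layers can be produced independently and stacked at the end.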
And S105, when the display images corresponding to the animation images of the frames are displayed in the display interface in sequence, completing the dynamic effect display process of the gift animation to be displayed.
In the method provided by the embodiment of the invention, when the display images corresponding to the frames of animation images of the to-be-displayed gift animation are sequentially displayed in the display interface, the dynamic effect display process of the to-be-displayed gift animation is completed, so that the special effect of the to-be-displayed gift animation can be displayed in the display interface, and thus, the dynamic effect of the virtual gift corresponding to the to-be-displayed gift animation can be displayed in the display interface of the mobile terminal.
Further, if it is determined that the image data of the currently read animation image contains only rendering data of the animation image, the rendered image corresponding to the rendering data is rendered in the rendering layer of the video player, and the rendered image is taken as the display image corresponding to the animation image.
In the method provided by the embodiment of the invention, in response to a received gift display instruction, the gift animation to be displayed corresponding to the gift display instruction is determined among the preset gift animations, and a video player is created on the current display interface; the video player is triggered to sequentially read the image data of each frame of animation image in the gift animation to be displayed; if the image data of the currently read animation image comprises rendering data of the animation image and preset interaction data corresponding to the animation image, the rendered image corresponding to the rendering data is rendered in a rendering layer of the video player, and the interactive image corresponding to the interaction data is drawn in an interaction layer of the video player; the rendered image and the interactive image are stacked to obtain the display image of the animation image, and the display image is displayed in the display interface; and when the display images corresponding to each frame of animation image have been displayed in the display interface in sequence, the dynamic effect display of the gift animation to be displayed is complete.
By applying this method, the video player is used to sequentially read the image data of each frame of animation image in the gift animation to be displayed, the image data is processed to obtain the display image corresponding to the animation image, and when the display images corresponding to each frame of animation image have been displayed in sequence in the display interface, the special-effect display of the gift animation is complete. This process does not require the mobile terminal to carry a game engine such as Unity3D or Cocos2d, and realizes the dynamic effect display of the virtual gift in the form of an animation video; as a result, displaying the dynamic effect of the virtual gift occupies few CPU resources of the mobile terminal and consumes little energy, while more complex and advanced dynamic effects of the virtual gift can still be displayed, providing the user with a better experience and a higher-quality dynamic effect display service.
In the method provided by the embodiment of the present invention, in response to a received gift display instruction, the flow of determining the to-be-displayed gift animation corresponding to the gift display instruction among the preset gift animations is shown in fig. 2 and described as follows:
S201, obtaining a target animation identifier in the gift display instruction.
In the method provided by the embodiment of the invention, the gift display instruction is parsed to obtain the target animation identifier it carries, wherein the target animation identifier may be a number, or may be composed of digits, letters, and symbols.
S202, comparing the target animation identifier with the animation identifier of each gift animation.
In the method provided by the embodiment of the invention, the obtained target animation identifier is used to traverse the animation identifier of each preset animation video, so as to compare the target identifier with the animation identifier of each gift animation. The animation identifier of a gift animation is unique and may be composed of digits, letters, and symbols; besides serving as the unique identifier of the gift animation, it can also encode the grade, type, serial number, online time, and the like of the gift animation. For example, for the animation ID vip_money_58_20190513: vip indicates a member-level gift animation, money indicates a paid gift animation, 58 is the serial number of the gift animation, and 20190513 indicates that the gift animation went online on May 13, 2019. As another example, for the animation ID unavip_free_79_20190513: unavip indicates a non-member-level gift animation, free indicates a free gift animation, 79 is the serial number of the gift animation, and 20190513 indicates that the gift animation went online on May 13, 2019. Further examples are omitted here.
S203, determining the animation identifier consistent with the target animation identifier, and determining the gift animation corresponding to that animation identifier as the gift animation to be displayed.
In the method provided by the embodiment of the invention, the target animation identifier is compared with the animation identifier of each gift animation to find the animation identifier that is the same as the target animation identifier, and the gift animation corresponding to that animation identifier is determined as the gift animation to be displayed.
In the method provided by the embodiment of the invention, the gift display instruction is parsed and the gift animation to be displayed is determined from the target animation identifier in the instruction, so that the gift animation to be displayed can be determined quickly and accurately and shown promptly on the display interface of the intelligent terminal or mobile terminal. The dynamic effect and special effect of the virtual gift corresponding to the gift animation to be displayed are thereby presented to the user, providing a good dynamic-effect experience.
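Steps S201 to S203 amount to a lookup by identifier. The following sketch is hypothetical: the instruction format (`animation_id` key) and the animation table are assumptions for illustration; the example IDs follow the format described above.

```python
# Assumed table of preset gift animations, keyed by animation identifier.
GIFT_ANIMATIONS = {
    "vip_money_58_20190513": "diamond_gift.webm",
    "unavip_free_79_20190513": "flower_gift.webm",
}

def find_gift_animation(gift_display_instruction):
    # S201: parse the target animation identifier out of the instruction
    target_id = gift_display_instruction["animation_id"]
    # S202: compare it against the identifier of each preset gift animation
    for anim_id, animation in GIFT_ANIMATIONS.items():
        # S203: the animation whose identifier matches is the one to display
        if anim_id == target_id:
            return animation
    return None
```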
In the method provided by the embodiment of the present invention, after the gift animation to be displayed is determined, each frame of animation image in it needs to be read to obtain its image data. The specific flow of rendering, on the rendering layer of the video player, the rendering image corresponding to the rendering data in the image data is shown in fig. 3 and described as follows:
S301, determining display parameters of the display interface.
In the method provided by the embodiment of the invention, the display parameters of the display interface of the intelligent terminal are determined, wherein the display parameters can be determined according to the equipment information of the intelligent terminal, and the display parameters include but are not limited to the size of the display interface and the resolution information of the display interface.
S302, constructing an image outline corresponding to the animation image on the rendering layer based on the image size parameter in the rendering data and the display parameter of the display interface.
In the method provided by the embodiment of the present invention, the rendering data includes an image size parameter, which covers the shape, size, position, resolution, and similar parameters of the graphic to be constructed. Display interfaces differ between intelligent terminals, so to present the special effect of the gift animation to be displayed properly, the effect can be enlarged or reduced according to the display parameters of the display interface so that the picture corresponding to the animation image fits the interface. Taking resolution as an example: if the resolution parameter in the rendering data is 720x1280 and the resolution of the terminal's display interface is 1080x2240, then when the image is built on the rendering layer of the video player, the position to be built is determined first and the image is constructed there; scaling by the display width finally yields an image with a resolution of 1080x1920. In this way, an image outline corresponding to the animation image, such as the outline of a dog, a diamond, or a flower, is constructed on the rendering layer of the video player according to the image size parameter in the rendering data and the display parameters of the display interface.
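The width-based scaling in the resolution example (a 720x1280 frame on a 1080x2240 display yielding 1080x1920) can be sketched as below; the function name is illustrative.

```python
def scaled_size(frame_w, frame_h, display_w):
    # scale factor is taken from the display width; aspect ratio is preserved
    scale = display_w / frame_w
    return int(frame_w * scale), int(frame_h * scale)
```

For the values in the example, `scaled_size(720, 1280, 1080)` returns `(1080, 1920)`.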
S303, rendering the image outline based on the transparency data and the color data in the rendering data to obtain a rendering image corresponding to the animation image.
In the method provided by the embodiment of the invention, the transparency data and the color data in the rendering data are determined, and the image outline constructed on the rendering layer is filled with color. The transparency data and the color data are pixel-level color values: the color data is the RGB (or ARGB) component data of each pixel to be rendered and represents its color, while the transparency data contains the transparency value of each pixel and represents its transparency. For example, if the RGB component data of pixel A is (255, 0, 0), pixel A is red; if its transparency value is 255, pixel A is opaque. Each pixel inside the image outline is rendered according to its RGB components in the color data and its transparency value in the transparency data, coloring the outline and producing the rendering image corresponding to the animation image. Preferably, every pixel of the rendering layer outside the image outline is rendered as a transparent, colorless pixel.
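Step S303 can be illustrated as follows, under assumed data layouts (the patent does not specify them): pixels inside the outline take their ARGB values from the rendering data, and pixels outside become transparent and colorless.

```python
def render_contour(width, height, contour, argb):
    """contour: set of (x, y) points inside the image outline;
    argb: (x, y) -> (alpha, r, g, b), from the transparency and color data."""
    image = {}
    for y in range(height):
        for x in range(width):
            if (x, y) in contour:
                image[(x, y)] = argb[(x, y)]   # color value from rendering data
            else:
                image[(x, y)] = (0, 0, 0, 0)   # outside the outline: transparent
    return image
```

With a single-pixel outline, `render_contour(2, 1, {(0, 0)}, {(0, 0): (255, 255, 0, 0)})` produces an opaque red pixel at (0, 0), like pixel A in the text, and a transparent pixel at (1, 0).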
In the method provided by the embodiment of the invention, the rendering image corresponding to the animation image is rendered on the rendering layer of the video player, so that the gift animation to be displayed can be displayed on the display interface, and the special effect of the virtual gift corresponding to the gift animation to be displayed can be displayed on the display interface.
In the method provided by the embodiment of the present invention, when interactive data is read from an animation image, the interactive data is drawn on an interaction layer of the video player, so that an interactive image can be obtained, and a process of obtaining the interactive image on the interaction layer based on the interactive data is as follows:
determining a drawing position in the interaction layer based on the position parameters in the interaction data and the display parameters of the display interface, and drawing an interaction picture frame at the drawing position according to the drawing parameters in the interaction data;
and determining display data based on the display attribute parameters in the interactive data, and adding the display data to the interactive drawing frame to obtain an interactive image corresponding to the interactive data.
In the method provided by the embodiment of the present invention, the display parameters here are as described for fig. 3 and are not repeated. A position parameter is obtained from the interaction data, and the drawing position on the interaction layer of the video player is determined based on the position parameter and the display parameters of the display interface; the position parameter in the interaction data can be expressed as coordinates. Once the drawing position on the interaction layer is determined, an interaction frame is drawn at that position according to the drawing parameters in the interaction data; the interaction frame may be a circle, a square, a fan, or another shape. After the interaction frame is drawn, the display data is determined based on the display attribute parameter in the interaction data; the display data is obtained from the gift display instruction, and the display attribute parameter indicates the type of data to be displayed, such as a picture, text, or rich text, with each type having its own display attribute parameters. The obtained display data is added to the interaction frame using OpenGL or a similar technology to obtain the interactive image. By drawing the interactive image on the interaction layer, different display data can be shown in real time; the display data forms part of the special-effect material of the animation to be displayed, and adding display data to the drawn interaction frame makes it possible to replace that material.
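The interaction-layer step above can be sketched as follows. The field names echo the interaction-data fields described later in this document, but the function and dictionary shapes are assumptions for illustration.

```python
# Assumed mapping of the display attribute parameter to a display-data type.
TYPE_NAMES = {1: "picture", 2: "text", 3: "rich text"}

def draw_interaction(interact_data, gift_display_instruction):
    frame = {
        # drawing position from the position parameters (coordinates)
        "position": (interact_data["left"], interact_data["top"],
                     interact_data["right"], interact_data["bottom"]),
        # the interaction frame may be a circle, square, fan, etc.
        "shape": interact_data.get("shape", "square"),
        # display attribute parameter: the type of data to be displayed
        "content_type": TYPE_NAMES[interact_data["Type"]],
        # the display data itself comes from the gift display instruction
        "content": gift_display_instruction["display_data"],
    }
    return frame
```

Because `content` is taken from the instruction rather than baked into the animation, the special-effect material can be swapped without re-encoding the video.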
In the method provided by the embodiment of the invention, based on the interaction data read from the animation image, the corresponding interactive image can be drawn on the interaction layer, so that interactive content is added when the special effect of the animation video is displayed. The displayed animation video therefore has better interactivity, which in turn increases interactivity among users. Moreover, different interactive images can be drawn on the interaction layer according to different interaction instructions, so the interactive material can be replaced in real time and the displayed special effect is more personalized.
In the method provided by the embodiment of the invention, after a gift display instruction is received, when the video player is used for reading the image data of each frame of animation image of the gift animation to be displayed, the audio data corresponding to each frame of animation image of the gift animation to be displayed can be synchronously read and played, so that the corresponding sound effect can be increased when the gift animation to be displayed is played.
In the method provided by the embodiment of the present invention, the gift animation in the scheme is a pre-created gift animation, and the process of creating the gift animation is as follows:
determining an initial gift animation;
coding the initial gift animation by using a preset video coding technology to obtain a coded video;
determining each frame of key animation image in the coded video, and acquiring interactive data corresponding to each frame of key animation image;
and writing each interactive data into the corresponding key animation image by using a preset data processing algorithm so as to complete the construction of the gift animation.
In the method provided by the embodiment of the invention, an initial gift animation is determined, where the initial gift animation is an animation video supporting transparency, made by a designer in a tool such as Animate CC or After Effects, for example a video in webm, avi, mov, or another format. The initial gift animation is encoded using a preset video coding technology; the encoded video obtained by processing the initial gift animation contains three types of frames, namely intra-coded (I) frames, bi-directionally predicted (B) frames, and predicted (P) frames, i.e., I frames, B frames, and P frames. An I frame can serve as a key frame, i.e., an I frame is a key video image. It should be noted that a format supporting transparency is chosen for the encoded video obtained after processing the initial animation video; preferably, the encoded video supports transparency so that, when the gift animation is displayed, the background of the display interface remains visible wherever the interface carries no special-effect animation, which visually improves the user's experience. Each frame of key animation image in the encoded video is determined, together with the interaction data corresponding to each key animation image, where the interaction data may be blank. Each piece of interaction data is written into its corresponding key animation image using a preset data processing algorithm to complete the construction of the gift animation; when the interaction data is blank, no data is written into the key video image.
It should be noted that the data processing algorithm may be a digital watermarking algorithm, a spatial-domain algorithm, a Patchwork algorithm, or the like. With such an algorithm, the interaction data can be written into the key video image, and a digital watermarking algorithm in particular allows the interaction-layer data to be stored and retrieved in real time. It should also be noted that the content of the interaction data may be as follows:
(Figure BDA0002701184050000141: an example of the interaction-data structure, shown as an image in the original publication; its fields are described below.)
endTime represents the end time, in milliseconds, of the display of the image drawn from the interaction data, where the time is measured against the playing time of the gift animation; for example, endTime = 8000 ms means that the image drawn from the interaction data stops being displayed when the gift animation has played for 8000 ms;
Type is the display attribute parameter of the display data, indicating its type, e.g. 1 denotes a picture, 2 denotes text, and 3 denotes rich text;
action represents a show action, such as rotate, move left, move right, etc.;
left denotes the upper left corner x coordinate;
top represents the top left corner y coordinate;
right represents the lower right corner x coordinate;
bottom denotes the lower right corner y coordinate;
It should be noted that further fields may be added to the interaction data according to specific requirements, for example a field storing the shape of the frame that holds the display data, such as a circle, a circle with sawtooth edges, or a square with floral edges.
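Putting the fields above together, the interaction data might look like the following. This is a hedged reconstruction: the original publication shows the structure only as an image, so the exact key names, value types, and nesting in the patent may differ.

```python
interaction_data = {
    "endTime": 8000,     # stop displaying at 8000 ms of animation playback
    "Type": 1,           # display-data type: 1 picture, 2 text, 3 rich text
    "action": "rotate",  # display action: rotate / move left / move right ...
    "left": 100,         # upper-left corner x coordinate
    "top": 200,          # upper-left corner y coordinate
    "right": 300,        # lower-right corner x coordinate
    "bottom": 400,       # lower-right corner y coordinate
}
```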
In the method provided by the embodiment of the invention, after the initial gift animation is encoded, the encoded video is obtained; each frame of key animation image in the encoded video is determined, the interaction data of each key animation image is determined, and that interaction data is written into the key animation image using a preset data processing algorithm, yielding the gift animation. Because the gift animation is obtained by compression encoding, it occupies little memory and has a high compression ratio; the process of constructing it is not complex; the video packaging format offers a high compression ratio and efficient encoding and decoding; and problems such as memory fluctuation and overflow caused by frame animation do not arise. At the same time, the drawbacks of conventional 3D animation, such as high production cost and high rendering-performance consumption, are well avoided. Complex 3D special effects can be realized through a simple gift animation with modest performance requirements on the mobile terminal that plays it, so the method is more practical and universal and can provide a better special-effect display for more users.
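The digital-watermark writing named earlier in this section (storing interaction data inside a key frame's pixels) can be illustrated with a generic least-significant-bit (LSB) sketch. This is a textbook spatial-domain technique, not the patent's actual algorithm, and the data layout (a flat list of pixel values) is an assumption.

```python
def embed(pixels, payload):
    """Hide payload bytes in the lowest bit of successive pixel values."""
    bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
    assert len(bits) <= len(pixels), "payload too large for this frame"
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit   # overwrite only the least significant bit
    return out

def extract(pixels, n):
    """Recover n payload bytes from the pixel LSBs."""
    bits = [p & 1 for p in pixels[: n * 8]]
    return bytes(sum(bits[i * 8 + j] << j for j in range(8)) for i in range(n))
```

Because only the lowest bit of each pixel changes, the carrier frame is visually unchanged, which is why watermarking is suited to storing interaction data inside key video images.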
In the method provided by the embodiment of the present invention, to further explain the application of the invention in practice, a scenario example is described here, as shown in fig. 4 and explained as follows:
As shown in fig. 4, user 1, user 2, and user 3 watch the live broadcast of user 4. During the live broadcast, user 1 presents a diamond gift to user 4; that is, the terminal of user 1 generates a gift display instruction and sends it to a server, and the server, after receiving it, forwards the gift display instruction to the terminals of user 2, user 3, and user 4. The terminals of user 1, user 2, user 3, and user 4 each determine the gift animation to be displayed among the pre-created gift animations according to the gift display instruction and create a video player on the current display interface. The video player reads the image data of each frame of the gift animation to be displayed. If the image data contains both rendering data and interaction data, the image corresponding to the rendering data is rendered on the rendering layer of the video player, the interactive image corresponding to the interaction data is drawn on the interaction layer, the rendering image and the interactive image are stacked to obtain the display image of the animation image, and the display image is shown on the display interface. If the image data contains only rendering data, the image corresponding to the rendering data is rendered on the rendering layer, determined to be the display image, and shown on the display interface. The effect of the special effect of the gift animation corresponding to the diamond gift given by user 1 to user 4 is shown in figs. 5 and 6; the special effect of the virtual gift given by user 1 to user 4 is thus displayed on the display interfaces of user 2, user 3, and user 4, and the display interface of user 1 also displays the special effect of the virtual gift given to user 4. Further, the transparency of the diamond gift can be set; when the transparency is high, the user can see the content of the broadcaster's room through the diamond gift, so watching the live broadcast is not disturbed. In addition, when the method of the invention displays the gift special effect, the live-broadcast room remains directly visible at every position of the display interface that carries no special effect. The method therefore gives the user a visually better experience when the gift special effect is displayed.
By applying the method and the device, the gift animation can be displayed in video form. By constructing the gift animation and playing the animation video with the corresponding video player, highly complex special effects, which would otherwise require high-order animations supporting particle systems, Gaussian blur, three-dimensional space, illumination, physical collision, and the like, can be displayed, while the process of displaying the gift animation on the intelligent terminal has low CPU usage, low memory consumption, and low power consumption, providing the user with a better special-effect display.
Corresponding to the method described in fig. 1, an embodiment of the present invention further provides a dynamic-effect display apparatus for a virtual gift, used to implement the method in fig. 1. The apparatus may be implemented in a computer terminal or in various mobile devices; its schematic structural diagram is shown in fig. 7, and it specifically includes:
a response unit 701, configured to determine, in response to a received gift display instruction, a to-be-displayed gift animation corresponding to the gift display instruction in preset gift animations, and create a video player in a current display interface;
a triggering unit 702, configured to trigger the video player to sequentially read image data of each frame of animation image in the gift animation to be displayed;
a drawing unit 703, configured to, if image data of a currently read animation image includes rendering data of the animation image and preset interaction data corresponding to the animation image, render a rendering image corresponding to the rendering data in a rendering layer of the video player, and draw an interaction image corresponding to the interaction data in an interaction layer of the video player;
a stacking unit 704, configured to stack the rendered image and the interactive image, obtain a display image of the animation image, and display the display image in the display interface;
the display unit 705 is configured to complete a dynamic effect display process of the to-be-displayed gift animation when display images corresponding to the frames of animation images are sequentially displayed in the display interface.
In the method provided by the embodiment of the invention, in response to a received gift display instruction, a to-be-displayed gift animation corresponding to the gift display instruction is determined in each preset gift animation, and a video player is established on a current display interface; triggering a video player to sequentially read the image data of each frame of animation image in the gift animation to be displayed; if the currently read image data of the animation image comprises rendering data of the animation image and preset interaction data corresponding to the animation image, rendering the rendering image corresponding to the rendering data in a rendering layer of the video player, and drawing the interaction image corresponding to the interaction data in an interaction layer of the video player; stacking the rendering image and the interactive image to obtain a display image of the animation image, and displaying the display image in the display interface; and when the display images corresponding to the animation images of each frame are displayed in the display interface in sequence, completing the dynamic effect display process of the gift animation to be displayed. 
By applying the method and the device, the video player sequentially reads the image data of each frame of animation image in the gift animation to be displayed, and the image data is processed to obtain the display image corresponding to each animation image; when the display images corresponding to the frames are sequentially shown in the display interface, the special-effect display of the gift animation to be displayed is completed. This process does not require carrying a game engine such as Unity3D or Cocos2D on the mobile terminal, and the dynamic effect of the virtual gift is realized in the form of an animation video. As a result, displaying the dynamic effect of the virtual gift occupies few CPU resources of the mobile terminal, keeps its energy consumption low, and allows more complex, higher-level dynamic effects of the virtual gift to be displayed, providing the user with a better experience and a higher-quality dynamic-effect display service.
Based on the above scheme, the apparatus provided in the embodiment of the present invention may further be configured to:
and the rendering unit is used for rendering a rendering image corresponding to the rendering data in a rendering layer of the video player if the image data of the currently read animation image only contains the rendering data of the animation image, and taking the rendering image as a display image corresponding to the animation image.
Based on the above scheme, in the apparatus provided in the embodiment of the present invention, the response unit 701 may include:
the obtaining subunit is used for obtaining a target animation identifier in the gift display instruction;
the comparison subunit is used for comparing the target animation identification with the animation identification of each gift animation;
and the first determining subunit is used for determining the animation identification consistent with the target animation identification, and determining the gift animation corresponding to the animation identification as the gift animation to be displayed.
Based on the above solution, in the apparatus provided in the embodiment of the present invention, the drawing unit 703 or the rendering unit, when determining the to-be-displayed gift animation corresponding to the gift display instruction among the preset gift animations, is configured to:
acquiring a target animation mark in the gift display instruction;
comparing the target animation identification with the animation identification of each gift animation;
and determining an animation identity consistent with the target animation identity, and determining the gift animation corresponding to the animation identity as the gift animation to be displayed.
Based on the foregoing solution, in the apparatus provided in the embodiment of the present invention, the drawing unit 703 may include:
a second determining subunit, configured to determine, based on the position parameter in the interaction data and the display parameter of the display interface, a drawing position in the interaction layer, and draw an interaction frame at the drawing position according to the drawing parameter in the interaction data;
and the adding subunit is used for determining display data based on the display attribute parameters in the interactive data and adding the display data to the interactive drawing frame to obtain an interactive image corresponding to the interactive data.
Based on the above scheme, the apparatus provided in the embodiment of the present invention may further include:
a first determination unit for determining an initial gift animation;
the encoding unit is used for encoding the initial gift animation by using a preset video encoding technology to obtain an encoded video;
the second determining unit is used for determining each frame of key animation image in the coded video and acquiring interactive data corresponding to each frame of key animation image;
and the writing unit is used for writing each interactive data into the corresponding key animation image by using a preset data processing algorithm so as to complete the construction of the gift animation.
Based on the above scheme, the apparatus provided in the embodiment of the present invention may further include:
and the playing unit is used for synchronously reading the audio data corresponding to each frame of animation image of the gift animation to be displayed and playing the audio data.
The embodiment of the invention also provides a storage medium comprising a stored program, wherein when the program runs, the device on which the storage medium is located is controlled to execute the following dynamic-effect display method for a virtual gift:
responding to the received gift display instruction, determining a to-be-displayed gift animation corresponding to the gift display instruction in preset gift animations, and creating a video player in a current display interface;
triggering the video player to sequentially read the image data of each frame of animation image in the gift animation to be displayed;
if the currently read image data of the animation image comprises rendering data of the animation image and preset interaction data corresponding to the animation image, rendering the rendering image corresponding to the rendering data in a rendering layer of the video player, and drawing the interaction image corresponding to the interaction data in an interaction layer of the video player;
stacking the rendering image and the interactive image to obtain a display image of the animation image, and displaying the display image in the display interface;
and when the display images corresponding to the animation images of each frame are displayed in the display interface in sequence, completing the dynamic effect display process of the gift animation to be displayed.
An electronic device is provided in an embodiment of the present invention, and the structural diagram thereof is shown in fig. 8, and specifically includes a memory 801 and one or more programs 802, where the one or more programs 802 are stored in the memory 801 and configured to be executed by the one or more processors 803 to perform the following operations:
responding to the received gift display instruction, determining a to-be-displayed gift animation corresponding to the gift display instruction in preset gift animations, and creating a video player in a current display interface;
triggering the video player to sequentially read the image data of each frame of animation image in the gift animation to be displayed;
if the currently read image data of the animation image comprises rendering data of the animation image and preset interaction data corresponding to the animation image, rendering the rendering image corresponding to the rendering data in a rendering layer of the video player, and drawing the interaction image corresponding to the interaction data in an interaction layer of the video player;
stacking the rendering image and the interactive image to obtain a display image of the animation image, and displaying the display image in the display interface;
and when the display images corresponding to the animation images of each frame are displayed in the display interface in sequence, completing the dynamic effect display process of the gift animation to be displayed.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, the system or system embodiments are substantially similar to the method embodiments and therefore are described in a relatively simple manner, and reference may be made to some of the descriptions of the method embodiments for related points. The above-described system and system embodiments are only illustrative, wherein the units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Those skilled in the art will further appreciate that the various illustrative units and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or a combination of both. To clearly illustrate this interchangeability of hardware and software, the various illustrative components and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A dynamic effect display method of a virtual gift is characterized by comprising the following steps:
responding to the received gift display instruction, determining a to-be-displayed gift animation corresponding to the gift display instruction in preset gift animations, and creating a video player in a current display interface;
triggering the video player to sequentially read the image data of each frame of animation image in the gift animation to be displayed;
if the currently read image data of the animation image comprises rendering data of the animation image and preset interaction data corresponding to the animation image, rendering a rendered image corresponding to the rendering data in a rendering layer of the video player, and drawing an interactive image corresponding to the interaction data in an interaction layer of the video player;
stacking the rendered image and the interactive image to obtain a display image of the animation image, and displaying the display image in the display interface;
and when the display images corresponding to each frame of animation image have been displayed in the display interface in sequence, completing the dynamic effect display process of the gift animation to be displayed.
2. The method of claim 1, further comprising:
and if the image data of the currently read animation image contains only rendering data of the animation image, rendering a rendered image corresponding to the rendering data in a rendering layer of the video player, and taking the rendered image as the display image corresponding to the animation image.
3. The method of claim 1, wherein determining the to-be-displayed gift animation corresponding to the gift display instruction in each preset gift animation comprises:
acquiring a target animation identifier in the gift display instruction;
comparing the target animation identifier with the animation identifier of each gift animation;
and determining an animation identifier consistent with the target animation identifier, and determining the gift animation corresponding to that animation identifier as the gift animation to be displayed.
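The identifier matching in claim 3 can be sketched as a simple lookup over the preset gift animations; the identifiers and file names below are hypothetical illustrations, not values from the patent:

```python
# Hypothetical preset animation table: identifier -> animation resource.
preset_animations = {"rocket_v2": "rocket.mp4", "rose_v1": "rose.mp4"}

def select_animation(instruction: dict) -> str:
    """Return the to-be-displayed animation whose identifier matches the instruction."""
    target_id = instruction["animation_id"]       # target identifier in the gift display instruction
    for anim_id, animation in preset_animations.items():
        if anim_id == target_id:                  # identifiers are consistent
            return animation
    raise KeyError(f"no preset gift animation with id {target_id!r}")

print(select_animation({"animation_id": "rose_v1"}))  # -> rose.mp4
```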
4. The method of claim 1 or 2, wherein rendering the rendered image corresponding to the rendering data in a rendering layer of the video player comprises:
determining display parameters of the display interface;
constructing an image outline corresponding to the animation image on the rendering layer based on the image size parameter in the rendering data and the display parameter of the display interface;
rendering the image outline based on the transparency data and color data in the rendering data to obtain a rendered image corresponding to the rendering data.
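The per-pixel transparency data in claim 4 can be illustrated with the standard "alpha over" blend of a translucent foreground pixel onto an opaque backdrop; this is a generic compositing sketch, not the patent's specific algorithm:

```python
# Generic "alpha over" blend: combine a foreground RGBA pixel (colour data
# plus transparency data) with an opaque RGB backdrop pixel.

def over(fg_rgba: tuple, bg_rgb: tuple) -> tuple:
    """Composite one RGBA foreground pixel over an opaque RGB backdrop pixel."""
    r, g, b, a = fg_rgba
    alpha = a / 255.0  # normalise the 8-bit alpha channel
    return tuple(round(alpha * f + (1 - alpha) * bg)
                 for f, bg in zip((r, g, b), bg_rgb))

# Half-transparent red over an opaque blue backdrop.
print(over((255, 0, 0, 128), (0, 0, 255)))  # -> (128, 0, 127)
```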
5. The method of claim 1, wherein drawing the interactive image corresponding to the interaction data in the interaction layer of the video player comprises:
determining a drawing position in the interaction layer based on the position parameters in the interaction data and the display parameters of the display interface, and drawing an interactive picture frame at the drawing position according to the drawing parameters in the interaction data;
and determining display data based on the display attribute parameters in the interaction data, and adding the display data to the interactive picture frame to obtain an interactive image corresponding to the interaction data.
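One plausible reading of the position mapping in claim 5 is that the interaction data stores normalized coordinates which are scaled by the display interface's parameters to obtain the drawing position. A hedged sketch under that assumption, with hypothetical parameter names:

```python
# Hypothetical sketch: interaction data carries normalized position
# parameters (0.0-1.0), which are scaled to the display interface's
# pixel dimensions to find where the interactive frame is drawn.

def drawing_position(interaction_data: dict, display_params: dict) -> tuple:
    """Map normalized position parameters onto the display interface."""
    x = int(interaction_data["x_ratio"] * display_params["width"])
    y = int(interaction_data["y_ratio"] * display_params["height"])
    return x, y

pos = drawing_position({"x_ratio": 0.5, "y_ratio": 0.25},
                       {"width": 1080, "height": 1920})
print(pos)  # -> (540, 480)
```

Normalized coordinates are a common choice here because the same animation must land correctly on screens of different sizes.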
6. The method of claim 1, wherein the process of creating the gift animation comprises:
determining an initial gift animation;
encoding the initial gift animation by using a preset video encoding technique to obtain an encoded video;
determining each frame of key animation image in the encoded video, and acquiring the interaction data corresponding to each frame of key animation image;
and writing each piece of interaction data into the corresponding key animation image by using a preset data processing algorithm, so as to complete the construction of the gift animation.
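The construction process in claim 6 — encode the video, then write interaction data into its key frames — can be sketched as attaching a per-frame metadata record to the encoded frames. All names below are hypothetical; the patent's actual data processing algorithm is not specified here:

```python
# Hypothetical sketch of claim 6: after encoding, attach the preset
# interaction data to each key animation image of the encoded video.

def build_gift_animation(encoded_frames: list, key_frame_interactions: dict) -> list:
    """key_frame_interactions maps a frame index to that key frame's interaction data."""
    animation = []
    for index, render_data in enumerate(encoded_frames):
        frame = {"render": render_data}
        if index in key_frame_interactions:       # this frame is a key animation image
            frame["interaction"] = key_frame_interactions[index]
        animation.append(frame)
    return animation

anim = build_gift_animation(["f0", "f1", "f2"], {1: "tap_to_thank"})
print(anim[1])  # -> {'render': 'f1', 'interaction': 'tap_to_thank'}
```

Storing the interaction data inside the frames themselves is what lets the player of claim 1 recover it frame-by-frame during playback.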
7. The method of claim 1, wherein when the video player is used to read the image data of each frame of animation image of the gift animation to be displayed, the method further comprises:
and synchronously reading audio data corresponding to each frame of animation image of the gift animation to be displayed, and playing the audio data.
8. A dynamic effect display apparatus for a virtual gift, comprising:
the response unit is used for responding to the received gift display instruction, determining the gift animation to be displayed corresponding to the gift display instruction in each preset gift animation, and creating a video player in the current display interface;
the triggering unit is used for triggering the video player to sequentially read the image data of each frame of animation image in the gift animation to be displayed;
the rendering unit is used for, if the image data of the currently read animation image contains rendering data of the animation image and preset interaction data corresponding to the animation image, rendering a rendered image corresponding to the rendering data in a rendering layer of the video player, and drawing an interactive image corresponding to the interaction data in an interaction layer of the video player;
the stacking unit is used for stacking the rendered image and the interactive image to obtain a display image of the animation image, and displaying the display image in the display interface;
and the display unit is used for completing the dynamic effect display process of the gift animation to be displayed when the display images corresponding to the animation images of each frame are displayed in the display interface in sequence.
9. A storage medium comprising a stored program, wherein, when the program runs, the program controls a device on which the storage medium is located to execute the dynamic effect display method of a virtual gift according to any one of claims 1 to 7.
10. An electronic device comprising a memory and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by one or more processors to perform the dynamic effect display method of a virtual gift according to any one of claims 1 to 7.
CN202011022695.5A 2020-09-25 2020-09-25 Dynamic effect display method and device of virtual gift, storage medium and electronic equipment Pending CN112135161A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011022695.5A CN112135161A (en) 2020-09-25 2020-09-25 Dynamic effect display method and device of virtual gift, storage medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN112135161A true CN112135161A (en) 2020-12-25

Family

ID=73840148

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011022695.5A Pending CN112135161A (en) 2020-09-25 2020-09-25 Dynamic effect display method and device of virtual gift, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN112135161A (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112804578A (en) * 2021-01-28 2021-05-14 广州虎牙科技有限公司 Atmosphere special effect generation method and device, electronic equipment and storage medium
CN112905074A (en) * 2021-02-23 2021-06-04 北京达佳互联信息技术有限公司 Interactive interface display method, interactive interface generation method and device and electronic equipment
CN113538633A (en) * 2021-07-23 2021-10-22 北京达佳互联信息技术有限公司 Animation playing method and device, electronic equipment and computer readable storage medium
CN114374867A (en) * 2022-01-19 2022-04-19 平安国际智慧城市科技股份有限公司 Multimedia data processing method, device and medium
CN114860358A (en) * 2022-03-31 2022-08-05 北京达佳互联信息技术有限公司 Object processing method and device, electronic equipment and storage medium
CN115643462A (en) * 2022-10-13 2023-01-24 北京思明启创科技有限公司 Interactive animation display method and device, computer equipment and storage medium
WO2023146469A3 (en) * 2022-01-31 2023-08-31 Lemon Inc. Content creation using interactive effects

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170003740A1 (en) * 2015-06-30 2017-01-05 Amazon Technologies, Inc. Spectator interactions with games in a specatating system
CN108156503A (en) * 2017-12-14 2018-06-12 北京奇艺世纪科技有限公司 A kind of method and device for generating present
CN109660859A (en) * 2018-12-25 2019-04-19 北京潘达互娱科技有限公司 A kind of animated show method and mobile terminal
CN110351592A (en) * 2019-07-17 2019-10-18 深圳市蓝鲸数据科技有限公司 Animation rendering method, device, computer equipment and storage medium
US20200077157A1 (en) * 2018-08-28 2020-03-05 Gree, Inc. Video distribution system for live distributing video containing animation of character object generated based on motion of distributor user, distribution method, and storage medium storing video distribution program
CN111669646A (en) * 2019-03-07 2020-09-15 北京陌陌信息技术有限公司 Method, device, equipment and medium for playing transparent video

Similar Documents

Publication Publication Date Title
CN112135161A (en) Dynamic effect display method and device of virtual gift, storage medium and electronic equipment
CN115908644A (en) Animation processing method and device
CN105447898A (en) Method and device for displaying 2D application interface in virtual real device
CN107197341B (en) Dazzle screen display method and device based on GPU and storage equipment
US20140087877A1 (en) Compositing interactive video game graphics with pre-recorded background video content
CN108668168B (en) Android VR video player based on Unity3D and design method thereof
CN106527857A (en) Virtual reality-based panoramic video interaction method
US20110221748A1 (en) Apparatus and method of viewing electronic documents
CN108959392B (en) Method, device and equipment for displaying rich text on 3D model
CN108882055B (en) Video live broadcast method and system, and method and device for synthesizing video stream
US20230290043A1 (en) Picture generation method and apparatus, device, and medium
WO2022193614A1 (en) Water wave special effect generation method and apparatus, storage medium, computer device
CN111491208B (en) Video processing method and device, electronic equipment and computer readable medium
CN109891462A (en) The system and method for interactive 3D environment are generated for using virtual depth
CN115082609A (en) Image rendering method and device, storage medium and electronic equipment
CN102521864A (en) Method for simulating display screen effect in game
Miller et al. XNA game studio 4.0 programming: developing for windows phone 7 and xbox 360
CN109379622B (en) Method and device for playing video in game
CN117390322A (en) Virtual space construction method and device, electronic equipment and nonvolatile storage medium
CN110944218B (en) Multimedia information playing system, method, device, equipment and storage medium
WO2023160041A1 (en) Image rendering method and apparatus, computer device, computer-readable storage medium and computer program product
CN111402369A (en) Interactive advertisement processing method and device, terminal equipment and storage medium
CN113676753B (en) Method and device for displaying video in VR scene, electronic equipment and storage medium
CN100531376C (en) Method for the management of descriptions of graphic animations for display, receiver and system for the implementation of said method
US20230334791A1 (en) Interactive reality computing experience using multi-layer projections to create an illusion of depth

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210113

Address after: 510000 3108, 79 Wanbo 2nd Road, Nancun Town, Panyu District, Guangzhou City, Guangdong Province

Applicant after: GUANGZHOU CUBESILI INFORMATION TECHNOLOGY Co.,Ltd.

Address before: 28th floor, block B1, Wanda Plaza, Nancun Town, Panyu District, Guangzhou City, Guangdong Province

Applicant before: GUANGZHOU HUADUO NETWORK TECHNOLOGY Co.,Ltd.

RJ01 Rejection of invention patent application after publication

Application publication date: 20201225