CN112052416A - Method and device for displaying image elements - Google Patents


Info

Publication number
CN112052416A
CN112052416A (application CN202010869764.XA)
Authority
CN
China
Prior art keywords
frame
animation
image element
page
reference path
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010869764.XA
Other languages
Chinese (zh)
Inventor
梁宇轩 (Liang Yuxuan)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shanghai Co Ltd
Original Assignee
Tencent Technology Shanghai Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shanghai Co Ltd filed Critical Tencent Technology Shanghai Co Ltd
Priority to CN202010869764.XA
Publication of CN112052416A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90: Details of database functions independent of the retrieved data types
    • G06F 16/95: Retrieval from the web
    • G06F 16/957: Browsing optimisation, e.g. caching or content distillation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00: Animation
    • G06T 13/80: 2D [Two Dimensional] animation, e.g. using sprites

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Embodiments of the invention provide a method, an apparatus, a computing device, and a computer-readable medium for displaying image elements. The method for presenting image elements in a page comprises the following steps: monitoring an image element triggering event, where the triggering event triggers replacement of a first image element that is displayed in the page based on its reference path; when the triggering event is detected, replacing the reference path of the first image element with the reference path of a second image element; and displaying the second image element in the page based on the second element's reference path, where the first image element and/or the second image element comprises a frame-by-frame animation. By replacing only reference paths, the embodiments complete the replacement of image elements in the page, effectively improving development and maintenance efficiency, reducing development and maintenance costs, improving compatibility, and supporting more complex scenes.

Description

Method and device for displaying image elements
Technical Field
Embodiments of the present invention relate generally to image element processing technology and, more particularly, relate to methods, apparatuses, computing devices, and computer-readable media for presenting image elements.
Background
With the development of client applications, the number of interactive activities embedded in clients has grown, as have the requirements placed on them. Existing client-embedded interactive activities typically use client-side technology, GIF technology, or Flash technology to present image elements. The activities displayed by these existing methods are relatively simple; development and maintenance efficiency is low, cost is high, compatibility is poor, and complex scenes cannot be supported.
Therefore, how to provide a method for displaying image elements that can support complex scenes, improve development and maintenance efficiency, reduce development and maintenance costs, and offer strong compatibility is a problem to be solved by those skilled in the art.
Disclosure of Invention
In view of this, embodiments of the present invention provide a method, an apparatus, a computing device, and a computer-readable medium for displaying image elements, so as to achieve the display of complex scenes while improving development and maintenance efficiency and reducing development and maintenance costs, and have strong compatibility.
According to a first aspect of embodiments of the present invention, there is provided a method for presenting image elements, the method comprising: monitoring an image element triggering event, wherein the image element triggering event is used for triggering the replacement of a first image element, and the first image element is displayed in a page based on a reference path of the first image element; when the image element triggering event is monitored, replacing the reference path of the first image element with the reference path of the second image element; and displaying the second image element in the page based on the reference path of the second image element, wherein the first image element and/or the second image element comprises a frame-by-frame animation.
According to a second aspect of embodiments of the present invention, there is provided an apparatus for presenting image elements, the apparatus comprising: a monitor image element module to monitor an image element triggering event for triggering replacement of a first image element, wherein the first image element is displayed in a page based on a reference path of the first image element; the reference path replacing module is used for replacing the reference path of the first image element with the reference path of the second image element when the image element triggering event is monitored; and a display module to display the second image element in the page based on a reference path of the second image element, wherein the first image element and/or the second image element comprises a frame-by-frame animation.
In some embodiments, based on the foregoing, the page is configured based on system style; and configuring the frame-by-frame animation based on the animation style of the frame-by-frame animation, wherein the system style at least comprises the animation style of the frame-by-frame animation, and the animation style of the frame-by-frame animation at least comprises the frame number and the control information of the frame-by-frame animation.
In some embodiments, based on the foregoing scheme, the apparatus further comprises: the page monitoring module is used for monitoring the size change of the page; and the adjusting module is used for adjusting the animation form of the frame-by-frame animation when the change of the page size is monitored, wherein the animation form at least comprises the size of the frame-by-frame animation.
In some embodiments, based on the foregoing, the adjusting module includes: the setting module is used for setting the length-width ratio of the page; the copying module is used for copying the animation class of the frame-by-frame animation; and the switching module is used for switching to the copied animation class so as to trigger the animation form adjustment of the frame-by-frame animation.
In some embodiments, based on the foregoing scheme, the apparatus further comprises: a recalling module to recall the system style to recalculate system pixels when a page size change is detected, and to recall the animation style of the frame-by-frame animation after the system pixels are calculated; and a configuration module to reconfigure the frame-by-frame animation based on the recalled animation style.
In some embodiments, based on the foregoing scheme, the listening image element module includes: a definition module for defining one or more listeners; and a binding module to use logic to bind the one or more listeners to one or more image element triggering events.
In some embodiments, based on the foregoing scheme, when both the first image element and the second image element include a frame-by-frame animation, the first image element and the second image element belong to the same frame-by-frame animation type, the first image element and the second image element have the same number of frames, and the frames of the first image element and the second image element have the same size.
In some embodiments, based on the foregoing scheme, when one of the first image element or the second image element comprises a frame-by-frame animation, the other of the first image element or the second image element comprises a still picture.
In some embodiments, based on the foregoing scheme, the page displaying the first image element or the second image element includes a WEB page.
In some embodiments, based on the foregoing scheme, the reference path of the frame-by-frame animation includes a reference path of an animation picture, wherein the animation picture includes a combination of frames of the frame-by-frame animation.
In some embodiments, based on the foregoing scheme, when the frame-by-frame animation is CSS3 frame-by-frame animation, the animation picture is a sprite picture.
In some embodiments, based on the foregoing scheme, at least the system style, the animation style, and the animation picture are preloaded.
According to a third aspect of embodiments of the present invention, there is provided a computing device comprising a memory and a processor, the memory having stored therein a computer program which, when executed by the processor, causes the processor to perform the method for presenting image elements as described in the above-mentioned embodiments of the invention.
According to a fourth aspect of embodiments of the present invention, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, causes the processor to perform the method for presenting image elements as described in the above-mentioned embodiments of the invention.
The embodiment of the invention can have the following beneficial effects:
In the technical solutions provided by some embodiments of the invention, replacement of image elements in a page is completed by replacing the elements' reference paths, without extensive reconfiguration and control of the elements, which effectively improves development and maintenance efficiency and reduces development and maintenance cost. Displaying and replacing image elements in a page in this efficient manner improves compatibility while also supporting more complex scenes. For example, game-interaction scenarios involve many image display and replacement problems. In traditional approaches, a large number of elements must be configured and controlled, so development and maintenance costs are extremely high, the display effect is unsatisfactory, and cross-platform display is impossible. The technical solutions provided by embodiments of the invention can efficiently display and replace image elements in interactive game applications, greatly reduce cost, improve the complexity and quality of the displayed images, and achieve cross-platform display.
Drawings
To illustrate the technical solutions in the embodiments of the invention more clearly, exemplary embodiments are described below with reference to the accompanying drawings. The figures described below represent only some embodiments of the invention:
FIG. 1 shows a schematic diagram of individual frames in a frame-by-frame animation;
FIG. 2 illustrates an exemplary system architecture according to an embodiment of the present invention;
FIG. 3 shows a flow diagram of a method for presenting image elements according to an embodiment of the invention;
FIG. 4 illustrates a collection of sprite maps in accordance with an embodiment of the invention;
FIG. 5 shows a schematic flow chart of the steps of listening for image element triggering events in the method for presenting image elements shown in FIG. 3;
FIG. 6 illustrates a schematic diagram of a logical snoop, according to an embodiment of the present invention;
FIGS. 7A-7D illustrate an example application of JavaScript logic control in accordance with an embodiment of the present invention;
FIG. 8 shows a flow diagram of a method for presenting image elements according to another embodiment of the invention;
FIG. 9 shows a schematic flow chart of the steps of adjusting the animation morphology of a frame-by-frame animation in the method for presenting image elements shown in FIG. 8;
FIG. 10 illustrates a schematic diagram of preloading resources based on object trees, according to an embodiment of the invention;
FIG. 11 is a block diagram illustrating an exemplary structure of an apparatus for presenting image elements according to an embodiment of the present invention;
FIG. 12 is a block diagram illustrating an exemplary structure of an apparatus for presenting image elements according to another embodiment of the present invention;
FIG. 13 illustrates an example application scenario in accordance with an embodiment of the present invention;
FIGS. 14A-14D illustrate a user interface presenting image elements from the perspective of a user according to an embodiment of the present invention;
FIG. 15 shows a schematic block diagram of a computing device according to an embodiment of the invention.
It should be understood that other figures may also be derived from these figures by those skilled in the art.
Detailed Description
The preferred embodiments of the present invention will be described below with reference to the accompanying drawings of the specification, it being understood that the preferred embodiments described herein are merely for illustrating and explaining the present invention, and are not intended to limit the present invention, and features in the embodiments and examples of the present invention may be combined with each other without conflict.
The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to provide a thorough understanding of embodiments of the invention. One skilled in the relevant art will recognize, however, that the inventive aspects may be practiced without one or more of the specific details, or with other methods, components, devices, steps, and so forth. In other instances, well-known methods, devices, implementations or operations have not been shown or described in detail to avoid obscuring aspects of the invention.
The block diagrams shown in the figures are only functional entities and do not necessarily correspond to physically separate entities. I.e. these functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor means and/or microcontroller means.
The flow diagrams depicted in the figures are merely exemplary and do not necessarily include all of the elements and operations/steps, nor do they necessarily have to be performed in the order depicted, unless otherwise indicated. For example, some operations/steps may be decomposed, and some operations/steps may be combined or partially combined, so that the actual execution sequence may be changed according to the actual situation.
Before describing embodiments of the present invention in detail, some relevant concepts are explained first:
animation frame by frame: the principle of the frame-by-frame animation is to decompose animation actions in 'continuous key frames', that is, draw different contents on each frame of a time axis frame by frame, and continuously play the contents to form the animation. The frame-by-frame animation has great flexibility, can represent almost any content to be represented, is similar to a playing mode of a movie, and is very suitable for performing fine animation. For example: sharp turns of the character or animal, waving of hair and clothing, walking, speaking, and delicate 3D effects, etc. FIG. 1 shows a schematic diagram of individual frames of a frame-by-frame animation. As shown in fig. 1, each frame is shown as a key decomposition of the walking motion, and when the frames are played continuously, an animation of the walking motion is formed.
Hypertext Markup Language 5 (HTML5, also known as H5): a hypertext markup language with multiple characteristics, including semantics, local storage, device compatibility, connectivity, web multimedia, three-dimensional graphics and special effects, and performance and integration. H5 introduces many elements and attributes, gives web pages better meaning and structure, provides open interfaces for data and application access, and allows external applications to connect directly with data in the browser.
Cascading Style Sheets (CSS): a style sheet language used to describe the presentation of HTML or XML (including XML dialects such as SVG, MathML, and XHTML) documents. CSS describes how elements should be rendered on screen, on paper, in speech, or on other media.
CSS3: an upgraded version of CSS. Using CSS3 when building a page allows effective and precise control of the page's layout, fonts, colors, background, and other effects.
Document Object Model (DOM): a standard programming interface for processing extensible markup language. It is a platform- and language-independent application programming interface (API) through which programs and scripts can dynamically access and update the content, structure, and style of web documents (currently defined for HTML and XML documents). A document can be further processed and the results of that processing added back into the current page. The DOM represents the document as a tree.
CSSOM: a set of APIs that allow JavaScript to manipulate CSS. It is very similar to the DOM, but applies to CSS rather than HTML. It allows users to dynamically read and modify CSS styles.
JS (JavaScript): an interpreted scripting language that is dynamically typed, weakly typed, and prototype-based, with built-in support types. JS is a scripting language widely used on clients; it was first used on HTML web pages to add dynamic functionality to them.
Image element: a generic term for various graphic and visual elements, which may include, for example, frame-by-frame animations and still pictures.
Fig. 2 illustrates an exemplary system architecture 200 in which various methods described herein may be implemented, according to an embodiment of the invention. As shown in fig. 2, the system architecture 200 includes a server 210, a network 240, and one or more terminal devices 250.
Server 210, which may be a single server or a server cluster, stores and executes instructions that can perform the various methods described herein. It should be understood that the servers referred to herein are typically server computers with large amounts of memory and processor resources, but other embodiments are possible.
Examples of network 240 include a local area network (LAN), a wide area network (WAN), a personal area network (PAN), and/or a combination of communication networks such as the Internet. Server 210 and the one or more terminal devices 250 may each include at least one communication interface (not shown) capable of communicating over network 240. Such a communication interface may be one or more of the following: any type of network interface (e.g., a network interface card (NIC)), a wired or wireless interface (such as IEEE 802.11 wireless LAN (WLAN)), a Worldwide Interoperability for Microwave Access (WiMAX) interface, an Ethernet interface, a Universal Serial Bus (USB) interface, a cellular network interface, a Bluetooth interface, a Near Field Communication (NFC) interface, or the like. Further examples of communication interfaces are described elsewhere herein.
Terminal device 250 may be any type of mobile computing device, including a mobile computer or mobile computing device (e.g., Microsoft Surface devices, Personal Digital Assistants (PDAs), laptop computers, notebook computers, a tablet computer such as Apple iPad, a netbook, etc.), a mobile phone (e.g., a cellular phone, a smart phone such as Microsoft Windows phones, Apple iPhone, a phone that implements the Google Android operating system, Palm devices, Blackberry devices, etc.), a wearable computing device (e.g., smart watches, a head-mounted device, including smart glasses, such as Google Glass, etc.), or other type of mobile device. In some embodiments, terminal device 250 may also be a stationary computing device. Further, where the system includes multiple terminal devices 250, the multiple terminal devices 250 can be the same or different types of computing devices.
Terminal device 250 may include a display screen 251 and a terminal application 252 that may interact with a terminal user via display screen 251. Terminal device 250 may interact with, e.g., send data to or receive data from, server 210, e.g., via network 240. The terminal application 252 may be a native application, a Web page (Web) application, or an applet (LiteApp) that is a lightweight application. In the case where the terminal application 252 is a local application that needs to be installed, the terminal application 252 may be installed in the terminal device 250. In the case where the terminal application 252 is a Web application, the terminal application 252 can be accessed through a browser. In the case where the terminal application 252 is an applet, the terminal application 252 may be directly opened on the terminal device 250 by searching relevant information of the terminal application 252 (e.g., a name of the terminal application 252, etc.), scanning a graphic code of the terminal application 252 (e.g., a barcode, a two-dimensional code, etc.), and the like, without installing the terminal application 252.
In one application scenario of an embodiment of the invention, a user may use terminal device 250 to send a request to server 210 via network 240; for example, in a game lottery, terminal device 250 may send a lottery draw request to the server. Upon receiving the request, server 210 may return a result message, such as the lottery result, to terminal device 250.
It should be understood that the number of terminal devices, networks, and servers in fig. 2 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
FIG. 3 is a flow diagram 300 of a method for presenting image elements according to an embodiment of the invention. First, in step 310, a first image element is displayed in a page based on the first image element's reference path. In step 320, an image element triggering event is monitored; the triggering event triggers replacement of the first image element. In step 330, when the triggering event is detected, the reference path of the first image element is replaced with the reference path of a second image element. In particular, a frame-by-frame animation can be replaced simply by replacing its reference path; there is no need to rewrite the animation style for each frame, as conventional methods require. Likewise, replacement between frame-by-frame animations and still pictures can be achieved by changing the reference path, without writing large amounts of configuration and control data for different element types. This effectively improves development and maintenance efficiency and reduces development and maintenance costs. In step 340, the second image element is displayed in the page based on the second image element's reference path.
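The flow of steps 310-340 can be sketched in plain JavaScript. This is a minimal, illustrative model: the image element is represented as a plain object rather than a real DOM node, and all function names and paths are hypothetical, not taken from the patent.

```javascript
// Illustrative model of an image element as a plain object; in a real page
// this would be a DOM node whose background-image URL is rewritten.
function createImageElement(referencePath) {
  return { referencePath, displayed: false };
}

// Step 310: display an image element in the page based on its reference path.
function display(element) {
  element.displayed = true;
  return element.referencePath; // the path the page renders from
}

// Steps 320-340: when the triggering event fires, replace the reference path
// of the first image element with that of the second, then re-display.
function onImageElementTrigger(element, secondPath) {
  element.referencePath = secondPath; // step 330: only the path changes
  return display(element);            // step 340: render from the new path
}

const el = createImageElement('first.png');
display(el);                                        // step 310
const shown = onImageElementTrigger(el, 'second.png');
```

The point of the sketch is that the element object is never rebuilt or reconfigured; only its path field changes.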
In one embodiment, the page may be a WEB page embedded in a client, and the client may be a terminal application 252 in a terminal device 250 as described in FIG. 2, such as a mobile game, a PC game client, WeChat, and the like. Displaying image elements on a WEB page enables cross-device display without writing different display modes for different terminal devices, which greatly improves compatibility and reduces development cost. It also simplifies maintenance: when image elements need to be modified, the modification can be made directly on the WEB page embedded in the client, without updating the client application. In another embodiment, the page may be a WEB page opened in a browser on the terminal device. In one application scenario, when a user shares an activity displayed in a page in a client, other users can open it directly in their terminal's browser without downloading the client, which can effectively improve communication efficiency. In yet another embodiment, the WEB page may also be combined with other components. In one application scenario, embodiments of the invention can be used for WEB pages combined with a game engine. A game engine can convincingly simulate complex scenes such as daily life and physical scenes, but simulating complex scenes with a game engine alone is very costly and requires a large amount of data. Using the method of the embodiments to combine WEB with a game engine achieves efficient development and maintenance while keeping the user experience essentially consistent with using the game engine alone.
Image elements are a collective name for various graphic and visual elements. In embodiments of the invention, the first image element and the second image element may comprise a frame-by-frame animation or a still picture. In one embodiment, the first and second image elements may both be frame-by-frame animations. For example, in a soccer game, when animations of different players need to be swapped, the replacement can be done by directly replacing the addresses of the frame-by-frame animations, without recalculating pixels or reconfiguring control information for each frame of the different animations. When both elements are frame-by-frame animations, they belong to the same frame-by-frame animation type (e.g., both are CSS3 frame-by-frame animations), have the same number of frames, and their frames have the same size. In another embodiment, one of the first or second image element may be a frame-by-frame animation and the other an ordinary still picture. For example, in the soccer game described above, it may be necessary to replace a still picture showing a transfer fee, gold medals, talents, and the like with a frame-by-frame animation showing a player. Embodiments of the invention can directly replace a still picture with a frame-by-frame animation (or vice versa) merely by replacing addresses, without writing large amounts of configuration and control data for different types of image elements.
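The swap constraints described above can be sketched as a small validation check, assuming hypothetical field names: when both elements are frame-by-frame animations, the type, frame count, and frame size must match, while a swap involving a still picture needs only the path change.

```javascript
// Illustrative check of the swap constraints: two frame-by-frame animations
// must share animation type, frame count, and frame size; a still picture
// may replace (or be replaced by) an animation without these checks.
// All field names here are assumptions for the sketch.
function canSwap(a, b) {
  const bothAnimations = a.kind === 'film' && b.kind === 'film';
  if (!bothAnimations) return true; // still <-> animation: path change only
  return (
    a.animationType === b.animationType && // e.g. both CSS3 frame-by-frame
    a.frameCount === b.frameCount &&
    a.frameWidth === b.frameWidth &&
    a.frameHeight === b.frameHeight
  );
}

const playerA = { kind: 'film', animationType: 'css3', frameCount: 15, frameWidth: 100, frameHeight: 100 };
const playerB = { kind: 'film', animationType: 'css3', frameCount: 15, frameWidth: 100, frameHeight: 100 };
const transferFee = { kind: 'pic' };
```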
As shown in the following code, in embodiments of the invention the data source of an ordinary still picture is defined in the same format as the data source of a frame-by-frame animation; the only difference is that the data type of the frame-by-frame animation is defined as the "film" type while the data type of the ordinary still picture is defined as the "pic" type. Therefore, replacement between a frame-by-frame animation and an ordinary still picture can be completed simply by swapping their reference paths.
{
    sid: "",
    name: "80 million",
    num: "80 million",
    stype: "pic",
    pic: "transferfee.png",
    type: 1,
    color: "purple",
},
{
    sid: "",
    name: "classic player",
    num: "1",
    stype: "film",
    pic: "weiaila.png",
    type: 1,
    color: "aurum",
},
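A renderer consuming the data-source format above might branch only on the stype field; everything else about the two entries is identical, which is what makes path-only replacement possible. This is an illustrative sketch; the class names are assumptions, not from the patent.

```javascript
// Illustrative dispatch on the shared data-source format: "film" entries get
// the frame-by-frame animation class, anything else renders as an ordinary
// background image. Swapping element types is just swapping objects/paths.
function renderClass(item) {
  return item.stype === 'film' ? 'film' : 'pic';
}

const transferFee = { stype: 'pic', pic: 'transferfee.png' };
const player = { stype: 'film', pic: 'weiaila.png' };
```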
In one embodiment of the invention, when the image element is a frame-by-frame animation, the reference path of the frame-by-frame animation may be the reference path of an animation picture, where the animation picture contains a combination of the individual frames of the animation. An example of an animation picture is a sprite map. In CSS, sprites are an image-merging technique: small icons and a background image are merged into one picture, and CSS background positioning is then used to display the part of the picture that needs to be shown. In an embodiment of the invention, the frames of the frame-by-frame animation and the background image can be combined into one picture, and CSS background positioning can then display different frames at different positions as needed. FIG. 4 shows a collection of 9 sprite maps according to an embodiment of the invention. It should be understood that 9 is merely an example number; more or fewer sprite maps may be used. As shown in FIG. 4, each sprite map has a background image (e.g., a black background), with the individual frames arranged at different positions on the background to form one picture. The frame-by-frame animation can thus be invoked by calling the address of the sprite map. Example code that invokes the frame-by-frame animation based on the sprite map's address is shown below:
<em class="purple"><span class="film" style="background-image: url(//game.gtimg.cn/images/ffm/act/a20200408lottery/reward/film/7101masailuo.png); animation-name: film;"></span></em>
In one embodiment, the page is configured based on a system style, which may include the animation styles of frame-by-frame animations, the styles of other graphical elements, and the positions of the various elements in the page. In one embodiment, a frame-by-frame animation is configured based on its animation style, where the animation style includes at least the animation's frame count and control information. The following code illustrates a general style for a frame-by-frame animation according to an embodiment of the invention; each frame of the animation can be driven by this style to display the frame-by-frame animation. In the code, "keyframes" and "-webkit-keyframes" have the same meaning: some older browsers do not support the unprefixed keyframes but do recognize -webkit-keyframes, so the two blocks are the same rule written in two syntaxes. The animation style is explained here using "keyframes" as the example. In this example, the frame-by-frame animation has 15 frames, each at a different position in the background picture; the position of a frame is defined by "background-position", where rem is a CSS relative length unit. The general style thus says: at 0%, display the frame at background-position: 0 0; at 7.1%, display the frame at background-position: 0 -1rem; and so on. When the frames are played in sequence according to this scheme, they change dynamically, displaying the frame-by-frame animation.
@keyframes film{
0%{background-position:0 0; }
7.1%{background-position:0 -1rem; }
14.2%{background-position:0 -2rem; }
21.3%{background-position:0 -3rem; }
28.4%{background-position:0 -4rem; }
35.5%{background-position:0 -5rem; }
42.6%{background-position:0 -6rem; }
49.7%{background-position:0 -7rem; }
56.8%{background-position:0 -8rem; }
63.9%{background-position:0 -9rem; }
71%{background-position:0 -10rem; }
78.1%{background-position:0 -11rem; }
85.2%{background-position:0 -12rem; }
92.3%{background-position:0 -13rem; }
100%{background-position:0 -14rem; }
}
@-webkit-keyframes film{
0%{background-position:0 0; }
7.1%{background-position:0 -1rem; }
14.2%{background-position:0 -2rem; }
21.3%{background-position:0 -3rem; }
28.4%{background-position:0 -4rem; }
35.5%{background-position:0 -5rem; }
42.6%{background-position:0 -6rem; }
49.7%{background-position:0 -7rem; }
56.8%{background-position:0 -8rem; }
63.9%{background-position:0 -9rem; }
71%{background-position:0 -10rem; }
78.1%{background-position:0 -11rem; }
85.2%{background-position:0 -12rem; }
92.3%{background-position:0 -13rem; }
100%{background-position:0 -14rem; }
}
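The hand-written percentages above advance one frame per stop, at a step of 100/(15-1) truncated to 7.1. For a sprite sheet with a different frame count, the stops can be generated rather than typed out. The helper below is our own illustrative sketch, not code from the patent; it reproduces the step and truncation convention used by the "film" style above.

```javascript
// Generate @keyframes stops for an N-frame vertical sprite sheet, one frame
// (1rem tall) per stop, matching the hand-written "film" style for 15 frames.
function buildKeyframes(name, frameCount) {
  // 15 frames -> 14 intervals; 100/14 ≈ 7.14 is truncated to 7.1 as above
  var step = Math.floor(1000 / (frameCount - 1)) / 10;
  var lines = ['@keyframes ' + name + '{'];
  for (var i = 0; i < frameCount; i++) {
    // force the final stop to exactly 100% so the last frame is reached
    var pct = (i === frameCount - 1) ? 100 : Math.round(i * step * 10) / 10;
    lines.push(pct + '%{background-position:0 -' + i + 'rem; }');
  }
  lines.push('}');
  return lines.join('\n');
}
```

For example, `buildKeyframes('film', 15)` reproduces the stops of the "film" style above (0%, 7.1%, 14.2%, ..., 100%).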
Fig. 5 shows a schematic flow chart of the step of listening for image element triggering events (i.e., step 320 in the method 300) in the method for presenting image elements shown in fig. 3. In step 321, one or more listeners are defined. In particular, one or more listeners may be maintained in an event listener registry. A listener may listen for any suitable action, such as double-clicking the screen or pressing a particular area of the page. In step 322, logic is used to bind the one or more listeners to one or more image element triggering events. In particular, one or more listeners in the event listener registry can be bound to one or more image element triggering events using JS logic or other suitable logic. For example, a listener that listens for presses on a particular area of the page may be bound to an image element triggering event, after which the logically bound listener is associated with that triggering event. Thereafter, when the listener detects a press on that particular area of the page, it returns the result that an image element triggering event has occurred.
FIG. 6 illustrates a schematic diagram of logical listening according to an embodiment of the invention. As shown in FIG. 6, one or more events may be defined in an event source. Specifically, the events may include a finger pressing the screen, a finger leaving the screen, rotating the screen, or dragging, and may also include mouse clicks and the like. When an event in the event source occurs, the event is broadcast through an event broadcaster, the listener corresponding to the event is searched for in the event listener registry, and a result is returned once the listener is found. In one application scenario, the event may be the user's finger pressing the screen. When the user's finger presses the screen, the event is broadcast by the event broadcaster and the listener corresponding to the event is looked up in the event listener registry. For example, if a listener is defined to listen for a finger pressing a specific area, then after the user presses the screen, the event is broadcast through the event broadcaster and the corresponding listener is looked up in the event listener registry. If the user pressed the specific area, the corresponding listener is found, and after it is found a result is returned, e.g., that the specific area was pressed. If a listener is logically bound to an event, the result that the corresponding event was triggered may be returned. If the user did not press the specific area, the corresponding listener is not found, no result is returned, and the event bound to that listener is not triggered. It should be understood that events in the event source may also be associated with a timer. For example, a listener may be defined to listen for a 10 s timer expiring; when the 10 s mark is reached, the listener is found and the corresponding result is returned.
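The event source, broadcaster, and listener registry described above can be sketched as a small JS structure. This is a minimal illustration; the names (EventRegistry, bind, broadcast) are ours, not from the patent.

```javascript
// Minimal listener registry with a broadcaster, as in FIG. 6:
// events are broadcast by name, matching listeners are looked up and
// invoked, and their results are returned (or nothing if none is bound).
function EventRegistry() {
  this.listeners = {};  // event name -> array of bound listener callbacks
}
EventRegistry.prototype.bind = function (eventName, listener) {
  (this.listeners[eventName] = this.listeners[eventName] || []).push(listener);
};
EventRegistry.prototype.broadcast = function (eventName, payload) {
  var bound = this.listeners[eventName] || [];
  var results = [];
  for (var i = 0; i < bound.length; i++) {
    results.push(bound[i](payload));  // each found listener returns its result
  }
  return results;  // empty array when no listener was bound to the event
};
```

Usage mirrors the scenario in the text: binding a listener for a press on a specific area, then broadcasting the press event, returns the listener's result; broadcasting an event with no bound listener returns nothing.

```javascript
var reg = new EventRegistry();
reg.bind('press', function (area) { return 'pressed:' + area; });
reg.broadcast('press', 'lottery-button');  // -> ['pressed:lottery-button']
reg.broadcast('rotate');                   // -> [] (no listener bound)
```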
In one application scenario, this may be used to replace certain image elements on a timer, e.g., replacing a frame-by-frame animation after 10 s.
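A timed replacement of this kind can be sketched as follows. All names and the placeholder URL are illustrative assumptions; the element only needs a style object with backgroundImage and animationName properties, so a real DOM element qualifies.

```javascript
// Swap a frame-by-frame animation's reference path to a new sprite sheet.
function replaceReferencePath(el, newUrl) {
  el.style.backgroundImage = 'url(' + newUrl + ')';  // point at the new sheet
  el.style.animationName = 'film';                   // keep the keyframe style
}

// Schedule the replacement after delayMs, e.g. 10000 for the 10 s example.
function scheduleReplacement(el, newUrl, delayMs) {
  return setTimeout(function () {
    replaceReferencePath(el, newUrl);
  }, delayMs);
}
```

In a page this would be invoked as, e.g., `scheduleReplacement(document.querySelector('.film'), '//example.invalid/other-sprite.png', 10000);` (the selector and URL being hypothetical).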
Referring now to fig. 5 and 6, in one application scenario according to an embodiment of the present invention, a listener is first defined to listen to the specific area where a "lottery button" is displayed. The listener is then bound to a lottery triggering event. When the user clicks "draw", a finger-press-screen event in the event source occurs. The event is broadcast by the event broadcaster and the listener corresponding to the event is found in the event listener registry. Once the listener is found, the occurrence of the lottery triggering event is returned, because the listener is already bound to it. In response to the lottery event, the system initializes the prize pool and then judges, according to the data returned by the server, whether the user has won the gold prize or the silver prize. If the user wins the gold prize, the animation class is switched to the animation class name of the gold prize pool; if the user wins the silver prize, the animation class is switched to the animation class name of the silver prize pool. In one example, when the user draws the gold prize, an animation is initialized, resources and styles are invoked, and the gold prize animation is displayed. In one example, after the animation completes, it can be hidden and stopped, and a winning reminder displayed. The following is example code implementing this application scenario, where "function initPoll()" initializes the prize pool and "if(skin == "gold")" determines whether the user has won the gold prize. If so, the instructions in the if branch are executed, where "new mo.Film(document.querySelector('#goldLotteryAnimate'), ...)" initializes the animation and invokes the resources and styles.
function initPoll(){ // initialize the prize pool
PTTSendClick('initpool');
if(skin == "gold"){
var pollData = rewardPoll.goldPool;
document.getElementById("superPollPrevList").className = "superPollPrevList1";
document.getElementById("normalPollPrevList").className = "normalPollPrevList1";
changeBgFootball("gold");
if(!goldLotteryAnimate){
var path = "//game.gtimg.cn/images/ffm/act/a20200408lottery/reward/animate/lottery_Yellow/";
var resource = [];
for (var i = 0; i < 126; i++) {
var name = (Array(5).join(0) + i).slice(-5);
resource.push(path+'Yellow_'+name+'.png');
};
goldLotteryAnimate = new mo.Film(document.querySelector('#goldLotteryAnimate'),{
resource : resource,
totalFrame : 126,
// onPlaying:function(index,i){
// console.log(index);
// },
aniComplete:function(){
document.getElementById("goldLotteryAnimate").style.display = "none";
document.getElementById('goldfootball').style.display="";
document.getElementById("overlay").style.display = "none";
goldAnimateBg.play(51,'backward')
lotteryNotice();
}
});
}
}
}
FIGS. 7A-7D illustrate an example application of JS logic control in accordance with an embodiment of the present invention. As shown, JS logic may be used to control the delayed animated reveal of an element. As shown in fig. 7A, the picture can be covered by a white background picture with strip-shaped gaps in the middle. In fig. 7B, a transparent button is laid over the gap of each white bar. Clicking a transparent button may be defined as a listener in the event listener registry. Then, in FIG. 7C, an animation effect event is bound to the listener. Specifically, a motion effect is defined beneath each white bar, applied in the order fly-out (down) then fly-out (up), with a delay time added starting from the second transparent button. As shown in fig. 7D, when the transparent button over a white bar is clicked, a click event in the event source occurs, the event is broadcast through the event broadcaster, and the listener is found in the event listener registry. After the listener is found, the bound animation effect event is returned, and the white bars fly out of the image in sequence according to the configuration, revealing the covered image elements. The final effect is a delayed animated reveal of the element.
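The staggered fly-out described above, where each bar after the first starts a bit later, can be sketched with CSS animation properties set from JS. The selector, animation name, and timing value are illustrative assumptions, not values from the patent.

```javascript
// Give each covering bar the same fly-out animation, delayed by one more
// step than the previous bar (0 ms, 200 ms, 400 ms, ...), so the bars fly
// out in sequence and the covered image elements are revealed gradually.
function staggerFlyOut(bars, stepMs) {
  for (var i = 0; i < bars.length; i++) {
    bars[i].style.animationName = 'flyOut';            // assumed keyframe name
    bars[i].style.animationDelay = (i * stepMs) + 'ms'; // delay from 2nd bar on
  }
}
```

In a page this would run inside the click listener, e.g. `staggerFlyOut(document.querySelectorAll('.white-bar'), 200);` with a hypothetical `.white-bar` class.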
Fig. 8 shows a flow diagram of a method for presenting image elements according to another embodiment of the invention. In an embodiment of the present invention, the method 300 may further include listening for a page size change in step 350. Since a WEB page does not enforce a fixed display size, the size of the page can change. The page size may change when the screen is rotated (e.g., on a mobile device), when the page is dragged to a new size (e.g., on a computer), or by any other means. Page size changes are monitored in the same way as the image element events described above, which is not repeated here.
In step 360, the animation morphology of the frame-by-frame animation is adjusted when a page size change is detected. A change in page size requires adjusting the animation morphology of the frame-by-frame animation. In one embodiment, the animation morphology may include the size of the frames of the frame-by-frame animation. If, after the page size changes, the frame-by-frame animation were still displayed at the original frame size, display problems could arise; for example, 1.5 frames might be shown where 1 frame should be played. Therefore, after the page size changes, the animation morphology of the frame-by-frame animation must be changed and the frame size recalculated.
In step 370, the system style is recalled to compute the system pixels. After the page size changes, the size and relative position of the various elements in the page may change, so the system style must be recalled to recalculate the system pixels.
In step 380, after the system pixels are computed, the animation style of the frame-by-frame animation is recalled. Once the system pixels are computed, the animation style of the frame-by-frame animation may be recalled to reset the animation style according to the new system style. It should be noted that the animation style must be called after the system pixel computation, to prevent the pixel positions of the animation frames from being computed incorrectly. The code that invokes the system style and animation style is as follows:
win.addEventListener('DOMContentLoaded', function(){
setFont();
}, false);
//win.addEventListener('onorientationchange' in window ? 'orientationchange' : 'resize', setFont, false);
})(window, document);
</script>
<link rel="stylesheet" href="css/style.css">
</head>
in step 390, the frame-by-frame animation is reconfigured based on the recalled animation style.
It is noted that although the step of listening for a page size change is numbered 350, this does not mean that it is performed only after the second image element is displayed. In fact, the step of listening for page size changes may be performed at any suitable time; for example, listening may begin immediately after the system loads. A page size change may likewise be detected at any point. In one embodiment, the page size change may be detected before the first image element is displayed. In that case, the animation morphology of the frame-by-frame animation is adjusted, the system style is recalled to compute the system pixels, the animation style of the frame-by-frame animation is recalled, and the first image element is then displayed with the new animation style and size. In another embodiment, the change may be detected after the first image element is displayed but before the image element triggering event is heard; the same adjustments are made, and the first image element is redisplayed with the new animation style and size. In yet another embodiment, the change may be detected after the image element triggering event is heard but before the second image element is displayed; the same adjustments are made, and after the reference path is replaced with that of the second image element, the second image element is displayed with the new animation style and size. In yet another embodiment, the change may be detected after the second image element is displayed; the same adjustments are made, and the second image element is redisplayed with the new animation style and size.
Fig. 9 shows a schematic flow chart of the step of adjusting the animation morphology of the frame-by-frame animation (i.e., step 360 of the method 300) in the method for presenting image elements shown in fig. 8. In step 361, the aspect ratio of the page is set. In particular, what matters is the change in the page's aspect ratio rather than the change in page size as such. Therefore, an aspect ratio parameter is set according to the aspect ratio of the current page.
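The listings above call a setFont() routine on load and on resize, but the patent does not show its body. A common sketch for rem-sized sprite frames, offered here purely as an assumption, derives the root font size from the current page width so that 1rem always corresponds to one frame height; the 750 design width and 100 px base are illustrative values.

```javascript
// Pure helper: scale a base font size by the ratio of actual page width
// to design width, so rem-based frame sizes track the page size.
function computeRootFontSize(pageWidth, designWidth, baseFontPx) {
  return pageWidth / designWidth * baseFontPx;
}

// Assumed shape of setFont(): apply the computed size to the root element,
// so "background-position: 0 -1rem" always shifts by exactly one frame.
function setFont(doc) {
  var width = doc.documentElement.clientWidth;
  doc.documentElement.style.fontSize =
      computeRootFontSize(width, 750, 100) + 'px';
}
```

With these assumed constants, a 375 px wide page yields a 50 px root font size, so each 1rem frame step is 50 px.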
In step 362, the animation class (English: class) of the frame-by-frame animation is copied. In particular, upon detecting a page size change, another animation class is replicated from the current animation class. Because it is a copy, the two animation classes are of the same animation type; for example, both may be CSS3 frame-by-frame animations. In one embodiment, the copied animation class may differ only in its name. For example, the following code is the animation style of the copied animation class. Compared to the previously defined animation style, the two differ only in name: the original animation class is named "film" and the copied animation class is named "film2".
@keyframes film2{
0%{background-position:0 0; }
7.1%{background-position:0 -1rem; }
14.2%{background-position:0 -2rem; }
21.3%{background-position:0 -3rem; }
28.4%{background-position:0 -4rem; }
35.5%{background-position:0 -5rem; }
42.6%{background-position:0 -6rem; }
49.7%{background-position:0 -7rem; }
56.8%{background-position:0 -8rem; }
63.9%{background-position:0 -9rem; }
71%{background-position:0 -10rem; }
78.1%{background-position:0 -11rem; }
85.2%{background-position:0 -12rem; }
92.3%{background-position:0 -13rem; }
100%{background-position:0 -14rem; }
}
In step 363, a switch is made to the copied animation class to trigger the animation morphology adjustment of the frame-by-frame animation. In one embodiment, with CSS3 frame-by-frame animation, switching to the copied animation class may simply mean modifying the name of the animation class. Under CSS3 frame-by-frame animation, modifying the name of the animation class triggers the frame-by-frame animation to reset its calculation, thereby completing the animation morphology adjustment. The traditional approach of dynamically inserting JS to drive each frame of the animation is extremely difficult to maintain; implementing the animation with CSS3 frame-by-frame animation makes maintenance and updating more efficient and less costly.
The following example code adjusts the animation morphology of the frame-by-frame animation after a page size change is detected:
win.addEventListener('resize', function(){
setFont();
var films = document.querySelectorAll('.film');
for(var i = 0; i < films.length; i++){
// toggle each element between the identical "film" and "film2" classes
// to force the browser to recompute the frame-by-frame animation
if(films[i].style.animationName == "film"){
films[i].style.animationName = "film2";
}else{
films[i].style.animationName = "film";
}
}
}, false);
it should be appreciated that, to improve display efficiency, resources such as styles and pictures may be preloaded before the method 300 is executed. Preloading effectively prevents a frame from still being loaded at the moment the frame-by-frame animation is scheduled to display it, thereby avoiding stalls in the frame-by-frame animation and improving display efficiency. In one embodiment, resources such as styles and the animation pictures of frame-by-frame animations may be preloaded through the document object model. Creating and maintaining the image elements in the page based on an object tree is highly efficient and can significantly reduce costs. In one embodiment, the frame size of the same type of frame-by-frame animation is fixed to facilitate subsequent invocation.
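Preloading the sprite sheets can be sketched as follows. The URL list is illustrative; the optional imageFactory parameter is our own addition so the helper can be exercised outside a browser, and it defaults to the browser's Image constructor.

```javascript
// Kick off downloads for all sprite sheets before the animation runs, so
// no frame is still in flight when its keyframe stop is reached.
function preloadImages(urls, imageFactory) {
  var make = imageFactory || function () { return new Image(); };
  var images = [];
  for (var i = 0; i < urls.length; i++) {
    var img = make();
    img.src = urls[i];  // assigning src starts the download immediately
    images.push(img);   // keep references so the cache entries stay warm
  }
  return images;
}
```

In a page this would be called once at startup, e.g. `preloadImages(['//example.invalid/gold.png', '//example.invalid/silver.png']);` with hypothetical sheet URLs.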
FIG. 10 illustrates a diagram for preloading resources based on an object tree, according to an embodiment of the invention. As shown in fig. 10, a Document Object Model (DOM) tree and a CSS Object Model (CSSOM) tree are first constructed. The steps for building the DOM tree include: conversion, converting bytes into characters according to the specified encoding of the document; tokenization, converting the character strings into tokens according to the applicable standard; lexical analysis, converting the tokens into nodes that define their attributes and rules; and DOM tree construction, linking the created nodes in a tree-shaped data structure. The steps for constructing the CSSOM tree are similar, comprising conversion, tokenization, lexical analysis, and CSSOM tree construction. It should be understood that other method steps may be used to build the DOM and CSSOM trees. After the DOM tree and the CSSOM tree are built, their nodes are combined, carrying the attributes and rules defined in both, to build a render tree (English: Render Tree). The render tree is used to calculate the layout of the visible elements and serves as input to the process that paints pixels onto the screen. To form the render tree, the following steps are generally required: each visible node is traversed starting from the DOM tree root. Some nodes are not visual at all (such as meta tags) and are ignored, since they do not affect the rendered output. Some nodes are hidden by CSS styles and are also ignored; for example, the span node in FIG. 10 is omitted from the render tree because its style is display: none. For each visible node, the matching CSSOM rules are found, and the node is displayed with the style applied.
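The render-tree filtering step described above can be sketched over a simplified DOM-like structure. This is an illustration of the described traversal only, not a real browser implementation: nodes are plain objects, and styles are looked up by tag name for brevity.

```javascript
// Walk a DOM-like tree, drop non-visual nodes (e.g. meta) and nodes hidden
// by display:none (like the span in FIG. 10), and attach each kept node's
// matched style, yielding a render-tree-like structure.
function buildRenderTree(node, styles) {
  var style = styles[node.tag] || {};
  if (node.tag === 'meta' || style.display === 'none') return null;
  var out = { tag: node.tag, style: style, children: [] };
  var kids = node.children || [];
  for (var i = 0; i < kids.length; i++) {
    var child = buildRenderTree(kids[i], styles);
    if (child) out.children.push(child);  // skip ignored subtrees
  }
  return out;
}
```

For example, given a body containing a span styled display:none and a visible p, the resulting tree keeps only the p node with its matched style, mirroring the span omission shown in FIG. 10.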
It can be appreciated that embodiments of the invention display image elements by embedding a WEB page in the client and using various H5 animation and interaction techniques, improving the development efficiency and visual effect of the client. Specifically, in embodiments of the invention, a client-side animation library is formed by combining CSS3 frame-by-frame animation, JS frame-by-frame animation, and video animation, and dynamic animations are generated in combination with characters and props in the client in order to display image elements. Embodiments of the invention offer strong compatibility, improve development and maintenance efficiency, reduce development and maintenance costs, and can display complex scenes. It should be appreciated that although CSS3 frame-by-frame animation is used as an example in embodiments of the present invention, other similar frame-by-frame animation techniques may be used to implement embodiments of the invention.
Fig. 11 shows an exemplary structural block diagram of an apparatus for presenting image elements according to an embodiment of the present invention. As shown in fig. 11, the apparatus 1100 includes a listening image element module 1110, a reference path replacing module 1120, and a display module 1130.
The listen image element module 1110 is configured to listen for an image element triggering event that triggers replacement of a first image element, where the first image element is displayed in the page based on a reference path of the first image element. The reference path replacing module 1120 is configured to replace the reference path of the first image element with the reference path of the second image element when the image element triggering event is monitored. The display module 1130 is configured to display the second image element in the page based on the reference path of the second image element.
In one embodiment, the page may be a WEB page embedded in the client. In another embodiment, the WEB page may not be embedded in the client; for example, the WEB page may be opened in a browser of the terminal device, or combined with other applications. Image elements are the collective name for various graphic and visual elements. In embodiments of the present invention, the first image element and the second image element may comprise a frame-by-frame animation or a still picture. In one embodiment of the invention, the first image element and the second image element may each be a frame-by-frame animation. In that case, the first image element and the second image element are of the same frame-by-frame animation type, e.g., both are CSS3 frame-by-frame animations; they have the same number of frames, and their frames have the same size. In another embodiment of the invention, one of the first and second image elements may be a frame-by-frame animation and the other may be an ordinary still picture.
In one embodiment of the invention, when the image element is a frame-by-frame animation, the reference path of the frame-by-frame animation may be a reference path of an animation picture, wherein the animation picture includes a combination of individual frames of the frame-by-frame animation. In one embodiment of the invention, the animation picture may be a sprite picture when the frame-by-frame animation is the CSS3 frame-by-frame animation. In one embodiment, resources such as styles, animated pictures of frame-by-frame animations, etc. may be preloaded through the document object model.
In one embodiment of the present invention, listening image elements module 1110 further comprises a definition module 1111 for defining one or more listeners. The listen for image element module 1110 also includes a binding module 1112 for using logic to bind one or more listeners to one or more image element triggering events.
Fig. 12 is a block diagram illustrating an exemplary structure of an apparatus for presenting image elements according to another embodiment of the present invention. As shown in fig. 12, the apparatus 1200 may further include a listening page module 1240, an adjustment module 1250, a calling module 1260, and a configuration module 1270 in addition to the listening image element module 1210, the reference path replacing module 1220, and the display module 1230. The listening image element module 1210, the reference path replacing module 1220, and the display module 1230 are the same as the listening image element module 1110, the reference path replacing module 1120, and the display module 1130 shown in fig. 11 and are not described again here.
The listening page module 1240 is used to listen for page size changes. The adjustment module 1250 is configured to adjust the animation morphology of the frame-by-frame animation when a page size change is detected, wherein the animation morphology includes at least the size of the frames of the frame-by-frame animation. The calling module 1260 is used to recall the system style to compute the system pixels when a page size change is detected, and to recall the animation style of the frame-by-frame animation after the system pixels are computed. The configuration module 1270 is used to reconfigure the frame-by-frame animation based on the recalled animation style.
In one embodiment, the adjustment module 1250 also includes a setting module 1251 for setting the aspect ratio of the page, a copy module 1252 for copying the animation class of the frame-by-frame animation, and a switching module 1253 for switching to the copied animation class to trigger the animation morphology adjustment of the frame-by-frame animation.
The various modules described above with respect to fig. 11 and 12 may be implemented in hardware or in hardware in combination with software and/or firmware. For example, the modules may be implemented as computer program code/instructions configured to be executed in one or more processors and stored in a computer-readable storage medium. Alternatively, the modules may be implemented as hardware logic/circuitry. For example, in an embodiment, one or more of the listening image elements module 1110, the reference path replacement module 1120, and the display module 1130 may be implemented together in a system on a chip (SoC). The SoC may include an integrated circuit chip including one or more components of a processor (e.g., a Central Processing Unit (CPU), microcontroller, microprocessor, Digital Signal Processor (DSP), etc.), memory, one or more communication interfaces, and/or other circuitry, and may optionally execute received program code and/or include embedded firmware to perform functions. The features of the techniques described herein are carrier-independent, meaning that the techniques may be implemented on a variety of computing platforms having a variety of processors.
Although specific functionality is discussed above with reference to particular modules, it should be noted that the functionality of the various modules discussed herein may be divided into multiple modules and/or at least some of the functionality of multiple modules may be combined into a single module. Additionally, a particular module performing an action discussed herein includes the particular module itself performing the action, or alternatively the particular module invoking or otherwise accessing another component or module that performs the action (or performs the action in conjunction with the particular module). Thus, a particular module that performs an action can include the particular module that performs the action itself and/or another module that the particular module that performs the action calls or otherwise accesses.
FIG. 13 illustrates an example application scenario in accordance with an embodiment of the present invention. It should be understood that the application scenario shown in FIG. 13 is merely exemplary, and not limiting. A client is opened on a mobile terminal, for example, a mobile game on a mobile phone. In step 1301, after the client is opened, resources such as styles, frames, and pictures are preloaded, for example, the various animation styles and sprite sheets that the game page needs to display. In step 1302, after loading completes, the page, frame data, and so on are initialized. In step 1303, it is monitored whether the screen is rotated. If the screen is rotated, the animation morphology of the frame-by-frame animation is adjusted, and in step 1304 the frame size of the frame-by-frame animation is adjusted. Then, in step 1305, new system styles and animation styles are invoked to configure the page and the frame-by-frame animation. If the screen is not rotated, steps 1304 and 1305 are skipped and the process goes directly to step 1306. In step 1306, data transfer is performed: the user clicks the lottery button on the page, the mobile phone sends the lottery request to the server, and the server returns the lottery result to the user's mobile phone according to its calculation. In step 1307, data interworking is performed: the user's mobile phone receives the lottery result and feeds it back to the corresponding module in the client, which selects the animation effect according to the result. For example, if the user wins the gold prize, a gold animation is displayed; if the user wins the silver prize, a silver animation is displayed or no animation is displayed. In step 1308, the frames are animated into the animation morphology. For example, when the user wins the gold prize, the animation picture of the gold prize is called, and each frame of the frame-by-frame animation is animated based on the previously configured animation morphology and animation style. In step 1309, the interactive presentation is performed.
14A-14D are user interfaces showing image elements from the perspective of a user according to embodiments of the present invention.
In fig. 14A, a WEB page 14100 embedded in a client and a navigation bar 14101 are shown. The WEB page can be controlled via the navigation bar 14101, for example, going forward, going back, refreshing, or exiting. The WEB page 14100 in fig. 14A includes a background picture 14103 and activity schemes 14105A to 14105C. In one embodiment, the WEB page 14100 may further include a button 14107 for switching among the activity schemes 14105A-14105C, and a button 14109 for selecting an activity scheme. After the activity scheme 14105A, 14105B, or 14105C is selected, the selected activity scheme is entered.
When an activity scheme is selected, the client preloads the system style and the animation style corresponding to that scheme, and initializes and displays the image elements according to them. In one embodiment, the activity scheme 14200 in fig. 14B is a lottery activity. In one embodiment, the activity scheme 14200 includes a background picture 14201, a video animation 14203, and image elements 14205A-14205F. In one embodiment, the activity scheme 14200 may also include a lottery button and prize pool selection buttons (e.g., gold prize pool, silver prize pool). In one embodiment, the activity scheme may also include other elements, such as text, buttons, and links. It should be understood that while one video animation and six image elements are shown in FIG. 14B, embodiments of the invention may include multiple video animations or none, and may include one or more image elements.
FIG. 14C is an example of the image elements 14205A-14205F in FIG. 14B. In one embodiment, one or more of the image elements 14205A-14205F may be animated frame by frame, while the remaining image elements may be still pictures.
Fig. 14D illustrates the lottery motion effect of the activity scheme 14200 in fig. 14B. In FIG. 14D, the activity scheme 14200 includes the background picture 14201, the video animation 14203, and image elements 14305A-14305F. In the activity scheme 14200, each time a drawing completes, the video animation 14203 is played (in this example, the ball flies in the direction of the arrow), and the drawing result is presented by replacing the image elements 14205A-14205F in fig. 14B with the image elements 14305A-14305F.
FIG. 15 shows a schematic block diagram of a computing device 1500 in accordance with embodiments of the invention. Computing device 1500 is a device for performing methods for presenting image elements in accordance with embodiments of the present invention.
Computing device 1500 can be a variety of different types of devices, such as a server computer, a device associated with a client (e.g., a client device), a system on a chip, and/or any other suitable computing device or computing system.
The computing device 1500 may include at least one processor 1502, memory 1504, communication interface(s) 1506, display device 1508, other input/output (I/O) devices 1510, and one or more mass storage devices 1512, which may be capable of communicating with each other such as by way of a system bus 1514 or other appropriate connection.
The processor 1502 may be a single processing unit or multiple processing units, all of which may include single or multiple computing units or multiple cores. The processor 1502 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitry, and/or any devices that manipulate signals based on operational instructions. Among other capabilities, the processor 1502 may be configured to retrieve and execute computer readable instructions, such as program code for an operating system 1516, program code for an application 1518, program code for other programs 1520, etc., stored in the memory 1504, mass storage 1512, or other computer readable medium to implement the methods for presenting image elements provided by embodiments of the present invention.
Memory 1504 and mass storage device 1512 are examples of computer storage media for storing instructions that are executed by the processor 1502 to carry out the various functions described above. By way of example, the memory 1504 may generally include both volatile and non-volatile memory (e.g., RAM, ROM, and the like). In addition, the mass storage device 1512 may generally include hard disk drives, solid state drives, removable media (including external and removable drives), memory cards, flash memory, floppy disks, optical disks (e.g., CD, DVD), storage arrays, network attached storage, storage area networks, and the like. Memory 1504 and mass storage device 1512 may both be collectively referred to herein as memory or computer storage media, and may be non-transitory media capable of storing computer-readable, processor-executable program instructions as computer program code that can be executed by the processor 1502, as a particular machine configured to carry out the operations and functions described in the examples herein.
A number of program modules may be stored on the mass storage device 1512. These programs include an operating system 1516, one or more application programs 1518, other programs 1520, and program data 1522, and they can be loaded into memory 1504 for execution. Examples of such applications or program modules may include, for instance, computer program logic (e.g., computer program code or instructions) for implementing the following components/functions: an image element monitoring module 1110, a reference path replacement module 1120, a display module 1130, and/or further embodiments described herein. In some embodiments, these program modules may be distributed over different physical locations.
Although illustrated in fig. 15 as being stored in memory 1504 of computing device 1500, modules 1516, 1518, 1520, and 1522, or portions thereof, can be implemented using any form of computer-readable media that is accessible by computing device 1500. As used herein, "computer-readable media" includes at least two types of computer-readable media, namely computer storage media and communication media.
Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, Digital Versatile Disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information for access by a computing device.
In contrast, communication media may embody computer readable instructions, data structures, program modules, or other data in a modulated data signal, such as a carrier wave or other transport mechanism. Computer storage media, as defined herein, does not include communication media.
Computing device 1500 may also include one or more communication interfaces 1506 for exchanging data with other devices, such as over a network, direct connection, and the like. The communication interface 1506 may facilitate communication within various networks and protocol types, including wired networks (e.g., LAN, cable, etc.) and wireless networks (e.g., WLAN, cellular, satellite, etc.), the internet, and so forth. The communication interface 1506 may also provide communication with external storage devices (not shown), such as in storage arrays, network attached storage, storage area networks, and so forth.
In some examples, a display device 1508, such as a monitor, may be included for displaying information and images. Other I/O devices 1510 may be devices that receive various inputs from a user and provide various outputs to the user, and may include touch input devices, gesture input devices, cameras, keyboards, remote controls, mice, printers, audio input/output devices, and so forth.
It will be understood that, although the terms first, second, third, etc. may be used herein to describe various devices, elements, or components, these devices, elements, or components should not be limited by these terms. These terms are only used to distinguish one device, element, or component from another device, element, or component.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrase "in one embodiment" appearing in various places throughout the specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
In this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, or apparatus.
Although the present invention has been described in connection with some embodiments, it is not intended to be limited to the specific form set forth herein. Rather, the scope of embodiments of the invention is limited only by the accompanying claims. Additionally, although individual features may be included in different claims, these may possibly advantageously be combined, and the inclusion in different claims does not imply that a combination of features is not feasible and/or advantageous. The order of features in the claims does not imply any specific order in which the features must be worked. Furthermore, in the claims, the word "comprising" does not exclude other elements, and the indefinite article "a" or "an" does not exclude a plurality. Reference signs in the claims are provided merely as a clarifying example and shall not be construed as limiting the scope of the claims in any way.

Claims (15)

1. A method for presenting image elements, the method comprising:
monitoring an image element triggering event, wherein the image element triggering event is used for triggering the replacement of a first image element, and the first image element is displayed in a page based on a reference path of the first image element;
when the image element triggering event is detected, replacing the reference path of the first image element with a reference path of a second image element; and
displaying the second image element in the page based on the reference path of the second image element,
wherein the first image element and/or the second image element comprise a frame-by-frame animation.
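The three claimed steps can be sketched, under our own hypothetical naming (the patent text itself gives no code), with a plain object modeling the page and its image element:

```javascript
// "Displaying" an image element = rendering it from its reference path.
function displayImageElement(page, el) {
  page.rendered = el.src;
}

// Handler invoked when the image element triggering event fires:
// replace the first element's reference path with the second's,
// then display the element based on the new path.
function onTrigger(page, el, secondPath) {
  el.src = secondPath;            // step 2: replace the reference path
  displayImageElement(page, el);  // step 3: display from the new path
}

const page = { rendered: null };
const first = { src: 'first.png' };
onTrigger(page, first, 'second.png');  // step 1 would bind this handler
console.log(page.rendered); // "second.png"
```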
2. The method of claim 1, wherein:
the page is configured based on system styles; and
the frame-by-frame animation is configured based on an animation style of the frame-by-frame animation,
wherein the system style includes at least an animation style of the frame-by-frame animation,
wherein the animation style of the frame-by-frame animation at least comprises the frame number and the control information of the frame-by-frame animation.
3. The method of claim 2, wherein the method further comprises:
monitoring the size change of the page; and
when the page size change is monitored, adjusting the animation shape of the frame-by-frame animation,
wherein the animation morphology comprises at least a size of a frame of the frame-by-frame animation.
4. The method of claim 3, wherein adjusting the animation morphology of the frame-by-frame animation comprises:
setting the length-width ratio of the page;
copying an animation class of the frame-by-frame animation; and
switching to the copied animation class to trigger an animation morphology adjustment of the frame-by-frame animation.
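The copy-and-switch of claim 4 resembles the common trick of re-triggering a CSS keyframe animation by moving an element onto a duplicated animation class. The sketch below is hedged: the `-copy` suffix and the plain objects standing in for a DOM element and stylesheet are our own illustrative choices.

```javascript
// A plain object stands in for a DOM element's class list.
function makeElement(className) {
  return { classes: new Set([className]) };
}

// Copy the animation class under a new name and switch the element to
// the copy, so the rendering engine re-applies (re-triggers) the
// animation and recomputes its morphology (e.g., frame size).
function retriggerAnimation(el, cls, styleSheet) {
  const copy = cls + '-copy';
  styleSheet[copy] = styleSheet[cls];  // duplicate the animation class
  el.classes.delete(cls);              // drop the original class...
  el.classes.add(copy);                // ...and adopt the copy
  return copy;
}

const sheet = { sprite: 'animation: frames 1s steps(8)' };
const el = makeElement('sprite');
const used = retriggerAnimation(el, 'sprite', sheet);
console.log(el.classes.has(used)); // true
```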
5. The method of claim 3, wherein the method further comprises:
when the page size change is detected, re-invoking the system style to recalculate system pixels;
re-invoking the animation style of the frame-by-frame animation after the system pixels are calculated; and
reconfiguring the frame-by-frame animation based on the recalled animation style.
6. The method of claim 1, wherein listening for image element triggering events comprises:
defining one or more listeners; and
using logic to bind the one or more listeners to one or more image element triggering events.
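A minimal, framework-free sketch of claim 6 follows. The event bus is our own stand-in (in a WEB page this would typically be `addEventListener` on the image elements themselves); the event name and payload are hypothetical.

```javascript
// Define listeners and bind them, via binding logic, to named
// image element triggering events.
function createEventBus() {
  const listeners = new Map();
  return {
    bind(event, listener) {
      if (!listeners.has(event)) listeners.set(event, []);
      listeners.get(event).push(listener);
    },
    emit(event, payload) {
      (listeners.get(event) || []).forEach((l) => l(payload));
    },
  };
}

const bus = createEventBus();
let replaced = null;
// Listener that performs the reference-path replacement when triggered.
bus.bind('replace-image', (path) => { replaced = path; });
bus.emit('replace-image', '14305A.png');
console.log(replaced); // "14305A.png"
```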
7. The method of any one of claims 1-6, wherein:
when the first image element and the second image element both comprise a frame-by-frame animation, the first image element and the second image element are of the same frame-by-frame animation type, the first image element and the second image element have the same number of frames, and the frames of the first image element and the second image element have the same size.
8. The method of any one of claims 1-6, wherein:
when one of the first image element or the second image element comprises a frame-by-frame animation, the other of the first image element or the second image element comprises a still picture.
9. The method of any one of claims 1-6, wherein:
the page on which the first image element or the second image element is displayed includes a WEB page.
10. The method of any one of claims 1-6, wherein:
the reference path of the frame-by-frame animation comprises a reference path of an animation picture, wherein the animation picture comprises a combination of frames of the frame-by-frame animation.
11. The method of claim 10, wherein:
when the frame-by-frame animation is the CSS3 frame-by-frame animation, the animation picture is a sprite picture.
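A CSS3 frame-by-frame animation over a sprite picture is commonly expressed as a `steps()` animation that shifts the sprite's `background-position` one frame at a time. The patent names the technique but not this exact CSS; the frame width, frame count, and class name below are examples of our own.

```javascript
// Generate the CSS for a sprite-sheet frame-by-frame animation:
// steps(frameCount) jumps through the frames instead of sliding.
function spriteKeyframes(name, frameWidth, frameCount) {
  const totalShift = frameWidth * frameCount;
  return [
    `@keyframes ${name} {`,
    `  from { background-position-x: 0; }`,
    `  to   { background-position-x: -${totalShift}px; }`,
    `}`,
    `.${name} { animation: ${name} 1s steps(${frameCount}) infinite; }`,
  ].join('\n');
}

// Example: an 8-frame sprite whose frames are 128px wide.
const css = spriteKeyframes('lottery-ball', 128, 8);
console.log(css.includes('steps(8)')); // true
```

Because all frames live in one sprite picture, replacing the frame-by-frame animation reduces to replacing that single picture's reference path, exactly as claim 10 describes.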
12. The method of claim 10,
at least the system style, the animation style, and the animation picture are preloaded.
13. An apparatus for presenting image elements, the apparatus comprising:
an image element monitoring module configured to monitor an image element triggering event for triggering replacement of a first image element, wherein the first image element is displayed in a page based on a reference path of the first image element;
a reference path replacement module configured to replace the reference path of the first image element with a reference path of a second image element when the image element triggering event is detected; and
a display module configured to display the second image element in the page based on the reference path of the second image element,
wherein the first image element and/or the second image element comprise a frame-by-frame animation.
14. A computing device, characterized in that the computing device comprises a memory and a processor, the memory having stored therein a computer program, which, when executed by the processor, causes the processor to carry out the steps of the method according to any one of claims 1-12.
15. A computer-readable storage medium, having stored thereon a computer program which, when executed by a processor, causes the processor to carry out the steps of the method of any one of claims 1-12.
CN202010869764.XA 2020-08-26 2020-08-26 Method and device for displaying image elements Pending CN112052416A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010869764.XA CN112052416A (en) 2020-08-26 2020-08-26 Method and device for displaying image elements

Publications (1)

Publication Number Publication Date
CN112052416A true CN112052416A (en) 2020-12-08

Family

ID=73600706

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010869764.XA Pending CN112052416A (en) 2020-08-26 2020-08-26 Method and device for displaying image elements

Country Status (1)

Country Link
CN (1) CN112052416A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112561585A (en) * 2020-12-16 2021-03-26 中国人寿保险股份有限公司 Information service system and method based on graph
CN112770185A (en) * 2020-12-25 2021-05-07 北京达佳互联信息技术有限公司 Method and device for processing Sprite map, electronic equipment and storage medium
CN113538633A (en) * 2021-07-23 2021-10-22 北京达佳互联信息技术有限公司 Animation playing method and device, electronic equipment and computer readable storage medium
CN113792238A (en) * 2021-09-16 2021-12-14 山石网科通信技术股份有限公司 SVG image processing method and device, storage medium and processor
CN114978933A (en) * 2022-05-25 2022-08-30 安天科技集团股份有限公司 Display method and device for display elements of three-dimensional topology

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140359430A1 (en) * 2013-06-03 2014-12-04 Microsoft Corporation Animation editing
CN109582402A (en) * 2017-09-29 2019-04-05 北京金山安全软件有限公司 page display method and device
CN110297996A (en) * 2019-05-21 2019-10-01 深圳壹账通智能科技有限公司 Cartoon display method, device, equipment and storage medium based on the H5 page
CN111428166A (en) * 2020-02-28 2020-07-17 深圳壹账通智能科技有限公司 Page configuration method, page element replacement method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 5 / F, area C, 1801 Hongmei Road, Xuhui District, Shanghai, 201200

Applicant after: Tencent Technology (Shanghai) Co.,Ltd.

Address before: 201200 5th floor, area C, 1801 Hongmei Road, Xuhui District, Shanghai

Applicant before: Tencent Technology (Shanghai) Co.,Ltd.