CN118175367A - Display equipment and content display method - Google Patents


Publication number
CN118175367A
Authority
CN
China
Prior art keywords
display
animation
object node
opengl rendering
node
Prior art date
Legal status
Pending
Application number
CN202410164512.5A
Other languages
Chinese (zh)
Inventor
刘涧
付延松
Current Assignee
Hisense Visual Technology Co Ltd
Original Assignee
Hisense Visual Technology Co Ltd
Application filed by Hisense Visual Technology Co Ltd
Priority application: CN202410164512.5A
Publication: CN118175367A

Landscapes

  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application provides a display device and a content display method. A configuration file is acquired that includes at least one object node. If the object node has at least one animation node, the animation node describes animation information for the object node and the animation time over which that information is executed. Based on the animation information, the animation time, and the initial display state, the display states of the object node at different moments are determined, and OpenGL rendering instructions are generated from those display states. The OpenGL rendering instructions are then sent to an OpenGL rendering engine, which renders the display states of the object node at the different moments on a display, thereby presenting the dynamic display effect of the object node. The dynamic display scheme of the application can realize complex, multi-picture, multi-style animation effects while occupying few system resources.

Description

Display equipment and content display method
Technical Field
The present invention relates to the field of display devices, and in particular, to a display device and a content display method.
Background
A display device is a terminal device capable of outputting a specific display picture. Venues such as shopping malls can use display devices to present the selling points of various products. A typical dynamic display mode for product selling points is as follows: the display device plays a high-resolution video in full screen (for example, a 4K video), and the product selling points are displayed in a floating window over part of the screen.
Full-screen 4K video playback on the display device is implemented through Android View, and playing 4K video with Android View occupies most of the system resources. As a result, few system resources remain for displaying product selling points, so malls usually display them dynamically through a simple multi-picture carousel.
It can be seen that the related-art dynamic display scheme for mall product selling points cannot realize complex, multi-picture, multi-style animation effects.
Disclosure of Invention
The application provides a display device and a content display method, which solve the technical problem that current dynamic display schemes for mall product selling points cannot realize complex, multi-picture, multi-style animation effects.
In a first aspect, some embodiments of the present application provide a display apparatus, including:
A display;
A controller configured to:
Acquiring a configuration file, wherein the configuration file comprises at least one object node, and the object node is used for describing content to be displayed on the display and an initial display state of the object node;
If the object node has at least one animation node, determining the display states of the object node at different moments based on the animation information of the object node described by the animation node, the animation time over which the animation information is executed, and the initial display state;
Generating OpenGL rendering instructions according to display states of the object nodes at different moments;
and sending the OpenGL rendering instruction to an OpenGL rendering engine so that the OpenGL rendering engine renders the display states of the object nodes at different moments on the display.
In a second aspect, some embodiments of the present application provide a content display method, applied to a display device, the content display method including:
Acquiring a configuration file, wherein the configuration file comprises at least one object node, and the object node is used for describing content to be displayed on the display and an initial display state of the object node;
If the object node has at least one animation node, determining the display states of the object node at different moments based on the animation information of the object node described by the animation node, the animation time over which the animation information is executed, and the initial display state;
Generating OpenGL rendering instructions according to display states of the object nodes at different moments;
and sending the OpenGL rendering instruction to an OpenGL rendering engine so that the OpenGL rendering engine renders the display states of the object nodes at different moments on the display.
As can be seen from the above technical solutions, the present application provides a display device and a content display method. A configuration file is acquired that includes at least one object node, where the object node describes the content to be displayed on a display and the initial display state of the object node. If the object node has at least one animation node, the animation node describes the animation information of the object node and the animation time over which that information is executed. Based on the animation information, the animation time, and the initial display state, the display states of the object node at different moments are determined, and OpenGL rendering instructions are generated from those display states. The OpenGL rendering instructions are then sent to an OpenGL rendering engine, which renders the display states of the object node at the different moments on the display according to the instructions, thereby presenting the dynamic display effect of the object node. The dynamic display scheme of the application can realize complex, multi-picture, multi-style animation effects while occupying few system resources.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are needed in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic diagram of an operation scenario between a display device and a control device according to an embodiment of the present application;
Fig. 2 is a block diagram of a hardware configuration of a control device 100 according to an embodiment of the present application;
Fig. 3 is a block diagram of a hardware configuration of a display device 200 according to an embodiment of the present application;
Fig. 4 is a software configuration diagram of a display device 200 according to an embodiment of the present application;
Fig. 5 is a schematic diagram of an application scenario of a related-art product selling point display method;
Fig. 6 is a flowchart of a content display method according to some embodiments of the present application;
Fig. 7 is a schematic diagram of a home interface of a display device 200 according to some embodiments of the present application;
Fig. 8 is a schematic diagram of the main interface of a product selling point display application in a display device 200 according to some embodiments of the present application;
Fig. 9 is a schematic diagram of a display state effect of a product selling point description picture during animation display according to some embodiments of the present application;
Fig. 10 is a schematic diagram of another display state effect of a product selling point description picture during animation display in a display device 200 according to some embodiments of the present application;
Fig. 11 is a system framework diagram of a content display method of a display device 200 according to some embodiments of the present application;
Fig. 12 is a signaling diagram of the system framework of Fig. 11 implementing content display;
Fig. 13 is a schematic diagram of another display state effect of a product selling point description picture during animation display in a display device 200 according to some embodiments of the present application;
Fig. 14 is a schematic diagram of another display state effect of a product selling point description picture during animation display according to some embodiments of the present application;
Fig. 15 is a schematic diagram of another display state effect of a product selling point description picture during animation display in a display device 200 according to some embodiments of the present application;
Fig. 16 is a schematic diagram of another display state effect of a product selling point description picture during animation display in a display device 200 according to some embodiments of the present application;
Fig. 17 is a schematic diagram of another display state effect of a product selling point description picture during animation display according to some embodiments of the present application;
Fig. 18 is a schematic diagram of a framework for implementing a content display method based on a timer according to some embodiments of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, exemplary embodiments of the present application will be described more fully hereinafter with reference to the accompanying drawings. It will be apparent that the exemplary embodiments described are only some, but not all, embodiments of the application.
It should be noted that the brief description of terminology in the present application is only intended to facilitate understanding of the embodiments described below, and is not intended to limit the embodiments of the present application. Unless otherwise indicated, these terms should be construed in their ordinary and customary meaning.
The terms "first", "second", "third", and the like in the description, the claims, and the above figures are used to distinguish between similar objects or entities, and do not necessarily describe a particular sequential or chronological order unless otherwise indicated. It is to be understood that the terms so used are interchangeable under appropriate circumstances.
The terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a product or apparatus that comprises a list of elements is not necessarily limited to those elements, but may include other elements not expressly listed or inherent to such product or apparatus.
The display device provided by the embodiment of the application can have various implementation forms, for example, a television, an intelligent television, a laser projection device, a display (monitor), an electronic whiteboard (electronic bulletin board), an electronic desktop (electronic table) and the like.
Fig. 1 and 3 are specific embodiments of a display device of the present application.
Fig. 1 is a schematic diagram of an operation scenario between a display device and a control device in an exemplary embodiment of the present application. As shown in fig. 1, a user may operate the display device 200 through the mobile terminal 300 and the control device 100.
In some embodiments, the control apparatus 100 may be a remote controller. Communication between the remote controller and the display device includes infrared protocol communication, Bluetooth protocol communication, and other short-range communication modes, and the display device 200 is controlled wirelessly or by wire. The user may control the display device 200 by inputting user instructions through keys on the remote control, voice input, control panel input, etc.
In some embodiments, a smart device 300 (e.g., mobile terminal, tablet, computer, notebook, etc.) may also be used to control the display device 200. For example, the display device 200 is controlled using an application running on a smart device.
In some embodiments, the display device may receive instructions without using the smart device or control apparatus described above, and instead receive user control through touch, gestures, or the like.
In some embodiments, the display device 200 may also be controlled in manners other than through the control apparatus 100 and the smart device 300. For example, a module configured inside the display device 200 for acquiring voice commands may directly receive the user's voice command control, or a voice control device configured outside the display device 200 may receive the user's voice command control.
In some embodiments, the display device 200 is also in data communication with a server 400. The display device 200 may establish communication connections via a local area network (LAN), a wireless local area network (WLAN), or other networks. The server 400 may provide various contents and interactions to the display device 200. The server 400 may be one cluster or multiple clusters, and may include one or more types of servers.
Fig. 2 exemplarily shows a hardware configuration block diagram of the control apparatus 100 according to an exemplary embodiment. As shown in Fig. 2, the control apparatus 100 includes a controller 110, a communication interface 130, a user input/output interface 140, a memory, and a power supply. The control apparatus 100 may receive a user's input operation instruction, convert the operation instruction into an instruction that the display device 200 can recognize and respond to, and mediate the interaction between the user and the display device 200. As shown in Fig. 3, the display device 200 includes at least one of a modem 210, a communicator 220, a detector 230, an external device interface 240, a processor 250, a display 260, an audio output interface 270, a memory, a power supply, and a user interface.
In some embodiments, the processor includes a video processor, an audio processor, a graphics processor, RAM, ROM, and first to nth interfaces for input/output. The display 260 includes a display screen component for presenting a picture and a driving component for driving image display; it receives image signals output from the processor and displays video content, image content, menu manipulation interfaces, and user manipulation UI interfaces. The display 260 may be a liquid crystal display, an OLED display, a projection device, or a projection screen.
The communicator 220 is a component for communicating with external devices or servers according to various communication protocol types. For example: the communicator may include at least one of a Wifi module, a bluetooth module, a wired ethernet module, or other network communication protocol chip or a near field communication protocol chip, and an infrared receiver. The display device 200 may establish transmission and reception of control signals and data signals with the external control device 100 or the server 400 through the communicator 220.
A user interface, which may be used to receive control signals from the control device 100 (e.g., an infrared remote control, etc.).
The detector 230 is used to collect signals of the external environment or signals of interaction with the outside. For example, the detector 230 may include a light receiver, a sensor for capturing the intensity of ambient light; or an image collector, such as a camera, which may be used to collect external environment scenes, user attributes, or user interaction gestures; or a sound collector, such as a microphone, for receiving external sounds.
The external device interface 240 may include, but is not limited to, the following: high Definition Multimedia Interface (HDMI), analog or data high definition component input interface (component), composite video input interface (CVBS), USB input interface (USB), RGB port, etc. The input/output interface may be a composite input/output interface formed by a plurality of interfaces.
The modem 210 receives broadcast television signals in a wired or wireless manner and demodulates audio/video signals and EPG data signals from the plurality of wireless or wired broadcast television signals.
In some embodiments, the processor 250 and the modem 210 may be located in separate devices, i.e., the modem 210 may also be located in an external device to the host device in which the processor 250 is located, such as an external set-top box or the like.
The processor 250 controls the operation of the display device and responds to the user's operations by various software control programs stored on the memory. The processor 250 controls the overall operation of the display device 200. For example: in response to receiving a user command to select to display a UI object on the display 260, the processor 250 may perform operations related to the object selected by the user command.
In some embodiments, the processor includes at least one of a central processing unit (CPU), a video processor, an audio processor, a graphics processing unit (GPU), RAM (Random Access Memory), ROM (Read-Only Memory), first to nth interfaces for input/output, a communication bus, and the like.
The user may input a user command through a Graphical User Interface (GUI) displayed on the display 260, and the user input interface receives the user input command through the Graphical User Interface (GUI). Or the user may input the user command by inputting a specific sound or gesture, the user input interface recognizes the sound or gesture through the sensor, and receives the user input command.
A "user interface" is a media interface for interaction and exchange of information between an application or operating system and a user, which enables conversion between an internal form of information and a user-acceptable form. A commonly used presentation form of a user interface is a graphical user interface (Graphic User Interface, GUI), which refers to a graphically displayed user interface that is related to computer operations. It may be an interface element such as an icon, a window, a control, etc. displayed in a display screen of the electronic device, where the control may include a visual interface element such as an icon, a button, a menu, a tab, a text box, a dialog box, a status bar, a navigation bar, a Widget, etc.
As shown in fig. 4, the system of the display device is divided into three layers, an application layer, a middleware layer, and a hardware layer, from top to bottom.
The application layer mainly comprises common applications on the television and an application framework (Application Framework). The common applications are mainly applications developed based on a browser, such as HTML5 apps, and native applications (Native Apps).
The application framework (Application Framework) is a complete program model with all the basic functions required by standard application software, such as file access and data exchange, together with the interfaces for using these functions (toolbars, status bars, menus, dialog boxes).
Native applications (Native Apps) may support online or offline operation, message pushing, and local resource access.
The middleware layer includes middleware such as various television protocols, multimedia protocols, and system components. The middleware can use basic services (functions) provided by the system software to connect various parts of the application system or different applications on the network, so that the purposes of resource sharing and function sharing can be achieved.
The hardware layer mainly comprises the HAL interface, hardware, and drivers. The HAL interface is a unified interface against which all television chips dock, with the specific logic implemented by each chip. The drivers mainly include: audio driver, display driver, Bluetooth driver, camera driver, WiFi driver, USB driver, HDMI driver, sensor drivers (e.g., fingerprint sensor, temperature sensor, pressure sensor), power supply driver, etc.
Places such as malls need to use the display device 200 to display the selling points of various products. A typical dynamic display mode for product selling points is as follows: the display device 200 plays a high-resolution video in full screen, for example a 4K video, and the product selling points are displayed in a floating window over part of the screen. Because full-screen 4K video playback on the display device 200 is implemented through Android View, and playing 4K video with Android View occupies most of the system resources, few system resources remain for displaying the product selling points; malls therefore usually display product selling points dynamically through a multi-picture carousel.
For example, as shown in the user interface schematic diagram of the display device 200 in Fig. 5, on the basis of full-screen 4K video playback, the selling point description pictures are displayed in a floating window: several description pictures are usually preset and then shown dynamically in carousel. Thus, in order to reduce the occupation of system resources, the related-art dynamic display scheme for mall product selling points cannot realize complex, multi-picture, multi-style animation effects.
In view of the above, some embodiments of the present application provide a content display method applied to the display apparatus 200. In order to facilitate understanding of the technical solutions in some embodiments of the present application, the following details of each step are described with reference to some specific embodiments and the accompanying drawings. Fig. 6 is a flowchart illustrating a content display method performed by the display device 200 according to some embodiments of the present application.
As shown in Fig. 6, the controller 250 in some embodiments of the application is configured to perform the following steps:
Step S601: obtain a configuration file, where the configuration file includes at least one object node and at least one animation node corresponding to the object node, and the object node describes the content to be displayed on the display and the initial display state of the object node.
The content display method of the application can be applied to the dynamic display of product selling points in places such as malls, so the application scenario may be as follows: after the display device 200 is turned on, a home page interface is displayed on the display, and the home page interface includes application icons as shown in Fig. 7. The application icons shown in Fig. 7 include at least an icon of an application for dynamically displaying product selling points, such as Application 1. In response to a start instruction input by the user by clicking the icon of Application 1, Application 1 is started, and the display jumps from the home page interface to the application interface of Application 1 as shown in Fig. 8. The application interface of Application 1 further comprises a plurality of icons for displaying animations: Animation 1, Animation 2, Animation 3, and so on. In response to a play instruction input by the user by clicking the icon of Animation 1, the configuration file 1 corresponding to Animation 1 is acquired; in response to a play instruction input by the user by clicking the icon of Animation 2, the configuration file 2 corresponding to Animation 2 is acquired; and so on.
The configuration file is a file used for dynamically showing product selling points. It may be stored in a memory internal to the display device 200 or in a memory external to the display device 200. Configuration files may also be stored in an external (plug-in) device, such as a server of the platform associated with dynamically displaying product selling points, where the user may edit different configuration files. If the configuration file is stored in the external device, the display device 200 sends a configuration file request to that device, and the device feeds the configuration file back to the display device 200 according to the request.
It should be noted that, in the present application, a configuration file may include a correspondence between an icon identifier (the identifier of an icon that displays an animation) and an animation description file, and may further include the animation description file content itself, in which case the content can be obtained directly from the configuration file. Alternatively, the configuration file may include only the correspondence between the icon identifier and the animation description file, with the animation description file content stored elsewhere; in that case the correspondence is looked up in the configuration file, and the content is then obtained from the other memory according to the correspondence.
In some embodiments, a plurality of configuration files may be set in the display device 200 or the external device, and the configuration files may be ranked by priority; that is, when multiple configuration files match the icon identifier, the configuration file with the highest priority is obtained according to the priority ranking.
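The priority-based selection described above can be sketched in Python. This is a minimal illustration only; the dict representation and the `priority` field (larger value meaning higher priority) are assumptions, not the patent's actual format.

```python
def pick_config(candidates):
    """Return the highest-priority configuration file from a list of
    matching candidates. Each candidate is a dict with a hypothetical
    'priority' field; larger values win."""
    return max(candidates, key=lambda c: c["priority"])

# Three configuration files matching the same icon identifier.
configs = [
    {"name": "config_a", "priority": 1},
    {"name": "config_b", "priority": 3},
    {"name": "config_c", "priority": 2},
]
print(pick_config(configs)["name"])  # → config_b
```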
Since the content display method is applied to the dynamic display of product selling points, the configuration file comprises at least one object node, and the object node describes the content to be displayed on the display and the initial display state of the object node. The content to be displayed may be elements such as pictures, text, and graphics, and the initial display state of the object node may be determined by information such as display position, display size, and display transparency. The display position may be determined by the on-screen coordinates of the picture, text, or graphic elements; the display size may be determined by dimension information such as length, width, height, and side length; and the display transparency may be determined by transparency information.
Each object node in the configuration file has its own initial display state. For example, if the configuration file includes picture 1 and text 1, the initial display state of picture 1 may be: displayed in the upper right corner of the screen (center coordinates x1, y1), with size {L1, H1} and transparency 100%; the initial display state of text 1 may be: displayed in the upper right corner of the screen (center coordinates x2, y2), with size {L2, H2} and transparency 100%.
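The patent does not fix a concrete on-disk format for the configuration file. As an illustration only, a hypothetical JSON encoding of a two-node configuration (a picture with one animation node and a static text node) might look like the following; every field name here is an assumption, not the patent's actual schema.

```python
import json

# Hypothetical JSON configuration: two object nodes, each with an
# initial display state (center position, size, transparency) and a
# list of animation nodes (empty for a static node).
config_text = """
{
  "objects": [
    {"id": "picture1", "type": "image", "source": "selling_point.png",
     "initial": {"center": [100, 50], "size": [200, 120], "alpha": 1.0},
     "animations": [
       {"info": "translate", "delta": [10, 0], "start": 2.0, "duration": 2.0}
     ]},
    {"id": "text1", "type": "text", "content": "Quantum-dot panel",
     "initial": {"center": [100, 80], "size": [180, 40], "alpha": 1.0},
     "animations": []}
  ]
}
"""

config = json.loads(config_text)
for node in config["objects"]:
    kind = "animated" if node["animations"] else "static"
    print(node["id"], kind)  # picture1 animated / text1 static
```

A node with an empty `animations` list would follow the static path described in step S602 below (rendered once in its initial state), while a node with animation entries drives per-moment state updates.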
Step S602: if the object node has at least one animation node, determine the display states of the object node at different moments based on the animation information of the object node described by the animation node, the animation time over which the animation information is executed, and the initial display state.
After the controller obtains the configuration file, it needs to determine whether the object node has an animation node. The object node is then displayed in different ways, with different tools, depending on whether it has an animation node.
If the object node does not have an animation node, this indicates that the product selling point does not currently need to be displayed dynamically and a static picture suffices. The object node can therefore be rendered directly on the display in its initial display state: specifically, a PNG (Portable Network Graphics) picture can be generated from the object node in its initial display state, and the PNG picture is then displayed with a picture viewer.
If the object node has an animation node, the animation node describes the animation information of the object node and the animation time at which the animation information is executed. The animation information is operation information that modifies the display state of the object node, and the animation time is the time at which the specific operation information is executed. For example, if the animation node describes that the object node executes a moving operation at a first time point, the moving operation is the animation information and the first time point is its animation time; if the animation node describes that the object node executes a transparency-changing operation at a second time point, the transparency-changing operation is the animation information and the second time point is its animation time.
After the animation information and the animation time described by the animation node are obtained, the display states of the object node at different moments can be determined based on the animation information, the animation time, and the initial display state of the object node.
For example, in the example of fig. 9, the object node is a product selling point description picture whose initial display state is in the upper right corner of the screen (center coordinates x1, y1), with size {L1, H1} and transparency 100%. The object node has an animation node describing that the object node starts a moving operation (e.g., moving horizontally to the right by 10) at one time point (e.g., the 2nd second), and the moving operation lasts 2 seconds (i.e., it takes 2 seconds to move from the initial position to the end position). Based on this animation information, the initial display state of the object node is: upper right corner of the screen (center coordinates x1, y1), size {L1, H1}, transparency 100%; and the display state of the object node at the 4th second, as shown in fig. 10, is: upper right corner of the screen (center coordinates x1+10, y1), size {L1, H1}, transparency 100%.
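The per-moment states in this example can be sketched as linear interpolation over the animation time (a sketch only: linear easing and the concrete numeric coordinates are assumptions, since the patent gives positions symbolically as x1, y1):

```python
def display_state(initial, delta, start, duration, t):
    """Center position of the object node at time t.

    initial: (x, y) center in the initial display state; delta: total
    displacement of the moving operation; start/duration: the animation
    time described by the animation node. Before `start` the node is
    motionless; after `start + duration` it stays at the end position.
    """
    progress = min(max((t - start) / duration, 0.0), 1.0)  # clamp to [0, 1]
    return (initial[0] + delta[0] * progress,
            initial[1] + delta[1] * progress)

x1, y1 = 960, 120  # placeholders for the example's symbolic (x1, y1)
# Move right by 10 starting at the 2nd second, lasting 2 seconds.
for t in (0, 2, 3, 4):
    print(t, display_state((x1, y1), (10, 0), start=2.0, duration=2.0, t=t))
# 0 (960.0, 120.0)
# 2 (960.0, 120.0)
# 3 (965.0, 120.0)
# 4 (970.0, 120.0)
```

The state at the 3rd second lands halfway (x1+5), matching the intermediate frame described in the examples below.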
Step S603, generating OpenGL rendering instructions according to display states of the object nodes at different moments;
Step S604, sending the OpenGL rendering instruction to an OpenGL rendering engine, so that the OpenGL rendering engine renders the display states of the object node at different moments on the display.
OpenGL (Open Graphics Library) is a cross-language, cross-platform application programming interface for rendering 2D and 3D vector graphics, released by Silicon Graphics, Inc. (SGI) on June 30, 1992. The interface is typically used to interact with a graphics processing unit to achieve hardware acceleration. OpenGL is commonly used in CAD, virtual reality, scientific visualization programs, and video game development. OpenGL ES is a subset of the OpenGL three-dimensional graphics API designed for embedded devices such as cell phones, PDAs, and game consoles.
OpenGL is a state machine: OpenGL records its own state (e.g., the currently used color, whether blending is enabled), receives input (calling an OpenGL function can be regarded as OpenGL receiving input), updates the current state according to the input and the initial state, and then displays the current state. An OpenGL rendering engine built on OpenGL can thus render the input it receives on the display.
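The state-machine idea can be illustrated with a minimal analogy (plain Python, not actual OpenGL calls; the comments only name the GL functions each method is loosely analogous to):

```python
class TinyStateMachine:
    """Minimal analogy for OpenGL's state-machine model: each 'call' is
    input that updates the recorded state; rendering then reads whatever
    state is current at that moment."""

    def __init__(self):
        # Initial state: default color, blending off.
        self.state = {"color": (1.0, 1.0, 1.0), "blend": False}

    def set_color(self, rgb):   # loosely analogous to glColor*
        self.state["color"] = rgb

    def enable_blend(self):     # loosely analogous to glEnable(GL_BLEND)
        self.state["blend"] = True

gl = TinyStateMachine()
gl.set_color((1.0, 0.0, 0.0))   # "input" updates the current color state
gl.enable_blend()
print(gl.state["blend"])  # True
```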
Based on this principle, after the display states of the object node at different moments are determined, OpenGL rendering instructions can be generated from those display states and sent to the OpenGL rendering engine, and the OpenGL rendering engine renders the display states of the object node at different moments on the display according to the content carried by the instructions.
For example, an OpenGL rendering instruction may be generated from the initial display state of the object node and its display state at the 4th second, with the instruction describing both states. The OpenGL rendering instruction is then sent to the OpenGL rendering engine. Because the instruction carries the initial display state (the display state at the 0th second) and the display state at the 4th second, the OpenGL rendering engine can determine the display state of the object node at the 1st second, the 2nd second (the object node remains motionless for the first two seconds), the 3rd second, and the 4th second, and then render the image frame actually required at each moment. By rendering these image frames on the display, the OpenGL rendering engine presents the animation effect.
For example, in the example of fig. 9, the object node is a product selling point description picture whose initial display state (0th second) is: upper right corner of the screen (center coordinates x1, y1), size {L1, H1}, transparency 100%; its display state at the 4th second is: upper right corner of the screen (center coordinates x1+10, y1), size {L1, H1}, transparency 100%. It can thus be determined that the display state at the 1st second is: upper right corner of the screen (center coordinates x1, y1), size {L1, H1}, transparency 100%; the display state at the 2nd second is the same; and the display state at the 3rd second is: upper right corner of the screen (center coordinates x1+5, y1), size {L1, H1}, transparency 100%. The OpenGL rendering engine then renders the image frame at each moment according to these descriptions (5 image frames rendered onto the interface in time order), producing a 4-second animation display effect. In this example the object node has moved out of the interface by the 4th second, so the final animation is equivalent to an exit animation for the product selling point description picture. The application thus realizes animation rendering based on the OpenGL rendering engine, achieving complex animation effects while occupying few system resources.
A specific implementation framework of the foregoing embodiment is shown in fig. 11. The framework includes a memory, a picture playing control system, and an Android OpenGL ES image rendering engine. The specific implementation process is shown in fig. 12: the picture playing control system obtains the configuration file from the memory, then parses it to generate an object model describing the pictures and an animation model describing the animations of the objects. The picture playing control system computes the state of each object in the interface at each moment via a timer, generates OpenGL rendering instructions, and sends them to the Android OpenGL ES image rendering engine. Finally, the Android OpenGL ES image rendering engine renders the animation on the screen according to the OpenGL rendering instructions.
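The flow of fig. 12 — parse the configuration, compute per-tick states with a timer, and hand one rendering instruction per tick to the engine — can be sketched as follows (all names, the one-second tick, and the configuration layout are assumptions for illustration; `render_engine` stands in for the Android OpenGL ES image rendering engine):

```python
import json

def run_picture_play_control(config_json, render_engine, ticks):
    """Sketch of fig. 12's control flow: parse the configuration file,
    compute the object's state at each timer tick, and send one (mock)
    rendering instruction per tick to the rendering engine."""
    node = json.loads(config_json)["objects"][0]
    x0, y0 = node["initial"]["center"]
    anim = node["animations"][0]
    for t in range(ticks + 1):                 # timer: 1 tick per second
        p = min(max((t - anim["start"]) / anim["duration"], 0.0), 1.0)
        instr = {"time": t, "center": (x0 + anim["delta"][0] * p, y0)}
        render_engine(instr)                   # "send OpenGL instruction"

frames = []  # collect the instructions a real engine would receive
run_picture_play_control(
    '{"objects":[{"initial":{"center":[960,120]},'
    '"animations":[{"delta":[10,0],"start":2,"duration":2}]}]}',
    frames.append, ticks=4)
print([f["center"][0] for f in frames])  # [960.0, 960.0, 960.0, 965.0, 970.0]
```

In the real framework the callable would package the states into actual OpenGL draw calls rather than dictionaries.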
As can be seen from the above technical solution, the content display method executed by the display device 200 provided by the present application is applied to a controller. The controller obtains a configuration file that includes at least one object node, where the object node describes the content to be displayed on the display and the initial display state of the object node. If the object node has at least one animation node, the animation node describes the animation information of the object node and the animation time at which the animation information is executed. Based on the animation information, the animation time, and the initial display state, the display states of the object node at different moments are determined, and OpenGL rendering instructions are generated from those display states. The OpenGL rendering instructions are then sent to an OpenGL rendering engine, which renders the display states of the object node at different moments on the display according to the instructions, that is, presents the dynamic display effect of the object node. The dynamic display scheme of the application can realize complex animation effects of multiple pictures and multiple styles while occupying few system resources.
If the object node has at least one animation node, the content display method shown in fig. 6 includes the following cases when applied:
The first case is that the configuration file includes only one object node, and the object node has only one animation node. In this case an OpenGL rendering instruction need only be generated from that one object node and one animation node, and the OpenGL rendering engine renders the animation on the display according to them alone.
For example, in the example of fig. 9, the object node is a product selling point description picture whose initial display state (0th second) is: upper right corner of the screen (center coordinates x1, y1), size {L1, H1}, transparency 100%; its display state at the 4th second is: upper right corner of the screen (center coordinates x1+10, y1), size {L1, H1}, transparency 100%. It can thus be determined that the display state at the 1st second is: upper right corner of the screen (center coordinates x1, y1), size {L1, H1}, transparency 100%; the display state at the 2nd second is the same; and the display state at the 3rd second is: upper right corner of the screen (center coordinates x1+5, y1), size {L1, H1}, transparency 100%. The OpenGL rendering engine then renders the image frame at each moment according to these descriptions (5 image frames rendered onto the interface in time order), producing a 4-second animation display effect.
The second case is that the configuration file includes one object node having a plurality of animation nodes: a third animation node and a fourth animation node. The third animation node describes third animation information of the object node and the third animation time at which it is executed, and the fourth animation node describes fourth animation information of the object node and the fourth animation time at which it is executed. The display states of the object node at different moments can then be determined based on the third animation information, the third animation time, the fourth animation information, the fourth animation time, and the initial display state. OpenGL rendering instructions are generated from these display states and sent to the OpenGL rendering engine, which renders the display states of the object node at different moments on the display. That is, when one object node has a plurality of animation nodes, its display states at different moments are determined from all of those animation nodes, and rendering those states presents the dynamic display effect of the object node.
The third animation time includes a third start time and a third end time, and the fourth animation time includes a fourth start time and a fourth end time. If the third animation time and the fourth animation time are identical (the start time is identical, and the end time is also identical), that is, the third animation information and the fourth animation information for the object node are simultaneously performed, the display state of the object node can be simultaneously changed based on the third animation information and the fourth animation information.
For example, as in the example of fig. 9, the object node is a product selling point description picture whose initial display state is in the upper right corner of the screen (center coordinates x1, y1), with size {L1, H1} and transparency 100%. The third animation information of the picture is to move right by 10 starting at the 2nd second, with the movement lasting 2 seconds. The fourth animation information is to change the transparency to 0% starting at the 2nd second, with the transparency change lasting 2 seconds. The animation times of the third and fourth animation information are identical, so the display state of the picture is changed based on both simultaneously.
Based on the third and fourth animation information, the display state of the product selling point description picture at the 4th second is determined to be: upper right corner of the screen (center coordinates x1+10, y1), size {L1, H1}, transparency 0%. It can also be determined that the display state at the 1st second is still: upper right corner of the screen (center coordinates x1, y1), size {L1, H1}, transparency 100%; the display state at the 2nd second is the same; and the display state at the 3rd second is: upper right corner of the screen (center coordinates x1+5, y1), size {L1, H1}, transparency 50% (the effect presented is shown in fig. 13). The OpenGL rendering engine then renders the image frame at each moment according to these descriptions (5 image frames rendered onto the interface in time order), again producing a 4-second animation display effect. The final presentation is an animation in which the picture fades out while it moves.
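The simultaneous case — two animation nodes with identical animation times acting on one object node — can be sketched like this (a hedged sketch: linear easing and the placeholder numeric coordinates are assumptions):

```python
def clamp01(v):
    return min(max(v, 0.0), 1.0)

def state_at(t, x0, alpha0, move, fade):
    """Apply two animation nodes with identical animation times to one
    object node, so both operations progress together.
    move = (delta_x, start, duration); fade = (target_alpha, start, duration).
    """
    mp = clamp01((t - move[1]) / move[2])   # progress of the moving operation
    fp = clamp01((t - fade[1]) / fade[2])   # progress of the transparency change
    return {"x": x0 + move[0] * mp,
            "alpha": alpha0 + (fade[0] - alpha0) * fp}

# Move right by 10 and fade 100% -> 0%, both from the 2nd to the 4th second.
s3 = state_at(3, x0=960, alpha0=1.0, move=(10, 2, 2), fade=(0.0, 2, 2))
print(s3)  # {'x': 965.0, 'alpha': 0.5}
```

At the 3rd second both operations are half complete, matching the x1+5, 50% transparency intermediate frame described above.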
If the third animation time and the fourth animation time are not identical, that is, the third animation information and the fourth animation information for the object node are separately performed, the display state of the object node may be separately changed based on the third animation information and the fourth animation information. The third animation time and the fourth animation time not being identical may include: the third start time and the fourth start time are the same and the third end time and the fourth end time are different; the third start time and the fourth start time are different and the third end time and the fourth end time are the same; the third start time and the fourth start time are different and the third end time and the fourth end time are different. The following description will be made taking an example in which the third start time and the fourth start time are different and the third end time and the fourth end time are different.
If the third start time differs from the fourth start time and the third end time differs from the fourth end time, the display state of the object node must be changed using the third and fourth animation information separately. To simplify the explanation, the case where the fourth start time equals the third end time is described below. In that case, the display states of the object node at different moments during the first change process are first determined based on the third animation information, the third animation time, and the initial display state. A third OpenGL rendering instruction is then generated from those display states, and the OpenGL rendering engine renders the animation effect of the first change process on the display according to it. The object node presents a third display state at the end of the first change process.
And then determining the display states of the object nodes at different moments in the second change process based on the fourth animation information, the fourth animation time and the third display state. And then generating a fourth OpenGL rendering instruction according to the display states at different moments in the second change process, and rendering the animation effect of the second change process on the display by the OpenGL rendering engine according to the OpenGL rendering instruction. And the object node presents a fourth display state when the second change process is finished.
For example, as in the example of fig. 9, the object node is a product selling point description picture whose initial display state is in the upper right corner of the screen (center coordinates x1, y1), with size {L1, H1} and transparency 100%. The third animation information is to move right by 10 starting at the 2nd second, with the movement lasting 2 seconds. The fourth animation information is to move left by 10 starting at the 4th second (the product selling point description picture re-enters the interface from the right side), with the movement lasting 2 seconds. Since the animation times of the third and fourth animation information differ, the display state of the picture must be changed based on each of them separately.
The display state of the product selling point description picture at the 4th second is determined to be: upper right corner of the screen (center coordinates x1+10, y1), size {L1, H1}, transparency 100%. The display state at the 1st second is: upper right corner of the screen (center coordinates x1, y1), size {L1, H1}, transparency 100%; the display state at the 2nd second is the same; and the display state at the 3rd second is: upper right corner of the screen (center coordinates x1+5, y1), size {L1, H1}, transparency 100%. The OpenGL rendering engine then renders the image frame at each moment according to these descriptions (5 image frames rendered onto the interface in time order), again producing a 4-second animation display effect (the animation finally displayed is the product selling point description picture moving from the position of fig. 9 to the position of fig. 10).
Since the fourth start time equals the third end time, after this first display state change the display state of the product selling point description picture is changed based on the fourth animation information. Specifically, the display state at the 6th second is determined to be: upper right corner of the screen (center coordinates x1, y1), size {L1, H1}, transparency 100%; and the display state at the 5th second is: upper right corner of the screen (center coordinates x1+5, y1), size {L1, H1}, transparency 100%. The OpenGL rendering engine then renders the image frame at each moment (2 image frames rendered onto the interface in time order), producing a further 2-second animation display effect. Based on the third and fourth animation information, the final presentation is an animation in which the product selling point description picture moves out of the interface to the right of the display device and then moves back into the interface from the right.
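The sequential case can be sketched by chaining the moves, each starting from the end state of the previous one (again assuming linear easing and placeholder coordinates):

```python
def clamp01(v):
    return min(max(v, 0.0), 1.0)

def x_at(t, x0, animations):
    """Chain animation nodes whose animation times do not overlap: each
    move progresses independently, so a finished move contributes its full
    delta and the next move continues from that end state.
    animations: list of (delta_x, start, duration); layout is illustrative."""
    x = x0
    for delta, start, duration in animations:
        x += delta * clamp01((t - start) / duration)
    return x

# Move right 10 at the 2nd second (2 s), then left 10 at the 4th second (2 s).
moves = [(10, 2, 2), (-10, 4, 2)]
print([x_at(t, 960, moves) for t in range(7)])
# [960.0, 960.0, 960.0, 965.0, 970.0, 965.0, 960.0]
```

The trajectory goes out to x1+10 by the 4th second and returns to x1 by the 6th, matching the exit-then-re-enter animation described above.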
In the second case there is also a scenario in which the fourth start time is not equal to the third end time. For example, the third end time of the third animation information is the 6th second, while the fourth start time of the fourth animation information is the 4th second and the fourth end time is also the 6th second. That is, while the picture is still moving, the operation of changing its transparency begins, and the animation finally presented changes the transparency of the product selling point description picture while it moves (the picture moves from the position shown in fig. 9 to the position shown in fig. 10 and back from the position of fig. 10 to that of fig. 9).
The third case is that the configuration file includes a plurality of object nodes. For example, a first object node has a first initial display state and a first animation node, the first animation node describing first animation information of the first object node and the first animation time at which it is executed; a second object node has a second initial display state and a second animation node, the second animation node describing second animation information of the second object node and the second animation time at which it is executed.
Wherein the first animation time of the first object node comprises a first start time and a first end time, and the second animation time of the second object node comprises a second start time and a second end time. It is thus possible to determine the display states of the first object node at different times based on the first animation information, the first animation time and the first initial display state, and to determine the display states of the second object node at different times based on the second animation information, the second animation time and the second initial display state. And then generating a first OpenGL rendering instruction according to the display states of the first object node at different moments, and generating a second OpenGL rendering instruction according to the display states of the second object node at different moments. The OpenGL rendering engine may render display states of the first object node and the second object node at different moments on the display according to the first OpenGL rendering instruction and the second OpenGL rendering instruction, respectively.
The first animation time includes a first start time and a first end time, and the second animation time includes a second start time and a second end time. If the first animation time and the second animation time are identical (the start time is identical, and the end time is also identical), that is, the first animation information for the first object node and the second animation information for the second object node are simultaneously performed, the display state of the first object node can be changed based on the first animation information while the display state of the second object node can be changed based on the second animation information.
For example, as in the example of fig. 14, the first object node is a product selling point description picture, the second object node is a product selling point description text, and the text may be overlaid on the picture. The initial display state of the picture is in the upper right corner of the screen (center coordinates x1, y1), with size {L1, H1} and transparency 100%; its first animation information is to move right by 10 starting at the 2nd second, with the movement lasting 2 seconds. The initial display state of the text is in the upper right corner of the screen (center coordinates x2, y2), with size {L2, H2} and transparency 100%; its second animation information is to move right by 10 starting at the 2nd second, with the movement lasting 2 seconds. The animation times of the first and second animation information are identical, so the display state of the text can be changed based on the second animation information while the display state of the picture is changed based on the first animation information.
The display state of the product selling point description picture at the 4th second is determined to be: upper right corner of the screen (center coordinates x1+10, y1), size {L1, H1}, transparency 100%. Its display state at the 1st second is: upper right corner of the screen (center coordinates x1, y1), size {L1, H1}, transparency 100%; the display state at the 2nd second is the same; and the display state at the 3rd second is: upper right corner of the screen (center coordinates x1+5, y1), size {L1, H1}, transparency 100%.
The display state of the product selling point description text at the 4th second is determined to be: upper right corner of the screen (center coordinates x2+10, y2), size {L2, H2}, transparency 100%. Its display state at the 1st second is: upper right corner of the screen (center coordinates x2, y2), size {L2, H2}, transparency 100%; the display state at the 2nd second is the same; and the display state at the 3rd second is: upper right corner of the screen (center coordinates x2+5, y2), size {L2, H2}, transparency 100%.
The OpenGL rendering engine then renders the image frame at each moment according to the display-state descriptions of both the product selling point description picture and the product selling point description text (5 image frames rendered onto the interface in time order), again producing a 4-second animation display effect. The final presentation is an animation in which the picture and the text move together, i.e., the product selling point description picture and the product selling point description text move from the position shown in fig. 14 to the position shown in fig. 15.
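The multi-object case — a picture and an overlaid text sharing identical animation times — can be sketched as computing every object node's state for each frame (placeholder coordinates; linear easing assumed):

```python
def clamp01(v):
    return min(max(v, 0.0), 1.0)

def frame(t, nodes):
    """One image frame: the x position of every object node at time t.
    Each node is (x0, delta_x, start, duration); layout is illustrative."""
    return [x0 + delta * clamp01((t - start) / duration)
            for x0, delta, start, duration in nodes]

# Picture and overlaid text share identical animation times, so they move
# together: both shift right by 10 from the 2nd to the 4th second.
picture = (960, 10, 2, 2)   # placeholder standing in for (x1, ...)
text    = (980, 10, 2, 2)   # placeholder standing in for (x2, ...)
print([frame(t, [picture, text]) for t in (2, 3, 4)])
# [[960.0, 980.0], [965.0, 985.0], [970.0, 990.0]]
```

Because both nodes advance by the same progress fraction each frame, the text stays at a fixed offset from the picture throughout the animation.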
If the first animation time and the second animation time are not identical, that is, the first animation information for the first object node and the second animation information for the second object node are separately performed, the display states of the first object node and the second object node may be separately changed based on the first animation information and the second animation information. The first animation time and the second animation time not being identical may include: the first start time and the second start time are the same and the first end time and the second end time are different; the first start time and the second start time are different and the first end time and the second end time are the same; the first start time and the second start time are different and the first end time and the second end time are different. The following description will be made taking an example that the first start time and the second start time are different and the first end time and the second end time are different.
If the first start time and the second start time are different and the first end time and the second end time are different, it is necessary to change the display state of the first object node with the first animation information and the display state of the second object node with the second animation information, respectively. To further facilitate the description of the scheme, the scheme is described below taking the example that the second start time is equal to the first end time. If the second start time is equal to the first end time, the display states of the first object node at different moments need to be determined based on the first animation information, the first animation time and the first initial display state. And then generating a first OpenGL rendering instruction according to the display states of the first object node at different moments, and rendering the animation effect of the first object node on a display by the OpenGL rendering engine according to the first OpenGL rendering instruction.
And then determining the display state of the second object node at different moments based on the second animation information, the second animation time and the second initial display state. And generating a second OpenGL rendering instruction according to the display states of the second object node at different moments, and rendering the animation effect of the second object node on a display by the OpenGL rendering engine according to the second OpenGL rendering instruction.
For example, as in the example of fig. 14, the first object node is a product selling point description text, the second object node is a product selling point description picture, and the text may be overlaid on the picture. The initial display state of the text is in the upper right corner of the screen (center coordinates x2, y2), with size {L2, H2} and transparency 100%; its first animation information is to move up by 5 starting at the 2nd second, with the movement lasting 2 seconds. The initial display state of the picture is in the upper right corner of the screen (center coordinates x1, y1), with size {L1, H1} and transparency 100%; its second animation information is to move right by 5 starting at the 4th second, with the movement lasting 2 seconds.
The display state of the product selling point description text at the 4th second is determined to be: upper right corner of the screen (center coordinates x2, y2+5), size {L2, H2}, transparency 100%. Its display state at the 1st second is: upper right corner of the screen (center coordinates x2, y2), size {L2, H2}, transparency 100%; the display state at the 2nd second is the same; and the display state at the 3rd second is: upper right corner of the screen (center coordinates x2, y2+2.5), size {L2, H2}, transparency 100%.
After the animation rendering of the product selling point description text is completed, the display state of the product selling point description picture at the 6th second is determined, based on the second animation information, to be: upper right corner of the screen (center coordinates x1+5, y1), size {L1, H1}, transparency 100%. The display states of the picture from the 0th to the 4th second are: upper right corner of the screen (center coordinates x1, y1), size {L1, H1}, transparency 100%; and the display state at the 5th second is: upper right corner of the screen (center coordinates x1+2.5, y1), size {L1, H1}, transparency 100%.
Then, the OpenGL rendering engine may render the content of the image frame at each moment (rendering 6 image frames onto the interface in time order) according to the descriptions of the display states of the text and the picture at each moment, so that a 6-second animation display effect is likewise obtained. Based on the above description, the screen presented at the 4th second is shown in fig. 16, and the screen presented at the 6th second is shown in fig. 17 (the text animation and the picture animation are performed one after the other).
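The per-second state computation in the example above amounts to linear interpolation of each animated property over its animation time. The following is a minimal illustrative sketch, not the patent's actual implementation; the function name and parameters are hypothetical:

```python
def offset_at(t, start, duration, distance):
    """Linear offset contributed by one move animation at time t (seconds).

    The animation begins at `start`, runs for `duration` seconds, and moves
    the object by `distance` in total.
    """
    if t <= start:
        return 0.0
    if t >= start + duration:
        return float(distance)
    return distance * (t - start) / duration

# Selling-point text: moves up by 5 starting at the 2nd second, lasting 2 s.
# Offset from y2 at whole seconds 0..4:
text_dy = [offset_at(t, start=2, duration=2, distance=5) for t in range(5)]

# Selling-point picture: moves right by 5 starting at the 4th second, lasting 2 s.
pic_dx = [offset_at(t, start=4, duration=2, distance=5) for t in range(7)]
```

With these offsets, the text's center at the 3rd second is (x2, y2+2.5) and at the 4th second (x2, y2+5), matching the per-second states listed above.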
It should be noted that, in the above embodiments, the object nodes may be a plurality of picture objects, a plurality of text objects, or a combination of picture objects and text objects, and each object may have a plurality of pieces of animation information. Besides moving an object, changing an object's size, and changing an object's transparency as in the above examples, the animation information may also describe rotation, deformation, and the like. Therefore, the animation rendering method based on the OpenGL rendering engine can flexibly configure a plurality of picture objects, a plurality of text objects, and a plurality of pieces of animation information according to different product display requirements while occupying few system resources, realizing complex animation effects with multiple pictures and multiple styles.
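A configuration file of the kind described — object nodes, each carrying content, an initial display state, and optional animation nodes — could take, for instance, a JSON form. The schema below is purely hypothetical (the patent does not fix a concrete file format) and only illustrates how object nodes and animation nodes might be organized and parsed:

```python
import json

# Hypothetical JSON layout: each object node carries its content, an initial
# display state, and zero or more animation nodes.
CONFIG = """
{
  "objects": [
    {"type": "text", "content": "selling-point description",
     "initial": {"x": 960, "y": 120, "w": 200, "h": 40, "alpha": 1.0},
     "animations": [{"property": "y", "delta": 5, "start": 2, "duration": 2}]},
    {"type": "picture", "content": "selling_point.png",
     "initial": {"x": 960, "y": 120, "w": 300, "h": 200, "alpha": 1.0},
     "animations": [{"property": "x", "delta": 5, "start": 4, "duration": 2}]}
  ]
}
"""

def load_object_nodes(text):
    """Parse the configuration file and return its object nodes."""
    return json.loads(text)["objects"]

nodes = load_object_nodes(CONFIG)
# Only nodes that have at least one animation node get dynamic rendering.
animated = [n for n in nodes if n.get("animations")]
```

Objects without an `animations` list would simply be rendered once in their initial display state.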
In some embodiments, the OpenGL rendering engine may render the display states of the object nodes at different moments on a SurfaceView; that is, a container SurfaceView for displaying the animation is created, and the properties of the animation display can then be adjusted by modifying the properties of the SurfaceView and of the Window associated with the SurfaceView. For example, the resolution property of the SurfaceView may be set to 4K, so that the rendered animation is displayed at 4K resolution. Since the OpenGL engine consumes few resources when rendering the animation, the animation is unlikely to stutter even when rendered at 4K resolution.
In some embodiments, a timer may also be set for each piece of animation information. On this basis, the overall implementation framework of the present application is as shown in fig. 18: after the configuration file is acquired, a timer is set for the animation information of each animation node. When a timer detects that its animation time has arrived, it sends a time instruction to the controller module; the controller module calls the animation node of the corresponding object node according to the time instruction, and then updates the display state of the object node based on that animation node. The object node with the updated display state is drawn on a canvas, and the canvas is rendered onto the screen by OpenGL rendering instructions, thereby realizing the animation effect.
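The timer-driven flow of fig. 18 can be sketched as follows. All class and method names are hypothetical, the OpenGL rendering step is stood in for by a log of render instructions, and for brevity the sketch applies an animation's full displacement when its timer fires (per-frame interpolation over the animation's duration would refine it):

```python
class ObjectNode:
    """An object node with a mutable display state and its animation nodes."""
    def __init__(self, name, x, y):
        self.name, self.x, self.y = name, x, y
        self.animations = []  # each entry: (start_time, dx, dy)

class ControllerModule:
    """Receives time instructions from timers and updates display states."""
    def __init__(self, nodes):
        self.nodes = nodes
        self.render_log = []  # stands in for issued OpenGL rendering instructions

    def on_time_instruction(self, t):
        """A timer fired at time t: apply every animation node due at t."""
        for node in self.nodes:
            for start, dx, dy in node.animations:
                if start == t:
                    node.x += dx       # update the node's display state
                    node.y += dy
                    # draw the updated node; here we only record the instruction
                    self.render_log.append((t, node.name, node.x, node.y))

text = ObjectNode("text", x=960, y=120)
text.animations.append((2, 0, 5))     # shift by 5 along y at the 2nd second
ctrl = ControllerModule([text])
for t in range(7):                    # the timers tick once per second
    ctrl.on_time_instruction(t)
```

Each recorded entry corresponds to one "update display state, then render" step of the framework.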
For the same or similar parts among the embodiments in this specification, reference may be made to one another; they are not repeated here.
It will be apparent to those skilled in the art that the techniques of the embodiments of the present invention may be implemented by software plus a necessary general-purpose hardware platform. Based on such understanding, the technical solutions in the embodiments of the present invention, in essence or in the parts contributing to the prior art, may be embodied in the form of a software product. The software product may be stored in a storage medium, such as a ROM/RAM, a magnetic disk, or an optical disc, and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute the methods described in the embodiments, or in parts of the embodiments, of the present invention.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present application, and not for limiting the same; although the application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some or all of the technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit of the application.
The foregoing description, for purposes of explanation, has been presented in conjunction with specific embodiments. The illustrative discussions above are not intended to be exhaustive or to limit the embodiments to the precise forms disclosed above. Many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles and the practical application, to thereby enable others skilled in the art to best utilize the embodiments and various embodiments with various modifications as are suited to the particular use contemplated.

Claims (10)

1. A display device, characterized by comprising:
A display;
A controller configured to:
Acquiring a configuration file, wherein the configuration file comprises at least one object node, and the object node is used for describing content to be displayed on the display and an initial display state of the object node;
If the object node has at least one animation node, determining the display states of the object node at different moments based on the animation information of the object node described by the animation node, the animation time executed by the animation information and the initial display state;
Generating OpenGL rendering instructions according to display states of the object nodes at different moments;
and sending the OpenGL rendering instruction to an OpenGL rendering engine so that the OpenGL rendering engine renders the display states of the object nodes at different moments on the display.
2. The display device of claim 1, wherein the animation time includes at least a start time and an end time, and wherein, in determining the display states of the object node at different moments based on the animation information, the animation time, and the initial display state, the controller is configured to:
determining the number of image frames displayed by the object node according to the starting time and the ending time;
And determining the display state of the object node on each image frame according to the animation information and the initial display state.
3. The display device of claim 2, wherein, in generating OpenGL rendering instructions according to the display states of the object node at different moments, the controller is configured to:
Generating OpenGL rendering instructions corresponding to each image frame according to the display state of the object node on each image frame;
and wherein, in sending the OpenGL rendering instruction to an OpenGL rendering engine so that the OpenGL rendering engine renders the display states of the object node at different moments on the display, the controller is configured to:
And sending an OpenGL rendering instruction corresponding to each image frame to an OpenGL rendering engine so that the OpenGL rendering engine renders the display state of the object node on each image frame on the display.
4. The display device of claim 1, wherein the configuration file includes a first object node having a first initial display state and a second object node having a second initial display state, the first object node having a first animation node for describing first animation information of the first object node and a first animation time for the first animation information to be performed, the second object node having a second animation node for describing second animation information of the second object node and a second animation time for the second animation information to be performed, the controller configured to:
Determining display states of the first object node at different moments based on the first animation information, the first animation time and the first initial display state, and determining display states of the second object node at different moments based on the second animation information, the second animation time and the second initial display state;
Generating a first OpenGL rendering instruction according to the display states of the first object node at different moments, and generating a second OpenGL rendering instruction according to the display states of the second object node at different moments;
And if the first animation time and the second animation time are the same, sending the first OpenGL rendering instruction and the second OpenGL rendering instruction to an OpenGL rendering engine so that the OpenGL rendering engine simultaneously renders the display states of the first object node and the second object node at different moments on the display.
5. The display device of claim 4, wherein the controller is further configured to:
And if the first starting time and the second starting time are different and the second starting time is equal to the first ending time of the first animation time, sending the first OpenGL rendering instruction to an OpenGL rendering engine so that the OpenGL rendering engine can render the display state of the first object node at different moments on the display, and then sending the second OpenGL rendering instruction to the OpenGL rendering engine so that the OpenGL rendering engine can render the display state of the second object node at different moments on the display.
6. The display device of claim 1, wherein the object node has a third animation node for describing third animation information of the object node and a third animation time that the third animation information performs, and a fourth animation node for describing fourth animation information of the object node and a fourth animation time that the fourth animation information performs, the controller further configured to:
If the third animation time is the same as the fourth animation time, determining the display states of the object node at different moments based on the third animation information, the third animation time, the fourth animation information, the fourth animation time and the initial display states;
Generating OpenGL rendering instructions according to display states of the object nodes at different moments;
and sending the OpenGL rendering instruction to an OpenGL rendering engine so that the OpenGL rendering engine renders the display states of the object nodes at different moments on the display.
7. The display device of claim 6, wherein the controller is further configured to:
If the third starting time and the fourth starting time are different and the fourth starting time is equal to the third ending time, determining display states of the object node at different moments in a first change process based on the third animation information, the third animation time and the initial display states;
Generating a third OpenGL rendering instruction according to display states of the object node at different moments in the first change process;
Sending the third OpenGL rendering instruction to an OpenGL rendering engine so that the OpenGL rendering engine renders display states of the object node at different moments in a first change process on the display, wherein the object node presents a third display state on the display at the termination time in the first change process;
Determining display states of the object node at different moments in a second change process based on the fourth animation information, the fourth animation time and the third display state;
Generating a fourth OpenGL rendering instruction according to display states of the object node at different moments in the second change process;
And sending the fourth OpenGL rendering instruction to an OpenGL rendering engine so that the OpenGL rendering engine renders display states of the object node at different moments in a second change process on the display.
8. The display device according to claim 1, wherein the initial display state includes at least an initial position state, an initial size state, and an initial transparency state, and wherein, in determining the display states of the object node at different moments based on the animation information, the animation time, and the initial display state, the controller is configured to:
And determining the display position state, the display size state and the display transparency state of the object node at different moments based on the animation information, the animation time, the initial position state, the initial size state and the initial transparency state.
9. The display device of claim 1, wherein, in sending the OpenGL rendering instruction to an OpenGL rendering engine so that the OpenGL rendering engine renders the display states of the object node at different moments on the display, the controller is configured to:
And sending the OpenGL rendering instruction to an OpenGL rendering engine so that the OpenGL rendering engine renders the display states of the object node at different moments on a SurfaceView.
10. A content display method, characterized by being applied to a display device, comprising:
Acquiring a configuration file, wherein the configuration file comprises at least one object node, and the object node is used for describing content to be displayed on a display of the display device and an initial display state of the object node;
If the object node has at least one animation node, determining the display states of the object node at different moments based on the animation information of the object node described by the animation node, the animation time executed by the animation information and the initial display state;
Generating OpenGL rendering instructions according to display states of the object nodes at different moments;
and sending the OpenGL rendering instruction to an OpenGL rendering engine so that the OpenGL rendering engine renders the display states of the object nodes at different moments on the display.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410164512.5A CN118175367A (en) 2024-02-05 2024-02-05 Display equipment and content display method

Publications (1)

Publication Number Publication Date
CN118175367A 2024-06-11

Family

ID=91355525

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410164512.5A Pending CN118175367A (en) 2024-02-05 2024-02-05 Display equipment and content display method

Country Status (1)

Country Link
CN (1) CN118175367A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination