CN116661790A - Cross-platform rendering method and device and electronic equipment

Cross-platform rendering method and device and electronic equipment

Info

Publication number
CN116661790A
Authority
CN
China
Prior art keywords
rendering
node
objects
data
screen
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310958752.8A
Other languages
Chinese (zh)
Other versions
CN116661790B (en)
Inventor
易成
陈晓波
李斌
罗程
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202310958752.8A priority Critical patent/CN116661790B/en
Publication of CN116661790A publication Critical patent/CN116661790A/en
Application granted granted Critical
Publication of CN116661790B publication Critical patent/CN116661790B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00 - Arrangements for software engineering
    • G06F8/30 - Creation or generation of source code
    • G06F8/38 - Creation or generation of source code for implementing user interfaces
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 - 2D [Two Dimensional] image generation
    • G06T11/001 - Texturing; Colouring; Generation of texture or colour

Abstract

Embodiments of the present application provide a cross-platform rendering method and apparatus, and an electronic device, relating to multi-channel video rendering in the field of multimedia display. The method includes: acquiring a user interface (UI) tree structure of an application program, where the application program is built on a cross-platform UI language, the UI tree structure includes a window node, child nodes of the window node include at least one texture rendering node, and a first off-screen buffer and a first display area in a display page are associated with a first texture rendering node among the at least one texture rendering node; acquiring off-screen-rendered first rendering data from the first off-screen buffer; and drawing the first rendering data to the first display area using the first texture rendering node. By combining an off-screen rendering scheme with a texture rendering scheme based on the cross-platform UI language, the method can improve the rendering performance of cross-platform high-frame-rate video.

Description

Cross-platform rendering method and device and electronic equipment
Technical Field
The embodiment of the application relates to the technical field of multi-channel video rendering in the field of multimedia display, and in particular relates to a cross-platform rendering method, a cross-platform rendering device and electronic equipment.
Background
In application scenarios such as video calls, video conferences, live video, and live mic-linking ("mic-linking" for short), a cross-platform language is generally used to develop the user interface (UI) so as to improve UI development efficiency.
However, the language characteristics and cross-platform capabilities of these languages determine that they cannot render multiple high-FPS videos: for example, the interpreted JavaScript used by H5 and React Native does not execute fast enough, and neither cross-platform language can directly operate graphics processing unit (GPU) rendering hardware.
Therefore, there is a need in the art for a cross-platform rendering method to improve the rendering performance of cross-platform high frame rate video.
Disclosure of Invention
The embodiment of the application provides a cross-platform rendering method, a device and electronic equipment, which can improve the rendering performance of a cross-platform high-frame-rate video.
In a first aspect, an embodiment of the present application provides a cross-platform rendering method, including:
acquiring a User Interface (UI) tree structure of an application program;
the application program is a program constructed based on a cross-platform UI language, the UI tree structure comprises window nodes, child nodes of the window nodes comprise at least one texture rendering node, and a first off-screen buffer area and a first display area in a display page are associated with a first texture rendering node in the at least one texture rendering node;
acquiring first rendering data subjected to off-screen rendering from the first off-screen buffer area;
and drawing the first rendering data to the first display area by using the first texture rendering node.
In a second aspect, an embodiment of the present application provides a cross-platform rendering apparatus, including:
a first acquisition unit configured to acquire a user interface UI tree structure of an application;
the application program is a program constructed based on a cross-platform UI language, the UI tree structure comprises window nodes, child nodes of the window nodes comprise at least one texture rendering node, and a first off-screen buffer area and a first display area in a display page are associated with a first texture rendering node in the at least one texture rendering node;
the second acquisition unit is used for acquiring first rendering data subjected to off-screen rendering from the first off-screen buffer area;
and a drawing unit for drawing the first rendering data to the first display area by using the first texture rendering node.
In a third aspect, an embodiment of the present application provides an electronic device, including:
a processor adapted to implement computer instructions; and
a computer readable storage medium storing computer instructions adapted to be loaded by a processor and to perform the method of the first aspect referred to above.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium storing computer instructions that, when read and executed by a processor of a computer device, cause the computer device to perform the method of the first aspect referred to above.
In a fifth aspect, embodiments of the present application provide a computer program product or computer program comprising computer instructions stored in a computer-readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium and executes them, causing the computer device to perform the method of the first aspect referred to above.
Based on the technical scheme, the cross-platform rendering method provided by the embodiment of the application combines the off-screen rendering scheme with the texture rendering scheme based on the cross-platform UI language, so that the rendering performance of the cross-platform high-frame-rate video can be improved.
Specifically, since the first texture rendering node in the cross-platform UI tree structure is associated with the first off-screen buffer and the first display area in the display page, when the display page is rendered, first rendering data subjected to off-screen rendering can be directly obtained from the first off-screen buffer, and then the first rendering data is drawn to the first display area by utilizing the first texture rendering node; equivalently, when the display page is rendered, the first rendering data stored in the first off-screen buffer area after off-screen rendering can be directly used as textures to perform texture rendering, so that the rendering performance of the cross-platform high-frame-rate video can be improved.
Drawings
Fig. 1 is an example of a system framework provided by an embodiment of the present application.
FIG. 2 is an example of rendering principles based on native UI techniques and cross-platform UI techniques provided by embodiments of the application.
Fig. 3 is a schematic flowchart of a cross-platform rendering method provided by an embodiment of the present application.
Fig. 4 is an example of displaying a page provided in an embodiment of the present application.
Fig. 5 is another example of a display page provided in an embodiment of the present application.
Fig. 6 is an example of the principle of a cross-platform rendering method provided by an embodiment of the present application.
Fig. 7 is another example of the principle of a cross-platform rendering method provided by an embodiment of the present application.
Fig. 8 is an example of a media data transmission procedure provided by an embodiment of the present application.
Fig. 9 is a schematic block diagram of a cross-platform rendering apparatus provided by an embodiment of the present application.
Fig. 10 is a schematic block diagram of an electronic device provided by an embodiment of the present application.
Detailed Description
The technical scheme provided by the application will be clearly and completely described below in connection with specific embodiments.
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
In addition, the term "indication" referred to in the embodiments of the present application may be a direct indication, an indirect indication, or an indication having an association relationship. For example, a indicates B, which may mean that a indicates B directly, e.g., B may be obtained by a; it may also indicate that a indicates B indirectly, e.g. a indicates C, B may be obtained by C; it may also be indicated that there is some association between a and B. The term "corresponding" may mean that there is a direct correspondence or an indirect correspondence between the two, or may mean that there is an association between the two, or may be a relationship between an instruction and an indicated, a configuration and a configured, or the like. The description "at … …" may be interpreted as "if" or "when … …" or "responsive". Similarly, the phrase "if determined … …" or "if detected (stated condition or event) … …" may be interpreted as "when determining … …" or "in response to determining … …" or "when detecting (stated condition or event) … …" or "in response to detecting (stated condition or event) … …", depending on the context. The term "predefined" or "predefined rules" may be implemented by pre-storing corresponding codes, tables, or other means by which relevant information may be indicated in devices (e.g., including terminal devices and network devices), the application is not limited to a particular implementation thereof. Such as predefined may refer to what is defined in the protocol. The term "plurality" refers to two or more. The term "and/or" is merely an association relationship describing an associated object, meaning that three relationships may exist. Specifically, a and/or B may represent: a exists alone, A and B exist together, and B exists alone. In addition, the character "/" herein generally indicates that the front and rear associated objects are an "or" relationship.
Furthermore, the terms "first", "second", and the like in the description, the claims, and the above drawings are used to distinguish different objects and do not necessarily describe a particular sequence or chronological order. The terms "comprising" and "having", and any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to those listed steps or elements, and may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The embodiment of the application relates to the technical field of multi-channel video rendering in the field of multimedia display.
The multimedia display may be a display technology that simultaneously displays a plurality of media files on a display interface; the plurality of media files include, but are not limited to, video, images, or text, and may even be media files with different sources or different contents. Application scenarios for multi-channel video rendering include, but are not limited to: multi-person video calls, multi-person video conferences, and live multi-person mic-linking ("mic-linking" for short) in instant messaging applications; the method can also be applied to scenarios such as multi-path video detection and viewing. Taking a multi-person video call as an example, a plurality of participants take part in the call, and each participant can choose to turn on his or her camera to transmit video data. From one participant's perspective, multiple channels of video data sent by the other participants are received, decoded, and displayed in different display areas; for example, from a participant's perspective, the multiple channels of video data sent by the other participants located within the current display area are received, decoded, and presented in different display sub-areas.
The embodiment of the application also relates to the technical field of natural language processing (Natural Language Processing, NLP).
NLP is an important direction in the fields of computer science and artificial intelligence. It studies theories and methods for effective communication between humans and computers in natural language. Natural language processing is a science integrating linguistics, computer science, and mathematics; research in this field involves natural language, i.e., the language people use daily, so it is closely related to the study of linguistics. Natural language processing techniques typically include text processing, semantic understanding, machine translation, question answering, knowledge graph techniques, and the like. For example, the media data entered by the user may be obtained using natural language processing techniques.
The following is a description of terms involved in the present application.
Cross-platform user interface (UI) technology: refers to user interface technology that can run on multiple mobile operating systems. It allows a developer to build an application with one code base without writing different code for each operating system.
The following are some common cross-platform UI technologies (the cross-platform UI implementation details in the present application are also mainly described using these as examples):
Hypertext markup language 5 (HyperText Markup Language 5, HTML5 or H5): a standard for creating web pages and applications that can run on a variety of devices and operating systems, including desktop computers, notebook computers, tablet computers, and smartphones; it requires a browser as a carrier.
React Native: a cross-platform mobile application development framework open-sourced by a company. It allows developers to build native applications using JavaScript and React, where React is a library for building interfaces from components.
Flutter: another cross-platform mobile application development framework open-sourced by a company. It uses the Dart programming language and allows developers to build native applications for different operating systems from one code base.
Native UI: in contrast to cross-platform UI languages, native UIs are developed with the dedicated UI interfaces provided by each operating system itself, so one set of code typically cannot run on multiple operating systems. Compared with cross-platform UIs, however, performance is generally better, and some of the operating system's underlying hardware can be accessed directly.
Off-screen rendering (OSR): as the name implies, rendering is performed off screen, i.e., by allocating a new buffer outside the current screen buffer and operating there.
For example, taking a browser as an example, the screen buffer of the browser control stores the displayed images, while the off-screen buffer of the browser control stores intermediate images or data. The off-screen buffer and the screen buffer of the browser control are independent of each other.
Application programming interface (Application Programming Interface, API): a convention by which different components of a software system connect and communicate.
Notably, because software has grown in scale in recent years, complex systems often have to be divided into small components, making the design of the programming interface important. In programming practice, the programming interface is designed to divide the responsibilities of the software system reasonably. Good interface design reduces the interdependence of the system's parts, improves the cohesion of its constituent units, and reduces the coupling among them, thereby improving the system's maintainability and extensibility. An API provides routines that applications can call in a computer operating system or a program library. Its main purpose is to let application developers invoke a set of routines without having to access the underlying source code or understand the details of its internal workings. The API itself is abstract: it defines only an interface and does not involve the concrete operations of an application in an actual implementation.
Tree structure: also referred to as a node tree or UI node tree, refers to a data structure in which all elements in a User Interface (UI) are organized in a hierarchical structure. It is similar to a tree, where the root node is the entire UI interface, and each child node represents one UI element, e.g., a button control, text box, image, etc.
In the UI node tree, each node has a parent node and zero or more child nodes. A parent node is the node directly above a given node, and a child node is a node directly below it. This hierarchical structure makes the relations among UI elements clear and facilitates the developer's management and operation of the UI.
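As an illustration, the following is a minimal sketch of such a UI node tree in Flutter's Dart; the layout and element names are illustrative, not taken from the patent:

```dart
import 'package:flutter/material.dart';

Widget buildUiTree() {
  return MaterialApp( // root node: the entire UI interface
    home: Scaffold(
      body: Column( // a parent node grouping the UI elements below
        children: [
          ElevatedButton(onPressed: () {}, child: const Text('Join call')),
          const Text('Participants'), // leaf child nodes: button, text, image
          Image.network('https://example.com/avatar.png'),
        ],
      ),
    ),
  );
}
```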
Fig. 1 is an example of a system framework 100 provided by an embodiment of the present application.
As shown in fig. 1, the system framework 100 may include: a first terminal 110, a server 120, and a second terminal 130. The first terminal 110, the second terminal 130, and other terminals are connected to the server 120 through a wireless or wired network.
The first terminal 110 and the second terminal 130 both have an instant messaging application installed and running, so the two terminals can communicate through it, enabling application scenarios such as two-person video calls, two-person video conferences, and two-person live mic-linking ("mic-linking" for short). Taking a two-person video call as an example, two participants (participant 1 and participant 2) take part in the call, and both can choose to turn on their cameras to transmit video data. From participant 1's perspective, the video data sent by participant 2 is received, decoded, and displayed in a display area different from participant 1's own video display area; from participant 2's perspective, the video data sent by participant 1 is received, decoded, and presented in a display area different from participant 2's own video display area.
Further, the first terminal 110 and the second terminal 130 are each an electronic device with an image display function, including but not limited to: smartphones, notebook computers, tablet computers, gaming devices, and other portable or mobile computing devices. The electronic device may also be an augmented reality (AR) or virtual reality (VR) device. The device types of the first terminal 110 and the second terminal 130 may be the same or different, and include at least one of a smartphone, a tablet computer, an e-book reader, an MP3 player, an MP4 player, a laptop portable computer, and a desktop computer; the present application is not particularly limited in this regard.
The server 120 includes at least one of a server, a server cluster formed by a plurality of servers, a cloud computing platform, and a virtualization center. The server 120 provides background services for the first terminal 110 and the second terminal 130. The server 120 may be a server of an instant messaging application, including a memory, a processor, a user information database, and an input/output interface (I/O interface). The processor is configured to load instructions stored in the memory and to process data in the user information database; the user information database stores user data used by the first terminal 110, the second terminal 130, and other terminals, such as users' avatars, nicknames, and corresponding media data. The I/O interface exchanges media data with the first terminal 110 and the second terminal 130 through a wireless or wired network.
Of course, only two terminals are shown in fig. 1, but in other alternative embodiments, a plurality of terminals may be connected to the server 120, so as to implement multiple types of application scenarios such as multi-person video call, multi-person video conference, live multi-person video connection microphone (abbreviated as "microphone connection"), etc., which is not limited in this application.
In general, in application scenarios such as video calls, video conferences, live video, and live mic-linking, a cross-platform language is generally used to develop the user interface (UI) to improve UI development efficiency. However, the language characteristics and cross-platform capabilities of these languages determine that they cannot render multiple high-FPS videos: for example, the interpreted JavaScript used by H5 and React Native does not execute fast enough, and neither cross-platform language can directly operate graphics processing unit (GPU) rendering hardware.
In view of this, the present application considers that since a cross-platform language by itself cannot achieve efficient video rendering, the high performance of the native UI rendering framework and its ability to operate GPU hardware can be used to render video, which is then embedded into the cross-platform UI rendering framework, thereby rendering an interface built on a cross-platform language. For example, this effect can be achieved by embedding native UI rendering nodes in the cross-platform UI rendering framework: a Video component may be embedded in H5, and Video/PlatformView nodes may be embedded in React Native and Flutter.
Notably, in different UI rendering frameworks, nodes typically cannot communicate directly with each other. However, an interface integrated with native UI nodes may be configured for the cross-platform UI rendering framework, based on which the multi-path video rendering may include the steps of:
1. For each video path, a corresponding video node is implemented with the native UI rendering framework; whenever video data arrives, the interior of the node is drawn using an efficient GPU drawing language such as OpenGL or Metal.
2. A corresponding video container node (such as Flutter's PlatformView) is also created for each video path in the cross-platform UI, and the corresponding video node is queried from the native UI rendering framework and embedded into it.
3. The business code of the cross-platform UI rendering framework is responsible for laying out the positions and sizes of the video container nodes; the kernel of the cross-platform UI rendering framework then, at the appropriate time, requests the native UI rendering framework to set the positions/sizes of the embedded video nodes according to the positions and/or sizes of the video container nodes, achieving the effect of embedding the video nodes of the native UI rendering framework (hereinafter, native UI nodes) into the cross-platform UI rendering framework.
FIG. 2 is an example of rendering principles based on native UI techniques and cross-platform UI techniques provided by embodiments of the application.
As shown in fig. 2, the native UI rendering framework may include a native UI node whose child nodes include video nodes 1 to N; the cross-platform UI rendering framework may include a cross-platform UI node whose child nodes include a window node, and the child nodes of the window node include video container nodes 1 to N as well as other UI nodes. Video nodes 1 to N are embedded into video container nodes 1 to N respectively, achieving the effect of embedding native UI nodes into the cross-platform UI rendering framework.
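For illustration only, a hedged Dart sketch of this "embedding" baseline using Flutter's platform-view mechanism; the viewType string and creation parameters are hypothetical and would have to be registered by the native side:

```dart
import 'package:flutter/material.dart';
import 'package:flutter/services.dart';

// One native video view is created and embedded per video container node.
Widget buildVideoContainer(int videoIndex) {
  return SizedBox(
    width: 160,
    height: 120,
    // AndroidView asks the native UI framework to create and embed a
    // platform view, i.e., one native node per video container node.
    child: AndroidView(
      viewType: 'native-video-view', // hypothetical, registered natively
      creationParams: <String, dynamic>{'videoIndex': videoIndex},
      creationParamsCodec: const StandardMessageCodec(),
    ),
  );
}
```

This per-path native node is exactly what the drawbacks below are about: each embedded view is an independent rendering unit of the native framework.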
However, owing to the following drawbacks, schemes that "embed" native nodes in a cross-platform UI rendering framework are typically used to present a single video, and rarely to present multiple videos:
1. There are many limitations to "embedding" native UI nodes in a cross-platform UI rendering framework.
For example, when the rendering kernel is a cross-platform UI rendering framework such as Flutter or H5, it is not the same as the native UI rendering framework; in this case the performance loss of the "embedding" operation is more severe and the embedding experience is poorer. For example, the rendering threads and rendering objects are not the same, which causes cross-thread rendering and synchronization problems in between; rendering data may therefore be unobtainable, and even if rendering data conversion and cross-thread switching are performed, the rendering overhead increases. As another example, the two sets of rendering frameworks handle touch events (e.g., "two-finger zoom") inconsistently, which can leave the cross-platform UI rendering framework unable to respond.
2. The performance pressure is greater with multi-channel video.
Since a video container node must correspond to a unique native UI node, each video path needs a separate native UI node to be "embedded". Each native UI node is an independent rendering unit in the native UI rendering framework, which is equivalent to two sets of UI rendering frameworks rendering at the same time; the CPU/memory cost of creation and destruction, and the memory and video-memory cost of normal operation, are relatively large, so the performance pressure is high when the number of video paths is large.
In view of this, the embodiment of the application provides a cross-platform rendering method that combines an off-screen rendering scheme with a texture rendering scheme based on a cross-platform UI language, so as to avoid the excessive performance overhead of rendering with native UI nodes and to improve the rendering performance of cross-platform high-frame-rate video. Specifically, the embodiment of the application combines off-screen rendering with cross-platform texture rendering to optimize the performance of cross-platform high-frame-rate video rendering, and applies a texture-rendering multiplexing mechanism to the use of off-screen rendering data, so that off-screen buffers and off-screen-rendered media data are created only for visible objects (i.e., objects shown in the display page), thereby supporting multiple high-frame-rate video paths simultaneously. The cross-platform rendering method provided by the application does not need to embed native UI nodes in the cross-platform UI rendering framework, i.e., it does not need to run the native UI rendering framework (the native UI tree structure) at the same time, which would be a large performance cost.
Fig. 3 is a schematic flow chart of a cross-platform rendering method 200 provided by an embodiment of the present application.
It should be appreciated that the method 200 may be performed by any electronic device having data processing capabilities, for example, any electronic device that has a display screen and data processing capabilities, including but not limited to: smartphones, notebook computers, tablet computers, gaming devices, and other portable or mobile computing devices. The electronic device may also be an augmented reality (AR) or virtual reality (VR) device. For ease of illustration, the method 200 is described below with a cross-platform rendering device as the execution subject.
In addition, it should be noted that the data rendered by the method 200 provided by the present application may be data authorized by the user, for example, data whose use (e.g., rendering) has been authorized by the user, or data whose acquisition mode (e.g., shooting) has been authorized by the user. Alternatively, the data rendered using the method 200 provided by the present application may be published data.
As shown in fig. 3, the method 200 may include:
s210, acquiring a User Interface (UI) tree structure of an application program by a cross-platform rendering device; the application program is a program constructed based on a cross-platform UI language, the UI tree structure comprises window nodes, child nodes of the window nodes comprise at least one texture rendering node, and a first off-screen buffer area and a first display area in a display page are associated with a first texture rendering node in the at least one texture rendering node.
For example, the application program may be suitable for multi-channel video rendering scenarios.
The scenarios of multi-channel video rendering include, but are not limited to: multi-person video calls, multi-person video conferences, and live multi-person mic-linking ("mic-linking" for short) in instant messaging applications; the method can also be applied to scenarios such as multi-path video detection and viewing. Taking a multi-person video call as an example, a plurality of participants take part in the call, and each participant can choose to turn on his or her camera to transmit video data. From one participant's perspective, multiple channels of video data sent by the other participants are received, decoded, and displayed in different display areas; for example, from a participant's perspective, the multiple channels of video data sent by the other participants located within the current display area are received, decoded, and presented in different display sub-areas.
Illustratively, the cross-platform UI language refers to a user interface language that may run on multiple mobile operating systems. The cross-platform UI language allows a developer to build applications using one code library without having to write different code for each operating system.
Illustratively, the UI tree structure may also be referred to as a node tree or UI node tree, meaning a data structure in which all elements in a user interface (UI) are organized hierarchically. It is similar to a tree: the root node is the entire UI interface, and each child node represents a UI element, such as a button control, a text box, or an image. The UI interface may also be called a product interface; an interface is an abstract concept, a medium for transferring and exchanging information between a person and a computer, and a carrier for two-way information interaction between the user and the system.
For example, the root node of the UI tree structure may be a cross-platform UI node (i.e., the entire UI interface), and the window node may be a child node of the cross-platform UI node, being a parent node of the texture node. Of course, in other alternative embodiments, the cross-platform UI node may further include a child node other than a window node, which is not specifically limited by the present application. For example, the cross-platform UI node may further include a control node, where a child node of the control node includes a node such as a set page open control.
Illustratively, the first texture rendering node may be a node that performs texture rendering based on a texture. Here, a texture may include information such as the colors of an image; rendering is the process of generating an image from data, and texture rendering is the process of generating an image from a texture.
The display page may be, for example, any page of an application program. A page is concrete: any form of content seen on a screen may be referred to as a page. The display page may include a plurality of display areas, including the first display area; each display area may be used to display the video of one object, and the first display area may be any one of the plurality of display areas.
Taking an instant call scenario among a plurality of objects as an example, the display page may be a video on which all objects for making a call are displayed.
Fig. 4 is an example of displaying a page provided in an embodiment of the present application.
As shown in fig. 4, in response to a page creation operation for the display page, the cross-platform rendering apparatus may display, on the page shown after the operation is performed, the videos of the objects in the call. For example, assuming that at most 9 objects are displayed in the display page and 15 objects are in the call, the cross-platform rendering apparatus may display some of the plurality of objects on that page; for example, the initial page may display objects A through I shown in fig. 4.
Fig. 5 is another example of a display page provided in an embodiment of the present application.
As shown in fig. 5, assuming that at most 9 objects are displayed in the display page and 15 objects are communicating through the application program, the cross-platform rendering device may, in response to a page turning operation for the display page, display objects J through O shown in fig. 5 on the page shown after the operation is performed.
Of course, in other alternative embodiments, the cross-platform rendering apparatus may respond to a click on a settings-page-open control in the display page, displaying, on the settings page shown after the click is performed, information such as each button control and its explanation. The present application is not particularly limited in this regard.
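Before turning to S220, the following is a minimal Dart sketch, under assumed names, of the S210 structure: a window node whose children are texture rendering nodes (Flutter's Texture widget), each tied to one display area of the display page and, through its texture id, to an off-screen buffer:

```dart
import 'package:flutter/material.dart';

class VideoWall extends StatelessWidget {
  // textureIds[i] is assumed to identify the off-screen buffer that the
  // native side registered for display area i.
  final List<int> textureIds;
  const VideoWall({super.key, required this.textureIds});

  @override
  Widget build(BuildContext context) {
    // The window node's children are texture rendering nodes; each grid
    // cell is one display area of the display page (3x3, as in fig. 4).
    return GridView.count(
      crossAxisCount: 3,
      children: [
        for (final id in textureIds) Texture(textureId: id),
      ],
    );
  }
}
```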
S220, the cross-platform rendering device acquires first rendering data subjected to off-screen rendering from the first off-screen buffer zone.
Illustratively, the first off-screen buffer is different from the screen buffer.
For example, the first off-screen buffer stores intermediate images (or intermediate data) for the images (or data) displayed in the first display area, while the screen buffer may be used to store the images or data actually displayed in the first display area.
Taking the instant call scenario among a plurality of objects as an example, assuming that the first display area is the area where the object a shown in fig. 4 is located (i.e., the upper left corner area), the first off-screen buffer is used for storing intermediate data of the video of the object a, and the screen buffer is directly used for storing the video of the object a. Of course, in other alternative embodiments, the first off-screen buffer may also be used to store intermediate data of videos of other objects, which is not particularly limited by the present application.
S230, the cross-platform rendering device draws the first rendering data to the first display area by utilizing the first texture rendering node.
For example, the cross-platform rendering device may generate a display image using the first texture rendering node, using the first rendering data as textures, and display the display image within the first display region.
Taking the instant call scenario among the objects as an example, assuming that the first display area associated with the first texture rendering node is the area where the object a shown in fig. 4 is located (i.e., the upper left corner area), the cross-platform rendering device may generate a display image using the first texture rendering node and using the first rendering data as textures, and display the display image (i.e., the video of the object a) in the first display area (i.e., the upper left corner area).
In this embodiment, since the first off-screen buffer area and the first display area in the display page are associated with the first texture rendering node in the cross-platform UI tree structure, when rendering the display page, first rendering data subjected to off-screen rendering may be directly obtained from the first off-screen buffer area, and then the first rendering data is drawn to the first display area by using the first texture rendering node; equivalently, when the display page is rendered, the first rendering data stored in the first off-screen buffer area after off-screen rendering can be directly used as textures to perform texture rendering, so that the rendering performance of the cross-platform high-frame-rate video can be improved.
Notably, in order to reduce the overhead of native UI node rendering in the scheme of "embedding" native UI nodes in the cross-platform UI rendering framework, the application proposes a scheme combining off-screen rendering with cross-platform texture rendering. The main idea is as follows:
The native off-screen rendering framework and the texture rendering nodes in the UI tree structure can be associated through the rendering data:
1. Cross-platform UI frameworks such as Flutter and React Native provide a lower-level texture rendering node (such as Flutter's Texture node or React Native's react-native-video node) that can receive externally supplied image data and perform texture rendering. This image data is in a data format common in the graphics world, such as RGB image data in memory, or GPU video-memory data native to the Windows and/or Android operating systems.
2. Through its off-screen rendering process (for example, using the OpenGL GPU drawing language supported by iOS/Android), the native off-screen rendering framework can also set the rendering target to a specific rendering data buffer designated by the system, that is, render decoded video data or image data directly into a system-designated buffer (the off-screen buffer for short), and export the data stored in the off-screen buffer to the cross-platform UI texture rendering node (i.e., the texture rendering node in the UI tree structure) by reading pixels, so that the texture rendering node obtains the decoded video data or image data as its required image data.
Therefore, associating by rendering data instead of by nodes (e.g., embedding a native UI node in the cross-platform UI rendering framework) avoids compatibility problems between nodes of different UI rendering frameworks, such as inconsistent rendering threads, unreachable rendering data, and conflicting event-response handling, and thus further improves the rendering performance of cross-platform high-frame-rate video.
Fig. 6 is an example of the principle of a cross-platform rendering method provided by an embodiment of the present application.
As shown in fig. 6, the cross-platform rendering device may include a native off-screen rendering side and a cross-platform UI side, and based on this, implementation steps of the cross-platform rendering method provided by the embodiment of the present application may include:
1. Native off-screen rendering side:
The native off-screen rendering side allocates a corresponding off-screen buffer for each video path. For simplicity of description, a 1-to-1 relationship is tentatively assumed, that is, each video path corresponds to a separate off-screen buffer. Of course, in other alternative embodiments, multiple video paths may correspond to one off-screen buffer, which the present application does not specifically limit.
2. Preparation of cross-platform UI side:
The cross-platform UI side builds the product interface corresponding to the multiple video paths, i.e., the UI tree structure, in a cross-platform UI language, and then associates a corresponding texture rendering node with the display position of each video path. Here, for simplicity, the correspondence between texture rendering nodes and off-screen rendering data buffers is also tentatively 1 to 1.
3. The flow is connected in series:
When the native off-screen rendering side receives and decodes the corresponding video data, it performs off-screen rendering into the corresponding off-screen buffer. The off-screen buffer holds the currently rendered image data, which can be obtained either by reading the rendered pixels from the graphics card into memory or by directly recording the currently rendered video-memory information (iOS and Android have similar UI support for both modes). Based on this, the cross-platform UI side can perform texture rendering or on-screen rendering based on the rendering data held in the off-screen buffer on the native off-screen rendering side.
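A hedged Dart-side sketch of this serial flow, assuming a hypothetical MethodChannel named 'osr' through which the native off-screen rendering side exposes each video path's off-screen buffer as an external texture:

```dart
import 'package:flutter/material.dart';
import 'package:flutter/services.dart';

const MethodChannel _osr = MethodChannel('osr'); // hypothetical channel

// Native side (not shown here): decode video -> off-screen render into the
// path's off-screen buffer -> register that buffer as an external texture
// and return its texture id.
Future<int> textureIdForVideo(int videoIndex) async {
  final int? id = await _osr.invokeMethod<int>('acquireTexture', videoIndex);
  return id!;
}

// Cross-platform UI side: the texture rendering node draws whatever the
// associated off-screen buffer currently holds.
Widget videoCell(int textureId) => Texture(textureId: textureId);
```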
In some embodiments, the S220 may include:
under the condition that the cross-platform rendering device receives a rendering data update notification, acquiring the first rendering data from the first off-screen buffer area; the rendering data update notification includes a flag bit for indicating that the first rendering data has been updated.
Illustratively, the native off-screen rendering side sends a rendering data update notification to the corresponding texture rendering node on the cross-platform UI side; the notification records, under the first texture rendering node, a flag bit indicating that new data exists, and requests (or instructs) the cross-platform UI side to trigger a redraw procedure. Then, in the triggered rendering process on the cross-platform UI side, the first texture rendering node checks whether it carries the flag bit for new data, acquires the corresponding first rendering data from the corresponding first off-screen buffer, and performs texture rendering. A texture rendering node without new rendering data may perform texture rendering with its original (old) rendering data or skip texture rendering; the present application does not specifically limit this.
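A minimal sketch of this notification path in Dart, with an assumed EventChannel name; the event payload standing in for the flag bit is hypothetical:

```dart
import 'dart:async';
import 'package:flutter/material.dart';
import 'package:flutter/services.dart';

class VideoTile extends StatefulWidget {
  final int textureId;
  const VideoTile({super.key, required this.textureId});

  @override
  State<VideoTile> createState() => _VideoTileState();
}

class _VideoTileState extends State<VideoTile> {
  static const _updates = EventChannel('osr/updates'); // hypothetical name
  StreamSubscription<dynamic>? _sub;

  @override
  void initState() {
    super.initState();
    _sub = _updates.receiveBroadcastStream().listen((event) {
      // The event is assumed to carry the textureId whose off-screen
      // buffer was just updated (the "flag bit" for new data); only the
      // matching node schedules a redraw.
      if (event == widget.textureId) setState(() {});
    });
  }

  @override
  void dispose() {
    _sub?.cancel();
    super.dispose();
  }

  @override
  Widget build(BuildContext context) => Texture(textureId: widget.textureId);
}
```

Note that Flutter's real Texture widget repaints new frames without an explicit setState; the explicit redraw above only mirrors the flag-bit-triggered redraw procedure described in this embodiment.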
In some embodiments, prior to the S220, the method 200 may further comprise:
Receiving media data of a plurality of objects; the plurality of objects includes a first object associated with the first off-screen buffer;
determining whether to render the media data of the first object off-screen based on the number of the plurality of objects and the number of the at least one texture rendering node if the number of the at least one texture rendering node is less than or equal to a maximum allowed number of objects in the display page;
under the condition that the media data of the first object is determined to be rendered off screen, the media data of the first object is rendered off screen, and the first rendering data is obtained;
and storing the first rendering data to the first off-screen buffer.
Illustratively, the number of the plurality of objects may be equal to, less than, or greater than the maximum allowable number.
Illustratively, if a maximum of W objects are allowed to be displayed within the display page, the maximum allowed number is W, which is a positive integer.
For example, when the number of the at least one texture rendering node is less than or equal to the maximum allowed number of objects in the display page, the at least one texture rendering node can satisfy texture rendering for at most the maximum allowed number of objects, and the plurality of objects may need to be laid out across multiple pages; that is, there are not enough texture rendering nodes to texture-render all of the plurality of objects. In this case, the cross-platform rendering means may determine whether to render the media data of the first object off-screen based on the number of the plurality of objects and the number of the at least one texture rendering node. Of course, in other alternative embodiments, if the number of the at least one texture rendering node equals the number of the plurality of objects, or if the at least one texture rendering node was built to match the plurality of objects, the media data of the first object may be rendered off-screen directly.
Illustratively, each of the at least one texture rendering node may correspond to an object of the plurality of objects, i.e., each object may be texture rendered by a dedicated texture rendering node.
For example, if the plurality of objects are objects in a plurality of pages, each texture rendering node of the at least one texture rendering node may correspond to an object in at least one of the pages. For example, if the number of objects in the last page equals the maximum allowed number, each texture rendering node may correspond to one object in each of the plurality of pages. If the last page has fewer objects than the maximum allowed number, each node in one part of the at least one texture rendering node may correspond to one object in each of the plurality of pages, while each node in the other part may correspond to one object in each page other than the last.
In other words, if the plurality of objects are objects in a plurality of pages, some of the at least one texture rendering node are texture rendering nodes shared by objects in different pages; that is, objects in different pages can be texture-rendered through one shared texture rendering node.
For example, the cross-platform rendering device stores the first rendering data in the first off-screen buffer, and if the first off-screen buffer is empty, the first rendering data may be directly stored; if the first off-screen buffer is not empty, the already stored rendering data (i.e., old rendering data) may be deleted first, and then the first rendering data may be stored.
In this embodiment, when the number of the at least one texture rendering node is less than or equal to the maximum allowable number of objects in the display page, determining whether to render the media data of the first object off-screen based on the number of the plurality of objects and the number of the at least one texture rendering node, and when determining to render the media data of the first object off-screen, rendering the media data of the first object off-screen, and obtaining the first rendering data; the method can ensure that the at least one texture rendering node timely performs off-screen rendering on the media data to be displayed, can improve the timeliness of off-screen rendering, and further improves the timeliness of displaying the media data.
In some embodiments, the off-screen rendering of the media data of the first object is determined in the event that the number of the plurality of objects is less than or equal to the number of the at least one texture rendering node.
In this embodiment, when the number of the plurality of objects is less than or equal to the number of the at least one texture rendering node, all of the plurality of objects are objects laid out in the display page; therefore, the media data of the first object can be rendered off-screen directly, without determining whether the first object is visible.
In some embodiments, in the event that the number of the plurality of objects is greater than the number of the at least one texture rendering node, determining a list of visible objects using the window node; the visible object list comprises rendering objects of the at least one texture node, and the number of the objects in the visible object list is less than or equal to the number of the at least one texture rendering node; in the case that the list of visible objects includes the first object, determining to render media data of the first object off-screen.
Illustratively, the number of the at least one texture rendering node is equal to the maximum allowed number of objects in the display page, and the number of objects in the visible object list is equal to the number of the at least one texture rendering node.
Of course, in other alternative embodiments, the number of the at least one texture rendering node may be less than the maximum allowed number of objects in the display page, in which case some or all of the at least one texture rendering node may be shared by at least two objects in the display page. Similarly, in other alternative embodiments, the number of objects in the visible object list may be less than the number of the at least one texture rendering node; for example, if the number of the at least one texture rendering node is fixed at the maximum allowed number of objects in the display page and the visible object list is smaller, some of the at least one texture rendering node are idle nodes.
In this embodiment, when the number of the plurality of objects is greater than the number of the at least one texture rendering node, it is indicated that the media data of some objects in the plurality of objects are objects typeset in pages other than the display page, in this case, the window node is used to determine the rendering object (i.e. the visible object list) of the at least one texture node, if the visible object list includes the first object, it is indicated that the media data of the first object is the media data that needs to be displayed on the display page, in this case, the media data of the first object may be directly determined to be rendered off-screen, so that timely off-screen rendering of the media data that needs to be displayed by using the at least one texture rendering node may be ensured, and the timeliness of off-screen rendering may be improved, and further, the timeliness of displaying the media data may be improved.
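The decision described in the two embodiments above can be summarized in a short Dart sketch; the function name and signature are illustrative, not from the patent:

```dart
// Render the first object's media off-screen only when warranted.
bool shouldRenderOffScreen({
  required int objectCount,      // number of the plurality of objects
  required int textureNodeCount, // number of texture rendering nodes
  required Set<String> visibleObjects, // window node's visible object list
  required String firstObjectId,
}) {
  if (objectCount <= textureNodeCount) {
    // All objects are laid out in the display page: render directly.
    return true;
  }
  // Otherwise only objects in the visible object list are rendered.
  return visibleObjects.contains(firstObjectId);
}
```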
In addition, by updating the visible object list, the off-screen rendering objects and the visible objects can be switched while the at least one texture rendering node remains unchanged, so that the at least one texture rendering node can be multiplexed and the cross-platform rendering device can support multiple high-frame-rate video paths simultaneously.
It is noted that, compared with the scheme of "embedding" native UI nodes in the cross-platform UI rendering framework, associating by rendering data avoids the "embedding" of native UI nodes and optimizes rendering performance; however, when multiple channels of media data need to be rendered simultaneously, a rendering environment (an off-screen buffer) must be kept for each channel, which imposes a relatively large memory/video-memory and rendering CPU/GPU overhead.
In view of this, considering that the number of objects displayed on the "screen" at any time is quite limited, the cross-platform rendering device may take rendering only "visible" media data as the optimization direction. Precisely because the application uses the loosely coupled mode of rendering-data association (texture rendering nodes only query the off-screen-rendered data at render time) rather than node association (each video container node in the cross-platform UI rendering framework must be bound to a corresponding native UI node when created), introducing a visible object list allows the same texture rendering node to be multiplexed (i.e., shared by objects in different pages), so that the cross-platform rendering device can support multiple high-frame-rate video paths simultaneously.
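A hedged sketch of this multiplexing mechanism: a fixed pool of texture rendering slots, sized to the number of on-screen display areas, is rebound to whichever objects appear in the current visible object list (all names are illustrative):

```dart
class TextureSlotPool {
  // Fixed texture rendering slots: one per on-screen display area.
  final List<int> slotTextureIds;
  final Map<String, int> _objectToSlot = <String, int>{};

  TextureSlotPool(this.slotTextureIds);

  /// Rebind the fixed slots to the new visible object list; texture
  /// rendering nodes are reused across pages instead of being created
  /// and destroyed.
  void bindVisibleObjects(List<String> visibleObjects) {
    assert(visibleObjects.length <= slotTextureIds.length);
    _objectToSlot.clear();
    for (var i = 0; i < visibleObjects.length; i++) {
      _objectToSlot[visibleObjects[i]] = slotTextureIds[i];
    }
  }

  /// The texture id (off-screen buffer) currently bound to an object,
  /// or null if the object is not visible.
  int? textureIdFor(String objectId) => _objectToSlot[objectId];
}
```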
In some embodiments, the method 200 may further comprise:
for each object in the list of visible objects, an off-screen buffer is maintained.
Illustratively, the cross-platform rendering device needs to maintain at least one off-screen buffer (including the first off-screen buffer referred to above).
If the number of objects in the visible object list is equal to the number of the at least one texture rendering node, the correspondence between the at least one off-screen buffer and the at least one texture rendering node may be a one-to-one correspondence. If the number of objects in the visible object list is less than the number of the at least one texture rendering node, the correspondence between the at least one off-screen buffer and the at least one texture rendering node may be a many-to-one correspondence.
In this embodiment, an off-screen buffer is maintained for each object in the visible object list; that is, off-screen buffers are maintained at the granularity of the visible object list rather than at the granularity of all of the plurality of objects (whose number exceeds the number of the at least one texture rendering node). This not only enables off-screen rendering of the media data of the objects in the visible object list, but also avoids maintaining off-screen buffers for objects outside the visible object list, reducing the number of buffers the cross-platform rendering device maintains. Moreover, when the visible object list is updated, the maintained off-screen buffers need not change, so they can be reused, improving the utilization efficiency of the cross-platform rendering device's off-screen buffers.
In some embodiments, the display page includes a display area for each object in the list of visible objects.
Illustratively, the display page includes at least one display region (including the first display region referred to above). If the number of objects in the visible object list is equal to the number of the at least one texture rendering node, the correspondence between the at least one display area and the at least one texture rendering node may be a one-to-one correspondence. If the number of objects in the visible object list is less than the number of the at least one texture rendering node, the correspondence between the at least one display area and the at least one texture rendering node may be a many-to-one correspondence.
In this embodiment, the display page includes a display area for each object in the visible object list. For the off-screen rendering data obtained by off-screen rendering the media data of each object in the visible object list, the cross-platform rendering device can use the texture rendering node corresponding to that object among the at least one texture rendering node to draw the data into the display page. This ensures that all off-screen rendering data can be drawn to a display area in the display page and improves the utilization rate of the off-screen rendering data.
In some embodiments, the list of visible objects is determined with the window node in response to a page creation operation for the display page; the visible object list includes objects in a page displayed after the page creation operation is performed.
Illustratively, the cross-platform rendering device determines the list of visible objects with the window node in response to a page creation operation for the display page; the visible object list includes objects in a page displayed after the page creation operation is performed. For example, in connection with fig. 4, assuming that the page displayed after the cross-platform rendering device performs the page creation operation is the page shown in fig. 4, the visible object list includes objects A to I.
In this embodiment, the cross-platform rendering device determines the visible object list with the window node in response to the page creation operation for the display page; in other words, the page creation operation triggers the window node to update the visible object list, which ensures the accuracy and timeliness of the visible object list.
In some embodiments, the list of visible objects is determined with the window node in response to a page turning operation for the display page; the list of visible objects includes objects in the displayed page after the page turning operation is performed.
Illustratively, the cross-platform rendering device determines the list of visible objects with the window node in response to a page turning operation for the display page; the list of visible objects includes objects in the displayed page after the page turning operation is performed. For example, referring to fig. 5, assuming that the page displayed after the cross-platform rendering device performs the page turning operation is the page shown in fig. 5, the visible object list includes objects J to O.
In this embodiment, the cross-platform rendering device determines the visible object list with the window node in response to the page turning operation for the display page; in other words, the page turning operation triggers the window node to update the visible object list, which ensures the accuracy and timeliness of the visible object list.
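As a minimal sketch of how a window node might recompute the visible object list on page creation and page turning (the class and method names are hypothetical, not taken from the present application):

type VisibleListListener = (visibleIds: string[]) => void;

class WindowNode {
  private page = 0;

  constructor(
    private allObjectIds: string[],   // all session objects
    private pageSize: number,         // max objects per display page
    private onVisibleListChanged: VisibleListListener,
  ) {}

  // Triggered by a page creation operation for the display page.
  createPage(): void {
    this.page = 0;
    this.publish();
  }

  // Triggered by a page turning operation for the display page.
  turnPage(delta: number): void {
    const maxPage = Math.max(0, Math.ceil(this.allObjectIds.length / this.pageSize) - 1);
    this.page = Math.min(Math.max(this.page + delta, 0), maxPage);
    this.publish();
  }

  private publish(): void {
    const start = this.page * this.pageSize;
    const visible = this.allObjectIds.slice(start, start + this.pageSize);
    this.onVisibleListChanged(visible); // e.g. pool.updateVisibleObjects(visible)
  }
}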
Fig. 7 is another example of the principle of a cross-platform rendering method provided by an embodiment of the present application.
As shown in fig. 7, the key point for the cross-platform rendering device to achieve multiplexing is that it notifies the native off-screen rendering side of the list of visible objects (e.g., a list of video users) upon creation, page turning, and similar operations on the display page.
Based on this, the following operations on the native off-screen rendering side may be triggered:
1. After the media data is received and decoded, a pre-positioned "visibility screening" step judges whether the current object (e.g., the user of the current video) is in the visible object list. If not, the data is discarded directly and is not rendered; only if the object is in the list is the off-screen rendering process performed.
2. Only a limited number of off-screen buffers are ever maintained, corresponding to the number of objects in the visible object list. Therefore, through this multiplexing mechanism, the cross-platform rendering device always renders only the (limited number of) media data in the visible region, which saves memory/video memory and rendering CPU/GPU cost and achieves the effect of simultaneously supporting multiple paths of high-frame-rate video.
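A minimal sketch of the two points above, reusing the hypothetical OffscreenBufferPool from the earlier sketch; the pre-positioned visibility screening is the early return in onFrameDecoded, and the bounded buffer count follows from updating the pool with the visible object list (all names are assumptions of this sketch):

interface DecodedFrame {
  objectId: string;   // e.g. the user a video path belongs to
  pixels: Uint8Array; // decoded frame data
}

class NativeOffscreenRenderer {
  private visible = new Set<string>();

  constructor(private pool: OffscreenBufferPool) {}

  // Called by the UI side on page creation / page turning.
  setVisibleObjects(ids: string[]): void {
    this.visible = new Set(ids);
    this.pool.updateVisibleObjects(ids); // buffer count stays bounded
  }

  // Called for every decoded frame of every media path.
  onFrameDecoded(frame: DecodedFrame): void {
    if (!this.visible.has(frame.objectId)) {
      return; // visibility screening: discard directly, no rendering cost
    }
    const buffer = this.pool.get(frame.objectId);
    if (buffer) {
      this.renderOffscreen(frame, buffer); // off-screen render into the buffer
      // ...then signal the texture rendering node that new data is available
    }
  }

  private renderOffscreen(frame: DecodedFrame, buffer: OffscreenBuffer): void {
    // A real implementation would upload frame.pixels to buffer.textureId.
  }
}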
In some embodiments, before the cross-platform rendering device receives media data for a plurality of objects, the method 200 may further comprise:
a subscription request is sent to the data server, the subscription request being for subscribing to media data belonging to the plurality of objects.
The data server may be, for example, a background server of the application. In other words, before the cross-platform rendering device receives the media data of the plurality of objects, a subscription request is sent to the background server of the application program, where the subscription request is used for subscribing to the media data belonging to the plurality of objects.
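A hypothetical shape for such a subscription request is sketched below; the actual wire format of the data server is not specified by the present application, so the interface and function names are assumptions:

interface SubscriptionRequest {
  kind: "subscribe";
  objectIds: string[]; // the plurality of objects whose media data is wanted
}

// Minimal transport abstraction so the sketch stays self-contained.
interface MessageSink {
  send(data: string): void;
}

function sendSubscription(sink: MessageSink, objectIds: string[]): void {
  const request: SubscriptionRequest = { kind: "subscribe", objectIds };
  sink.send(JSON.stringify(request)); // sent before any media data is received
}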
Fig. 8 is an example of a media data transmission process 300 provided by an embodiment of the present application.
As shown in fig. 8, the media data transmission process 300 may include:
s310, collector video collection and coding.
The collector may be an electronic device that collects video. Taking a multi-person video call as an example, there are multiple participants involved in the video call, and the electronic device used by each participant may be a collector.
S320, the collector transmits video data to the video data server.
The collector may transmit video data to the video data server by wired or wireless means. Taking the multi-person video call as an example, the video data server may be a background server of an application program for conducting the multi-person video call.
S330, the watching end sends a subscription request to the video data server.
Wherein the subscription request is for subscribing to media data of a plurality of objects. The viewing end can be any electronic device configured with a cross-platform rendering device capable of executing the cross-platform rendering method provided by the embodiment of the application.
S340, the watching end receives the video data of the corresponding object sent by the video data server.
The viewing end receives video data of an object indicated in the subscription request sent by the video data server.
S350, decoding the video at the watching end.
After the viewing end receives the video data of the object indicated in the subscription request from the video data server, it can decode the received video data to obtain decoded data and determine whether to render the decoded data off-screen.
S360, video rendering.
When the viewing end determines to render the decoded data off-screen, it renders the decoded data off-screen and stores the resulting rendering data in the off-screen buffer, so that the texture rendering node can use the off-screen rendering data in the buffer as a texture for texture rendering, thereby displaying the video on the viewing end.
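Tying steps S330 to S360 together, the following is a sketch of the viewing end's side of the process, reusing the hypothetical types from the earlier sketches; the packet format and decoder are placeholders, not the application's actual design:

interface MediaPacket {
  objectId: string;
  payload: Uint8Array;
}

function runViewingEnd(
  sink: MessageSink,
  onPacket: (handler: (packet: MediaPacket) => void) => void, // packet source
  renderer: NativeOffscreenRenderer,
  subscribedIds: string[],
): void {
  sendSubscription(sink, subscribedIds); // S330: subscribe to the objects' media data
  onPacket((packet) => {                 // S340: receive video data per object
    const frame = decode(packet);        // S350: video decoding
    renderer.onFrameDecoded(frame);      // S360: screening + off-screen render
  });
}

// Placeholder decoder; a real viewing end would use a video codec here.
function decode(packet: MediaPacket): DecodedFrame {
  return { objectId: packet.objectId, pixels: packet.payload };
}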
It should be noted that fig. 8 is only an example of the present application and should not be construed as limiting the present application. For example, in general each path of video corresponds to its own video data and display area, and the acquisition timing of different paths may differ; in other alternative embodiments, multiple paths of video may be acquired simultaneously, or multiple paths of video may be combined into one video and displayed in one display area, which is not limited by the embodiments of the present application.
In some embodiments, the application is an instant talk application and the plurality of objects includes all session objects that communicate through the instant talk application. In other words, the plurality of objects includes an initiating object that initiates the session and all objects that join the session.
Illustratively, in connection with fig. 4 and fig. 5, the plurality of objects includes objects A to I shown in fig. 4 and objects J to O shown in fig. 5.
Of course, in other alternative embodiments, the application may be any program capable of playing multiple video, which is not particularly limited by the present application.
The preferred embodiments of the present application have been described in detail above with reference to the accompanying drawings, but the present application is not limited to the specific details of the embodiments described above; various simple modifications can be made to the technical solution of the present application within the scope of its technical concept, and all such simple modifications belong to the protection scope of the present application. For example, the individual features described in the above-mentioned embodiments can be combined in any suitable manner without contradiction; to avoid unnecessary repetition, the present application does not separately describe every possible combination. As another example, the various embodiments of the present application may be combined in any manner that does not depart from the spirit of the present application, and such combinations should likewise be regarded as part of the disclosure of the present application.
It should also be understood that, in the various method embodiments of the present application, the sequence numbers of the processes referred to above do not mean the sequence of execution, and the execution sequence of the processes should be determined by the functions and internal logic of the processes, and should not constitute any limitation on the implementation process of the embodiments of the present application.
The method provided by the embodiment of the application is described above, and the device provided by the embodiment of the application is described below.
Fig. 9 is a schematic block diagram of a cross-platform rendering apparatus 400 provided by an embodiment of the present application.
As shown in fig. 9, the cross-platform rendering apparatus 400 may include:
a first obtaining unit 410, configured to obtain a user interface UI tree structure of an application;
the application program is a program constructed based on a cross-platform UI language, the UI tree structure comprises window nodes, child nodes of the window nodes comprise at least one texture rendering node, and a first off-screen buffer area and a first display area in a display page are associated with a first texture rendering node in the at least one texture rendering node;
a second obtaining unit 420, configured to obtain, from the first off-screen buffer, first rendering data that is rendered off-screen;
and a drawing unit 430 for drawing the first rendering data to the first display area using the first texture rendering node.
In some embodiments, the second obtaining unit 420 is specifically configured to:
under the condition that a rendering data update notification is received, acquiring the first rendering data from the first off-screen buffer area; the rendering data update notification includes a flag bit for indicating that the first rendering data has been updated.
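For illustration, the notification path might look like the following sketch, where the flag bit is modeled as a boolean field; the type and method names are assumptions of this sketch, and OffscreenBufferPool is the hypothetical pool sketched earlier:

interface RenderDataUpdateNotification {
  objectId: string;
  updated: boolean; // flag bit: true once new rendering data is in the buffer
}

class TextureRenderingNode {
  constructor(private objectId: string, private pool: OffscreenBufferPool) {}

  onNotification(n: RenderDataUpdateNotification): void {
    if (n.objectId !== this.objectId || !n.updated) return;
    const buffer = this.pool.get(this.objectId);
    if (buffer) {
      this.drawToDisplayArea(buffer); // use the buffer contents as the texture
    }
  }

  private drawToDisplayArea(buffer: OffscreenBuffer): void {
    // A real implementation would sample buffer.textureId into the node's
    // display area within the display page.
  }
}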
In some embodiments, before the second obtaining unit 420 obtains the first rendering data that is rendered off-screen, the second obtaining unit is further configured to:
receiving media data of a plurality of objects; the plurality of objects includes a first object associated with the first off-screen buffer;
determining whether to render the media data of the first object off-screen based on the number of the plurality of objects and the number of the at least one texture rendering node if the number of the at least one texture rendering node is less than or equal to a maximum allowed number of objects in the display page;
under the condition that the media data of the first object is determined to be rendered off screen, the media data of the first object is rendered off screen, and the first rendering data is obtained;
and storing the first rendering data to the first off-screen buffer.
In some embodiments, the second obtaining unit 420 is specifically configured to:
determining to render media data of the first object off-screen if the number of the plurality of objects is less than or equal to the number of the at least one texture rendering node.
In some embodiments, the second obtaining unit 420 is specifically configured to:
determining a list of visible objects using the window node if the number of the plurality of objects is greater than the number of the at least one texture rendering node; the visible object list comprises rendering objects of the at least one texture rendering node, and the number of the objects in the visible object list is less than or equal to the number of the at least one texture rendering node;
In the case that the list of visible objects includes the first object, determining to render media data of the first object off-screen.
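The decision logic of these two embodiments can be summarized in one hypothetical pure function (assuming, as the embodiments do, that the number of texture rendering nodes does not exceed the maximum allowed number of objects in the display page; the function and parameter names are illustrative only):

function shouldRenderOffscreen(
  objectId: string,
  totalObjectCount: number,  // number of the plurality of objects
  textureNodeCount: number,  // number of texture rendering nodes
  visibleIds: Set<string>,   // visible object list from the window node
): boolean {
  if (totalObjectCount <= textureNodeCount) {
    return true;  // every object's media data can be rendered off-screen
  }
  return visibleIds.has(objectId); // otherwise render only visible objects
}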
In some embodiments, the second obtaining unit 420 is further configured to:
for each object in the list of visible objects, an off-screen buffer is maintained.
In some embodiments, the display page includes a display area for each object in the list of visible objects.
In some embodiments, the second obtaining unit 420 is specifically configured to:
determining, with the window node, the list of visible objects in response to a page creation operation for the display page; the visible object list includes objects in a page displayed after the page creation operation is performed.
In some embodiments, the second obtaining unit 420 is specifically configured to:
determining the list of visible objects with the window node in response to a page flip operation for the display page; the list of visible objects includes objects in the displayed page after the page turning operation is performed.
In some embodiments, before the second obtaining unit 420 receives the media data of the plurality of objects, the second obtaining unit is further configured to:
a subscription request is sent to the data server, the subscription request being for subscribing to media data belonging to the plurality of objects.
In some embodiments, the application is an instant talk application and the plurality of objects includes all session objects that communicate through the instant talk application.
It should be understood that apparatus embodiments and method embodiments may correspond with each other and that similar descriptions may refer to the method embodiments. To avoid repetition, no further description is provided here. Specifically, the cross-platform rendering device 400 may correspond to a corresponding main body in the method 200 for executing the embodiment of the present application, and each unit in the cross-platform rendering device 400 is for implementing a corresponding flow in the method 200, and for brevity, will not be described herein.
It should be further understood that, in the cross-platform rendering apparatus 400 according to the embodiment of the present application, the units are divided based on logical functions. In practical applications, the function of one unit may be implemented by multiple units, the functions of multiple units may be implemented by one unit, and these functions may even be implemented with the assistance of one or more other units. For example, some or all of the units in the cross-platform rendering apparatus 400 may be combined into one or several additional units. For another example, one or more units in the cross-platform rendering apparatus 400 may be further split into multiple units with smaller functions, which can achieve the same operation without affecting the technical effects of the embodiments of the present application. For another example, the cross-platform rendering apparatus 400 may also include other units; in practical applications, these functions may be implemented with the assistance of other units and by the cooperation of multiple units.
According to another embodiment of the present application, the cross-platform rendering apparatus 400 may be constructed by running a computer program (including program code) capable of executing the steps involved in the corresponding methods on a general-purpose computing device, such as a computer that includes processing elements and storage elements such as a central processing unit (CPU), a random access storage medium (RAM), and a read-only storage medium (ROM), thereby implementing the cross-platform rendering method of the embodiment of the present application. The computer program may be recorded on a computer-readable storage medium, loaded into an electronic device, and executed there to implement the corresponding method of the embodiment of the present application. In other words, the units referred to above may be implemented in hardware, by instructions in software, or by a combination of hardware and software. Specifically, each step of the method embodiments of the present application may be completed by an integrated logic circuit of hardware in a processor and/or by instructions in software form; the steps of the methods disclosed in connection with the embodiments of the present application may be directly executed by a hardware processor, or by a combination of hardware and software modules in the processor. Alternatively, the software module may reside in a storage medium well established in the art, such as random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, or registers. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the method embodiments referred to above in combination with its hardware.
Fig. 10 is a schematic structural diagram of an electronic device 500 provided in an embodiment of the present application.
As shown in fig. 10, the electronic device 500 includes at least a processor 510 and a computer-readable storage medium 520. Wherein the processor 510 and the computer-readable storage medium 520 may be connected by a bus or other means. The computer-readable storage medium 520 is used to store a computer program 521, the computer program 521 including computer instructions, and the processor 510 is used to execute the computer instructions stored in the computer-readable storage medium 520. Processor 510 is a computing core and a control core of electronic device 500 that are adapted to implement one or more computer instructions, in particular to load and execute one or more computer instructions to implement a corresponding method flow or a corresponding function.
By way of example, the processor 510 may also be referred to as a central processing unit (Central Processing Unit, CPU). Processor 510 may include, but is not limited to: general-purpose processors, digital signal processors (Digital Signal Processor, DSP), application-specific integrated circuits (Application Specific Integrated Circuit, ASIC), field-programmable gate arrays (Field Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, and so forth.
By way of example, computer-readable storage medium 520 may be a high-speed RAM memory or a non-volatile memory (Non-Volatile Memory), such as at least one magnetic disk memory; alternatively, it may be at least one computer-readable storage medium located remotely from the aforementioned processor 510. In particular, computer-readable storage medium 520 includes, but is not limited to: volatile memory and/or non-volatile memory. The non-volatile memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash memory. The volatile memory may be a Random Access Memory (RAM), which acts as an external cache. By way of example, and not limitation, many forms of RAM are available, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), and Direct Rambus RAM (DR RAM).
As shown in fig. 10, the electronic device 500 may also include a transceiver 530.
The processor 510 may control the transceiver 530 to communicate with other devices, and in particular, may send information or data to other devices or receive information or data sent by other devices. The transceiver 530 may include a transmitter and a receiver. The transceiver 530 may further include antennas, the number of which may be one or more.
It should be appreciated that the various components in the electronic device 500 are connected by a bus system that includes, in addition to a data bus, a power bus, a control bus, and a status signal bus. It is noted that the electronic device 500 may be any electronic device having data processing capabilities. The computer-readable storage medium 520 stores first computer instructions, which are loaded and executed by the processor 510 to implement the corresponding steps in the method 200 provided by the embodiments of the present application; for brevity, these steps are not repeated here.
According to another aspect of the present application, an embodiment of the present application provides a chip. The chip may be an integrated circuit chip with signal processing capability, and may implement or execute the methods, steps, and logic blocks disclosed in the embodiments of the present application. The chip may also be referred to as a system-level chip, a system chip, a chip system, or a system-on-chip, etc. The chip can be applied to various electronic devices capable of mounting the chip, so that a device mounted with the chip can perform the corresponding steps of the methods or logic blocks disclosed in the embodiments of the present application. For example, the chip may be adapted to implement one or more computer instructions, in particular to load and execute one or more computer instructions to implement the corresponding method flow or corresponding functions.
According to another aspect of the present application, an embodiment of the present application provides a computer-readable storage medium (Memory). The computer-readable storage medium is a memory device of a computer for storing programs and data. It is understood that the computer readable storage medium herein may include a built-in storage medium in a computer, and of course, may include an extended storage medium supported by a computer. The computer-readable storage medium provides a storage space that stores an operating system of the electronic device. The memory space holds computer instructions adapted to be loaded and executed by a processor, which when read and executed by the processor of a computer device, cause the computer device to perform the respective steps of the methods or logic blocks disclosed in the embodiments of the present application.
According to another aspect of the application, embodiments of the application provide a computer program product or computer program. The computer program product or computer program includes computer instructions stored in a computer-readable storage medium. The processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, causing the computer device to perform the respective steps of the methods or logic blocks disclosed in the embodiments of the present application. In other words, when the solution provided by the present application is implemented using software, it may be implemented in whole or in part in the form of a computer program product or a computer program comprising one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes of the embodiments of the present application are run, in whole or in part, or the functions of the embodiments of the present application are implemented.
It is noted that the computer to which the present application relates may be a general purpose computer, a special purpose computer, a computer network, or other programmable apparatus. The computer instructions according to the present application may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another computer readable storage medium, for example, from one website, computer, server, or data center by a wired (e.g., coaxial cable, optical fiber, digital subscriber line (digital subscriber line, DSL)) or wireless (e.g., infrared, wireless, microwave, etc.) means.
Those of ordinary skill in the art will appreciate that the elements and process steps of the examples described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. In other words, the skilled person may use different methods for each specific application to achieve the described functionality, but such implementation should not be considered to be beyond the scope of the present application.
Finally, it should be noted that the above is only a specific embodiment of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily think about the changes or substitutions within the technical scope of the present application, and the changes or substitutions are all covered by the scope of the present application. Therefore, the protection scope of the application is subject to the protection scope of the claims. For example, the individual technical features described in the above-described embodiments may be combined in any suitable manner without contradiction. As another example, any combination of the various embodiments of the present application may be made without departing from the basic idea of the present application, which should also be regarded as the disclosure of the present application.

Claims (14)

1. A cross-platform rendering method, comprising:
acquiring a User Interface (UI) tree structure of an application program;
the UI tree structure comprises window nodes, wherein child nodes of the window nodes comprise at least one texture rendering node, and a first off-screen buffer area and a first display area in a display page are associated with a first texture rendering node in the at least one texture rendering node;
acquiring first rendering data subjected to off-screen rendering from the first off-screen buffer area;
and drawing the first rendering data to the first display area by using the first texture rendering node.
2. The method of claim 1, wherein the obtaining off-screen rendered first rendering data from the first off-screen buffer comprises:
under the condition that a rendering data update notification is received, acquiring the first rendering data from the first off-screen buffer area; the rendering data update notification includes a flag bit for indicating that the first rendering data has been updated.
3. The method of claim 1, wherein prior to the obtaining the first rendered data that is rendered off-screen, the method further comprises:
receiving media data of a plurality of objects; the plurality of objects includes a first object associated with the first off-screen buffer;
determining whether to render media data of the first object off-screen based on the number of the plurality of objects and the number of the at least one texture rendering node if the number of the at least one texture rendering node is less than or equal to a maximum allowed number of objects in the display page;
under the condition that the off-screen rendering of the media data of the first object is determined, the off-screen rendering of the media data of the first object is performed, and the first rendering data is obtained;
and storing the first rendering data to the first off-screen buffer area.
4. The method of claim 3, wherein the determining whether to render the media data of the first object off-screen based on the number of the plurality of objects and the number of the at least one texture rendering node comprises:
determining to render media data of the first object off-screen if the number of the plurality of objects is less than or equal to the number of the at least one texture rendering node.
5. The method of claim 3, wherein the determining whether to render the media data of the first object off-screen based on the number of the plurality of objects and the number of the at least one texture rendering node comprises:
determining a list of visible objects using the window node if the number of the plurality of objects is greater than the number of the at least one texture rendering node; the visible object list comprises rendering objects of the at least one texture rendering node, and the number of the objects in the visible object list is smaller than or equal to the number of the at least one texture rendering node;
in the case that the list of visible objects includes the first object, determining to render media data of the first object off-screen.
6. The method of claim 5, wherein the method further comprises:
for each object in the list of visible objects, an off-screen buffer is maintained.
7. The method of claim 5, wherein the display page includes a display area for each object in the list of visible objects.
8. The method of claim 5, wherein said determining a list of visible objects using said window node comprises:
determining the visible object list by using the window node in response to a page creation operation for the display page; the visible object list includes objects in a page displayed after the page creation operation is performed.
9. The method of claim 5, wherein said determining a list of visible objects using said window node comprises:
determining the visible object list by using the window node in response to a page turning operation for the display page; the visible object list comprises objects in a displayed page after the page turning operation is executed.
10. The method according to any one of claims 3 to 9, wherein prior to the receiving media data of a plurality of objects, the method further comprises:
a subscription request is sent to a data server, the subscription request being for subscribing to media data belonging to the plurality of objects.
11. The method of any of claims 3 to 9, wherein the application is an instant talk application and the plurality of objects includes all session objects that communicate through the instant talk application.
12. A cross-platform rendering apparatus, comprising:
a first acquisition unit configured to acquire a user interface UI tree structure of an application;
the UI tree structure comprises window nodes, wherein child nodes of the window nodes comprise at least one texture rendering node, and a first off-screen buffer area and a first display area in a display page are associated with a first texture rendering node in the at least one texture rendering node;
a second acquisition unit, configured to acquire first rendering data subjected to off-screen rendering from the first off-screen buffer area;
and the drawing unit is used for drawing the first rendering data to the first display area by utilizing the first texture rendering node.
13. An electronic device, comprising:
a processor adapted to execute a computer program;
a computer readable storage medium having stored therein a computer program which, when executed by the processor, implements the method of any one of claims 1 to 11.
14. A computer readable storage medium storing a computer program for causing a computer to perform the method of any one of claims 1 to 11.
CN202310958752.8A 2023-08-01 2023-08-01 Cross-platform rendering method and device and electronic equipment Active CN116661790B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310958752.8A CN116661790B (en) 2023-08-01 2023-08-01 Cross-platform rendering method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN116661790A true CN116661790A (en) 2023-08-29
CN116661790B CN116661790B (en) 2023-12-22

Family

ID=87721047



Also Published As

Publication number Publication date
CN116661790B (en) 2023-12-22


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
REG Reference to a national code (HK); legal event code: DE; document number: 40092625