CN110045958B - Texture data generation method, device, storage medium and equipment - Google Patents

Texture data generation method, device, storage medium and equipment

Info

Publication number
CN110045958B
CN110045958B
Authority
CN
China
Prior art keywords
rendering
code segment
texture data
kernel
rendering engine
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910309843.2A
Other languages
Chinese (zh)
Other versions
CN110045958A (en)
Inventor
段庆龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201910309843.2A priority Critical patent/CN110045958B/en
Publication of CN110045958A publication Critical patent/CN110045958A/en
Application granted granted Critical
Publication of CN110045958B publication Critical patent/CN110045958B/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 8/00 Arrangements for software engineering
    • G06F 8/30 Creation or generation of source code
    • G06F 8/38 Creation or generation of source code for implementing user interfaces

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Image Generation (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

The embodiments of this application disclose a texture data generation method, apparatus, storage medium, and device, which belong to the field of computer technologies. The method is used in a rendering engine that includes a rendering kernel and at least two rendering engine interfaces, and includes the following steps: creating a shared JS context, where the shared JS context is used to provide a runtime environment for JS code segments; for each of the at least two rendering engine interfaces, acquiring a JS code segment through the rendering engine interface and sending the JS code segment to the rendering kernel; and for each JS code segment, rendering the JS code segment by the rendering kernel based on the shared JS context to obtain texture data corresponding to the JS code segment. The embodiments of this application can reduce memory overhead.

Description

Texture data generation method, device, storage medium and equipment
Technical Field
The embodiments of this application relate to the field of computer technologies, and in particular, to a texture data generation method, apparatus, storage medium, and device.
Background
The graphical interface displayed on a terminal can be obtained by rendering texture data, and the texture data is generated by loading a JavaScript (JS for short) code segment with a rendering engine and then rendering it. For example, if the graphical interface is a sticker that can respond to a user's gesture operation, texture data of the sticker can be generated and the sticker displayed over a video or an image. For example, the sticker may include the question "whether to look good" and the options "yes" and "no".
In the related art, when texture data is generated, a rendering engine and a shared JS context (JSContext) are created, the JS code segment is loaded with the rendering engine, and the texture data is generated based on the shared JS context.
When multiple pieces of texture data need to be generated simultaneously, multiple rendering engines and multiple shared JS contexts need to be created, and creating them consumes considerable memory resources, so the memory overhead of generating the texture data is high.
Disclosure of Invention
The embodiments of this application provide a texture data generation method, apparatus, storage medium, and device, to solve the problem of high memory overhead when multiple rendering engines and multiple shared JS contexts are created simultaneously to generate multiple pieces of texture data. The technical solutions are as follows:
in one aspect, a texture data generating method is provided for use in a rendering engine, the rendering engine including a rendering kernel and at least two rendering engine interfaces, the method including:
creating a shared JS context, wherein the shared JS context is used for providing an operating environment of the JS code segment;
for each rendering engine interface in the at least two rendering engine interfaces, acquiring a JS code segment by the rendering engine interface, and sending the JS code segment to the rendering kernel;
for each JS code segment, rendering, by the rendering kernel, the JS code segment based on the shared JS context to obtain texture data corresponding to the JS code segment.
In one aspect, a method for displaying a graphical interface in a video is provided, and the method includes:
displaying a cover of a video and n stickers located on the upper layer of the cover, wherein each sticker is obtained by rendering texture data, and n is greater than or equal to 2;
when a first operation on the cover is received, displaying each video frame in the video and the n stickers on the upper layer of the cover;
and when a second operation on any one of the n stickers is received, replacing the displayed sticker with the functional interface of the sticker, wherein the functional interface includes the feedback information collected by the sticker for the second operation.
In one aspect, there is provided a texture data generating apparatus for use in a rendering engine including a rendering kernel and at least two rendering engine interfaces, the apparatus comprising:
the creating module is used for creating a shared JS context which is used for providing an operating environment of the JS code segment;
the sending module is used for controlling each rendering engine interface in the at least two rendering engine interfaces to acquire a JS code segment and sending the JS code segment to the rendering kernel;
and the rendering module is used for controlling the rendering kernel to render, for each JS code segment, the JS code segment based on the shared JS context to obtain texture data corresponding to the JS code segment.
In one aspect, an apparatus for displaying a graphical interface in a video is provided, the apparatus comprising:
the display module is used for displaying a cover of a video and n stickers located on the upper layer of the cover, wherein each sticker is obtained by rendering texture data, and n is greater than or equal to 2;
the display module is further configured to display each video frame in the video and the n stickers located on the upper layer of the cover when receiving a first operation on the cover;
the display module is further used for replacing the displayed sticker with the functional interface of the sticker when a second operation on any one of the n stickers is received, wherein the functional interface contains the feedback information collected by the sticker for the second operation.
In one aspect, there is provided a computer readable storage medium having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded and executed by a processor to implement a texture data generating method as described above.
In one aspect, a texture data generating device is provided, which includes a processor and a memory, where at least one instruction is stored in the memory, and the instruction is loaded and executed by the processor to implement the texture data generating method as described above.
In the embodiments of this application, when the rendering engine includes at least two rendering engine interfaces and one rendering kernel, a shared JS context can be created, each rendering engine interface sends the JS code segment it acquires to the rendering kernel, and the rendering kernel can render each code segment based on the shared JS context to obtain texture data corresponding to each code segment. In this way, multiple pieces of texture data can be generated by creating only one rendering engine and one shared JS context, which solves the problem of high memory overhead when multiple rendering engines and multiple shared JS contexts have to be created simultaneously, thereby saving memory overhead.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of this application, the drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of this application, and those skilled in the art can obtain other drawings based on these drawings without creative effort.
FIG. 1 is an architectural diagram of a rendering system according to some exemplary embodiments;
FIG. 2 is a flow chart of a method of texture data generation according to an embodiment of the present application;
FIG. 3 is a flowchart of a method for generating texture data according to another embodiment of the present application;
FIG. 4 is an architecture diagram of a rendering engine provided by another embodiment of the present application;
FIG. 5 is a schematic diagram of a closure object provided in accordance with another embodiment of the present application;
FIG. 6 is a block diagram of a rendering engine provided in another embodiment of the present application;
FIG. 7 is a flow chart illustrating the display of texture data according to another embodiment of the present application;
FIG. 8 is a graphical representation of experimental data provided in another embodiment of the present application;
FIG. 9 is an architectural diagram illustrating an interactive system in accordance with some exemplary embodiments;
FIG. 10 is a flowchart of a method for displaying a graphical interface in a video, according to an embodiment of the present application;
FIG. 11 is a schematic view of a video cover provided by one embodiment of the present application;
FIG. 12 is a schematic view of a functional interface of a sticker provided by one embodiment of the present application;
FIG. 13 is a schematic view of an interface for capturing video provided by an embodiment of the present application;
FIG. 14 is a schematic view of an interface for selection of a sticker provided by one embodiment of the present application;
FIG. 15 is a block diagram illustrating a structure of a texture data generating apparatus according to an embodiment of the present application;
FIG. 16 is a block diagram of a graphical interface display apparatus in a video according to a further embodiment of the present application;
FIG. 17 is a block diagram of a terminal according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present application more clear, the embodiments of the present application will be further described in detail with reference to the accompanying drawings.
The architecture of the rendering system is described first.
Referring to fig. 1, the rendering system in the present embodiment includes JS code 101, a front-end public library 102, a rendering engine 103, and an operating system platform 104.
The JS code 101 is code that embodies the implementation logic of a service. For example, the JS code 101 may instruct the drawing of a circular pattern, a triangular pattern, or the like.
The front-end public library 102 is a library of some basic classes that can be called by the JS code 101.
The rendering engine 103 is a JS script rendering engine, and loads the JS code 101 on an operating system platform of the terminal, so as to render a graphical interface. The rendering engine includes at least a rendering engine interface, a rendering kernel, and a module interface, such as a Canvas interface.
The rendering engine interface is an interface for rendering and displaying texture data generated by the rendering engine. If the rendering engine is referred to as a QG engine, the rendering engine interface may be QGView.
In this embodiment, the rendering engine interface may be created externally, and the created rendering engine interface may be loaded into the rendering engine 103. The number of rendering engine interfaces is equal to the number of kinds of texture data that need to be generated simultaneously. For example, when one piece of texture data needs to be generated, one rendering engine interface may be created; when three pieces of texture data need to be generated simultaneously, three rendering engine interfaces may be created. The following description takes the creation of at least two rendering engine interfaces as an example.
The rendering kernel includes at least a JS API (Application Programming Interface). The JS API is a communication bridge between the JS language and the operating system language. For example, the JS API can translate JS code into code that the operating system platform can recognize and then call an interface of the operating system platform to execute the code; alternatively, the JS API can translate code that the operating system platform can recognize into JS code and then trigger the JS logic to execute the JS code.
It should be noted that the rendering kernel functions similarly to a browser kernel: it makes it possible to render the JS code locally, just as if the JS code were executed in a browser.
The Canvas interface is a drawing interface for JS API calls. The Canvas interface may provide different communication protocols to communicate with the operating system platform, such as WebGL (Web Graphics Library, 3D drawing protocol) and Canvas2D (2D drawing protocol) as shown in FIG. 1. The Canvas interface implements the drawing function by calling the interface of the operating system platform. For example, the Canvas interface may call OpenGL (Open Graphics Library) in fig. 1.
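For illustration, the following JavaScript sketch shows the kind of JS code segment described above drawing the circular and triangular patterns through the Canvas2D protocol; the function name drawPatterns and the way the canvas object is supplied are assumptions made for this example rather than details from the patent.

```javascript
// Illustrative only: a JS code segment that draws with the Canvas2D protocol.
// The rendering kernel is assumed to hand the code a canvas-like object.
function drawPatterns(canvas) {
  const ctx = canvas.getContext('2d');

  // Circular pattern: a filled circle.
  ctx.fillStyle = '#ff6600';
  ctx.beginPath();
  ctx.arc(100, 100, 50, 0, Math.PI * 2);
  ctx.fill();

  // Triangular pattern: an outlined triangle.
  ctx.strokeStyle = '#3366ff';
  ctx.beginPath();
  ctx.moveTo(200, 60);
  ctx.lineTo(160, 140);
  ctx.lineTo(240, 140);
  ctx.closePath();
  ctx.stroke();
}
```

Such a segment contains only the service logic; how the drawing ultimately reaches the screen (WebGL or Canvas2D, and then OpenGL) is decided by the Canvas interface and the operating system platform.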
The following describes a process of generating texture data by the rendering engine.
Referring to fig. 2, a flowchart of a texture data generating method provided in an embodiment of the present application is shown, where the texture data generating method may be applied to the rendering engine shown in fig. 1. The texture data generation method comprises the following steps:
step 201, a shared JS context is created, and the shared JS context is used for providing an execution environment of the JS code segment.
The shared JS context contains shared environment variables that are used while running the JS code segments, so the shared JS context can serve as the runtime environment in place of a global JS context. That is, the shared JS context is used to provide the runtime environment for the JS code segments.
In this embodiment, a shared JS context may be created by the rendering engine and loaded into the rendering kernel.
Step 202, for each rendering engine interface in the at least two rendering engine interfaces, the rendering engine interface acquires a JS code segment, and sends the JS code segment to the rendering kernel.
When each rendering engine interface is created, a unique identifier (Key) can be allocated to it, so that the JS code segment acquired by the rendering engine interface can be sent to the rendering kernel together with the identifier of the rendering engine interface, and the identifier indicates which rendering engine interface sent the JS code.
And step 203, for each JS code segment, rendering the JS code segment by the rendering kernel based on the shared JS context to obtain texture data corresponding to the JS code segment.
For each JS code segment, the rendering kernel can generate a JS instance from the shared JS context and the JS code segment, and then translate the JS instance into an instance that the operating system platform can recognize; it then generates a Canvas based on the converted instance, calls the Canvas interface through the Canvas, and finally calls OpenGL through the Canvas interface to generate texture data, which is the texture data corresponding to the JS code segment.
If the rendering kernel has obtained the identifier of the rendering engine interface, the rendering kernel can associate the generated JS instance and Canvas with that identifier.
To sum up, according to the texture data generation method provided by the embodiments of this application, when the rendering engine includes at least two rendering engine interfaces and one rendering kernel, a shared JS context can be created, each rendering engine interface sends the JS code segment it acquires to the rendering kernel, and the rendering kernel can render each code segment based on the shared JS context to obtain texture data corresponding to each code segment. In this way, multiple pieces of texture data can be generated by creating only one rendering engine and one shared JS context, thereby saving memory overhead.
Referring to fig. 3, a flowchart of a texture data generating method according to another embodiment of the present application is shown, where the texture data generating method can be applied to the rendering engine shown in fig. 1. The texture data generation method comprises the following steps:
step 301, a shared JS context is created, which is used to provide the execution environment for the JS code segment.
The shared JS context contains shared environment variables that are used while running the JS code segments, so the shared JS context can serve as the runtime environment in place of a global JS context. That is, the shared JS context is used to provide the runtime environment for the JS code segments.
In this embodiment, a shared JS context may be created by the rendering engine and loaded into the rendering kernel.
Step 302, for each rendering engine interface in the at least two rendering engine interfaces, the rendering engine interface acquires a JS code segment, and sends the JS code segment to the rendering kernel.
When each rendering engine interface is created, a unique identifier (Key) can be allocated to it, so that the JS code segment acquired by the rendering engine interface can be sent to the rendering kernel together with the identifier of the rendering engine interface, and the identifier indicates which rendering engine interface sent the JS code.
Optionally, the rendering engine may further include an Agent located between the rendering engine interfaces and the rendering kernel. Each rendering engine interface may send its JS code segment to the Agent, and the Agent assigns a unique identifier to each rendering engine interface and sends each JS code segment together with the identifier of the corresponding rendering engine interface to the rendering kernel, as shown in FIG. 4.
After acquiring the JS code segments and the identifier of each rendering engine interface, the rendering kernel creates a mapping table that stores the correspondence between each JS code segment sent by a rendering engine interface and the identifier of that rendering engine interface.
As shown in FIG. 4, rendering engine interface A acquires JS code segment A and sends it to the Agent; the Agent generates KeyA for rendering engine interface A and sends JS code segment A together with KeyA to the rendering kernel, and the rendering kernel creates the correspondence between JS code segment A and KeyA in the mapping table. Rendering engine interface B acquires JS code segment B and sends it to the Agent; the Agent generates KeyB for rendering engine interface B and sends JS code segment B together with KeyB to the rendering kernel, and the rendering kernel creates the correspondence between JS code segment B and KeyB in the mapping table.
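To make the key assignment and mapping table above concrete, here is a minimal JavaScript sketch; the class names Agent and RenderingKernel and the key format are assumptions for this illustration and do not appear in the patent.

```javascript
// Illustrative sketch: the Agent assigns a unique key per rendering engine
// interface and forwards each JS code segment with its key; the rendering
// kernel records the correspondence in a mapping table.
class RenderingKernel {
  constructor() {
    this.mappingTable = new Map(); // key -> { codeSegment, texture }
  }
  receive(key, codeSegment) {
    this.mappingTable.set(key, { codeSegment, texture: null });
  }
}

class Agent {
  constructor(kernel) {
    this.kernel = kernel;
    this.keys = new Map(); // rendering engine interface -> key
    this.nextId = 0;
  }
  send(engineInterface, codeSegment) {
    if (!this.keys.has(engineInterface)) {
      // First contact: assign a unique identifier (KeyA, KeyB, ...).
      this.keys.set(engineInterface, 'Key' + String.fromCharCode(65 + this.nextId++));
    }
    this.kernel.receive(this.keys.get(engineInterface), codeSegment);
  }
}

// Usage mirroring FIG. 4.
const kernel = new RenderingKernel();
const agent = new Agent(kernel);
agent.send('renderingEngineInterfaceA', 'JS code segment A'); // stored under KeyA
agent.send('renderingEngineInterfaceB', 'JS code segment B'); // stored under KeyB
```

The mapping table later lets the kernel attach the generated texture data to the key of the interface that supplied the code segment.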
Step 303, for each JS code segment, the rendering kernel acquires a closure object corresponding to the JS code segment, where the closure object is used to isolate the rendering of each JS code segment in the rendering kernel.
The closure object can be understood as a closure macro QGDefineView(function(window){ }). The closure property of this macro encapsulates the execution environment of a JS instance and isolates it from the execution environments of other JS instances, which prevents a JS instance running in the same shared JS context from modifying shared environment variables in the shared JS context and thereby corrupting the execution logic and generated data of other JS instances.
Referring to FIG. 5, JS instance A and JS instance B share the shared environment variable X in the same shared JS context. If JS instance A modifies the value of the shared environment variable X to 20 while JS instance B requires the value of X to be 30, the running logic and generated data of JS instance B will be in error.
Step 304, the rendering kernel loads the JS code segment into the closure object.
The rendering kernel loads each JS code segment into its corresponding closure object. For example, if JS code segment A corresponds to closure object A and JS code segment B corresponds to closure object B, the rendering kernel loads JS code segment A into closure object A and JS code segment B into closure object B.
In step 305, the rendering kernel sends the shared environment variable in the shared JS context, which is the environment variable shared by all JS code segments, to the closure object.
The rendering kernel can send the shared environment variables in the shared JS context to the closure object through the Window parameter. For example, the rendering kernel sends the shared environment variable X in FIG. 5, whose value is 10, to the closure object.
And step 306, the closure object stores the shared environment variable, and modifies part or all of the shared environment variables to obtain the local environment variable.
The closure object may store the shared environment variable, i.e., the closure object copies the shared environment variable in the Window parameter into itself.
The closure object can modify some or all of the shared environment variables stored inside it to obtain local environment variables, and the values of the local environment variables do not affect the values of the shared environment variables outside the closure object. For example, JS instance A may modify the value of the shared environment variable X stored in closure object A to 20, and JS instance B may modify the value of the shared environment variable X stored in closure object B to 30; at this time the value of the shared environment variable X stored in the shared JS context is still 10.
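The following JavaScript sketch illustrates steps 303 to 306. Only the macro name QGDefineView, the Window parameter, and the variable X with the values 10, 20, and 30 come from the description; the implementation of the macro shown here is an assumption made purely for illustration.

```javascript
// Shared JS context with the shared environment variable X = 10 (as in FIG. 5).
const sharedContext = { X: 10 };

// Illustrative stand-in for the closure macro QGDefineView(function(window){ }).
function QGDefineView(body) {
  // Copy the shared environment variables into a per-instance "window" object,
  // so modifications stay local to this closure object.
  const window = Object.assign({}, sharedContext);
  body(window);
}

// JS instance A modifies its local copy of X to 20.
QGDefineView(function (window) {
  window.X = 20;
  console.log('instance A: X =', window.X); // 20
});

// JS instance B modifies its local copy of X to 30.
QGDefineView(function (window) {
  window.X = 30;
  console.log('instance B: X =', window.X); // 30
});

console.log('shared JS context: X =', sharedContext.X); // still 10
```

Because each closure keeps its own copy of the environment variables, the two JS instances can run in the same shared JS context without corrupting each other's data.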
Step 307: rendering the JS code segment by the rendering kernel based on the local environment variables in the closure object to obtain texture data corresponding to the JS code segment.
For each JS code segment, the rendering kernel can generate a JS instance from the shared JS context and the JS code segment, and then translate the JS instance into an instance that the operating system platform can recognize; it then generates a Canvas based on the converted instance, calls the Canvas interface through the Canvas, and finally calls OpenGL through the Canvas interface to generate texture data, which is the texture data corresponding to the JS code segment. That is, for each JS code segment, the rendering kernel associates the obtained texture data with the identifier corresponding to that JS code segment in the mapping table.
If the rendering kernel has obtained the identifier of the rendering engine interface, the rendering kernel can associate the generated JS instance and Canvas with the identifier corresponding to the JS code segment in the mapping table.
In this embodiment, multiple rendering engine interfaces can be created simultaneously, each rendering engine interface loads one JS code segment, and each JS code segment generates one JS instance and one Canvas, so that different JS instances and Canvases run in the same shared JS context, which saves memory overhead when multiple pieces of texture data are generated simultaneously.
After generating the texture data, the rendering engine may output the texture data; in this case the texture data is invisible to the user, and other tools need to be invoked to render and display it. Alternatively, the rendering engine may render and display the texture data itself so as to present a direct visual effect to the user, in which case the following steps 308 to 312 are performed.
Step 308, for each rendering engine interface of the at least two rendering engine interfaces, the rendering engine interface obtains corresponding texture data from the rendering kernel.
In a possible implementation manner, after obtaining texture data, the rendering kernel may send the texture data and the corresponding identifier to the agent, and the agent determines a rendering engine interface according to the identifier and then sends the texture data to the rendering engine interface. In another possible implementation manner, the rendering engine interface may request texture data from an agent, the agent sends an acquisition request carrying an identifier of the rendering engine interface to the rendering kernel, the rendering kernel finds the texture data corresponding to the identifier from the mapping table, sends the texture data and the identifier to the agent, the agent determines the rendering engine interface according to the identifier, and then sends the texture data to the rendering engine interface. Of course, the rendering engine interface may also obtain texture data from the rendering kernel in other manners, which is not limited in this embodiment.
In this embodiment, the rendering engine interface may internally monitor a refresh signal of the display interface, and the number of times the refresh signal is sent equals the number of times the display interface is refreshed. For example, if the display interface is refreshed 60 times within 1 second, the refresh signal is also sent 60 times. When the rendering engine interface detects the refresh signal, it may retrieve texture data from the rendering kernel for display on the display interface. Alternatively, the rendering engine interface may obtain the texture data from the rendering kernel independently of the refresh signal; the time at which the texture data is obtained is not limited in this embodiment.
Referring to fig. 6, after acquiring the JS code segment, the rendering engine may send the JS code segment to the rendering kernel, the rendering kernel calls the Canvas interface, the Canvas interface calls OpenGL of the operating system platform to generate texture data, and finally the texture data is sent to the rendering engine interface.
In step 309, the rendering engine interface stores each frame of data in the texture data into a rendering buffer.
The texture data in this embodiment includes multiple frames of data, and each frame of data may be stored, in rendering order, into the rendering buffer in the rendering engine interface. That is, a data queue is stored in the rendering buffer, and data earlier in the queue is rendered before data later in the queue.
If, in step 308, the rendering engine interface acquires the texture data when the refresh signal is detected, step 311 is performed after step 309; if, in step 308, the rendering engine interface does not acquire the texture data according to the refresh signal, step 310 is performed after step 309.
In step 310, the rendering engine interface monitors a refresh signal of the display interface.
In step 311, when the refresh signal is monitored, the rendering engine interface sends the earliest buffered frame data in the rendering buffer to the frame buffer, and the frame buffer is bound to the rendering layer.
In step 312, the rendering layer renders a frame of data in the frame buffer, and displays the rendered data in the display interface.
The render buffer may have multiple frames of data stored therein, and the render engine interface may send the oldest buffered frame of data into the frame buffer. That is, when a data queue is stored in the render buffer, the first data in the data queue is sent to the frame buffer. After the sending is completed, the rendering engine interface deletes the data in the data queue.
In this embodiment, the frame buffer and the rendering layer are bound when the rendering engine interface is created, so that each time a frame of data is stored in the frame buffer, the rendering layer may read the frame of data from the frame buffer, render it, and display the rendered data on the display interface. The rendering layer referred to here may be a CALayer rendering layer.
Referring to FIG. 7, the rendering engine interface includes a rendering buffer, a frame buffer, and a rendering layer. The rendering engine interface may obtain texture data from the rendering kernel and store it in the rendering buffer; when the refresh signal is detected, it sends one frame of data to the frame buffer, and the CALayer renders that frame of data in the frame buffer and displays it on the display interface.
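To make the buffering flow of steps 309 to 312 concrete, the following JavaScript-style sketch models the rendering buffer as a FIFO queue that moves one frame into the frame buffer on each refresh signal; the class and method names are assumptions, and in the actual engine these components are native (for example, the CALayer rendering layer).

```javascript
// Illustrative sketch of the render buffer / frame buffer flow shown in FIG. 7.
class RenderingEngineInterface {
  constructor(renderingLayer) {
    this.renderBuffer = [];          // step 309: FIFO data queue of frame data
    this.frameBuffer = null;         // bound to the rendering layer on creation
    this.renderingLayer = renderingLayer;
  }
  storeFrame(frameData) {
    this.renderBuffer.push(frameData);            // enqueue in rendering order
  }
  onRefreshSignal() {                             // steps 310-311: one call per display refresh
    if (this.renderBuffer.length === 0) return;
    this.frameBuffer = this.renderBuffer.shift(); // earliest cached frame, removed from the queue
    this.renderingLayer.render(this.frameBuffer); // step 312: the bound layer renders and displays it
  }
}

// Minimal usage with a stand-in rendering layer.
const view = new RenderingEngineInterface({ render: (f) => console.log('display', f) });
view.storeFrame('frame 1');
view.storeFrame('frame 2');
view.onRefreshSignal(); // displays "frame 1" first (earliest cached)
view.onRefreshSignal(); // then "frame 2"
```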
To sum up, according to the texture data generation method provided by the embodiments of this application, when the rendering engine includes at least two rendering engine interfaces and one rendering kernel, a shared JS context can be created, each rendering engine interface sends the JS code segment it acquires to the rendering kernel, and the rendering kernel can render each code segment based on the shared JS context to obtain texture data corresponding to each code segment. In this way, multiple pieces of texture data can be generated by creating only one rendering engine and one shared JS context, thereby saving memory overhead.
The closure object encapsulates the runtime environment of a JS instance and isolates it from the runtime environments of other JS instances, preventing a JS instance running in the same shared JS context from modifying shared environment variables in the shared JS context and thereby corrupting the running logic and generated data of other JS instances, which improves the running accuracy of the JS instances. In addition, the rendering effects and human-computer interactions of different JS instances do not affect each other.
The rendering engine interface displays the texture data, so that the problem that the texture data is invisible to a user can be avoided, and the texture data can be visually presented on the display interface.
Next, experiments were performed on an Apple iPhone 6 handset for the two implementations, i.e., the related art and the embodiment of this application, and the experimental data obtained are shown in FIG. 8. The solid line represents the memory increment when multiple pieces of texture data are generated simultaneously in the related art, and the dotted line represents the memory increment when multiple pieces of texture data are generated simultaneously in the embodiment of this application. Analysis of the experimental data shows that the average increment per QGView is 3340 KB before optimization and 337 KB after optimization; since (3340 - 337) / 3340 is approximately 90%, the memory overhead is reduced by about 90%.
An application scenario of the texture data generation method is described below.
The texture data can be rendered into a graphical interface and superimposed on any display interface for display. For example, the graphical interface may be displayed superimposed on an image, or superimposed on a video frame of a video. The following describes a method for displaying a graphical interface in a video, taking as an example that the graphical interface is a sticker and the sticker is displayed superimposed on a video frame.
When the video is a video stored in the terminal, the method for displaying a graphical interface in a video can be performed by the terminal; when the video is obtained by the terminal over a network, the method can be performed jointly by the terminal and the server.
Please refer to fig. 9, which illustrates a schematic structural diagram of an interactive system according to an embodiment of the present application. The interactive system comprises a terminal 910 and a server 920, wherein the terminal 910 comprises a rendering engine. The terminal 910 establishes a connection with the server 920 through a wired network or a wireless network.
The terminal 910 may be a mobile phone, a tablet computer, an e-book reader, smart glasses, a smart watch, an MP3 player (Moving Picture Experts Group Audio Layer III), an MP4 player (Moving Picture Experts Group Audio Layer IV), a laptop computer, a desktop computer, or the like.
The terminal 910 has a client installed therein, and the client may create a video including a graphical interface and distribute the video to the server, or the client may obtain the distributed video including the graphical interface from the server.
The server 920 is a background server of the client, and may be a server or a server cluster formed by multiple servers or a cloud computing center.
In the embodiments of this application, the number of terminals 910 is at least one, and FIG. 9 illustrates only one terminal 910 and one server 920.
Referring to fig. 10, a flowchart of a method for displaying a graphical interface in a video according to an embodiment of the present application is shown, where the method for displaying a graphical interface in a video can be applied in a terminal or in the interactive system shown in fig. 9. The display method of the graphical interface in the video comprises the following steps:
step 1001, displaying a cover of a video and n pieces of pasting paper positioned on the upper layer of the cover, wherein the pasting paper is obtained by rendering texture data, and n is larger than or equal to 2.
The video may be stored in the terminal, or may be acquired by the terminal from a distribution platform of the server, and the source of the video is not limited in this embodiment.
The cover of the video is usually the first video frame of the video, and may also be other video frames in the video, and may also be other images, and the like, which is not limited in this embodiment.
In this embodiment, the terminal may display the n stickers on the cover of the video through steps 301 to 310, where each piece of texture data corresponds to one sticker.
A sticker is a graphical interface obtained by loading a JS code segment with the rendering engine and rendering it, and the graphical interface can be superimposed on the upper layer of another display interface for display.
Optionally, the sticker may contain no operation option, in which case the sticker does not respond to user operations. Alternatively, the sticker may contain an operation option, in which case the sticker can respond to user operations. When the terminal is a touch terminal, the sticker can respond to a user's tap gesture.
Stickers implementing different functions may have different modes of operation. Referring to FIG. 11, the first view in FIG. 11 shows the desktop of the terminal, which includes an icon 1101 of the client "xq"; when the user clicks the icon 1101, the display interface jumps to the second view. The second view displays the message list in "xq" and a "dynamic" control 1102; when the user clicks the "dynamic" control, the display interface jumps to the third view. The third view provides a portal 1103 of "friend x view"; when the user clicks the portal 1103, the display interface jumps to the fourth view. The fourth view shows the page "friend x view". The description uses an example in which two stickers are displayed on the cover 1104 of the video. The upper sticker 1105 is a scoring sticker that includes the text "score" and 5 scoring options, and the 5 scoring options are operable. The lower sticker 1106 is a voting sticker that includes the question "whether to look good" and the options "yes" and "no", and the "yes" and "no" options are operable.
Step 1002, when a first operation on a cover is received, displaying each video frame in a video and n stickers on the upper layer of the cover.
The user controls the terminal to play the video by triggering a first operation, which may be an operation of clicking the front cover, an operation of double-clicking the front cover, an operation of sliding on the front cover, and the like, and the embodiment is not limited.
In this embodiment, when each video frame is displayed, the n pieces of texture data may be acquired, and the n stickers corresponding to the n pieces of texture data are displayed superimposed on the video frame.
Step 1003: when a second operation on any one of the n stickers is received, replacing the displayed sticker with the functional interface of the sticker, wherein the functional interface includes the feedback information collected by the sticker for the second operation.
The feedback information is the feedback that the user gives to the sticker for the video content, and it is related to the sticker's mode of operation. When the sticker contains an input box, the feedback information may be information entered for the content of the sticker, for example the user's impressions after watching. When the sticker contains options, the feedback information may be statistics collected on the number of clicks of each option. For example, when the sticker is a scoring sticker, the feedback information may be the total score obtained by statistics and the number of people who participated in scoring; when the sticker is a voting sticker, the feedback information may be the percentage of votes for each option and the number of people who participated in the vote.
The user operates the sticker by triggering a second operation, which may be an operation of clicking an option in the sticker, an operation of double clicking an option in the sticker, and the like, and this embodiment is not limited.
For example, when the sticker is a scoring sticker, if the user selects one of the 5 scoring options, the functional interface of the sticker is displayed at the display position of the sticker when the next video frame is displayed, and the functional interface includes the total scoring values obtained by all scoring operations performed on the sticker currently, and may further include the number of people who participate in scoring.
For example, when the sticker is the voting sticker 1106, if the user selects one of the options "yes" and "no", the display interface jumps from the fourth view in FIG. 11 to FIG. 12; that is, when the next video frame is displayed, the functional interface of the sticker is displayed at the display position of the sticker, and the functional interface includes the voting results obtained from all voting operations performed on the sticker so far. As shown in FIG. 12, the functional interface of the lower sticker 1206 displays the selection percentages of "yes" and "no" and the number of people who participated in the vote.
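As an illustration of the feedback information described above, the following sketch shows how a voting sticker might tally second operations and produce the data for its functional interface; the object structure and field names are assumptions made for this example.

```javascript
// Illustrative voting sticker: counts votes (second operations) and exposes
// the feedback information shown in its functional interface (see FIG. 12).
const votingSticker = {
  question: 'whether to look good',
  votes: { yes: 0, no: 0 },
  vote(option) {                 // second operation: the user taps "yes" or "no"
    this.votes[option] += 1;
  },
  functionalInterface() {        // feedback info displayed in place of the sticker
    const total = this.votes.yes + this.votes.no;
    const pct = (n) => (total ? Math.round((100 * n) / total) : 0);
    return {
      participants: total,
      yesPercent: pct(this.votes.yes),
      noPercent: pct(this.votes.no),
    };
  },
};

votingSticker.vote('yes');
votingSticker.vote('no');
votingSticker.vote('yes');
console.log(votingSticker.functionalInterface());
// { participants: 3, yesPercent: 67, noPercent: 33 }
```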
In summary, according to the method for displaying a graphical interface in a video provided by the embodiments of this application, n stickers can be displayed simultaneously on the upper layer of a video frame, which enriches the sticker content in the video frame and improves the user experience.
Different stickers are rendered by different rendering engine interfaces, so that the operation between different stickers is not influenced mutually.
Optionally, the user may also make the video and the corresponding sticker through the terminal, and the making process is as follows:
step 1, displaying a first shooting control.
Referring to fig. 11, a first view in fig. 11 shows a desktop of the terminal, the desktop includes an icon 1101 of the client "xq", and when the user clicks the icon 1101, the display interface jumps to a second view; the second view displays a message list in the 'xq' and a 'dynamic' control 1102, and when a user clicks the 'dynamic' control, the display interface jumps to the third view; the third view shows a portal 1103 of "buddy x view", and when the user clicks the portal 1103 of "buddy x view", the display interface jumps to the first view in fig. 13, and a first shooting control 1301 is displayed in the first view in fig. 13.
And step 2, when a fifth operation on the first shooting control is received, displaying a second shooting control and a preview interface, such as the second shooting control 1302 and the preview interface 1303 in the second view in fig. 13.
The user controls the client to jump from the dynamic interface to the shooting interface by triggering a fifth operation, where the fifth operation may be an operation of clicking the first shooting control, an operation of double-clicking the first shooting control, and so on, and this embodiment is not limited. For example, when the user clicks the first capture control 1301, the display interface jumps to the second view in fig. 13.
Step 3, shooting a video when receiving a sixth operation on the second shooting control; and when the seventh operation of the second shooting control is received, ending the video shooting.
The user controls the camera to start shooting the video by triggering a sixth operation, where the sixth operation may be an operation of clicking the second shooting control, an operation of double clicking the second shooting control, and so on, and this embodiment is not limited. For example, when the user clicks the second photographing control 1302, the photographing of the video is started.
The user controls the camera to stop shooting the video by triggering a seventh operation, where the seventh operation may be an operation of clicking the second shooting control, an operation of double clicking the second shooting control, and so on, and this embodiment is not limited. For example, when the user clicks the second capture control 1302 again, the capture of the video is stopped and the display jumps to the first view in FIG. 14.
Step 4, when the video shooting is finished, a sticker selection control is displayed, such as the sticker selection control 1401 in the first view in fig. 14.
Step 5: when a third operation on the sticker selection control is received, displaying m stickers, where m is greater than or equal to n.
The user controls the client to display the sticker to be selected by triggering a third operation, where the third operation may be an operation of clicking the sticker selection control, an operation of double clicking the sticker selection control, and the like, and this embodiment is not limited. For example, when the user clicks on the sticker selection control 1401, the display jumps to the second view in FIG. 14. As shown in the second view in fig. 14, the m stickers include a voting sticker 1402 and a scoring sticker 1403.
Step 6: when a selection operation on any n of the m stickers is received, displaying the cover of the video, the n stickers, and a publish control.
The user controls the client to select the sticker displayed on the video frame by triggering a selection operation. For example, when the user clicks on the vote sticker 1402 and score sticker 1403, the display interface jumps to the third view in fig. 14, with the vote sticker 1402 and score sticker 1403 displayed on the top layer of the cover of the video.
Step 7: when a fourth operation on the publish control is received, publishing the video together with the n stickers.
The user controls the client to publish the video and the n stickers to the distribution platform in the server by triggering the fourth operation, where the fourth operation may be an operation of clicking the publish control, an operation of double-clicking the publish control, and the like; this embodiment is not limited thereto.
Referring to fig. 15, a block diagram of a texture data generating apparatus provided in an embodiment of the present application is shown, where the texture data generating apparatus may be applied to a rendering engine, and the rendering engine includes a rendering kernel and at least two rendering engine interfaces. The texture data generation device includes:
a creating module 1510, configured to create a shared JS context, where the shared JS context is used to provide an execution environment for the JS code segment;
the sending module 1520, configured to control, for each rendering engine interface of the at least two rendering engine interfaces, the rendering engine interface to obtain one JS code segment, and send the JS code segment to the rendering kernel;
and the rendering module 1530 is used for controlling the rendering kernel to render the JS code segment based on the shared JS context for each JS code segment, so as to obtain texture data corresponding to the JS code segment.
Optionally, the rendering module 1530 is further configured to:
for each JS code segment, controlling a rendering kernel to obtain a closure object corresponding to the JS code segment, wherein the closure object is used for isolating the rendering of each JS code segment in the rendering kernel;
the rendering kernel is controlled to load the JS code segment into the closure object;
and the control rendering kernel renders the JS code segment in the closure object based on the shared JS context to obtain texture data corresponding to the JS code segment.
Optionally, the rendering module 1530 is further configured to:
the rendering control kernel sends a shared environment variable in the shared JS context to the closure object, wherein the shared environment variable is an environment variable shared by all the JS code segments;
the control closure object stores the shared environment variable and modifies part or all of the shared environment variables to obtain a local environment variable;
and the control rendering kernel renders the JS code segment based on the local environment variable in the closure object to obtain texture data corresponding to the JS code segment.
Optionally, the rendering module 1530 is further configured to:
controlling a rendering kernel to acquire the identifier of each rendering engine interface;
controlling a rendering kernel to create a mapping table, wherein the mapping table stores a corresponding relation between a JS code segment sent by a rendering engine interface and an identifier of the rendering engine interface;
and for each JS code segment, controlling a rendering kernel to render the JS code segment based on the shared JS context, and corresponding the obtained texture data to the identification corresponding to the JS code segment in the mapping table.
Optionally, the apparatus further comprises:
the obtaining module is used for controlling the rendering kernel to render the JS code segments based on the shared JS context for each JS code segment to obtain texture data corresponding to the JS code segment, and controlling the rendering engine interface to obtain the corresponding texture data from the rendering kernel for each rendering engine interface in at least two rendering engine interfaces;
and the display module is used for controlling the rendering engine interface to render and display the texture data.
Optionally, the display module is further configured to:
controlling a rendering engine interface to store each frame data in the texture data into a rendering buffer area;
controlling a rendering engine interface to monitor a refreshing signal of a display interface;
when a refresh signal is monitored, controlling a rendering engine interface to send data of a frame cached earliest in a rendering buffer area to a frame buffer area, and binding the frame buffer area with a rendering layer;
and the rendering layer renders the frame data in the frame buffer area and displays the rendered data in the display interface.
To sum up, with the texture data generating apparatus provided by the embodiments of this application, when the rendering engine includes at least two rendering engine interfaces and one rendering kernel, a shared JS context can be created, each rendering engine interface sends the JS code segment it acquires to the rendering kernel, and the rendering kernel can render each code segment based on the shared JS context to obtain texture data corresponding to each code segment. In this way, multiple pieces of texture data can be generated by creating only one rendering engine and one shared JS context, which solves the problem of high memory overhead when multiple rendering engines and multiple shared JS contexts need to be created simultaneously to generate multiple pieces of texture data, thereby achieving the effect of saving memory overhead.
The closure object encapsulates the runtime environment of a JS instance and isolates it from the runtime environments of other JS instances, preventing a JS instance running in the same shared JS context from modifying shared environment variables in the shared JS context and thereby corrupting the running logic and generated data of other JS instances, which improves the running accuracy of the JS instances. In addition, the rendering effects and human-computer interactions of different JS instances do not affect each other.
The rendering engine interface displays the texture data, so that the problem that the texture data is invisible to a user can be avoided, and the texture data can be visually presented on the display interface.
Referring to fig. 16, a block diagram of a graphical interface display apparatus in a video according to an embodiment of the present application is shown, where the graphical interface display apparatus in the video can be applied to a terminal or the interactive system shown in fig. 9. The graphical interface display device in the video comprises:
the display module 1610 is configured to display a cover of a video and n pieces of pasting paper located on an upper layer of the cover, where the pasting paper is obtained by rendering texture data, and n is greater than or equal to 2;
the display module 1610 is further configured to, when receiving a first operation on the cover, display each video frame in the video and n poster paper located on an upper layer of the cover;
the display module 1610 is further configured to replace the displayed sticker with a function interface of the sticker when a second operation on any one of the n stickers is received, where the function interface includes feedback information collected by the sticker for the second operation.
Optionally, the display module 1610 is further configured to display a sticker selection control when the video shooting is finished;
the display module 1610 is further configured to display m stickers when a third operation on the sticker selection control is received, where m is greater than or equal to n;
the display module 1610 is further configured to display the cover of the video, the n stickers, and a publish control when a selection operation on any n of the m stickers is received;
the device also includes: the publishing module 1620 is configured to, when the fourth operation on the publishing control is received, publish the video and the n stickers correspondingly.
To sum up, with the graphical interface display apparatus in a video provided by the embodiments of this application, n stickers can be displayed simultaneously on the upper layer of a video frame, which enriches the sticker content in the video frame and improves the user experience.
Different stickers are rendered by different rendering engine interfaces, so that the operation between different stickers is not influenced mutually.
Fig. 17 shows a block diagram of a terminal 1700 according to an exemplary embodiment of the present application. The terminal 1700 may be a portable mobile terminal such as: a smart phone, a tablet computer, an MP3 player (Moving Picture Experts Group Audio Layer III), an MP4 player (Moving Picture Experts Group Audio Layer IV), a notebook computer, or a desktop computer. Terminal 1700 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, desktop terminal, and the like.
In general, terminal 1700 includes: a processor 1701 and a memory 1702.
The processor 1701 may include one or more processing cores, such as a 4-core processor or an 8-core processor. The processor 1701 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 1701 may also include a main processor and a coprocessor. The main processor is a processor for processing data in an awake state, also called a Central Processing Unit (CPU); the coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 1701 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content that the display screen needs to display. In some embodiments, the processor 1701 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
The memory 1702 may include one or more computer-readable storage media, which may be non-transitory. The memory 1702 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in the memory 1702 is used to store at least one instruction for execution by the processor 1701 to implement the texture data generation method and the graphical interface display method in video provided by the method embodiments of the present application.
In some embodiments, terminal 1700 may also optionally include: a peripheral interface 1703 and at least one peripheral. The processor 1701, memory 1702 and peripheral interface 1703 may be connected by buses or signal lines. Various peripheral devices may be connected to peripheral interface 1703 by a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of a radio frequency circuit 1704, a touch display screen 1705, a camera 1706, an audio circuit 1707, a positioning component 1708, and a power source 1709.
The peripheral interface 1703 may be used to connect at least one peripheral associated with I/O (Input/Output) to the processor 1701 and the memory 1702. In some embodiments, the processor 1701, memory 1702, and peripheral interface 1703 are integrated on the same chip or circuit board; in some other embodiments, any one or both of the processor 1701, the memory 1702, and the peripheral interface 1703 may be implemented on separate chips or circuit boards, which are not limited in this embodiment.
The radio frequency circuit 1704 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 1704 communicates with a communication network and other communication devices via electromagnetic signals. The radio frequency circuit 1704 converts an electrical signal into an electromagnetic signal for transmission, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 1704 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so on. The radio frequency circuit 1704 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: metropolitan area networks, mobile communication networks of various generations (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1704 may further include NFC (Near Field Communication)-related circuits, which is not limited in this application.
The display screen 1705 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 1705 is a touch display screen, the display screen 1705 also has the ability to capture touch signals on or above its surface. The touch signal may be input to the processor 1701 as a control signal for processing. In this case, the display screen 1705 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 1705, provided on the front panel of terminal 1700; in other embodiments, there may be at least two display screens 1705, each disposed on a different surface of terminal 1700 or in a folded design; in still other embodiments, display screen 1705 may be a flexible display disposed on a curved surface or a folded surface of terminal 1700. Even further, the display screen 1705 may be arranged as a non-rectangular irregular figure, that is, an irregularly shaped screen. The display screen 1705 may be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), or other materials.
The camera assembly 1706 is used to capture images or video. Optionally, camera assembly 1706 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 1706 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuit 1707 may include a microphone and a speaker. The microphone is used for collecting sound waves of the user and the environment, converting the sound waves into electrical signals, and inputting the electrical signals to the processor 1701 for processing, or to the radio frequency circuit 1704 for voice communication. For stereo capture or noise reduction purposes, multiple microphones may be provided, each at a different location of terminal 1700. The microphone may also be an array microphone or an omnidirectional pickup microphone. The speaker is used to convert electrical signals from the processor 1701 or the radio frequency circuit 1704 into sound waves. The speaker can be a traditional membrane speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, it can be used for purposes such as converting an electrical signal into a sound wave audible to humans, or converting an electrical signal into a sound wave inaudible to humans in order to measure a distance. In some embodiments, the audio circuit 1707 may also include a headphone jack.
The positioning component 1708 is used to locate the current geographic location of the terminal 1700 to implement navigation or LBS (Location Based Service). The positioning component 1708 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
Power supply 1709 is used to power the various components in terminal 1700. The power supply 1709 may be an alternating current power supply, a direct current power supply, a disposable battery, or a rechargeable battery. When power supply 1709 includes a rechargeable battery, the rechargeable battery may support wired or wireless charging. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, terminal 1700 also includes one or more sensors 1710. The one or more sensors 1710 include, but are not limited to: acceleration sensor 1711, gyro sensor 1712, pressure sensor 1713, fingerprint sensor 1714, optical sensor 1715, and proximity sensor 1716.
The acceleration sensor 1711 can detect the magnitude of acceleration on three coordinate axes of the coordinate system established with the terminal 1700. For example, the acceleration sensor 1711 may be used to detect components of gravitational acceleration in three coordinate axes. The processor 1701 may control the touch display screen 1705 to display a user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1711. The acceleration sensor 1711 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 1712 may detect a body direction and a rotation angle of the terminal 1700, and the gyro sensor 1712 may cooperate with the acceleration sensor 1711 to acquire a 3D motion of the user on the terminal 1700. The processor 1701 may perform the following functions based on the data collected by the gyro sensor 1712: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
Pressure sensors 1713 may be disposed on the side frames of terminal 1700 and/or underlying touch display 1705. When the pressure sensor 1713 is disposed on the side frame of the terminal 1700, the user's grip signal to the terminal 1700 can be detected, and the processor 1701 performs left-right hand recognition or shortcut operation according to the grip signal collected by the pressure sensor 1713. When the pressure sensor 1713 is disposed at the lower layer of the touch display screen 1705, the processor 1701 controls the operability control on the UI interface according to the pressure operation of the user on the touch display screen 1705. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 1714 is configured to capture a fingerprint of the user, and the processor 1701 identifies the user based on the fingerprint captured by the fingerprint sensor 1714, or the fingerprint sensor 1714 identifies the user based on the captured fingerprint. Upon identifying that the user's identity is a trusted identity, the processor 1701 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, changing settings, and the like. Fingerprint sensor 1714 may be disposed on the front, back, or side of terminal 1700. When a physical key or vendor Logo is provided on terminal 1700, fingerprint sensor 1714 may be integrated with the physical key or vendor Logo.
The optical sensor 1715 is used to collect the ambient light intensity. In one embodiment, the processor 1701 may control the display brightness of the touch display screen 1705 based on the ambient light intensity collected by the optical sensor 1715. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 1705 is increased; when the ambient light intensity is low, the display brightness of the touch display screen 1705 is turned down. In another embodiment, the processor 1701 may also dynamically adjust the shooting parameters of the camera assembly 1706 according to the ambient light intensity collected by the optical sensor 1715.
Proximity sensor 1716, also known as a distance sensor, is typically disposed on the front panel of terminal 1700. Proximity sensor 1716 is used to collect the distance between the user and the front face of terminal 1700. In one embodiment, when proximity sensor 1716 detects that the distance between the user and the front face of terminal 1700 gradually decreases, processor 1701 controls touch display 1705 to switch from the screen-on state to the screen-off state; when proximity sensor 1716 detects that the distance between the user and the front face of terminal 1700 gradually increases, processor 1701 controls touch display 1705 to switch from the screen-off state to the screen-on state.
Those skilled in the art will appreciate that the architecture shown in fig. 17 is not intended to be limiting with respect to terminal 1700, and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be employed.
An embodiment of the present application provides a computer-readable storage medium, in which at least one instruction, at least one program, a code set, or a set of instructions is stored, which is loaded and executed by a processor to implement the texture data generating method and the graphical interface display method in video as described above.
One embodiment of the present application provides a texture data generating apparatus, which includes a processor and a memory, where the memory stores at least one instruction, and the instruction is loaded and executed by the processor to implement the texture data generating method as described above.
It should be noted that: in the texture data generating device provided in the above embodiment, when generating texture data, only the division of the above functional modules is taken as an example, and in practical applications, the above functions may be distributed by different functional modules as needed, that is, the internal structure of the texture data generating device may be divided into different functional modules to complete all or part of the above described functions. In addition, the texture data generation apparatus and the texture data generation method provided in the above embodiments belong to the same concept, and specific implementation processes thereof are described in the method embodiments and are not described herein again.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The foregoing is not intended to limit the embodiments of the present application; any modification, equivalent replacement, or improvement made within the spirit and principles of the embodiments of the present application shall fall within the protection scope of the embodiments of the present application.

Claims (15)

1. A method of texture data generation for use in a rendering engine, the rendering engine comprising a rendering kernel and at least two rendering engine interfaces, the method comprising:
creating a shared JS context, wherein the shared JS context is used for providing an operating environment of the JS code segment;
for each rendering engine interface in the at least two rendering engine interfaces, acquiring a JS code segment by the rendering engine interface, and sending the JS code segment to the rendering kernel;
and for each JS code segment, rendering, by the rendering kernel, the JS code segment based on the shared JS context to obtain texture data corresponding to the JS code segment.
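For readability only (this is not part of the claims), the following is a minimal JavaScript sketch of the flow recited in claim 1: one shared context serves as the running environment, each rendering engine interface submits a JS code segment to the kernel, and the kernel evaluates every segment against that same shared context. The class names and the use of Node's built-in vm module to model the shared JS context are assumptions for illustration, not the claimed implementation.

```js
const vm = require('vm');

class RenderingKernel {
  constructor() {
    // One shared JS context provides the running environment for every JS code segment.
    this.sharedContext = vm.createContext({});
  }

  // Render one JS code segment inside the shared context; here the segment's
  // completion value stands in for the texture data it would produce.
  render(jsCodeSegment) {
    return vm.runInContext(jsCodeSegment, this.sharedContext);
  }
}

class RenderingEngineInterface {
  constructor(id, kernel) {
    this.id = id;
    this.kernel = kernel;
  }

  // The interface acquires a JS code segment and sends it to the rendering kernel.
  submit(jsCodeSegment) {
    return { interfaceId: this.id, textureData: this.kernel.render(jsCodeSegment) };
  }
}

const kernel = new RenderingKernel();
const interfaces = [1, 2].map((id) => new RenderingEngineInterface(id, kernel));
const results = interfaces.map((itf, i) =>
  itf.submit(`({ width: 2, height: 2, pixels: [${i}, ${i}, ${i}, ${i}] })`)
);
console.log(results);
```

In this sketch both segments run in the one shared context, so only a single JS runtime environment is created even though each interface ends up with its own texture data.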
2. The method of claim 1, wherein rendering, by the rendering kernel, each JS code segment based on the shared JS context to obtain the texture data corresponding to the JS code segment comprises:
for each JS code segment, the rendering kernel acquires a closure object corresponding to the JS code segment, and the closure object is used for isolating the rendering of each JS code segment in the rendering kernel;
the rendering kernel loads the JS code segment into the closure object;
and the rendering kernel renders the JS code segment in the closure object based on the shared JS context to obtain texture data corresponding to the JS code segment.
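For illustration only, a minimal sketch of the closure-object idea in claim 2, assuming hypothetical helper names: each JS code segment is compiled into its own function scope (the "closure object") inside the one shared context, so segment-local variables stay isolated from other segments while the context itself remains shared.

```js
const vm = require('vm');

// A single shared JS context, reused for every segment.
const sharedContext = vm.createContext({ sharedCounter: 0 });

// "Closure object": the segment is compiled into a function scope inside the
// shared context, so its own variables stay isolated from other segments.
function loadIntoClosure(jsCodeSegment) {
  return vm.runInContext(`(function closureObject() { ${jsCodeSegment} })`, sharedContext);
}

const segmentA = loadIntoClosure('var local = "textureA"; sharedCounter += 1; return local;');
const segmentB = loadIntoClosure('var local = "textureB"; sharedCounter += 1; return local;');

console.log(segmentA(), segmentB()); // "textureA" "textureB" – local variables do not collide
console.log(vm.runInContext('sharedCounter', sharedContext)); // 2 – the context itself is shared
```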
3. The method of claim 2, wherein the rendering, by the rendering kernel, of the JS code segment in the closure object based on the shared JS context to obtain the texture data corresponding to the JS code segment comprises:
the rendering kernel sends a shared environment variable in the shared JS context to the closure object, wherein the shared environment variable is an environment variable shared by all JS code segments;
the closure object stores the shared environment variables and modifies some or all of the shared environment variables to obtain local environment variables;
and the rendering kernel renders the JS code segment based on the local environment variables in the closure object to obtain texture data corresponding to the JS code segment.
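A hedged sketch of claim 3, with illustrative variable names only: the shared environment variables are copied into a closure object, some of them are overridden locally, and the segment is then rendered against the resulting local environment.

```js
// Shared environment variables held in the shared JS context (illustrative values).
const sharedEnvironment = { canvasWidth: 720, canvasHeight: 1280, pixelRatio: 2 };

// The kernel hands the shared variables to a closure object, which copies them
// and may override some of them, yielding the local environment variables used
// when rendering that segment.
function createClosureObject(localOverrides) {
  const localEnvironment = { ...sharedEnvironment, ...localOverrides };
  return {
    render(jsCodeSegment) {
      // Stand-in for rendering the segment against the local environment.
      const run = new Function('env', jsCodeSegment);
      return run(localEnvironment);
    },
  };
}

const stickerA = createClosureObject({ pixelRatio: 3 }); // modifies one shared variable
const stickerB = createClosureObject({});                // keeps the shared values

console.log(stickerA.render('return `texture@${env.pixelRatio}x`;')); // texture@3x
console.log(stickerB.render('return `texture@${env.pixelRatio}x`;')); // texture@2x
```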
4. The method of claim 1, wherein rendering, by the rendering kernel, each JS code segment based on the shared JS context to obtain the texture data corresponding to the JS code segment comprises:
the rendering kernel acquires the identifier of each rendering engine interface;
the rendering kernel creates a mapping table, wherein the mapping table stores a corresponding relation between a JS code segment sent by a rendering engine interface and an identifier of the rendering engine interface;
and for each JS code segment, the rendering kernel renders the JS code segment based on the shared JS context and associates the obtained texture data with the identifier corresponding to the JS code segment in the mapping table.
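An illustrative sketch of the mapping table in claim 4 (all names are hypothetical): the texture data produced by the kernel is keyed by the identifier of the rendering engine interface whose JS code segment produced it, so each interface can later retrieve its own texture.

```js
// Mapping table keyed by the identifier of the rendering engine interface that
// sent each JS code segment; the texture data produced by the kernel is stored
// under the same identifier so it can be handed back to the right interface.
class MappingTable {
  constructor() {
    this.entries = new Map(); // interfaceId -> { jsCodeSegment, textureData }
  }

  registerSegment(interfaceId, jsCodeSegment) {
    this.entries.set(interfaceId, { jsCodeSegment, textureData: null });
  }

  attachTexture(interfaceId, textureData) {
    this.entries.get(interfaceId).textureData = textureData;
  }

  textureFor(interfaceId) {
    return this.entries.get(interfaceId).textureData;
  }
}

const table = new MappingTable();
table.registerSegment('sticker-countdown', 'drawCountdown();');
table.attachTexture('sticker-countdown', { width: 128, height: 128, pixels: [] });
console.log(table.textureFor('sticker-countdown')); // { width: 128, height: 128, pixels: [] }
```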
5. The method of any one of claims 1 to 4, wherein after the rendering kernel renders each JS code segment based on the shared JS context to obtain the texture data corresponding to the JS code segment, the method further comprises:
for each rendering engine interface of the at least two rendering engine interfaces, the rendering engine interface obtaining corresponding texture data from the rendering kernel;
the rendering engine interface renders and displays the texture data.
6. The method of claim 5, wherein the rendering engine interface renders and displays the texture data, comprising:
the rendering engine interface stores each piece of frame data in the texture data into a rendering buffer;
the rendering engine interface listens for a refresh signal of a display interface;
when the refresh signal is detected, the rendering engine interface sends the earliest buffered frame data in the rendering buffer to a frame buffer, and the frame buffer is bound to a rendering layer;
and the rendering layer renders the frame data in the frame buffer and displays the obtained rendering data in the display interface.
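For illustration only, a rough sketch of the buffering flow in claim 6, using hypothetical names: frame data is queued in a rendering buffer, and on each refresh signal the earliest queued frame is moved into a frame buffer bound to a rendering layer, which then presents it on the display interface.

```js
// Frame data is queued in a render buffer; on every refresh signal the oldest
// queued frame is moved into a frame buffer bound to a rendering layer, which
// then presents it on the display interface.
class BufferedRenderingInterface {
  constructor(renderingLayer) {
    this.renderBuffer = [];               // FIFO queue of frame data
    this.renderingLayer = renderingLayer; // owns the bound frame buffer
  }

  storeFrame(frameData) {
    this.renderBuffer.push(frameData);
  }

  // Called once per display refresh signal (for example, a vsync callback).
  onRefreshSignal() {
    if (this.renderBuffer.length === 0) return;
    const oldestFrame = this.renderBuffer.shift(); // earliest cached frame data
    this.renderingLayer.frameBuffer = oldestFrame; // frame buffer bound to the layer
    this.renderingLayer.present();
  }
}

const layer = {
  frameBuffer: null,
  present() {
    console.log('displaying frame', this.frameBuffer.id);
  },
};

const bufferedInterface = new BufferedRenderingInterface(layer);
bufferedInterface.storeFrame({ id: 1 });
bufferedInterface.storeFrame({ id: 2 });
bufferedInterface.onRefreshSignal(); // displaying frame 1
bufferedInterface.onRefreshSignal(); // displaying frame 2
```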
7. A graphical interface display method in a video, the method comprising:
the video display system comprises a cover for displaying a video and n pieces of pasting paper positioned on the upper layer of the cover, wherein the pasting paper is obtained by rendering texture data, and n is more than or equal to 2;
when a first operation on the cover is received, displaying each video frame in the video and the n stickers on the upper layer of the cover;
when a second operation on any one of the n stickers is received, replacing the displayed sticker with a function interface of the sticker, wherein the function interface comprises feedback information collected by the sticker aiming at the second operation;
wherein the texture data is determined by:
the rendering engine creates a shared JS context, wherein the shared JS context is used for providing a running environment of the JS code segment, and the rendering engine comprises a rendering kernel and at least two rendering engine interfaces;
for each rendering engine interface in the at least two rendering engine interfaces, acquiring a JS code segment by the rendering engine interface, and sending the JS code segment to the rendering kernel;
and for each JS code segment, the rendering kernel renders the JS code segment based on the shared JS context to obtain the texture data corresponding to the JS code segment.
8. The method of claim 7, wherein before the displaying of the cover of the video and the n stickers located on the upper layer of the cover, the method further comprises:
when video shooting is finished, displaying a sticker selection control;
when a third operation on the sticker selection control is received, displaying m stickers, wherein m is greater than or equal to n;
when a selection operation on any n stickers of the m stickers is received, displaying the cover of the video, the n stickers, and a publish control;
and when a fourth operation on the publish control is received, publishing the video and the n stickers correspondingly.
9. A texture data generating apparatus for use in a rendering engine including a rendering kernel and at least two rendering engine interfaces, the apparatus comprising:
the creating module is used for creating a shared JS context which is used for providing an operating environment of the JS code segment;
the sending module is used for controlling each rendering engine interface in the at least two rendering engine interfaces to acquire a JS code segment and sending the JS code segment to the rendering kernel;
and the rendering module is used for controlling the rendering kernel to render each JS code segment based on the shared JS context to obtain texture data corresponding to the JS code segment.
10. The apparatus of claim 9, wherein the rendering module is further configured to:
for each JS code segment, controlling the rendering kernel to acquire a closure object corresponding to the JS code segment, wherein the closure object is used for isolating the rendering of each JS code segment in the rendering kernel;
controlling the rendering kernel to load the JS code segment into the closure object;
and controlling the rendering kernel to render the JS code segment in the closure object based on the shared JS context, so as to obtain texture data corresponding to the JS code segment.
11. The apparatus of claim 9 or 10, further comprising:
the obtaining module is used for, after the rendering kernel renders each JS code segment based on the shared JS context to obtain the texture data corresponding to the JS code segment, controlling each rendering engine interface in the at least two rendering engine interfaces to obtain corresponding texture data from the rendering kernel;
and the display module is used for controlling the rendering engine interface to render and display the texture data.
12. The apparatus of claim 11, wherein the display module is further configured to:
controlling the rendering engine interface to store each piece of frame data in the texture data into a rendering buffer;
controlling the rendering engine interface to listen for a refresh signal of a display interface;
when the refresh signal is detected, controlling the rendering engine interface to send the earliest buffered frame data in the rendering buffer to a frame buffer, wherein the frame buffer is bound to a rendering layer;
and controlling the rendering layer to render the frame data in the frame buffer and display the obtained rendering data in the display interface.
13. An apparatus for graphical interface display in a video, the apparatus comprising:
the display module is used for displaying a cover of a video and n stickers positioned on the upper layer of the cover, wherein the stickers are obtained by rendering texture data, and n is greater than or equal to 2;
the display module is further configured to display each video frame in the video and the n stickers located on the upper layer of the cover when receiving a first operation on the cover;
the display module is further configured to replace the displayed sticker with a function interface of the sticker when a second operation on any one of the n stickers is received, where the function interface includes feedback information collected by the sticker for the second operation;
wherein the texture data is determined by:
the rendering engine creates a shared JS context, wherein the shared JS context is used for providing a running environment of the JS code segment, and the rendering engine comprises a rendering kernel and at least two rendering engine interfaces;
for each rendering engine interface in the at least two rendering engine interfaces, acquiring a JS code segment by the rendering engine interface, and sending the JS code segment to the rendering kernel;
and for each JS code segment, the rendering kernel renders the JS code segment based on the shared JS context to obtain the texture data corresponding to the JS code segment.
14. A computer readable storage medium having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded and executed by a processor to implement the texture data generating method according to any one of claims 1 to 6.
15. A texture data generating device comprising a processor and a memory, the memory having stored therein at least one instruction which is loaded and executed by the processor to implement the texture data generating method as claimed in any one of claims 1 to 6.
CN201910309843.2A 2019-04-17 2019-04-17 Texture data generation method, device, storage medium and equipment Active CN110045958B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910309843.2A CN110045958B (en) 2019-04-17 2019-04-17 Texture data generation method, device, storage medium and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910309843.2A CN110045958B (en) 2019-04-17 2019-04-17 Texture data generation method, device, storage medium and equipment

Publications (2)

Publication Number Publication Date
CN110045958A CN110045958A (en) 2019-07-23
CN110045958B true CN110045958B (en) 2021-09-28

Family

ID=67277556

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910309843.2A Active CN110045958B (en) 2019-04-17 2019-04-17 Texture data generation method, device, storage medium and equipment

Country Status (1)

Country Link
CN (1) CN110045958B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110471701B (en) * 2019-08-12 2021-09-10 Oppo广东移动通信有限公司 Image rendering method and device, storage medium and electronic equipment
CN112068816B (en) * 2020-07-22 2023-11-10 福建天泉教育科技有限公司 Method for preventing JS global pollution and storage medium
CN114625364A (en) * 2022-02-09 2022-06-14 北京达佳互联信息技术有限公司 Data processing method, device, equipment and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106126693A (en) * 2016-06-29 2016-11-16 微梦创科网络科技(中国)有限公司 The sending method of the related data of a kind of webpage and device
CN108076147A (en) * 2017-12-13 2018-05-25 上海哔哩哔哩科技有限公司 The server-side of Internet service renders hot update method, system and storage medium

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080024510A1 (en) * 2006-07-27 2008-01-31 Via Technologies, Inc. Texture engine, graphics processing unit and video processing method thereof
CN107707965B (en) * 2016-08-08 2021-02-12 阿里巴巴(中国)有限公司 Bullet screen generation method and device
CN107092643B (en) * 2017-03-06 2020-10-16 武汉斗鱼网络科技有限公司 Barrage rendering method and device
CN108632540B (en) * 2017-03-23 2020-07-03 北京小唱科技有限公司 Video processing method and device
US10621767B2 (en) * 2017-06-12 2020-04-14 Qualcomm Incorporated Fisheye image stitching for movable cameras
CN108647313A (en) * 2018-05-10 2018-10-12 福建星网视易信息系统有限公司 A kind of real-time method and system for generating performance video
CN108875670A (en) * 2018-06-28 2018-11-23 咪咕动漫有限公司 Information processing method, device and storage medium

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106126693A (en) * 2016-06-29 2016-11-16 微梦创科网络科技(中国)有限公司 The sending method of the related data of a kind of webpage and device
CN108076147A (en) * 2017-12-13 2018-05-25 上海哔哩哔哩科技有限公司 The server-side of Internet service renders hot update method, system and storage medium

Also Published As

Publication number Publication date
CN110045958A (en) 2019-07-23

Similar Documents

Publication Publication Date Title
CN110278464B (en) Method and device for displaying list
CN108762881B (en) Interface drawing method and device, terminal and storage medium
CN109327608B (en) Song sharing method, terminal, server and system
CN110213153B (en) Display method, acquisition method, device, terminal and storage medium of unread messages
CN110045958B (en) Texture data generation method, device, storage medium and equipment
CN113204672B (en) Resource display method, device, computer equipment and medium
CN111177013A (en) Log data acquisition method and device, computer equipment and storage medium
CN112163406A (en) Interactive message display method and device, computer equipment and storage medium
CN113204298A (en) Method and device for displaying release progress, electronic equipment and storage medium
CN111368114A (en) Information display method, device, equipment and storage medium
CN113051015A (en) Page rendering method and device, electronic equipment and storage medium
CN114245218A (en) Audio and video playing method and device, computer equipment and storage medium
CN112966798B (en) Information display method and device, electronic equipment and storage medium
CN112148499A (en) Data reporting method and device, computer equipment and medium
CN110677713A (en) Video image processing method and device and storage medium
CN112311661B (en) Message processing method, device, equipment and storage medium
CN111241451A (en) Webpage processing method and device, computer equipment and storage medium
CN111327919A (en) Method, device, system, equipment and storage medium for virtual gift feedback processing
CN111064657A (en) Method, device and system for grouping concerned accounts
CN111275607A (en) Interface display method and device, computer equipment and storage medium
CN114546188B (en) Interaction method, device and equipment based on interaction interface and readable storage medium
CN111898048B (en) Data adjustment method and device for display information, electronic equipment and storage medium
CN109618018B (en) User head portrait display method, device, terminal, server and storage medium
CN112596810A (en) Loading prompt information display method and device, electronic equipment and storage medium
CN112699364A (en) Method, device and equipment for processing verification information and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant