CN113542757A - Image transmission method and device for cloud application, server and storage medium - Google Patents

Image transmission method and device for cloud application, server and storage medium Download PDF

Info

Publication number
CN113542757A
CN113542757A (application number CN202110819854.2A)
Authority
CN
China
Prior art keywords
engine
rendering
cloud application
encoding
instruction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110819854.2A
Other languages
Chinese (zh)
Other versions
CN113542757B (en)
Inventor
刘玉雪
陈安庆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202110819854.2A priority Critical patent/CN113542757B/en
Publication of CN113542757A publication Critical patent/CN113542757A/en
Application granted granted Critical
Publication of CN113542757B publication Critical patent/CN113542757B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/42Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • H04N19/423Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation characterised by memory arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14Digital output to display device ; Cooperation and interconnection of the display device with other functional units
    • G06F3/1454Digital output to display device ; Cooperation and interconnection of the display device with other functional units involving copying of the display data of a local workstation or window to a remote workstation or window so that an actual copy of the data is displayed simultaneously on two or more displays, e.g. teledisplay
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/30Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The embodiments of this application disclose an image transmission method and apparatus for a cloud application, a server, and a storage medium, belonging to the technical field of cloud applications. The method comprises the following steps: in response to an image rendering instruction of the cloud application, calling a virtual hardware abstraction layer through a runtime library layer to perform image rendering, obtaining original image data; calling the virtual hardware abstraction layer through the runtime library layer to encode the original image data, obtaining a video stream; and pushing the video stream to the terminal through the runtime library layer. Because rendering, encoding, and stream pushing are all integrated in the runtime library layer, the deployment of the cloud application is more highly integrated, which facilitates deployment; and because rendering and encoding are completed between the runtime library layer and the virtual hardware abstraction layer, the processing path of the cloud application picture is short, which helps reduce the display latency of the cloud application picture.

Description

Image transmission method and device for cloud application, server and storage medium
Technical Field
The embodiment of the application relates to the technical field of cloud application, in particular to an image transmission method, an image transmission device, a server and a storage medium for cloud application.
Background
Cloud applications (Cloud Apps) are an online application technology based on cloud computing. In a cloud application scenario, the application program runs on a server rather than on the user-side terminal; the server renders the application's sound and pictures into audio and video streams, which are transmitted over the network to the user-side terminal for decoding and playback.
In the related art, the server captures the application picture by means of a Virtual Display and encodes the captured picture to obtain a video stream. However, in this approach, capturing the application picture requires a long capture path, resulting in high display latency for the cloud application picture.
Disclosure of Invention
The embodiments of this application provide an image transmission method and apparatus for a cloud application, a server, and a storage medium. The technical solutions are as follows:
in one aspect, an embodiment of the present application provides an image transmission method for cloud applications, where the method includes:
responding to an image rendering instruction of the cloud application, calling a virtual hardware abstraction layer through a runtime library layer to perform image rendering, and obtaining original image data, wherein the original image data is image data of a cloud application picture;
calling the virtual hardware abstraction layer through the runtime library layer to perform image coding on the original image data to obtain a video stream;
and pushing the video stream to a terminal through the runtime library layer.
In another aspect, an embodiment of the present application provides an image transmission apparatus for cloud application, where the apparatus includes:
the rendering unit is used for responding to an image rendering instruction of the cloud application, calling the virtual hardware abstraction layer through the runtime library layer to perform image rendering, and obtaining original image data, wherein the original image data is image data of a cloud application picture;
the coding unit is used for calling the virtual hardware abstraction layer through the runtime library layer to perform image coding on the original image data to obtain a video stream;
and the stream pushing unit is configured to push the video stream to the terminal through the runtime library layer.
In another aspect, an embodiment of the present application provides a server, which includes a processor and a memory; the memory stores at least one instruction for execution by the processor to implement the image transmission method of the cloud application as described in the above aspect.
In another aspect, an embodiment of the present application provides a computer-readable storage medium, in which at least one program code is stored, and the program code is loaded and executed by a processor to implement the image transmission method for cloud applications as described in the above aspect.
In another aspect, embodiments of the present application provide a computer program product or a computer program, which includes computer instructions stored in a computer-readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to execute the image transmission method of the cloud application provided in the various alternative implementations of the above aspect.
The technical solutions provided by the embodiments of this application can bring the following beneficial effects:
In the embodiments of this application, rendering, encoding, and stream pushing of the cloud application picture are all integrated in the runtime library layer. When an image rendering instruction of the cloud application is received, the runtime library layer calls the virtual hardware abstraction layer to render the image, then calls the virtual hardware abstraction layer again to encode the rendered original image data, and finally pushes the video stream containing the encoded images to the terminal, realizing image transmission for the cloud application. Because rendering, encoding, and stream pushing are integrated in the runtime library layer, the deployment of the cloud application is more highly integrated, which facilitates deployment; moreover, because rendering and encoding are completed between the runtime library layer and the virtual hardware abstraction layer, the processing path of the cloud application picture is short, which helps reduce its display latency.
Drawings
To describe the technical solutions in the embodiments of this application more clearly, the drawings required in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of this application, and those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 illustrates a schematic diagram of an implementation environment provided by an exemplary embodiment of the present application;
fig. 2 is a flowchart illustrating an image transmission method for a cloud application according to an exemplary embodiment of the present application;
FIG. 3 is a system architecture diagram of a containerized operating system shown in one exemplary embodiment of the present application;
fig. 4 is a flowchart illustrating an image transmission method of a cloud application according to another exemplary embodiment of the present application;
fig. 5 is an interaction sequence diagram illustrating a cloud application screen transmission process according to an exemplary embodiment of the present application;
fig. 6 is a flowchart illustrating an image transmission method for a cloud application according to another exemplary embodiment of the present application;
fig. 7 is a block diagram illustrating a configuration of an image transmission apparatus for a cloud application according to an embodiment of the present application;
fig. 8 shows a schematic structural diagram of a server according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
FIG. 1 illustrates a schematic diagram of an implementation environment provided by an exemplary embodiment of the present application. The implementation environment may include: a terminal 110 and a server 120.
A cloud application client is installed and runs in the terminal 110. Through this client, the terminal 110 can use a cloud application without installing it, which saves storage space on the terminal 110 (the cloud application client is far smaller than the cloud application itself). The cloud application may be a game application, an instant messaging application, a shopping application, a navigation application, and so on; this embodiment does not limit the type of cloud application. Moreover, the cloud application client may support running a single cloud application or multiple cloud applications, which is likewise not limited in this embodiment.
The terminal 110 may be, but is not limited to, a smartphone, a tablet computer, a notebook computer, or a desktop computer. In some embodiments, the terminal 110 has a display screen for displaying the application picture of the cloud application; an audio component (external or built-in) for playing the application sound of the cloud application; and an input component (built-in, such as a touch screen, or external, such as a keyboard and mouse) for controlling the cloud application.
The server 120 is a cloud device running a cloud application, and the server 120 may be a server, a server cluster formed by a plurality of servers, or a cloud computing center. In some embodiments, the server 120 is provided with a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), an encoding card, a memory, a hard disk, and other hardware.
Optionally, the server 120 supports a single cloud application or multiple cloud applications. In this embodiment, the cloud application is installed and runs in a containerized operating system, which may be Android, Apple's iOS, or another terminal operating system; this embodiment does not limit the choice. During the running of the cloud application, the rendering, encoding, and stream pushing of the cloud application picture are all executed by the containerized operating system.
During the running of the cloud application, the server 120 renders and encodes the cloud application's sound and picture and pushes them to the terminal 110 in the form of audio/video streams, which the cloud application client in the terminal 110 decodes and plays (its function is similar to a player's). When a control operation on the cloud application is received, for example a touch operation on a screen element of the cloud application picture through the touch display screen of the terminal 110, the terminal 110 sends an instruction stream to the server 120 through the cloud application client. The server 120 parses the received instruction stream, controls the cloud application based on the parsing result, and continuously pushes the updated cloud application picture and sound to the terminal 110 as audio/video streams.
The terminal 110 and the server 120 may be connected directly or indirectly through wired or wireless communication, which is not limited in this application. In addition, although only one terminal is shown in fig. 1, in different embodiments multiple terminals may access the server 120 simultaneously, with the server 120 providing the cloud application service to all of them; this is not limited in the embodiments of this application.
Referring to fig. 2, a flowchart of an image transmission method for cloud application according to an exemplary embodiment of the present application is shown, where the embodiment of the present application is described by taking an example in which the method is applied to a server in the implementation environment shown in fig. 1, and the method includes:
step 201, responding to an image rendering instruction of the cloud application, calling a virtual hardware abstraction layer through a runtime library layer to perform image rendering, and obtaining original image data, wherein the original image data is image data of a cloud application picture.
In one possible implementation, a containerized operating system is provided in the server, and within it run the cloud application, a Runtime library layer corresponding to the cloud application, and a Virtualized Hardware Abstraction Layer (VHAL). The runtime library layer provides runtime services, such as a message mechanism, while the system runs; the virtual hardware abstraction layer is an interface layer between the operating system kernel and the hardware circuitry. It abstracts the hardware and provides a virtual hardware platform to the operating system, making the operating system hardware-independent.
Optionally, the runtime library layer in the embodiments of this application integrates the rendering, encoding, and stream pushing functions. When an image rendering instruction of the cloud application is received, that is, when a cloud application picture needs to be rendered, the server calls the virtual hardware abstraction layer through the runtime library layer to perform image rendering and obtain the original image data of the cloud application picture. Based on the call from the runtime library layer, the virtual hardware abstraction layer then performs the image rendering in software or in hardware; for example, it calls the CPU for software rendering or the GPU for hardware rendering.
Optionally, the image rendering instruction is automatically triggered in the running process of the cloud application, or triggered by an instruction stream sent by the terminal.
It should be noted that, since the hardware in the server differs from the hardware in a terminal, a virtual hardware abstraction layer suited to the (containerized) operating system in the server needs to be custom-packaged for the server's hardware.
Step 202, calling a virtual hardware abstraction layer through a runtime library layer to perform image coding on original image data, so as to obtain a video stream.
During use of the cloud application, the terminal functions like a player, so the server needs to further encode the rendered original image data to obtain a video stream composed of encoded image data. In one possible implementation, the runtime library layer then calls the virtual hardware abstraction layer to encode the rendered original image data, obtaining a video stream. Based on this call, the virtual hardware abstraction layer performs the image encoding in software or in hardware; for example, it calls the CPU for software encoding or the GPU for hardware encoding.
And step 203, pushing the video stream to the terminal through the runtime library layer.
Further, the runtime library layer pushes the video stream to the terminal through its stream pushing function, so that the cloud application client on the terminal side decodes the encoded image data in the video stream and reconstructs the cloud application picture.
In some embodiments, the cloud application audio generated while the cloud application runs is encoded into an audio stream, the audio stream and the video stream are combined into an audio/video stream, and the runtime library layer pushes the audio/video stream to the terminal, so that the terminal plays the cloud application sound while displaying the cloud application picture.
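As a sketch, combining the audio stream and the video stream amounts to interleaving two timestamp-ordered packet sequences. The following Python snippet is purely illustrative; the packet shape (timestamp, kind) is an assumption, not the patent's actual format:

```python
# Hypothetical sketch of muxing an encoded audio stream with the video
# stream by timestamp. Packet layout is invented for illustration.
import heapq

def mux(audio_packets, video_packets):
    """Merge two timestamp-ordered packet lists into one A/V stream."""
    return list(heapq.merge(audio_packets, video_packets))

audio = [(0, "audio"), (20, "audio"), (40, "audio")]  # 20 ms audio frames
video = [(0, "video"), (33, "video")]                 # ~30 fps video frames
print(mux(audio, video))
# [(0, 'audio'), (0, 'video'), (20, 'audio'), (33, 'video'), (40, 'audio')]
```

`heapq.merge` assumes both inputs are already sorted by timestamp, which holds for packets emitted in capture order.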
Optionally, the server may push the video stream to the terminal directly over the network, or push it through a separate streaming server; this embodiment does not limit the choice.
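The three steps of the method (render, encode, push) can be sketched as a minimal pipeline. The following Python simulation is purely illustrative; all function names and the toy VHAL are assumptions, not the patent's actual interfaces:

```python
# Hypothetical sketch of steps 201-203: render -> encode -> push.
# All names are invented for illustration; a real implementation would
# call into the virtual hardware abstraction layer (VHAL).

def render_frame(vhal, instruction):
    """The runtime library layer asks the VHAL to render raw image data."""
    return vhal["render"](instruction)

def encode_frame(vhal, raw):
    """The runtime library layer asks the VHAL to encode the raw frame."""
    return vhal["encode"](raw)

def push_stream(packets, send):
    """The runtime library layer pushes encoded packets to the terminal."""
    for p in packets:
        send(p)

# A toy VHAL: "rendering" produces pixel bytes, "encoding" tags them.
vhal = {
    "render": lambda ins: b"RAW:" + ins.encode(),
    "encode": lambda raw: b"H264:" + raw,
}

sent = []  # stands in for the network transport to the terminal
raw = render_frame(vhal, "draw_frame_0")
packet = encode_frame(vhal, raw)
push_stream([packet], sent.append)
print(sent)  # [b'H264:RAW:draw_frame_0']
```

The point of the sketch is that the whole path stays inside the runtime library layer and the VHAL, with no detour through a virtual display.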
Obviously, with the solution provided by the embodiments of this application, the rendering and encoding of the cloud application picture are completed between the runtime library layer and the virtual hardware abstraction layer, which shortens the processing path of the cloud application picture, reduces its display latency on the terminal side, and improves the user experience for latency-sensitive applications such as games; moreover, the rendering, encoding, and stream pushing of the cloud application picture are integrated in the runtime library layer, which raises the deployment integration level and facilitates the deployment of the cloud application.
To sum up, in the embodiments of this application, rendering, encoding, and stream pushing of the cloud application picture are integrated in the runtime library layer. When an image rendering instruction of the cloud application is received, the runtime library layer calls the virtual hardware abstraction layer to perform image rendering, then calls it again to encode the rendered original image data, and finally pushes the video stream to the terminal, realizing image transmission for the cloud application. Because rendering, encoding, and stream pushing are integrated in the runtime library layer, the deployment of the cloud application is more highly integrated, which facilitates deployment; and because rendering and encoding are completed between the runtime library layer and the virtual hardware abstraction layer, the processing path of the cloud application picture is short, which helps reduce its display latency.
In one possible implementation, as shown in fig. 3, the containerized operating system includes a cloud application 31, a runtime library layer 32, and a virtual hardware abstraction layer 33, where the virtual hardware abstraction layer 33 serves as the interface layer to the hardware and provides its services by calling the server's hardware (such as the GPU and the encoding card) through plug-ins (such as a rendering plug-in, an audio plug-in, and an encoding plug-in).
The runtime library layer 32 includes a rendering engine 321 (Render Engine) for image rendering, an encoding engine 322 (Encode Engine) for image encoding, and a stream pushing engine 323 (Flow Engine) for stream pushing. While the cloud application 31 runs, after receiving an image rendering instruction sent by the cloud application 31, the rendering engine 321 calls the virtual hardware abstraction layer 33 to perform image rendering; the encoding engine 322 then encodes the rendered original image data; and finally the stream pushing engine 323 pushes the encoded video stream.
The following describes the image transmission process of a cloud application in an exemplary embodiment, in terms of the rendering engine, the encoding engine, and the stream pushing engine.
Referring to fig. 4, a flowchart of an image transmission method for a cloud application according to another exemplary embodiment of the present application is shown, where the embodiment of the present application is described by taking an example in which the method is applied to a server in the implementation environment shown in fig. 1, and the method includes:
step 401, registering the coding service through the coding engine, and registering the stream pushing service through the stream pushing engine.
Since the rendering engine, the encoding engine, and the stream pushing engine correspond to different processes, in order to reduce consumption caused by inter-process communication and further shorten the image transmission delay, in one possible implementation, the encoding engine and the stream pushing engine in the runtime library layer are registered as services, and then inter-process communication can be performed through a binder mechanism.
Optionally, when the cloud application start instruction is received, the encoding engine registers an encoding service (Encode service), and the push Flow engine registers a push Flow service (Flow service).
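The effect of registering the two engines as named services can be illustrated with a toy registry. This is only a rough analogy to binder-based service registration; every name below is invented:

```python
# Toy stand-in for registering the encoding and stream pushing engines as
# named services, loosely analogous to looking up a binder service by name.
# All class, function, and service names are illustrative assumptions.

services = {}

def register_service(name, obj):
    services[name] = obj

def get_service(name):
    return services[name]

class EncodeService:
    def encode(self, raw):
        return "enc:" + raw

class FlowService:
    def __init__(self):
        self.pushed = []
    def push(self, packet):
        self.pushed.append(packet)

# On cloud-application start, both engines register their services.
register_service("encode_service", EncodeService())
register_service("flow_service", FlowService())

# Later, a component in another process looks a service up by name
# instead of holding a direct in-process reference.
packet = get_service("encode_service").encode("frame0")
get_service("flow_service").push(packet)
```

In the real system the registry lives in a system service and the lookup crosses process boundaries; the dictionary here only models the name-to-object mapping.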
Step 402, responding to an image rendering instruction of the cloud application, and calling a virtual hardware abstraction layer through a rendering engine to perform image rendering to obtain original image data.
When the runtime library layer receives the image rendering instruction of the cloud application, the rendering engine calls the virtual hardware abstraction layer to perform image rendering. In one possible implementation, the virtual hardware abstraction layer includes a gralloc module (gralloc_vhal) and a hw module (hw_vhal) for implementing image rendering: the gralloc module is responsible for allocating layers (surfaces), including allocating the framebuffer, while the hw module composites the layers and delivers the result to the display device. Specifically, the image rendering process may include the following steps.
First, apply for a buffer from the gralloc module through the rendering engine.
Optionally, upon receiving the image rendering instruction, the rendering engine applies to the gralloc module in the virtual hardware abstraction layer for a buffer (used to store the layer). Correspondingly, after receiving the application, the gralloc module calls the hardware's user-mode driver to allocate memory or video memory for the cloud application.
Second, perform layer rendering in the buffer through the rendering engine.
After applying for the buffer, the rendering engine performs layer rendering in it. In one possible implementation, the rendering engine supports multiple rendering methods and can select one according to the requirements of the cloud application. Illustratively, as shown in fig. 3, the rendering engine 321 supports both OpenGL ES and Vulkan rendering. The embodiments of this application do not limit the specific rendering methods supported by the rendering engine.
During layer rendering, the virtual hardware abstraction layer calls the hardware through a rendering plug-in and performs the layer rendering in software or in hardware, where hardware rendering (for example, by the GPU) is faster than software rendering (for example, by the CPU).
Optionally, software rendering is used when the remaining hardware resources are below a threshold, and hardware rendering is used when they are above it. The embodiments of this application do not limit the specific rendering strategy.
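The threshold-based choice between software and hardware rendering described above can be sketched as follows; the threshold value and the resource metric are assumptions for illustration only:

```python
# Sketch of the rendering-mode policy: fall back to software (CPU)
# rendering when remaining hardware (GPU) resources drop below a
# threshold, and use hardware rendering otherwise. The 0.2 threshold
# and the "free fraction" metric are illustrative assumptions.

THRESHOLD = 0.2  # fraction of GPU capacity that must remain free

def choose_render_mode(gpu_free_fraction, threshold=THRESHOLD):
    return "software" if gpu_free_fraction < threshold else "hardware"

print(choose_render_mode(0.05))  # software: GPU nearly exhausted
print(choose_render_mode(0.60))  # hardware: plenty of GPU headroom
```

The same shape of policy applies to the encoding-mode choice in step 404.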
Third, request layer composition from the hw module through the rendering engine, and obtain the original image data after the hw module completes the composition.
Since a cloud application picture is composed of several layers, after each layer of the picture has been rendered, the layers need to be composited to obtain the original image data. In one possible implementation, after the rendering engine completes layer rendering, it requests layer composition from the hw module of the virtual hardware abstraction layer; correspondingly, the hw module composites the multiple layers of the same cloud application picture into the original image data.
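What layer composition produces can be illustrated with a toy source-over blend of per-pixel values. The pixel format and blending rule below are assumptions, not the hw module's actual algorithm:

```python
# Toy layer composition: blend a stack of layers, bottom first, into one
# frame using source-over alpha blending on single-channel pixel values.
# The pixel format and blend rule are illustrative assumptions.

def blend(dst, src, alpha):
    """Source-over blend of one channel value."""
    return round(src * alpha + dst * (1 - alpha))

def compose(layers):
    """layers: list of (pixels, alpha), bottom layer first."""
    out = list(layers[0][0])
    for pixels, alpha in layers[1:]:
        out = [blend(d, s, alpha) for d, s in zip(out, pixels)]
    return out

background = ([0, 0, 0, 0], 1.0)       # opaque base layer (4 pixels)
ui_overlay = ([255, 255, 0, 0], 0.5)   # semi-transparent UI layer
frame = compose([background, ui_overlay])
print(frame)  # [128, 128, 0, 0]
```

Real composition works on full RGBA surfaces and may be offloaded to dedicated hardware; only the layering idea is shown here.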
Step 403, in response to completing the image rendering, obtain the encoding service through the binder mechanism.
Since the encoding service was registered in advance, after image rendering is completed the encoding service can be obtained through the binder mechanism, so that the encoding engine can encode the original image data.
In one possible implementation, after the virtual hardware abstraction layer finishes image rendering, the encoding service is obtained through the binder mechanism. For example, continuing the example in the steps above, after the hw module completes layer composition, the encoding service is obtained through the binder mechanism.
Step 404, based on the obtained encoding service, call the virtual hardware abstraction layer through the encoding engine to encode the original image data, obtaining a video stream.
After the encoding service is obtained, the server calls the virtual hardware abstraction layer through the encoding engine to perform image encoding. In one possible implementation, the virtual hardware abstraction layer includes an encode module (encode_vhal) for implementing image encoding, and the encoding engine performs image encoding by calling this module.
In one possible implementation, the encoding engine supports multiple encoding methods and can select one according to the requirements of the cloud application. Illustratively, as shown in FIG. 3, the encoding engine 322 supports FFmpeg and MediaCodec. The embodiments of this application do not limit the specific encoding methods supported by the encoding engine.
Optionally, after receiving an encoding instruction from the encoding engine, the encode module calls hardware to perform software or hardware encoding. For example, it may call the CPU to software-encode the original image data, or call the GPU to hardware-encode it (hardware encoding is faster than software encoding). In some embodiments, software encoding is used when the remaining hardware resources are below a threshold and hardware encoding when they are above it. The embodiments of this application do not limit the specific encoding strategy.
In one possible implementation, the server captures the rendered data at the virtual hardware abstraction layer, so that the captured original image data is passed directly to the encoding engine for encoding, shortening the processing path of the image data and improving encoding speed. Optionally, after the virtual hardware abstraction layer completes layer composition through the hw module, the hw module sends the original image data to the encoding engine.
Step 405, in response to completing the image encoding, obtain the stream pushing service through the binder mechanism.
Since the stream pushing service was registered in advance, after image encoding is completed the stream pushing service can be obtained through the binder mechanism, so that the stream pushing engine can push the video stream to the terminal. In one possible implementation, after image encoding is completed, the encoding engine obtains the stream pushing service through the binder mechanism.
And step 406, based on the obtained push stream service, pushing a video stream to the terminal through the push stream engine.
In a possible implementation manner, the coding engine sends the coded video stream to the stream pushing engine through a stream pushing interface in the stream pushing service, and the stream pushing engine further pushes the video stream to the terminal.
Illustratively, as shown in fig. 3, the encoding engine 322 sends the video stream to a Media module of the stream pushing engine 323, which performs the stream pushing.
Optionally, the stream pushing engine may integrate multiple stream pushing modes, such as WebRTC, live555, and the like, and the stream pushing engine may select a corresponding stream pushing mode based on the network state, which is not limited in this embodiment.
In some embodiments, since a buffer is applied for during image rendering, in order to avoid the buffer being occupied for a long time, the virtual hardware abstraction layer needs to be notified to recycle the buffer resource after the video stream pushing is completed.
In a possible implementation manner, the server sends a stream pushing completion response to the encoding engine through the stream pushing engine, the encoding engine then sends an encoding completion response to the hw module based on the stream pushing completion response, the hw module sends a release instruction to the rendering engine based on the encoding completion response, and finally the rendering engine instructs the gralloc module to release the buffer.
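The four-hop recycling chain above can be sketched as a single trace, with the gralloc module reclaiming the buffer at the end. This is a simplified illustration; the function name, the log strings, and modeling buffers as a dict are all assumptions made for the sketch.

```python
def release_chain(buffers: dict, fd: int) -> list:
    """Trace the buffer-recycling chain described above, one log entry
    per hop: stream pushing engine -> encoding engine -> hw module ->
    rendering engine -> gralloc module. Illustrative names only."""
    trace = [
        "push_complete",    # stream pushing engine -> encoding engine
        "encode_complete",  # encoding engine -> hw module
        "release_fd",       # hw module -> rendering engine
        "free_buffer",      # rendering engine -> gralloc module
    ]
    buffers.pop(fd, None)   # gralloc reclaims the buffer for fd
    return trace


buffers = {7: b"frame"}
assert release_chain(buffers, 7)[-1] == "free_buffer"
assert 7 not in buffers
```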
In this embodiment, the coding engine and the stream pushing engine in the runtime library layer are registered as services, so that after image rendering is completed, the coding service is acquired through the binder mechanism to perform image coding, and after the coding is completed, the stream pushing service is acquired through the binder mechanism to perform video stream pushing, which can reduce time consumption of inter-process communication and further reduce transmission delay of cloud application pictures.
In addition, the image data obtained by rendering is captured in the gralloc and hw modules and directly transmitted to the encoding engine for encoding, so that the processing path of the image can be shortened, and the transmission delay of the cloud application picture can be reduced.
In one illustrative example, a screen transfer process for a cloud application is shown in fig. 5.
In step 501, the coding engine registers a coding service with a service manager (ServiceManager).
Step 502, the stream pushing engine registers the push streaming service with the service manager.
At step 503, the cloud application sends an image rendering instruction to the rendering engine.
Step 504, the rendering engine applies for a buffer from the gralloc module.
Step 505, the gralloc module returns the file descriptor (fd) of the buffer to the rendering engine.
Step 506, the rendering engine performs layer rendering in the buffer.
In step 507, the rendering engine sends a composition instruction to the hw module.
In step 508, the hw module creates a mapping for fd.
The hw module updates the reference count of fd from 0 to 1 (indicating that it is in use) and adds fd to the local map.
In step 509, the hw module sends the composed response to the rendering engine.
At step 510, the rendering engine notifies the cloud application that rendering is complete.
In step 511, the hw module requests the encoding service (GetEncodeService) from the service manager through the binder mechanism.
Step 512, the service manager returns the encoding service (BpEncodeService) to the hw module.
In step 513, the hw module sends fd to the encoding engine, instructing the encoding engine to perform encoding.
Step 514, the encoding engine calls hardware through the encode module to encode the image.
The encode module encapsulates hardware encoding and software encoding capabilities so that the underlying GPU or encoding card can be invoked for encoding.
In step 515, the encode module notifies the encoding engine that encoding is complete.
In step 516, the encoding engine requests the push streaming service (GetFlowService) from the service manager through the binder mechanism.
Step 517, the service manager returns the push streaming service (BpFlowService) to the encoding engine.
The encoding engine sends the video stream to the stream push engine, step 518.
Optionally, the encoding engine calls a stream pushing interface in BpFlowService to send the video stream to the stream pushing engine.
In step 519, the stream pushing engine pushes the video stream to the terminal.
In step 520, the stream pushing engine notifies the encoding engine of the completion of the stream pushing.
Step 521, the encoding engine sends an encoding completion response to the hw module.
At step 522, the hw module instructs the rendering engine to release fd.
After receiving the encoding completion response, the hw module sets the reference count of fd to 0 and deletes fd from the local map.
In step 523, the rendering engine instructs the gralloc module to release the buffer.
At step 524, the gralloc module notifies the release completion.
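The hw module's fd bookkeeping in steps 508 and 522 can be sketched as a reference-counted map. This is a simplified illustration of the behavior described above; the class and method names are assumptions, not the patent's actual implementation.

```python
class HwModule:
    """Sketch of the hw module's fd bookkeeping: on composition (step 508)
    it records the buffer fd in a local map with reference count 1; after
    the encoding completion response (steps 521-522) it sets the count to
    0 and removes fd from the map so the buffer can be released."""

    def __init__(self):
        self.fd_map = {}       # fd -> reference count

    def compose(self, fd):
        self.fd_map[fd] = 1    # step 508: fd is now in use

    def on_encode_complete(self, fd):
        if self.fd_map.get(fd) == 1:
            self.fd_map[fd] = 0
            del self.fd_map[fd]   # fd may now be released downstream


hw = HwModule()
hw.compose(42)
assert hw.fd_map == {42: 1}
hw.on_encode_complete(42)
assert 42 not in hw.fd_map
```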
In a possible implementation manner, the plug flow engine is further configured to identify the instruction stream sent by the terminal, so as to determine, according to the identification result, whether the instruction stream is used for controlling the cloud application or for adjusting display parameters of the cloud application. Illustratively, as shown in fig. 3, after receiving the instruction stream through the Input module, the stream pushing engine 323 identifies the instruction stream, so as to inject an operation event into the cloud application 31 or send an encoding parameter adjustment instruction to the encoding engine 322 according to the identification result.
Optionally, on the basis of fig. 4, as shown in fig. 6, step 401 may include the following steps:
step 4011, receiving an instruction stream sent by a terminal.
Step 4012, identify, by the plug flow engine, an instruction type of an instruction in the instruction stream.
In a possible implementation manner, the types of the instructions in the instruction stream include an operation instruction and an encoding control instruction, where the operation instruction is used to control an application element in the cloud application, for example, the operation instruction is triggered when a user performs a touch operation on a control in a cloud game screen, and the encoding control instruction is used to adjust an encoding parameter in a cloud application running process, for example, to adjust a resolution, a frame rate, and the like of the cloud application. And after the stream pushing engine acquires the instruction stream, identifying the type of the instruction in the instruction stream.
When the instruction type is identified as an operation instruction, the plug flow engine injects an operation event into the cloud application (step 4013); and when the instruction type is identified as an encoding control instruction, the plug flow engine sends an adjustment instruction to the encoding engine.
Step 4013, in response to the instruction type being the operation instruction, sending an operation event to the cloud application through the plug flow engine, where the operation event is obtained by converting the instruction by the plug flow engine, and the cloud application is configured to send an image rendering instruction based on the operation event.
In order to enable the cloud application to accurately respond to the user operation, the plug flow engine first converts an instruction in the instruction stream into an operation event that the cloud application can identify, then sends the operation event to the cloud application, and the cloud application responds to the operation event.
In an illustrative example, when the cloud application is a game application supporting touch operation, the plug flow engine converts the touch coordinates in the instruction into a touch event and sends the touch event to the cloud application, and the cloud application controls an element in the game based on the touch event.
In other possible embodiments, in response to the instruction type being an encoding control instruction, the server sends, by the stream pushing engine, an encoding parameter adjustment instruction to the encoding engine, the encoding parameter adjustment instruction being used to instruct the encoding engine to adjust at least one of the resolution and the frame rate. Accordingly, the encoding engine adjusts the encoding parameters based on the parameters included in the encoding parameter adjustment instruction.
For example, when the encoding parameter adjustment instruction indicates that the resolution is adjusted from 1080p to 720p, the encoding engine reduces the resolution of the original image data during the encoding process, thereby reducing the resolution of the video stream; when the encoding parameter adjustment instruction indicates that the frame rate is increased from 30 frames/second to 60 frames/second, the encoding engine performs frame interpolation processing in the encoding process, so as to increase the frame rate of the video stream.
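The two-way dispatch in steps 4012 through 4013 and the encoding control branch can be sketched as follows. The field names and return values are assumptions made for the sketch; the patent does not define an instruction format.

```python
def dispatch(instruction: dict):
    """Route one instruction from the terminal's stream: operation
    instructions become touch events injected into the cloud application,
    while encoding control instructions become parameter adjustments
    (e.g. resolution 1080p -> 720p, frame rate 30 -> 60) sent to the
    encoding engine. Field names are illustrative."""
    if instruction["type"] == "operation":
        # convert touch coordinates into an event the app can consume
        return ("inject_event", {"touch": instruction["coords"]})
    if instruction["type"] == "encode_control":
        # forward the encoding parameter adjustment to the encoder
        return ("adjust_encoder", instruction["params"])
    raise ValueError("unknown instruction type")


kind, payload = dispatch({"type": "operation", "coords": (120, 480)})
assert kind == "inject_event"
kind, payload = dispatch({"type": "encode_control",
                          "params": {"resolution": "720p"}})
assert payload == {"resolution": "720p"}
```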
The following are embodiments of the apparatus of the present application that may be used to perform embodiments of the method of the present application. For details which are not disclosed in the embodiments of the apparatus of the present application, reference is made to the embodiments of the method of the present application.
Referring to fig. 7, a block diagram of an image transmission apparatus for cloud application according to an embodiment of the present disclosure is shown. The apparatus may include:
the rendering unit 701 is configured to respond to an image rendering instruction of the cloud application, call a virtual hardware abstraction layer through a runtime library layer to perform image rendering, and obtain original image data, where the original image data is image data of a cloud application picture;
an encoding unit 702, configured to invoke the virtual hardware abstraction layer through the runtime library layer to perform image encoding on the original image data, so as to obtain a video stream;
and a stream pushing unit 703, configured to push the video stream to a terminal through the runtime library layer.
Optionally, the runtime library layer is provided with a stream pushing engine, a rendering engine and an encoding engine;
the rendering unit 701 is configured to:
calling the virtual hardware abstraction layer through the rendering engine to perform image rendering to obtain the original image data;
the encoding unit 702 is configured to:
calling the virtual hardware abstraction layer through the coding engine to perform image coding on the original image data to obtain the video stream;
the flow pushing unit 703 is configured to:
and pushing the video stream to the terminal through the stream pushing engine.
Optionally, the virtual hardware abstraction layer is provided with a gralloc module and a hw module;
the rendering unit 701 is specifically configured to:
applying for a buffer from the gralloc module through the rendering engine;
performing layer rendering in the buffer area through the rendering engine;
requesting the hw module to perform layer composition through the rendering engine, and obtaining the original image data after the hw module completes the layer composition.
Optionally, the apparatus further includes a data sending unit, configured to:
sending, by the hw module, the raw image data to the encoding engine.
Optionally, the apparatus further comprises:
a release module, configured to send a stream pushing completion response to the encoding engine through the stream pushing engine, where the encoding engine is configured to send an encoding completion response to the hw module based on the stream pushing completion response, the hw module is configured to send a release instruction to the rendering engine based on the encoding completion response, and the rendering engine is configured to instruct the gralloc module to release the buffer based on the release instruction.
Optionally, the virtual hardware abstraction layer is provided with an encode module;
the encoding unit 702 is configured to:
and calling the encode module to perform image encoding on the original image data through the encoding engine to obtain the video stream, wherein the encode module is used for calling hardware to perform software encoding or hardware encoding.
Optionally, the apparatus further comprises:
the registration unit is used for registering the coding service through the coding engine and registering the stream pushing service through the stream pushing engine;
the encoding unit 702 is configured to:
acquiring the coding service through a binder mechanism in response to the completion of image rendering;
based on the obtained coding service, calling the virtual hardware abstraction layer through the coding engine to perform image coding on the original image data to obtain the video stream;
the flow pushing unit 703 is configured to:
acquiring the push streaming service through a binder mechanism in response to the completion of image coding;
and pushing the video stream to the terminal through the stream pushing engine based on the obtained stream pushing service.
Optionally, the apparatus further comprises:
a receiving unit, configured to receive an instruction stream sent by the terminal;
the identification unit is used for identifying the instruction type of the instruction in the instruction stream through the plug flow engine;
the first sending unit is used for responding to the fact that the instruction type is an operation instruction, sending an operation event to the cloud application through the plug-flow engine, wherein the operation event is obtained by converting the instruction through the plug-flow engine, and the cloud application is used for sending the image rendering instruction based on the operation event.
Optionally, the apparatus further comprises:
a second sending unit, configured to send, by the stream pushing engine, an encoding parameter adjustment instruction to the encoding engine in response to that the instruction type is an encoding control instruction, where the encoding parameter adjustment instruction is used to instruct the encoding engine to adjust at least one of a resolution and a frame rate.
To sum up, in the embodiment of the application, rendering, encoding and stream pushing of a cloud application picture are integrated in a runtime library layer, so that when an image rendering instruction of a cloud application is received, the runtime library layer calls a virtual hardware abstraction layer to perform image rendering, the virtual hardware abstraction layer is further called to encode original image data obtained by rendering, and finally a video stream obtained by encoding is pushed to a terminal through the runtime library layer to realize image transmission of the cloud application; because the rendering, the encoding and the plug flow are integrated in the runtime library layer, the deployment integration level of the cloud application is higher in the embodiment of the application, and the deployment of the cloud application is facilitated; moreover, the rendering coding is completed between the runtime library layer and the virtual hardware abstraction layer, and the processing path of the cloud application picture is short, so that the display delay of the cloud application picture is favorably reduced.
It should be noted that: in the device provided in the above embodiment, when the functions of the device are implemented, only the division of the above functional units is illustrated, and in practical applications, the above functions may be distributed by different functional units as needed, that is, the internal structure of the device may be divided into different functional units to implement all or part of the functions described above. In addition, the apparatus and method embodiments provided by the above embodiments belong to the same concept, and specific implementation processes thereof are described in the method embodiments for details, which are not described herein again.
Referring to fig. 8, a schematic structural diagram of a server according to an embodiment of the present application is shown. The server is used for implementing the method provided by the embodiment. Specifically, the method comprises the following steps:
the server 800 includes a Central Processing Unit (CPU)801, a system memory 804 including a Random Access Memory (RAM)802 and a Read Only Memory (ROM)803, and a system bus 805 connecting the system memory 804 and the central processing unit 801. The server 800 also includes a basic input/output system (I/O system) 806, which facilitates transfer of information between devices within the computer, and a mass storage device 807 for storing an operating system 813, application programs 814, and other program modules 815.
The basic input/output system 806 includes a display 808 for displaying information and an input device 809 such as a mouse, keyboard, etc. for user input of information. Wherein the display 808 and the input device 809 are connected to the central processing unit 801 through an input output controller 810 connected to the system bus 805. The basic input/output system 806 may also include an input/output controller 810 for receiving and processing input from a number of other devices, such as a keyboard, mouse, or electronic stylus. Similarly, input-output controller 810 also provides output to a display screen, a printer, or other type of output device.
The mass storage device 807 is connected to the central processing unit 801 through a mass storage controller (not shown) connected to the system bus 805. The mass storage device 807 and its associated computer-readable media provide non-volatile storage for the server 800. That is, the mass storage device 807 may include a computer-readable medium (not shown) such as a hard disk or CD-ROM drive.
Without loss of generality, the computer-readable media may comprise computer storage media and communication media. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes RAM, ROM, EPROM, EEPROM, flash memory or other solid state memory technology, CD-ROM, DVD, or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices. Of course, those skilled in the art will appreciate that the computer storage media is not limited to the foregoing. The system memory 804 and mass storage 807 described above may be collectively referred to as memory.
The server 800 may also operate as a remote computer connected to a network via a network, such as the internet, according to various embodiments of the present application. That is, the server 800 may be connected to the network 812 through the network interface unit 811 coupled to the system bus 805, or may be connected to other types of networks or remote computer systems using the network interface unit 811.
The memory has stored therein at least one instruction, at least one program, set of codes, or set of instructions configured to be executed by one or more processors to implement the functions of the various steps in the above embodiments.
The embodiment of the present application also provides a computer-readable storage medium, in which at least one program code is stored, and the program code is loaded and executed by a processor to implement the image transmission method of the cloud application according to the above embodiments.
Embodiments of the present application provide a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to execute the image transmission method of the cloud application provided in the various alternative implementations of the above aspect.
It should be understood that reference to "a plurality" herein means two or more. "and/or" describes the association relationship of the associated objects, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. In addition, the step numbers described herein only exemplarily show one possible execution sequence among the steps, and in some other embodiments, the steps may also be executed out of the numbering sequence, for example, two steps with different numbers are executed simultaneously, or two steps with different numbers are executed in a reverse order to the order shown in the figure, which is not limited by the embodiment of the present application.
The above description is only exemplary of the present application and should not be taken as limiting, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (12)

1. An image transmission method for cloud application, the method comprising:
responding to an image rendering instruction of the cloud application, calling a virtual hardware abstraction layer through a runtime library layer to perform image rendering, and obtaining original image data, wherein the original image data is image data of a cloud application picture;
calling the virtual hardware abstraction layer through the runtime library layer to perform image coding on the original image data to obtain a video stream;
and pushing the video stream to a terminal through the runtime library layer.
2. The method of claim 1, wherein the runtime library layer is provided with a plug flow engine, a rendering engine, and an encoding engine;
the step of calling a virtual hardware abstraction layer to perform image rendering through the runtime library layer to obtain original image data includes:
calling the virtual hardware abstraction layer through the rendering engine to perform image rendering to obtain the original image data;
the invoking the virtual hardware abstraction layer through the runtime library layer to perform image coding on the original image data to obtain a video stream, including:
calling the virtual hardware abstraction layer through the coding engine to perform image coding on the original image data to obtain the video stream;
the pushing of the video stream to the terminal through the runtime library layer includes:
and pushing the video stream to the terminal through the stream pushing engine.
2. The method of claim 2, wherein the virtual hardware abstraction layer is provided with a gralloc module and a hw module;
the calling the virtual hardware abstraction layer through the rendering engine to perform image rendering to obtain the original image data includes:
applying for a buffer from the gralloc module through the rendering engine;
performing layer rendering in the buffer area through the rendering engine;
requesting the hw module to perform layer composition through the rendering engine, and obtaining the original image data after the hw module completes the layer composition.
4. The method according to claim 3, wherein the requesting, by the rendering engine, the hw module to perform layer composition comprises, after the hw module completes layer composition to obtain the original image data, the method comprising:
sending, by the hw module, the raw image data to the encoding engine.
5. The method of claim 3, wherein after the pushing the video stream to the terminal by the streaming engine, the method further comprises:
sending, by the stream pushing engine, a stream pushing completion response to the encoding engine, where the encoding engine is configured to send an encoding completion response to the hw module based on the stream pushing completion response, the hw module is configured to send a release instruction to the rendering engine based on the encoding completion response, and the rendering engine is configured to instruct the gralloc module to release the buffer based on the release instruction.
6. The method according to claim 2, wherein the virtual hardware abstraction layer is provided with an encode module;
the invoking, by the coding engine, the virtual hardware abstraction layer to perform image coding on the original image data to obtain the video stream, including:
and calling the encode module to perform image encoding on the original image data through the encoding engine to obtain the video stream, wherein the encode module is used for calling hardware to perform software encoding or hardware encoding.
7. The method of claim 2, further comprising:
registering an encoding service through the encoding engine, and registering a stream pushing service through the stream pushing engine;
the invoking, by the coding engine, the virtual hardware abstraction layer to perform image coding on the original image data to obtain the video stream, including:
acquiring the coding service through a binder mechanism in response to the completion of image rendering;
based on the obtained coding service, calling the virtual hardware abstraction layer through the coding engine to perform image coding on the original image data to obtain the video stream;
the pushing the video stream to the terminal through the stream pushing engine comprises:
acquiring the push streaming service through a binder mechanism in response to the completion of image coding;
and pushing the video stream to the terminal through the stream pushing engine based on the obtained stream pushing service.
8. The method of any of claims 2 to 7, further comprising:
receiving an instruction stream sent by the terminal;
identifying, by the plug flow engine, an instruction type of an instruction in the instruction stream;
and in response to the instruction type being an operation instruction, sending an operation event to the cloud application through the plug-flow engine, wherein the operation event is obtained by converting the instruction through the plug-flow engine, and the cloud application is used for sending the image rendering instruction based on the operation event.
9. The method of claim 8, wherein after the identifying, by the plug flow engine, an instruction type of an instruction in the instruction stream, the method further comprises:
in response to the instruction type being an encoding control instruction, sending, by the stream pushing engine, an encoding parameter adjustment instruction to the encoding engine, where the encoding parameter adjustment instruction is used to instruct the encoding engine to adjust at least one of resolution and frame rate.
10. An image transmission apparatus for cloud application, the apparatus comprising:
the rendering unit is used for responding to an image rendering instruction of the cloud application, calling the virtual hardware abstraction layer through the runtime library layer to perform image rendering, and obtaining original image data, wherein the original image data is image data of a cloud application picture;
the coding unit is used for calling the virtual hardware abstraction layer through the runtime library layer to perform image coding on the original image data to obtain a video stream;
and the stream pushing unit is used for pushing the video stream to a terminal through the runtime library layer.
11. A server, comprising a processor and a memory; the memory stores at least one instruction for execution by the processor to implement the image transmission method of the cloud application of any of claims 1 to 9.
12. A computer-readable storage medium having at least one program code stored therein, the program code being loaded and executed by a processor to implement the image transmission method of the cloud application according to any one of claims 1 to 9.
CN202110819854.2A 2021-07-20 2021-07-20 Image transmission method and device for cloud application, server and storage medium Active CN113542757B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110819854.2A CN113542757B (en) 2021-07-20 2021-07-20 Image transmission method and device for cloud application, server and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110819854.2A CN113542757B (en) 2021-07-20 2021-07-20 Image transmission method and device for cloud application, server and storage medium

Publications (2)

Publication Number Publication Date
CN113542757A true CN113542757A (en) 2021-10-22
CN113542757B CN113542757B (en) 2024-04-02

Family

ID=78129005

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110819854.2A Active CN113542757B (en) 2021-07-20 2021-07-20 Image transmission method and device for cloud application, server and storage medium

Country Status (1)

Country Link
CN (1) CN113542757B (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113946402A (en) * 2021-11-09 2022-01-18 中国电信股份有限公司 Cloud mobile phone acceleration method, system, equipment and storage medium based on rendering separation
CN114594993A (en) * 2022-05-10 2022-06-07 海马云(天津)信息技术有限公司 Graphics rendering instruction stream processing device, processing method, server and rendering method
CN114827186A (en) * 2022-02-25 2022-07-29 阿里巴巴(中国)有限公司 Cloud application processing method and system
CN114866802A (en) * 2022-04-14 2022-08-05 青岛海尔科技有限公司 Video stream transmission method and device, storage medium and electronic device
CN115278289A (en) * 2022-09-27 2022-11-01 海马云(天津)信息技术有限公司 Cloud application rendering video frame processing method and device
CN116546228A (en) * 2023-07-04 2023-08-04 腾讯科技(深圳)有限公司 Plug flow method, device, equipment and storage medium for virtual scene
WO2023216618A1 (en) * 2022-05-13 2023-11-16 合肥杰发科技有限公司 Operation method for vehicle-mounted display system, and vehicle-mounted display system
CN117278780A (en) * 2023-09-06 2023-12-22 上海久尺网络科技有限公司 Video encoding and decoding method, device, equipment and storage medium
WO2024051824A1 (en) * 2022-09-09 2024-03-14 维沃移动通信有限公司 Image processing method, image processing circuit, electronic device, and readable storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050223080A1 (en) * 2004-04-05 2005-10-06 Microsoft Corporation Updatable user experience
CN108965397A (en) * 2018-06-22 2018-12-07 中央电视台 Cloud video editing method and device, editing equipment and storage medium
CN111381914A (en) * 2018-12-29 2020-07-07 中兴通讯股份有限公司 Method and system for realizing 3D (three-dimensional) capability of cloud desktop virtual machine
CN111563879A (en) * 2020-03-27 2020-08-21 北京视博云信息技术有限公司 Method and device for detecting display quality of application picture


Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113946402A (en) * 2021-11-09 2022-01-18 中国电信股份有限公司 Cloud mobile phone acceleration method, system, equipment and storage medium based on rendering separation
CN114827186A (en) * 2022-02-25 2022-07-29 阿里巴巴(中国)有限公司 Cloud application processing method and system
CN114866802A (en) * 2022-04-14 2022-08-05 青岛海尔科技有限公司 Video stream transmission method and device, storage medium and electronic device
CN114866802B (en) * 2022-04-14 2024-04-19 青岛海尔科技有限公司 Video stream sending method and device, storage medium and electronic device
CN114594993A (en) * 2022-05-10 2022-06-07 海马云(天津)信息技术有限公司 Graphics rendering instruction stream processing device, processing method, server and rendering method
CN114594993B (en) * 2022-05-10 2022-08-19 海马云(天津)信息技术有限公司 Graphics rendering instruction stream processing device, processing method, server and rendering method
WO2023216618A1 (en) * 2022-05-13 2023-11-16 合肥杰发科技有限公司 Operation method for vehicle-mounted display system, and vehicle-mounted display system
WO2024051824A1 (en) * 2022-09-09 2024-03-14 维沃移动通信有限公司 Image processing method, image processing circuit, electronic device, and readable storage medium
CN115278289A (en) * 2022-09-27 2022-11-01 海马云(天津)信息技术有限公司 Cloud application rendering video frame processing method and device
CN116546228A (en) * 2023-07-04 2023-08-04 腾讯科技(深圳)有限公司 Stream pushing method, device, equipment and storage medium for virtual scene
CN116546228B (en) * 2023-07-04 2023-09-22 腾讯科技(深圳)有限公司 Stream pushing method, device, equipment and storage medium for virtual scene
CN117278780A (en) * 2023-09-06 2023-12-22 上海久尺网络科技有限公司 Video encoding and decoding method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN113542757B (en) 2024-04-02

Similar Documents

Publication Publication Date Title
CN113542757B (en) Image transmission method and device for cloud application, server and storage medium
KR102389235B1 (en) Resource placement methods, devices, terminals and storage media
CA2814420C (en) Load balancing between general purpose processors and graphics processors
WO2022257699A1 (en) Image picture display method and apparatus, device, storage medium and program product
CN111882626A (en) Image processing method, apparatus, server and medium
CN113457160B (en) Data processing method, device, electronic equipment and computer readable storage medium
CN111494936A (en) Picture rendering method, device, system and storage medium
US10165058B2 (en) Dynamic local function binding apparatus and method
CN115065684B (en) Data processing method, apparatus, device and medium
US20090323799A1 (en) System and method for rendering a high-performance virtual desktop using compression technology
CN110968395B (en) Method for processing rendering instruction in simulator and mobile terminal
CN112354176A (en) Cloud game implementation method, cloud game implementation device, storage medium and electronic equipment
CN115292020B (en) Data processing method, device, equipment and medium
JP2022546145A (en) Cloud native 3D scene game method and system
CN113535063A (en) Live broadcast page switching method, video page switching method, electronic device and storage medium
CN112749022A (en) Camera resource access method, operating system, terminal and virtual camera
CN112843676A (en) Data processing method, device, terminal, server and storage medium
CN116546228B (en) Stream pushing method, device, equipment and storage medium for virtual scene
CN114598931A (en) Streaming method, system, device and medium for multi-open cloud game
CN114268796A (en) Method and device for processing video stream
CN113411660B (en) Video data processing method and device and electronic equipment
CN111026406A (en) Application running method, device and computer readable storage medium
CN115364477A (en) Cloud game control method and device, electronic equipment and storage medium
CN114222185B (en) Video playing method, terminal equipment and storage medium
CN115018693A (en) Docker image acceleration method and system based on software-defined graphics processor

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant