CN111818120A - End cloud user interaction method and system, corresponding equipment and storage medium - Google Patents


Info

Publication number: CN111818120A (granted publication: CN111818120B)
Application number: CN202010431050.0A
Authority: CN (China)
Other languages: Chinese (zh)
Prior art keywords: cloud, rendering, terminal, channel, instruction
Inventors: 奚智, 沙斌, 衣春雷, 邹仕洪, 朱睿, 李翔
Original and current assignee: Beijing Yuanxin Science and Technology Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis)
Application filed by Beijing Yuanxin Science and Technology Co Ltd
Legal status: Granted; Active (the legal status is an assumption and is not a legal conclusion)

Classifications

    • H04L 67/14 Session management (network arrangements or protocols for supporting network services or applications)
    • G06F 9/452 Remote windowing, e.g. X-Window System, desktop virtualisation (execution arrangements for user interfaces)
    • H04L 67/08 Protocols specially adapted for terminal emulation, e.g. Telnet
    • H04L 67/10 Protocols in which an application is distributed across nodes in the network
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management


Abstract

The application discloses an end cloud user interaction method and system, with corresponding equipment and a storage medium. A control information/instruction channel, an end cloud collaborative rendering channel, and/or an audio-video data interaction channel are established between a terminal and a cloud. The method comprises: the terminal uploads human-computer interaction instructions, system events and/or messages to the cloud through the control information/instruction channel; the cloud updates and distributes a message list based on what it receives and calls the terminal kernel program through the same channel to complete specific terminal operations; and/or end cloud collaborative graphics rendering is performed through the end cloud collaborative rendering channel and the graphics are output at the terminal, with the cloud updating the terminal's graphics rendering data through that channel; and/or the terminal plays cloud audio-video resources through the audio-video data interaction channel and/or uploads captured audio and video to the cloud. The invention improves the real-time performance, responsiveness and stability of an end cloud integrated operating system.

Description

End cloud user interaction method and system, corresponding equipment and storage medium
Technical Field
The present application relates to digital information transmission, and in particular, to a method and a system for end cloud user interaction, and a corresponding device and a storage medium.
Background
An operating system is composed of a kernel, drivers, a file system, multimedia, application management, security management, input/output, and so on. Traditionally almost all of it runs on the terminal, which therefore faces high configuration requirements. The user interacts by calling the relevant operating system services directly through the terminal's input/output devices, so all interactive operations are completed on the terminal.
Virtual Desktop Infrastructure (VDI) has been proposed: an operating system runs on a data center server and the user's desktop is virtualized, as shown in fig. 1. Users connect to the virtual desktop from a client device (thin client or home PC) via a remote computing protocol and access their desktop as if it were a traditional locally installed one.
A Virtual Mobile Infrastructure (VMI) has also been proposed, which centrally deploys "virtual handsets" in a data center, and implements unified management and control of "virtual handsets", as schematically illustrated in fig. 2.
A VDI or VMI architecture solves many problems of the traditional computer/mobile terminal scheme, such as low host resource utilization, hardware aging, the repetitive workload of installing and upgrading systems and software, decentralized and insecure data, and vulnerability to malware. However, VDI/VMI interaction requires transmitting large volumes of images between the terminal and the cloud, which leads to poor user experience, unstable desktops, and poor performance under centralized operation.
Disclosure of Invention
To overcome these defects in the prior art, the invention provides an end cloud user interaction method and system, with corresponding equipment and a storage medium, which can improve the real-time performance, responsiveness and stability of an end cloud integrated operating system and address the fragmentation of terminal devices and applications.
In a first aspect of the present invention, an end cloud user interaction method is provided, in which a control information/instruction channel, an end cloud collaborative rendering channel, and/or an audio-video data interaction channel are established between a terminal and a cloud, and the method includes:
the terminal uploads a human-computer interaction instruction, a system event and/or a message to the cloud end through the control information instruction channel, the cloud end updates and distributes a message list based on the received human-computer interaction instruction, system event and/or message, and the cloud end calls a terminal kernel program through the control information instruction channel to complete specific operation of the terminal; and/or
Performing end cloud collaborative graphic rendering through the end cloud collaborative rendering channel and outputting a graphic at the terminal, wherein the cloud end updates graphic rendering data of the terminal through the end cloud collaborative rendering channel; and/or
And the terminal plays the cloud audio and video resources through the audio and video data interaction channel and/or uploads the collected audio and video to the cloud.
In an embodiment, the end cloud cooperation mode for collaborative graphics rendering and/or the multimedia cooperative processing mode for audio-video data interaction are determined dynamically or statically, at least according to the terminal device configuration and the network environment.
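The dynamic/static mode determination described above can be pictured as a simple policy function. The sketch below is illustrative only: the mode names, input parameters and thresholds are assumptions, since the patent does not specify concrete values.

```python
# Hypothetical sketch: choose an end cloud collaborative rendering mode from
# terminal capability and measured network bandwidth. The thresholds are
# invented for illustration and are not taken from the patent.

def choose_render_mode(has_full_gui_stack: bool, has_gpu: bool,
                       bandwidth_mbps: float) -> str:
    """Return one of the three cooperation modes described in the embodiments."""
    if not has_gpu and not has_full_gui_stack:
        # Thin terminal: cloud renders window bitmaps, terminal only mixes them.
        return "cloud-renders-bitmaps"
    if has_full_gui_stack and bandwidth_mbps < 5:
        # Capable terminal, scarce bandwidth: cloud pushes draw instructions only.
        return "cloud-pushes-instructions"
    # Middle ground: cloud renders per-node graphics resources, terminal
    # executes final rendering and window mixing.
    return "distributed-collaborative"
```

A static deployment would call such a function once at provisioning time; a dynamic one would re-evaluate it as the measured network environment changes.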
In a second aspect of the present invention, an end cloud user interaction system is provided, in which a control information instruction channel, an end cloud collaborative rendering channel, and/or an audio/video data interaction channel are/is established between a terminal and a cloud, and the system includes:
the control information instruction interaction module is used for enabling the terminal to upload a human-computer interaction instruction, a system event and/or a message to the cloud end through the control information instruction channel, the cloud end updates and distributes a message list based on the received human-computer interaction instruction, system event and/or message, and the cloud end calls a terminal kernel program through the control information instruction channel to complete specific operation of the terminal; and/or
The terminal cloud collaborative rendering module is used for performing terminal cloud collaborative graphic rendering through the terminal cloud collaborative rendering channel and outputting a graphic at the terminal, wherein the cloud end updates graphic rendering data of the terminal through the terminal cloud collaborative rendering channel; and/or
And the audio and video data interaction module is used for enabling the terminal to play the cloud audio and video resources through the audio and video data interaction channel and/or uploading the collected audio and video to the cloud.
In a third aspect of the present invention, a computer readable storage medium is provided, having stored thereon a computer program, which when executed by a processor, performs the steps of the end cloud user interaction method according to the first aspect of the present invention.
In a fourth aspect of the present invention, there is provided a computer device comprising a processor, a memory and a computer program stored on the memory and executable on the processor, wherein the processor when executing the computer program implements the steps of the end cloud user interaction method according to the first aspect of the present invention.
The invention provides an end cloud integrated user interaction method and system that can meet the usage requirements of different device types and scenarios while minimizing the cost of the intelligent terminal. End cloud interaction mainly comprises three modes: control information/instruction interaction, end cloud collaborative rendering, and multimedia collaborative processing, realized concretely by establishing three network channels between the terminal and the cloud. Collaborative rendering and multimedia processing adopt a distributed cooperation mode in which the division of tasks between terminal and cloud is allocated flexibly, reducing the influence of network communication on the system and thereby improving the real-time performance, responsiveness and stability of the end cloud integrated operating system. This distributed cooperation mode allows a traditional operating system to be split into a very simple terminal closely tied to the underlying hardware and a cloud closely tied to the user, i.e., a thin-terminal-plus-cloud model: the microkernel, peripheral drivers, multimedia and similar functions are placed in the intelligent device terminal layer, while the remaining operating system services such as file management, application management and the rendering engine form the cloud service layer. The terminal focuses on its peripherals, and the user completes interaction through control instructions and rendering instructions transmitted between terminal and cloud. Since the terminal is responsible only for basic input/output operating system functions, its system resource requirements are low and it can be adapted to run on a wide variety of hardware devices.
Other features and advantages of the present invention will become more apparent from the detailed description of the embodiments of the present invention when taken in conjunction with the accompanying drawings.
Drawings
FIG. 1 is a schematic diagram of a prior art VDI architecture;
FIG. 2 is a schematic diagram of a prior art VMI architecture;
FIG. 3 is a schematic diagram of an end-cloud integrated operating system architecture according to an embodiment of the present invention;
FIG. 4 is a flow chart of an embodiment of a method according to the present invention;
FIG. 5 is a schematic diagram illustrating interaction of command channels for manipulating information according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of an interaction of an end cloud collaborative rendering channel according to an embodiment of the present invention;
fig. 7 is an interaction diagram of an audio/video data interaction channel according to an embodiment of the present invention;
FIG. 8 is a block diagram of one embodiment of a system in accordance with the present invention.
For the sake of clarity, the figures are schematic and simplified drawings, which only show details which are necessary for understanding the invention and other details are omitted.
Detailed Description
Embodiments and examples of the present invention will be described in detail below with reference to the accompanying drawings.
The scope of applicability of the present invention will become apparent from the detailed description given hereinafter. It should be understood, however, that the detailed description and the specific examples, while indicating preferred embodiments of the invention, are given by way of illustration only.
Fig. 3 shows a schematic diagram of an end cloud integrated operating system architecture according to an embodiment of the present invention. The end cloud integrated operating system is divided into an end (terminal) and a cloud. The terminal, as a simple terminal, retains only the functions close to the terminal hardware layer needed to realize a minimal operating system; the cloud is responsible for the remaining user-facing functions of the operating system, including all user data and applications that need protection. The terminal side is divided into a bottom microkernel layer and a system service layer; the cloud side is divided into a virtual device layer, an application framework layer and an application layer.
The terminal can adopt a minimal operating system that provides only basic hardware resource management and input/output, specifically: the microkernel layer implements the basic OS functions, scheduling and managing terminal hardware resources (CPU, memory, IO, etc.) as a stripped-down kernel. The system service/driver layer is mainly responsible for the terminal's input/output, covering the OS input/output system, a locally trimmed file system, communication protocols, and the terminal peripheral drivers; it runs on top of the microkernel layer to provide basic system services, with all modules communicating through the microkernel. With the microkernel as the base kernel and function modules integrated into the upper system service/driver layer, the terminal can be flexibly adapted to devices of different types and specifications; this layer drives the terminal hardware and constructs and maintains the terminal's most basic operating environment.
The cloud realizes most functions of the traditional application framework layer and application layer, specifically: a virtual device layer abstracts the terminal equipment, hiding the hardware interface details of the specific terminal platform and presenting a virtual hardware platform to the operating system; this hardware independence makes it easy to port applications across platforms. The application framework layer realizes the core functions of a traditional system and supports the upper application layer, including modules for window management, file management, application management and the like. The application layer runs the various user-facing service applications. The parts of a traditional software system that need security protection, such as applications and user data, are placed uniformly in the cloud part of the end cloud integrated operating system; user data is not stored on the terminal, and the security of applications and data is guaranteed through cloud technologies such as cloud storage.
The terminal and cloud realize cooperative interaction by maintaining network interaction channels. Three network channels are established between the two sides, namely a control information/instruction channel, an end cloud collaborative rendering channel and an audio-video data interaction channel, realizing control information/instruction interaction, end cloud collaborative rendering and multimedia collaborative processing respectively.
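As a purely illustrative sketch, the three channels can be modeled as named message queues over one link; the class name, channel identifiers and data shapes below are assumptions, not from the patent, and a real system would use network sockets or streams.

```python
# Illustrative sketch only: model the three end cloud channels as named
# in-memory queues. Channel names mirror the three channels in the text:
# control information/instruction, collaborative rendering, audio-video.
from collections import deque

class EndCloudLink:
    CHANNELS = ("control", "render", "av")

    def __init__(self):
        self._queues = {name: deque() for name in self.CHANNELS}

    def send(self, channel: str, payload) -> None:
        """Queue a payload on one of the three named channels."""
        if channel not in self._queues:
            raise ValueError(f"unknown channel: {channel}")
        self._queues[channel].append(payload)

    def recv(self, channel: str):
        """Pop the oldest payload from a channel, or None if it is empty."""
        q = self._queues[channel]
        return q.popleft() if q else None
```

Keeping the channels separate lets control traffic, rendering updates and audio-video streams be prioritized and managed independently, which is the point of establishing three distinct channels.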
Fig. 4 is a flow chart of a preferred embodiment of the end cloud user interaction method according to the present invention.
At 410, the terminal uploads human-computer interaction instructions, system events and/or messages to the cloud via the control information/instruction channel; the cloud updates and distributes a message list based on what it receives, and calls the terminal kernel program via the same channel to complete specific terminal operations. For example: when a peripheral is hot-plugged at the terminal, the event list must be updated and upper-layer applications informed, so that they can acquire or release the peripheral; when the terminal operates an application window, it sends a message that is synchronized to the cloud message list, the window manager distributes the message to the target application window, and the window receives it and responds according to its own logic. Fig. 5 shows how a user interacts with the operating system through the end cloud control information/instruction channel. Specifically, the end side processes the user's input events (mouse, keyboard, touch, etc.) and control instructions and data through the terminal system, and sends the terminal's system events and messages to the cloud side through the control information/instruction channel. A function module of the cloud-side application framework layer (such as the window manager) receives the events and messages, updates the message list, and distributes the messages (to a specific application window or application thread); the application layer receives the system messages and responds; and the application framework layer calls the terminal kernel program via the channel, in the manner of a system call, to complete the specific terminal operation, realizing one complete user interaction.
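The cloud-side message handling just described (receive event, update the message list, dispatch to the target window) can be sketched as follows. All names and the event dictionary shape are illustrative assumptions.

```python
# Hedged sketch of the cloud-side dispatch path: events arriving from the
# control information/instruction channel update a message list, and a
# window manager routes each message to its target application window.

class WindowManager:
    def __init__(self):
        self.windows = {}          # window_id -> handler callable
        self.message_list = []     # cloud-side message list

    def register_window(self, window_id, handler):
        self.windows[window_id] = handler

    def on_terminal_event(self, event: dict):
        """Update the message list, then dispatch to the target window."""
        self.message_list.append(event)
        handler = self.windows.get(event.get("target"))
        if handler is not None:
            return handler(event)  # the window reacts per its own logic
        return None
```

In the full design the window's reaction may in turn trigger a system call back over the channel to the terminal kernel, closing the interaction loop.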
At 420, end cloud collaborative graphics rendering is performed through the end cloud collaborative rendering channel and the graphics are output at the terminal, with the cloud updating the terminal's graphics rendering data through the channel. FIG. 6 illustrates the end cloud collaborative rendering channel interaction according to an embodiment. The end cloud integrated operating system achieves collaborative graphics rendering through this channel, the cloud side updates the data relevant to terminal graphics rendering through it, and the system divides end cloud collaborative rendering into different cooperation levels.
First, each window of each application serves as an independent rendering node with its own graphics context: the current drawing environment of that window, containing the parameters the drawing system needs for subsequent drawing operations and all information about the specified device, and defining basic drawing characteristics such as color, clipping region, line width and style, font information, and composition options. Each rendering node's graphics context manages and maintains the independent graphics resources its window needs for drawing, such as textures and framebuffer objects. The GUI graphics system manages the group of rendering node queues corresponding to the application windows on the desktop; when an application initiates a drawing operation, the graphics rendering instruction is issued through an API call and the operation instruction and data are submitted to the instruction queue of that application's rendering node. The underlying graphics system reads the operation instructions and data from the queue and executes them through the rendering pipeline; when rendering finishes it notifies the system that the window's render target has changed and the corresponding screen region needs updating. The GUI graphics system then performs a new round of window mixing and updates the display system's framebuffer, and finally the display controller reads the updated framebuffer and displays it on screen.
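A rendering node as described, with a per-window graphics context and an instruction queue that the underlying graphics system drains, might look like the following minimal sketch; the field names are assumptions for illustration.

```python
# Sketch of a per-window rendering node: it owns a graphics context
# (drawing-environment parameters) and an instruction queue of draw ops
# plus data, drained in order by the underlying graphics system.

class RenderNode:
    def __init__(self, window_id):
        self.window_id = window_id
        # Simplified graphics context: a few of the basic drawing
        # characteristics named in the text (color, clip region, line width).
        self.context = {"color": None, "clip": None, "line_width": 1.0}
        self.instructions = []  # queued (op, data) pairs

    def submit(self, op: str, data: dict):
        """API call path: append an operation instruction and its data."""
        self.instructions.append((op, data))

    def drain(self):
        """Underlying graphics system reads all queued ops, in order."""
        ops, self.instructions = self.instructions, []
        return ops
```

Each window keeping its own node and queue is what later allows the cloud and the end side to synchronize rendering work per node rather than per whole screen.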
The operating system's overall graphics rendering and output workflow can generally be divided into 4 steps:
1. An application dynamically initiates a window drawing operation, prepares the rendering operations and data to be drawn (points, lines, colors, positions, buffers, operations, settings, etc.), passes them to the underlying graphics system through API calls, and adds them to the rendering node's instruction queue;
2. Each window's rendering node reads the operation instructions and data in order, and the underlying rendering pipeline computes each application's updated graphics rendering resources (such as textures and framebuffer objects);
3. Based on each application's display scene graph and the updated graphics resources and context (the textures, positions, regions and attributes of each display unit, etc.), the graphics system mixes, assembles and composites them to generate each window's bitmap output, i.e. the framebuffer of each application window;
4. Finally, the graphics system updates each application window, mixes the windows in their corresponding screen regions to generate a new desktop bitmap, and writes it to the system video memory; the display controller reads the video memory and outputs the final desktop bitmap to the display.
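The four steps above can be condensed into an illustrative pipeline. The sketch stands in for real rendering with strings: "bitmaps" are labels rather than pixel buffers, and the function name and data shapes are assumptions.

```python
# Toy walk-through of the 4-step workflow: per-window ops -> per-window
# resources (steps 1-2) -> per-window bitmaps (step 3) -> one mixed
# desktop "bitmap" (step 4). Strings stand in for real GPU work.

def render_workflow(windows: dict) -> str:
    # Steps 1-2: each window's queued ops become updated graphics resources.
    resources = {w: [f"{op}-rendered" for op in ops] for w, ops in windows.items()}
    # Step 3: per-window composition into a window bitmap (framebuffer).
    bitmaps = {w: "+".join(r) for w, r in resources.items()}
    # Step 4: mix all window bitmaps into a single desktop bitmap.
    return " | ".join(f"{w}:{bmp}" for w, bmp in sorted(bitmaps.items()))
```

The collaborative modes that follow differ only in where the cut between cloud and end side falls along these four steps.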
End cloud collaborative rendering is carried out by placing these specific rendering steps on the end side or the cloud side respectively, performing the rendering in a distributed, collaborative fashion.
In an embodiment, the cloud completes the graphics rendering work of steps 1, 2 and 3 above for each window's rendering node, obtaining updated window bitmaps. When an application window initiates a drawing operation, or the desktop window, i.e. the rendering node list, changes (window creation, destruction, movement, state or relative-position changes), the cloud side synchronizes the window bitmaps and rendering node list that need updating to the end side over the network; only the window mixing of step 4 is left to the end side. In this collaborative rendering mode the network bandwidth requirement is closely tied to the performance of the end-side display device (i.e. the format and size of the end-side screen window bitmaps that must be synchronized). The performance requirement on the cloud side is high and the dependence on it large, since the cloud performs most of the system's graphics rendering; the requirements on the end-side software and hardware environment are low, as a lightweight terminal only needs to complete window mixing and a small amount of related work. The advantage of this mode is its modest demand on the end-side environment; the cost is the network bandwidth it requires, which depends on the end-side display performance. With a high-end display device it places very high demands on network bandwidth, and the system's demands on the cloud are also high.
This collaborative rendering mode is therefore suited to low-end embedded or mobile terminals with modest display configurations, where the bandwidth demand stays low, and to wearable embedded devices such as smart watches.
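In this first mode the terminal performs only step 4: blending cloud-supplied window bitmaps according to the synchronized rendering node list. A minimal sketch, with assumed data shapes (a bitmap is an opaque value, the node list gives bottom-to-top z-order):

```python
# Sketch of end-side window mixing in the bitmap-synchronization mode:
# the cloud sends window bitmaps plus the node list; the lightweight
# terminal composites them in list order, skipping nodes whose bitmaps
# have not (yet) been synchronized.

def mix_windows(node_list: list, bitmaps: dict) -> list:
    """Return the desktop composition, bottom to top."""
    return [bitmaps[n] for n in node_list if n in bitmaps]
```

Because this is all the terminal does, almost any device that can blit bitmaps qualifies, which is why the mode fits wearables and low-end embedded terminals.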
In another embodiment, the cloud side is responsible only for step 1 above and pushes the application's operation instructions and data to the end side; the remaining steps 2, 3 and 4, i.e. all graphics rendering work, are done by the end-side GUI graphics system, as shown in fig. 6. When a drawing operation is initiated on an application window or the desktop window, i.e. the rendering node list, changes (window creation, destruction, movement, state or relative-position changes), the cloud side updates the instruction queues or list information of the end-side display system's rendering nodes, and the end side executes the operations to complete the whole rendering process. In this cooperation mode the end and cloud must synchronously update rendering operation instructions, operation data and rendering node list information; the required network bandwidth is closely tied to the amount of operation data actually synchronized, which is generally small. The requirement on the end side is the highest: since the terminal must complete all graphics rendering operations, it needs a relatively complete software and hardware operating environment (GPU rendering pipeline, font engine, GUI graphics system, etc.). The requirement on the cloud side is the lowest, as it only pushes operation instructions and data and performs no actual rendering.
The advantages of this collaborative rendering mode are its low demands on network bandwidth and cloud-side computing resources; its drawbacks are the high requirements on the terminal software and hardware environment, which must provide a complete graphical GUI system, and the fact that the amount of operation data the end/cloud can synchronize is limited by network bandwidth. This mode therefore suits scenarios where network bandwidth demands must stay modest and the cloud side must support a large number of online terminals.
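The instruction-push mode above amounts to serializing draw instructions and data on the cloud and replaying them on the terminal. The sketch below uses a JSON-lines wire format, which is an assumption for illustration; the patent does not specify an encoding.

```python
# Hedged sketch of the instruction-push mode: the cloud serializes draw
# operation instructions plus data; the end-side GUI stack decodes and
# replays them. JSON lines is an invented wire format for the example.
import json

def encode_ops(ops):
    """Cloud side: serialize (op, data) pairs for the rendering channel."""
    return "\n".join(json.dumps({"op": op, "data": data}) for op, data in ops)

def replay_ops(wire: str):
    """End side: decode and 'execute' each instruction (here: collect names).

    A real terminal would feed each decoded op into its GPU rendering
    pipeline; this sketch just records the order of execution.
    """
    executed = []
    for line in wire.splitlines():
        msg = json.loads(line)
        executed.append(msg["op"])
    return executed
```

Since only compact instructions cross the network, bandwidth scales with the operation data rather than the screen size, matching the trade-off described above.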
In another embodiment, the cloud side completes steps 1 and 2, rendering and updating the basic graphics resources of each application window's rendering node; it synchronizes the rendering node list that needs updating, together with each updated node's context and graphics resources, to the end side over the network, and the remaining steps 3 and 4 are left to the end side. To give end cloud collaborative rendering the best responsiveness and frame rate, the end cloud rendering architecture can proceed as follows:
the cloud system preloads the resources such as pictures, characters, color states and the like required by rendering to form textures in the starting stage, and synchronously uploads the textures to the GPU on the end side, so that the different processes on the cloud side can be shared by rendering resources.
During the rendering stage of a cloud application:
different windows of different applications maintain independent rendering nodes, and node information is synchronized to the end side for maintenance only when the set of rendering nodes changes;
when a cloud application generates a rendering instruction (draw-op), the cloud side does not submit it to the end-side GPU immediately but caches it in the chunk queue of its rendering node, waiting for the VSYNC signal notification from the end-side GPU;
when each cloud-side rendering node receives the vsync notification, it reorders the cached chunk instruction queue by drawing order and merges adjacent rendering instructions according to instruction type, whether cached resources are used, whether overlapping occlusion occurs, and so on;
the reordered and merged rendering instruction queues are synchronized to the corresponding end-side rendering node queues; after receiving them, the end-side GPU executes the rendering within the VSYNC interval reported by the graphics hardware (e.g. 16 ms) using the graphics resource data cached in the GPU, and the result is updated to the end-side screen through window composition, completing one frame of cloud rendering.
Through the cache, rearrange and merge strategy, this mechanism minimizes the data bandwidth needed to transmit rendering instructions; at the same time, placing rendering operations of the same type (such as drawing rectangles or updating textures) next to each other minimizes state switching in the GPU rendering pipeline and improves GPU efficiency. Through the resource preloading mechanism, different processes share rendering resources, which raises video memory utilization and ensures, to the greatest extent, an optimal per-frame rendering time.
In this collaborative rendering mode, the cloud side and the end side work in a distributed cooperative manner, which reduces the cloud load, effectively shortens application latency even when cloud-side resources are limited, and improves system responsiveness. While an application is running, the volume of graphics rendering resources dynamically updated from the cloud side to the end side is generally small, so only modest network bandwidth is needed. Both the end side and the cloud side must provide a certain software and hardware environment, because each must independently complete its share of the graphics rendering work; however, the end side does not need the complete capabilities of a traditional GUI display system (such as primitive drawing, character-set drawing, bitmap format support and timing management) to do so. The advantages of this collaborative rendering mode are: low network-bandwidth requirements while the application is running; higher display performance on a relatively low end-side configuration; and, when multiple users collaborate and interact within the same application scene, the rendering resources for that scene are largely identical, so they can be multiplexed across multiple terminals simultaneously, effectively raising the utilization of cloud-side computing resources. It also has certain disadvantages, such as a high network-bandwidth requirement during the application start-up stage, and certain performance requirements on both the cloud side and the end side, each of which must provide a basic graphics rendering software and hardware environment.
Therefore, collaborative rendering mode 2 suits applications for which runtime network bandwidth is at a premium and in which multiple end users interact within the same scene, such as 2D/3D games, design tools and VR/AR.
At 430, the terminal plays cloud audio and video resources through the audio and video data interaction channel and/or uploads collected audio and video to the cloud. Fig. 7 shows an interaction diagram of the audio-video data interaction channel. The functional module in the figure that performs the multimedia processing (specifically, part of the multimedia pipeline) can be realized in either the terminal or the cloud multimedia framework, depending on factors such as the terminal device configuration. Terminal playback of cloud audio/video resources: the cloud pushes audio/video data to the terminal through the end cloud audio/video interaction channel; the resource's data stream passes through the multimedia pipeline, where workflows such as parsing, buffering and decoding generally produce separated audio/image data, which is then output by calling the audio and graphics interfaces of the terminal's multimedia framework. Terminal audio/video collection and upload: audio/image data collected by the terminal's audio capture device (MIC) or image capture device (Camera) is fed into the multimedia processing module through an interface, and the multimedia pipeline generally synthesizes the final audio/video data stream through workflows such as audio/video data processing, encoding, buffering and synthesis. The terminal then uploads the resulting audio/video data to the cloud through the end cloud audio/video interaction channel.
While the terminal is playing or collecting audio/video, interactive operations generated by the system and the application, such as messages, events and queries (for example fast-forward, pause, querying the playback position, format adjustment and end of playback), are coordinated between the end and the cloud through the end cloud audio/video interaction channel.
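The two multimedia pipeline flows above can be sketched as ordered stage chains. The stage names and the dictionary payloads below are illustrative assumptions; a real multimedia framework would pass buffers between demuxer, decoder/encoder and sink elements rather than Python dicts.

```python
# Minimal pipeline sketch: each stage transforms the stream and passes it on.
class Pipeline:
    def __init__(self, *stages):
        self.stages = stages

    def run(self, data):
        for stage in self.stages:
            data = stage(data)
        return data

# Playback path: the cloud pushes a stream; the terminal parses, buffers
# and decodes it, then hands the frames to the audio/graphics interfaces.
playback = Pipeline(
    lambda s: {"demuxed": s},                       # parse/analyse
    lambda s: {**s, "buffered": True},              # buffer
    lambda s: {**s, "frames": ["a0", "v0", "v1"]},  # decode (illustrative)
)

# Capture path: MIC/Camera input is processed, encoded and synthesized
# into a container before the terminal uploads it over the A/V channel.
capture = Pipeline(
    lambda s: {"raw": s},
    lambda s: {**s, "encoded": True},    # encode (capture side encodes)
    lambda s: {**s, "container": "mp4"}, # synthesize/mux (format assumed)
)

out = playback.run("cloud-stream")
up = capture.run("mic+camera")
print(out["frames"], up["container"])
```

The asymmetry is the key point: the playback pipeline ends in decoding and output, while the capture pipeline ends in encoding and synthesis before upload.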
The cloud can use a virtual device layer to abstract the various terminal hardware devices, decoupling cloud applications from the terminal hardware, so that an application developed and deployed once in the cloud can serve users across terminals. A minimal terminal can adopt an extensible lightweight kernel or microkernel to improve system extensibility, flexibly adapt to hardware of various types and specifications, and improve system reusability. User data is stored and used in the cloud rather than on the terminal device, which improves the security of applications and data. End cloud cooperative interaction is realized comprehensively by maintaining the three end cloud interaction channels. The parts that implement end cloud collaborative rendering and collaborative multimedia processing can be flexibly assigned to the terminal or the cloud, in a dynamic or static cooperative mode, according to terminal device configuration, network environment and other conditions; this effectively reduces the impact of network communication on the system and improves the real-time performance, responsiveness and stability of the end cloud integrated operating system.
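The virtual device layer just described can be sketched as an abstract interface plus a registry. The class names, the `terminal-01` identifier and the string-based routing below are illustrative assumptions; a real system would forward the request over the control information instruction channel.

```python
from abc import ABC, abstractmethod

class VirtualDevice(ABC):
    """Cloud-side abstraction that decouples applications from concrete
    terminal hardware: apps program against this interface only."""
    @abstractmethod
    def invoke(self, request): ...

class VirtualCamera(VirtualDevice):
    def __init__(self, terminal_id):
        self.terminal_id = terminal_id

    def invoke(self, request):
        # A real implementation would forward over the end cloud control
        # channel; here we just record where the call would be routed.
        return f"{self.terminal_id}:camera:{request}"

class DeviceRegistry:
    """Maps device names to virtual devices, so one cloud deployment can
    bind the same application to hardware of different terminals."""
    def __init__(self):
        self._devices = {}

    def register(self, name, device):
        self._devices[name] = device

    def call(self, name, request):
        return self._devices[name].invoke(request)

registry = DeviceRegistry()
registry.register("camera", VirtualCamera("terminal-01"))
print(registry.call("camera", "capture"))  # terminal-01:camera:capture
```

Swapping the registered device rebinds the application to different terminal hardware without changing the application itself, which is the decoupling the paragraph claims.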
Fig. 8 is a block diagram of a preferred embodiment of the end cloud user interaction system according to the present invention, in which a control information instruction channel, an end cloud collaborative rendering channel and/or an audio/video data interaction channel are established between the terminal and the cloud. The system comprises: a control information instruction interaction module 810, for the terminal to upload human-computer interaction instructions, system events and/or messages to the cloud through the control information instruction channel, the cloud updating and dispatching its message list based on what it receives and invoking the terminal kernel program through the control information instruction channel to complete specific operations on the terminal; and/or an end cloud collaborative rendering module 820, for performing end cloud collaborative graphics rendering through the end cloud collaborative rendering channel and outputting graphics at the terminal, the cloud updating the terminal's graphics rendering data through that channel; and/or an audio/video data interaction module 830, for the terminal to play cloud audio/video resources through the audio/video data interaction channel and/or upload collected audio/video to the cloud. In an embodiment, the end cloud cooperation mode for collaborative graphics rendering and/or for the multimedia processing of audio/video data interaction can be determined dynamically or statically according to the terminal device configuration, the network environment and other conditions.
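The "and/or" composition of the three modules of Fig. 8 can be sketched as follows. The channel names, the lambda handlers and the dispatch mechanism are illustrative assumptions; only the three-module structure and their optionality come from the text.

```python
class EndCloudSystem:
    """Sketch of the system of Fig. 8: up to three modules, each bound to
    its own end cloud channel; any subset may be established ("and/or")."""
    def __init__(self, control=None, rendering=None, media=None):
        self.modules = {}
        if control is not None:
            self.modules["control"] = control     # module 810
        if rendering is not None:
            self.modules["rendering"] = rendering # module 820
        if media is not None:
            self.modules["media"] = media         # module 830

    def dispatch(self, channel, payload):
        if channel not in self.modules:
            raise KeyError(f"channel '{channel}' not established")
        return self.modules[channel](payload)

# A deployment that establishes only two of the three channels:
system = EndCloudSystem(
    control=lambda e: f"cloud handled event: {e}",
    rendering=lambda q: f"synced {len(q)} draw ops",
)
print(system.dispatch("control", "touch-down"))
print(system.dispatch("rendering", ["op1", "op2"]))
```

Dispatching on an unestablished channel (here, "media") raises an error, reflecting that each channel is optional and used only when set up between the terminal and the cloud.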
In another embodiment, the present invention provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program implements the steps of the method embodiments shown and described in connection with Figs. 3 to 7, or of other corresponding method embodiments, which are not repeated here.
In another embodiment, the present invention provides a computer device comprising a processor, a memory, and a computer program stored in the memory and runnable on the processor; when executing the computer program, the processor implements the steps of the method embodiments shown and described in connection with Figs. 3 to 7, or of other corresponding method embodiments, which are not repeated here.
The various embodiments described herein, or certain features, structures, or characteristics thereof, may be combined as suitable in one or more embodiments of the invention. Additionally, in some cases, the order of steps depicted in the flowcharts and/or in the pipelined process may be modified, as appropriate, and need not be performed exactly in the order depicted. In addition, various aspects of the invention may be implemented using software, hardware, firmware, or a combination thereof, and/or other computer implemented modules or devices that perform the described functions. Software implementations of the present invention may include executable code stored in a computer readable medium and executed by one or more processors. The computer-readable medium may include a computer hard drive, ROM, RAM, flash memory, portable computer storage media such as CD-ROM, DVD-ROM, flash drives, and/or other devices with a Universal Serial Bus (USB) interface, and/or any other suitable tangible or non-transitory computer-readable medium or computer memory on which executable code may be stored and executed by a processor. The present invention may be used in conjunction with any suitable operating system.
As used herein, the singular forms "a", "an" and "the" include plural references (i.e., have the meaning "at least one"), unless the context clearly dictates otherwise. It will be further understood that the terms "has," "includes" and/or "including," when used in this specification, specify the presence of stated features, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, elements, components, and/or groups thereof. The term "and/or" as used herein includes any and all combinations of one or more of the associated listed items.
The foregoing describes some preferred embodiments of the present invention, but it should be emphasized that the invention is not limited to these embodiments, but can be implemented in other ways within the scope of the inventive subject matter. Various modifications and alterations of this invention will become apparent to those skilled in the art without departing from the spirit and scope of this invention.

Claims (10)

1. An end cloud user interaction method, characterized in that a control information instruction channel, an end cloud collaborative rendering channel and/or an audio and video data interaction channel are established between a terminal and a cloud, the method comprising the following steps:
the terminal uploads a human-computer interaction instruction, a system event and/or a message to the cloud end through the control information instruction channel, the cloud end updates and distributes a message list based on the received human-computer interaction instruction, system event and/or message, and the cloud end calls a terminal kernel program through the control information instruction channel to complete specific operation of the terminal; and/or
Performing end cloud collaborative graphic rendering through the end cloud collaborative rendering channel and outputting a graphic at the terminal, wherein the cloud end updates graphic rendering data of the terminal through the end cloud collaborative rendering channel; and/or
the terminal plays cloud audio and video resources through the audio and video data interaction channel and/or uploads collected audio and video to the cloud.
2. The method according to claim 1, wherein a terminal cloud coordination mode of terminal cloud coordination graphic rendering and/or multimedia coordination processing of audio and video data interaction is determined dynamically or statically at least according to terminal device configuration and network environment.
3. The method according to claim 2, wherein the end cloud collaborative graphics rendering and output comprises the steps of:
A. responding to the dynamic window drawing operation of an application program, preparing the graphic rendering operation and data needing to be drawn, transmitting the operation and the data to a bottom graphic system, and adding the operation and the data to an instruction queue of a rendering node, wherein different windows of different application programs are used as an independent rendering node;
B. the rendering nodes of the windows read the operation instructions and data in sequence, and the updated graph rendering resources of the application programs are obtained through the calculation of a rendering pipeline at the bottom layer;
C. the graphics system performs mixing, assembly and synthesis based on the display scene graph, the updated graphics rendering resources and the context environment of each application program to generate bitmap output of each window;
D. the graphic system updates each application program window, performs mixing in the corresponding screen area to generate a final desktop bitmap, writes the final desktop bitmap into the system video memory, and the display controller reads the system video memory and displays and outputs the final desktop bitmap.
4. The method of claim 3, wherein the terminal and the cloud operate in a first collaboration mode during graphics rendering and output, wherein step A is performed at the cloud and steps B-D are performed at the terminal, and wherein the cloud updates an instruction queue of a rendering node of the terminal display system via the end cloud collaborative rendering channel.
5. The method of claim 3, wherein the terminal and the cloud operate in a second collaboration mode during graphics rendering and output, wherein steps A-B are performed at the cloud, and steps C-D are performed at the terminal, wherein the cloud synchronizes the list of rendering nodes to be updated and the updated context and graphics resources of the rendering nodes to the terminal through the end cloud collaborative rendering channel.
6. The method of claim 3, wherein the terminal and the cloud operate in a third collaboration mode during graphics rendering and output, wherein steps A-C are performed at the cloud, and step D is performed at the terminal, wherein the cloud synchronizes the rendering node window bitmap and the rendering node list that need to be updated to the terminal through the end cloud collaborative rendering channel.
7. The method of claim 5, further comprising:
the cloud system preloads resources required by rendering to form textures in a starting stage and synchronously uploads the textures to a terminal Graphic Processing Unit (GPU);
the cloud application responds to the change of the number of rendering nodes in the rendering stage and synchronizes the rendering node information to the terminal for maintenance;
in response to the cloud application generating a rendering instruction (draw-op), the cloud application caches the draw-op in the chunk queue of the rendering node it belongs to and waits for the VSYNC signal notification from the terminal GPU;
in response to each cloud rendering node receiving the VSYNC signal notification, rearranging the cached chunk instruction queue according to the drawing order, and at the same time merging adjacent rendering instructions according to instruction type, whether cached resources are used, and whether an overlapping occlusion effect is produced;
and synchronizing the rearranged and merged rendering instruction queues to the corresponding rendering node queues of the terminal; after receiving the merged and ordered rendering instruction queues, the terminal GPU performs drawing and rendering within the VSYNC interval reported by the graphics hardware, using the cached graphics resource data, and updates the rendering result to the terminal screen through window composition, completing one frame of cloud-to-end rendering.
8. An end cloud user interaction system, characterized in that a control information instruction channel, an end cloud collaborative rendering channel and/or an audio and video data interaction channel are established between a terminal and a cloud, the system comprising:
the control information instruction interaction module is used for enabling the terminal to upload a human-computer interaction instruction, a system event and/or a message to the cloud end through the control information instruction channel, the cloud end updates and distributes a message list based on the received human-computer interaction instruction, system event and/or message, and the cloud end calls a terminal kernel program through the control information instruction channel to complete specific operation of the terminal; and/or
The terminal cloud collaborative rendering module is used for performing terminal cloud collaborative graphic rendering through the terminal cloud collaborative rendering channel and outputting a graphic at the terminal, wherein the cloud end updates graphic rendering data of the terminal through the terminal cloud collaborative rendering channel; and/or
the audio and video data interaction module, for the terminal to play cloud audio and video resources through the audio and video data interaction channel and/or upload collected audio and video to the cloud.
9. A computer device comprising a processor, a memory and a computer program stored on the memory and executable on the processor, wherein the steps of the method according to any of claims 1-7 are implemented when the computer program is executed by the processor.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.
CN202010431050.0A 2020-05-20 2020-05-20 End cloud user interaction method and system, corresponding equipment and storage medium Active CN111818120B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010431050.0A CN111818120B (en) 2020-05-20 2020-05-20 End cloud user interaction method and system, corresponding equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111818120A true CN111818120A (en) 2020-10-23
CN111818120B CN111818120B (en) 2023-05-02

Family

ID=72848411

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010431050.0A Active CN111818120B (en) 2020-05-20 2020-05-20 End cloud user interaction method and system, corresponding equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111818120B (en)

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112347586A (en) * 2020-11-12 2021-02-09 上海电气液压气动有限公司 System for digitally twinning a hydraulic system
CN112614202A (en) * 2020-12-24 2021-04-06 北京元心科技有限公司 GUI rendering display method, terminal, server, electronic device and storage medium
CN112667234A (en) * 2020-12-22 2021-04-16 完美世界(北京)软件科技发展有限公司 Rendering pipeline creating method and device, storage medium and computing equipment
CN113244614A (en) * 2021-06-07 2021-08-13 腾讯科技(深圳)有限公司 Image picture display method, device, equipment and storage medium
CN113453073A (en) * 2021-06-29 2021-09-28 北京百度网讯科技有限公司 Image rendering method and device, electronic equipment and storage medium
CN113946402A (en) * 2021-11-09 2022-01-18 中国电信股份有限公司 Cloud mobile phone acceleration method, system, equipment and storage medium based on rendering separation
CN114090247A (en) * 2021-11-22 2022-02-25 北京百度网讯科技有限公司 Method, device, equipment and storage medium for processing data
CN114205359A (en) * 2022-01-27 2022-03-18 腾讯科技(深圳)有限公司 Video rendering coordination method, device and equipment
CN114443192A (en) * 2021-12-27 2022-05-06 天翼云科技有限公司 Multi-window virtual application method and device based on cloud desktop
CN114553890A (en) * 2020-11-24 2022-05-27 腾讯科技(深圳)有限公司 System message processing method and device, computer equipment and storage medium
CN114827186A (en) * 2022-02-25 2022-07-29 阿里巴巴(中国)有限公司 Cloud application processing method and system
CN115220832A (en) * 2021-04-21 2022-10-21 电科云(北京)科技有限公司 Security collaboration method and system based on cloud platform
CN115802098A (en) * 2023-02-09 2023-03-14 北京易智时代数字科技有限公司 Data interaction method, client, rendering end and system for cloud application
TWI814134B (en) * 2021-11-16 2023-09-01 財團法人工業技術研究院 Remote rendering system, method and device based on virtual mobile architecture
WO2023191710A1 (en) * 2022-03-31 2023-10-05 脸萌有限公司 End-cloud collaboration media data processing method and apparatus, device and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8224885B1 (en) * 2009-01-26 2012-07-17 Teradici Corporation Method and system for remote computing session management
CN103650458A (en) * 2013-08-16 2014-03-19 华为技术有限公司 Transmission method, device and system of media streams
CN106713485A (en) * 2017-01-11 2017-05-24 杨立群 Cloud computing mobile terminal and working method thereof


Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112347586A (en) * 2020-11-12 2021-02-09 上海电气液压气动有限公司 System for digitally twinning a hydraulic system
CN114553890A (en) * 2020-11-24 2022-05-27 腾讯科技(深圳)有限公司 System message processing method and device, computer equipment and storage medium
CN114553890B (en) * 2020-11-24 2023-08-08 腾讯科技(深圳)有限公司 System message processing method, device, computer equipment and storage medium
CN112667234A (en) * 2020-12-22 2021-04-16 完美世界(北京)软件科技发展有限公司 Rendering pipeline creating method and device, storage medium and computing equipment
CN112614202B (en) * 2020-12-24 2023-07-14 北京元心科技有限公司 GUI rendering display method, terminal, server, electronic equipment and storage medium
CN112614202A (en) * 2020-12-24 2021-04-06 北京元心科技有限公司 GUI rendering display method, terminal, server, electronic device and storage medium
CN115220832A (en) * 2021-04-21 2022-10-21 电科云(北京)科技有限公司 Security collaboration method and system based on cloud platform
CN113244614A (en) * 2021-06-07 2021-08-13 腾讯科技(深圳)有限公司 Image picture display method, device, equipment and storage medium
CN113244614B (en) * 2021-06-07 2021-10-26 腾讯科技(深圳)有限公司 Image picture display method, device, equipment and storage medium
CN113453073A (en) * 2021-06-29 2021-09-28 北京百度网讯科技有限公司 Image rendering method and device, electronic equipment and storage medium
CN113946402A (en) * 2021-11-09 2022-01-18 中国电信股份有限公司 Cloud mobile phone acceleration method, system, equipment and storage medium based on rendering separation
CN113946402B (en) * 2021-11-09 2024-10-08 中国电信股份有限公司 Cloud mobile phone acceleration method, system, equipment and storage medium based on rendering separation
TWI814134B (en) * 2021-11-16 2023-09-01 財團法人工業技術研究院 Remote rendering system, method and device based on virtual mobile architecture
CN114090247A (en) * 2021-11-22 2022-02-25 北京百度网讯科技有限公司 Method, device, equipment and storage medium for processing data
CN114443192A (en) * 2021-12-27 2022-05-06 天翼云科技有限公司 Multi-window virtual application method and device based on cloud desktop
CN114443192B (en) * 2021-12-27 2024-04-26 天翼云科技有限公司 Multi-window virtual application method and device based on cloud desktop
CN114205359A (en) * 2022-01-27 2022-03-18 腾讯科技(深圳)有限公司 Video rendering coordination method, device and equipment
CN114827186A (en) * 2022-02-25 2022-07-29 阿里巴巴(中国)有限公司 Cloud application processing method and system
WO2023191710A1 (en) * 2022-03-31 2023-10-05 脸萌有限公司 End-cloud collaboration media data processing method and apparatus, device and storage medium
CN115802098B (en) * 2023-02-09 2023-04-28 北京易智时代数字科技有限公司 Cloud application data interaction method, client, rendering end and system
CN115802098A (en) * 2023-02-09 2023-03-14 北京易智时代数字科技有限公司 Data interaction method, client, rendering end and system for cloud application

Also Published As

Publication number Publication date
CN111818120B (en) 2023-05-02

Similar Documents

Publication Publication Date Title
CN111818120B (en) End cloud user interaction method and system, corresponding equipment and storage medium
JP4901261B2 (en) Efficient remote display system with high-quality user interface
CN101263485B (en) Remoting redirection layer for graphics device interface
CN112614202B (en) GUI rendering display method, terminal, server, electronic equipment and storage medium
KR101087361B1 (en) System and method for a unified composition engine in a graphics processing system
US7937452B2 (en) Framework for rendering plug-ins in remote access services
US8244051B2 (en) Efficient encoding of alternative graphic sets
EP3525093B1 (en) Remoting of windows presentation framework based applications in a non-composed desktop
WO2019114185A1 (en) App remote control method and related devices
JP7475610B2 (en) Cloud native 3D scene gaming method and system
KR20100114050A (en) Graphics remoting architecture
US9396001B2 (en) Window management for an embedded system
CN115350479B (en) Rendering processing method, device, equipment and medium
KR100490401B1 (en) Apparatus and method for processing image in thin-client environment
CN114570020A (en) Data processing method and system
CN116546228B (en) Plug flow method, device, equipment and storage medium for virtual scene
CN117609646A (en) Scene rendering method and device, electronic equipment and storage medium
JP2007519074A (en) Protocol for remote visual composition
CN114615546A (en) Video playing method and device, electronic equipment and storage medium
CN117742997A (en) WebRTC-based three-dimensional cloud rendering plug flow fusion terminal computing power interaction method
CN118718390A (en) Cloud game picture presentation method and device, electronic equipment and storage medium
CN114924819A (en) Watermark synthesis method, cloud desktop server, client and computer readable storage medium
Herrb et al. New evolutions in the x window system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant