CN112565869B - Window fusion method, device and equipment for video redirection - Google Patents

Window fusion method, device and equipment for video redirection

Info

Publication number
CN112565869B
Authority
CN
China
Prior art keywords
image frame
frame queue
window
desktop
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011460208.3A
Other languages
Chinese (zh)
Other versions
CN112565869A (en)
Inventor
刘海军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan Os Easy Cloud Computing Co ltd
Original Assignee
Wuhan Os Easy Cloud Computing Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan Os Easy Cloud Computing Co ltd filed Critical Wuhan Os Easy Cloud Computing Co ltd
Priority to CN202011460208.3A priority Critical patent/CN112565869B/en
Publication of CN112565869A publication Critical patent/CN112565869A/en
Application granted granted Critical
Publication of CN112565869B publication Critical patent/CN112565869B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431: Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312: Generation of visual interfaces involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N21/44: Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/44004: Processing of video elementary streams involving video buffer management, e.g. video decoder buffer or video display buffer
    • H04N21/4402: Processing of video elementary streams involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N21/440218: Reformatting operations by transcoding between formats or standards, e.g. from MPEG-2 to MPEG-4
    • H04N21/440281: Reformatting operations by altering the temporal resolution, e.g. by frame skipping
    • H04N21/443: OS processes, e.g. booting an STB, implementing a Java virtual machine in an STB or power management in an STB
    • H04N21/4438: Window management, e.g. event handling following interaction with the user interface
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The application relates to a window fusion method, apparatus and device for video redirection in the technical field of desktop virtualization. After redirected video stream data and virtualized desktop attribute data are decoded and rendered in sequence into an SDL window, the image frame queue of the shielding window area in the desktop image frame queue is intercepted according to the shielding window area attributes carried in the redirected video stream data, and the intercepted image frame queue is rendered into the SDL window, so that the shielding window is displayed above the video image. A user can therefore view and operate the shielding window while watching the redirected video, without changing operation habits, which improves the user experience. In addition, the CPU instruction set, color mixing, transparency mixing and OpenGL are fully used to optimize the image frame queues, and the image frame queue of the shielding window is subjected to time-delay processing, so that the time consumed by fusion processing is greatly reduced and the smoothness of video playback is improved.

Description

Window fusion method, device and equipment for video redirection
Technical Field
The present application relates to the field of desktop virtualization technologies, and in particular, to a method, an apparatus, and a device for fusing windows for video redirection.
Background
Multimedia video playback is a great challenge for virtual desktops, because many aspects such as user experience, audio-video synchronization and bandwidth have to be considered. In the conventional playback process, a video file is generally read and decoded into images by the Central Processing Unit (CPU) of the server, and the image data is sent to the client over the network for presentation. This playback method has two defects: on the one hand, because video decoding is performed on the server side, it creates heavy CPU pressure; on the other hand, the decoded image data is large in volume and is limited by the network bandwidth, so the picture finally presented on the client is of poor quality and the video does not play smoothly.
In view of the above drawbacks, two approaches are generally adopted in the related art. In the first, the multimedia video playback images of the server are re-encoded, and the encoded video data is then transmitted to the client to be decoded, played and displayed. The second is video redirection, in which the encoded video stream that the server-side player is about to play is captured and sent directly to the client to be decoded, played and displayed.
The effect of video redirection is that video played on the virtual desktop is redirected to the terminal computer for playback, making full use of the computing resources of the terminal computer, thereby reducing the resource consumption of the virtual desktop and the network traffic from the virtual desktop to the terminal computer, and improving the smoothness of video playback. However, in current video redirection methods, the video connection or video content is basically pushed to the terminal computer, which then creates a new window for decoding and topmost display. This window may block the display window interface of the virtualized desktop, so the user cannot view or operate the blocked window interface of the virtualized desktop while watching the redirected video.
Disclosure of Invention
The embodiment of the application provides a window fusion method, apparatus and device for video redirection, which are used to solve the problem in the related art that a blocked window cannot be viewed or operated.
In a first aspect, a window fusion method for video redirection is provided, which includes the following steps:
acquiring redirected video stream data and virtualized desktop attribute data, wherein the redirected video stream data comprises attributes of a shielding window area;
decoding the redirected video stream data to obtain a video stream image frame queue, and decoding the virtualized desktop attribute data to obtain a desktop image frame queue;
rendering the video stream image frame queue and the desktop image frame queue to an SDL window in sequence;
and intercepting an image frame queue of the shielding window area in the desktop image frame queue according to the attribute of the shielding window area, and rendering the image frame queue of the shielding window area to the SDL window so that the shielding window is displayed above the redirected video.
In some embodiments, the decoding the virtualized desktop attribute data into a desktop image frame queue includes:
and decoding the virtualized desktop attribute data through GPU decoding and a CPU instruction set to obtain a desktop image frame queue in an RGBA color format.
After the desktop image frame queue in the RGBA color format is obtained, the method further includes the step of caching the desktop image frame queue in the RGBA color format into the Surface of the SPICE image.
The sequentially and respectively rendering the video stream image frame queue and the desktop image frame queue to the SDL window comprises:
converting the video stream image frame queue into a video stream image frame queue in an RGBA color format through OpenGL;
rendering a video stream image frame queue in an RGBA color format to the SDL window through OpenGL;
and performing color mixing and transparency overlapping processing on a desktop image frame queue in an RGBA color format in the Surface and a rendered video stream image frame queue through OpenGL, and overlapping and rendering the overlapping processing result to the SDL window.
The processing of color mixing and transparency superposition on the desktop image frame queue in the RGBA color format in the Surface and the rendered video stream image frame queue through OpenGL includes:
the transparency attribute value of the video stream image frame queue is 1, and the transparency attribute value of the desktop image frame queue ranges from 0 to 1;
and performing superposition calculation on the transparency attribute value of the desktop image frame queue and the transparency attribute value of the video stream image frame queue through OpenGL to obtain the target fusion transparency.
Rendering the shielding window area image frame queue to the SDL window includes:
performing time-delay processing on the image frame queue of the shielding window area;
and rendering the delayed image frame queue of the shielding window area to the SDL window through OpenGL.
The redirected video stream data further comprises redirected video frame data and redirected video position data, and the virtualized desktop attribute data comprises display change image data of the virtualized desktop and position data corresponding to the display change image of the virtualized desktop.
And the video stream image frame queue is in YUV format.
In a second aspect, a window fusion apparatus for video redirection is provided, which includes:
an obtaining unit, configured to obtain redirected video stream data and virtualized desktop attribute data, where the redirected video stream data includes an attribute of a blocked window area;
the decoding unit is used for decoding the redirected video stream data to obtain a video stream image frame queue and decoding the virtualized desktop attribute data to obtain a desktop image frame queue;
the first rendering unit is used for sequentially rendering the video stream image frame queue and the desktop image frame queue to the SDL window respectively;
and the second rendering unit is used for intercepting the image frame queue of the shielding window area in the desktop image frame queue according to the attribute of the shielding window area and rendering the image frame queue of the shielding window area to the SDL window so that the shielding window is displayed above the redirected video.
In a third aspect, a window fusion device for video redirection is provided, which includes a memory and a processor, wherein at least one instruction is stored in the memory, and is loaded and executed by the processor to implement the window fusion method.
The beneficial effects brought by the technical solution provided by the application include: the user can view and operate the shielding window while watching the redirected video, without changing operation habits, which improves the user experience.
The embodiment of the application provides a window fusion method, apparatus and device for video redirection. After the redirected video stream data and the virtualized desktop attribute data are decoded and rendered in sequence into an SDL window, the image frame queue of the shielding window area in the desktop image frame queue is intercepted according to the shielding window area attributes in the redirected video stream data, and the intercepted image frame queue is rendered into the SDL window, so that the shielding window is displayed above the video image and can be presented to the user. The user can thus view and operate the shielding window while watching the redirected video, without changing operation habits, which improves the user experience.
In addition, the embodiment of the application not only makes full use of GPU decoding, the CPU instruction set, color mixing, transparency mixing and OpenGL to optimize the image frame queues, but also applies time-delay processing to the image frame queue of the shielding window. The decoding speed and mixing performance for each frame can therefore be effectively improved, the time consumed by fusion processing is greatly reduced, the number of redirected video frames processed per second is increased, and the smoothness of video playback is improved.
Drawings
In order to explain the technical solutions in the embodiments of the present application more clearly, the drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application, and other drawings can be obtained from them by those skilled in the art without creative effort.
Fig. 1 is a schematic flow structure diagram of a window fusion method for video redirection according to an embodiment of the present application;
FIG. 2 is a schematic diagram of window fusion provided in an embodiment of the present application;
fig. 3 is a block diagram of a structure of a window fusion apparatus for video redirection according to an embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The embodiment of the application provides a window fusion method, a window fusion device and window fusion equipment for video redirection, which can solve the problem that a blocked window cannot be viewed and operated in the related technology.
Fig. 1 is a schematic flowchart of a window fusion method for video redirection according to an embodiment of the present application, including the following steps:
s1: and acquiring redirected video stream data and virtualized desktop attribute data, wherein the redirected video stream data comprises attributes of a shielding window area.
The redirected video stream data further includes redirected video frame data and redirected video position data. The virtualized desktop attribute data (i.e., SPICE image data, the image display data of the virtual desktop; SPICE is an open-source network protocol that provides remote interaction with virtual desktop devices, is mainly applied to desktop virtualization, and supports image transmission, 2D transmission and 1080P video playback) further includes display-change image data of the virtualized desktop and position data corresponding to the display-change images of the virtualized desktop.
S2: and decoding the reorientation video stream data to obtain a video stream image frame queue, and decoding the virtualization desktop attribute data to obtain a desktop image frame queue.
Further, in the embodiment of the present application, the redirected video stream data is decoded into YUV format (YUV is a color coding method), and the resulting video stream image frame queue in YUV format is cached.
Further, in the embodiment of the present application, decoding the virtualized desktop attribute data to obtain a desktop image frame queue specifically includes the following step: decoding the virtualized desktop attribute data through GPU decoding and the CPU instruction set to obtain a desktop image frame queue in the RGBA color format (RGBA is a color space representing Red, Green, Blue and Alpha, i.e., transparency).
Decoding with the GPU and the CPU instruction set accelerates the color-format conversion and improves the efficiency of program execution. Moreover, converting the virtualized desktop attribute data from the BGRA color format into a desktop image frame queue in the RGBA color format shortens the time needed by OpenGL (Open Graphics Library, a cross-language, cross-platform application programming interface for rendering 2D and 3D vector graphics) to transfer the SPICE image data from the CPU cache to the GPU (Graphics Processing Unit) cache. A hedged sketch of such a CPU-side conversion is given below.
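For illustration only, the sketch below converts one row of pixels from BGRA to RGBA with SSSE3 intrinsics; the text only states that SSE instructions are used on Intel CPUs and NEON instructions on ARM CPUs, so the specific intrinsics, the function name and the loop structure here are assumptions rather than the patented implementation.

    #include <cstddef>
    #include <cstdint>
    #include <tmmintrin.h>   // SSSE3 intrinsics (_mm_shuffle_epi8)

    // Convert one row of BGRA pixels to RGBA by swapping the B and R bytes of
    // every 4-byte pixel, 4 pixels (16 bytes) per SIMD iteration.
    static void bgra_to_rgba_row(const uint8_t* src, uint8_t* dst, size_t pixels)
    {
        const __m128i swap_br = _mm_setr_epi8(2, 1, 0, 3,  6, 5, 4, 7,
                                              10, 9, 8, 11, 14, 13, 12, 15);
        size_t i = 0;
        for (; i + 4 <= pixels; i += 4) {
            __m128i px = _mm_loadu_si128(reinterpret_cast<const __m128i*>(src + 4 * i));
            _mm_storeu_si128(reinterpret_cast<__m128i*>(dst + 4 * i),
                             _mm_shuffle_epi8(px, swap_br));
        }
        for (; i < pixels; ++i) {   // scalar tail for the remaining pixels
            dst[4 * i + 0] = src[4 * i + 2];
            dst[4 * i + 1] = src[4 * i + 1];
            dst[4 * i + 2] = src[4 * i + 0];
            dst[4 * i + 3] = src[4 * i + 3];
        }
    }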
Further, in the embodiment of the present application, after the desktop image frame queue in the RGBA color format is obtained, the method further includes the step of caching the desktop image frame queue in the RGBA color format, at the corresponding position of the virtualized desktop, into the Surface of the SPICE image.
S3: and sequentially rendering the video stream image frame queue and the desktop image frame queue to an SDL (Simple direct media layer, a cross-platform development library, which mainly provides operations on video graphics, audio, a keyboard, a mouse and a joystick) window respectively.
Furthermore, in the embodiment of the present application, rendering the video stream image frame queue and the desktop image frame queue to the SDL window in sequence, respectively, specifically includes the following steps:
converting the video stream image frame queue into a video stream image frame queue in an RGBA color format through OpenGL;
rendering a video stream image frame queue in an RGBA color format to an SDL window through OpenGL;
and performing color mixing and transparency overlapping processing on a desktop image frame queue in an RGBA color format in the Surface and a rendered video stream image frame queue through OpenGL, and overlapping and rendering the overlapping processing result to an SDL window.
Specifically, an SDL window and a SPICE GTK window (a remote-connection client used to connect to the SPICE server) are created by using the composite and transparency window attributes of Linux (GNU/Linux, a freely used and freely distributed operating system), so that the SDL image processing capabilities of different hardware platforms are compatible. The YUV-format video stream image frame queue is then converted into an RGBA-format video stream image frame queue through the 2D acceleration function of OpenGL (a sketch of such a conversion shader is given below), and the RGBA-format frames are scaled or stretched and rendered to the video area specified by the SDL window.
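The text attributes the YUV-to-RGBA conversion to OpenGL's 2D acceleration without giving the shader; as a hedged sketch, a fragment shader along the following lines (uniform names and BT.601-style coefficients are assumptions) performs such a conversion for planar YUV input.

    // GLSL fragment shader stored as a C++ string; samples the Y, U and V planes
    // and outputs an opaque RGBA pixel.
    static const char* kYuvToRgbaFrag = R"(
        #version 330 core
        in  vec2 texCoord;
        out vec4 fragColor;
        uniform sampler2D texY;   // luma plane
        uniform sampler2D texU;   // chroma U plane
        uniform sampler2D texV;   // chroma V plane
        void main() {
            float y = texture(texY, texCoord).r;
            float u = texture(texU, texCoord).r - 0.5;
            float v = texture(texV, texCoord).r - 0.5;
            vec3 rgb = vec3(y + 1.402 * v,
                            y - 0.344 * u - 0.714 * v,
                            y + 1.772 * u);
            // Alpha is forced to 1.0 (opaque), matching the transparency value the
            // method assigns to the video stream image frames.
            fragColor = vec4(rgb, 1.0);
        }
    )";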
After a video stream image frame is rendered, a desktop image frame from the Surface of the SPICE image is rendered, that is, color mixing and transparency superposition are performed through OpenGL on the RGBA-format desktop image frame queue in the Surface and the rendered video stream image frame queue, and the superposition result is rendered onto the SDL window.
Further, in this embodiment of the present application, color mixing and transparency overlapping processing are performed on a desktop image frame queue in an RGBA color format in the Surface and a rendered video stream image frame queue through OpenGL, which specifically includes the following steps:
When the video stream image frame queue is rendered by OpenGL, the transparency attribute value of each pixel of the video stream image frame queue is set to 1 (1 means opaque), while the transparency attribute value of each pixel of the desktop image frame queue lies between 0 and 1 inclusive (0 means fully transparent, 1 means opaque); the actual distribution of transparency values in the desktop image frame queue is determined by the display data of the virtualized desktop.
The transparency attribute values of the desktop image frame queue are then superposed with the transparency attribute value 1 of the video stream image frame queue to obtain the target fusion transparency, which makes the image quality and colors more comfortable, softer and more attractive. One way to express this superposition with standard OpenGL blending is sketched below.
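The superposition is not given as an explicit formula in the text; as a minimal sketch, assuming a "source over" composite, the desktop frame (source, per-pixel alpha) can be blended over the already-rendered, opaque video frame (destination) with the standard OpenGL blend function.

    #include <GL/gl.h>

    // Minimal sketch, assuming "source over" compositing: because the video
    // frame's alpha is 1, the fused result stays opaque and the effective color is
    //   out.rgb = desktop.rgb * desktop.a + video.rgb * (1 - desktop.a)
    static void enableDesktopOverVideoBlending()
    {
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        // ...then draw the desktop image frame quad with its RGBA texture bound.
    }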
S4: and intercepting the image frame queue of the shielding window area in the desktop image frame queue according to the attribute of the shielding window area, and rendering the image frame queue of the shielding window area to the SDL window, so that the shielding window is displayed above the redirection video.
Specifically, the attributes of the clipping area of another window located above the redirected video display area (namely, the shielding window) are received, and from this clipping area only the source colors of the corresponding region of the SPICE image frame data are taken and overlaid onto the SDL window, so that the other window above the redirected video display area can be displayed to the user. A sketch of restricting the overlay to this region is given below.
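As an illustration only, one way to take just the occluded region's source colors and cover the SDL window with them is to clip the overlay pass to that rectangle with the OpenGL scissor test; the WindowRect struct and the drawDesktopTexture callback below are hypothetical names, not part of the patent.

    #include <GL/gl.h>

    struct WindowRect { int x, y, w, h; };   // shielding-window rectangle, in window coordinates

    // Restrict the overlay pass to the shielding window area so that only the
    // corresponding region of the SPICE desktop frame covers the SDL window.
    static void overlayOccludedRegion(const WindowRect& r, void (*drawDesktopTexture)())
    {
        glEnable(GL_SCISSOR_TEST);
        glScissor(r.x, r.y, r.w, r.h);   // note: OpenGL's scissor origin is the bottom-left corner
        drawDesktopTexture();            // only pixels inside the rectangle are written
        glDisable(GL_SCISSOR_TEST);
    }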
Because the user may drag the shielding window quickly and frequently, the terminal also receives these changes quickly and frequently, and this frequency affects the display performance of the terminal. Therefore, further, in the embodiment of the application, rendering the image frame queue of the shielding window area to the SDL window specifically includes the following steps: performing time-delay processing on the image frame queue of the shielding window area; and rendering the delayed image frame queue of the shielding window area to the SDL window through the 2D acceleration function of OpenGL. In other words, part of the intermediate states of frequent rendering is dropped while the final display state is guaranteed, as sketched below.
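The text does not specify how long the delay is or how it is scheduled; the following sketch shows one plausible coalescing scheme in which only the most recent region frame is rendered once updates pause, so intermediate drag states are simply dropped. The Frame type, the delay value and the method names are assumptions.

    #include <chrono>
    #include <optional>

    struct Frame { /* RGBA pixels of the shielding-window region */ };

    // Coalesces rapid, repeated updates of the shielding-window region: every
    // update replaces the pending frame, and the pending frame is handed to the
    // render loop only after updates have been quiet for delay_ms milliseconds.
    class OccludingWindowCoalescer {
    public:
        explicit OccludingWindowCoalescer(int delay_ms) : delay_ms_(delay_ms) {}

        void onRegionUpdate(const Frame& f) {            // called for every incoming update
            pending_ = f;
            last_update_ = std::chrono::steady_clock::now();
        }

        // Called from the render loop; returns a frame only when updates settled,
        // so intermediate drag states are dropped and the final state is kept.
        std::optional<Frame> takeFrameIfSettled() {
            if (!pending_) return std::nullopt;
            auto idle = std::chrono::steady_clock::now() - last_update_;
            if (idle < std::chrono::milliseconds(delay_ms_)) return std::nullopt;
            std::optional<Frame> out = std::move(pending_);
            pending_.reset();
            return out;
        }

    private:
        int delay_ms_;
        std::optional<Frame> pending_;
        std::chrono::steady_clock::time_point last_update_{};
    };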
After the redirected video stream data and the virtualized desktop attribute data are decoded and rendered in sequence into the SDL window, the image frame queue of the shielding window area in the desktop image frame queue is intercepted according to the shielding window area attributes in the redirected video stream data, and the intercepted image frame queue is rendered to the SDL window, so that the shielding window is displayed above the video image and can be presented to the user. The user can thus view and operate the shielding window while watching the redirected video without changing operation habits, which improves the user experience.
In addition, GPU decoding, the CPU instruction set, color mixing, transparency mixing and OpenGL are fully used to optimize the image frame queues, and time-delay processing is applied to the image frame queue of the shielding window, so that the decoding speed and mixing performance for each frame can be effectively improved, the time consumed by fusion processing is greatly reduced, the number of redirected video frames processed per second is increased, and the smoothness of video playback is improved.
Referring to fig. 2, the working principle of the window fusion method for video redirection provided by the embodiment of the present application is as follows. While the virtual machine desktop server provides the video redirection service to the terminal SPICE client, it detects window changes; if a desktop window changes and the change overlaps the redirected video area, the changed window attributes are actively sent to the video redirection client. The video redirection client receives the redirected video stream data and the virtualized desktop attribute data from the virtual machine desktop server, where the redirected video stream data includes the attributes of the shielding window area. The video redirection client decodes the redirected video stream data to obtain a video stream image frame queue in YUV format, and passes the YUV video stream image frame queue and the virtualized desktop attribute data to the SPICE client.
the method comprises the steps that an SPICE client receives a video stream image frame queue and virtualized desktop attribute data, the virtualized desktop attribute data are decoded through a GPU decoding and CPU instruction set (wherein an Intel CPU utilizes a sse instruction, and an ARM CPU utilizes a neon instruction), so that a desktop image frame queue in an RGBA color format is obtained, byte sequences of RGBA after decoding in various image coding formats (bmp, lz4, jpeg, h264 and the like) are adjusted, and the conversion of the color format is accelerated and the efficiency of program execution is improved through the GPU decoding and CPU instruction set decoding; the virtualized desktop attribute data is converted into a desktop image frame queue converted from BGRA to RGBA color format, so that the time for transmitting SPICE image data from CPU cache to GPU cache by OpenGL can be shortened;
Next, the desktop image frame queue in the RGBA color format is cached into the Surface of the SPICE image, and an SDL window is created using the composite and transparency window attributes of Linux. Inside the SPICE client, the video stream image frame queue is rendered to the SDL window on the GPU, the video stream image frame queue and the desktop image frame queue are color-mixed and transparency-superposed on the GPU, and the superposed desktop image frame queue is overlay-rendered to the SDL window through the 2D acceleration function of OpenGL. Because the desktop image frame queue is processed with OpenGL 2D acceleration together with color mixing and transparency (alpha) mixing, the hardware rendering of the GPU is fully used and the CPU load can be effectively reduced. A minimal sketch of creating such an SDL window follows.
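A minimal sketch of creating the SDL window that receives the rendered frames is shown below; the window title, size and flag set are illustrative assumptions, and the composite/transparency behaviour mentioned above additionally depends on the Linux window manager and is not reproduced here.

    #include <SDL2/SDL.h>

    // Create the SDL window and the OpenGL context used for the 2D-accelerated
    // rendering of the video and desktop image frame queues.
    static SDL_Window* createRenderWindow(int width, int height)
    {
        if (SDL_Init(SDL_INIT_VIDEO) != 0) return nullptr;
        SDL_Window* win = SDL_CreateWindow("video-redirection",
                                           SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
                                           width, height,
                                           SDL_WINDOW_OPENGL | SDL_WINDOW_RESIZABLE);
        if (win != nullptr) {
            SDL_GL_CreateContext(win);   // OpenGL context shared by the rendering passes
        }
        return win;
    }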
Then, the image frame queue of the shielding window area in the desktop image frame queue is intercepted according to the shielding window area attributes and, after time-delay processing, rendered to the SDL window through the 2D acceleration function of OpenGL, so that the shielding window is displayed above the video image and can be watched by the user. The user therefore does not need to change operation habits, can view and operate the shielding window while watching the redirected video, and has a better experience.
Thus, when the video redirection client detects that the occlusion state between the video playback window and other windows corresponding to the video redirection service has changed, it promptly notifies the SPICE client of the area attributes of the video playback window and of the occluding part. After receiving the notification, the SPICE client can promptly and seamlessly superpose and fuse the redirected video playback window and the shielding window into the SDL window and render them to the user, so that the user can view and operate the shielding window while watching the redirected video. In addition, GPU decoding, the CPU instruction set, color mixing, transparency mixing and OpenGL are fully used to optimize the image frame queues, and time-delay processing is applied to the image frame queue of the shielding window, so that the decoding speed and mixing performance for each frame can be effectively improved, the time consumed by fusion processing is greatly reduced, the number of redirected video frames processed per second is increased, and the smoothness of video playback is improved.
Referring to fig. 3, an embodiment of the present application further provides a window fusion apparatus for video redirection, including:
an obtaining unit, configured to obtain redirected video stream data and virtualized desktop attribute data, where the redirected video stream data includes an attribute of a blocked window area;
the decoding unit is used for decoding the redirected video stream data to obtain a video stream image frame queue and decoding the virtualized desktop attribute data to obtain a desktop image frame queue;
the first rendering unit is used for sequentially and respectively rendering a video stream image frame queue and a desktop image frame queue to the SDL window;
and the second rendering unit is used for intercepting the image frame queue of the shielding window area in the desktop image frame queue according to the attribute of the shielding window area and rendering the image frame queue of the shielding window area to the SDL window so that the shielding window is displayed above the redirected video.
The obtaining unit may be the video redirection client, the decoding unit may comprise the video redirection client and the SPICE client, and both the first rendering unit and the second rendering unit may be implemented by the SPICE client.
After the redirected video stream data and the virtualized desktop attribute data are decoded and rendered in sequence into the SDL window, the image frame queue of the shielding window area in the desktop image frame queue is intercepted according to the shielding window area attributes in the redirected video stream data, and the intercepted image frame queue is rendered to the SDL window, so that the shielding window is displayed above the video image and can be presented to the user. The user can thus view and operate the shielding window while watching the redirected video without changing operation habits, which improves the user experience.
The embodiment of the present application further provides a window fusion device for video redirection, including a memory and a processor, wherein at least one instruction is stored in the memory, and the at least one instruction is loaded and executed by the processor to implement all or part of the steps of the above window fusion method.
The processor may be a CPU, or another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor, or any conventional processor; the processor is the control center of the computer device and connects the various parts of the whole computer device through various interfaces and lines.
The memory may be used to store the computer program and/or modules, and the processor implements the various functions of the computer device by running or executing the computer program and/or modules stored in the memory and calling the data stored in the memory. The memory may mainly include a program storage area and a data storage area, where the program storage area may store an operating system and the application programs required by at least one function (such as a video playback function, an image playback function, etc.), and the data storage area may store data created according to the use of the device (such as video data, image data, etc.). In addition, the memory may include high-speed random access memory, and may also include non-volatile memory, such as a hard disk, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a Flash Card, at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
Embodiments of the present application also provide a computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, implements all or part of the method steps described above.
The embodiments of the present application may implement all or part of the foregoing processes by instructing the related hardware through a computer program, which may be stored in a computer-readable storage medium; when executed by a processor, the computer program may implement the steps of the foregoing methods. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer-readable medium may include any entity or device capable of carrying the computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic diskette, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the contents of the computer-readable medium may be appropriately increased or decreased as required by legislation and patent practice in a jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, the computer-readable medium does not include electrical carrier signals and telecommunications signals.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, server, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The above description is merely exemplary of the present application and is presented to enable those skilled in the art to understand and practice the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.

Claims (9)

1. A window fusion method for video redirection is characterized by comprising the following steps:
acquiring redirected video stream data and virtualized desktop attribute data, wherein the redirected video stream data comprises attributes of a shielding window area;
decoding the redirected video stream data to obtain a video stream image frame queue, and decoding the virtualized desktop attribute data to obtain a desktop image frame queue;
rendering the video stream image frame queue and the desktop image frame queue to an SDL window in sequence;
intercepting an image frame queue of a shielding window area in the desktop image frame queue according to the attribute of the shielding window area, and rendering the image frame queue of the shielding window area to the SDL window so that the shielding window is displayed above the redirection video;
wherein the rendering the occluded window area image frame queue to the SDL window comprises:
performing time delay processing on the image frame queue of the shielding window area;
and rendering the image frame queue of the shielding window area after the time delay processing to the SDL window through OpenGL.
2. The method as claimed in claim 1, wherein said decoding the virtualized desktop property data into a desktop image frame queue comprises:
and decoding the virtualized desktop attribute data through GPU decoding and a CPU instruction set to obtain a desktop image frame queue in an RGBA color format.
3. The method as claimed in claim 2, further comprising, after said obtaining the image frame queue of the desktop in RGBA color format, the steps of: and caching the desktop image frame queue in the RGBA color format into the Surface of the SPICE image.
4. A method of video redirection window fusion as claimed in claim 3, wherein: the sequentially and respectively rendering the video stream image frame queue and the desktop image frame queue to the SDL window comprises:
converting the video stream image frame queue into a video stream image frame queue in an RGBA color format through OpenGL;
rendering a video stream image frame queue in an RGBA color format to the SDL window through OpenGL;
and performing color mixing and transparency overlapping processing on a desktop image frame queue in an RGBA color format in the Surface and a rendered video stream image frame queue through OpenGL, and overlapping and rendering the overlapping processing result to the SDL window.
5. A method of video redirection window fusion as claimed in claim 4, wherein: the processing of color mixing and transparency superposition on the desktop image frame queue in the RGBA color format in the Surface and the rendered video stream image frame queue through OpenGL includes:
the transparency attribute value of the video stream image frame queue is 1, and the transparency attribute value of the desktop image frame queue ranges from 0 to 1;
and overlapping and calculating the transparency attribute value of the desktop image frame queue and the transparency attribute value of the video stream image frame queue through OpenGL to obtain the target fusion transparency.
6. A method of video redirection window fusion as claimed in claim 1, wherein: the redirected video stream data further comprises redirected video frame data and redirected video position data, and the virtualized desktop attribute data comprises display change image data of the virtualized desktop and position data corresponding to the display change image of the virtualized desktop.
7. A method of video redirection window fusion as claimed in claim 1, wherein: the video stream image frame queue is in YUV format.
8. A video redirection window fusion apparatus, comprising:
an obtaining unit, configured to obtain redirected video stream data and virtualized desktop attribute data, where the redirected video stream data includes an attribute of a blocked window area;
the decoding unit is used for decoding the redirected video stream data to obtain a video stream image frame queue and decoding the virtualized desktop attribute data to obtain a desktop image frame queue;
the first rendering unit is used for sequentially rendering the video stream image frame queue and the desktop image frame queue to the SDL window respectively;
the second rendering unit is used for intercepting an image frame queue of a shielding window area in the desktop image frame queue according to the attribute of the shielding window area and rendering the image frame queue of the shielding window area to an SDL window so that the shielding window is displayed above the redirection video;
wherein the rendering the occluded window area image frame queue to the SDL window comprises:
performing time delay processing on the image frame queue of the shielding window area;
and rendering the image frame queue in the shielding window area after the time delay processing to the SDL window through OpenGL.
9. A video-redirecting window fusion apparatus, comprising: a memory and a processor, the memory having stored therein at least one instruction that is loaded and executed by the processor to implement the window fusion method of any of claims 1-7.
CN202011460208.3A 2020-12-11 2020-12-11 Window fusion method, device and equipment for video redirection Active CN112565869B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011460208.3A CN112565869B (en) 2020-12-11 2020-12-11 Window fusion method, device and equipment for video redirection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011460208.3A CN112565869B (en) 2020-12-11 2020-12-11 Window fusion method, device and equipment for video redirection

Publications (2)

Publication Number Publication Date
CN112565869A CN112565869A (en) 2021-03-26
CN112565869B (en) 2023-04-07

Family

ID=75062457

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011460208.3A Active CN112565869B (en) 2020-12-11 2020-12-11 Window fusion method, device and equipment for video redirection

Country Status (1)

Country Link
CN (1) CN112565869B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114443192B (en) * 2021-12-27 2024-04-26 天翼云科技有限公司 Multi-window virtual application method and device based on cloud desktop
CN114327722A (en) * 2021-12-28 2022-04-12 武汉噢易云计算股份有限公司 Mobile terminal performance optimization method, device, equipment and readable storage medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102019127726A1 (en) * 2018-10-26 2020-04-30 Nvidia Corporation SUITABLE STREAMING OF INDIVIDUAL APPLICATION WINDOWS FOR REMOTE WORKPLACE APPLICATIONS

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9565227B1 (en) * 2014-06-16 2017-02-07 Teradici Corporation Composition control method for remote application delivery
US10182126B2 (en) * 2016-05-02 2019-01-15 Dell Products L.P. Multilevel redirection in a virtual desktop infrastructure environment
CN106657206A (en) * 2016-06-27 2017-05-10 南京理工大学 Virtual desktop infrastructure web video redirection method
CN108563479A (en) * 2018-03-21 2018-09-21 新华三云计算技术有限公司 Redirect control method, device, virtual machine and the Redirectional system of window
US10462216B1 (en) * 2018-05-04 2019-10-29 Citrix Systems, Inc. WebRTC API redirection with interception techniques
CN110062286A (en) * 2019-02-22 2019-07-26 上海映云信息技术有限公司 A method of realizing that video redirects in virtual desktop

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102019127726A1 (en) * 2018-10-26 2020-04-30 Nvidia Corporation SUITABLE STREAMING OF INDIVIDUAL APPLICATION WINDOWS FOR REMOTE WORKPLACE APPLICATIONS

Also Published As

Publication number Publication date
CN112565869A (en) 2021-03-26

Similar Documents

Publication Publication Date Title
US11909984B2 (en) Video encoding and decoding for cloud gaming
CN112235626B (en) Video rendering method and device, electronic equipment and storage medium
US9619916B2 (en) Method for transmitting digital scene description data and transmitter and receiver scene processing device
US9077970B2 (en) Independent layered content for hardware-accelerated media playback
WO2022048097A1 (en) Single-frame picture real-time rendering method based on multiple graphics cards
US11882297B2 (en) Image rendering and coding method and related apparatus
CN112565869B (en) Window fusion method, device and equipment for video redirection
US11243786B2 (en) Streaming application visuals using page-like splitting of individual windows
WO2002010898A2 (en) Method and system for receiving interactive dynamic overlays through a data stream and displaying them over a video content
CN109361950B (en) Video processing method and device, electronic equipment and storage medium
CN113946402A (en) Cloud mobile phone acceleration method, system, equipment and storage medium based on rendering separation
US10237563B2 (en) System and method for controlling video encoding using content information
CN112714357A (en) Video playing method, video playing device, electronic equipment and storage medium
CN109587555B (en) Video processing method and device, electronic equipment and storage medium
CN113473226B (en) Method and device for improving video rendering efficiency, computer equipment and storage medium
US11593908B2 (en) Method for preprocessing image in augmented reality and related electronic device
CN113411660B (en) Video data processing method and device and electronic equipment
CN116966546A (en) Image processing method, apparatus, medium, device, and program product
WO2022033162A1 (en) Model loading method and related apparatus
CN117377976A (en) High quality UI element boundaries using masks in time interpolated frames
US20230042078A1 (en) Encoding and decoding views on volumetric image data
US20240087169A1 (en) Realtime conversion of macroblocks to signed distance fields to improve text clarity in video streaming
CN116055540B (en) Virtual content display system, method, apparatus and computer readable medium
CN115665461B (en) Video recording method and virtual reality device
WO2023193524A1 (en) Live streaming video processing method and apparatus, electronic device, computer-readable storage medium, and computer program product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant