CN117834946A - Graphic image display system and method - Google Patents
- Publication number
- CN117834946A (application CN202410005661.7A)
- Authority
- CN
- China
- Prior art keywords
- image
- resolution
- client
- module
- cloud server
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- H04N21/234 — Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs (H—Electricity; H04N—Pictorial communication, e.g. television; H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]; H04N21/20—Servers specifically adapted for the distribution of content; H04N21/23—Processing of content or additional data; server middleware)
- H04N21/23412 — for generating or manipulating the scene composition of objects, e.g. MPEG-4 objects
- H04N21/23418 — involving operations for analysing video streams, e.g. detecting features or characteristics
- H04L67/10 — Protocols in which an application is distributed across nodes in the network (H04L—Transmission of digital information; H04L67/00—Network arrangements or protocols for supporting network services or applications; H04L67/01—Protocols)
- H04N21/431 — Generation of visual interfaces for content selection or interaction; Content or additional data rendering (H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; H04N21/43—Processing of content or additional data; client middleware)
- H04N21/44 — Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/44008 — involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
- H04N21/44012 — involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs
Landscapes
- Engineering & Computer Science (AREA)
- Signal Processing (AREA)
- Multimedia (AREA)
- Computer Networks & Wireless Communication (AREA)
- Image Processing (AREA)
Abstract
The invention provides a graphic image display system comprising a cloud server, an edge computing module, and a client. The cloud server includes an image reconstruction module that expands a high-resolution frame into a plurality of low-resolution frames of the same scene at the same position. The edge computing module, communicatively connected to the cloud server, includes a motion-blurred-image restoration module that restores a series of dynamically moving low-resolution frames into one or more high-resolution frames. The client, communicatively connected to the edge computing module and the cloud server, stores and displays the rendered images. The system effectively reduces the load on the cloud server and improves the computing-power utilization of user terminal devices.
Description
Technical Field
The invention relates to the technical field of computer graphics and image processing, and in particular to a graphic image display system and method.
Background
Cloud computing is a form of distributed computing: a huge data-computing task is decomposed over the network into many small programs, which are processed and analyzed by a system of multiple servers, and the results are then returned to the user. With cloud computing, a user only needs to upload data to the cloud, and the demanding computation is handled centrally by the high-performance computing platform of a cloud computing center. For this reason, cloud computing is widely used in graphics and image processing. For example, graphic images can be displayed by combining cloud computing with remote desktop technology: the graphic image data is uploaded to the cloud for computation and rendering, and the result is shown to the user through the remote desktop.
However, graphics and image processing tasks typically involve large amounts of data and heavy consumption of computing resources, so cloud computing performance may be limited by network latency and/or bandwidth, causing display delay and degraded picture quality. Moreover, because the approach depends on cloud computing resources, large-scale graphics processing may run into problems of computational efficiency and real-time performance.
In addition, remote desktop technology places high demands on the stability of the network connection; if the connection is unstable or bandwidth is insufficient, the transmission and quality of the image data suffer. Furthermore, because it relies on the capabilities of the remote client device, performance bottlenecks and compatibility issues may arise when processing complex graphical image data.
Disclosure of Invention
Aiming at some or all of the problems in the prior art, a first aspect of the present invention provides a graphic image display system that displays graphic images using cloud-edge collaboration. The display system comprises:
a cloud server comprising an image reconstruction module for expanding a high-resolution frame into a plurality of low-resolution frames of the same scene at the same position;
an edge computing module, communicatively connected to the cloud server, comprising a motion-blurred-image restoration module for restoring a series of dynamically moving low-resolution frames into one or more high-resolution frames; and
a client, communicatively connected to the edge computing module and the cloud server, for displaying images.
Further, the display system also includes an image transmission module, communicatively coupled to the edge computing module and the client, for identifying the identical portions of the same scene and the same series of graphic image data and compressing the differing portions.
Further, the number and/or location of the edge computing modules is determined according to the distribution of clients, network conditions, and data security requirements.
Further, the edge computing module is deployed on a local server at the location of the client.
Further, the edge computing module and client communicate with the cloud server using an internally standardized application programming interface (Application Programming Interface, API).
Further, the edge computing module is connected with the client through the internet or a local area network.
Further, the edge computing module communicates with the client using the UDP-based QUIC protocol or a reliable UDP protocol.
Based on the display system as described above, a second aspect of the present invention provides a method for displaying a graphic image, including:
the client sends a display request to the cloud server, the request including the image type, purpose, required resolution, and color depth;
the cloud server obtains a high-resolution picture according to the display request, decomposes and renders it to generate multiple low-resolution frames of the same scene at the same position, and returns them to the client;
the client analyzes the multi-frame low-resolution picture, extracts the resolution and image-quality information of the images, and sends the frames to an edge computing module;
the edge computing module restores the multi-frame low-resolution picture into one or more high-resolution frames; and
the client stores and displays the restored image.
Further, the display method further comprises:
identifying, through the image transmission module, the identical portions of the same scene and the same series of graphic image data, and compressing the differing portions before transmission.
Further, the display method further comprises:
selecting one or more edge computing modules through a load balancing algorithm to restore the image.
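As a sketch of this selection step: the patent names only "a load balancing algorithm" without fixing one, so a least-load policy is used below, and the module ids and the load metric are illustrative assumptions:

```python
def pick_edge_modules(loads: dict, k: int = 1) -> list:
    """Select the k least-loaded edge computing modules for an image
    restoration job. `loads` maps a module id to its current load in
    [0, 1]; both the ids and the metric are hypothetical."""
    return sorted(loads, key=loads.get)[:k]
```

A weighted or latency-aware policy could be substituted without changing the calling code.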
Further, the display method further comprises:
adjusting the color, contrast, and brightness parameters of the restored image through the edge computing module to optimize the visual effect.
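A minimal sketch of the edge-side parameter adjustment; the linear brightness/contrast formula, the [0, 1] pixel range, and the clipping behaviour are assumptions, since the patent does not specify the adjustment method:

```python
import numpy as np

def adjust_image(image: np.ndarray, brightness: float = 0.0,
                 contrast: float = 1.0) -> np.ndarray:
    """Adjust brightness and contrast of a restored image with pixel
    values in [0, 1]. Contrast scales around the mid-gray 0.5, then
    brightness is added; results are clipped back into [0, 1]."""
    out = (image - 0.5) * contrast + 0.5 + brightness
    return np.clip(out, 0.0, 1.0)
```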
According to the graphic image display system and method of the present invention, the cloud server performs the decomposition-rendering task on the graphic image to obtain multi-frame low-quality image data, and edge computing devices placed between the user equipment and the cloud server then restore the multiple motion-blurred images into rendered high-quality graphic images for display. The system and method effectively reduce the load on the cloud server and improve the computing-power utilization of user terminal devices. In addition, during network transmission the system performs information maintenance, information-state updates, and data-packet encapsulation on aggregatable image data in units of an aggregation queue, which reduces the bandwidth the image data occupies at the network interface, lowers the repeated-transmission rate, and speeds up image restoration. As a result, batches of remote desktops and cloud applications can display images smoothly even under low network bandwidth or unstable networks, avoiding image delay and quality degradation.
Drawings
To further clarify the above and other advantages and features of embodiments of the present invention, a more particular description of embodiments of the invention will be rendered by reference to the appended drawings. It is appreciated that these drawings depict only typical embodiments of the invention and are therefore not to be considered limiting of its scope. In the drawings, for clarity, the same or corresponding parts will be designated by the same or similar reference numerals.
FIG. 1 is a schematic diagram of a graphic image display system according to one embodiment of the present invention;
FIG. 2 is a flow chart of a method of displaying a graphical image according to an embodiment of the present invention;
FIG. 3 shows a schematic flow diagram of image reconstruction according to one embodiment of the invention;
FIG. 4 is a flow chart illustrating image data transmission according to one embodiment of the present invention; and
fig. 5 shows a flow diagram of image restoration according to an embodiment of the present invention.
Detailed Description
In the following description, the present invention is described with reference to various embodiments. One skilled in the relevant art will recognize, however, that the embodiments may be practiced without one or more of the specific details, or with other alternative and/or additional methods or components. In other instances, well-known structures or operations are not shown or described in detail to avoid obscuring aspects of the invention. Similarly, for purposes of explanation, specific numbers and configurations are set forth in order to provide a thorough understanding of embodiments of the present invention. However, the invention is not limited to these specific details. Furthermore, it should be understood that the embodiments shown in the drawings are illustrative representations and are not necessarily drawn to scale.
Reference throughout this specification to "one embodiment" or "the embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment.
It should be noted that the embodiments of the present invention describe the steps of the method in a specific order, however, this is merely for the purpose of illustrating the specific embodiments, and not for limiting the order of the steps. In contrast, in different embodiments of the present invention, the sequence of each step may be adjusted according to the adjustment of the actual requirement.
In the present invention, the modules of the system according to the present invention may be implemented using software, hardware, firmware or a combination thereof. When implemented in software, the functions of the modules may be performed by a computer program flow, e.g., the modules may be implemented by code segments (e.g., code segments in a language such as C, C ++) stored in a storage device (e.g., hard disk, memory, etc.), which when executed by a processor, perform the corresponding functions of the modules. When a module is implemented in hardware, the functionality of the module may be implemented by providing corresponding hardware structures, such as by hardware programming of a programmable device, e.g., a Field Programmable Gate Array (FPGA), or by designing an Application Specific Integrated Circuit (ASIC) comprising a plurality of transistors, resistors, and capacitors, etc. When implemented in firmware, the functions of the module may be written in program code form in a read-only memory of the device, such as EPROM or EEPROM, and the corresponding functions of the module may be implemented when the program code is executed by a processor. In addition, some functions of the module may need to be implemented by separate hardware or by cooperation with the hardware, for example, a detection function is implemented by a corresponding sensor (e.g., a proximity sensor, an acceleration sensor, a gyroscope, etc.), a signal transmission function is implemented by a corresponding communication device (e.g., a bluetooth device, an infrared communication device, a baseband communication device, a Wi-Fi communication device, etc.), an output function is implemented by a corresponding output device (e.g., a display, a speaker, etc.), and so on.
In the present invention, the clients may include various types of computer systems, such as handheld devices, laptop computers, personal Digital Assistants (PDAs), multiprocessor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, network servers, tablet computers, and the like.
In a conventional remote graphic image display method, every remote image display is rendered and transmitted by the cloud server. Once many clients are connected, a large amount of graphic image context must be maintained, occupying substantial server computing resources and network bandwidth; if enough frames cannot be computed and transmitted per unit time, the display may stall, lag, or seriously degrade in quality. The inventors found that if the cloud server's workload is reduced so that it generates only low-quality images for transmission, the amount of data on the network can be effectively reduced; the low-quality images can then be restored using the client side's computing power, achieving cloud-edge collaborative transmission and display of high-quality images.
On this basis, the invention provides a graphic image display system and method. A cloud server performs the decomposition-rendering task on the graphic image to generate multi-frame low-quality image data. An edge computing device located between the client and the cloud server then restores the multiple motion-blurred images into rendered high-quality graphic images for display. This lightens the load on the cloud server, improves the computing-power utilization of client devices, and greatly improves rendering and transmission efficiency compared with conventional methods.
The embodiments of the present invention will be further described with reference to the drawings.
Fig. 1 is a schematic diagram of a graphic image display system according to an embodiment of the present invention. As shown in fig. 1, the system includes a cloud server 101, an edge computing module 102, and a client 103. The cloud server 101 reconstructs one high-resolution frame of a given scene and series into a plurality of blurred low-resolution frames; the edge computing module 102, communicatively connected to the cloud server 101, restores a series of dynamically moving low-resolution frames into one or more high-resolution frames; and the client 103, communicatively connected to the cloud server 101 and the edge computing module 102, sends rendering requests and stores and displays the rendered images. In one embodiment of the invention, the client 103 and the edge computing module 102 communicate with the cloud server using an internally standardized API.
To improve transmission efficiency, in one embodiment of the present invention the display system further includes an image transmission module 104, which identifies the identical portions of the same scene and the same series of graphic image data and compresses the differing portions, so that a series of dynamic graphic images can be transmitted very efficiently. Specifically, when the image information of one aggregation queue replaces multiple pieces of aggregatable native image-queue information, the corresponding image data is packet-encapsulated so that information maintenance, information-state updates, and image-data transfer are performed in units of the aggregation queue. The identical portions of the same scene and the same series of queued image data are marked so that repeated transmission is avoided, while the differing portions are marked in sequence and then transmitted.
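The transmission module's strategy — mark identical portions, send only the differing portions — can be sketched as a block-level frame diff. The block size, tile layout, and dict-based encoding below are illustrative assumptions, not from the patent:

```python
import numpy as np

def delta_encode(prev: np.ndarray, curr: np.ndarray, block: int = 8) -> dict:
    """Compare two frames of the same scene tile by tile and keep only
    the tiles that changed; identical tiles are implied by the previous
    frame and need not be retransmitted."""
    changed = {}
    for y in range(0, curr.shape[0], block):
        for x in range(0, curr.shape[1], block):
            tile = curr[y:y + block, x:x + block]
            if not np.array_equal(prev[y:y + block, x:x + block], tile):
                changed[(y, x)] = tile.copy()
    return changed

def delta_decode(prev: np.ndarray, changed: dict) -> np.ndarray:
    """Rebuild the current frame from the previous frame plus the
    changed tiles."""
    out = prev.copy()
    for (y, x), tile in changed.items():
        out[y:y + tile.shape[0], x:x + tile.shape[1]] = tile
    return out
```

In a real pipeline the changed tiles would additionally be compressed before transmission, as the text describes.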
As shown in fig. 1, the cloud server 101 includes an image reconstruction module 111, which converts the high-definition graphical interface of a cloud application from one or several high-resolution frames into a larger number of low-resolution frames through an image reconstruction algorithm. Because the GPU computing power of the cloud server is abundant, image reconstruction for dozens of cloud applications can run in parallel, quickly generating many low-quality graphic images. Since the cloud server does not need to perform image restoration, its computational burden is greatly reduced. In an embodiment of the present invention, the image reconstruction module 111 may also store the image data in per-application aggregation packet queues according to the client on which it is to be displayed.
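A toy version of the expansion performed by module 111 — one high-resolution frame becomes several shifted, decimated low-resolution frames. The circular shift and plain decimation are simplified stand-ins for the warp/blur/downsample chain described later in the text:

```python
import numpy as np

def expand_to_lr(hr: np.ndarray, shifts, factor: int = 2) -> list:
    """Turn one high-resolution frame into several low-resolution frames
    of the same scene, each observed under a small pixel shift."""
    frames = []
    for dy, dx in shifts:
        # shift the scene, then decimate to the low resolution
        shifted = np.roll(np.roll(hr, dy, axis=0), dx, axis=1)
        frames.append(shifted[::factor, ::factor])
    return frames
```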
As shown in fig. 1, the edge computing module 102 includes a motion-blurred-image restoration module 121. When the application packet queues corresponding to multiple native clients are low-resolution image queues generated for the same client that initiated a communication request to the same edge computing module, the restoration module 121 confirms that these native image queues can be aggregated, extracts the information of the image queues, fuses it through registration processing, and outputs one or several high-resolution frames.
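The registration-and-fusion step can be illustrated with a shift-and-add scheme: each low-resolution sample is mapped back to its position on the high-resolution grid using its shift, and overlapping samples are averaged. Known shifts and circular boundaries are simplifying assumptions; the patent's module would estimate registration from the queue information itself:

```python
import numpy as np

def fuse_lr_frames(frames, shifts, factor: int = 2) -> np.ndarray:
    """Fuse registered low-resolution frames into one high-resolution
    frame by placing each sample on the HR grid and averaging."""
    h, w = frames[0].shape
    acc = np.zeros((h * factor, w * factor))
    cnt = np.zeros_like(acc)
    for lr, (dy, dx) in zip(frames, shifts):
        for y in range(h):
            for x in range(w):
                # undo the shift to find the HR position of this sample
                hy = (y * factor - dy) % acc.shape[0]
                hx = (x * factor - dx) % acc.shape[1]
                acc[hy, hx] += lr[y, x]
                cnt[hy, hx] += 1
    cnt[cnt == 0] = 1  # leave uncovered pixels at zero
    return acc / cnt
```

With shifts that tile the HR grid (e.g. (0,0), (0,1), (1,0), (1,1) at factor 2), the fusion is exact on noiseless, blur-free inputs.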
The deployment location of the edge computation module 102 needs to comprehensively consider the distribution of clients, network delay, and data security factors. For example, the edge computing module can be deployed in areas where the user equipment is densely distributed to reduce the data transmission distance and delay; meanwhile, in order to ensure data security, the edge computing module can be deployed on a local server of the user equipment. In an embodiment of the present invention, the number of the edge calculation modules 102 may be determined according to the actual application scenario and requirements. For example, where rendering tasks are complex, requiring a large amount of computing resources, the number of edge computing modules may be increased to increase overall computing power and efficiency. Meanwhile, for user equipment in different areas and different network environments, corresponding edge computing modules are required to be deployed according to actual conditions so as to ensure the high efficiency and the availability of services.
In one embodiment of the present invention, the edge computing module 102 connects to the client 103 through the Internet or a local area network; a suitable connection and protocol can be selected according to the actual network environment and requirements. To improve transmission efficiency and security, in one embodiment the edge computing module 102 and the client 103 use the UDP-based QUIC protocol or a reliable UDP protocol. In one embodiment, communication security between the client 103 and the edge computing module 102 is ensured through encryption and authentication mechanisms. In addition, to improve rendering efficiency and user experience, in an embodiment of the present invention the edge computing module 102 may adapt to the performance, bandwidth limits, and similar constraints of the client 103, for example selecting a lower rendering quality and resolution and a compressed image queue when the client 103 has low performance or limited bandwidth.
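The adaptive adjustment described above can be illustrated with a small profile selector; the thresholds, profile values, and the normalized performance score are invented for illustration and do not come from the patent:

```python
def select_profile(bandwidth_mbps: float, client_score: float) -> dict:
    """Choose a rendering profile for a client from its available
    bandwidth and a performance score in [0, 1]; degrade quality and
    enable queue compression when either resource is limited."""
    if bandwidth_mbps >= 50 and client_score >= 0.7:
        return {"resolution": "1920x1080", "quality": "high", "compress": False}
    if bandwidth_mbps >= 10 and client_score >= 0.4:
        return {"resolution": "1280x720", "quality": "medium", "compress": True}
    return {"resolution": "854x480", "quality": "low", "compress": True}
```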
Based on the display system described above, fig. 2 shows a flow chart of a method for displaying a graphic image according to an embodiment of the present invention. As shown in fig. 2, the method comprises:
first, in step 201, a request is sent. After the client establishes a stable connection with the cloud server and the edge computing module, it presets the required graphic image parameters, such as resolution and image quality, and then sends a display request to the cloud server to obtain the related data of the corresponding image queue, including the resolution and image quality of the images. In one embodiment of the present invention, when the client obtains an image file, the edge computing module may attach a description file specifying the type, purpose, resolution, color depth, etc. of the image, for example: "Image Type: Engineering Drawing, Image Purpose: 3D Modeling, Resolution: 1920x1080, Color Depth: 32-bit RGBA". On receiving the description file, the client can select a suitable rendering algorithm and parameters according to this information. Specifically, before processing a graphic image, the client sends a request containing the image type, purpose, resolution, color depth, and similar information to the cloud server; after receiving the request, the cloud server allocates corresponding resources for the client and returns the result once rendering is complete. As noted above, in one embodiment the client and the edge computing module communicate with the cloud server through an internally standardized API. Accordingly, the client first issues an API call to the edge computing module containing the image type, purpose, resolution, color depth, and so on; the edge computing module then selects a suitable rendering algorithm and parameters according to this information and returns the result to the client;
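For illustration, the description-file fields quoted above could be carried in a request payload like this; the JSON encoding and the helper itself are hypothetical, and only the field content comes from the text:

```python
import json

def build_display_request(image_type: str, purpose: str,
                          resolution: str, color_depth: str) -> str:
    """Serialize the client's display request to the cloud server,
    mirroring the description-file fields in the text."""
    return json.dumps({
        "Image Type": image_type,
        "Image Purpose": purpose,
        "Resolution": resolution,
        "Color Depth": color_depth,
    })

req = build_display_request("Engineering Drawing", "3D Modeling",
                            "1920x1080", "32-bit RGBA")
```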
next, at step 202, the rendering is decomposed. The cloud server obtains the high-resolution picture according to the display request, performs decomposition rendering to generate multiple frames of low-resolution pictures of the same scene at the same position, and returns them to the client. Fig. 3 shows a schematic flow diagram of image reconstruction according to an embodiment of the invention. As shown in fig. 3, in one embodiment of the present invention, the cloud server applies warping, blurring, and downsampling calculations to the high-resolution image through the image reconstruction module 111 to obtain a plurality of low-resolution images with pixel displacements; because these low-resolution images contain different scene information, a high-resolution image can later be reconstructed from them. The image reconstruction module 111 includes an image degradation model; in one embodiment of the invention, the matrix expression of the degradation model is as follows:
Yk = Dk Hk Fk X + Vk,  k = 1, 2, …, N;
wherein,
the matrix Dk is a downsampling matrix, which indicates the process of obtaining a low-resolution observation image from an original high-resolution image;
the matrix Hk is a blur matrix representing the Point Spread Function (PSF) of the imaging system;
the matrix Fk represents the geometric deformation between the original high-resolution image and the observation image Yk (after interpolation and enlargement); and
the vector Vk represents additive gaussian noise on the observed image.
In particular, the matrix Fk describes motion deformation, which can generally be divided into two main categories: parameterized global motion and non-parameterized local motion. If the observed object remains stationary, there is only a global rigid transformation between the multiple images, such as translation, rotation, or scaling; if the imaged object is in motion, local motion exists between the multiple images. The matrix Hk describes blur degradation, which may be caused by a variety of factors: optical blur arises from the performance of the optical components and the shape and size of the sensor, while motion blur arises from relative motion between the imaging system and the scene. In super-resolution reconstruction (SRR), the limited physical size of the photoreceptor that acquires the LR image is an important cause of blur, and the PSF of a photoreceptor is typically modeled as a spatial average. The sampling matrix Dk describes the downsampling degradation factor: it produces a spectrally aliased low-resolution image from the warped and blurred high-resolution image. Although blurring suppresses spectral aliasing to some extent, in super-resolution image reconstruction it can be assumed that spectral aliasing is always present in the low-resolution images. Finally, noise is an important cause of image degradation: images are often corrupted by noise during generation, transmission, and recording, and because of its high-frequency character, noise seriously degrades the visual quality of an image and limits how far the image, especially its high spatial frequencies, can be restored. The characteristics of noise differ across application environments; most classical denoising models assume Gaussian noise, though salt-and-pepper and impulse noise are also very common types;
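The degradation model Yk = Dk Hk Fk X + Vk can be illustrated with a minimal NumPy sketch. This is not the patent's implementation: an integer pixel shift stands in for the warp Fk, a box blur for the PSF Hk, decimation for Dk, and additive Gaussian noise for Vk; all parameter values are illustrative assumptions.

```python
# Minimal NumPy sketch of the degradation model Yk = Dk Hk Fk X + Vk.
# The shift amount, kernel size, decimation factor and noise level are assumptions.
import numpy as np

def box_blur(img, k=3):
    """Hk: average each pixel over a k x k neighborhood (edge-padded)."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    acc = np.zeros(img.shape, dtype=float)
    for di in range(k):
        for dj in range(k):
            acc += p[di:di + img.shape[0], dj:dj + img.shape[1]]
    return acc / (k * k)

def degrade(x, shift=(1, 1), k=3, factor=2, sigma=0.01, rng=None):
    """Produce one low-resolution observation Yk from a high-resolution image X."""
    rng = rng or np.random.default_rng(0)
    y = np.roll(x, shift, axis=(0, 1))          # Fk: geometric warp (pixel shift)
    y = box_blur(y, k)                          # Hk: PSF blur
    y = y[::factor, ::factor]                   # Dk: downsampling
    return y + rng.normal(0.0, sigma, y.shape)  # Vk: additive Gaussian noise
```

Calling `degrade` with N different shifts yields the N mutually displaced low-resolution observations from which a high-resolution image can later be reconstructed.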
next, in step 203, the data is received and parsed. The client parses the multi-frame low-resolution pictures received from the cloud server; as described above, these pictures may be multi-frame and motion-blurred, so the client parses them to extract the resolution and quality information of the images and then sends them to the edge calculation module. In one embodiment of the present invention, to improve the utilization of the computing resources of the edge computing modules, the client allocates the graphic image decomposition and rendering tasks to different edge computing modules according to certain rules: for example, task allocation is dynamically adjusted according to the computing power and network conditions of the edge computing nodes so as to balance the use of computing resources. By distributing rendering tasks across different edge computing nodes with load-balancing techniques, excessive consumption of a single node's computing resources can be avoided. A failover mechanism can also be adopted: when an edge computing node fails, its tasks are rapidly switched to other nodes, ensuring the stability and reliability of the system. In one embodiment of the invention, the operating state of the edge computing nodes is monitored in real time, and computing resources and transmission bandwidth are adjusted according to the monitored data; by dynamically adjusting the computing tasks and data transmission strategies of the edge computing nodes, efficient utilization of computing resources and transmission bandwidth is achieved.
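The allocation rule described above can be sketched as follows. This is an illustrative greedy policy, not the patent's algorithm: each task goes to the healthy edge node with the most spare capacity, which also gives simple failover (a failed node is modeled as having no spare capacity). Node names and capacity figures are assumptions.

```python
# Illustrative load-balancing sketch for distributing rendering tasks across
# edge computing nodes; the greedy most-spare-capacity rule is an assumption.
def pick_node(nodes):
    """nodes: name -> (capacity, load). Return the node with most spare capacity."""
    spare = {name: cap - load for name, (cap, load) in nodes.items() if cap > load}
    if not spare:
        raise RuntimeError("no edge computing node available")
    return max(spare, key=spare.get)

def assign(tasks, nodes):
    """Assign each task to a node, updating loads; return a task -> node plan."""
    plan = {}
    for task in tasks:
        node = pick_node(nodes)
        cap, load = nodes[node]
        nodes[node] = (cap, load + 1)  # account for the newly assigned task
        plan[task] = node
    return plan
```

A monitoring loop would refresh the `(capacity, load)` tuples from real-time node telemetry, so the same rule realizes both dynamic adjustment and failover.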
In one embodiment of the present invention, to improve transmission efficiency, the image data transmission module replaces multiple native image queues that can be aggregated with the image information of a single aggregation queue. Fig. 4 shows a schematic flow chart of image data transmission according to an embodiment of the present invention; as shown in fig. 4, the image data transmission includes:
first, in step 401, images are acquired. Assuming the transmission module transmits multi-frame images with resolution M×N at an acquisition frequency f, the amount of image data acquired per second is M×N×f;
next, at step 402, the image data is preprocessed. The collected original image data is preprocessed, for example by scaling and graying, and the preprocessed image size is recorded as M′×N′;
next, at step 403, features are extracted. Features are extracted from the preprocessed images; the total feature data volume is M′×N′×m, where m is the number of extracted features;
next, at step 404, the queues are aggregated. The extracted features are aggregated into an aggregation queue, with elements arranged in the order of feature extraction;
next, at step 405, the data is encapsulated. The aggregation queue is packaged into data packets as a unit; the number of packets is (M′×N′×m)/L, where L is the data amount of each packet; and
finally, in step 406, data is transmitted. The packaged data packets are transmitted to the receiver; the transmission time is T = (M′×N′×m)/B, where B is the bandwidth of the transmission channel. Compared with transmitting the original image queues, this image data transmission method transmits less data in less time and thus effectively improves bandwidth utilization;
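The data-volume bookkeeping of steps 401–406 can be worked through numerically. The concrete numbers below (preprocessed size M′ = 960, N′ = 540, feature count m = 8, packet size L = 1400 bytes, bandwidth B = 10 MB/s) are illustrative assumptions, not values from the embodiment.

```python
# Worked example of the step 401-406 formulas; all concrete numbers are assumptions.
def transmission_stats(m_p, n_p, m_feat, packet_size, bandwidth):
    """Return (total data, packet count, transfer time) per the formulas above."""
    total = m_p * n_p * m_feat            # aggregated feature data: M' x N' x m
    packets = -(-total // packet_size)    # ceiling of (M' x N' x m) / L
    seconds = total / bandwidth           # T = (M' x N' x m) / B
    return total, packets, seconds

total, packets, seconds = transmission_stats(960, 540, 8, 1400, 10_000_000)
# 4,147,200 units of feature data, split into 2,963 packets, ~0.41 s at 10 MB/s
```

Since only aggregated features are packetized rather than full frames, both the packet count and the transfer time shrink relative to raw image-queue transmission.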
next, at step 204, the image is reconstructed. The edge calculation module restores the multi-frame low-resolution pictures into one or more high-resolution frame images. Fig. 5 is a schematic flow chart of image restoration according to an embodiment of the present invention. As shown in fig. 5, in one embodiment of the invention, the motion blur image restoration module of the edge calculation module extracts the multiple pieces of low-resolution image information of the same group of image queues and restores the image queues to high-resolution images through registration and fusion processing, effectively improving the resolution and definition of the images and thereby providing a better visual effect. The image reconstruction includes:
first, at step 501, the images are registered. One LR image in the image queue is selected as the reference frame, and the motion parameters between the other images in the same queue and the reference image are then estimated. Inaccurate registration seriously degrades the subsequent steps, so image registration is a key step of high-resolution reconstruction. In practical applications, the motion parameters typically include rotation, translation, and scaling, so a feature-point-matching method can be used for registration: extract the feature points of the reference frame and compute their descriptors, extract the feature points of the other images and compute the similarity between their descriptors and those of the reference frame, and finally register the images using the feature points with the highest similarity:
R = argmin_R ||F(I1) − F(R(I2))||²;
wherein R is the transformation matrix applied to I2, I1 and I2 are the two images to be registered, and F(I) denotes the feature vector obtained by extracting the feature points of image I;
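A self-contained registration sketch is shown below. The embodiment describes feature-point matching; as a dependency-free stand-in, this uses FFT phase correlation, which recovers the translational component of the motion between the reference frame and another LR frame (rotation, scaling, and sub-pixel refinement are omitted, and the swap from feature matching to phase correlation is this example's own simplification).

```python
# Phase-correlation stand-in for the feature-matching registration step;
# recovers only integer translations between two frames.
import numpy as np

def estimate_shift(ref, img):
    """Return integer (dy, dx) such that np.roll(ref, (dy, dx)) matches img."""
    cross = np.fft.fft2(img) * np.conj(np.fft.fft2(ref))
    cross /= np.abs(cross) + 1e-12        # normalized cross-power spectrum
    corr = np.fft.ifft2(cross).real       # has a sharp peak at the relative shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape
    # wrap shifts larger than half the image into negative offsets
    return (dy - h if dy > h // 2 else dy, dx - w if dx > w // 2 else dx)
```

In a fuller pipeline the estimated shift for each LR frame plays the role of the transformation R, and the frames are warped back onto the reference grid before fusion.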
next, at step 502, the images are fused. The complementary, non-redundant information between the multiple frames of low-resolution images is processed so that the useful information is fused into an HR image. In one embodiment of the invention, a region-based approach is employed for image fusion: first, the feature points of the reference frame are propagated onto the other images to form corresponding regions; then the pixels within each region are fused, for example by weighted averaging or wavelet transforms; finally, the fused pixels are combined into a high-resolution image:
HR = argmin_SR ||F(LR) − F(SR)||²;
wherein LR is the low-resolution image, SR is the super-resolution image, and F(X) denotes the feature vector obtained by extracting deep features from image X; and
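The fusion step can be sketched under simplifying assumptions: the registered LR frames are aligned back onto a common grid by undoing their estimated integer shifts and then fused by an unweighted pixel-wise average. The equal weighting is this example's assumption; the embodiment also mentions weighted averaging and wavelet-based fusion.

```python
# Simplified region fusion: undo each frame's estimated shift, then average.
import numpy as np

def fuse(frames, shifts):
    """Average the frames after undoing their estimated (dy, dx) shifts."""
    aligned = [np.roll(f, (-dy, -dx), axis=(0, 1))
               for f, (dy, dx) in zip(frames, shifts)]
    return np.mean(aligned, axis=0)
```

With exact shift estimates, averaging the aligned frames suppresses the independent noise in each observation while preserving the shared scene content.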
finally, at step 503, the image is restored. Blur and noise, such as the Gaussian blur and Gaussian white noise commonly assumed in the classical super-resolution problem, are removed from the fused high-resolution image to further improve image quality and form the super-resolution (SR) image. In one embodiment of the invention, blur and noise are removed by deconvolution: a Point Spread Function (PSF) is designed according to the blur and noise types, the PSF is removed in the frequency domain using an invconv (inverse convolution) operation, and the result is convolved with the original image to obtain the restored image:
x=invconv(LR,PSF)+invconv(PSF,PSF)*LR;
where LR is the low-resolution image, PSF is the point spread function, x is the restored image, and invconv denotes the deconvolution operation.
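The `invconv` operation is not specified further in the text; as a standard stand-in, the sketch below uses Wiener deconvolution, which removes a known PSF in the frequency domain while limiting noise amplification. The regularization constant `eps` is an assumption.

```python
# Frequency-domain Wiener deconvolution as a stand-in for the invconv step.
import numpy as np

def wiener_deconv(img, psf, eps=1e-3):
    """Restore img that was circularly blurred by psf (psf zero-padded to img's shape)."""
    P = np.fft.fft2(psf, s=img.shape)          # PSF transfer function
    H = np.conj(P) / (np.abs(P) ** 2 + eps)    # Wiener filter with flat noise prior
    return np.fft.ifft2(np.fft.fft2(img) * H).real
```

Larger `eps` trades restoration sharpness for robustness to noise at frequencies where the PSF response is weak.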
In one embodiment of the present invention, the reconstructed image may be further optimized according to the actual requirements of the user equipment, for example by adjusting parameters such as the color, contrast, and brightness of the image to achieve the best visual effect; and
finally, at step 205, storage and display are performed. The client stores and displays the restored image. In one embodiment of the invention, the restored high-quality graphic image is presented by screen display, printing, or similar means. In one embodiment of the present invention, the client also feeds the rendering effect back to the cloud server and the edge computing module in real time, so that the image processing pipeline can be adjusted when necessary to achieve the best performance.
The graphic image display system and method provided by the invention enable the client to extract multiple pieces of low-resolution image information of the same group of image queues and realize the rendering and display of high-quality graphic images. Compared with traditional methods, rendering and transmission efficiency are significantly improved, the computational burden on the cloud server is reduced, and the utilization of the user-side equipment's computing power is increased.
While various embodiments of the present invention have been described above, it should be understood that they have been presented by way of example only, and not limitation. It will be apparent to those skilled in the relevant art that various combinations, modifications, and variations can be made therein without departing from the spirit and scope of the invention. Thus, the breadth and scope of the present invention as disclosed herein should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents.
Claims (10)
1. A graphic image display system, comprising:
a cloud server comprising an image reconstruction module, wherein the image reconstruction module is configured to decompose a high-resolution frame image into multiple low-resolution frame images of the same scene at the same position;
an edge calculation module communicatively connected to the cloud server and comprising a motion blur image restoration module configured to restore a series of dynamically moving low resolution frame images to one or several high resolution frame images; and
and a client communicatively connected to the edge computing module and the cloud server and configured to store and display the rendered image.
2. The display system of claim 1, further comprising an image transmission module communicatively coupled to the edge calculation module and the client and configured to identify the identical portions of the same scene in the same series of graphic image data, and to compress and transmit the differing portions.
3. The display system of claim 1, wherein the number and/or deployment locations of the edge computing modules are determined based on the distribution of clients, network status, and data security requirements.
4. The display system of claim 1, wherein the edge computation module is deployed on a local server at the client site.
5. The display system of claim 1, wherein the edge computing module and client communicate with the cloud server using an internally standardized application programming interface.
6. The display system of claim 1, wherein the edge computing modules communicate with each other and with the client based on the QUIC protocol, which runs over UDP, or on a reliable UDP-based protocol.
7. A method of displaying a graphic image, the method comprising the steps of:
a client sends a display request to a cloud server, wherein the request comprises an image type, a purpose, required resolution and color depth information;
the cloud server obtains a high-resolution picture according to the display request, decomposes and renders the picture to generate a multi-frame low-resolution picture with the same scene and the same position, and returns the multi-frame low-resolution picture to the client;
analyzing the multi-frame low-resolution picture by the client, extracting resolution and image quality information of an image, and sending the multi-frame low-resolution picture to an edge calculation module;
restoring the multi-frame low-resolution picture into one or more high-resolution frame images by the edge calculation module; and
and storing and displaying the restored image by the client.
8. The display method of claim 7, further comprising the step of:
an image transmission module identifies the identical portions of the same scene in the same series of graphic image data, compresses the data, and then transmits it.
9. The display method of claim 7, further comprising the step of:
and selecting one or more edge calculation modules through a load balancing algorithm, and restoring the image.
10. The display method of claim 7, further comprising the step of:
and the edge calculation module adjusts the color, contrast and brightness parameters of the restored image so as to optimize the visual effect.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410005661.7A CN117834946A (en) | 2024-01-02 | 2024-01-02 | Graphic image display system and method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117834946A true CN117834946A (en) | 2024-04-05 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||