CN116385318A - Image quality enhancement method and system based on cloud desktop - Google Patents

Image quality enhancement method and system based on cloud desktop

Info

Publication number
CN116385318A
Authority
CN
China
Prior art keywords
image
cloud desktop
super
desktop end
resolution reconstruction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310661815.3A
Other languages
Chinese (zh)
Other versions
CN116385318B (en)
Inventor
吴德志
刘洋
杨俊�
左明放
肖尧威
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan Zongjun Information Technology Co ltd
Original Assignee
Hunan Zongjun Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hunan Zongjun Information Technology Co ltd filed Critical Hunan Zongjun Information Technology Co ltd
Priority to CN202310661815.3A priority Critical patent/CN116385318B/en
Publication of CN116385318A publication Critical patent/CN116385318A/en
Application granted granted Critical
Publication of CN116385318B publication Critical patent/CN116385318B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06T5/70
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformation in the plane of the image
    • G06T3/40 Scaling the whole image or part thereof
    • G06T3/4053 Super resolution, i.e. output image resolution higher than sensor resolution
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/20 Image enhancement or restoration by the use of local operators
    • G06T5/90
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20024 Filtering details
    • G06T2207/20032 Median filtering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a cloud desktop-based image quality enhancement method and a cloud desktop-based image quality enhancement system, which comprise the following steps: S1: acquiring an original image and a cloud desktop end image transmitted to a cloud desktop, and denoising the images to obtain the denoised original image and cloud desktop end image; S2: extracting image gradients of the denoised original image and the cloud desktop end image; S3: constructing a super-resolution reconstruction network based on the original image after denoising in step S1, the cloud desktop end image, and the image gradients extracted in step S2; S4: training the network constructed in step S3, and obtaining a cloud desktop end image with improved definition based on the super-resolution reconstruction network after training; S5: carrying out dynamic range adjustment on the cloud desktop end image with improved definition to obtain the cloud desktop end image with enhanced final image quality. The invention has the advantages of improving image definition, enhancing the visual experience, improving application usability, maintaining transmission and processing efficiency, and providing adaptability and customizability.

Description

Image quality enhancement method and system based on cloud desktop
Technical Field
The invention belongs to the field of image quality enhancement, and particularly relates to an image quality enhancement method and system based on a cloud desktop.
Background
In a cloud desktop environment, image quality is critical to the user experience and to the requirements of the application scene. However, due to limitations in transmission and processing, the cloud desktop end image may suffer from problems such as noise, low definition and a limited dynamic range, which reduce the quality and appearance of the image. A method for improving the image quality of the cloud desktop end image is therefore needed, one that improves definition and enhances the visual effect through techniques such as denoising, super-resolution reconstruction and dynamic range adjustment. Some image enhancement methods already exist in the prior art, but they may have limitations such as low processing efficiency, high complexity and loss of image quality. These methods involve complex algorithms and computations during image enhancement, resulting in long processing times that make them unsuitable for cloud desktop environments with high real-time requirements. They may also introduce additional noise or artifacts while improving definition, resulting in a loss of image quality or an unnatural appearance.
Disclosure of Invention
In view of the above, the invention provides a method and a system for enhancing image quality based on a cloud desktop, which aim to provide higher quality visual experience for cloud desktop end images on the premise of ensuring transmission and processing efficiency through the steps of denoising, super-resolution reconstruction, dynamic range adjustment and the like.
The image quality enhancement method based on the cloud desktop provided by the invention comprises the following steps:
S1: acquiring an original image and a cloud desktop end image transmitted to a cloud desktop, and denoising the images to obtain the denoised original image and cloud desktop end image;
S2: extracting image gradients of the denoised original image and the cloud desktop end image;
S3: constructing a super-resolution reconstruction network based on the original image after denoising in step S1, the cloud desktop end image, and the image gradients extracted in step S2;
S4: training the network constructed in step S3, and obtaining a cloud desktop end image with improved definition based on the super-resolution reconstruction network after training;
S5: carrying out dynamic range adjustment on the cloud desktop end image with improved definition to obtain the cloud desktop end image with enhanced final image quality.
As a further improvement of the present invention:
optionally, the step S1 obtains an original image and a cloud desktop end image transmitted to a cloud desktop, and denoises the image to obtain the denoised original image and the cloud desktop end image, including:
the method comprises the steps of obtaining an original image and a cloud desktop end image transmitted to a cloud desktop, wherein the obtained image data are as follows:
Figure SMS_1
where Figure SMS_2 and Figure SMS_3 are the i-th acquired original image and cloud desktop end image, respectively;
denoising an original image and a cloud desktop end image based on self-adaptive space-time median filtering, wherein the self-adaptive space-time median filtering comprises the following steps:
s11: calculating the adaptive filter window size:
Figure SMS_4
wherein, size represents the size of the filtering window, th is the window screening threshold, and the calculation mode is as follows:
Figure SMS_5
where Figure SMS_6, Figure SMS_7 and Figure SMS_8 are the maximum, minimum and mean values of the i-th image, respectively; Figure SMS_9 is the mean value of the (i-1)-th image;
S12: calculating the space-time median filtering result:
Let Figure SMS_10 be the median filter window of the i-th image and Figure SMS_11 the median filter window of the (i-1)-th image; the median filtering result is:
Figure SMS_12
where the function Figure SMS_13 computes the median of the input sequence; Figure SMS_14; Figure SMS_15 and Figure SMS_16 are the original image and the cloud desktop end image, respectively.
Optionally, extracting the image gradient of the denoised original image and the cloud desktop image in the step S2 includes:
based on the denoised original image and the cloud desktop image obtained in the step S2, calculating the gradient of the image by using the image edge response, wherein the image edge response calculation flow is as follows:
s21: constructing an image edge detection operator in any direction:
Figure SMS_17
where Figure SMS_18 is the angle of the image edge; Figure SMS_19 is the pixel coordinate position; Figure SMS_20 and Figure SMS_21 are the differential operators along the x-axis and y-axis, calculated as:
Figure SMS_22
where Figure SMS_23 is the variance of the Gaussian function and e is the natural constant;
s22: calculating an edge response of the image:
Figure SMS_24
where Figure SMS_25 and Figure SMS_26 are the edge responses along the x-axis and y-axis, respectively:
Figure SMS_27
where Figure SMS_28 denotes the convolution operation; Figure SMS_29; Figure SMS_30 and Figure SMS_31 are the denoised original image and the cloud desktop end image, respectively;
S23: synthesizing the edge responses of the image to calculate the image gradient:
Figure SMS_32
where Figure SMS_34 is the angle class of the image edge, and Figure SMS_38 are respectively Figure SMS_41, Figure SMS_35, Figure SMS_37 and Figure SMS_40; Figure SMS_43 is the edge response at angle Figure SMS_33; Figure SMS_36; Figure SMS_39 and Figure SMS_42 are the image gradients of the denoised original image and the cloud desktop end image, respectively.
Optionally, in the step S3, a super-resolution reconstruction network is constructed based on the denoised original image in the step S1, the cloud desktop end image and the image gradients extracted in the step S2, and the method includes:
inputting the denoised original image obtained in the step S1, the cloud desktop end image and the gradient extracted in the step S2 into a super-resolution reconstruction network, wherein the flow of the super-resolution reconstruction network is as follows:
s31: defining the output of the super-resolution reconstruction network:
Figure SMS_44
where SR is the super-resolution reconstruction network, Figure SMS_45 and Figure SMS_46 are the weight and bias of the super-resolution reconstruction network, respectively; Figure SMS_47 and Figure SMS_48 are the cloud desktop end image with improved definition and its gradient, respectively;
s32: calculating a loss function of the super-resolution reconstruction network:
the loss function of the super-resolution reconstruction network consists of two parts, wherein the first part calculates the error between the cloud desktop end image with improved definition and the original image after denoising:
Figure SMS_49
wherein H, W and C are the height, width and dimension of the image respectively;
Figure SMS_50; Figure SMS_51; Figure SMS_52;
the second part calculates the error between the gradient of the cloud desktop end image with improved definition and the gradient of the original image after denoising:
Figure SMS_53
the errors of the two parts are combined to form a loss function of the super-resolution reconstruction network:
Figure SMS_54
where Figure SMS_55 is the balance coefficient.
Optionally, training the network constructed in step S3 in step S4, and obtaining the cloud desktop image with improved definition based on the super-resolution reconstructed network after training, including:
the parameter updating targets of the super-resolution reconstruction network are as follows:
Figure SMS_56
where Figure SMS_57 and Figure SMS_58 are the updated weight and bias of the super-resolution reconstruction network, respectively; Figure SMS_59 denotes taking the super-resolution reconstruction network parameters that minimize Figure SMS_60;
the parameter updating mode is as follows:
Figure SMS_61
where Figure SMS_62; Figure SMS_63 is the momentum coefficient, which controls how much the parameter update depends on the previous update; Figure SMS_64 is the learning rate, which controls the magnitude of the parameter update; t is the current update count; Figure SMS_65 is the parameter update at step t; Figure SMS_66 denotes the partial derivative of the loss function with respect to the super-resolution reconstruction network parameters;
after the super-resolution reconstruction network training is completed, the denoised cloud desktop end image is input into the super-resolution reconstruction network to obtain a cloud desktop end image with improved definition:
Figure SMS_67
where Figure SMS_68 and Figure SMS_69 are the cloud desktop end image with improved definition and its corresponding image gradient, respectively.
Optionally, in the step S5, dynamic range adjustment is performed on the cloud desktop end image with improved definition, so as to obtain a cloud desktop end image with enhanced final image quality, including:
the cloud desktop end image with improved definition is subjected to dynamic range adjustment according to the dynamic range adjustment interval, and the calculation method of the dynamic range adjustment comprises the following steps:
Figure SMS_70
where Figure SMS_71 is the dynamic adjustment range; Figure SMS_72 is the cloud desktop end image with improved definition; Figure SMS_73 is the cloud desktop end image with enhanced final image quality.
The invention also provides an image quality enhancement system based on the cloud desktop, which comprises:
and the image acquisition and denoising module: collecting an original image and a cloud desktop end image transmitted to a cloud desktop, and denoising the image;
an image gradient extraction module: extracting image gradients of the denoised original image and the cloud desktop end image
And a super-resolution reconstruction module: obtaining a cloud desktop end image with improved definition based on the super-resolution reconstruction network after training;
and a dynamic adjustment module: and carrying out dynamic range adjustment on the cloud desktop end image with improved definition.
Advantageous effects
Through the steps of denoising, super-resolution reconstruction, dynamic range adjustment and the like, the method can improve the definition of the cloud desktop end image. The denoising operation removes noise in the image, the super-resolution reconstruction utilizes the local features of the image to increase the details and resolution of the image, and the dynamic range is adjusted to improve the contrast and detail expressive force of the image, so that the image is clearer and more vivid.
By enhancing the image quality of the cloud desktop end image, the user can enjoy better visual experience. The definition of the image is improved, the visual effect is enhanced, the image is clearer and finer, the details are clearer and more visible, and therefore the perception and understanding of the user on the image content are improved.
Through the steps of denoising, super-resolution reconstruction, dynamic range adjustment and the like on the image at the cloud desktop end, the improvement of the image quality of the image is ensured, and meanwhile, the burden of data transmission and processing is not excessively increased, so that the method is suitable for the real-time requirement of the cloud desktop environment.
In summary, the image quality enhancement method based on the cloud desktop provides a better image display effect by improving image definition, enhancing the visual experience, improving application usability, maintaining transmission and processing efficiency, and providing adaptability and customizability.
Drawings
Fig. 1 is a flowchart illustrating a method for enhancing image quality based on a cloud desktop according to an embodiment of the present invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings, without limiting the invention in any way, and any alterations or substitutions based on the teachings of the invention are intended to fall within the scope of the invention.
Example 1: an image quality enhancement method based on a cloud desktop, as shown in fig. 1, comprises the following steps:
s1: the method comprises the steps of obtaining an original image and a cloud desktop end image transmitted to a cloud desktop, denoising the image, and obtaining the denoised original image and the cloud desktop end image.
The method comprises the steps of obtaining an original image and a cloud desktop end image transmitted to a cloud desktop, wherein the obtained image data are as follows:
Figure SMS_74
where Figure SMS_75 and Figure SMS_76 are the i-th acquired original image and cloud desktop end image, respectively;
denoising an original image and a cloud desktop end image based on self-adaptive space-time median filtering, wherein the self-adaptive space-time median filtering comprises the following steps:
s11: calculating the adaptive filter window size:
Figure SMS_77
where size represents the filter window size, in this embodiment 25; th is the window screening threshold, calculated as follows:
Figure SMS_78
where Figure SMS_79, Figure SMS_80 and Figure SMS_81 are the maximum, minimum and mean values of the i-th image, respectively; Figure SMS_82 is the mean value of the (i-1)-th image;
S12: calculating the space-time median filtering result:
Let Figure SMS_83 be the median filter window of the i-th image and Figure SMS_84 the median filter window of the (i-1)-th image; the median filtering result is:
Figure SMS_85
where the function Figure SMS_86 computes the median of the input sequence; Figure SMS_87; Figure SMS_88 and Figure SMS_89 are the original image and the cloud desktop end image, respectively.
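For illustration only, the following Python sketch shows one way the adaptive space-time median filtering of step S1 could be realized. Because the formula images (Figure SMS_77, Figure SMS_78 and Figure SMS_85) are not reproduced in the text, the window-growth rule and all function and variable names below are assumptions rather than the patent's exact definitions.
```python
import numpy as np

def adaptive_spatiotemporal_median(curr, prev, max_size=25):
    """Denoise a grayscale frame `curr` with a median filter whose window is chosen
    adaptively per pixel and whose sample set also contains the co-located window of
    the previous frame `prev` (the temporal part). The window keeps growing while its
    median is not strictly between the window minimum and maximum; this stands in for
    the patent's window screening threshold th, whose exact formula is not shown."""
    h, w = curr.shape
    out = np.empty_like(curr, dtype=np.float64)
    for y in range(h):
        for x in range(w):
            size = 3
            while True:
                r = size // 2
                y0, y1 = max(0, y - r), min(h, y + r + 1)
                x0, x1 = max(0, x - r), min(w, x + r + 1)
                # spatial window of the current frame plus temporal window of the previous frame
                win = np.concatenate([curr[y0:y1, x0:x1].ravel(),
                                      prev[y0:y1, x0:x1].ravel()]).astype(np.float64)
                med = np.median(win)
                if win.min() < med < win.max() or size >= max_size:
                    out[y, x] = med
                    break
                size += 2  # enlarge the adaptive window and try again
    return out

# usage (hypothetical frames): denoised_i = adaptive_spatiotemporal_median(frame_i, frame_i_minus_1)
```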
Denoising improves image quality and lays the foundation for subsequent processing. Noise interferes with image content and details; removing it restores clearer, more continuous edges and details, provides higher-quality input for subsequent steps such as feature extraction and image matching, and helps produce more accurate processing results.
The two denoised images are closer in visual effect, which facilitates comparison and fusion between them. Denoising reduces the quality degradation caused by transmission, so the cloud desktop end image more closely resembles the original image, giving a more consistent basis for direct pixel-level comparison.
The denoised image also better reflects the real scene and provides more accurate image content. Noise, particularly that introduced during compression and transmission, distorts image content and details; denoising restores the real information of the image to a certain extent, so the content more accurately reflects the real scene, which is important for image analysis tasks such as content recognition and understanding.
S2: and extracting the image gradient of the denoised original image and the cloud desktop end image.
Based on the denoised original image and the cloud desktop image obtained in the step S2, calculating the gradient of the image by using the image edge response, wherein the image edge response calculation flow is as follows:
s21: constructing an image edge detection operator in any direction:
Figure SMS_90
where Figure SMS_91 is the angle of the image edge; Figure SMS_92 is the pixel coordinate position; Figure SMS_93 and Figure SMS_94 are the differential operators along the x-axis and y-axis, calculated as:
Figure SMS_95
where Figure SMS_96 is the variance of the Gaussian function, set to Figure SMS_97 in this embodiment; e is the natural constant;
s22: calculating an edge response of the image:
Figure SMS_98
where Figure SMS_99 and Figure SMS_100 are the edge responses along the x-axis and y-axis, respectively:
Figure SMS_101
where Figure SMS_102 denotes the convolution operation; Figure SMS_103; Figure SMS_104 and Figure SMS_105 are the denoised original image and the cloud desktop end image, respectively;
S23: synthesizing the edge responses of the image to calculate the image gradient:
Figure SMS_106
where Figure SMS_108 is the angle class of the image edge, and Figure SMS_112 are respectively Figure SMS_115, Figure SMS_109, Figure SMS_111 and Figure SMS_114; Figure SMS_117 is the edge response at angle Figure SMS_107; Figure SMS_110; Figure SMS_113 and Figure SMS_116 are the image gradients of the denoised original image and the cloud desktop end image, respectively.
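The sketch below illustrates one plausible implementation of step S2: directional edge responses obtained with Gaussian-derivative operators and synthesized over four angle classes (0°, 45°, 90° and 135°). The four-angle choice and the squared-sum synthesis are assumptions, since the corresponding formula images are not reproduced in the text.
```python
import numpy as np
from scipy.ndimage import convolve

def gaussian_derivative_kernels(sigma=1.0, radius=3):
    """First-order Gaussian-derivative operators along the x and y axes."""
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return -xx / sigma ** 2 * g, -yy / sigma ** 2 * g  # d/dx and d/dy of the Gaussian

def image_gradient(img, sigma=1.0, angles=(0, 45, 90, 135)):
    """Edge responses Ex, Ey by convolution, per-angle responses
    E(theta) = Ex*cos(theta) + Ey*sin(theta), synthesized into one gradient map."""
    gx, gy = gaussian_derivative_kernels(sigma)
    img = img.astype(np.float64)
    ex = convolve(img, gx, mode="reflect")
    ey = convolve(img, gy, mode="reflect")
    responses = [ex * np.cos(np.deg2rad(a)) + ey * np.sin(np.deg2rad(a)) for a in angles]
    return np.sqrt(sum(r ** 2 for r in responses))

# usage: grad_cloud = image_gradient(denoised_cloud_img, sigma=1.0)
```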
Compared with the pixel value, the gradient characteristic has lower sensitivity to image blurring and compression, and can retain the image structure information lost in the image compression transmission process to a certain extent. The extracted gradient features are more stable, and the method is more suitable for judging the corresponding relation between the original image and the cloud desktop end image.
By analyzing the difference of gradient characteristics of the original image and the cloud desktop image, the areas of the image can be determined to lose more information in the transmission process, and the type of the lost information can be determined, so that guidance can be provided for designing an enhancement scheme for the cloud desktop image.
S3: and (3) constructing a super-resolution reconstruction network based on the original image after denoising in the step (S1), the cloud desktop end image and the gradient extracted in the step (S2).
Inputting the denoised original image obtained in the step S1, the cloud desktop end image and the image extracted in the step S2 into a super-resolution reconstruction network, wherein the flow of the super-resolution reconstruction network is as follows:
s31: defining the output of the super-resolution reconstruction network:
Figure SMS_118
where SR is the super-resolution reconstruction network, Figure SMS_119 and Figure SMS_120 are the weight and bias of the super-resolution reconstruction network, respectively; Figure SMS_121 and Figure SMS_122 are the cloud desktop end image with improved definition and its gradient, respectively;
s32: calculating a loss function of the super-resolution reconstruction network:
the loss function of the super-resolution reconstruction network consists of two parts, wherein the first part calculates the error between the cloud desktop end image with improved definition and the original image after denoising:
Figure SMS_123
wherein H, W and C are the height, width and dimension of the image respectively;
Figure SMS_124; Figure SMS_125; Figure SMS_126;
the second part calculates the error between the gradient of the cloud desktop end image with improved definition and the gradient of the original image after denoising:
Figure SMS_127
the errors of the two parts are combined to form a loss function of the super-resolution reconstruction network:
Figure SMS_128
where Figure SMS_129 is the balance coefficient, set to 0.8 in this embodiment.
The resolution of the original image is higher, and the original image contains more abundant image details, so that information support is provided for obtaining a high-quality super-resolution reconstruction result. The super-resolution network can migrate the detail information to the cloud desktop end image through learning, so that resolution improvement and detail recovery are realized.
During image compression and transmission, a large loss of pixel information may occur, while gradient features may preserve structural information of the image to some extent. The network can obtain more comprehensive and accurate image representation by combining pixel information and gradient characteristics, which is beneficial to learning the restoration transformation of the image.
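As an illustration of steps S31 and S32, the sketch below shows a toy super-resolution reconstruction network and the two-part loss (pixel error plus gradient error weighted by the balance coefficient, 0.8 in this embodiment). The layer layout, channel counts and the use of an L1 error are assumptions; the patent's formula images define the exact form.
```python
import torch
import torch.nn as nn

class SuperResolutionNet(nn.Module):
    """Toy network: takes the cloud desktop end image and its gradient map,
    returns an enhanced image together with a predicted gradient map."""
    def __init__(self, channels=1, features=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels + 1, features, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(features, features, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.to_image = nn.Conv2d(features, channels, 3, padding=1)
        self.to_grad = nn.Conv2d(features, 1, 3, padding=1)

    def forward(self, cloud_img, cloud_grad):
        feats = self.body(torch.cat([cloud_img, cloud_grad], dim=1))
        return self.to_image(feats), self.to_grad(feats)

def sr_loss(pred_img, pred_grad, orig_img, orig_grad, balance=0.8):
    """Two-part loss: error of the enhanced image against the denoised original image,
    plus the gradient error weighted by the balance coefficient."""
    pixel_term = torch.mean(torch.abs(pred_img - orig_img))
    grad_term = torch.mean(torch.abs(pred_grad - orig_grad))
    return pixel_term + balance * grad_term
```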
S4: training the network constructed in the step S3, and obtaining the cloud desktop end image with improved definition based on the super-resolution reconstruction network after training.
The parameter updating targets of the super-resolution reconstruction network are as follows:
Figure SMS_130
where Figure SMS_131 and Figure SMS_132 are the updated weight and bias of the super-resolution reconstruction network, respectively; Figure SMS_133 denotes taking the super-resolution reconstruction network parameters that minimize Figure SMS_134;
the parameter updating mode is as follows:
Figure SMS_135
where Figure SMS_136; Figure SMS_137 is the momentum coefficient, which controls how much the parameter update depends on the previous update and is set to 0.8 in this embodiment; Figure SMS_138 is the learning rate, which controls the magnitude of the parameter update and is set to 0.001 in this embodiment; t is the current update count; Figure SMS_139 is the parameter update at step t; Figure SMS_140 denotes the partial derivative of the loss function with respect to the super-resolution reconstruction network parameters;
after the super-resolution reconstruction network training is completed, the denoised cloud desktop end image is input into the super-resolution reconstruction network to obtain a cloud desktop end image with improved definition:
Figure SMS_141
where Figure SMS_142 and Figure SMS_143 are the cloud desktop end image with improved definition and its corresponding image gradient, respectively.
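A minimal training loop for step S4, reusing the SuperResolutionNet and sr_loss sketched above and the momentum coefficient 0.8 and learning rate 0.001 stated in this embodiment. torch.optim.SGD with momentum is used as a stand-in for the patent's update rule; the data loader and tensor shapes are illustrative assumptions.
```python
import torch

def train_sr_network(net, loader, epochs=10, lr=0.001, momentum=0.8):
    """Each loader batch is assumed to yield (cloud_img, cloud_grad, orig_img, orig_grad)
    tensors of shape (N, C, H, W). SGD with momentum keeps a velocity buffer so each
    update partly depends on the previous one, mirroring the momentum update above."""
    opt = torch.optim.SGD(net.parameters(), lr=lr, momentum=momentum)
    for _ in range(epochs):
        for cloud_img, cloud_grad, orig_img, orig_grad in loader:
            pred_img, pred_grad = net(cloud_img, cloud_grad)
            loss = sr_loss(pred_img, pred_grad, orig_img, orig_grad, balance=0.8)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return net

# inference after training (denoised cloud desktop image and its gradient as inputs):
# enhanced_img, enhanced_grad = net(denoised_cloud_img, denoised_cloud_grad)
```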
S5: and carrying out dynamic range adjustment on the cloud desktop end image with improved definition to obtain the cloud desktop end image with enhanced final image quality.
The cloud desktop end image with improved definition is subjected to dynamic range adjustment according to the dynamic range adjustment interval, and the calculation method of the dynamic range adjustment comprises the following steps:
Figure SMS_144
where Figure SMS_145 is the dynamic adjustment range; Figure SMS_146 is the cloud desktop end image with improved definition; Figure SMS_147 is the cloud desktop end image with enhanced final image quality.
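The adjustment formula itself is given only as an image (Figure SMS_144), so the sketch below uses a simple linear min-max stretch to a target dynamic adjustment range, which is one common way to realize the contrast stretching described here; the target range and function name are assumptions.
```python
import numpy as np

def adjust_dynamic_range(img, target_range=(0.0, 255.0)):
    """Linearly stretch the enhanced image so its values span the dynamic adjustment range."""
    d_min, d_max = target_range
    i_min, i_max = float(img.min()), float(img.max())
    if i_max - i_min < 1e-12:            # flat image: nothing to stretch
        return np.full(img.shape, d_min, dtype=np.float64)
    scaled = (img.astype(np.float64) - i_min) / (i_max - i_min)
    return scaled * (d_max - d_min) + d_min
```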
Super-resolution reconstruction networks, while improving image sharpness and detail, also reduce the dynamic range of the image compared to the original image, which reduces the overall contrast of the image, resulting in insufficient brightness and reduced image quality. The dynamic range adjustment can be used for carrying out contrast stretching on the super-resolution image, so that the dynamic range of the super-resolution image is close to that of the original image, and the visual effect of the image is obviously improved.
Dynamic range adjustment is a simple and efficient means of enhancing the visual quality of an image as an image post-processing tool. It is easier to implement and apply than complex image operations, but can also bring about a significant image quality improvement, which makes it very suitable for use in image enhancement systems.
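Putting the steps together, the following end-to-end sketch chains the functions and network defined in the sketches above for a single grayscale cloud desktop frame; all names come from those illustrative sketches, not from the patent itself.
```python
import numpy as np
import torch

def enhance_cloud_desktop_frame(cloud_frame, prev_cloud_frame, net):
    """Denoise -> extract gradient -> super-resolution reconstruction -> dynamic range adjustment.
    `net` is assumed to be a trained SuperResolutionNet(channels=1) from the sketch above."""
    # S1: adaptive space-time median denoising
    denoised = adaptive_spatiotemporal_median(cloud_frame, prev_cloud_frame)
    # S2: image gradient extraction
    grad = image_gradient(denoised)
    # S4: inference with the trained super-resolution reconstruction network
    to_tensor = lambda a: torch.from_numpy(a[None, None].astype(np.float32))
    with torch.no_grad():
        enhanced, _ = net(to_tensor(denoised), to_tensor(grad))
    enhanced = enhanced.squeeze().numpy()
    # S5: dynamic range adjustment
    return adjust_dynamic_range(enhanced)
```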
Example 2: the invention also discloses an image quality enhancement system based on the cloud desktop, which comprises the following four modules:
and the image acquisition and denoising module: collecting an original image and a cloud desktop end image transmitted to a cloud desktop, and denoising the image;
an image gradient extraction module: extracting image gradients of the denoised original image and the cloud desktop end image
And a super-resolution reconstruction module: obtaining a cloud desktop end image with improved definition based on the super-resolution reconstruction network after training;
and a dynamic adjustment module: and carrying out dynamic range adjustment on the cloud desktop end image with improved definition.
It should be noted that, the foregoing reference numerals of the embodiments of the present invention are merely for describing the embodiments, and do not represent the advantages and disadvantages of the embodiments. And the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, apparatus, article, or method that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, apparatus, article, or method. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, apparatus, article or method that comprises the element.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) as described above, comprising instructions for causing a terminal device (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the method according to the embodiments of the present invention.
The foregoing description is only of the preferred embodiments of the present invention, and is not intended to limit the scope of the invention, but rather is intended to cover any equivalents of the structures or equivalent processes disclosed herein or in the alternative, which may be employed directly or indirectly in other related arts.

Claims (7)

1. The image quality enhancement method based on the cloud desktop is characterized by comprising the following steps:
S1: acquiring an original image and a cloud desktop end image transmitted to a cloud desktop, and denoising the images to obtain the denoised original image and cloud desktop end image;
S2: extracting image gradients of the denoised original image and the cloud desktop end image;
S3: constructing a super-resolution reconstruction network based on the original image after denoising in step S1, the cloud desktop end image, and the image gradients extracted in step S2;
S4: training the network constructed in step S3, and obtaining a cloud desktop end image with improved definition based on the super-resolution reconstruction network after training;
S5: carrying out dynamic range adjustment on the cloud desktop end image with improved definition to obtain the cloud desktop end image with enhanced final image quality.
2. The cloud desktop-based image quality enhancement method according to claim 1, wherein the step S1 includes:
the method comprises the steps of obtaining an original image and a cloud desktop end image transmitted to a cloud desktop, wherein the obtained image data are as follows:
Figure QLYQS_1
where Figure QLYQS_2 and Figure QLYQS_3 are the i-th acquired original image and cloud desktop end image, respectively;
denoising an original image and a cloud desktop end image based on self-adaptive space-time median filtering, wherein the self-adaptive space-time median filtering comprises the following steps:
s11: calculating the adaptive filter window size:
Figure QLYQS_4
where size denotes the filter window size and th is the window screening threshold, calculated as follows:
Figure QLYQS_5
where Figure QLYQS_6, Figure QLYQS_7 and Figure QLYQS_8 are the maximum, minimum and mean values of the i-th image, respectively; Figure QLYQS_9 is the mean value of the (i-1)-th image;
S12: calculating the space-time median filtering result:
Let Figure QLYQS_10 be the median filter window of the i-th image and Figure QLYQS_11 the median filter window of the (i-1)-th image; the median filtering result is:
Figure QLYQS_12
where the function Figure QLYQS_13 computes the median of the input sequence; Figure QLYQS_14; Figure QLYQS_15 and Figure QLYQS_16 are the original image and the cloud desktop end image, respectively.
3. The cloud desktop-based image quality enhancement method according to claim 2, wherein the step S2 includes:
based on the denoised original image and the cloud desktop image obtained in the step S2, calculating the gradient of the image by using the image edge response, wherein the image edge response calculation flow is as follows:
s21: constructing an image edge detection operator in any direction:
Figure QLYQS_17
where Figure QLYQS_18 is the angle of the image edge; Figure QLYQS_19 is the pixel coordinate position; Figure QLYQS_20 and Figure QLYQS_21 are the differential operators along the x-axis and y-axis, calculated as:
Figure QLYQS_22
where Figure QLYQS_23 is the variance of the Gaussian function and e is the natural constant;
s22: calculating an edge response of the image:
Figure QLYQS_24
where Figure QLYQS_25 and Figure QLYQS_26 are the edge responses along the x-axis and y-axis, respectively:
Figure QLYQS_27
where Figure QLYQS_28 denotes the convolution operation; Figure QLYQS_29; Figure QLYQS_30 and Figure QLYQS_31 are the denoised original image and the cloud desktop end image, respectively;
S23: synthesizing the edge responses of the image to calculate the image gradient:
Figure QLYQS_32
where Figure QLYQS_35 is the angle class of the image edge, and Figure QLYQS_37 are respectively Figure QLYQS_40, Figure QLYQS_34, Figure QLYQS_38 and Figure QLYQS_41; Figure QLYQS_43 is the edge response at angle Figure QLYQS_33; Figure QLYQS_36; Figure QLYQS_39 and Figure QLYQS_42 are the image gradients of the denoised original image and the cloud desktop end image, respectively.
4. The cloud desktop-based image quality enhancement method according to claim 3, wherein in the step S3, it includes:
inputting the denoised original image obtained in the step S1, the cloud desktop end image and the gradient extracted in the step S2 into a super-resolution reconstruction network, wherein the flow of the super-resolution reconstruction network is as follows:
s31: defining the output of the super-resolution reconstruction network:
Figure QLYQS_44
where SR is the super-resolution reconstruction network, Figure QLYQS_45 and Figure QLYQS_46 are the weight and bias of the super-resolution reconstruction network, respectively; Figure QLYQS_47 and Figure QLYQS_48 are the cloud desktop end image with improved definition and its gradient, respectively;
s32: calculating a loss function of the super-resolution reconstruction network:
the loss function of the super-resolution reconstruction network is composed of two parts, wherein the first part calculates the error between the cloud desktop end image with improved definition and the original image after denoising:
Figure QLYQS_49
where H, W and C are the height, width and dimension of the image, respectively; Figure QLYQS_50; Figure QLYQS_51; Figure QLYQS_52;
the second part calculates the error between the gradient of the cloud desktop end image with improved definition and the gradient of the original image after denoising:
Figure QLYQS_53
the errors of the two parts are combined to form a loss function of the super-resolution reconstruction network:
Figure QLYQS_54
where Figure QLYQS_55 is the balance coefficient.
5. The cloud desktop-based image quality enhancement method according to claim 4, wherein in the step S4, it includes:
the parameter updating targets of the super-resolution reconstruction network are as follows:
Figure QLYQS_56
where Figure QLYQS_57 and Figure QLYQS_58 are the updated weight and bias of the super-resolution reconstruction network, respectively; Figure QLYQS_59 denotes taking the super-resolution reconstruction network parameters that minimize Figure QLYQS_60;
the parameter updating mode is as follows:
Figure QLYQS_61
where Figure QLYQS_62; Figure QLYQS_63 is the momentum coefficient, which controls how much the parameter update depends on the previous update; Figure QLYQS_64 is the learning rate, which controls the magnitude of the parameter update; t is the current update count; Figure QLYQS_65 is the parameter update at step t; Figure QLYQS_66 denotes the partial derivative of the loss function with respect to the super-resolution reconstruction network parameters;
after the super-resolution reconstruction network training is completed, the denoised cloud desktop end image is input into the super-resolution reconstruction network to obtain a cloud desktop end image with improved definition:
Figure QLYQS_67
where Figure QLYQS_68 and Figure QLYQS_69 are the cloud desktop end image with improved definition and its corresponding image gradient, respectively.
6. The cloud desktop-based image quality enhancement method according to claim 5, wherein in the step S5, it includes:
the cloud desktop end image with improved definition is subjected to dynamic range adjustment according to the dynamic range adjustment interval, and the calculation method of the dynamic range adjustment comprises the following steps:
Figure QLYQS_70
where Figure QLYQS_71 is the dynamic adjustment range; Figure QLYQS_72 is the cloud desktop end image with improved definition; Figure QLYQS_73 is the cloud desktop end image with enhanced final image quality.
7. An image quality enhancement system based on a cloud desktop, comprising:
and the image acquisition and denoising module: collecting an original image and a cloud desktop end image transmitted to a cloud desktop, and denoising the image;
an image gradient extraction module: extracting image gradients of the denoised original image and the cloud desktop end image
And a super-resolution reconstruction module: obtaining a cloud desktop end image with improved definition based on the super-resolution reconstruction network after training;
and a dynamic adjustment module: carrying out dynamic range adjustment on the cloud desktop end image with improved definition;
to realize the cloud desktop-based image quality enhancement method according to any one of claims 1 to 6.
CN202310661815.3A 2023-06-06 2023-06-06 Image quality enhancement method and system based on cloud desktop Active CN116385318B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310661815.3A CN116385318B (en) 2023-06-06 2023-06-06 Image quality enhancement method and system based on cloud desktop

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310661815.3A CN116385318B (en) 2023-06-06 2023-06-06 Image quality enhancement method and system based on cloud desktop

Publications (2)

Publication Number Publication Date
CN116385318A true CN116385318A (en) 2023-07-04
CN116385318B CN116385318B (en) 2023-10-10

Family

ID=86969794

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310661815.3A Active CN116385318B (en) 2023-06-06 2023-06-06 Image quality enhancement method and system based on cloud desktop

Country Status (1)

Country Link
CN (1) CN116385318B (en)


Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4973111A (en) * 1988-09-14 1990-11-27 Case Western Reserve University Parametric image reconstruction using a high-resolution, high signal-to-noise technique
KR20140040322A (en) * 2012-09-24 2014-04-03 재단법인대구경북과학기술원 Single image super-resolution image reconstruction device and method thereof
US20200249314A1 (en) * 2019-02-01 2020-08-06 GM Global Technology Operations LLC Deep learning for super resolution in a radar system
WO2021107290A1 (en) * 2019-11-28 2021-06-03 Samsung Electronics Co., Ltd. Electronic apparatus and controlling method thereof
US20230052483A1 (en) * 2020-02-17 2023-02-16 Intel Corporation Super resolution using convolutional neural network
CN111696033A (en) * 2020-05-07 2020-09-22 中山大学 Real image super-resolution model and method for learning cascaded hourglass network structure based on angular point guide
WO2022057837A1 (en) * 2020-09-16 2022-03-24 广州虎牙科技有限公司 Image processing method and apparatus, portrait super-resolution reconstruction method and apparatus, and portrait super-resolution reconstruction model training method and apparatus, electronic device, and storage medium
WO2022116933A1 (en) * 2020-12-04 2022-06-09 华为技术有限公司 Model training method, data processing method and apparatus
CN112733894A (en) * 2020-12-29 2021-04-30 广东省电信规划设计院有限公司 Intelligent video identification sensing method based on wireless communication technology
WO2023035531A1 (en) * 2021-09-10 2023-03-16 平安科技(深圳)有限公司 Super-resolution reconstruction method for text image and related device thereof
CN114841308A (en) * 2022-03-17 2022-08-02 阿里巴巴(中国)有限公司 Super-resolution reconstruction method, device and equipment for cloud desktop image and storage medium
CN115941965A (en) * 2022-12-09 2023-04-07 阿里巴巴(中国)有限公司 Cloud desktop coding method, reconstruction method, display method and display system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
G. Seif, "Edge-Based Loss Function for Single Image Super-Resolution", ICASSP, pages 1-5 *
孟志青 (Meng Zhiqing), "监督损失函数光滑化图像超分辨率重建" (image super-resolution reconstruction with a smoothed supervised loss function), 中国图象图形学报 (Journal of Image and Graphics), pages 1-12 *

Also Published As

Publication number Publication date
CN116385318B (en) 2023-10-10

Similar Documents

Publication Publication Date Title
CN111539879B (en) Video blind denoising method and device based on deep learning
CN107123089B (en) Remote sensing image super-resolution reconstruction method and system based on depth convolution network
Dong et al. Nonlocal back-projection for adaptive image enlargement
CN110163818B (en) Low-illumination video image enhancement method for maritime unmanned aerial vehicle
JPH11150669A (en) Image improving device
CN111667410B (en) Image resolution improving method and device and electronic equipment
CN110827397B (en) Texture fusion method for real-time three-dimensional reconstruction of RGB-D camera
CN110796616B (en) Turbulence degradation image recovery method based on norm constraint and self-adaptive weighted gradient
WO2014070273A1 (en) Recursive conditional means image denoising
CN109003233B (en) Image denoising method based on self-adaptive weight total variation model
CN116385318B (en) Image quality enhancement method and system based on cloud desktop
Raveendran et al. Image fusion using LEP filtering and bilinear interpolation
CN112819739A (en) Scanning electron microscope image processing method and system
CN112509144A (en) Face image processing method and device, electronic equipment and storage medium
CN117333359A (en) Mountain-water painting image super-resolution reconstruction method based on separable convolution network
CN115965552B (en) Frequency-space-time domain joint denoising and recovering system for low signal-to-noise ratio image sequence
CN115116468A (en) Video generation method and device, storage medium and electronic equipment
CN115082296A (en) Image generation method based on wavelet domain image generation framework
Son et al. A pair of noisy/blurry patches-based PSF estimation and channel-dependent deblurring
Tun et al. Joint Training of Noisy Image Patch and Impulse Response of Low-Pass Filter in CNN for Image Denoising
Zheng et al. Regularization parameter selection for total variation model based on local spectral response
CN111724332B (en) Image enhancement method and system suitable for closed cavity detection
CN112435192B (en) Lightweight image definition enhancing method
CN108510449B (en) Total variation image noise elimination method based on self-adaptive kernel regression
CN116721015A (en) Noise-containing rapid image enhancement and super-resolution reconstruction method

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant