CN113096014A - Video super-resolution processing method, electronic device and storage medium - Google Patents

Video super-resolution processing method, electronic device and storage medium

Info

Publication number
CN113096014A
CN113096014A (Application CN202110351978.2A)
Authority
CN
China
Prior art keywords
gradient operator
image frame
super
edge
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110351978.2A
Other languages
Chinese (zh)
Other versions
CN113096014B (en)
Inventor
程志鹏
王琦
丁丹丹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Bravo Technology Co ltd
China Mobile Communications Group Co Ltd
MIGU Video Technology Co Ltd
MIGU Culture Technology Co Ltd
Original Assignee
Beijing Bravo Technology Co ltd
China Mobile Communications Group Co Ltd
MIGU Video Technology Co Ltd
MIGU Culture Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Bravo Technology Co ltd, China Mobile Communications Group Co Ltd, MIGU Video Technology Co Ltd, MIGU Culture Technology Co Ltd filed Critical Beijing Bravo Technology Co ltd
Priority to CN202110351978.2A priority Critical patent/CN113096014B/en
Publication of CN113096014A publication Critical patent/CN113096014A/en
Application granted granted Critical
Publication of CN113096014B publication Critical patent/CN113096014B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformation in the plane of the image
    • G06T3/40 Scaling the whole image or part thereof
    • G06T3/4053 Super resolution, i.e. output image resolution higher than sensor resolution
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T5/70
    • G06T5/73
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20172 Image enhancement details
    • G06T2207/20192 Edge enhancement; Edge preservation

Abstract

The invention discloses a video super-resolution processing method, an electronic device and a storage medium. The method comprises: determining an image frame of a video to be subjected to super-resolution processing; calculating the edge strength of the image frame according to a transverse gradient operator and a longitudinal gradient operator of the image frame together with gradient operators in directions other than the transverse and longitudinal directions; and performing super-resolution processing on the image frame according to its edge strength. By adding gradient operators in directions other than the transverse and longitudinal directions when calculating the edge strength, the invention improves the super-resolution enhancement effect in those directions, so that the improved Anime4K algorithm can be applied to general videos and is not limited to cartoon images without complex horizontal stripes.

Description

Video super-resolution processing method, electronic device and storage medium
Technical Field
The invention relates to the technical field of image enhancement, in particular to a video super-resolution processing method, electronic equipment and a storage medium.
Background
Super-resolution means improving the resolution of an original image by hardware or software; the process of obtaining a high-resolution image from a series of low-resolution images is super-resolution reconstruction. In recent years, super-resolution technology has shown broad application prospects in picture enhancement, picture magnification, detail restoration and the like.
With the upgrading of internet users' consumption habits, online video and live streaming have changed people's lifestyles, and users' requirements for video quality keep rising. Anime4K is one of the more advanced, high-quality super-resolution algorithms available today, but it is suited to cartoon images without complex horizontal stripes and performs poorly on general video images. How to improve the Anime4K algorithm so that it applies to general video remains an open problem.
Disclosure of Invention
Because the existing methods have the above problems, embodiments of the present invention provide a video super-resolution processing method, an electronic device, and a storage medium.
Specifically, the embodiment of the invention provides the following technical scheme:
in a first aspect, an embodiment of the present invention provides a video super-resolution processing method, including:
determining an image frame of a video to be subjected to super-resolution processing;
and calculating the edge strength of the image frame according to a transverse gradient operator and a longitudinal gradient operator of the image frame together with gradient operators in directions other than the transverse and longitudinal directions, and performing super-resolution processing on the image frame according to the edge strength of the image frame.
Further, calculating the edge strength of the image frame according to a transverse gradient operator, a longitudinal gradient operator and a gradient operator in the remaining direction except for the transverse direction and the longitudinal direction of the image frame, and performing a super-resolution process on the image frame according to the edge strength of the image frame, including:
and calculating the edge strength of the image frame according to the transverse gradient operator, the longitudinal gradient operator, the 45-degree directional gradient operator and the 135-degree directional gradient operator of the image frame, and performing super-resolution processing on the image frame according to the edge strength of the image frame.
Further, calculating the edge strength of the image frame according to a transverse gradient operator, a longitudinal gradient operator, a 45 ° directional gradient operator and a 135 ° directional gradient operator of the image frame, and performing a super-resolution process on the image frame according to the edge strength of the image frame, including:
calculating a first transverse gradient operator, a first longitudinal gradient operator, a first 45 ° directional gradient operator and a first 135 ° directional gradient operator of the image frame;
calculating to obtain a first edge map according to the first transverse gradient operator, the first longitudinal gradient operator, the first 45-degree directional gradient operator and the first 135-degree directional gradient operator;
calculating a second transverse gradient operator, a second longitudinal gradient operator, a second 45 ° directional gradient operator and a second 135 ° directional gradient operator of the first edge map;
calculating to obtain a second edge map according to the second transverse gradient operator, the second longitudinal gradient operator, the second 45-degree directional gradient operator and the second 135-degree directional gradient operator;
respectively carrying out normalization processing on the second transverse gradient operator, the second longitudinal gradient operator, the second 45-degree direction gradient operator and the second 135-degree direction gradient operator according to the second edge map to obtain a normalized second transverse gradient operator, a normalized second longitudinal gradient operator, a normalized second 45-degree direction gradient operator and a normalized second 135-degree direction gradient operator;
distributing the weights in the x direction, the y direction, the 45-degree direction and the 135-degree direction according to the normalized second transverse gradient operator, the normalized second longitudinal gradient operator, the normalized second 45-degree direction gradient operator and the normalized second 135-degree direction gradient operator;
and determining the replacement pixel value of the pixel point in the image frame according to the weights in the x direction, the y direction, the 45-degree direction and the 135-degree direction.
Further, determining image frames of the video to be subjected to the super-resolution processing includes:
acquiring an image frame of a video to be subjected to super-resolution processing;
and carrying out sharpening processing on the image frame to obtain the sharpened image frame.
Further, determining image frames of the video to be subjected to the super-resolution processing includes:
acquiring an image frame of a video to be subjected to super-resolution processing;
and denoising the image frame to obtain a denoised image frame.
Further, sharpening the image frame to obtain a sharpened image frame includes:
determining an edge strength of an edge point of the image frame;
determining the sharpening weight of the edge point according to the edge strength of the edge point; wherein, the stronger the edge strength, the larger the sharpening weight;
and carrying out sharpening processing on the image frame according to the sharpening weight of the edge point to obtain the sharpened image frame.
Further, after obtaining the image frame after the sharpening process, the method further includes:
and carrying out black and white edge suppression processing on the sharpened image frame through a preset brightness value interval.
Further, denoising the image frame to obtain a denoised image frame, including:
inputting the image frame into a denoising model to obtain a denoised image frame;
the denoising model is obtained by taking a sample image with noise as an input and taking the denoised sample image as an output and carrying out deep learning training; in the loss function corresponding to the denoising model, a first loss calculation weight is distributed to the edge points, and a second loss calculation weight is distributed to the non-edge points, wherein the first loss calculation weight is greater than the second loss calculation weight.
In a second aspect, an embodiment of the present invention further provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor executes the computer program to implement the video super-resolution processing method according to the first aspect.
In a third aspect, an embodiment of the present invention further provides a non-transitory computer-readable storage medium, on which a computer program is stored, and when the computer program is executed by a processor, the method for video super-resolution processing according to the first aspect is implemented.
As can be seen from the foregoing technical solutions, in the video super-resolution processing method, electronic device and storage medium provided by the embodiments of the present invention, an image frame of a video to be subjected to super-resolution processing is determined; the edge strength of the image frame is then calculated according to a transverse gradient operator and a longitudinal gradient operator of the image frame together with gradient operators in directions other than the transverse and longitudinal directions, and super-resolution processing is performed on the image frame according to its edge strength. Because gradient operators in directions other than the transverse and longitudinal directions are added when calculating the edge strength, the super-resolution enhancement effect in those directions is improved, so that the improved Anime4K algorithm can be applied to general videos and is not limited to cartoon images without complex horizontal stripes.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of a video super-resolution processing method according to an embodiment of the present invention;
FIG. 2 is a flowchart of super-resolution enhancement provided by an embodiment of the present invention;
FIG. 3 is a schematic diagram of a denoising model training method according to an embodiment of the present invention;
FIG. 4 is a diagram illustrating a model inference process provided by an embodiment of the present invention;
FIG. 5 is an original image to be subjected to a super-resolution process;
FIG. 6 is an image obtained by performing a super-resolution process on the image of FIG. 5 using a bilinear interpolation method;
FIG. 7 is an image obtained by performing a super-resolution process on FIG. 5 using the method proposed by the present application;
fig. 8 is a schematic structural diagram of a video super-resolution processing apparatus according to an embodiment of the present invention;
fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The following further describes embodiments of the present invention with reference to the accompanying drawings. The following examples are only for illustrating the technical solutions of the present invention more clearly, and the protection scope of the present invention is not limited thereby.
The term "and/or" in the following embodiments describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may indicate that A exists alone, that A and B exist simultaneously, or that B exists alone.
Fig. 1 shows a flowchart of a video super-resolution processing method according to an embodiment of the present invention, and as shown in fig. 1, the video super-resolution processing method according to the embodiment of the present invention specifically includes the following contents:
step 101: determining an image frame of a video to be subjected to super-resolution processing;
in this step, it can be understood that the super-resolution processing refers to improving the resolution of the original image by a hardware or software method, that is, obtaining a high-resolution image through a series of low-resolution images is super-resolution reconstruction. The super-resolution processing technology is well applied to links such as picture enhancement, picture amplification, detail recovery and the like. In the step, image frames are extracted from the video to be subjected to the super-resolution processing, and then the subsequent super-resolution processing is carried out.
Step 102: calculating the edge strength of the image frame according to a transverse gradient operator and a longitudinal gradient operator of the image frame together with gradient operators in directions other than the transverse and longitudinal directions, and performing super-resolution processing on the image frame according to the edge strength of the image frame.
In this step, when calculating the edge strength, gradient operators in other directions are added in addition to the transverse and longitudinal gradient operators, such as operators in one or more directions between 0° and 90° and/or between 90° and 180°. The super-resolution enhancement effect in directions other than the transverse and longitudinal directions thus becomes better, so the super-resolution algorithm can be used in all video scenes, not only in animation scenes without complex horizontal stripes.
For example, in one implementation, when calculating the edge strength, a gradient operator in a 45 ° direction is additionally added in addition to the horizontal gradient operator and the vertical gradient operator.
As another example, in one implementation, a gradient operator in the 135 ° direction is additionally added in addition to the horizontal gradient operator and the vertical gradient operator when calculating the edge strength.
As another example, in one implementation, gradient operators in 45 ° and 135 ° directions are additionally added in addition to the horizontal gradient operator and the vertical gradient operator when calculating the edge strength.
As another example, in one implementation, gradient operators in 60 ° and 150 ° directions are additionally added in addition to the horizontal gradient operator and the vertical gradient operator when calculating the edge strength.
As another example, in one implementation, gradient operators in 70 ° and 160 ° directions are additionally added in addition to the horizontal gradient operator and the vertical gradient operator when calculating the edge strength.
As another example, in one implementation, gradient operators in 40 ° and 130 ° directions are additionally added in addition to the horizontal gradient operator and the vertical gradient operator when calculating the edge strength.
As another example, in one implementation, gradient operators in the directions of 30 ° and 120 ° are additionally added in addition to the horizontal gradient operator and the vertical gradient operator when calculating the edge strength.
As another example, in one implementation, gradient operators in the directions of 20 ° and 110 ° are additionally added in addition to the horizontal gradient operator and the vertical gradient operator when calculating the edge strength.
For another example, in one implementation, gradient operators in the 20°, 30°, 45°, 60°, 110°, 120°, 135° and 150° directions are additionally added when calculating the edge strength, in addition to the transverse and longitudinal gradient operators. It can be understood that when the additional gradient operators cover more directions and are distributed more evenly across all directions, the super-resolution effect in those directions is enhanced more effectively, so the final result improves.
It should be noted that the above cases are only illustrative and not limiting. In practical applications, gradient operators in one or more specific directions may be selected according to the characteristics of the image to be processed (such as its texture distribution), so that the super-resolution effect in the corresponding directions becomes more ideal. For example, one or more gradient operators in directions between 0° and 90° and/or between 90° and 180° may be selected and combined as needed. The aim of this embodiment is to enhance the super-resolution effect in directions other than the horizontal and vertical, so that the Anime4K algorithm can be applied more widely to general video images of various types, rather than only to cartoon images without complex horizontal stripes.
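As a sketch of how an operator for an arbitrary intermediate direction could be formed, one common construction, an illustrative assumption rather than a formula from this disclosure, steers the standard horizontal and vertical Sobel templates by the desired angle:

```python
import numpy as np

# Standard horizontal and vertical Sobel templates.
SX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SY = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype=float)

def steered_kernel(theta_deg: float) -> np.ndarray:
    """Derivative template oriented at theta_deg degrees, built by steering:
    D(theta) = cos(theta) * SX + sin(theta) * SY. Illustrative only; this
    disclosure uses its own fixed 45-degree and 135-degree templates."""
    t = np.deg2rad(theta_deg)
    return np.cos(t) * SX + np.sin(t) * SY
```

Note that `steered_kernel(45)` differs from the fixed 45° template used later in this disclosure; steering is simply one way to cover intermediate directions such as 20°, 60° or 150° without hand-designing each kernel.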
As can be seen from the foregoing technical solutions, in the video super-resolution processing method provided by the embodiments of the present invention, an image frame of a video to be subjected to super-resolution processing is determined; the edge strength of the image frame is then calculated according to a transverse gradient operator and a longitudinal gradient operator of the image frame together with gradient operators in directions other than the transverse and longitudinal directions, and super-resolution processing is performed on the image frame according to its edge strength. Because gradient operators in directions other than the transverse and longitudinal directions are added when calculating the edge strength, the super-resolution enhancement effect in those directions is improved, so that the improved Anime4K algorithm can be applied to general videos and is not limited to cartoon images without complex horizontal stripes.
Based on the content of the foregoing embodiment, in this embodiment, calculating the edge strength of the image frame according to a horizontal gradient operator, a vertical gradient operator, and a gradient operator in the remaining direction except for the horizontal direction and the vertical direction of the image frame, and performing a super-resolution process on the image frame according to the edge strength of the image frame includes:
and calculating the edge strength of the image frame according to the transverse gradient operator, the longitudinal gradient operator, the 45-degree directional gradient operator and the 135-degree directional gradient operator of the image frame, and performing super-resolution processing on the image frame according to the edge strength of the image frame.
In this embodiment, it should be noted that the Anime4K super-resolution technique was designed for animation data; its effect in the horizontal and vertical directions is good, but when it is applied directly to general video, the effect in other directions is not good enough and jagged artifacts appear. To address this problem of the Anime4K algorithm, gradient operators in the 45° and 135° directions are added when obtaining the edge strength, so that the super-resolution enhancement effect in directions other than the transverse and longitudinal directions becomes better; the aim of this embodiment is to enhance the super-resolution effect in directions other than the horizontal and vertical, so that the Anime4K algorithm can be applied to general videos. The typical 45° and 135° directions are chosen because they better cover texture distributions in directions other than the horizontal and vertical, so a better super-resolution effect can be obtained while adding gradient operators in only a few directions (without adding too much computation).
Based on the content of the foregoing embodiment, in this embodiment, calculating the edge strength of the image frame according to the horizontal gradient operator, the vertical gradient operator, the 45 ° directional gradient operator, and the 135 ° directional gradient operator of the image frame, and performing the super-resolution processing on the image frame according to the edge strength of the image frame includes:
calculating a first transverse gradient operator, a first longitudinal gradient operator, a first 45 ° directional gradient operator and a first 135 ° directional gradient operator of the image frame;
calculating to obtain a first edge map according to the first transverse gradient operator, the first longitudinal gradient operator, the first 45-degree directional gradient operator and the first 135-degree directional gradient operator;
calculating a second transverse gradient operator, a second longitudinal gradient operator, a second 45 ° directional gradient operator and a second 135 ° directional gradient operator of the first edge map;
calculating to obtain a second edge map according to the second transverse gradient operator, the second longitudinal gradient operator, the second 45-degree directional gradient operator and the second 135-degree directional gradient operator;
respectively carrying out normalization processing on the second transverse gradient operator, the second longitudinal gradient operator, the second 45-degree direction gradient operator and the second 135-degree direction gradient operator according to the second edge map to obtain a normalized second transverse gradient operator, a normalized second longitudinal gradient operator, a normalized second 45-degree direction gradient operator and a normalized second 135-degree direction gradient operator;
distributing the weights in the x direction, the y direction, the 45-degree direction and the 135-degree direction according to the normalized second transverse gradient operator, the normalized second longitudinal gradient operator, the normalized second 45-degree direction gradient operator and the normalized second 135-degree direction gradient operator;
and determining the replacement pixel value of the pixel point in the image frame according to the weights in the x direction, the y direction, the 45-degree direction and the 135-degree direction.
In this embodiment, Sobel operators in the 45° and 135° directions, denoted G_XY and G_YX respectively, are added for calculating the edge strength, as shown in Tables 1, 2, 3 and 4 below:
TABLE 1 (transverse operator, for G_X)
-1  0  1
-2  0  2
-1  0  1
TABLE 2 (longitudinal operator, for G_Y)
-1 -2 -1
 0  0  0
 1  2  1
TABLE 3 (45° operator, for G_XY)
-2 -1  0
-1  0  1
 0  1  2
TABLE 4 (135° operator, for G_YX)
 0 -1 -2
 1  0 -1
 2  1  0
Gradient values in the four directions are calculated by convolving the image I with the four Sobel operator templates:

G_X = S_X ⊗ I, G_Y = S_Y ⊗ I, G_XY = S_XY ⊗ I, G_YX = S_YX ⊗ I

where S_X, S_Y, S_XY and S_YX are the templates in Tables 1-4 and ⊗ represents the convolution operation, specifically calculated as follows:
G_X(x,y) = (-1)*I(x-1,y-1) + 0*I(x,y-1) + 1*I(x+1,y-1) + (-2)*I(x-1,y) + 0*I(x,y) + 2*I(x+1,y) + (-1)*I(x-1,y+1) + 0*I(x,y+1) + 1*I(x+1,y+1)
         = [I(x+1,y-1) + 2*I(x+1,y) + I(x+1,y+1)] - [I(x-1,y-1) + 2*I(x-1,y) + I(x-1,y+1)]

G_Y(x,y) = (-1)*I(x-1,y-1) + (-2)*I(x,y-1) + (-1)*I(x+1,y-1) + 0*I(x-1,y) + 0*I(x,y) + 0*I(x+1,y) + 1*I(x-1,y+1) + 2*I(x,y+1) + 1*I(x+1,y+1)
         = [I(x-1,y+1) + 2*I(x,y+1) + I(x+1,y+1)] - [I(x-1,y-1) + 2*I(x,y-1) + I(x+1,y-1)]

G_XY(x,y) = (-2)*I(x-1,y-1) + (-1)*I(x,y-1) + 0*I(x+1,y-1) + (-1)*I(x-1,y) + 0*I(x,y) + 1*I(x+1,y) + 0*I(x-1,y+1) + 1*I(x,y+1) + 2*I(x+1,y+1)
          = [I(x+1,y) + 2*I(x+1,y+1) + I(x,y+1)] - [I(x-1,y) + 2*I(x-1,y-1) + I(x,y-1)]

G_YX(x,y) = 0*I(x-1,y-1) + (-1)*I(x,y-1) + (-2)*I(x+1,y-1) + 1*I(x-1,y) + 0*I(x,y) + (-1)*I(x+1,y) + 2*I(x-1,y+1) + 1*I(x,y+1) + 0*I(x+1,y+1)
          = [I(x-1,y) + 2*I(x-1,y+1) + I(x,y+1)] - [I(x,y-1) + 2*I(x+1,y-1) + I(x+1,y)]
where I(x,y) represents the gray value of the image at point (x,y).
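The four template responses and their combination into an overall edge strength can be sketched as follows, a minimal NumPy version of the per-pixel sums above; the edge-replicated border handling is an assumption, since the disclosure does not specify how borders are treated:

```python
import numpy as np

# The four operator templates from Tables 1-4.
SX  = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)   # transverse
SY  = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype=float)   # longitudinal
SXY = np.array([[-2, -1, 0], [-1, 0, 1], [0, 1, 2]], dtype=float)   # 45 degrees
SYX = np.array([[0, -1, -2], [1, 0, -1], [2, 1, 0]], dtype=float)   # 135 degrees

def grad(img: np.ndarray, k: np.ndarray) -> np.ndarray:
    """Apply a 3x3 template to img, matching the per-pixel sums above.
    Borders are edge-replicated (an assumption)."""
    p = np.pad(img.astype(float), 1, mode="edge")
    h, w = img.shape
    out = np.zeros((h, w))
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += k[dy + 1, dx + 1] * p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
    return out

def edge_strength(img: np.ndarray) -> np.ndarray:
    """Overall gradient combining the four directional responses."""
    gx, gy, gxy, gyx = (grad(img, k) for k in (SX, SY, SXY, SYX))
    return np.sqrt(gx ** 2 + gy ** 2 + gxy ** 2 + gyx ** 2)
```

On a vertical step edge, for example, the transverse response `grad(img, SX)` is large at the step while the longitudinal response is zero, which is exactly the behavior the directional weighting below relies on.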
The overall gradient is then:

G = sqrt(G_X^2 + G_Y^2 + G_XY^2 + G_YX^2)
Then the edges of the edge map G are calculated: using G as input, the above method yields the transverse gradient G_2X, the longitudinal gradient G_2Y, the 45° gradient G_2XY, the 135° gradient G_2YX, and the overall edge strength G_2, namely:

G_2 = sqrt(G_2X^2 + G_2Y^2 + G_2XY^2 + G_2YX^2)
Then normalization is carried out:

Ĝ_2X = |G_2X| / (G_2 + eps),  Ĝ_2Y = |G_2Y| / (G_2 + eps)
Ĝ_2XY = |G_2XY| / (G_2 + eps),  Ĝ_2YX = |G_2YX| / (G_2 + eps)

where eps is to prevent division by zero.
The high-resolution image I obtained by bilinear interpolation is then reassigned: first, new values I'_x(x,y), I'_y(x,y), I'_xy(x,y) and I'_yx(x,y) are obtained for each pixel point in the transverse, longitudinal, 45° and 135° directions (the corresponding formulas appear only as images in the original filing).
Finally, weights w_x, w_y, w_xy and w_yx in the x, y, 45° and 135° directions are assigned according to the normalized gradients Ĝ_2X, Ĝ_2Y, Ĝ_2XY and Ĝ_2YX (the weight formulas appear only as images in the original filing), and the replacement pixel value I'(x,y) of the point is obtained by weighting.
The value after weighting is:
I'(x,y) = w_x*I'_x(x,y) + w_y*I'_y(x,y) + w_xy*I'_xy(x,y) + w_yx*I'_yx(x,y)
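Since the directional-value and weight formulas are reproduced only as images above, the following sketch fills them in with illustrative assumptions: each directional value is taken as the average of the two neighbors along that direction, and each weight comes from the normalized gradient of the perpendicular direction. Only the final weighted sum I'(x,y) = w_x*I'_x + w_y*I'_y + w_xy*I'_xy + w_yx*I'_yx is taken from the text.

```python
import numpy as np

EPS = 1e-6  # eps, to prevent division by zero (as in the text)

def weighted_replacement(hr, g2x, g2y, g2xy, g2yx, g2):
    """Blend directional averages of the bilinearly upscaled image `hr`
    using weights derived from the second-pass gradients. The directional
    averages and the weight mapping are assumptions, not the formulas of
    the original filing."""
    h, w = hr.shape
    # Normalize each directional gradient by the overall edge strength.
    nx = np.abs(g2x) / (g2 + EPS)
    ny = np.abs(g2y) / (g2 + EPS)
    nxy = np.abs(g2xy) / (g2 + EPS)
    nyx = np.abs(g2yx) / (g2 + EPS)
    # Assumption: a strong gradient ACROSS a direction means the edge runs
    # ALONG the perpendicular direction, so each direction is weighted by
    # the normalized gradient of its perpendicular.
    wx, wy, wxy, wyx = ny + EPS, nx + EPS, nyx + EPS, nxy + EPS
    total = wx + wy + wxy + wyx
    wx, wy, wxy, wyx = wx / total, wy / total, wxy / total, wyx / total
    p = np.pad(hr.astype(float), 1, mode="edge")
    shift = lambda dy, dx: p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
    ix  = (shift(0, -1) + shift(0, 1)) / 2     # average along x
    iy  = (shift(-1, 0) + shift(1, 0)) / 2     # average along y
    ixy = (shift(-1, 1) + shift(1, -1)) / 2    # along the 45-degree diagonal
    iyx = (shift(-1, -1) + shift(1, 1)) / 2    # along the 135-degree diagonal
    # I'(x,y) = wx*I'x + wy*I'y + wxy*I'xy + wyx*I'yx  (from the text)
    return wx * ix + wy * iy + wxy * ixy + wyx * iyx
```

In a flat region all four weights collapse to 1/4 and the output equals the input, so the replacement only redistributes pixel values near edges.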
based on the content of the above embodiment, in the present embodiment, determining the image frame of the video to be subjected to the super-resolution processing includes:
acquiring an image frame of a video to be subjected to super-resolution processing;
and denoising the image frame to obtain a denoised image frame.
In this embodiment, it should be noted that since the super-resolution technique increases the resolution of the original image, it inevitably amplifies the noise in the original image as well. It is therefore important to denoise the video image before super-resolution processing. In this embodiment, the image is denoised first and super-resolution processing is then performed on the denoised image frame, which improves the super-resolution result.
In addition, the choice of denoising method matters. Traditional denoising methods, such as guided image filtering and bilateral filtering, mainly target Gaussian noise, while most noise in video frames is compression noise, on which the traditional methods perform poorly. At present, better denoising methods are based on deep learning; however, during model training, existing loss functions cannot preserve image edges, so improving the training of the model is a promising direction. The present application improves on this point.
Based on the content of the foregoing embodiment, in this embodiment, performing denoising processing on the image frame to obtain a denoised image frame includes:
inputting the image frame into a denoising model to obtain a denoised image frame;
the denoising model is obtained by taking a sample image with noise as an input and taking the denoised sample image as an output and carrying out deep learning training; in the loss function corresponding to the denoising model, a first loss calculation weight is distributed to the edge points, and a second loss calculation weight is distributed to the non-edge points, wherein the first loss calculation weight is greater than the second loss calculation weight.
In this embodiment, it should be noted that in the prior art the same weight is assigned to edge and non-edge pixels when calculating the loss function, in which case real edges are smoothed away as if they were noise. To solve this problem, the loss function is improved during CNN denoising-model training: a weight coefficient is added to the loss function, edges are assigned a higher weight, and non-edges a lower weight, so that genuine edges are preserved during denoising.
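The edge-weighted loss idea can be sketched as follows. The `(1 + w_edge)` form is an assumption, since the patent's loss equations are shown only as images, and the pixel values are hypothetical.

```python
import numpy as np

def edge_weighted_mse(pred, target, edge_weight):
    """Per-pixel weighted MSE: edge pixels (edge_weight > 0) contribute
    more to the loss, so training does not smooth real edges away.
    The (1 + w_edge) factor reduces to plain MSE when w_edge == 0."""
    return np.mean((1.0 + edge_weight) * (pred - target) ** 2)

# Hypothetical 2x2 patches; the top-left pixel lies on an edge.
pred = np.array([[0.5, 0.4], [0.3, 0.2]])
target = np.array([[0.6, 0.4], [0.1, 0.2]])
w_edge = np.array([[1.0, 0.0], [0.0, 0.0]])

plain = edge_weighted_mse(pred, target, np.zeros_like(w_edge))
weighted = edge_weighted_mse(pred, target, w_edge)
```

An error on the edge pixel now costs more than the same error on a flat pixel, which is exactly the edge-protection effect described above.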
Based on the content of the above embodiment, in the present embodiment, determining the image frame of the video to be subjected to the super-resolution processing includes:
acquiring an image frame of a video to be subjected to super-resolution processing;
and carrying out sharpening processing on the image frame to obtain the sharpened image frame.
In this embodiment, it should be noted that sharpening improves video definition during video enhancement. After an image frame is obtained, it is first sharpened, and the sharpened image is then subjected to super-resolution processing, which improves the super-resolution result.
In this embodiment, it should be noted that a commonly used sharpening method with good results is USM (Unsharp Mask): the original video frame is blurred, the blurred frame is subtracted from the original frame to obtain an edge residual, the residual is multiplied by a coefficient (the sharpening strength), and the result is added back to the original frame to obtain the sharpened output. However, the USM method has the following problems: ① it easily amplifies fine noise; ② it easily produces black and white edges. To address these problems, on the basis of the USM method this application computes a fusion weight from the edge strength, assigning a smaller weight to weak edges and a higher weight to strong edges; the weighting avoids amplifying small noise. In addition, after the sharpening result is obtained, a pixel whose luminance value is too low may be a sharpened black edge, and a pixel whose luminance value is too high may be a sharpened white edge. To suppress black and white edges, sharpening results with too-low or too-high luminance values are corrected again. These two improvements of the present application are explained below through two examples.
Based on the content of the foregoing embodiment, in this embodiment, performing sharpening processing on the image frame to obtain a sharpened image frame includes:
determining an edge strength of an edge point of the image frame;
determining the sharpening weight of the edge point according to the edge strength of the edge point; wherein, the stronger the edge strength, the larger the sharpening weight;
and carrying out sharpening processing on the image frame according to the sharpening weight of the edge point to obtain the sharpened image frame.
Therefore, in this embodiment, to address the technical problems of the existing USM method, a fusion weight is computed from the edge strength on the basis of the USM method: weak edges are assigned a smaller weight and strong edges a higher weight. Assigning weights in this way avoids amplifying small noise and thereby solves the problem in the prior art.
Based on the content of the foregoing embodiment, in this embodiment, after obtaining the image frame after the sharpening process, the method further includes:
and carrying out black and white edge suppression processing on the sharpened image frame through a preset brightness value interval.
In this embodiment, after the sharpening result is obtained, a pixel whose luminance value is too low may be a sharpened black edge, and a pixel whose luminance value is too high may be a sharpened white edge. To suppress black and white edges, this embodiment corrects sharpening results with too-low or too-high luminance values again, thereby suppressing the black and white edges.
Therefore, the USM sharpening method is improved: the weight is computed from the edge strength, weak edges are assigned a smaller weight, and strong edges a higher weight, so that fine noise is not amplified.
The original USM sharpening result is:
S(x,y)=I(x,y)+(I(x,y)-B(x,y))*α
The improvement is that:
S(x,y) = I(x,y) + (I(x,y) - B(x,y)) * α * w_sh(x,y)
where w_sh(x, y) is the fusion weight.
In addition, in this embodiment the USM sharpening method is improved a second time. To suppress sharpened black and white edges, the sharpening strength is weakened for pixels of S(x, y) whose luminance value is too low (possibly a sharpened black edge) or too high (possibly a sharpened white edge); that is, the weighted sharpened image is multiplied by a black/white-edge penalty factor:
β(x,y)=1.0-min(((S(x,y)-127.5)/120)2,1.0)
and finally obtaining a sharpened image for inhibiting black and white edges:
S'(x,y) = I(x,y) + (I(x,y) - B(x,y)) * α * w_sh(x,y) * β(x,y)
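A minimal sketch of the improved USM with both the fusion weight and the black/white-edge penalty. A mean blur is assumed for B(x, y), and a simple |I − B|-based fusion weight stands in for the patent's Sobel-plus-polynomial mapping; alpha and lam are illustrative parameters.

```python
import numpy as np

def box_blur(img, k=3):
    """k x k mean blur used as the USM low-pass B(x, y)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def improved_usm(img, alpha=1.5, lam=0.05):
    """Improved USM sketch for a float grayscale image in [0, 255].

    w_sh : fusion weight; here |I - B| scaled into [0, 1] (the patent maps
           Sobel edge strength through a polynomial instead).
    beta : black/white-edge penalty, small where S is near 0 or 255.
    """
    B = box_blur(img)
    residual = img - B
    w_sh = np.clip(np.abs(residual) / (255.0 * lam), 0.0, 1.0)
    S = img + residual * alpha * w_sh
    beta = 1.0 - np.minimum(((S - 127.5) / 120.0) ** 2, 1.0)
    return np.clip(img + residual * alpha * w_sh * beta, 0.0, 255.0)

# A vertical step edge at mid-gray gets sharpened; flat areas are untouched.
step = np.full((5, 5), 100.0)
step[:, 2:] = 150.0
sharpened = improved_usm(step)
```

Because the penalty β shrinks toward zero as S approaches 0 or 255, overshoot near black and white is damped while mid-gray edges keep their full sharpening strength.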
Therefore, this application aims to provide a real-time super-resolution enhancement method for the mobile terminal. The method first sharpens the video picture to be super-resolved with the improved USM method, avoiding noise amplification and suppressing the generation of black and white edges; it then feeds the picture into the improved CNN denoising network model for denoising, preserving the picture's normal edges; finally, it performs the super-resolution operation on the video pictures with the improved Anime4K algorithm and outputs a high-definition video. The real-time super-resolution enhancement method provided by this embodiment is explained with reference to fig. 2, fig. 3 and fig. 4.
Referring to fig. 2, the technical solution adopted in the embodiment of the present invention is as follows:
step 1, sharpening the picture frame of the video to be processed based on the fusion weight W_sh and the black/white-edge penalty factor;
step 2, inputting the sharpened picture frame into the trained CNN denoising network model, where the loss function is improved during training based on the weight coefficient W_edge;
and step 3, carrying out super-resolution enhancement on the denoised image output by the CNN using the improved Anime4K algorithm, where the improvement is that gradient operators in the 45° and 135° directions are added when calculating the edge strength.
The sharpening process is described in detail below.
Firstly, the source picture frame to be sharpened is obtained, and Sobel edge detection is performed on it. The edge strength is then transformed using the polynomial f(x) = p5*x^5 + p4*x^4 + p3*x^3 + p2*x^2 + p1*x + p0; substituting the edge strength into this formula yields the fusion weight W_sh. W_sh assigns a smaller weight to weak edges and a higher weight to strong edges, and the sharpening result is as follows:
S(x,y) = I(x,y) + (I(x,y) - B(x,y)) * α * w_sh(x,y)
in addition, a penalty factor of black and white edges is added on the basis of the sharpening result:
β(x,y)=1.0-min(((S(x,y)-127.5)/120)2,1.0)
obtaining a final sharpened picture:
S'(x,y) = I(x,y) + (I(x,y) - B(x,y)) * α * w_sh(x,y) * β(x,y)
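The polynomial mapping from edge strength to fusion weight described above can be sketched with `np.polyval`. The coefficient values p0..p5 are not given in the text, so the linear choice below is purely illustrative.

```python
import numpy as np

def fusion_weight(edge_strength, coeffs):
    """Map edge strength through f(x) = p5*x^5 + ... + p1*x + p0 and clip
    to [0, 1]. np.polyval takes coefficients highest power first:
    coeffs = [p5, p4, p3, p2, p1, p0]."""
    return np.clip(np.polyval(coeffs, edge_strength), 0.0, 1.0)

# Illustrative coefficients only: a plain linear ramp W_sh = strength / 255.
coeffs = [0.0, 0.0, 0.0, 0.0, 1.0 / 255.0, 0.0]
w = fusion_weight(np.array([0.0, 127.5, 255.0]), coeffs)
```

With nonzero higher-order coefficients the same function gives the curved strength-to-weight mapping the text describes, suppressing weak (likely noisy) edges more aggressively than a linear ramp would.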
and then inputting the sharpened picture into a trained CNN denoising network model to obtain a denoised video picture.
As shown in fig. 3 and 4, the training process includes the following steps:
Firstly, a batch of high-definition videos is screened and then transcoded and compressed at different bit rates with different encoders to obtain several groups of low-definition videos; pictures are then randomly extracted from the low-definition videos and the corresponding high-definition videos as the training set, and finally fed into the CNN denoising network model for training. In this embodiment, the original image is blurred with mean filtering, the mean-filtered image is subtracted from the original image, and the absolute value of the difference gives the edge strength of the picture. The edge strength is multiplied by a factor λ to obtain the final weight coefficient, where λ represents the strength of edge protection:
w_edge(x,y) = |I(x,y) - B(x,y)| * λ
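This edge-weight computation can be sketched directly; the 3×3 mean-blur kernel size and λ = 0.1 are assumptions, since the text does not specify them.

```python
import numpy as np

def edge_weight(img, lam=0.1, k=3):
    """w_edge(x, y) = |I(x, y) - B(x, y)| * lambda, with B a k x k mean
    blur. lambda sets the edge-protection strength: large weights appear
    on edges, near-zero weights in flat regions."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    B = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            B += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    B /= k * k
    return np.abs(img - B) * lam

# Step image: the weight peaks at the step and vanishes in flat regions.
step = np.full((6, 6), 100.0)
step[:, 3:] = 150.0
w = edge_weight(step)
```

The resulting map is exactly the per-pixel coefficient fed into the improved loss function: pixels on the step contribute more to the training loss than pixels in the flat regions.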
The loss between the denoised high-definition output picture I' and the original high-definition picture I is then calculated. The loss function:

[Equation image omitted in the source text.]

The improvement:

[Equation image omitted in the source text.]
and finally, outputting the denoising model parameters.
Therefore, the CNN denoising model is improved: a weight coefficient is added to the loss function, so that normal edges are preserved while denoising.
In summary, when the edge strength is calculated, gradient operators in the 45° and 135° directions are added, so the original Anime4K algorithm achieves a better super-resolution effect in directions other than the horizontal and vertical. In addition, the USM sharpening method is improved: a sharpening weight is added to the original method to avoid amplifying noise, and a penalty factor is added to suppress the black and white edges produced by sharpening. A weight coefficient is also added to the loss function so that edges are preserved while denoising. The application therefore has the following advantages. A. The existing USM sharpening method is improved: compared with the original method, noise amplification during sharpening is avoided and black and white edges after sharpening are suppressed. B. The existing CNN denoising model is improved: a weight is added to the loss function, and edges are kept while denoising. C. The existing Anime4K super-resolution algorithm is improved: the super-resolution effect in directions other than the horizontal and vertical is enhanced. The effect of the solution of the present application can be seen in figs. 5, 6 and 7: fig. 5 is the original image, fig. 6 is the image after super-resolution processing by bilinear interpolation, and fig. 7 is the image after super-resolution processing by the method provided by this application. By comparison, the image produced by the method of this application is clearer than the bilinear-interpolation result.
Fig. 8 is a schematic structural diagram illustrating a video super-resolution processing apparatus according to an embodiment of the present invention. As shown in fig. 8, the video super-resolution processing apparatus according to the embodiment of the present invention includes: a determination module 21 and a processing module 22, wherein:
a determining module 21, configured to determine an image frame of a video to be subjected to super-resolution processing;
and the processing module 22 is configured to calculate an edge strength of the image frame according to a transverse gradient operator and a longitudinal gradient operator of the image frame and a gradient operator in the remaining direction except the transverse direction and the longitudinal direction, and perform a super-resolution process on the image frame according to the edge strength of the image frame.
Since the video super-resolution processing device provided by the embodiment can be used for executing the video super-resolution processing method provided by the above embodiment, the operation principle and the beneficial effects are similar, and detailed description is omitted here.
Based on the same inventive concept, another embodiment of the present invention provides an electronic device, which specifically includes the following components, with reference to fig. 9: a processor 301, a memory 302, a communication interface 303, and a communication bus 304;
the processor 301, the memory 302 and the communication interface 303 complete mutual communication through the communication bus 304; the communication interface 303 is used for realizing information transmission between the devices;
the processor 301 is configured to call a computer program in the memory 302, and when the processor executes the computer program, the processor implements all the steps of the video super-resolution processing method, for example, when the processor executes the computer program, the processor implements the following steps: determining an image frame of a video to be subjected to super-resolution processing; and calculating the edge strength of the image frame according to a transverse gradient operator and a longitudinal gradient operator of the image frame and the gradient operators in the rest directions except the transverse direction and the longitudinal direction, and performing super-division processing on the image frame according to the edge strength of the image frame.
Based on the same inventive concept, another embodiment of the present invention provides a non-transitory computer-readable storage medium, having a computer program stored thereon, which when executed by a processor implements all the steps of the video super-resolution processing method described above, for example, when the processor executes the computer program, the processor implements the following steps: determining an image frame of a video to be subjected to super-resolution processing; and calculating the edge strength of the image frame according to a transverse gradient operator and a longitudinal gradient operator of the image frame and the gradient operators in the rest directions except the transverse direction and the longitudinal direction, and performing super-division processing on the image frame according to the edge strength of the image frame.
In addition, the logic instructions in the memory may be implemented in the form of software functional units and may be stored in a computer readable storage medium when sold or used as a stand-alone product. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the embodiment of the present invention. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. Based on such understanding, the above technical solutions may be essentially or partially implemented in the form of software products, which may be stored in a computer-readable storage medium, such as ROM/RAM, magnetic disk, optical disk, etc., and include instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the video super-resolution processing method according to the embodiments or some parts of the embodiments.
In addition, in the present invention, terms such as "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
Moreover, in the present invention, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
Furthermore, in the description herein, reference to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. A video super-resolution processing method is characterized by comprising the following steps:
determining an image frame of a video to be subjected to super-resolution processing;
and calculating the edge strength of the image frame according to a transverse gradient operator and a longitudinal gradient operator of the image frame and the gradient operators in the rest directions except the transverse direction and the longitudinal direction, and performing super-division processing on the image frame according to the edge strength of the image frame.
2. The video super-resolution processing method according to claim 1, wherein calculating edge strengths of the image frames according to a horizontal gradient operator, a vertical gradient operator, and a gradient operator in a remaining direction other than the horizontal direction and the vertical direction of the image frames, and super-resolution processing the image frames according to the edge strengths of the image frames comprises:
and calculating the edge strength of the image frame according to the transverse gradient operator, the longitudinal gradient operator, the 45-degree direction gradient operator and the 135-degree direction gradient operator of the image frame, and performing super-division processing on the image frame according to the edge strength of the image frame.
3. The video super-resolution processing method according to claim 2, wherein calculating the edge strength of the image frame according to the transverse gradient operator, the longitudinal gradient operator, the 45 ° directional gradient operator, and the 135 ° directional gradient operator of the image frame, and performing super-resolution processing on the image frame according to the edge strength of the image frame comprises:
calculating a first transverse gradient operator, a first longitudinal gradient operator, a first 45 ° directional gradient operator and a first 135 ° directional gradient operator of the image frame;
calculating to obtain a first edge map according to the first transverse gradient operator, the first longitudinal gradient operator, the first 45-degree directional gradient operator and the first 135-degree directional gradient operator;
calculating a second transverse gradient operator, a second longitudinal gradient operator, a second 45 ° directional gradient operator and a second 135 ° directional gradient operator of the first edge map;
calculating to obtain a second edge map according to the second transverse gradient operator, the second longitudinal gradient operator, the second 45-degree directional gradient operator and the second 135-degree directional gradient operator;
respectively carrying out normalization processing on the second transverse gradient operator, the second longitudinal gradient operator, the second 45-degree direction gradient operator and the second 135-degree direction gradient operator according to the second edge map to obtain a normalized second transverse gradient operator, a normalized second longitudinal gradient operator, a normalized second 45-degree direction gradient operator and a normalized second 135-degree direction gradient operator;
distributing the weights in the x direction, the y direction, the 45-degree direction and the 135-degree direction according to the normalized second transverse gradient operator, the normalized second longitudinal gradient operator, the normalized second 45-degree direction gradient operator and the normalized second 135-degree direction gradient operator;
and determining the replacement pixel value of the pixel point in the image frame according to the weights in the x direction, the y direction, the 45-degree direction and the 135-degree direction.
4. The video super-resolution processing method according to claim 1, wherein determining the image frames of the video to be super-resolved comprises:
acquiring an image frame of a video to be subjected to super-resolution processing;
and carrying out sharpening processing on the image frame to obtain the sharpened image frame.
5. The video super-resolution processing method according to claim 1, wherein determining the image frames of the video to be super-resolved comprises:
acquiring an image frame of a video to be subjected to super-resolution processing;
and denoising the image frame to obtain a denoised image frame.
6. The video super-resolution processing method according to claim 4, wherein sharpening the image frame to obtain a sharpened image frame comprises:
determining an edge strength of an edge point of the image frame;
determining the sharpening weight of the edge point according to the edge strength of the edge point; wherein, the stronger the edge strength, the larger the sharpening weight;
and carrying out sharpening processing on the image frame according to the sharpening weight of the edge point to obtain the sharpened image frame.
7. The video super-resolution processing method according to claim 6, wherein after obtaining the image frame after sharpening, the method further comprises:
and carrying out black and white edge suppression processing on the sharpened image frame through a preset brightness value interval.
8. The video super-resolution processing method according to claim 5, wherein denoising the image frame to obtain a denoised image frame comprises:
inputting the image frame into a denoising model to obtain a denoised image frame;
the denoising model is obtained by taking a sample image with noise as an input and taking the denoised sample image as an output and carrying out deep learning training; in the loss function corresponding to the denoising model, a first loss calculation weight is distributed to the edge points, and a second loss calculation weight is distributed to the non-edge points, wherein the first loss calculation weight is greater than the second loss calculation weight.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the video super-resolution processing method according to any one of claims 1 to 8 when executing the computer program.
10. A non-transitory computer readable storage medium having stored thereon a computer program, wherein the computer program, when executed by a processor, implements the video super-resolution processing method according to any one of claims 1 to 8.
CN202110351978.2A 2021-03-31 2021-03-31 Video super processing method, electronic device and storage medium Active CN113096014B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110351978.2A CN113096014B (en) 2021-03-31 2021-03-31 Video super processing method, electronic device and storage medium

Publications (2)

Publication Number Publication Date
CN113096014A true CN113096014A (en) 2021-07-09
CN113096014B CN113096014B (en) 2023-12-08

Family

ID=76672577

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110351978.2A Active CN113096014B (en) 2021-03-31 2021-03-31 Video super processing method, electronic device and storage medium

Country Status (1)

Country Link
CN (1) CN113096014B (en)

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102651122A (en) * 2011-02-24 2012-08-29 索尼公司 Image enhancement apparatus and method
CN103391392A (en) * 2012-05-11 2013-11-13 索尼公司 Image enhancement apparatus and method
US20140126834A1 (en) * 2011-06-24 2014-05-08 Thomson Licensing Method and device for processing of an image
CN104637064A (en) * 2015-02-28 2015-05-20 中国科学院光电技术研究所 Defocus blurred image definition detecting method based on edge strength weight
CN105678700A (en) * 2016-01-11 2016-06-15 苏州大学 Image interpolation method and system based on prediction gradient
US9589324B1 (en) * 2014-03-27 2017-03-07 Pixelworks, Inc. Overshoot protection of upscaled images
CN106604057A (en) * 2016-12-07 2017-04-26 乐视控股(北京)有限公司 Video processing method and apparatus thereof
CN109242807A (en) * 2018-11-07 2019-01-18 厦门欢乐逛科技股份有限公司 Rendering parameter adaptive edge softening method, medium and computer equipment
CN109934785A (en) * 2019-03-12 2019-06-25 湖南国科微电子股份有限公司 Image sharpening method and device
CN110880160A (en) * 2019-11-14 2020-03-13 Oppo广东移动通信有限公司 Picture frame super-division method and device, terminal equipment and computer readable storage medium
CN110956594A (en) * 2019-11-27 2020-04-03 北京金山云网络技术有限公司 Image filtering method and device, electronic equipment and storage medium
CN111028182A (en) * 2019-12-24 2020-04-17 北京金山云网络技术有限公司 Image sharpening method and device, electronic equipment and computer-readable storage medium
CN111415399A (en) * 2020-03-19 2020-07-14 北京奇艺世纪科技有限公司 Image processing method, image processing device, electronic equipment and computer readable storage medium
CN111986069A (en) * 2019-05-22 2020-11-24 三星电子株式会社 Image processing apparatus and image processing method thereof
CN112561802A (en) * 2021-02-20 2021-03-26 杭州太美星程医药科技有限公司 Interpolation method of continuous sequence images, interpolation model training method and system thereof

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Wang Qi; Xie Shucui; Wang Zhiqi: "Image super-resolution reconstruction algorithm based on sparse representation", Modern Electronics Technique, vol. 42, no. 3, pages 45-48 *

Also Published As

Publication number Publication date
CN113096014B (en) 2023-12-08

Similar Documents

Publication Publication Date Title
Xiao et al. An enhancement method for X-ray image via fuzzy noise removal and homomorphic filtering
US9142009B2 (en) Patch-based, locally content-adaptive image and video sharpening
Liu et al. Image denoising via adaptive soft-thresholding based on non-local samples
CN103236040B (en) Color enhancement method and device
Gupta et al. Review of different local and global contrast enhancement techniques for a digital image
Ma et al. An effective fusion defogging approach for single sea fog image
Xu et al. A new approach for very dark video denoising and enhancement
CN110298792B (en) Low-illumination image enhancement and denoising method, system and computer equipment
US7826678B2 (en) Adaptive image sharpening method
Zheng et al. Ultra-high-definition image HDR reconstruction via collaborative bilateral learning
CN111353955A (en) Image processing method, device, equipment and storage medium
CN105931206A (en) Method for enhancing sharpness of color image with color constancy
Mu et al. Low and non-uniform illumination color image enhancement using weighted guided image filtering
US8265419B2 (en) Image processing apparatus and image processing method
CN114092407A (en) Method and device for processing video conference shared document in clear mode
CN111415317B (en) Image processing method and device, electronic equipment and computer readable storage medium
Li et al. Saliency guided naturalness enhancement in color images
Cho et al. Enhancement technique of image contrast using new histogram transformation
Li et al. Adaptive Bregmanized total variation model for mixed noise removal
CN113096014B (en) Video super-resolution processing method, electronic device and storage medium
Jeon Computational intelligence approach for medical images by suppressing noise
CN115063314A (en) Self-adaptive video sharpening method, device and equipment based on table lookup method and storage medium
Sheng et al. Mixed noise removal by bilateral weighted sparse representation
Chen et al. An entropy-preserving histogram modification algorithm for image contrast enhancement
Jeon Denoising in contrast-enhanced X-ray images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant