CN110619668B - Image abstraction method and device and terminal equipment - Google Patents

Image abstraction method and device and terminal equipment

Info

Publication number
CN110619668B
Authority
CN
China
Prior art keywords
image
level
kth
structure tensor
fusion
Prior art date
Legal status
Active
Application number
CN201910772446.9A
Other languages
Chinese (zh)
Other versions
CN110619668A
Inventor
冼雪琳
张运生
Current Assignee
Shenzhen Institute of Information Technology
Original Assignee
Shenzhen Institute of Information Technology
Priority date
Filing date
Publication date
Application filed by Shenzhen Institute of Information Technology filed Critical Shenzhen Institute of Information Technology
Priority to CN201910772446.9A priority Critical patent/CN110619668B/en
Publication of CN110619668A publication Critical patent/CN110619668A/en
Application granted granted Critical
Publication of CN110619668B publication Critical patent/CN110619668B/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T5/70 Denoising; Smoothing
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/001 Texturing; Colouring; Generation of texture or colour
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/02 Non-photorealistic rendering

Abstract

The application is applicable to the technical field of image processing, and provides an image abstraction method, an image abstraction device and terminal equipment. The method comprises the following steps: down-sampling a first image to obtain second images; up-sampling the abstract image of the (k-1)-th level image to obtain a k-th level up-sampled image, and fusing the k-th level up-sampled image with the k-th level image to obtain a k-th level fusion image; up-sampling the (k-1)-th level fusion structure tensor to obtain a k-th level up-sampled structure tensor, and fusing the structure tensor of the k-th level fusion image with the k-th level up-sampled structure tensor to obtain a k-th level fusion structure tensor; performing abstraction processing on the k-th level fusion image with an anisotropic Kuwahara filter according to the k-th level fusion structure tensor to obtain the abstract image corresponding to the k-th level image; and outputting the target abstract image. The method and the device can solve the prior-art problem that, when an anisotropic Kuwahara filter is used for image abstraction, raising the level of abstraction easily produces artifacts.

Description

Image abstraction method and device and terminal equipment
Technical Field
The present application belongs to the field of image processing technologies, and in particular, to an image abstraction method, an image abstraction device, and a terminal device.
Background
Image abstraction refers to creating non-photorealistic rendered images from images or videos, for example creating cartoon-style images from images or videos.
Image abstraction does not focus on simulating a particular artistic skill or style, but rather simplifies scene information by removing unnecessary detail that is irrelevant to the purpose at hand.
Common approaches to image abstraction at present are image segmentation and the use of filters. Among filters, the anisotropic Kuwahara filter performs well: unlike other nonlinear smoothing filters, it resists high-contrast noise, avoids excessive blurring of low-contrast regions, provides a consistent level of abstraction throughout the image, and, when applied to video, achieves excellent temporal coherence.
However, the level of abstraction achievable by the anisotropic Kuwahara filter is limited by the filter radius, and the current practice for increasing the level of abstraction is to enlarge the filter radius. Simply enlarging the filter radius is not a good solution, however, since it easily introduces artifacts.
In summary, when an image is abstracted by using an anisotropic Kuwahara filter, if the abstraction level needs to be increased, the filter radius needs to be increased, and artifacts are likely to occur.
Disclosure of Invention
In view of this, embodiments of the present application provide an image abstraction method, an image abstraction apparatus, and a terminal device, so as to solve the problem that when an anisotropic Kuwahara filter is used for image abstraction in the prior art, if an abstraction level needs to be increased, a filter radius needs to be increased, and artifacts are easily generated.
A first aspect of an embodiment of the present application provides an image abstraction method, including:
carrying out downsampling on the first image through a preset downsampling filter to obtain a preset number of second images;
arranging the first image and the second image according to the sequence of image resolution from small to large, taking the second image with the minimum resolution as a level 1 image, and taking the first image as an N +1 level image, wherein N is the preset number;
performing upsampling processing on an abstract image corresponding to a k-1 level image to obtain a k level upsampled image with the resolution consistent with that of the k level image, and performing weighted fusion on the k level upsampled image and the k level image to obtain a k level fusion image, wherein the initial value of k is 1, and when k is equal to 1, the k level fusion image is consistent with the k level image;
calculating a structure tensor of a kth-level fusion image, performing upsampling on a kth-1-level fusion structure tensor to obtain a kth-level upsampled structure tensor, and performing weighted fusion on the structure tensor of the kth-level fusion image and the kth-level upsampled structure tensor to obtain a kth-level fusion structure tensor, wherein when k is equal to 1, the structure tensor of the kth-level fusion image is consistent with the kth-level fusion structure tensor;
performing abstraction processing on the kth-level fusion image by using an anisotropic Kuwahara filter according to the kth-level fusion structure tensor to obtain an abstract image corresponding to the kth-level image;
and sequentially calculating the abstract images corresponding to the images of all levels, and outputting the abstract image corresponding to the (N + 1) th level image as a target abstract image to finish the image abstraction operation.
A second aspect of an embodiment of the present application provides an image abstraction apparatus, including:
the down-sampling module is used for down-sampling the first image through a preset down-sampling filter to obtain a preset number of second images;
the image sorting module is used for sorting the first image and the second image according to the sequence of image resolution from small to large, taking the second image with the minimum resolution as a level 1 image and taking the first image as an N +1 level image, wherein N is the preset number;
the image fusion module is used for performing up-sampling processing on an abstract image corresponding to a k-1 level image to obtain a k level up-sampled image with the resolution consistent with that of the k level image, and performing weighted fusion on the k level up-sampled image and the k level image to obtain a k level fusion image, wherein the initial value of k is 1, and when k is equal to 1, the k level fusion image is consistent with the k level image;
the structure fusion module is used for calculating a structure tensor of a kth-level fusion image, up-sampling the kth-1-level fusion structure tensor to obtain a kth-level up-sampling structure tensor, and performing weighted fusion on the structure tensor of the kth-level fusion image and the kth-level up-sampling structure tensor to obtain a kth-level fusion structure tensor, wherein when k is equal to 1, the structure tensor of the kth-level fusion image is consistent with the kth-level fusion structure tensor;
the abstract processing module is used for carrying out abstract processing on the kth-level fusion image by using an anisotropic Kuwahara filter according to the kth-level fusion structure tensor to obtain an abstract image corresponding to the kth-level image;
and the target output module is used for sequentially calculating the abstract images corresponding to the images of all levels, outputting the abstract image corresponding to the (N + 1) th level image as a target abstract image and finishing the image abstraction operation.
A third aspect of the embodiments of the present application provides a terminal device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and the processor implements the steps of the method when executing the computer program.
A fourth aspect of embodiments of the present application provides a computer-readable storage medium, in which a computer program is stored, which, when executed by a processor, implements the steps of the method as described above.
Compared with the prior art, the embodiment of the application has the advantages that:
according to the image abstraction method, the first image and the second image are arranged according to the pyramid structure, the abstract image and the structure tensor are solved layer by layer and are spread to the upper-level image, a strong abstraction effect is provided, the radius of the filter of the anisotropic Kuwahara filter is not required to be increased, and the problem that when the anisotropic Kuwahara filter is used for image abstraction in the prior art, if the abstraction level is required to be increased, the radius of the filter is required to be increased, and artifacts are easily generated is solved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flow chart illustrating an implementation of an image abstraction method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of an image abstraction device according to an embodiment of the present application;
fig. 3 is a schematic diagram of a terminal device provided in an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
In order to explain the technical solution described in the present application, the following description will be given by way of specific examples.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the present application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in the specification of the present application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon" or "in response to a determination" or "in response to a detection". Similarly, the phrase "if it is determined" or "if a [ described condition or event ] is detected" may be interpreted contextually to mean "upon determining" or "in response to determining" or "upon detecting [ described condition or event ]" or "in response to detecting [ described condition or event ]".
In addition, in the description of the present application, the terms "first", "second", and the like are used only for distinguishing the description, and are not intended to indicate or imply relative importance.
The first embodiment is as follows:
referring to fig. 1, an image abstraction method provided in an embodiment of the present application is described below, where the image abstraction method in the embodiment of the present application includes:
s101, performing downsampling on the first image through a preset downsampling filter to obtain a preset number of second images;
when the image abstraction is performed, a preset downsampling filter may be used to downsample the first image to obtain a preset number of second images, where the first image is an input image to be abstracted.
Step S102, arranging the first image and the second image according to the sequence of image resolution from small to large, taking the second image with the minimum resolution as a level 1 image, and taking the first image as an N +1 level image, wherein N is the preset number;
after the first image and the second image are obtained, the first image and the second image may be arranged in order of the image resolutions from small to large, the second image with the smallest resolution is used as the level 1 image, and the first image is used as the N +1 level image because the resolution of the first image is the largest, and N is a preset number, so as to form a pyramid structure.
Step S103, performing upsampling processing on an abstract image corresponding to a k-1 level image to obtain a k level upsampled image with the resolution consistent with that of the k level image, and performing weighted fusion on the k level upsampled image and the k level image to obtain a k level fusion image, wherein the initial value of k is 1, and when k is equal to 1, the k level fusion image is consistent with the k level image;
in the kth level, there are a kth level image, a kth level up-sampled image, and a kth level fused image.
And the k-th level up-sampling image is obtained by up-sampling the abstract image corresponding to the k-1 level image, and the resolution of the k-th level up-sampling image is consistent with that of the k-th level image.
The k-th level fusion image is obtained by weighted fusion of the k-th level image and the k-th level up-sampled image, and the weighted fusion of the images can be expressed as:
f_k3 = β_k·f_k1 + (1 − β_k)·f_k2
wherein f_k1 represents the k-th level image, f_k2 represents the k-th level up-sampled image, f_k3 represents the k-th level fusion image, and β_k represents the k-th level image fusion weight.
β_k can be set according to actual conditions: it may be set as a fixed weight, or computed independently for each level. When computed independently for each level, the computational expression of β_k may be:
β_k = clamp(s_qk·P_s·(P_d)^k / τ_2, 0, 1)
wherein P_s, P_d and τ_2 are fixed parameters; P_s may be chosen as 0.5, P_d may be chosen as 1.25, and a typical value of τ_2 may be chosen as 0.1. P_s and P_d provide additional user control: P_s is applied uniformly to all levels, while P_d takes the proportion of each level into account. τ_2 is used to suppress small standard deviations caused by noise.
s_qk is obtained by up-sampling the value s_(k-1)max associated with each pixel point of the (k-1)-th level fusion image, where s_(k-1)max is given by:
s_(k-1)max = max{ s_(k-1)i : 1 ≤ i ≤ q }
wherein i is a positive integer greater than or equal to 1 and less than or equal to q, q is the number of sectors of the anisotropic Kuwahara filter, and s_(k-1)i is the standard deviation of the i-th sector of the anisotropic Kuwahara filter at that pixel point of the (k-1)-th level fusion image.
When k is 1, since there is no abstract image corresponding to the 0 th-level image, there is no 1 st-level up-sampled image, and the 1 st-level image is directly used as the 1 st-level fusion image.
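For concreteness, a minimal sketch of this weighted image fusion follows. It uses the per-level weight β_k = clamp(s_qk·P_s·(P_d)^k/τ_2, 0, 1) as reconstructed above; the exact placement of τ_2 in that formula is an assumption recovered from the surrounding text, and the function name fuse_level_image is illustrative.

```python
import numpy as np

def fuse_level_image(f_k1, f_k2, s_qk, k, p_s=0.5, p_d=1.25, tau2=0.1):
    """Weighted fusion of the k-th level image f_k1 with the k-th level
    up-sampled image f_k2 (step S103): f_k3 = beta_k*f_k1 + (1-beta_k)*f_k2.

    f_k1, f_k2: float arrays of identical shape.
    s_qk: per-pixel up-sampled maximum sector standard deviation
          s_(k-1)max from the previous level.
    """
    beta_k = np.clip(s_qk * p_s * p_d ** k / tau2, 0.0, 1.0)
    if f_k1.ndim == 3 and beta_k.ndim == 2:
        beta_k = beta_k[..., None]  # broadcast the weight over color channels
    return beta_k * f_k1 + (1.0 - beta_k) * f_k2
```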
Step S104, calculating a structure tensor of a kth-level fusion image, performing up-sampling on the k-1-level fusion structure tensor to obtain a kth-level up-sampling structure tensor, and performing weighted fusion on the structure tensor of the kth-level fusion image and the kth-level up-sampling structure tensor to obtain a kth-level fusion structure tensor, wherein when k is equal to 1, the structure tensor of the kth-level fusion image is consistent with the kth-level fusion structure tensor;
after the k-th level fused image is obtained, the structure tensor of the k-th level fused image can be calculated.
When calculating the structure tensor, approximate partial derivatives are needed. In this embodiment, 3×3 derivative kernels optimized for rotational symmetry may preferably be used, which can be expressed as:
D_x = (1/2)·[ −p_1 0 p_1 ; −p_2 0 p_2 ; −p_1 0 p_1 ],  D_y = D_x^T
wherein p_1 and p_2 = 1 − 2·p_1 are the kernel coefficients. Assuming that f denotes the input image, the partial derivative approximations in the x and y directions can then be expressed as:
∂f/∂x ≈ f ∗ D_x
∂f/∂y ≈ f ∗ D_y
wherein ∂f/∂x denotes the partial derivative in the x direction and ∂f/∂y denotes the partial derivative in the y direction.
When the structure tensor is calculated, the specific calculation mode can be selected according to actual conditions. In this embodiment the smoothed structure tensor is preferably calculated, since for noise-corrupted images the smoothed structure tensor allows the image to be processed more stably.
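The following sketch computes the smoothed structure tensor with these kernels. The coefficient value p_1 = 0.183 (hence p_2 = 1 − 2·p_1) and the smoothing scale sigma are assumptions: 0.183 is the value commonly used for this rotationally optimized derivative filter, and sigma is a free parameter of the smoothed structure tensor.

```python
import numpy as np
from scipy.ndimage import convolve, gaussian_filter

P1 = 0.183               # assumed kernel coefficient p_1
P2 = 1.0 - 2.0 * P1      # p_2
DX = 0.5 * np.array([[-P1, 0.0, P1],
                     [-P2, 0.0, P2],
                     [-P1, 0.0, P1]])
DY = DX.T

def smoothed_structure_tensor(gray, sigma=2.0):
    """Per-pixel smoothed structure tensor (J11, J12, J22) of a
    grayscale float image, as preferred in this embodiment for noisy input."""
    fx = convolve(gray, DX, mode='nearest')  # approximate df/dx
    fy = convolve(gray, DY, mode='nearest')  # approximate df/dy
    j11 = gaussian_filter(fx * fx, sigma)
    j12 = gaussian_filter(fx * fy, sigma)
    j22 = gaussian_filter(fy * fy, sigma)
    return j11, j12, j22
```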
In the kth level, there is a structure tensor of the kth-level fused image, a kth-level upsampled structure tensor, and a kth-level fused structure tensor.
The k-th up-sampling structure tensor is obtained by up-sampling the k-1-th fusion structure tensor, and the k-th fusion structure tensor is obtained by weighted fusion of the structure tensor of the k-th fusion image and the k-th up-sampling structure tensor.
When k is 1, since the 0 th-level fusion structure tensor does not exist, the 1 st-level upsampling structure tensor does not exist, and the structure tensor of the 1 st-level fusion image is directly used as the 1 st-level fusion structure tensor.
Step S105, performing abstraction processing on the kth-level fusion image by using an anisotropic Kuwahara filter according to the kth-level fusion structure tensor to obtain an abstract image corresponding to the kth-level image;
after the kth-level fusion image and the kth-level fusion structure tensor are obtained through calculation, the kth-level fusion image can be abstracted by using an anisotropic Kuwahara filter according to the kth-level fusion structure tensor, and an abstract image corresponding to the kth-level image is obtained.
And S106, sequentially calculating the abstract images corresponding to the images of all levels, and outputting the abstract image corresponding to the (N + 1) th level image as a target abstract image to finish the image abstraction operation.
Repeating the above process, the abstract images corresponding to the images of all levels are calculated in sequence, with the abstract image and fused structure tensor of each level propagated to the next level, until the abstract image corresponding to the (N+1)-th level image, namely the abstract image corresponding to the first image, is obtained; this abstract image is output as the target abstract image, completing the image abstraction operation.
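Putting the steps together, the coarse-to-fine loop can be sketched as below. fuse_level_image and smoothed_structure_tensor are the sketches above and fuse_structure_tensors is sketched further down; kuwahara_filter, upsample and to_gray are assumed helpers (an anisotropic Kuwahara pass returning the filtered image together with the per-pixel maximum sector standard deviation, a bilinear resize to a target shape, and a grayscale conversion). The levels are assumed to be float arrays.

```python
def multiscale_abstraction(levels, kuwahara_filter, upsample, to_gray):
    """Steps S103 to S106: solve the abstract image and fused structure
    tensor level by level and propagate them to the next level."""
    abstract = None   # abstract image of the previous level
    J_fused = None    # fused structure tensor of the previous level
    s_max = None      # per-pixel max sector standard deviation, previous level
    for k, img in enumerate(levels, start=1):
        if k == 1:
            fused = img  # level 1 fusion image equals the level 1 image
        else:
            f_k2 = upsample(abstract, img.shape[:2])  # k-th level up-sampled image
            s_qk = upsample(s_max, img.shape[:2])
            fused = fuse_level_image(img, f_k2, s_qk, k)
        J = smoothed_structure_tensor(to_gray(fused))
        if k > 1:
            J_up = tuple(upsample(c, img.shape[:2]) for c in J_fused)
            J = fuse_structure_tensors(J, J_up)       # k-th level fused tensor
        J_fused = J
        abstract, s_max = kuwahara_filter(fused, J_fused)
    return abstract  # abstract image of the (N+1)-th level image
```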
Further, the calculating a structure tensor of the kth-level fusion image, upsampling the k-1-level fusion structure tensor to obtain a kth-level upsampled structure tensor, and performing weighted fusion on the structure tensor of the kth-level fusion image and the kth-level upsampled structure tensor to obtain the kth-level fusion structure tensor specifically includes:
a1, calculating the structure tensor of the kth level fusion image, and calculating the anisotropy value A of the kth level according to the structure tensor of the kth level fusion imagek
Calculating the structure tensor of the kth level fusion image, wherein the structure tensor of the kth level fusion image has a group of eigenvalues lambdak1And λk2The k-th order anisotropy value A may be calculatedk
Figure BDA0002174005860000081
AkRanges from 0 to 1, where 0 represents isotropy and 1 represents complete anisotropy.
A2, up-sampling the (k-1)-th level fusion structure tensor to obtain the k-th level up-sampled structure tensor, and up-sampling the (k-1)-th level anisotropy value A_(k-1) to obtain the k-th level up-sampled anisotropy value A_ks;
That is, while the (k-1)-th level fusion structure tensor is up-sampled to obtain the k-th level up-sampled structure tensor, the (k-1)-th level anisotropy value A_(k-1) is also up-sampled to obtain the k-th level up-sampled anisotropy value A_ks.
A3, dividing the k-th level anisotropy value A_k by the sum of the k-th level anisotropy value A_k and the k-th level up-sampled anisotropy value A_ks to obtain the first weighting factor α_k;
Before the weighted fusion of the structure tensors, the first weighting factor α_k is determined. The weighting factor α_k can be set according to actual requirements; for example, it may be calculated as:
α_k = A_k / (A_k + A_ks)
Dividing the k-th level anisotropy value A_k by the sum of the k-th level anisotropy value A_k and the k-th level up-sampled anisotropy value A_ks gives more weight to the more anisotropic structure tensor, which in turn leads to a more robust estimation result.
And A4, performing weighted fusion on the kth-level up-sampling structure tensor and the kth-level structure tensor according to the first weighting factor to obtain a kth-level fusion structure tensor.
After the first weighting factor α_k is calculated, the k-th level up-sampled structure tensor and the k-th level structure tensor can be weighted and fused according to the first weighting factor to obtain the k-th level fused structure tensor. The weighted fusion of the structure tensors can be expressed as:
J_ρkr = α_k·J_ρk + (1 − α_k)·J_ρks
wherein J_ρkr represents the k-th level fused structure tensor, J_ρk represents the k-th level structure tensor, and J_ρks represents the k-th level up-sampled structure tensor.
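A sketch of steps A1 to A4 follows. For compactness it derives the up-sampled anisotropy A_ks from the up-sampled tensor itself rather than up-sampling A_(k-1) as a separate array, which is a simplification of step A2; eps guards the divisions in flat regions.

```python
import numpy as np

def anisotropy(j11, j12, j22, eps=1e-8):
    """A = (lambda_1 - lambda_2) / (lambda_1 + lambda_2), from the
    eigenvalues of the 2x2 structure tensor [[j11, j12], [j12, j22]]."""
    root = np.sqrt((j11 - j22) ** 2 + 4.0 * j12 ** 2)
    lam1 = 0.5 * (j11 + j22 + root)
    lam2 = 0.5 * (j11 + j22 - root)
    return (lam1 - lam2) / (lam1 + lam2 + eps)

def fuse_structure_tensors(J_k, J_ks, eps=1e-8):
    """Weighted fusion J_rkr = alpha_k*J_rk + (1 - alpha_k)*J_rks, with
    alpha_k = A_k / (A_k + A_ks) favouring the more anisotropic tensor."""
    A_k = anisotropy(*J_k)
    A_ks = anisotropy(*J_ks)          # simplification: computed, not up-sampled
    alpha = A_k / (A_k + A_ks + eps)  # first weighting factor alpha_k
    return tuple(alpha * a + (1.0 - alpha) * b for a, b in zip(J_k, J_ks))
```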
Further, the abstracting the kth-level fusion image by using an anisotropic Kuwahara filter according to the kth-level fusion structure tensor to obtain an abstract image corresponding to the kth-level image specifically includes:
b1, according to the k-th level fusion structure tensor, calculating the local corresponding to each sector in the anisotropic Kuwahara filter corresponding to the pixel point in the k-th level fusion imagePartial weighted average mkiAnd standard deviation skiWherein i is a positive integer greater than or equal to 1, and i is less than or equal to q, q is the number of sectors of the anisotropic Kuwahara filter;
when an anisotropic Kuwahara filter is used, the image of the input filter is denoted by f, (x)0,y0) Representing any one of the pixel coordinates on the image,
Figure BDA0002174005860000092
representing the local direction, a representing the tuning parameter, typically set to 1, r being the filter radius, the eccentricity S can be defined as:
Figure BDA0002174005860000101
let
Figure BDA0002174005860000104
Is shown to pass through
Figure BDA0002174005860000105
Defined rotation matrix, map
Figure BDA0002174005860000106
The elliptical filter window is linearly translated into a unit disk.
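A small sketch of this mapping, under the definition of S reconstructed above:

```python
import numpy as np

def ellipse_to_disk_map(phi, A, alpha=1.0):
    """Return the 2x2 matrix S @ R(-phi) that maps the elliptical filter
    window with local orientation phi and anisotropy A onto the unit disk."""
    S = np.diag([alpha / (alpha + A), (alpha + A) / alpha])
    c, s = np.cos(-phi), np.sin(-phi)
    R = np.array([[c, -s],
                  [s,  c]])
    return S @ R
```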
The anisotropic Kuwahara filter divides the elliptical filter window into different sectors, analogous to the rectangular regions of the original Kuwahara filter; q is the number of sectors, typically set to 4 or 8, and different sectors must have overlapping parts.
The anisotropic Kuwahara filter uses a weighting function to define the degree of influence of the pixels in a sector on that sector. A common practice is to define a corresponding weighting function on the unit disk and then transfer it to the elliptical window. On the unit disk, the weighting function can be defined as:
ω_di = (χ_i ∗ G_σs)·G_σr
wherein ω_di is the weighting function of the i-th sector on the unit disk, χ_i is the characteristic function of the i-th sector, G_σs denotes the Gaussian kernel with standard deviation σ_s, and G_σr denotes the Gaussian kernel with standard deviation σ_r; a reasonable value of σ_r can be set to 0.4, and σ_s can be set to one third of σ_r.
The weighting functions of the other sectors can be obtained by smoothing χ_i in the same way, or simply by rotating ω_di.
The weighting function on the unit disk can then be pulled back to the elliptical window, yielding the weighting function ω_i defined on the ellipse:
ω_i = ω_di ∘ (S·R_(−φ))
Then, the local weighted average m_ki and standard deviation s_ki corresponding to each sector can be calculated. Their calculation expressions are:
m_ki = (1/κ_i)·Σ f(x)·ω_i(x − (x_0, y_0))
s_ki² = (1/κ_i)·Σ f(x)²·ω_i(x − (x_0, y_0)) − m_ki²
wherein the sums run over the pixel points in the i-th sector and κ_i = Σ ω_i(x − (x_0, y_0)) is the corresponding normalization factor.
B2, acquiring a preset standard deviation threshold τ_1, and calculating the weight value z_ki corresponding to each sector by a weight calculation formula, wherein the weight calculation formula is specifically:
z_ki = ( max(τ_1, ‖s_ki‖) )^(−a)
wherein a is a first preset parameter whose typical value can be set to 8, and a typical value of the preset standard deviation threshold τ_1 can be set to 0.02;
calculating to obtain a local weighted average value m corresponding to each sector in the anisotropic Kuwahara filter corresponding to the pixel point in the kth level fusion imagekiAnd standard deviation skiThen, the weight value z corresponding to each sector can be calculatedki
In this example, theNewly defining weight value z corresponding to each sectorkiIn contrast to the conventional weight value, the weight value z in the present embodimentkiEnsure that the pair has a low standard deviation skiThe sectors of (a) give more weight, i.e. those sectors with more evenly distributed pixel values; at the color zone boundary, the sectors located completely on one side of the boundary have a low standard deviation skiAnd therefore have a high weight value; the sectors crossing the boundary have a high standard deviation skiAnd therefore has a low weight value. Since the two sectors may not be completely located in different color regions due to overlapping portions of the different sectors, but in a uniform region, the standard deviation of the sectors is low, and therefore, in some cases, for example, a slight difference in standard deviation due to noise may result in a randomly selected weight value, which typically results in artifacts, and therefore, the standard deviation is thresholded before exponentiation, setting a preset standard deviation threshold τ1This problem can be avoided, which also avoids the zero division problem for flat areas with zero standard deviation.
B3, performing weighted calculation on the local weighted average m_ki corresponding to each sector with the weight value z_ki corresponding to each sector to obtain the abstraction value corresponding to the pixel point;
After the weight value z_ki corresponding to each sector is obtained, the local weighted averages m_ki can be combined with the weight values z_ki by weighted calculation to obtain the abstraction value corresponding to the pixel point:
O = ( Σ_i z_ki·m_ki ) / ( Σ_i z_ki )
wherein O is the abstraction value corresponding to the pixel point.
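Given the per-sector means m_ki and standard deviations s_ki, steps B2 and B3 reduce to a few lines; the sketch below assumes they have already been computed for one pixel.

```python
import numpy as np

def kuwahara_output(m, s, tau1=0.02, a=8.0):
    """Steps B2 and B3 for a single pixel.

    m: array of shape (q, channels), per-sector local weighted averages m_ki
    s: array of shape (q,), per-sector standard deviation norms ||s_ki||
    """
    z = np.maximum(tau1, s) ** (-a)  # z_ki: threshold before exponentiation
    return (z[:, None] * m).sum(axis=0) / z.sum()  # abstraction value O
```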
And B4, synthesizing the abstraction value corresponding to each pixel point in the k-th level fusion image to obtain an abstract image corresponding to the k-th level image.
And after the corresponding abstract value of each pixel point in the kth level fusion image is obtained, combining to obtain the abstract image corresponding to the kth level image.
Further, the downsampling the first image through a preset downsampling filter to obtain a preset number of second images specifically includes:
and C1, down-sampling the first image through a Lanczos3 filter to obtain a preset number of second images.
Compared with other down-sampling filters, the Lanczos3 filter better preserves low frequencies and suppresses high frequencies, so the image abstraction method of this embodiment obtains a better effect with it.
The upsampling algorithm in this embodiment may be selected according to actual requirements, for example, a bilinear interpolation algorithm may be selected for upsampling in this embodiment.
In the image abstraction method provided by this embodiment, the first image and the second images are arranged in a pyramid structure; the abstract image and the structure tensor are solved level by level and propagated to the next level, so that abstract images are propagated from lower-resolution images to higher-resolution images. This multi-scale calculation provides a strong abstraction effect without increasing the filter radius of the anisotropic Kuwahara filter, and solves the problem that, when the existing anisotropic Kuwahara filter is used for image abstraction, increasing the level of abstraction requires increasing the filter radius, which easily produces artifacts.
In the process of fusing the structure tensor, a kth-level anisotropy and a kth-level upsampling anisotropy may be calculated, a first weighting factor may be calculated according to the anisotropy, and the kth-level structure tensor and the kth-level upsampling structure tensor may be fused according to the first weighting factor.
When the abstraction value corresponding to a pixel point is output by the anisotropic Kuwahara filter, a new weight value z_ki is adopted, and the standard deviation is thresholded before the new weight value is exponentiated. Setting a preset standard deviation threshold avoids weight values being selected essentially at random because of small noise-induced differences in standard deviation, reduces the generation of artifacts, and also avoids the division-by-zero problem in flat regions with zero standard deviation.
When the preset down-sampling filter is selected, a Lanczos3 filter can be chosen; the Lanczos3 filter preserves low frequencies and suppresses high frequencies better than other down-sampling filters, so the image abstraction method of this embodiment obtains a better effect.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Example two:
the second embodiment of the present application provides an image abstraction device, which is only shown in relevant parts of the present application for convenience of description, and as shown in fig. 2, the image abstraction device includes,
a down-sampling module 201, configured to perform down-sampling on the first image through a preset down-sampling filter to obtain a preset number of second images;
the image sorting module 202 is configured to sort the first image and the second image in order of increasing image resolution, use the second image with the smallest resolution as a level 1 image, and use the first image as an N +1 level image, where N is the preset number;
the image fusion module 203 is configured to perform upsampling processing on an abstract image corresponding to a k-1 th-level image to obtain a k-level upsampled image with a resolution consistent with that of the k-level image, and perform weighted fusion on the k-level upsampled image and the k-level image to obtain a k-level fusion image, where an initial value of k is 1, and when k is equal to 1, the k-level fusion image is consistent with the k-level image;
the structure fusion module 204 is configured to calculate a structure tensor of the kth-level fusion image, perform upsampling on the kth-1-level fusion structure tensor to obtain a kth-level upsampled structure tensor, and perform weighted fusion on the structure tensor of the kth-level fusion image and the kth-level upsampled structure tensor to obtain a kth-level fusion structure tensor, where when k is equal to 1, the structure tensor of the kth-level fusion image is consistent with the kth-level fusion structure tensor;
the abstraction processing module 205 is configured to perform abstraction processing on the kth-level fusion image by using an anisotropic Kuwahara filter according to the kth-level fusion structure tensor to obtain an abstraction image corresponding to the kth-level image;
and the target output module 206 is configured to sequentially calculate the abstract images corresponding to the images at each level, and output the abstract image corresponding to the (N + 1) th level image as a target abstract image, thereby completing the image abstraction operation.
Further, the structure fusion module 204 specifically includes:
a fusion structure submodule for calculating the structure tensor of the kth level fusion image and calculating the kth level anisotropy value A according to the structure tensor of the kth level fusion imagek
A sampling structure submodule for up-sampling the k-1 level fusion structure tensor to obtain the k-level up-sampling structure tensor, and for the k-1 level anisotropic value Ak-1Up-sampling to obtain the k-th up-sampling anisotropy value Aks
A weighting factor submodule for applying the k-th order anisotropy value AkDivided by the k-th order anisotropy value AkAnd the k-th level up-sampling anisotropy value AksSum to obtain a first weighting factor alphak
And the weighted fusion submodule is used for carrying out weighted fusion on the kth-level up-sampling structure tensor and the kth-level structure tensor according to the first weighting factor to obtain a kth-level fusion structure tensor.
Further, the abstraction processing module 205 specifically includes:
a parameter submodule, configured to calculate, according to the kth-level fusion structure tensor, a local weighted average value m corresponding to each sector in an anisotropic Kuwahara filter corresponding to a pixel point in the kth-level fusion imagekiAnd standard deviation skiWherein i is a positive integer greater than or equal to 1, and i is less than or equal to q, q is the number of sectors of the anisotropic Kuwahara filter;
a weight submodule for obtaining a preset standard deviation threshold tau1Calculating the weight value z corresponding to each sector by a weight calculation formulakiWherein, the weight calculation formula specifically is as follows:
zki=(max(τ1,||ski||))-a
a is a first preset parameter;
an abstract submodule for calculating a local weighted average m corresponding to each sectorkiWeight value z corresponding to each sectorkiPerforming weighted calculation to obtain an abstract value corresponding to the pixel point;
and the image submodule is used for integrating the abstraction values corresponding to all the pixel points in the k-th level fusion image to obtain an abstract image corresponding to the k-th level image.
Further, the down-sampling module 201 is specifically configured to down-sample the first image through a Lanczos3 filter to obtain a preset number of second images.
It should be noted that, for the information interaction, execution process, and other contents between the above-mentioned devices/units, the specific functions and technical effects thereof are based on the same concept as those of the embodiment of the method of the present application, and specific reference may be made to the part of the embodiment of the method, which is not described herein again.
Example three:
fig. 3 is a schematic diagram of a terminal device provided in the third embodiment of the present application. As shown in fig. 3, the terminal device 3 of this embodiment includes: a processor 30, a memory 31 and a computer program 32 stored in said memory 31 and executable on said processor 30. The processor 30, when executing the computer program 32, implements the steps in the above-described image abstraction method embodiments, such as steps S101 to S106 shown in fig. 1. Alternatively, the processor 30, when executing the computer program 32, implements the functions of each module/unit in the above-mentioned device embodiments, for example, the functions of the modules 201 to 206 shown in fig. 2.
Illustratively, the computer program 32 may be partitioned into one or more modules/units that are stored in the memory 31 and executed by the processor 30 to accomplish the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution process of the computer program 32 in the terminal device 3. For example, the computer program 32 may be divided into a down-sampling module, an image sorting module, an image fusion module, a structure fusion module, an abstraction processing module, and a target output module, and the specific functions of each module are as follows:
the down-sampling module is used for down-sampling the first image through a preset down-sampling filter to obtain a preset number of second images;
the image sorting module is used for sorting the first image and the second image according to the sequence of image resolution from small to large, taking the second image with the minimum resolution as a level 1 image and taking the first image as an N +1 level image, wherein N is the preset number;
the image fusion module is used for performing up-sampling processing on an abstract image corresponding to a k-1 level image to obtain a k level up-sampled image with the resolution consistent with that of the k level image, and performing weighted fusion on the k level up-sampled image and the k level image to obtain a k level fusion image, wherein the initial value of k is 1, and when k is equal to 1, the k level fusion image is consistent with the k level image;
the structure fusion module is used for calculating a structure tensor of a kth-level fusion image, up-sampling the kth-1-level fusion structure tensor to obtain a kth-level up-sampling structure tensor, and performing weighted fusion on the structure tensor of the kth-level fusion image and the kth-level up-sampling structure tensor to obtain a kth-level fusion structure tensor, wherein when k is equal to 1, the structure tensor of the kth-level fusion image is consistent with the kth-level fusion structure tensor;
the abstract processing module is used for carrying out abstract processing on the kth-level fusion image by using an anisotropic Kuwahara filter according to the kth-level fusion structure tensor to obtain an abstract image corresponding to the kth-level image;
and the target output module is used for sequentially calculating the abstract images corresponding to the images of all levels, outputting the abstract image corresponding to the (N + 1) th level image as a target abstract image and finishing the image abstraction operation.
The terminal device 3 may be a desktop computer, a notebook, a palm computer, a cloud server, or other computing devices. The terminal device may include, but is not limited to, a processor 30, a memory 31. It will be understood by those skilled in the art that fig. 3 is only an example of the terminal device 3, and does not constitute a limitation to the terminal device 3, and may include more or less components than those shown, or combine some components, or different components, for example, the terminal device may also include an input-output device, a network access device, a bus, etc.
The Processor 30 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic device, discrete hardware component, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 31 may be an internal storage unit of the terminal device 3, such as a hard disk or a memory of the terminal device 3. The memory 31 may also be an external storage device of the terminal device 3, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the terminal device 3. Further, the memory 31 may also include both an internal storage unit and an external storage device of the terminal device 3. The memory 31 is used for storing the computer program and other programs and data required by the terminal device. The memory 31 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer readable storage medium. Based on such understanding, all or part of the flow in the method of the embodiments described above can be realized by a computer program, which can be stored in a computer-readable storage medium and can realize the steps of the embodiments of the methods described above when the computer program is executed by a processor. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, recording medium, usb disk, removable hard disk, magnetic disk, optical disk, computer Memory, Read-Only Memory (ROM), Random Access Memory (RAM), electrical carrier wave signals, telecommunications signals, software distribution medium, and the like. It should be noted that the computer readable medium may contain content that is subject to appropriate increase or decrease as required by legislation and patent practice in jurisdictions, for example, in some jurisdictions, computer readable media does not include electrical carrier signals and telecommunications signals as is required by legislation and patent practice.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (8)

1. An image abstraction method, comprising:
carrying out downsampling on the first image through a preset downsampling filter to obtain a preset number of second images;
arranging the first image and the second image according to the sequence of image resolution from small to large, taking the second image with the minimum resolution as a level 1 image, and taking the first image as an N +1 level image, wherein N is the preset number;
performing upsampling processing on an abstract image corresponding to a k-1 level image to obtain a k level upsampled image with the resolution consistent with that of the k level image, and performing weighted fusion on the k level upsampled image and the k level image to obtain a k level fusion image, wherein the initial value of k is 1, and when k is equal to 1, the k level fusion image is consistent with the k level image;
calculating a structure tensor of a kth-level fusion image, performing upsampling on a kth-1-level fusion structure tensor to obtain a kth-level upsampled structure tensor, and performing weighted fusion on the structure tensor of the kth-level fusion image and the kth-level upsampled structure tensor to obtain a kth-level fusion structure tensor, wherein when k is equal to 1, the structure tensor of the kth-level fusion image is consistent with the kth-level fusion structure tensor;
performing abstraction processing on the kth-level fusion image by using an anisotropic Kuwahara filter according to the kth-level fusion structure tensor to obtain an abstract image corresponding to the kth-level image, which specifically includes:
according to the kth level fusion structure tensor, calculating a local weighted average value m corresponding to each sector in an anisotropic Kuwahara filter corresponding to a pixel point in the kth level fusion imagekiAnd standard deviation skiWherein i is a positive integer greater than or equal to 1, and i is less than or equal to q, q is the number of sectors of the anisotropic Kuwahara filter;
obtaining a preset standard deviation threshold value tau1Calculating the weight value z corresponding to each sector by a weight calculation formulakiWherein, the weight calculation formula specifically is as follows:
zki=(max(τ1,||ski||))-a
a is a first preset parameter;
according to the local weighted average m corresponding to each sectorkiWeight value z corresponding to each sectorkiPerforming weighted calculation to obtain an abstract value corresponding to the pixel point;
integrating the abstract values corresponding to all the pixel points in the kth-level fusion image to obtain an abstract image corresponding to the kth-level image;
and sequentially calculating the abstract images corresponding to the images of all levels, and outputting the abstract image corresponding to the (N + 1) th level image as a target abstract image to finish the image abstraction operation.
2. The image abstraction method of claim 1, wherein said calculating a structure tensor of a kth level fused image, upsampling a k-1 level fused structure tensor to obtain a kth level upsampled structure tensor, and weighting and fusing the structure tensor of the kth level fused image with the kth level upsampled structure tensor to obtain the kth level fused structure tensor specifically includes:
calculating the structure tensor of the kth level fusion image, and calculating the kth level anisotropy value A according to the structure tensor of the kth level fusion imagek
The k-1 level fusion structure tensor is subjected to up-sampling to obtain a k-level up-sampling structure tensor, and the k-1 level anisotropy value A is subjected to up-samplingk-1Up-sampling to obtain the k-th up-sampling anisotropy value Aks
At the k-th order anisotropy value AkDivided by the k-th order anisotropy value AkAnd the k-th level up-sampling anisotropy value AksSum to obtain a first weighting factor alphak
And performing weighted fusion on the kth-level up-sampling structure tensor and the kth-level structure tensor according to the first weighting factor to obtain a kth-level fusion structure tensor.
3. The image abstraction method of claim 1, wherein downsampling the first image through a predetermined downsampling filter to obtain a predetermined number of second images specifically comprises:
the first image is downsampled by a Lanczos3 filter to obtain a preset number of second images.
4. An image abstraction device, comprising:
the down-sampling module is used for down-sampling the first image through a preset down-sampling filter to obtain a preset number of second images;
the image sorting module is used for sorting the first image and the second image according to the sequence of image resolution from small to large, taking the second image with the minimum resolution as a level 1 image and taking the first image as an N +1 level image, wherein N is the preset number;
the image fusion module is used for performing up-sampling processing on an abstract image corresponding to a k-1 level image to obtain a k level up-sampled image with the resolution consistent with that of the k level image, and performing weighted fusion on the k level up-sampled image and the k level image to obtain a k level fusion image, wherein the initial value of k is 1, and when k is equal to 1, the k level fusion image is consistent with the k level image;
the structure fusion module is used for calculating a structure tensor of a kth-level fusion image, up-sampling the kth-1-level fusion structure tensor to obtain a kth-level up-sampling structure tensor, and performing weighted fusion on the structure tensor of the kth-level fusion image and the kth-level up-sampling structure tensor to obtain a kth-level fusion structure tensor, wherein when k is equal to 1, the structure tensor of the kth-level fusion image is consistent with the kth-level fusion structure tensor;
the abstract processing module is used for carrying out abstract processing on the kth-level fusion image by using an anisotropic Kuwahara filter according to the kth-level fusion structure tensor to obtain an abstract image corresponding to the kth-level image;
the target output module is used for sequentially calculating the abstract images corresponding to the images at all levels, outputting the abstract image corresponding to the (N + 1) th level image as a target abstract image and finishing image abstraction operation;
the abstraction processing module specifically includes:
a parameter submodule, configured to calculate, according to the kth-level fusion structure tensor, a local weighted average value m corresponding to each sector in an anisotropic Kuwahara filter corresponding to a pixel point in the kth-level fusion imagekiAnd standard deviation skiWherein i is a positive integer greater than or equal to 1, and i is less than or equal to q, q is the number of sectors of the anisotropic Kuwahara filter;
a weight submodule for obtaining a preset standard deviation threshold tau1Calculating the weight value z corresponding to each sector by a weight calculation formulakiWherein, the weight calculation formula specifically is as follows:
zki=(max(τ1,||ski||))-a
a is a first preset parameter;
an abstract submodule for calculating a local weighted average m corresponding to each sectorkiWeight value z corresponding to each sectorkiPerforming weighted calculation to obtain an abstract value corresponding to the pixel point;
and the image submodule is used for integrating the abstraction values corresponding to all the pixel points in the k-th level fusion image to obtain an abstract image corresponding to the k-th level image.
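This weighting matches the variance-based sector weighting of the anisotropic Kuwahara filter: sectors with small standard deviation receive large weights, so region interiors are flattened while edges, which straddle high-variance sectors, are preserved. A minimal per-pixel sketch, assuming the "weighted calculation" is the normalised weighted average of sector means (the claim does not fix the normalisation) and using illustrative values for τ_1 and a:

```python
import numpy as np

def sector_abstraction_value(m, s, tau1=0.02, a=8.0):
    """m, s: (q, C) arrays of per-sector local weighted means m_ki and
    standard deviations s_ki (q sectors, C colour channels) at one pixel.
    tau1 and a are illustrative presets, not values fixed by the patent."""
    z = np.maximum(tau1, np.linalg.norm(s, axis=1)) ** (-a)   # z_ki
    return (z[:, None] * m).sum(axis=0) / z.sum()             # abstraction value
```

With q = 8 sectors on an RGB image, m and s have shape (8, 3) and the result is one RGB abstraction value for the pixel.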
5. The image abstraction device of claim 4, wherein the structure fusion module specifically includes:
a fusion structure submodule, configured to calculate the structure tensor of the kth-level fusion image and to calculate the kth-level anisotropy value A_k from that structure tensor;
a sampling structure submodule, configured to up-sample the (k-1)th-level fusion structure tensor to obtain the kth-level up-sampled structure tensor, and to up-sample the (k-1)th-level anisotropy value A_{k-1} to obtain the kth-level up-sampled anisotropy value A_k^s;
a weighting factor submodule, configured to divide the kth-level anisotropy value A_k by the sum of A_k and A_k^s to obtain the first weighting factor α_k;
and a weighted fusion submodule, configured to perform weighted fusion on the kth-level up-sampled structure tensor and the kth-level structure tensor according to the first weighting factor to obtain the kth-level fusion structure tensor.
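Claim 5 presupposes a structure tensor for each fusion image. The patent text here does not spell out its construction, so the sketch below uses the standard Gaussian-smoothed outer product of image gradients; the Sobel operator and the smoothing width σ are assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def structure_tensor(gray, sigma=2.0):
    """Smoothed 2-D structure tensor (Exx, Exy, Eyy) of a single-channel
    float image, in the (..., 3) layout used by fuse_structure_tensors."""
    gx = sobel(gray, axis=1, mode="reflect")   # horizontal gradient
    gy = sobel(gray, axis=0, mode="reflect")   # vertical gradient
    Exx = gaussian_filter(gx * gx, sigma)
    Exy = gaussian_filter(gx * gy, sigma)
    Eyy = gaussian_filter(gy * gy, sigma)
    return np.stack([Exx, Exy, Eyy], axis=-1)
```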
6. The image abstraction device of claim 4, wherein the down-sampling module is specifically configured to down-sample the first image through a Lanczos3 filter to obtain the preset number of second images.
7. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any of claims 1 to 3 when executing the computer program.
8. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 3.
CN201910772446.9A 2019-08-21 2019-08-21 Image abstraction method and device and terminal equipment Active CN110619668B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910772446.9A CN110619668B (en) 2019-08-21 2019-08-21 Image abstraction method and device and terminal equipment

Publications (2)

Publication Number Publication Date
CN110619668A (en) 2019-12-27
CN110619668B (en) 2020-11-03

Family

ID=68922375

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910772446.9A Active CN110619668B (en) 2019-08-21 2019-08-21 Image abstraction method and device and terminal equipment

Country Status (1)

Country Link
CN (1) CN110619668B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112560945A (en) * 2020-12-14 2021-03-26 珠海格力电器股份有限公司 Equipment control method and system based on emotion recognition

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102800063A (en) * 2012-07-12 2012-11-28 中国科学院软件研究所 Image enhancement and abstraction method based on anisotropic filtering
CN102930576A (en) * 2012-10-15 2013-02-13 中国科学院软件研究所 Feature flow-based method for generating abstract line drawing
CN104504670A (en) * 2014-12-11 2015-04-08 上海理工大学 Multi-scale gradient domain image fusion algorithm

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
"Structure Adaptive Stylization of Images and Video";VON Jan Eric Kyprianidis;《Institutional Repository of the University of Potsdam:http://opus.kobv.de/ubp/volltexte/2013/6410/》;20130205;全文 *
"基于局部结构张量的图像三边滤波器";许光宇等;《计算机工程》;20170415;第43卷(第4期);全文 *
"基于虚拟图像金字塔序列融合的快速图像增强算法";戴霞等;《计算机学报》;20140315;第37卷(第3期);全文 *

Similar Documents

Publication Publication Date Title
CN111275626B (en) Video deblurring method, device and equipment based on ambiguity
CN108921806B (en) Image processing method, image processing device and terminal equipment
CN109064428B (en) Image denoising processing method, terminal device and computer readable storage medium
CN108765343B (en) Image processing method, device, terminal and computer readable storage medium
Li et al. Fast guided global interpolation for depth and motion
CN107358586B (en) Image enhancement method, device and equipment
Yu et al. A unified learning framework for single image super-resolution
Yang et al. Constant time median and bilateral filtering
Kongskov et al. Directional total generalized variation regularization
JP2013518336A (en) Method and system for generating an output image with increased pixel resolution from an input image
CN109785246B (en) Noise reduction method, device and equipment for non-local mean filtering
Yang et al. SVM for edge-preserving filtering
Portilla et al. Efficient and robust image restoration using multiple-feature L2-relaxed sparse analysis priors
Cao et al. New architecture of deep recursive convolution networks for super-resolution
Liu et al. Automatic blur-kernel-size estimation for motion deblurring
Zhao et al. Iterative projection reconstruction for fast and efficient image upsampling
CN114511449A (en) Image enhancement method, device and computer readable storage medium
Li et al. A novel weighted anisotropic total variational model for image applications
Bastanfard et al. Toward image super-resolution based on local regression and nonlocal means
CN116071279A (en) Image processing method, device, computer equipment and storage medium
Kapuriya et al. Detection and restoration of multi-directional motion blurred objects
CN110619668B (en) Image abstraction method and device and terminal equipment
Lu et al. Video super resolution based on non-local regularization and reliable motion estimation
CN111754435B (en) Image processing method, device, terminal equipment and computer readable storage medium
Khan et al. Multi-scale GAN with residual image learning for removing heterogeneous blur

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant