CN113808020A - Image processing method and apparatus - Google Patents


Info

Publication number
CN113808020A
Authority
CN
China
Prior art keywords
value, pixel, low, resolution, coherence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111101796.6A
Other languages
Chinese (zh)
Inventor
赵突
张耿祥
甘易
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing ByteDance Network Technology Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd
Priority to CN202111101796.6A
Publication of CN113808020A
Priority to PCT/CN2022/113117 (published as WO2023040563A1)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4053 Scaling of whole images or parts thereof based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • G06T5/00 Image enhancement or restoration
    • G06T5/20 Image enhancement or restoration using local operators
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The embodiment of the present disclosure provides an image processing method and device. The method includes: obtaining the hash value of a pixel of a low-resolution image; querying the corresponding filter bank according to the hash value; filtering the region corresponding to the pixel with each of a plurality of filters in the filter bank, so that each filter outputs one filtered pixel value; and writing each filtered pixel value into the corresponding coordinate position of a high-resolution image, thereby realizing super-resolution processing of the image and obtaining the high-resolution image. Because it is the pixels of the low-resolution image that are filtered, the number of pixels subjected to filtering is small, and the processing efficiency is greatly improved.

Description

Image processing method and apparatus
Technical Field
The embodiment of the disclosure relates to the technical field of computer and network communication, in particular to an image processing method and device.
Background
During a video conference, live webcast, online course, and the like, when the network bandwidth of a user drops, the resolution of the video can be reduced to keep the video picture from stalling, but this degrades the display effect of the video image. It is therefore necessary to raise the image resolution of each frame in the video, that is, to perform super-resolution image processing.
At present, the most common super-resolution image processing method is to up-sample and enlarge the low-resolution image, calculate a hash value for each pixel of the enlarged image, find the corresponding pre-trained filter according to the hash value, and filter the pixel with that filter; repeating this for every pixel of the enlarged image finally yields the high-resolution image.
However, because this method performs filtering on the up-sampled, enlarged image, which contains many pixels, its processing speed is slow.
Disclosure of Invention
The embodiment of the present disclosure provides an image processing method and device to overcome the problem in the prior art that filtering must be performed on the up-sampled and enlarged image, whose large number of pixels results in a low processing speed.
In a first aspect, an embodiment of the present disclosure provides an image processing method, including:
acquiring a low-resolution image, and calculating a hash value of each pixel in the low-resolution image;
inquiring the corresponding relation between a pre-stored hash value and a pre-trained filter bank to obtain the filter bank corresponding to the hash value of each pixel, wherein the filter bank comprises a plurality of filters;
respectively filtering the pixel regions corresponding to the pixels according to the filters in the filter bank to obtain a plurality of filtered pixel values;
and respectively writing the plurality of filtering pixel values into corresponding coordinate positions of a high-resolution image to obtain a high-resolution image corresponding to the low-resolution image.
In a second aspect, an embodiment of the present disclosure provides an image processing apparatus, including:
the hash value determining module is used for acquiring a low-resolution image and calculating the hash value of each pixel in the low-resolution image;
the filter determining module is used for inquiring the corresponding relation between a pre-stored hash value and a pre-trained filter bank to obtain the filter bank corresponding to the hash value of each pixel, wherein the filter bank comprises a plurality of filters;
the filtering processing module is used for respectively filtering the pixel regions corresponding to the pixels according to the filters in the filter bank to obtain a plurality of filtering pixel values;
and the pixel writing module is used for respectively writing the filtering pixel values into corresponding coordinate positions of a high-resolution image to obtain a high-resolution image corresponding to the low-resolution image.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including: a processor and a memory;
the memory stores computer-executable instructions;
the processor executes computer-executable instructions stored by the memory to cause the at least one processor to perform the image processing method as set forth above in the first aspect and various possible designs of the first aspect.
In a fourth aspect, the present disclosure provides a computer-readable storage medium, in which computer-executable instructions are stored, and when a processor executes the computer-executable instructions, the image processing method according to the first aspect and various possible designs of the first aspect is implemented.
In a fifth aspect, embodiments of the present disclosure provide a computer program product comprising a computer program that, when executed by a processor, implements an image processing method as set forth in the first aspect above and in various possible designs of the first aspect.
In the image processing method and device provided by this embodiment, the hash value of a pixel of the low-resolution image is acquired, the corresponding filter bank is queried according to the hash value, the region corresponding to the pixel is filtered by each of the plurality of filters in the filter bank so that each filter outputs one filtered pixel value, and each filtered pixel value is written into the corresponding coordinate position of the high-resolution image, thereby realizing super-resolution processing of the image and obtaining the high-resolution image. First, because the pixels of the low-resolution image are filtered, far fewer pixels need filtering than in the prior art, which filters the up-sampled and enlarged image, so the processing efficiency is greatly improved. Second, the prior art reads the cached filters from memory one at a time and therefore accesses memory frequently, which slows the response; in this embodiment, the filter bank, formed by a plurality of contiguously stored filters, is read from memory in a single access, so frequent memory accesses are avoided and the response speed is increased.
Drawings
In order to more clearly illustrate the embodiments of the present disclosure or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present disclosure, and for those skilled in the art, other drawings can be obtained according to the drawings without inventive exercise.
Fig. 1 is a schematic view of a scene of image processing provided by an embodiment of the present disclosure;
fig. 2 is a first flowchart of an image processing method according to an embodiment of the disclosure;
fig. 3 is a schematic flow chart of an image processing method according to an embodiment of the disclosure;
fig. 4 is a schematic flowchart of a third image processing method according to an embodiment of the disclosure;
fig. 5 is a fourth schematic flowchart of an image processing method according to an embodiment of the present disclosure;
fig. 6 is a block diagram of an image processing apparatus according to an embodiment of the present disclosure;
fig. 7 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present disclosure.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present disclosure more clear, the technical solutions of the embodiments of the present disclosure will be described clearly and completely with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are some, but not all embodiments of the present disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments disclosed herein without making any creative effort, shall fall within the protection scope of the present disclosure.
During a video conference, live webcast, online course, and the like, when the network bandwidth of a user drops, the resolution of the video can be reduced to keep the video picture from stalling, but this affects the display effect of the video picture. To improve the display effect, the image resolution of each frame in the video can be raised through an up-sampling algorithm. Conventional up-sampling algorithms include bilinear interpolation and trilinear interpolation, but both lose the high-frequency information of the video image, which blurs it. To solve this problem, various super-resolution algorithms have been proposed to process video images and mitigate the blur caused by up-sampling. First, deep-learning-based methods were proposed, in which a low-resolution image is fed directly into a deep learning model and up-sampled to obtain a high-resolution image; although the image quality improves, these methods are slow and require a GPU (Graphics Processing Unit) for acceleration, which increases cost. Second, image sharpening has been proposed to enhance the high-frequency information of a low-resolution image; although the sharpened image is relatively crisp, its details are not rich enough and the sharpening effect is unbalanced. For these reasons, neither method is widely used. At present, the most common super-resolution image processing method is to up-sample and enlarge the low-resolution image, calculate a hash value for each pixel of the enlarged image, find the corresponding pre-trained filter according to the hash value, and filter the pixel with that filter to obtain the high-resolution image.
However, in this method, since filtering is performed on the up-sampled and amplified image, there are many image pixels, which results in a slow processing speed.
In order to solve this technical problem, the present disclosure provides the following technical solution: obtain the hash value of a pixel of the low-resolution image, query the corresponding filter bank according to the hash value, filter the region corresponding to the pixel with each of the plurality of filters in the filter bank so that each filter outputs one filtered pixel value, and write each filtered pixel value into the corresponding coordinate position of the high-resolution image, thereby realizing super-resolution processing of the image and obtaining the high-resolution image. Because the pixels of the low-resolution image are filtered, the number of pixels subjected to filtering is small, and the processing efficiency is greatly improved.
Referring to fig. 1, fig. 1 is a schematic view of a scene of image processing provided by an embodiment of the present disclosure. As shown in fig. 1, includes a terminal 101 and a server 102. The terminal 101 may be any form of terminal device, and the terminal device according to the present disclosure may be a wireless terminal or a wired terminal. A wireless terminal may refer to a device that provides voice and/or other traffic data connectivity to a user, a handheld device having wireless connection capability, or other processing device connected to a wireless modem. A wireless terminal, which may be a mobile terminal such as a mobile telephone (or "cellular" telephone) and a computer having a mobile terminal, for example, a portable, pocket, hand-held, computer-included, or vehicle-mounted mobile device, may communicate with one or more core Network devices via a Radio Access Network (RAN), and may exchange language and/or data with the RAN. For another example, the Wireless terminal may also be a Personal Communication Service (PCS) phone, a cordless phone, a Session Initiation Protocol (SIP) phone, a Wireless Local Loop (WLL) station, a Personal Digital Assistant (PDA), and other devices. A wireless Terminal may also be referred to as a system, a Subscriber Unit (Subscriber Unit), a Subscriber Station (Subscriber Station), a Mobile Station (Mobile), a Remote Station (Remote Station), a Remote Terminal (Remote Terminal), an Access Terminal (Access Terminal), a User Terminal (User Terminal), a User Agent (User Agent), and a User Device or User Equipment (User Equipment), which are not limited herein. Optionally, the terminal device may also be a mobile phone, an intelligent wearable device, a tablet computer, or other terminal devices.
The server 102 may be a server or a cluster of servers, and the server may communicate with the terminal through a network, and the server may provide various communication data for the terminal 101.
Referring to fig. 2, fig. 2 is a first flowchart illustrating an image processing method according to an embodiment of the disclosure. The method of this embodiment can be applied to the terminal or the server shown in fig. 1, which is not limited in the present disclosure. The image processing method includes:
s201: and acquiring a low-resolution image, and calculating the hash value of each pixel in the low-resolution image.
In this embodiment, the low-resolution image may be an image generated after the resolution of the video is reduced when the network bandwidth of the user is reduced in the video process of a video conference, a live webcast, a web lesson teaching, and the like.
Specifically, a plurality of pixel values in the region corresponding to any pixel in the low-resolution image are determined, and the hash value of the region, that is, the hash value corresponding to that pixel, is calculated according to the plurality of pixel values in the region.
S202: and inquiring the corresponding relation between a pre-stored hash value and a pre-trained filter bank to obtain the filter bank corresponding to the hash value of each pixel, wherein the filter bank comprises a plurality of filters.
In this embodiment, the correspondence between the hash value and the pre-trained filter bank is locally stored in advance, and the storage format of the correspondence between the pre-stored hash value and the pre-trained filter bank may be a table format or a database format, for example.
The pre-stored corresponding relation between the hash value and the pre-trained filter bank comprises a filter bank formed by a plurality of pre-trained filters corresponding to different hash values.
Specifically, the pre-trained filter banks corresponding to different hash values are obtained as follows: a large number of high-resolution pictures are down-sampled to obtain a large number of low-resolution pictures; the high-resolution pictures and their corresponding low-resolution pictures are then taken as the training data set, with the low-resolution pictures as input and the high-resolution pictures as output, and the equations of the filter bank are solved to obtain the filter banks corresponding to the different hash values.
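The "solving equations of the filter bank" step described above can be sketched as an ordinary least-squares fit. The snippet below is an illustrative assumption (the patent does not specify the solver): flattened low-resolution patches that share one hash value are paired with their high-resolution target pixels, and one filter is fit per (hash, position) pair. The names `train_filter`, `A`, and `b` are hypothetical.

```python
import numpy as np

def train_filter(patches, targets):
    """Least-squares fit of one k x k filter for one (hash, position) pair.

    patches: (n, k*k) flattened low-resolution patches with this hash value.
    targets: (n,) corresponding high-resolution pixel values.
    Solves min_h ||patches @ h - targets||^2.
    """
    h, _, _, _ = np.linalg.lstsq(patches, targets, rcond=None)
    return h  # flattened k x k filter

# Toy check: if the HR pixel equals the patch centre exactly, the learned
# filter should recover a delta at the centre tap.
rng = np.random.default_rng(1)
k = 3
A = rng.standard_normal((200, k * k))
b = A[:, k * k // 2]          # HR value = centre pixel of the patch
h = train_filter(A, b)
```

In practice one such fit is run for every hash value and every high-resolution sub-pixel position, and the resulting filters are stored contiguously as the filter bank.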
S203: and respectively filtering the pixel regions corresponding to the pixels according to the filters in the filter bank to obtain a plurality of filtered pixel values.
In this embodiment, the number of filters included in the filter bank is the same as the number of pixels at the corresponding coordinate position of the high-resolution image. If the up-sampling multiple from the low-resolution image to the high-resolution image is s, the number of filters in the filter bank is s², and the number of pixels at the corresponding coordinate position of the high-resolution image is also s², where s is a positive integer.
Wherein the size of each filter is the same as the size of the pixel region corresponding to each pixel. For example, the size of each filter is k × k, and the size of the pixel region corresponding to each pixel is also k × k, where k is the side length of the region in pixels.
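The filtering in this step can be sketched as follows, assuming NumPy arrays and illustrative shapes; the names `apply_filter_bank`, `bank`, and `patch` are hypothetical, and the random weights stand in for trained filters:

```python
import numpy as np

def apply_filter_bank(patch, filter_bank):
    """Apply each k x k filter in the bank to one k x k patch.

    patch: (k, k) region centred on a low-resolution pixel.
    filter_bank: (s*s, k, k) array; one filter per HR sub-pixel position.
    Returns s*s filtered pixel values, one per filter.
    """
    # Each filter output is a weighted sum over the whole patch.
    return np.tensordot(filter_bank, patch, axes=([1, 2], [0, 1]))

# Toy example with s = 2 (four filters) and k = 3.
s, k = 2, 3
rng = np.random.default_rng(0)
bank = rng.standard_normal((s * s, k, k))
patch = rng.standard_normal((k, k))
values = apply_filter_bank(patch, bank)
```

Because all s² filters read the same patch, one memory access to the contiguously stored bank suffices per pixel.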
S204: and respectively writing the plurality of filtering pixel values into corresponding coordinate positions of a high-resolution image to obtain a high-resolution image corresponding to the low-resolution image.
In this embodiment, the up-sampling multiple from the low-resolution image to the high-resolution image is s, and the number of filters in the filter bank is s². Let the coordinate of any pixel in the low-resolution image be (x, y); the corresponding coordinate position in the high-resolution image is the square region from (sx, sy) to (sx + s − 1, sy + s − 1), which contains s² pixel positions in total, and the s² obtained filtered pixel values are written into these s² pixel positions.
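The coordinate mapping described above can be sketched as below, assuming NumPy arrays; the helper name `write_block` is hypothetical:

```python
import numpy as np

def write_block(hr_image, x, y, s, filtered_values):
    """Write the s*s filtered values of LR pixel (x, y) into the HR image.

    The HR block runs from (s*x, s*y) to (s*x + s - 1, s*y + s - 1);
    filtered_values is a length s*s vector, one value per filter.
    """
    hr_image[s * x: s * x + s, s * y: s * y + s] = \
        np.asarray(filtered_values, dtype=float).reshape(s, s)

# Toy example: s = 2, LR pixel (1, 2) fills HR block rows 2..3, cols 4..5.
s = 2
hr = np.zeros((8, 8))
write_block(hr, 1, 2, s, [10, 20, 30, 40])
```

Iterating this over every low-resolution pixel tiles the whole high-resolution image without gaps or overlaps.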
As can be seen from the above description, super-resolution processing of the image is implemented by obtaining the hash value of a pixel of the low-resolution image, querying the corresponding filter bank according to the hash value, filtering the region corresponding to the pixel with each of the plurality of filters in the filter bank so that each filter outputs one filtered pixel value, and writing each filtered pixel value into the corresponding coordinate position of the high-resolution image, thereby obtaining the high-resolution image. First, because the pixels of the low-resolution image are filtered, far fewer pixels need filtering than in the prior art, which filters the up-sampled and enlarged image, so the processing efficiency is greatly improved. Second, the prior art reads the cached filters from memory one at a time and therefore accesses memory frequently, which slows the response; in this embodiment, the filter bank, formed by a plurality of contiguously stored filters, is read from memory in a single access, so frequent memory accesses are avoided and the response speed is increased.
Referring to fig. 3, fig. 3 is a schematic flow chart of an image processing method according to an embodiment of the disclosure. On the basis of the foregoing embodiment, the foregoing step S201 specifically includes:
s301: and determining a square area which takes each pixel as the center and has the pixel size k x k in the low-resolution image, wherein k is an odd number which is larger than 1.
In this embodiment, for each pixel in the low-resolution image, a corresponding square region of pixel size k × k centered on that pixel is determined, where k is an odd number greater than 1. Optionally, k is 3 or 5.
S302: and calculating the pixel gradient value of each pixel in the square area.
Specifically, a first pixel gradient in the horizontal direction and a second pixel gradient in the vertical direction of each pixel of the square region are calculated, respectively.
In the present embodiment, the first pixel gradient in the horizontal direction is calculated by subtracting the pixel value of the current pixel from the pixel value of the pixel adjacent to its right; the second pixel gradient in the vertical direction is calculated by subtracting the pixel value of the current pixel from the pixel value of the pixel below it. The specific calculation formulas are as follows:
gx = I(i+1, j) − I(i, j)
gy = I(i, j+1) − I(i, j)
In the formulas, gx is the first pixel gradient, gy is the second pixel gradient, I(i, j) is the pixel value of the current pixel, and (i, j) are its coordinates.
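The forward differences above can be sketched for a whole patch at once. The axis convention (i as the first array index) and the replication padding at the last row/column are assumptions, since the text does not specify how the border is handled; the name `pixel_gradients` is hypothetical.

```python
import numpy as np

def pixel_gradients(patch):
    """Forward-difference gradients for every pixel in a k x k patch.

    gx(i, j) = I(i+1, j) - I(i, j)   (neighbour in +i minus current)
    gy(i, j) = I(i, j+1) - I(i, j)   (neighbour in +j minus current)
    The last row/column is padded by replication so shapes match.
    """
    padded_i = np.concatenate([patch, patch[-1:, :]], axis=0)
    padded_j = np.concatenate([patch, patch[:, -1:]], axis=1)
    gx = padded_i[1:, :] - patch
    gy = padded_j[:, 1:] - patch
    return gx, gy
```

With the replication padding, the gradient at the border is simply zero in the padded direction, which keeps the later sums well defined.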
S303: and generating a pixel gradient matrix according to the pixel gradient value of each pixel in the square area.
Specifically, a 2 x 2 pixel gradient matrix is constructed according to the pixel gradient value of each pixel of the square area; wherein a first element of the 2 x 2 pixel gradient matrix is equal to a sum of squares of a first pixel gradient of the pixels in a horizontal direction, a second element and a third element are equal to a sum of products of the first pixel gradient of the pixels in the horizontal direction and a second pixel gradient of the pixels in a vertical direction, and a fourth element is equal to a sum of squares of a second pixel gradient of the pixels in the vertical direction.
In the present embodiment, a 2 × 2 pixel gradient matrix is generated according to the pixel gradient values of the pixels in the k × k square region. The four elements of this matrix are as follows:
G = [ Σgx²   Σgxgy ]
    [ Σgxgy  Σgy²  ]
wherein Σgx² is the first element; Σgxgy is the second element and the third element; and Σgy² is the fourth element, with each sum taken over all pixels in the square region.
S304: calculating eigenvalues of the pixel gradient matrix.
Specifically, the eigenvalue of the 2 × 2 pixel gradient matrix is calculated, and a first eigenvalue and a second eigenvalue of the 2 × 2 pixel gradient matrix are obtained.
In this embodiment, the eigenvalues of the 2 × 2 pixel gradient matrix are calculated to obtain two eigenvalues, where the first eigenvalue is denoted as λ1 and the second eigenvalue is denoted as λ2.
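Because the gradient matrix is symmetric 2 × 2, its eigenvalues have a closed form, so no general eigensolver is needed. A minimal sketch (the name `gradient_matrix_eigen` is hypothetical):

```python
import numpy as np

def gradient_matrix_eigen(gx, gy):
    """Build the 2 x 2 gradient matrix of a patch and return its eigenvalues.

    G = [[sum(gx^2),  sum(gx*gy)],
         [sum(gx*gy), sum(gy^2)]]
    """
    a = float(np.sum(gx * gx))      # first element
    b = float(np.sum(gx * gy))      # second / third element
    d = float(np.sum(gy * gy))      # fourth element
    # Closed-form eigenvalues of the symmetric matrix [[a, b], [b, d]].
    trace, det = a + d, a * d - b * b
    disc = np.sqrt(max(trace * trace / 4.0 - det, 0.0))
    lam1 = trace / 2.0 + disc       # first (largest) eigenvalue
    lam2 = trace / 2.0 - disc       # second eigenvalue
    return lam1, lam2
```

Both eigenvalues are non-negative because G is a sum of outer products, which is what makes the square roots in the coherence formula below well defined.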
S305: and calculating the intensity value, coherence value and angle value of the pixel gradient matrix according to the characteristic value.
Specifically, the intensity value is calculated according to the first characteristic value; calculating to obtain the coherence value according to the first characteristic value and the second characteristic value; and calculating a feature vector corresponding to the first feature value, and calculating the angle value according to the feature vector.
In this embodiment, the intensity value is calculated according to the first feature value, and the formula is as follows:
str=λ1
in the formula, str is a strength value.
In this embodiment, the coherence value is calculated according to the first eigenvalue and the second eigenvalue; its formula is as follows:
coh = (√λ1 − √λ2) / (√λ1 + √λ2)
In the formula, coh is the coherence value.
In this embodiment, the eigenvector corresponding to the first eigenvalue λ1 is calculated and denoted as φ = (φx, φy). Then the angle value is calculated according to this eigenvector, and the calculation formula is as follows:
ang = non_linear_func(arctan2(φy, φx))
In the formula, ang is the angle value, and non_linear_func denotes a nonlinear mapping calculation.
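The three features can be computed together from the matrix elements. Note two assumptions in this sketch: the coherence formula shown is the standard eigenvalue-based one, and since the patent leaves non_linear_func unspecified, the angle here is simply the orientation of the dominant eigenvector normalised into [0, 1). The name `features_from_matrix` is hypothetical.

```python
import numpy as np

def features_from_matrix(a, b, d):
    """Strength, coherence and angle from G = [[a, b], [b, d]]."""
    trace, det = a + d, a * d - b * b
    disc = np.sqrt(max(trace * trace / 4.0 - det, 0.0))
    lam1, lam2 = trace / 2.0 + disc, trace / 2.0 - disc
    strength = lam1
    s1, s2 = np.sqrt(max(lam1, 0.0)), np.sqrt(max(lam2, 0.0))
    coherence = (s1 - s2) / (s1 + s2) if (s1 + s2) > 0 else 0.0
    # Eigenvector for lam1 is (b, lam1 - a); fall back to an axis vector
    # when b == 0 (the matrix is already diagonal).
    if abs(b) > 1e-12:
        vx, vy = b, lam1 - a
    elif a >= d:
        vx, vy = 1.0, 0.0
    else:
        vx, vy = 0.0, 1.0
    # Orientation in [0, pi), then an assumed mapping into [0, 1).
    angle = (np.arctan2(vy, vx) % np.pi) / np.pi
    return strength, coherence, angle
```

A purely horizontal gradient gives coherence 1 and angle 0; an isotropic patch (λ1 = λ2) gives coherence 0, matching the intuition that coherence measures how strongly oriented the local structure is.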
S306: And carrying out rounding quantization processing on the intensity value, the coherence value and the angle value to obtain a rounded intensity value, a rounded coherence value and a rounded angle value.
Specifically, the rounded intensity value corresponding to the intensity value is determined according to the intensity value, a pre-trained intensity value interval, and the number of intensity value interval paragraphs; the rounded coherence value corresponding to the coherence value is determined according to the coherence value, a pre-trained coherence value interval, and the number of coherence value interval paragraphs; and the rounded angle value corresponding to the angle value is determined according to the angle value and the number of segments of the angle value.
In this embodiment, the rounded intensity value corresponding to the intensity value is determined according to the intensity value, the pre-trained intensity value interval, and the number of intensity value interval paragraphs; its calculation formula is as follows:
lambda = i, if stri ≤ str < stri+1, i ∈ {0, 1, ..., Q − 1} (taking str0 = −∞ and strQ = +∞)
In the formula, lambda is the rounded intensity value; str1, str2, ..., strQ−1 are the boundaries of the pre-trained intensity value interval; and Q is the number of intensity value interval paragraphs.
In this embodiment, the rounded coherence value corresponding to the coherence value is determined according to the coherence value, the pre-trained coherence value interval, and the number of coherence value interval paragraphs; its calculation formula is as follows:
u = i, if cohi ≤ coh < cohi+1, i ∈ {0, 1, ..., C − 1} (taking coh0 = −∞ and cohC = +∞)
In the formula, u is the rounded coherence value; coh1, coh2, ..., cohC−1 are the boundaries of the pre-trained coherence value interval; and C is the number of coherence value interval paragraphs.
In this embodiment, according to the angle value and the number of segments of the angle value, a rounded angle value corresponding to the angle value is determined, and a calculation formula thereof is as follows:
theta=floor(ang*P)
in the formula, theta is the angle value after rounding, floor is rounding downwards; p is the number of segments of the angle value.
It should be noted that the pre-trained intensity value interval, the number of intensity value interval paragraphs, the pre-trained coherence value interval, and the number of coherence value interval paragraphs provided in this embodiment are obtained as follows: a large number of high-resolution pictures are down-sampled to obtain a large number of low-resolution pictures; a large number of different intensity values and coherence values are calculated from the low-resolution pictures and sorted separately; and interval boundaries are then taken according to the preset number of interval paragraphs.
S307: and calculating to obtain the hash value of each pixel in the low-resolution image according to the rounded intensity value, coherence value and angle value corresponding to each pixel in the low-resolution image.
Specifically, the hash value of each pixel in the low-resolution image is determined from the rounded angle value, the number of intensity value interval paragraphs, the number of coherence value interval paragraphs, the rounded intensity value, and the rounded coherence value.
In this embodiment, the calculation formula of the hash value of each pixel in the low-resolution image is as follows:
hash=theta*Q*C+lambda*C+u
in the formula, the hash is the hash value of the pixel in the low-resolution image; theta is the angle value after rounding; q is the number of intensity value interval paragraphs; c is the number of the sections of the coherence value interval; lambda is the rounded intensity value; u is the rounded coherence value.
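The formula above is a mixed-radix encoding: with P angle bins, Q intensity bins, and C coherence bins, every (theta, lambda, u) triple maps to a distinct index in [0, P·Q·C). A minimal sketch (the name `hash_index` is hypothetical):

```python
def hash_index(theta, lam, u, Q, C):
    """Combine rounded angle, intensity and coherence into one hash.

    hash = theta * Q * C + lambda * C + u enumerates all (theta, lam, u)
    combinations uniquely, like digits in a mixed-radix number.
    """
    return theta * Q * C + lam * C + u
```

This index is what selects the filter bank in step S202, so distinct bins never collide.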
As can be seen from the above description, a square region centered on each pixel with pixel size k × k is determined in the low-resolution image; the pixel gradient values are calculated over the square region; a pixel gradient matrix is constructed from the pixel gradient values; the intensity value, coherence value, and angle value of the pixel gradient matrix are obtained from its eigenvalues; and the hash value of each pixel in the low-resolution image is obtained from the intensity value, coherence value, and angle value.
Referring to fig. 4, fig. 4 is a third schematic flowchart of an image processing method according to an embodiment of the present disclosure. On the basis of the embodiment corresponding to fig. 3, the embodiment further provides a process how to perform pre-training to obtain the pre-trained intensity value interval and the number of intensity value interval paragraphs, and the pre-trained coherence value interval and the number of coherence value interval paragraphs, which is detailed as follows:
s401: and acquiring a plurality of high-resolution pictures, and performing downsampling interpolation processing on the high-resolution pictures to obtain a plurality of low-resolution pictures.
In this embodiment, the plurality of high resolution pictures are subjected to downsampling interpolation processing, and the downsampling interpolation multiple is the same as the above-described upsampling multiple from the low resolution picture to the high resolution picture, for example, s times.
S402: A square region of pixel size k × k centered on each pixel of each low-resolution picture is determined.
In this embodiment, for each pixel in each low resolution picture, a corresponding square region with the pixel as a center and a pixel size of k × k is determined.
S403: calculating the pixel gradient value of each pixel in a square area corresponding to each pixel in each low-resolution picture; generating a pixel gradient matrix according to the pixel gradient value of each pixel; calculating eigenvalues of the pixel gradient matrix; and calculating the intensity value and the coherence value of the pixel gradient matrix according to the eigenvalues.
In this embodiment, the calculation process in this step is the same as the calculation process in the steps S302 to S305, and please refer to the description related to the steps S302 to S305, which is not repeated herein.
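As a concrete illustration of steps S302 to S305, the sketch below computes the intensity value, coherence value, and angle value for a single k x k patch from its 2 x 2 pixel gradient matrix; the exact gradient operator and normalization are illustrative assumptions, not specified by this embodiment:

```python
import numpy as np

def patch_features(patch):
    """Compute intensity, coherence and angle values for one k x k patch
    from its 2 x 2 pixel gradient (structure) matrix."""
    gy, gx = np.gradient(patch.astype(np.float64))  # vertical, horizontal gradients
    gx, gy = gx.ravel(), gy.ravel()
    # 2 x 2 gradient matrix: [[sum gx^2, sum gx*gy], [sum gx*gy, sum gy^2]]
    G = np.array([[gx @ gx, gx @ gy],
                  [gx @ gy, gy @ gy]])
    w, v = np.linalg.eigh(G)                   # eigenvalues in ascending order
    lam2, lam1 = np.sqrt(np.maximum(w, 0.0))   # lam1 >= lam2
    intensity = lam1
    coherence = (lam1 - lam2) / (lam1 + lam2 + 1e-12)
    # angle of the dominant eigenvector, folded into [0, pi)
    angle = np.arctan2(v[1, 1], v[0, 1]) % np.pi
    return intensity, coherence, angle
```

A patch that varies only horizontally yields a high coherence value and an angle near zero, matching the intuition that these features describe edge strength and orientation.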
S404: and sequencing the obtained multiple intensity values in an ascending order, and carrying out interval value taking on the multiple intensity values according to the number of preset intensity value interval paragraphs to obtain the pre-trained intensity value interval.
In this embodiment, a plurality of intensity values are obtained from the square regions on each low-resolution picture.
Here, assuming that there are N intensity values in total and the number of preset intensity value interval paragraphs is Q, the values at the N/Q-th, 2N/Q-th, ..., (Q-1)N/Q-th points of the ascending sequence are selected, obtaining the pre-trained intensity value interval, marked as str1, str2, ..., str(Q-1).
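The boundary selection described above can be sketched as follows (assuming simple integer-index quantile points; the rounding convention is an assumption):

```python
import numpy as np

def train_intervals(values, Q):
    """Sort the collected intensity (or coherence) values in ascending order
    and pick Q-1 evenly spaced points as interval boundaries str_1..str_{Q-1}."""
    v = np.sort(np.asarray(values, dtype=np.float64))
    n = len(v)
    return [float(v[(i * n) // Q]) for i in range(1, Q)]
```

For example, with 100 sorted values and Q = 4, the boundaries are the 25th, 50th, and 75th values of the sequence.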
S405: and sequencing the obtained plurality of coherence values in an ascending order, and carrying out interval value taking on the plurality of coherence values according to the number of preset coherence value interval paragraphs to obtain the pre-trained coherence value interval.
In this embodiment, a plurality of coherence values are obtained from the square regions on each low-resolution picture.
Here, assuming that there are N coherence values in total and the number of preset coherence value interval paragraphs is C, the values at the N/C-th, 2N/C-th, ..., (C-1)N/C-th points of the ascending sequence are selected, obtaining the pre-trained coherence value interval, marked as coh1, coh2, ..., coh(C-1).
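Once the interval boundaries are trained, the later rounding and quantization step reduces to locating a raw value among them; a minimal sketch, assuming boundaries sorted in ascending order:

```python
import numpy as np

def quantize(value, boundaries):
    """Map a raw intensity or coherence value to its rounded bucket index
    (0 .. len(boundaries)) by locating it among the pre-trained boundaries."""
    return int(np.searchsorted(boundaries, value, side='right'))
```

With boundaries [25.0, 50.0, 75.0], a value of 60 falls into bucket 2, i.e. between the second and third boundaries.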
According to the above description, a large number of low-resolution pictures are obtained by downsampling a large number of high-resolution pictures; a large number of different intensity values and coherence values are calculated from these low-resolution pictures and sorted separately; values are then taken according to the preset number of interval paragraphs, obtaining the pre-trained intensity value interval and coherence value interval, which provide an accurate reference basis for the subsequent rounding and quantization of intensity values, coherence values, and angle values.
Referring to fig. 5, fig. 5 is a fourth schematic flowchart of an image processing method according to an embodiment of the present disclosure. On the basis of any one of the embodiments corresponding to fig. 1 to 4, this embodiment further provides a process how to construct a corresponding relationship between a pre-stored hash value and a pre-trained filter bank, which is specifically detailed as follows:
S501: acquiring a plurality of high-resolution pictures, and performing downsampling interpolation processing on the high-resolution pictures to obtain a plurality of low-resolution pictures.
In this embodiment, the plurality of high resolution pictures are subjected to downsampling interpolation processing, and the downsampling interpolation multiple is the same as the above-described upsampling multiple from the low resolution picture to the high resolution picture, for example, s times.
S502: and determining each pixel in each low-resolution picture and a plurality of target pixels corresponding to each pixel in the high-resolution picture.
In this embodiment, in the process of performing the downsampling interpolation process in step S501, downsampling interpolation is performed on a plurality of target pixels in the high-resolution picture to obtain one pixel in the low-resolution picture.
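Under the assumption of a simple block correspondence between the low-resolution pixel grid and the high-resolution pixel grid (an illustrative choice; the embodiment does not fix the exact mapping), the s x s target pixels for a low-resolution pixel can be enumerated as:

```python
def target_pixels(i, j, s):
    """For the low-resolution pixel at (i, j) and upscaling factor s, return
    the coordinates of the s x s target pixels it corresponds to in the
    high-resolution picture, assuming a block correspondence."""
    return [(i * s + di, j * s + dj) for di in range(s) for dj in range(s)]
```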
S503: and calculating the hash value of a square area which takes each pixel as the center and has the pixel size of k x k in the low-resolution picture, wherein k is an odd number larger than 1.
In this embodiment, the calculation process in this step is the same as the calculation process in the steps S301 to S307, and reference is specifically made to the description related to the steps S301 to S307, which is not repeated herein.
S504: and arranging square area pixels corresponding to all pixels in the low-resolution pictures with the same hash value and a plurality of target pixels in the corresponding high-resolution pictures as a group to obtain m groups of square area pixels of the low-resolution pictures and a plurality of target pixels in the corresponding high-resolution pictures, wherein m is a positive integer.
In this embodiment, the value of m may be adjusted according to actual conditions. Optionally, m is greater than 100000.
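Step S504 amounts to bucketing training pairs by hash value; a minimal sketch:

```python
from collections import defaultdict

def group_by_hash(patches, targets, hashes):
    """Group each low-resolution k x k patch and its corresponding
    high-resolution target pixels under the patch's hash value."""
    groups = defaultdict(lambda: ([], []))
    for patch, target, h in zip(patches, targets, hashes):
        groups[h][0].append(patch)   # square area pixels with this hash
        groups[h][1].append(target)  # their high-resolution target pixels
    return groups
```

Each bucket then supplies the m rows of one matrix equation in step S505.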
S505: establishing simultaneous matrix equations from the m groups of square area pixels of the low-resolution pictures and the plurality of target pixels in the corresponding high-resolution pictures.
In one embodiment of the present disclosure, the matrix equation is:
PF=b
In the formula, P is the matrix of square area pixels in the low-resolution pictures, with m rows and k x k columns; F is the filter bank, with k x k rows and s x s columns; and b is the matrix of s x s target pixels on the high-resolution pictures, with m rows and s x s columns, where s is the multiple of the downsampling interpolation.
S506: and solving the filter banks in the m groups of matrix equations to obtain the filter bank corresponding to each hash value.
In this embodiment, a least squares method is used to solve the matrix equation PF = b corresponding to any hash value, obtaining the filter bank F = (P^T P)^(-1) P^T b corresponding to that hash value.
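In code, the normal-equation solution F = (P^T P)^(-1) P^T b is usually computed with a least-squares solver rather than an explicit matrix inverse, which gives the same minimizer with better numerical behavior:

```python
import numpy as np

def solve_filter_bank(P, b):
    """Least-squares solution of P F = b, equivalent to
    F = (P^T P)^{-1} P^T b when P has full column rank."""
    F, *_ = np.linalg.lstsq(P, b, rcond=None)
    return F
```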
As can be seen from the above description, the filter banks corresponding to different hash values are obtained by first downsampling a large number of high-resolution pictures to obtain a large number of low-resolution pictures; then, the high-resolution pictures and their corresponding low-resolution pictures are used as the training data set, with the low-resolution pictures as input and the high-resolution pictures as output, and the corresponding filter banks are obtained by solving.
Fig. 6 is a block diagram of an image processing apparatus according to an embodiment of the present disclosure, corresponding to the image processing method according to the above embodiment. For ease of illustration, only portions that are relevant to embodiments of the present disclosure are shown. Referring to fig. 6, the apparatus includes: a hash value determination module 601, a filter determination module 602, a filter processing module 603, and a pixel writing module 604.
The hash value determining module 601 is configured to obtain a low-resolution image and calculate a hash value of each pixel in the low-resolution image;
a filter determining module 602, configured to query a correspondence between a pre-stored hash value and a pre-trained filter bank to obtain a filter bank corresponding to the hash value of each pixel, where the filter bank includes multiple filters;
a filtering processing module 603, configured to filter, according to each filter in the filter bank, a pixel region corresponding to each pixel, respectively, to obtain a plurality of filtered pixel values;
a pixel writing module 604, configured to write the multiple filtered pixel values into corresponding coordinate positions of a high-resolution image, respectively, to obtain a high-resolution image corresponding to the low-resolution image.
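The cooperation of the four modules can be sketched end to end; this is a minimal illustration assuming `filters[h]` is a pre-trained (k*k, s*s) matrix, `hash_fn` computes the hash of a flattened patch, and edge padding handles border pixels:

```python
import numpy as np

def upscale(low, filters, hash_fn, k, s):
    """For each low-resolution pixel: hash its k x k neighbourhood, look up
    the filter bank for that hash, filter the patch, and write the s x s
    filtered pixel values to the high-resolution output."""
    h_img, w_img = low.shape
    r = k // 2
    padded = np.pad(low, r, mode='edge')
    high = np.zeros((h_img * s, w_img * s))
    for i in range(h_img):
        for j in range(w_img):
            patch = padded[i:i + k, j:j + k].ravel()
            F = filters[hash_fn(patch)]          # (k*k, s*s) filter bank
            out = patch @ F                      # s*s filtered pixel values
            high[i*s:(i+1)*s, j*s:(j+1)*s] = out.reshape(s, s)
    return high
```

As a sanity check, a single filter bank that simply copies the patch center reproduces nearest-neighbour upscaling.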
According to one or more embodiments of the present disclosure, the hash value determining module 601 is specifically configured to determine a square region in the low-resolution image, which is centered on each pixel and has a pixel size k × k, where k is an odd number greater than 1; calculating the pixel gradient value of each pixel in the square area; generating a pixel gradient matrix according to the pixel gradient value of each pixel in the square area; calculating eigenvalues of the pixel gradient matrix; calculating an intensity value, a coherence value and an angle value of the pixel gradient matrix according to the characteristic value; carrying out rounding quantization processing on the intensity value, the coherence value and the angle value to obtain a rounded intensity value, a rounded coherence value and an angle value; and calculating to obtain the hash value of each pixel in the low-resolution image according to the rounded intensity value, coherence value and angle value corresponding to each pixel in the low-resolution image.
According to one or more embodiments of the present disclosure, the hash value determining module 601, specifically configured to calculate the pixel gradient value of the square region, includes: respectively calculating a first pixel gradient of each pixel of the square area in the horizontal direction and a second pixel gradient in the vertical direction; correspondingly, the generating a pixel gradient matrix according to the pixel gradient value of each pixel in the square area includes: constructing a 2 x 2 pixel gradient matrix according to the pixel gradient value of each pixel of the square area; wherein a first element of the 2 x 2 pixel gradient matrix is equal to a sum of squares of a first pixel gradient of the respective pixel in a horizontal direction, a second element and a third element are equal to a sum of products of the first pixel gradient of the respective pixel in the horizontal direction and a second pixel gradient in a vertical direction, and a fourth element is equal to a sum of squares of a second pixel gradient of the respective pixel in the vertical direction; accordingly, the calculating the eigenvalues of the pixel gradient matrix comprises: calculating the eigenvalue of the 2 x 2 pixel gradient matrix to obtain a first eigenvalue and a second eigenvalue of the 2 x 2 pixel gradient matrix: correspondingly, the calculating the intensity value, coherence value and angle value of the pixel gradient matrix according to the characteristic value comprises: calculating to obtain the intensity value according to the first characteristic value; calculating to obtain the coherence value according to the first characteristic value and the second characteristic value; and calculating a feature vector corresponding to the first feature value, and calculating the angle value according to the feature vector.
According to one or more embodiments of the present disclosure, the hash value determining module 601 is specifically configured to perform rounding quantization processing on the intensity value, the coherence value, and the angle value to obtain a rounded intensity value, coherence value, and angle value, and includes: determining a rounded intensity value corresponding to the intensity value according to the intensity value, a pre-trained intensity value interval and the number of intensity value interval paragraphs; determining a rounded coherence value corresponding to the coherence value according to the coherence value, a pre-trained coherence value interval and the number of coherence value interval paragraphs; and determining the rounded angle value corresponding to the angle value according to the angle value and the segmentation quantity of the angle value.
According to one or more embodiments of the present disclosure, the hash value determining module 601 is specifically configured to calculate the hash value of each pixel in the low-resolution image according to the rounded intensity value, coherence value, and angle value corresponding to each pixel in the low-resolution image, and includes: determining the hash value of each pixel in the low-resolution image as the sum of the product of the rounded angle value, the number of intensity value interval paragraphs and the number of coherence value interval paragraphs, the product of the rounded intensity value and the number of coherence value interval paragraphs, and the rounded coherence value.
According to one or more embodiments of the present disclosure, the apparatus further comprises: an interval information pre-training module 605, configured to obtain multiple high-resolution pictures, and perform downsampling interpolation processing on the multiple high-resolution pictures to obtain multiple low-resolution pictures; determine a square region with each pixel as the center and the pixel size k x k on each low-resolution picture; calculate the pixel gradient value of each pixel in a square area corresponding to each pixel on each low-resolution picture; generate a pixel gradient matrix according to the pixel gradient value of each pixel; calculate eigenvalues of the pixel gradient matrix; calculate an intensity value and a coherence value of the pixel gradient matrix according to the eigenvalues; sort the obtained multiple intensity values in ascending order, and take interval values of the multiple intensity values according to the number of preset intensity value interval paragraphs to obtain the pre-trained intensity value interval; and sort the obtained multiple coherence values in ascending order, and take interval values of the multiple coherence values according to the number of preset coherence value interval paragraphs to obtain the pre-trained coherence value interval.
According to one or more embodiments of the present disclosure, the apparatus further comprises: a filter bank pre-training module 606, configured to obtain multiple high-resolution pictures, and perform downsampling interpolation processing on the multiple high-resolution pictures to obtain multiple low-resolution pictures; determining each pixel in each low-resolution picture and a plurality of target pixels in the high-resolution picture corresponding to each pixel; calculating the hash value of a square area which takes each pixel as the center and has the pixel size of k x k in the low-resolution picture, wherein k is an odd number larger than 1; the square area pixels corresponding to the pixels in the low-resolution pictures with the same hash value and the target pixels in the high-resolution pictures are arranged into a group, and m groups of square area pixels of the low-resolution pictures and a plurality of target pixels in the high-resolution pictures are obtained, wherein m is a positive integer; performing simultaneous matrix equations on square area pixels of the m groups of low-resolution pictures and a plurality of target pixels in the corresponding high-resolution pictures; and solving the filter banks in the m groups of matrix equations to obtain the filter bank corresponding to each hash value.
The apparatus provided in this embodiment may be used to implement the technical solutions of the above method embodiments, and the implementation principles and technical effects are similar, which are not described herein again.
In order to realize the above embodiments, the embodiments of the present disclosure further provide an electronic device.
Referring to fig. 7, a schematic structural diagram of an electronic device 700 suitable for implementing the embodiment of the present disclosure is shown, where the electronic device 700 may be a terminal device or a server. Among them, the terminal Device may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a Digital broadcast receiver, a Personal Digital Assistant (PDA), a tablet computer (PAD), a Portable Multimedia Player (PMP), a car terminal (e.g., car navigation terminal), etc., and a fixed terminal such as a Digital TV, a desktop computer, etc. The electronic device shown in fig. 7 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 7, the electronic device 700 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 701, which may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 702 or a program loaded from a storage means 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data necessary for the operation of the electronic apparatus 700 are also stored. The processing device 701, the ROM702, and the RAM 703 are connected to each other by a bus 704. An input/output (I/O) interface 705 is also connected to bus 704.
Generally, the following devices may be connected to the I/O interface 705: input devices 706 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 707 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 708 including, for example, magnetic tape, hard disk, etc.; and a communication device 709. The communication means 709 may allow the electronic device 700 to communicate wirelessly or by wire with other devices to exchange data. While fig. 7 illustrates an electronic device 700 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such embodiments, the computer program may be downloaded and installed from a network via the communication means 709, or may be installed from the storage means 708, or may be installed from the ROM 702. The computer program, when executed by the processing device 701, performs the above-described functions defined in the methods of the embodiments of the present disclosure.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to perform the methods shown in the above embodiments.
Computer program code for carrying out operations for aspects of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. Where the name of a unit does not in some cases constitute a limitation of the unit itself, for example, the first retrieving unit may also be described as a "unit for retrieving at least two internet protocol addresses".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
In a first aspect, according to one or more embodiments of the present disclosure, there is provided an image processing method including:
acquiring a low-resolution image, and calculating a hash value of each pixel in the low-resolution image;
inquiring the corresponding relation between a pre-stored hash value and a pre-trained filter bank to obtain the filter bank corresponding to the hash value of each pixel, wherein the filter bank comprises a plurality of filters;
respectively filtering the pixel regions corresponding to the pixels according to the filters in the filter bank to obtain a plurality of filtered pixel values;
and respectively writing the plurality of filtering pixel values into corresponding coordinate positions of a high-resolution image to obtain a high-resolution image corresponding to the low-resolution image.
According to one or more embodiments of the present disclosure, the calculating the hash value of each pixel in the low resolution image includes: determining a square region with the pixel size k x k and taking each pixel as the center in the low-resolution image, wherein k is an odd number larger than 1; calculating the pixel gradient value of each pixel in the square area; generating a pixel gradient matrix according to the pixel gradient value of each pixel in the square area; calculating eigenvalues of the pixel gradient matrix; calculating an intensity value, a coherence value and an angle value of the pixel gradient matrix according to the characteristic value; carrying out rounding quantization processing on the intensity value, the coherence value and the angle value to obtain a rounded intensity value, a rounded coherence value and an angle value; and calculating to obtain the hash value of each pixel in the low-resolution image according to the rounded intensity value, coherence value and angle value corresponding to each pixel in the low-resolution image.
According to one or more embodiments of the present disclosure, the calculating the pixel gradient value of the square region and generating a pixel gradient matrix according to the pixel gradient value of each pixel in the square region includes: respectively calculating a first pixel gradient of each pixel of the square area in the horizontal direction and a second pixel gradient in the vertical direction; constructing a 2 x 2 pixel gradient matrix according to the pixel gradient value of each pixel of the square area; wherein a first element of the 2 x 2 pixel gradient matrix is equal to a sum of squares of a first pixel gradient of the pixels in a horizontal direction, a second element and a third element are equal to a sum of products of the first pixel gradient of the pixels in the horizontal direction and a second pixel gradient of the pixels in a vertical direction, and a fourth element is equal to a sum of squares of a second pixel gradient of the pixels in the vertical direction.
According to one or more embodiments of the present disclosure, the calculating the eigenvalues of the pixel gradient matrix comprises: and calculating the characteristic value of the 2 x 2 pixel gradient matrix to obtain a first characteristic value and a second characteristic value of the 2 x 2 pixel gradient matrix.
According to one or more embodiments of the present disclosure, the calculating the intensity value, the coherence value, and the angle value of the pixel gradient matrix according to the eigenvalue includes: calculating to obtain the intensity value according to the first characteristic value; calculating to obtain the coherence value according to the first characteristic value and the second characteristic value; and calculating a feature vector corresponding to the first feature value, and calculating the angle value according to the feature vector.
According to one or more embodiments of the present disclosure, the rounding and quantizing the intensity value, the coherence value, and the angle value to obtain a rounded intensity value, a rounded coherence value, and a rounded angle value includes: determining a rounded intensity value corresponding to the intensity value according to the intensity value, a pre-trained intensity value interval and the number of intensity value interval paragraphs; determining a rounded coherence value corresponding to the coherence value according to the coherence value, a pre-trained coherence value interval and the number of coherence value interval paragraphs; and determining the rounded angle value corresponding to the angle value according to the angle value and the segmentation quantity of the angle value.
According to one or more embodiments of the present disclosure, the calculating the hash value of each pixel in the low-resolution image according to the rounded intensity value, coherence value, and angle value corresponding to each pixel in the low-resolution image includes: determining the hash value of each pixel in the low-resolution image as the sum of the product of the rounded angle value, the number of intensity value interval paragraphs and the number of coherence value interval paragraphs, the product of the rounded intensity value and the number of coherence value interval paragraphs, and the rounded coherence value.
According to one or more embodiments of the present disclosure, the method further comprises: obtaining a plurality of high-resolution pictures, and performing downsampling interpolation processing on the high-resolution pictures to obtain a plurality of low-resolution pictures; determining a square region with each pixel as the center and the pixel size k x k on each low-resolution picture; calculating the pixel gradient value of each pixel in a square area corresponding to each pixel on each low-resolution picture; generating a pixel gradient matrix according to the pixel gradient value of each pixel; calculating eigenvalues of the pixel gradient matrix; calculating an intensity value and a coherence value of the pixel gradient matrix according to the eigenvalues; sorting the obtained plurality of intensity values in ascending order, and taking interval values of the plurality of intensity values according to the number of preset intensity value interval paragraphs to obtain the pre-trained intensity value interval; and sorting the obtained plurality of coherence values in ascending order, and taking interval values of the plurality of coherence values according to the number of preset coherence value interval paragraphs to obtain the pre-trained coherence value interval.
According to one or more embodiments of the present disclosure, the method further comprises: obtaining a plurality of high-resolution pictures, and performing downsampling interpolation processing on the high-resolution pictures to obtain a plurality of low-resolution pictures; determining each pixel in each low-resolution picture and a plurality of target pixels in the high-resolution picture corresponding to each pixel; calculating the hash value of a square area which takes each pixel as the center and has the pixel size of k x k in the low-resolution picture, wherein k is an odd number larger than 1; the square area pixels corresponding to the pixels in the low-resolution pictures with the same hash value and the target pixels in the high-resolution pictures are arranged into a group, and m groups of square area pixels of the low-resolution pictures and a plurality of target pixels in the high-resolution pictures are obtained, wherein m is a positive integer; performing simultaneous matrix equations on square area pixels of the m groups of low-resolution pictures and a plurality of target pixels in the corresponding high-resolution pictures; and solving the filter banks in the m groups of matrix equations to obtain the filter bank corresponding to each hash value.
In accordance with one or more embodiments of the present disclosure, the matrix equation is:
PF=b
in the formula, P is the matrix of square-region pixels in the low-resolution pictures, with a size of m rows by k x k columns; F is the filter bank, with a size of k x k rows by s x s columns; and b is the matrix of the s x s target pixels on the corresponding high-resolution pictures, with a size of m rows by s x s columns, wherein s is the multiple of the downsampling interpolation.
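For illustration only, solving each group's matrix equation P F = b in the least-squares sense can be sketched as below. This is a sketch under the dimension reading given above; NumPy's `lstsq` is one possible solver, not necessarily the one used in the disclosure:

```python
import numpy as np

def solve_filter_bank(P, b):
    """Least-squares solve of P F = b for one hash bucket.

    P : (m, k*k)  flattened LR square-region pixels sharing one hash value
    b : (m, s*s)  corresponding HR target pixels
    Returns F : (k*k, s*s), one k*k filter per HR sub-pixel position.
    """
    F, *_ = np.linalg.lstsq(P, b, rcond=None)
    return F
```

When m exceeds k*k and the patch matrix has full rank, the solution is the unique least-squares filter bank for that hash value.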
In a second aspect, according to one or more embodiments of the present disclosure, there is provided an image processing apparatus including:
the hash value determining module is used for acquiring a low-resolution image and calculating the hash value of each pixel in the low-resolution image;
the filter determining module is used for inquiring the corresponding relation between a pre-stored hash value and a pre-trained filter bank to obtain the filter bank corresponding to the hash value of each pixel, wherein the filter bank comprises a plurality of filters;
the filtering processing module is used for respectively filtering the pixel regions corresponding to the pixels according to the filters in the filter bank to obtain a plurality of filtered pixel values;
and the pixel writing module is used for respectively writing the filtered pixel values into the corresponding coordinate positions of a high-resolution image to obtain a high-resolution image corresponding to the low-resolution image.
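By way of a hypothetical sketch (all names are invented for illustration), the four modules above cooperate roughly as follows: each low-resolution pixel is hashed, its filter bank is looked up, its k x k neighbourhood is filtered, and the s x s filtered values are written to the high-resolution grid:

```python
import numpy as np

def upscale(lr, filter_banks, hash_fn, k, s):
    """Sketch of the inference pipeline. `filter_banks` maps a hash
    value to an array of shape (k*k, s*s); `hash_fn` stands in for
    the gradient-based hashing of a k x k patch."""
    h, w = lr.shape
    r = k // 2
    hr = np.zeros((h * s, w * s))
    pad = np.pad(lr, r, mode="edge")       # replicate borders
    for y in range(h):
        for x in range(w):
            patch = pad[y:y + k, x:x + k]
            F = filter_banks[hash_fn(patch)]
            # one filtered value per HR sub-pixel position
            hr[y * s:(y + 1) * s, x * s:(x + 1) * s] = (
                patch.reshape(1, -1) @ F).reshape(s, s)
    return hr
```

Each output pixel is produced by exactly one of the s x s filters in the selected bank, which is what allows the high-resolution image to be assembled by direct writes.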
According to one or more embodiments of the present disclosure, the hash value determining module is specifically configured to determine a square region in the low-resolution image, which is centered on each pixel and has a pixel size of k × k, where k is an odd number greater than 1; calculate the pixel gradient value of each pixel in the square region; generate a pixel gradient matrix according to the pixel gradient values of the pixels in the square region; calculate eigenvalues of the pixel gradient matrix; calculate an intensity value, a coherence value and an angle value of the pixel gradient matrix according to the eigenvalues; perform rounding quantization processing on the intensity value, the coherence value and the angle value to obtain rounded intensity, coherence and angle values; and calculate the hash value of each pixel in the low-resolution image according to the rounded intensity, coherence and angle values corresponding to each pixel in the low-resolution image.
According to one or more embodiments of the present disclosure, the hash value determining module, specifically configured to calculate the pixel gradient values of the square region, includes: respectively calculating a first pixel gradient of each pixel of the square region in the horizontal direction and a second pixel gradient in the vertical direction; correspondingly, the generating a pixel gradient matrix according to the pixel gradient value of each pixel in the square region includes: constructing a 2 x 2 pixel gradient matrix according to the pixel gradient values of the pixels of the square region; wherein a first element of the 2 x 2 pixel gradient matrix is equal to the sum of squares of the first pixel gradients of the respective pixels in the horizontal direction, a second element and a third element are each equal to the sum of products of the first pixel gradients of the respective pixels in the horizontal direction and the second pixel gradients in the vertical direction, and a fourth element is equal to the sum of squares of the second pixel gradients of the respective pixels in the vertical direction; accordingly, the calculating the eigenvalues of the pixel gradient matrix includes: calculating the eigenvalues of the 2 x 2 pixel gradient matrix to obtain a first eigenvalue and a second eigenvalue of the 2 x 2 pixel gradient matrix; correspondingly, the calculating the intensity value, the coherence value and the angle value of the pixel gradient matrix according to the eigenvalues includes: calculating the intensity value according to the first eigenvalue; calculating the coherence value according to the first eigenvalue and the second eigenvalue; and calculating an eigenvector corresponding to the first eigenvalue, and calculating the angle value according to the eigenvector.
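The gradient-matrix construction and eigen-decomposition just described can be illustrated with the following sketch. The gradient operator and the precise strength and coherence formulas are assumptions (the disclosure does not fix them); the element layout of the 2 x 2 matrix follows the description above:

```python
import numpy as np

def patch_features(patch):
    """Strength, coherence and angle of a k x k patch from its
    2 x 2 pixel gradient matrix."""
    gy, gx = np.gradient(patch.astype(np.float64))
    # elements: [sum gx^2, sum gx*gy; sum gx*gy, sum gy^2]
    G = np.array([[np.sum(gx * gx), np.sum(gx * gy)],
                  [np.sum(gx * gy), np.sum(gy * gy)]])
    eigvals, eigvecs = np.linalg.eigh(G)    # ascending eigenvalues
    l2, l1 = eigvals                        # l1 is the first (largest)
    strength = l1
    s1, s2 = np.sqrt(max(l1, 0.0)), np.sqrt(max(l2, 0.0))
    coherence = (s1 - s2) / (s1 + s2) if (s1 + s2) > 0 else 0.0
    v = eigvecs[:, 1]                       # eigenvector of l1
    angle = np.arctan2(v[1], v[0]) % np.pi  # direction is sign-free
    return strength, coherence, angle
```

A patch with a pure horizontal ramp yields coherence close to 1 and angle close to 0, matching the intuition that the gradient field is strongly aligned.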
According to one or more embodiments of the present disclosure, the hash value determining module is specifically configured to perform rounding quantization processing on the intensity value, the coherence value and the angle value to obtain rounded intensity, coherence and angle values, and includes: determining a rounded intensity value corresponding to the intensity value according to the intensity value, a pre-trained intensity value interval and the number of intensity value interval paragraphs; determining a rounded coherence value corresponding to the coherence value according to the coherence value, a pre-trained coherence value interval and the number of coherence value interval paragraphs; and determining the rounded angle value corresponding to the angle value according to the angle value and the segmentation number of the angle value.
According to one or more embodiments of the present disclosure, the hash value determining module is specifically configured to calculate the hash value of each pixel in the low-resolution image according to the rounded intensity value, coherence value and angle value corresponding to each pixel in the low-resolution image, and includes: determining the hash value of each pixel in the low-resolution image as the sum of the product of the rounded angle value, the number of intensity value interval paragraphs and the number of coherence value interval paragraphs, the product of the rounded intensity value and the number of coherence value interval paragraphs, and the rounded coherence value.
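The combination of the three rounded indices described above is a standard mixed-radix flattening into a single bucket identifier; a sketch with invented names:

```python
def hash_index(angle_q, strength_q, coherence_q,
               n_strength, n_coherence):
    """Flatten the rounded angle, intensity and coherence indices
    into one hash value: angle index times both paragraph counts,
    plus intensity index times the coherence paragraph count,
    plus the coherence index."""
    return (angle_q * n_strength * n_coherence
            + strength_q * n_coherence
            + coherence_q)
```

With n_angle angle segments and n_strength, n_coherence interval paragraphs, the hash values cover 0 to n_angle * n_strength * n_coherence - 1 without collisions, so each bucket can own one filter bank.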
According to one or more embodiments of the present disclosure, the apparatus further comprises: the interval information pre-training module is used for obtaining a plurality of high-resolution pictures and performing downsampling interpolation processing on the high-resolution pictures to obtain a plurality of low-resolution pictures; determining, on each low-resolution picture, a square region centered on each pixel with a pixel size of k x k; calculating the pixel gradient value of each pixel in the square region corresponding to each pixel on each low-resolution picture; generating a pixel gradient matrix according to the pixel gradient values; calculating eigenvalues of the pixel gradient matrix; calculating an intensity value and a coherence value of the pixel gradient matrix according to the eigenvalues; sorting the obtained intensity values in ascending order, and taking values from the sorted intensity values at equal intervals according to a preset number of intensity value interval paragraphs to obtain the pre-trained intensity value interval; and sorting the obtained coherence values in ascending order, and taking values from the sorted coherence values at equal intervals according to a preset number of coherence value interval paragraphs to obtain the pre-trained coherence value interval.
According to one or more embodiments of the present disclosure, the apparatus further comprises: the filter bank pre-training module is used for obtaining a plurality of high-resolution pictures and performing downsampling interpolation processing on the high-resolution pictures to obtain a plurality of low-resolution pictures; determining each pixel in each low-resolution picture and a plurality of target pixels in the high-resolution picture corresponding to each pixel; calculating the hash value of a square region centered on each pixel with a pixel size of k x k in the low-resolution picture, wherein k is an odd number greater than 1; grouping the square-region pixels corresponding to the pixels in the low-resolution pictures that have the same hash value together with the corresponding target pixels in the high-resolution pictures, so as to obtain m groups of square-region pixels of the low-resolution pictures and a plurality of target pixels in the high-resolution pictures, wherein m is a positive integer; forming simultaneous matrix equations from the m groups of square-region pixels of the low-resolution pictures and the corresponding target pixels in the high-resolution pictures; and solving the filter banks in the m groups of matrix equations to obtain the filter bank corresponding to each hash value.
In a third aspect, according to one or more embodiments of the present disclosure, there is provided an electronic device including: at least one processor and memory;
the memory stores computer-executable instructions;
the at least one processor executes computer-executable instructions stored by the memory to cause the at least one processor to perform the image processing method as set forth in the first aspect above and in various possible designs of the first aspect.
In a fourth aspect, according to one or more embodiments of the present disclosure, there is provided a computer-readable storage medium having stored therein computer-executable instructions that, when executed by a processor, implement the image processing method as described in the first aspect above and in various possible designs of the first aspect.
In a fifth aspect, according to one or more embodiments of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the image processing method as described above in the first aspect and various possible designs of the first aspect.
The foregoing description is merely an illustration of the preferred embodiments of the present disclosure and of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the disclosure is not limited to technical solutions formed by the particular combination of the features described above, and also encompasses other technical solutions formed by any combination of the above features or their equivalents without departing from the concept of the disclosure, for example, technical solutions formed by replacing the above features with (but not limited to) features disclosed in this disclosure that have similar functions.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (14)

1. An image processing method, comprising:
acquiring a low-resolution image, and calculating a hash value of each pixel in the low-resolution image;
inquiring the corresponding relation between a pre-stored hash value and a pre-trained filter bank to obtain the filter bank corresponding to the hash value of each pixel, wherein the filter bank comprises a plurality of filters;
respectively filtering the pixel regions corresponding to the pixels according to the filters in the filter bank to obtain a plurality of filtered pixel values;
and respectively writing the plurality of filtered pixel values into corresponding coordinate positions of a high-resolution image to obtain a high-resolution image corresponding to the low-resolution image.
2. The method of claim 1, wherein the computing the hash value for each pixel in the low resolution image comprises:
determining, in the low-resolution image, a square region centered on each pixel with a pixel size of k x k, wherein k is an odd number greater than 1;
calculating the pixel gradient value of each pixel in the square area;
generating a pixel gradient matrix according to the pixel gradient value of each pixel in the square area;
calculating eigenvalues of the pixel gradient matrix;
calculating an intensity value, a coherence value and an angle value of the pixel gradient matrix according to the eigenvalues;
carrying out rounding quantization processing on the intensity value, the coherence value and the angle value to obtain rounded intensity, coherence and angle values;
and calculating to obtain the hash value of each pixel in the low-resolution image according to the rounded intensity value, coherence value and angle value corresponding to each pixel in the low-resolution image.
3. The method of claim 2, wherein calculating the pixel gradient values for the square region and generating a pixel gradient matrix based on the pixel gradient values for each pixel in the square region comprises:
respectively calculating a first pixel gradient of each pixel of the square area in the horizontal direction and a second pixel gradient in the vertical direction;
constructing a 2 x 2 pixel gradient matrix according to the pixel gradient value of each pixel of the square area; wherein a first element of the 2 x 2 pixel gradient matrix is equal to a sum of squares of a first pixel gradient of the pixels in a horizontal direction, a second element and a third element are equal to a sum of products of the first pixel gradient of the pixels in the horizontal direction and a second pixel gradient of the pixels in a vertical direction, and a fourth element is equal to a sum of squares of a second pixel gradient of the pixels in the vertical direction.
4. The method of claim 3, wherein the computing eigenvalues of the pixel gradient matrix comprises:
and calculating the eigenvalues of the 2 x 2 pixel gradient matrix to obtain a first eigenvalue and a second eigenvalue of the 2 x 2 pixel gradient matrix.
5. The method of claim 4, wherein said calculating intensity values, coherence values, and angle values of the pixel gradient matrix from the eigenvalues comprises:
calculating the intensity value according to the first eigenvalue;
calculating the coherence value according to the first eigenvalue and the second eigenvalue;
and calculating an eigenvector corresponding to the first eigenvalue, and calculating the angle value according to the eigenvector.
6. The method of claim 2, wherein the rounding quantization process on the intensity values, coherence values and angle values to obtain rounded intensity values, coherence values and angle values comprises:
determining a rounded intensity value corresponding to the intensity value according to the intensity value, a pre-trained intensity value interval and the number of intensity value interval paragraphs;
determining a rounded coherence value corresponding to the coherence value according to the coherence value, a pre-trained coherence value interval and the number of paragraphs of the coherence value interval;
and determining the rounded angle value corresponding to the angle value according to the angle value and the segmentation quantity of the angle value.
7. The method according to claim 6, wherein the calculating the hash value of each pixel in the low resolution image according to the rounded intensity value, coherence value and angle value corresponding to each pixel in the low resolution image comprises:
and determining the hash value of each pixel in the low-resolution image as the sum of the product of the rounded angle value, the number of intensity value interval paragraphs and the number of coherence value interval paragraphs, the product of the rounded intensity value and the number of coherence value interval paragraphs, and the rounded coherence value.
8. The method of claim 6, further comprising:
obtaining a plurality of high-resolution pictures, and performing downsampling interpolation processing on the high-resolution pictures to obtain a plurality of low-resolution pictures;
determining, on each low-resolution picture, a square region centered on each pixel with a pixel size of k x k;
calculating the pixel gradient value of each pixel in the square region corresponding to each pixel on each low-resolution picture; generating a pixel gradient matrix according to the pixel gradient values; calculating eigenvalues of the pixel gradient matrix; and calculating an intensity value and a coherence value of the pixel gradient matrix according to the eigenvalues;
sorting the obtained intensity values in ascending order, and taking values from the sorted intensity values at equal intervals according to a preset number of intensity value interval paragraphs to obtain the pre-trained intensity value interval;
and sorting the obtained coherence values in ascending order, and taking values from the sorted coherence values at equal intervals according to a preset number of coherence value interval paragraphs to obtain the pre-trained coherence value interval.
9. The method according to any one of claims 1 to 8, further comprising:
obtaining a plurality of high-resolution pictures, and performing downsampling interpolation processing on the high-resolution pictures to obtain a plurality of low-resolution pictures;
determining each pixel in each low-resolution picture and a plurality of target pixels in the high-resolution picture corresponding to each pixel;
calculating the hash value of a square area which takes each pixel as the center and has the pixel size of k x k in the low-resolution picture, wherein k is an odd number larger than 1;
grouping the square-region pixels corresponding to the pixels in the low-resolution pictures that have the same hash value together with the corresponding target pixels in the high-resolution pictures, so as to obtain m groups of square-region pixels of the low-resolution pictures and a plurality of target pixels in the high-resolution pictures, wherein m is a positive integer;
forming simultaneous matrix equations from the m groups of square-region pixels of the low-resolution pictures and the corresponding target pixels in the high-resolution pictures;
and solving the filter banks in the m groups of matrix equations to obtain the filter bank corresponding to each hash value.
10. The method of claim 9, wherein the matrix equation is:
PF=b
in the formula, P is the matrix of square-region pixels in the low-resolution pictures, with a size of m rows by k x k columns; F is the filter bank, with a size of k x k rows by s x s columns; and b is the matrix of the s x s target pixels on the corresponding high-resolution pictures, with a size of m rows by s x s columns, wherein s is the multiple of the downsampling interpolation.
11. An image processing apparatus characterized by comprising:
the hash value determining module is used for acquiring a low-resolution image and calculating the hash value of each pixel in the low-resolution image;
the filter determining module is used for inquiring the corresponding relation between a pre-stored hash value and a pre-trained filter bank to obtain the filter bank corresponding to the hash value of each pixel, wherein the filter bank comprises a plurality of filters;
the filtering processing module is used for respectively filtering the pixel regions corresponding to the pixels according to the filters in the filter bank to obtain a plurality of filtered pixel values;
and the pixel writing module is used for respectively writing the filtered pixel values into the corresponding coordinate positions of a high-resolution image to obtain a high-resolution image corresponding to the low-resolution image.
12. An electronic device, comprising: a processor and a memory;
the memory stores computer-executable instructions;
the processor executing the computer-executable instructions stored by the memory causes the processor to perform the image processing method of any of claims 1 to 10.
13. A computer-readable storage medium having stored thereon computer-executable instructions which, when executed by a processor, implement the image processing method according to any one of claims 1 to 10.
14. A computer program product comprising a computer program, characterized in that the computer program, when executed by a processor, implements the image processing method according to any one of claims 1 to 10.
CN202111101796.6A 2021-09-18 2021-09-18 Image processing method and apparatus Pending CN113808020A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202111101796.6A CN113808020A (en) 2021-09-18 2021-09-18 Image processing method and apparatus
PCT/CN2022/113117 WO2023040563A1 (en) 2021-09-18 2022-08-17 Image processing method and device

Publications (1)

Publication Number Publication Date
CN113808020A true CN113808020A (en) 2021-12-17

Family

ID=78896005

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111101796.6A Pending CN113808020A (en) 2021-09-18 2021-09-18 Image processing method and apparatus

Country Status (2)

Country Link
CN (1) CN113808020A (en)
WO (1) WO2023040563A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023040563A1 (en) * 2021-09-18 2023-03-23 北京字节跳动网络技术有限公司 Image processing method and device

Citations (7)

Publication number Priority date Publication date Assignee Title
CN107527321A (en) * 2017-08-22 2017-12-29 维沃移动通信有限公司 A kind of image rebuilding method, terminal and computer-readable recording medium
CN108765343A (en) * 2018-05-29 2018-11-06 Oppo(重庆)智能科技有限公司 Method, apparatus, terminal and the computer readable storage medium of image procossing
CN110070486A (en) * 2018-01-24 2019-07-30 杭州海康威视数字技术股份有限公司 A kind of image processing method, device and electronic equipment
CN111445424A (en) * 2019-07-23 2020-07-24 广州市百果园信息技术有限公司 Image processing method, image processing device, mobile terminal video processing method, mobile terminal video processing device, mobile terminal video processing equipment and mobile terminal video processing medium
CN111783896A (en) * 2020-07-08 2020-10-16 汪金玲 Image identification method and system based on kernel method
CN111951167A (en) * 2020-08-25 2020-11-17 深圳思谋信息科技有限公司 Super-resolution image reconstruction method, super-resolution image reconstruction device, computer equipment and storage medium
WO2021102644A1 (en) * 2019-11-25 2021-06-03 中国科学院深圳先进技术研究院 Image enhancement method and apparatus, and terminal device

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
US10776904B2 (en) * 2017-05-03 2020-09-15 Samsung Electronics Co., Ltd. Method and apparatus for processing image
CN113808020A (en) * 2021-09-18 2021-12-17 北京字节跳动网络技术有限公司 Image processing method and apparatus

Non-Patent Citations (1)

Title
Yang Fan: "Digital Image Processing and Analysis", Beijing: Beihang University Press, pages 10-13 *


Also Published As

Publication number Publication date
WO2023040563A1 (en) 2023-03-23

Similar Documents

Publication Publication Date Title
CN111242881B (en) Method, device, storage medium and electronic equipment for displaying special effects
WO2019153671A1 (en) Image super-resolution method and apparatus, and computer readable storage medium
CN110796664B (en) Image processing method, device, electronic equipment and computer readable storage medium
WO2022237811A1 (en) Image processing method and apparatus, and device
CN110288520B (en) Image beautifying method and device and electronic equipment
CN108700988B (en) Digital image presentation
CN110298851B (en) Training method and device for human body segmentation neural network
CN110070495B (en) Image processing method and device and electronic equipment
CN113055611B (en) Image processing method and device
CN108876716B (en) Super-resolution reconstruction method and device
CN113141518B (en) Control method and control device for video frame images in live classroom
CN111325704A (en) Image restoration method and device, electronic equipment and computer-readable storage medium
CN111127603B (en) Animation generation method and device, electronic equipment and computer readable storage medium
WO2023040563A1 (en) Image processing method and device
CN110310293B (en) Human body image segmentation method and device
CN114445269A (en) Image special effect processing method, device, equipment and medium
CN112927163A (en) Image data enhancement method and device, electronic equipment and storage medium
CN117274055A (en) Polarized image super-resolution reconstruction method and system based on information multiplexing
CN110223220B (en) Method and device for processing image
CN110321454B (en) Video processing method and device, electronic equipment and computer readable storage medium
CN108074281B (en) Pyramid panorama model generation method and device, storage medium and electronic device
CN116596748A (en) Image stylization processing method, apparatus, device, storage medium, and program product
CN113255812B (en) Video frame detection method and device and electronic equipment
CN114723600A (en) Method, device, equipment, storage medium and program product for generating cosmetic special effect
CN111738958B (en) Picture restoration method and device, electronic equipment and computer readable medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination