WO2021254381A1 - Image processing method and device, electronic equipment, and computer-readable storage medium - Google Patents
Image processing method and device, electronic equipment, and computer-readable storage medium
- Publication number
- WO2021254381A1 (PCT/CN2021/100328)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- edge direction
- value
- edge
- pixel
- neighborhood
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4007—Scaling of whole images or parts thereof, e.g. expanding or contracting based on interpolation, e.g. bilinear interpolation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/403—Edge-driven scaling; Edge-based scaling
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/77—Retouching; Inpainting; Scratch removal
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/136—Segmentation; Edge detection involving thresholding
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20172—Image enhancement details
- G06T2207/20192—Edge enhancement; Edge preservation
Definitions
- the present disclosure relates to the field of image processing technology.
- it relates to an image processing method and device, electronic equipment, and computer-readable storage medium.
- image resolution enhancement is an important research direction.
- existing image resolution enhancement methods are generally implemented with image interpolation algorithms in which every interpolated pixel is processed in the same manner. This results in low definition and poor quality of the processed image, and cannot achieve resolution enhancement of the original image at an arbitrary ratio.
- an image processing method includes:
- according to the position coordinates, in the target image, of any interpolated pixel to be processed, the position coordinates of the interpolated pixel in the original image are determined; the target image is the original image after resolution enhancement, and the interpolated pixels are pixels generated during resolution enhancement.
- the pixel value of the interpolated pixel is calculated by a first interpolation algorithm based on all original pixels in the n×n neighborhood.
- the gradient values of at least two edge directions in the n×n neighborhood are calculated respectively, and whether a strong edge direction exists among the at least two edge directions is determined according to the gradient values of the at least two edge directions. If it exists, the pixel value of the interpolated pixel is calculated by the first interpolation algorithm based on the multiple original pixels in the strong edge direction in the n×n neighborhood; if it does not exist, the pixel value of the interpolated pixel is calculated by the second interpolation algorithm based on all original pixels in the n×n neighborhood.
- calculating the gradient values of at least two edge directions in the n×n neighborhood respectively includes: separately obtaining a plurality of original pixels in each edge direction in the n×n neighborhood; calculating the absolute value of the difference between the gray values of every two adjacent original pixels in each edge direction; and taking the sum or the mean of the absolute values of the gray value differences of every two adjacent original pixels in a selected range in each edge direction as the gradient value of that edge direction.
- the at least two edge directions include a first edge direction, a second edge direction, a third edge direction, and a fourth edge direction.
- said calculating respectively the gradient values of at least two edge directions in the n×n neighborhood includes: taking the sum or the mean of the absolute values of the gray value differences of every two adjacent original pixels in a selected range in the first edge direction as the gradient value of the first edge direction; taking the sum or the mean of the absolute values of the gray value differences of every two adjacent original pixels in a selected range in the second edge direction as the gradient value of the second edge direction; taking the sum or the mean of the absolute values of the gray value differences of every two adjacent original pixels in a selected range in the third edge direction as the gradient value of the third edge direction; and taking the sum or the mean of the absolute values of the gray value differences of every two adjacent original pixels in a selected range in the fourth edge direction as the gradient value of the fourth edge direction.
- determining whether a strong edge direction exists among the at least two edge directions according to their gradient values includes: determining the ratio ρ1 of the larger to the smaller of the gradient value of the first edge direction and the gradient value of the second edge direction, the ratio ρ2 of the larger to the smaller of the gradient value of the third edge direction and the gradient value of the fourth edge direction, and a preset ratio threshold T; if ρ1 > ρ2 > T, it is determined that a strong edge direction exists, and the edge direction with the larger gradient value among the first edge direction and the second edge direction is determined as the strong edge direction; if ρ2 > ρ1 > T, it is determined that a strong edge direction exists, and the edge direction with the larger gradient value among the third edge direction and the fourth edge direction is determined as the strong edge direction; if ρ1 ≤ T and/or ρ2 ≤ T, it is determined that no strong edge direction exists.
- the at least two edge directions include a first edge direction and a second edge direction.
- determining whether a strong edge direction exists among the at least two edge directions according to their gradient values includes: determining the ratio ρ of the larger to the smaller of the gradient value of the first edge direction and the gradient value of the second edge direction, and a preset ratio threshold T; if ρ > T, it is determined that a strong edge direction exists, and the edge direction with the larger gradient value among the first edge direction and the second edge direction is determined as the strong edge direction; if ρ ≤ T, it is determined that no strong edge direction exists.
- the at least two edge directions include a first edge direction, a second edge direction, and a third edge direction.
- determining whether a strong edge direction exists among the at least two edge directions according to their gradient values includes: determining the ratio ρ3 of the larger to the smaller of the gradient value of the first edge direction and the gradient value of the second edge direction, the ratio ρ4 of the larger to the smaller of the gradient value of the second edge direction and the gradient value of the third edge direction, and a preset ratio threshold T; if ρ3 > ρ4 > T, it is determined that a strong edge direction exists, and the edge direction with the larger gradient value among the first edge direction and the second edge direction is determined as the strong edge direction; if ρ4 > ρ3 > T, it is determined that a strong edge direction exists, and the edge direction with the larger gradient value among the second edge direction and the third edge direction is determined as the strong edge direction; if ρ3 ≤ T and/or ρ4 ≤ T, it is determined that no strong edge direction exists.
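The four-direction decision rule above (ratios ρ1 and ρ2 compared against the threshold T) can be sketched as follows. The function name, the 0-to-3 index convention for the four edge directions, and the epsilon guard against division by zero are illustrative assumptions, not part of the patent:

```python
def strong_edge_direction(g1, g2, g3, g4, T=1.25):
    """Return the index (0..3) of the strong edge direction, or None.

    g1..g4: gradient values of the first..fourth edge directions.
    T: preset ratio threshold (the text suggests a range of 1.2 to 1.3).
    """
    eps = 1e-12  # guard against division by zero (assumption, not from the text)
    rho1 = max(g1, g2) / (min(g1, g2) + eps)  # ratio for directions 1 and 2
    rho2 = max(g3, g4) / (min(g3, g4) + eps)  # ratio for directions 3 and 4
    if rho1 > rho2 > T:
        # the direction with the larger gradient value is the strong edge
        return 0 if g1 > g2 else 1
    if rho2 > rho1 > T:
        return 2 if g3 > g4 else 3
    return None  # rho1 <= T and/or rho2 <= T: no strong edge direction
```

Note that both ratios must exceed T for a strong edge to be reported, matching the "ρ1 ≤ T and/or ρ2 ≤ T" branch above.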
- in a case where the sum of the absolute values of the gray value differences of every two adjacent original pixels in the selected range in an edge direction is used as the gradient value of that edge direction, the numbers of original pixels included in the selected ranges used for calculating the two compared gradient values in their respective edge directions are equal.
- the preset ratio threshold T has a value range of 1.2 to 1.3.
- the at least two edge directions include a first edge direction and a second edge direction, and the first edge direction is substantially perpendicular to the second edge direction.
- the at least two edge directions further include a third edge direction and a fourth edge direction; the third edge direction is substantially perpendicular to the fourth edge direction, and the angle between the first edge direction and the third edge direction is approximately 45°.
- the first edge direction is substantially parallel to one of the two diagonals of the rectangle defined by the n×n neighborhood; the second edge direction is substantially parallel to the other of the two diagonals of that rectangle; the third edge direction is approximately parallel to the row direction in which the plurality of pixels in the n×n neighborhood are arranged; and the fourth edge direction is substantially parallel to the column direction in which the plurality of pixels in the n×n neighborhood are arranged.
- the preset entropy threshold value ranges from 0.3 to 0.8.
- the n×n neighborhood is a 4×4 neighborhood.
- the first interpolation algorithm is a bicubic convolution interpolation algorithm
- the second interpolation algorithm is a B-spline interpolation algorithm
- In another aspect, an image processing method includes:
- according to the position coordinates, in the target image, of any interpolated pixel to be processed, the position coordinates of the interpolated pixel in the original image are determined; the target image is the original image after resolution enhancement, and the interpolated pixels are pixels generated during resolution enhancement.
- the gradient values of at least two edge directions of the interpolated pixel in its n×n neighborhood in the original image are respectively calculated, and whether a strong edge direction exists among the at least two edge directions is determined according to those gradient values, where n ≥ 2 and n is a positive integer. If it exists, the pixel value of the interpolated pixel is calculated by the first interpolation algorithm based on multiple original pixels in the strong edge direction in the n×n neighborhood; if it does not exist, the two-dimensional image entropy of the interpolated pixel in the n×n neighborhood is calculated.
- the pixel value of the interpolated pixel is calculated by a first interpolation algorithm based on all original pixels in the n×n neighborhood.
- the pixel value of the interpolated pixel is calculated by a second interpolation algorithm based on all original pixels in the n×n neighborhood.
- in another aspect, an image processing device includes a coordinate mapping component, a two-dimensional image entropy calculation component, a strong edge direction judgment component, a first pixel value calculation component, and a second pixel value calculation component.
- the coordinate mapping component is configured to determine the position coordinates of the interpolated pixel in the original image according to the position coordinates of any interpolated pixel to be processed on the target image; the target image is the original image after resolution enhancement, and the interpolated pixels are pixels generated during resolution enhancement.
- the two-dimensional image entropy calculation component is configured to calculate the two-dimensional image entropy of the n×n neighborhood of the interpolated pixel in the original image according to the position coordinates of the interpolated pixel in the original image, where n ≥ 2 and n is a positive integer.
- the strong edge direction judgment component is configured to calculate the gradient values of at least two edge directions of the interpolated pixel in the n×n neighborhood in the original image according to the position coordinates of the interpolated pixel in the original image, and to determine whether a strong edge direction exists among the at least two edge directions according to those gradient values.
- the first pixel value calculation component is configured to, in a case where a strong edge direction exists among the at least two edge directions, calculate the pixel value of the interpolated pixel by the first interpolation algorithm based on a plurality of original pixels in the strong edge direction in the n×n neighborhood; and, in a case where the two-dimensional image entropy is greater than or equal to a preset entropy threshold, calculate the pixel value of the interpolated pixel by the first interpolation algorithm based on all original pixels in the n×n neighborhood.
- the second pixel value calculation component is configured to, in a case where the two-dimensional image entropy is less than the preset entropy threshold and no strong edge direction exists among the at least two edge directions, calculate the pixel value of the interpolated pixel by the second interpolation algorithm based on all original pixels in the n×n neighborhood.
- in another aspect, an electronic device is provided. The electronic device includes a processor and a memory; the memory stores computer program instructions suitable for execution by the processor, and the computer program instructions, when executed by the processor, cause the processor to perform the image processing method described in any of the foregoing embodiments.
- a computer-readable storage medium stores computer program instructions, where the computer program instructions, when run on a computer, cause the computer to execute the image processing method as described in any of the foregoing embodiments.
- a computer program product includes computer program instructions, wherein when the computer program instructions are executed on a computer, the computer program instructions cause the computer to execute the image processing method as described in any of the foregoing embodiments.
- a computer program is provided.
- when the computer program is executed on a computer, the computer program causes the computer to execute the image processing method described in any of the above-mentioned embodiments.
- Fig. 1 is a flowchart of an image processing method according to some embodiments
- Fig. 2 is another flowchart of an image processing method according to some embodiments.
- Fig. 3 is yet another flowchart of an image processing method according to some embodiments.
- Fig. 4 is another flowchart of an image processing method according to some embodiments.
- Fig. 5 is yet another flowchart of an image processing method according to some embodiments.
- Fig. 6 is still another flowchart of an image processing method according to some embodiments.
- FIG. 7 is a schematic diagram of interpolation pixels and their 4 ⁇ 4 neighborhood pixels in an image processing method according to some embodiments.
- Figure 8 is a schematic diagram of different texture types according to some embodiments.
- FIG. 9 is a schematic diagram of the first edge direction, the second edge direction, the third edge direction, and the fourth edge direction marked in FIG. 7;
- FIG. 10 is a schematic diagram of original pixels in a selected range in a first edge direction of an image processing method according to some embodiments
- FIG. 11 is a schematic diagram of original pixels in a selected range in a second edge direction of an image processing method according to some embodiments.
- FIG. 12 is a schematic diagram of original pixels in a selected range in a third edge direction of an image processing method according to some embodiments.
- FIG. 13 is a schematic diagram of original pixels in a selected range in a fourth edge direction of an image processing method according to some embodiments.
- FIG. 14 is a comparative effect diagram of image processing methods before and after processing according to some embodiments.
- Figure 15 is a partial enlarged view of the box in Figure 14;
- Fig. 16 is a flowchart of an image processing method according to other embodiments.
- Fig. 17 is a structural diagram of an image processing apparatus according to some embodiments.
- FIG. 18 is a structural diagram of another image processing apparatus according to some embodiments.
- Figure 19 is a structural diagram of an electronic device according to some embodiments.
- first and second are only used for descriptive purposes, and cannot be understood as indicating or implying relative importance or implicitly indicating the number of indicated technical features. Therefore, the features defined with “first” and “second” may explicitly or implicitly include one or more of these features.
- plural means two or more.
- the expressions “coupled” and “connected” and their extensions may be used.
- the term “connected” may be used when describing some embodiments to indicate that two or more components are in direct physical or electrical contact with each other.
- the term “coupled” may be used when describing some embodiments to indicate that two or more components have direct physical or electrical contact.
- the term “coupled” or “communicatively coupled” may also mean that two or more components are not in direct contact with each other, but still cooperate or interact with each other.
- the embodiments disclosed herein are not necessarily limited to the content of this document.
- "A and/or B" includes the following three combinations: A only, B only, and the combination of A and B.
- some embodiments of the present disclosure provide an image processing method for achieving image resolution enhancement, and the method includes S10 to S50.
- S10 Determine the position coordinates of the interpolated pixel in the original image according to the position coordinates, in the target image, of any interpolated pixel to be processed.
- the target image is an image obtained by performing resolution enhancement on the original image
- the interpolated pixel points are pixels generated when the resolution enhancement is performed.
- the position coordinates of the interpolation pixels in the original image can be determined through coordinate mapping, for example:
- assume the position coordinates of the interpolated pixel in the target image are (u, v); they are mapped to the original image according to the following mapping formulas:
- mapping formula in the X axis direction (the width direction of the image) is:
- mapping formula in the Y axis direction (the height direction of the image) is:
- (float) means to take floating point data
- inv_scale_x means the ratio of the target image to the original image in the X-axis direction
- inv_scale_y means the ratio of the target image to the original image in the Y-axis direction
- floor(fx) means to round fx down to an integer
- floor(fy) means to round down fy
- (x, y) is the position coordinate of the interpolation pixel in the original image.
- the above-mentioned coordinate mapping formula can be used to achieve any ratio of resolution enhancement to the original image.
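The mapping formulas themselves are not reproduced in this excerpt, so the sketch below uses the half-pixel-aligned destination-to-source mapping commonly used for image resizing (e.g. in OpenCV's resize); the exact expression and the half-pixel offset are assumptions, and inv_scale_x / inv_scale_y are treated here as the factors that convert target coordinates into original-image coordinates:

```python
import math

def map_to_source(u, v, inv_scale_x, inv_scale_y):
    # Half-pixel-aligned mapping (assumed form; the patent's exact
    # formula is not shown in this excerpt).
    fx = (u + 0.5) * inv_scale_x - 0.5
    fy = (v + 0.5) * inv_scale_y - 0.5
    x = math.floor(fx)  # floor(fx): round fx down to an integer
    y = math.floor(fy)  # floor(fy): round fy down to an integer
    # (x, y) anchors the n x n neighborhood in the original image; the
    # fractional parts place the interpolated pixel between original pixels.
    return x, y, fx - x, fy - y
```

Because (fx, fy) need not land on integer original coordinates, any enhancement ratio is supported, which is the point made above.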
- S20 Calculate the two-dimensional image entropy of the n×n neighborhood of the interpolated pixel in the original image according to the position coordinates of the interpolated pixel in the original image, where n ≥ 2 and n is a positive integer.
- the n×n neighborhood is a set area around the position coordinates (x, y) of the interpolated pixel in the original image; the set area includes n×n original pixels, where n ≥ 2 and n is a positive integer. For example, the set area may include 2×2 original pixels, 3×3 original pixels, 4×4 original pixels, and so on.
- the present disclosure is not limited thereto.
- the n×n neighborhood is a 4×4 neighborhood
- the position coordinates of the interpolated pixel in FIG. 7 in the original image are (x, y), and the position coordinates of the 16 original pixels in its 4×4 neighborhood are: (x 1 , y 1 ), (x 1 , y 2 ), (x 1 , y 3 ), (x 1 , y 4 ), (x 2 , y 1 ), (x 2 , y 2 ), (x 2 , y 3 ), (x 2 , y 4 ), (x 3 , y 1 ), (x 3 , y 2 ), (x 3 , y 3 ), (x 3 , y 4 ), (x 4 , y 1 ), (x 4 , y 2 ), (x 4 , y 3 ), and (x 4 , y 4 ).
- the calculation method of the two-dimensional image entropy of the interpolated pixel in the 4×4 neighborhood in the original image is, for example:
- the two-dimensional image entropy of the 4×4 neighborhood is obtained by evaluating this expression.
- the texture type of the n×n neighborhood may be the complex texture shown in 8C in FIG. 8, the edge texture shown in 8A in FIG. 8, or the smooth texture shown in 8B in FIG. 8.
- the two-dimensional image entropy can be compared with the preset entropy threshold to distinguish these cases.
- in the case that the two-dimensional image entropy is greater than or equal to the preset entropy threshold, the texture type of the n×n neighborhood is determined to be the complex texture shown in 8C in FIG. 8, and step S30 is executed; in the case that the two-dimensional image entropy is less than the preset entropy threshold, the texture type of the n×n neighborhood is determined to be the edge texture shown in 8A in FIG. 8 or the smooth texture shown in 8B in FIG. 8, and step S40 is executed.
- the preset entropy threshold has a value range of 0.3 to 0.8; for example, the preset entropy threshold has a value of 0.3, 0.4, 0.5, 0.6, 0.7, or 0.8.
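The entropy expression is not reproduced in this excerpt, so the sketch below uses one common definition of two-dimensional image entropy built on (gray value, local-mean) pairs; the pair definition, the 8-neighborhood used for the local mean, and the rounding step are assumptions rather than the patent's exact formula:

```python
import math
from collections import Counter

def two_d_image_entropy(patch):
    """Two-dimensional image entropy of a small square gray patch
    (list of rows), using the common (gray value, local mean) pairing."""
    n = len(patch)
    pairs = []
    for r in range(n):
        for c in range(n):
            # local mean over the 8-neighborhood, clipped to the patch
            neigh = [patch[rr][cc]
                     for rr in range(max(0, r - 1), min(n, r + 2))
                     for cc in range(max(0, c - 1), min(n, c + 2))
                     if (rr, cc) != (r, c)]
            j = round(sum(neigh) / len(neigh))
            pairs.append((patch[r][c], j))
    total = len(pairs)
    counts = Counter(pairs)
    # H = -sum p_ij * log2(p_ij) over the observed pair frequencies
    return -sum((k / total) * math.log2(k / total) for k in counts.values())
```

A uniform patch yields entropy 0 (smooth texture), while a highly varied patch yields a larger value, which is what the threshold comparison above relies on.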
- the first interpolation algorithm is a bicubic convolution interpolation algorithm.
- the bicubic convolution interpolation algorithm can assign different weight values to the original pixels according to their distances from the interpolated pixel, perform a weighted operation on the original pixels according to these weights, and use the result of the weighted operation as the pixel value of the interpolated pixel. This removes aliasing on the target image to a certain extent, so the quality of the target image obtained by resolution enhancement is better.
- the bicubic convolution interpolation algorithm uses the following piecewise convolution kernel function to perform convolution operations:
- a is the distance between two adjacent original pixels
- s is the distance between the original pixel and the interpolated pixel
- u(s) is the pixel value of the interpolated pixel
- the pixel value of the interpolated pixel is calculated by the bicubic convolution interpolation algorithm.
- alternatively, the pixel value of the interpolated pixel can be calculated based on all the original pixels in the 4×4 neighborhood through the nearest-neighbor interpolation algorithm, the bilinear interpolation algorithm, or other interpolation algorithms.
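The piecewise convolution kernel itself is not reproduced in this excerpt. The sketch below uses Keys' standard cubic convolution kernel, in which a is the kernel's free parameter (commonly −0.5) and s is the distance from the interpolated pixel; the text above describes a and s slightly differently, so treat this as an illustrative formulation, not the patent's exact kernel:

```python
def cubic_kernel(s, a=-0.5):
    """Keys' piecewise cubic convolution kernel: the weight given to an
    original pixel at distance s from the interpolated pixel."""
    s = abs(s)
    if s <= 1:
        return (a + 2) * s**3 - (a + 3) * s**2 + 1
    if s < 2:
        return a * (s**3 - 5 * s**2 + 8 * s - 4)
    return 0.0

def interp_1d(p0, p1, p2, p3, t):
    """One 1-D pass of bicubic interpolation: four pixels at offsets
    -1, 0, 1, 2 and a fractional position t in [0, 1) between p1 and p2."""
    w = [cubic_kernel(t + 1), cubic_kernel(t),
         cubic_kernel(t - 1), cubic_kernel(t - 2)]
    return p0 * w[0] + p1 * w[1] + p2 * w[2] + p3 * w[3]
```

Applying this pass along rows and then columns of the 4×4 neighborhood gives the two-dimensional result; with a = −0.5 the kernel reproduces linear ramps exactly.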
- in the case where the two-dimensional image entropy is less than the preset entropy threshold, in order to determine whether the texture type of the n×n neighborhood is the edge texture shown in 8A in FIG. 8 or the smooth texture shown in 8B in FIG. 8, S40 to S60 are executed.
- the at least two edge directions may include two edge directions, three edge directions, or four edge directions, but are of course not limited thereto. It should be understood that the greater the number of edge directions, the better the clarity of the processed image.
- after obtaining the gradient value of each edge direction, it is determined whether a strong edge direction exists among the at least two edge directions. If a strong edge direction exists, S50 is executed; if no strong edge direction exists, S60 is executed.
- S50 Calculate the pixel value of the interpolated pixel by using the first interpolation algorithm based on the original pixels in the strong edge direction in the n×n neighborhood.
- the first interpolation algorithm is a bicubic convolution interpolation algorithm.
- the calculation formulas and beneficial effects adopted by the bicubic convolution interpolation algorithm are consistent with those described above, and will not be repeated here.
- the second interpolation algorithm is a B-spline interpolation algorithm.
- the spline curve of the B-spline interpolation algorithm has the advantages of being differentiable at the nodes and smooth.
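As a hedged illustration of why the B-spline suits smooth regions, the uniform cubic B-spline kernel below is twice continuously differentiable at its knots; the patent does not give its kernel, so this standard formulation is an assumption:

```python
def bspline_kernel(s):
    """Uniform cubic B-spline kernel: smooth (C2-continuous) at the
    knots, which is why it suits flat and smooth texture regions."""
    s = abs(s)
    if s < 1:
        return (4 - 6 * s**2 + 3 * s**3) / 6
    if s < 2:
        return (2 - s)**3 / 6
    return 0.0
```

Unlike the cubic convolution kernel, this kernel is non-negative everywhere, so it smooths rather than sharpens, trading a little detail for the absence of ringing in smooth areas.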
- S40 includes S41 to S43.
- the selected range of each edge direction in the n×n neighborhood may be preset, and the selected range of each edge direction may include multiple groups of original pixels distributed along the edge direction.
- as shown in FIG. 10, in a 4×4 neighborhood, the selected range of an edge direction at an angle of 45° to the width direction of the picture includes three groups of original pixels distributed along the edge direction, of which two groups each include 3 original pixels distributed along the edge direction, and the remaining group includes 4 original pixels distributed along the edge direction.
- the shape of the selected range is not limited, for example, it may be a rectangle, a circle, or the like.
- the number of edge directions used to determine the texture type is not limited, for example, it can be two, three, or four.
- four edge directions are used to determine the texture type, which are the first edge direction L1, the second edge direction L2, the third edge direction L3, and the fourth edge direction L4.
- the first edge direction L1 and the second edge direction L2 are substantially perpendicular.
- the first edge direction L1 is approximately parallel to one of the two diagonals of the rectangle determined by the n×n neighborhood, and the second edge direction L2 is approximately parallel to the other of the two diagonals; here, the rectangle determined by the n×n neighborhood is the rectangle enclosed by the outermost circle of pixels in the n×n neighborhood.
- the first edge direction L1 is a parallel line that forms an angle of 45° with the horizontal direction
- the second edge direction L2 is a parallel line that forms an angle of 135° with the horizontal direction.
- the third edge direction L3 and the fourth edge direction L4 are approximately perpendicular, and the angle between the first edge direction L1 and the third edge direction L3 is approximately 45°.
- the third edge direction L3 is approximately parallel to the row direction in which the plurality of pixels in the n×n neighborhood are arranged
- the fourth edge direction L4 is approximately parallel to the column direction in which the plurality of pixels in the n×n neighborhood are arranged.
- the third edge direction L3 is a parallel line at an angle of 0° to the horizontal direction
- the fourth edge direction L4 is a parallel line at an angle of 90° to the horizontal direction.
- first edge direction L1, the second edge direction L2, the third edge direction L3, and the fourth edge direction L4 can also be selected in other directions.
- first edge direction L1 is a 30° direction
- second edge direction L2 is a 120° direction
- third edge direction L3 is a -15° direction
- fourth edge direction L4 is a 75° direction.
- S40 includes S411 to S414.
- S411 Use the sum or the average value of the absolute value of the gray value difference of each adjacent two original pixel points in the selected range in the first edge direction L1 as the gradient value of the first edge direction L1.
- the first edge direction L1 is a parallel line at an angle of 45° to the horizontal direction
- the n×n neighborhood is a 4×4 neighborhood
- the selected range is the rectangular box shown in FIG. 10.
- the sum of the absolute value of the difference between the gray values of two adjacent original pixels in the rectangular frame in the 45° direction is taken as an example of the gradient value in the 45° direction, namely:
- G 45° is the gradient value in the 45° direction.
- when the mean of the absolute values of the gray value differences of every two adjacent original pixels in the selected range in the first edge direction L1 is taken as the gradient value of the first edge direction L1, the sum of the absolute values is divided by the number of absolute gray value differences.
- the n×n neighborhood is a 4×4 neighborhood
- the selected range is the rectangular box shown in FIG. 10.
- when the mean of the absolute values of the differences between the gray values of adjacent original pixels in the rectangular frame in the 45° direction is used as the gradient value in the 45° direction, the above-mentioned G 45° needs to be divided by 7.
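The sum- or mean-of-absolute-differences gradient described above can be sketched as follows. The grouping of the selected range into runs of pixels along the edge direction follows the 3 + 4 + 3 layout of FIG. 10; the function name and list-of-runs interface are illustrative assumptions:

```python
def directional_gradient(groups, use_mean=False):
    """Gradient value of one edge direction.

    groups: gray values of the original pixels in the selected range,
    one list per run of pixels along the edge direction (e.g. the three
    diagonal runs of a 4x4 neighborhood give runs of 3, 4, and 3 pixels).
    """
    # absolute gray value differences between every two adjacent pixels
    # within each run
    diffs = [abs(run[i + 1] - run[i])
             for run in groups for i in range(len(run) - 1)]
    total = sum(diffs)
    # sum of absolute differences, or the mean (divide by the number of
    # differences: 2 + 3 + 2 = 7 pairs for the 3 + 4 + 3 layout)
    return total / len(diffs) if use_mean else total
```

A large value for one direction and a small value for the perpendicular direction is exactly what drives the ratio test against the threshold T described earlier.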
- S412: Take the sum or the mean of the absolute values of the gray-value differences of each pair of adjacent original pixels within the selected range in the second edge direction L2 as the gradient value of the second edge direction L2.
- the second edge direction L2 is a parallel line at an angle of 135° to the horizontal direction
- the n ⁇ n neighborhood is a 4 ⁇ 4 neighborhood
- the selected range is the rectangular box shown in FIG. 11.
- taking as an example the case where the sum of the absolute values of the gray-value differences of each pair of adjacent original pixels in the rectangular frame along the 135° direction is used as the gradient value in the 135° direction, namely:
- when the mean of the absolute values of the gray-value differences of each pair of adjacent original pixels within the selected range in the second edge direction L2 is taken as the gradient value of the second edge direction L2, the sum of the absolute values is divided by the number of absolute gray-value difference values.
- the second edge direction L2 is a parallel line at an angle of 135° to the horizontal direction
- the n ⁇ n neighborhood is a 4 ⁇ 4 neighborhood
- when the mean of the absolute values of the gray-value differences of adjacent original pixels in the rectangular frame along the 135° direction is taken as the gradient value in the 135° direction, the above G 135° needs to be divided by 7.
- S413: Take the sum or the mean of the absolute values of the gray-value differences of each pair of adjacent original pixels within the selected range in the third edge direction L3 as the gradient value of the third edge direction L3.
- the third edge direction L3 is a parallel line at an angle of 0° to the horizontal direction
- the n ⁇ n neighborhood is a 4 ⁇ 4 neighborhood
- the selected range is the rectangular box shown in FIG. 12.
- taking as an example the case where the sum of the absolute values of the gray-value differences of each pair of adjacent original pixels in the rectangular frame along the 0° direction is used as the gradient value in the 0° direction, namely:
- the third edge direction L3 is a parallel line at an angle of 0° to the horizontal direction
- the n ⁇ n neighborhood is a 4 ⁇ 4 neighborhood
- when the mean of the absolute values of the gray-value differences of adjacent original pixels in the rectangular frame along the 0° direction is taken as the gradient value in the 0° direction, the above G 0° needs to be divided by 6.
- S414: Take the sum or the mean of the absolute values of the gray-value differences of each pair of adjacent original pixels within the selected range in the fourth edge direction L4 as the gradient value of the fourth edge direction L4.
- the fourth edge direction L4 is a parallel line at an angle of 90° to the horizontal direction
- the n ⁇ n neighborhood is a 4 ⁇ 4 neighborhood
- the selected range is the rectangular box shown in FIG. 13.
- taking as an example the case where the sum of the absolute values of the gray-value differences of each pair of adjacent original pixels in the rectangular frame along the 90° direction is used as the gradient value in the 90° direction, namely:
- the fourth edge direction L4 is a parallel line at an angle of 90° to the horizontal direction
- the n ⁇ n neighborhood is a 4 ⁇ 4 neighborhood
- the selected range is the rectangular box shown in FIG. 13; when the mean of the absolute values of the gray-value differences of adjacent original pixels in the rectangular frame along the 90° direction is taken as the gradient value in the 90° direction, the above G 90° needs to be divided by 6.
- S411, S412, S413, and S414 are in no particular order: they can be executed in the sequence S411, S412, S413, S414, or in the sequence S412, S411, S413, S414. Of course, the present disclosure is not limited to this.
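The per-direction gradient of S411–S414 can be sketched as follows. The specific pixel pairs inside each direction's rectangular selected range are fixed by Figs. 10–13, which are not reproduced here, so the pairs used below (the 0° direction over the two middle rows of a 4×4 neighborhood, giving the 6 differences that G 0° is divided by in the mean variant) are an illustrative assumption.

```python
import numpy as np

def directional_gradient(patch, pairs, use_mean=False):
    # Sum (or mean) of the absolute gray-value differences of each
    # adjacent pixel pair along one edge direction (S411-S414).
    diffs = [abs(float(patch[a]) - float(patch[b])) for a, b in pairs]
    return sum(diffs) / len(diffs) if use_mean else sum(diffs)

# Illustrative pairs for the 0° (row) direction in a 4x4 neighborhood:
# adjacent pixels in the two middle rows -> 6 pairs, matching the
# "G 0° divided by 6" remark above for the mean variant.
pairs_0deg = [((r, c), (r, c + 1)) for r in (1, 2) for c in range(3)]

patch = np.arange(16, dtype=float).reshape(4, 4)
g_sum = directional_gradient(patch, pairs_0deg)                  # sum of 6 diffs
g_mean = directional_gradient(patch, pairs_0deg, use_mean=True)  # divided by 6
```

The same helper serves all four directions once each direction's pair list is taken from its figure.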
- S50 includes S510 to S513.
- ⁇ 1 Max(G 1 , G 2 )/Min(G 1 , G 2 );
- ⁇ 2 Max(G 3 , G 4 )/Min(G 3 , G 4 );
- G 1 is the gradient value in the first edge direction L1
- G 2 is the gradient value in the second edge direction L2
- G 3 is the gradient value in the third edge direction L3
- G 4 is the gradient value in the fourth edge direction L4.
- when the n×n neighborhood is a 4×4 neighborhood, G 1 = G 45° .
- when the n×n neighborhood is a 4×4 neighborhood, G 2 = G 135° .
- when the n×n neighborhood is a 4×4 neighborhood, G 3 = G 0° .
- when the n×n neighborhood is a 4×4 neighborhood, G 4 = G 90° .
- the value range of the preset ratio threshold T is 1.2 to 1.3; for example, T may take the value 1.2, 1.25, 1.26, 1.27, 1.28, 1.29, 1.3, etc.
- in the process of determining the ratio of the larger value to the smaller value between two gradient values, the selected ranges used to calculate the gradient values in the respective edge directions of the two gradient values include equal numbers of original pixels.
- the selected range of the first edge direction L1 includes 10 original pixels, as shown in FIG. 10; the selected range of the second edge direction L2 includes 10 original pixels, as shown in FIG. 11; the selected range of the third edge direction L3 includes 8 original pixels, as shown in FIG. 12; and the selected range of the fourth edge direction L4 includes 8 original pixels, as shown in FIG. 13.
- the texture type of the n×n neighborhood is determined to be the edge texture shown at 8A in FIG. 8, the third edge direction L3 or the fourth edge direction L4 is a strong edge direction, and S512 is executed.
- the three values can be compared directly, or the relative sizes of α 1 and α 2 can be determined first, and then the smaller of α 1 and α 2 compared with T.
- the present disclosure is not limited to this.
- if G 1 > G 2 , the first edge direction L1 is determined to be the strong edge direction; if G 2 > G 1 , the second edge direction L2 is determined to be the strong edge direction.
- S512 Determine that there is a strong edge direction, and determine an edge direction with a larger gradient value in the third edge direction L3 and the fourth edge direction L4 as the strong edge direction.
- if G 3 > G 4 , the third edge direction L3 is determined to be the strong edge direction; if G 4 > G 3 , the fourth edge direction L4 is determined to be the strong edge direction.
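The four-direction decision of S510–S513 amounts to a small dispatch on α 1 and α 2. The sketch below assumes numeric gradient values are already computed; the default threshold 1.25 is only a placeholder inside the 1.2–1.3 range the text suggests.

```python
def strong_edge_four(g1, g2, g3, g4, T=1.25):
    # S510: ratio of larger to smaller gradient within each pair.
    a1 = max(g1, g2) / min(g1, g2)   # 45°/135° pair (L1, L2)
    a2 = max(g3, g4) / min(g3, g4)   # 0°/90° pair (L3, L4)
    if a1 > a2 > T:                  # S511: L1/L2 pair dominates
        return "L1" if g1 > g2 else "L2"
    if a2 > a1 > T:                  # S512: L3/L4 pair dominates
        return "L3" if g3 > g4 else "L4"
    return None                      # S513: no strong edge direction

# a1 = 2.5 > a2 = 1.6 > T and G1 > G2, so L1 is the strong edge direction.
result = strong_edge_four(10.0, 4.0, 5.0, 8.0)
```

Returning `None` corresponds to falling back to the second interpolation algorithm over all original pixels in the neighborhood.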
- the number of edge directions used to determine the texture type is not limited, for example, it can be two, three, or four.
- two edge directions are used, which are the first edge direction L1 and the second edge direction L2, respectively.
- the first edge direction L1 and the second edge direction L2 are substantially perpendicular.
- the first edge direction L1 is approximately parallel to one of the two diagonals of the rectangle defined by the n×n neighborhood, and the second edge direction L2 is approximately parallel to the other of the two diagonals; wherein the rectangle defined by the n×n neighborhood is the rectangle enclosed by the outermost circle of pixels in the n×n neighborhood.
- the first edge direction L1 is a parallel line that forms an angle of 45° with the horizontal direction
- the second edge direction L2 is a parallel line that forms an angle of 135° with the horizontal direction.
- first edge direction L1 and the second edge direction L2 can also be other directions.
- first edge direction L1 is a 30° direction
- second edge direction L2 is a 120° direction.
- when the edge directions include the first edge direction L1 and the second edge direction L2, in the process of determining whether there is a strong edge direction in the first edge direction L1 and the second edge direction L2, as shown in FIG. 5, S50 includes S520 to S522.
- S520 Determine that the ratio of the larger value to the smaller value among the gradient values of the first edge direction L1 and the second edge direction L2 is ⁇ , and a preset ratio threshold value is T.
- the value range of the preset ratio threshold T is 1.2 to 1.3; for example, T may take the value 1.2, 1.25, 1.26, 1.27, 1.28, 1.29, 1.3, etc.
- in the process of determining the ratio of the larger value to the smaller value between two gradient values, the selected ranges used to calculate the gradient values in the respective edge directions of the two gradient values include equal numbers of original pixels.
- the selected range of the first edge direction L1 includes 10 original pixels; as shown in FIG. 11, the selected range of the second edge direction L2 includes 10 original pixels.
- the calculation process of the gradient value of the first edge direction L1 and the gradient value of the second edge direction L2 is similar to that in the case where the at least two edge directions include the first edge direction L1, the second edge direction L2, the third edge direction L3, and the fourth edge direction L4, and will not be repeated here.
- S521 Determine that there is a strong edge direction, and determine an edge direction with a larger gradient value in the first edge direction L1 and the second edge direction L2 as the strong edge direction.
- S522 Determine that there is no strong edge direction, and calculate the pixel value of the interpolated pixel by a second interpolation algorithm based on all the original pixels in the n ⁇ n neighborhood.
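The two-direction case of S520–S522 reduces to a single ratio test. As before, the threshold default is a placeholder within the suggested 1.2–1.3 range.

```python
def strong_edge_two(g1, g2, T=1.25):
    # S520: ratio of the larger to the smaller of the two gradient values.
    alpha = max(g1, g2) / min(g1, g2)
    if alpha > T:                    # S521: a strong edge direction exists
        return "L1" if g1 > g2 else "L2"
    return None                      # S522: use the second interpolation algorithm
```

For example, gradients (9.0, 3.0) give α = 3 > T, so L1 is the strong edge direction, while equal gradients yield no strong edge.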
- the number of edge directions used to determine the texture type is not limited, for example, it can be two, three, or four.
- three edge directions are used: the first edge direction L1, the second edge direction L2, and the third edge direction L3.
- the first edge direction L1 and the second edge direction L2 are substantially perpendicular.
- the first edge direction L1 is approximately parallel to one of the two diagonals of the rectangle defined by the n×n neighborhood, and the second edge direction L2 is approximately parallel to the other of the two diagonals; wherein the rectangle defined by the n×n neighborhood is the rectangle enclosed by the outermost circle of pixels in the n×n neighborhood.
- the first edge direction L1 is a parallel line that forms an angle of 45° with the horizontal direction
- the second edge direction L2 is a parallel line that forms an angle of 135° with the horizontal direction.
- the included angle between the first edge direction L1 and the third edge direction L3 is approximately 45°.
- the third edge direction L3 is substantially parallel to the row direction in which the plurality of pixel points in the n ⁇ n neighborhood are arranged.
- the third edge direction L3 is a parallel line at an angle of 0° to the horizontal direction.
- first edge direction L1, the second edge direction L2, and the third edge direction L3 can also be other directions.
- first edge direction L1 is a 60° direction
- second edge direction L2 is a 120° direction
- third edge direction L3 is a 0° direction.
- S50 includes S530 to S533.
- the value range of the preset ratio threshold T is 1.2 to 1.3; illustratively, T may take the value 1.2, 1.25, 1.26, 1.27, 1.28, 1.29, 1.3, and so on.
- in the process of determining the ratio of the larger value to the smaller value between two gradient values, the selected ranges used to calculate the gradient values in the respective edge directions of the two gradient values include equal numbers of original pixels.
- the selected range of the first edge direction L1 includes 10 original pixels; the selected range of the second edge direction L2 includes 10 original pixels; and the selected range of the third edge direction L3 includes 10 original pixels.
- the calculation processes of the gradient value of the first edge direction L1, the gradient value of the second edge direction L2, and the gradient value of the third edge direction L3 are similar to those in the case where the at least two edge directions include the first edge direction L1, the second edge direction L2, the third edge direction L3, and the fourth edge direction L4, and will not be repeated here.
- S531 Determine that there is a strong edge direction, and determine an edge direction with a larger gradient value in the first edge direction L1 and the second edge direction L2 as the strong edge direction.
- S532 Determine that there is a strong edge direction, and determine an edge direction with a larger gradient value in the second edge direction L2 and the third edge direction L3 as the strong edge direction.
- S533 Determine that there is no strong edge direction, and calculate the pixel value of the interpolated pixel by a second interpolation algorithm based on all the original pixels in the n ⁇ n neighborhood.
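The three-direction decision of S530–S533 mirrors the four-direction one, with α 3 comparing L1 against L2 and α 4 comparing L2 against L3 (as in claim 6). Gradient inputs and the threshold default are again placeholders.

```python
def strong_edge_three(g1, g2, g3, T=1.25):
    a3 = max(g1, g2) / min(g1, g2)   # L1 vs. L2
    a4 = max(g2, g3) / min(g2, g3)   # L2 vs. L3
    if a3 > a4 > T:                  # S531
        return "L1" if g1 > g2 else "L2"
    if a4 > a3 > T:                  # S532
        return "L2" if g2 > g3 else "L3"
    return None                      # S533: no strong edge direction
```

With gradients (10.0, 4.0, 3.0): α 3 = 2.5 and α 4 ≈ 1.33, so the L1/L2 pair dominates and L1 is selected.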
- the definition of the target image obtained on the right is significantly improved.
- the partially enlarged view, shown in FIG. 15, of the image area within the rectangular frame in FIG. 14 shows the improvement in definition more intuitively.
- when image resolution is enhanced, the image processing method determines the texture type through edge texture direction determination and two-dimensional image entropy calculation, and uses different interpolation algorithms according to the texture type to calculate the pixel values of the interpolated pixels, thereby enhancing the image resolution and improving the clarity of the image.
- the position coordinates of the interpolated pixel points in the target image are mapped to the original image through a mapping formula, and the original image can be enhanced with any ratio (for example, integer ratio, decimal ratio, odd ratio, even ratio, etc.).
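One common form of such a mapping formula is a center-aligned division by the scale factor. The passage does not reproduce the patent's exact formula, so the version below is an assumed standard one; it works for integer and fractional enhancement ratios alike.

```python
def map_to_original(dst_x, dst_y, scale_x, scale_y):
    # Center-aligned target -> original coordinate mapping; supports
    # any enhancement ratio (integer, decimal, odd, even).
    src_x = (dst_x + 0.5) / scale_x - 0.5
    src_y = (dst_y + 0.5) / scale_y - 0.5
    return src_x, src_y

# For a 1.5x enhancement, target pixel (3, 3) maps between original
# pixels, so its value must be interpolated from an n x n neighborhood.
sx, sy = map_to_original(3, 3, 1.5, 1.5)
```

Non-integer results like this are exactly the interpolated pixels whose neighborhoods the method analyzes.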
- Some embodiments of the present disclosure provide an image processing method. As shown in FIG. 16, the image processing method includes S100 to S600.
- the target image is an image obtained by performing resolution enhancement on the original image
- the interpolated pixel points are pixels generated when the resolution enhancement is performed.
- S100 can refer to S10 of the image processing method of some of the foregoing embodiments, which is not repeated here.
- S200 can refer to S40 of the image processing method of some of the foregoing embodiments, which will not be repeated here.
- after the gradient value of each edge direction is obtained, whether a strong edge direction exists among the at least two edge directions is determined according to the gradient values of the at least two edge directions; if it exists, S300 is executed; if it does not exist, S400 is executed.
- S300 Calculate the pixel value of the interpolated pixel by the first interpolation algorithm based on the multiple original pixels in the strong edge direction in the n ⁇ n neighborhood.
- S300 can refer to S50 of the image processing method of some of the foregoing embodiments, which will not be repeated here.
- S400 Calculate the two-dimensional image entropy of the interpolation pixel in the n ⁇ n neighborhood.
- S400 can refer to S20 of the image processing method of some of the foregoing embodiments, and details are not described herein.
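A common definition of two-dimensional image entropy pairs each pixel's gray value with the mean gray value of a small window around it and takes the Shannon entropy of the pair frequencies. The 3×3 averaging window and base-2 logarithm below are assumptions of this sketch, not choices fixed by the passage.

```python
import numpy as np
from collections import Counter

def two_d_entropy(patch):
    # Pair each pixel's gray value with its local (3x3, clipped at the
    # border) mean, then compute H = -sum p_ij * log2(p_ij).
    h, w = patch.shape
    pairs = []
    for r in range(h):
        for c in range(w):
            win = patch[max(0, r - 1):r + 2, max(0, c - 1):c + 2]
            pairs.append((int(patch[r, c]), int(round(float(win.mean())))))
    n = len(pairs)
    counts = Counter(pairs)
    return float(-sum((k / n) * np.log2(k / n) for k in counts.values()))

# A uniform 4x4 neighborhood yields a single (gray, mean) pair, so its
# entropy is 0 -- below any preset threshold, which steers S600 toward
# the smoother second interpolation algorithm.
flat = np.full((4, 4), 7.0)
```

A textured neighborhood produces many distinct pairs and hence a positive entropy, which is what pushes the method toward the first interpolation algorithm in S500.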
- after the two-dimensional image entropy is obtained, it is determined whether the two-dimensional image entropy is greater than or equal to a preset entropy threshold; when it is greater than or equal to the preset entropy threshold, S500 is executed, and when it is less than the preset entropy threshold, S600 is executed.
- S500 Calculate the pixel value of the interpolated pixel by using the first interpolation algorithm based on all the original pixels in the n ⁇ n neighborhood.
- S500 can refer to S30 of the image processing method of some of the foregoing embodiments, which will not be repeated here.
- S600 can refer to S60 of the image processing method of some of the foregoing embodiments, and details are not described herein.
- in this image processing method, it is first judged whether there is a strong edge direction among the at least two edge directions.
- if a strong edge direction exists, the pixel value of the interpolated pixel is calculated by the first interpolation algorithm.
- otherwise, the two-dimensional image entropy is calculated and compared with a preset entropy threshold, and the first interpolation algorithm or the second interpolation algorithm is selected to calculate the pixel value of the interpolated pixel according to the comparison result.
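The edge-first ordering of FIG. 16 (S200 → S300, falling back to the entropy test S400 → S500/S600) amounts to a dispatch like the following. Every callable and the entropy-threshold default are placeholders standing in for the concrete steps described above (the patent suggests an entropy threshold in 0.3–0.8).

```python
def process_pixel(neighborhood, find_strong_edge, entropy,
                  first_interp, second_interp, entropy_T=0.5):
    direction = find_strong_edge(neighborhood)        # S200
    if direction is not None:
        return first_interp(neighborhood, direction)  # S300: edge-aware
    if entropy(neighborhood) >= entropy_T:            # S400: detail test
        return first_interp(neighborhood, None)       # S500: full window
    return second_interp(neighborhood)                # S600: smooth fallback
```

Swapping the order of the edge test and the entropy test recovers the flow of the first method (S20–S60).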
- the above mainly introduces the image processing method provided by the embodiments of the present disclosure.
- an image processing device that implements the above-mentioned image processing method is also provided.
- the image processing device will be exemplarily introduced below.
- the image processing device 1 may include a coordinate mapping component 11 (21), a two-dimensional image entropy calculation component 12 (23), a strong edge direction judgment component 13 (22), a first pixel value calculation component 14 (24), and a second pixel value calculation component 15 (25).
- the coordinate mapping component 11 (21) is configured to determine the position coordinates of the interpolated pixel in the original image according to the position coordinates, in the target image, of any interpolated pixel to be processed on the target image.
- the target image is the image after the resolution enhancement of the original image, and the interpolated pixels are the pixels generated when the resolution is enhanced.
- the coordinate mapping component 11 may be configured to perform the above-mentioned S10.
- for the specific working process of the coordinate mapping component 11, refer to the corresponding process of S10 in the foregoing method embodiment, which will not be repeated here.
- the coordinate mapping component 21 may be configured to perform the above-mentioned S100.
- for the specific working process of the coordinate mapping component 21, refer to the corresponding process of S100 in the foregoing method embodiment, which will not be repeated here.
- the two-dimensional image entropy calculation component 12 (23) is configured to calculate the two-dimensional image entropy of the n ⁇ n neighborhood of the interpolated pixel in the original image according to the position coordinates of the interpolated pixel in the original image, where n ⁇ 2 and n is a positive integer.
- the two-dimensional image entropy calculation component 12 may be configured to perform the above-mentioned S20.
- for the specific working process of the two-dimensional image entropy calculation component 12, refer to the corresponding process of S20 in the foregoing method embodiment, which will not be repeated here.
- the two-dimensional image entropy calculation component 23 may be configured to perform the above-mentioned S400.
- for the specific working process of the two-dimensional image entropy calculation component 23, refer to the corresponding process of S400 in the foregoing method embodiment, which will not be repeated here.
- the strong edge direction judgment component 13 (22) is configured to calculate the gradient values of at least two edge directions of the interpolated pixel in the n ⁇ n neighborhood of the original image according to the position coordinates of the interpolated pixel in the original image, and According to the gradient values of the at least two edge directions, it is determined whether there is a strong edge direction in the at least two edge directions.
- the strong edge direction judging component 13 may be configured to perform the above-mentioned S40.
- for the specific working process of the strong edge direction judgment component 13, refer to the corresponding process of S40 in the foregoing method embodiment, which will not be repeated here.
- the strong edge direction judging component 22 may be configured to execute the above-mentioned S200.
- for the specific working process of the strong edge direction judgment component 22, refer to the corresponding process of S200 in the foregoing method embodiment, which will not be repeated here.
- the first pixel value calculation component 14 (24) is configured to calculate the pixel value of the interpolated pixel by the first interpolation algorithm, based on the multiple original pixels in the strong edge direction in the n×n neighborhood, when a strong edge direction exists among the at least two edge directions; and to calculate the pixel value of the interpolated pixel by the first interpolation algorithm, based on all the original pixels in the n×n neighborhood, when the two-dimensional image entropy is greater than or equal to the preset entropy threshold.
- the first pixel value calculation component 14 may be configured to perform the above S30 and S50.
- for the specific working process of the first pixel value calculation component 14, reference may be made to the corresponding processes of S30 and S50 in the foregoing method embodiment, which will not be repeated here.
- the first pixel value calculation component 24 may be configured to perform the above S300 and S500.
- for the specific working process of the first pixel value calculation component 24, reference may be made to the corresponding processes of S300 and S500 in the foregoing method embodiment, which will not be repeated here.
- the second pixel value calculation component 15 (25) is configured to calculate the pixel value of the interpolated pixel by the second interpolation algorithm, based on all the original pixels in the n×n neighborhood, in the case where the two-dimensional image entropy is less than the preset entropy threshold and there is no strong edge direction among the at least two edge directions.
- the second pixel value calculation component 15 may be configured to perform the above-mentioned S60.
- for the specific working process of the second pixel value calculation component 15, refer to the corresponding process of S60 in the foregoing method embodiment, which will not be repeated here.
- the second pixel value calculation component 25 may be configured to perform the above-mentioned S600.
- for the specific working process of the second pixel value calculation component 25, refer to the corresponding process of S600 in the foregoing method embodiment, which will not be repeated here.
- the electronic device 3 includes a processor 31 and a memory 32.
- the memory 32 stores computer program instructions suitable for execution by the processor 31.
- when the program instructions are executed by the processor 31, the image processing method of any of the above embodiments is performed.
- the processor 31 is used to support the electronic device 3 to execute one or more steps in the above-mentioned image processing method.
- the processor 31 may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.
- the general-purpose processor may be a microprocessor or the processor may also be any conventional processor or the like.
- the memory 32 is used to store the program code and data of the electronic device 3 provided by the embodiment of the present disclosure.
- the processor 31 can execute various functions of the electronic device 3 by running or executing a software program stored in the memory 32 and calling data stored in the memory 32.
- the memory 32 may be a read-only memory (ROM) or another type of static storage device capable of storing static information and instructions, a random access memory (RAM) or another type of dynamic storage device capable of storing information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including compact discs, laser discs, optical discs, digital versatile discs, Blu-ray discs, etc.), magnetic disk storage media or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto.
- the memory can exist independently and is connected to the processor through a communication bus.
- the memory 32 may also be integrated with the processor 31.
- Some embodiments of the present disclosure also provide a computer-readable storage medium (for example, a non-transitory computer-readable storage medium) storing computer program instructions that, when run on a computer, cause the computer to execute the image processing method of any of the above embodiments.
- the foregoing computer-readable storage medium may include, but is not limited to: magnetic storage devices (for example, hard disks, floppy disks, or tapes, etc.), optical disks (for example, CD (Compact Disk), DVD (Digital Versatile Disk, Digital universal disk), etc.), smart cards and flash memory devices (for example, EPROM (Erasable Programmable Read-Only Memory), cards, sticks or key drives, etc.).
- Various computer-readable storage media described in this disclosure may represent one or more devices and/or other machine-readable storage media for storing information.
- the term "machine-readable storage medium” may include, but is not limited to, wireless channels and various other media capable of storing, containing, and/or carrying instructions and/or data.
- Some embodiments of the present disclosure also provide a computer program product.
- the computer program product includes computer program instructions, and when the computer program instructions are executed on a computer, the computer program instructions cause the computer to execute the image processing method in any of the above-mentioned embodiments.
- Some embodiments of the present disclosure also provide a computer program.
- when the computer program is executed on a computer, it causes the computer to execute the image processing method of any of the above embodiments.
- the disclosed device and method may be implemented in other ways.
- the device embodiments described above are merely illustrative.
- the division of components is only a division of logical functions, and there may be other divisions in actual implementation; for example, multiple components may be combined or integrated into another device, or some features may be ignored or not implemented.
- the components described as separate components may or may not be physically separate, and one or some components may or may not be physical units. Some or all of the components may be selected according to actual needs to achieve the objectives of the solutions of the embodiments of the present disclosure.
- various functional components in some embodiments of the present disclosure may be integrated into one processing unit, or each component may exist alone physically, or two or more components may be integrated into one unit.
- the above-mentioned integrated unit can be implemented in the form of hardware or software functional unit.
- the above integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium.
- the technical solution of the present disclosure, in essence, or the part contributing to the prior art, or part of the technical solution, can be embodied in the form of a software product; the software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or part of the steps of the methods described in the various embodiments of the present disclosure.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Processing (AREA)
Claims (19)
- An image processing method, comprising: determining, according to the position coordinates in a target image of any interpolated pixel to be processed on the target image, the position coordinates of the interpolated pixel in an original image, the target image being an image obtained by performing resolution enhancement on the original image, and the interpolated pixel being a pixel generated during the resolution enhancement; calculating, according to the position coordinates of the interpolated pixel in the original image, the two-dimensional image entropy of an n×n neighborhood of the interpolated pixel in the original image, where n≥2 and n is a positive integer; in a case where the two-dimensional image entropy is greater than or equal to a preset entropy threshold, calculating the pixel value of the interpolated pixel by a first interpolation algorithm based on all original pixels in the n×n neighborhood; in a case where the two-dimensional image entropy is less than the preset entropy threshold, respectively calculating gradient values of at least two edge directions within the n×n neighborhood, and judging, according to the gradient values of the at least two edge directions, whether a strong edge direction exists among the at least two edge directions; if it exists, calculating the pixel value of the interpolated pixel by the first interpolation algorithm based on a plurality of original pixels in the strong edge direction within the n×n neighborhood; and if it does not exist, calculating the pixel value of the interpolated pixel by a second interpolation algorithm based on all original pixels in the n×n neighborhood.
- The image processing method according to claim 1, wherein respectively calculating the gradient values of at least two edge directions within the n×n neighborhood comprises: respectively acquiring gray values of a plurality of original pixels within a selected range in each edge direction within the n×n neighborhood; calculating the absolute value of the gray-value difference of each pair of adjacent original pixels within the selected range in each edge direction; and taking the sum or the mean of the absolute values of the gray-value differences of each pair of adjacent original pixels within the selected range in each edge direction as the gradient value of that edge direction.
- The image processing method according to claim 1 or 2, wherein the at least two edge directions include a first edge direction, a second edge direction, a third edge direction, and a fourth edge direction; and respectively calculating the gradient values of the at least two edge directions within the n×n neighborhood comprises: taking the sum or mean of the absolute values of the gray-value differences of each pair of adjacent original pixels within a selected range in the first edge direction as the gradient value of the first edge direction; taking the sum or mean of the absolute values of the gray-value differences of each pair of adjacent original pixels within a selected range in the second edge direction as the gradient value of the second edge direction; taking the sum or mean of the absolute values of the gray-value differences of each pair of adjacent original pixels within a selected range in the third edge direction as the gradient value of the third edge direction; and taking the sum or mean of the absolute values of the gray-value differences of each pair of adjacent original pixels within a selected range in the fourth edge direction as the gradient value of the fourth edge direction.
- The image processing method according to claim 3, wherein judging, according to the gradient values of the at least two edge directions, whether a strong edge direction exists among the at least two edge directions comprises: determining the ratio of the larger value to the smaller value between the gradient value of the first edge direction and the gradient value of the second edge direction as α1, the ratio of the larger value to the smaller value between the gradient value of the third edge direction and the gradient value of the fourth edge direction as α2, and a preset ratio threshold as T; if α1>α2>T, determining that a strong edge direction exists, and determining the edge direction with the larger gradient value between the first edge direction and the second edge direction as the strong edge direction; if α2>α1>T, determining that a strong edge direction exists, and determining the edge direction with the larger gradient value between the third edge direction and the fourth edge direction as the strong edge direction; and if α1≤T and/or α2≤T, determining that no strong edge direction exists.
- The image processing method according to claim 1 or 2, wherein the at least two edge directions include a first edge direction and a second edge direction; and judging, according to the gradient values of the at least two edge directions, whether a strong edge direction exists among the at least two edge directions comprises: determining the ratio of the larger value to the smaller value between the gradient values of the first edge direction and the second edge direction as α, and a preset ratio threshold as T; if α>T, determining that a strong edge direction exists, and determining the edge direction with the larger gradient value between the first edge direction and the second edge direction as the strong edge direction; and if α≤T, determining that no strong edge direction exists.
- The image processing method according to claim 1 or 2, wherein the at least two edge directions include a first edge direction, a second edge direction, and a third edge direction; and judging, according to the gradient values of the at least two edge directions, whether a strong edge direction exists among the at least two edge directions comprises: determining the ratio of the larger value to the smaller value between the gradient value of the first edge direction and the gradient value of the second edge direction as α3, the ratio of the larger value to the smaller value between the gradient value of the second edge direction and the gradient value of the third edge direction as α4, and a preset ratio threshold as T; if α3>α4>T, determining that a strong edge direction exists, and determining the edge direction with the larger gradient value between the first edge direction and the second edge direction as the strong edge direction; if α4>α3>T, determining that a strong edge direction exists, and determining the edge direction with the larger gradient value between the second edge direction and the third edge direction as the strong edge direction; and if α3≤T and/or α4≤T, determining that no strong edge direction exists.
- The image processing method according to any one of claims 4 to 6, wherein, in a case where the sum of the absolute values of the gray-value differences of each pair of adjacent original pixels within the selected range in an edge direction is taken as the gradient value of that edge direction, in the process of determining the ratio of the larger value to the smaller value between two gradient values, the selected ranges used to calculate the gradient values in the respective edge directions corresponding to the two gradient values include equal numbers of original pixels.
- The image processing method according to any one of claims 4 to 7, wherein the value range of the preset ratio threshold T is 1.2 to 1.3.
- The image processing method according to any one of claims 1 to 8, wherein the at least two edge directions include a first edge direction and a second edge direction, and the first edge direction is approximately perpendicular to the second edge direction.
- The image processing method according to claim 9, wherein the at least two edge directions further include a third edge direction and a fourth edge direction, the third edge direction is approximately perpendicular to the fourth edge direction, and the included angle between the first edge direction and the third edge direction is approximately 45°.
- The image processing method according to claim 10, wherein the first edge direction is approximately parallel to one of the two diagonals of the rectangle defined by the n×n neighborhood; the second edge direction is approximately parallel to the other of the two diagonals of the rectangle defined by the n×n neighborhood; the third edge direction is approximately parallel to the row direction in which the plurality of pixels in the n×n neighborhood are arranged; and the fourth edge direction is approximately parallel to the column direction in which the plurality of pixels in the n×n neighborhood are arranged.
- The image processing method according to any one of claims 1 to 11, wherein the value range of the preset entropy threshold is 0.3 to 0.8.
- The image processing method according to any one of claims 1 to 12, wherein the n×n neighborhood is a 4×4 neighborhood.
- The image processing method according to any one of claims 1 to 13, wherein the first interpolation algorithm is a bicubic convolution interpolation algorithm, and/or the second interpolation algorithm is a B-spline interpolation algorithm.
- An image processing method, comprising: determining, according to the position coordinates in a target image of any interpolated pixel to be processed on the target image, the position coordinates of the interpolated pixel in an original image, wherein the target image is an image obtained by enhancing the resolution of the original image, and the interpolated pixel is a pixel generated during resolution enhancement; calculating, according to the position coordinates of the interpolated pixel in the original image, the gradient values of at least two edge directions within an n×n neighborhood of the interpolated pixel in the original image, and determining, according to the gradient values of the at least two edge directions, whether a strong edge direction exists among the at least two edge directions, where n ≥ 2 and n is a positive integer; if a strong edge direction exists, calculating the pixel value of the interpolated pixel through a first interpolation algorithm based on a plurality of original pixels in the strong edge direction within the n×n neighborhood; if no strong edge direction exists, calculating the two-dimensional image entropy of the interpolated pixel over the n×n neighborhood; in the case where the two-dimensional image entropy is greater than or equal to a preset entropy threshold, calculating the pixel value of the interpolated pixel through the first interpolation algorithm based on all the original pixels within the n×n neighborhood; and in the case where the two-dimensional image entropy is less than the preset entropy threshold, calculating the pixel value of the interpolated pixel through a second interpolation algorithm based on all the original pixels within the n×n neighborhood.
- An image processing apparatus, comprising: a coordinate mapping component configured to determine, according to the position coordinates in a target image of any interpolated pixel to be processed on the target image, the position coordinates of the interpolated pixel in an original image, wherein the target image is an image obtained by enhancing the resolution of the original image, and the interpolated pixel is a pixel generated during resolution enhancement; a two-dimensional image entropy calculation component configured to calculate, according to the position coordinates of the interpolated pixel in the original image, the two-dimensional image entropy of the n×n neighborhood of the interpolated pixel in the original image, where n ≥ 2 and n is a positive integer; a strong edge direction determination component configured to calculate, according to the position coordinates of the interpolated pixel in the original image, the gradient values of at least two edge directions within the n×n neighborhood of the interpolated pixel in the original image, and to determine, according to the gradient values of the at least two edge directions, whether a strong edge direction exists among the at least two edge directions; a first pixel value calculation component configured to calculate, in the case where a strong edge direction exists among the at least two edge directions, the pixel value of the interpolated pixel through a first interpolation algorithm based on a plurality of original pixels in the strong edge direction within the n×n neighborhood, and, in the case where the two-dimensional image entropy is greater than or equal to a preset entropy threshold, to calculate the pixel value of the interpolated pixel through the first interpolation algorithm based on all the original pixels within the n×n neighborhood; and a second pixel value calculation component configured to calculate, in the case where the two-dimensional image entropy is less than the preset entropy threshold and no strong edge direction exists among the at least two edge directions, the pixel value of the interpolated pixel through a second interpolation algorithm based on all the original pixels within the n×n neighborhood.
- An electronic device, comprising a processor and a memory, wherein the memory stores computer program instructions adapted to be executed by the processor, and the computer program instructions, when run by the processor, perform the image processing method according to any one of claims 1 to 15.
- A computer-readable storage medium storing computer program instructions, wherein the computer program instructions, when run on a computer, cause the computer to perform the image processing method according to any one of claims 1 to 15.
- A computer program product comprising computer program instructions, wherein, when the computer program instructions are executed on a computer, they cause the computer to perform the image processing method according to any one of claims 1 to 15.
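Taken together, the claims above describe a concrete decision pipeline whose individual steps are easy to sketch. The first step maps each interpolated pixel of the enlarged target image back to (generally fractional) coordinates in the original image. A minimal Python sketch, assuming the common half-pixel center-alignment convention (the claims only require some such mapping, not this particular formula):

```python
def to_source_coords(x_t, y_t, scale_x, scale_y):
    """Map target-image pixel coordinates back into the original image.

    The half-pixel offset keeps pixel centers aligned between the two
    images; the claims do not mandate this exact convention.
    """
    x_s = (x_t + 0.5) / scale_x - 0.5
    y_s = (y_t + 0.5) / scale_y - 0.5
    return x_s, y_s
```

The integer part of the resulting coordinates then selects the n×n neighborhood of original pixels that the later steps operate on.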
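Claims 2 and 3 define each direction's gradient as the sum (or mean) of absolute gray-value differences between adjacent original pixels sampled along that direction. A small illustrative sketch, where the pixel coordinates passed in are hypothetical examples (the claims leave the exact selected range to the implementation):

```python
def directional_gradient(gray, points, use_mean=False):
    """Gradient of one edge direction: sum (or mean) of the absolute
    gray-value differences between each pair of adjacent pixels in
    `points`, the (row, col) samples along that direction.
    """
    values = [float(gray[r][c]) for r, c in points]
    diffs = [abs(a - b) for a, b in zip(values, values[1:])]
    return (sum(diffs) / len(diffs)) if use_mean else sum(diffs)

# Example: a 4x4 neighborhood containing a vertical gray-value edge.
patch = [[10, 10, 200, 200],
         [10, 10, 200, 200],
         [10, 10, 200, 200],
         [10, 10, 200, 200]]

# Sampling along the row direction crosses the edge; sampling along
# the column direction runs parallel to it.
g_row = directional_gradient(patch, [(1, 0), (1, 1), (1, 2), (1, 3)])
g_col = directional_gradient(patch, [(0, 1), (1, 1), (2, 1), (3, 1)])
```

On this patch the direction crossing the edge yields a large gradient (190) while the direction along the edge yields 0, which is exactly the asymmetry the ratio test of claims 4 to 6 exploits.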
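Claims 4 to 6 turn those gradients into a strong-edge decision by comparing larger-to-smaller gradient ratios against a preset threshold T, which claim 8 places between 1.2 and 1.3. A sketch of the four-direction variant of claim 4 (the epsilon guard against zero gradients on flat patches is an added assumption, not in the claims):

```python
def strong_edge_direction(g1, g2, g3, g4, T=1.25):
    """Ratio test of claim 4: return 1-4, the index of the strong
    edge direction, or None when no strong edge exists.

    T = 1.25 sits inside the 1.2-1.3 range of claim 8. Note the
    claims leave the a1 == a2 > T case undecided; this sketch treats
    it as "no strong edge".
    """
    eps = 1e-9  # added guard against division by zero
    a1 = max(g1, g2) / (min(g1, g2) + eps)  # first vs. second direction
    a2 = max(g3, g4) / (min(g3, g4) + eps)  # third vs. fourth direction
    if a1 > a2 > T:
        return 1 if g1 >= g2 else 2  # larger gradient of the pair wins
    if a2 > a1 > T:
        return 3 if g3 >= g4 else 4
    return None  # a1 <= T and/or a2 <= T: no strong edge
```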
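When no strong edge exists, the independent method claim falls back on the two-dimensional image entropy of the neighborhood: entropy at or above the preset threshold (claimed to lie in 0.3 to 0.8) keeps the first, detail-preserving algorithm, while lower entropy switches to the second, smoothing algorithm. The claims shown here do not spell out the entropy formula, so the normalized histogram entropy below is only an illustrative stand-in, and the returned labels are placeholders for the claimed bicubic-convolution and B-spline interpolators:

```python
import numpy as np

def patch_entropy(patch, bins=16):
    """Normalized Shannon entropy of the gray values in a patch.

    Illustrative stand-in for the claimed two-dimensional image
    entropy, which the claims shown here do not define in full.
    """
    hist, _ = np.histogram(patch, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum() / np.log2(bins))

def choose_interpolator(patch, strong_edge, entropy_threshold=0.5):
    """Dispatch rule of the independent method claim.

    strong_edge is the strong edge direction (or None); the returned
    labels stand for the first (e.g. bicubic convolution) and second
    (e.g. B-spline) interpolation algorithms.
    """
    if strong_edge is not None:
        return "first"   # interpolate along the strong edge direction
    if patch_entropy(patch) >= entropy_threshold:
        return "first"   # detail-rich region: all pixels, first algorithm
    return "second"      # smooth region: all pixels, second algorithm
```

A flat patch has near-zero entropy and falls through to the smoothing branch, while a patch spreading its gray values across many histogram bins stays with the first algorithm.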
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/793,487 US20230082346A1 (en) | 2020-06-17 | 2021-06-16 | Image processing methods, electronic devices, and non-transitory computer-readable storage media |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010553063.5 | 2020-06-17 | ||
CN202010553063.5A CN113808012B (zh) | 2020-06-17 | Image processing method, computer device, and computer-readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021254381A1 (zh) | 2021-12-23 |
Family
ID=78892638
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/100328 WO2021254381A1 (zh) | 2021-06-16 | Image processing method and apparatus, electronic device, and computer-readable storage medium |
Country Status (3)
Country | Link |
---|---|
US (1) | US20230082346A1 (zh) |
CN (1) | CN113808012B (zh) |
WO (1) | WO2021254381A1 (zh) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115496751A (zh) * | 2022-11-16 | 2022-12-20 | 威海捷诺曼自动化股份有限公司 | Winding detection method for a fiber winding machine |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105447819A (zh) * | 2015-12-04 | 2016-03-30 | Tencent Technology (Shenzhen) Co., Ltd. | Image processing method and apparatus |
US20170053380A1 (en) * | 2015-08-17 | 2017-02-23 | Flir Systems, Inc. | Edge guided interpolation and sharpening |
CN108805806A (zh) * | 2017-04-28 | 2018-11-13 | Huawei Technologies Co., Ltd. | Image processing method and apparatus |
CN110349090A (zh) * | 2019-07-16 | 2019-10-18 | Hefei University of Technology | Image scaling method based on Newton second-order interpolation |
CN110738625A (zh) * | 2019-10-21 | 2020-01-31 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Image resampling method and apparatus, terminal, and computer-readable storage medium |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8085850B2 (en) * | 2003-04-24 | 2011-12-27 | Zador Andrew M | Methods and apparatus for efficient encoding of image edges, motion, velocity, and detail |
JP4701144B2 (ja) * | 2006-09-26 | 2011-06-15 | Fujitsu Limited | Image processing apparatus, image processing method, and image processing program |
WO2011096091A1 (ja) * | 2010-02-08 | 2011-08-11 | Xu Weigang | Image compression device, image decompression device, image compression method, image decompression method, and recording medium |
KR101854611B1 (ko) * | 2016-12-05 | 2018-05-04 | Incheon National University Industry-Academic Cooperation Foundation | Image processing method based on weight assignment using entropy |
CN108364254B (zh) * | 2018-03-20 | 2021-07-23 | Beijing Qihoo Technology Co., Ltd. | Image processing method and apparatus, and electronic device |
CN110555794B (zh) * | 2018-05-31 | 2021-07-23 | Beijing SenseTime Technology Development Co., Ltd. | Image processing method and apparatus, electronic device, and storage medium |
CN109903224B (zh) * | 2019-01-25 | 2023-03-31 | Zhuhai Jieli Technology Co., Ltd. | Image scaling method and apparatus, computer device, and storage medium |
CN111080532B (zh) * | 2019-10-16 | 2024-07-19 | Shenzhen Research Institute of Beijing Institute of Technology | Super-resolution restoration method for remote sensing images based on ideal edge extrapolation |
- 2020-06-17 CN CN202010553063.5A patent/CN113808012B/zh active Active
- 2021-06-16 WO PCT/CN2021/100328 patent/WO2021254381A1/zh active Application Filing
- 2021-06-16 US US17/793,487 patent/US20230082346A1/en active Pending
Also Published As
Publication number | Publication date |
---|---|
CN113808012A (zh) | 2021-12-17 |
CN113808012B (zh) | 2024-07-12 |
US20230082346A1 (en) | 2023-03-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP4295340B2 (ja) | Two-dimensional image enlargement and pinching | |
CN110827229B (zh) | Infrared image enhancement method based on texture-weighted histogram equalization | |
JP7175197B2 (ja) | Image processing method and apparatus, storage medium, and computer device | |
CN110503704B (zh) | Trimap construction method and apparatus, and electronic device | |
WO2020186385A1 (zh) | Image processing method, electronic device, and computer-readable storage medium | |
CN111583381B (zh) | Rendering method and apparatus for game resource maps, and electronic device | |
WO2021254381A1 (zh) | Image processing method and apparatus, electronic device, and computer-readable storage medium | |
CN111968033A (zh) | Image scaling processing method and apparatus | |
CN111539238A (zh) | Two-dimensional code image restoration method and apparatus, computer device, and storage medium | |
CN114022384A (zh) | Adaptive edge-preserving denoising method based on an anisotropic diffusion model | |
WO2021000495A1 (zh) | Image processing method and apparatus | |
JPWO2019041842A5 (zh) | ||
CN111192302A (zh) | Feature matching method based on motion smoothness and the RANSAC algorithm | |
CN115661464A (zh) | Image segmentation method, apparatus, device, and computer storage medium | |
JP6164977B2 (ja) | Image processing apparatus, image processing method, and image processing program | |
JP2017201454A (ja) | Image processing apparatus and program | |
CN112907708B (zh) | Face cartoonization method, device, and computer storage medium | |
CN118071979B (zh) | Deep-learning-based wafer pre-alignment method and system | |
CN111968139B (zh) | Contour detection method based on the fixational micro-movement mechanism of the primary visual cortex | |
JP5934019B2 (ja) | Gradation restoration device and program therefor | |
TWI736335B (zh) | Depth-image-based generation method, electronic device, and computer program product | |
CN110648341B (zh) | Object boundary detection method based on scale space and subgraphs | |
CN114973288B (zh) | Text detection method and system for non-product images, and computer storage medium | |
WO2022205606A1 (zh) | Mold processing method and apparatus, electronic device, system, and storage medium | |
WO2024212665A1 (zh) | Image scaling method, apparatus, device, and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 21826044; Country of ref document: EP; Kind code of ref document: A1 |
NENP | Non-entry into the national phase | Ref country code: DE |
122 | Ep: pct application non-entry in european phase | Ref document number: 21826044; Country of ref document: EP; Kind code of ref document: A1 |
32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 07.07.2023) |