WO2021254381A1 - Image processing method and apparatus, electronic device, and computer-readable storage medium - Google Patents

Image processing method and apparatus, electronic device, and computer-readable storage medium

Info

Publication number
WO2021254381A1
WO2021254381A1 · PCT/CN2021/100328
Authority
WO
WIPO (PCT)
Prior art keywords
edge direction
value
edge
pixel
neighborhood
Prior art date
Application number
PCT/CN2021/100328
Other languages
English (en)
French (fr)
Inventor
刘小磊
孙建康
陈丽莉
张浩
Original Assignee
BOE Technology Group Co., Ltd.
Beijing BOE Optoelectronics Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by BOE Technology Group Co., Ltd. and Beijing BOE Optoelectronics Technology Co., Ltd.
Priority to US 17/793,487 (published as US20230082346A1)
Publication of WO2021254381A1

Classifications

    • All classifications fall under G (PHYSICS), G06 (COMPUTING; CALCULATING OR COUNTING), G06T (IMAGE DATA PROCESSING OR GENERATION, IN GENERAL):
    • G06T 5/70 — Denoising; Smoothing
    • G06T 3/4007 — Scaling of whole images or parts thereof, e.g. expanding or contracting, based on interpolation, e.g. bilinear interpolation
    • G06T 3/403 — Edge-driven scaling; Edge-based scaling
    • G06T 5/77 — Retouching; Inpainting; Scratch removal
    • G06T 7/13 — Edge detection
    • G06T 7/136 — Segmentation; Edge detection involving thresholding
    • G06T 2207/10004 — Still image; Photographic image
    • G06T 2207/20192 — Edge enhancement; Edge preservation

Definitions

  • The present disclosure relates to the field of image processing technology, and in particular to an image processing method and apparatus, an electronic device, and a computer-readable storage medium.
  • Image resolution enhancement is an important research direction. Existing image resolution enhancement methods are mostly implemented with image interpolation algorithms in which every interpolated pixel is processed in the same way; the processed images therefore have low definition and poor quality, and the resolution of the original image cannot be enhanced by an arbitrary ratio.
  • In one aspect, an image processing method is provided. The method includes:
  • determining, according to the position coordinates in the target image of any interpolated pixel to be processed on the target image, the position coordinates of the interpolated pixel in the original image, where the target image is the original image after resolution enhancement and the interpolated pixels are pixels generated during the resolution enhancement.
  • calculating the two-dimensional image entropy of the n×n neighborhood of the interpolated pixel in the original image, where n ≥ 2 and n is a positive integer; if the entropy is greater than or equal to a preset entropy threshold, calculating the pixel value of the interpolated pixel by a first interpolation algorithm based on all original pixels in the n×n neighborhood.
  • if the entropy is less than the preset entropy threshold, calculating the gradient values of at least two edge directions in the n×n neighborhood, and determining from these gradient values whether a strong edge direction exists among the at least two edge directions; if one exists, calculating the pixel value of the interpolated pixel by the first interpolation algorithm based on the multiple original pixels in the strong edge direction within the n×n neighborhood; if not, calculating the pixel value of the interpolated pixel by a second interpolation algorithm based on all original pixels in the n×n neighborhood.
  • In some embodiments, calculating the gradient values of at least two edge directions in the n×n neighborhood includes: obtaining, for each edge direction, the multiple original pixels within a selected range in that edge direction in the n×n neighborhood; calculating the absolute value of the gray-value difference of every two adjacent original pixels in each edge direction; and taking the sum or the mean of the absolute gray-value differences of every two adjacent original pixels within the selected range in each edge direction as the gradient value of that edge direction.
  • the at least two edge directions include a first edge direction, a second edge direction, a third edge direction, and a fourth edge direction.
  • Calculating the gradient values of the at least two edge directions in the n×n neighborhood then includes: taking the sum or the mean of the absolute gray-value differences of every two adjacent original pixels within the selected range in the first edge direction as the gradient value of the first edge direction, and likewise taking the sum or the mean of the corresponding absolute differences within the selected ranges in the second, third, and fourth edge directions as the gradient values of those directions.
  • In some embodiments, determining whether there is a strong edge direction among the at least two edge directions according to their gradient values includes: determining ε₁, the ratio of the larger value to the smaller value between the gradient values of the first and second edge directions, ε₂, the ratio of the larger value to the smaller value between the gradient values of the third and fourth edge directions, and the preset ratio threshold T. If ε₁ > ε₂ > T, it is determined that there is a strong edge direction, and the edge direction with the larger gradient value between the first and second edge directions is determined as the strong edge direction; if ε₂ > ε₁ > T, it is determined that there is a strong edge direction, and the edge direction with the larger gradient value between the third and fourth edge directions is determined as the strong edge direction; if ε₁ ≤ T and/or ε₂ ≤ T, it is determined that there is no strong edge direction.
  • the at least two edge directions include a first edge direction and a second edge direction.
  • In this case, determining whether there is a strong edge direction includes: determining ε, the ratio of the larger value to the smaller value between the gradient values of the first and second edge directions, and the preset ratio threshold T; if ε > T, it is determined that there is a strong edge direction, and the edge direction with the larger gradient value between the first and second edge directions is determined as the strong edge direction; if ε ≤ T, it is determined that there is no strong edge direction.
  • the at least two edge directions include a first edge direction, a second edge direction, and a third edge direction.
  • In this case, determining whether there is a strong edge direction includes: determining ε₃, the ratio of the larger value to the smaller value between the gradient values of the first and second edge directions, ε₄, the ratio of the larger value to the smaller value between the gradient values of the second and third edge directions, and the preset ratio threshold T; if ε₃ > ε₄ > T, it is determined that there is a strong edge direction, and the edge direction with the larger gradient value between the first and second edge directions is determined as the strong edge direction; if ε₄ > ε₃ > T, it is determined that there is a strong edge direction, and the edge direction with the larger gradient value between the second and third edge directions is determined as the strong edge direction; if ε₃ ≤ T and/or ε₄ ≤ T, it is determined that there is no strong edge direction.
  • In some embodiments, when the sum of the absolute gray-value differences of every two adjacent original pixels in the selected range in an edge direction is used as the gradient value of that edge direction, the selected ranges used to calculate the two gradient values being compared include equal numbers of original pixels.
  • the preset ratio threshold T has a value range of 1.2 to 1.3.
  • the at least two edge directions include a first edge direction and a second edge direction, and the first edge direction is substantially perpendicular to the second edge direction.
  • In some embodiments, the at least two edge directions further include a third edge direction and a fourth edge direction; the third edge direction is substantially perpendicular to the fourth edge direction, and the angle between the first edge direction and the third edge direction is approximately 45°.
  • In some embodiments, the first edge direction is substantially parallel to one of the two diagonals of the rectangle defined by the n×n neighborhood, and the second edge direction is substantially parallel to the other diagonal; the third edge direction is substantially parallel to the row direction in which the pixels in the n×n neighborhood are arranged, and the fourth edge direction is substantially parallel to the column direction.
  • the preset entropy threshold value ranges from 0.3 to 0.8.
  • In some embodiments, the n×n neighborhood is a 4×4 neighborhood.
  • the first interpolation algorithm is a bicubic convolution interpolation algorithm
  • the second interpolation algorithm is a B-spline interpolation algorithm
  • In another aspect, an image processing method is provided. The method includes:
  • determining, according to the position coordinates in the target image of any interpolated pixel to be processed on the target image, the position coordinates of the interpolated pixel in the original image, where the target image is the original image after resolution enhancement and the interpolated pixels are pixels generated during the resolution enhancement.
  • calculating the gradient values of at least two edge directions of the interpolated pixel in its n×n neighborhood in the original image, where n ≥ 2 and n is a positive integer, and determining from these gradient values whether a strong edge direction exists among the at least two edge directions; if one exists, calculating the pixel value of the interpolated pixel by the first interpolation algorithm based on the multiple original pixels in the strong edge direction within the n×n neighborhood; if not, calculating the two-dimensional image entropy of the n×n neighborhood of the interpolated pixel.
  • if the two-dimensional image entropy is greater than or equal to a preset entropy threshold, calculating the pixel value of the interpolated pixel by the first interpolation algorithm based on all original pixels in the n×n neighborhood.
  • if the two-dimensional image entropy is less than the preset entropy threshold, calculating the pixel value of the interpolated pixel by the second interpolation algorithm based on all original pixels in the n×n neighborhood.
  • In another aspect, an image processing device is provided. The device includes a coordinate mapping component, a two-dimensional image entropy calculation component, a strong edge direction judgment component, a first pixel value calculation component, and a second pixel value calculation component.
  • The coordinate mapping component is configured to determine the position coordinates of the interpolated pixel in the original image according to the position coordinates, in the target image, of any interpolated pixel to be processed on the target image; the target image is the original image after resolution enhancement, and the interpolated pixels are pixels generated during the resolution enhancement.
  • The two-dimensional image entropy calculation component is configured to calculate the two-dimensional image entropy of the n×n neighborhood of the interpolated pixel in the original image according to the position coordinates of the interpolated pixel in the original image, where n ≥ 2 and n is a positive integer.
  • The strong edge direction judgment component is configured to calculate the gradient values of at least two edge directions of the interpolated pixel in its n×n neighborhood in the original image according to the position coordinates of the interpolated pixel in the original image, and to determine from the gradient values of the at least two edge directions whether there is a strong edge direction among them.
  • The first pixel value calculation component is configured to calculate the pixel value of the interpolated pixel by the first interpolation algorithm based on the multiple original pixels in the strong edge direction within the n×n neighborhood when there is a strong edge direction among the at least two edge directions, and to calculate the pixel value of the interpolated pixel by the first interpolation algorithm based on all original pixels in the n×n neighborhood when the two-dimensional image entropy is greater than or equal to a preset entropy threshold.
  • The second pixel value calculation component is configured to calculate the pixel value of the interpolated pixel by the second interpolation algorithm based on all original pixels in the n×n neighborhood when the two-dimensional image entropy is less than the preset entropy threshold and there is no strong edge direction among the at least two edge directions.
  • In another aspect, an electronic device is provided. The electronic device includes a processor and a memory, and computer program instructions suitable for execution by the processor are stored in the memory; when the computer program instructions are executed by the processor, the electronic device performs the image processing method described in any of the foregoing embodiments.
  • In another aspect, a computer-readable storage medium is provided. The computer-readable storage medium stores computer program instructions that, when run on a computer, cause the computer to execute the image processing method described in any of the foregoing embodiments.
  • In another aspect, a computer program product is provided. The computer program product includes computer program instructions that, when executed on a computer, cause the computer to execute the image processing method described in any of the foregoing embodiments.
  • In another aspect, a computer program is provided.
  • When the computer program is executed on a computer, the computer program causes the computer to execute the image processing method described in any of the above-mentioned embodiments.
  • FIG. 1 is a flowchart of an image processing method according to some embodiments.
  • FIG. 2 is another flowchart of an image processing method according to some embodiments.
  • FIG. 3 is yet another flowchart of an image processing method according to some embodiments.
  • FIG. 4 is another flowchart of an image processing method according to some embodiments.
  • FIG. 5 is yet another flowchart of an image processing method according to some embodiments.
  • FIG. 6 is still another flowchart of an image processing method according to some embodiments.
  • FIG. 7 is a schematic diagram of an interpolated pixel and its 4×4 neighborhood pixels in an image processing method according to some embodiments.
  • FIG. 8 is a schematic diagram of different texture types according to some embodiments.
  • FIG. 9 is a schematic diagram of the first edge direction, the second edge direction, the third edge direction, and the fourth edge direction marked in FIG. 7.
  • FIG. 10 is a schematic diagram of original pixels in a selected range in a first edge direction of an image processing method according to some embodiments.
  • FIG. 11 is a schematic diagram of original pixels in a selected range in a second edge direction of an image processing method according to some embodiments.
  • FIG. 12 is a schematic diagram of original pixels in a selected range in a third edge direction of an image processing method according to some embodiments.
  • FIG. 13 is a schematic diagram of original pixels in a selected range in a fourth edge direction of an image processing method according to some embodiments.
  • FIG. 14 is a comparative effect diagram of an image before and after processing by an image processing method according to some embodiments.
  • FIG. 15 is a partial enlarged view of the boxed area in FIG. 14.
  • FIG. 16 is a flowchart of an image processing method according to other embodiments.
  • FIG. 17 is a structural diagram of an image processing apparatus according to some embodiments.
  • FIG. 18 is a structural diagram of another image processing apparatus according to some embodiments.
  • FIG. 19 is a structural diagram of an electronic device according to some embodiments.
  • first and second are only used for descriptive purposes, and cannot be understood as indicating or implying relative importance or implicitly indicating the number of indicated technical features. Therefore, the features defined with “first” and “second” may explicitly or implicitly include one or more of these features.
  • plural means two or more.
  • the expressions “coupled” and “connected” and their extensions may be used.
  • the term “connected” may be used when describing some embodiments to indicate that two or more components are in direct physical or electrical contact with each other.
  • the term “coupled” may be used when describing some embodiments to indicate that two or more components have direct physical or electrical contact.
  • the term “coupled” or “communicatively coupled” may also mean that two or more components are not in direct contact with each other, but still cooperate or interact with each other.
  • the embodiments disclosed herein are not necessarily limited to the content of this document.
  • a and/or B includes the following three combinations: A only, B only, and the combination of A and B.
  • Some embodiments of the present disclosure provide an image processing method for achieving image resolution enhancement, and the method includes S10 to S60.
  • S10: Determine the position coordinates of the interpolated pixel in the original image according to the position coordinates, in the target image, of any interpolated pixel to be processed on the target image.
  • the target image is an image obtained by performing resolution enhancement on the original image
  • the interpolated pixel points are pixels generated when the resolution enhancement is performed.
  • In some embodiments, the position coordinates of the interpolated pixel in the original image can be determined through coordinate mapping. For example, suppose the position coordinates of the interpolated pixel in the target image are (u, v); they are mapped to the original image according to the following mapping formulas.
  • The mapping formula in the X-axis direction (the width direction of the image), in the standard center-aligned form, is:
  • fx = (float)((u + 0.5) / inv_scale_x − 0.5), x = floor(fx)
  • The mapping formula in the Y-axis direction (the height direction of the image) is:
  • fy = (float)((v + 0.5) / inv_scale_y − 0.5), y = floor(fy)
  • Here, (float) means the value is taken as floating-point data; inv_scale_x is the ratio of the target image to the original image in the X-axis direction; inv_scale_y is the ratio of the target image to the original image in the Y-axis direction; floor(fx) and floor(fy) round fx and fy down to the nearest integer; and (x, y) is the position coordinate of the interpolated pixel in the original image.
  • The above coordinate mapping formulas make it possible to enhance the resolution of the original image by an arbitrary ratio.
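  • As an illustration only, the following minimal Python sketch applies the coordinate mapping above under the center-aligned convention; the function and variable names are ours, not the patent's.

```python
import math

def map_to_original(u, v, inv_scale_x, inv_scale_y):
    """Map an interpolated pixel (u, v) in the target image back into the
    original image. inv_scale_x / inv_scale_y are the target-to-original
    size ratios per axis. Returns the integer anchor (x, y) and the
    floating-point position (fx, fy)."""
    fx = float((u + 0.5) / inv_scale_x - 0.5)  # center-aligned mapping, X axis
    fy = float((v + 0.5) / inv_scale_y - 0.5)  # center-aligned mapping, Y axis
    x = math.floor(fx)  # floor(fx): round down to an integer
    y = math.floor(fy)  # floor(fy): round down to an integer
    return x, y, fx, fy

# Example: enhancing a 100x100 original to 250x250 (an arbitrary 2.5x ratio).
x, y, fx, fy = map_to_original(u=124, v=37, inv_scale_x=2.5, inv_scale_y=2.5)
```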
  • S20: Calculate the two-dimensional image entropy of the n×n neighborhood of the interpolated pixel in the original image according to the position coordinates of the interpolated pixel in the original image, where n ≥ 2 and n is a positive integer.
  • The n×n neighborhood is a set area around the position coordinates (x, y) of the interpolated pixel in the original image; the set area includes n×n original pixels, where n ≥ 2 and n is a positive integer. For example, the set area may include 2×2 original pixels, 3×3 original pixels, or 4×4 original pixels, and the present disclosure is not limited thereto.
  • For example, when the n×n neighborhood is a 4×4 neighborhood, the position coordinates of the interpolated pixel in FIG. 7 in the original image are (x, y), and the position coordinates of the 16 original pixels in its 4×4 neighborhood are (x1, y1), (x1, y2), (x1, y3), (x1, y4), (x2, y1), (x2, y2), (x2, y3), (x2, y4), (x3, y1), (x3, y2), (x3, y3), (x3, y4), (x4, y1), (x4, y2), (x4, y3), and (x4, y4).
  • The two-dimensional image entropy of the 4×4 neighborhood of the interpolated pixel in the original image can be calculated, for example, using the common two-dimensional image entropy expression
  • H = −Σi Σj p(i, j) log₂ p(i, j),
  • where i is the gray value of a pixel in the neighborhood, j is the mean gray value of the pixels surrounding it, and p(i, j) is the frequency of occurrence of the pair (i, j) in the neighborhood; evaluating this expression gives the two-dimensional image entropy of the 4×4 neighborhood.
  • When the two-dimensional image entropy is greater than or equal to a preset entropy threshold, the texture type of the n×n neighborhood is determined to be the complex texture shown in 8C of FIG. 8, and step S30 is executed; when the two-dimensional image entropy is less than the preset entropy threshold, the texture type of the n×n neighborhood is the edge texture shown in 8A of FIG. 8 or the smooth texture shown in 8B of FIG. 8, and step S40 is executed. The two-dimensional image entropy is therefore compared with the preset entropy threshold to distinguish these cases.
  • the preset entropy threshold has a value range of 0.3 to 0.8; for example, the preset entropy threshold has a value of 0.3, 0.4, 0.5, 0.6, 0.7, or 0.8.
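  • As a sketch of S20, the following Python function computes the two-dimensional image entropy of a small neighborhood under the common (gray value, local mean) pairing described above; the exact pairing and any normalization used by the patent are not specified here, so treat those details as assumptions.

```python
import numpy as np

def two_d_entropy(patch):
    """Two-dimensional image entropy of a small gray-level patch (e.g. 4x4).
    Pairs each pixel's gray value i with the rounded mean gray value j of its
    3x3 neighborhood, estimates p(i, j) by counting, and returns
    H = -sum over (i, j) of p(i, j) * log2(p(i, j))."""
    patch = np.asarray(patch, dtype=np.float64)
    h, w = patch.shape
    counts = {}
    for r in range(h):
        for c in range(w):
            i = int(patch[r, c])
            block = patch[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
            j = int(round(block.mean()))  # local mean, clipped at the border
            counts[(i, j)] = counts.get((i, j), 0) + 1
    n = h * w
    return -sum((k / n) * np.log2(k / n) for k in counts.values())

# A flat patch gives entropy 0 (smooth texture); a noisy one gives more.
print(two_d_entropy(np.full((4, 4), 128)))               # -> 0.0
print(two_d_entropy(np.random.randint(0, 256, (4, 4))))  # -> larger value
```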
  • In step S30, the pixel value of the interpolated pixel is calculated by the first interpolation algorithm based on all the original pixels in the n×n neighborhood. In some embodiments, the first interpolation algorithm is a bicubic convolution interpolation algorithm.
  • The bicubic convolution interpolation algorithm assigns different weights to the original pixels according to their distances from the interpolated pixel, performs a weighted sum of the original pixels using these weights, and uses the result as the pixel value of the interpolated pixel. This removes aliasing in the target image to a certain extent, so the target image obtained by resolution enhancement has better quality.
  • The bicubic convolution interpolation algorithm performs the convolution operation with a piecewise convolution kernel function; in the standard Keys form it is:
  • u(s) = (a + 2)|s|³ − (a + 3)|s|² + 1, for 0 ≤ |s| < 1;
  • u(s) = a|s|³ − 5a|s|² + 8a|s| − 4a, for 1 ≤ |s| < 2;
  • u(s) = 0, otherwise;
  • where s is the distance between the original pixel and the interpolated pixel, measured in units of the spacing between two adjacent original pixels; a is the kernel parameter (commonly a = −0.5); and u(s) is the weight applied to that original pixel when computing the pixel value of the interpolated pixel.
  • In this way, the pixel value of the interpolated pixel is calculated by the bicubic convolution interpolation algorithm. Alternatively, the pixel value of the interpolated pixel can be calculated based on all the original pixels in the 4×4 neighborhood through other interpolation algorithms, such as the nearest-neighbor interpolation algorithm or the bilinear interpolation algorithm.
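  • The following Python sketch evaluates the Keys kernel above and applies it separably over a 4×4 neighborhood; the choice a = −0.5 is a common default, not mandated by the text.

```python
def keys_kernel(s, a=-0.5):
    """Piecewise cubic convolution kernel u(s) with free parameter a."""
    s = abs(s)
    if s < 1:
        return (a + 2) * s**3 - (a + 3) * s**2 + 1
    if s < 2:
        return a * s**3 - 5 * a * s**2 + 8 * a * s - 4 * a
    return 0.0

def bicubic_value(patch4x4, dx, dy, a=-0.5):
    """Interpolate at fractional offset (dx, dy) inside a 4x4 patch.
    patch4x4[r][c] holds the 16 original gray values; (dx, dy) is the
    fractional part of the mapped position, so the four samples per axis
    sit at distances dx+1, dx, 1-dx, and 2-dx from the interpolated pixel."""
    wx = [keys_kernel(d, a) for d in (dx + 1, dx, 1 - dx, 2 - dx)]
    wy = [keys_kernel(d, a) for d in (dy + 1, dy, 1 - dy, 2 - dy)]
    return sum(wy[r] * sum(wx[c] * patch4x4[r][c] for c in range(4))
               for r in range(4))
```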
  • When the entropy of the two-dimensional image is less than the preset entropy threshold, steps S40 to S60 are executed in order to determine whether the texture type of the n×n neighborhood is the edge texture shown in 8A of FIG. 8 or the smooth texture shown in 8B of FIG. 8.
  • The at least two edge directions may include two edge directions, three edge directions, or four edge directions, but are of course not limited thereto. It should be understood that the more edge directions the at least two edge directions include, the better the definition of the processed image.
  • After the gradient value of each edge direction is obtained, it is determined whether there is a strong edge direction among the at least two edge directions. If there is, S50 is executed; if there is not, S60 is executed.
  • S50: Calculate the pixel value of the interpolated pixel by the first interpolation algorithm based on the original pixels in the strong edge direction in the n×n neighborhood.
  • the first interpolation algorithm is a bicubic convolution interpolation algorithm.
  • the calculation formulas and beneficial effects adopted by the bicubic convolution interpolation algorithm are consistent with those described above, and will not be repeated here.
  • the second interpolation algorithm is a B-spline interpolation algorithm.
  • The spline curve of the B-spline interpolation algorithm has the advantage of being differentiable and smooth at the nodes.
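  • For comparison, a sketch of a uniform cubic B-spline kernel is given below; it is one common choice of B-spline, and swapping it into bicubic_value() above yields the smoother interpolation used here for non-edge neighborhoods.

```python
def cubic_bspline_kernel(s):
    """Uniform cubic B-spline kernel: smooth and differentiable at the nodes."""
    s = abs(s)
    if s < 1:
        return (4 - 6 * s**2 + 3 * s**3) / 6
    if s < 2:
        return (2 - s)**3 / 6
    return 0.0

def bspline_value(patch4x4, dx, dy):
    """bicubic_value() with the B-spline kernel: a low-pass, smoothing variant."""
    wx = [cubic_bspline_kernel(d) for d in (dx + 1, dx, 1 - dx, 2 - dx)]
    wy = [cubic_bspline_kernel(d) for d in (dy + 1, dy, 1 - dy, 2 - dy)]
    return sum(wy[r] * sum(wx[c] * patch4x4[r][c] for c in range(4))
               for r in range(4))
```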
  • S40 includes S41 to S43.
  • The selected range of each edge direction in the n×n neighborhood may be preset, and the selected range of each edge direction may include multiple groups of original pixels distributed along that edge direction.
  • For example, as shown in FIG. 10, in a 4×4 neighborhood, the selected range of an edge direction at an angle of 45° to the width direction of the image contains three groups of original pixels distributed along that direction: two groups each include 3 original pixels, and the remaining group includes 4 original pixels.
  • the shape of the selected range is not limited, for example, it may be a rectangle, a circle, or the like.
  • the number of edge directions used to determine the texture type is not limited, for example, it can be two, three, or four.
  • four edge directions are used to determine the texture type, which are the first edge direction L1, the second edge direction L2, the third edge direction L3, and the fourth edge direction L4.
  • the first edge direction L1 and the second edge direction L2 are substantially perpendicular.
  • In some embodiments, the first edge direction L1 is substantially parallel to one of the two diagonals of the rectangle defined by the n×n neighborhood, and the second edge direction L2 is substantially parallel to the other diagonal; the rectangle defined by the n×n neighborhood is the rectangle enclosed by the outermost ring of pixels in the n×n neighborhood.
  • the first edge direction L1 is a parallel line that forms an angle of 45° with the horizontal direction
  • the second edge direction L2 is a parallel line that forms an angle of 135° with the horizontal direction.
  • the third edge direction L3 and the fourth edge direction L4 are approximately perpendicular, and the angle between the first edge direction L1 and the third edge direction L3 is approximately 45°.
  • The third edge direction L3 is approximately parallel to the row direction in which the pixels in the n×n neighborhood are arranged, and the fourth edge direction L4 is approximately parallel to the column direction in which they are arranged.
  • the third edge direction L3 is a parallel line at an angle of 0° to the horizontal direction
  • the fourth edge direction L4 is a parallel line at an angle of 90° to the horizontal direction.
  • first edge direction L1, the second edge direction L2, the third edge direction L3, and the fourth edge direction L4 can also be selected in other directions.
  • first edge direction L1 is a 30° direction
  • second edge direction L2 is a 120° direction
  • third edge direction L3 is a -15° direction
  • fourth edge direction L4 is a 75° direction.
  • S40 includes S411 to S414.
  • S411: Take the sum or the mean of the absolute gray-value differences of every two adjacent original pixels within the selected range in the first edge direction L1 as the gradient value of the first edge direction L1.
  • For example, suppose the first edge direction L1 is a parallel line at an angle of 45° to the horizontal direction, the n×n neighborhood is a 4×4 neighborhood, and the selected range is the rectangular box shown in FIG. 10. Taking the sum of the absolute gray-value differences of every two adjacent original pixels along the 45° direction within the rectangular box as the gradient value in the 45° direction gives:
  • G45° = Σ |f(Pk+1) − f(Pk)|,
  • where the sum runs over the 7 pairs of adjacent original pixels on the three 45° lines inside the box (the two 3-pixel lines contribute 2 pairs each and the 4-pixel line contributes 3 pairs), f(·) denotes the gray value of a pixel, and G45° is the gradient value in the 45° direction.
  • Alternatively, when the mean of the absolute gray-value differences of every two adjacent original pixels in the selected range in the first edge direction L1 is taken as the gradient value of the first edge direction L1, the sum of the absolute values is divided by the number of difference values; for the 4×4 neighborhood with the rectangular box shown in FIG. 10, the above G45° is divided by 7.
  • S412: Take the sum or the mean of the absolute gray-value differences of every two adjacent original pixels within the selected range in the second edge direction L2 as the gradient value of the second edge direction L2.
  • For example, suppose the second edge direction L2 is a parallel line at an angle of 135° to the horizontal direction, the n×n neighborhood is a 4×4 neighborhood, and the selected range is the rectangular box shown in FIG. 11. The gradient value G135° in the 135° direction is then the sum of the absolute gray-value differences over the 7 pairs of adjacent original pixels on the three 135° lines inside the box.
  • Alternatively, when the mean of the absolute gray-value differences of every two adjacent original pixels in the selected range in the second edge direction L2 is taken as the gradient value of the second edge direction L2, the sum of the absolute values is divided by the number of difference values; for the 4×4 neighborhood with the rectangular box shown in FIG. 11, the above G135° is divided by 7.
  • S413: Take the sum or the mean of the absolute gray-value differences of every two adjacent original pixels within the selected range in the third edge direction L3 as the gradient value of the third edge direction L3.
  • For example, suppose the third edge direction L3 is a parallel line at an angle of 0° to the horizontal direction, the n×n neighborhood is a 4×4 neighborhood, and the selected range is the rectangular box shown in FIG. 12. The gradient value G0° in the 0° direction is then the sum of the absolute gray-value differences over the 6 pairs of horizontally adjacent original pixels on the two rows inside the box (each 4-pixel row contributes 3 pairs).
  • Alternatively, when the mean of the absolute gray-value differences of every two adjacent original pixels in the selected range in the third edge direction L3 is taken as the gradient value of the third edge direction L3, the sum of the absolute values is divided by the number of difference values; for the 4×4 neighborhood with the rectangular box shown in FIG. 12, the above G0° is divided by 6.
  • S414: Take the sum or the mean of the absolute gray-value differences of every two adjacent original pixels within the selected range in the fourth edge direction L4 as the gradient value of the fourth edge direction L4.
  • For example, suppose the fourth edge direction L4 is a parallel line at an angle of 90° to the horizontal direction, the n×n neighborhood is a 4×4 neighborhood, and the selected range is the rectangular box shown in FIG. 13. The gradient value G90° in the 90° direction is then the sum of the absolute gray-value differences over the 6 pairs of vertically adjacent original pixels on the two columns inside the box (each 4-pixel column contributes 3 pairs).
  • Alternatively, when the mean of the absolute gray-value differences of every two adjacent original pixels in the selected range in the fourth edge direction L4 is taken as the gradient value of the fourth edge direction L4, the sum of the absolute values is divided by the number of difference values; for the 4×4 neighborhood with the rectangular box shown in FIG. 13, the above G90° is divided by 6.
  • S411, S412, S413, and S414 may be executed in any order; for example, S411, S412, S413, and S414 may be executed in sequence, or S412, S411, S413, and S414 may be executed in sequence. Of course, the present disclosure is not limited thereto.
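  • A Python sketch of S411 to S414 over a 4×4 patch follows; FIGS. 10 to 13 are not reproduced here, so the exact pixel groups below (the 3/4/3 diagonal lines and the two middle rows/columns) follow the counts given in the text but are otherwise an assumption.

```python
import numpy as np

def directional_gradients(p):
    """Sum-of-absolute-difference gradients of a 4x4 patch p in the
    45, 135, 0, and 90 degree directions. Diagonal directions use three
    lines of 3, 4, and 3 pixels (7 adjacent pairs, 10 pixels); the row and
    column directions use the two middle rows/columns (6 pairs, 8 pixels)."""
    p = np.asarray(p, dtype=np.float64)

    def line_grad(points):
        vals = [p[r, c] for r, c in points]
        return sum(abs(b - a) for a, b in zip(vals, vals[1:]))

    g45 = (line_grad([(2, 0), (1, 1), (0, 2)])
           + line_grad([(3, 0), (2, 1), (1, 2), (0, 3)])
           + line_grad([(3, 1), (2, 2), (1, 3)]))
    g135 = (line_grad([(1, 0), (2, 1), (3, 2)])
            + line_grad([(0, 0), (1, 1), (2, 2), (3, 3)])
            + line_grad([(0, 1), (1, 2), (2, 3)]))
    g0 = line_grad([(1, c) for c in range(4)]) + line_grad([(2, c) for c in range(4)])
    g90 = line_grad([(r, 1) for r in range(4)]) + line_grad([(r, 2) for r in range(4)])
    return g45, g135, g0, g90
```

  • Dividing g45 and g135 by 7 and g0 and g90 by 6 gives the mean-based variants described above.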
  • S50 includes S510 to S513.
  • S510: Determine ε₁, the ratio of the larger value to the smaller value between the gradient values of the first edge direction L1 and the second edge direction L2, ε₂, the ratio of the larger value to the smaller value between the gradient values of the third edge direction L3 and the fourth edge direction L4, and the preset ratio threshold T:
  • ε₁ = Max(G₁, G₂) / Min(G₁, G₂);
  • ε₂ = Max(G₃, G₄) / Min(G₃, G₄);
  • where G₁ is the gradient value of the first edge direction L1, G₂ is the gradient value of the second edge direction L2, G₃ is the gradient value of the third edge direction L3, and G₄ is the gradient value of the fourth edge direction L4.
  • When the n×n neighborhood is a 4×4 neighborhood, G₁ = G45°, G₂ = G135°, G₃ = G0°, and G₄ = G90°.
  • In some embodiments, the preset ratio threshold has a value in the range of 1.2 to 1.3; for example, the preset ratio threshold is 1.2, 1.25, 1.26, 1.27, 1.28, 1.29, or 1.3.
  • When the ratio of the larger value to the smaller value between two gradient values is determined, the selected ranges used to calculate those two gradient values include equal numbers of original pixels.
  • As shown in FIG. 10, the selected range of the first edge direction L1 includes 10 original pixels; as shown in FIG. 11, the selected range of the second edge direction L2 includes 10 original pixels; as shown in FIG. 12, the selected range of the third edge direction L3 includes 8 original pixels; as shown in FIG. 13, the selected range of the fourth edge direction L4 includes 8 original pixels.
  • If ε₁ > ε₂ > T, the texture type of the n×n neighborhood is determined to be the edge texture shown in 8A of FIG. 8, the first edge direction L1 or the second edge direction L2 is a strong edge direction, and S511 is executed; if ε₂ > ε₁ > T, the texture type is likewise the edge texture shown in 8A of FIG. 8, the third edge direction L3 or the fourth edge direction L4 is a strong edge direction, and S512 is executed.
  • When comparing ε₁, ε₂, and T, the three values can be compared directly, or the relative sizes of ε₁ and ε₂ can be determined first and the smaller of the two then compared with T.
  • the present disclosure is not limited to this.
  • S511: Determine that there is a strong edge direction, and determine the edge direction with the larger gradient value between the first edge direction L1 and the second edge direction L2 as the strong edge direction: if G₁ > G₂, the first edge direction L1 is the strong edge direction; if G₂ > G₁, the second edge direction L2 is the strong edge direction.
  • S512: Determine that there is a strong edge direction, and determine the edge direction with the larger gradient value between the third edge direction L3 and the fourth edge direction L4 as the strong edge direction: if G₃ > G₄, the third edge direction L3 is the strong edge direction; if G₄ > G₃, the fourth edge direction L4 is the strong edge direction.
  • S513: If ε₁ ≤ T and/or ε₂ ≤ T, determine that there is no strong edge direction, and calculate the pixel value of the interpolated pixel by the second interpolation algorithm based on all the original pixels in the n×n neighborhood.
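  • A sketch of the S510 to S513 branching, reusing directional_gradients() from the previous sketch; the threshold default 1.25 sits inside the 1.2 to 1.3 range given above and is otherwise arbitrary.

```python
def strong_edge_direction(g1, g2, g3, g4, t=1.25):
    """Return a label for the strong edge direction, or None if there is none.
    g1..g4 are the gradient values of the 45, 135, 0, and 90 degree
    directions; t is the preset ratio threshold."""
    eps1 = max(g1, g2) / max(min(g1, g2), 1e-12)  # guard against division by 0
    eps2 = max(g3, g4) / max(min(g3, g4), 1e-12)
    if eps1 > eps2 > t:
        # S511: the direction with the larger gradient value among L1/L2
        return "L1 (45 deg)" if g1 > g2 else "L2 (135 deg)"
    if eps2 > eps1 > t:
        # S512: the direction with the larger gradient value among L3/L4
        return "L3 (0 deg)" if g3 > g4 else "L4 (90 deg)"
    return None  # S513: eps1 <= t and/or eps2 <= t, no strong edge direction
```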
  • the number of edge directions used to determine the texture type is not limited, for example, it can be two, three, or four.
  • two edge directions are used, which are the first edge direction L1 and the second edge direction L2, respectively.
  • the first edge direction L1 and the second edge direction L2 are substantially perpendicular.
  • The first edge direction L1 is substantially parallel to one of the two diagonals of the rectangle defined by the n×n neighborhood, and the second edge direction L2 is substantially parallel to the other diagonal; the rectangle defined by the n×n neighborhood is the rectangle enclosed by the outermost ring of pixels in the n×n neighborhood.
  • the first edge direction L1 is a parallel line that forms an angle of 45° with the horizontal direction
  • the second edge direction L2 is a parallel line that forms an angle of 135° with the horizontal direction.
  • first edge direction L1 and the second edge direction L2 can also be other directions.
  • first edge direction L1 is a 30° direction
  • second edge direction L2 is a 120° direction.
  • When the edge directions include the first edge direction L1 and the second edge direction L2, in the process of determining whether there is a strong edge direction between them, as shown in FIG. 5, S50 includes S520 to S522.
  • S520: Determine ε, the ratio of the larger value to the smaller value between the gradient values of the first edge direction L1 and the second edge direction L2, and the preset ratio threshold T.
  • In some embodiments, the preset ratio threshold has a value in the range of 1.2 to 1.3; for example, the preset ratio threshold is 1.2, 1.25, 1.26, 1.27, 1.28, 1.29, or 1.3.
  • When the ratio of the larger value to the smaller value between two gradient values is determined, the selected ranges used to calculate those two gradient values include equal numbers of original pixels.
  • As shown in FIG. 10, the selected range of the first edge direction L1 includes 10 original pixels; as shown in FIG. 11, the selected range of the second edge direction L2 includes 10 original pixels.
  • The calculation of the gradient value of the first edge direction L1 and the gradient value of the second edge direction L2 here is similar to that described above for the case where the at least two edge directions include the first edge direction L1, the second edge direction L2, the third edge direction L3, and the fourth edge direction L4, and is not repeated here.
  • S521: If ε > T, determine that there is a strong edge direction, and determine the edge direction with the larger gradient value between the first edge direction L1 and the second edge direction L2 as the strong edge direction.
  • S522: If ε ≤ T, determine that there is no strong edge direction, and calculate the pixel value of the interpolated pixel by the second interpolation algorithm based on all the original pixels in the n×n neighborhood.
  • the number of edge directions used to determine the texture type is not limited, for example, it can be two, three, or four.
  • In this example, three edge directions are used: the first edge direction L1, the second edge direction L2, and the third edge direction L3.
  • the first edge direction L1 and the second edge direction L2 are substantially perpendicular.
  • The first edge direction L1 is substantially parallel to one of the two diagonals of the rectangle defined by the n×n neighborhood, and the second edge direction L2 is substantially parallel to the other diagonal; the rectangle defined by the n×n neighborhood is the rectangle enclosed by the outermost ring of pixels in the n×n neighborhood.
  • the first edge direction L1 is a parallel line that forms an angle of 45° with the horizontal direction
  • the second edge direction L2 is a parallel line that forms an angle of 135° with the horizontal direction.
  • the included angle between the first edge direction L1 and the third edge direction L3 is approximately 45°.
  • The third edge direction L3 is substantially parallel to the row direction in which the pixels in the n×n neighborhood are arranged.
  • the third edge direction L3 is a parallel line at an angle of 0° to the horizontal direction.
  • first edge direction L1, the second edge direction L2, and the third edge direction L3 can also be other directions.
  • first edge direction L1 is a 60° direction
  • second edge direction L2 is a 120° direction
  • third edge direction L3 is a 0° direction.
  • S50 includes S530 to S533.
  • In some embodiments, the preset ratio threshold has a value in the range of 1.2 to 1.3; illustratively, the preset ratio threshold is 1.2, 1.25, 1.26, 1.27, 1.28, 1.29, or 1.3.
  • When the ratio of the larger value to the smaller value between two gradient values is determined, the selected ranges used to calculate those two gradient values include equal numbers of original pixels.
  • The selected range of the first edge direction L1 includes 10 original pixels; the selected range of the second edge direction L2 includes 10 original pixels; the selected range of the third edge direction L3 includes 10 original pixels.
  • The calculation of the gradient values of the first edge direction L1, the second edge direction L2, and the third edge direction L3 is similar to that described above for the case where the at least two edge directions include the first edge direction L1, the second edge direction L2, the third edge direction L3, and the fourth edge direction L4, and is not repeated here.
  • S530: Determine ε₃, the ratio of the larger value to the smaller value between the gradient values of the first edge direction L1 and the second edge direction L2, ε₄, the ratio of the larger value to the smaller value between the gradient values of the second edge direction L2 and the third edge direction L3, and the preset ratio threshold T.
  • S531: If ε₃ > ε₄ > T, determine that there is a strong edge direction, and determine the edge direction with the larger gradient value between the first edge direction L1 and the second edge direction L2 as the strong edge direction.
  • S532: If ε₄ > ε₃ > T, determine that there is a strong edge direction, and determine the edge direction with the larger gradient value between the second edge direction L2 and the third edge direction L3 as the strong edge direction.
  • S533: If ε₃ ≤ T and/or ε₄ ≤ T, determine that there is no strong edge direction, and calculate the pixel value of the interpolated pixel by the second interpolation algorithm based on all the original pixels in the n×n neighborhood.
  • As shown in FIG. 14, the definition of the target image on the right, obtained by this image processing method, is significantly improved.
  • The partial enlarged view in FIG. 15 of the image area in the rectangular frame of FIG. 14 shows the improvement in definition more intuitively.
  • In summary, when enhancing image resolution, the image processing method determines the texture type through edge-texture-direction determination and two-dimensional image entropy calculation, and uses different interpolation algorithms for different texture types to calculate the pixel values of the interpolated pixels, thereby improving the definition of the image while enhancing its resolution.
  • In addition, the position coordinates of the interpolated pixels in the target image are mapped to the original image through the mapping formulas, so the resolution of the original image can be enhanced by an arbitrary ratio (for example, an integer ratio, a decimal ratio, an odd ratio, or an even ratio).
  • Some embodiments of the present disclosure provide an image processing method. As shown in FIG. 16, the image processing method includes S100 to S600.
  • the target image is an image obtained by performing resolution enhancement on the original image
  • the interpolated pixel points are pixels generated when the resolution enhancement is performed.
  • S100 can refer to S10 of the image processing method of some of the foregoing embodiments, which is not repeated here.
  • S200 can refer to S40 of the image processing method of some of the foregoing embodiments, which will not be repeated here.
  • After the gradient value of each edge direction is obtained, it is determined from the gradient values of the at least two edge directions whether there is a strong edge direction; if there is, S300 is executed; if there is not, S400 is executed.
  • S300: Calculate the pixel value of the interpolated pixel by the first interpolation algorithm based on the multiple original pixels in the strong edge direction in the n×n neighborhood.
  • S300 can refer to S50 of the image processing method of some of the foregoing embodiments, which will not be repeated here.
  • S400: Calculate the two-dimensional image entropy of the n×n neighborhood of the interpolated pixel.
  • S400 can refer to S20 of the image processing method of some of the foregoing embodiments, and details are not described herein.
  • After the two-dimensional image entropy is obtained, it is determined whether it is greater than or equal to the preset entropy threshold: if it is, S500 is executed; if it is less than the preset entropy threshold, S600 is executed.
  • S500: Calculate the pixel value of the interpolated pixel by the first interpolation algorithm based on all the original pixels in the n×n neighborhood.
  • S500 can refer to S30 of the image processing method of some of the foregoing embodiments, which will not be repeated here.
  • S600 can refer to S60 of the image processing method of some of the foregoing embodiments, and details are not described herein.
  • In this image processing method, it is first determined whether there is a strong edge direction among the at least two edge directions. If there is, the pixel value of the interpolated pixel is calculated by the first interpolation algorithm. If there is not, the two-dimensional image entropy is calculated and compared with the preset entropy threshold, and the first or second interpolation algorithm is selected according to the comparison result to calculate the pixel value of the interpolated pixel.
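  • Putting the pieces together, the following sketch of the S100 to S600 flow reuses the illustrative helpers from the earlier sketches (directional_gradients, strong_edge_direction, two_d_entropy, bicubic_value, bspline_value); note that in the patent, S300 restricts the source pixels to the strong edge direction, which is simplified away here.

```python
def process_interpolated_pixel(patch4x4, dx, dy, entropy_th=0.5, ratio_th=1.25):
    """Edge-first variant (S100-S600): test for a strong edge direction first,
    and fall back to the entropy test only when there is none."""
    g1, g2, g3, g4 = directional_gradients(patch4x4)
    if strong_edge_direction(g1, g2, g3, g4, ratio_th) is not None:
        return bicubic_value(patch4x4, dx, dy)  # S300 (simplified, see above)
    if two_d_entropy(patch4x4) >= entropy_th:
        return bicubic_value(patch4x4, dx, dy)  # S500: complex texture
    return bspline_value(patch4x4, dx, dy)      # S600: smooth texture
```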
  • The above mainly introduces the image processing method provided by the embodiments of the present disclosure. Correspondingly, an image processing device that implements the above image processing method is also provided; it is introduced by way of example below.
  • As shown in FIG. 17 (and FIG. 18), the image processing device 1 (2) includes a coordinate mapping component 11 (21), a two-dimensional image entropy calculation component 12 (23), a strong edge direction judgment component 13 (22), a first pixel value calculation component 14 (24), and a second pixel value calculation component 15 (25).
  • The coordinate mapping component 11 (21) is configured to determine the position coordinates of the interpolated pixel in the original image according to the position coordinates, in the target image, of any interpolated pixel to be processed on the target image.
  • the target image is the image after the resolution enhancement of the original image, and the interpolated pixels are the pixels generated when the resolution is enhanced.
  • In some embodiments, the coordinate mapping component 11 may be configured to perform the above S10; for its specific working process, refer to the corresponding description of S10 in the foregoing method embodiments, which is not repeated here.
  • In some embodiments, the coordinate mapping component 21 may be configured to perform the above S100; for its specific working process, refer to the corresponding description of S100 in the foregoing method embodiments, which is not repeated here.
  • The two-dimensional image entropy calculation component 12 (23) is configured to calculate the two-dimensional image entropy of the n×n neighborhood of the interpolated pixel in the original image according to the position coordinates of the interpolated pixel in the original image, where n ≥ 2 and n is a positive integer.
  • In some embodiments, the two-dimensional image entropy calculation component 12 may be configured to perform the above S20; for its specific working process, refer to the corresponding description of S20 in the foregoing method embodiments, which is not repeated here.
  • In some embodiments, the two-dimensional image entropy calculation component 23 may be configured to perform the above S400; for its specific working process, refer to the corresponding description of S400 in the foregoing method embodiments, which is not repeated here.
  • The strong edge direction judgment component 13 (22) is configured to calculate the gradient values of at least two edge directions of the interpolated pixel in its n×n neighborhood in the original image according to the position coordinates of the interpolated pixel in the original image, and to determine from the gradient values of the at least two edge directions whether there is a strong edge direction among them.
  • In some embodiments, the strong edge direction judgment component 13 may be configured to perform the above S40; for its specific working process, refer to the corresponding description of S40 in the foregoing method embodiments, which is not repeated here.
  • In some embodiments, the strong edge direction judgment component 22 may be configured to perform the above S200; for its specific working process, refer to the corresponding description of S200 in the foregoing method embodiments, which is not repeated here.
  • The first pixel value calculation component 14 (24) is configured to calculate the pixel value of the interpolated pixel by the first interpolation algorithm based on the multiple original pixels in the strong edge direction within the n×n neighborhood when there is a strong edge direction among the at least two edge directions, and to calculate the pixel value of the interpolated pixel by the first interpolation algorithm based on all the original pixels in the n×n neighborhood when the two-dimensional image entropy is greater than or equal to the preset entropy threshold.
  • In some embodiments, the first pixel value calculation component 14 may be configured to perform the above S30 and S50; for its specific working process, refer to the corresponding descriptions of S30 and S50 in the foregoing method embodiments, which are not repeated here.
  • In some embodiments, the first pixel value calculation component 24 may be configured to perform the above S300 and S500; for its specific working process, refer to the corresponding descriptions of S300 and S500 in the foregoing method embodiments, which are not repeated here.
  • The second pixel value calculation component 15 (25) is configured to calculate the pixel value of the interpolated pixel by the second interpolation algorithm based on all the original pixels in the n×n neighborhood when the two-dimensional image entropy is less than the preset entropy threshold and there is no strong edge direction among the at least two edge directions.
  • In some embodiments, the second pixel value calculation component 15 may be configured to perform the above S60; for its specific working process, refer to the corresponding description of S60 in the foregoing method embodiments, which is not repeated here.
  • In some embodiments, the second pixel value calculation component 25 may be configured to perform the above S600; for its specific working process, refer to the corresponding description of S600 in the foregoing method embodiments, which is not repeated here.
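  • The component split maps naturally onto a small class; the sketch below is an illustrative arrangement reusing the earlier helpers, not the patent's implementation.

```python
class ImageProcessingDevice:
    """Illustrative mapping of the components onto methods."""

    def __init__(self, inv_scale_x, inv_scale_y, entropy_th=0.5, ratio_th=1.25):
        self.inv_scale_x, self.inv_scale_y = inv_scale_x, inv_scale_y
        self.entropy_th, self.ratio_th = entropy_th, ratio_th

    def coordinate_mapping(self, u, v):          # component 11 / 21 (S10, S100)
        return map_to_original(u, v, self.inv_scale_x, self.inv_scale_y)

    def image_entropy(self, patch):              # component 12 / 23 (S20, S400)
        return two_d_entropy(patch)

    def strong_edge(self, patch):                # component 13 / 22 (S40, S200)
        return strong_edge_direction(*directional_gradients(patch), self.ratio_th)

    def first_pixel_value(self, patch, dx, dy):  # component 14 / 24 (S30/S50, S300/S500)
        return bicubic_value(patch, dx, dy)

    def second_pixel_value(self, patch, dx, dy): # component 15 / 25 (S60, S600)
        return bspline_value(patch, dx, dy)
```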
  • the electronic device 3 includes a processor 31 and a memory 32.
  • the memory 32 stores computer program instructions suitable for execution by the processor 31.
  • When the computer program instructions are executed by the processor 31, the electronic device performs the image processing method of any of the above embodiments.
  • the processor 31 is used to support the electronic device 3 to execute one or more steps in the above-mentioned image processing method.
  • the processor 31 may be a central processing unit (Central Processing Unit, CPU for short), or other general-purpose processors, digital signal processors (DSP), application specific integrated circuits (ASIC), field programmable gate arrays (FPGA) or other Programmable logic devices, discrete gates or transistor logic devices, discrete hardware components, etc.
  • the general-purpose processor may be a microprocessor or the processor may also be any conventional processor or the like.
  • the memory 32 is used to store the program code and data of the electronic device 3 provided by the embodiment of the present disclosure.
  • the processor 31 can execute various functions of the electronic device 3 by running or executing a software program stored in the memory 32 and calling data stored in the memory 32.
  • The memory 32 can be a read-only memory (Read-Only Memory, ROM) or another type of static storage device capable of storing static information and instructions, a random access memory (Random Access Memory, RAM) or another type of dynamic storage device capable of storing information and instructions, an electrically erasable programmable read-only memory (Electrically Erasable Programmable Read-Only Memory, EEPROM), a compact disc read-only memory (Compact Disc Read-Only Memory, CD-ROM) or other optical disc storage (including compact discs, laser discs, optical discs, digital versatile discs, Blu-ray discs, etc.), a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto.
  • The memory may exist independently and be connected to the processor through a communication bus.
  • The memory 32 may also be integrated with the processor 31.
  • Some embodiments of the present disclosure also provide a computer-readable storage medium (for example, a non-transitory computer-readable storage medium) storing computer program instructions that, when run on a computer, cause the computer to perform the image processing method according to any of the above embodiments.
  • The foregoing computer-readable storage medium may include, but is not limited to: magnetic storage devices (for example, hard disks, floppy disks, or tapes), optical discs (for example, CDs (Compact Discs) and DVDs (Digital Versatile Discs)), smart cards, and flash memory devices (for example, EPROM (Erasable Programmable Read-Only Memory), cards, sticks, or key drives).
  • Various computer-readable storage media described in this disclosure may represent one or more devices and/or other machine-readable storage media for storing information.
  • the term "machine-readable storage medium” may include, but is not limited to, wireless channels and various other media capable of storing, containing, and/or carrying instructions and/or data.
  • Some embodiments of the present disclosure also provide a computer program product.
  • The computer program product includes computer program instructions that, when executed on a computer, cause the computer to perform the image processing method according to any of the above embodiments.
  • Some embodiments of the present disclosure also provide a computer program.
  • When the computer program is executed on a computer, it causes the computer to perform the image processing method according to any of the above embodiments.
  • The disclosed apparatus and method may be implemented in other ways.
  • The apparatus embodiments described above are merely illustrative.
  • The division of components is only a division of logical functions; there may be other divisions in actual implementation. For example, multiple components may be combined or integrated into another apparatus, or some features may be ignored or not implemented.
  • Components described as separate may or may not be physically separate, and a component or components may or may not be physical units. Some or all of the components may be selected according to actual needs to achieve the objectives of the solutions of the embodiments of the present disclosure.
  • The various functional components in some embodiments of the present disclosure may be integrated into one processing unit, each component may exist alone physically, or two or more components may be integrated into one unit.
  • The above integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
  • If the above integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium.
  • Based on this understanding, the technical solution of the present disclosure, in essence, or the part contributing to the prior art, or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or part of the steps of the methods described in the various embodiments of the present disclosure.

Abstract

An image processing method, comprising: determining, according to the position coordinates in a target image of any interpolated pixel to be processed on the target image, the position coordinates of the interpolated pixel in an original image; calculating the two-dimensional image entropy of an n×n neighborhood of the interpolated pixel in the original image; in a case where the two-dimensional image entropy is greater than or equal to a preset entropy threshold, calculating the pixel value of the interpolated pixel by a first interpolation algorithm based on all original pixels in the n×n neighborhood; in a case where the two-dimensional image entropy is less than the preset entropy threshold, calculating gradient values of at least two edge directions in the n×n neighborhood and judging whether a strong edge direction exists among the at least two edge directions; if so, calculating the pixel value of the interpolated pixel by the first interpolation algorithm based on a plurality of original pixels along the strong edge direction in the n×n neighborhood; if not, calculating the pixel value of the interpolated pixel by a second interpolation algorithm based on all original pixels in the n×n neighborhood.

Description

Image processing method and apparatus, electronic device, and computer-readable storage medium
This application claims priority to Chinese Patent Application No. 202010553063.5, filed on June 17, 2020, which is incorporated herein by reference in its entirety.
Technical Field
The present disclosure relates to the field of image processing technology, and in particular to an image processing method and apparatus, an electronic device, and a computer-readable storage medium.
Background
With the development of image processing technology, image resolution enhancement has become an important research direction. Existing resolution enhancement methods are essentially all implemented with image interpolation algorithms in which every interpolated pixel is processed in the same way. As a result, the processed image has low sharpness and still-poor quality, and the original image cannot be enhanced at an arbitrary scaling ratio.
Summary
In one aspect, an image processing method is provided. The image processing method includes:
determining, according to the position coordinates in a target image of any interpolated pixel to be processed on the target image, the position coordinates of the interpolated pixel in an original image, where the target image is an image obtained by performing resolution enhancement on the original image, and the interpolated pixel is a pixel generated during the resolution enhancement;
calculating, according to the position coordinates of the interpolated pixel in the original image, the two-dimensional image entropy of an n×n neighborhood of the interpolated pixel in the original image, where n ≥ 2 and n is a positive integer;
in a case where the two-dimensional image entropy is greater than or equal to a preset entropy threshold, calculating the pixel value of the interpolated pixel by a first interpolation algorithm based on all original pixels in the n×n neighborhood;
in a case where the two-dimensional image entropy is less than the preset entropy threshold, separately calculating gradient values of at least two edge directions in the n×n neighborhood, and judging, according to those gradient values, whether a strong edge direction exists among the at least two edge directions; if so, calculating the pixel value of the interpolated pixel by the first interpolation algorithm based on a plurality of original pixels along the strong edge direction in the n×n neighborhood; if not, calculating the pixel value of the interpolated pixel by a second interpolation algorithm based on all original pixels in the n×n neighborhood.
In some embodiments, separately calculating the gradient values of the at least two edge directions in the n×n neighborhood includes: separately obtaining the gray values of a plurality of original pixels along each edge direction in the n×n neighborhood; calculating the absolute value of the gray-value difference of every two adjacent original pixels along each edge direction; and taking the sum or the mean of the absolute values of the gray-value differences of every two adjacent original pixels within a selected range along each edge direction as the gradient value of that edge direction.
In some embodiments, the at least two edge directions include a first edge direction, a second edge direction, a third edge direction, and a fourth edge direction, and separately calculating the gradient values of the at least two edge directions in the n×n neighborhood includes: taking the sum or the mean of the absolute values of the gray-value differences of every two adjacent original pixels within a selected range along the first edge direction as the gradient value of the first edge direction, and doing likewise for the second, third, and fourth edge directions.
In some embodiments, judging, according to the gradient values of the at least two edge directions, whether a strong edge direction exists among them includes: determining the ratio of the larger to the smaller of the gradient values of the first and second edge directions as α1, the ratio of the larger to the smaller of the gradient values of the third and fourth edge directions as α2, and a preset ratio threshold as T; if α1 > α2 > T, determining that a strong edge direction exists and taking whichever of the first and second edge directions has the larger gradient value as the strong edge direction; if α2 > α1 > T, determining that a strong edge direction exists and taking whichever of the third and fourth edge directions has the larger gradient value as the strong edge direction; if α1 ≤ T and/or α2 ≤ T, determining that no strong edge direction exists.
In some embodiments, the at least two edge directions include a first edge direction and a second edge direction, and judging, according to the gradient values of the at least two edge directions, whether a strong edge direction exists among them includes: determining the ratio of the larger to the smaller of the gradient values of the first and second edge directions as α, and a preset ratio threshold as T; if α > T, determining that a strong edge direction exists and taking whichever of the two has the larger gradient value as the strong edge direction; if α ≤ T, determining that no strong edge direction exists.
In some embodiments, the at least two edge directions include a first edge direction, a second edge direction, and a third edge direction, and judging, according to the gradient values of the at least two edge directions, whether a strong edge direction exists among them includes: determining the ratio of the larger to the smaller of the gradient values of the first and second edge directions as α3, the ratio of the larger to the smaller of the gradient values of the second and third edge directions as α4, and a preset ratio threshold as T; if α3 > α4 > T, determining that a strong edge direction exists and taking whichever of the first and second edge directions has the larger gradient value as the strong edge direction; if α4 > α3 > T, determining that a strong edge direction exists and taking whichever of the second and third edge directions has the larger gradient value as the strong edge direction; if α3 ≤ T and/or α4 ≤ T, determining that no strong edge direction exists.
In some embodiments, when the sum of the absolute gray-value differences of every two adjacent original pixels within the selected range along an edge direction is used as the gradient value of that edge direction, in determining the ratio of the larger to the smaller of two gradient values, the selected ranges used to compute those two gradient values contain equal numbers of original pixels along their respective edge directions.
In some embodiments, the preset ratio threshold T ranges from 1.2 to 1.3.
In some embodiments, the at least two edge directions include a first edge direction and a second edge direction, the first edge direction being substantially perpendicular to the second edge direction.
In some embodiments, the at least two edge directions further include a third edge direction and a fourth edge direction; the third edge direction is substantially perpendicular to the fourth edge direction, and the angle between the first edge direction and the third edge direction is substantially 45°.
In some embodiments, the first edge direction is substantially parallel to one of the two diagonals of the rectangle defined by the n×n neighborhood, and the second edge direction is substantially parallel to the other; the third edge direction is substantially parallel to the row direction in which the pixels of the n×n neighborhood are arranged, and the fourth edge direction is substantially parallel to the column direction.
In some embodiments, the preset entropy threshold ranges from 0.3 to 0.8.
In some embodiments, the n×n neighborhood is a 4×4 neighborhood.
In some embodiments, the first interpolation algorithm is a bicubic convolution interpolation algorithm, and/or the second interpolation algorithm is a B-spline interpolation algorithm.
In another aspect, an image processing method is provided. The image processing method includes:
determining, according to the position coordinates in a target image of any interpolated pixel to be processed on the target image, the position coordinates of the interpolated pixel in an original image, where the target image is an image obtained by performing resolution enhancement on the original image, and the interpolated pixel is a pixel generated during the resolution enhancement;
separately calculating, according to the position coordinates of the interpolated pixel in the original image, gradient values of at least two edge directions in an n×n neighborhood of the interpolated pixel in the original image (n ≥ 2, n being a positive integer), and judging, according to those gradient values, whether a strong edge direction exists among the at least two edge directions; if so, calculating the pixel value of the interpolated pixel by the first interpolation algorithm based on a plurality of original pixels along the strong edge direction in the n×n neighborhood; if not, calculating the two-dimensional image entropy of the interpolated pixel over the n×n neighborhood;
in a case where the two-dimensional image entropy is greater than or equal to a preset entropy threshold, calculating the pixel value of the interpolated pixel by a first interpolation algorithm based on all original pixels in the n×n neighborhood;
in a case where the two-dimensional image entropy is less than the preset entropy threshold, calculating the pixel value of the interpolated pixel by a second interpolation algorithm based on all original pixels in the n×n neighborhood.
In yet another aspect, an image processing apparatus is provided. The image processing apparatus includes a coordinate mapping component, a two-dimensional image entropy calculation component, a strong edge direction judgment component, a first pixel value calculation component, and a second pixel value calculation component.
The coordinate mapping component is configured to determine, according to the position coordinates in a target image of any interpolated pixel to be processed on the target image, the position coordinates of the interpolated pixel in an original image; the target image is an image obtained by performing resolution enhancement on the original image, and the interpolated pixel is a pixel generated during the resolution enhancement.
The two-dimensional image entropy calculation component is configured to calculate, according to the position coordinates of the interpolated pixel in the original image, the two-dimensional image entropy of an n×n neighborhood of the interpolated pixel in the original image, where n ≥ 2 and n is a positive integer.
The strong edge direction judgment component is configured to calculate, according to the position coordinates of the interpolated pixel in the original image, gradient values of at least two edge directions in the n×n neighborhood of the interpolated pixel in the original image, and to judge, according to those gradient values, whether a strong edge direction exists among the at least two edge directions.
The first pixel value calculation component is configured to: in a case where a strong edge direction exists among the at least two edge directions, calculate the pixel value of the interpolated pixel by the first interpolation algorithm based on a plurality of original pixels along the strong edge direction in the n×n neighborhood; and, in a case where the two-dimensional image entropy is greater than or equal to a preset entropy threshold, calculate the pixel value of the interpolated pixel by the first interpolation algorithm based on all original pixels in the n×n neighborhood.
The second pixel value calculation component is configured to: in a case where the two-dimensional image entropy is less than the preset entropy threshold and no strong edge direction exists among the at least two edge directions, calculate the pixel value of the interpolated pixel by a second interpolation algorithm based on all original pixels in the n×n neighborhood.
In yet another aspect, an electronic device is provided. The electronic device includes a processor and a memory storing computer program instructions suitable for execution by the processor; when the computer program instructions are run by the processor, the image processing method according to any of the above embodiments is performed.
In yet another aspect, a computer-readable storage medium is provided, storing computer program instructions that, when run on a computer, cause the computer to perform the image processing method according to any of the above embodiments.
In yet another aspect, a computer program product is provided. The computer program product includes computer program instructions that, when executed on a computer, cause the computer to perform the image processing method according to any of the above embodiments.
In yet another aspect, a computer program is provided. When executed on a computer, the computer program causes the computer to perform the image processing method according to any of the above embodiments.
Brief Description of the Drawings
To describe the technical solutions in the present disclosure more clearly, the accompanying drawings needed in some embodiments of the present disclosure are briefly introduced below. Obviously, the drawings in the following description are only drawings of some embodiments of the present disclosure, and a person of ordinary skill in the art may obtain other drawings from them. In addition, the drawings in the following description may be regarded as schematic diagrams, and are not limitations on the actual sizes of the products, the actual flows of the methods, the actual timings of the signals, etc. involved in the embodiments of the present disclosure.
FIG. 1 is a flowchart of an image processing method according to some embodiments;
FIG. 2 is another flowchart of an image processing method according to some embodiments;
FIG. 3 is yet another flowchart of an image processing method according to some embodiments;
FIG. 4 is yet another flowchart of an image processing method according to some embodiments;
FIG. 5 is yet another flowchart of an image processing method according to some embodiments;
FIG. 6 is still another flowchart of an image processing method according to some embodiments;
FIG. 7 is a schematic diagram of an interpolated pixel and its 4×4 neighborhood pixels in an image processing method according to some embodiments;
FIG. 8 is a schematic diagram of different texture types according to some embodiments;
FIG. 9 is the diagram of FIG. 7 with the first, second, third, and fourth edge directions marked;
FIG. 10 is a schematic diagram of the original pixels within the selected range along the first edge direction in an image processing method according to some embodiments;
FIG. 11 is a schematic diagram of the original pixels within the selected range along the second edge direction in an image processing method according to some embodiments;
FIG. 12 is a schematic diagram of the original pixels within the selected range along the third edge direction in an image processing method according to some embodiments;
FIG. 13 is a schematic diagram of the original pixels within the selected range along the fourth edge direction in an image processing method according to some embodiments;
FIG. 14 is a comparison of images before and after processing by an image processing method according to some embodiments;
FIG. 15 is a partial enlarged view of the boxed regions in FIG. 14;
FIG. 16 is a flowchart of an image processing method according to some other embodiments;
FIG. 17 is a structural diagram of an image processing apparatus according to some embodiments;
FIG. 18 is a structural diagram of another image processing apparatus according to some embodiments;
FIG. 19 is a structural diagram of an electronic device according to some embodiments.
Detailed Description
The technical solutions in some embodiments of the present disclosure will be described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present disclosure. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments provided in the present disclosure fall within the protection scope of the present disclosure.
Unless the context requires otherwise, throughout the specification and claims the term "comprise" and its other forms, such as the third-person singular "comprises" and the present participle "comprising", are to be construed in an open, inclusive sense, i.e. "including, but not limited to". In the description of the specification, the terms "one embodiment", "some embodiments", "exemplary embodiments", "example", "specific example", or "some examples" are intended to indicate that a particular feature, structure, material, or characteristic related to the embodiment or example is included in at least one embodiment or example of the present disclosure. The schematic representations of the above terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be included in any one or more embodiments or examples in any suitable manner.
Hereinafter, the terms "first" and "second" are used for descriptive purposes only and are not to be understood as indicating or implying relative importance or implicitly indicating the number of the indicated technical features. Thus, a feature defined with "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the embodiments of the present disclosure, unless otherwise stated, "a plurality of" means two or more.
In describing some embodiments, the expressions "coupled" and "connected" and their derivatives may be used. For example, the term "connected" may be used to indicate that two or more components are in direct physical or electrical contact with each other. As another example, the term "coupled" may be used to indicate that two or more components are in direct physical or electrical contact. However, "coupled" or "communicatively coupled" may also mean that two or more components are not in direct contact with each other but still cooperate or interact with each other. The embodiments disclosed herein are not necessarily limited to the contents herein.
"A and/or B" includes the following three combinations: A only, B only, and a combination of A and B.
The use of "adapted to" or "configured to" herein implies open and inclusive language that does not exclude devices adapted to or configured to perform additional tasks or steps.
In addition, the use of "based on" implies openness and inclusiveness, since a process, step, calculation, or other action "based on" one or more of the stated conditions or values may in practice be based on additional conditions or values beyond those stated.
As shown in FIG. 1, some embodiments of the present disclosure provide an image processing method for image resolution enhancement. The method includes S10 to S50.
S10: determine, according to the position coordinates in the target image of any interpolated pixel to be processed on the target image, the position coordinates of the interpolated pixel in the original image. The target image is an image obtained by performing resolution enhancement on the original image, and the interpolated pixel is a pixel generated during the resolution enhancement.
Exemplarily, the position coordinates of the interpolated pixel in the original image may be determined by coordinate mapping, for example:
Take the width direction of the original image as the X-axis and the height direction of the original image as the Y-axis, with some original pixel of the original image as the origin, for example, the bottom-left original pixel.
Where the position coordinates of the interpolated pixel in the target image are (u, v), they are mapped into the original image according to the following mapping formulas:
The mapping formula along the X-axis (the width direction of the image) is:
fx = (float)((u + 0.5) × inv_scale_x − 0.5);
x = floor(fx);
The mapping formula along the Y-axis (the height direction of the image) is:
fy = (float)((v + 0.5) × inv_scale_y − 0.5);
y = floor(fy);
Here (float) denotes taking floating-point data, inv_scale_x denotes the ratio between the target image and the original image along the X-axis, inv_scale_y denotes the ratio along the Y-axis, floor(fx) and floor(fy) denote rounding fx and fy down, and (x, y) is the position coordinate of the interpolated pixel in the original image.
With the above coordinate mapping formulas, resolution enhancement of the original image at an arbitrary ratio can be achieved.
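For illustration, the mapping above can be sketched in a few lines of Python (a sketch only; the function name and the convention that inv_scale_x and inv_scale_y are passed in as the ratios defined above are assumptions, not part of the original text):

    import math

    def map_to_source(u, v, inv_scale_x, inv_scale_y):
        # Sub-pixel position of the interpolated pixel in the original image,
        # following the two mapping formulas above.
        fx = float(u + 0.5) * inv_scale_x - 0.5
        fy = float(v + 0.5) * inv_scale_y - 0.5
        # (x, y): integer coordinates obtained by rounding down.
        return math.floor(fx), math.floor(fy), fx, fy

Because fx and fy are computed in floating point before rounding, the same routine serves integer and fractional enlargement ratios alike.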
S20: calculate, according to the position coordinates of the interpolated pixel in the original image, the two-dimensional image entropy of the n×n neighborhood of the interpolated pixel in the original image, where n ≥ 2 and n is a positive integer.
The n×n neighborhood is a set region surrounding the position (x, y) of the interpolated pixel in the original image; the region contains n×n original pixels, with n ≥ 2 and n a positive integer. For example, the region may contain 2×2, 3×3, or 4×4 original pixels, and the present disclosure is not limited thereto.
In some embodiments, as shown in FIG. 7, the n×n neighborhood is a 4×4 neighborhood. The interpolated pixel in FIG. 7 has position coordinates (x, y) in the original image, and the 16 original pixels of its 4×4 neighborhood have position coordinates (x1, y1), (x1, y2), (x1, y3), (x1, y4), (x2, y1), (x2, y2), (x2, y3), (x2, y4), (x3, y1), (x3, y2), (x3, y3), (x3, y4), (x4, y1), (x4, y2), (x4, y3), and (x4, y4).
In this case, the two-dimensional image entropy of the 4×4 neighborhood of the interpolated pixel in the original image may be computed, for example, as follows:
Compress the gray values of the 16 original pixels in the 4×4 neighborhood to 8 gray levels, accumulate the 8×8 gray-level co-occurrence matrix of these 16 original pixels, and compute the two-dimensional image entropy of the 4×4 neighborhood from the expression of this 8×8 gray-level co-occurrence matrix.
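A minimal Python sketch of this entropy computation follows; since the text does not fix which pixel pairs enter the 8×8 gray-level co-occurrence matrix, pairing each pixel with its right-hand neighbor is an assumption made here for concreteness:

    import numpy as np

    def neighborhood_entropy(patch):
        # patch: 4x4 array of 8-bit gray values.
        q = (patch.astype(np.uint32) * 8) // 256      # compress 0..255 to 8 levels
        glcm = np.zeros((8, 8), dtype=np.float64)     # 8x8 co-occurrence matrix
        for i in range(4):
            for j in range(3):                        # horizontal neighbor pairs (assumed)
                glcm[q[i, j], q[i, j + 1]] += 1
        p = glcm / glcm.sum()
        nz = p[p > 0]
        return float(-(nz * np.log2(nz)).sum())       # H = -sum(p * log2 p)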
Judge whether the two-dimensional image entropy is greater than or equal to the preset entropy threshold. When it is, the texture type of the n×n neighborhood is the complex texture shown at 8C in FIG. 8. When it is less than the preset entropy threshold, the texture type of the n×n neighborhood may be the edge texture shown at 8A in FIG. 8 or the smooth texture shown at 8B in FIG. 8.
Thus, combining the two-dimensional image entropy with the preset entropy threshold: when the entropy is greater than or equal to the threshold, i.e. when the texture type of the n×n neighborhood is determined to be the complex texture shown at 8C in FIG. 8, step S30 is performed; when the entropy is less than the threshold, i.e. when the texture type is determined to be the edge texture shown at 8A or the smooth texture shown at 8B in FIG. 8, step S40 is performed.
In some embodiments, the preset entropy threshold ranges from 0.3 to 0.8; exemplarily, it is 0.3, 0.4, 0.5, 0.6, 0.7, or 0.8.
S30: calculate the pixel value of the interpolated pixel by the first interpolation algorithm based on all original pixels in the n×n neighborhood.
In some embodiments, the first interpolation algorithm is a bicubic convolution interpolation algorithm. The bicubic interpolation algorithm can assign different weights to the original pixels according to their distance from the interpolated pixel, perform a weighted operation on the original pixels according to those weights, and take the result of the weighted operation as the pixel value of the interpolated pixel. This removes jaggedness from the target image to some extent, so the target image obtained by the resolution enhancement is of good quality.
The bicubic convolution interpolation algorithm performs the convolution with the following piecewise convolution kernel function:
[Equation image not preserved in this extraction: the piecewise cubic convolution kernel u(s).]
Here a is the distance between two adjacent original pixels, s is the distance between an original pixel and the interpolated pixel, and u(s) gives the pixel value of the interpolated pixel.
Exemplarily, when the computed two-dimensional image entropy H of the 4×4 neighborhood is determined to be greater than or equal to 0.6, the pixel value of the interpolated pixel is calculated by the bicubic convolution interpolation algorithm based on all original pixels in the 4×4 neighborhood.
It should be noted that, when the two-dimensional image entropy H of the 4×4 neighborhood is greater than or equal to 0.6, besides the bicubic convolution interpolation algorithm, the pixel value of the interpolated pixel may also be calculated based on all original pixels in the 4×4 neighborhood by other interpolation algorithms such as nearest-neighbor interpolation or bilinear interpolation.
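As a sketch of this step, the following Python uses the standard Keys cubic convolution kernel (a = −0.5); the exact coefficients of the kernel image in the original filing are not preserved here, so this particular kernel is an assumption:

    def cubic_kernel(s, a=-0.5):
        # Piecewise cubic convolution kernel, nonzero for |s| < 2.
        s = abs(s)
        if s < 1:
            return (a + 2) * s**3 - (a + 3) * s**2 + 1
        if s < 2:
            return a * (s**3 - 5 * s**2 + 8 * s - 4)
        return 0.0

    def bicubic_interpolate(patch, fx, fy):
        # Weighted sum over the 16 pixels of the 4x4 patch; (fx, fy) is the
        # fractional offset of the interpolated pixel from patch[1][1].
        value = 0.0
        for i in range(4):
            for j in range(4):
                value += patch[i][j] * cubic_kernel(fy - (i - 1)) * cubic_kernel(fx - (j - 1))
        return value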
When the two-dimensional image entropy is less than the preset entropy threshold, S40 to S60 are performed to determine whether the texture type of the n×n neighborhood is the edge texture shown at 8A in FIG. 8 or the smooth texture shown at 8B in FIG. 8.
S40: separately calculate the gradient values of at least two edge directions in the n×n neighborhood.
Here, the at least two edge directions may include two, three, or four edge directions, though of course they are not limited thereto. It should be understood that the more edge directions are included, the sharper the processed picture.
After the gradient value of each edge direction is obtained, judge whether a strong edge direction exists among the at least two edge directions; if so, perform S50; if not, perform S60.
S50: calculate the pixel value of the interpolated pixel by the first interpolation algorithm based on the original pixels along the strong edge direction in the n×n neighborhood.
In some embodiments, the first interpolation algorithm is a bicubic convolution interpolation algorithm, whose formula and beneficial effects are as described above and are not repeated here.
S60: calculate the pixel value of the interpolated pixel by the second interpolation algorithm based on all original pixels in the n×n neighborhood.
In some embodiments, the second interpolation algorithm is a B-spline interpolation algorithm. The spline curves of the B-spline interpolation algorithm have the advantages of being differentiable at the knots and smooth.
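A sketch of the smoothing branch: the cubic member of the B-spline family (an assumption; the text names only "B-spline interpolation") has the following kernel, which can be substituted for cubic_kernel in the bicubic_interpolate sketch above:

    def bspline_kernel(s):
        # Uniform cubic B-spline kernel: smooth and differentiable at the knots.
        s = abs(s)
        if s < 1:
            return (3 * s**3 - 6 * s**2 + 4) / 6
        if s < 2:
            return (2 - s)**3 / 6
        return 0.0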
In some embodiments, as shown in FIG. 2, S40 includes S41 to S43.
S41: separately obtain the gray values of a plurality of original pixels within the selected range along each edge direction in the n×n neighborhood.
The selected range of each edge direction in the n×n neighborhood may be set in advance, and may include several groups of original pixels distributed along that edge direction. Exemplarily, as shown in FIG. 10, in the 4×4 neighborhood, the selected range of the edge direction at 45° to the width direction of the picture includes three groups of original pixels distributed along that direction: two groups of 3 pixels each and one remaining group of 4 pixels.
In addition, the shape of the selected range is not limited; it may be, for example, rectangular or circular.
S42: calculate the absolute value of the gray-value difference of every two adjacent original pixels along each edge direction.
S43: take the sum or the mean of the absolute values of the gray-value differences of every two adjacent original pixels within the selected range along each edge direction as the gradient value of that edge direction.
As mentioned above, the number of edge directions used to judge the texture type is not limited; it may be, for example, two, three, or four.
In some embodiments, four edge directions are used to judge the texture type: a first edge direction L1, a second edge direction L2, a third edge direction L3, and a fourth edge direction L4.
In some embodiments, as shown in FIG. 7, the first edge direction L1 is substantially perpendicular to the second edge direction L2. Exemplarily, the first edge direction L1 is substantially parallel to one of the two diagonals of the rectangle defined by the n×n neighborhood, and the second edge direction L2 is substantially parallel to the other; the rectangle defined by the n×n neighborhood is the rectangle enclosed by the lines connecting the outermost ring of pixels of the neighborhood.
For example, the first edge direction L1 is a line at 45° to the horizontal direction, and the second edge direction L2 is a line at 135° to the horizontal direction.
In some embodiments, as shown in FIG. 7, the third edge direction L3 is substantially perpendicular to the fourth edge direction L4, and the angle between the first edge direction L1 and the third edge direction L3 is substantially 45°. Exemplarily, the third edge direction L3 is substantially parallel to the row direction in which the pixels of the n×n neighborhood are arranged, and the fourth edge direction L4 is substantially parallel to the column direction.
For example, the third edge direction L3 is a line at 0° to the horizontal direction, and the fourth edge direction L4 is a line at 90° to the horizontal direction.
It should be noted that other directions may also be chosen for L1, L2, L3, and L4; for example, L1 at 30°, L2 at 120°, L3 at −15°, and L4 at 75°.
When the at least two edge directions include the first edge direction L1, the second edge direction L2, the third edge direction L3, and the fourth edge direction L4, S40 includes S411 to S414, as shown in FIG. 3.
S411: take the sum or the mean of the absolute values of the gray-value differences of every two adjacent original pixels within the selected range along the first edge direction L1 as the gradient value of L1.
Exemplarily, L1 is a line at 45° to the horizontal direction, the n×n neighborhood is a 4×4 neighborhood, and the selected range is the rectangular box shown in FIG. 10. Taking the sum of the absolute gray-value differences of every two adjacent original pixels within the box along the 45° direction as the gradient value of the 45° direction:
[Equation image not preserved: per the text, G45° is the sum of the absolute gray-value differences of the 7 pairs of adjacent original pixels lying on the three 45° diagonals inside the box of FIG. 10.]
Here G45° is the gradient value of the 45° direction.
It should be noted that, when the mean of the absolute gray-value differences within the selected range along L1 is used as the gradient value of L1, the sum of those absolute differences must additionally be divided by the number of absolute differences.
For example, with L1 at 45° to the horizontal direction, a 4×4 neighborhood, and the selected range being the box of FIG. 10, if the mean of the absolute gray-value differences of adjacent original pixels within the box is used as the gradient value of the 45° direction, the above G45° must further be divided by 7.
S412: take the sum or the mean of the absolute values of the gray-value differences of every two adjacent original pixels within the selected range along the second edge direction L2 as the gradient value of L2.
Exemplarily, L2 is a line at 135° to the horizontal direction, the n×n neighborhood is a 4×4 neighborhood, and the selected range is the rectangular box shown in FIG. 11. Taking the sum of the absolute gray-value differences of every two adjacent original pixels within the box along the 135° direction as the gradient value of the 135° direction:
[Equation image not preserved: G135° is the sum of the absolute gray-value differences of the 7 pairs of adjacent original pixels lying on the three 135° diagonals inside the box of FIG. 11.]
It should be noted that, when the mean of the absolute gray-value differences within the selected range along L2 is used as the gradient value of L2, the sum of those absolute differences must additionally be divided by the number of absolute differences.
For example, with L2 at 135° to the horizontal direction, a 4×4 neighborhood, and the selected range being the box of FIG. 11, if the mean is used as the gradient value of the 135° direction, the above G135° must further be divided by 7.
S413: take the sum or the mean of the absolute values of the gray-value differences of every two adjacent original pixels within the selected range along the third edge direction L3 as the gradient value of L3.
Exemplarily, L3 is a line at 0° to the horizontal direction, the n×n neighborhood is a 4×4 neighborhood, and the selected range is the rectangular box shown in FIG. 12. Taking the sum of the absolute gray-value differences of every two adjacent original pixels within the box along the 0° direction as the gradient value of the 0° direction:
[Equation image not preserved: G0° is the sum of the absolute gray-value differences of the 6 pairs of horizontally adjacent original pixels inside the box of FIG. 12.]
It should be noted that, when the mean of the absolute gray-value differences within the selected range along L3 is used as the gradient value of L3, the sum of those absolute differences must additionally be divided by the number of absolute differences.
For example, with L3 at 0° to the horizontal direction, a 4×4 neighborhood, and the selected range being the box of FIG. 12, if the mean is used as the gradient value of the 0° direction, the above G0° must further be divided by 6.
S414: take the sum or the mean of the absolute values of the gray-value differences of every two adjacent original pixels within the selected range along the fourth edge direction L4 as the gradient value of L4.
Exemplarily, L4 is a line at 90° to the horizontal direction, the n×n neighborhood is a 4×4 neighborhood, and the selected range is the rectangular box shown in FIG. 13. Taking the sum of the absolute gray-value differences of every two adjacent original pixels within the box along the 90° direction as the gradient value of the 90° direction:
[Equation image not preserved: G90° is the sum of the absolute gray-value differences of the 6 pairs of vertically adjacent original pixels inside the box of FIG. 13.]
It should be noted that, when the mean of the absolute gray-value differences within the selected range along L4 is used as the gradient value of L4, the sum of those absolute differences must additionally be divided by the number of absolute differences.
For example, with L4 at 90° to the horizontal direction, a 4×4 neighborhood, and the selected range being the box of FIG. 13, if the mean is used as the gradient value of the 90° direction, the above G90° must further be divided by 6.
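The four sums can be sketched in Python as below. The exact pixel groups boxed in FIGS. 10 to 13 are not reproduced here, so the choice of the three central diagonals and the two middle rows/columns is an assumption; it does match the pair counts the text gives (7, 7, 6, and 6):

    def directional_gradients(patch):
        # patch[r][c]: 4x4 gray values; returns (G45, G135, G0, G90).
        d = lambda a, b: abs(int(a) - int(b))
        # 45 degrees: three bottom-left-to-top-right diagonals (3+4+3 pixels, 7 pairs)
        g45 = (d(patch[2][0], patch[1][1]) + d(patch[1][1], patch[0][2]) +
               d(patch[3][0], patch[2][1]) + d(patch[2][1], patch[1][2]) + d(patch[1][2], patch[0][3]) +
               d(patch[3][1], patch[2][2]) + d(patch[2][2], patch[1][3]))
        # 135 degrees: three top-left-to-bottom-right diagonals (7 pairs)
        g135 = (d(patch[0][1], patch[1][2]) + d(patch[1][2], patch[2][3]) +
                d(patch[0][0], patch[1][1]) + d(patch[1][1], patch[2][2]) + d(patch[2][2], patch[3][3]) +
                d(patch[1][0], patch[2][1]) + d(patch[2][1], patch[3][2]))
        # 0 degrees: the two middle rows (6 pairs)
        g0 = sum(d(patch[r][c], patch[r][c + 1]) for r in (1, 2) for c in range(3))
        # 90 degrees: the two middle columns (6 pairs)
        g90 = sum(d(patch[r][c], patch[r + 1][c]) for c in (1, 2) for r in range(3))
        return g45, g135, g0, g90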
It should be noted that S411, S412, S413, and S414 may be performed in any order: for example S411, S412, S413, S414 in sequence, or S412, S411, S413, S414; the present disclosure is of course not limited thereto.
On this basis, whether a strong edge direction exists among the first edge direction L1, the second edge direction L2, the third edge direction L3, and the fourth edge direction L4 is determined from their gradient values. As shown in FIG. 4, S50 includes S510 to S513.
S510: determine the ratio of the larger to the smaller of the gradient values of the first edge direction L1 and the second edge direction L2 as α1, the ratio of the larger to the smaller of the gradient values of the third edge direction L3 and the fourth edge direction L4 as α2, and the preset ratio threshold as T.
That is:
α1 = Max(G1, G2) / Min(G1, G2);
α2 = Max(G3, G4) / Min(G3, G4);
where G1 is the gradient value of the first edge direction L1, G2 that of the second edge direction L2, G3 that of the third edge direction L3, and G4 that of the fourth edge direction L4.
Exemplarily, when the sum of absolute differences is used as the gradient value of L1, L1 is a line at 45° to the horizontal direction, the n×n neighborhood is a 4×4 neighborhood, and the selected range is the box of FIG. 10, G1 = G45°.
Exemplarily, when the sum of absolute differences is used as the gradient value of L2, L2 is a line at 135° to the horizontal direction, the n×n neighborhood is a 4×4 neighborhood, and the selected range is the box of FIG. 11, G2 = G135°.
Exemplarily, when the sum of absolute differences is used as the gradient value of L3, L3 is a line at 0° to the horizontal direction, the n×n neighborhood is a 4×4 neighborhood, and the selected range is the box of FIG. 12, G3 = G0°.
Exemplarily, when the sum of absolute differences is used as the gradient value of L4, L4 is a line at 90° to the horizontal direction, the n×n neighborhood is a 4×4 neighborhood, and the selected range is the box of FIG. 13, G4 = G90°.
In some embodiments, the preset ratio threshold ranges from 1.2 to 1.3; exemplarily, it is 1.2, 1.25, 1.26, 1.27, 1.28, 1.29, or 1.3.
It should be noted that, when the sum of the absolute gray-value differences of every two adjacent original pixels within the selected range along an edge direction is used as the gradient value of that edge direction, in determining the ratio of the larger to the smaller of two gradient values, the selected ranges used to compute those two gradient values contain equal numbers of original pixels along their respective edge directions.
For example, as shown in FIG. 10, the selected range of L1 includes 10 original pixels; as shown in FIG. 11, that of L2 includes 10 original pixels; as shown in FIG. 12, that of L3 includes 8 original pixels; and as shown in FIG. 13, that of L4 includes 8 original pixels.
After α1, α2, and T are obtained, their magnitudes are compared.
If α1 > α2 > T, the texture type of the n×n neighborhood is determined to be the edge texture shown at 8A in FIG. 8, with the first edge direction L1 or the second edge direction L2 being the strong edge direction; S511 is performed.
If α2 > α1 > T, the texture type of the n×n neighborhood is determined to be the edge texture shown at 8A in FIG. 8, with the third edge direction L3 or the fourth edge direction L4 being the strong edge direction; S512 is performed.
If α1 ≤ T and/or α2 ≤ T, the texture type of the n×n neighborhood is determined to be the smooth texture shown at 8B in FIG. 8; S513 is performed.
In comparing α1, α2, and T, the three may be compared directly, or α1 and α2 may be compared first and the smaller of the two then compared with T; other approaches may of course also be used, and the present disclosure is not limited thereto.
S511: determine that a strong edge direction exists, and take whichever of L1 and L2 has the larger gradient value as the strong edge direction.
Exemplarily, if G1 > G2, L1 is determined to be the strong edge direction; if G2 > G1, L2 is.
S512: determine that a strong edge direction exists, and take whichever of L3 and L4 has the larger gradient value as the strong edge direction.
Exemplarily, if G3 > G4, L3 is determined to be the strong edge direction; if G4 > G3, L4 is.
S513: determine that no strong edge direction exists, and calculate the pixel value of the interpolated pixel by the second interpolation algorithm based on all original pixels in the n×n neighborhood.
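S510 to S513 amount to the following decision, sketched in Python (the zero-division guard and the handling of the untreated tie α1 = α2 are additions of this sketch, not of the original text):

    def strong_edge_direction(g1, g2, g3, g4, t=1.25):
        # g1/g2: diagonal gradients; g3/g4: row/column gradients.
        a1 = max(g1, g2) / max(min(g1, g2), 1e-9)
        a2 = max(g3, g4) / max(min(g3, g4), 1e-9)
        if a1 > a2 > t:
            return 1 if g1 > g2 else 2     # strong edge along L1 or L2
        if a2 > a1 > t:
            return 3 if g3 > g4 else 4     # strong edge along L3 or L4
        return None                        # smooth texture: no strong edge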
As mentioned above, the number of edge directions used to judge the texture type is not limited; it may be, for example, two, three, or four.
In some embodiments, two edge directions are used to judge the texture type: a first edge direction L1 and a second edge direction L2.
In some embodiments, as shown in FIG. 7, the first edge direction L1 is substantially perpendicular to the second edge direction L2. Exemplarily, the first edge direction L1 is substantially parallel to one of the two diagonals of the rectangle defined by the n×n neighborhood, and the second edge direction L2 is substantially parallel to the other; the rectangle defined by the n×n neighborhood is the rectangle enclosed by the lines connecting the outermost ring of pixels of the neighborhood.
For example, the first edge direction L1 is a line at 45° to the horizontal direction, and the second edge direction L2 is a line at 135° to the horizontal direction.
It should be noted that other directions may also be chosen for L1 and L2; for example, L1 at 30° and L2 at 120°.
When the at least two edge directions include L1 and L2, in determining whether a strong edge direction exists between them, S50 includes S520 to S522, as shown in FIG. 5.
S520: determine the ratio of the larger to the smaller of the gradient values of L1 and L2 as α, and the preset ratio threshold as T.
In some embodiments, the preset ratio threshold ranges from 1.2 to 1.3; exemplarily, it is 1.2, 1.25, 1.26, 1.27, 1.28, 1.29, or 1.3.
It should be noted that, when the sum of the absolute gray-value differences of every two adjacent original pixels within the selected range along an edge direction is used as the gradient value of that edge direction, in determining the ratio of the larger to the smaller of two gradient values, the selected ranges used to compute those two gradient values contain equal numbers of original pixels along their respective edge directions.
For example, as shown in FIG. 10, the selected range of L1 includes 10 original pixels; as shown in FIG. 11, that of L2 includes 10 original pixels.
In addition, when the at least two edge directions include L1 and L2, the computation of their gradient values is similar to the case where the at least two edge directions include L1, L2, L3, and L4, and is not repeated here.
After α and T are obtained, they are compared.
If α > T, S521 is performed.
If α ≤ T, S522 is performed.
S521: determine that a strong edge direction exists, and take whichever of L1 and L2 has the larger gradient value as the strong edge direction.
S522: determine that no strong edge direction exists, and calculate the pixel value of the interpolated pixel by the second interpolation algorithm based on all original pixels in the n×n neighborhood.
As mentioned above, the number of edge directions used to judge the texture type is not limited; it may be, for example, two, three, or four.
In some embodiments, three edge directions are used to judge the texture type: a first edge direction L1, a second edge direction L2, and a third edge direction L3.
In some embodiments, as shown in FIG. 7, the first edge direction L1 is substantially perpendicular to the second edge direction L2. Exemplarily, the first edge direction L1 is substantially parallel to one of the two diagonals of the rectangle defined by the n×n neighborhood, and the second edge direction L2 is substantially parallel to the other; the rectangle defined by the n×n neighborhood is the rectangle enclosed by the lines connecting the outermost ring of pixels of the neighborhood.
For example, the first edge direction L1 is a line at 45° to the horizontal direction, and the second edge direction L2 is a line at 135° to the horizontal direction.
In some embodiments, as shown in FIG. 7, the angle between the first edge direction L1 and the third edge direction L3 is substantially 45°. Exemplarily, the third edge direction L3 is substantially parallel to the row direction in which the pixels of the n×n neighborhood are arranged.
For example, the third edge direction L3 is a line at 0° to the horizontal direction.
It should be noted that other directions may also be chosen for L1, L2, and L3; for example, L1 at 60°, L2 at 120°, and L3 at 0°.
When the at least two edge directions include L1, L2, and L3, in determining whether a strong edge direction exists among them, S50 includes S530 to S533, as shown in FIG. 6.
S530: determine the ratio of the larger to the smaller of the gradient values of L1 and L2 as α3, the ratio of the larger to the smaller of the gradient values of L2 and L3 as α4, and the preset ratio threshold as T.
Likewise, the preset ratio threshold ranges from 1.2 to 1.3; exemplarily, it is 1.2, 1.25, 1.26, 1.27, 1.28, 1.29, or 1.3.
It should be noted that, when the sum of the absolute gray-value differences of every two adjacent original pixels within the selected range along an edge direction is used as the gradient value of that edge direction, in determining the ratio of the larger to the smaller of two gradient values, the selected ranges used to compute those two gradient values contain equal numbers of original pixels along their respective edge directions.
For example, the selected ranges of L1, L2, and L3 each include 10 original pixels.
In addition, when the at least two edge directions include L1, L2, and L3, the computation of their gradient values is similar to the case where the at least two edge directions include L1, L2, L3, and L4, and is not repeated here.
After α3, α4, and T are obtained, their magnitudes are compared.
If α3 > α4 > T, S531 is performed.
If α4 > α3 > T, S532 is performed.
If α3 ≤ T and/or α4 ≤ T, S533 is performed.
S531: determine that a strong edge direction exists, and take whichever of L1 and L2 has the larger gradient value as the strong edge direction.
S532: determine that a strong edge direction exists, and take whichever of L2 and L3 has the larger gradient value as the strong edge direction.
S533: determine that no strong edge direction exists, and calculate the pixel value of the interpolated pixel by the second interpolation algorithm based on all original pixels in the n×n neighborhood.
As shown in FIG. 14, after the original image on the left is resolution-enhanced by the image processing method of some embodiments of the present disclosure, the sharpness of the resulting target image on the right is clearly improved. The partial enlarged view in FIG. 15 of the boxed image region in FIG. 14 shows the sharpness improvement more intuitively.
In summary, in the image processing method provided by some embodiments of the present disclosure, during image resolution enhancement the texture type is determined through the determination of the edge texture direction combined with the computation of the two-dimensional image entropy, and different interpolation algorithms are used for different texture types to calculate the pixel values of the interpolated pixels, thereby enhancing the image resolution and improving image sharpness.
In addition, since the position coordinates of an interpolated pixel in the target image are mapped into the original image by the mapping formulas, the original image can be resolution-enhanced at an arbitrary ratio (e.g. an integer, fractional, odd, or even ratio).
Some embodiments of the present disclosure provide an image processing method; as shown in FIG. 16, the image processing method includes S100 to S600.
S100: determine, according to the position coordinates in the target image of any interpolated pixel to be processed on the target image, the position coordinates of the interpolated pixel in the original image. The target image is an image obtained by performing resolution enhancement on the original image, and the interpolated pixel is a pixel generated during the resolution enhancement.
Here, for S100, reference may be made to S10 of the image processing method of some embodiments above, which is not repeated here.
S200: separately calculate, according to the position coordinates of the interpolated pixel in the original image, the gradient values of at least two edge directions in the n×n neighborhood of the interpolated pixel in the original image, where n ≥ 2 and n is a positive integer.
Here, for S200, reference may be made to S40 of the image processing method of some embodiments above, which is not repeated here.
After the gradient value of each edge direction is obtained, judge, according to the gradient values of the at least two edge directions, whether a strong edge direction exists among them; if so, perform S300; if not, perform S400.
S300: calculate the pixel value of the interpolated pixel by the first interpolation algorithm based on a plurality of original pixels along the strong edge direction in the n×n neighborhood.
Here, for S300, reference may be made to S50 of the image processing method of some embodiments above, which is not repeated here.
S400: calculate the two-dimensional image entropy of the interpolated pixel over the n×n neighborhood.
Here, for S400, reference may be made to S20 of the image processing method of some embodiments above, which is not repeated here.
After the two-dimensional image entropy is obtained, judge whether it is greater than or equal to the preset entropy threshold. If so, perform S500; if it is less than the preset entropy threshold, perform S600.
S500: calculate the pixel value of the interpolated pixel by the first interpolation algorithm based on all original pixels in the n×n neighborhood.
Here, for S500, reference may be made to S30 of the image processing method of some embodiments above, which is not repeated here.
S600: calculate the pixel value of the interpolated pixel by the second interpolation algorithm based on all original pixels in the n×n neighborhood.
Here, for S600, reference may be made to S60 of the image processing method of some embodiments above, which is not repeated here.
In this image processing method, it is first judged whether a strong edge direction exists among the at least two edge directions. If one exists, the pixel value of the interpolated pixel is calculated by the first interpolation algorithm; if not, the two-dimensional image entropy is calculated and compared with the preset entropy threshold, and the first or the second interpolation algorithm is chosen accordingly to calculate the pixel value of the interpolated pixel.
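Drawing the earlier sketches together, the decision flow of FIG. 16 can be outlined in Python as follows; edge_directed_interpolate and bspline_interpolate are hypothetical helpers standing in for S300 and S600:

    def interpolate_pixel(patch, fx, fy, entropy_th=0.6, ratio_th=1.25):
        # Edge test first (S200/S300), entropy test second (S400-S600).
        g1, g2, g3, g4 = directional_gradients(patch)
        direction = strong_edge_direction(g1, g2, g3, g4, ratio_th)
        if direction is not None:
            return edge_directed_interpolate(patch, fx, fy, direction)  # hypothetical helper
        if neighborhood_entropy(patch) >= entropy_th:
            return bicubic_interpolate(patch, fx, fy)
        return bspline_interpolate(patch, fx, fy)                       # hypothetical helper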
The beneficial effects of the image processing method of some embodiments of the present disclosure are the same as those of the image processing method of any of the above embodiments and are not repeated here.
The above mainly introduces the image processing methods provided by the embodiments of the present disclosure. Some embodiments of the present disclosure also provide an image processing apparatus implementing the above image processing methods, which is introduced below by way of example.
In some embodiments, as shown in FIGS. 17 and 18, the image processing apparatus 1 may include a coordinate mapping component 11 (21), a two-dimensional image entropy calculation component 12 (23), a strong edge direction judgment component 13 (22), a first pixel value calculation component 14 (24), and a second pixel value calculation component 15 (25).
The coordinate mapping component 11 (21) is configured to determine, according to the position coordinates in the target image of any interpolated pixel to be processed on the target image, the position coordinates of the interpolated pixel in the original image. The target image is an image obtained by performing resolution enhancement on the original image, and the interpolated pixel is a pixel generated during the resolution enhancement.
In some embodiments, as shown in FIG. 17, the coordinate mapping component 11 may be configured to perform the above S10; for its specific working process, reference may be made to the corresponding process of S10 in the foregoing method embodiments, which is not repeated here.
In some embodiments, as shown in FIG. 18, the coordinate mapping component 21 may be configured to perform the above S100; for its specific working process, reference may be made to the corresponding process of S100 in the foregoing method embodiments, which is not repeated here.
The two-dimensional image entropy calculation component 12 (23) is configured to calculate, according to the position coordinates of the interpolated pixel in the original image, the two-dimensional image entropy of the n×n neighborhood of the interpolated pixel in the original image, where n ≥ 2 and n is a positive integer.
In some embodiments, as shown in FIG. 17, the two-dimensional image entropy calculation component 12 may be configured to perform the above S20; for its specific working process, reference may be made to the corresponding process of S20 in the foregoing method embodiments, which is not repeated here.
In some embodiments, as shown in FIG. 18, the two-dimensional image entropy calculation component 23 may be configured to perform the above S400; for its specific working process, reference may be made to the corresponding process of S400 in the foregoing method embodiments, which is not repeated here.
The strong edge direction judgment component 13 (22) is configured to calculate, according to the position coordinates of the interpolated pixel in the original image, the gradient values of at least two edge directions in the n×n neighborhood of the interpolated pixel in the original image, and to judge, according to those gradient values, whether a strong edge direction exists among the at least two edge directions.
In some embodiments, as shown in FIG. 17, the strong edge direction judgment component 13 may be configured to perform the above S40; for its specific working process, reference may be made to the corresponding process of S40 in the foregoing method embodiments, which is not repeated here.
In some embodiments, as shown in FIG. 18, the strong edge direction judgment component 22 may be configured to perform the above S200; for its specific working process, reference may be made to the corresponding process of S200 in the foregoing method embodiments, which is not repeated here.
The first pixel value calculation component 14 (24) is configured to: when a strong edge direction exists among the at least two edge directions, calculate the pixel value of the interpolated pixel by the first interpolation algorithm based on a plurality of original pixels along the strong edge direction in the n×n neighborhood; and, when the two-dimensional image entropy is greater than or equal to the preset entropy threshold, calculate the pixel value of the interpolated pixel by the first interpolation algorithm based on all original pixels in the n×n neighborhood.
In some embodiments, as shown in FIG. 17, the first pixel value calculation component 14 may be configured to perform the above S30 and S50; for its specific working process, reference may be made to the corresponding processes of S30 and S50 in the foregoing method embodiments, which are not repeated here.
In some embodiments, as shown in FIG. 18, the first pixel value calculation component 24 may be configured to perform the above S300 and S500; for its specific working process, reference may be made to the corresponding processes of S300 and S500 in the foregoing method embodiments, which are not repeated here.
The second pixel value calculation component 15 (25) is configured to, when the two-dimensional image entropy is less than the preset entropy threshold and no strong edge direction exists among the at least two edge directions, calculate the pixel value of the interpolated pixel by the second interpolation algorithm based on all original pixels in the n×n neighborhood.
In some embodiments, as shown in FIG. 17, the second pixel value calculation component 15 may be configured to perform the above S60; for its specific working process, reference may be made to the corresponding process of S60 in the foregoing method embodiments, which is not repeated here.
In some embodiments, as shown in FIG. 18, the second pixel value calculation component 25 may be configured to perform the above S600; for its specific working process, reference may be made to the corresponding process of S600 in the foregoing method embodiments, which is not repeated here.
The beneficial effects of the image processing apparatus 1 of some embodiments of the present disclosure are the same as those of the image processing method of any of the above embodiments and are not repeated here.
Some embodiments of the present disclosure also provide an electronic device 3. As shown in FIG. 19, the electronic device 3 includes a processor 31 and a memory 32; the memory 32 stores computer program instructions suitable for execution by the processor 31, and when the computer program instructions are run by the processor 31, the image processing method according to any of the above embodiments is performed.
The processor 31 is used to support the electronic device 3 in executing one or more steps of the above image processing method.
The processor 31 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 32 is used to store the program code and data of the electronic device 3 provided by the embodiments of the present disclosure. The processor 31 can perform the various functions of the electronic device 3 by running or executing the software programs stored in the memory 32 and calling the data stored in the memory 32.
The memory 32 may be a read-only memory (ROM) or another type of static storage device capable of storing static information and instructions, a random access memory (RAM) or another type of dynamic storage device capable of storing information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including compact discs, laser discs, digital versatile discs, Blu-ray discs, etc.), a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto. The memory may exist independently and be connected to the processor through a communication bus, or the memory 32 may be integrated with the processor 31.
Some embodiments of the present disclosure also provide a computer-readable storage medium (for example, a non-transitory computer-readable storage medium) storing computer program instructions that, when run on a computer, cause the computer to perform the image processing method according to any of the above embodiments.
Exemplarily, the above computer-readable storage medium may include, but is not limited to: magnetic storage devices (e.g. hard disks, floppy disks, or tapes), optical discs (e.g. CDs (Compact Discs) and DVDs (Digital Versatile Discs)), smart cards, and flash memory devices (e.g. EPROM (Erasable Programmable Read-Only Memory), cards, sticks, or key drives).
The various computer-readable storage media described in the present disclosure may represent one or more devices and/or other machine-readable storage media for storing information. The term "machine-readable storage medium" may include, but is not limited to, wireless channels and various other media capable of storing, containing, and/or carrying instructions and/or data.
Some embodiments of the present disclosure also provide a computer program product. The computer program product includes computer program instructions that, when executed on a computer, cause the computer to perform the image processing method according to any of the above embodiments.
Some embodiments of the present disclosure also provide a computer program. When executed on a computer, the computer program causes the computer to perform the image processing method according to any of the above embodiments.
The beneficial effects of the above electronic device, computer-readable storage medium, computer program product, and computer program are the same as those of the image processing method of some embodiments above and are not repeated here.
A person of ordinary skill in the art may realize that the components and method steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two. To clearly illustrate the interchangeability of hardware and software, the composition and steps of the examples have been described above generally in terms of function. Whether these functions are executed in hardware or software depends on the specific application and design constraints of the technical solution. Skilled artisans may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of the present invention.
In the embodiments provided in the present disclosure, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; the division of components is only a division of logical functions, and there may be other divisions in actual implementation. For example, multiple components may be combined or integrated into another apparatus, or some features may be ignored or not implemented.
Components described as separate may or may not be physically separate, and a component or components may or may not be physical units. Some or all of the components may be selected according to actual needs to achieve the objectives of the solutions of the embodiments of the present disclosure.
In addition, the functional components in some embodiments of the present disclosure may be integrated into one processing unit, each component may exist alone physically, or two or more components may be integrated into one unit. The above integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the above integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present disclosure, in essence, or the part contributing to the prior art, or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or part of the steps of the methods described in the various embodiments of the present disclosure.
The above are only specific implementations of the present disclosure, but the protection scope of the present disclosure is not limited thereto. Any changes or substitutions conceivable to a person skilled in the art within the technical scope disclosed by the present disclosure shall be covered by the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (19)

  1. An image processing method, comprising:
    determining, according to the position coordinates in a target image of any interpolated pixel to be processed on the target image, the position coordinates of the interpolated pixel in an original image, wherein the target image is an image obtained by performing resolution enhancement on the original image, and the interpolated pixel is a pixel generated during the resolution enhancement;
    calculating, according to the position coordinates of the interpolated pixel in the original image, the two-dimensional image entropy of an n×n neighborhood of the interpolated pixel in the original image, wherein n ≥ 2 and n is a positive integer;
    in a case where the two-dimensional image entropy is greater than or equal to a preset entropy threshold, calculating the pixel value of the interpolated pixel by a first interpolation algorithm based on all original pixels in the n×n neighborhood;
    in a case where the two-dimensional image entropy is less than the preset entropy threshold, separately calculating gradient values of at least two edge directions in the n×n neighborhood, and judging, according to the gradient values of the at least two edge directions, whether a strong edge direction exists among the at least two edge directions;
    if so, calculating the pixel value of the interpolated pixel by the first interpolation algorithm based on a plurality of original pixels along the strong edge direction in the n×n neighborhood;
    if not, calculating the pixel value of the interpolated pixel by a second interpolation algorithm based on all original pixels in the n×n neighborhood.
  2. The image processing method according to claim 1, wherein separately calculating the gradient values of the at least two edge directions in the n×n neighborhood comprises:
    separately obtaining the gray values of a plurality of original pixels within a selected range along each edge direction in the n×n neighborhood;
    calculating the absolute value of the gray-value difference of every two adjacent original pixels within the selected range along each edge direction; and
    taking the sum or the mean of the absolute values of the gray-value differences of every two adjacent original pixels within the selected range along each edge direction as the gradient value of that edge direction.
  3. The image processing method according to claim 1 or 2, wherein the at least two edge directions comprise a first edge direction, a second edge direction, a third edge direction, and a fourth edge direction, and separately calculating the gradient values of the at least two edge directions in the n×n neighborhood comprises:
    taking the sum or the mean of the absolute values of the gray-value differences of every two adjacent original pixels within a selected range along the first edge direction as the gradient value of the first edge direction;
    taking the sum or the mean of the absolute values of the gray-value differences of every two adjacent original pixels within a selected range along the second edge direction as the gradient value of the second edge direction;
    taking the sum or the mean of the absolute values of the gray-value differences of every two adjacent original pixels within a selected range along the third edge direction as the gradient value of the third edge direction; and
    taking the sum or the mean of the absolute values of the gray-value differences of every two adjacent original pixels within a selected range along the fourth edge direction as the gradient value of the fourth edge direction.
  4. The image processing method according to claim 3, wherein judging, according to the gradient values of the at least two edge directions, whether a strong edge direction exists among the at least two edge directions comprises:
    determining the ratio of the larger to the smaller of the gradient values of the first edge direction and the second edge direction as α1, the ratio of the larger to the smaller of the gradient values of the third edge direction and the fourth edge direction as α2, and a preset ratio threshold as T;
    if α1 > α2 > T, determining that a strong edge direction exists, and taking whichever of the first and second edge directions has the larger gradient value as the strong edge direction;
    if α2 > α1 > T, determining that a strong edge direction exists, and taking whichever of the third and fourth edge directions has the larger gradient value as the strong edge direction;
    if α1 ≤ T and/or α2 ≤ T, determining that no strong edge direction exists.
  5. The image processing method according to claim 1 or 2, wherein the at least two edge directions comprise a first edge direction and a second edge direction;
    judging, according to the gradient values of the at least two edge directions, whether a strong edge direction exists among the at least two edge directions comprises:
    determining the ratio of the larger to the smaller of the gradient values of the first and second edge directions as α, and a preset ratio threshold as T;
    if α > T, determining that a strong edge direction exists, and taking whichever of the first and second edge directions has the larger gradient value as the strong edge direction;
    if α ≤ T, determining that no strong edge direction exists.
  6. The image processing method according to claim 1 or 2, wherein the at least two edge directions comprise a first edge direction, a second edge direction, and a third edge direction;
    judging, according to the gradient values of the at least two edge directions, whether a strong edge direction exists among the at least two edge directions comprises:
    determining the ratio of the larger to the smaller of the gradient values of the first and second edge directions as α3, the ratio of the larger to the smaller of the gradient values of the second and third edge directions as α4, and a preset ratio threshold as T;
    if α3 > α4 > T, determining that a strong edge direction exists, and taking whichever of the first and second edge directions has the larger gradient value as the strong edge direction;
    if α4 > α3 > T, determining that a strong edge direction exists, and taking whichever of the second and third edge directions has the larger gradient value as the strong edge direction;
    if α3 ≤ T and/or α4 ≤ T, determining that no strong edge direction exists.
  7. The image processing method according to any one of claims 4 to 6, wherein, when the sum of the absolute values of the gray-value differences of every two adjacent original pixels within the selected range along an edge direction is used as the gradient value of that edge direction, in determining the ratio of the larger to the smaller of two gradient values, the selected ranges used to compute those two gradient values contain equal numbers of original pixels along their respective edge directions.
  8. The image processing method according to any one of claims 4 to 7, wherein the preset ratio threshold T ranges from 1.2 to 1.3.
  9. The image processing method according to any one of claims 1 to 8, wherein the at least two edge directions comprise a first edge direction and a second edge direction, the first edge direction being substantially perpendicular to the second edge direction.
  10. The image processing method according to claim 9, wherein the at least two edge directions further comprise a third edge direction and a fourth edge direction, the third edge direction being substantially perpendicular to the fourth edge direction, and the angle between the first edge direction and the third edge direction being substantially 45°.
  11. The image processing method according to claim 10, wherein the first edge direction is substantially parallel to one of the two diagonals of the rectangle defined by the n×n neighborhood, and the second edge direction is substantially parallel to the other; the third edge direction is substantially parallel to the row direction in which the pixels of the n×n neighborhood are arranged, and the fourth edge direction is substantially parallel to the column direction.
  12. The image processing method according to any one of claims 1 to 11, wherein the preset entropy threshold ranges from 0.3 to 0.8.
  13. The image processing method according to any one of claims 1 to 12, wherein the n×n neighborhood is a 4×4 neighborhood.
  14. The image processing method according to any one of claims 1 to 13, wherein the first interpolation algorithm is a bicubic convolution interpolation algorithm, and/or the second interpolation algorithm is a B-spline interpolation algorithm.
  15. An image processing method, comprising:
    determining, according to the position coordinates in a target image of any interpolated pixel to be processed on the target image, the position coordinates of the interpolated pixel in an original image, wherein the target image is an image obtained by performing resolution enhancement on the original image, and the interpolated pixel is a pixel generated during the resolution enhancement;
    separately calculating, according to the position coordinates of the interpolated pixel in the original image, gradient values of at least two edge directions in an n×n neighborhood of the interpolated pixel in the original image, and judging, according to the gradient values of the at least two edge directions, whether a strong edge direction exists among the at least two edge directions, wherein n ≥ 2 and n is a positive integer;
    if so, calculating the pixel value of the interpolated pixel by the first interpolation algorithm based on a plurality of original pixels along the strong edge direction in the n×n neighborhood;
    if not, calculating the two-dimensional image entropy of the interpolated pixel over the n×n neighborhood;
    in a case where the two-dimensional image entropy is greater than or equal to a preset entropy threshold, calculating the pixel value of the interpolated pixel by a first interpolation algorithm based on all original pixels in the n×n neighborhood;
    in a case where the two-dimensional image entropy is less than the preset entropy threshold, calculating the pixel value of the interpolated pixel by a second interpolation algorithm based on all original pixels in the n×n neighborhood.
  16. An image processing apparatus, comprising:
    a coordinate mapping component configured to determine, according to the position coordinates in a target image of any interpolated pixel to be processed on the target image, the position coordinates of the interpolated pixel in an original image, wherein the target image is an image obtained by performing resolution enhancement on the original image, and the interpolated pixel is a pixel generated during the resolution enhancement;
    a two-dimensional image entropy calculation component configured to calculate, according to the position coordinates of the interpolated pixel in the original image, the two-dimensional image entropy of an n×n neighborhood of the interpolated pixel in the original image, wherein n ≥ 2 and n is a positive integer;
    a strong edge direction judgment component configured to calculate, according to the position coordinates of the interpolated pixel in the original image, gradient values of at least two edge directions in the n×n neighborhood of the interpolated pixel in the original image, and to judge, according to the gradient values of the at least two edge directions, whether a strong edge direction exists among the at least two edge directions;
    a first pixel value calculation component configured to, in a case where a strong edge direction exists among the at least two edge directions, calculate the pixel value of the interpolated pixel by the first interpolation algorithm based on a plurality of original pixels along the strong edge direction in the n×n neighborhood, and, in a case where the two-dimensional image entropy is greater than or equal to a preset entropy threshold, calculate the pixel value of the interpolated pixel by the first interpolation algorithm based on all original pixels in the n×n neighborhood; and
    a second pixel value calculation component configured to, in a case where the two-dimensional image entropy is less than the preset entropy threshold and no strong edge direction exists among the at least two edge directions, calculate the pixel value of the interpolated pixel by a second interpolation algorithm based on all original pixels in the n×n neighborhood.
  17. An electronic device, comprising a processor and a memory, wherein the memory stores computer program instructions suitable for execution by the processor, and when the computer program instructions are run by the processor, the image processing method according to any one of claims 1 to 15 is performed.
  18. A computer-readable storage medium storing computer program instructions, wherein, when run on a computer, the computer program instructions cause the computer to perform the image processing method according to any one of claims 1 to 15.
  19. A computer program product comprising computer program instructions, wherein, when the computer program instructions are executed on a computer, they cause the computer to perform the image processing method according to any one of claims 1 to 15.
PCT/CN2021/100328 2020-06-17 2021-06-16 Image processing method and apparatus, electronic device, and computer-readable storage medium WO2021254381A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/793,487 US20230082346A1 (en) 2020-06-17 2021-06-16 Image processing methods, electronic devices, and non-transitory computer-readable storage media

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010553063.5 2020-06-17
CN202010553063.5A CN113808012B (zh) Image processing method, computer device, and computer-readable storage medium

Publications (1)

Publication Number Publication Date
WO2021254381A1 true WO2021254381A1 (zh) 2021-12-23

Family

ID=78892638

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/100328 WO2021254381A1 (zh) 2020-06-17 2021-06-16 图像处理方法及装置、电子设备、计算机可读存储介质

Country Status (3)

Country Link
US (1) US20230082346A1 (zh)
CN (1) CN113808012B (zh)
WO (1) WO2021254381A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115496751A (zh) * 2022-11-16 2022-12-20 威海捷诺曼自动化股份有限公司 Winding detection method for a fiber winding machine

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105447819A (zh) * 2015-12-04 2016-03-30 腾讯科技（深圳）有限公司 Image processing method and apparatus
US20170053380A1 (en) * 2015-08-17 2017-02-23 Flir Systems, Inc. Edge guided interpolation and sharpening
CN108805806A (zh) * 2017-04-28 2018-11-13 华为技术有限公司 Image processing method and apparatus
CN110349090A (zh) * 2019-07-16 2019-10-18 合肥工业大学 Image scaling method based on Newton second-order interpolation
CN110738625A (zh) * 2019-10-21 2020-01-31 Oppo广东移动通信有限公司 Image resampling method, apparatus, terminal, and computer-readable storage medium

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8085850B2 (en) * 2003-04-24 2011-12-27 Zador Andrew M Methods and apparatus for efficient encoding of image edges, motion, velocity, and detail
JP4701144B2 (ja) * 2006-09-26 2011-06-15 富士通株式会社 Image processing apparatus, image processing method, and image processing program
WO2011096091A1 (ja) * 2010-02-08 2011-08-11 Xu Weigang Image compression device, image decompression device, image compression method, image decompression method, and recording medium
KR101854611B1 (ko) * 2016-12-05 2018-05-04 인천대학교 산학협력단 Image processing method based on weight assignment using entropy
CN108364254B (zh) * 2018-03-20 2021-07-23 北京奇虎科技有限公司 Image processing method and apparatus, and electronic device
CN110555794B (zh) * 2018-05-31 2021-07-23 北京市商汤科技开发有限公司 Image processing method and apparatus, electronic device, and storage medium
CN109903224B (zh) * 2019-01-25 2023-03-31 珠海市杰理科技股份有限公司 Image scaling method and apparatus, computer device, and storage medium
CN111080532B (zh) * 2019-10-16 2024-07-19 北京理工大学深圳研究院 Remote sensing image super-resolution restoration method based on ideal-edge extrapolation

Also Published As

Publication number Publication date
CN113808012A (zh) 2021-12-17
CN113808012B (zh) 2024-07-12
US20230082346A1 (en) 2023-03-16

Similar Documents

Publication Publication Date Title
JP4295340B2 (ja) Enlargement and pinching of two-dimensional images
CN110827229B (zh) Infrared image enhancement method based on texture-weighted histogram equalization
JP7175197B2 (ja) Image processing method and apparatus, storage medium, and computer device
CN110503704B (zh) Method and apparatus for constructing a trimap, and electronic device
WO2020186385A1 (zh) Image processing method, electronic device, and computer-readable storage medium
CN111583381B (zh) Rendering method and apparatus for game resource maps, and electronic device
WO2021254381A1 (zh) Image processing method and apparatus, electronic device, and computer-readable storage medium
CN111968033A (zh) Image scaling processing method and apparatus
CN111539238A (zh) Two-dimensional code image restoration method and apparatus, computer device, and storage medium
CN114022384A (zh) Adaptive edge-preserving denoising method based on an anisotropic diffusion model
WO2021000495A1 (zh) Image processing method and apparatus
JPWO2019041842A5 (zh)
CN111192302A (zh) Feature matching method based on motion smoothness and the RANSAC algorithm
CN115661464A (zh) Image segmentation method, apparatus, device, and computer storage medium
JP6164977B2 (ja) Image processing apparatus, image processing method, and image processing program
JP2017201454A (ja) Image processing apparatus and program
CN112907708B (zh) Face cartoonization method, device, and computer storage medium
CN118071979B (zh) Deep-learning-based wafer pre-alignment method and system
CN111968139B (zh) Contour detection method based on the fixational micro-movement mechanism of the primary visual cortex
JP5934019B2 (ja) Gradation restoration device and program therefor
TWI736335B (zh) Depth-image-based generation method, electronic device, and computer program product
CN110648341B (zh) Target boundary detection method based on scale space and subgraphs
CN114973288B (zh) Non-product image text detection method, system, and computer storage medium
WO2022205606A1 (zh) Mold processing method and apparatus, electronic device, system, and storage medium
WO2024212665A1 (zh) Image scaling method, apparatus, device, and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21826044

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21826044

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 07.07.2023)

122 Ep: pct application non-entry in european phase

Ref document number: 21826044

Country of ref document: EP

Kind code of ref document: A1