CN117333386A - Image sharpening method, system, device and storage medium - Google Patents
- Publication number: CN117333386A
- Application number: CN202311272067.6A
- Authority
- CN
- China
- Prior art keywords
- sharpened
- sharpening
- signal
- image
- pixel point
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/20—Image enhancement or restoration using local operators
Abstract
The application provides an image sharpening method, system, device and storage medium, wherein the image sharpening method comprises the following steps: acquiring brightness information of an input image, the brightness information comprising the original brightness signal of each pixel point of the input image; determining a determination direction for each pixel point in the input image based on a plurality of direction filters and a direction determination rule; for each pixel point, calculating reference signals on the two sides perpendicular to the determination direction, and calculating a base signal of the pixel point from the reference signals on the two sides; and calculating the high-frequency signal part to be sharpened in the original brightness signal based on the base signal of each pixel point, and sharpening that part to obtain a sharpened brightness signal. The image sharpening method provided by the application adapts well to hardware implementation, has a good image sharpening effect, and obtains more continuous, clear and natural edges.
Description
Technical Field
The present disclosure relates to the field of data processing technologies, and in particular, to an image sharpening method, system, device, and storage medium.
Background
Image sharpening is an important image enhancement algorithm. Limited by factors such as the optical quality of the lens and the actual specifications of the sensor, the video/images actually output by imaging devices suffer from insufficient sharpness, so the contrast of edges needs to be increased by a sharpening algorithm to improve the actual look and feel of the video/images. Sharpening algorithms are therefore an integral part of the image processing systems of various imaging devices.
A traditional sharpening algorithm separates the low-frequency signal from the medium/high-frequency signal in the image by designing different filters, enhances the medium/high-frequency signal, then fuses it with the low-frequency signal and outputs the sharpened image.
In practical applications, different customer groups have significantly different requirements for image style, so various sharpening algorithms have been derived to meet the adaptability requirements of different application scenarios. In particular, for video conferencing systems, beyond the two main factors of sharpness and clarity, the sharpening algorithm also needs to optimize for: highlighting face details, suppressing pseudo-details at edges, suppressing edge aliasing, controlling background noise, etc.
The main technical routes for hardware and software implementations of sharpening algorithms are as follows:
(1) Hardware aspect: to cover more frequency bands, some sharpening algorithms filter the video/images with large filtering templates, such as the multi-window serial image sharpening method described in patent application CN112184565A. Such methods leave more room for optimizing the video/image effect and can adapt to multi-scene requirements through tuning; however, larger multi-stage serial filtering templates (such as the 17×17 bilateral filter in Fig. 2 of that application) significantly increase cached data, so an implementation on hardware such as an FPGA (Field Programmable Gate Array) occupies more cache resources, which is unfavorable for relatively low-cost hardware implementations. Moreover, such algorithms involve many parameters, tuning the image effect is laborious, and the product development cycle is longer.
(2) Software aspect: sharpening algorithms are generally classified into directional sharpening and non-directional sharpening.
(2.1) Directional sharpening algorithms need to detect the direction of an edge, and perform sharpening enhancement if the detected edge strength exceeds a preset threshold. Common examples are the first-order difference operator, the Sobel operator, etc. This type of algorithm has two problems: the sharpened edges show strong jagging; and if the detection threshold is set unreasonably, the high-frequency edges are not continuous, clean and natural enough.
(2.2) Non-directional sharpening algorithms do not need edge detection; the high-frequency information screened out by a filter is enhanced directly. Typical algorithms include the unsharp masking algorithm (USM) and the Laplacian operator. For example, patent application CN107742280A employs such an algorithm. This type of algorithm has two problems: pseudo-details exist after sharpening and noise is amplified; and some of these algorithms produce overly thick edges after sharpening.
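As a concrete illustration of the non-directional approach and its drawback, the following is a minimal unsharp-mask sketch; the box blur, radius and gain are hypothetical choices, not taken from either cited patent application:

```python
import numpy as np

def unsharp_mask(img, radius=1, amount=1.0):
    """Minimal non-directional unsharp masking: everything removed by the
    low-pass filter -- edges AND texture-like noise -- gets amplified,
    which is exactly the pseudo-detail/noise problem described above."""
    img = img.astype(float)
    k = 2 * radius + 1
    pad = np.pad(img, radius, mode="edge")
    blurred = np.zeros_like(img)
    for dy in range(k):                      # box blur as the low-pass filter
        for dx in range(k):
            blurred += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    blurred /= k * k
    high_freq = img - blurred                # screened-out high-frequency part
    return np.clip(img + amount * high_freq, 0, 255)
```

A flat region is left untouched, while a step edge overshoots on the bright side and undershoots on the dark side; the same mechanism also amplifies isolated noise points, since nothing distinguishes them from edges.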
Disclosure of Invention
In view of the problems in the prior art, the purpose of the present application is to provide an image sharpening method, system, device and storage medium that can adapt well to hardware implementation and have a good image sharpening effect.
The embodiment of the application provides an image sharpening method, which comprises the following steps:
acquiring brightness information of an input image, wherein the brightness information comprises original brightness signals of all pixel points of the input image;
determining a determination direction of each pixel point in the input image based on a plurality of direction filters and a direction determination rule;
for each pixel point, respectively calculating reference signals at two sides perpendicular to the judging direction, and calculating a base signal of the pixel point based on the reference signals at the two sides;
And calculating a high-frequency signal part to be sharpened in the original brightness signal based on the basic signal of each pixel point, and sharpening the high-frequency signal part to be sharpened in the original brightness signal to obtain a sharpened brightness signal.
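Under stated assumptions — only the two main directions (0° and 90°), 3-tap difference filters, and single neighbouring pixels standing in for the side reference signals — the four steps above can be sketched as follows; `dir_th`, `alpha` and `gain` are hypothetical parameter choices:

```python
import numpy as np

def sharpen_luma(y, dir_th=10.0, alpha=0.5, gain=1.5):
    """Sketch of the claimed pipeline on a luma plane `y` (2-D array)."""
    y = y.astype(float)
    h, w = y.shape
    out = y.copy()
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            # Directional responses: absolute 3-tap difference outputs
            # (assumed stand-ins for the two main direction filters).
            resp_h = abs(y[i, j + 1] - y[i, j - 1])
            resp_v = abs(y[i + 1, j] - y[i - 1, j])
            if abs(resp_h - resp_v) <= dir_th:
                continue                         # no direction: pixel unchanged
            # Reference signals on the two sides perpendicular to the
            # decided direction (here reduced to single neighbours).
            if resp_h > resp_v:
                mean_a, mean_b = y[i, j - 1], y[i, j + 1]
            else:
                mean_a, mean_b = y[i - 1, j], y[i + 1, j]
            base = alpha * mean_a + (1 - alpha) * mean_b   # base signal
            high = y[i, j] - base                # part to be sharpened
            out[i, j] = np.clip(base + gain * high, 0, 255)
    return out
```

On a flat patch nothing changes; across a step edge the bright side is pushed up and the dark side down, increasing edge contrast without touching directionless (noise-like) pixels.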
In some embodiments, the determining the determination direction of each pixel point in the input image based on the plurality of direction filters and the direction determination rule includes the following steps:
setting a first main direction and a second main direction which are perpendicular to each other;
filtering the input image by adopting a first main direction filter and a second main direction filter respectively, and taking an absolute value of a filtered result to obtain a response value of each pixel point;
for each pixel point, determining the determination direction based on the response value corresponding to each filter according to the direction determination rule;
the direction determination rule includes: and if the difference value between the larger direction filter response value and the smaller direction filter response value is larger than a first threshold value, taking the direction corresponding to the larger direction filter response value as the judging direction.
In some embodiments, the determining the determination direction of each pixel point in the input image based on the plurality of direction filters and the direction determination rule includes the following steps:
Setting a first main direction and a second main direction which are perpendicular to each other and a first secondary direction and a second secondary direction which are perpendicular to each other, wherein the first secondary direction is between the first main direction and the second main direction;
filtering the input image by adopting a first main direction filter, a second main direction filter, a first secondary direction filter and a second secondary direction filter respectively, and taking an absolute value of a filtered result to obtain a response value of each pixel point;
determining the judging direction for each pixel point based on response values corresponding to each filter according to the direction judging rule;
the direction judgment rule includes:
according to a first condition selection: if the difference value between the largest direction filter response value and the other direction filter response values is larger than a first threshold value, taking the direction corresponding to the largest direction filter response value as the judging direction;
selecting according to a second condition: and if the absolute value of the difference value of the two main direction filter response values is larger than a first threshold value and the absolute value of the difference value of the larger main direction filter response value and one secondary direction filter response value is smaller than the first threshold value, taking the direction corresponding to the larger main direction filter response value as the judging direction.
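The two selection conditions above can be transcribed directly; the labels `'main1'`/`'main2'`/`'sub1'`/`'sub2'` and the `None` return for the no-direction case are illustrative choices, not from the text:

```python
def decide_direction(r_main1, r_main2, r_sub1, r_sub2, dir_th):
    """Direction-decision rule over four absolute filter responses.
    Returns the decided direction label, or None for no direction."""
    resps = {"main1": r_main1, "main2": r_main2, "sub1": r_sub1, "sub2": r_sub2}
    # First condition: the largest response beats every other by more than dir_th.
    best = max(resps, key=resps.get)
    if all(resps[best] - v > dir_th for k, v in resps.items() if k != best):
        return best
    # Second condition: the two main responses differ by more than dir_th,
    # and the larger main response is within dir_th of one secondary response.
    big_main = "main1" if r_main1 >= r_main2 else "main2"
    if abs(r_main1 - r_main2) > dir_th and (
        abs(resps[big_main] - r_sub1) < dir_th
        or abs(resps[big_main] - r_sub2) < dir_th
    ):
        return big_main
    return None  # neither condition holds: no direction
```

The second condition lets a strong diagonal-adjacent edge still resolve to the nearer main direction instead of being dropped.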
In some embodiments, the calculating the reference signals on two sides perpendicular to the determination direction includes the steps of:
respectively taking a plurality of reference pixel points at two sides perpendicular to the judging direction, and respectively calculating the average value of the reference pixel points at two sides to be used as reference signals at the two sides;
the base signal of the pixel point is obtained by calculation based on the reference signals on the two sides, and the method comprises the following steps:
and interpolating and calculating based on the reference signals at the two sides to obtain a basic signal of the pixel point.
In some embodiments, the step of taking a plurality of reference pixel points on two sides perpendicular to the determination direction includes the following steps:
selecting an adjacent pixel point as a center point on each side perpendicular to the determination direction;
and respectively selecting a plurality of pixel points at two sides of the center point along the judging direction, and taking the selected pixel points and the center point as reference pixel points at the side.
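The selection rule above — a neighbouring centre point on each side perpendicular to the decided direction, plus `n` points either side of that centre along the direction — reduces to index arithmetic; the direction encoding and `n=1` default are assumptions:

```python
def reference_points(i, j, direction, n=1):
    """Coordinates of the reference pixel points on the two sides
    perpendicular to the decided direction: each side's centre is the
    adjacent pixel, flanked by n pixels either side ALONG the direction."""
    along = {"0deg": (0, 1), "90deg": (1, 0)}[direction]  # step along the direction
    perp = (along[1], along[0])                           # step across it
    sides = []
    for sign in (-1, 1):                                  # the two sides
        ci, cj = i + sign * perp[0], j + sign * perp[1]   # side centre point
        pts = [(ci + k * along[0], cj + k * along[1]) for k in range(-n, n + 1)]
        sides.append(pts)
    return sides
```

Averaging the pixel values at each side's coordinates then yields mean_a and mean_b, the two side reference signals.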
In some embodiments, the calculating the to-be-sharpened high-frequency signal portion in the original brightness signal based on the base signal of each pixel point, and performing sharpening processing on the to-be-sharpened high-frequency signal portion in the original brightness signal, to obtain a sharpened brightness signal, includes the following steps:
Calculating the difference value between the original brightness signal and the basic signal of each pixel point to obtain the high-frequency signal part to be sharpened;
processing the high-frequency signal part to be sharpened based on a preset sharpening processing function to obtain a sharpened high-frequency signal part;
and adding the sharpened high-frequency signal part with the basic signal to obtain a sharpened brightness signal.
In some embodiments, the processing the to-be-sharpened high-frequency signal portion based on a preset sharpening processing function to obtain a sharpened high-frequency signal portion includes the following steps:
multiplying the high-frequency signal part to be sharpened by a preset sharpening coefficient to obtain the sharpened high-frequency signal part; or,
searching a sharpened signal corresponding to the high-frequency signal part to be sharpened based on a preset mapping relation between the signal before sharpening and the signal after sharpening, and taking the sharpened signal as the high-frequency signal part after sharpening.
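The two alternatives — a fixed gain, or a preset pre-/post-sharpen mapping (cf. the piecewise polyline of Fig. 4) — can be sketched as follows; the breakpoint values in the usage are hypothetical:

```python
import numpy as np

def sharpen_by_gain(high, gain=1.5):
    """Option 1: scale the high-frequency part by a preset coefficient."""
    return gain * high

def sharpen_by_lut(high, xs, ys):
    """Option 2: piecewise-linear lookup from pre-sharpen to post-sharpen
    signal; xs/ys are the (assumed) breakpoints of the preset mapping."""
    return np.interp(high, xs, ys)

def sharpen_signal(orig, base, sharpen_fn):
    """Claimed sequence: difference, sharpen the difference, add it back."""
    high = orig - base                  # high-frequency part to be sharpened
    return base + sharpen_fn(high)      # sharpened brightness signal
```

The lookup-table form is attractive in hardware because it replaces a multiplier with a small ROM and allows a non-linear mapping (e.g. soft-limiting large overshoots).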
The embodiment of the application also provides an image sharpening system, which applies the above image sharpening method, the system comprising:
the image acquisition module is used for acquiring brightness information of an input image, wherein the brightness information comprises original brightness signals of all pixel points of the input image;
A direction determination module for determining a determination direction of each pixel point in the input image based on a plurality of direction filters and a direction determination rule;
the image sharpening module is used for respectively calculating reference signals at two sides perpendicular to the judging direction for each pixel point, and calculating a basic signal of the pixel point based on the reference signals at the two sides; and calculating a high-frequency signal part to be sharpened in the original brightness signal based on the basic signal of each pixel point, and sharpening the high-frequency signal part to be sharpened in the original brightness signal to obtain a sharpened brightness signal.
The embodiment of the application also provides an image sharpening device, which comprises:
a processor;
a memory having stored therein executable instructions of the processor;
wherein the processor is configured to perform the steps of the image sharpening method via execution of the executable instructions.
The embodiment of the application also provides a computer readable storage medium for storing a program, which when executed by a processor, implements the steps of the image sharpening method.
With the image sharpening method, system, device and storage medium provided by the application, the direction of the image is judged based on direction filters; reference signals on the two sides of the determination direction are then calculated based on that direction; a more accurate base signal of the pixel point is obtained from the reference signals on the two sides; and the high-frequency signal part to be sharpened in the original brightness signal is sharpened, so that the processed, sharpened brightness signal is obtained.
The method adopts direction filters with a noise suppression effect and judges the edge direction based on their response values. After filtering, the signal amount of an edge far exceeds that of texture, so that when the high-frequency signal part to be sharpened is processed, sharpening is guaranteed to proceed along the edge direction, and the enhancement of small noise points is not brought into the sharpening calculation, thereby reducing the influence of noise on the final image effect. In conclusion, the technical scheme provided by the application helps control pseudo-details and noise in the image, reduces problems such as broken, jagged or overly thick edges, and obtains more continuous, clear and natural edges.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the detailed description of non-limiting embodiments, made with reference to the following drawings.
FIG. 1 is a flow chart of an image sharpening method according to an embodiment of the present application;
FIG. 2 (a) is a schematic diagram of a first principal direction filter according to an embodiment of the present application;
FIG. 2 (b) is a schematic diagram of a second principal direction filter according to an embodiment of the present application;
FIG. 2 (c) is a schematic diagram of a first secondary direction filter according to an embodiment of the present application;
FIG. 2 (d) is a schematic diagram of a second secondary direction filter according to an embodiment of the present application;
FIG. 3 is a flow chart of an implementation of the sharpening process steps of an embodiment of the present application;
FIG. 4 is a schematic diagram of a piecewise polyline mapping function of an embodiment of the present application;
FIG. 5 is a schematic diagram of an image sharpening system according to an embodiment of the present application;
FIG. 6 is a schematic diagram of an image sharpening device according to an embodiment of the present application;
FIG. 7 is a schematic structural diagram of a computer storage medium according to an embodiment of the present application.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the exemplary embodiments can be embodied in many forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the example embodiments to those skilled in the art. The same reference numerals in the drawings denote the same or similar structures, and thus a repetitive description thereof will be omitted. Although the terms "first" or "second" etc. may be used herein to describe certain features, these features should be interpreted in a descriptive sense only and not for purposes of limitation as to the number and importance of the particular features.
As shown in fig. 1, in an embodiment, the application provides an image sharpening method, which includes the following steps:
s100: acquiring brightness information of an input image, wherein the brightness information comprises original brightness signals of all pixel points of the input image;
the input image may be an independently captured image or a video frame image. If the input image is a YUV video frame, the image information of the Y channel, i.e. the brightness information, is acquired directly; if the input image is an RGB video frame, the RGB video frame is first converted into a YUV image, and then the image information of the Y channel, i.e. the brightness information, is acquired;
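For the RGB branch, the Y channel can be computed with the standard BT.601 luma weights (a sketch; a real pipeline would typically use a full RGB-to-YUV library conversion):

```python
import numpy as np

def luma_from_rgb(rgb):
    """BT.601 luma from an RGB frame -- the Y plane the method operates on.
    `rgb` is an (H, W, 3) array in R, G, B channel order."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.299 * r + 0.587 * g + 0.114 * b
```

Only this Y plane is sharpened; the chroma channels are left untouched, which avoids introducing color noise.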
s200: determining a determination direction of each pixel point in the input image based on a plurality of direction filters and a direction determination rule;
specifically, the input image is filtered based on the plurality of direction filters, and the absolute value of each filtered result is taken as the filter response value of the corresponding pixel point in the input image; the determination direction is then decided according to the direction determination rule from the response value of each pixel point for each direction filter; the number of direction filters and the form of the filters can be selected as required, and the direction filters may adopt, but are not limited to, gradient filters and related variants thereof, or Sobel operators and related variants;
S300: for each pixel point, respectively calculating reference signals at two sides perpendicular to the judging direction, and calculating a base signal of the pixel point based on the reference signals at the two sides;
in this embodiment, the step S300 includes: taking a plurality of reference pixel points on each of the two sides perpendicular to the determination direction, and calculating the average value of the reference pixel points on each side as the reference signal of that side, where the average value may be a simple average or a weighted average of the reference pixel points; then interpolating between the reference signals of the two sides to obtain the base signal of the pixel point, where interpolating means selecting a value between the two side reference signals as the base signal, for example TE = α·mean_a + (1−α)·mean_b, where α is a blending coefficient greater than 0 and less than 1, TE is the base signal of the pixel point, and mean_a and mean_b are the reference signals of the two sides; the larger α is, the closer TE is to mean_a, and the smaller α is, the closer TE is to mean_b;
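The interpolation step can be written directly from the formula TE = α·mean_a + (1−α)·mean_b (note that a larger α weights mean_a more heavily):

```python
def base_signal(mean_a, mean_b, alpha=0.5):
    """TE = alpha*mean_a + (1 - alpha)*mean_b: the base signal lies between
    the two side reference signals; alpha must be strictly between 0 and 1."""
    if not 0.0 < alpha < 1.0:
        raise ValueError("alpha must be in (0, 1)")
    return alpha * mean_a + (1.0 - alpha) * mean_b
```

With α = 0.5 the base signal is the midpoint of the two side references, a natural default when neither side is preferred.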
s400: calculating a high-frequency signal part to be sharpened in the original brightness signal based on the basic signal of each pixel point, and sharpening the high-frequency signal part to be sharpened in the original brightness signal to obtain a sharpened brightness signal;
In this embodiment, the to-be-sharpened high-frequency signal portion may be calculated based on the difference between the original brightness signal and the base signal, and the sharpened brightness signal may be obtained by processing the to-be-sharpened high-frequency signal portion based on a preset sharpening processing function and then adding the processed to-be-sharpened high-frequency signal portion to the base signal.
In steps S100 and S200, the method judges the direction of the image through the direction filters; in step S300, the reference signals on the two sides of the determination direction are calculated based on that direction, and a more accurate base signal of the pixel point is obtained from the reference signals on the two sides; in step S400, the high-frequency signal part to be sharpened in the original brightness signal is sharpened to obtain the processed, sharpened brightness signal. The method is simple to implement, needs no large filtering template, saves cache resources, and facilitates hardware implementation on e.g. an FPGA. The method adopts direction filters with a noise suppression effect and judges the edge direction based on their response values; after filtering, the signal amount of an edge far exceeds that of texture, so that when the high-frequency signal part to be sharpened is processed, sharpening proceeds along the edge direction and the enhancement of small noise points is not brought into the sharpening calculation, thereby reducing the influence of noise on the final image effect. Therefore, the technical scheme provided by the application helps control pseudo-details and noise in the image, reduces problems such as broken, jagged or overly thick edges, and obtains more continuous, clear and natural edges.
When the image sharpening method is applied to face image sharpening, the clarity of the facial features depends directly on the edge sharpening effect, and a small number of texture-like noise points exist in the YUV or RGB image transmitted after actual imaging. If a non-directional sharpening enhancement algorithm is adopted, such texture-like noise points are enhanced synchronously, causing edge discontinuities near the face area and poor image effects such as white spots and white edges. When the image sharpening method of the application is used to sharpen a face image, direction filters with a noise suppression effect are adopted and the edge direction is judged based on their response values; after filtering, the signal amount of an edge far exceeds that of texture, so that when the high-frequency signal part to be sharpened is processed, sharpening proceeds along the edge direction of the facial features, making them finer while avoiding pseudo-details (white spots, white edges, etc.), overly thick black lines and color noise; the sharpened face image thus highlights the key facial information while remaining more real and natural.
In this embodiment, the direction of the input image may be determined by the first main direction filter and the second main direction filter. The step S200: determining a determination direction of each pixel point in the input image based on a plurality of direction filters and a direction determination rule, comprising the steps of:
Setting a first main direction and a second main direction which are perpendicular to each other;
filtering the input image by adopting a first main direction filter and a second main direction filter respectively, and taking an absolute value of a filtered result as a response value of each pixel point;
for each pixel point, determining the determination direction based on the response value corresponding to each filter according to the direction determination rule;
the direction determination rule includes:
determining the determination direction according to a first condition, the first condition comprising: if the difference between the larger direction filter response value and the smaller direction filter response value is greater than a first threshold, taking the direction corresponding to the larger response value as the determination direction. For example, if the first main direction filter response value is greater than the second main direction filter response value and the difference is greater than the first threshold, the determination direction of the pixel point is the first main direction; if the first main direction filter response value is less than the second main direction filter response value and the difference is greater than the first threshold, the determination direction of the pixel point is the second main direction;
if the determination direction cannot be decided according to the first condition, the pixel point is determined to have no direction.
Further, a first secondary direction and a second secondary direction may also be set, and the step S200: determining a determination direction of each pixel point in the input image based on a plurality of direction filters and a direction determination rule, comprising the steps of:
setting a first main direction and a second main direction which are perpendicular to each other and a first secondary direction and a second secondary direction which are perpendicular to each other, wherein the first secondary direction is between the first main direction and the second main direction;
filtering the input image by adopting a first main direction filter, a second main direction filter, a first secondary direction filter and a second secondary direction filter respectively, and taking an absolute value of a filtered result as a response value of each pixel point;
determining the judging direction for each pixel point based on response values corresponding to each filter according to the direction judging rule;
the direction judgment rule includes:
according to a first condition selection: if the difference value between the largest direction filter response value and the other direction filter response values is larger than a first threshold value, taking the direction corresponding to the largest direction filter response value as the judging direction;
selecting according to a second condition: if the absolute value of the difference value of the two main direction filter response values is larger than a first threshold value, and the absolute value of the difference value of the larger main direction filter response value and one secondary direction filter response value is smaller than the first threshold value, taking the direction corresponding to the larger main direction filter response value as the judging direction;
If the determination direction cannot be determined according to both the first condition and the second condition, determining no direction.
In this embodiment, the first main direction is the 0° direction, the second main direction is the 90° direction, the first secondary direction is the 45° direction, and the second secondary direction is the 135° direction; these angles are given as examples and do not limit the scope of the present application. The step S200 is implemented by the following steps:
four direction filters corresponding to the four directions are designed (the first main direction filter, second main direction filter, first secondary direction filter and second secondary direction filter are shown in fig. 2 (a), fig. 2 (b), fig. 2 (c) and fig. 2 (d), respectively); the input image is filtered with each of them, and the absolute value of each filtered result is taken as the response value of each pixel point for that filter, respectively called the first main direction response value, second main direction response value, first secondary direction response value and second secondary direction response value;
setting the first threshold as DIR_TH, which may be set according to engineering experience and user requirements, or obtained by training a deep learning model, for example: labeling the direction of every pixel point of a sample image, inputting different candidate thresholds together with the sample image into the deep learning model to obtain the predicted direction of each pixel point, and selecting among the candidate thresholds based on the labeled image and the predicted directions;
The determining the determination direction according to the response value corresponding to each filter for each pixel point includes:
(1) According to a first condition selection:
(1.1) for a pixel point, if the first main direction response value minus the second main direction response value is greater than DIR_TH, the first main direction response value minus the first secondary direction response value is greater than DIR_TH, and the first main direction response value minus the second secondary direction response value is greater than DIR_TH, the direction of the current pixel point is judged to be the first main direction;
(1.2) for a pixel point, if the second main direction response value minus the first main direction response value is greater than DIR_TH, the second main direction response value minus the first secondary direction response value is greater than DIR_TH, and the second main direction response value minus the second secondary direction response value is greater than DIR_TH, the direction of the current pixel point is judged to be the second main direction;
(1.3) for a pixel point, if the first secondary direction response value minus the first main direction response value is greater than DIR_TH, the first secondary direction response value minus the second main direction response value is greater than DIR_TH, and the first secondary direction response value minus the second secondary direction response value is greater than DIR_TH, the direction of the current pixel point is judged to be the first secondary direction;
(1.4) for a pixel point, if the second secondary direction response value minus the first main direction response value is greater than DIR_TH, the second secondary direction response value minus the second main direction response value is greater than DIR_TH, and the second secondary direction response value minus the first secondary direction response value is greater than DIR_TH, the direction of the current pixel point is judged to be the second secondary direction;
(2) Selecting according to a second condition:
(2.1) for a pixel point, if the first main direction response value minus the second main direction response value is greater than DIR_TH, and one of the following two conditions A, B is met, the direction of the current pixel point is judged to be the first main direction:
condition A: the absolute difference between the first main direction response value and the first secondary direction response value is smaller than DIR_TH;
condition B: the absolute difference between the first main direction response value and the second secondary direction response value is smaller than DIR_TH;
(2.2) for a pixel point, if the second main direction response value minus the first main direction response value is greater than DIR_TH, and one of the following two conditions C, D is met, the direction of the current pixel point is judged to be the second main direction:
condition C: the absolute difference between the second main direction response value and the first secondary direction response value is smaller than DIR_TH;
condition D: the absolute difference between the second main direction response value and the second secondary direction response value is smaller than DIR_TH;
(3) If the judged direction of a pixel point cannot be determined by either the first condition or the second condition, the pixel point is judged to have no direction.
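The first-condition/second-condition rules above can be sketched per pixel as follows (a minimal sketch; the function and return names are illustrative, not from the patent):

```python
def decide_direction(r_m1, r_m2, r_s1, r_s2, dir_th):
    """Judge the direction of one pixel from its four filter response
    values: r_m1/r_m2 are the main-direction (0/90 deg) responses,
    r_s1/r_s2 the secondary-direction (45/135 deg) responses."""
    resp = {"main1": r_m1, "main2": r_m2, "sub1": r_s1, "sub2": r_s2}
    # First condition: one response exceeds each of the others by DIR_TH.
    for name, r in resp.items():
        if all(r - other > dir_th
               for k, other in resp.items() if k != name):
            return name
    # Second condition: the larger main response beats the other main
    # response by DIR_TH and is within DIR_TH of one secondary response.
    if r_m1 - r_m2 > dir_th and (abs(r_m1 - r_s1) < dir_th
                                 or abs(r_m1 - r_s2) < dir_th):
        return "main1"
    if r_m2 - r_m1 > dir_th and (abs(r_m2 - r_s1) < dir_th
                                 or abs(r_m2 - r_s2) < dir_th):
        return "main2"
    return None  # no direction
```

For example, responses (10, 2, 9, 1) with DIR_TH = 5 fail the first condition (the 0° response does not beat the 45° response by more than 5) but satisfy the second, so the pixel is judged as the first main direction.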
In this embodiment, in the step S300, calculating the reference signals on the two sides perpendicular to the judged direction includes taking a plurality of reference pixel points on each of the two sides perpendicular to the judged direction and calculating the average value (for example, a simple average or a weighted average) of the reference pixel points on each side as that side's reference signal. In the step S300, calculating the base signal of the pixel point based on the reference signals of the two sides includes interpolating the base signal of the pixel point from the reference signals of the two sides.
In this embodiment, for each pixel, the step of taking a plurality of reference pixels on two sides perpendicular to the determination direction includes the following steps:
for each pixel point, selecting an adjacent pixel point as a center point on each side perpendicular to the judging direction; adjacent pixels refer to: two pixel points closest to the currently determined pixel point in a direction perpendicular to the determination direction;
And selecting a plurality of pixel points on two sides of the center point along the judging direction respectively, wherein the selected pixel points and the center point are used as reference pixel points on the side, so that the connecting line of the reference pixel points is parallel to the judging direction on each side of each pixel point which is perpendicular to the judging direction.
For example, if the judged direction of a pixel point is 0°, one adjacent pixel point is selected on its upper side, and the other reference pixel points on that side are distributed on a straight line parallel to the 0° direction centered on that adjacent pixel point; likewise, one adjacent pixel point is selected on its lower side, and the other reference pixel points on that side are distributed on a straight line parallel to the 0° direction centered on that adjacent pixel point.
The currently calculated pixel point is denoted as point T with coordinates (x, y). The calculation of the base signal is described below for each possible judged direction of a pixel point: the first main direction, the second main direction, the first secondary direction, the second secondary direction, or no direction.
(1) The basic signal calculation step when the determination direction of the point T is the first main direction is as follows:
(1-1) the two side areas perpendicular to the first main direction passing through the point T are respectively an area A and an area B, wherein the point with the coordinates of (x, y-1) is taken as a central point Ta in the area A, and the point with the coordinates of (x, y+1) is taken as a central point Tb in the area B;
(1-2) taking 1×N pixels along the first main direction centered at the point Ta and computing their weighted average to obtain the weighted average Mean_a of area A along the first main direction, calculated as follows:
Mean_a = Σ_i ω(i)·Y(x+i, y−1), i = −(N−1)/2, …, (N−1)/2
wherein Y represents the gray value of the luminance channel at the corresponding coordinate position, that is, the original luminance signal, and ω represents the normalized weight coefficient array. The specific values of this filter kernel may be set according to engineering experience and user requirements, or obtained by training a deep learning model, for example: a simple coefficient array may take uniform mean coefficients, i.e. each value is 1/N; alternatively, the coefficient array may be set according to the distance of each reference pixel point from the center point, the larger the distance, the larger the corresponding weighting coefficient. N represents the size of the calculation template and is set empirically, typically 3, 5, 7, 9, 11, etc.;
(1-3) taking 1×N pixels along the first main direction centered at the point Tb and computing their weighted average to obtain the weighted average Mean_b of area B along the first main direction, calculated as follows:
Mean_b = Σ_i ω(i)·Y(x+i, y+1), i = −(N−1)/2, …, (N−1)/2
(1-4) calculating a base signal TE of the current pixel point by linear interpolation according to weighted average values mean_a and mean_b of the a region and the B region in (1-2) and (1-3), the calculation formula of the base signal TE is as follows:
TE=α·Mean_a+(1-α)·Mean_b
Wherein α represents the blending intensity coefficient of area A. In practical use, the user tunes it according to his or her preferred sharpening style; its range is between 0 and 1, preferably strictly between 0 and 1, and, for example, 0.4, 0.5 or 0.6 may be selected as needed. Suppose that near an edge Mean_a is smaller than Mean_b and the luminance value of the current pixel point is greater than TE: the closer α is to 1, the closer TE is to Mean_a, and the sharpening style tends to raise the gray values of pixel points near the edge, i.e. to strengthen white edges; the closer α is to 0, the closer TE is to Mean_b, and the style tends to lower the gray values near the edge, i.e. to strengthen black edges; choosing α = 0.5 yields a relatively balanced sharpening style. The user may therefore debug and fix a specific value of α according to whether the sharpening style should favor white edges or black edges;
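A minimal sketch of the base-signal computation for a 0°-direction pixel, assuming the weighted-average form described above (rows y−1 and y+1 centered at x, with a weight array ω normalized to sum to 1):

```python
import numpy as np

def base_signal_0deg(luma, x, y, omega, alpha=0.5):
    """Base signal TE for a pixel judged to lie in the first main
    (0 deg) direction. omega is the normalized 1 x N weight array;
    luma is indexed as luma[row, col] = luma[y, x]."""
    N = len(omega)
    r = N // 2
    row_a = luma[y - 1, x - r:x + r + 1]  # area A, centered at Ta=(x, y-1)
    row_b = luma[y + 1, x - r:x + r + 1]  # area B, centered at Tb=(x, y+1)
    mean_a = float(np.dot(omega, row_a))
    mean_b = float(np.dot(omega, row_b))
    return alpha * mean_a + (1 - alpha) * mean_b  # TE, linear interpolation
```

With uniform weights and α = 0.5 this reduces to the midpoint of the two row means, the "compromise" style described above.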
(2) The basic signal calculation step when the determination direction of the point T is the second main direction is as follows:
(2-1) recording the two side areas perpendicular to the second main direction through the point T as area A and area B, taking the point with coordinates (x−1, y) as the center point Ta in area A and the point with coordinates (x+1, y) as the center point Tb in area B;
(2-2) taking 1×N pixels along the second main direction centered at the point Ta and computing their weighted average to obtain the weighted average Mean_a of area A along the second main direction, calculated as follows:
Mean_a = Σ_i ω(i)·Y(x−1, y+i), i = −(N−1)/2, …, (N−1)/2
wherein Y represents the gray value of the luminance channel at the corresponding coordinate position, that is, the original luminance signal, and ω represents the normalized weight coefficient array, whose specific values are set according to engineering experience or obtained by training a deep learning model, for example: the coefficient array may take uniform mean coefficients, i.e. each value is 1/N, or may be set according to the distance of each reference pixel point from the center point, the larger the distance, the larger the corresponding weighting coefficient. N represents the size of the calculation template and is set empirically, typically 3, 5, 7, 9, 11, etc.;
(2-3) taking 1×N pixels along the second main direction centered at the point Tb and computing their weighted average to obtain the weighted average Mean_b of area B along the second main direction, calculated as follows:
Mean_b = Σ_i ω(i)·Y(x+1, y+i), i = −(N−1)/2, …, (N−1)/2
(2-4) calculating a base signal TE of the current point according to weighted average values mean_a and mean_b of the a region and the B region in (2-2) and (2-3), wherein the calculation formula of the base signal TE is as follows:
TE=α·Mean_a+(1-α)·Mean_b
Wherein α represents the blending intensity coefficient of area A; in actual use, the user tunes it according to his or her preferred sharpening style, and its range is between 0 and 1.
(3) The base signal calculation steps when the judged direction of the point T is the first secondary direction are as follows:
(3-1) recording the two side areas perpendicular to the first secondary direction through the point T as area A and area B, taking the point with coordinates (x−1, y−1) as the center point Ta in area A and the point with coordinates (x+1, y+1) as the center point Tb in area B;
(3-2) taking 1×N pixels along the first secondary direction centered at the point Ta and computing their weighted average to obtain the weighted average Mean_a of area A along the first secondary direction, calculated as follows:
Mean_a = Σ_i ω(i)·Y(x−1+i, y−1−i), i = −(N−1)/2, …, (N−1)/2
wherein Y represents the gray value of the luminance channel at the corresponding coordinate position, that is, the original luminance signal, and ω represents the normalized weight coefficient array, whose specific values are set according to engineering experience or obtained by training a deep learning model, for example: the coefficient array may take uniform mean coefficients, i.e. each value is 1/N, or may be set according to the distance of each reference pixel point from the center point, the larger the distance, the larger the corresponding weighting coefficient. N represents the size of the calculation template and is set empirically, typically 3, 5, 7, 9, 11, etc.;
(3-3) taking 1×N pixels along the first secondary direction centered at the point Tb and computing their weighted average to obtain the weighted average Mean_b of area B along the first secondary direction, calculated as follows:
Mean_b = Σ_i ω(i)·Y(x+1+i, y+1−i), i = −(N−1)/2, …, (N−1)/2
(3-4) calculating a base signal TE of the current point according to weighted average values mean_a and mean_b of the a region and the B region in (3-2) and (3-3), the calculation formula of the base signal TE being as follows:
TE=α·Mean_a+(1-α)·Mean_b
Wherein α represents the blending intensity coefficient of area A; in actual use, the user tunes it according to his or her preferred sharpening style, and its range is between 0 and 1.
(4) The base signal calculation steps when the judged direction of the point T is the second secondary direction are as follows:
(4-1) recording the coordinates of the current point T as (x, y) and the two side areas perpendicular to the second secondary direction through the point T as area A and area B, taking the point with coordinates (x+1, y−1) as the center point Ta in area A and the point with coordinates (x−1, y+1) as the center point Tb in area B;
(4-2) taking 1×N pixels along the second secondary direction centered at the point Ta and computing their weighted average to obtain the weighted average Mean_a of area A along the second secondary direction, calculated as follows:
Mean_a = Σ_i ω(i)·Y(x+1+i, y−1+i), i = −(N−1)/2, …, (N−1)/2
wherein Y represents the gray value of the luminance channel at the corresponding coordinate position, that is, the original luminance signal, and ω represents the normalized weight coefficient array, whose specific values are set according to engineering experience or obtained by training a deep learning model, for example: the coefficient array may take uniform mean coefficients, i.e. each value is 1/N, or may be set according to the distance of each reference pixel point from the center point, the larger the distance, the larger the corresponding weighting coefficient. N represents the size of the calculation template and is set empirically, typically 3, 5, 7, 9, 11, etc.;
(4-3) taking 1×N pixels along the second secondary direction centered at the point Tb and computing their weighted average to obtain the weighted average Mean_b of area B along the second secondary direction, calculated as follows:
Mean_b = Σ_i ω(i)·Y(x−1+i, y+1+i), i = −(N−1)/2, …, (N−1)/2
(4-4) calculating a base signal value TE of the current point according to weighted average values mean_a and mean_b of the a region and the B region in (4-2) and (4-3), the calculation formula being as follows:
TE=α·Mean_a+(1-α)·Mean_b
Wherein α represents the blending intensity coefficient of area A; in actual use, the user tunes it according to his or her preferred sharpening style, and its range is between 0 and 1.
(5) If the direction judgment result is no direction, the base signal value TE may be calculated by techniques including, but not limited to, (5-1) or (5-2):
(5-1) directly assigning the original luminance signal Y(x, y) of the point T to TE;
(5-2) filtering the original luminance signal of the current point T with an N×N low-pass filter f. The low-pass filter must ensure that the filter coefficients at the two positions symmetric about the center along each of the first main direction, the second main direction, the first secondary direction and the second secondary direction are equal, i.e.:
f(i, 0) = f(−i, 0), f(0, j) = f(0, −j), f(i, i) = f(−i, −i), f(i, −i) = f(−i, i)
The low-pass filter f employed includes, but is not limited to, an average filter, a Gaussian filter, etc. N represents the size of the calculation template and is set empirically, typically 3, 5, 7, 9, 11, etc.
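As a sketch of option (5-2) with the simplest admissible kernel, an N×N mean filter, whose equal coefficients trivially satisfy the symmetry constraint along all four directions:

```python
import numpy as np

def no_direction_base(luma, x, y, N=3):
    """Base signal TE for a pixel judged to have no direction, computed
    with an N x N mean filter (a low-pass kernel whose coefficients are
    symmetric about the center along the 0/45/90/135 degree directions).
    luma is indexed as luma[row, col] = luma[y, x]."""
    r = N // 2
    window = luma[y - r:y + r + 1, x - r:x + r + 1]
    return float(window.mean())
```

On a flat region this leaves the signal unchanged, so no false high-frequency component is created for directionless pixels.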
When calculating the base signal, the present application uses a small calculation template; compared with existing multi-stage serial sharpening algorithms, it therefore has an advantage in resource consumption and is easier to deploy on hardware such as an FPGA (field programmable gate array).
As shown in fig. 3, in this embodiment, the step S400: calculating a high-frequency signal part to be sharpened in the original brightness signal based on the basic signal of each pixel point, and sharpening the high-frequency signal part to be sharpened in the original brightness signal to obtain a sharpened brightness signal, wherein the method comprises the following steps:
S410: calculating the difference between the original luminance signal and the base signal of each pixel point to obtain the high-frequency signal portion to be sharpened;
in particular, HF = Y(x, y) − TE, where HF represents the high-frequency signal portion to be sharpened;
S420: processing the high-frequency signal portion to be sharpened with a preset sharpening processing function to obtain the sharpened high-frequency signal portion;
specifically, HF′ = sharpen_function·HF;
wherein sharpen_function represents the sharpening processing function, and sharpen_function·HF represents the sharpened high-frequency signal portion.
S430: adding the sharpened high-frequency signal part and the basic signal to obtain a sharpened brightness signal;
Specifically, Y_out = TE + HF′, where Y_out represents the sharpened luminance signal.
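Steps S410–S430 for a single pixel can be sketched as follows, using the simplest S420 option in this embodiment, a constant sharpening coefficient (function and parameter names are illustrative):

```python
def sharpen_pixel(Y_xy, TE, gain=1.5):
    """One-pixel sharpening pipeline: split off the high-frequency
    part, sharpen it with a constant coefficient, recombine with the
    base signal TE."""
    HF = Y_xy - TE         # S410: high-frequency part to be sharpened
    HF_sharp = gain * HF   # S420: constant-coefficient sharpening function
    return TE + HF_sharp   # S430: sharpened luminance Y_out
```

For instance, with a base signal of 100 and an original luminance of 110, a gain of 1.5 amplifies the 10-unit high-frequency part to 15, giving a sharpened luminance of 115.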
In this embodiment, the step S420: the sharpened high-frequency signal part is obtained after the high-frequency signal part to be sharpened is processed based on a preset sharpening processing function, and one of the following modes can be adopted:
(1) Multiplying the high-frequency signal part to be sharpened by a preset sharpening coefficient to obtain a sharpened high-frequency signal part, namely, sharpening processing function sharpen_function is a preset sharpening coefficient, and the sharpening coefficient can be a constant, so that the convenience of sharpening processing is improved.
(2) Searching a sharpened signal corresponding to the high-frequency signal part to be sharpened based on a preset mapping relation between the signal before sharpening and the signal after sharpening, and taking the sharpened signal as the high-frequency signal part after sharpening.
The mapping relationship between the pre-sharpening signal and the post-sharpening signal may take the form of a curve mapping table (as in 2.1 below) or a piecewise polyline mapping function (as in 2.2 below).
(2.1) the mapping relationship between the pre-sharpening signal and the post-sharpening signal is a curve mapping table, and the step S420 may be implemented as follows:
(2.1.1) designing a 1×M mapping table array sharpen_lut, whose elements are the sharpened signal values corresponding to M pre-sharpening signal values; the element values are tuned according to the actual scene requirements. M is usually 8, 16, 32, etc., although the application is not limited thereto, and the specific value can be set from engineering experience;
(2.1.2) taking the absolute value of the high-frequency signal HF to be sharpened and rounding down, and taking the result as the initial position coordinate index_init;
(2.1.3) comparing the initial position coordinate index_init with M and taking the smaller of the two as the final position coordinate index_final;
(2.1.4) reading the sharpened signal value sharpen_lut[index_final] corresponding to index_final in the mapping table array;
(2.1.5) outputting the sharpened high-frequency signal portion according to the sign of the high-frequency signal value to be sharpened, as shown in the following formula:
sharpen_function·HF = sign(HF)·sharpen_lut[index_final]
wherein sharpen_function·HF represents the sharpened high-frequency signal portion, and sharpen_function represents the function that processes the high-frequency signal HF to be sharpened;
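A sketch of the table-lookup steps (2.1.2)–(2.1.5). Note that this sketch uses 0-based array indexing, so the clamp is to M−1 rather than M (an assumption about the intended indexing of the 1×M table):

```python
import math

def lut_sharpen(HF, sharpen_lut):
    """Look up the sharpened high-frequency value for HF in the 1 x M
    mapping table sharpen_lut, restoring HF's sign on output."""
    M = len(sharpen_lut)
    index_init = math.floor(abs(HF))      # (2.1.2) |HF| rounded down
    index_final = min(index_init, M - 1)  # (2.1.3) clamp into the table
    value = sharpen_lut[index_final]      # (2.1.4) read the table entry
    return value if HF >= 0 else -value   # (2.1.5) restore the sign
```

Because the table saturates at its last entry, very large |HF| values are not amplified without bound, which is part of the noise-suppression behavior described below.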
(2.2) the mapping relationship between the pre-sharpening signal and the post-sharpening signal is a piecewise polyline mapping function as shown in fig. 4, and in the step S420 the sharpened high-frequency signal portion is calculated from this piecewise polyline function.
The piecewise polyline mapping function shown in fig. 4 is merely exemplary; employing more or fewer segments also falls within the scope of the present application.
In this embodiment, adjusting the sharpening processing function further meets the sharpening requirements of each scene and improves the adaptability of the sharpening algorithm across scenes. When the sharpening processing function adopts a mapping relationship between the pre-sharpening and post-sharpening signals, part of the shot noise can be effectively suppressed.
In this embodiment, after step S400, the method further includes outputting a sharpened image, specifically including the following steps:
reading the sharpened luminance information after step S400, which includes the sharpened luminance signal Y_out of each pixel point;
if the original video frame is a YUV image, backfilling the sharpened luminance signal Y_out into the luminance channel of the original input image to obtain the sharpened image;
if the original video frame is an RGB image, backfilling the sharpened luminance signal Y_out into the luminance channel of the converted YUV image to obtain a sharpened YUV image, and converting the sharpened YUV image into an RGB image to obtain the sharpened image.
In another embodiment, when the sharpened image is output, the image may additionally be sharpened globally with a global sharpening strength sharpen_str; outputting the sharpened image then includes the following steps:
reading the sharpened luminance information after step S400, which includes the sharpened luminance signal Y_out of each pixel point;
reading the global sharpening strength sharpen_str of the input image, subtracting the original luminance signal Y from the sharpened luminance signal Y_out, multiplying the difference by sharpen_str, and adding the product back to the original luminance signal Y to obtain the finally output luminance Y_final; the calculation formula is as follows:
Y_final = sharpen_str·(Y_out − Y) + Y
if the original video frame is a YUV image, backfilling the luminance Y_final into the luminance channel of the original input image to obtain the sharpened image;
if the original video frame is an RGB image, backfilling the luminance Y_final into the luminance channel of the converted YUV image to obtain a sharpened YUV image, and converting the sharpened YUV image into an RGB image to obtain the sharpened image.
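The global-strength blend and the luminance backfill can be sketched as follows (a channel-last YUV layout is an assumption of this sketch):

```python
import numpy as np

def apply_global_strength(Y, Y_out, sharpen_str):
    """Blend the per-pixel sharpened luminance Y_out back toward the
    original luminance Y: Y_final = sharpen_str*(Y_out - Y) + Y."""
    return sharpen_str * (Y_out - Y) + Y

def backfill_yuv(yuv, Y_final):
    """Backfill the final luminance into the Y channel of a YUV frame
    (channel-last H x W x 3 layout assumed); returns a new array."""
    out = yuv.copy()
    out[..., 0] = Y_final
    return out
```

A strength of 0 returns the original luminance, 1 the fully sharpened result, so sharpen_str acts as a single global dial over the whole directional pipeline.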
As shown in fig. 5, an embodiment of the present application further provides an image sharpening system, which is applied to the image sharpening method, where the system includes:
an image acquisition module M100, configured to acquire luminance information of an input image, where the luminance information includes an original luminance signal of each pixel point of the input image;
a direction determining module M200, configured to determine a determining direction of each pixel point in the input image based on a plurality of direction filters and a direction determining rule;
the image sharpening module M300 is used for respectively calculating reference signals at two sides perpendicular to the judging direction for each pixel point, and calculating a basic signal of the pixel point based on the reference signals at the two sides; and calculating a high-frequency signal part to be sharpened in the original brightness signal based on the basic signal of each pixel point, and sharpening the high-frequency signal part to be sharpened in the original brightness signal to obtain a sharpened brightness signal.
By adopting the present application, the image acquisition module M100 and the direction determination module M200 judge the directions in the image based on the direction filters, and the image sharpening module M300 computes the reference signals on the two sides of the judged direction, from which a more accurate base signal of each pixel point is obtained; the high-frequency signal portion to be sharpened in the original luminance signal is then sharpened to obtain the sharpened luminance signal. The algorithm is simple to implement and requires no very large filtering template, which saves cache resources and facilitates hardware implementations such as FPGA. Because the application uses direction filters with a noise-suppression effect and judges the edge direction from the filter response values, after filtering the signal amount of an edge far exceeds that of texture; sharpening of the high-frequency signal is therefore carried out along the edge direction, and enhanced noise points are not brought into the sharpening calculation, reducing the influence of noise on the final image. In conclusion, the technical scheme provided by the application helps control pseudo-details and noise in the image, reduces problems such as broken, jagged or overly thick edges, and yields more continuous, clear and natural edges.
Further, in this embodiment, the image sharpening system may further include an image output module for backfilling the sharpened luminance signal Y_out, or the globally sharpened signal Y_final as in the above method, into the original input image and outputting the sharpened image.
The embodiment of the application also provides image sharpening equipment, which comprises a processor; a memory having stored therein executable instructions of the processor; wherein the processor is configured to perform the steps of the image sharpening method via execution of the executable instructions.
Those skilled in the art will appreciate that the various aspects of the present application may be implemented as a system, method, or program product. Accordingly, aspects of the present application may be embodied in the following forms: an entirely hardware embodiment, an entirely software embodiment (including firmware, micro-code, etc.), or an embodiment combining hardware and software aspects, which may be referred to herein as a "circuit," "module," or "system."
An electronic device 600 according to this embodiment of the present application is described below with reference to fig. 6. The electronic device 600 shown in fig. 6 is merely an example, and should not be construed as limiting the functionality and scope of use of embodiments of the present application.
As shown in fig. 6, the electronic device 600 is in the form of a general purpose computing device. Components of electronic device 600 may include, but are not limited to: at least one processing unit 610, at least one memory unit 620, a bus 630 connecting the different system components (including the memory unit 620 and the processing unit 610), a display unit 640, etc.
Wherein the storage unit stores program code that is executable by the processing unit 610 such that the processing unit 610 performs steps according to various exemplary embodiments of the present application described in the above-mentioned image sharpening method section of the present specification. For example, the processing unit 610 may perform the steps as shown in fig. 1.
The memory unit 620 may include readable media in the form of volatile memory units, such as Random Access Memory (RAM) 6201 and/or cache memory unit 6202, and may further include Read Only Memory (ROM) 6203.
The storage unit 620 may also include a program/utility 6204 having a set (at least one) of program modules 6205, such program modules 6205 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment.
Bus 630 may represent one or more of several types of bus structures, including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus architectures.
The electronic device 600 may also communicate with one or more external devices 700 (e.g., keyboard, pointing device, bluetooth device, etc.), one or more devices that enable a user to interact with the electronic device 600, and/or any device (e.g., router, modem, etc.) that enables the electronic device 600 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 650. Also, electronic device 600 may communicate with one or more networks such as a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the Internet, through network adapter 660. The network adapter 660 may communicate with other modules of the electronic device 600 over the bus 630. It should be appreciated that although not shown, other hardware and/or software modules may be used in connection with electronic device 600, including, but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
By adopting the image sharpening device provided by the application, the processor executes the image sharpening method when executing the executable instructions, so that the beneficial effects of the image sharpening method can be obtained.
The embodiment of the application also provides a computer readable storage medium for storing a program, which when executed by a processor, implements the steps of the image sharpening method. In some possible embodiments, the various aspects of the present application may also be implemented in the form of a program product comprising program code for causing a terminal device to carry out the steps according to the various exemplary embodiments of the present application as described in the above-mentioned image sharpening method section of the present specification, when the program product is run on the terminal device.
Referring to fig. 7, a program product 800 for implementing the above-described method according to an embodiment of the present application is described, which may employ a portable compact disc read only memory (CD-ROM) and include program code, and may be run on a terminal device, such as a personal computer. However, the program product of the present application is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium can be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium would include the following: an electrical connection having one or more wires, a portable disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A readable signal medium, by contrast, may include a data signal propagated in baseband or as part of a carrier wave, with readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electromagnetic, optical, or any suitable combination of the foregoing. A readable signal medium may also be any readable medium, other than a readable storage medium, that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present application may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++ as well as conventional procedural programming languages such as the "C" programming language. The program code may execute entirely on the user's computing device, partly on the user's device as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or cluster. In the case of a remote computing device, the remote computing device may be connected to the user's computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., via the Internet using an Internet service provider).
By adopting the computer-readable storage medium provided by the present application, the program stored therein, when executed, realizes the steps of the image sharpening method, whereby the advantageous effects of the image sharpening method described above can be obtained.
The foregoing is a further detailed description of the present application in connection with specific preferred embodiments, and the practice of the present application is not intended to be limited to this description. It should be understood that those skilled in the art to which the present application pertains may make simple deductions or substitutions without departing from the spirit of the present application, and all such deductions or substitutions should be considered to fall within the scope of the present application.
Claims (10)
1. An image sharpening method, comprising the steps of:
acquiring brightness information of an input image, wherein the brightness information comprises original brightness signals of all pixel points of the input image;
determining a determination direction of each pixel point in the input image based on a plurality of direction filters and a direction determination rule;
for each pixel point, respectively calculating reference signals on the two sides perpendicular to the determination direction, and calculating a base signal of the pixel point based on the reference signals on the two sides;
and calculating a high-frequency signal part to be sharpened in the original brightness signal based on the basic signal of each pixel point, and sharpening the high-frequency signal part to be sharpened in the original brightness signal to obtain a sharpened brightness signal.
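The four claimed steps can be sketched as a single routine on a luminance array: determine a per-pixel direction from directional filter responses, form a base signal from references on both sides perpendicular to that direction, take the high-frequency part as original minus base, and boost it before adding it back. Every concrete choice below — the central-difference kernels, the 3-pixel reference windows, the gain and the threshold — is an illustrative assumption; claim 1 leaves the filters, the reference selection and the sharpening function open.

```python
import numpy as np

def sharpen_image(luma, gain=1.5, threshold=4.0):
    """Illustrative end-to-end sketch of the claimed pipeline (assumed parameters)."""
    p = np.pad(luma.astype(float), 1, mode="edge")
    resp_a = np.abs(p[2:, 1:-1] - p[:-2, 1:-1])     # response across rows
    resp_b = np.abs(p[1:-1, 2:] - p[1:-1, :-2])     # response across columns
    out = luma.astype(float).copy()
    h, w = luma.shape
    for y in range(h):
        for x in range(w):
            if resp_a[y, x] - resp_b[y, x] > threshold:
                horiz = True            # first main direction
            elif resp_b[y, x] - resp_a[y, x] > threshold:
                horiz = False           # second main direction
            else:
                continue                # no clear direction: leave the pixel unsharpened
            refs = []
            for s in (-1, 1):           # one reference signal per side
                pts = []
                for k in (-1, 0, 1):    # center point plus its two neighbors
                    py = y + (s if horiz else k)
                    px = x + (k if horiz else s)
                    pts.append(luma[min(max(py, 0), h - 1), min(max(px, 0), w - 1)])
                refs.append(sum(pts) / 3.0)
            base = 0.5 * (refs[0] + refs[1])        # interpolated base signal
            out[y, x] = base + gain * (luma[y, x] - base)
    return out
```

On a step edge this produces the expected undershoot on the dark side and overshoot on the bright side while leaving flat regions untouched.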
2. The image sharpening method according to claim 1, wherein the determining a determination direction of each pixel point in the input image based on a plurality of direction filters and a direction determination rule includes the steps of:
setting a first main direction and a second main direction which are perpendicular to each other;
filtering the input image by adopting a first main direction filter and a second main direction filter respectively, and taking an absolute value of a filtered result to obtain a response value of each pixel point;
for each pixel point, determining the determination direction based on the response value corresponding to each filter according to the direction determination rule;
the direction determination rule includes: if the difference between the larger direction filter response value and the smaller direction filter response value is greater than a first threshold value, taking the direction corresponding to the larger direction filter response value as the determination direction.
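The two-main-direction rule of claim 2 can be sketched as follows. The central-difference kernels and the threshold value are assumptions; the claim does not fix the filter taps.

```python
import numpy as np

def determine_direction(luma, threshold):
    """Per-pixel determination direction under the two-main-direction rule of
    claim 2: a direction wins when its filter response exceeds the other's by
    more than the first threshold; otherwise the pixel has no clear direction."""
    p = np.pad(luma.astype(float), 1, mode="edge")
    # Example direction filters: central differences across rows (first main
    # direction) and across columns (second main direction); the absolute
    # value of the filtered result is the per-pixel response value.
    resp_a = np.abs(p[2:, 1:-1] - p[:-2, 1:-1])
    resp_b = np.abs(p[1:-1, 2:] - p[1:-1, :-2])
    direction = np.full(luma.shape, -1, dtype=int)   # -1: no clear direction
    direction[resp_a - resp_b > threshold] = 0       # first main direction
    direction[resp_b - resp_a > threshold] = 1       # second main direction
    return direction
```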
3. The image sharpening method according to claim 1, wherein the determining a determination direction of each pixel point in the input image based on a plurality of direction filters and a direction determination rule includes the steps of:
setting a first main direction and a second main direction which are perpendicular to each other and a first secondary direction and a second secondary direction which are perpendicular to each other, wherein the first secondary direction is between the first main direction and the second main direction;
filtering the input image with a first main direction filter, a second main direction filter, a first secondary direction filter and a second secondary direction filter respectively, and taking the absolute value of the filtered results to obtain a response value of each pixel point;
for each pixel point, determining the determination direction based on the response value corresponding to each filter according to the direction determination rule;
the direction determination rule includes:
selecting according to a first condition: if the difference between the largest direction filter response value and each of the other direction filter response values is greater than a first threshold value, taking the direction corresponding to the largest direction filter response value as the determination direction;
selecting according to a second condition: if the absolute value of the difference between the two main direction filter response values is greater than the first threshold value and the absolute value of the difference between the larger main direction filter response value and one of the secondary direction filter response values is smaller than the first threshold value, taking the direction corresponding to the larger main direction filter response value as the determination direction.
4. The image sharpening method according to claim 1, wherein the calculating of the reference signals on both sides perpendicular to the determination direction, respectively, comprises the steps of:
respectively taking a plurality of reference pixel points on the two sides perpendicular to the determination direction, and respectively calculating the average value of the reference pixel points on each side as the reference signal of that side;
the calculating a base signal of the pixel point based on the reference signals on the two sides includes the following steps:
interpolating based on the reference signals on the two sides to obtain the base signal of the pixel point.
5. The image sharpening method of claim 4, wherein said respectively taking a plurality of reference pixel points on both sides perpendicular to said determination direction comprises the steps of:
selecting an adjacent pixel point as a center point on each side perpendicular to the determination direction;
and respectively selecting a plurality of pixel points on the two sides of the center point along the determination direction, and taking the selected pixel points together with the center point as the reference pixel points of that side.
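Claims 4 and 5 together describe how the base signal of one pixel is formed: on each side perpendicular to the determination direction the adjacent pixel is taken as a center point, its neighbors along the direction are averaged with it into a side reference, and the base signal interpolates the two references. A sketch, assuming a symmetric 3-pixel window and equal-weight interpolation (the claims leave the window size and interpolation weights open):

```python
import numpy as np

def base_signal_at(luma, y, x, horizontal, radius=1):
    """Base signal of the pixel at (y, x) whose determination direction is
    horizontal (True) or vertical (False), per the scheme of claims 4-5."""
    h, w = luma.shape
    refs = []
    for side in (-1, 1):                       # the two sides perpendicular to the direction
        cy = y + side if horizontal else y     # adjacent pixel on that side
        cx = x if horizontal else x + side     #   is the center point
        pts = []
        for k in range(-radius, radius + 1):   # center point plus neighbors along the direction
            py = cy if horizontal else cy + k
            px = cx + k if horizontal else cx
            pts.append(luma[min(max(py, 0), h - 1), min(max(px, 0), w - 1)])
        refs.append(sum(pts) / len(pts))       # side reference = average of reference pixels
    return 0.5 * (refs[0] + refs[1])           # equal-weight interpolation between the sides
```

On a horizontal intensity ramp with a horizontal determination direction, the base signal simply reproduces the local value, as expected for a detail-free region.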
6. The image sharpening method according to claim 1, wherein the calculating a high-frequency signal part to be sharpened in the original brightness signal based on the base signal of each pixel point, and sharpening the high-frequency signal part to be sharpened in the original brightness signal to obtain a sharpened brightness signal, includes the following steps:
calculating the difference between the original brightness signal and the base signal of each pixel point to obtain the high-frequency signal part to be sharpened;
processing the high-frequency signal part to be sharpened based on a preset sharpening processing function to obtain a sharpened high-frequency signal part;
and adding the sharpened high-frequency signal part to the base signal to obtain the sharpened brightness signal.
7. The image sharpening method as defined in claim 6, wherein said processing said high frequency signal portion to be sharpened based on a preset sharpening processing function to obtain a sharpened high frequency signal portion comprises the steps of:
multiplying the high-frequency signal part to be sharpened by a preset sharpening coefficient to obtain the sharpened high-frequency signal part; or
searching a sharpened signal corresponding to the high-frequency signal part to be sharpened based on a preset mapping relation between the signal before sharpening and the signal after sharpening, and taking the sharpened signal as the high-frequency signal part after sharpening.
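The sharpening path of claims 6 and 7 can be sketched as follows. The gain value and the shape of the mapping table are illustrative assumptions, with `np.interp` standing in for the claimed lookup of a pre-sharpening to post-sharpening mapping.

```python
import numpy as np

def sharpen_luma(luma, base, gain=1.5, lut=None):
    """Sharpened luminance per claims 6-7: the high-frequency part is the
    original minus the base signal; it is boosted either by a fixed sharpening
    coefficient or through a (pre-sharpening values, post-sharpening values)
    mapping table, then added back to the base signal."""
    hf = luma - base                           # high-frequency part to be sharpened
    if lut is not None:
        # Mapping-table variant: look up the sharpened value of each
        # high-frequency sample (linearly interpolating between table entries).
        hf_sharp = np.interp(hf, lut[0], lut[1])
    else:
        hf_sharp = gain * hf                   # coefficient variant
    return base + hf_sharp                     # sharpened luminance signal
```

The mapping-table variant permits a non-linear response, e.g. leaving very small high-frequency differences (noise) unamplified while boosting larger edge transitions.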
8. An image sharpening system, characterized in that it is applied to the image sharpening method according to any one of claims 1 to 7, said system comprising:
the image acquisition module is used for acquiring brightness information of an input image, wherein the brightness information comprises original brightness signals of all pixel points of the input image;
a direction determination module for determining a determination direction of each pixel point in the input image based on a plurality of direction filters and a direction determination rule;
the image sharpening module is used for respectively calculating reference signals on the two sides perpendicular to the determination direction for each pixel point, and calculating a base signal of the pixel point based on the reference signals on the two sides; and calculating a high-frequency signal part to be sharpened in the original brightness signal based on the base signal of each pixel point, and sharpening the high-frequency signal part to be sharpened in the original brightness signal to obtain a sharpened brightness signal.
9. An image sharpening device, comprising:
a processor;
a memory having stored therein executable instructions of the processor;
wherein the processor is configured to perform the steps of the image sharpening method of any one of claims 1 to 7 via execution of the executable instructions.
10. A computer-readable storage medium storing a program, characterized in that the program when executed by a processor implements the steps of the image sharpening method of any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311272067.6A CN117333386A (en) | 2023-09-28 | 2023-09-28 | Image sharpening method, system, device and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311272067.6A CN117333386A (en) | 2023-09-28 | 2023-09-28 | Image sharpening method, system, device and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117333386A true CN117333386A (en) | 2024-01-02 |
Family
ID=89291181
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311272067.6A Pending CN117333386A (en) | 2023-09-28 | 2023-09-28 | Image sharpening method, system, device and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117333386A (en) |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP4160258B2 (en) | A new perceptual threshold determination for gradient-based local contour detection | |
US7321699B2 (en) | Signal intensity range transformation apparatus and method | |
US11941785B2 (en) | Directional scaling systems and methods | |
US9165345B2 (en) | Method and system for noise reduction in video systems | |
US11551336B2 (en) | Chrominance and luminance enhancing systems and methods | |
US9355435B2 (en) | Method and system for adaptive pixel replacement | |
CN115631117A (en) | Image enhancement method, device, detection system and storage medium for defect detection | |
Shi et al. | Underwater image enhancement based on adaptive color correction and multi-scale fusion | |
CN117333386A (en) | Image sharpening method, system, device and storage medium | |
US11321813B2 (en) | Angular detection using sum of absolute difference statistics systems and methods | |
US10719916B2 (en) | Statistical noise estimation systems and methods | |
CN114648448A (en) | Image enhancement method, device, equipment and storage medium | |
US11532106B2 (en) | Color gradient capture from source image content | |
CN118799348A (en) | Tone and gray level edge detection method and system based on self-adaptive fusion method | |
Yuxi et al. | Heterogeneity Constrained Color Ellipsoid Prior Image Dehazing Algorithm |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||