CN116939374A - Lens shading correction method and device and electronic equipment - Google Patents

Lens shading correction method and device and electronic equipment

Info

Publication number
CN116939374A
CN116939374A (application CN202311162970.7A)
Authority
CN
China
Prior art keywords
grid
coefficient
weight
current pixel
brightness
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202311162970.7A
Other languages
Chinese (zh)
Other versions
CN116939374B (en)
Inventor
Liu Xiaowei (刘晓伟)
Du Jianguo (杜建国)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guixin Technology Shenzhen Co ltd
Original Assignee
Guixin Technology Shenzhen Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guixin Technology Shenzhen Co ltd filed Critical Guixin Technology Shenzhen Co ltd
Priority to CN202311162970.7A priority Critical patent/CN116939374B/en
Publication of CN116939374A publication Critical patent/CN116939374A/en
Application granted granted Critical
Publication of CN116939374B publication Critical patent/CN116939374B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof
    • H04N23/81Camera processing pipelines; Components thereof for suppressing or minimising disturbance in the image signal generation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof
    • H04N23/84Camera processing pipelines; Components thereof for processing colour signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/50Control of the SSIS exposure
    • H04N25/57Control of the dynamic range
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/60Noise processing, e.g. detecting, correcting, reducing or removing noise
    • H04N25/61Noise processing, e.g. detecting, correcting, reducing or removing noise the noise originating only from the lens unit, e.g. flare, shading, vignetting or "cos4"
    • H04N25/611Correction of chromatic aberration
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/14Picture signal circuitry for video frequency region
    • H04N5/142Edging; Contouring

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)

Abstract

The invention provides a lens shading correction method, a lens shading correction device and electronic equipment. The method comprises the following steps: obtaining a correction coefficient of each input pixel in an input image, and multiplying the input pixel value by the correction coefficient to obtain an initial correction result; obtaining an average brightness value of each grid in the input image, wherein the input image is evenly divided into a plurality of grids; determining a grid coefficient of each grid according to the average brightness value of each grid; acquiring the space weight and the brightness weight of each grid in the current pixel neighborhood; determining the self-adaptive coefficient of the current pixel according to the grid coefficient of each grid and the space weight and brightness weight of each grid in the current pixel neighborhood; and determining a final correction result of the current pixel according to the input pixel value, the initial correction result and the self-adaptive coefficient. The invention reduces, as far as possible, the loss of image detail in highlight areas during correction.

Description

Lens shading correction method and device and electronic equipment
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a lens shading correction method, a lens shading correction device, and an electronic device.
Background
Lens shading is a phenomenon in which the periphery of the image frame darkens, caused by the optical characteristics of the lens and similar factors. Lens shading is generally divided into two types: luma shading, i.e., a bright center with dark surroundings; and color shading, i.e., inconsistent color between the center and the surroundings.
The lens shading correction (Lens Shading Correction, LSC) module is an algorithm module on the image signal processor chip that corrects the lens shading problem. At present, the commonly used LSC algorithm is the grid method: the image frame is divided into M×N grids, calibration is performed in advance by shooting a uniformly illuminated image, and the correction coefficients at the endpoints of each grid are stored. During actual shooting, the correction coefficients are calibrated according to real-time environment information, and the calibrated coefficients are reconfigured into the LSC module. The LSC module performs coefficient interpolation according to the grid in which each pixel point is located to obtain the correction coefficient of each pixel point, multiplies the input pixel by the correction coefficient, and performs a Clip operation that truncates data exceeding the valid range, obtaining the corrected output pixel:
$$P_{out} = \mathrm{Clip}(P_{in} \times gain)$$
where $P_{in}$ is the input pixel, $gain$ is the correction coefficient of the pixel obtained by interpolating the adjacent grid coefficients, and $P_{out}$ is the output pixel.
When the above LSC algorithm is used to correct the pixel point in the highlight region, after the input pixel is multiplied by the correction coefficient, the result may exceed the effective pixel value range, and the image detail is lost at the position due to overexposure (that is, the pixel value reaches the upper limit of the effective pixel value) caused by the Clip operation. This loss of detail due to the Clip operation is not recoverable.
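The conventional multiply-then-Clip flow described above can be sketched as follows (a minimal illustration, assuming a 10-bit valid pixel range of 0-1023; the gain values below are illustrative, not taken from the patent):

```python
def lsc_clip(pixel_in, gain, max_val=1023):
    """Conventional LSC: multiply the input pixel by its interpolated
    correction coefficient, then Clip to the valid range."""
    corrected = pixel_in * gain
    # Clip: any value beyond max_val is truncated -- this is where
    # highlight detail is irreversibly lost.
    return min(round(corrected), max_val)
```

For example, two distinct highlight pixels 850 and 900 both map to 1023 after a 1.3x gain, so their difference can no longer be recovered.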
Disclosure of Invention
The lens shading correction method, the lens shading correction device and the electronic equipment can reduce the loss of image details of a highlight area in the correction process as much as possible.
In a first aspect, the present invention provides a lens shading correction method, the method including:
obtaining correction coefficients of each input pixel in an input image, and multiplying the input pixel value by the correction coefficients to obtain an initial correction result;
obtaining an average brightness value of each grid in the input image, wherein the input image is divided into a plurality of grids on average;
determining a grid coefficient of each grid according to the average brightness value of each grid;
acquiring the space weight and the brightness weight of each grid of the current pixel neighborhood;
determining the self-adaptive coefficient of the current pixel according to the grid coefficient of each grid, and the space weight and the brightness weight of each grid of the neighborhood of the current pixel;
and determining a final correction result of the current pixel according to the input pixel value, the initial correction result and the self-adaptive coefficient.
Optionally, the obtaining an average brightness value of each grid in the input image includes:
calculating the sum of brightness values of the grids according to the pixel values of all pixel points in the grids;
dividing the sum of the brightness values by the number of pixels in the grid to obtain the average brightness value of the grid.
Optionally, determining the grid coefficient of each grid according to the average brightness value of each grid includes:
calculating initial grid coefficients of the grid according to the average brightness value of the grid;
and performing time domain filtering on the initial grid coefficient, and performing weighted average on the initial grid coefficient and the initial grid coefficient of the previous N frames to obtain a final grid coefficient of the grid, wherein N is a positive integer.
Optionally, calculating the initial grid coefficient of the grid according to the average brightness value of the grid includes:
when the average brightness value of the grid is less than or equal to T_highlight0, coef = 1;
when the average brightness value of the grid is greater than T_highlight0 and less than T_highlight, $coef = 1 + \frac{luma - T\_highlight0}{T\_highlight - T\_highlight0} \times (highlight\_coef - 1)$;
when the average brightness value of the grid is greater than or equal to T_highlight, $coef = highlight\_coef + \frac{luma - T\_highlight}{luma\_max - T\_highlight} \times (lumamax\_coef - highlight\_coef)$;
wherein T_highlight0, T_highlight, luma_max, highlight_coef, lumamax_coef are preset values, coef is the initial grid coefficient of the grid, and luma is the average brightness value of the grid.
Optionally, the acquiring the spatial weight and the brightness weight of each grid of the current pixel neighborhood includes:
determining the space weight of each grid by using a preset weight function according to the distance between each grid and the current pixel;
and determining the brightness weight of each grid according to the absolute brightness difference between the current pixel and the average brightness value of each grid and a preset brightness weight curve.
Optionally, the determining the brightness weight of each grid according to the absolute brightness difference between the current pixel and the average brightness value of each grid and the preset brightness weight curve includes:
when the absolute brightness difference is less than or equal to th1, the brightness weight $w$ is equal to w_max;
when the absolute brightness difference is greater than or equal to th2, the brightness weight $w$ is equal to w_min;
when the absolute brightness difference is greater than th1 and less than th2, the brightness weight $w$ is:
$$w = w\_max + \frac{\Delta luma - th1}{th2 - th1} \times (w\_min - w\_max)$$
wherein th1, th2, w_max, w_min are preset values, and $\Delta luma$ is the absolute brightness difference.
Optionally, the determining the adaptive coefficient of the current pixel according to the grid coefficient of each grid and the spatial weight and the brightness weight of each grid of the neighborhood of the current pixel includes:
and carrying out weighted average on grid coefficients of grids of the current pixel neighborhood by using the space weight and the brightness weight of the current pixel neighborhood to obtain the self-adaptive coefficient of the current pixel.
Optionally, the determining the final correction result of the current pixel according to the input pixel value, the initial correction result and the adaptive coefficient includes:
multiplying the initial correction result by the adaptive coefficient;
comparing the obtained product with the input pixel value, and taking the maximum value;
and performing Clip operation on the maximum value to obtain a final correction result of the current pixel.
In a second aspect, the present invention provides a lens shading correction device, the device comprising:
the first acquisition unit is used for acquiring the correction coefficient of each input pixel in the input image, and multiplying the input pixel value by the correction coefficient to obtain an initial correction result;
a second obtaining unit, configured to obtain an average brightness value of each grid in the input image, where the input image is divided into a plurality of grids on average;
a first determining unit configured to determine a grid coefficient of each grid according to an average luminance value of each grid;
the third acquisition unit is used for acquiring the space weight and the brightness weight of each grid of the current pixel neighborhood;
the second determining unit is used for determining the self-adaptive coefficient of the current pixel according to the grid coefficient of each grid, the space weight and the brightness weight of each grid of the neighborhood of the current pixel;
and the third determining unit is used for determining the final correction result of the current pixel according to the input pixel value, the initial correction result and the adaptive coefficient.
In a third aspect, the present invention provides an electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the lens shading correction method described above.
In a fourth aspect, the present invention provides a chip comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the lens shading correction method described above.
In a fifth aspect, the present invention provides a computer readable storage medium, where the computer readable storage medium stores computer instructions that, when executed by a processor, implement the lens shading correction method described above.
According to the lens shading correction method, device and electronic equipment provided by the embodiment of the invention, after the input pixels are initially corrected by using the correction coefficients, the Clip operation is not directly performed, statistical information of image gridding is combined, a weighting strategy of combining spatial weight and brightness weight is adopted, the halation phenomenon is effectively restrained while the edge condition is maintained, the overexposure condition of the highlight region after lens shading correction is reduced, and the loss of image details of the highlight region in the correction process is reduced as much as possible.
Drawings
FIG. 1 is a flowchart of a lens shading correction method according to an embodiment of the present invention;
FIG. 2 is a flowchart of a lens shading correction method according to another embodiment of the present invention;
FIG. 3 is a schematic diagram of bilinear interpolation of pixel points according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of calculating RGB values of pixel points according to an embodiment of the present invention;
FIG. 5 is a graph illustrating the calculation of grid coefficients according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a neighborhood space according to which spatial weights and luminance weights are calculated according to an embodiment of the present invention;
FIG. 7 is a graph illustrating the calculation of spatial weights according to an embodiment of the present invention;
FIG. 8 is a graph illustrating the calculation of brightness weights according to an embodiment of the present invention;
fig. 9 is a schematic structural diagram of a lens shading correction device according to an embodiment of the invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
An embodiment of the present invention provides a lens shading correction method, where the method is applied to an electronic device, as shown in fig. 1, and the method includes:
s11, obtaining correction coefficients of each input pixel in the input image, and multiplying the input pixel value by the correction coefficients to obtain an initial correction result.
S12, obtaining an average brightness value of each grid in the input image, wherein the input image is divided into a plurality of grids in an average mode.
S13, determining the grid coefficient of each grid according to the average brightness value of each grid.
S14, acquiring the space weight and the brightness weight of each grid of the current pixel neighborhood.
S15, according to the grid coefficient of each grid, and the space weight and the brightness weight of each grid of the neighborhood of the current pixel, determining the self-adaptive coefficient of the current pixel.
S16, determining a final correction result of the current pixel according to the input pixel value, the initial correction result and the self-adaptive coefficient.
According to the lens shading correction method provided by the embodiment of the invention, after the input pixels are initially corrected by using the correction coefficients, the Clip operation is not directly performed, but the statistical information of the image meshing is combined, a weighting strategy of combining the spatial weight and the brightness weight is adopted, the halo phenomenon is effectively restrained while the edge situation is maintained, the overexposure situation of the highlight area after lens shading correction is reduced, and the loss of image details of the highlight area in the correction process is reduced as much as possible.
The lens shading correction method of the present invention is described in detail with reference to specific embodiments.
As shown in fig. 2, the lens shading correction method provided in this embodiment includes the following steps:
s21, acquiring a pixel value of each input pixel in the input image.
S22, obtaining correction coefficients of each input pixel in the input image, and multiplying the input pixel value by the correction coefficients to obtain an initial correction result.
In this embodiment, coefficient interpolation may be performed using bilinear interpolation or bicubic interpolation to determine the correction coefficient for each input pixel. Taking bilinear interpolation as an example, as shown in fig. 3, a point P is a pixel point in the grid abcd, and the correction coefficient of P is calculated by using four vertices of the grid to perform bilinear interpolation as follows:
$$coef_P = (1-u)(1-v)\,coef_a + u(1-v)\,coef_b + (1-u)v\,coef_c + uv\,coef_d$$
$$u = \frac{x_P - x_a}{x_b - x_a}, \qquad v = \frac{y_P - y_a}{y_c - y_a}$$
where $x_a$, $x_b$, $x_c$, $x_d$, $x_P$ are the abscissae of points a, b, c, d and P, $y_a$, $y_c$, $y_P$ are the ordinates of points a, c and P, and $coef_P$ is the correction coefficient of P.
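The bilinear interpolation of the correction coefficient can be sketched as follows (a minimal sketch, assuming a is the top-left vertex, b top-right, c bottom-left and d bottom-right of the grid cell; the function and parameter names are illustrative):

```python
def interp_coef(xa, xb, ya, yc, xp, yp, coef_a, coef_b, coef_c, coef_d):
    """Bilinearly interpolate the correction coefficient of point P
    inside grid cell abcd from the coefficients at its four vertices."""
    u = (xp - xa) / (xb - xa)  # horizontal position of P within the cell
    v = (yp - ya) / (yc - ya)  # vertical position of P within the cell
    return ((1 - u) * (1 - v) * coef_a + u * (1 - v) * coef_b
            + (1 - u) * v * coef_c + u * v * coef_d)
```

At a vertex the result reduces to that vertex's coefficient, and at the cell center it is the average of all four.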
S23, dividing the input image into M multiplied by N grids on average, and obtaining the average brightness value of each grid.
Specifically, the sum of the luminance values of each grid, luma_sum, is first determined. Two methods can be employed:
(1) The LSC module input image is typically a Bayer image. The Bayer image may be regarded as a single-channel gray image in which the pixel value of each point is a luminance value; summing the pixel values pixel_value of the pixel points in each grid gives the sum of luminance values of the current grid: $luma\_sum = \sum pixel\_value$.
(2) As shown in fig. 4, the RGB values of each pixel point are first interpolated, using the average value of the same channel in the adjacent area as the pixel value of the corresponding channel at the current point.
The brightness is then calculated by RGB weighting:
$$luma = c_R \cdot R + c_G \cdot G + c_B \cdot B$$
where $c_R$, $c_G$, $c_B$ are the corresponding weighting coefficients.
Then, the sum of luminance values luma_sum is divided by the number of pixels in the grid to obtain the average brightness value of the grid.
S24, determining the grid coefficient of each grid according to the average brightness value of each grid.
Specifically, the initial grid coefficient coef of each grid may be obtained from the curve shown in fig. 5:
when luma (the average brightness) is less than or equal to T_highlight0, coef = 1;
when luma is greater than T_highlight0 and less than T_highlight, $coef = 1 + \frac{luma - T\_highlight0}{T\_highlight - T\_highlight0} \times (highlight\_coef - 1)$;
when luma is greater than or equal to T_highlight, $coef = highlight\_coef + \frac{luma - T\_highlight}{luma\_max - T\_highlight} \times (lumamax\_coef - highlight\_coef)$;
wherein T_highlight0, T_highlight, luma_max, highlight_coef, lumamax_coef are preset values.
The main functions of the curve are:
for the highlight interval, the coefficient is less than 1, which suppresses the LSC-corrected pixels to some extent and reduces the possibility of overexposure; the highlight interval is divided into a plurality of inclined line segments, so that different sub-intervals are suppressed at different rates;
for non-highlight regions, the coefficient is 1, maintaining the pixel value.
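Under the assumption that each segment of the fig. 5 curve is linear, the initial grid coefficient can be sketched as below (the threshold and coefficient values are illustrative placeholders, not values from the patent):

```python
def initial_grid_coef(luma, T_highlight0=700.0, T_highlight=900.0,
                      luma_max=1023.0, highlight_coef=0.8, lumamax_coef=0.6):
    """Piecewise-linear decreasing curve: 1 in non-highlight regions,
    falling to highlight_coef at T_highlight and to lumamax_coef at luma_max."""
    if luma <= T_highlight0:
        return 1.0                        # non-highlight: keep pixel value
    if luma < T_highlight:                # first inclined segment
        t = (luma - T_highlight0) / (T_highlight - T_highlight0)
        return 1.0 + t * (highlight_coef - 1.0)
    # second inclined segment, up to luma_max
    t = (luma - T_highlight) / (luma_max - T_highlight)
    return highlight_coef + t * (lumamax_coef - highlight_coef)
```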
Further, to keep transitions between frames smooth during continuous multi-frame processing and avoid severe jitter, the initial grid coefficient obtained above can be further processed by temporal filtering: the current initial grid coefficient and the initial grid coefficients of the previous N frames are weighted-averaged to obtain the final grid coefficient:
$$coef\_final = \sum_{i=0}^{N} W_i \cdot coef_i$$
where $coef_0$ is the current frame's initial grid coefficient, $coef_i$ ($i \ge 1$) are those of the previous N frames, and $W_i$ is the weight value of each grid coefficient.
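The temporal filter can be sketched as a normalized weighted average over the current and previous N frames (a minimal sketch; the weight values used below are illustrative):

```python
def temporal_filter(coef_history, weights):
    """coef_history[0] is the current frame's initial grid coefficient,
    coef_history[1:] are the previous N frames'; weights sum to 1 so the
    result stays in the same range as the inputs."""
    assert len(coef_history) == len(weights)
    return sum(c * w for c, w in zip(coef_history, weights))
```

For example, `temporal_filter([0.8, 0.9, 1.0], [0.5, 0.3, 0.2])` blends the current coefficient 0.8 with the two previous frames' coefficients.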
S25, acquiring the space weight and the brightness weight of each grid of the current pixel neighborhood.
The current pixel neighborhood range may be a square neighborhood of r×r, and this embodiment is illustrated by taking 3×3 as an example. As shown in FIG. 6, the P point is a point in the grid G4, and G0-G8 are the centers of adjacent 3X 3 grids.
(1) Determination of spatial weights
The spatial weight of the i-th grid is $ws_i = f(d_i)$, where $d_i$ is the distance between the center of grid Gi and the current pixel, and f is a weight function of the distance. The distance can be any common distance, such as the Euclidean distance, the Manhattan distance, or the Chebyshev distance. The weight function may follow a decreasing curve (the greater the distance, the smaller the weight). To facilitate hardware implementation, a piecewise-linear approximation can be used to save hardware resources, as shown in fig. 7, where th1 and th2 are preset values.
According to the distance between each grid Gi and the current pixel P, the space weight of each grid is searched and determined in a weight function curve shown in FIG. 7.
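The piecewise-linear spatial weight lookup of fig. 7 can be sketched as below (a minimal sketch; th1, th2 and the weight endpoints are illustrative values, and the distance is assumed to be precomputed with any of the metrics above):

```python
def spatial_weight(d, th1=1.0, th2=3.0, w_max=16.0, w_min=1.0):
    """Decreasing piecewise-linear weight function of distance:
    flat at w_max up to th1, flat at w_min beyond th2, linear between."""
    if d <= th1:
        return w_max
    if d >= th2:
        return w_min
    t = (d - th1) / (th2 - th1)
    return w_max + t * (w_min - w_max)
```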
(2) Determination of luminance weights
First, the absolute luminance difference between the current pixel point and the average brightness value of each grid is calculated: $\Delta luma_i = |luma_P - luma_{G_i}|$. A decreasing brightness-weight curve (the larger the difference, the smaller the weight) can then be set, and the brightness weight $wl_i$ of the i-th grid is determined from the curve shown in fig. 8:
when the absolute brightness difference is less than or equal to th1, the brightness weight $wl_i$ is equal to w_max;
when the absolute brightness difference is greater than or equal to th2, the brightness weight $wl_i$ is equal to w_min;
when the absolute brightness difference is greater than th1 and less than th2, the brightness weight $wl_i$ is:
$$wl_i = w\_max + \frac{\Delta luma_i - th1}{th2 - th1} \times (w\_min - w\_max)$$
wherein th1, th2, w_max, w_min are preset values.
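The fig. 8 brightness-weight curve can be sketched the same way (a minimal sketch; th1, th2, w_max, w_min below are illustrative values, not from the patent):

```python
def luminance_weight(delta_luma, th1=50.0, th2=200.0, w_max=16.0, w_min=1.0):
    """Decreasing weight of the absolute luminance difference between the
    current pixel and a grid's average brightness: large differences
    (likely edges) get small weight, which protects image edges."""
    if delta_luma <= th1:
        return w_max
    if delta_luma >= th2:
        return w_min
    t = (delta_luma - th1) / (th2 - th1)
    return w_max + t * (w_min - w_max)
```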
S26, carrying out weighted average on grid coefficients of grids of the current pixel neighborhood by using the spatial weight and the brightness weight of the current pixel neighborhood to obtain the self-adaptive coefficient of the current pixel.
Specifically, according to the results of step S24 and step S25, the final grid coefficients of the grids in the 3×3 grid neighborhood are weighted-averaged to obtain the adaptive coefficient of the current pixel:
$$coef\_adp = \frac{\sum_{i} coef\_final_i \cdot ws_i \cdot wl_i}{\sum_{i} ws_i \cdot wl_i}$$
where $coef\_final_i$ is the final grid coefficient of grid Gi, $ws_i$ is its spatial weight, and $wl_i$ is its brightness weight.
The weighting method considers the space information and the brightness information at the same time, so that the method has a certain protection effect on the image edge.
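The combined weighting of step S26 can be sketched as a normalized weighted average over the neighborhood grids, each weighted by the product of its spatial and brightness weights (a minimal sketch; names are illustrative):

```python
def adaptive_coef(grid_coefs, spatial_w, luma_w):
    """Weighted average of the neighborhood grid coefficients, each grid
    weighted by the product of its spatial and brightness weights."""
    weights = [s * l for s, l in zip(spatial_w, luma_w)]
    return sum(c * w for c, w in zip(grid_coefs, weights)) / sum(weights)
```

A grid whose brightness differs strongly from the current pixel contributes little, which is what protects edges.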
And S27, determining a final correction result of the current pixel according to the input pixel value, the initial correction result and the self-adaptive coefficient.
Specifically, the initial correction result is multiplied by the adaptive coefficient; the obtained product is compared with the input pixel value and the maximum of the two is taken, i.e. the suppression of the highlight region never falls below the input pixel; finally, the Clip operation is performed on the maximum to obtain the final correction result of the current pixel:
$$P_{out} = \mathrm{Clip}\big(\max(P_{in},\ P_{lsc} \times coef\_adp)\big)$$
where $P_{lsc}$ is the initial correction result of the current pixel obtained in step S22, and $coef\_adp$ is the adaptive coefficient.
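The final blend of step S27 can be sketched as below (a minimal sketch assuming a 10-bit valid range; the gain and coefficient values shown are illustrative):

```python
def final_correction(p_in, gain, coef_adp, max_val=1023):
    """Scale the initial LSC result by the adaptive coefficient, but never
    fall below the input pixel, then Clip to the valid range."""
    p_lsc = p_in * gain              # initial correction result (step S22)
    p = max(p_in, p_lsc * coef_adp)  # suppression floor = input pixel
    return min(round(p), max_val)    # Clip
```

With `coef_adp` below 1 in a highlight grid, a pixel of 900 with gain 1.3 yields 1170 × 0.8 = 936 instead of being clipped at 1023, so relative highlight detail survives.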
The embodiment of the invention also provides a lens shading correction device, which is located in an electronic device, as shown in fig. 9, and includes:
a first obtaining unit 11, configured to obtain a correction coefficient of each input pixel in the input image, and multiply the input pixel value by the correction coefficient to obtain an initial correction result;
a second obtaining unit 12, configured to obtain an average brightness value of each grid in the input image, where the input image is divided into a plurality of grids on average;
a first determining unit 13 for determining a grid coefficient of each grid according to the average brightness value of each grid;
a third obtaining unit 14, configured to obtain a spatial weight and a brightness weight of each grid of the current pixel neighborhood;
a second determining unit 15, configured to determine an adaptive coefficient of the current pixel according to the grid coefficient of each grid, and the spatial weight and the brightness weight of each grid in the neighborhood of the current pixel;
a third determining unit 16, configured to determine a final correction result of the current pixel according to the input pixel value, the initial correction result and the adaptive coefficient.
According to the lens shading correction device provided by the embodiment of the invention, after the input pixels are initially corrected by using the correction coefficients, the Clip operation is not directly performed, but the statistical information of the image meshing is combined, a weighting strategy of combining the spatial weight and the brightness weight is adopted, the halo phenomenon is effectively restrained while the edge situation is maintained, the overexposure situation of the highlight area after lens shading correction is reduced, and the loss of image details of the highlight area in the correction process is reduced as much as possible.
Optionally, the second obtaining unit 12 is further configured to calculate a sum of brightness values of the grid according to pixel values of each pixel point in the grid; dividing the sum of the brightness values by the number of pixels in the grid to obtain the average brightness value of the grid.
Optionally, the first determining unit 13 is further configured to calculate an initial grid coefficient of the grid according to the average brightness value of the grid; and performing time domain filtering on the initial grid coefficient to obtain a final grid coefficient of the grid.
Optionally, the third acquiring unit 14 includes:
the first determining module is used for determining the space weight of each grid by utilizing a preset weight function according to the distance between each grid and the current pixel;
and the second determining module is used for determining the brightness weight of each grid according to the absolute brightness difference between the current pixel and the average brightness value of each grid and a preset brightness weight curve.
The second determining unit 15 is further configured to perform weighted average on the grid coefficients of each grid of the current pixel neighborhood by using the spatial weight and the brightness weight of the current pixel neighborhood, so as to obtain an adaptive coefficient of the current pixel.
The third determining unit 16 is further configured to multiply the initial correction result with the adaptive coefficient; comparing the obtained product with the input pixel value, and taking the maximum value; and performing Clip operation on the maximum value to obtain a final correction result of the current pixel.
The device of the present embodiment may be used to execute the technical solution of the foregoing method embodiment, and its implementation principle and technical effects are similar, and are not described herein again.
The embodiment of the invention also provides electronic equipment, which comprises:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the lens shading correction method described above.
The embodiment of the invention also provides a chip, which comprises:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the lens shading correction method described above.
The embodiment of the invention also provides a computer readable storage medium, wherein the computer readable storage medium stores computer instructions, and the computer instructions realize the lens shading correction method when being executed by a processor.
Those skilled in the art will appreciate that implementing all or part of the above-described method embodiments may be accomplished by way of computer programs, which may be stored on a computer readable storage medium, which when executed may comprise the steps of the method embodiments described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), or the like.
The foregoing is merely illustrative of the present invention, and the present invention is not limited thereto, and any changes or substitutions easily contemplated by those skilled in the art within the scope of the present invention should be included in the present invention. Therefore, the protection scope of the present invention should be subject to the protection scope of the claims.

Claims (12)

1. A method for correcting lens shading, the method comprising:
obtaining correction coefficients of each input pixel in an input image, and multiplying the input pixel value by the correction coefficients to obtain an initial correction result;
obtaining an average brightness value of each grid in the input image, wherein the input image is divided into a plurality of grids on average;
determining a grid coefficient of each grid according to the average brightness value of each grid;
acquiring the space weight and the brightness weight of each grid of the current pixel neighborhood;
determining the self-adaptive coefficient of the current pixel according to the grid coefficient of each grid, and the space weight and the brightness weight of each grid of the neighborhood of the current pixel;
and determining a final correction result of the current pixel according to the input pixel value, the initial correction result and the self-adaptive coefficient.
2. The method according to claim 1, wherein obtaining the average brightness value of each grid in the input image comprises:
calculating the sum of the brightness values of all pixels in the grid; and
dividing that sum by the number of pixels in the grid to obtain the average brightness value of the grid.
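For illustration only (not part of the claims), the per-grid averaging of claim 2 can be sketched in Python; the even grid split and a single-channel luminance image are assumptions made here:

```python
import numpy as np

def grid_mean_luma(image, grid_rows, grid_cols):
    """Average luminance of each grid cell.

    Illustrative sketch of claim 2: sum the pixel values in each grid,
    then divide by the pixel count. Assumes the image height and width
    are multiples of the grid counts.
    """
    h, w = image.shape
    gh, gw = h // grid_rows, w // grid_cols
    # Reshape so each grid cell becomes one (gh, gw) block, then average it.
    blocks = image[:gh * grid_rows, :gw * grid_cols].reshape(
        grid_rows, gh, grid_cols, gw)
    return blocks.mean(axis=(1, 3))
```

For a 4x4 image split into a 2x2 grid, each output entry is the mean of one quadrant.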
3. The method according to claim 1 or 2, wherein determining the grid coefficient of each grid according to the average brightness value of each grid comprises:
calculating an initial grid coefficient of the grid according to the average brightness value of the grid; and
temporally filtering the initial grid coefficient by taking a weighted average of it and the initial grid coefficients of the previous N frames, to obtain the final grid coefficient of the grid, where N is a positive integer.
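As a non-limiting sketch of the temporal filtering in claim 3: the claim only requires a weighted average over the current and previous N frames; the uniform weighting below is an assumption made for illustration:

```python
from collections import deque
import numpy as np

class TemporalFilter:
    """Sketch of claim-3 temporal filtering.

    The final grid coefficient is a weighted average of the current
    initial coefficient and those of the previous N frames. Equal
    weights are assumed here; the claim does not fix the weights.
    """
    def __init__(self, n_frames):
        # Keep only the initial coefficients of the last N frames.
        self.history = deque(maxlen=n_frames)

    def filter(self, initial_coef):
        coefs = list(self.history) + [initial_coef]
        final = np.mean(coefs, axis=0)  # uniform weights (assumption)
        self.history.append(initial_coef)
        return final
```

With N = 2, the third frame's output averages the current and the two previous initial coefficients.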
4. The method according to claim 3, wherein calculating the initial grid coefficient of the grid according to the average brightness value of the grid comprises:
when the average brightness value of the grid is less than or equal to t_highlight0, coef = 1;
when the average brightness value of the grid is greater than t_highlight0 and less than t_highlight, coef is given by a first preset formula;
when the average brightness value of the grid is greater than or equal to t_highlight, coef is given by a second preset formula;
wherein t_highlight0, t_highlight, luma_max, highlight_coef, and lumamax_coef are preset values, coef is the initial grid coefficient of the grid, and luma is the average brightness value of the grid.
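The two formulas of claim 4 appear only as images in the publication text. The sketch below is therefore an assumption: linear interpolation from 1 toward highlight_coef between the thresholds, and from highlight_coef toward lumamax_coef up to luma_max. Only the parameter names come from the claim:

```python
def initial_grid_coef(luma, t_highlight0, t_highlight, luma_max,
                      highlight_coef, lumamax_coef):
    """Hypothetical piecewise coefficient curve for claim 4.

    The middle and upper segments use linear interpolation, which is
    an assumption; the published text omits the actual formulas.
    """
    if luma <= t_highlight0:
        return 1.0                       # no attenuation in normal range
    if luma < t_highlight:
        # Blend from 1 down toward highlight_coef (assumed linear).
        t = (luma - t_highlight0) / (t_highlight - t_highlight0)
        return 1.0 + t * (highlight_coef - 1.0)
    # Blend from highlight_coef toward lumamax_coef (assumed linear).
    luma = min(luma, luma_max)
    t = (luma - t_highlight) / (luma_max - t_highlight)
    return highlight_coef + t * (lumamax_coef - highlight_coef)
```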
5. The method according to claim 1 or 2, wherein obtaining the spatial weight and the brightness weight of each grid in the neighborhood of the current pixel comprises:
determining the spatial weight of each grid from the distance between the grid and the current pixel, using a preset weight function; and
determining the brightness weight of each grid from the absolute difference between the brightness of the current pixel and the average brightness value of the grid, using a preset brightness weight curve.
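Claim 5 leaves the "preset weight function" open. A Gaussian falloff is one common distance-based choice, used here purely as an illustrative assumption:

```python
import math

def spatial_weight(grid_center, pixel, sigma=1.0):
    """Distance-based spatial weight for claim 5.

    A Gaussian kernel is one possible preset weight function; the
    claim does not specify which function is used.
    """
    dx = grid_center[0] - pixel[0]
    dy = grid_center[1] - pixel[1]
    return math.exp(-(dx * dx + dy * dy) / (2.0 * sigma * sigma))
```

The weight is 1 when the pixel sits at the grid center and decays smoothly with distance.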
6. The method according to claim 5, wherein determining the brightness weight of each grid from the absolute brightness difference and a preset brightness weight curve comprises:
when the absolute brightness difference is less than or equal to th1, the brightness weight is equal to w_max;
when the absolute brightness difference is greater than or equal to th2, the brightness weight is equal to w_min;
when the absolute brightness difference is greater than th1 and less than th2, the brightness weight is given by a preset formula;
wherein th1, th2, w_max, and w_min are preset values.
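The middle segment of claim 6's weight curve is also an image in the publication. A linear ramp from w_max down to w_min is the natural reading of such a two-threshold curve, but it is an assumption here:

```python
def luma_weight(abs_diff, th1, th2, w_max, w_min):
    """Hypothetical brightness weight curve for claim 6.

    Flat at w_max below th1, flat at w_min above th2; the linear
    ramp in between is an assumption, as the published text omits
    the actual formula.
    """
    if abs_diff <= th1:
        return w_max
    if abs_diff >= th2:
        return w_min
    t = (abs_diff - th1) / (th2 - th1)
    return w_max + t * (w_min - w_max)
```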
7. The method according to claim 1, wherein determining the adaptive coefficient of the current pixel according to the grid coefficient of each grid and the spatial weight and brightness weight of each grid in the neighborhood of the current pixel comprises:
performing a weighted average of the grid coefficients of the grids in the neighborhood of the current pixel, using their spatial weights and brightness weights, to obtain the adaptive coefficient of the current pixel.
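The weighted average of claim 7 can be sketched directly; combining the spatial and brightness weights by multiplication is an assumption (the claim requires only that both weights enter the average):

```python
def adaptive_coef(grid_coefs, spatial_ws, luma_ws):
    """Claim-7 sketch: weighted average of neighborhood grid
    coefficients, each weighted by the product of its spatial and
    brightness weight (product combination assumed)."""
    num = sum(c * sw * lw
              for c, sw, lw in zip(grid_coefs, spatial_ws, luma_ws))
    den = sum(sw * lw for sw, lw in zip(spatial_ws, luma_ws))
    return num / den
```

With equal weights this reduces to a plain mean; a larger spatial weight pulls the result toward the nearer grid's coefficient.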
8. The method according to claim 1 or 7, wherein determining the final correction result of the current pixel according to the input pixel value, the initial correction result, and the adaptive coefficient comprises:
multiplying the initial correction result by the adaptive coefficient;
comparing the resulting product with the input pixel value and taking the larger of the two; and
performing a clip operation on that maximum to obtain the final correction result of the current pixel.
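The final step of claim 8 (scale, take the maximum with the input, then clip) can be sketched as follows; the 8-bit range used for the clip is an assumption:

```python
def final_correction(input_val, initial_result, adaptive, max_val=255):
    """Claim-8 sketch: scale the initial correction result by the
    adaptive coefficient, take the max with the input pixel value,
    then clip to the valid range (8-bit assumed)."""
    scaled = initial_result * adaptive
    out = max(scaled, input_val)       # correction never darkens the pixel
    return min(max(out, 0), max_val)   # Clip operation
```

Taking the maximum guarantees the corrected pixel is never darker than the input, and the clip keeps it within the representable range.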
9. A lens shading correction device, the device comprising:
a first acquisition unit configured to obtain a correction coefficient for each input pixel in an input image and to multiply the input pixel value by the correction coefficient to obtain an initial correction result;
a second acquisition unit configured to obtain an average brightness value of each grid in the input image, wherein the input image is evenly divided into a plurality of grids;
a first determining unit configured to determine a grid coefficient of each grid according to the average brightness value of each grid;
a third acquisition unit configured to obtain the spatial weight and the brightness weight of each grid in the neighborhood of the current pixel;
a second determining unit configured to determine an adaptive coefficient of the current pixel according to the grid coefficient of each grid and the spatial weight and brightness weight of each grid in the neighborhood of the current pixel; and
a third determining unit configured to determine a final correction result of the current pixel according to the input pixel value, the initial correction result, and the adaptive coefficient.
10. An electronic device, the electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1 to 8.
11. A chip, the chip comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1 to 8.
12. A computer readable storage medium storing computer instructions which, when executed by a processor, implement the method of any one of claims 1 to 8.
CN202311162970.7A 2023-09-11 2023-09-11 Lens shading correction method and device and electronic equipment Active CN116939374B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311162970.7A CN116939374B (en) 2023-09-11 2023-09-11 Lens shading correction method and device and electronic equipment


Publications (2)

Publication Number Publication Date
CN116939374A true CN116939374A (en) 2023-10-24
CN116939374B CN116939374B (en) 2024-03-26

Family

ID=88375560

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311162970.7A Active CN116939374B (en) 2023-09-11 2023-09-11 Lens shading correction method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN116939374B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102737363A (en) * 2011-03-31 2012-10-17 索尼公司 Image processing apparatus and method, and program
CN106470293A (en) * 2015-08-20 2017-03-01 联咏科技股份有限公司 Image processing apparatus and image processing method
CN106470292A (en) * 2015-08-20 2017-03-01 联咏科技股份有限公司 Image processing apparatus and image processing method
CN107590840A (en) * 2017-09-21 2018-01-16 长沙全度影像科技有限公司 Colored shadow bearing calibration and its correction system based on mesh generation
CN109218714A (en) * 2018-09-30 2019-01-15 天津天地基业科技有限公司 A kind of automatic correction method for camera lens shade
CN116456208A (en) * 2023-02-09 2023-07-18 辉羲智能科技(上海)有限公司 Automatic lens shading correction method and device


Also Published As

Publication number Publication date
CN116939374B (en) 2024-03-26


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant