CN113132562B - Lens shading correction method and device and electronic equipment - Google Patents


Info

Publication number
CN113132562B
CN113132562B (application CN202110432799.1A)
Authority
CN
China
Prior art keywords
channel
raw image
compensation
pixel
color value
Prior art date
Legal status
Active
Application number
CN202110432799.1A
Other languages
Chinese (zh)
Other versions
CN113132562A (en)
Inventor
黄志明
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN202110432799.1A
Publication of CN113132562A
Application granted
Publication of CN113132562B

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80: Camera processing pipelines; Components thereof
    • H04N23/81: Camera processing pipelines; Components thereof for suppressing or minimising disturbance in the image signal generation
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00: Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/10: Circuitry of solid-state image sensors [SSIS]; Control thereof for transforming different wavelengths into image signals
    • H04N25/11: Arrangement of colour filter arrays [CFA]; Filter mosaics
    • H04N25/13: Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements
    • H04N25/134: Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements based on three different wavelength filter elements

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Color Television Image Signal Generators (AREA)
  • Image Processing (AREA)

Abstract

The application discloses a lens shading correction method, a lens shading correction apparatus and an electronic device, belonging to the technical field of image processing. The method comprises: acquiring a first RAW image and downsampling it to obtain a second RAW image, where the first RAW image is the original RAW image to be corrected; pre-compensating the second RAW image based on preset first compensation information to obtain a third RAW image; acquiring the color cast type of the third RAW image according to the color values of the pixels in the third RAW image; obtaining the adjustment coefficient corresponding to that color cast type according to a preset mapping between color cast types and compensation-information adjustment coefficients; adjusting the first compensation information according to the adjustment coefficient to obtain second compensation information; and compensating the first RAW image based on the first and second compensation information to obtain a fourth RAW image after lens shading correction.

Description

Lens shading correction method and device and electronic equipment
Technical Field
The application belongs to the technical field of image processing, and particularly relates to a lens shading correction method and device and electronic equipment.
Background
In recent years, with the rapid development of internet technology and upgrades in device hardware, the functions of electronic devices have become increasingly rich, and more and more users use electronic devices for entertainment, for example to capture images or videos. When shooting images or videos, imaging quality is determined by the imaging hardware and the software algorithms in the electronic device, where the core components of the imaging hardware comprise the lens, the infrared cut-off filter and the CMOS/CCD sensor.
Lens shading is caused by the optical characteristics of the lens, mechanical deviations in the lens module, and the mismatch between the microlenses above the infrared cut-off filter and the chief ray angle (CRA) of the lens. Lens shading seriously degrades imaging quality: image brightness falls off from the center of the image outward (luma shading), and color cast appears between the center and the periphery of the image (color shading), especially with the wide-angle lenses common in small camera modules. Lens shading therefore needs to be corrected.
In the prior art, lens shading correction methods mainly comprise cos⁴θ fourth-order function fitting and grid correction. Both can alleviate lens shading to a certain extent and are widely applied in scenes with low imaging-quality requirements. When the imaging-quality requirements are high, however, for example when the color cast problem must be solved across a variety of scenes, neither method meets the application requirements.
Disclosure of Invention
The embodiment of the application aims to provide a lens shading correction method, a lens shading correction apparatus and an electronic device, which address the technical problem that prior-art methods cannot meet high imaging-quality requirements.
In a first aspect, an embodiment of the present application provides a lens shading correction method, including:
acquiring a first RAW image, and downsampling the first RAW image to obtain a second RAW image, wherein the first RAW image is an original RAW image to be corrected;
pre-compensating the second RAW image based on preset first compensation information to obtain a third RAW image;
acquiring the color cast type of the third RAW image according to the color value of the pixel point in the third RAW image;
obtaining an adjustment coefficient corresponding to the color cast type of the third RAW image according to the mapping relation between the preset color cast type and the compensation information adjustment coefficient;
according to the adjustment coefficient, adjusting the first compensation information to obtain second compensation information;
and compensating the first RAW image based on the first compensation information and the second compensation information to obtain a fourth RAW image after lens shading correction.
In a second aspect, an embodiment of the present application provides a lens shading correction apparatus, including:
the sampling module is used for acquiring a first RAW image and downsampling the first RAW image to obtain a second RAW image, wherein the first RAW image is an original RAW image to be corrected;
the pre-compensation module is used for pre-compensating the second RAW image based on preset first compensation information to obtain a third RAW image;
the first acquisition module is used for acquiring the color cast type of the third RAW image according to the color value of the pixel point in the third RAW image;
the second acquisition module is used for acquiring an adjustment coefficient corresponding to the color cast type of the third RAW image according to the mapping relation between the preset color cast type and the compensation information adjustment coefficient;
the adjusting module is used for adjusting the first compensation information to obtain second compensation information according to the adjusting coefficient;
and the correction module is used for compensating the first RAW image based on the first compensation information and the second compensation information to obtain a fourth RAW image after lens shading correction.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a memory, and a program or instructions stored on the memory and executable on the processor, the program or instructions implementing the steps of the lens shading correction method according to the first aspect when executed by the processor.
In a fourth aspect, an embodiment of the present application provides a readable storage medium having stored thereon a program or instructions which, when executed by a processor, implement the steps of the lens shading correction method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the steps of the lens shading correction method according to the first aspect.
In the embodiment of the application, the RAW image can be pre-compensated with the compensation information, the color cast type of the pre-compensated RAW image can be obtained from the color values of its pixels, a dynamic adjustment coefficient for the compensation information can be obtained from that color cast type, and the lens shading of the RAW image can be corrected using the adjustment coefficient together with the compensation information, yielding a RAW image of higher imaging quality. Compared with the prior art, the embodiment of the application needs only one set of pre-calibrated compensation information and adjusts it dynamically by analyzing and estimating the lens shading of the current scene, so that compensation is achieved accurately and scenes with high imaging-quality requirements can be satisfied.
Drawings
Fig. 1 is a flowchart of a lens shading correction method according to an embodiment of the present application;
FIG. 2 is an exemplary diagram of a calculation process of hue components provided by an embodiment of the present application;
FIG. 3 is a diagram of examples of red hue components of different color shift types provided by embodiments of the present application;
fig. 4 is a block diagram of a lens shading correction device according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 6 is a schematic diagram of a hardware structure of an electronic device implementing various embodiments of the application.
Detailed Description
The technical solutions of the embodiments of the present application will be clearly described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which are obtained by a person skilled in the art based on the embodiments of the present application, fall within the scope of protection of the present application.
The terms "first", "second" and the like in the description and in the claims are used to distinguish between similar elements and do not necessarily describe a particular sequence or chronological order. It is to be understood that the data so used may be interchanged where appropriate, so that embodiments of the present application may be implemented in orders other than those illustrated or described herein; moreover, the objects identified by "first", "second", etc. are generally of one type, and the number of objects is not limited, e.g. the first object may be one or multiple. Furthermore, in the description and claims, "and/or" means at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
The embodiment of the application provides a lens shading correction method and device and electronic equipment.
The lens shading correction method provided by the embodiment of the application is described in detail below through specific embodiments and application scenes thereof with reference to the accompanying drawings.
It should be noted that the lens shading correction method provided by the embodiment of the present application is applicable to an electronic device. In practical application, the electronic device may be a mobile terminal such as a smartphone, tablet computer or personal digital assistant; the embodiments of the present application are not limited thereto.
Fig. 1 is a flowchart of a lens shading correction method according to an embodiment of the present application. As shown in fig. 1, the method may include the following steps: step 101, step 102, step 103, step 104, step 105 and step 106.
in step 101, a first RAW image is acquired, and downsampled to obtain a second RAW image, where the first RAW image is an original RAW image to be corrected.
An image sensor is a device for converting an optical signal into an electrical signal, each photosensitive unit on the image sensor is called a pixel, and the pixel value of the pixel is used for representing the intensity of illumination sensed by the pixel, but cannot represent the color.
To characterize color, the surface of the image sensor is covered with a CFA (color filter array) that includes a plurality of color filter cells. Each color filter cell corresponds to a pixel on the image sensor and allows only a single color of light to pass through and be captured by the image sensor. In this way, the pixel values of the pixels on the image sensor can be used to characterize the intensity of illumination of a particular color of light.
An image in RAW format (i.e., a RAW image) is output directly by the image sensor without processing by software algorithms, and is generally formed by grouping the four channels R, GR, GB and B according to a specific Bayer array pattern. For example, the uppermost image in fig. 2 is a RAW data diagram in the RG(R)G(B)B arrangement.
In the embodiment of the application, lens shading varies slowly from the edge to the center of the image, while a full-size RAW image is large and occupies considerable computing resources during processing. The final compensation table therefore need not be determined from the full-size RAW image: the full-size RAW image can first be downsampled, and the final compensation table determined from the downsampled RAW image, consuming far fewer computing resources. Preferably, the size of the downsampled RAW image does not exceed 100 × 100.
In the embodiment of the application, the first RAW image is an image with an original size, and the second RAW image is a sampled image.
In the embodiment of the present application, any downsampling method in the related art may be used to downsample the first RAW image, which is not limited in the embodiment of the present application.
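As a concrete illustration of this step, the sketch below downsamples a Bayer-pattern RAW image while keeping each 2 × 2 RGGB cell intact, so the result remains a valid RAW image. The function name and the per-cell subsampling strategy are illustrative assumptions; as noted above, the patent permits any downsampling method.

```python
import numpy as np

def downsample_bayer(raw, factor):
    """Downsample a Bayer-pattern RAW image while preserving the CFA layout.

    Works per 2x2 Bayer cell: the image is viewed as a grid of RGGB cells,
    and every `factor`-th cell is kept, so all four color planes are
    subsampled consistently.
    """
    h, w = raw.shape
    assert h % 2 == 0 and w % 2 == 0, "Bayer image dimensions must be even"
    # View as (cells_y, 2, cells_x, 2) so each 2x2 Bayer cell stays intact.
    cells = raw.reshape(h // 2, 2, w // 2, 2)
    kept = cells[::factor, :, ::factor, :]
    ch, _, cw, _ = kept.shape
    return kept.reshape(ch * 2, cw * 2)
```

Because whole cells are kept, the channel at any position of the output is the same as at that position of the input, which is what later per-channel processing relies on.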
In step 102, the second RAW image is pre-compensated based on the preset first compensation information, so as to obtain a third RAW image.
In the embodiment of the application, the organization form of the compensation information can be a compensation table or other forms. When the organization form of the compensation information is a compensation table, the first compensation information is a first compensation table. For ease of understanding, the description will be given below taking the first compensation table as an example.
In the embodiment of the application, the first compensation table is used for pre-compensating the second RAW image so as to supplement the brightness of the image and eliminate the color influence, thereby being convenient for the subsequent determination of the color cast type.
In the embodiment of the present application, the first compensation table is generally calibrated and stored in a laboratory environment in advance, that is, the first compensation table is generated in advance. The specific operation is as follows:
under a specific light source, shooting a flat image with the difference between the color and the brightness smaller than a preset threshold value;
downsampling the flat image, and obtaining, from the downsampled flat image, the color values of the pixels corresponding to the R channel, the GR channel, the GB channel and the B channel;
and, for each color channel P, obtaining the maximum value Max of the color values of the pixels corresponding to channel P, and dividing Max by the color value of each pixel of channel P to obtain the compensation table of channel P.
In embodiments of the present application, the color and brightness of the flat image may be substantially uniform across the image.
Compared with the compensation table generated directly based on the flat image with the original size, in the embodiment of the application, the compensation table generated based on the sampled flat image only occupies less calculation resources and storage resources.
In one example, the specific light source is D65. A flat image with substantially uniform color and brightness is photographed under D65 and downsampled; the downsampled RAW image is split into the four channels R, GR, B and GB, and the compensation tables gain_r, gain_gr, gain_b and gain_gb of the corresponding channels are solved respectively. The method for solving a channel's compensation table is: take the maximum value Max of the channel, then divide Max by the value at each point of the channel; the resulting compensation table has the same size as the channel.
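The calibration procedure above can be sketched as follows. This is an illustrative Python sketch rather than the patent's implementation; it assumes the RG(R)G(B)B layout described earlier and stores each table as the gain Max / value, so that the multiplicative pre-compensation in the next step brightens vignetted regions.

```python
import numpy as np

def calibration_tables(flat_raw):
    """Build per-channel compensation tables from a flat-field RAW capture.

    flat_raw: downsampled RG(R)G(B)B Bayer image of a uniform target.
    Returns gain tables for the R, GR, GB and B channels; each gain is
    Max / value, so the brightest point of a channel gets gain 1 and
    dimmer (vignetted) points get gains > 1.
    """
    r  = flat_raw[0::2, 0::2].astype(np.float64)
    gr = flat_raw[0::2, 1::2].astype(np.float64)
    gb = flat_raw[1::2, 0::2].astype(np.float64)
    b  = flat_raw[1::2, 1::2].astype(np.float64)
    return {name: ch.max() / ch for name, ch in
            (("gain_r", r), ("gain_gr", gr), ("gain_gb", gb), ("gain_b", b))}
```

Each returned table has the same size as its channel, matching the example above.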
In the embodiment of the present application, when the second RAW image is precompensated based on the first compensation table, the color value of each pixel point in the second RAW image is multiplied by the gain value at the same position on the first compensation table, so as to obtain a precompensated third RAW image.
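The position-wise multiplication can be sketched like this, applied per channel with quarter-size gain tables in the RG(R)G(B)B layout (the function name and dictionary keys are illustrative assumptions):

```python
import numpy as np

def apply_gains(raw, gains):
    """Pre-compensate an RG(R)G(B)B Bayer image by multiplying each pixel's
    color value with the gain at the same channel position.

    gains: dict mapping channel name -> gain table of quarter size.
    """
    out = raw.astype(np.float64)
    out[0::2, 0::2] *= gains["gain_r"]
    out[0::2, 1::2] *= gains["gain_gr"]
    out[1::2, 0::2] *= gains["gain_gb"]
    out[1::2, 1::2] *= gains["gain_b"]
    return out
```

Applying the four quarter-size tables in this interleaved way is equivalent to multiplying by one full-size table whose entries are taken from the channel tables at matching positions.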
In step 103, the color cast type of the third RAW image is obtained according to the color value of the pixel point in the third RAW image.
In the embodiment of the present application, the color cast types may include: center reddish with greenish periphery, center greenish with reddish periphery, substantially no change from the periphery to the center, and the like.
In the embodiment of the present application, when the first RAW image is arranged in the RG(R)G(B)B pattern, the second RAW image is also arranged in that pattern, and correspondingly so is the third RAW image. Step 103 may specifically include the following steps (not shown in the figure): step 1031, step 1032, step 1033, step 1034 and step 1035.
in step 1031, the color value of the R channel corresponding to the pixel point, the color value of the GR channel corresponding to the pixel point, the color value of the GB channel corresponding to the pixel point, and the color value of the B channel corresponding to the pixel point in the third RAW image are read.
In the embodiment of the application, the third RAW image can be split into four independent color channels, namely the R channel, the GR channel, the GB channel and the B channel. After the four-channel split, the color values of the pixels corresponding to each channel can be obtained, namely: the color values of the pixels corresponding to the R channel, to the GR channel, to the GB channel and to the B channel, respectively.
In one example, as shown in fig. 2, the first-layer image (top to bottom) is a RAW image in the RG(R)G(B)B arrangement. Splitting it into four separate color channels gives the second-layer result, which contains, from left to right: the color value set {R11, R12, …, R44} of the pixels corresponding to the R channel, the set {GR11, GR12, …, GR44} of the GR channel, the set {B11, B12, …, B44} of the B channel, and the set {GB11, GB12, …, GB44} of the GB channel.
In step 1032, the hue component of the R channel of the third RAW image is calculated from the color values of the pixels corresponding to the R channel and the GR channel, where the hue component of a pixel of the R channel = the color value of that pixel in the R channel / the color value of the pixel at the corresponding position in the GR channel.
In step 1033, the hue component of the B channel of the third RAW image is calculated from the color values of the pixels corresponding to the B channel and the GB channel, where the hue component of a pixel of the B channel = the color value of that pixel in the B channel / the color value of the pixel at the corresponding position in the GB channel.
In the embodiment of the application, after the four-channel split is complete, the hue component Hr of the pixels corresponding to the R channel and the hue component Hb of the pixels corresponding to the B channel can be calculated according to the following formulas:
Hr = R / GR, Hb = B / GB,
where the division in the formulas is performed element-wise, position by position.
In one example, following the example in step 1031, each color value in the R-channel set {R11, R12, …, R44} is divided position-wise by the corresponding value in the GR-channel set {GR11, GR12, …, GR44}, i.e. R11/GR11, R12/GR12, …, R44/GR44, giving the hue component set {Hr11, Hr12, …, Hr44} of the R channel of the third RAW image, as shown on the left side of the lowest layer in fig. 2;
each color value in the B-channel set {B11, B12, …, B44} is divided position-wise by the corresponding value in the GB-channel set {GB11, GB12, …, GB44}, i.e. B11/GB11, B12/GB12, …, B44/GB44, giving the hue component set {Hb11, Hb12, …, Hb44} of the B channel of the third RAW image, as shown on the right side of the lowest layer in fig. 2.
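Steps 1031 to 1033 can be sketched together: split the Bayer image into its four planes and divide element-wise. The epsilon guard against division by zero is an implementation assumption, not part of the patent:

```python
import numpy as np

def hue_components(raw):
    """Split an RG(R)G(B)B Bayer image into its four planes and compute
    the element-wise hue components Hr = R / GR and Hb = B / GB."""
    r  = raw[0::2, 0::2].astype(np.float64)
    gr = raw[0::2, 1::2].astype(np.float64)
    gb = raw[1::2, 0::2].astype(np.float64)
    b  = raw[1::2, 1::2].astype(np.float64)
    eps = 1e-12  # guard against division by zero in fully dark pixels
    return r / (gr + eps), b / (gb + eps)
```

Each returned hue map has one value per Bayer cell, i.e. a quarter of the RAW image's area.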
In step 1034, the tone components of the R channel and the tone components of the B channel of the third RAW image are subjected to low-pass filtering processing, resulting in a low-frequency portion of the tone components of the R channel and a low-frequency portion of the tone components of the B channel.
Because the color cast in lens shading changes slowly from the four corners to the center of the image, it can be reflected by the low-frequency part of the hue components.
In the embodiment of the present application, the low-frequency part of a hue component may be obtained by designing a low-pass filter; the specific method may be filtering in the frequency domain via the Fourier transform, processing in the spatial domain via a gradient map, or a similar method.
In the embodiment of the present application, when spatial-domain low-pass filtering is adopted, a gradient map G_r of the hue component of the R channel may be generated from the hue component of the R channel of the third RAW image, and a gradient map G_b of the hue component of the B channel may be generated from the hue component of the B channel; G_r and G_b are then filtered with a preset low-pass filtering formula to obtain the filtered data distributions G_r(i, j) and G_b(i, j). The preset low-pass filtering formula is as follows:
The G_r(i, j) data distribution characterizes the low-frequency part of the hue component of the R channel, and the G_b(i, j) data distribution characterizes the low-frequency part of the hue component of the B channel; i and j denote the coordinates of the corresponding point, and θ_r and θ_b are preset thresholds.
In step 1035, a color shift type of the third RAW image is acquired from the low-frequency part of the tone component of the R channel and the low-frequency part of the tone component of the B channel.
In the embodiment of the application, the color cast type Cr of the third RAW image in the R channel can be obtained from the low-frequency part of the hue component of the R channel; specifically, Cr is obtained according to how the G_r(i, j) data distribution changes from the periphery to the center.
In the embodiment of the present application, the color cast type Cb of the third RAW image in the B channel can be obtained from the low-frequency part of the hue component of the B channel; specifically, Cb is obtained according to how the G_b(i, j) data distribution changes from the periphery to the center.
For ease of understanding, the estimation of the color cast type is described taking the hue component of the R channel as an example. Since the color cast changes slowly from the four corners to the center of the image, it manifests as differences in the hue component at different positions. Generally, if the color cast appears as a reddish center and a greenish periphery, as in the left image of fig. 3, the corresponding gradient map is found to change from positive to negative going from the periphery to the center; conversely, as in the right image of fig. 3, if the color cast appears as a greenish center and a reddish periphery, the gradient map changes from negative to positive from the periphery to the center; and if there is currently no obvious color cast, the gradient map shows essentially no change from the periphery to the center.
In the embodiment of the application, the color cast type of the pre-compensated RAW image can then be estimated from the above criteria and the obtained gradient-map information.
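A minimal sketch of such an estimate is shown below. It replaces the patent's gradient-map analysis with a simpler center-versus-periphery mean comparison with the same qualitative outcome, so the border width and threshold are purely illustrative assumptions:

```python
import numpy as np

def classify_color_cast(hue, threshold=0.01):
    """Heuristic sketch of the periphery-to-center color cast estimate.

    Compares the mean hue component near the image center with the mean near
    the border. For the R-channel hue Hr: positive result means the center is
    reddish relative to the periphery, negative means the periphery is
    reddish, zero means no obvious cast. Border width and threshold are
    illustrative assumptions, not values from the patent.
    """
    h, w = hue.shape
    bw = max(1, min(h, w) // 4)          # border width (assumption)
    center = hue[bw:h - bw, bw:w - bw].mean()
    border_mask = np.ones_like(hue, dtype=bool)
    border_mask[bw:h - bw, bw:w - bw] = False
    border = hue[border_mask].mean()
    diff = center - border
    if diff > threshold:
        return 1    # e.g. center reddish / periphery greenish for Hr
    if diff < -threshold:
        return -1   # e.g. center greenish / periphery reddish for Hr
    return 0        # no obvious color cast
```

The same classifier would be run separately on Hr and Hb to obtain Cr and Cb.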
In step 104, according to the mapping relationship between the preset color cast type and the adjustment coefficient of the compensation information, the adjustment coefficient corresponding to the color cast type of the third RAW image is obtained.
In the embodiment of the application, after the color cast type of the third RAW image is acquired, a corresponding dynamic adjustment coefficient is required to be acquired.
In the embodiment of the application, the coefficient Sr corresponding to Cr and the coefficient Sb corresponding to Cb can be obtained according to the mapping relation between the preset color cast type and the compensation information adjustment coefficient; that is, the adjustment coefficient corresponding to the color cast type of the third RAW image in the R channel and the adjustment coefficient corresponding to the color cast type of the third RAW image in the B channel.
In the embodiment of the application, the sign of the adjustment coefficient indicates the direction in which the first compensation table is adjusted; for the center-reddish and center-greenish phenomena, the absolute value of the adjustment coefficient indicates the degree of color cast, i.e. the speed of the gradient change, which corresponds to the amplitude of the adjustment to the first compensation table. When the adjustment coefficient is 0, there is no obvious color cast.
In step 105, the first compensation information is adjusted according to the adjustment coefficient corresponding to the color cast type of the third RAW image to obtain the second compensation information.
In the embodiment of the application, the relationship among the adjustment coefficient S, the first compensation information T and the second compensation information T′ may be: T′ = S*(T-1)+T.
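The relationship can be written directly in code (the function name is illustrative): the deviation of each gain from unity is scaled by S and added back, so S > 0 strengthens the compensation, S < 0 weakens it, and S = 0 leaves the table unchanged.

```python
import numpy as np

def adjust_table(table, s):
    """T' = S * (T - 1) + T : scale each gain's deviation from unity by S
    and add it back to the original gain."""
    table = np.asarray(table, dtype=np.float64)
    return s * (table - 1.0) + table
```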
In the embodiment of the application, the compensation value of the R channel in the first compensation information can be adjusted according to Sr, and the compensation value of the B channel according to Sb, to obtain the second compensation information. That is, according to the dynamic adjustment coefficients, only the compensation values of the R channel and the B channel in the first compensation information are adjusted; the compensation values of the GR channel and the GB channel remain unchanged.
In the embodiment of the application, when the organization form of the compensation information is the compensation table, the first compensation table is adjusted according to the adjustment coefficient to obtain the second compensation table. That is, the second compensation information is a second compensation table.
In step 106, the first RAW image is compensated based on the first compensation information and the second compensation information, so as to obtain a fourth RAW image after lens shading correction.
In the embodiment of the application, the first RAW image can be compensated based on the first compensation information to obtain an intermediate RAW image; and compensating the intermediate RAW image based on the second compensation information to obtain a fourth RAW image after lens shading correction. That is, the first RAW image is first pre-compensated based on the first compensation information, and then dynamically compensated using the second compensation information.
Considering that the first and second compensation tables are far smaller than the first RAW image, in one implementation of the embodiment of the present application both tables may first be upsampled, yielding a third and a fourth compensation table of the same size as the first RAW image. The color value of each pixel in the first RAW image is multiplied by the compensation value at the corresponding position in the third compensation table to obtain an intermediate RAW image; then the color value of each pixel corresponding to the R channel in the intermediate RAW image is multiplied by the compensation value at the corresponding position in the fourth compensation table, and likewise for each pixel corresponding to the B channel, to obtain the fourth RAW image after lens shading correction.
Alternatively, considering that the first compensation table and the second compensation table are the same size as the third RAW image, and that the third RAW image has already been pre-compensated by the first compensation table, in another embodiment of the present application the color value of each pixel point corresponding to the R channel in the third RAW image may be multiplied by the compensation value at the corresponding position in the second compensation table, and likewise for each pixel point corresponding to the B channel, to obtain an intermediate RAW image after lens shading correction; the intermediate RAW image is then upsampled to obtain a RAW image of the same size as the first RAW image, namely the fourth RAW image after lens shading correction of the first RAW image.
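The two variants above can be sketched concretely. Below is a minimal numpy illustration of the first one (upsample both tables to the full image size, then multiply); the function names, nearest-neighbor upsampling, and mask-based R/B site selection are illustrative assumptions, not details from the patent:

```python
import numpy as np

def upsample_nearest(table, out_h, out_w):
    """Upsample a small compensation table to the full image size
    (nearest-neighbor for brevity; a real pipeline would interpolate)."""
    h, w = table.shape
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return table[rows[:, None], cols[None, :]]

def apply_compensation(raw, table1, table2, r_mask, b_mask):
    """Pre-compensate all pixels with table1, then apply table2
    only at R and B sites (the dynamic color-cast compensation)."""
    h, w = raw.shape
    t3 = upsample_nearest(table1, h, w)   # third compensation table
    t4 = upsample_nearest(table2, h, w)   # fourth compensation table
    out = raw * t3                        # intermediate RAW image
    out[r_mask] *= t4[r_mask]             # R-channel pixel points
    out[b_mask] *= t4[b_mask]             # B-channel pixel points
    return out
```

The second variant is the same arithmetic in the opposite order: multiply the small pre-compensated image by the small second table first, then upsample the result once.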
As can be seen from the foregoing embodiments, in this embodiment the RAW image may be pre-compensated by the compensation information; the color cast type of the pre-compensated RAW image is obtained according to the color values of its pixel points; a dynamic adjustment coefficient for the compensation information is obtained according to the color cast type; and the lens shading of the RAW image is corrected according to the adjustment coefficient and the compensation information, so as to obtain a RAW image with higher imaging quality. Compared with the prior art, the embodiment of the application needs only one group of pre-calibrated compensation information, which is dynamically adjusted by analyzing and estimating the lens shading of the current scene; the compensation purpose is thus accurately achieved, and scenes with higher requirements on imaging quality can be met.
In the embodiment of the application, the dynamically adjusted Lens Shading Correction algorithm can not only solve the problem that the brightness at the periphery of the image is obviously lower than that at the center during imaging, but also remove the center-versus-periphery color cast that is otherwise difficult to eliminate. The algorithm needs only one group of pre-calibrated compensation tables, which are dynamically adjusted by analyzing and estimating the Lens Shading of the current scene, so that the compensation purpose is accurately achieved. The algorithm can effectively solve the Lens Shading problem in various scenes, converges quickly, compensates well, and can be integrated into embedded devices, improving the imaging quality of an embedded device's camera and the consumer's experience when photographing.
It should be noted that, in the lens shading correction method provided in the embodiment of the present application, the execution body may be a lens shading correction device, or a control module in the lens shading correction device for executing the lens shading correction method. In the embodiment of the present application, a lens shading correction method performed by a lens shading correction device is taken as an example to describe the lens shading correction device provided by the embodiment of the present application.
Fig. 4 is a block diagram of a lens shading correction device according to an embodiment of the present application, and as shown in fig. 4, a lens shading correction device 400 may include: a sampling module 401, a precompensation module 402, a first acquisition module 403, a second acquisition module 404, an adjustment module 405 and a correction module 406, wherein,
the sampling module 401 is configured to obtain a first RAW image, and downsample the first RAW image to obtain a second RAW image, where the first RAW image is an original RAW image to be corrected;
a pre-compensation module 402, configured to pre-compensate the second RAW image based on preset first compensation information, so as to obtain a third RAW image;
a first obtaining module 403, configured to obtain a color cast type of the third RAW image according to a color value of a pixel point in the third RAW image;
A second obtaining module 404, configured to obtain an adjustment coefficient corresponding to the color cast type of the third RAW image according to a mapping relationship between a preset color cast type and a compensation information adjustment coefficient;
an adjustment module 405, configured to adjust the first compensation information to obtain second compensation information according to the adjustment coefficient;
and a correction module 406, configured to compensate the first RAW image based on the first compensation information and the second compensation information, so as to obtain a fourth RAW image after lens shading correction.
As can be seen from the foregoing embodiments, in this embodiment the RAW image may be pre-compensated by the compensation information; the color cast type of the pre-compensated RAW image is obtained according to the color values of its pixel points; a dynamic adjustment coefficient for the compensation information is obtained according to the color cast type; and the lens shading of the RAW image is corrected according to the adjustment coefficient and the compensation information, so as to obtain a RAW image with higher imaging quality. Compared with the prior art, the embodiment of the application needs only one group of pre-calibrated compensation information, which is dynamically adjusted by analyzing and estimating the lens shading of the current scene; the compensation purpose is thus accurately achieved, and scenes with higher requirements on imaging quality can be met.
Alternatively, as an embodiment, the correction module 406 may include:
the first correction submodule is used for compensating the first RAW image based on the first compensation information to obtain an intermediate RAW image;
and the second correction sub-module is used for compensating the intermediate RAW image based on the second compensation information to obtain a fourth RAW image after lens shading correction.
Alternatively, as an embodiment, the third RAW image is an image in an R-G(R)-G(B)-B (RGGB Bayer) arrangement;
the first obtaining module 403 may include:
the reading submodule is used for reading the color value of the pixel corresponding to the R channel, the color value of the pixel corresponding to the GR channel, the color value of the pixel corresponding to the GB channel and the color value of the pixel corresponding to the B channel in the third RAW image;
a first calculating sub-module, configured to calculate a hue component of an R channel of the third RAW image according to the color value of the pixel corresponding to the R channel and the color value of the pixel corresponding to the GR channel, where the hue component of one pixel of the R channel = the color value of one pixel of the R channel/the color value of the pixel at a corresponding position in the GR channel;
a second calculating sub-module, configured to calculate a hue component of a B channel of the third RAW image according to the color value of the pixel corresponding to the B channel and the color value of the pixel corresponding to the GB channel, where the hue component of one pixel of the B channel=the color value of one pixel of the B channel/the color value of the pixel at the corresponding position in the GB channel;
A filtering sub-module, configured to perform low-pass filtering processing on a tone component of an R channel and a tone component of a B channel of the third RAW image, to obtain a low-frequency portion of the tone component of the R channel and a low-frequency portion of the tone component of the B channel;
and the first acquisition submodule is used for acquiring the color cast type of the third RAW image according to the low-frequency part of the tone component of the R channel and the low-frequency part of the tone component of the B channel.
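The hue-component computation described by the first and second calculating sub-modules can be sketched as below, assuming an RGGB mosaic whose top-left pixel is R; the epsilon guard against division by zero and the function name are illustrative additions:

```python
import numpy as np

def hue_components(raw):
    """Hue components of an RGGB RAW image: R/GR at R sites and
    B/GB at B sites, per the per-pixel ratios in the text."""
    r  = raw[0::2, 0::2]   # R sites
    gr = raw[0::2, 1::2]   # G sites on R rows (GR channel)
    gb = raw[1::2, 0::2]   # G sites on B rows (GB channel)
    b  = raw[1::2, 1::2]   # B sites
    eps = 1e-6             # guard against division by zero (assumption)
    hue_r = r / (gr + eps)
    hue_b = b / (gb + eps)
    return hue_r, hue_b
```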
Alternatively, as an embodiment, the filtering submodule may include:
a first generation unit, configured to generate a gradient map G_r of the hue component of the R channel based on the hue component of the R channel;
a second generation unit, configured to generate a gradient map G_b of the hue component of the B channel based on the hue component of the B channel;
a filtering processing unit, configured to filter G_r and G_b according to a preset low-pass filtering formula to obtain filtered data distributions G_r(i, j) and G_b(i, j), the G_r(i, j) data distribution being used to characterize the low-frequency portion of the hue component of the R channel, and the G_b(i, j) data distribution being used to characterize the low-frequency portion of the hue component of the B channel;
the preset low-pass filtering formula is as follows:
i and j respectively denote the abscissa and ordinate of the corresponding point, and θ_r and θ_b are preset thresholds.
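The exact low-pass filtering formula is not reproduced in this text. As a rough illustration only, the sketch below assumes one plausible form: zeroing gradient entries whose magnitude exceeds the threshold θ, so that only the slowly varying (low-frequency) part of the hue component survives. Both functions and their exact form are assumptions:

```python
import numpy as np

def gradient_map(hue):
    # Gradient magnitude of a hue-component plane (one plausible
    # reading of "gradient map"; the patent does not give the formula).
    gy, gx = np.gradient(hue)
    return np.hypot(gx, gy)

def lowpass_threshold(grad, theta):
    # Keep only small-magnitude entries: large gradients are treated
    # as scene detail, small ones as the low-frequency shading part.
    out = grad.copy()
    out[np.abs(out) > theta] = 0.0
    return out
```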
Optionally, as an embodiment, the first obtaining sub-module may include:
a first acquisition unit, configured to acquire the color cast type Cr of the third RAW image in the R channel according to the variation of the G_r(i, j) data distribution from the periphery to the center;
a second acquisition unit, configured to acquire the color cast type Cb of the third RAW image in the B channel according to the variation of the G_b(i, j) data distribution from the periphery to the center.
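The text specifies only that the color cast type follows from how the filtered distribution varies from periphery to center. One hypothetical way to operationalize that, with an assumed three-way classification, quarter-margin center window, and tolerance, is:

```python
import numpy as np

def color_cast_type(g, tol=0.02):
    """Classify how a filtered hue distribution varies from periphery
    to center: +1 if it rises toward the center, -1 if it falls,
    0 if roughly flat. The three-way scheme, the center window,
    and `tol` are illustrative assumptions."""
    h, w = g.shape
    m, n = h // 4, w // 4
    center = g[m:h - m, n:w - n].mean()
    overall = g.mean()  # crude stand-in for the periphery (assumption)
    diff = center - overall
    if diff > tol:
        return 1
    if diff < -tol:
        return -1
    return 0
```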
Optionally, as an embodiment, the second obtaining module 404 may include:
the second obtaining submodule is used for obtaining a coefficient Sr corresponding to the Cr and an Sb corresponding to the Cb according to a mapping relation between a preset color cast type and a compensation information adjustment coefficient;
the adjustment module 405 may include:
the first adjusting submodule is used for adjusting the compensation value of the R channel in the first compensation information according to the Sr;
and the second adjustment sub-module is used for adjusting the compensation value of the B channel in the first compensation information according to the Sb to obtain second compensation information.
Optionally, as an embodiment, the first compensation information is a first compensation table; the lens shading correction device 400 may further include: the generation module is used for generating a first compensation table;
The generating module may include:
the shooting sub-module is used for shooting a flat image with the difference of color and brightness smaller than a preset threshold under a specific light source;
a first sampling submodule for downsampling the flat image;
the third obtaining submodule is used for obtaining the color value of the pixel corresponding to the R channel, the color value of the pixel corresponding to the GR channel, the color value of the pixel corresponding to the GB channel and the color value of the pixel corresponding to the B channel in the flat image obtained by downsampling;
and the fourth acquisition sub-module, configured to, for each color channel P, acquire the maximum value Max of the color values of the pixel points corresponding to channel P, and obtain the compensation table of channel P as Max divided by the color value of each pixel point corresponding to channel P.
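The calibration of one channel's table can be sketched as follows, reading the division as Max over each pixel's value so that multiplying a flat-field channel by its table yields a uniform result (this reading, and the function name, are assumptions):

```python
import numpy as np

def calibrate_channel_table(flat_channel):
    # Compensation table for one color channel P of a flat-field
    # capture: the brightest pixel gets gain 1, darker (peripheral)
    # pixels get gains > 1, so multiplication flattens the field.
    max_val = flat_channel.max()
    return max_val / flat_channel
```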
Optionally, as an embodiment, the adjusting module 405 may include:
the third adjustment sub-module is used for adjusting the first compensation table to obtain a second compensation table according to the adjustment coefficient;
the correction module 406 may include:
the second sampling submodule is used for upsampling the first compensation table and the second compensation table to obtain a third compensation table and a fourth compensation table which are the same as the first RAW image in size;
A third correction sub-module, configured to multiply the color value of each pixel point in the first RAW image with the compensation value of the corresponding position in the third compensation table, so as to obtain an intermediate RAW image; and multiplying the color value of each pixel point corresponding to the R channel in the intermediate RAW image by the compensation value of the corresponding position in the fourth compensation table, and multiplying the color value of each pixel point corresponding to the B channel in the intermediate RAW image by the compensation value of the corresponding position in the fourth compensation table to obtain a fourth RAW image after lens shading correction.
The lens shading correction device in the embodiment of the application can be a device, and can also be a component, an integrated circuit or a chip in a terminal. The device may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a cell phone, tablet computer, notebook computer, palm computer, vehicle mounted electronic device, wearable device, ultra-mobile personal computer (ultra-mobile personal computer, UMPC), netbook or personal digital assistant (personal digital assistant, PDA), etc., and the non-mobile electronic device may be a server, network attached storage (Network Attached Storage, NAS), personal computer (personal computer, PC), television (TV), teller machine or self-service machine, etc., and embodiments of the present application are not limited in particular.
The lens shading correction device in the embodiment of the application can be a device with an operating system. The operating system may be an Android operating system, an ios operating system, or other possible operating systems, and the embodiment of the present application is not limited specifically.
The lens shading correction device provided by the embodiment of the application can realize each process realized by the embodiment of the method, and in order to avoid repetition, the description is omitted.
Optionally, as shown in fig. 5, an electronic device 500 according to an embodiment of the present application further includes a processor 501, a memory 502, and a program or an instruction stored in the memory 502 and capable of being executed on the processor 501, where the program or the instruction implements each process of the above-mentioned lens shading correction method embodiment when executed by the processor 501, and the process can achieve the same technical effect, and for avoiding repetition, a description is omitted herein.
The electronic device in the embodiment of the application includes the mobile electronic device and the non-mobile electronic device.
Fig. 6 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application. The electronic device 600 includes, but is not limited to: radio frequency unit 601, network module 602, audio output unit 603, input unit 604, sensor 605, display unit 606, user input unit 607, interface unit 608, memory 609, and processor 610.
Those skilled in the art will appreciate that the electronic device 600 may further include a power source (e.g., a battery) for powering the various components, which may be logically connected to the processor 610 through a power management system so as to manage charging, discharging, and power consumption. The electronic device structure shown in fig. 6 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than shown, combine certain components, or use a different arrangement of components, which is not described in detail herein.
The processor 610 is configured to obtain a first RAW image, and downsample the first RAW image to obtain a second RAW image, where the first RAW image is an original RAW image to be corrected; pre-compensating the second RAW image based on preset first compensation information to obtain a third RAW image; acquiring the color cast type of the third RAW image according to the color value of the pixel point in the third RAW image; obtaining an adjustment coefficient corresponding to the color cast type of the third RAW image according to the mapping relation between the preset color cast type and the compensation information adjustment coefficient; according to the adjustment coefficient, adjusting the first compensation information to obtain second compensation information; and compensating the first RAW image based on the first compensation information and the second compensation information to obtain a fourth RAW image after lens shading correction.
Therefore, in the embodiment of the application, the RAW image can be pre-compensated through the compensation information; the color cast type of the pre-compensated RAW image is obtained according to the color values of its pixel points; a dynamic adjustment coefficient for the compensation information is obtained according to the color cast type; and the lens shading of the RAW image is corrected according to the adjustment coefficient and the compensation information, so that a RAW image with higher imaging quality is obtained. Compared with the prior art, the embodiment of the application needs only one group of pre-calibrated compensation information, which is dynamically adjusted by analyzing and estimating the lens shading of the current scene; the compensation purpose is thus accurately achieved, and scenes with higher requirements on imaging quality can be met.
Optionally, as an embodiment, the processor 610 is further configured to compensate the first RAW image based on the first compensation information to obtain an intermediate RAW image;
and compensating the intermediate RAW image based on the second compensation information to obtain a fourth RAW image after lens shading correction.
Alternatively, as an embodiment, the third RAW image is an image in an R-G(R)-G(B)-B (RGGB Bayer) arrangement;
the processor 610 is further configured to read a color value of a pixel corresponding to an R channel, a color value of a pixel corresponding to a GR channel, a color value of a pixel corresponding to a GB channel, and a color value of a pixel corresponding to a B channel in the third RAW image;
Calculating the tone component of the R channel of the third RAW image according to the color value of the pixel corresponding to the R channel and the color value of the pixel corresponding to the GR channel, wherein the tone component of one pixel of the R channel=the color value of one pixel of the R channel/the color value of the pixel at the corresponding position in the GR channel;
calculating the tone component of the B channel of the third RAW image according to the color value of the pixel corresponding to the B channel and the color value of the pixel corresponding to the GB channel, wherein the tone component of one pixel of the B channel=the color value of one pixel of the B channel/the color value of the pixel at the corresponding position in the GB channel;
performing low-pass filtering processing on the tone component of the R channel and the tone component of the B channel of the third RAW image to obtain a low-frequency part of the tone component of the R channel and a low-frequency part of the tone component of the B channel;
and acquiring the color cast type of the third RAW image according to the low-frequency part of the tone component of the R channel and the low-frequency part of the tone component of the B channel.
Optionally, as an embodiment, the processor 610 is further configured to generate a gradient map G_r of the hue component of the R channel based on the hue component of the R channel, and generate a gradient map G_b of the hue component of the B channel based on the hue component of the B channel;
filter G_r and G_b according to a preset low-pass filtering formula to obtain filtered data distributions G_r(i, j) and G_b(i, j), the G_r(i, j) data distribution being used to characterize the low-frequency portion of the hue component of the R channel, and the G_b(i, j) data distribution being used to characterize the low-frequency portion of the hue component of the B channel;
the preset low-pass filtering formula is as follows:
i and j respectively denote the abscissa and ordinate of the corresponding point, and θ_r and θ_b are preset thresholds.
Optionally, as an embodiment, the processor 610 is further configured to acquire the color cast type Cr of the third RAW image in the R channel according to the variation of the G_r(i, j) data distribution from the periphery to the center;
and acquire the color cast type Cb of the third RAW image in the B channel according to the variation of the G_b(i, j) data distribution from the periphery to the center.
Optionally, as an embodiment, the processor 610 is further configured to obtain, according to a mapping relationship between a preset color cast type and a compensation information adjustment coefficient, a coefficient Sr corresponding to Cr and Sb corresponding to Cb;
and adjusting the compensation value of the R channel in the first compensation information according to the Sr, and adjusting the compensation value of the B channel in the first compensation information according to the Sb to obtain second compensation information.
Optionally, as an embodiment, the relation among the adjustment coefficient S, the first compensation information T and the second compensation information T′ is: T′ = S*(T − 1) + T.
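This relation can be checked directly: with S = 0 the compensation is unchanged, while positive S scales up only the excess gain (T − 1). A one-line sketch (the function name is illustrative):

```python
def adjust_compensation(t, s):
    # Second compensation value T' = S*(T - 1) + T per the relation above.
    return s * (t - 1.0) + t
```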
Optionally, as an embodiment, the first compensation information is a first compensation table;
the processor 610 is further configured to generate a first compensation table;
the generating a first compensation table includes:
under a specific light source, shooting a flat image with the difference between the color and the brightness smaller than a preset threshold value;
downsampling the flat image;
acquiring a color value of a pixel corresponding to an R channel, a color value of a pixel corresponding to a GR channel, a color value of a pixel corresponding to a GB channel and a color value of a pixel corresponding to a B channel in a flat image obtained by downsampling;
and for each color channel P, acquiring the maximum value Max of the color values of the pixel points corresponding to channel P, and obtaining the compensation table of channel P as Max divided by the color value of each pixel point corresponding to channel P.
Optionally, as an embodiment, the processor 610 is further configured to adjust the first compensation table to obtain a second compensation table according to the adjustment coefficient;
upsampling the first compensation table and the second compensation table to obtain a third compensation table and a fourth compensation table which have the same size as the first RAW image;
Multiplying the color value of each pixel point in the first RAW image with the compensation value of the corresponding position in the third compensation table to obtain an intermediate RAW image; and multiplying the color value of each pixel point corresponding to the R channel in the intermediate RAW image by the compensation value of the corresponding position in the fourth compensation table, and multiplying the color value of each pixel point corresponding to the B channel in the intermediate RAW image by the compensation value of the corresponding position in the fourth compensation table to obtain a fourth RAW image after lens shading correction.
It should be understood that in an embodiment of the present application, the input unit 604 may include a graphics processor (Graphics Processing Unit, GPU) 6041 and a microphone 6042, and the graphics processor 6041 processes image data of still pictures or video obtained by an image capturing apparatus (e.g., a camera) in a video capturing mode or an image capturing mode. The display unit 606 may include a display panel 6061, and the display panel 6061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 607 includes a touch panel 6071 and other input devices 6072. The touch panel 6071 is also called a touch screen. The touch panel 6071 may include two parts of a touch detection device and a touch controller. Other input devices 6072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and so forth, which are not described in detail herein. The memory 609 may be used to store software programs as well as various data including, but not limited to, application programs and an operating system. The processor 610 may integrate an application processor that primarily handles operating systems, user interfaces, applications, etc., with a modem processor that primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 610.
The embodiment of the application also provides a readable storage medium, on which a program or an instruction is stored, which when executed by a processor, implements each process of the lens shading correction method embodiment, and can achieve the same technical effects, and in order to avoid repetition, the description is omitted here.
Wherein the processor is a processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium such as a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk or an optical disk, and the like.
The embodiment of the application further provides a chip, which comprises a processor and a communication interface, wherein the communication interface is coupled with the processor, and the processor is used for running programs or instructions to realize the processes of the lens shading correction method embodiment, and the same technical effects can be achieved, so that repetition is avoided, and the description is omitted here.
It should be understood that the chips referred to in the embodiments of the present application may also be referred to as system-on-chip chips, chip systems, or system-on-chip chips, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Furthermore, it should be noted that the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order shown or discussed, but may also include performing the functions in a substantially simultaneous manner or in the reverse order depending on the functions involved; for example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the method of the above embodiments may be implemented by means of software plus a necessary general hardware platform, or of course by hardware, though in many cases the former is the preferred embodiment. Based on such understanding, the technical solution of the present application may be embodied, in essence or in the part contributing to the prior art, in the form of a computer software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) comprising instructions for causing a terminal (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the methods according to the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative and not restrictive, and many forms may be made by those having ordinary skill in the art without departing from the spirit of the present application and the scope of the claims, which are to be protected by the present application.

Claims (16)

1. A lens shading correction method, the method comprising:
acquiring a first RAW image, and downsampling the first RAW image to obtain a second RAW image, wherein the first RAW image is an original RAW image to be corrected;
pre-compensating the second RAW image based on preset first compensation information to obtain a third RAW image;
acquiring the color cast type of the third RAW image according to the color value of the pixel point in the third RAW image;
obtaining an adjustment coefficient corresponding to the color cast type of the third RAW image according to the mapping relation between the preset color cast type and the compensation information adjustment coefficient;
according to the adjustment coefficient, adjusting the first compensation information to obtain second compensation information;
compensating the first RAW image based on the first compensation information to obtain an intermediate RAW image;
and compensating the intermediate RAW image based on the second compensation information to obtain a fourth RAW image after lens shading correction.
2. The method of claim 1, wherein the third RAW image is an image in an R-G(R)-G(B)-B (RGGB Bayer) arrangement;
the obtaining the color cast type of the third RAW image according to the color value of the pixel point in the third RAW image includes:
Reading a color value of a pixel corresponding to an R channel, a color value of a pixel corresponding to a GR channel, a color value of a pixel corresponding to a GB channel and a color value of a pixel corresponding to a B channel in the third RAW image;
calculating the tone component of the R channel of the third RAW image according to the color value of the pixel corresponding to the R channel and the color value of the pixel corresponding to the GR channel, wherein the tone component of one pixel of the R channel=the color value of one pixel of the R channel/the color value of the pixel at the corresponding position in the GR channel;
calculating the tone component of the B channel of the third RAW image according to the color value of the pixel corresponding to the B channel and the color value of the pixel corresponding to the GB channel, wherein the tone component of one pixel of the B channel=the color value of one pixel of the B channel/the color value of the pixel at the corresponding position in the GB channel;
performing low-pass filtering processing on the tone component of the R channel and the tone component of the B channel of the third RAW image to obtain a low-frequency part of the tone component of the R channel and a low-frequency part of the tone component of the B channel;
and acquiring the color cast type of the third RAW image according to the low-frequency part of the tone component of the R channel and the low-frequency part of the tone component of the B channel.
3. The method according to claim 2, wherein the low-pass filtering the hue component of the R channel and the hue component of the B channel of the third RAW image to obtain a low-frequency portion of the hue component of the R channel and a low-frequency portion of the hue component of the B channel, comprises:
generating a gradient map ∇R of the hue component of the R channel based on the hue component of the R channel, and generating a gradient map ∇B of the hue component of the B channel based on the hue component of the B channel;
filtering ∇R and ∇B according to a preset low-pass filtering formula to obtain two filtered data distributions, one characterizing the low-frequency part of the hue component of the R channel and the other characterizing the low-frequency part of the hue component of the B channel;
the preset low-pass filtering formula is as follows:
[formulas rendered as images in the source, expressed in terms of ∇R, ∇B, the point coordinates (i, j) and two preset thresholds]
where i and j respectively denote the abscissa and ordinate of the corresponding point, and the two thresholds are preset.
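The exact low-pass filtering formulas of claim 3 were embedded as images and did not survive extraction, so the sketch below is only a plausible stand-in: points whose gradient magnitude exceeds the preset threshold (likely scene edges rather than shading) are suppressed, and a box blur keeps the slowly varying part of the hue map. The function name, the median fallback, and the kernel width are all assumptions:

```python
import numpy as np

def low_frequency_part(hue, grad_threshold=0.05, blur=5):
    """Illustrative gradient-guided low-pass step: suppress strong edges
    in the hue map, then box-blur to keep the slowly varying (shading)
    component."""
    gy, gx = np.gradient(hue)
    grad = np.hypot(gx, gy)                         # gradient magnitude map
    smooth = hue.copy()
    smooth[grad > grad_threshold] = np.median(hue)  # suppress strong edges
    # separable box blur of width `blur`
    k = np.ones(blur) / blur
    smooth = np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 0, smooth)
    smooth = np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 1, smooth)
    return smooth
```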
4. The method of claim 3, wherein the obtaining the color cast type of the third RAW image from the low frequency part of the hue component of the R channel and the low frequency part of the hue component of the B channel comprises:
according to the change, from the periphery to the center, of the data distribution characterizing the low-frequency part of the hue component of the R channel, acquiring the color cast type of the third RAW image in the R channel;
according to the change, from the periphery to the center, of the data distribution characterizing the low-frequency part of the hue component of the B channel, acquiring the color cast type of the third RAW image in the B channel.
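The periphery-to-center comparison of claim 4 can be sketched as a comparison of region means. The three labels and the center/border geometry below are assumptions; the patent only states that the type is derived from the periphery-to-center change:

```python
import numpy as np

def color_cast_type(low_freq, tol=0.02):
    """Compare the low-frequency hue distribution at the image periphery
    with the central region and label the color cast accordingly."""
    h, w = low_freq.shape
    center = low_freq[h // 4 : 3 * h // 4, w // 4 : 3 * w // 4].mean()
    border = np.concatenate([low_freq[0].ravel(), low_freq[-1].ravel(),
                             low_freq[:, 0].ravel(), low_freq[:, -1].ravel()]).mean()
    if border - center > tol:
        return "edge_stronger"   # hue rises toward the periphery
    if center - border > tol:
        return "edge_weaker"     # hue falls toward the periphery
    return "neutral"
```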
5. The method of claim 4, wherein the obtaining the adjustment coefficient corresponding to the color cast type of the third RAW image according to the mapping relationship between the preset color cast type and the compensation information adjustment coefficient comprises:
acquiring, from the preset mapping relationship between color cast types and compensation information adjustment coefficients, the adjustment coefficient corresponding to the color cast type in the R channel and the adjustment coefficient corresponding to the color cast type in the B channel;
and the adjusting the first compensation information according to the adjustment coefficient to obtain second compensation information includes:
adjusting the compensation value of the R channel in the first compensation information according to the R-channel adjustment coefficient, and adjusting the compensation value of the B channel in the first compensation information according to the B-channel adjustment coefficient, to obtain the second compensation information.
6. The method according to claim 1, wherein the relation among the adjustment coefficient S, the first compensation information T and the second compensation information T′ is: T′ = S × (T − 1) + T.
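Claim 6's relation T′ = S × (T − 1) + T is simple enough to verify numerically. In this sketch the compensation information is represented as a NumPy array of per-pixel gains, which is a representational assumption rather than something mandated by the claim:

```python
import numpy as np

def adjust_compensation(table, s):
    """Claim 6: the second compensation information T' is obtained from
    the first compensation information T and the adjustment coefficient S
    as T' = S*(T - 1) + T, so S = 0 leaves the table unchanged and S > 0
    amplifies each gain's deviation from 1."""
    t = np.asarray(table, dtype=float)
    return s * (t - 1.0) + t
```

With S = 0 the table is unchanged; positive S pushes each gain further from 1 (a stronger correction), and negative S weakens it.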
7. The method of claim 2, wherein the first compensation information is a first compensation table;
before the pre-compensating the second RAW image based on the preset first compensation information to obtain a third RAW image, the method further includes: generating the first compensation table;
the generating a first compensation table includes:
capturing, under a specific light source, a flat-field image whose variation in color and brightness is smaller than a preset threshold;
downsampling the flat image;
acquiring the color value of the pixels corresponding to the R channel, the GR channel, the GB channel and the B channel in the downsampled flat-field image;
and, for each color channel P, obtaining the maximum value Max of the color values of the pixels corresponding to channel P, and dividing the color value of each pixel corresponding to channel P by Max to obtain the compensation table of channel P.
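The per-channel table generation of claim 7 can be sketched as below. Note that the claim as translated stores value/Max (entries ≤ 1); the reciprocal, Max/value, is the more common gain convention for shading correction, but the sketch follows the claim text literally:

```python
import numpy as np

def compensation_table(flat_channel):
    """Claim 7, per color channel P: take the maximum color value Max
    over the (downsampled) flat-field channel and divide every pixel's
    color value by it."""
    chan = np.asarray(flat_channel, dtype=float)
    return chan / chan.max()
```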
8. The method of claim 7, wherein adjusting the first compensation information based on the adjustment factor to obtain second compensation information comprises:
according to the adjustment coefficient, adjusting the first compensation table to obtain a second compensation table;
the method further comprises the steps of:
upsampling the first compensation table and the second compensation table to obtain a third compensation table and a fourth compensation table which have the same size as the first RAW image;
multiplying the color value of each pixel in the first RAW image by the compensation value at the corresponding position in the third compensation table to obtain an intermediate RAW image; and multiplying the color value of each pixel corresponding to the R channel in the intermediate RAW image by the compensation value at the corresponding position in the fourth compensation table, and multiplying the color value of each pixel corresponding to the B channel in the intermediate RAW image by the compensation value at the corresponding position in the fourth compensation table, to obtain a fourth RAW image after lens shading correction.
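The two multiplications of claim 8 can be sketched as follows, assuming the third and fourth tables have already been upsampled to the full mosaic size and that R sits at the even-row/even-column phase of the RGGB pattern (both assumptions of this sketch):

```python
import numpy as np

def correct(raw_rggb, table3, table4):
    """Claim 8's two-stage correction: scale every pixel by the third
    table, then apply the fourth table only at the R and B sites of the
    RGGB mosaic."""
    out = raw_rggb * table3                 # stage 1: whole mosaic
    out[0::2, 0::2] *= table4[0::2, 0::2]   # stage 2: R sites only
    out[1::2, 1::2] *= table4[1::2, 1::2]   # stage 2: B sites only
    return out
```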
9. A lens shading correction device, the device comprising:
the sampling module is used for acquiring a first RAW image and downsampling the first RAW image to obtain a second RAW image, wherein the first RAW image is an original RAW image to be corrected;
the pre-compensation module is used for pre-compensating the second RAW image based on preset first compensation information to obtain a third RAW image;
the first acquisition module is used for acquiring the color cast type of the third RAW image according to the color value of the pixel point in the third RAW image;
the second acquisition module is used for acquiring an adjustment coefficient corresponding to the color cast type of the third RAW image according to the mapping relation between the preset color cast type and the compensation information adjustment coefficient;
The adjusting module is used for adjusting the first compensation information to obtain second compensation information according to the adjusting coefficient;
the correction module is used for compensating the first RAW image based on the first compensation information to obtain an intermediate RAW image; and compensating the intermediate RAW image based on the second compensation information to obtain a fourth RAW image after lens shading correction.
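A compact end-to-end sketch of how the claim 9 modules might chain together. Every internal detail here (the striding downsampler, the crude periphery-vs-center cast test, the two-label cast dictionary) is an illustrative assumption; only the T′ = S × (T − 1) + T adjustment is taken from claim 6:

```python
import numpy as np

def lsc_pipeline(raw, table, s_for_cast):
    """Illustrative chaining of the claim 9 modules. `raw` is a full-size
    single-channel stand-in for the RAW mosaic, `table` the first
    compensation table at the same size, and `s_for_cast` maps a detected
    cast label to an adjustment coefficient S."""
    # sampling module: downsample by striding (stand-in)
    small_raw, small_table = raw[::2, ::2], table[::2, ::2]
    # pre-compensation module
    pre = small_raw * small_table
    # first acquisition module: crude periphery-vs-center cast estimate
    h, w = pre.shape
    center = pre[h // 4 : 3 * h // 4, w // 4 : 3 * w // 4].mean()
    border = np.concatenate([pre[0], pre[-1], pre[:, 0], pre[:, -1]]).mean()
    cast = "over" if border > center else "under"
    # second acquisition + adjustment modules: T' = S*(T - 1) + T (claim 6)
    s = s_for_cast[cast]
    table2 = s * (table - 1.0) + table
    # correction module: both tables applied to the full-size image
    return raw * table * table2
```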
10. The apparatus of claim 9, wherein the third RAW image is an image in an RG(r)G(b)B (Bayer RGGB) arrangement;
the first acquisition module includes:
a reading sub-module, configured to read the color value of the pixels corresponding to the R channel, the GR channel, the GB channel and the B channel in the third RAW image;
a first calculating sub-module, configured to calculate the hue component of the R channel of the third RAW image from the color values of the pixels corresponding to the R channel and to the GR channel, wherein the hue component of a pixel of the R channel = the color value of that pixel in the R channel / the color value of the pixel at the corresponding position in the GR channel;
a second calculating sub-module, configured to calculate the hue component of the B channel of the third RAW image from the color values of the pixels corresponding to the B channel and to the GB channel, wherein the hue component of a pixel of the B channel = the color value of that pixel in the B channel / the color value of the pixel at the corresponding position in the GB channel;
a filtering sub-module, configured to perform low-pass filtering on the hue component of the R channel and the hue component of the B channel of the third RAW image to obtain a low-frequency part of the hue component of the R channel and a low-frequency part of the hue component of the B channel;
and a first acquisition sub-module, configured to acquire the color cast type of the third RAW image according to the low-frequency part of the hue component of the R channel and the low-frequency part of the hue component of the B channel.
11. The apparatus of claim 10, wherein the filtering submodule comprises:
a first generation unit, configured to generate a gradient map ∇R of the hue component of the R channel based on the hue component of the R channel;
a second generation unit, configured to generate a gradient map ∇B of the hue component of the B channel based on the hue component of the B channel;
a filtering processing unit, configured to filter ∇R and ∇B according to a preset low-pass filtering formula to obtain two filtered data distributions, one characterizing the low-frequency part of the hue component of the R channel and the other characterizing the low-frequency part of the hue component of the B channel;
the preset low-pass filtering formula is as follows:
[formulas rendered as images in the source, expressed in terms of ∇R, ∇B, the point coordinates (i, j) and two preset thresholds]
where i and j respectively denote the abscissa and ordinate of the corresponding point, and the two thresholds are preset.
12. The apparatus of claim 11, wherein the first acquisition submodule comprises:
a first acquisition unit, configured to acquire the color cast type of the third RAW image in the R channel according to the change, from the periphery to the center, of the data distribution characterizing the low-frequency part of the hue component of the R channel;
a second acquisition unit, configured to acquire the color cast type of the third RAW image in the B channel according to the change, from the periphery to the center, of the data distribution characterizing the low-frequency part of the hue component of the B channel.
13. The apparatus of claim 12, wherein the second acquisition module comprises:
a second acquisition sub-module, configured to acquire, from the preset mapping relationship between color cast types and compensation information adjustment coefficients, the adjustment coefficient corresponding to the color cast type in the R channel and the adjustment coefficient corresponding to the color cast type in the B channel;
The adjustment module includes:
a first adjusting sub-module, configured to adjust the compensation value of the R channel in the first compensation information according to the R-channel adjustment coefficient;
a second adjusting sub-module, configured to adjust the compensation value of the B channel in the first compensation information according to the B-channel adjustment coefficient, to obtain the second compensation information.
14. The apparatus of claim 10, wherein the first compensation information is a first compensation table; the apparatus further comprises: the generation module is used for generating a first compensation table;
The generation module comprises:
a shooting sub-module, configured to capture, under a specific light source, a flat-field image whose variation in color and brightness is smaller than a preset threshold;
a first sampling submodule for downsampling the flat image;
a third acquisition sub-module, configured to acquire the color value of the pixels corresponding to the R channel, the GR channel, the GB channel and the B channel in the downsampled flat-field image;
and a fourth acquisition sub-module, configured to, for each color channel P, obtain the maximum value Max of the color values of the pixels corresponding to channel P, and divide the color value of each pixel corresponding to channel P by Max to obtain the compensation table of channel P.
15. The apparatus of claim 14, wherein the adjustment module comprises:
the third adjustment sub-module is used for adjusting the first compensation table to obtain a second compensation table according to the adjustment coefficient;
the correction module further includes:
the second sampling submodule is used for upsampling the first compensation table and the second compensation table to obtain a third compensation table and a fourth compensation table which are the same as the first RAW image in size;
A third correction sub-module, configured to multiply the color value of each pixel point in the first RAW image with the compensation value of the corresponding position in the third compensation table, so as to obtain an intermediate RAW image; and multiplying the color value of each pixel point corresponding to the R channel in the intermediate RAW image by the compensation value of the corresponding position in the fourth compensation table, and multiplying the color value of each pixel point corresponding to the B channel in the intermediate RAW image by the compensation value of the corresponding position in the fourth compensation table to obtain a fourth RAW image after lens shading correction.
16. An electronic device comprising a processor, a memory and a program or instruction stored on the memory and executable on the processor, which when executed by the processor, implements the steps of the lens shading correction method according to any one of claims 1 to 8.
CN202110432799.1A 2021-04-21 2021-04-21 Lens shading correction method and device and electronic equipment Active CN113132562B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110432799.1A CN113132562B (en) 2021-04-21 2021-04-21 Lens shading correction method and device and electronic equipment


Publications (2)

Publication Number Publication Date
CN113132562A CN113132562A (en) 2021-07-16
CN113132562B true CN113132562B (en) 2023-09-29

Family

ID=76778869


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115767290B (en) * 2022-09-28 2023-09-29 荣耀终端有限公司 Image processing method and electronic device

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010177917A (en) * 2009-01-28 2010-08-12 Acutelogic Corp White balance adjusting device, white balance adjusting method, and white balance adjusting program
CN105100550A (en) * 2014-04-21 2015-11-25 展讯通信(上海)有限公司 Shadow correction method and device and imaging system
CN108234824A (en) * 2018-03-26 2018-06-29 上海小蚁科技有限公司 Shadow correction detection parameters determine, correct detection method and device, storage medium, fisheye camera
CN109068025A (en) * 2018-08-27 2018-12-21 建荣半导体(深圳)有限公司 A kind of camera lens shadow correction method, system and electronic equipment
CN110365949A (en) * 2018-03-26 2019-10-22 展讯通信(天津)有限公司 A kind of bearing calibration of image color cast, device and electronic equipment
CN111385438A (en) * 2018-12-28 2020-07-07 展讯通信(上海)有限公司 Compensating method and device for lens shading correction and computer readable storage medium
CN112150357A (en) * 2019-06-28 2020-12-29 维沃移动通信有限公司 Image processing method and mobile terminal

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100816301B1 (en) * 2006-05-24 2008-03-24 엠텍비젼 주식회사 Apparatus and method for compensating color, and image processor, digital processing apparatus, recording medium using it
CN101833749B (en) * 2009-03-12 2012-03-28 株式会社理光 Device and method for detecting shadow in image


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Moving shadow detection using a physics-based approach; S. Nadimi; 2002 International Conference on Pattern Recognition; full text *
Shadow segmentation and elimination based on UAV aerial images; Ma Huangte; China Master's Theses Full-text Database; full text *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant