CN118250404A - Image processing method, apparatus, storage medium, and computer program product

Info

Publication number
CN118250404A
Authority
CN
China
Prior art keywords
image
value
image component
color
component
Legal status
Pending
Application number
CN202410274950.7A
Other languages
Chinese (zh)
Inventor
王振
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd

Landscapes

  • Image Processing (AREA)

Abstract

The application discloses an image processing method, an image processing device, a storage medium, and a computer program product. The method includes: acquiring a first image and a second image corresponding to a photographic subject, where the first image is a linear response image and the second image is a nonlinear response image; determining color ratio parameters according to pixel values of different image components in the first image; and performing color correction on the second image according to the color ratio parameters to obtain a color-corrected image corresponding to the photographic subject.

Description

Image processing method, apparatus, storage medium, and computer program product
Technical Field
The present invention relates to the field of image processing technology, and in particular, to an image processing method, an image processing device, a storage medium, and a computer program product.
Background
To extend the dynamic range of an image, a single-frame high dynamic range (High Dynamic Range, HDR) scheme must shorten the exposure time to prevent overexposure of highlight regions, and a shorter exposure time inevitably degrades the signal-to-noise ratio of the image; in contrast, in a multi-frame HDR synthesis scheme, the temporal differences between the captured frames make artifacts (ghosts) difficult to avoid.
That is, conventional HDR schemes, whether single-frame or multi-frame, cannot simultaneously achieve a high dynamic range, a high signal-to-noise ratio, and freedom from ghosting in motion scenes, and therefore cannot achieve a better image processing effect.
Disclosure of Invention
The embodiments of the present application provide an image processing method, an image processing device, a storage medium, and a computer program product, which can achieve both a high dynamic range and a high signal-to-noise ratio, solve the ghosting problem in motion scenes, and finally obtain a better image processing effect.
The technical scheme of the embodiment of the application is realized as follows:
in a first aspect, an embodiment of the present application provides an image processing method, including:
acquiring a first image and a second image corresponding to a photographic subject, wherein the first image is a linear response image and the second image is a nonlinear response image;
determining color ratio parameters according to pixel values of different image components in the first image; and
performing color correction on the second image according to the color ratio parameters to obtain a color-corrected image corresponding to the photographic subject.
In a second aspect, an embodiment of the present application provides an image processing apparatus, including an acquisition unit, a determination unit, and a correction unit, wherein:
the acquisition unit is configured to acquire a first image and a second image corresponding to a photographic subject, the first image being a linear response image and the second image being a nonlinear response image;
the determination unit is configured to determine color ratio parameters according to pixel values of different image components in the first image; and
the correction unit is configured to perform color correction on the second image according to the color ratio parameters to obtain a color-corrected image corresponding to the photographic subject.
In a third aspect, an embodiment of the present application provides a terminal device, including: a processor and a memory; wherein,
The memory is used for storing a computer program capable of running on the processor;
the processor is configured to perform the method according to the first aspect when the computer program is run.
In a fourth aspect, an embodiment of the present application provides a computer readable storage medium having stored thereon a program which, when executed by a processor, implements a method as described in the first aspect above.
In a fifth aspect, embodiments of the present application provide a computer program product comprising a computer program or instructions which, when executed by a processor, implement a method as described in the first aspect above.
The embodiments of the present application provide an image processing method, an image processing device, a storage medium, and a computer program product. A first image and a second image corresponding to a photographic subject are acquired, where the first image is a linear response image and the second image is a nonlinear response image; color ratio parameters are determined according to pixel values of different image components in the first image; and color correction is performed on the second image according to the color ratio parameters to obtain a color-corrected image corresponding to the photographic subject. That is, in the embodiments of the present application, the HDR effect is achieved by the nonlinear response image, i.e., the second image, while the color ratio parameters provided by the linear response image, i.e., the first image, which has a good signal-to-noise ratio, are used to color correct the second image. This achieves both a high dynamic range and a high signal-to-noise ratio, solves the ghosting problem in motion scenes, and finally yields a better image processing effect.
Drawings
Fig. 1 is a schematic diagram of a linear FWC response curve and a nonlinear FWC response curve of a Sensor;
Fig. 2 is a first schematic implementation flow chart of an image processing method according to an embodiment of the present application;
Fig. 3 is a second schematic implementation flow chart of the image processing method according to an embodiment of the present application;
Fig. 4 is a schematic diagram of image components according to an embodiment of the present application;
Fig. 5 is a third schematic implementation flow chart of the image processing method according to an embodiment of the present application;
Fig. 6 is a schematic diagram of an implementation of a highlight protection algorithm according to an embodiment of the present application;
Fig. 7 is a first schematic implementation diagram of the image processing method according to an embodiment of the present application;
Fig. 8 is a second schematic implementation diagram of the image processing method according to an embodiment of the present application;
Fig. 9 is a schematic diagram of an implementation structure of image processing according to an embodiment of the present application;
Fig. 10 is a schematic diagram of the composition structure of an image processing apparatus according to an embodiment of the present application;
Fig. 11 is a schematic diagram of the composition structure of a terminal device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings. It is to be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to be limiting. It should be noted that, for convenience of description, only the portions related to the present application are shown in the drawings.
Current image sensors (Sensors) basically use the linear full well capacity (Full Well Capacity, FWC) characteristic, in which the pixel value increases linearly with the exposure amount: the larger the exposure amount, the larger the pixel value, until the pixel value reaches the maximum value of the pixel bit width and no longer changes. Current high dynamic range (High Dynamic Range, HDR) schemes, whether single-frame or multi-frame, use images output with the Sensor's linear FWC characteristic. If a highlight region of the image has already reached the maximum pixel value, a single-frame HDR scheme cannot recover the texture and color of that region; in a multi-frame HDR scheme, the frames participating in HDR synthesis generally use different exposure parameters and therefore differ in brightness, and during composition the pixel values of the same region in a darker frame are used to compensate the overexposed region.
As with the related wide dynamic range (Wide Dynamic Range, WDR) techniques, both the single-frame and multi-frame schemes have their own problems. Single-frame HDR must shorten the exposure time to increase the dynamic range of the image and prevent overexposure of highlight regions, and a shorter exposure time inevitably degrades the signal-to-noise ratio. In the multi-frame HDR synthesis scheme, the temporal differences between the captured frames cause the content to shift between frames, so strict alignment and fusion strategies are required, and artifacts (ghosts) are still difficult to avoid; furthermore, in the multi-frame scheme every frame output by the Sensor must be preprocessed before fusion, so the processing performed before multi-frame fusion is multiplied and the relative power consumption is higher.
Fig. 1 is a schematic diagram of a linear FWC response curve and a nonlinear FWC response curve of a Sensor. As shown in fig. 1, for the linear FWC characteristic the pixel value is proportional to the exposure amount until the pixel value saturates. The response curve of the nonlinear FWC (N-FWC) characteristic is different: the relation between pixel value and exposure amount is not entirely linear, and the curve is divided into two segments, a linear region and a nonlinear region. At small exposure amounts the curve is in the linear region, where the pixel value is linearly proportional to the exposure amount; as the exposure amount increases, the curve gradually enters the nonlinear region, where the pixel value still increases with the exposure amount but in a progressively flattening, nonlinear manner. A Sensor with this characteristic does not easily saturate in highlight regions and is therefore very useful for preserving the dynamic range of an image.
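For intuition only, the following is a minimal numerical sketch of such a piecewise response curve (the exact nonlinear segment is Sensor-specific and calibrated; the exponential roll-off and all constants below are assumptions for illustration):

import numpy as np

def nfwc_response(exposure, gain=1.0, knee_val=1024.0, max_val=4095.0, tau=2048.0):
    # Toy model of an N-FWC response curve (illustrative only).
    # Below the inflection point kneeVal the pixel value grows linearly
    # with exposure; above it, the value keeps rising but saturates
    # toward the bit-width maximum, so highlights are slow to clip.
    linear = gain * np.asarray(exposure, dtype=np.float64)
    excess = np.maximum(linear - knee_val, 0.0)
    nonlinear = knee_val + (max_val - knee_val) * (1.0 - np.exp(-excess / tau))
    return np.where(linear <= knee_val, linear, nonlinear)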
It should be noted that the response curves of the Sensor's R/G/B channels are not identical, so the three color components at the same pixel location may lie at different positions on their respective response curves, with some components in the linear region and others in the nonlinear region; as a result, the colors of the displayed image are inaccurate.
That is, conventional HDR schemes, whether single-frame or multi-frame, cannot simultaneously achieve a high dynamic range, a high signal-to-noise ratio, and freedom from ghosting in motion scenes, and therefore cannot achieve a better image processing effect.
To solve the above problems, in an embodiment of the present application, a first image and a second image corresponding to a photographic subject are acquired, where the first image is a linear response image and the second image is a nonlinear response image; color ratio parameters are determined according to pixel values of different image components in the first image; and color correction is performed on the second image according to the color ratio parameters to obtain a color-corrected image corresponding to the photographic subject. That is, in the embodiment of the present application, the HDR effect is achieved by the nonlinear response image, i.e., the second image, while the color ratio parameters provided by the linear response image, i.e., the first image, which has a good signal-to-noise ratio, are used to color correct the second image. This achieves both a high dynamic range and a high signal-to-noise ratio, solves the ghosting problem in motion scenes, and finally yields a better image processing effect.
An embodiment of the present application provides an image processing method that can be applied to an image processing apparatus, to a terminal device, or to a terminal device provided with an image processing apparatus. The image processing method is described below taking an image processing apparatus as an example.
Further, in an embodiment of the application, fig. 2 is a schematic diagram of an implementation flow of an image processing method according to an embodiment of the present application, and as shown in fig. 2, the image processing method may include the following steps:
Step 101, acquiring a first image and a second image corresponding to a photographic subject; the first image is a linear response image, and the second image is a nonlinear response image.
In the embodiment of the application, the image processing device can firstly acquire the first image and the second image corresponding to the shooting device. The first image may be a linear response image, and the second image may be a nonlinear response image.
In the embodiment of the present application, the first image being a linear response image can be understood as meaning that the pixel value at every pixel position in the first image belongs to the linear region. The second image being a nonlinear response image can be understood as meaning that there exist pixel positions in the second image whose pixel values belong to the nonlinear region.
In an embodiment of the present application, the pixel value dividing the linear region from the nonlinear region of the N-FWC characteristic response curve is referred to as the inflection point kneeVal; values smaller than the inflection point are all considered to be in the linear region, and values larger than the inflection point are considered to be in the nonlinear region. The inflection point value of each Sensor can be calibrated, i.e., the inflection point of the N-FWC characteristic response curve is considered known.
Further, in the embodiment of the present application, when acquiring the first image and the second image corresponding to the photographic subject, the short-exposure first image and the long-exposure second image may be acquired by the same image sensor; or the short-exposure first image and the long-exposure second image may be acquired by different image sensors; or the first image may be acquired based on the FWC characteristic and the second image based on the N-FWC characteristic.
Illustratively, in some embodiments, the image processing apparatus may capture the photographic subject with the same image sensor to obtain the first image and the second image, where the first image may be a short-exposure frame and the second image may be a long-exposure frame.
For example, in some embodiments, the image processing apparatus may capture the photographic subject simultaneously with different image sensors to obtain the first image and the second image, respectively, where the first image may be a short-exposure frame and the second image may be a long-exposure frame.
For example, in some embodiments, the image processing apparatus may output the first image with the linear response characteristic and the second image with the nonlinear response characteristic based on the FWC characteristic and the N-FWC characteristic, respectively.
That is, in the embodiment of the present application, the long-exposure frame (the second image) and the short-exposure frame (the first image) acquired by the image processing apparatus may both be output by the same Sensor, or may be output by different Sensors; it is even possible for one Sensor to output the nonlinear response image, i.e., the second image, using the N-FWC characteristic while another Sensor with the ordinary linear FWC characteristic generates the linear response image, i.e., the first image.
In the embodiment of the application, in the process of acquiring the long-exposure frame and the short-exposure frame, any exposure mode supported by the image sensor may be selected to acquire the image frames; the present application is not particularly limited. For example, the acquired long-exposure frame and short-exposure frame may be two frames exposed in 2DOL (Digital Over Lap) mode, or two frames exposed separately in the time domain.
In the embodiment of the present application, the resolution of the first image and the second image may be the same or different, and the present application is not particularly limited.
Step 102, determining color ratio parameters according to pixel values of different image components in the first image.
In the embodiment of the present application, after the first image and the second image corresponding to the photographic subject are acquired, the color ratio parameters may be further determined according to the pixel values of different image components in the first image.
In an embodiment of the present application, the color ratio parameters may be used to perform the color correction processing, and may comprise ratio values between the pixel values of different image components.
It will be appreciated that, in embodiments of the application, the first image and the second image may use different data formats, and the corresponding image components may likewise take different forms. For example, the image components corresponding to the first image or the second image may include an R component, a G component, and a B component; or a Y component, a U component, and a V component; the present application is not particularly limited.
In the embodiment of the present application, the image processing method provided by the present application is exemplified by taking the first image component as the R component, the second image component as the G component, and the third image component as the B component. Of course, the image processing method may be applied to other types of image components, and the present application is not particularly limited.
Further, in an embodiment of the present application, fig. 3 is a second schematic implementation flow chart of an image processing method according to an embodiment of the present application. As shown in fig. 3, before the color ratio parameters are determined according to pixel values of different image components in the first image, the image processing method may further include the following steps:
Step 104, performing smoothing filtering on the first image.
In an embodiment of the present application, the image processing apparatus may perform smoothing filtering on the first image after the first image is acquired and before the color ratio parameters are determined according to the pixel values of the different image components in the first image.
In the embodiment of the present application, in order to reduce the interference of noise, the image processing apparatus may perform smoothing filtering on the different image components in the first image before determining the color ratio parameters. That is, in the present application, the image processing apparatus may determine the color ratio parameters based on the pixel values of the different image components of the smoothed first image.
In the embodiment of the present application, the method of smoothing filtering and the neighborhood range selected during smoothing filtering are not particularly limited.
It is understood that, in the embodiment of the present application, the smoothing filtering may be adjusted according to the Sensor, the exposure time, the sensitivity (ISO), and the like.
Illustratively, in some embodiments, taking the simplest mean filtering as an example, a 5×5 mean filter may be selected; a smoothing method with an edge-preserving effect, such as bilateral filtering, may also be selected.
Illustratively, in some embodiments, taking a raw image with the three image components R\G\B as an example, the three image components may be smoothed separately; after each component is smoothed, the smoothed pixel is placed back at its original position to obtain a smoothed raw image, as in the sketch below.
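As an illustration, the following is a minimal sketch of such per-component smoothing on an RGGB Bayer raw image (the function name, the 5×5 kernel, and the use of plain mean filtering are assumptions for illustration; an edge-preserving filter such as bilateral filtering could be substituted):

import numpy as np
from scipy.ndimage import uniform_filter

def smooth_bayer_rggb(raw, ksize=5):
    # Smooth each Bayer component (R, Gr, Gb, B) separately: extract each
    # component as a half-resolution plane, mean-filter it, and write it
    # back to its original positions, as described above.
    out = np.empty_like(raw, dtype=np.float32)
    for dy, dx in [(0, 0), (0, 1), (1, 0), (1, 1)]:  # R, Gr, Gb, B offsets
        plane = raw[dy::2, dx::2].astype(np.float32)
        out[dy::2, dx::2] = uniform_filter(plane, size=ksize, mode="nearest")
    return out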
Further, in the embodiment of the present application, when determining the color ratio parameters according to pixel values of different image components in the first image, a first color ratio corresponding to a first pixel position may be determined from the pixel value of the first image component and the pixel value of the second image component at that position, and a second color ratio corresponding to the first pixel position may be determined from the pixel value of the first image component and the pixel value of the third image component at that position; the color ratio parameters are then determined based on the first color ratio and the second color ratio corresponding to each pixel position in the first image.
In an embodiment of the present application, the first pixel position is any pixel position in the first image.
In some embodiments, fig. 4 is a schematic diagram of image components according to an embodiment of the present application. As shown in fig. 4, taking the RGGB structure in a Bayer array (Bayer pattern) as an example, one Bayer array position in an image has image components corresponding to R, Gr, Gb, and B.
It should be noted that, in the embodiment of the present application, the image processing apparatus may calculate the ratio values between different image components for any pixel position in the first image, i.e., the first pixel position, so as to obtain the color ratios of the different image components at the first pixel position, and may then determine the color ratio parameters of the first image based on the color ratios between the pixel values of the different image components at all pixel positions in the first image.
That is, in embodiments of the present application, the color ratio parameters of the first image may include one or more color ratios at each pixel position.
For example, in some embodiments, for the smoothed first image, two color ratios, GR and BR, may be calculated for any pixel position therein, such as the first pixel position, where GR may include GrR and GbR.
Illustratively, in some embodiments, assuming that the pixel value of the first image component is R, the pixel values of the second image component are Gr and Gb, and the pixel value of the third image component is B, the image processing apparatus may determine the corresponding first color ratios GrR and GbR using the pixel value of the first image component and the pixel values of the second image component, with reference to the following formulas:
GrR = Gr / R (1)
GbR = Gb / R (2)
The image processing apparatus may determine the corresponding second color ratio BR using the pixel value of the first image component and the pixel value of the third image component, referring to the following formula:
BR = B / R (3)
In the embodiment of the present application, the image processing apparatus may traverse each pixel position in the first image according to the above method and determine the color ratios corresponding to each pixel position, i.e., the first color ratio and the second color ratio, so as to obtain the color ratios corresponding to all pixel positions in the first image, and finally determine the color ratio parameters according to the first color ratio and the second color ratio corresponding to each pixel position in the first image; a sketch of this computation follows.
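For illustration, the following is a minimal sketch of computing the GrR, GbR, and BR ratio maps per Bayer cell from the smoothed first image (names are hypothetical; the small epsilon guarding against division by zero is an added assumption):

import numpy as np

def color_ratio_maps(smoothed_raw, eps=1e-6):
    # Compute per-Bayer-cell color ratios per formulas (1)-(3).
    r  = smoothed_raw[0::2, 0::2].astype(np.float32)
    gr = smoothed_raw[0::2, 1::2].astype(np.float32)
    gb = smoothed_raw[1::2, 0::2].astype(np.float32)
    b  = smoothed_raw[1::2, 1::2].astype(np.float32)
    denom = np.maximum(r, eps)  # avoid division by zero in dark pixels
    return gr / denom, gb / denom, b / denom  # GrRmap, GbRmap, BRmap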
It will be appreciated that, in embodiments of the present application, the image processing apparatus uses the first image to determine the color ratio parameters; that is, the function of the first image is to provide reference color ratio parameters for the subsequent color correction processing.
Step 103, performing color correction on the second image according to the color ratio parameters to obtain a color-corrected image corresponding to the photographic subject.
In the embodiment of the application, after the color ratio parameters are determined according to the pixel values of different image components in the first image, color correction may further be performed on the second image according to the color ratio parameters to obtain a color-corrected image corresponding to the photographic subject.
In the embodiment of the application, after the color ratio parameters for the color correction processing are determined based on the first image, these parameters may be used to color correct the second image corresponding to the first image, thereby obtaining the color-corrected image.
In an embodiment of the present application, since the second image is a nonlinear response image, the pixel values at certain pixel positions in the second image may lie in the nonlinear region of the response curve, and color correction may therefore be performed selectively using the color ratio parameters determined based on the first image.
It should be noted that, in the embodiment of the present application, the linear region and the nonlinear region may be distinguished using a segmentation value (i.e., the inflection point kneeVal): pixel values smaller than the segmentation value are considered to be in the linear region, and pixel values larger than the segmentation value are considered to be in the nonlinear region.
It is understood that, in the embodiment of the present application, different image components may correspond to different inflection points kneeVal; that is, the segmentation values corresponding to different image components may differ.
Further, in the embodiment of the present application, in the process of performing color correction on the second image, the segmentation value corresponding to each image component may be used to determine whether the pixel value of that image component is in the nonlinear region, i.e., whether the pixel value of that image component needs color correction.
Further, in the embodiment of the present application, when performing color correction on the second image according to the color ratio parameters to obtain the color-corrected image corresponding to the photographic subject, the pixel value of the first image component, the pixel value of the second image component, and the pixel value of the third image component at a second pixel position may each be color corrected according to the first segmentation value corresponding to the first image component, the second segmentation value corresponding to the second image component, the third segmentation value corresponding to the third image component, and the color ratio parameters, so as to determine the correction value of the first image component, the correction value of the second image component, and the correction value of the third image component; the color-corrected image is then determined based on the correction values of the first, second, and third image components at each pixel position in the second image.
In an embodiment of the present application, the second pixel location is any pixel location in the second image.
It should be noted that, in the embodiment of the application, the first image and the second image correspond to each other; that is, the pixel positions in the first image and the second image are in one-to-one correspondence.
In embodiments of the present application, for any pixel position in the second image, such as the second pixel position, the one or more color ratios at the corresponding first pixel position in the first image may be used to color correct the pixel values of the image components at the second pixel position.
In the embodiment of the present application, assume that the pixel value of the first image component is R, the pixel value of the second image component is Gi (Gr or Gb), and the pixel value of the third image component is B. After the color ratio parameters GiR (GrR or GbR) and BR are determined using the pixel values of the first, second, and third image components in the first image, the color ratio parameters GiR and BR may be combined with the segmentation values (inflection points kneeVal) corresponding to the different image components to complete the color correction of the image components in the second image.
Illustratively, in some embodiments, the original pixel value before color correction may be denoted in_x, and accordingly the result after color correction may be denoted out_x, where x is one of R, Gi (Gr or Gb), and B.
Further, in the embodiment of the present application, when the pixel value of the first image component is less than or equal to the first segmentation value while the pixel value of the second image component is greater than the second segmentation value and the pixel value of the third image component is greater than the third segmentation value, the pixel value of the first image component may be determined as the correction value of the first image component, and the correction value of the second image component and the correction value of the third image component may be determined based on the correction value of the first image component and the color ratio parameters.
For example, in some embodiments, in performing color correction, if for any pixel position in the second image the pixel value R of the first image component, the pixel value Gi of the second image component, and the pixel value B of the third image component are each greater than their respective segmentation values kneeVal-R, kneeVal-Gi, and kneeVal-B, then the pixel value R at that pixel position may be taken as the correction value of the color-corrected first image component; that is, no color correction is applied to R: out_R = in_R. Accordingly, the pixel value Gi of the second image component may be color corrected using the color ratio GiR of the corresponding pixel position, out_Gi = in_R × GiR, to obtain the correction value of the second image component, and the pixel value B of the third image component may be color corrected using the color ratio BR of the corresponding pixel position, out_B = in_R × BR, to obtain the correction value of the third image component.
Further, in the embodiment of the present application, when the pixel value of the first image component is less than or equal to the first segmentation value, the pixel value of the first image component may likewise be determined as the correction value of the first image component; the correction value of the second image component is determined based on the correction value of the first image component and the color ratio parameters when the pixel value of the second image component is greater than or equal to the second segmentation value, and the correction value of the third image component is determined in the same way when the pixel value of the third image component is greater than or equal to the third segmentation value.
For example, in some embodiments, in performing color correction, if for any pixel position in the second image only the pixel value R of the first image component is less than or equal to its segmentation value kneeVal-R, then the pixel value R at that pixel position may be taken as the correction value of the color-corrected first image component, i.e., out_R = in_R. Accordingly, if the pixel value Gi of the second image component is greater than its segmentation value kneeVal-Gi, Gi may be color corrected using the color ratio GiR of the corresponding pixel position, out_Gi = in_R × GiR; otherwise the pixel value Gi at that pixel position is taken as the correction value of the color-corrected second image component, i.e., out_Gi = in_Gi. Likewise, if the pixel value B of the third image component is greater than its segmentation value kneeVal-B, B may be color corrected using the color ratio BR of the corresponding pixel position, out_B = in_R × BR; otherwise the pixel value B at that pixel position is taken as the correction value of the color-corrected third image component, i.e., out_B = in_B.
Further, in the embodiment of the present application, when the pixel value of the first image component is greater than the first segmentation value and the pixel value of the third image component is less than or equal to the third segmentation value, the pixel value of the third image component may be determined as the correction value of the third image component, and the correction value of the first image component and the correction value of the second image component may be determined based on the correction value of the third image component and the color ratio parameters.
Illustratively, in some embodiments, in performing color correction, if for any pixel position in the second image the pixel value R of the first image component is greater than the first segmentation value kneeVal-R and the pixel value B of the third image component is less than or equal to the third segmentation value kneeVal-B, then the pixel value B at that pixel position may be taken as the correction value of the color-corrected third image component, i.e., out_B = in_B. Accordingly, the pixel value R of the first image component may be color corrected using the color ratio BR of the corresponding pixel position, out_R = in_B / BR. Then, if the pixel value Gi of the second image component is greater than its segmentation value kneeVal-Gi, Gi may be color corrected using the color ratio GiR of the corresponding pixel position and the corrected value out_R, out_Gi = out_R × GiR; otherwise the pixel value Gi at that pixel position is taken as the correction value of the color-corrected second image component, i.e., out_Gi = in_Gi.
Further, in the embodiment of the present application, when the pixel value of the first image component is greater than the first segmentation value and the pixel value of the second image component is less than or equal to the second segmentation value, the pixel value of the second image component is determined as the correction value of the second image component, and the correction value of the first image component and the correction value of the third image component are determined based on the correction value of the second image component and the color ratio parameters.
Illustratively, in some embodiments, in performing color correction, if for any pixel position in the second image the pixel value R of the first image component is greater than the first segmentation value kneeVal-R and the pixel value Gi of the second image component is less than or equal to the second segmentation value kneeVal-Gi, then the pixel value Gi at that pixel position may be taken as the correction value of the color-corrected second image component, i.e., out_Gi = in_Gi. Accordingly, the pixel value R of the first image component may be color corrected using the color ratio GiR of the corresponding pixel position, out_R = in_Gi / GiR. Then, if the pixel value B of the third image component is greater than its segmentation value kneeVal-B, B may be color corrected using the color ratio BR of the corresponding pixel position and the corrected value out_R, out_B = out_R × BR; otherwise the pixel value B at that pixel position is taken as the correction value of the color-corrected third image component, i.e., out_B = in_B. The per-pixel sketch below brings these cases together.
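Bringing the four cases together, the following is a minimal per-pixel sketch of the correction logic (the ordering of the branches follows the reading above; correcting a remaining nonlinear component from the already-corrected anchor, e.g. out_R, is an interpretation of the claim language rather than a literal transcription):

def correct_pixel(in_R, in_Gi, in_B, GiR, BR, knee_R, knee_Gi, knee_B):
    # Color-correct one pixel of the nonlinear (second) image. A component
    # still below its kneeVal anchors the correction; components in the
    # nonlinear region are rebuilt from the anchor via the ratios GiR, BR.
    if in_R <= knee_R:
        # R is linear: keep it, rebuild Gi and B from it where needed.
        out_R = in_R
        out_Gi = in_R * GiR if in_Gi > knee_Gi else in_Gi
        out_B = in_R * BR if in_B > knee_B else in_B
    elif in_B <= knee_B:
        # R nonlinear but B linear: anchor on B.
        out_B = in_B
        out_R = in_B / BR
        out_Gi = out_R * GiR if in_Gi > knee_Gi else in_Gi
    elif in_Gi <= knee_Gi:
        # R and B nonlinear but Gi linear: anchor on Gi.
        out_Gi = in_Gi
        out_R = in_Gi / GiR
        out_B = out_R * BR
    else:
        # All components above their kneeVal: keep R and rescale Gi and B
        # so the color ratios match the first image; absolute brightness
        # is handled later by HLR and BLTM.
        out_R, out_Gi, out_B = in_R, in_R * GiR, in_R * BR
    return out_R, out_Gi, out_B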
It should be noted that, in the embodiment of the present application, after the second image is color corrected using the color ratio parameters determined based on the first image, the pixel values in the nonlinear region of the second image are restored to normal colors, and the corresponding color-corrected image can thus be obtained.
It can be understood that, in the embodiment of the present application, the color-corrected image corresponding to the photographic subject is obtained after the first image is used to assist the color correction of the second image. The color ratio parameters obtained based on the first image provide an accurate and reliable color ratio relationship between different image components for the color correction of the second image, so that the normal colors of the second image can be recovered.
Further, in an embodiment of the present application, fig. 5 is a third schematic implementation flow chart of an image processing method according to an embodiment of the present application. As shown in fig. 5, after color correction is performed on the second image according to the color ratio parameters to obtain the color-corrected image corresponding to the photographic subject, that is, after step 103, the image processing method may further include the following steps:
Step 105, performing lens shading correction, automatic white balance, highlight recovery, and Bayer-domain local tone mapping on the color-corrected image to obtain a processed image corresponding to the photographic subject.
In an embodiment of the present application, after performing color correction on the second image according to the color ratio parameters to obtain the color-corrected image corresponding to the photographic subject, the image processing apparatus may further perform subsequent image processing on the color-corrected image, including but not limited to lens shading correction (Lens Shading Correction, LSC), automatic white balance (Auto White Balance, AWB), highlight recovery (High-Light Recovery, HLR), and Bayer-domain local tone mapping (Bayer Local Tone Mapping, BLTM), finally obtaining a processed image corresponding to the photographic subject.
In an embodiment of the present application, lens shading correction (LSC) is an important component of an Image Signal Processing (ISP) algorithm, used to correct the lens shading phenomenon.
In the embodiment of the application, automatic white balance (AWB) uses an algorithm to restore the white of objects imaged under ambient light of different color temperatures.
In the embodiments of the present application, the algorithms used in the LSC processing and the AWB processing are not particularly limited.
It should be noted that, in the embodiment of the present application, the first bit width of the image after LSC processing is equal to the second bit width of the image after AWB processing; that is, the bit widths of the images after LSC processing and AWB processing are the same.
Illustratively, in some embodiments, the bit width of the color corrected image may be less than or equal to the first bit width of the LSC processed image.
Illustratively, in some embodiments, all or some of the pixels may be multiplied by different gain values during the LSC and AWB processing, i.e., out_val = in_val × gain. In some cases, out_val may exceed the maximum value max_val representable by the bit width of in_val, especially when a highlight region is at the edge of the image. The common practice is to clip values with out_val > max_val to the maximum value, but this easily causes loss of detail in highlights.
It will be appreciated that, in the embodiment of the present application, in order to retain more highlight detail, the image processing apparatus may first preserve the highlight detail as far as possible by expanding the bit width of out_val, and then process the highlight detail in the subsequent HLR and BLTM stages.
That is, in the embodiment of the present application, the image processing apparatus may choose not to perform the clip operation during the color correction, LSC, and AWB processing, thereby ensuring that more highlight detail is retained.
In embodiments of the present application, the highlight recovery (HLR) technique utilizes image data at different exposure levels and restores overexposed regions as much as possible through synthesis and exposure adjustment.
In the embodiment of the present application, in the process of performing HLR, the image processing apparatus may perform the bit width correction process according to the anchor value and the bit width threshold value.
It should be noted that, in the embodiment of the present application, the third bit width of the image after HLR processing is less than or equal to the first bit width.
It can be understood that, in the embodiment of the present application, when the clip operation is not performed during the color correction, LSC, and AWB processing and pixel values exceeding the maximum value are instead preserved by expanding the bit width, a bit-width correction process can be performed during the subsequent HLR processing so that the data range exceeding the maximum value is compressed into the designated bit width, thereby avoiding the highlight detail loss caused by directly using the clip operation.
In some embodiments, fig. 6 is a schematic diagram of an implementation of the highlight protection algorithm according to the embodiment of the present application. As shown in fig. 6, suppose the HLR input uses 20-bit data whose valid data lies in the 16-bit range, with the extra 4 bits holding values that exceeded 16 bits in the preceding computations instead of being clipped; the HLR module then needs to recompress the 20-bit data into the 16-bit range. Here In_max_val denotes the maximum 16-bit input value, Out_max_val the maximum 16-bit output value, Bit20_max_val the maximum 20-bit value, and Anchor_val the anchor value set for the correction, which is smaller than Out_max_val. One possible implementation is that, in the 20-bit long frame, values not greater than Anchor_val keep their original values after correction, while values between Anchor_val and Bit20_max_val are corrected into the range between Anchor_val and Out_max_val.
For example, in some embodiments, assuming that the current value to be corrected is Bit20_cur_val and the corrected value is Bit16_cur_val, the specific correction procedure is as follows:
if (Bit20_cur_val <= Anchor_val)
    Bit16_cur_val = Bit20_cur_val
else
    (Bit20_cur_val - Anchor_val) / (Bit20_max_val - Anchor_val) = (Bit16_cur_val - Anchor_val) / (Out_max_val - Anchor_val)
It can be appreciated that, in the embodiment of the present application, since every quantity in the above proportion except Bit16_cur_val is known, the value Bit16_cur_val obtained by compressing Bit20_cur_val to 16 bits can be solved for directly. The corrected values may additionally be smoothed, for example by neighborhood filtering, to make the result smoother.
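The following is a minimal sketch of this bit-width compression under the reconstruction above (the concrete anchor and range values are illustrative; the neighborhood filtering mentioned in the text is omitted):

import numpy as np

def compress_highlights(img20, anchor_val=60000, out_max_val=65535, bit20_max_val=(1 << 20) - 1):
    # Recompress 20-bit working data into the 16-bit output range:
    # values up to anchor_val are kept unchanged; values in
    # (anchor_val, bit20_max_val] are mapped linearly into
    # (anchor_val, out_max_val], preserving highlight detail.
    img = img20.astype(np.float64)
    scale = (out_max_val - anchor_val) / (bit20_max_val - anchor_val)
    out = np.where(img <= anchor_val, img, anchor_val + (img - anchor_val) * scale)
    return np.clip(out, 0, out_max_val).astype(np.uint16)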
Further, in the embodiment of the present application, before performing the HLR processing, the image processing apparatus may also perform LSC, AWB, and brightness alignment on the first image to obtain an aligned image, and then perform the BLTM processing based on the aligned image and the HLR-processed image to obtain the processed image corresponding to the photographic subject.
In the embodiment of the present application, the brightness alignment process mainly pulls the smoothed short-frame image (the first image) up to the brightness of the long frame (the second image) according to the exposure information. Because the pixels of the short frame are all in the linear region, they remain linear after scaling, and their brightness represents the target brightness of the image. The pixels in the nonlinear region of the long frame have been corrected so that their color ratio relationship is correct, but their brightness still deviates somewhat from the target brightness; therefore, the subsequent BLTM processing can refer to part of the brightness information of the brightness-aligned short frame.
In the embodiment of the present application, the image processing apparatus may sequentially perform LSC, AWB, and brightness alignment on the smoothed first image, where the LSC and AWB processing before the brightness alignment may be implemented in the same manner as the LSC and AWB processing performed on the color-corrected image.
Illustratively, in some embodiments, assuming that the exposure time and gain of the first image F_s are T_s and Gain_s, respectively, and the exposure time and gain of the second image F_l are T_l and Gain_l, respectively, the brightness-aligned image is given by the following formula, where ob denotes the black level:
F_s_align = (F_s - ob) × T_l × Gain_l / (T_s × Gain_s) (4)
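A minimal sketch of this brightness alignment follows (assuming ob is the sensor black level, as read above; names are illustrative):

import numpy as np

def align_brightness(f_s, ob, t_s, gain_s, t_l, gain_l):
    # Scale the short frame to the long frame's brightness, formula (4):
    # the ratio of total exposures (time x gain) maps short-frame pixel
    # values, after black-level subtraction, onto the long frame's scale.
    ratio = (t_l * gain_l) / (t_s * gain_s)
    return (np.asarray(f_s, dtype=np.float32) - ob) * ratio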
Note that, in the embodiment of the present application, since the function of the first image is to provide reference color ratio parameters for the color correction processing of the second image, the LSC, AWB, and brightness alignment processing of the first image may be performed selectively, or no subsequent processing may be performed on it at all; the present application is not particularly limited.
In the embodiment of the application, a local tone mapping (Local Tone Mapping, LTM) algorithm changes the local brightness and contrast of an image by combining parameters of the reference image's exposure information, such as ISO and gain, with the image's own information, such as brightness, texture, and histogram; it can operate in the YUV domain, the RGB domain, or the Bayer domain. BLTM is the LTM algorithm operating in the Bayer domain.
It should be noted that, in the embodiment of the present application, BLTM may be performed by using various methods, and the present application is not particularly limited.
Illustratively, in some embodiments, both the input processed first image and the input processed second image may be converted to corresponding gray maps, and both gray maps are then divided into M×N blocks, with the histogram and average brightness of each block calculated separately to give the module greater adjustability. For example, the histograms of the two gray maps may be combined by a weighted average to form the final histogram, where the weighting coefficient is adjusted according to the average brightness of each block of the gray map corresponding to the first image: the smaller the average brightness, the larger the weight given to the histogram corresponding to the second image, and the larger the average brightness, the larger the weight given to the histogram of the first image, because in that case the brightness information of the first image is more accurate. The newly generated histogram is used for the subsequent histogram equalization until a mapping table, called a Lut, is generated for each block; the mapping process is then performed based on the Lut tables. A sketch of this per-block merge is given below.
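The following is a minimal sketch of this per-block weighted histogram merge and Lut generation (the exact weighting function is an assumption; the text only requires the first image's histogram to gain weight as its block brightness increases):

import numpy as np

def block_lut(block_s, block_l, bins=256):
    # block_s / block_l: the same block taken from the gray maps of the
    # processed first image and processed second image, values in [0, bins).
    hist_s, _ = np.histogram(block_s, bins=bins, range=(0, bins))
    hist_l, _ = np.histogram(block_l, bins=bins, range=(0, bins))
    # Brighter first-image block -> trust the first image's histogram more.
    w = np.clip(block_s.mean() / (bins - 1), 0.0, 1.0)  # assumed weighting
    hist = w * hist_s + (1.0 - w) * hist_l
    # Histogram equalization: cumulative distribution -> per-block Lut.
    cdf = np.cumsum(hist).astype(np.float64)
    cdf /= cdf[-1]
    lut = np.round(cdf * (bins - 1)).astype(np.uint16)
    return lut  # apply per block as gray_out = lut[gray_in]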
In summary, according to the image processing method provided by the embodiment of the present application: first, the HDR effect can be achieved by the second image with the N-FWC characteristic while the first image with the FWC characteristic assists its color correction, which solves both the problem that a single-frame HDR scheme cannot achieve dynamic range and signal-to-noise ratio at the same time and the ghosting problem of a dual-frame HDR scheme. Second, the first image may be used only to provide auxiliary color correction information, with most of the image processing operations performed on the second image, so the power consumption of image processing can be reduced. Third, in the HLR processing, the clip operation on highlight regions can be avoided through bit-width correction, further improving the detail and color fidelity of highlight regions.
Therefore, the image processing method provided by the embodiment of the application can generate a second image with an HDR effect using a Sensor with the N-FWC characteristic, while using the accurate color ratio relationship provided by the first image to correct the color ratios of the second image in the nonlinear region and recover its normal colors.
In some embodiments, fig. 7 is a first schematic diagram illustrating an implementation of an image processing method according to an embodiment of the present application. As shown in fig. 7, an image processing apparatus for performing image processing may include a smoothing filter module, an image sensor, a calculation module, a color correction module, an LSC module, an AWB module, an HLR module, a brightness alignment module, and a BLTM module. After the first image and the second image are obtained by the image sensor, on the one hand the first image may be subjected to large-scale smoothing filtering by the smoothing filter module to eliminate noise interference; the calculation module then calculates the ratio of the G component to the R component at each pixel position to obtain the first color ratio GRmap, and the ratio of the B component to the R component to obtain the second color ratio BRmap, after which the first image passes in turn through the LSC module, the AWB module, and the brightness alignment module for the corresponding processing. On the other hand, for the second image, the ratio relationship between the components of pixel values below the inflection point value is correct and needs no processing, while pixel values above the inflection point value can be corrected using GRmap and BRmap to restore normal colors; then, after LSC and AWB processing are performed by the LSC module and the AWB module, highlight recovery is performed by the HLR module, and the image is sent to the Bayer-domain local tone mapping (BLTM) module to adjust its brightness and dynamic range.
In some embodiments, fig. 8 is a schematic diagram showing a second implementation of the image processing method according to the embodiment of the present application. As shown in fig. 8, the image processing apparatus for performing image processing may include a smoothing filter module, an image sensor, a calculation module, a color correction module, an LSC module, an AWB module, an HLR module, and a BLTM module. After the first image and the second image are acquired by the image sensor, on one hand, the first image may be subjected to large-scale smoothing filtering by the smoothing filter module to eliminate noise interference, and the calculation module then calculates, for the smoothed first image, the ratio of the G component to the R component at each pixel position to obtain a first color ratio GRmap, and the ratio of the B component to the R component to obtain a second color ratio BRmap. On the other hand, for the second image, pixel values below the inflection point value already have the correct proportional relationship between components and need no processing, while pixel values above the inflection point value can be corrected using GRmap and BRmap to restore the normal color; then, after LSC and AWB processing are executed by the LSC module and the AWB module, highlight retention processing is executed by the HLR module, and the image is sent to the Bayer-domain local tone mapping (BLTM) module to adjust the brightness and dynamic range of the image.
The embodiment of the application provides an image processing method: acquiring a first image and a second image corresponding to a shooting object, wherein the first image is a linear response image and the second image is a nonlinear response image; determining a color ratio parameter according to pixel values of different image components in the first image; and performing color correction on the second image according to the color ratio parameter to obtain a color-corrected image corresponding to the shooting object. That is, in the embodiment of the present application, while the HDR effect is achieved by the nonlinear response image, i.e. the second image, the color ratio parameter provided by the linear response image, i.e. the first image, is used to perform color correction on the second image, which has a good signal-to-noise ratio, so as to obtain a color-corrected image. This achieves both high dynamic range and signal-to-noise ratio while solving the ghosting problem in motion scenes, finally obtaining a better image processing effect.
Based on the above embodiments, still another embodiment of the present application proposes an image processing method, which includes an HDR algorithm scheme capable of maintaining dynamic range and restoring color under the N-FWC characteristic. The pixel value separating the linear region from the nonlinear region of the N-FWC characteristic response curve is called the inflection point kneeVal; values smaller than the inflection point are considered to be in the linear region, values larger than the inflection point are considered to be in the nonlinear region, and the inflection point value can be calibrated for each Sensor.
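By way of illustration only, the following Python sketch models such a piecewise response and classifies pixel values against the inflection point; the constants (a 10-bit range, kneeVal = 640, and the logarithmic compression shape) are assumptions of the sketch, not calibrated Sensor data.

import numpy as np

KNEE_VAL = 640  # assumed calibrated inflection point for a 10-bit Sensor

def nfwc_response(light, knee=KNEE_VAL, max_out=1023, k=0.25):
    # Illustrative N-FWC model: linear below the knee, logarithmically
    # compressed above it.
    light = np.asarray(light, dtype=np.float64)
    over = np.maximum(light - knee, 0.0)
    compressed = knee + (max_out - knee) * np.log1p(k * over) / np.log1p(k * 4096.0)
    return np.where(light <= knee, light, compressed)

def in_nonlinear_region(pixel_values, knee=KNEE_VAL):
    # A recorded pixel value above kneeVal lies in the nonlinear region.
    return np.asarray(pixel_values) > knee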
In the embodiment of the present application, fig. 9 is a schematic diagram of an implementation structure of image processing according to the embodiment of the present application. As shown in fig. 9, a two-frame exposure strategy may be used, where the strategy needs to ensure that the effective value of the short exposure frame (the first image) does not exceed the inflection point value. First, large-scale smoothing filtering is performed on the short exposure frame to eliminate noise interference, and on the smoothed image the ratio of the G component to the R component is calculated at each pixel position to obtain GRmap, and the ratio of the B component to the R component to obtain BRmap. For the long exposure frame (the second image), pixel values below the inflection point value already have the correct proportional relationship between components and need no processing, while for pixel values above the inflection point value, the component values are corrected using GRmap and BRmap so that their normal colors are recovered. Then, after normal LSC and AWB processing, the image is sent to the highlight retention (HLR) module, then to the Bayer-domain local tone mapping (BLTM) module to adjust the brightness and dynamic range of the image, and finally to the subsequent ISP processing modules, which may differ depending on the respective algorithms and are not specifically limited in this application. The brightness alignment module pulls the brightness of the short exposure frame up to that of the normally exposed long frame, in order to provide guidance for adjusting the true brightness of the highlight region in BLTM.
In the embodiment of the present application, depending on what the Sensor supports, the exposure strategy used may expose the two frames in a 2-DOL manner, or may use time-domain bracketing exposure (i.e. one frame is exposed first, and after its exposure ends, the other frame is exposed). In addition, the long- and short-frame raw images output by this scheme are assumed to be 10-bit images with the three components R\G\B. The exposure strategy of the embodiment of the application needs to refer to the inflection point value; specifically, the exposure time of the short exposure frame is adjusted according to the ambient brightness so that the pixel values of the highlight region of the short exposure frame do not exceed kneeVal, which ensures that the pixel values of the short exposure frame lie in the linear region of the response curve. In addition, the shorter the exposure time of the short frame, the shorter the time interval between the short frame and the long frame, the higher the texture similarity between the two, and the more accurate the recovered colors of the nonlinear region.
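As a minimal sketch of such an exposure adjustment, assuming a metered highlight value from a preview frame and an approximately linear relationship between exposure time and pixel value (the function name, the safety margin, and the kneeVal default below are hypothetical):

def adjust_short_exposure(t_short, highlight_val, knee_val=640, margin=0.95):
    # Scale the short-frame exposure time so the brightest metered pixel
    # stays below kneeVal, keeping the short frame in the linear region.
    if highlight_val <= margin * knee_val:
        return t_short  # already safely inside the linear region
    return t_short * (margin * knee_val) / highlight_val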
In the embodiment of the application, in the smoothing filtering process, the purpose of smoothing the short exposure frame is to reduce noise interference as much as possible when calculating the proportional relationships of the color components. The smoothing filtering method and the neighborhood range selected during smoothing are not limited and can be adjusted according to the Sensor, exposure time, ISO, and so on; taking the simplest mean filtering as an example, a 5x5 mean filter may be selected, or an edge-preserving smoothing method such as bilateral filtering may be chosen. Taking a raw image with the three components R\G\B as an example, the three components may be smoothed separately; after the smoothing of each component is completed, the smoothed pixels are put back at their original positions to obtain a smoothed raw image.
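For illustration, the following sketch applies a 5x5 mean filter to each Bayer sub-plane separately and writes the results back to their original positions; the RGGB layout and the 'reflect' border handling are assumptions of this sketch.

import numpy as np

def smooth_bayer(raw, k=5):
    # Smooth each of the four Bayer sub-planes independently with a
    # k x k mean filter, then reassemble the raw mosaic.
    out = np.empty(raw.shape, dtype=np.float64)
    pad = k // 2
    for dy in (0, 1):
        for dx in (0, 1):
            plane = raw[dy::2, dx::2].astype(np.float64)
            padded = np.pad(plane, pad, mode='reflect')
            sm = np.zeros_like(plane)
            for i in range(k):          # direct box sum, kept simple for clarity
                for j in range(k):
                    sm += padded[i:i + plane.shape[0], j:j + plane.shape[1]]
            out[dy::2, dx::2] = sm / (k * k)
    return out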
In the embodiment of the present application, when calculating GRmap and BRmap, the proportional relationship between the G component and the R component and that between the B component and the R component are calculated at each Bayer pattern position of the smoothed short exposure frame image; these are called GR and BR, and the proportion maps obtained after computing them over the whole image are called GRmap and BRmap. Because the pixel values of the short exposure frame image are all located in the linear region, GRmap and BRmap accurately reflect the true color proportion relationship of each region of the image. And because the time interval between the short exposure frame and the long exposure frame is very short, the color proportion relationship of the short exposure frame can be approximately regarded as that of the long exposure frame. Thus, for pixel values in the long exposure frame that already lie in the nonlinear region of the FWC response curve, the values of the color components can be corrected according to this color proportion relationship.
For the smoothed short frame image, the color ratios of the Bayer pattern at any position p are calculated by the following formulas:
GrR[p] = Gr[p] / R[p] (5)
GbR[p] = Gb[p] / R[p] (6)
BR[p] = B[p] / R[p] (7)
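A minimal sketch of computing these ratio maps from the smoothed Bayer planes, per formulas (5)-(7), might look as follows; the RGGB layout and the small epsilon guarding against division by zero are assumptions of the sketch.

def compute_ratio_maps(raw_smooth, eps=1e-6):
    # Split the smoothed RGGB mosaic into its four sub-planes.
    R = raw_smooth[0::2, 0::2]
    Gr = raw_smooth[0::2, 1::2]
    Gb = raw_smooth[1::2, 0::2]
    B = raw_smooth[1::2, 1::2]
    # Per-pattern color ratios GrR, GbR, BR.
    GrRmap = Gr / (R + eps)
    GbRmap = Gb / (R + eps)
    BRmap = B / (R + eps)
    return GrRmap, GbRmap, BRmap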
In the embodiment of the application, when correcting the colors of the nonlinear region, the raw image of the long exposure frame may, owing to its longer exposure time, contain values in the nonlinear region of the response curve, and these can be corrected using the GRmap and BRmap calculated from the short exposure frame. The correction proceeds Bayer pattern by Bayer pattern. The strategy is to determine whether each channel value of the pattern is below the inflection point kneeVal (assuming the kneeVal of each of the R/G/B channels may differ, each channel uses its own kneeVal; kneeVal here refers to the per-channel value). Let the original pixel value be in_x, the corrected result be out_x, and the color ratios be GrR/GbR/BR. If every channel is not greater than its kneeVal, then out_x = in_x; otherwise the correction strategy is as follows.
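The explicit correction formulas are not reproduced in this text. The following Python sketch reconstructs the strategy from the case analysis of claim 4 (anchor on a channel that is still in the linear region and rebuild the others from the color ratios; if every channel is nonlinear, anchor on R); it should be read as an illustration of that reading rather than the patent's verbatim formula:

def correct_pattern(in_r, in_gi, in_b, gir, br, knee_r, knee_g, knee_b):
    # gir is the GrR or GbR ratio at this position; br is the BR ratio.
    if in_r <= knee_r and in_gi <= knee_g and in_b <= knee_b:
        return in_r, in_gi, in_b               # all channels linear: no correction
    if in_r <= knee_r:
        out_r = in_r                           # R still linear: anchor on R
        return out_r, out_r * gir, out_r * br
    if in_b <= knee_b:
        out_b = in_b                           # B still linear: anchor on B
        out_r = out_b / br
        return out_r, out_r * gir, out_b
    if in_gi <= knee_g:
        out_gi = in_gi                         # G still linear: anchor on G
        out_r = out_gi / gir
        return out_r, out_gi, out_r * br
    # All channels nonlinear: keep R and rebuild G and B from the ratios.
    return in_r, in_r * gir, in_r * br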
Here Gi represents Gr or Gb. After the above correction process, the pixel values of the nonlinear region of the long exposure frame are restored to normal colors, and subsequent processing can be performed.
In the embodiment of the present application, when LSC and AWB are actually implemented, all or some pixels need to be multiplied by different gain values, i.e. out_val = in_val x gain. In some cases the maximum of out_val may exceed the maximum value max_val representable by the bit width of in_val, especially when a highlight region lies at the edge of the image. Common practice is to clip out_val to max_val when out_val > max_val, which easily loses detail in the highlights; this scheme therefore first preserves highlight detail as far as possible by expanding the bit width of out_val, and then handles the highlights in the subsequent HLR and BLTM stages.
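A minimal sketch of this clip-free gain application, widening the data before the multiply (the widening to float64 stands in for the extra integer bits a hardware pipeline would allocate):

import numpy as np

def apply_gain_no_clip(in_val, gain):
    # Widen before applying the gain so values above the original bit-width
    # maximum survive for the later HLR and BLTM stages instead of being clipped.
    return in_val.astype(np.float64) * gain  # gain may be a scalar or a per-pixel map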
In the embodiment of the application, because no clip operation is performed in the three modules for nonlinear-region color correction, LSC, and AWB, and pixel values exceeding the maximum value are instead retained by expanding the bit width, the purpose of the HLR module is to compress the data range exceeding the maximum value into a designated bit width, avoiding the highlight detail loss that a direct clip operation would cause. Suppose the HLR input is 20-bit data whose valid range is 16 bits, with the extra 4 bits retaining the values above the 16-bit maximum produced by the preceding clip-free calculations; the HLR module then needs to recompress this 20-bit data into the 16-bit range.
As shown in fig. 6, Bit20_max_val represents the maximum value of 20 bits, and Anchor_val represents the anchor value set for the correction, which is smaller than Out_max_val. A simple implementation is: for values in the 20-bit long frame not greater than Anchor_val, the original value is kept after correction, while values between Anchor_val and Bit20_max_val are corrected into the range between Anchor_val and Out_max_val. Assuming the value to be corrected is Bit20_cur_val and the corrected value is Bit16_cur_val, this can be expressed as:

if (Bit20_cur_val <= Anchor_val)
    Bit16_cur_val = Bit20_cur_val
else
    (Bit20_cur_val - Anchor_val) / (Bit20_max_val - Anchor_val) =
    (Bit16_cur_val - Anchor_val) / (Out_max_val - Anchor_val)

Since Anchor_val, Bit20_max_val, and Out_max_val are all known, the value Bit16_cur_val corresponding to Bit20_cur_val after compression to 16 bits can be solved from this proportion. The corrected values may additionally be smoothed by neighborhood filtering or the like.
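A minimal Python sketch of this piecewise compression, with the anchor and range values passed in as parameters:

def hlr_compress(bit20_cur_val, anchor_val, bit20_max_val, out_max_val):
    # Values up to the anchor pass through unchanged; values above it are
    # linearly remapped from (Anchor_val, Bit20_max_val] into (Anchor_val, Out_max_val].
    if bit20_cur_val <= anchor_val:
        return bit20_cur_val
    scale = (out_max_val - anchor_val) / (bit20_max_val - anchor_val)
    return anchor_val + (bit20_cur_val - anchor_val) * scale

For example, hlr_compress(x, anchor_val=60000, bit20_max_val=(1 << 20) - 1, out_max_val=(1 << 16) - 1) leaves values below 60000 untouched and compresses the rest into the 16-bit range; the anchor here is an illustrative choice.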
In the embodiment of the application, brightness alignment brightens the smoothed short-frame image to the brightness of the long frame according to the exposure information. Because the pixels of the short frame are all in the linear region, they remain linear after brightening, and their brightness represents the target brightness of the image; the pixels in the nonlinear region of the long frame have been corrected so that their color proportion relationship is right, but they still differ from the target brightness to a certain extent, so the subsequent BLTM module can refer to part of the brightness information of the aligned short frame during local tone mapping. The implementation of the LSC and AWB modules before the brightness alignment module is the same as for the long frame. Assuming the exposure time and gain of the short frame F_s are T_s and Gain_s, respectively, and those of the long frame F_l are T_l and Gain_l, the brightness-aligned short frame is F_s_align = (F_s - ob) x T_l x Gain_l / (T_s x Gain_s), where ob denotes the black level offset.
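A one-function sketch of this alignment (the default black level ob = 64 is an assumption of the example):

def align_brightness(f_s, t_s, gain_s, t_l, gain_l, ob=64):
    # Scale the short frame by the exposure ratio so its linear pixel values
    # land at the brightness the long frame would have produced.
    return (f_s - ob) * (t_l * gain_l) / (t_s * gain_s)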
In the embodiment of the application, the LTM algorithm changes the local brightness, contrast, and so on of an image by combining exposure information of a reference image, such as ISO and gain, with the image's own information, such as brightness, texture, and histogram; it can act on the YUV domain, the RGB domain, or the Bayer domain. BLTM is the LTM algorithm acting in the Bayer domain. The implementation process of BLTM is illustrated by taking block histogram equalization with the CLAHE algorithm as the LTM. First, the input long exposure frame and short exposure frame are converted into corresponding gray maps; the conversion turns each Bayer pattern into one gray value, calculated as Gray_val = (R + Gr + Gb + B) / 4, yielding two gray maps whose size is 1/4 of the original raw image. To give the module greater adjustability, this scheme takes the weighted average of the histograms of the two gray maps as the final histogram, so the average brightness of each block of the short-frame gray map can be used to regulate the weighting coefficient: the smaller the average brightness, the greater the weight of the long-frame histogram; the greater the average brightness, the greater the weight of the short-frame histogram, because the brightness information of the short frame is more accurate in that case. The newly generated histogram is then used for histogram equalization until a mapping table, called the Lut table, is generated for each block. Because the Lut tables of adjacent blocks differ somewhat, to avoid obvious brightness differences at block boundaries, each pixel is mapped with the Lut of the block it belongs to and the Luts of its neighboring blocks, and the results are interpolated according to the distances between the current position and the centers of those neighboring blocks.
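A condensed sketch of the weighted-histogram step, under the assumptions of equal-sized blocks, 8-bit gray maps, and a weight that grows linearly with the short frame's mean block luminance; the CLAHE clipping and the bilinear Lut interpolation across neighboring blocks are omitted for brevity.

import numpy as np

def blended_block_histograms(gray_long, gray_short, m, n, bins=256):
    # For each of the m x n blocks, blend the long- and short-frame histograms;
    # the brighter the short-frame block, the more its histogram is trusted.
    h, w = gray_long.shape
    bh, bw = h // m, w // n
    hists = np.zeros((m, n, bins))
    for i in range(m):
        for j in range(n):
            ys = slice(i * bh, (i + 1) * bh)
            xs = slice(j * bw, (j + 1) * bw)
            hl, _ = np.histogram(gray_long[ys, xs], bins=bins, range=(0, bins))
            hs, _ = np.histogram(gray_short[ys, xs], bins=bins, range=(0, bins))
            w_short = gray_short[ys, xs].mean() / (bins - 1)  # assumed linear weighting
            hists[i, j] = w_short * hs + (1.0 - w_short) * hl
    return hists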
Therefore, in the embodiment of the application, the long exposure frame image output by a Sensor with the N-FWC characteristic is used to achieve the HDR effect, and the short exposure frame image is used to assist the color correction of the long exposure frame. This avoids the problem that the single-frame HDR scheme cannot achieve both dynamic range and signal-to-noise ratio, and also avoids the ghosting problem of the dual-frame HDR scheme in motion scenes. Since the short frame only provides auxiliary color correction information and most operations are carried out on the long frame, the power consumption is lower than that of a dual-frame scheme; meanwhile, the HLR module avoids the clip operation on the highlight region, improving the detail and color authenticity of the highlights.
That is, the image processing method provided by the embodiment of the application uses a Sensor with the N-FWC characteristic to generate an image with the HDR effect, and uses the short exposure frame to provide an accurate color proportion relationship for correcting the color proportion of the nonlinear region of the long exposure frame and recovering its normal colors. Moreover, because the long exposure frame uses a normal exposure time, its signal-to-noise ratio is good, achieving a true single-frame high dynamic range image effect.
In the embodiment of the application, the resolution of the short exposure frame may be the same as that of the long exposure frame, or may be set to a different resolution as required.
For example, a short exposure frame of small resolution may be generated: since the short frame only needs to provide a local average color proportion, a small-resolution frame can also provide this information, and outputting and processing a small-resolution short exposure frame reduces the processing power consumption.
In the embodiment of the application, the main function of the short exposure frame is to provide an accurate color proportion relationship; the subsequent processing applied to it, such as brightness alignment and provision to the BLTM module, are extensible items that can be enabled or disabled according to the actual requirements of a project.
In the embodiment of the application, the long exposure frame and the short exposure frame used may be output by the same Sensor, or, in actual operation, by different Sensors. It is even possible to output the nonlinear response image with an N-FWC Sensor while another Sensor with the FWC characteristic generates a normal linear response image; that image plays the same role as the short exposure frame in this scheme, with the linear image providing accurate color proportion and brightness information for correcting the nonlinear response image.
In the embodiment of the application, if two sensors are used, they can start exposure at the same time, so the time difference between the images is small; only the spatial positions of the sensors cause a certain difference between the two fields of view, and since this difference is fixed it can be corrected, so auxiliary information can be provided for the long exposure frame even better.
The embodiment of the application provides an image processing method: acquiring a first image and a second image corresponding to a shooting object, wherein the first image is a linear response image and the second image is a nonlinear response image; determining a color ratio parameter according to pixel values of different image components in the first image; and performing color correction on the second image according to the color ratio parameter to obtain a color-corrected image corresponding to the shooting object. That is, in the embodiment of the present application, while the HDR effect is achieved by the nonlinear response image, i.e. the second image, the color ratio parameter provided by the linear response image, i.e. the first image, is used to perform color correction on the second image, which has a good signal-to-noise ratio, so as to obtain a color-corrected image. This achieves both high dynamic range and signal-to-noise ratio while solving the ghosting problem in motion scenes, finally obtaining a better image processing effect.
Based on the above-described embodiments, in another embodiment of the present application, fig. 10 is a schematic diagram illustrating the composition of an image processing apparatus according to an embodiment of the present application. As shown in fig. 10, the image processing apparatus 11 according to the embodiment of the present application may include an acquisition unit 111, a determination unit 112, and a correction unit 113, wherein:
The acquiring unit 111 is configured to acquire a first image and a second image corresponding to a shooting object; the first image is a linear response image, and the second image is a nonlinear response image;
The determining unit 112 is configured to determine a color ratio parameter according to pixel values of different image components in the first image;
the correction unit 113 is configured to perform color correction on the second image according to the color ratio parameter, so as to obtain a color-corrected image corresponding to the shooting object.
In an embodiment of the present application, further, fig. 11 is a schematic diagram of the composition of a terminal device according to an embodiment of the present application. As shown in fig. 11, the terminal device 12 according to the embodiment of the present application may include a processor 121 and a memory 122 storing instructions executable by the processor 121; further, the terminal device 12 may also include a communication interface 123, and a bus 124 for connecting the processor 121, the memory 122, and the communication interface 123.
In an embodiment of the present application, the processor 121 may be at least one of an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a digital signal processing device (DSPD), a programmable logic device (PLD), a field-programmable gate array (FPGA), a central processing unit (CPU), a controller, a microcontroller, and a microprocessor. It will be appreciated that, for different devices, the electronics implementing the above processor functions may be something else, which embodiments of the present application do not specifically limit. The terminal device 12 may further comprise a memory 122 connected to the processor 121, where the memory 122 is adapted to store executable program code including computer operation instructions; the memory 122 may comprise high-speed RAM and may also comprise non-volatile memory, for example at least two disk memories.
In an embodiment of the application, bus 124 is used to connect communication interface 123, processor 121, and memory 122 to each other and to communicate between these devices.
In an embodiment of the application, memory 122 is used to store instructions and data.
Further, in the embodiment of the present application, the processor 121 is configured to: acquire a first image and a second image corresponding to a shooting object, wherein the first image is a linear response image and the second image is a nonlinear response image; determine a color ratio parameter according to pixel values of different image components in the first image; and perform color correction on the second image according to the color ratio parameter to obtain a color-corrected image corresponding to the shooting object.
In practical applications, the memory 122 may be a volatile memory, such as random-access memory (RAM); or a non-volatile memory, such as read-only memory (ROM), flash memory, a hard disk drive (HDD), or a solid-state drive (SSD); or a combination of the above types of memory, and it provides instructions and data to the processor 121.
In addition, each functional module in the present embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional modules.
The integrated units, if implemented in the form of software functional modules and not sold or used as separate products, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present embodiment may be embodied essentially, or in part, in the form of a software product; the software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform all or part of the steps of the method of the present embodiment. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random-access memory (RAM), a magnetic disk, or an optical disc.
The embodiment of the application provides an apparatus and a device for: acquiring a first image and a second image corresponding to a shooting object, wherein the first image is a linear response image and the second image is a nonlinear response image; determining a color ratio parameter according to pixel values of different image components in the first image; and performing color correction on the second image according to the color ratio parameter to obtain a color-corrected image corresponding to the shooting object. That is, in the embodiment of the present application, while the HDR effect is achieved by the nonlinear response image, i.e. the second image, the color ratio parameter provided by the linear response image, i.e. the first image, is used to perform color correction on the second image, which has a good signal-to-noise ratio, so as to obtain a color-corrected image. This achieves both high dynamic range and signal-to-noise ratio while solving the ghosting problem in motion scenes, finally obtaining a better image processing effect.
Specifically, the program instructions corresponding to an image processing method in the present embodiment may be stored on a storage medium such as an optical disc, a hard disk, or a USB flash drive; when the program instructions corresponding to the image processing method in the storage medium are read or executed by an electronic device, the method includes the following steps:
Acquiring a first image and a second image corresponding to a shooting object; the first image is a linear response image, and the second image is a nonlinear response image;
determining a color ratio parameter according to pixel values of different image components in the first image;
and performing color correction on the second image according to the color ratio parameter to obtain a color-corrected image corresponding to the shooting object.
The embodiment of the application also provides a computer program product.
In some embodiments, the computer program product may include a computer program or instructions.
In some embodiments, the computer program product may be applied to a terminal device in the embodiments of the present application, and the computer program instructions cause a computer to execute corresponding processes implemented by the terminal device in the methods in the embodiments of the present application, which are not described herein for brevity.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, magnetic disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowcharts and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart block or blocks and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart block or blocks and/or block diagram block or blocks.
The foregoing description is only of the preferred embodiments of the present application, and is not intended to limit the scope of the present application.

Claims (12)

1. An image processing method, the method comprising:
Acquiring a first image and a second image corresponding to a shooting object; the first image is a linear response image, and the second image is a nonlinear response image;
determining a color ratio parameter according to pixel values of different image components in the first image;
and performing color correction on the second image according to the color ratio parameter to obtain a color-corrected image corresponding to the shooting object.
2. The method of claim 1, wherein determining a color ratio parameter from pixel values of different image components in the first image comprises:
determining a first color proportion corresponding to a first pixel position according to a pixel value of a first image component and a pixel value of a second image component corresponding to the first pixel position, and determining a second color proportion corresponding to the first pixel position according to a pixel value of the first image component and a pixel value of a third image component corresponding to the first pixel position; wherein the first pixel position is any pixel position in the first image;
The color ratio parameter is determined based on a first color proportion and a second color proportion corresponding to each pixel position in the first image.
3. The method according to claim 2, wherein performing color correction on the second image according to the color ratio parameter to obtain a color-corrected image corresponding to the photographic subject comprises:
According to the pixel value of the first image component, the pixel value of the second image component, and the pixel value of the third image component corresponding to a second pixel position, the first division value corresponding to the first image component, the second division value corresponding to the second image component, the third division value corresponding to the third image component, and the color ratio parameter, respectively performing color correction on the pixel value of the first image component, the pixel value of the second image component, and the pixel value of the third image component, and determining a correction value of the first image component, a correction value of the second image component, and a correction value of the third image component; wherein the second pixel position is any pixel position in the second image;
And determining the color corrected image based on the correction value of the first image component, the correction value of the second image component and the correction value of the third image component corresponding to each pixel position in the second image.
4. The method according to claim 3, wherein the determining the correction value of the first image component, the correction value of the second image component, and the correction value of the third image component based on the pixel value of the first image component, the pixel value of the second image component, and the pixel value of the third image component, the first division value of the first image component, the second division value of the second image component, the third division value of the third image component, and the color ratio parameter, respectively, includes:
Determining a pixel value of the first image component as a correction value of the first image component and determining a correction value of the second image component and a correction value of the third image component based on the correction value of the first image component and the color ratio parameter, in a case where the pixel value of the first image component is greater than the first division value and the pixel value of the second image component is greater than the second division value and the pixel value of the third image component is greater than the third division value;
Determining a pixel value of the first image component as a correction value of the first image component and determining a correction value of the second image component and a correction value of the third image component based on the correction value of the first image component and the color ratio parameter, in a case where the pixel value of the first image component is less than or equal to the first division value and the pixel value of the second image component is greater than or equal to the second division value and the pixel value of the third image component is greater than or equal to the third division value;
Determining a pixel value of the third image component as a correction value of the third image component and determining a correction value of the first image component and a correction value of the second image component based on the correction value of the third image component and the color ratio parameter, in a case where the pixel value of the first image component is greater than the first division value and the pixel value of the third image component is less than or equal to the third division value;
in a case where the pixel value of the first image component is greater than the first division value and the pixel value of the second image component is less than or equal to the second division value, determining the pixel value of the second image component as a correction value of the second image component, and determining the correction value of the first image component and the correction value of the third image component based on the correction value of the second image component and the color ratio parameter.
5. The method according to any one of claims 1-4, further comprising:
performing lens shading correction LSC, automatic white balance AWB, highlight retention HLR and Bayer domain local tone mapping BLTM on the color corrected image respectively to obtain a processed image corresponding to the shooting object; wherein,
In the process of carrying out HLR, carrying out bit width correction processing according to the anchor point value and the bit width threshold value;
a first bit width of the LSC processed image is equal to a second bit width of the AWB processed image;
And the third bit width of the image processed by the HLR is smaller than or equal to the first bit width.
6. The method of claim 5, wherein the method further comprises:
Performing LSC, AWB and brightness alignment on the first image respectively to obtain an aligned image;
And carrying out BLTM processing according to the aligned image and the image processed by the HLR to obtain a processed image corresponding to the shooting object.
7. The method according to any one of claims 1-4 and 6, wherein before determining a color ratio parameter from pixel values of different image components in the first image, the method further comprises:
smoothing the first image.
8. The method according to claim 1, wherein acquiring the first image and the second image corresponding to the photographic subject includes:
acquiring the first image with short exposure and the second image with long exposure by the same image sensor; or
acquiring the first image with short exposure and the second image with long exposure by different image sensors, respectively; or
The first image is acquired based on full well capacity FWC characteristics and the second image is acquired based on nonlinear full well capacity N-FWC characteristics.
9. An image processing apparatus, characterized in that the image processing apparatus comprises: an acquisition unit, a determination unit, and a correction unit, wherein:
The acquisition unit is used for acquiring a first image and a second image corresponding to a shooting object; the first image is a linear response image, and the second image is a nonlinear response image;
The determining unit is used for determining a color ratio parameter according to pixel values of different image components in the first image;
The correction unit is used for performing color correction on the second image according to the color ratio parameter to obtain a color-corrected image corresponding to the shooting object.
10. A terminal device, characterized in that the terminal device comprises: a processor and a memory; wherein,
The memory is used for storing a computer program capable of running on the processor;
The processor for performing the method of any of claims 1 to 8 when the computer program is run.
11. A computer readable storage medium having stored thereon a program, which when executed by a processor, implements the method of any of claims 1 to 8.
12. A computer program product comprising a computer program or instructions which, when executed by a processor, implements the method of any one of claims 1 to 8.