CN108111785B - Image processing method and device, computer readable storage medium and computer device - Google Patents

Image processing method and device, computer readable storage medium and computer device

Info

Publication number: CN108111785B
Application number: CN201711465123.2A
Authority: CN (China)
Other languages: Chinese (zh)
Other versions: CN108111785A
Inventor: 李智乾
Original Assignee: Guangdong Oppo Mobile Telecommunications Corp Ltd
Current Assignee: Guangdong Oppo Mobile Telecommunications Corp Ltd
Prior art keywords: image, dynamic range, pixel, determining, green
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201711465123.2A
Publication of CN108111785A; application granted; publication of CN108111785B

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00 - Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/60 - Noise processing, e.g. detecting, correcting, reducing or removing noise
    • H04N25/63 - Noise processing applied to dark current

Abstract

The application discloses an image processing method. The image processing method comprises the following steps: processing an image to be processed to subtract a predetermined proportion of the black level from the pixel value of each pixel of the image to be processed, to obtain an initially deblackened image; denoising the initially deblackened image to obtain a denoised image; and processing the denoised image to subtract a residual proportion of the black level from the pixel value of each pixel of the denoised image, to obtain a deblackened image, the sum of the residual proportion and the predetermined proportion being 1, and the residual proportion being greater than the predetermined proportion. The application also discloses an image processing apparatus, a computer readable storage medium and a computer device. The image processing method and apparatus, computer readable storage medium and computer device of the embodiments of the application avoid the poor noise-reduction performance in dark parts of the image that a weak signal would otherwise cause, and also avoid the degradation of subsequent image processing stages that a remaining black level would cause, thereby improving the image processing effect as a whole.

Description

Image processing method and device, computer readable storage medium and computer device
Technical Field
The present application relates to image processing technologies, and in particular, to an image processing method, an image processing apparatus, a computer-readable storage medium, and a computer device.
Background
The related art image processing method performs black level correction on an image (for example, subtracting the black level from the pixel value of each pixel of the image), and then performs noise reduction on the corrected image. However, the original signal (the pixel value) in dark portions of the image is already weak, and it becomes even weaker after black level correction, so the noise reduction processing performs poorly in those dark portions.
Disclosure of Invention
Embodiments of the present application provide an image processing method, an image processing apparatus, a computer-readable storage medium, and a computer device.
The image processing method of the embodiment of the application comprises the following steps:
processing an image to be processed to subtract a predetermined proportion of the black level from the pixel value of each pixel of the image to be processed, to obtain an initially deblackened image;
denoising the initially deblackened image to obtain a denoised image; and
processing the denoised image to subtract a residual proportion of the black level from the pixel value of each pixel of the denoised image, to obtain a deblackened image, wherein the sum of the residual proportion and the predetermined proportion is 1, and the residual proportion is greater than the predetermined proportion.
An image processing apparatus according to an embodiment of the present application includes:
the first processing module is used for processing an image to be processed to subtract a predetermined proportion of the black level from the pixel value of each pixel of the image to be processed, to obtain an initially deblackened image;
the second processing module is used for denoising the initially deblackened image to obtain a denoised image; and
a third processing module, configured to process the denoised image to subtract a residual proportion of the black level from the pixel value of each pixel of the denoised image, to obtain a deblackened image, where the sum of the residual proportion and the predetermined proportion is 1, and the residual proportion is greater than the predetermined proportion.
One or more non-transitory computer-readable storage media embodying computer-executable instructions that, when executed by one or more processors, cause the processors to perform the image processing method of the embodiments of the present application.
A computer device according to an embodiment of the present application includes a memory and a processor, where the memory stores computer-readable instructions, and the instructions, when executed by the processor, cause the processor to execute the image processing method.
According to the image processing method and apparatus, the computer readable storage medium and the computer device of the embodiments of the application, only a small portion of the black level is subtracted before the image is denoised, so the signal in dark parts of the image is largely preserved and the noise reduction does not perform poorly on a weak dark-region signal. The remaining black level is subtracted in full after the noise reduction, so subsequent image processing stages are not degraded by a residual black level, and the image processing effect is improved as a whole.
Additional aspects and advantages of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Drawings
In order to illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application; those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic flow chart diagram of an image processing method according to some embodiments of the present application.
FIG. 2 is a block diagram of an image processing apparatus according to some embodiments of the present application.
FIG. 3 is a schematic plan view of a computer device according to some embodiments of the present application.
FIG. 4 is a flow chart illustrating an image processing method according to some embodiments of the present application.
FIG. 5 is a block diagram of an image processing apparatus according to some embodiments of the present application.
FIG. 6 is a flow chart illustrating an image processing method according to some embodiments of the present application.
Fig. 7 is a block diagram of a second processing module of an image processing apparatus according to some embodiments of the present disclosure.
FIG. 8 is a flow chart illustrating an image processing method according to some embodiments of the present application.
Fig. 9 is a block diagram of a first processing unit of an image processing apparatus according to some embodiments of the present application.
FIG. 10 is a flow chart illustrating an image processing method according to some embodiments of the present application.
Fig. 11 is a block diagram of a second processing unit of an image processing apparatus according to some embodiments of the present application.
FIG. 12 is a schematic diagram of a row of horizontally filtered pixels according to some embodiments of the present application.
FIG. 13 is a schematic flow chart diagram of an image processing method according to some embodiments of the present application.
Fig. 14 is a block diagram of a third processing unit of an image processing apparatus according to some embodiments of the present application.
FIG. 15 is a schematic diagram of a vertically filtered pixel row in accordance with certain embodiments of the present application.
FIG. 16 is a flow chart illustrating an image processing method according to some embodiments of the present application.
FIG. 17 is a block diagram of an image processing apparatus according to some embodiments of the present application.
FIG. 18 is a schematic flow chart diagram of an image processing method according to some embodiments of the present application.
FIG. 19 is a block diagram of a first correction module of an image processing apparatus according to some embodiments of the present disclosure.
Fig. 20 is a schematic diagram of pixels not on gain grid points according to some embodiments of the present application.
FIG. 21 is a flow chart illustrating an image processing method according to some embodiments of the present application.
FIG. 22 is a block diagram of an image processing apparatus according to some embodiments of the present application.
FIG. 23 is a flow chart illustrating an image processing method according to some embodiments of the present application.
FIG. 24 is a block diagram of a second correction module of an image processing apparatus according to some embodiments of the present disclosure.
FIG. 25 is a schematic diagram of a set of neighboring pixels according to some embodiments of the present application.
FIG. 26 is a flow chart illustrating an image processing method according to some embodiments of the present application.
FIG. 27 is a block diagram of an image processing apparatus according to some embodiments of the present application.
FIG. 28 is a flow chart illustrating an image processing method according to some embodiments of the present application.
Fig. 29 is a block diagram of a sixth processing module of the image processing apparatus according to some embodiments of the present application.
FIG. 30 is a flow chart illustrating an image processing method according to some embodiments of the present application.
Fig. 31 is a block diagram of a sixth processing module of the image processing apparatus according to some embodiments of the present application.
FIG. 32 is a block diagram of a computer device according to some embodiments of the present application.
FIG. 33 is a block diagram of an image processing circuit according to some embodiments of the present application.
Description of the main element symbols:
computer device 100, image processing apparatus 10, first processing module 12, second processing module 14, first processing unit 142, first determining subunit 1422, second determining subunit 1424, first calculating subunit 1426, first judging subunit 1428, first processing subunit 1421, second processing unit 144, third determining subunit 1442, fourth determining subunit 1444, second calculating subunit 1446, fifth determining subunit 1448, second judging subunit 1441, second processing subunit 1443, sixth determining subunit 1445, third processing unit 146, seventh determining subunit 1462, eighth determining subunit 1464, third calculating subunit 1466, ninth determining subunit 1468, third judging subunit 1461, third processing subunit 1463, tenth determining subunit 1465, third processing module 16, first acquiring module 18, fourth processing module 11, first correction module 13, first determination unit 132, second determination unit 134, first judgment unit 136, fourth processing unit 138, fifth processing unit 131, second correction module 15, third determination unit 152, first calculation unit 154, fourth determination unit 156, fifth determination unit 158, second judgment unit 151, sixth processing unit 153, seventh processing unit 155, fifth processing module 17, sixth processing module 19, eighth processing unit 192, sixth determination unit 194, seventh determination unit 196, eighth determination unit 198, ninth processing unit 191, ninth determination unit 193, third judgment unit 195, tenth processing unit 197, seventh processing module 1X, system bus 20, processor 40, memory 60, internal memory 80, display screen 30, input device 50, image processing circuit 70, ISP processor 72, control logic 74, camera 76, lens 762, image sensor 764, sensor 78, image memory 71, display 73, encoder/decoder 75.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
It will be understood that, as used herein, the terms "first," "second," and the like may be used herein to describe various elements, but these elements are not limited by these terms. These terms are only used to distinguish one element from another. For example, a first weight can be referred to as a second weight, and similarly, a second weight can be referred to as a first weight, without departing from the scope of the application. Both the first weight and the second weight are weights, but not the same weight.
Referring to fig. 1, an image processing method according to an embodiment of the present application includes the following steps:
s12: processing the image to be processed to subtract a predetermined proportion of the black level from the pixel value of each pixel of the image to be processed, to obtain an initially deblackened image;
s14: denoising the initially deblackened image to obtain a denoised image; and
s16: processing the denoised image to subtract a residual proportion of the black level from the pixel value of each pixel of the denoised image, to obtain a deblackened image, the sum of the residual proportion and the predetermined proportion being 1, and the residual proportion being greater than the predetermined proportion.
Referring to fig. 2, an image processing apparatus 10 according to an embodiment of the present disclosure includes a first processing module 12, a second processing module 14, and a third processing module 16. The first processing module 12 is configured to process the image to be processed to subtract a predetermined proportion of the black level from the pixel value of each pixel of the image to be processed, to obtain an initially deblackened image. The second processing module 14 is configured to denoise the initially deblackened image to obtain a denoised image. The third processing module 16 is configured to process the denoised image to subtract a residual proportion of the black level from the pixel value of each pixel of the denoised image, to obtain a deblackened image, where the sum of the residual proportion and the predetermined proportion is 1, and the residual proportion is greater than the predetermined proportion.
That is, the image processing method according to the embodiment of the present application may be implemented by the image processing apparatus 10 according to the embodiment of the present application, wherein the step S12 may be implemented by the first processing module 12, the step S14 may be implemented by the second processing module 14, and the step S16 may be implemented by the third processing module 16.
Referring to fig. 3, the image processing apparatus 10 according to the embodiment of the present application may be applied to the computer device 100 according to the embodiment of the present application, that is, the computer device 100 according to the embodiment of the present application may include the image processing apparatus 10 according to the embodiment of the present application.
With the image processing method, the image processing apparatus 10 and the computer device 100 of the embodiments of the present application, only a small portion of the black level is subtracted before the image is denoised, so the signal in dark parts of the image is largely preserved and the noise reduction does not perform poorly on a weak dark-region signal. The full black level is then removed after the noise reduction, which prevents subsequent image processing stages from being degraded by a residual black level and improves the image processing effect as a whole.
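As an illustrative sketch only, the two-stage subtraction can be written as follows, assuming the raw image is a floating-point numpy array; the denoise placeholder, the clipping at zero and the default 10% proportion are assumptions, the last being consistent with the embodiment below in which the predetermined proportion is 10% or less:

import numpy as np

def denoise(image):
    # Placeholder for step S14; a real pipeline would apply the green
    # non-uniformity correction and the edge-adaptive filters described below.
    return image

def split_black_level_correction(raw, black_level, predetermined=0.1):
    # S12: subtract only a small (predetermined) portion of the black level,
    # so that dark-region signals stay strong during noise reduction.
    initial = np.clip(raw - predetermined * black_level, 0.0, None)
    # S14: denoise the initially deblackened image.
    denoised = denoise(initial)
    # S16: subtract the residual portion; residual + predetermined = 1,
    # and residual > predetermined.
    residual = 1.0 - predetermined
    return np.clip(denoised - residual * black_level, 0.0, None)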
Referring to fig. 4, in some embodiments, the image processing method of the present application further includes the following steps:
s18: reading an original image from an image sensor; and
s11: preprocessing the original image to determine statistical information and obtain the image to be processed.
Referring to fig. 5, in some embodiments, the image processing apparatus 10 includes a first acquiring module 18 and a fourth processing module 11. The first acquiring module 18 is used for reading an original image from an image sensor. The fourth processing module 11 is configured to preprocess the original image to determine statistical information and obtain the image to be processed.
The image sensor includes an integrated circuit having a pixel array and a color filter array disposed over the pixel array. Each pixel includes a photodetector that detects light intensity, while the color filter array allows wavelength information to be detected. An original image can be formed from the detected light intensity and wavelength information. Preprocessing the original image determines the statistical information required by subsequent image processing, so that the subsequent image processing can proceed normally.
In some embodiments, the original image includes raw data of a bayer array.
The Bayer array is the most widely applied color filter array, and is simple, easy to obtain and easy to realize. When the bayer array is employed as the color filter array, the original image includes raw data of the bayer array.
In some embodiments, the image sensor comprises a CMOS image sensor.
The CMOS image sensor consumes little power, which reduces the cost of use, and it also greatly simplifies the hardware structure of the system.
In some embodiments, the predetermined ratio is 10% or less.
With the predetermined proportion at 10% or less, the signal of the initially deblackened image in dark regions is not overly weakened, and the noise reduction effect is more pronounced. The predetermined proportion may even be 0, in which case no black level is subtracted before the noise reduction.
Referring to fig. 6, in some embodiments, step S14 includes the following steps:
s142: correcting the inconsistency of two adjacent green pixels in the diagonal direction in the initially deblackened image to obtain a green consistent image;
s144: applying horizontal filtering to the green consistent image to obtain a horizontally filtered image; and
s146: vertical filtering is applied to the horizontally filtered image to obtain a noise reduced image.
Referring to fig. 7, in some embodiments, the second processing module 14 includes a first processing unit 142, a second processing unit 144, and a third processing unit 146. The first processing unit 142 is configured to correct the inconsistency of two adjacent green pixels in the diagonal direction in the initially deblackened image to obtain a green consistent image. The second processing unit 144 is configured to apply horizontal filtering to the green consistent image to obtain a horizontally filtered image. The third processing unit 146 is configured to apply vertical filtering to the horizontally filtered image to obtain a denoised image.
Even under uniform illumination of a flat surface, the two adjacent green pixels Gr and Gb in the diagonal direction can show a slight brightness difference; correcting this difference avoids artifacts in the full-color image after demosaicing. Horizontal filtering and vertical filtering are then applied to the green consistent image to reduce the noise of the image.
Referring to fig. 8, in some embodiments, step S142 includes the following steps:
s1422: determining a green inconsistency correction threshold;
s1424: determining the pixel value of the current green pixel and the pixel value of a lower-right green pixel positioned at the lower-right corner of the current green pixel;
s1426: calculating the difference between the pixel value of the current green pixel and the pixel value of the lower-right green pixel;
s1428: judging whether the difference value is smaller than a green inconsistency correction threshold value; and
s1421: and when the difference value is smaller than the green inconsistency correction threshold value, replacing the pixel value of the current green pixel and the pixel value of the lower-right green pixel by the average value of the pixel value of the current green pixel and the pixel value of the lower-right green pixel.
Referring to fig. 9, in some embodiments, the first processing unit 142 includes a first determining subunit 1422, a second determining subunit 1424, a first calculating subunit 1426, a first judging subunit 1428, and a first processing subunit 1421. The first determining subunit 1422 is configured to determine a green inconsistency correction threshold. The second determining subunit 1424 is configured to determine the pixel value of the current green pixel and the pixel value of a lower-right green pixel located at the lower-right corner of the current green pixel. The first calculating subunit 1426 is configured to calculate the difference between the pixel value of the current green pixel and the pixel value of the lower-right green pixel. The first judging subunit 1428 is configured to judge whether the difference is smaller than the green inconsistency correction threshold. The first processing subunit 1421 is configured to replace the pixel value of the current green pixel and the pixel value of the lower-right green pixel with the average of the two when the difference is smaller than the green inconsistency correction threshold.
Replacing the pixel value of the current green pixel and the pixel value of the lower-right green pixel with their average makes the two pixel values consistent, which avoids artifacts in the full-color image after demosaicing. Moreover, averaging only when the difference is below the threshold avoids averaging the current green pixel and the lower-right green pixel across an edge, which preserves and even improves sharpness.
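A minimal sketch of this correction, assuming the initially deblackened image is a 2-D numpy array in a Bayer layout whose green pixels at even (row, column) positions have their diagonal green partners at (row + 1, column + 1); this Bayer phase, like the function name, is an assumption made for illustration:

import numpy as np

def green_nonuniformity_correction(bayer, gnu_threshold):
    # Assumed GRBG-like phase: the green pixel at (r, c) pairs with the
    # diagonal green pixel at (r + 1, c + 1).
    out = bayer.astype(np.float64).copy()
    for r in range(0, out.shape[0] - 1, 2):
        for c in range(0, out.shape[1] - 1, 2):
            g_cur = out[r, c]
            g_br = out[r + 1, c + 1]
            # S1428/S1421: average the pair only when their difference is
            # below the threshold, so that real edges are left untouched.
            if abs(g_cur - g_br) < gnu_threshold:
                mean = (g_cur + g_br) / 2.0
                out[r, c] = mean
                out[r + 1, c + 1] = mean
    return out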
Referring to fig. 10, in some embodiments, step S144 includes the following steps:
s1442: determining a horizontal filter, the horizontal filter comprising N taps, the N taps comprising a center tap;
s1444: determining a horizontal filtering pixel line, wherein the horizontal filtering pixel line comprises N pixels respectively corresponding to the N taps, and the horizontal filtering pixel line comprises a central pixel;
s1446: calculating a horizontal gradient across an edge of each of the N taps;
s1448: determining a horizontal edge threshold;
s1441: determining whether each horizontal gradient is above a horizontal edge threshold;
s1443: when the horizontal gradient is higher than the horizontal edge threshold value, folding a tap corresponding to the horizontal gradient to a central pixel; and
s1445: a horizontal filter output is determined based on the horizontal gradient.
Referring to fig. 11, in some embodiments, the second processing unit 144 includes a third determining subunit 1442, a fourth determining subunit 1444, a second calculating subunit 1446, a fifth determining subunit 1448, a second judging subunit 1441, a second processing subunit 1443, and a sixth determining subunit 1445. The third determining subunit 1442 is configured to determine a horizontal filter, which includes N taps, including a center tap. The fourth determining subunit 1444 is configured to determine a horizontal filtered pixel row, where the horizontal filtered pixel row includes N pixels corresponding to the N taps, respectively, and the horizontal filtered pixel row includes a center pixel. The second calculation subunit 1446 is used to calculate the horizontal gradient across the edge of each of the N taps. A fifth determining subunit 1448 is used for determining the horizontal edge threshold. The second determining subunit 1441 is used for determining whether each horizontal gradient is higher than a horizontal edge threshold. The second processing subunit 1443 is configured to fold the tap corresponding to the horizontal gradient to the center pixel when the horizontal gradient is above the horizontal edge threshold. The sixth determining subunit 1445 is configured to determine a horizontal filter output based on the horizontal gradient.
By applying horizontal filtering to the green consistent image, the horizontally filtered image is made less noisy than the green consistent image.
Referring to fig. 12, in some embodiments, the horizontal filter is a 7-tap horizontal filter, the horizontal filtering pixel row includes 7 pixels (P0, P1, ..., P6) corresponding to the 7 taps, the center tap is disposed at the center pixel P3, and the horizontal gradients and the horizontal filter output are determined by the following equations:

Eh0 = abs(P0 - P1);
Eh1 = abs(P1 - P2);
Eh2 = abs(P2 - P3);
Eh3 = abs(P3 - P4);
Eh4 = abs(P4 - P5);
Eh5 = abs(P5 - P6);

Phorz = C0 × [(Eh2 > horzTh[c]) ? P3 : (Eh1 > horzTh[c]) ? P2 : (Eh0 > horzTh[c]) ? P1 : P0] +
        C1 × [(Eh2 > horzTh[c]) ? P3 : (Eh1 > horzTh[c]) ? P2 : P1] +
        C2 × [(Eh2 > horzTh[c]) ? P3 : P2] +
        C3 × P3 +
        C4 × [(Eh3 > horzTh[c]) ? P3 : P4] +
        C5 × [(Eh3 > horzTh[c]) ? P3 : (Eh4 > horzTh[c]) ? P4 : P5] +
        C6 × [(Eh3 > horzTh[c]) ? P3 : (Eh4 > horzTh[c]) ? P4 : (Eh5 > horzTh[c]) ? P5 : P6]

wherein Eh0 to Eh5 are the horizontal gradients, Phorz is the horizontal filter output, horzTh[c] is the horizontal edge threshold, and C0 to C6 are the filter tap coefficients corresponding to pixels P0 to P6, respectively, of the horizontal filtering pixel row.
In some embodiments, the filter tap coefficients C0-C6 include 2's complement values having 3 integer bits and 13 fractional bits.
Two's complement representation handles the sign bit and the magnitude uniformly, so that additions and subtractions can be processed in a uniform manner.
In some embodiments, the filter tap coefficients C0-C6 are symmetric about the center pixel P3.
In some embodiments, the filter tap coefficients C0-C6 are not symmetric about the center pixel P3.
The filter tap coefficients C0-C6 may or may not be symmetric about the center pixel P3, which makes the horizontal filter more flexible.
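A minimal sketch of this output computation, assuming p holds the seven same-color pixels P0 to P6 of one horizontal filtering pixel row and coeffs holds the tap coefficients C0 to C6 (the function and parameter names are illustrative):

def horizontal_filter_output(p, coeffs, horz_th):
    # Horizontal gradients Eh0..Eh5 between neighbouring taps.
    eh = [abs(p[i] - p[i + 1]) for i in range(6)]
    # Each tap folds back to the center pixel P3 as soon as a gradient on
    # the path from the center to that tap exceeds the edge threshold.
    t0 = p[3] if eh[2] > horz_th else p[2] if eh[1] > horz_th else p[1] if eh[0] > horz_th else p[0]
    t1 = p[3] if eh[2] > horz_th else p[2] if eh[1] > horz_th else p[1]
    t2 = p[3] if eh[2] > horz_th else p[2]
    t3 = p[3]
    t4 = p[3] if eh[3] > horz_th else p[4]
    t5 = p[3] if eh[3] > horz_th else p[4] if eh[4] > horz_th else p[5]
    t6 = p[3] if eh[3] > horz_th else p[4] if eh[4] > horz_th else p[5] if eh[5] > horz_th else p[6]
    return sum(c * t for c, t in zip(coeffs, (t0, t1, t2, t3, t4, t5, t6)))

Because a tap lying beyond a detected edge is folded back to the center pixel P3, the filter effectively averages only over the flat region that contains the center pixel.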
Referring to fig. 13, in some embodiments, step S146 includes the following steps:
s1462: determining a vertical filter, the vertical filter comprising N taps, the N taps comprising a center tap;
s1464: determining a vertical filtering pixel line, wherein the vertical filtering pixel line comprises N pixels respectively corresponding to N taps, and the vertical filtering pixel line comprises a central pixel;
s1466: calculating a vertical gradient across an edge of each of the N taps;
s1468: determining a vertical edge threshold;
s1461: judging whether the vertical gradient is higher than a vertical edge threshold value;
s1463: when the vertical gradient is higher than the vertical edge threshold value, folding a tap corresponding to the vertical gradient to a central pixel; and
s1465: a vertical filter output is determined based on the vertical gradient.
Referring to fig. 14, in some embodiments, third processing unit 146 includes a seventh determining subunit 1462, an eighth determining subunit 1464, a third calculating subunit 1466, a ninth determining subunit 1468, a third determining subunit 1461, a third processing subunit 1463, and a tenth determining subunit 1465.
The seventh determining subunit 1462 is configured to determine a vertical filter, which includes N taps including a center tap. The eighth determining subunit 1464 is configured to determine a vertical filtering pixel row, where the vertical filtering pixel row includes N pixels respectively corresponding to the N taps, and the vertical filtering pixel row includes a center pixel. The third computation subunit 1466 is used to compute the vertical gradient across the edge of each of the N taps. A ninth determination subunit 1468 is used to determine vertical edge thresholds. The third determining subunit 1461 is configured to determine whether the vertical gradient is higher than the vertical edge threshold. The third processing subunit 1463 is configured to fold the tap corresponding to the vertical gradient to the center pixel when the vertical gradient is above the vertical edge threshold. The tenth determining subunit 1465 is configured to determine a vertical filter output based on the vertical gradient.
By applying vertical filtering to the horizontally filtered image, the noise of the denoised image is made lower than the horizontally filtered image.
Referring to fig. 15, in some embodiments, the vertical filter is a 5-tap vertical filter, the vertical filtering pixel row includes 5 pixels (P0, P1, ..., P4) corresponding to the 5 taps, the center tap is disposed at the center pixel P2, and the vertical gradients and the vertical filter output are determined by the following formulas:

Ev0 = abs(P0 - P1);
Ev1 = abs(P1 - P2);
Ev2 = abs(P2 - P3);
Ev3 = abs(P3 - P4);

Pvert = C0 × [(Ev1 > vertTh[c]) ? P2 : (Ev0 > vertTh[c]) ? P1 : P0] +
        C1 × [(Ev1 > vertTh[c]) ? P2 : P1] +
        C2 × P2 +
        C3 × [(Ev2 > vertTh[c]) ? P2 : P3] +
        C4 × [(Ev2 > vertTh[c]) ? P2 : (Ev3 > vertTh[c]) ? P3 : P4];

wherein Ev0 to Ev3 are the vertical gradients, Pvert is the vertical filter output, vertTh[c] is the vertical edge threshold, and C0 to C4 are the filter tap coefficients corresponding to pixels P0 to P4, respectively, of the vertical filtering pixel row.
With the above formulas, the vertical filter output Pvert can be determined.
In some embodiments, the filter tap coefficients C0-C4 may be 2's complement values having 3 integer bits and 13 fractional bits.
Two's complement representation handles the sign bit and the magnitude uniformly, so that additions and subtractions can be processed in a uniform manner.
In some embodiments, the filter tap coefficients C0-C4 are symmetric about the center pixel P2.
In some embodiments, the filter tap coefficients C0-C4 are not symmetric about the center pixel P2.
The filter tap coefficients C0-C4 may or may not be symmetric about the center pixel P2, which makes the vertical filter more flexible.
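A corresponding sketch for the vertical filter output, with p holding the five same-color pixels P0 to P4 of one vertical filtering pixel row and coeffs holding C0 to C4, mirrors the horizontal case:

def vertical_filter_output(p, coeffs, vert_th):
    # Vertical gradients Ev0..Ev3 between neighbouring taps.
    ev = [abs(p[i] - p[i + 1]) for i in range(4)]
    # Taps fold back to the center pixel P2 across edges, as in the
    # horizontal sketch above.
    t0 = p[2] if ev[1] > vert_th else p[1] if ev[0] > vert_th else p[0]
    t1 = p[2] if ev[1] > vert_th else p[1]
    t2 = p[2]
    t3 = p[2] if ev[2] > vert_th else p[3]
    t4 = p[2] if ev[2] > vert_th else p[3] if ev[3] > vert_th else p[4]
    return sum(c * t for c, t in zip(coeffs, (t0, t1, t2, t3, t4)))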
Referring to fig. 16, in some embodiments, the image processing method of the present application further includes the following steps:
s13: performing lens shading correction on the initially deblackened image to obtain a lens-shading-corrected image.
Referring to fig. 17, the image processing apparatus 10 according to the embodiment of the present disclosure further includes a first correction module 13. The first correction module 13 is configured to perform lens shading correction on the initially deblackened image to obtain a lens-shading-corrected image.
When the camera images a distant scene, fewer and fewer oblique light beams can pass through the camera lens as the field angle increases, so the center of the captured image is brighter while its edges are darker, i.e., the image brightness is uneven. Performing lens shading correction on the initially deblackened image removes this adverse effect and improves the accuracy of subsequent processing.
Referring to fig. 18, in some embodiments, S13 includes the following steps:
s132: determining a two-dimensional gain grid, wherein the two-dimensional gain grid comprises gain grid points which are distributed in an original frame at fixed horizontal intervals and vertical intervals;
s134: determining a current pixel and an activation processing area, wherein the activation processing area comprises an area for lens shading correction;
s136: judging whether the current pixel is on the gain grid point;
s138: when the current pixel is on a specific grid point of the gain grid points, using the gain value of the specific grid point as the gain value of the current pixel; and
s131: and when the current pixel is not on the gain grid point, determining the gain value of the current pixel by using the upper left grid point, the upper right grid point, the lower left grid point and the lower right grid point of the grid where the current pixel is located.
Referring to fig. 19, in some embodiments, the first correction module 13 includes a first determination unit 132, a second determination unit 134, a first judgment unit 136, a fourth processing unit 138, and a fifth processing unit 131. The first determination unit 132 is configured to determine a two-dimensional gain grid, which includes gain grid points distributed at fixed horizontal and vertical intervals within the original frame. The second determination unit 134 is used to determine the current pixel and an activation processing area including an area where lens shading correction is performed. The first judgment unit 136 is used to judge whether the current pixel is on a gain grid point. The fourth processing unit 138 is configured to use the gain value of a particular grid point as the gain value of the current pixel when the current pixel is on that particular grid point among the gain grid points. The fifth processing unit 131 is configured to determine the gain value of the current pixel by using the upper left grid point, the upper right grid point, the lower left grid point, and the lower right grid point of the grid where the current pixel is located, when the current pixel is not on a gain grid point.
Referring to fig. 20, in some embodiments, when the current pixel is not on a gain grid point, its gain value is determined from the upper left grid point, the upper right grid point, the lower left grid point and the lower right grid point of the grid in which it is located, by the following formula:

G = [G0 × (Y - jj) × (X - ii) + G1 × (Y - jj) × ii + G2 × jj × (X - ii) + G3 × jj × ii] / (X × Y)

where G is the gain value of the current pixel, G0, G1, G2, and G3 are the gain values of the upper left grid point, the upper right grid point, the lower left grid point, and the lower right grid point, respectively, X and Y are the horizontal and vertical sizes of the grid spacing of the two-dimensional gain grid, respectively, and ii and jj are the horizontal and vertical offsets of the current pixel with respect to the upper left grid point.
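As a sketch, this bilinear interpolation translates directly into code; the function name is illustrative and the variable names follow the formula above:

def interpolate_lsc_gain(g0, g1, g2, g3, ii, jj, x, y):
    # Bilinear interpolation of the lens shading gain for a pixel that is
    # not on a gain grid point; (ii, jj) is the pixel's offset from the
    # upper-left grid point and (x, y) is the grid spacing.
    return (g0 * (y - jj) * (x - ii)
            + g1 * (y - jj) * ii
            + g2 * jj * (x - ii)
            + g3 * jj * ii) / float(x * y)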
Referring to fig. 21, in some embodiments, the image processing method of the present application further includes the following steps:
s15: correcting defective pixels of the initially deblackened image.
Because an image sensor contains a very large number of elements, defective pixels easily occur. Correcting the defective pixels of the initially deblackened image eliminates them and reduces their influence on subsequent processing.
Referring to fig. 22, the image processing apparatus 10 according to the embodiment of the present disclosure further includes a second correction module 15. The second correction module 15 is used for correcting the defective pixels of the initially deblackened image.
Referring to fig. 23, in some embodiments, S15 includes the following steps:
s152: determining a current pixel and a set of neighboring pixels adjacent to the current pixel, each pixel in the set of neighboring pixels being within an original frame;
s154: calculating a pixel-to-pixel gradient for the current pixel and each pixel in a set of neighboring pixels;
s156: determining a pixel gradient threshold (dprTh) and determining a number C of pixel-to-pixel gradients less than the pixel gradient threshold (dprTh);
s158: determining a maximum count (dprMaxC);
s151: determining whether the number is less than a maximum count (dprMaxC);
s153: identifying the current pixel as a defective pixel when the number is less than a maximum count (dprMaxC); and
s155: replacing the pixel value of the current pixel with a replacement value.
Referring to fig. 24, in some embodiments, the second correction module 15 includes a third determination unit 152, a first calculation unit 154, a fourth determination unit 156, a fifth determination unit 158, a second judgment unit 151, a sixth processing unit 153, and a seventh processing unit 155. The third determination unit 152 is configured to determine the current pixel and a set of neighboring pixels adjacent to the current pixel, each pixel in the set of neighboring pixels being within the original frame. The first calculation unit 154 is configured to calculate a pixel-to-pixel gradient between the current pixel and each pixel of the set of neighboring pixels. The fourth determination unit 156 is for determining a pixel gradient threshold (dprTh) and determining the number C of pixel-to-pixel gradients smaller than the pixel gradient threshold (dprTh). The fifth determination unit 158 is used to determine the maximum count (dprMaxC). The second judgment unit 151 is for judging whether the number is less than the maximum count (dprMaxC). The sixth processing unit 153 is configured to identify the current pixel as a defective pixel when the number is less than the maximum count (dprMaxC). The seventh processing unit 155 is configured to replace the pixel value of the current pixel with a replacement value.
Referring to FIG. 25, in some embodiments, the set of neighboring pixels consists of pixels P0, P1, P2 and P3 in horizontal order, with the current pixel P located between P1 and P2, and the pixel-to-pixel gradients and the number C are determined by the following formulas:

Gk = abs(P - Pk), where 0 ≤ k ≤ 3;

C = Σ [(Gk < dprTh) ? 1 : 0], where 0 ≤ k ≤ 3;

that is, C counts how many of the four gradients are less than the pixel gradient threshold dprTh.
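A minimal sketch of this test, assuming p is the pixel value of the current pixel and neighbours holds the four neighbour values; the description above does not spell out how the replacement value of step S155 is formed, so the neighbour mean used here is purely an assumption:

def correct_if_defective(p, neighbours, dpr_th, dpr_max_c):
    # Gk = abs(P - Pk) for each of the four neighbours P0..P3.
    gradients = [abs(p - pk) for pk in neighbours]
    # C counts the gradients below the pixel gradient threshold dprTh.
    c = sum(1 for g in gradients if g < dpr_th)
    if c < dpr_max_c:
        # Fewer than dprMaxC similar neighbours: flag the pixel as defective
        # and replace it; the mean replacement value is an illustrative choice.
        return sum(neighbours) / float(len(neighbours))
    return p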
Referring to fig. 26, in some embodiments, the image processing method of the present application further includes the following steps:
s17: processing the deblackened image to obtain a pre-tone mapped image;
s19: applying local tone mapping to the pre-tone mapped image to obtain a local tone mapped image; and
S1X: the locally tone mapped image is processed to obtain an output image.
Referring to fig. 27, the image processing apparatus 10 according to the embodiment of the present application further includes:
a fifth processing module 17, configured to process the deblackened image to obtain a pre-tone mapped image;
a sixth processing module 19, configured to apply local tone mapping to the pre-tone-mapped image to obtain a local tone-mapped image; and
a seventh processing module 1X, configured to process the locally tone-mapped image to obtain an output image.
Tone mapping techniques are used in image processing to map one set of pixel values to another. When the input image and the output image have different bit precisions, tone mapping can map the input image values to corresponding values of the output range. Tone mapping techniques include local tone mapping and global tone mapping. Compared with global tone mapping, local tone mapping can improve local contrast and output an image with better local contrast, which is aesthetically more satisfying to viewers. Thus, applying local tone mapping to the deblackened image can improve the contrast of the locally tone mapped image.
Referring to fig. 28, in some embodiments, S19 includes the following steps:
s192: dividing the pre-tone mapped image into a plurality of portions based on local features of the pre-tone mapped image;
s194: determining a total available output dynamic range for the current portion;
s196: determining an output dynamic range based on the total available output dynamic range, the output dynamic range being 60% to 70% of the total available output dynamic range;
s198: determining an actual dynamic range in the current portion; and
s191: the actual dynamic range is mapped to the output dynamic range.
Referring to fig. 29, in some embodiments, the sixth processing module 19 includes an eighth processing unit 192, a sixth determining unit 194, a seventh determining unit 196, an eighth determining unit 198, and a ninth processing unit 191. The eighth processing unit 192 is for dividing the pre-tone mapped image into a plurality of portions based on local features of the pre-tone mapped image. The sixth determining unit 194 is adapted to determine the total available output dynamic range for the current portion. The seventh determining unit 196 is configured to determine an output dynamic range based on the total available output dynamic range, the output dynamic range being 60% to 70% of the total available output dynamic range. The eighth determination unit 198 is used to determine the actual dynamic range in the current section. The ninth processing unit 191 is configured to map the actual dynamic range to the output dynamic range.
Limiting the output dynamic range to 60% to 70% of the total available output dynamic range weakens the local tone mapping, which makes the noise of the locally tone mapped image less noticeable.
Referring to fig. 30, in some embodiments, S19 includes the following steps:
s192: dividing the pre-tone mapped image into a plurality of portions based on local features of the pre-tone mapped image;
s194: determining a total available output dynamic range for the current portion;
s196: determining an output dynamic range based on the total available output dynamic range, the output dynamic range being 60% to 70% of the total available output dynamic range;
s198: determining an actual dynamic range of the current portion;
s193: determining a total available dynamic range for the current portion;
s195: judging whether the actual dynamic range is smaller than the total available dynamic range;
s197: when the actual dynamic range is less than the total available dynamic range, expanding the actual dynamic range by mapping it to the total available dynamic range to obtain an expanded actual dynamic range, and mapping the expanded actual dynamic range to the output dynamic range.
Referring to fig. 31, in some embodiments, the sixth processing module 19 includes an eighth processing unit 192, a sixth determining unit 194, a seventh determining unit 196, an eighth determining unit 198, a ninth determining unit 193, a third judging unit 195, and a tenth processing unit 197. The eighth processing unit 192 is for dividing the pre-tone mapped image into a plurality of portions based on local features of the pre-tone mapped image. The sixth determining unit 194 is adapted to determine the total available output dynamic range for the current portion. The seventh determining unit 196 is configured to determine an output dynamic range based on the total available output dynamic range, the output dynamic range being 60% to 70% of the total available output dynamic range. The eighth determination unit 198 is used to determine the actual dynamic range of the current portion. The ninth determining unit 193 is configured to determine a total available dynamic range of the current portion. The third judging unit 195 is configured to judge whether the actual dynamic range is smaller than the total available dynamic range. The tenth processing unit 197 is configured to, when the actual dynamic range is smaller than the total available dynamic range, expand the actual dynamic range by mapping the actual dynamic range to the total available dynamic range to obtain an expanded actual dynamic range and map the expanded actual dynamic range to the output dynamic range.
Limiting the output dynamic range to 60% to 70% of the total available output dynamic range weakens the local tone mapping, which makes the noise of the locally tone mapped image less noticeable. Furthermore, local tone mapping techniques generally do not consider whether an unused value or range of values is mapped, so part of the output values end up representing input values that are not actually present in the current portion, which reduces the output values available for the input values that are present. Here, when the actual dynamic range of the current portion is smaller than the total available dynamic range, the actual dynamic range is first expanded to the total available dynamic range and then mapped to the output dynamic range. This solves the problem and lets the input values of the current portion make use of the whole output range.
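A minimal sketch of this two-stage mapping for a single input value, assuming every mapping is linear (the mapping curve is not fixed by the description above) and that each range is given as a (low, high) pair with high greater than low:

def local_tone_map(value, actual, total, output):
    # actual, total, output are (low, high) ranges of the current portion:
    # its actual dynamic range, its total available dynamic range, and the
    # output dynamic range (60% to 70% of the total available output range).
    a_lo, a_hi = actual
    t_lo, t_hi = total
    o_lo, o_hi = output
    # S197: first expand the actual dynamic range to the total available
    # dynamic range, so no output values are wasted on absent input values.
    expanded = t_lo + (value - a_lo) * (t_hi - t_lo) / float(a_hi - a_lo)
    # S191: then map the expanded range onto the output dynamic range.
    return o_lo + (expanded - t_lo) * (o_hi - o_lo) / float(t_hi - t_lo)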
The embodiment of the application also provides a computer readable storage medium. One or more non-transitory computer-readable storage media containing computer-executable instructions that, when executed by one or more processors, cause the processors to perform the steps of:
s12: processing the image to be processed to subtract a predetermined proportion of the black level from the pixel value of each pixel of the image to be processed, to obtain an initially deblackened image;
s14: denoising the initially deblackened image to obtain a denoised image; and
s16: processing the denoised image to subtract a residual proportion of the black level from the pixel value of each pixel of the denoised image, to obtain a deblackened image, the sum of the residual proportion and the predetermined proportion being 1, and the residual proportion being greater than the predetermined proportion.
FIG. 32 is a diagram showing an internal configuration of a computer device according to an embodiment. As shown in fig. 32, the computer apparatus 100 includes a processor 40, a memory 60 (e.g., a nonvolatile storage medium), an internal memory 80, a display 30, and an input device 50, which are connected via a system bus 20. The memory 60 of the computer device 100 has stored therein an operating system and computer readable instructions. The computer readable instructions can be executed by the processor 40 to implement the image processing method of the embodiment of the present application. The processor 40 is used to provide computing and control capabilities that support the operation of the overall computing device 100. The internal memory 80 of the computer device 100 provides an environment for the execution of computer readable instructions in the memory 60. The display 30 of the computer device 100 may be a liquid crystal display or an electronic ink display, and the input device 50 may be a touch layer covered on the display 30, a key, a trackball or a touch pad arranged on a housing of the computer device 100, or an external keyboard, a touch pad or a mouse. The computer device 100 may be a mobile phone, a tablet computer, a notebook computer, a personal digital assistant, or a wearable device (e.g., a smart bracelet, a smart watch, a smart helmet, smart glasses), etc. It will be understood by those skilled in the art that the configuration shown in fig. 32 is only a schematic diagram of a part of the configuration related to the present application, and does not constitute a limitation to the computer device 100 to which the present application is applied, and a specific computer device 100 may include more or less components than those shown in the drawings, or combine some components, or have a different arrangement of components.
Referring to fig. 33, the computer device 100 according to the embodiment of the present disclosure includes an Image Processing circuit 70, and the Image Processing circuit 70 may be implemented by hardware and/or software components and may include various Processing units defining an ISP (Image Signal Processing) pipeline. FIG. 33 is a schematic diagram of image processing circuitry 70 in one embodiment. As shown in fig. 33, for convenience of explanation, only aspects of the image processing technique related to the embodiment of the present application are shown.
As shown in fig. 33, the image processing circuit 70 includes an ISP processor 72 (the ISP processor 72 may be the processor 40 or part of the processor 40) and control logic 74. The image data captured by the camera 76 is first processed by the ISP processor 72, and the ISP processor 72 analyzes the image data to capture image statistics that may be used to determine one or more control parameters of the camera 76. The camera 76 may include one or more lenses 762 and an image sensor 764. The image sensor 764, which may include an array of color filters (e.g., Bayer filters), may acquire the light intensity and wavelength information captured by each imaging pixel and provide a set of raw image data that may be processed by the ISP processor 72. The sensor 78 (e.g., gyroscope) may provide parameters of the acquired image processing (e.g., anti-shake parameters) to the ISP processor 72 based on the type of interface of the sensor 78. The sensor 78 interface may be a SMIA (Standard Mobile Imaging Architecture) interface, other serial or parallel camera interface, or a combination thereof.
In addition, the image sensor 764 may also send raw image data to the sensor 78, the sensor 78 may provide raw image data to the ISP processor 72 based on the type of interface of the sensor 78, or the sensor 78 may store raw image data in the image memory 71.
The ISP processor 72 processes the raw image data pixel by pixel in a variety of formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and the ISP processor 72 may perform one or more image processing operations on the raw image data, gathering statistical information about the image data. Wherein the image processing operations may be performed with the same or different bit depth precision.
The ISP processor 72 may also receive image data from the image memory 71. For example, the sensor 78 interface sends raw image data to the image memory 71, and the raw image data in the image memory 71 is then provided to the ISP processor 72 for processing. The image Memory 71 may be the Memory 60, a portion of the Memory 60, a storage device, or a separate dedicated Memory within the electronic device, and may include a DMA (Direct Memory Access) feature.
Upon receiving raw image data from the image sensor 764 interface or from the sensor 78 interface or from the image memory 71, the ISP processor 72 may perform one or more image processing operations, such as temporal filtering. The processed image data may be sent to an image memory 71 for additional processing before being displayed. The ISP processor 72 receives the processed data from the image memory 71 and performs image data processing on the processed data in the raw domain and in the RGB and YCbCr color spaces. The image data processed by ISP processor 72 may be output to display 73 (display 73 may include display screen 30) for viewing by a user and/or further processed by a Graphics Processing Unit (GPU). Further, the output of the ISP processor 72 may also be sent to the image memory 71, and the display 73 may read image data from the image memory 71. In one embodiment, image memory 71 may be configured to implement one or more frame buffers. In addition, the output of the ISP processor 72 may be transmitted to an encoder/decoder 75 for encoding/decoding the image data. The encoded image data may be saved and decompressed before being displayed on the display 73 device. The encoder/decoder 75 may be implemented by a CPU or GPU or coprocessor.
The statistics determined by the ISP processor 72 may be sent to the control logic 74 unit. For example, the statistical data may include image sensor 764 statistical information such as auto-exposure, auto-white balance, auto-focus, flicker detection, black level compensation, lens 762 shading correction, and the like. Control logic 74 may include a processing element and/or microcontroller that executes one or more routines (e.g., firmware) that determine control parameters for camera 76 and ISP processor 72 based on the received statistical data. For example, the control parameters of the camera 76 may include sensor 78 control parameters (e.g., gain, integration time for exposure control, anti-shake parameters, etc.), camera flash control parameters, lens 762 control parameters (e.g., focal length for focusing or zooming), or a combination of these parameters. The ISP control parameters may include gain levels and color correction matrices for automatic white balance and color adjustment (e.g., during RGB processing), and lens 762 shading correction parameters.
The image processing method is implemented by the image processing technique of fig. 33 through the following steps:
s12: processing the image to be processed to subtract a predetermined proportion of the black level from the pixel value of each pixel of the image to be processed, to obtain an initially deblackened image;
s14: denoising the initially deblackened image to obtain a denoised image; and
s16: processing the denoised image to subtract a residual proportion of the black level from the pixel value of each pixel of the denoised image, to obtain a deblackened image, the sum of the residual proportion and the predetermined proportion being 1, and the residual proportion being greater than the predetermined proportion.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments may be implemented by a computer program, which can be stored in a non-volatile computer readable storage medium, and when executed, can include the processes of the above embodiments of the methods. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), or the like.
The above embodiments express only several implementations of the present application, and their description is specific and detailed, but they should not therefore be construed as limiting the scope of the present application. It should be noted that those skilled in the art can make several variations and improvements without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (20)

1. An image processing method characterized by comprising the steps of:
processing an image to be processed to subtract a dark level of a predetermined proportion from a pixel value of each pixel of the image to be processed to obtain a primarily blackened image;
denoising the primarily blackened image to obtain a noise-reduced image; and
processing the noise-reduced image to subtract the dark level of a residual proportion from the pixel value of each pixel of the noise-reduced image to obtain a blackened image, wherein the sum of the residual proportion and the predetermined proportion is 1, and the residual proportion is larger than the predetermined proportion.
2. The image processing method according to claim 1, wherein the predetermined proportion is 10% or less.
3. The image processing method according to claim 1, wherein the step of denoising the primarily blackened image to obtain a noise-reduced image comprises the steps of:
correcting the inconsistency of two adjacent green pixels in the diagonal direction in the primarily blackened image to obtain a green consistent image;
applying horizontal filtering to the green consistent image to obtain a horizontally filtered image; and
applying vertical filtering to the horizontally filtered image to obtain a noise-reduced image.
4. The image processing method according to claim 3, wherein the step of correcting the inconsistency of two adjacent green pixels in the diagonal direction in the primarily blackened image to obtain a green consistent image comprises the steps of:
determining a green inconsistency correction threshold;
determining a pixel value of a current green pixel and a pixel value of a lower-right green pixel located at a lower-right corner of the current green pixel;
calculating a difference between the pixel value of the current green pixel and the pixel value of the lower-right green pixel;
determining whether the difference is smaller than the green inconsistency correction threshold; and
when the difference is less than the green inconsistency correction threshold, replacing the pixel value of the current green pixel and the pixel value of the lower-right green pixel with an average of the pixel value of the current green pixel and the pixel value of the lower-right green pixel.
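As an illustration of claim 4, a sketch assuming an RGGB Bayer mosaic (Gr on even rows, Gb one step down-right) and an absolute difference; both assumptions go beyond the claim text:

```python
import numpy as np

def correct_green_inconsistency(bayer: np.ndarray,
                                threshold: float) -> np.ndarray:
    """Average each diagonal green pair whose difference is below
    `threshold`; larger differences are kept as likely edges."""
    out = bayer.astype(np.float32)
    h, w = out.shape
    for r in range(0, h - 1, 2):          # rows holding Gr pixels
        for c in range(1, w - 1, 2):      # columns holding Gr pixels
            gr = out[r, c]                # current green pixel
            gb = out[r + 1, c + 1]        # lower-right green pixel
            if abs(gr - gb) < threshold:  # below threshold: treat as noise
                avg = (gr + gb) / 2.0
                out[r, c] = avg
                out[r + 1, c + 1] = avg
    return out
```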
5. The image processing method of claim 3, wherein the step of applying horizontal filtering to the green consistent image to obtain a horizontally filtered image comprises the steps of:
determining a horizontal filter comprising N taps, the N taps comprising a center tap;
determining a horizontal filtering pixel row comprising N pixels respectively corresponding to the N taps, the horizontal filtering pixel row comprising a center pixel;
calculating a horizontal gradient across an edge of each of the N taps;
determining a horizontal edge threshold;
determining whether each of the horizontal gradients is above the horizontal edge threshold;
when the horizontal gradient is higher than the horizontal edge threshold, folding a tap corresponding to the horizontal gradient to the center pixel; and
determining a horizontal filter output based on the horizontal gradients.
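A sketch of claim 5's edge-adaptive horizontal pass, reading the per-tap "gradient across an edge" as the difference between that tap and the center pixel; that reading, and the normalized coefficients, are assumptions:

```python
import numpy as np

def horizontal_filter_row(row: np.ndarray, coeffs: np.ndarray,
                          edge_threshold: float) -> np.ndarray:
    """N-tap horizontal filter (N odd) that folds any tap lying across
    an edge back onto the center pixel before taking the weighted sum."""
    n = len(coeffs)
    half = n // 2
    padded = np.pad(row.astype(np.float32), half, mode="edge")
    out = np.empty(len(row), dtype=np.float32)
    for i in range(len(row)):
        window = padded[i:i + n].copy()
        center = window[half]
        for t in range(n):
            # Fold the tap when its gradient exceeds the edge threshold,
            # so the filter never averages across a detected edge.
            if abs(window[t] - center) > edge_threshold:
                window[t] = center
        out[i] = float(np.dot(coeffs, window))
    return out

# Example: a 5-tap kernel whose coefficients sum to 1.
row = np.array([10, 12, 11, 200, 201], dtype=np.float32)
coeffs = np.array([1, 2, 4, 2, 1], dtype=np.float32) / 10.0
print(horizontal_filter_row(row, coeffs, edge_threshold=32.0))
```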
6. The image processing method according to claim 3, wherein the step of applying vertical filtering to the horizontally filtered image to obtain a noise-reduced image comprises the steps of:
determining a vertical filter comprising N taps, the N taps comprising a center tap;
determining a vertical filtering pixel column, wherein the vertical filtering pixel column comprises N pixels respectively corresponding to the N taps, and the vertical filtering pixel column comprises a center pixel;
calculating a vertical gradient across an edge of each of the N taps;
determining a vertical edge threshold;
determining whether each of the vertical gradients is higher than the vertical edge threshold;
when the vertical gradient is higher than the vertical edge threshold, folding a tap corresponding to the vertical gradient to the center pixel; and
determining a vertical filter output based on the vertical gradients.
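Since claim 6 mirrors claim 5 with the taps oriented vertically, a sketch can simply reuse the horizontal pass from the previous block on the transposed image (again an illustration, not the patented implementation):

```python
import numpy as np

def vertical_filter(img: np.ndarray, coeffs: np.ndarray,
                    edge_threshold: float) -> np.ndarray:
    """Run horizontal_filter_row (sketch under claim 5) down each column
    by transposing, filtering, and transposing back."""
    filtered = [horizontal_filter_row(col, coeffs, edge_threshold)
                for col in img.T]
    return np.array(filtered).T
```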
7. The image processing method according to claim 1, characterized by further comprising the steps of:
processing the blackened image to obtain a pre-tone mapped image;
applying local tone mapping to the pre-tone mapped image to obtain a local tone mapped image; and
processing the locally tone-mapped image to obtain an output image.
8. The image processing method of claim 7, wherein the step of applying local tone mapping to the pre-tone mapped image to obtain a locally tone mapped image comprises:
dividing the pre-tone mapped image into a plurality of portions based on local features of the pre-tone mapped image;
determining a total available output dynamic range for the current portion;
determining an output dynamic range based on the total available output dynamic range, the output dynamic range being 60% to 70% of the total available output dynamic range;
determining an actual dynamic range in the current portion; and
mapping the actual dynamic range to the output dynamic range.
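A sketch of claim 8, assuming a linear mapping (the claim does not fix the curve) and 65% as a midpoint of the claimed 60% to 70% band; both choices are assumptions:

```python
import numpy as np

def tone_map_portion(portion: np.ndarray,
                     total_output_range: float = 255.0,
                     fraction: float = 0.65) -> np.ndarray:
    """Map a portion's actual dynamic range onto `fraction` of the total
    available output dynamic range, leaving headroom elsewhere."""
    output_range = fraction * total_output_range
    lo, hi = float(portion.min()), float(portion.max())
    actual_range = hi - lo
    if actual_range == 0.0:
        return np.zeros_like(portion, dtype=np.float32)
    # Linear mapping of [lo, hi] onto [0, output_range] (assumed form).
    return (portion.astype(np.float32) - lo) * (output_range / actual_range)
```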
9. The image processing method of claim 7, wherein the step of applying local tone mapping to the pre-tone mapped image to obtain a locally tone mapped image comprises:
dividing the pre-tone mapped image into a plurality of portions based on local features of the pre-tone mapped image;
determining a total available output dynamic range for the current portion;
determining an output dynamic range based on the total available output dynamic range, the output dynamic range being 60% to 70% of the total available output dynamic range;
determining an actual dynamic range of the current portion;
determining a total available dynamic range for the current portion;
determining whether the actual dynamic range is smaller than the total available dynamic range; and
when the actual dynamic range is smaller than the total available dynamic range, expanding the actual dynamic range by mapping the actual dynamic range to the total available dynamic range to obtain an expanded actual dynamic range, and mapping the expanded actual dynamic range to the output dynamic range.
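A sketch of claim 9's expansion step under the same linear-mapping assumption; when the portion's actual range is narrower than the total available range, it is stretched to that range before the final mapping:

```python
import numpy as np

def expand_then_map(portion: np.ndarray, total_available_range: float,
                    output_range: float) -> np.ndarray:
    """Expand a narrow actual dynamic range to the total available range,
    then map the expanded range onto the output dynamic range."""
    p = portion.astype(np.float32)
    lo, hi = float(p.min()), float(p.max())
    actual_range = hi - lo
    if actual_range == 0.0:
        return np.zeros_like(p)
    p -= lo                                   # start the range at zero
    if actual_range < total_available_range:
        # Expansion: stretch the narrow range to the full available range.
        p *= total_available_range / actual_range
        actual_range = total_available_range
    return p * (output_range / actual_range)  # final mapping (linear)
```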
10. An image processing apparatus characterized by comprising:
a first processing module, configured to process an image to be processed to subtract a dark level of a predetermined proportion from a pixel value of each pixel of the image to be processed to obtain a primarily blackened image;
a second processing module, configured to perform noise reduction on the primarily blackened image to obtain a noise-reduced image; and
a third processing module, configured to process the noise-reduced image to subtract the dark level of a remaining proportion from a pixel value of each pixel of the noise-reduced image to obtain a blackened image, where a sum of the remaining proportion and the predetermined proportion is 1, and the remaining proportion is greater than the predetermined proportion.
11. The image processing apparatus according to claim 10, wherein the predetermined proportion is less than 10%.
12. The image processing apparatus of claim 10, wherein the second processing module comprises:
a first processing unit, configured to correct the inconsistency of two adjacent green pixels in the diagonal direction in the primarily blackened image to obtain a green consistent image;
a second processing unit for applying horizontal filtering to the green consistent image to obtain a horizontally filtered image; and
a third processing unit for applying vertical filtering to the horizontally filtered image to obtain a noise-reduced image.
13. The image processing apparatus according to claim 12, wherein the first processing unit includes:
a first determining subunit configured to determine a green inconsistency correction threshold;
a second determining subunit, configured to determine a pixel value of a current green pixel and a pixel value of a lower-right green pixel located at a lower-right corner of the current green pixel;
a first calculating subunit, configured to calculate a difference between a pixel value of the current green pixel and a pixel value of the lower-right green pixel;
a first judgment subunit, configured to judge whether the difference is smaller than the green inconsistency correction threshold; and
a first processing subunit to replace a pixel value of the current green pixel and a pixel value of the lower-right green pixel with an average of the pixel value of the current green pixel and the pixel value of the lower-right green pixel when the difference is less than the green inconsistency correction threshold.
14. The image processing apparatus according to claim 12, wherein the second processing unit includes:
a third determining subunit for determining a horizontal filter, the horizontal filter comprising N taps, the N taps comprising a center tap;
a fourth determining subunit configured to determine a horizontal filtering pixel row including N pixels respectively corresponding to the N taps, the horizontal filtering pixel row including a center pixel;
a second computation subunit to compute a horizontal gradient across an edge of each of the N taps;
a fifth determining subunit to determine a horizontal edge threshold;
a second determining subunit, configured to determine whether each horizontal gradient is higher than the horizontal edge threshold;
a second processing subunit, configured to fold a tap corresponding to the horizontal gradient to the center pixel when the horizontal gradient is higher than the horizontal edge threshold; and
a sixth determining subunit to determine a horizontal filter output based on the horizontal gradient.
15. The image processing apparatus according to claim 12, wherein the third processing unit includes:
a seventh determining subunit for determining a vertical filter, the vertical filter comprising N taps, the N taps comprising a center tap;
an eighth determining subunit, configured to determine a vertical filtering pixel column, where the vertical filtering pixel column includes N pixels respectively corresponding to the N taps, and the vertical filtering pixel column includes a center pixel;
a third computing subunit to compute a vertical gradient across an edge of each of the N taps;
a ninth determination subunit to determine a vertical edge threshold;
a third determining subunit, configured to determine whether the vertical gradient is higher than the vertical edge threshold;
a third processing subunit to, when the vertical gradient is above the vertical edge threshold, collapse a tap corresponding to the vertical gradient to the center pixel; and
a tenth determination subunit to determine a vertical filter output based on the vertical gradient.
16. The image processing apparatus according to claim 10, further comprising:
a fifth processing module, configured to process the blackened image to obtain a pre-tone mapped image;
a sixth processing module to apply local tone mapping to the pre-tone mapped image to obtain a local tone mapped image; and
a seventh processing module, configured to process the locally tone-mapped image to obtain an output image.
17. The image processing apparatus according to claim 16, wherein the sixth processing module includes:
an eighth processing unit to divide the pre-tone mapped image into a plurality of portions based on local features of the pre-tone mapped image;
a sixth determining unit for determining a total available output dynamic range for a current portion;
a seventh determining unit for determining an output dynamic range based on the total available output dynamic range, the output dynamic range being 60% to 70% of the total available output dynamic range;
an eighth determining unit for determining an actual dynamic range in the current portion; and
a ninth processing unit to map the actual dynamic range to the output dynamic range.
18. The image processing apparatus according to claim 16, wherein the sixth processing module includes:
an eighth processing unit to divide the pre-tone mapped image into a plurality of portions based on local features of the pre-tone mapped image;
a sixth determining unit for determining a total available output dynamic range for a current portion;
a seventh determining unit for determining an output dynamic range based on the total available output dynamic range, the output dynamic range being 60% to 70% of the total available output dynamic range;
an eighth determining unit for determining an actual dynamic range of the current portion;
a ninth determining unit for determining a total available dynamic range of the current portion;
a third judging unit, configured to judge whether the actual dynamic range is smaller than the total available dynamic range;
a tenth processing unit to, when the actual dynamic range is smaller than the total available dynamic range, expand the actual dynamic range by mapping the actual dynamic range to the total available dynamic range to obtain an expanded actual dynamic range and map the expanded actual dynamic range to the output dynamic range.
19. A non-transitory computer-readable storage medium containing computer-executable instructions that, when executed by one or more processors, cause the processors to perform the image processing method of any one of claims 1 to 9.
20. A computer device comprising a memory and a processor, the memory having stored therein computer readable instructions that, when executed by the processor, cause the processor to perform the image processing method of any of claims 1 to 9.
CN201711465123.2A 2017-12-28 2017-12-28 Image processing method and device, computer readable storage medium and computer device Active CN108111785B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711465123.2A CN108111785B (en) 2017-12-28 2017-12-28 Image processing method and device, computer readable storage medium and computer device

Publications (2)

Publication Number Publication Date
CN108111785A CN108111785A (en) 2018-06-01
CN108111785B (en) 2020-05-15

Family

ID=62214320

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711465123.2A Active CN108111785B (en) 2017-12-28 2017-12-28 Image processing method and device, computer readable storage medium and computer device

Country Status (1)

Country Link
CN (1) CN108111785B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111193859A (en) * 2019-03-29 2020-05-22 安庆市汇智科技咨询服务有限公司 Image processing system and work flow thereof
CN112487424A (en) * 2020-11-18 2021-03-12 重庆第二师范学院 Computer processing system and computer processing method

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6982705B2 (en) * 2002-02-27 2006-01-03 Victor Company Of Japan, Ltd. Imaging apparatus and method of optical-black clamping
KR20070070990A (en) * 2005-12-29 2007-07-04 매그나칩 반도체 유한회사 Image sensor having automatic black level compensation function and method for compensating black level automatically
CN101035195A (en) * 2006-03-08 2007-09-12 深圳Tcl新技术有限公司 Adjusting method for the image quality
CN101076080A (en) * 2006-05-18 2007-11-21 富士胶片株式会社 Image-data noise reduction apparatus and method of controlling same
CN101086786A (en) * 2006-06-07 2007-12-12 富士施乐株式会社 Image generating apparatus, image processing apparatus and computer readable recording medium
CN102131040A (en) * 2010-06-04 2011-07-20 苹果公司 Adaptive lens shading correction
CN102457684A (en) * 2010-10-21 2012-05-16 英属开曼群岛商恒景科技股份有限公司 Black level calibration method and system for same
CN103795942A (en) * 2014-01-23 2014-05-14 中国科学院长春光学精密机械与物理研究所 Smear correction method of frame transfer CCD on basis of virtual reference lines
CN104221364A (en) * 2012-04-20 2014-12-17 株式会社理光 Imaging device and image processing method
CN105578082A (en) * 2016-01-29 2016-05-11 深圳市高巨创新科技开发有限公司 adaptive black level correction method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140152844A1 (en) * 2012-09-21 2014-06-05 Sionyx, Inc. Black level calibration methods for image sensors

Similar Documents

Publication Publication Date Title
CN108322669B (en) Image acquisition method and apparatus, imaging apparatus, and readable storage medium
CN107424198B (en) Image processing method, image processing device, mobile terminal and computer readable storage medium
CN109005364B (en) Imaging control method, imaging control device, electronic device, and computer-readable storage medium
CN108335279B (en) Image fusion and HDR imaging
CN107493432B (en) Image processing method, image processing device, mobile terminal and computer readable storage medium
CN107451969B (en) Image processing method, image processing device, mobile terminal and computer readable storage medium
CN109089046B (en) Image noise reduction method and device, computer readable storage medium and electronic equipment
US8891867B2 (en) Image processing method
CN107395991B (en) Image synthesis method, image synthesis device, computer-readable storage medium and computer equipment
CN109348088B (en) Image noise reduction method and device, electronic equipment and computer readable storage medium
US10805508B2 (en) Image processing method, and device
CN107509031B (en) Image processing method, image processing device, mobile terminal and computer readable storage medium
CN107481186B (en) Image processing method, image processing device, computer-readable storage medium and computer equipment
CN107563979B (en) Image processing method, image processing device, computer-readable storage medium and computer equipment
CN108024057B (en) Background blurring processing method, device and equipment
US11233948B2 (en) Exposure control method and device, and electronic device
CN107317967B (en) Image processing method, image processing device, mobile terminal and computer readable storage medium
CN108989699B (en) Image synthesis method, image synthesis device, imaging apparatus, electronic apparatus, and computer-readable storage medium
CN108259754B (en) Image processing method and device, computer readable storage medium and computer device
CN110942427A (en) Image noise reduction method and device, equipment and storage medium
CN108111785B (en) Image processing method and device, computer readable storage medium and computer device
CN107454317B (en) Image processing method, image processing device, computer-readable storage medium and computer equipment
CN110213462B (en) Image processing method, image processing device, electronic apparatus, image processing circuit, and storage medium
CN107392870B (en) Image processing method, image processing device, mobile terminal and computer readable storage medium
CN107464225B (en) Image processing method, image processing device, computer-readable storage medium and mobile terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Changan town in Guangdong province Dongguan 523860 usha Beach Road No. 18

Applicant after: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS CORP., Ltd.

Address before: Changan town in Guangdong province Dongguan 523860 usha Beach Road No. 18

Applicant before: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS CORP., Ltd.

GR01 Patent grant