CN110430403B - Image processing method and device - Google Patents

Image processing method and device

Info

Publication number
CN110430403B
CN110430403B CN201910675913.6A
Authority
CN
China
Prior art keywords
channel component
pixel
pixel value
pixel point
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910675913.6A
Other languages
Chinese (zh)
Other versions
CN110430403A (en
Inventor
赵华
杨凯茜
魏三强
何婷
杨俊
易雪薇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Xitu Information Technology Co ltd
Original Assignee
Shanghai Xitu Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Xitu Information Technology Co ltd filed Critical Shanghai Xitu Information Technology Co ltd
Priority to CN201910675913.6A priority Critical patent/CN110430403B/en
Publication of CN110430403A publication Critical patent/CN110430403A/en
Application granted granted Critical
Publication of CN110430403B publication Critical patent/CN110430403B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/10Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths

Abstract

The invention discloses an image processing method and a corresponding device. The method comprises the following steps: acquiring a photosensitive image to be processed collected by an image acquisition device, wherein each pixel point in the image to be processed corresponds to an original pixel value of one of the R channel component, G channel component and B channel component; performing bilinear color interpolation on the image to be processed using the original pixel value of the channel component corresponding to each pixel point, to obtain target pixel values of all three channel components for each pixel point; correcting the second target pixel value of the R channel component and the third target pixel value of the B channel component of each pixel point using the first target pixel value of its G channel component, to obtain a processed color image; and outputting the processed color image. Processing images with this technical scheme can improve imaging quality and shorten delay time.

Description

Image processing method and device
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image processing method and apparatus.
Background
An image sensor can capture only one of the RGB color components at each pixel. To obtain the best image quality, three image sensors would be needed to acquire the different color components separately. However, for reasons of product cost and design complexity, a common digital imaging device captures an image through a color filter array covering the sensor surface: the image collected by the optical lens is filtered by the color filter array and then sensed by the image sensor, so that each pixel point of the resulting image data represents only one of the R, G, B colors, and color interpolation is then performed on this data to recover the desired color image. Various types of color filter arrays exist; among them, the Bayer color filter array has good color-signal sensitivity and color-recovery characteristics and is widely used in image sensors.
Color interpolation is a key technology in image post-processing, and the selection of an interpolation algorithm directly influences the final effect of an image. For different application fields, the algorithm complexity and the recovery effect of the interpolation algorithm need to be comprehensively considered, and a proper algorithm needs to be selected.
Disclosure of Invention
In view of the above problems, the present invention provides an image processing method and a corresponding apparatus, which use a bilinear interpolation algorithm to process ultra-high-definition RAW-format image files of not less than 20 million pixels, thereby improving imaging quality and shortening delay time.
According to a first aspect of embodiments of the present invention, there is provided an image processing method including:
acquiring a photosensitive image to be processed acquired by image acquisition equipment, wherein each pixel point in the image to be processed corresponds to an original pixel value of one of an R channel component, a G channel component and a B channel component;
carrying out bilinear color interpolation processing on the image to be processed by using the original pixel value of the channel component corresponding to each pixel point to obtain target pixel values of three channel components corresponding to each pixel point;
correcting a second target pixel value of the R channel component and a third target pixel value of the B channel component of each pixel point by using a first target pixel value of the G channel component of each pixel point to obtain a processed color image;
and outputting the processed color image.
In one embodiment, preferably, performing bilinear color interpolation on the image to be processed by using an original pixel value of a channel component corresponding to each pixel point to obtain target pixel values of three channel components corresponding to each pixel point, including:
aiming at each pixel point, calculating a first target pixel value of a G channel component by utilizing a first interpolation calculation formula and an original pixel value of the G channel component in four surrounding nearest neighbor pixel points;
aiming at each pixel point, calculating a second target pixel value of the R channel component by utilizing a second interpolation calculation formula and an original pixel value of the R channel component in four surrounding nearest neighbor pixel points;
and aiming at each pixel point, calculating a third target pixel value of the B channel component by utilizing a third interpolation calculation formula and the original pixel values of the B channel component in the four surrounding nearest neighbor pixel points.
In one embodiment, preferably, the first interpolation calculation formula includes:
G(i,j)=(g(i,j)+g(i+1,j)+g(i,j+1)+g(i+1,j+1))/x
wherein, (i, j) represents a pixel point coordinate, (i, j), (i +1, j), (i, j +1) and (i +1, j +1) represent four pixel point coordinates nearest to the pixel point (i, j), G (i, j) represents a first target pixel value of a G channel component of the pixel point (i, j), G (i, j) represents an original pixel value of the G channel component of the pixel point (i, j), if any one of the four pixel points corresponds to an original pixel value not representing the G channel component, the original pixel value corresponding to the pixel point is 0, and x is the number of the original pixel values representing the G channel component in the four pixel points;
the second interpolation calculation formula includes:
R’(i,j)=(r(i,j)+r(i+1,j)+r(i,j+1)+r(i+1,j+1))/y
wherein, (i, j) represents a pixel point coordinate, (i, j), (i+1, j), (i, j+1) and (i+1, j+1) represent the four pixel point coordinates nearest to the pixel point (i, j), R'(i, j) represents the second target pixel value of the R channel component of the pixel point (i, j), r(i, j) represents the original pixel value of the R channel component of the pixel point (i, j); if any one of the four pixel points does not correspond to an original pixel value of the R channel component, the value contributed by that pixel point is 0; and y is the number of original pixel values representing the R channel component among the four pixel points;
the third interpolation calculation formula includes:
B’(i,j)=(b(i,j)+b(i+1,j)+b(i,j+1)+b(i+1,j+1))/z
wherein, (i, j) represents a pixel point coordinate, (i, j), (i+1, j), (i, j+1) and (i+1, j+1) represent the four pixel point coordinates nearest to the pixel point (i, j), B'(i, j) represents the third target pixel value of the B channel component of the pixel point (i, j), b(i, j) represents the original pixel value of the B channel component of the pixel point (i, j); if any one of the four pixel points does not correspond to an original pixel value of the B channel component, the value contributed by that pixel point is 0; and z is the number of original pixel values representing the B channel component among the four pixel points.
In one embodiment, preferably, the modifying the second target pixel value of the R-channel component and the third target pixel value of the B-channel component by using the first target pixel value of the G-channel component of each pixel point includes:
aiming at each pixel point, obtaining a corrected pixel value of an R channel component of each pixel point by utilizing a first correction formula and a first target pixel value of a G channel component and a second target pixel value of the R channel component of each pixel point;
and aiming at each pixel point, obtaining a corrected pixel value of the B channel component of the pixel point by utilizing a second correction formula and the first target pixel value of the G channel component and the third target pixel value of the B channel component of each pixel point.
In one embodiment, preferably, the first modification formula includes:
R(i, j) = R'(i, j) + G'(i, j) - G_mean
Wherein, (i, j) represents a pixel point coordinate, R(i, j) represents the corrected pixel value of the R channel component of the pixel point (i, j), R'(i, j) represents the second target pixel value of the R channel component of the pixel point (i, j), G'(i, j) represents the first target pixel value or the original pixel value of the G channel component of the pixel point (i, j), and G_mean represents the mean of the G channel component target pixel values of those pixel points, among the 8 pixel points surrounding (i, j), whose original pixel value is an R channel component;
the second correction formula includes:
B(i, j) = B'(i, j) + G'(i, j) - G_mean
Wherein, (i, j) represents a pixel point coordinate, B(i, j) represents the corrected pixel value of the B channel component of the pixel point (i, j), B'(i, j) represents the third target pixel value of the B channel component of the pixel point (i, j), G'(i, j) represents the first target pixel value or the original pixel value of the G channel component of the pixel point (i, j), and G_mean represents the mean of the G channel component target pixel values of those pixel points, among the 8 pixel points surrounding (i, j), whose original pixel value is a B channel component.
In one embodiment, preferably, when the original pixel value of the pixel point (i, j) is the original pixel value of the G channel component, the G '(i, j) represents the original pixel value of the pixel point (i, j), and when the original pixel value of the pixel point (i, j) is the original pixel value of the R channel component or the original pixel value of the B channel component, the G' (i, j) represents the first target pixel value of the G channel component.
In one embodiment, preferably, the number of pixel points of the image to be processed is greater than or equal to 20 million.
In one embodiment, preferably, the image to be processed is in the RAW Bayer format.
In one embodiment, preferably, the bilinear color interpolation processing includes parallel processing or serial processing.
According to a second aspect of the embodiments of the present invention, there is provided an image processing apparatus including:
a touch-sensitive display;
one or more processors;
one or more memories;
one or more applications, wherein the one or more applications are stored in the one or more memories and configured to be executed by the one or more processors, the one or more applications being configured to perform the method described in the first aspect or any embodiment of the first aspect.
In the embodiment of the invention, for ultra-high-definition RAW-format image files of not less than 20 million pixels, a bilinear interpolation algorithm is used as the image-processing algorithm, so that imaging quality can be improved and delay time shortened.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 shows a schematic diagram of a Bayer filter array format.
Fig. 2 is a flow chart of an image processing method according to an embodiment of the invention.
FIG. 3 is a flow chart of another image processing method of an embodiment of the present invention.
FIG. 4 is a flow chart of another image processing method of an embodiment of the present invention.
Fig. 5 shows a schematic diagram of a picture in RAW Bayer format.
FIG. 6 is a schematic illustration of a processed color image according to one embodiment of the invention.
Fig. 7 shows a schematic diagram of a Bayer filter array format.
Fig. 8 shows a schematic diagram of another Bayer filter array format.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention.
In some of the flows described in the specification, the claims, and the figures above, a number of operations appear in a particular order, but it should be clearly understood that these operations may be executed out of the order in which they appear herein, or in parallel. Operation numbers such as 101 and 102 merely distinguish the different operations and do not themselves represent any execution order. Additionally, these flows may include more or fewer operations, which may be executed sequentially or in parallel. It should be noted that descriptions such as "first" and "second" herein are used to distinguish different messages, devices, modules, and so on; they do not represent a sequential order, nor do they require that the "first" and "second" items be of different types.
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The Bayer color filter array processes incident light through alternating rows of green-red and green-blue filter pairs. Because the green spectral range lies closest to the peak spectral response of the human visual system, human eyes are more sensitive to green and can distinguish more detail there, so the restored color image attains maximum sharpness. In the Bayer color filter array, red and blue pixels each account for 1/4 of the total, and green pixels account for 1/2.
The Bayer filter array format is shown in fig. 1.
An image collected by the optical lens is filtered by the Bayer color filter array and then sensed by the image sensor; each pixel point of the resulting image data represents only one of the R, G, B colors, and color interpolation must then be performed on this data to obtain the desired color image.
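As a concrete illustration of this sampling process, the following sketch (not from the patent; it assumes an RGGB layout with R in the top-left corner, while the patent's figures may use a different phase) simulates Bayer filtering of an RGB image, keeping exactly one of the R, G, B components per pixel:

```python
import numpy as np

def bayer_mosaic(rgb):
    """Simulate Bayer color-filter-array sampling of an H x W x 3 RGB image.

    Assumes an RGGB layout (R at even rows/even columns). Each output pixel
    keeps exactly one of the R, G, B components, as described above.
    """
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w), dtype=rgb.dtype)
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # R sites
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # G sites in red rows
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # G sites in blue rows
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # B sites
    return mosaic
```

On any even-sized image, half of the mosaic sites carry G and a quarter each carry R and B, matching the proportions stated above.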
Color interpolation is a key technology in image post-processing, and the selection of an interpolation algorithm directly influences the final effect of an image. For different application fields, the algorithm complexity and the recovery effect of the interpolation algorithm need to be comprehensively considered, and a proper algorithm needs to be selected.
The commonly used Bayer CFA color restoration algorithm is mainly divided into two types, namely a single color channel independent interpolation algorithm and interactive interpolation by utilizing the correlation of multiple channels.
The single color channel independent interpolation algorithm comprises the following steps: neighborhood algorithm, correlation linear interpolation algorithm, cubic convolution interpolation algorithm, B-spline interpolation algorithm and the like.
The multi-channel correlated interpolation algorithms include: boundary-based interpolation, weighting-coefficient interpolation, interactive interpolation, optimization-based recovery, wavelet-transform methods, Cok's constant-hue interpolation based on constant color ratios, gradient-based edge-oriented interpolation, Kimmel's weighted algorithm, median-filtering algorithms, the color-difference-based LU algorithm, and so on.
The invention tests a plurality of interpolation algorithms and finally selects a bilinear interpolation algorithm.
A conventional interpolation algorithm is used for image scaling. For example, when an original image of resolution 3×3 is to be enlarged to 4×4, each pixel in the enlarged image is a target pixel. For a target pixel, let the floating-point coordinate obtained by inverse transformation be (i+u, j+v), where i and j are non-negative integers and u and v are floating-point values in the interval [0, 1). The value of this pixel, f(i+u, j+v), can be determined from the values of the four surrounding pixels at coordinates (i, j), (i+1, j), (i, j+1), (i+1, j+1) in the original image, that is:
f(i+u,j+v)=(1-u)(1-v)f(i,j)+(1-u)vf(i,j+1)+u(1-v)f(i+1,j)+uvf(i+1,j+1)
where f (i, j) represents the pixel value at the source image (i, j), and so on.
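A direct transcription of this formula (a sketch, with `img` indexed as `img[i][j]` along the first axis) might look like:

```python
def bilinear_sample(img, x, y):
    """Evaluate f(i+u, j+v) by the bilinear formula above, where (x, y) is
    the floating-point coordinate obtained by inverse transformation."""
    i, j = int(x), int(y)   # non-negative integer parts
    u, v = x - i, y - j     # fractional parts in [0, 1)
    return ((1 - u) * (1 - v) * img[i][j]
            + (1 - u) * v * img[i][j + 1]
            + u * (1 - v) * img[i + 1][j]
            + u * v * img[i + 1][j + 1])
```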
It can be seen that the conventional bilinear interpolation method requires the image to be scaled.
Fig. 2 is a flow chart of an image processing method according to an embodiment of the invention.
As shown in fig. 2, the image processing method includes steps S201 to S204:
step S201, acquiring a photosensitive to-be-processed image acquired by an image acquisition device, where each pixel point in the to-be-processed image corresponds to an original pixel value of one of an R channel component, a G channel component, and a B channel component.
In one embodiment, preferably, the number of pixel points of the image to be processed is greater than or equal to 20 million, and the image to be processed is in the RAW Bayer format.
In one embodiment, preferably, the processing manner of the bilinear color interpolation processing includes parallel processing or serial processing.
Step S202, carrying out bilinear color interpolation processing on an image to be processed by using an original pixel value of a channel component corresponding to each pixel point to obtain target pixel values of three channel components corresponding to each pixel point;
step S203, correcting a second target pixel value of an R channel component and a third target pixel value of a B channel component of each pixel point by using a first target pixel value of the G channel component of each pixel point to obtain a processed color image;
and step S204, outputting the processed color image.
In this embodiment, the image to be processed, which is in RAW Bayer format and has no fewer than 20 million pixel points, is interpolated by bilinear color interpolation, so that the obtained color image has high quality and the delay time can be shortened.
FIG. 3 is a flow chart of another image processing method of an embodiment of the present invention.
As shown in fig. 3, in one embodiment, preferably, the step S202 includes:
step S301, aiming at each pixel point, calculating a first target pixel value of a G channel component by using a first interpolation calculation formula and an original pixel value of the G channel component in four surrounding nearest neighbor pixel points;
a first interpolation calculation formula comprising:
G(i,j)=(g(i,j)+g(i+1,j)+g(i,j+1)+g(i+1,j+1))/x
wherein, (i, j) represents pixel point coordinates, (i, j), (i +1, j), (i, j +1) and (i +1, j +1) represent coordinates of four pixel points nearest to the pixel point (i, j), G (i, j) represents a first target pixel value of a G channel component of the pixel point (i, j), G (i, j) represents an original pixel value of the G channel component of the pixel point (i, j), if any one of the four pixel points corresponds to an original pixel value which does not represent the G channel component, the original pixel value corresponding to the pixel point is 0, and x is the number of the original pixel values which represent the G channel component in the four pixel points;
step S302, aiming at each pixel point, calculating a second target pixel value of the R channel component by utilizing a second interpolation calculation formula and an original pixel value of the R channel component in four surrounding nearest neighbor pixel points;
a second interpolation calculation formula comprising:
R’(i,j)=(r(i,j)+r(i+1,j)+r(i,j+1)+r(i+1,j+1))/y
wherein, (i, j) represents a pixel point coordinate, (i, j), (i+1, j), (i, j+1) and (i+1, j+1) represent the coordinates of the four pixel points nearest to the pixel point (i, j), R'(i, j) represents the second target pixel value of the R channel component of the pixel point (i, j), r(i, j) represents the original pixel value of the R channel component of the pixel point (i, j); if any one of the four pixel points does not correspond to an original pixel value of the R channel component, the value contributed by that pixel point is 0; and y is the number of original pixel values representing the R channel component among the four pixel points;
step S303, for each pixel point, calculating a third target pixel value of the B channel component by using a third interpolation calculation formula and the original pixel values of the B channel component in the four surrounding nearest neighbor pixel points.
A third interpolation calculation formula comprising:
B’(i,j)=(b(i,j)+b(i+1,j)+b(i,j+1)+b(i+1,j+1))/z
wherein, (i, j) represents a pixel point coordinate, (i, j), (i+1, j), (i, j+1) and (i+1, j+1) represent the four pixel point coordinates nearest to the pixel point (i, j), B'(i, j) represents the third target pixel value of the B channel component of the pixel point (i, j), b(i, j) represents the original pixel value of the B channel component of the pixel point (i, j); if any one of the four pixel points does not correspond to an original pixel value of the B channel component, the value contributed by that pixel point is 0; and z is the number of original pixel values representing the B channel component among the four pixel points.
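The three formulas differ only in which CFA sites carry the channel, so they can be sketched as one routine. This is an illustrative NumPy sketch; the border handling (clamping i+1 and j+1 to the image edge) is an assumption the patent does not specify:

```python
import numpy as np

def interp_channel(mosaic, mask):
    """Average one channel over the 2x2 block {(i,j), (i+1,j), (i,j+1), (i+1,j+1)}.

    `mask[i, j]` is True where the mosaic pixel carries this channel; absent
    pixels contribute 0, and the divisor (x, y or z in the formulas above)
    counts only the pixels that actually carry the channel.
    """
    h, w = mosaic.shape
    vals = np.where(mask, mosaic, 0).astype(float)
    total = np.zeros((h, w))
    count = np.zeros((h, w))
    for di, dj in ((0, 0), (1, 0), (0, 1), (1, 1)):
        ii = np.clip(np.arange(h) + di, 0, h - 1)  # clamp at the border
        jj = np.clip(np.arange(w) + dj, 0, w - 1)
        block = np.ix_(ii, jj)
        total += vals[block]
        count += mask[block]
    return total / np.maximum(count, 1)
```

For an RGGB mosaic, calling this once per channel with the corresponding site mask yields the three target-value planes G(i,j), R'(i,j) and B'(i,j).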
In this embodiment, the G channel component of each pixel point of the image may be interpolated in parallel or serially, regardless of whether the pixel point originally carries a G channel pixel value, so that the pixel values of the image are uniform and smooth and image distortion caused by abrupt changes in pixel value is avoided. The pixel values of the R channel and B channel components are then calculated by a method similar to that of the G channel component and corrected using the pixel value of the G channel component, so that the resulting pixel-value distribution is more uniform and smooth, further ensuring imaging quality.
For example, taking the image format of fig. 1: for a pixel point P(3,4) whose original pixel value is a G channel value, the target pixel value of the G channel component is still re-determined by the bilinear interpolation algorithm, so as to avoid image distortion caused by abrupt pixel-value changes. According to the principle of the bilinear interpolation algorithm, the value of G(3,4) is determined from the G channel pixel values of the four points P(3,4), P(4,4), P(3,5) and P(4,5). As can be seen from fig. 1, the original pixel values of P(4,4) and P(3,5) are a B channel component and an R channel component respectively, so the target pixel value of the G channel component of P(3,4) is determined only from the G channel pixel values of P(3,4) and P(4,5), i.e., G(3,4) = (g(3,4) + g(4,5))/2.
FIG. 4 is a flow chart of another image processing method of an embodiment of the present invention.
As shown in fig. 4, in one embodiment, preferably, the step S203 includes:
step S401, aiming at each pixel point, obtaining a corrected pixel value of an R channel component of each pixel point by utilizing a first correction formula, a first target pixel value of a G channel component and a second target pixel value of the R channel component of each pixel point;
in one embodiment, preferably, the first modification formula includes:
R(i, j) = R'(i, j) + G'(i, j) - G_mean
Wherein, (i, j) represents a pixel point coordinate, R(i, j) represents the corrected pixel value of the R channel component of the pixel point (i, j), R'(i, j) represents the second target pixel value of the R channel component of the pixel point (i, j), G'(i, j) represents the first target pixel value or the original pixel value of the G channel component of the pixel point (i, j), and G_mean represents the mean of the G channel component target pixel values of those pixel points, among the 8 pixel points surrounding (i, j), whose original pixel value is an R channel component.
Step S402, aiming at each pixel point, obtaining a corrected pixel value of a B channel component of the pixel point by utilizing a second correction formula and a first target pixel value of the G channel component and a third target pixel value of the B channel component of each pixel point.
The second modification formula includes:
B(i, j) = B'(i, j) + G'(i, j) - G_mean
Wherein, (i, j) represents a pixel point coordinate, B(i, j) represents the corrected pixel value of the B channel component of the pixel point (i, j), B'(i, j) represents the third target pixel value of the B channel component of the pixel point (i, j), G'(i, j) represents the first target pixel value or the original pixel value of the G channel component of the pixel point (i, j), and G_mean represents the mean of the G channel component target pixel values of those pixel points, among the 8 pixel points surrounding (i, j), whose original pixel value is a B channel component.
In one embodiment, preferably, G '(i, j) represents the original pixel value of the pixel point (i, j) when the original pixel value of the pixel point (i, j) is the original pixel value of the G channel component, and G' (i, j) represents the first target pixel value of the G channel component when the original pixel value of the pixel point (i, j) is the original pixel value of the R channel component or the original pixel value of the B channel component.
In this embodiment, the pixel values of the R channel component and the B channel component are corrected by the pixel value of the G channel component, so that the obtained pixel value distribution is more uniform and smooth, and the imaging quality is further ensured.
For example, for the pixel point P(3,4), the corrected pixel values of the R channel component and the B channel component follow from the correction formulas above, i.e., R(3,4) = R'(3,4) + G'(3,4) - G_mean and B(3,4) = B'(3,4) + G'(3,4) - G_mean. [The fully expanded equations appear as images in the original document.]
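The correction step for a single pixel can be sketched as follows. The interface (argument names, 3x3 windows) is hypothetical; the patent defines only the formula R(i,j) = R'(i,j) + G'(i,j) - G_mean and its B counterpart:

```python
import numpy as np

def correct_pixel(chan_target, g_prime, g_targets_3x3, chan_sites_3x3):
    """Apply R(i,j) = R'(i,j) + G'(i,j) - G_mean (identically for B).

    chan_target    : R'(i,j) or B'(i,j) at the centre pixel
    g_prime        : G'(i,j), the G target (or original G) value there
    g_targets_3x3  : 3x3 array of G channel target values centred on (i,j)
    chan_sites_3x3 : 3x3 boolean array, True where the original mosaic
                     value is an R (resp. B) sample
    G_mean averages the G targets over those of the 8 neighbours whose
    original value belongs to the channel being corrected.
    """
    neigh = np.asarray(chan_sites_3x3, dtype=bool).copy()
    neigh[1, 1] = False  # only the 8 surrounding pixels count
    g_mean = np.asarray(g_targets_3x3, dtype=float)[neigh].mean()
    return chan_target + g_prime - g_mean
```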
the above technical solution of the present invention is explained in detail by a specific embodiment.
First, a picture in RAW Bayer format is captured by a 20-million-pixel camera, as shown in fig. 5. Interpolation of the G channel component is then performed according to the bilinear interpolation algorithm and the adjacent pixel points; after the G channel interpolation is finished, the pixel values of the R and B channel components are estimated according to the bilinear interpolation algorithm and the adjacent pixel points, and the R and B channel pixel values are corrected using the G channel pixel values. This yields the values of all components of all pixels of a complete image, as shown in fig. 6. The processed image is finally displayed and can be compared with the RAW Bayer image of fig. 5.
In order to illustrate the interpolation effect of the present application, other interpolation algorithms are listed below for comparison.
1. Nearest neighbor interpolation (neighbor sampling method)
Nearest-neighbor interpolation simply rounds the floating-point coordinate obtained by inverse transformation to the nearest integer coordinate; the pixel value at that integer coordinate becomes the value of the target pixel. That is, the pixel value of the grid point nearest the floating-point coordinate is taken (the upper-left corner point, or the upper-right corner point for a DIB, whose scan lines are stored in reverse order).
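As a sketch (assuming the ordinary top-down row storage, not the reversed DIB case):

```python
def nearest_sample(img, x, y):
    """Nearest-neighbour interpolation: round the floating-point coordinate
    obtained by inverse transformation to the closest integer grid point
    and return the pixel value stored there."""
    i, j = int(x + 0.5), int(y + 0.5)  # round-half-up to the nearest pixel
    return img[i][j]
```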
2. Interpolation based on boundary gradients
The gradient-based edge-oriented interpolation algorithm can overcome the edge blurring caused by the bilinear interpolation algorithm. The human visual system is very sensitive to color changes and boundary information, so an efficient interpolation algorithm should interpolate along the boundary direction; this algorithm guarantees that interpolation is performed along the boundary direction.
The gradient magnitudes in the horizontal and vertical directions are first compared, and the pixel points in the direction with the smaller gradient magnitude are taken as estimation points when interpolating the unknown color value of the current pixel, which avoids the edge blurring caused by interpolating colors across an edge. Since the human eye is sensitive to green, and to keep the algorithm complexity as low as possible, the edge-oriented scheme is applied mainly to the G component, while the R and B components are calculated by bilinear interpolation and corrected with the G component.
The algorithm first interpolates the G component. Consider pixel point (3,3) in the figure: the gradient values at this point in the horizontal and vertical directions are determined from the surrounding red-component values; Laroche determines the boundary direction by computing the second-order differential of the chrominance component. Let

ΔH = |2R(3,3) − R(3,1) − R(3,5)|

represent the horizontal gradient value at this point and

ΔV = |2R(3,3) − R(1,3) − R(5,3)|

represent the vertical gradient value at this point. G(3,3) is then estimated as:

G(3,3) = (G(3,2) + G(3,4))/2, if ΔH < ΔV
G(3,3) = (G(2,3) + G(4,3))/2, if ΔH > ΔV
G(3,3) = (G(3,2) + G(3,4) + G(2,3) + G(4,3))/4, if ΔH = ΔV
the interpolation of the other pixels whose G component is 0 is similar to the above.
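The gradient classifier above can be sketched for a single red or blue site as follows. Indexing and boundary handling are our assumptions; the raw mosaic is assumed to hold like-colored samples two pixels apart, as in a standard Bayer grid:

```python
import numpy as np

def green_at_rb(raw, i, j):
    """Estimate G at a red/blue sample (i, j): compare second-order
    horizontal and vertical gradients of the like-colored channel, then
    average the two G neighbors along the smoother direction."""
    dh = abs(2 * raw[i, j] - raw[i, j - 2] - raw[i, j + 2])  # horizontal gradient
    dv = abs(2 * raw[i, j] - raw[i - 2, j] - raw[i + 2, j])  # vertical gradient
    if dh < dv:
        return (raw[i, j - 1] + raw[i, j + 1]) / 2.0         # along the row
    if dh > dv:
        return (raw[i - 1, j] + raw[i + 1, j]) / 2.0         # along the column
    return (raw[i, j - 1] + raw[i, j + 1] + raw[i - 1, j] + raw[i + 1, j]) / 4.0
```

With a vertical edge in the mosaic, the horizontal gradient dominates and the estimate is taken from the vertical neighbors, so the interpolation never mixes values from both sides of the edge.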
After the interpolation of the G component is finished, the values of the R and B components are calculated. For pixel (3,3) and pixel (3,4), the values of R and B are estimated from the neighboring pixels and corrected with the G component:
B(3,3) = (B(2,2) + B(2,4) + B(4,2) + B(4,4))/4 + G(3,3) − (G(2,2) + G(2,4) + G(4,2) + G(4,4))/4

R(3,4) = (R(3,3) + R(3,5))/2 + G(3,4) − (G(3,3) + G(3,5))/2

B(3,4) = (B(2,4) + B(4,4))/2 + G(3,4) − (G(2,4) + G(4,4))/2
the R, B component values for all points can be obtained in the same way.
3. Hamilton interpolation
In the Bayer CFA, the number of green pixel points is twice the number of red or blue pixel points, so the green channel contains more edge information of the original image. Based on this idea, Hamilton and Adams proposed an edge-adaptive interpolation algorithm in 1997.
The specific implementation method comprises the following steps:
(1) green component reconstruction
The green components at the red and blue sample points, i.e. at the central sample point in FIGS. 7a and 7b, are recovered first. The green-component reconstruction process of FIG. 7b is similar to that of FIG. 7a, so FIG. 7a is taken as the example. The horizontal and vertical direction detection operators at the central red sample point R(i,j) are calculated as follows:
ΔHi,j=|Gi,j-1-Gi,j+1|+|2Ri,j-Ri,j-2-Ri,j+2|
ΔVi,j=|Gi-1,j-Gi+1,j|+|2Ri,j-Ri-2,j-Ri+2,j|
when the horizontal operator is smaller than the vertical operator, the probability that the horizontal edge exists in the center point R (i, j) is large, and the calculation of the central green component is performed along the horizontal direction, and the formula is as follows:
Gi,j = (Gi,j-1 + Gi,j+1)/2 + (2Ri,j - Ri,j-2 - Ri,j+2)/4
when the horizontal operator is larger than the vertical operator, the probability that the vertical edge exists at the center point R (i, j) is larger, and the calculation of the central green component is carried out along the vertical direction, and the formula is as follows:
Gi,j = (Gi-1,j + Gi+1,j)/2 + (2Ri,j - Ri-2,j - Ri+2,j)/4
provided that the horizontal and vertical operators are equal, the green component at the center point is calculated as the average of the horizontal and vertical directions, as follows:
Gi,j = (Gi,j-1 + Gi,j+1 + Gi-1,j + Gi+1,j)/4 + (4Ri,j - Ri,j-2 - Ri,j+2 - Ri-2,j - Ri+2,j)/8
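The three cases of the green reconstruction can be sketched directly. This is a minimal illustration for a red sample site using the detection operators defined above; boundary handling is omitted:

```python
import numpy as np

def hamilton_green(raw, i, j):
    """Hamilton-Adams style green reconstruction at a red (or blue) sample:
    pick the direction with the smaller detection operator and correct the
    G average with a second-order term of the like-colored channel."""
    dh = abs(raw[i, j - 1] - raw[i, j + 1]) + abs(2 * raw[i, j] - raw[i, j - 2] - raw[i, j + 2])
    dv = abs(raw[i - 1, j] - raw[i + 1, j]) + abs(2 * raw[i, j] - raw[i - 2, j] - raw[i + 2, j])
    gh = (raw[i, j - 1] + raw[i, j + 1]) / 2.0 + (2 * raw[i, j] - raw[i, j - 2] - raw[i, j + 2]) / 4.0
    gv = (raw[i - 1, j] + raw[i + 1, j]) / 2.0 + (2 * raw[i, j] - raw[i - 2, j] - raw[i + 2, j]) / 4.0
    if dh < dv:
        return gh   # likely horizontal edge: interpolate along the row
    if dh > dv:
        return gv   # likely vertical edge: interpolate along the column
    return (gh + gv) / 2.0
```

Note that the second-order red term acts as a correction on top of the plain G average, which is what distinguishes this scheme from the purely gradient-directed method of the previous section.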
(2) red and blue component reconstruction at green sample points
The reconstruction process for the blue and red components of fig. 7d is similar to that of fig. 7c, so fig. 7c is taken as an example. The reconstruction of the blue component at the central point uses the linear interpolation of the B-G space of the left and right points, and the reconstruction of the red component uses the linear interpolation of the R-G space of the upper and lower points, which is as follows:
Bi,j = Gi,j + ((Bi,j-1 - Gi,j-1) + (Bi,j+1 - Gi,j+1))/2

Ri,j = Gi,j + ((Ri-1,j - Gi-1,j) + (Ri+1,j - Gi+1,j))/2
(3) reconstruction of the blue (red) component at the red (blue) sample point
Finally, the blue component at the center of FIG. 7a and the red component at the center of FIG. 7b are recovered; since the reconstruction process of FIG. 7b is similar to that of FIG. 7a, FIG. 7a is taken as the example. The nearest blue pixel points around R lie at the four diagonal positions (upper left, lower left, upper right and lower right) of the R pixel point. To better select the interpolation direction and preserve edge information, and similarly to the recovery of the green component, the gradient of the pixel is calculated along the two diagonal (45° and 135°) directions, and interpolation is then performed along the direction with the smaller gradient.
The 45° (upper-right/lower-left) and 135° (upper-left/lower-right) gradients are calculated as follows:
D45(i,j)=|Bi-1,j+1-Bi+1,j-1|+|2gi,j-gi-1,j+1-gi+1,j-1|
D135(i,j)=|Bi-1,j-1-Bi+1,j+1|+|2gi,j-gi-1,j-1-gi+1,j+1|
According to the comparison of the two gradients, the appropriate interpolation direction is selected and the calculation proceeds as follows:
Bi,j = (Bi-1,j+1 + Bi+1,j-1)/2 + (2gi,j - gi-1,j+1 - gi+1,j-1)/2, if D45(i,j) < D135(i,j)
Bi,j = (Bi-1,j-1 + Bi+1,j+1)/2 + (2gi,j - gi-1,j-1 - gi+1,j+1)/2, if D45(i,j) > D135(i,j)
Bi,j = the average of the two expressions above, if D45(i,j) = D135(i,j)
The order of steps (2) and (3) can be exchanged.
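Step (3) can be sketched as follows, assuming the green plane g has already been reconstructed; the indexing conventions follow the D45/D135 definitions above and are otherwise our assumptions:

```python
import numpy as np

def blue_at_red(B, g, i, j):
    """Recover B at a red sample by comparing the 45-degree and 135-degree
    diagonal gradients (built from the four diagonal B neighbors and the
    interpolated green plane g) and interpolating along the smaller one,
    with a second-order green correction. Boundary handling omitted."""
    d45 = abs(B[i - 1, j + 1] - B[i + 1, j - 1]) + abs(2 * g[i, j] - g[i - 1, j + 1] - g[i + 1, j - 1])
    d135 = abs(B[i - 1, j - 1] - B[i + 1, j + 1]) + abs(2 * g[i, j] - g[i - 1, j - 1] - g[i + 1, j + 1])
    b45 = (B[i - 1, j + 1] + B[i + 1, j - 1]) / 2.0 + (2 * g[i, j] - g[i - 1, j + 1] - g[i + 1, j - 1]) / 2.0
    b135 = (B[i - 1, j - 1] + B[i + 1, j + 1]) / 2.0 + (2 * g[i, j] - g[i - 1, j - 1] - g[i + 1, j + 1]) / 2.0
    if d45 < d135:
        return b45
    if d45 > d135:
        return b135
    return (b45 + b135) / 2.0
```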
This method has low time complexity and provides a good interpolation effect. In very bright white regions, the two methods above produce isolated points where a single pure color saturates at 255, whereas the interpolation of this method is comparatively smooth for bright colors. For dense line pairs, whether horizontal or vertical, no graininess appears: the lines are complete and the contrast with the background is high.
4. Interpolation algorithm of color ratio and color difference law
a) Law of color ratio and color difference
Considering the color correlation of the RGB channels, an imaging model of a Mondrian color image can be used: each color channel is the product of the projection of the surface normal n(x,y) of the real three-dimensional world onto the light-source direction and a reflectance ρ(x,y). The reflectance ρ(x,y) characterizes the material properties of the three-dimensional object, and the reflectances for different colors differ from each other. According to this model, the three color channels red, green and blue can be represented by the following equations:
R(x,y) = ρR(x,y) · (n(x,y) · s)

G(x,y) = ρG(x,y) · (n(x,y) · s)

B(x,y) = ρB(x,y) · (n(x,y) · s)
According to the above formula, assume that the material of a given object in the picture is the same, i.e. the reflectance of the material for red, green and blue is constant over the given object: ρ(x,y) = c. The following ratio relationships can then be derived:
R(x,y)/G(x,y) = ρR/ρG = constant,  B(x,y)/G(x,y) = ρB/ρG = constant
It follows that the color ratio is constant at any position within a given object. Although the above ratio relation rests on a simple assumption, it remains valid as long as a small local neighborhood of the image does not cross an image edge, which matches the smoothly varying color and brightness of natural images. This ratio relation is called the color ratio law.
The color ratio law is derived in a linear exposure space; converting it into a logarithmic exposure space yields the color difference law. The color difference law states that the difference between two color channels, I(i, x) − I(j, x), is constant within a small local neighborhood of the image.
A common Bayer-domain R/G/B distribution model is as follows and is used by the subsequent interpolation algorithms:
b) interpolation algorithm based on color ratio law
The demosaicing algorithm based on the color ratio law resolves the unnatural tone changes that occur with bilinear interpolation. The Bayer CFA can be viewed as composed of a luminance signal (the high-sampling-rate green pixels) and chrominance signals (the low-sampling-rate red and blue pixels). The luminance signal can be estimated by simple bilinear interpolation, and the chrominance signals are restored by interpolation exploiting the smoothness of hue within a neighborhood. Hue is defined as the ratio of the chrominance to the luminance signal, i.e. B/G at a blue sample point and R/G at a red sample point.
The reconstruction steps are as follows:
(1) The green component at the red and blue sampling points is recovered by a bilinear interpolation algorithm, for FIGS. 8a/8b as follows:
Gi,j = (Gi-1,j + Gi+1,j + Gi,j-1 + Gi,j+1)/4
(2) The red and blue components are recovered from the already reconstructed green component based on the color ratio law. The blue-component recovery at the red sample point of FIG. 8a is given below; the red-component recovery at the blue sample point of FIG. 8b is similar. Within the 3×3 central neighborhood, the color ratio law gives:
Bi-1,j-1/Gi-1,j-1 = Bi-1,j+1/Gi-1,j+1 = Bi+1,j-1/Gi+1,j-1 = Bi+1,j+1/Gi+1,j+1 = Bi,j/Gi,j
Rearranging yields:
Bi,j = (Gi,j/4) · (Bi-1,j-1/Gi-1,j-1 + Bi-1,j+1/Gi-1,j+1 + Bi+1,j-1/Gi+1,j-1 + Bi+1,j+1/Gi+1,j+1)
the recovery method for the red and blue components at the center green sample point in fig. 8c is the same as that in fig. 8d, and taking fig. 8c as an example, the R/B recovery method is as follows:
Bi,j = (Gi,j/2) · (Bi,j-1/Gi,j-1 + Bi,j+1/Gi,j+1)

Ri,j = (Gi,j/2) · (Ri-1,j/Gi-1,j + Ri+1,j/Gi+1,j)
Demosaicing based on the color ratio law involves a large number of multiplication and division operations, and when the green component at some point is 0 the meaning of hue must be redefined, so an implementation of the algorithm consumes considerable computational resources and requires extra guard conditions.
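A sketch of the color-ratio recovery of B at a red sample, including an illustrative guard for the zero-green case just mentioned. The eps clamp is our assumption, not the patent's exact protection:

```python
import numpy as np

def blue_at_red_ratio(B, G, i, j, eps=1e-6):
    """Color-ratio-law recovery of B at a red sample: B is approximately
    G times the mean of the four diagonal B/G hue ratios."""
    ratios = [
        B[i - 1, j - 1] / max(G[i - 1, j - 1], eps),
        B[i - 1, j + 1] / max(G[i - 1, j + 1], eps),
        B[i + 1, j - 1] / max(G[i + 1, j - 1], eps),
        B[i + 1, j + 1] / max(G[i + 1, j + 1], eps),
    ]
    return G[i, j] * sum(ratios) / 4.0
```

The four divisions per pixel (plus the guard) are exactly the cost that motivates the color-difference variant in the next subsection.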
c) Interpolation algorithm based on the color difference law
Given the drawbacks of the interpolation algorithm built on the color ratio law, demosaicing is simpler to design using the color difference law. When the color difference law is applied, hue is defined as the difference between the chrominance and luminance signals, i.e. B − G or R − G. The algorithm proceeds as follows:
(1) The lost green components at the red and blue sampling points are restored by bilinear interpolation, the same as in the color-ratio-law algorithm;
(2) with the green component already recovered, the blue or red component in fig. 8(a) and 8(b) is recovered based on the color difference law as follows:
Bi,j = Gi,j + ((Bi-1,j-1 - Gi-1,j-1) + (Bi-1,j+1 - Gi-1,j+1) + (Bi+1,j-1 - Gi+1,j-1) + (Bi+1,j+1 - Gi+1,j+1))/4

Ri,j = Gi,j + ((Ri-1,j-1 - Gi-1,j-1) + (Ri-1,j+1 - Gi-1,j+1) + (Ri+1,j-1 - Gi+1,j-1) + (Ri+1,j+1 - Gi+1,j+1))/4
(3) The blue and red components in FIGS. 8c and 8d are recovered from the recovered green component based on the color difference law; taking FIG. 8c as the example:
Bi,j = Gi,j + ((Bi,j-1 - Gi,j-1) + (Bi,j+1 - Gi,j+1))/2

Ri,j = Gi,j + ((Ri-1,j - Gi-1,j) + (Ri+1,j - Gi+1,j))/2
Compared with the algorithm based on the color ratio law, the interpolation algorithm using the color difference law can be implemented with only additions, subtractions and shift operations; its computational cost is very small, it is easy to implement in hardware, and both its performance and its hardware resource consumption are clearly superior to the former.
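For comparison with the ratio-law sketch, the color-difference recovery of B at a red sample needs only additions and subtractions plus one division by four, which hardware can realize as a shift:

```python
import numpy as np

def blue_at_red_diff(B, G, i, j):
    """Color-difference-law recovery of B at a red sample: average the four
    diagonal (B - G) differences and add back the center green."""
    diff = (
        (B[i - 1, j - 1] - G[i - 1, j - 1])
        + (B[i - 1, j + 1] - G[i - 1, j + 1])
        + (B[i + 1, j - 1] - G[i + 1, j - 1])
        + (B[i + 1, j + 1] - G[i + 1, j + 1])
    ) / 4.0
    return G[i, j] + diff
```

No division by a pixel value ever occurs, so the zero-green special case of the ratio law disappears entirely.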
According to a second aspect of the embodiments of the present invention, there is provided an image processing apparatus including:
one or more processors;
one or more memories;
one or more applications, wherein the one or more applications are stored in the one or more memories and configured to be executed by the one or more processors, the one or more programs configured to perform the method as described in the first aspect or any of the embodiments of the first aspect.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by associated hardware instructed by a program, which may be stored in a computer-readable storage medium, and the storage medium may include: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
While the image processing method and apparatus provided by the present invention have been described in detail, those skilled in the art will appreciate that the invention is not limited to the foregoing embodiments and applications.

Claims (9)

1. An image processing method, comprising:
acquiring a photosensitive image to be processed acquired by image acquisition equipment, wherein each pixel point in the image to be processed corresponds to an original pixel value of one of an R channel component, a G channel component and a B channel component;
carrying out bilinear color interpolation processing on the image to be processed by using the original pixel value of the channel component corresponding to each pixel point to obtain target pixel values of three channel components corresponding to each pixel point;
correcting a second target pixel value of the R channel component and a third target pixel value of the B channel component of each pixel point by using a first target pixel value of the G channel component of each pixel point to obtain a processed color image;
the correcting the second target pixel value of the R channel component and the third target pixel value of the B channel component by using the first target pixel value of the G channel component of each pixel point includes:
aiming at each pixel point, obtaining a corrected pixel value of an R channel component of each pixel point by utilizing a first correction formula and a first target pixel value of a G channel component and a second target pixel value of the R channel component of each pixel point;
aiming at each pixel point, obtaining a corrected pixel value of a B channel component of the pixel point by utilizing a second correction formula and a first target pixel value of the G channel component and a third target pixel value of the B channel component of each pixel point;
and outputting the processed color image.
2. The image processing method according to claim 1, wherein performing bilinear color interpolation on the image to be processed by using an original pixel value of a channel component corresponding to each pixel point to obtain target pixel values of three channel components corresponding to each pixel point, includes:
aiming at each pixel point, calculating a first target pixel value of a G channel component by utilizing a first interpolation calculation formula and an original pixel value of the G channel component in four surrounding nearest neighbor pixel points;
aiming at each pixel point, calculating a second target pixel value of the R channel component by utilizing a second interpolation calculation formula and an original pixel value of the R channel component in four surrounding nearest neighbor pixel points;
and aiming at each pixel point, calculating a third target pixel value of the B channel component by using a third interpolation calculation formula and the original pixel values of the B channel component in the four surrounding nearest-neighbor pixel points.
3. The image processing method according to claim 2,
the first interpolation calculation formula includes:
G(i,j)=(g(i,j)+g(i+1,j)+g(i,j+1)+g(i+1,j+1))/x
wherein (i, j) represents a pixel point coordinate, (i, j), (i+1, j), (i, j+1) and (i+1, j+1) represent the four pixel point coordinates nearest to pixel point (i, j), G(i, j) represents the first target pixel value of the G channel component of pixel point (i, j), and g(i, j) represents the original pixel value of the G channel component of pixel point (i, j); if any one of the four pixel points does not correspond to an original pixel value of the G channel component, the original pixel value corresponding to that pixel point is taken as 0, and x is the number of original pixel values of the G channel component among the four pixel points;
the second interpolation calculation formula includes:
R’(i,j)=(r(i,j)+r(i+1,j)+r(i,j+1)+r(i+1,j+1))/y
wherein (i, j) represents a pixel point coordinate, (i, j), (i+1, j), (i, j+1) and (i+1, j+1) represent the four pixel point coordinates nearest to pixel point (i, j), R'(i, j) represents the second target pixel value of the R channel component of pixel point (i, j), and r(i, j) represents the original pixel value of the R channel component of pixel point (i, j); if any one of the four pixel points does not correspond to an original pixel value of the R channel component, the original pixel value corresponding to that pixel point is taken as 0, and y is the number of original pixel values of the R channel component among the four pixel points;
the third interpolation calculation formula includes:
B’(i,j)=(b(i,j)+b(i+1,j)+b(i,j+1)+b(i+1,j+1))/z
wherein (i, j) represents a pixel point coordinate, (i, j), (i+1, j), (i, j+1) and (i+1, j+1) represent the four pixel point coordinates nearest to pixel point (i, j), B'(i, j) represents the third target pixel value of the B channel component of pixel point (i, j), and b(i, j) represents the original pixel value of the B channel component of pixel point (i, j); if any one of the four pixel points does not correspond to an original pixel value of the B channel component, the original pixel value corresponding to that pixel point is taken as 0, and z is the number of original pixel values of the B channel component among the four pixel points.
4. The image processing method according to claim 1, wherein the first modification formula includes:
R(i, j) = R'(i, j) + G'(i, j) − Gmean
wherein (i, j) represents a pixel point coordinate, R(i, j) represents the corrected pixel value of the R channel component of pixel point (i, j), R'(i, j) represents the second target pixel value of the R channel component of pixel point (i, j), G'(i, j) represents the first target pixel value or the original pixel value of the G channel component of pixel point (i, j), and Gmean represents the mean of the target pixel values of the G channel component over the at least one pixel point, among the 8 pixel points surrounding pixel point (i, j), whose original pixel value is of the R channel component;
the second correction formula includes:
B(i, j) = B'(i, j) + G'(i, j) − Gmean
wherein (i, j) represents a pixel point coordinate, B(i, j) represents the corrected pixel value of the B channel component of pixel point (i, j), B'(i, j) represents the third target pixel value of the B channel component of pixel point (i, j), G'(i, j) represents the first target pixel value or the original pixel value of the G channel component of pixel point (i, j), and Gmean represents the mean of the target pixel values of the G channel component over the at least one pixel point, among the 8 pixel points surrounding pixel point (i, j), whose original pixel value is of the B channel component.
5. The image processing method according to claim 4, wherein the G '(i, j) represents the original pixel value of the pixel point (i, j) when the original pixel value of the pixel point (i, j) is the original pixel value of the G channel component, and the G' (i, j) represents the first target pixel value of the G channel component when the original pixel value of the pixel point (i, j) is the original pixel value of the R channel component or the original pixel value of the B channel component.
6. The image processing method according to claim 1, wherein the number of pixel points of the image to be processed is greater than or equal to 20 million.
7. The image processing method according to claim 1, wherein the image to be processed is in a RAW Bayer format.
8. The image processing method according to claim 1, wherein a processing manner of the bilinear color interpolation processing includes parallel processing or serial processing.
9. An image processing apparatus characterized by comprising:
a touch-sensitive display;
one or more processors;
one or more memories;
one or more applications, wherein the one or more applications are stored in the one or more memories and configured to be executed by the one or more processors, the one or more programs configured to perform the method of any of claims 1-8.
CN201910675913.6A 2019-07-25 2019-07-25 Image processing method and device Active CN110430403B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910675913.6A CN110430403B (en) 2019-07-25 2019-07-25 Image processing method and device

Publications (2)

Publication Number Publication Date
CN110430403A CN110430403A (en) 2019-11-08
CN110430403B true CN110430403B (en) 2021-11-02

Family

ID=68412395

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910675913.6A Active CN110430403B (en) 2019-07-25 2019-07-25 Image processing method and device

Country Status (1)

Country Link
CN (1) CN110430403B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111127480B (en) * 2019-12-18 2023-06-30 上海众源网络有限公司 Image processing method and device, electronic equipment and storage medium
CN111626935B (en) * 2020-05-18 2021-01-15 成都乐信圣文科技有限责任公司 Pixel map scaling method, game content generation method and device
CN112004003B (en) * 2020-08-07 2021-12-21 深圳市汇顶科技股份有限公司 Image processing method, chip, electronic device, and storage medium
CN113781349A (en) * 2021-09-16 2021-12-10 Oppo广东移动通信有限公司 Image processing method, image processing apparatus, electronic device, and storage medium

Citations (2)

Publication number Priority date Publication date Assignee Title
CN102665030A (en) * 2012-05-14 2012-09-12 浙江大学 Improved bilinear Bayer format color interpolation method
CN102938843A (en) * 2012-11-22 2013-02-20 华为技术有限公司 Image processing method, image processing device and imaging device

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
CN104574277A (en) * 2015-01-30 2015-04-29 京东方科技集团股份有限公司 Image interpolation method and image interpolation device

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
CN102665030A (en) * 2012-05-14 2012-09-12 浙江大学 Improved bilinear Bayer format color interpolation method
CN102938843A (en) * 2012-11-22 2013-02-20 华为技术有限公司 Image processing method, image processing device and imaging device

Also Published As

Publication number Publication date
CN110430403A (en) 2019-11-08

Similar Documents

Publication Publication Date Title
CN110430403B (en) Image processing method and device
Chang et al. Effective use of spatial and spectral correlations for color filter array demosaicking
Lukac et al. A novel cost effective demosaicing approach
US8837853B2 (en) Image processing apparatus, image processing method, information recording medium, and program providing image blur correction
CN111510691B (en) Color interpolation method and device, equipment and storage medium
WO2009126540A1 (en) Interpolation system and method
CN108122201A (en) A kind of Bayer interpolation slide fastener effect minimizing technology
TW201944773A (en) Image demosaicer and method
Chen et al. Effective demosaicking algorithm based on edge property for color filter arrays
CN110852953B (en) Image interpolation method and device, storage medium, image signal processor and terminal
CN113676629B (en) Image sensor, image acquisition device, image processing method and image processor
JP2013055623A (en) Image processing apparatus, image processing method, information recording medium, and program
CN111539892A (en) Bayer image processing method, system, electronic device and storage medium
CN104410786A (en) Image processing apparatus and control method for image processing apparatus
WO2022061879A1 (en) Image processing method, apparatus and system, and computer-readable storage medium
Saito et al. Demosaicing approach based on extended color total-variation regularization
US20110261236A1 (en) Image processing apparatus, method, and recording medium
Hua et al. A color interpolation algorithm for Bayer pattern digital cameras based on green components and color difference space
Wang et al. Demosaicing with improved edge direction detection
Kim et al. On recent results in demosaicing of Samsung 108MP CMOS sensor using deep learning
KR101327790B1 (en) Image interpolation method and apparatus
KR100741517B1 (en) Noise insensitive high resolution color interpolation method for considering cross-channel correlation
JP2013055622A (en) Image processing apparatus, image processing method, information recording medium, and program
Chang et al. Adaptive color filter array demosaicing with artifact suppression
EP4332834A1 (en) Method and camera device for generating a moiré-corrected image file

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant