CN113454687A - Image processing method, apparatus and system, computer readable storage medium - Google Patents
- Publication number: CN113454687A (application CN202080013781.1)
- Authority: CN (China)
- Prior art keywords: channel, pixel, image, type, pixel point
- Legal status: Pending (an assumption by Google Patents, not a legal conclusion)
Classifications
- G06T11/001 (G06T - Image data processing; 2D image generation: Texturing; Colouring; Generation of texture or colour)
- G06T7/90 (G06T - Image data processing; Image analysis: Determination of colour characteristics)
- G06T2207/10024 (Indexing scheme for image analysis; Image acquisition modality: Color image)
Abstract
The present specification provides an image processing method, an image processing apparatus, and a computer-readable storage medium. The method includes: acquiring a first-channel pixel value of each first-type pixel point in an original image, where the original image contains second-channel pixel values of the first-type pixel points and first-channel pixel values of the second-type pixel points; calculating a color-difference value between the first channel and the second channel of each first-type pixel point according to the first-channel and second-channel pixel values of that pixel point; and determining a second-channel pixel value of each second-type pixel point according to the color-difference values between the first and second channels of the first-type pixel points and the first-channel pixel value of that second-type pixel point.
Description
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image processing method, an image processing apparatus, and a computer-readable storage medium.
Background
In digital still cameras and video cameras, an image sensor records the raw image. The raw image is generally a Bayer-pattern image; that is, each pixel point in the image contains a pixel value for only one color channel. To convert a raw image into an RGB image that a user can view, color reconstruction of the raw image is required. However, color reconstruction for different Bayer formats requires different ISP (Image Signal Processor) algorithms and hardware designs, which increases the complexity of color reconstruction.
Disclosure of Invention
In order to overcome the problems in the related art and obtain a full-color image without redesigning the ISP algorithm and hardware, the present specification provides an image processing method, an image processing apparatus, and a computer-readable storage medium.
According to a first aspect of the embodiments herein, there is provided an image processing method, the method including: acquiring a first-channel pixel value of each first-type pixel point in an original image, where the original image contains second-channel pixel values of the first-type pixel points and first-channel pixel values of the second-type pixel points; calculating a color-difference value between the first channel and the second channel of each first-type pixel point according to the first-channel and second-channel pixel values of that pixel point; and determining a second-channel pixel value of each second-type pixel point according to the color-difference values between the first and second channels of the first-type pixel points and the first-channel pixel value of that second-type pixel point.
According to a second aspect of the embodiments of the present specification, there is provided an image processing apparatus, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the program, implements the following method: acquiring a first-channel pixel value of each first-type pixel point in an original image, where the original image contains second-channel pixel values of the first-type pixel points and first-channel pixel values of the second-type pixel points; calculating a color-difference value between the first channel and the second channel of each first-type pixel point according to the first-channel and second-channel pixel values of that pixel point; and determining a second-channel pixel value of each second-type pixel point according to the color-difference values between the first and second channels of the first-type pixel points and the first-channel pixel value of that second-type pixel point.
According to a third aspect of embodiments herein, there is provided a computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the image processing method of the first aspect.
The technical solutions provided by the embodiments of this specification can have the following beneficial effects: for a raw image captured by an image sensor based on an array other than RGGB, the first-channel pixel values of the first-type pixel points and the second-channel pixel values of the second-type pixel points are obtained, from which an RGB image or a conventional RGGB Bayer-array image can be derived. A full-color image can thus be obtained without redesigning the ISP algorithm and hardware.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the specification.
Drawings
To describe the technical solutions in the embodiments of the present disclosure more clearly, the drawings required for the description of the embodiments are briefly introduced below. The drawings described below are merely some embodiments of the present disclosure; those skilled in the art can derive other drawings from them without inventive effort.
Fig. 1 is a raw image acquired by an image sensor.
Fig. 2 is a raw image acquired by another image sensor.
Fig. 3 is a flowchart illustrating a method for obtaining a second channel pixel value of a second type of pixel point according to an exemplary embodiment of the present disclosure.
Fig. 4 is a schematic diagram illustrating a method for obtaining a second channel pixel value of a second type of pixel point according to an exemplary embodiment of the present disclosure.
Fig. 5 is a schematic diagram illustrating a method for obtaining a first channel pixel value of a first type of pixel point according to an exemplary embodiment of the present disclosure.
FIG. 6 is a schematic diagram illustrating a method of obtaining features of an acquired original image according to an exemplary embodiment of the present description.
Fig. 7 is a schematic diagram illustrating another method for obtaining a first channel pixel value of a first type pixel point according to an exemplary embodiment of the present disclosure.
FIG. 8 is a schematic diagram illustrating another method of capturing features of an original image according to an exemplary embodiment of the present description.
Fig. 9 is a flowchart illustrating another method for obtaining a first channel pixel value of a first type of pixel point according to an exemplary embodiment of the present disclosure.
Fig. 10 is a flowchart illustrating another method for obtaining a first channel pixel value of a first type of pixel point according to an exemplary embodiment of the present disclosure.
Fig. 11 is a flowchart illustrating another method for obtaining a first channel pixel value of a first type of pixel point according to an exemplary embodiment of the present disclosure.
Fig. 12 is a flow chart illustrating a method of acquiring an RGGB bayer image according to an example embodiment of the present description.
Fig. 13 is a flow chart illustrating another method of acquiring an RGGB bayer image according to an example embodiment of the present description.
Fig. 14 is a hardware configuration diagram of an image processing apparatus according to an embodiment of the present specification.
Detailed Description
In an image-capturing apparatus such as a digital camera or a video camera, an image sensor such as a CMOS (Complementary Metal Oxide Semiconductor) or CCD (Charge Coupled Device) sensor records the raw image. By placing different color filters in front of the image sensor, an original image with incomplete color information (called a Bayer-array image) is obtained, and an RGB image is reconstructed through subsequent image processing.
One type of raw image acquired by an image sensor is an RGGB Bayer-array (Bayer pattern) image. The format of the raw image captured by an image sensor is referred to as the sampling pattern of the sensor. Referring to fig. 1, which shows an example of an RGGB Bayer-array image, one image unit of the RGGB Bayer-array image (for example, the four gray squares shown) includes 4 identical 2 × 2 image blocks. In each 2 × 2 image block, the pixel point in row 1, column 1 is an R (Red) channel pixel point, the pixel points in row 1, column 2 and row 2, column 1 are G (Green) channel pixel points, and the pixel point in row 2, column 2 is a B (Blue) channel pixel point. Throughout the text and figures of this application, the capital letters R, G, B denote the different channels, and the corresponding lowercase letters r, g, b denote the pixel values of pixel points of those channels.
Another type of raw image acquired by an image sensor is a four-pixel Bayer-array (Quad Bayer Coding, QBC) image. Referring to fig. 2, which shows an example of a four-pixel Bayer-array image, one image unit of a QBC image includes 4 2 × 2 image blocks: the 2 × 2 block in row 1, column 1 (the black block in the figure) contains 4 R-channel pixel points; the 2 × 2 blocks in row 1, column 2 and row 2, column 1 (the dark gray and light gray blocks) each contain 4 G-channel pixel points; and the 2 × 2 block in row 2, column 2 (the white block) contains 4 B-channel pixel points.
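As an illustration of the two sampling patterns described above, the following sketch (not part of the patent; the function names are ours) builds one 4 × 4 image unit for each pattern, marking each pixel point with the letter of the channel it records:

```python
import numpy as np

def rggb_unit():
    """One 4x4 unit of an RGGB Bayer array: four identical 2x2 blocks,
    each laid out as [[R, G], [G, B]]."""
    block = np.array([["R", "G"], ["G", "B"]])
    return np.tile(block, (2, 2))  # 4 copies of the 2x2 block

def qbc_unit():
    """One 4x4 unit of a four-pixel (Quad) Bayer array: each 2x2 block
    holds four pixel points of the same channel."""
    unit = np.empty((4, 4), dtype="<U1")
    unit[0:2, 0:2] = "R"  # row-1/col-1 block: 4 R pixel points
    unit[0:2, 2:4] = "G"  # row-1/col-2 block: 4 G pixel points
    unit[2:4, 0:2] = "G"  # row-2/col-1 block: 4 G pixel points
    unit[2:4, 2:4] = "B"  # row-2/col-2 block: 4 B pixel points
    return unit

print(rggb_unit())
print(qbc_unit())
```

Printing the two units makes the difference visible: the RGGB unit alternates channels pixel by pixel, while the QBC unit groups four same-channel pixel points into each 2 × 2 block.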
In addition, the raw image acquired by the image sensor may be another type of Bayer-array image; the arrangements of the R-channel, G-channel and B-channel pixel points differ between Bayer-array types and are not described here again.
In order to convert a raw image into an RGB image understandable to a user, color reconstruction of the raw image is required. However, color reconstruction for images of different Bayer formats requires different ISP algorithms and hardware designs; for example, an ISP algorithm and hardware designed for color reconstruction of RGGB Bayer-array images are not suitable for color reconstruction of QBC images. This makes color reconstruction highly complex.
In order to overcome the above problems and obtain an RGB image without redesigning ISP algorithms and hardware, the present specification provides an image processing method, an apparatus, and a computer-readable storage medium. Embodiments of the present specification are described in detail below.
As shown in fig. 3, fig. 3 is a flow chart illustrating an image processing method according to an exemplary embodiment of the present description. The method provided by the embodiment of the present application may be executed by any terminal device and/or server with a computing processing capability, which is not limited in the present application. As shown in fig. 3, the method provided by the embodiment of the present application may include the following steps:
In step 301, a first-channel pixel value of each first-type pixel point in the original image is obtained, where the original image contains second-channel pixel values of the first-type pixel points and first-channel pixel values of the second-type pixel points.
In step 302, a color-difference value between the first channel and the second channel of each first-type pixel point is calculated according to the first-channel and second-channel pixel values of that pixel point.
In step 303, a second-channel pixel value of each second-type pixel point is determined according to the color-difference values between the first and second channels of the first-type pixel points and the first-channel pixel value of that second-type pixel point.
In an exemplary embodiment, the raw image is collected by an image sensor, and the image sensor may use a four-pixel bayer array (QBC) as a sampling mode, and may also use other bayer array formats as a sampling mode, which is not limited in this application. In the following examples, for convenience of explanation, the sampling mode is exemplified by a four-pixel bayer array mode. It will be appreciated by those skilled in the art that the method of the present application is equally applicable to processing raw images acquired in other sampling modes.
Referring to fig. 4, an original image 401 collected by the image sensor may include second-channel pixel values of first-type pixel points and first-channel pixel values of second-type pixel points. A first-type pixel point includes only a second-channel pixel value and no first-channel pixel value; a second-type pixel point includes only a first-channel pixel value and no second-channel pixel value. A first-type pixel point may be an R-channel pixel point (i.e., a pixel point containing only an R-channel pixel value) or a B-channel pixel point (i.e., a pixel point containing only a B-channel pixel value). When the first-type pixel point is an R-channel pixel point, the second-channel pixel value is an R-channel pixel value; when the first-type pixel point is a B-channel pixel point, the second-channel pixel value is a B-channel pixel value. A second-type pixel point may be a G-channel pixel point (i.e., a pixel point containing only a G-channel pixel value), and the first-channel pixel value is then a G-channel pixel value. In this application, B-channel pixel points and their pixel values are processed in the same way as R-channel pixel points and their pixel values, so B-channel and R-channel pixel points are collectively referred to as first-type pixel points, and the B-channel pixel values of B-channel pixel points and the R-channel pixel values of R-channel pixel points are collectively referred to as second-channel pixel values. For convenience of description, the following takes the first-type pixel point as an R-channel pixel point and the second-channel pixel value as an R-channel pixel value to explain the processing of first-type pixel points and their pixel values.
The processing of the first-type pixels as the B-channel pixels can be referred to the processing of the R-channel pixels, which is not described in detail in this disclosure.
In some embodiments, referring to fig. 4, the raw image 401 collected by the image sensor includes the R-channel pixel values r of R-channel pixel points such as 4011 and the G-channel pixel values g of G-channel pixel points such as 4012. From the original image 401, the G-channel pixel value g of R-channel pixel point 4011 can be obtained, that is, the pixel value g of pixel point 4026 in image 402; pixel points 4011 and 4026 are the same pixel point. The G-channel pixel value g of each B-channel pixel point is obtained similarly.
From the R-channel pixel value r of R-channel pixel point 4011 in the original image 401 and the G-channel pixel value g of that pixel point obtained in the above step, the color-difference value between the R channel and the G channel of pixel point 4011 can be calculated, that is, the pixel value r-g of pixel point 4027 in image 403.
From the color-difference values r-g of the R-channel pixel points calculated in the above step, combined with the G-channel pixel value g of a G-channel pixel point, the R-channel pixel value of that G-channel pixel point can be determined; for example, the R-channel pixel value r of G-channel pixel point 4028 in image 404 is obtained.
It should be noted that the G-channel pixel values of the B-channel pixel points were already obtained in the foregoing step, so the R-channel pixel values of the B-channel pixel points can also be calculated: each B-channel pixel point is treated as a G-channel pixel point, and its R-channel pixel value is obtained in the same way as for the G-channel pixel points. By determining the R-channel pixel value of every pixel point in the original image, the R-channel estimated image 404 of the original image is obtained. That is, the final R-channel estimated image 404 includes the R-channel pixel values of the G-channel pixel points as well as those of the B-channel pixel points.
In some embodiments, the color difference value between the first channel and the second channel of the second type pixel point can be interpolated according to the color difference value between the first channel and the second channel of the first type pixel point; and determining a second channel pixel value of the second type pixel point according to the color difference value between the first channel and the second channel of the second type pixel point and the first channel pixel value of the second type pixel point.
Referring to fig. 4, still taking the first-type pixel point as an R-channel pixel point, how to determine the R-channel pixel value r of G-channel pixel point 4028 is described. From the original image 401 and its corresponding G-channel estimated image 402, the color-difference image 403 between the R channel and the G channel at the R-channel pixel points can be obtained. The color-difference image 403 is then interpolated at the G-channel and B-channel pixel positions to obtain a complete color-difference image 413, which contains the color-difference value between the R-channel and G-channel pixel values for every pixel point in the original image. When obtaining the complete color-difference image 413, smoothing may also be performed after the interpolation. The interpolation method may be bilinear interpolation or another existing interpolation method. Adding the G-channel estimated image 402 and the complete color-difference image 413 yields the R-channel estimated image 404, which includes the R-channel pixel value r of G-channel pixel point 4028.
With the above-described embodiments, the R-channel estimation image of the original image can be obtained. It should be understood by those skilled in the art that, according to the B channel pixel value of the original image and the obtained G channel estimated image, the same method as the above embodiment is adopted, and the B channel estimated image of the original image can also be obtained, which is not described herein again.
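The color-difference procedure above (mosaic 401, G estimate 402, color-difference image 403, complete color-difference image 413, R estimate 404) can be sketched in a few lines. The sketch below is a minimal illustration, assuming an RGGB-style layout and normalized bilinear averaging for the interpolation; the function names, the kernel, and the normalization are our own choices, not taken from the patent:

```python
import numpy as np

def convolve2d_same(img, kernel):
    """Tiny 'same'-size 2D convolution with zero padding."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += kernel[i, j] * padded[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def estimate_r_channel(raw, g_est, r_mask):
    """Return an R-channel estimate via color-difference interpolation.

    raw    : mosaic image (R values valid where r_mask is True)
    g_est  : full G-channel estimate, same shape as raw
    r_mask : boolean mask of R-channel pixel positions
    """
    # Color difference r - g at the R pixel positions (image 403 in the text).
    diff = np.zeros_like(raw, dtype=float)
    diff[r_mask] = raw[r_mask] - g_est[r_mask]

    # Fill the remaining positions by normalized bilinear averaging of the
    # known differences (the complete color-difference image 413).
    kernel = np.array([[0.25, 0.5, 0.25],
                      [0.5,  1.0, 0.5],
                      [0.25, 0.5, 0.25]])
    num = convolve2d_same(diff, kernel)
    den = convolve2d_same(r_mask.astype(float), kernel)
    diff_full = num / np.maximum(den, 1e-12)

    # Adding the G estimate back yields the R estimate (image 404).
    return g_est + diff_full
```

On a flat test image (constant R at the R sites, constant G estimate) the interpolated color difference is constant, so the recovered R channel is constant everywhere, which matches the color-difference consistency assumption.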
In some embodiments, interpolation may be performed according to pixel values in the original image to obtain an initial first channel pixel value of the original image; calculating the characteristics of an initial first channel pixel value of the original image; and filtering the original image according to the characteristics of the initial first channel pixel value of the original image to obtain the first channel pixel value of the original image, wherein the first channel pixel value of the original image comprises the first channel pixel value of the first type of pixel points.
Referring to fig. 5, the first-type pixel point is still taken as an R-channel pixel point, and the first-channel pixel value as a G-channel pixel value. To obtain the G-channel pixel value g of R-channel pixel point 4011 in the original image 401 (i.e., to obtain the pixel value g of pixel point 4026 in image 402), the following method may be adopted:
and performing G-channel interpolation on pixel points in the original image 401 to obtain an image containing the initial G-channel pixel value of the original image. The initial G-channel pixel value refers to a G-channel pixel value obtained by interpolating an original image. In fig. 5, a pixel point of the image 401 is interpolated to obtain an image 406, and the pixel point in the image 406 includes an initial G-channel pixel value of the original image. Then, the features of the image 406 are calculated, and the original image 401 is filtered according to the calculated features, so that the image 402 is obtained. The image 402 includes G-channel pixel values of R-channel pixel points, and is a G-channel estimation image of the original image 401.
For each R-channel pixel point, the features of an image block containing that pixel point can be obtained, and the pixel point is filtered based on those features to obtain its G-channel pixel value. The image block of an R-channel pixel point may be an image block centered on that pixel point, such as a horizontal image block, a vertical image block, or a two-dimensional image block. A horizontal image block contains only one row of pixel points; a vertical image block contains only one column of pixel points.
In some embodiments, first channel interpolation processing may be performed on each pixel point in an original image along a first direction to obtain an interpolated image, where each pixel point in the interpolated image includes a first channel pixel value; determining a gradient structure tensor of the first type of pixel points according to the interpolation image; and acquiring the feature vector of the first-class pixel points on the first channel based on the gradient structure tensor of the first-class pixel points.
With reference to fig. 5, the first-type pixel point is still an R-channel pixel point, and the first-channel pixel value a G-channel pixel value. Calculating the features of the pixel points in image 406 can be implemented as follows:
An interpolation direction is selected that enables G-channel interpolation of all pixel points of the image, so that image 406 can be obtained from image 401. The values g_xy in image 406 are the G-channel pixel values obtained by interpolating along the selected direction; the values g may be either G-channel pixel values obtained by interpolation or G-channel pixel values from the original image, which this application does not limit. From the interpolated image 406, the gradient structure tensor of each R-channel pixel point in the original image can be determined, and from that tensor the feature vector of the pixel point on the G channel can be determined. For example, interpolating R-channel pixel point 4011 in the original image 401 along the selected direction yields the G-channel interpolated value g_xy of pixel point 4021, from which the gradient structure tensor and the feature vector of the pixel point on the G channel are determined.
In some embodiments, the first direction includes a horizontal direction and a vertical direction. First-channel interpolation is performed on each pixel point of the original image along the horizontal direction to obtain a horizontal interpolation image, and along the vertical direction to obtain a vertical interpolation image. The gradient structure tensor of the first-type pixel points can then be determined from the horizontal and vertical interpolation images. By interpolating in both directions, the processed image has a more natural appearance in the horizontal and vertical directions, avoiding the problem of the image being finer in one direction while showing obvious discontinuity artifacts in the other.
Referring specifically to fig. 6, still taking R-channel pixel point 4011 in the original image 401 as an example: interpolating along the horizontal direction yields the horizontal interpolation image 407 and the horizontal G-channel interpolated value g_x of pixel point 4013; interpolating along the vertical direction yields the vertical interpolation image 408 and the vertical G-channel interpolated value g_y of pixel point 4014. From the horizontal interpolation image 407 and the vertical interpolation image 408, the gradient structure tensor and the feature vector of the pixel point on the G channel can then be determined.
In addition, the interpolation directions may be the two diagonal directions of the image, may include both the horizontal and vertical directions and the two diagonal directions, or may be other directions. When other interpolation directions are selected, the gradient structure tensor and feature vector of a pixel point on the G channel are obtained from the interpolated images in a manner similar to the above embodiments, and details are not repeated here.
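The structure-tensor step can be illustrated as follows. This is a hedged sketch: the finite-difference operator and the way the local orientation is read off the tensor entries are common defaults, assumed here for illustration, not details specified by the patent:

```python
import numpy as np

def structure_tensor(g_h, g_v):
    """Per-pixel entries (Jxx, Jxy, Jyy) of the gradient structure tensor,
    built from the horizontally (g_h) and vertically (g_v) interpolated
    G images (407 and 408 in the text)."""
    gx = np.gradient(g_h, axis=1)  # horizontal gradient from image 407
    gy = np.gradient(g_v, axis=0)  # vertical gradient from image 408
    return gx * gx, gx * gy, gy * gy

def dominant_orientation(jxx, jxy, jyy):
    """Angle (radians) of the eigenvector belonging to the larger
    eigenvalue, i.e. the direction of strongest intensity change."""
    return 0.5 * np.arctan2(2.0 * jxy, jxx - jyy)

# For a purely vertical edge the gradient points along x, so the dominant
# orientation is 0 radians everywhere it is defined.
img = np.tile(np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0]), (6, 1))
jxx, jxy, jyy = structure_tensor(img, img)
theta = dominant_orientation(jxx, jxy, jyy)
```

In a full pipeline the tensor entries would typically be smoothed over a neighborhood before the eigen-analysis; that step is omitted here for brevity.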
In some embodiments, the pixel values of the neighborhood pixel points of a target pixel point in the first direction may be obtained, and first-channel interpolation may be performed on the target pixel point based on those pixel values.
The G-channel pixel value is still taken as the first-channel pixel value, and the horizontal direction as the first direction. G-channel horizontal interpolation of the pixel points in the original image 401 of fig. 6 may be performed as follows:
referring to fig. 7, an original image block 409 including the first row of pixels of 4 original image units is shown, and G-channel interpolation processing is performed on each pixel point, where pixel values of adjacent pixel points in the horizontal direction may be obtained first, and then G-channel interpolation processing is performed. Taking the R channel pixel 4011 as an example of a target pixel, the pixel values of the neighboring pixels can be obtained first, for example, the pixel values of the neighboring pixels 4015, 4016, 4017, 4018, 4019, and 4020 in the original image can be obtained, and the G channel interpolation of the R channel pixel 4011 is obtained by performing interpolation by combining the pixel values of the pixels themselves. Or, less pixel values of the neighboring pixel points may be selected, for example, only the pixel values of the neighboring pixel points 4015 and 4016 in the original image are obtained, and interpolation is performed by combining the pixel values of the pixel points themselves. Or selecting more pixel values of the adjacent pixel points, and performing interpolation by combining the pixel values of the pixel points. When more adjacent pixel points are selected for interpolation, a better interpolation effect can be obtained, and then a better imaging effect can be obtained under the condition that the subsequent processing is not changed. In a specific application, the balance between imaging effect and resource consumption should be considered.
The above embodiment is based on the horizontal direction; G-channel interpolation of R-channel pixel points in the vertical direction is similar and is not described again.
In some embodiments, performing first-channel interpolation on the target pixel point based on the pixel values of its neighborhood pixel points in the first direction includes: performing first-channel interpolation on the target pixel point based on the pixel values of its first-channel neighborhood pixel points, the pixel values of its second-channel neighborhood pixel points, and the interpolation coefficients of the first-channel and second-channel neighborhood pixel points.
Taking fig. 7 as an example, interpolation along the horizontal direction to obtain the G-channel pixel value of a pixel point is described as follows. For R-channel pixel point 4011, appropriate interpolation coefficients may be selected to interpolate its own pixel value together with the pixel values of its neighborhood pixel points, yielding the pixel value of that pixel point on the G channel. The interpolation coefficients may be derived from the color-difference consistency principle, or obtained in other ways such as model training, which is not specifically limited. For example, the pixel values of the neighborhood pixel points 4015, 4016, 4017, 4018, 4019 and 4020 in the original image may be obtained, namely r_-3, g_-2, g_-1, r_1, g_2 and g_3, together with the pixel value r_0 of the pixel point itself. With interpolation coefficients k_-3, ..., k_3 derived from the color-difference consistency principle, the G-channel pixel value g_x0 of R-channel pixel point 4011 is: g_x0 = k_-3·r_-3 + k_-2·g_-2 + k_-1·g_-1 + k_0·r_0 + k_1·r_1 + k_2·g_2 + k_3·g_3 (1).
it is also possible to select only the pixel points 4016, 4017, 4018 and 4019 adjacent to the pixel, whose pixel values are g respectively-2、g-1、r1And g2Then, the pixel value G of the R channel pixel 4011 in the G channelx0The method comprises the following steps:
the above embodiment is based on the horizontal direction, and the method for performing G channel interpolation on R channel pixel points in the vertical direction is similar to the horizontal direction, and is not described herein again. In addition, the G channel pixel value at the B channel pixel point may be obtained by the same interpolation method as that for obtaining the G channel pixel value at the R channel pixel point.
In the above embodiments, the G-channel pixel values at the R-channel and B-channel pixel points are obtained by interpolation, with the interpolation coefficients derived from the color-difference consistency principle, so the obtained G-channel pixel values are more accurate and the processed image looks more real and natural.
In some embodiments, in the interpolation process, weighting may be further performed on interpolation coefficients of pixels in a second channel neighborhood of the target pixel.
Still taking fig. 7 as an example, in combination with formula (1), the R channel pixel values (r-3 and r1) of the neighboring pixel points of the R channel pixel point 4011 in formula (1) may be multiplied by a number to adjust the interpolation result of the R channel pixel point 4011 on the G channel.
In some embodiments, the weighting the interpolation coefficients of the pixel points in the second channel neighborhood of the target pixel point includes: weighting the interpolation coefficient of the pixel point in the second channel neighborhood of the target pixel point by the weight larger than 1 so as to enhance the high-frequency component in the original image; and weighting the interpolation coefficient of the pixel point in the second channel neighborhood of the target pixel point by the weight less than 1 so as to weaken the low-frequency component in the original image.
Still taking fig. 7 as an example, in combination with formula (1), the pixel value (r0) of the R channel pixel point 4011 in formula (1) and the R channel pixel values (r-3 and r1) of its neighboring pixel points may be multiplied by the same number a to adjust the frequency components in the original image; the interpolation result gx0 of the G channel pixel value of the R channel pixel point 4011 can then be written as:
When a > 1, the high-frequency components of the original image can be enhanced, so that the details of the subsequently reconstructed image are clearer; when a < 1, the high-frequency components can be attenuated, weakening the details of the subsequently reconstructed image. Here a is a real number.
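A minimal sketch of the weighting described above, with assumed coefficients: because the second-channel (R) coefficients sum to zero, scaling that term by a leaves flat regions unchanged while amplifying (a > 1) or attenuating (a < 1) the detail term.

```python
# Sketch of the frequency-adjustment step: the coefficients applied to
# the second-channel (R) samples are all scaled by the same weight a.
# The zero-sum R term carries the high-frequency detail, so flat
# regions are unchanged for any a. Coefficients are illustrative.
def interp_g_weighted(r_m3, g_m1, r0, r1, g2, a=1.0):
    g_part = 0.5 * g_m1 + 0.5 * g2                       # sums to 1
    r_part = a * (-0.25 * r_m3 + 0.5 * r0 - 0.25 * r1)   # zero-sum detail
    return g_part + r_part

flat = interp_g_weighted(80, 80, 80, 80, 80, a=1.5)
edge = interp_g_weighted(0, 0, 200, 200, 0, a=1.5)
print(flat, edge)  # 80.0 75.0
```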
The above embodiment is based on the horizontal direction, and the method for performing G channel interpolation on R channel pixel points in the vertical direction is similar to the horizontal direction, and is not described herein again.
In some embodiments, the target pixel points include a target pixel point of a first phase and a target pixel point of a second phase, and the target pixel point of the first phase and the target pixel point of the second phase are interpolated using different interpolation coefficients.
Still taking fig. 7 as an example, different pixel points are located at different phases. When the R channel pixel point 4018 is interpolated based on the interpolation coefficient to obtain its G channel pixel value g1, an interpolation coefficient different from that of the R channel pixel point 4011 may be used. For example, the pixel values of the pixel points 4016, 4017, 4011, 4019, 4020, and 4021 adjacent to the pixel point 4018 in the original image may be obtained, which are g-2, g-1, r0, g2, g3, and r4, respectively. Combining the pixel value r1 of the pixel point itself, and selecting an interpolation coefficient derived according to the color difference consistency principle, the pixel value g1 of the R channel pixel point 4018 on the G channel is then:
when the R channel pixel 4018 is interpolated based on the interpolation coefficient to obtain the G channel pixel value G1, the same interpolation coefficient as that of the R channel pixel 4011 may also be used, which is not described herein again.
The above embodiment is based on the horizontal direction, and the method for performing G channel interpolation on R channel pixel points in the vertical direction is similar to the horizontal direction, and is not described herein again.
In some embodiments, determining the gradient structure tensor of the first type of pixel points according to the horizontal interpolation image and the vertical interpolation image may be implemented by: performing convolution processing on the horizontal interpolation image based on a predetermined first convolution core to obtain the horizontal direction gradient of the first type of pixel points; performing convolution processing on the vertical interpolation image based on a predetermined second convolution core to obtain the vertical direction gradient of the first type of pixel points; and determining the gradient structure tensor of the first-class pixel points according to the horizontal direction gradient and the vertical direction gradient of the first-class pixel points.
Still taking the determination of the gradient structure tensor of the R channel pixel point 4011 in fig. 6 as an example. As shown in fig. 8, a horizontal direction gradient may be obtained by performing convolution processing on the horizontal interpolation image 407 using a predetermined first convolution kernel, and a vertical direction gradient may be obtained by performing convolution processing on the vertical interpolation image 408 using a predetermined second convolution kernel. For example, for the horizontal interpolation image 407, a convolution kernel [1 0 -1] is used for convolution to obtain the horizontal direction gradient of the horizontal interpolation image 407; for the vertical interpolation image 408, a convolution kernel [1 0 -1]T is used for convolution to obtain the vertical direction gradient of the vertical interpolation image 408. The convolution kernels used in the above convolution processing are not limited to [1 0 -1] and [1 0 -1]T; other convolution kernels may also be used, for example, [-1 0 1] and [-1 0 1]T, or [1 -1] and [1 -1]T, and the like.
Then, taking the R channel pixel point 4011 as the center, a horizontal direction gradient block Ix and a vertical direction gradient block Iy of a certain window size are selected, and Ix and Iy are respectively expanded into column vectors; the gradient structure tensor Ω is then:

Ω = [IxᵀIx, IxᵀIy; IxᵀIy, IyᵀIy]
wherein the gradient structure tensor Ω is a 2 row 2 column matrix. The selection of the window sizes of the horizontal direction gradient block Ix and the vertical direction gradient block Iy can be determined according to the image quality requirement and hardware resources. The larger the window size, the higher the image quality that can be achieved, but the more hardware resources are consumed.
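A pure-Python sketch of the gradient computation and window accumulation described above, using the [1 0 -1] kernels from the text; the window size and the synthetic image are assumptions for illustration.

```python
# Sketch of the gradient structure tensor for one pixel point:
# horizontal gradients from a [1 0 -1] kernel, vertical gradients from
# its transpose, then window sums of gradient products form the
# symmetric 2x2 matrix described above.
def structure_tensor(img, y, x, radius):
    def gx(i, j):  # [1 0 -1] horizontal convolution
        return img[i][j - 1] - img[i][j + 1]
    def gy(i, j):  # [1 0 -1]^T vertical convolution
        return img[i - 1][j] - img[i + 1][j]
    a = b = c = 0.0
    for i in range(y - radius, y + radius + 1):
        for j in range(x - radius, x + radius + 1):
            ix, iy = gx(i, j), gy(i, j)
            a += ix * ix
            b += ix * iy
            c += iy * iy
    return [[a, b], [b, c]]

# A horizontal ramp varies only along x, so Iy = 0 everywhere and
# only the top-left entry of the tensor is non-zero:
img = [[5 * col for col in range(7)] for _ in range(7)]
print(structure_tensor(img, 3, 3, 1))  # [[900.0, 0.0], [0.0, 0.0]]
```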
In some embodiments, obtaining the feature vector of the first type of pixel point on the first channel based on the gradient structure tensor of the first type of pixel point includes: acquiring a hash value vector corresponding to the gradient structure tensor of the first type of pixel points; and determining the hash value vector of the first type of pixel points as the feature vector of the first type of pixel points.
Taking fig. 8 and its result as an example, after the gradient structure tensor Ω = [a, b; b, c] of the R channel pixel point 4011 is obtained, the hash value vector [k1, k2, k3] corresponding to the gradient structure tensor can be obtained by formula. The expressions of k1, k2, k3 may be:
k1=arctan[(c-a+δ)/(2b)],
the hash value vector determined in the above manner is the feature vector of the R channel pixel 4011.
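A sketch of computing the first hash element k1 from the entries a, b, c of the gradient structure tensor, per the formula above. Here δ is assumed to be a small constant that keeps the quotient well defined, and b is assumed non-zero; k2 and k3 would be computed analogously from the same tensor.

```python
import math

# Sketch of the first hash element for the tensor Ω = [[a, b], [b, c]],
# following k1 = arctan[(c - a + δ)/(2b)] from the text. delta is an
# assumed small constant; b must be non-zero in this simple form.
def hash_k1(a, b, c, delta=1e-8):
    return math.atan((c - a + delta) / (2.0 * b))

# For an isotropic tensor (a == c) the angle is essentially zero:
print(round(hash_k1(4.0, 1.0, 4.0), 6))  # 0.0
```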
In some embodiments, filtering the original image according to a feature of an initial first channel pixel value of the original image to obtain the first channel pixel value of the original image includes: determining a filter kernel of a first type pixel point based on a feature vector of the first type pixel point in the original image; and filtering the first type of pixel points based on the filter kernels of the first type of pixel points to obtain a first channel pixel value of the original image.
Referring to fig. 9 in combination with fig. 6, after the features of the R channel pixel point 4011 in the original image 401 are obtained (the features may be feature vectors, or other parameters capable of characterizing the features of the image pixel points), a filter kernel may further be obtained. Based on the obtained filter kernel, the R channel pixel point 4011 in the image block 401 may be filtered to obtain the G channel pixel value g of the pixel point 4012.
In some embodiments, determining a filter kernel of a first type of pixel point in the original image based on a feature vector of the first type of pixel point includes: performing interpolation processing on the feature vector based on a predetermined quantization level number to obtain the filter kernel; or, rounding the elements in the feature vector to obtain the filter kernel.
Referring to fig. 10, a filter kernel is used to filter the R channel pixel point 4011 in the original image 401 to obtain the pixel point 4026 having a G channel pixel value, where the filter kernel can be obtained as follows:
Taking the feature vector being the hash value vector as an example, assume that the quantization level numbers of the three elements in the hash value vector are n1, n2, and n3; then the number of filter kernels in the preset filter bank is n = n1 × n2 × n3. Here, the quantization level number refers to the number of quantization levels of the three elements k1, k2, k3 of the hash value vector [k1, k2, k3]. Assuming that the size of the filter kernel is m × m, the filter bank can be composed of m² three-dimensional lookup tables, each of size n = n1 × n2 × n3. The filter kernel corresponding to the pixel point can be obtained by interpolation from the three-dimensional lookup tables according to the hash value vector; the interpolation method can be bicubic interpolation, tetrahedral interpolation, or the like.
In addition, the filter kernel may also be obtained by rounding the three elements k1, k2, k3 of the hash value vector [k1, k2, k3], obtaining the filter kernel pre-stored in the filter bank that is closest to the three element values, and using it as the filter kernel for subsequent filtering.
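A sketch of the rounding-based lookup described above. The quantization levels and the filter bank contents are invented for illustration: each hash element is snapped to its nearest level, and the three resulting indices address an n1 × n2 × n3 table.

```python
# Sketch of rounding-based filter selection: snap each hash element to
# its nearest quantization level, then index the pre-stored bank.
# Levels and bank contents are made up for illustration.
def nearest_level(value, levels):
    return min(range(len(levels)), key=lambda i: abs(levels[i] - value))

def lookup_filter(hash_vec, levels_per_elem, bank):
    i, j, k = (nearest_level(v, lv) for v, lv in zip(hash_vec, levels_per_elem))
    return bank[i][j][k]

levels = ([-1.0, 0.0, 1.0], [0.0, 0.5, 1.0], [0.0, 0.5, 1.0])
bank = [[[f"filter_{i}{j}{k}" for k in range(3)] for j in range(3)]
        for i in range(3)]
print(lookup_filter((0.1, 0.6, 0.9), levels, bank))  # filter_112
```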
In some embodiments, filtering the first type of pixel points based on the filter kernel of the first type of pixel points includes: acquiring image blocks in a preset window which takes the first type of pixel points as the center in the original image; and filtering the image block through the filter core.
Still referring to fig. 10, to filter the R channel pixel point 4022 to obtain the G channel pixel value g of the pixel point 4023, the following method may be adopted: an image block of a certain size centered on the R channel pixel point 4022 is selected, for example, a 3 × 3 image block 409 centered on the pixel point, and the image block is convolved with the convolution kernel corresponding to the R channel pixel point to obtain the G channel estimation value g corresponding to the R channel pixel point.
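The filtering step above amounts to an element-wise multiply-and-sum of the 3 × 3 image block with the 3 × 3 filter kernel; the delta kernel used below is an illustrative choice that simply reproduces the center pixel.

```python
# Sketch of filtering one pixel point: the 3x3 block centered on it is
# convolved (element-wise multiply and sum) with its 3x3 filter kernel.
def filter_pixel(block3x3, kernel3x3):
    return sum(block3x3[i][j] * kernel3x3[i][j]
               for i in range(3) for j in range(3))

block = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
delta = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]  # identity: keeps the center
print(filter_pixel(block, delta))  # 5
```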
In some embodiments, before obtaining an image block in a preset window centered on the first type of pixel point in the original image, the method further includes: and if the number of the pixels on one side of the first-class pixels is smaller than the radius of the window, carrying out mirror image processing on the pixels on the other side of the first-class pixels so as to complement the pixels on one side of the first-class pixels.
Still taking the selection of a 3 × 3 image block for filtering as an example, referring to fig. 11, when the R channel pixel point 4024 located in the first row of the original image is filtered to obtain the pixel value g of the pixel point 4025 on the G channel, because the pixel point is located at the edge of the original image, when the image block 410 centered on the pixel point is selected, there are no pixel points on one side of the image block 410. In this case, the other side of the image block 410 may be mirrored to complete it. As shown in fig. 11, in the image block 410, the pixel values of the row below the R channel pixel point 4024 are mirrored to the row above it to obtain the image block 411; the image block is then convolved with the convolution kernel corresponding to the R channel pixel point 4024 to obtain the G channel estimation value g corresponding to the R channel pixel point (i.e., the pixel value of the pixel point 4025 in the image 402).
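A sketch of the mirror completion described above, assuming the pixel point may lie in the first or last row while its columns are interior; the missing row is filled by reflecting across the center row before the 3 × 3 block is formed.

```python
# Sketch of mirror completion for edge pixel points: when a first-row
# pixel lacks a row above, the row below it is copied in its place so
# a full 3x3 block can be formed. Columns are assumed interior
# (0 < x < width - 1); only rows are mirrored here.
def block_with_mirror(img, y, x):
    rows = []
    for dy in (-1, 0, 1):
        i = y + dy
        if i < 0 or i >= len(img):
            i = y - dy  # reflect across the center row
        rows.append([img[i][x + dx] for dx in (-1, 0, 1)])
    return rows

img = [[10, 11, 12], [20, 21, 22], [30, 31, 32]]
# First-row pixel: the missing top row is a mirror of the row below.
print(block_with_mirror(img, 0, 1))
# [[20, 21, 22], [10, 11, 12], [20, 21, 22]]
```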
With the above embodiments, a G-channel estimation image of an original image can be obtained. The G channel estimation image comprises G channel pixel values of all pixel points.
In some embodiments, the original image may be further converted into an image in a second bayer format or an RGB image based on the first channel pixel value of the first type pixel point and the second channel pixel value of the second type pixel point, where the image in the second bayer format is the image shown in fig. 2.
If the original image is converted into an image in the second bayer format, the pixel values of three channels (R channel, G channel, and B channel) of each pixel point of the original image may be obtained by using the method in the above embodiment. Referring to fig. 12, an R channel image estimation map 404, a G channel estimation image 402, and a B channel estimation image 414 of the original image are acquired, respectively.
If the original image is converted into an RGGB bayer format image, as shown in fig. 12, the pixel values of the three channels (R channel, G channel, and B channel) of each pixel point of the original image may first be obtained, and an RGGB bayer format image 412 may then be obtained based on the R channel estimation image 404, the G channel estimation image 402, and the B channel estimation image 414. Alternatively, as shown in fig. 13, only the specific pixel values of specific pixel points in the original image may be obtained, where the specific pixel values of specific pixel points refer to the pixel values of the pixel points required for converting the original image 401 into the RGGB image 412 that cannot be directly obtained from the original image 401, that is, the pixel values of the pixel points shown at 415 in fig. 13.
Referring to fig. 12 and fig. 13, after the RGB image is acquired, the original image can be converted into a conventional RGGB image 412. Therefore, images acquired by image sensors based on bayer arrays other than RGGB can also access existing ISP processing modules without redesigning ISP algorithms and hardware.
In some embodiments, after converting the original image to an RGB image, the RGB image is converted to a pseudo YUV image, and median filtering is performed on the U component and the V component in the pseudo YUV image. For example, one way to compute a pseudo YUV image is as follows: Y = (R + B + 2 × G)/4, U = R − G, V = B − G.
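The pseudo YUV conversion given above can be sketched directly; since a gray pixel (R = G = B) maps to zero chroma, median filtering U and V suppresses isolated false-color pixels without touching the luma.

```python
# Sketch of the pseudo-YUV conversion from the text:
# Y = (R + B + 2*G)/4, U = R - G, V = B - G.
def rgb_to_pseudo_yuv(r, g, b):
    y = (r + b + 2 * g) / 4.0
    u = r - g
    v = b - g
    return y, u, v

# A gray pixel has zero chroma, so median filtering U and V cannot
# disturb neutral regions:
print(rgb_to_pseudo_yuv(128, 128, 128))  # (128.0, 0, 0)
```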
For an original image acquired based on a bayer array other than RGGB, after the RGB image is obtained by the method of the present application, the RGB image may be converted into a YUV image, and certain methods may then be used to improve the image quality; for example, median filtering may be used to remove false colors from the image.
Although the foregoing embodiments of the present application mostly describe a four-pixel bayer array image as an original image, it should be understood by those skilled in the art that the method of the present application can also be applied to convert an original image acquired by other non-RGGB array-based image sensors into an RGB image or a conventional RGGB image.
Corresponding to the embodiments of the method, the present specification also provides embodiments of the apparatus and the terminal applied thereto.
Referring to fig. 14, a schematic structural diagram of an image processing apparatus provided in an embodiment of the present application is shown, and specifically, the image processing apparatus includes a memory 1401 and a processor 1402.
The memory 1401 may include a volatile memory (volatile memory); the memory 1401 may also include a non-volatile memory (non-volatile memory); the memory 1401 may also comprise a combination of the above-described types of memory. The processor 1402 may be a Central Processing Unit (CPU). The processor 1402 may further include a hardware video image processing device. The hardware video image processing apparatus may be an application-specific integrated circuit (ASIC), a Programmable Logic Device (PLD), or a combination thereof. Specifically, the programmable logic device may be, for example, a Complex Programmable Logic Device (CPLD), a field-programmable gate array (FPGA), or any combination thereof.
The memory 1401 is used for storing program instructions, and when the program instructions are executed, the processor 1402 calls the program instructions stored in the memory 1401 to perform the following steps:
acquiring a first channel pixel value of a first type of pixel point in an original image, wherein the original image comprises a second channel pixel value of the first type of pixel point and a first channel pixel value of a second type of pixel point;
acquiring a color difference value between a first channel and a second channel of the first-class pixel points according to a first channel pixel value and a second channel pixel value of the first-class pixel points;
and determining a second channel pixel value of the second type pixel point based on the color difference value between the first channel and the second channel of the first type pixel point and the first channel pixel value of the second type pixel point.
In some embodiments of the image processing apparatus, when determining the second channel pixel value of the second type pixel point based on the color difference value between the first channel and the second channel of the first type pixel point and the first channel pixel value of the second type pixel point, the processor is further configured to: interpolate a color difference value between a first channel and a second channel of the second type pixel points according to the color difference value between the first channel and the second channel of the first type pixel points; and determine a second channel pixel value of the second type pixel point according to the color difference value between the first channel and the second channel of the second type pixel point and the first channel pixel value of the second type pixel point.
In some embodiments of the image processing apparatus, when acquiring the first channel pixel value of the first type pixel point in the original image, the processor is further configured to: carry out interpolation according to the pixel values in the original image to obtain an initial first channel pixel value of the original image; calculate the features of the initial first channel pixel value of the original image; and filter the original image according to the features of the initial first channel pixel value of the original image to obtain the first channel pixel value of the original image, wherein the first channel pixel value of the original image includes the first channel pixel value of the first type pixel points.
In some embodiments of the image processing apparatus, said computing a feature of initial first channel pixel values of said original image, said processor is further configured to: performing first channel interpolation processing on each pixel point in an original image along a first direction to obtain an interpolated image, wherein each pixel point in the interpolated image comprises a first channel pixel value; determining a gradient structure tensor of the first type of pixel points according to the interpolation image; and acquiring the feature vector of the first-class pixel points on the first channel based on the gradient structure tensor of the first-class pixel points.
In some embodiments of the image processing apparatus, the first direction includes a horizontal direction and a vertical direction; the processor is further configured to perform a first channel interpolation process on each pixel point in the original image along the first direction to obtain an interpolated image, and is further configured to: performing first channel interpolation processing on each pixel point in the original image along the horizontal direction to obtain a horizontal interpolation image; performing first channel interpolation processing on each pixel point in the original image along the vertical direction to obtain a vertical interpolation image; the determining the gradient structure tensor of the first-class pixel points according to the interpolation image comprises: and determining the gradient structure tensor of the first type of pixel points according to the horizontal interpolation image and the vertical interpolation image.
In some embodiments of the image processing apparatus, the determining a gradient structure tensor of the first type of pixel points from the horizontal interpolation image and the vertical interpolation image, the processor further configured to: performing convolution processing on the horizontal interpolation image based on a predetermined first convolution core to obtain the horizontal direction gradient of the first type of pixel points; performing convolution processing on the vertical interpolation image based on a predetermined second convolution core to obtain the vertical direction gradient of the first type of pixel points; and determining the gradient structure tensor of the first-class pixel points according to the horizontal direction gradient and the vertical direction gradient of the first-class pixel points.
In some embodiments of the image processing apparatus, the processor is further configured to: acquiring pixel values of neighborhood pixels of a target pixel in the first direction; and performing first channel interpolation processing on the target pixel point based on the pixel value of the neighborhood pixel point of the target pixel point in the first direction. In some embodiments of the image processing apparatus, the neighborhood pixels include first channel neighborhood pixels and second channel neighborhood pixels; the processor is further configured to perform first channel interpolation processing on the target pixel point based on the pixel value of the neighborhood pixel point of the target pixel point in the first direction, and is further configured to: and performing first channel interpolation processing on the target pixel point based on the pixel value of the first channel neighborhood pixel point, the pixel value of the second channel neighborhood pixel point, the interpolation coefficient of the first channel neighborhood pixel point and the interpolation coefficient of the second channel neighborhood pixel point of the target pixel point.
In some embodiments of the image processing apparatus, the processor is further configured to: and weighting the interpolation coefficients of the second channel neighborhood pixels of the target pixel.
In some embodiments of the image processing apparatus, the processor is further configured to: weighting the interpolation coefficient of the pixel point in the second channel neighborhood of the target pixel point by the weight larger than 1 so as to enhance the high-frequency component in the original image; and weighting the interpolation coefficient of the pixel point in the second channel neighborhood of the target pixel point by the weight less than 1 so as to weaken the low-frequency component in the original image.
In some embodiments of the image processing device, the target pixels include target pixels of a first phase and target pixels of a second phase, and the processor is further configured to: and carrying out interpolation processing on the pixel point of the first phase and the target pixel point of the second phase by adopting different interpolation coefficients.
In some embodiments of the image processing apparatus, when obtaining, based on the gradient structure tensor of the first type pixel points, the feature vector of the first type pixel points on the first channel, the processor is further configured to: acquire a hash value vector corresponding to the gradient structure tensor of the first type pixel points; and determine the hash value vector of the first type pixel points as the feature vector of the first type pixel points.
In some embodiments of the image processing apparatus, the features are feature vectors; the original image is filtered according to the characteristic of the initial first channel pixel value of the original image to obtain the first channel pixel value of the original image, and the processor is further configured to: determining a filter kernel of a first type pixel point based on a feature vector of the first type pixel point in the original image; and filtering the first type of pixel points based on the filter kernels of the first type of pixel points to obtain a first channel pixel value of the original image.
In some embodiments of the image processing apparatus, the determining a filter kernel of a first type of pixel point in the original image based on a feature vector of the first type of pixel point, and the processor is further configured to: performing interpolation processing on the feature vector based on a predetermined quantization level number to obtain the filter kernel; or, rounding the elements in the feature vector to obtain the filter kernel.
In some embodiments of the image processing apparatus, the processor is further configured to filter the first type of pixel points based on a filter kernel of the first type of pixel points, and to: acquiring image blocks in a preset window which takes the first type of pixel points as the center in the original image; and filtering the image block through the filter core.
In some embodiments of the image processing apparatus, before obtaining an image block in a preset window centered on the first type of pixel point in the original image, the processor is further configured to: and if the number of the pixels on one side of the first-class pixels is smaller than the radius of the window, carrying out mirror image processing on the pixels on the other side of the first-class pixels so as to complement the pixels on one side of the first-class pixels.
In some embodiments of the image processing apparatus, the first channel is a G channel, and the second channel is an R channel or a B channel.
In some embodiments of the image processing apparatus, the original image is an image in a first bayer format, and the image in the first bayer format includes a plurality of target image blocks, each target image block including four 2 × 2 sub image blocks, wherein the sub image block in the 1st row and 1st column of the target image block includes 4 R channel pixel points, the sub image blocks in the 1st row 2nd column and the 2nd row 1st column each include 4 G channel pixel points, and the sub image block in the 2nd row 2nd column includes 4 B channel pixel points.
In some embodiments of the image processing apparatus, the processor is further configured to: based on a first channel pixel value of the first type of pixel points and a second channel pixel value of the second type of pixel points, the original image is converted into an image or an RGB image in a second Bayer format, the image in the second Bayer format comprises a plurality of 2 x 2 image blocks, the 1 st row and 1 st column pixel points in each image block are R channel pixel points, the 1 st row and 2 nd column pixel points in the 1 st row and the 2 nd row and 1 st column pixel points in the 2 nd row are G channel pixel points, and the 2 nd column pixel points in the 2 nd row are B channel pixel points.
In some embodiments of the image processing apparatus, the processor is further configured to: after converting the original image into an RGB image, converting the RGB image into a YUV image; and performing median filtering on the U component and the V component in the YUV image.
In an embodiment of the present application, a computer-readable storage medium is further provided, where a computer program is stored, and when the computer program is executed by a processor, all embodiments of the above-mentioned method of the present application are implemented, and are not described herein again.
The computer readable storage medium may be an internal storage unit of the device according to any of the preceding embodiments, for example, a hard disk or a memory of the device. The computer readable storage medium may also be an external storage device of the device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), etc. provided on the device. Further, the computer-readable storage medium may also include both an internal storage unit and an external storage device of the apparatus. The computer-readable storage medium is used for storing the computer program and other programs and data required by the apparatus. The computer readable storage medium may also be used to temporarily store data that has been output or is to be output.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
The foregoing description has been directed to specific embodiments of this disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
Other embodiments of the present description will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This specification is intended to cover any variations, uses, or adaptations of the specification following, in general, the principles of the specification and including such departures from the present disclosure as come within known or customary practice within the art to which the specification pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the specification being indicated by the following claims.
It will be understood that the present description is not limited to the precise arrangements described above and shown in the drawings, and that various modifications and changes may be made without departing from the scope thereof. The scope of the present description is limited only by the appended claims.
The above description is only a preferred embodiment of the present disclosure, and should not be taken as limiting the present disclosure, and any modifications, equivalents, improvements, etc. made within the spirit and principle of the present disclosure should be included in the scope of the present disclosure.
Claims (41)
1. An image processing method, characterized in that the method comprises:
acquiring a first channel pixel value of a first type of pixel point in an original image, wherein the original image comprises a second channel pixel value of the first type of pixel point and a first channel pixel value of a second type of pixel point;
calculating the color difference value between the first channel and the second channel of the first type of pixel points according to the first channel pixel value and the second channel pixel value of the first type of pixel points;
and determining a second channel pixel value of the second type pixel point according to the color difference value between the first channel and the second channel of the first type pixel point and the first channel pixel value of the second type pixel point.
2. The method of claim 1, wherein determining the second channel pixel value of the second type of pixel point based on the color difference value between the first channel and the second channel of the first type of pixel point and the first channel pixel value of the second type of pixel point comprises:
interpolating a color difference value between a first channel and a second channel of the second type pixel points according to the color difference value between the first channel and the second channel of the first type pixel points;
and determining a second channel pixel value of the second type pixel point according to the color difference value between the first channel and the second channel of the second type pixel point and the first channel pixel value of the second type pixel point.
3. The method of claim 1, wherein the obtaining the first channel pixel value of the first type pixel point in the original image comprises:
carrying out interpolation according to the pixel value in the original image to obtain an initial first channel pixel value of the original image;
calculating the characteristics of an initial first channel pixel value of the original image;
and filtering the original image according to the characteristics of the initial first channel pixel value of the original image to obtain the first channel pixel value of the original image, wherein the first channel pixel value of the original image comprises the first channel pixel value of the first type of pixel points.
4. The method of claim 3, wherein said computing the characteristic of the initial first channel pixel value of the original image comprises:
performing first channel interpolation processing on each pixel point in an original image along a first direction to obtain an interpolated image, wherein each pixel point in the interpolated image comprises a first channel pixel value;
determining a gradient structure tensor of the first type of pixel points according to the interpolation image;
and acquiring the feature vector of the first type pixel points on the first channel based on the gradient structure tensor of the first type pixel points.
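Claims 4 to 6 build a per-pixel gradient structure tensor from the two directional gradients; its eigenvalues and dominant orientation are the natural features to extract. A sketch under the assumption of a square box window for the local tensor sums (the window shape and helper names are not from the claim):

```python
import numpy as np

def box_sum(a, r=1):
    """Sum of a over a (2r+1) x (2r+1) window, zero-padded at the border."""
    p = np.pad(a, r)
    h, w = a.shape
    out = np.zeros((h, w))
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out += p[dy:dy + h, dx:dx + w]
    return out

def structure_tensor_features(gx, gy):
    """Per-pixel tensor J = sum over the window of
    [[gx*gx, gx*gy], [gx*gy, gy*gy]], with its closed-form 2x2
    eigen decomposition."""
    jxx, jxy, jyy = box_sum(gx * gx), box_sum(gx * gy), box_sum(gy * gy)
    tr, det = jxx + jyy, jxx * jyy - jxy ** 2
    disc = np.sqrt(np.maximum(tr ** 2 / 4 - det, 0.0))
    lam1, lam2 = tr / 2 + disc, tr / 2 - disc      # lam1 >= lam2 >= 0
    theta = 0.5 * np.arctan2(2 * jxy, jxx - jyy)   # dominant orientation
    return lam1, lam2, theta
```

For a purely horizontal gradient field the smaller eigenvalue vanishes and the orientation angle is zero, which is the degenerate case an edge-adaptive filter keys on.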
5. The method of claim 4, wherein the first direction comprises a horizontal direction and a vertical direction; the method for performing first channel interpolation processing on each pixel point in the original image along the first direction to obtain an interpolated image includes:
performing first channel interpolation processing on each pixel point in the original image along the horizontal direction to obtain a horizontal interpolation image;
performing first channel interpolation processing on each pixel point in the original image along the vertical direction to obtain a vertical interpolation image;
the determining the gradient structure tensor of the first-class pixel points according to the interpolation image comprises:
and determining the gradient structure tensor of the first type of pixel points according to the horizontal interpolation image and the vertical interpolation image.
6. The method of claim 5, wherein determining the gradient structure tensor of the first type of pixel points from the horizontally interpolated image and the vertically interpolated image comprises:
performing convolution processing on the horizontal interpolation image based on a predetermined first convolution kernel to obtain the horizontal direction gradient of the first type pixel points;
performing convolution processing on the vertical interpolation image based on a predetermined second convolution kernel to obtain the vertical direction gradient of the first type pixel points;
and determining the gradient structure tensor of the first type pixel points according to the horizontal direction gradient and the vertical direction gradient of the first type pixel points.
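Claim 6 leaves the two convolution kernels unspecified; central-difference stencils along each axis are one plausible choice. A dependency-free sketch (the kernel values and reflect-style border handling are assumptions):

```python
import numpy as np

# Illustrative stand-ins for the claimed first/second convolution kernels.
KX = np.array([[-0.5, 0.0, 0.5]])   # horizontal gradient
KY = KX.T                           # vertical gradient

def correlate2d(img, k):
    """2-D correlation with reflect padding, no external dependencies."""
    kh, kw = k.shape
    h, w = img.shape
    p = np.pad(img, ((kh // 2, kh // 2), (kw // 2, kw // 2)), mode='reflect')
    out = np.zeros((h, w))
    for i in range(kh):
        for j in range(kw):
            out += k[i, j] * p[i:i + h, j:j + w]
    return out
```

Applied to the horizontal and vertical interpolation images, these two correlations yield the per-pixel gradients that feed the structure tensor of the following step.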
7. The method of claim 4, wherein the first channel interpolation is performed for each pixel point in the original image based on:
acquiring pixel values of neighborhood pixels of a target pixel in the first direction;
performing first channel interpolation processing on the target pixel point based on the pixel value of the neighborhood pixel point of the target pixel point in the first direction;
and the target pixel point is any pixel point in the original image.
8. The method of claim 7, wherein the neighborhood pixels comprise first channel neighborhood pixels and second channel neighborhood pixels;
the performing, on the basis of the pixel value of the neighborhood pixel of the target pixel in the first direction, a first channel interpolation process on the target pixel includes:
and performing first channel interpolation processing on the target pixel point based on the pixel value of the first channel neighborhood pixel point, the pixel value of the second channel neighborhood pixel point, the interpolation coefficient of the first channel neighborhood pixel point and the interpolation coefficient of the second channel neighborhood pixel point of the target pixel point.
9. The method of claim 8, further comprising: weighting the interpolation coefficients of the second channel neighborhood pixel points of the target pixel point.
10. The method of claim 9, wherein weighting the interpolation coefficients of the pixels in the second channel neighborhood of the target pixel comprises:
weighting the interpolation coefficient of the pixel point in the second channel neighborhood of the target pixel point by the weight larger than 1 so as to enhance the high-frequency component in the original image;
and weighting the interpolation coefficient of the pixel point in the second channel neighborhood of the target pixel point by the weight less than 1 so as to weaken the low-frequency component in the original image.
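Claims 8 to 10 describe interpolating the first channel from both same-channel and cross-channel neighbours, with a tunable weight on the cross-channel coefficients. A Malvar-style sketch for one site, where the 5×5 patch layout, the 0.5 Laplacian gain, and the function name are illustrative assumptions:

```python
import numpy as np

def g_at_r(patch, w=1.0):
    """First-channel (G) estimate at the centre of a 5x5 patch whose
    centre pixel is a second-channel (R) site.

    The estimate is the mean of the four G neighbours plus a Laplacian
    correction from the R samples; 'w' rescales the second-channel
    interpolation coefficients as in claims 9-10 (w > 1 amplifies the
    high-frequency correction, w < 1 attenuates it).
    """
    g_avg = (patch[1, 2] + patch[3, 2] + patch[2, 1] + patch[2, 3]) / 4.0
    r_lap = patch[2, 2] - (patch[0, 2] + patch[4, 2] +
                           patch[2, 0] + patch[2, 4]) / 4.0
    return g_avg + w * 0.5 * r_lap
```

On a flat patch the weight has no effect; it only acts where the second channel carries local detail, which is exactly the sharpening behaviour the claim describes.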
11. The method of claim 8, wherein the target pixels include a first phase target pixel and a second phase target pixel, and the first phase target pixel and the second phase target pixel are interpolated using different interpolation coefficients.
12. The method of claim 4, wherein obtaining the feature vector of the first type of pixel points on the first channel based on the gradient structure tensor of the first type of pixel points comprises:
acquiring a hash value vector corresponding to the gradient structure tensor of the first type of pixel points;
and determining the hash value vector of the first type of pixel points as the feature vector of the first type of pixel points.
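Hashing a gradient structure tensor into a compact feature vector is the scheme popularised by RAISR-style filters: bucket the orientation angle, the edge strength, and the coherence. A sketch in which the bucket counts and thresholds are illustrative assumptions; the claim only requires some hash of the tensor:

```python
import numpy as np

def hash_vector(lam1, lam2, theta, n_angles=8, str_th=0.02, coh_th=0.3):
    """Quantise structure-tensor eigen-features (larger/smaller eigenvalue
    and orientation) into a small hash vector:
    (angle bucket, strength flag, coherence flag)."""
    angle = int(((theta % np.pi) / np.pi) * n_angles) % n_angles
    strength = int(lam1 > str_th)
    s1, s2 = np.sqrt(max(lam1, 0.0)), np.sqrt(max(lam2, 0.0))
    coherence = int((s1 - s2) / (s1 + s2 + 1e-12) > coh_th)
    return (angle, strength, coherence)
```

Each distinct tuple then indexes one pre-trained or pre-designed filter kernel, so the per-pixel filter lookup in the later claims reduces to a table access.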
13. The method of claim 3, wherein the feature is a feature vector; the filtering the original image according to the characteristic of the initial first channel pixel value of the original image to obtain the first channel pixel value of the original image includes:
determining a filter kernel of a first type pixel point based on a feature vector of the first type pixel point in the original image;
and filtering the first type of pixel points based on the filter kernels of the first type of pixel points to obtain a first channel pixel value of the original image.
14. The method of claim 13, wherein determining the filter kernel of the first type of pixel based on the feature vector of the first type of pixel in the original image comprises:
performing interpolation processing on the feature vector based on a predetermined quantization level number to obtain the filter kernel;
or,
and rounding the elements in the feature vector to obtain the filter kernel.
15. The method of claim 13, wherein said filtering the first type pixels based on the filter kernel of the first type pixels comprises:
acquiring a first image block in a preset window which takes the first type of pixel points as the center in the original image;
filtering the first image block by the filter kernel.
16. The method according to claim 15, wherein before obtaining the first image block in the predetermined window centered on the first type of pixel point in the original image, the method further comprises:
and if the number of pixel points on one side of the first type pixel point is smaller than the window radius, mirroring the pixel points on the other side of the first type pixel point to complete the pixel points on that side.
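The boundary handling in claim 16 corresponds to what NumPy calls reflect padding: interior pixels are mirrored across the image border to fill out the window around edge pixels. A short sketch (the function name is an assumption):

```python
import numpy as np

def mirrored_window(img, y, x, r):
    """First image block: the (2r+1) x (2r+1) window centred on pixel
    (y, x), with out-of-image samples filled by mirroring interior
    pixels across the border."""
    p = np.pad(img, r, mode='reflect')
    return p[y:y + 2 * r + 1, x:x + 2 * r + 1]
```

Because the padded image indices are shifted by r, the window for pixel (y, x) starts at (y, x) in padded coordinates.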
17. The method of any one of claims 1 to 16, wherein the first channel is a G channel and the second channel is an R channel or a B channel.
18. The method according to any one of claims 1 to 17, wherein the original image is a first Bayer pattern image, the first Bayer pattern image includes a plurality of target image blocks, each target image block includes four 2 × 2 second image blocks, the second image block in the 1 st row and the 1 st column of the target image block includes 4 R-channel pixel points, the second image blocks in the 1 st row and the 2 nd column and in the 2 nd row and the 1 st column of the target image block include 4 G-channel pixel points, and the second image block in the 2 nd row and the 2 nd column of the target image block includes 4 B-channel pixel points.
19. The method of claim 18, further comprising:
converting the original image into a second Bayer pattern image or an RGB image based on the first channel pixel value of the first type pixel point and the second channel pixel value of the second type pixel point, wherein the second Bayer pattern image comprises a plurality of 2 × 2 third image blocks, the pixel point in the 1 st row and the 1 st column of each third image block is an R channel pixel point, the pixel points in the 1 st row and the 2 nd column and in the 2 nd row and the 1 st column are G channel pixel points, and the pixel point in the 2 nd row and the 2 nd column is a B channel pixel point.
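Once full-resolution channel planes have been recovered by the earlier claims, producing the claim-19 second Bayer pattern (R, G / G, B in every 2×2 block) is a strided re-sampling. A sketch, with plane names and shapes assumed:

```python
import numpy as np

def to_second_bayer(r, g, b):
    """Re-sample full-resolution R/G/B planes into an RGGB mosaic."""
    out = np.empty_like(g)
    out[0::2, 0::2] = r[0::2, 0::2]   # row 1, column 1 of each block: R
    out[0::2, 1::2] = g[0::2, 1::2]   # row 1, column 2: G
    out[1::2, 0::2] = g[1::2, 0::2]   # row 2, column 1: G
    out[1::2, 1::2] = b[1::2, 1::2]   # row 2, column 2: B
    return out
```

This keeps each output pixel co-sited with its source pixel, so no additional interpolation is introduced by the format conversion itself.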
20. The method of claim 19, further comprising:
after converting the original image into an RGB image, converting the RGB image into a YUV image;
and performing median filtering on the U component and the V component in the YUV image.
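Claim 20's post-processing (RGB to YUV, then median filtering only the chroma planes) suppresses color speckle without blurring luma detail. A sketch assuming the BT.601 full-range matrix, which the claim does not fix, and a 3×3 median with edge replication:

```python
import numpy as np

def rgb_to_yuv(rgb):
    """RGB -> YUV with the BT.601 full-range matrix (one common choice)."""
    m = np.array([[ 0.299,  0.587,  0.114],
                  [-0.169, -0.331,  0.500],
                  [ 0.500, -0.419, -0.081]])
    return rgb @ m.T

def median3(plane):
    """3x3 median filter with edge replication, for the U and V planes."""
    p = np.pad(plane, 1, mode='edge')
    h, w = plane.shape
    stack = [p[dy:dy + h, dx:dx + w] for dy in range(3) for dx in range(3)]
    return np.median(np.stack(stack), axis=0)
```

A single-pixel chroma outlier is removed entirely by the median, which is why it is preferred here over a linear smoother.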
21. An image processing apparatus comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the following method when executing the program:
acquiring a first channel pixel value of a first type of pixel point in an original image, wherein the original image comprises a second channel pixel value of the first type of pixel point and a first channel pixel value of a second type of pixel point;
calculating the color difference value between the first channel and the second channel of the first type of pixel points according to the first channel pixel value and the second channel pixel value of the first type of pixel points;
and determining a second channel pixel value of the second type pixel point according to the color difference value between the first channel and the second channel of the first type pixel point and the first channel pixel value of the second type pixel point.
22. The device of claim 21, wherein when determining the second channel pixel value of the second type pixel point according to the color difference value between the first channel and the second channel of the first type pixel point and the first channel pixel value of the second type pixel point, the processor is further configured to:
interpolating a color difference value between a first channel and a second channel of the second type pixel points according to the color difference value between the first channel and the second channel of the first type pixel points;
and determining a second channel pixel value of the second type pixel point according to the color difference value between the first channel and the second channel of the second type pixel point and the first channel pixel value of the second type pixel point.
23. The apparatus of claim 21, wherein when acquiring the first channel pixel value of the first type pixel point in the original image, the processor is further configured to:
carrying out interpolation according to the pixel value in the original image to obtain an initial first channel pixel value of the original image;
calculating the characteristics of an initial first channel pixel value of the original image;
and filtering the original image according to the characteristics of the initial first channel pixel value of the original image to obtain the first channel pixel value of the original image, wherein the first channel pixel value of the original image comprises the first channel pixel value of the first type of pixel points.
24. The apparatus of claim 23, wherein when calculating the feature of the initial first channel pixel value of the original image, the processor is further configured to:
performing first channel interpolation processing on each pixel point in an original image along a first direction to obtain an interpolated image, wherein each pixel point in the interpolated image comprises a first channel pixel value;
determining a gradient structure tensor of the first type of pixel points according to the interpolation image;
and acquiring the feature vector of the first type pixel points on the first channel based on the gradient structure tensor of the first type pixel points.
25. The apparatus of claim 24, wherein the first direction comprises a horizontal direction and a vertical direction; and when performing first channel interpolation processing on each pixel point in the original image along the first direction to obtain the interpolated image, the processor is further configured to:
performing first channel interpolation processing on each pixel point in the original image along the horizontal direction to obtain a horizontal interpolation image;
performing first channel interpolation processing on each pixel point in the original image along the vertical direction to obtain a vertical interpolation image;
and when determining the gradient structure tensor of the first type pixel points according to the interpolation image, the processor is further configured to:
and determining the gradient structure tensor of the first type of pixel points according to the horizontal interpolation image and the vertical interpolation image.
26. The device of claim 25, wherein when determining the gradient structure tensor of the first type pixel points according to the horizontal interpolation image and the vertical interpolation image, the processor is further configured to:
performing convolution processing on the horizontal interpolation image based on a predetermined first convolution kernel to obtain the horizontal direction gradient of the first type pixel points;
performing convolution processing on the vertical interpolation image based on a predetermined second convolution kernel to obtain the vertical direction gradient of the first type pixel points;
and determining the gradient structure tensor of the first type pixel points according to the horizontal direction gradient and the vertical direction gradient of the first type pixel points.
27. The device of claim 24, wherein the processor is further configured to:
acquiring pixel values of neighborhood pixels of a target pixel in the first direction;
performing first channel interpolation processing on the target pixel point based on the pixel value of the neighborhood pixel point of the target pixel point in the first direction;
and the target pixel point is any pixel point in the original image.
28. The device of claim 27, wherein the neighborhood pixels comprise first channel neighborhood pixels and second channel neighborhood pixels;
and when performing first channel interpolation processing on the target pixel point based on the pixel value of the neighborhood pixel point of the target pixel point in the first direction, the processor is further configured to:
and performing first channel interpolation processing on the target pixel point based on the pixel value of the first channel neighborhood pixel point, the pixel value of the second channel neighborhood pixel point, the interpolation coefficient of the first channel neighborhood pixel point and the interpolation coefficient of the second channel neighborhood pixel point of the target pixel point.
29. The device of claim 28, wherein the processor is further configured to:
and weighting the interpolation coefficients of the second channel neighborhood pixels of the target pixel.
30. The device of claim 29, wherein the processor is further configured to:
weighting the interpolation coefficient of the pixel point in the second channel neighborhood of the target pixel point by the weight larger than 1 so as to enhance the high-frequency component in the original image;
and weighting the interpolation coefficient of the pixel point in the second channel neighborhood of the target pixel point by the weight less than 1 so as to weaken the low-frequency component in the original image.
31. The device of claim 28, wherein the target pixels comprise target pixels in a first phase and target pixels in a second phase, and wherein the processor is further configured to:
and performing interpolation processing on the target pixel point of the first phase and the target pixel point of the second phase using different interpolation coefficients.
32. The apparatus of claim 24, wherein when obtaining the feature vector of the first type pixel points on the first channel based on the gradient structure tensor of the first type pixel points, the processor is further configured to:
acquiring a hash value vector corresponding to the gradient structure tensor of the first type of pixel points;
and determining the hash value vector of the first type of pixel points as the feature vector of the first type of pixel points.
33. The apparatus of claim 23, wherein the feature is a feature vector; and when filtering the original image according to the feature of the initial first channel pixel value of the original image to obtain the first channel pixel value of the original image, the processor is further configured to:
determining a filter kernel of a first type pixel point based on a feature vector of the first type pixel point in the original image;
and filtering the first type of pixel points based on the filter kernels of the first type of pixel points to obtain a first channel pixel value of the original image.
34. The apparatus of claim 33, wherein when determining the filter kernel of the first type pixel point based on the feature vector of the first type pixel point in the original image, the processor is further configured to:
performing interpolation processing on the feature vector based on a predetermined quantization level number to obtain the filter kernel;
or,
and rounding the elements in the feature vector to obtain the filter kernel.
35. The device of claim 33, wherein when filtering the first type pixel points based on the filter kernel of the first type pixel points, the processor is further configured to:
acquiring a first image block in a preset window which takes the first type of pixel points as the center in the original image;
filtering the first image block by the filter kernel.
36. The device of claim 35, wherein before acquiring the first image block in the predetermined window centered on the first type pixel point in the original image, the processor is further configured to:
and if the number of pixel points on one side of the first type pixel point is smaller than the window radius, mirroring the pixel points on the other side of the first type pixel point to complete the pixel points on that side.
37. The apparatus of any one of claims 21 to 36, wherein the first channel is a G channel and the second channel is an R channel or a B channel.
38. The device according to any one of claims 21 to 37, wherein the original image is a first Bayer pattern image, the first Bayer pattern image includes a plurality of target image blocks, each target image block includes four 2 × 2 second image blocks, the second image block in the 1 st row and the 1 st column of the target image block includes 4 R-channel pixel points, the second image blocks in the 1 st row and the 2 nd column and in the 2 nd row and the 1 st column of the target image block include 4 G-channel pixel points, and the second image block in the 2 nd row and the 2 nd column of the target image block includes 4 B-channel pixel points.
39. The device of claim 38, wherein the processor is further configured to:
converting the original image into a second Bayer pattern image or an RGB image based on the first channel pixel value of the first type pixel points and the second channel pixel value of the second type pixel points, wherein the second Bayer pattern image comprises a plurality of 2 × 2 third image blocks, the pixel point in the 1 st row and the 1 st column of each third image block is an R channel pixel point, the pixel points in the 1 st row and the 2 nd column and in the 2 nd row and the 1 st column are G channel pixel points, and the pixel point in the 2 nd row and the 2 nd column is a B channel pixel point.
40. The device of claim 39, wherein the processor is further configured to:
after converting the original image into an RGB image, converting the RGB image into a YUV image;
and performing median filtering on the U component and the V component in the YUV image.
41. A computer-readable storage medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the image processing method of any one of claims 1 to 20.
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2020/118383 WO2022061879A1 (en) | 2020-09-28 | 2020-09-28 | Image processing method, apparatus and system, and computer-readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113454687A true CN113454687A (en) | 2021-09-28 |
Family
ID=77808743
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202080013781.1A Pending CN113454687A (en) | 2020-09-28 | 2020-09-28 | Image processing method, apparatus and system, computer readable storage medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN113454687A (en) |
WO (1) | WO2022061879A1 (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114792346A (en) * | 2022-04-28 | 2022-07-26 | Oppo广东移动通信有限公司 | Image processing method, image processing apparatus, terminal, and readable storage medium |
CN114928436B (en) * | 2022-07-20 | 2022-09-27 | 华东交通大学 | A smart campus network security protection system |
CN115967809A (en) * | 2022-12-29 | 2023-04-14 | 珠海市欧冶半导体有限公司 | Bayer data compression method and related device |
CN117956300B (en) * | 2024-03-25 | 2024-05-28 | 上海元视芯智能科技有限公司 | Image processing architecture, image processing method and image processing chip |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1992909A (en) * | 2005-12-29 | 2007-07-04 | 华晶科技股份有限公司 | Pixel Color Information Reconstruction Method |
CN101917629A (en) * | 2010-08-10 | 2010-12-15 | 浙江大学 | A Bayer scheme color interpolation method based on green component and color difference space |
US20110069192A1 (en) * | 2009-08-18 | 2011-03-24 | Olympus Corporation | Image processing apparatus and image processing method |
CN110852953A (en) * | 2019-11-15 | 2020-02-28 | 展讯通信(上海)有限公司 | Image interpolation method and device, storage medium, image signal processor and terminal |
CN111539892A (en) * | 2020-04-27 | 2020-08-14 | 展讯通信(上海)有限公司 | Bayer image processing method, system, electronic device and storage medium |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114331836A (en) * | 2021-12-15 | 2022-04-12 | 锐芯微电子股份有限公司 | Image processing method and device and readable storage medium |
CN114359050A (en) * | 2021-12-28 | 2022-04-15 | 北京奕斯伟计算技术有限公司 | Image processing method, image processing apparatus, computer device, storage medium, and program product |
CN114359050B (en) * | 2021-12-28 | 2024-03-29 | 北京奕斯伟计算技术股份有限公司 | Image processing method, apparatus, computer device, storage medium, and program product |
CN114882129A (en) * | 2022-06-13 | 2022-08-09 | 深圳市汇顶科技股份有限公司 | Image processing method, image processing device and chip |
CN115034968A (en) * | 2022-06-30 | 2022-09-09 | 深圳市汇顶科技股份有限公司 | Image reconstruction method, device, module, equipment, storage medium and program product |
Also Published As
Publication number | Publication date |
---|---|
WO2022061879A1 (en) | 2022-03-31 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20210928 ||