CN118317211A - Demosaicing method, demosaicing device, electronic equipment and storage medium - Google Patents

Demosaicing method, demosaicing device, electronic equipment and storage medium

Info

Publication number
CN118317211A
CN118317211A
Authority
CN
China
Prior art keywords
signal
image signals
color difference
horizontal
group
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410301141.0A
Other languages
Chinese (zh)
Inventor
邸宏伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian Wingtech Information Technology Co Ltd
Original Assignee
Xian Wingtech Information Technology Co Ltd
Filing date
Publication date
Application filed by Xian Wingtech Information Technology Co Ltd
Publication of CN118317211A


Abstract

The application relates to the technical field of image processing and provides a demosaicing method, a demosaicing device, an electronic device, and a storage medium. The method comprises the following steps: processing a RAW image signal to be processed to obtain a first group of image signals and a second group of image signals, both of which include an initial high-frequency signal; combining the first group of image signals with the second group of image signals to obtain a third group of image signals, which includes at least a horizontal-vertical luminance signal and a horizontal-vertical color difference signal; comparing the horizontal-vertical luminance signal with the horizontal-vertical color difference signal to determine an interpolation direction; combining, based on the interpolation direction, a plurality of signals included in the third group of image signals to obtain a fourth group of image signals; and converting the fourth group of image signals into an RGB image signal. The method can reduce negative effects such as moire, false color, and the zipper effect in high-frequency regions.

Description

Demosaicing method, demosaicing device, electronic equipment and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a demosaicing method, apparatus, electronic device, and storage medium.
Background
In daily life, each pixel of a color image seen by the human eye is composed of three components: red (R), green (G), and blue (B). The imaging sensor most widely used at present is the complementary metal oxide semiconductor (CMOS) sensor, whose output data format provides only one color component per pixel; such data is generally called Bayer data. The data is further processed to recover the two missing components so that each pixel consists of all three components, a process called demosaicing (Demosaic).
In the related art, a Demosaic method is commonly used to interpolate the missing components of each pixel to obtain pixels having all three components. Commonly used interpolation methods include: (1) nearest neighbor interpolation; (2) bilinear interpolation; (3) the color difference method; (4) the Hamilton & Adams interpolation algorithm; (5) the color-difference-based HA interpolation algorithm; and the like.
Different Demosaic algorithms have different side effects, especially in high-frequency regions, where they can produce the zipper effect as well as moire. In some color-rich scenes, false colors may also be produced.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a demosaicing method, apparatus, electronic device, and readable storage medium that address the zipper effect, moire, and false color produced in high-frequency regions by related demosaicing methods.
The embodiment of the application provides a demosaicing method, which comprises the following steps: processing a RAW image signal to be processed to obtain a first group of image signals and a second group of image signals, wherein the first group of image signals comprises four image signals in the horizontal direction, the second group of image signals comprises four image signals in the vertical direction, and the four image signals comprise an initial luminance signal, a first initial color difference signal, a second initial color difference signal, and an initial high-frequency signal; combining the first group of image signals with the second group of image signals to obtain a third group of image signals, wherein the third group of image signals comprises a horizontal-vertical luminance signal, a horizontal-vertical color difference signal, a horizontal color difference signal, a vertical color difference signal, a horizontal luminance high-frequency signal, and a vertical luminance high-frequency signal; comparing the horizontal-vertical luminance signal with the horizontal-vertical color difference signal to determine an interpolation direction; combining, based on the interpolation direction, a plurality of signals included in the third group of image signals to obtain a fourth group of image signals, wherein the fourth group of image signals comprises a target luminance signal, a first target color difference signal, and a second target color difference signal; and converting the fourth group of image signals into an RGB image signal.
In one possible embodiment, processing the RAW image signal to be processed to obtain the first group of image signals and the second group of image signals includes: processing the RAW image signal to be processed in a first direction according to a preset method to obtain the first group of image signals; and processing the RAW image signal to be processed in a second direction according to the preset method to obtain the second group of image signals. The preset method includes: performing, for each pixel in the RAW image signal to be processed, interpolation processing on the pixel to obtain the R signal, G signal, and B signal corresponding to the pixel; and processing the R signal, the G signal, and the B signal to obtain the initial luminance signal, the first initial color difference signal, the second initial color difference signal, and the initial high-frequency signal corresponding to the pixel.
In one possible embodiment, combining the first group of image signals with the second group of image signals to obtain the third group of image signals includes: determining the horizontal-vertical luminance signal in the third group based on the initial luminance signal in the first group and the initial luminance signal in the second group; determining the horizontal-vertical color difference signal in the third group based on the first initial color difference signal in the first group and the first initial color difference signal in the second group; determining the horizontal color difference signal in the third group based on the second initial color difference signal in the first group; determining the vertical color difference signal in the third group based on the second initial color difference signal in the second group; determining the horizontal luminance high-frequency signal in the third group based on the initial luminance signal and the initial high-frequency signal in the first group together with the horizontal-vertical luminance signal; and determining the vertical luminance high-frequency signal in the third group based on the initial luminance signal and the initial high-frequency signal in the second group together with the horizontal-vertical luminance signal.
In one possible embodiment, combining, based on the interpolation direction, the plurality of signals included in the third group of image signals to obtain the fourth group of image signals includes: determining the target luminance signal based on the horizontal-vertical luminance signal and the luminance high-frequency signal corresponding to the interpolation direction; taking the horizontal-vertical color difference signal as the first target color difference signal; and determining the second target color difference signal based on the color difference signal corresponding to the interpolation direction.
In one possible implementation, the target luminance signal is the difference between the horizontal-vertical luminance signal and a first product, where the first product is the product of the luminance high-frequency signal corresponding to the interpolation direction and a target weight; the second target color difference signal is the sum of a first ratio and a second product, where the first ratio is the ratio of the sum of the horizontal color difference signal and the vertical color difference signal to a first set value, and the second product is the product of the color difference signal corresponding to the interpolation direction and the target weight.
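Written out, the relations just described take the following form (a sketch using the HVYUVE[·] notation introduced in step S102 below; the divisor s stands for the unspecified "first set value", plausibly 2):

```latex
Y_t = \mathrm{HVYUVE}[0] - w \cdot H_{dir}, \qquad
U_t = \mathrm{HVYUVE}[1], \qquad
V_t = \frac{\mathrm{HVYUVE}[2] + \mathrm{HVYUVE}[3]}{s} + w \cdot C_{dir}
```

where w is the target weight, H_dir is the luminance high-frequency signal corresponding to the interpolation direction (HVYUVE[4] for horizontal, HVYUVE[5] for vertical), and C_dir is the color difference signal corresponding to the interpolation direction (HVYUVE[2] for horizontal, HVYUVE[3] for vertical).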
In one possible embodiment, the target weight satisfies at least one of the following: when the absolute value is smaller than or equal to a first preset value, the target weight is a second preset value; when the absolute value is larger than the first preset value and the maximum signal value is smaller than or equal to the product of a third preset value and the minimum signal value, the target weight is proportional to a fourth preset value; and when the absolute value is larger than the first preset value and the maximum signal value is larger than the product of the third preset value and the minimum signal value, the target weight is the fourth preset value. Here, the absolute value is the absolute value of the difference between the horizontal-vertical luminance signal and the horizontal-vertical color difference signal, the maximum signal value is the larger of the horizontal-vertical luminance signal and the horizontal-vertical color difference signal, and the minimum signal value is the smaller of the two.
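As a concrete illustration, the following is a minimal Python sketch of these weight rules for one pixel; the values p1 through p4 and the exact proportionality used in the middle branch are assumptions, since the patent only names them the first through fourth preset values:

```python
def target_weight(y_hv, c_hv, p1=16.0, p2=0.0, p3=4.0, p4=1.0):
    """Sketch of the target-weight rules above (preset values assumed).

    y_hv / c_hv are the horizontal-vertical luminance and color
    difference signals at one pixel.
    """
    diff = abs(y_hv - c_hv)                  # the "absolute value"
    hi, lo = max(y_hv, c_hv), min(y_hv, c_hv)
    if diff <= p1:
        return p2                            # second preset value
    if hi <= p3 * lo:
        return p4 * diff / max(hi, 1e-6)     # proportional to the fourth preset value
    return p4                                # fourth preset value
```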
In one possible embodiment, the method further comprises: traversing the fourth group of image signals using a sliding window of a preset size; selecting, for each image signal in the fourth group, the maximum value and the minimum value of the image signal within a target area, where the target area is obtained by traversing the image formed by that image signal with the sliding window; if the signal value of the center point of the target area is larger than the maximum value, taking the maximum value as the signal value of the center point; and if the signal value of the center point of the target area is smaller than the minimum value, taking the minimum value as the signal value of the center point.
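A minimal sketch of this sliding-window clamp follows, assuming a 3×3 window, edge padding, and that the min/max are taken over the window excluding its center (otherwise the center value could never exceed them):

```python
import numpy as np

def clamp_to_window(plane, win=3):
    """Clamp each pixel of one fourth-group signal plane to the min/max of
    its surrounding window (window size and border handling assumed)."""
    r = win // 2
    padded = np.pad(plane.astype(float), r, mode="edge")
    out = plane.astype(float)
    h, w = plane.shape
    for y in range(h):
        for x in range(w):
            block = padded[y:y + win, x:x + win].ravel()
            neighbors = np.delete(block, block.size // 2)  # drop the center
            lo, hi = neighbors.min(), neighbors.max()
            out[y, x] = min(max(out[y, x], lo), hi)
    return out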
The embodiment of the application provides a demosaicing device, which comprises: a first processing module, configured to process a RAW image signal to be processed to obtain a first group of image signals and a second group of image signals, where the first group of image signals includes four image signals in the horizontal direction, the second group includes four image signals in the vertical direction, and the four image signals include an initial luminance signal, a first initial color difference signal, a second initial color difference signal, and an initial high-frequency signal; a second processing module, configured to combine the first group of image signals with the second group of image signals to obtain a third group of image signals, where the third group includes a horizontal-vertical luminance signal, a horizontal-vertical color difference signal, a horizontal color difference signal, a vertical color difference signal, a horizontal luminance high-frequency signal, and a vertical luminance high-frequency signal; a signal comparison module, configured to compare the horizontal-vertical luminance signal with the horizontal-vertical color difference signal to determine an interpolation direction; a third processing module, configured to combine, based on the interpolation direction, a plurality of signals included in the third group to obtain a fourth group of image signals, where the fourth group includes a target luminance signal, a first target color difference signal, and a second target color difference signal; and a signal conversion module, configured to convert the fourth group of image signals into an RGB image signal.
The embodiment of the application provides an electronic device, which comprises a memory and a processor, where the memory stores a computer program, and the processor, when executing the computer program, implements the steps of the demosaicing method provided by any embodiment of the application.
An embodiment of the present application provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the demosaicing method provided by any embodiment of the present application.
The embodiments of the application provide a demosaicing method, a demosaicing device, an electronic device, and a computer readable storage medium, where the demosaicing method comprises: processing a RAW image signal to be processed to obtain a first group of image signals and a second group of image signals, wherein the first group comprises four image signals in the horizontal direction, the second group comprises four image signals in the vertical direction, and the four image signals comprise an initial luminance signal, a first initial color difference signal, a second initial color difference signal, and an initial high-frequency signal; combining the first group with the second group to obtain a third group of image signals comprising a horizontal-vertical luminance signal, a horizontal-vertical color difference signal, a horizontal color difference signal, a vertical color difference signal, a horizontal luminance high-frequency signal, and a vertical luminance high-frequency signal; comparing the horizontal-vertical luminance signal with the horizontal-vertical color difference signal to determine an interpolation direction; combining, based on the interpolation direction, a plurality of signals included in the third group to obtain a fourth group of image signals comprising a target luminance signal, a first target color difference signal, and a second target color difference signal; and converting the fourth group of image signals into an RGB image signal. The application extracts the high-frequency signal in the RAW image signal to be processed before interpolation, performs the interpolation processing, combines the extracted high-frequency signal into the fourth group of image signals after the interpolation, and then converts the fourth group of image signals into RGB image signals, thereby reducing negative effects such as moire, false color, and the zipper effect in high-frequency regions.
Drawings
FIG. 1 is a flow chart of a Demosaic algorithm in the related art;
FIG. 2 is an application scenario diagram of a demosaicing method in one embodiment;
FIG. 3 is a flow diagram of a demosaicing method in one embodiment;
FIG. 4 is a schematic diagram of a single horizontal row G-channel fill method in one embodiment;
FIG. 5 is a schematic diagram of the calculation of newG1 in one embodiment;
FIG. 6 is a schematic diagram of a G-channel filling method for two rows in the horizontal direction in one embodiment;
FIG. 7 is a schematic diagram of the calculation of newG2 in one embodiment;
FIG. 8 is a schematic diagram of a G channel filling method in a horizontal direction in one embodiment;
FIG. 9 is a schematic diagram of a comparison of a G channel before and after filling in one embodiment;
FIG. 10 is a schematic diagram of a method of filling R-channels in a horizontal direction in one embodiment;
FIG. 11 is a schematic illustration of a B-channel filling method in the horizontal direction in one embodiment;
FIG. 12 is a schematic diagram of a comparison of an RB channel before and after filling in one embodiment;
FIG. 13 is a schematic diagram of an RB image in one embodiment;
FIG. 14 is a schematic illustration of the filling of even rows in the vertical direction in one embodiment;
FIG. 15 is a schematic illustration of the filling of odd rows in the vertical direction in one embodiment;
FIG. 16 is a schematic diagram of a first reference image C0 in one embodiment;
FIG. 17 is a schematic diagram of a second reference image C1 in one embodiment;
FIG. 18 is a schematic diagram of a first color difference image in one embodiment;
FIG. 19 is a schematic diagram of a second color difference image in one embodiment;
FIG. 20 is a schematic diagram of filtering a G component in one embodiment;
FIG. 21 is a schematic illustration of an initial high frequency image in one embodiment;
FIG. 22 is a schematic diagram of an initial luminance image in one embodiment;
FIG. 23 is a block diagram of a demosaicing device in one embodiment;
Fig. 24 is an internal structural diagram of the demosaicing apparatus in one embodiment.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
The image processing process includes a plurality of processing modules, which can be divided, according to the image format they operate on, into RAW domain processing modules, RGB domain processing modules, and luminance-chrominance (YUV) domain processing modules. The module that converts the image from the RAW domain to the RGB domain is called the Demosaic module, and the module that converts the image from the RGB domain to the YUV domain is called the color space conversion (CSC) module.
In the whole image processing process, most tasks concern color restoration, noise removal, and sharpness improvement of an image, and the Demosaic module takes part in all of these, so it plays an important role in each. For color restoration, colors can be restored accurately only if each pixel in the image contains all three RGB color components; if the Demosaic module does not meet this requirement, a zipper effect is produced where colors cross. If the high-frequency regions of the image are not handled well, false color is produced. When light at a particular angle is incident on the contact image sensor (CIS), crosstalk occurs; in other words, light at a particular angle causes the signals of the Gr and Gb channels of the CIS to differ greatly. If the Demosaic module does not handle this well, maze-like noise is produced, affecting sharpness. The Demosaic module is therefore a critical node of the image signal processor (ISP). The ISP is a unit that processes the output signal of the front-end image sensor and can be matched with image sensors from different manufacturers.
The zipper effect arises when interpolation is not carried out along the edge direction, so that after interpolation a number of pixels are distributed at regular intervals in the horizontal or vertical direction. False color arises because high-frequency components are liable to cause aliasing during image interpolation.
In real life, each pixel in a color image consists of three color components: an R component, a G component, and a B component. The imaging sensor most widely used at present is the CMOS sensor, whose output data format provides only one color component per pixel; such data is generally called Bayer data. Bayer is a raw image format in which the R channel and the B channel each occupy 1/4 of the whole image and the G channel occupies 1/2.
Each pixel therefore needs further processing to recover its two missing components, and this recovery process is called Demosaic. The Demosaic algorithm is also called the color filter array (CFA) interpolation algorithm.
The key to the Demosaic algorithm is to interpolate the missing channels of each pixel; the Demosaic algorithm of the related art is briefly described below with reference to fig. 1. As shown in fig. 1, the image signal collected by the image sensor is generally Bayer data and may be a RAW image, in which each pixel has only one color component. The color components of the pixels in the odd rows of the RAW image are arranged as GRGRGRGRGRGR, and those of the pixels in the even rows are arranged as BGBGBGBG. As shown in fig. 1, color extraction is performed on the RAW image according to the color components, yielding a red panel, a green panel, and a blue panel, where many pixels in the red panel have no red component, many pixels in the green panel have no green component, and many pixels in the blue panel have no blue component. Then, interpolation is performed on the pixels without a red component in the red panel to obtain a red panel in which every pixel has a red component, and likewise for the green panel and the blue panel. The interpolated red, green, and blue panels are combined into an RGB image in which each pixel has all three color components, that is, a value for each of the R, G, and B channels.
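For concreteness, the following is a minimal bilinear sketch of this related-art flow (not the method of this application). The GRGR/BGBG layout described above corresponds to a GRBG pattern; the kernel-based interpolation is a standard construction:

```python
import numpy as np
from scipy.ndimage import convolve

def bilinear_demosaic(raw):
    """Split a GRBG Bayer RAW image into R/G/B panels and fill the
    missing components by bilinear interpolation (FIG. 1 flow)."""
    r = np.zeros(raw.shape)
    g = np.zeros(raw.shape)
    b = np.zeros(raw.shape)
    g[0::2, 0::2] = raw[0::2, 0::2]   # G samples on odd rows (GRGR...)
    r[0::2, 1::2] = raw[0::2, 1::2]   # R samples on odd rows
    b[1::2, 0::2] = raw[1::2, 0::2]   # B samples on even rows (BGBG...)
    g[1::2, 1::2] = raw[1::2, 1::2]   # G samples on even rows
    k_g = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0    # G: 4-neighbor average
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0   # R/B: 2- or 4-neighbor average
    return np.dstack([convolve(r, k_rb), convolve(g, k_g), convolve(b, k_rb)])
```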
Commonly used interpolation methods include the following: (1) nearest neighbor interpolation; (2) bilinear interpolation; (3) the color difference method; (4) the Hamilton & Adams interpolation algorithm; (5) the color-difference-based HA interpolation algorithm; and the like.
However, when interpolating a high-frequency region, these Demosaic algorithms generate a zipper effect there. In addition, since the spatial frequency of a high-frequency region is greater than the CIS sampling frequency, adjacent details cannot be distinguished during interpolation, and moire is generated. Finally, in some scenes with rich colors, false colors can be generated. These problems are shortcomings of the common interpolation algorithms.
To solve these drawbacks of the related art caused by interpolating high-frequency regions, embodiments of the present application provide a demosaicing method, apparatus, electronic device, and computer readable storage medium, where the demosaicing method includes: processing a RAW image signal to be processed to obtain a first group of image signals and a second group of image signals, wherein the first group comprises four image signals in the horizontal direction, the second group comprises four image signals in the vertical direction, and the four image signals comprise an initial luminance signal, a first initial color difference signal, a second initial color difference signal, and an initial high-frequency signal; combining the first group with the second group to obtain a third group of image signals comprising a horizontal-vertical luminance signal, a horizontal-vertical color difference signal, a horizontal color difference signal, a vertical color difference signal, a horizontal luminance high-frequency signal, and a vertical luminance high-frequency signal; comparing the horizontal-vertical luminance signal with the horizontal-vertical color difference signal to determine an interpolation direction; combining, based on the interpolation direction, a plurality of signals included in the third group to obtain a fourth group of image signals comprising a target luminance signal, a first target color difference signal, and a second target color difference signal; and converting the fourth group of image signals into an RGB image signal.
According to the embodiments of the application, the high-frequency signal in the RAW image signal to be processed is extracted before interpolation, interpolation is then carried out, the extracted high-frequency signal is combined into the fourth group of image signals after the interpolation, and the result is converted into RGB image signals according to the fourth group of image signals, thereby reducing negative effects such as moire, false color, and the zipper effect in high-frequency regions.
The demosaicing method, device, electronic equipment and storage medium provided by the embodiment of the application are described in detail below with reference to the accompanying drawings.
Fig. 2 is an application scenario schematic diagram of a demosaicing method provided by an embodiment of the present application. It should be noted that fig. 2 is only an example of an application scenario where the embodiment of the present application may be applied, so as to help those skilled in the art understand the technical content of the present application, but it does not mean that the embodiment of the present application may not be applied to other devices, systems, environments or scenarios.
As shown in fig. 2, the application scenario 100 of this embodiment may include a plurality of user terminals 110, a network 120, a server 130, and a database 140. For example, the application scenario 100 may be adapted to implement any of the demosaicing methods of the embodiments of the present application.
The terminal device 110 may be various electronic devices including a display screen and installed with various client applications including, but not limited to, smartphones, tablet computers, portable computers, desktop computers, and the like.
It will be appreciated that the demosaicing method of the embodiments of the present application may be performed by the terminal device 110 or by the server 130 communicatively coupled to the terminal device 110. Accordingly, the demosaicing device of the embodiment of the present application may be disposed in the terminal device 110, or disposed in the server 130 communicatively connected to the terminal device 110.
Network 120 may be a single network or a combination of at least two different networks. For example, network 120 may include, but is not limited to, one or a combination of several of a local area network, a wide area network, a public network, a private network, and the like. The network 120 may be a computer network such as the Internet and/or various telecommunications networks (e.g., 3G/4G/5G mobile communication networks, Wi-Fi, Bluetooth, ZigBee, etc.), to which embodiments of the application are not limited.
The server 130 may be a single server, or a group of servers, or a cloud server, with each server within the group of servers being connected via a wired or wireless network. A server farm may be centralized, such as a data center, or distributed. The server 130 may be local or remote. The server 130 may communicate with the user terminal 110 through a wired or wireless network. Embodiments of the present application are not limited to the hardware system and software system of server 130.
Database 140 may refer broadly to a device having a storage function. The database 140 is mainly used to store various data utilized, generated, and output by the user terminal 110 and the server 130 in operation. Database 140 may be local or remote. The database 140 may include various memories, such as random access memory (RAM) and read-only memory (ROM). The above-mentioned storage devices are merely examples, and the storage devices that may be used by the system 100 are not limited in this regard. Embodiments of the present application are not limited to the hardware system and software system of database 140, which may be, for example, a relational database or a non-relational database.
Database 140 may be interconnected or in communication with server 130 or a portion thereof via network 120, or directly with server 130, or a combination thereof.
In some examples, database 140 may be a stand-alone device. In other examples, database 140 may also be integrated in at least one of user terminal 110 and server 130. For example, the database 140 may be provided on the user terminal 110 or on the server 130. For another example, the database 140 may be distributed, with one portion being provided on the user terminal 110 and another portion being provided on the server 130.
Fig. 3 is a flowchart of a demosaicing method according to an embodiment of the present application. The method is applicable to performing demosaicing processing on an image and may be executed by a demosaicing device, which may be implemented in software and/or hardware; the demosaicing method may be performed by the user terminal 110 or the server 130 in fig. 2.
As shown in fig. 3, the demosaicing method provided by the embodiment of the application mainly includes steps S101 to S105.
S101, processing a RAW image signal to be processed to obtain a first group of image signals and a second group of image signals, where the first group of image signals includes four image signals in the horizontal direction, the second group includes four image signals in the vertical direction, and the four image signals include an initial luminance signal, a first initial color difference signal, a second initial color difference signal, and an initial high-frequency signal.
The RAW image signal to be processed refers to the most original Bayer data collected by the image sensor; it can be understood that only one channel of each pixel has a color value, and the color values of the other channels are all 0. The image represented by the RAW image signal to be processed is as shown in the leftmost image of fig. 1. The color components of the pixels in the odd rows of the RAW image are arranged as GRGRGRGRGRGR, and those of the pixels in the even rows are arranged as BGBGBGBG.
In one possible implementation, processing the RAW image signal to be processed to obtain the first group of image signals and the second group of image signals includes: processing the RAW image signal to be processed in a first direction according to a preset method to obtain the first group of image signals; and processing the RAW image signal to be processed in a second direction according to the preset method to obtain the second group of image signals. Optionally, the first direction is the horizontal direction and the second direction is the vertical direction.
In the embodiment of the present application, the first group of image signals is described taking the processing of the RAW image signal to be processed in the horizontal direction as an example.
As shown in fig. 1, color extraction is performed on the RAW image according to color components, so that a red panel, a green panel and a blue panel can be obtained, wherein a plurality of pixels in the red panel do not have red components, a plurality of pixels in the green panel do not have green components, and a plurality of pixels in the blue panel do not have blue components.
As shown in fig. 1, in the green panel a G component exists at every other pixel in each row. The method of filling the G component in the horizontal direction in the green panel is described below.
The filling of the G channel in the horizontal direction for an odd row of the green panel is illustrated in fig. 4. As shown in fig. 4, taking 8 pixels in the horizontal direction as an example, the 2nd, 4th, 6th, and 8th pixels all have a G component, while the 1st, 3rd, 5th, and 7th pixels have no G component and only an R component. Therefore, the G channels of the 1st, 3rd, 5th, and 7th pixels are filled with the newly calculated newG1.
It should be noted that after the 1st, 3rd, 5th, and 7th pixels are filled with newG1, each of them includes an R component and a G component.
The newG1 filled into a pixel is determined by its neighboring G components and the R components neighboring those G components. Specifically, the calculation of newG1 is described with reference to fig. 5, which shows 5 pixels from left to right: the R component of the 1st pixel is R0, the G component of the 2nd pixel is G0, the R component of the 3rd pixel is Rc, the G component of the 4th pixel is G1, and the R component of the 5th pixel is R2; the calculated newG1 is used as the G component of the 3rd pixel. newG1 is calculated by formula (1).
Note that for the calculation of newG1 for the leftmost pixel (for example, the 1st pixel in fig. 5), R0 in formula (1) may be set to 0; for the calculation of newG1 for the rightmost pixel (for example, the 5th pixel in fig. 5), R2 in formula (1) may be set to 0.
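The image carrying formula (1) is not reproduced in this text. Based on the description that newG1 depends on the two neighboring G components and the R components adjacent to them, a Hamilton & Adams-style form is a plausible reconstruction (an assumption, not the patent's verbatim formula):

```latex
newG_1 = \frac{G_0 + G_1}{2} + \frac{2R_c - R_0 - R_2}{4}
```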
According to the above-described method for filling the odd rows of the green panel, the pixels lacking a G component in all odd rows of the green panel are filled.
The filling of the G channel in the horizontal direction for an even row of the green panel is illustrated in fig. 6. As shown in fig. 6, taking 8 pixels in the horizontal direction as an example, the 1st, 3rd, 5th, and 7th pixels all have a G component, while the 2nd, 4th, 6th, and 8th pixels have no G component and only a B component. Therefore, the G channels of the 2nd, 4th, 6th, and 8th pixels are filled with the newly calculated newG2.
It should be noted that after the 2nd, 4th, 6th, and 8th pixels are filled with newG2, each of them includes a B component and a G component.
The newG2 filled into a pixel is determined by its neighboring G components and the B components neighboring those G components. Specifically, the calculation of newG2 is described with reference to fig. 7, which shows 5 pixels from left to right: the B component of the 1st pixel is B0, the G component of the 2nd pixel is G0, the B component of the 3rd pixel is Bc, the G component of the 4th pixel is G1, and the B component of the 5th pixel is B1; the calculated newG2 is used as the G component of the 3rd pixel. newG2 is calculated by formula (2).
Note that for the calculation of newG2 for the leftmost pixel (for example, the 1st pixel in fig. 7), G0 in formula (2) may be set to 0; for the calculation of newG2 for the rightmost pixel (for example, the 5th pixel in fig. 7), G1 in formula (2) may be set to 0.
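As with formula (1), the image carrying formula (2) is not reproduced in this text; by symmetry with the R-row case, a plausible reconstruction (an assumption, not the patent's verbatim formula) is:

```latex
newG_2 = \frac{G_0 + G_1}{2} + \frac{2B_c - B_0 - B_1}{4}
```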
According to the above-described method for filling the even rows of the green panel, the pixels lacking a G component in all even rows of the green panel are filled.
According to the above filling methods for the odd rows and the even rows, the G component is filled in for the pixels of the green panel that lack it. Fig. 8 illustrates this with 2 rows: in the first row, the G component of each pixel is filled in by the odd-row G channel filling method, and in the second row by the even-row G channel filling method. After the G component is filled in, the G channel of every pixel in the green panel has its corresponding G component.
As shown in fig. 9, the pixels containing only an R component (odd rows) or only a B component (even rows) in the green panel on the left side of fig. 9 have their G channels filled in the interpolation manner described above, yielding the filled green panel on the right side of fig. 9: the G channels of the pixels that contained only an R component are filled with the newly calculated newG1, and the G channels of the pixels that contained only a B component are filled with the newly calculated newG2. In other words, in the filled green panel on the right side of fig. 9, every pixel includes a G component.
The method of filling the R channel in the horizontal direction in the red panel is described below. Since only the odd rows of the red panel have an R component, only the pixels in the odd rows of the red panel that lack an R component are filled.
The filling of the R channel in the horizontal direction for an odd row of the red panel is illustrated in fig. 10. As shown in fig. 10, taking 8 pixels in the horizontal direction as an example, the 1st, 3rd, 5th, and 7th pixels all have an R component, while the 2nd, 4th, 6th, and 8th pixels have no R component and only a G component. Therefore, the R channels of the 2nd, 4th, 6th, and 8th pixels are filled with the newly calculated newR.
It should be noted that after the 2nd, 4th, 6th, and 8th pixels are filled with newR, each of them includes an R component and a G component.
The calculation of newR is substantially similar to that of newG1 and newG2 above; reference may be made to the description of the above embodiments, which is not repeated here.
According to the above-described method for filling the odd rows of the red panel, the pixels lacking an R component in all odd rows of the red panel are filled.
The method of filling the B channel in the horizontal direction in the blue panel is described below. Since only the even rows of the blue panel have a B component, only the pixels in the even rows of the blue panel that lack a B component are filled.
The filling of the B channel in the horizontal direction for an even row of the blue panel is illustrated in fig. 11. As shown in fig. 11, taking 8 pixels in the horizontal direction as an example, the 2nd, 4th, 6th, and 8th pixels all have a B component, while the 1st, 3rd, 5th, and 7th pixels have no B component and only a G component. Therefore, the B channels of the 1st, 3rd, 5th, and 7th pixels are filled with the newly calculated newB.
It should be noted that after the 1st, 3rd, 5th, and 7th pixels are filled with newB, each of them includes a B component and a G component.
The calculation of newB is substantially similar to that of newG1 and newG2 above; reference may be made to the description of the above embodiments, which is not repeated here.
According to the above-described method for filling the even rows of the blue panel, the pixels lacking a B component in all even rows of the blue panel are filled.
The filled red panel and the filled blue panel are combined to obtain a filled red-blue panel in which every pixel in the odd rows has an R component and every pixel in the even rows has a B component.
As shown in fig. 12, in the red-blue panel on the left side of fig. 12, the pixels in the odd rows that contain only a G component are filled with an R component according to the R channel filling method above, and the pixels in the even rows that contain only a G component are filled with a B component according to the B channel filling method above, yielding the filled red-blue panel on the right side of fig. 12: the R channels of the odd-row pixels that contained only a G component are filled with the newly calculated newR, and the B channels of the even-row pixels that contained only a G component are filled with the newly calculated newB. In other words, in the filled red-blue panel on the right side of fig. 12, every pixel in an odd row includes an R component and every pixel in an even row includes a B component.
After the RAW image is filled according to the above methods, every pixel in an odd row of the filled RAW image includes an R component and a G component, and every pixel in an even row includes a B component and a G component. The filled RAW image is then processed to obtain an RB image.
As shown in fig. 13, for each pixel including R and G components, the difference between the R and G components is calculated and divided by 2 to give the pixel's RB value; for each pixel including B and G components, the difference between the B and G components is calculated and divided by 2 to give the pixel's RB value. As shown in the rightmost image of fig. 13, after this RB calculation, the RB value of a pixel in an odd row is (R − G)/2 and the RB value of a pixel in an even row is (B − G)/2.
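In formula form, the RB value assigned to each pixel of the filled RAW image is:

```latex
RB =
\begin{cases}
(R - G)/2, & \text{pixel in an odd row (has R and G)} \\
(B - G)/2, & \text{pixel in an even row (has B and G)}
\end{cases}
```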
The RB image on the far right of fig. 13 is then filled by vertical interpolation. During vertical filling, the odd columns and the even columns are filled differently; the two filling modes are described below.
The filling of the even rows in the vertical direction is described taking an even column as an example; the left image in fig. 14 is any even column of the RB image in fig. 13. As shown in fig. 14, suppose the RB value of the first row is A1, the RB value of the second row is C1 (to be filled), and the RB value of the third row is B1; the value C1 of the second row is calculated from the RB value A1 of the first row and the RB value B1 of the third row, as shown in formula (3).
C1 = (A1 + B1) / 2 (3)
According to formula (3), the RB value C1 of the second row can be calculated as (R − G)/2, and (R − G)/2 is filled into the pixels of the second row of the even column.
The filling of the odd rows in the vertical direction is described taking an odd column as an example; the left image in fig. 15 is any odd column of the RB image in fig. 13. As shown in fig. 15, suppose the RB value of the first row is A2, the RB value of the third row is C2, and the RB value of the fifth row is B2; the updated value C2 of the third row is calculated from the RB value A2 of the first row and the RB value B2 of the fifth row, as shown in formula (4), where the C2 on the right-hand side is the value of the third row before updating.
C2 = (A2 + 6*C2 + B2) / 8 (4)
According to formula (4), the RB value C2 of the third row can be calculated as (R − G)/2, and (R − G)/2 is filled into the pixels of the third row of the odd column.
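A minimal sketch of the two vertical filling rules applied to one column follows; the text counts rows from 1, while the code uses 0-based indexing, and border handling is an assumption:

```python
import numpy as np

def fill_even_rows(col):
    """Formula (3): in an even column, a missing even-row sample is the
    average of the samples directly above and below."""
    out = col.astype(float)
    for i in range(1, len(out) - 1, 2):          # 2nd, 4th, ... rows
        out[i] = (out[i - 1] + out[i + 1]) / 2.0
    return out

def smooth_odd_rows(col):
    """Formula (4): in an odd column, an odd-row sample C2 is updated to
    (A2 + 6*C2 + B2) / 8, where A2 and B2 sit two rows above and below."""
    out = col.astype(float)
    for i in range(2, len(out) - 2, 2):          # 3rd, 5th, ... rows
        out[i] = (col[i - 2] + 6.0 * col[i] + col[i + 2]) / 8.0
    return out
```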
The first reference image C0 shown in fig. 16 is obtained by filling the pixels of the even rows in the even columns and the pixels of the odd rows in the odd columns in the above manner; each pixel in the first reference image has a color value of (R − G)/2.
Filling the odd rows in the even columns is similar to the filling of the even rows in the even columns provided in the above embodiment, and yields a filled RB value of (B − G)/2 for the pixels of the odd rows in the even columns; filling the even rows in the odd columns is similar to the filling of the odd rows in the odd columns provided in the above embodiment, and yields a filled RB value of (B − G)/2 for the pixels of the even rows in the odd columns.
The second reference image C1 shown in fig. 17 can be obtained by filling the pixels corresponding to the odd rows in the even columns and the pixels corresponding to the even rows in the odd columns in the above manner, wherein each pixel in the second reference image has a color value corresponding to (B-G)/2.
The color value of each pixel in the first reference image C0 is added to the color value of the corresponding pixel in the second reference image C1 to obtain the first initial color difference signal U for each pixel. As shown in fig. 18, the first initial color difference signal U corresponds to the first color difference image, and the first initial color difference signal U of each pixel in the first color difference image is (B − 2G + R)/2.
The color value of each pixel in the first reference image C0 is subtracted from the color value of the corresponding pixel in the second reference image C1 to obtain the second initial color difference signal V for each pixel. As shown in fig. 19, the second initial color difference signal V corresponds to the second color difference image, and the second initial color difference signal V of each pixel in the second color difference image is (B − R)/2.
The way in which the initial high frequency signal is calculated is described below.
The G component of each pixel in the green panel filled as in fig. 9 is subjected to filtering to obtain a filtered G component. The filtering is performed in the vertical direction, as follows: a set of filter weights is defined, and the G component is filtered with these weights.
As shown in fig. 20, the filter weights are set to −1, 2, 6, 2, −1. The 4th G component is multiplied by −1 to obtain a first value, the 5th by 2 to obtain a second value, the 6th by 6 to obtain a third value, the 7th by 2 to obtain a fourth value, and the 8th by −1 to obtain a fifth value; the sum of the five values is divided by the sum of the weights, 8, to obtain the filtered 6th G component.
The G component of each pixel in the filled green panel is filtered according to this method. For each pixel in the filled green panel, the filtered G component is subtracted from the original G component to obtain the differential value dG of the G component, and this differential value is taken as the initial high-frequency signal H.
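A minimal sketch of this high-frequency extraction, applying the weights from fig. 20 along the vertical axis (the border mode is an assumption):

```python
import numpy as np
from scipy.ndimage import convolve1d

def initial_high_frequency(g_plane):
    """Vertical low-pass filtering of the filled G plane with the weights
    (-1, 2, 6, 2, -1) / 8, followed by H = G - filtered(G)."""
    weights = np.array([-1.0, 2.0, 6.0, 2.0, -1.0]) / 8.0
    g_filtered = convolve1d(g_plane.astype(float), weights, axis=0, mode="nearest")
    return g_plane - g_filtered   # dG, the initial high-frequency signal H
```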
After the G component of each pixel in the filled green panel is processed in the above manner, the initial high-frequency image corresponding to the initial high-frequency signal, shown in fig. 21, is obtained.
The manner in which the initial luminance signal is determined is described below.
The G component of each pixel in the filled green panel is filtered according to the method above. For each pixel in the filled green panel, the filtered G component is added to the first initial color difference signal (B − 2G + R) to obtain the initial luminance signal Y. As shown in fig. 22, the initial luminance signal Y corresponds to the initial luminance image, and the initial luminance signal Y of each pixel in the initial luminance image is (B + 2G + R)/2.
After the above processing, four images in the horizontal direction are obtained: the first color difference image, the second color difference image, the initial high-frequency image, and the initial luminance image.
The processing in the vertical direction is consistent with the processing in the horizontal direction; processing the vertical direction according to the above method yields four images in the vertical direction: the first color difference image, the second color difference image, the initial high-frequency image, and the initial luminance image.
Wherein the first group of image signals includes a first initial color difference signal U, a second initial color difference signal V, an initial high frequency signal H, and an initial luminance signal Y in the horizontal direction. The second group of image signals includes a first initial color difference signal U, a second initial color difference signal V, an initial high frequency signal H, and an initial luminance signal Y in the vertical direction.
S102, combining the first group of image signals with the second group of image signals to obtain a third group of image signals, where the third group of image signals includes a horizontal-vertical luminance signal, a horizontal-vertical color difference signal, a horizontal color difference signal, a vertical color difference signal, a horizontal luminance high-frequency signal, and a vertical luminance high-frequency signal.
The first initial color difference signal U, the second initial color difference signal V, the initial high-frequency signal H, and the initial luminance signal Y in the horizontal direction are combined with their counterparts in the vertical direction to obtain the horizontal-vertical luminance signal, the horizontal-vertical color difference signal, the horizontal color difference signal, the vertical color difference signal, the horizontal luminance high-frequency signal, and the vertical luminance high-frequency signal.
In one possible implementation, combining the first group of image signals with the second group of image signals to obtain the third group of image signals includes: determining the horizontal-vertical luminance signal in the third group based on the initial luminance signals in the first and second groups; determining the horizontal-vertical color difference signal in the third group based on the first initial color difference signals in the first and second groups; determining the horizontal color difference signal in the third group based on the second initial color difference signal in the first group; determining the vertical color difference signal in the third group based on the second initial color difference signal in the second group; determining the horizontal luminance high-frequency signal in the third group based on the initial luminance signal and the initial high-frequency signal in the first group together with the horizontal-vertical luminance signal; and determining the vertical luminance high-frequency signal in the third group based on the initial luminance signal and the initial high-frequency signal in the second group together with the horizontal-vertical luminance signal.
As can be seen from step S101, the first initial color difference signal U, the second initial color difference signal V, the initial high-frequency signal H, and the initial luminance signal Y in the horizontal direction, and their counterparts in the vertical direction, have been determined. For convenience, the initial luminance signal Y in the horizontal direction is denoted H[0], the first initial color difference signal U in the horizontal direction H[1], the second initial color difference signal V in the horizontal direction H[2], and the initial high-frequency signal H in the horizontal direction H[3]; correspondingly, the initial luminance signal Y in the vertical direction is denoted V[0], the first initial color difference signal U in the vertical direction V[1], the second initial color difference signal V in the vertical direction V[2], and the initial high-frequency signal H in the vertical direction V[3].
The first set of image signals is represented by equation (5).
The second set of image signals is represented by equation (6).
Wherein the horizontal-vertical color difference signal HVYUVE [1] is determined by the initial luminance signal H [0] in the horizontal direction and the initial luminance signal V [0] in the vertical direction. Specifically, the horizontal-vertical luminance signal HVYUVE [0] is the sum of the initial luminance signal H [0] in the horizontal direction and the initial luminance signal V [0] in the vertical direction divided by 2.
Specifically, the horizontal-vertical luminance signal HVYUVE [0] is calculated using equation (7).
Wherein the horizontal-vertical luminance signal HVYUVE [1] is determined by the first initial color difference signal H [1] in the horizontal direction and the first initial color difference signal H [1] in the vertical direction. Specifically, the horizontal-vertical luminance signal HVYUVE [1] is the sum of the first initial color difference signal H [1] in the horizontal direction and the first initial color difference signal H [1] in the vertical direction divided by 2.
Specifically, the horizontal-vertical luminance signal HVYUVE [1] is calculated using equation (8).
Wherein the horizontal-vertical luminance signal HVYUVE [1] is determined by the first initial color difference signal H [1] in the horizontal direction and the first initial color difference signal H [1] in the vertical direction. Specifically, the horizontal-vertical luminance signal HVYUVE [1] is the sum of the first initial color difference signal H [1] in the horizontal direction and the first initial color difference signal H [1] in the vertical direction divided by 2.
Specifically, the horizontal-vertical luminance signal HVYUVE [1] is calculated using equation (8).
Wherein the horizontal color difference signal HVYUVE[2] is determined by the second initial color difference signal H[2] in the horizontal direction: the two are identical.
Specifically, the horizontal color difference signal HVYUVE[2] is calculated by equation (9): HVYUVE[2] = H[2].
Wherein the vertical color difference signal HVYUVE[3] is determined by the second initial color difference signal V[2] in the vertical direction: the two are identical.
Specifically, the vertical color difference signal HVYUVE[3] is calculated by equation (10): HVYUVE[3] = V[2].
Wherein the horizontal luminance high-frequency signal HVYUVE[4] is determined by the initial luminance signal H[0] in the horizontal direction, the initial high-frequency signal H[3], and the horizontal-vertical luminance signal HVYUVE[0]. Specifically, the sum of the initial luminance signal H[0] and the initial high-frequency signal H[3] is calculated, and the horizontal-vertical luminance signal HVYUVE[0] is subtracted from it to obtain the horizontal luminance high-frequency signal HVYUVE[4].
Specifically, the horizontal luminance high-frequency signal HVYUVE[4] is calculated by equation (11): HVYUVE[4] = H[0] + H[3] - HVYUVE[0].
Wherein the vertical luminance high-frequency signal HVYUVE[5] is determined by the initial luminance signal V[0] in the vertical direction, the initial high-frequency signal V[3], and the horizontal-vertical luminance signal HVYUVE[0]. Specifically, the sum of the initial luminance signal V[0] and the initial high-frequency signal V[3] is calculated, and the horizontal-vertical luminance signal HVYUVE[0] is subtracted from it to obtain the vertical luminance high-frequency signal HVYUVE[5].
Specifically, the vertical luminance high-frequency signal HVYUVE[5] is calculated by equation (12): HVYUVE[5] = V[0] + V[3] - HVYUVE[0].
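For illustration, equations (7) to (12) can be collected into a single step. The following Python sketch is a minimal, non-authoritative rendering of that step; the array layout (one 4-channel array per direction) and the function name are assumptions made for this example, not part of the patent.

```python
import numpy as np

def combine_to_hvyuve(H: np.ndarray, V: np.ndarray) -> np.ndarray:
    """Combine horizontal (H) and vertical (V) signal sets into HVYUVE.

    H and V have shape (4, height, width): index 0 is the initial
    luminance signal Y, 1 the first initial color difference signal U,
    2 the second initial color difference signal V, 3 the initial
    high-frequency signal E.
    """
    hvyuve = np.empty((6,) + H.shape[1:], dtype=np.float64)
    hvyuve[0] = (H[0] + V[0]) / 2        # eq (7): horizontal-vertical luminance
    hvyuve[1] = (H[1] + V[1]) / 2        # eq (8): horizontal-vertical color difference
    hvyuve[2] = H[2]                     # eq (9): horizontal color difference
    hvyuve[3] = V[2]                     # eq (10): vertical color difference
    hvyuve[4] = H[0] + H[3] - hvyuve[0]  # eq (11): horizontal luminance high-frequency
    hvyuve[5] = V[0] + V[3] - hvyuve[0]  # eq (12): vertical luminance high-frequency
    return hvyuve
```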
S103, comparing the horizontal and vertical brightness signals with the horizontal and vertical color difference signals to determine an interpolation direction.
The interpolation direction is determined based on the magnitude relation of the horizontal-vertical luminance signal HVYUVE[0] and the horizontal-vertical color difference signal HVYUVE[1]. There are two interpolation directions: vertical interpolation and horizontal interpolation. In other words, whether the image is interpolated in the vertical direction or the horizontal direction is determined by the magnitude relation of the horizontal-vertical luminance signal HVYUVE[0] and the horizontal-vertical color difference signal HVYUVE[1].
The horizontal-vertical luminance signal is compared with the horizontal-vertical color difference signal: if the horizontal-vertical luminance signal HVYUVE[0] is greater than the horizontal-vertical color difference signal HVYUVE[1], the interpolation direction is the horizontal direction; if the horizontal-vertical luminance signal HVYUVE[0] is less than or equal to the horizontal-vertical color difference signal HVYUVE[1], the interpolation direction is the vertical direction.
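A minimal sketch of this decision rule, assuming scalar per-pixel measures (the function name is illustrative):

```python
def interpolation_direction(hvyuve0: float, hvyuve1: float) -> str:
    # Horizontal interpolation when the horizontal-vertical luminance
    # signal dominates; ties go to vertical, as stated above.
    return "horizontal" if hvyuve0 > hvyuve1 else "vertical"
```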
S104, combining a plurality of signals included in the third group of image signals based on the interpolation direction to obtain a fourth group of image signals, wherein the fourth group of image signals comprises: a target luminance signal, a first target color difference signal, a second target color difference signal.
There are two main ways of combining the plurality of signals included in the third group of image signals into the fourth group of image signals; the two processing methods are described below in turn.
In the first method, combining the plurality of signals included in the third group of image signals based on the interpolation direction to obtain the fourth group of image signals includes: determining a target luminance signal based on the horizontal-vertical luminance signal and the luminance high-frequency signal corresponding to the interpolation direction; taking the horizontal-vertical color difference signal as the first target color difference signal; and determining a second target color difference signal based on the color difference signal corresponding to the interpolation direction.
Specifically, when the interpolation direction is the horizontal direction, the target luminance signal YUV[0] is calculated based on the horizontal-vertical luminance signal HVYUVE[0] and the horizontal luminance high-frequency signal HVYUVE[4]. Specifically, the sum of the horizontal-vertical luminance signal HVYUVE[0] and the horizontal luminance high-frequency signal HVYUVE[4] is taken as the target luminance signal YUV[0].
When the interpolation direction is the vertical direction, the target luminance signal YUV[0] is calculated based on the horizontal-vertical luminance signal HVYUVE[0] and the vertical luminance high-frequency signal HVYUVE[5]. Specifically, the sum of the horizontal-vertical luminance signal HVYUVE[0] and the vertical luminance high-frequency signal HVYUVE[5] is taken as the target luminance signal YUV[0].
The target luminance signal YUV[0] can be calculated by equation (13): YUV[0] = HVYUVE[0] + HVYUVE[4] when the interpolation direction is horizontal, and YUV[0] = HVYUVE[0] + HVYUVE[5] when it is vertical.
Specifically, whether the interpolation direction is the horizontal direction or the vertical direction, the calculation of the first target color difference signal YUV[1] is the same: the first target color difference signal YUV[1] is calculated from the horizontal-vertical color difference signal HVYUVE[1], which is used directly as the first target color difference signal YUV[1].
The first target color difference signal YUV[1] can be calculated by equation (14): YUV[1] = HVYUVE[1].
Specifically, when the interpolation direction is the horizontal direction, the second target color difference signal YUV[2] is calculated based on the horizontal color difference signal HVYUVE[2]; specifically, the horizontal color difference signal HVYUVE[2] is directly used as the second target color difference signal YUV[2]. When the interpolation direction is the vertical direction, the second target color difference signal YUV[2] is calculated based on the vertical color difference signal HVYUVE[3]; specifically, the vertical color difference signal HVYUVE[3] is directly used as the second target color difference signal YUV[2].
The second target color difference signal YUV[2] can be calculated by equation (15): YUV[2] = HVYUVE[2] when the interpolation direction is horizontal, and YUV[2] = HVYUVE[3] when it is vertical.
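Putting equations (13) to (15) together, the following is a minimal sketch of the first combination method; the function and variable names are illustrative, not from the patent.

```python
def combine_first_method(hvyuve, direction: str):
    # hvyuve is the 6-channel output of combine_to_hvyuve above;
    # direction is "horizontal" or "vertical".
    if direction == "horizontal":
        yuv0 = hvyuve[0] + hvyuve[4]   # eq (13), horizontal branch
        yuv2 = hvyuve[2]               # eq (15), horizontal branch
    else:
        yuv0 = hvyuve[0] + hvyuve[5]   # eq (13), vertical branch
        yuv2 = hvyuve[3]               # eq (15), vertical branch
    yuv1 = hvyuve[1]                   # eq (14): same in both directions
    return yuv0, yuv1, yuv2
```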
In the second method, combining the plurality of signals included in the third group of image signals based on the interpolation direction to obtain the fourth group of image signals includes: determining a target luminance signal based on the horizontal-vertical luminance signal, the luminance high-frequency signal corresponding to the interpolation direction, and a target weight; taking the horizontal-vertical color difference signal as the first target color difference signal; and determining a second target color difference signal based on the color difference signal corresponding to the interpolation direction and the target weight.
First, the manner of determining the target weight is briefly described. The target weight satisfies at least one of the following: when the absolute value is smaller than or equal to a first preset value, the target weight is a second preset value; when the absolute value is larger than the first preset value and the maximum signal value is smaller than or equal to the product of a third preset value and the minimum signal value, the target weight is proportional to a fourth preset value; when the absolute value is larger than the first preset value and the maximum signal value is larger than the product of the third preset value and the minimum signal value, the target weight is the fourth preset value. Here, the absolute value is the absolute value of the difference between the horizontal-vertical luminance signal and the horizontal-vertical color difference signal, the maximum signal value is the maximum of the horizontal-vertical luminance signal and the horizontal-vertical color difference signal, and the minimum signal value is the minimum of the two.
The absolute value of the difference between the horizontal-vertical luminance signal HVYUVE[0] and the horizontal-vertical color difference signal HVYUVE[1] is calculated, and the maximum value and the minimum value of the two are determined. Specifically, the calculation can be performed by equation (16): diff = |HVYUVE[0] - HVYUVE[1]|, max = max(HVYUVE[0], HVYUVE[1]), min = min(HVYUVE[0], HVYUVE[1]).
Here diff denotes the absolute value of the difference between the horizontal-vertical luminance signal HVYUVE[0] and the horizontal-vertical color difference signal HVYUVE[1], max denotes the maximum of the two signals, and min denotes the minimum of the two signals.
The first preset value, the second preset value, the third preset value and the fourth preset value can be set according to practical situations, and optionally, the second preset value is 0, the third preset value is 2, and the fourth preset value is 128.
When the absolute value diff of the difference between the horizontal-vertical luminance signal HVYUVE[0] and the horizontal-vertical color difference signal HVYUVE[1] is smaller than or equal to the first preset value, the target weight dir is 0. When diff is larger than the first preset value and the maximum signal value is smaller than or equal to 2 times the minimum signal value, the target weight dir is the product of 128 and a first parameter, where the first parameter is the ratio of the maximum signal value to the minimum signal value, minus 1. When diff is larger than the first preset value and the maximum signal value is larger than 2 times the minimum signal value, the target weight dir is 128.
The target weight dir can be calculated using equation (17): dir = 0 if diff <= the first preset value; dir = 128 × (max / min - 1) if diff > the first preset value and max <= 2 × min; dir = 128 otherwise.
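A sketch of equations (16) and (17), assuming non-negative signal values and treating the first preset value as a tunable parameter (the default below is a placeholder, not a value taken from the patent):

```python
def target_weight(hvyuve0: float, hvyuve1: float, t1: float = 16.0) -> float:
    """Target weight dir per equations (16) and (17).

    t1 is the first preset value; 16.0 is an illustrative placeholder,
    since the patent leaves it to be set according to the actual situation.
    """
    diff = abs(hvyuve0 - hvyuve1)            # eq (16)
    vmax = max(hvyuve0, hvyuve1)
    vmin = min(hvyuve0, hvyuve1)
    if diff <= t1:
        return 0.0                           # second preset value (0)
    if vmax <= 2.0 * vmin:                   # third preset value (2)
        # Proportional to the fourth preset value; vmin > 0 holds here
        # for non-negative signals, since diff > t1 rules out vmax == 0.
        return 128.0 * (vmax / vmin - 1.0)
    return 128.0                             # fourth preset value (128)
```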
In the second implementation manner, the target luminance signal is the difference value of the horizontal-vertical luminance signal and a first product, where the first product is the product of the luminance high-frequency signal corresponding to the interpolation direction and the target weight; the second target color difference signal is the sum of a first ratio and a second product, where the first ratio is the ratio of the sum of the horizontal color difference signal and the vertical color difference signal to a first set value, and the second product is the product of the color difference signal corresponding to the interpolation direction and the target weight.
Specifically, when the interpolation direction is the horizontal direction, the target luminance signal YUV[0] is calculated based on the horizontal-vertical luminance signal HVYUVE[0], the horizontal luminance high-frequency signal HVYUVE[4], and the target weight dir. Specifically, the product of the horizontal luminance high-frequency signal HVYUVE[4] and the target weight dir is calculated, and the sum of this product and the horizontal-vertical luminance signal HVYUVE[0] is taken as the target luminance signal YUV[0].
When the interpolation direction is the vertical direction, the target luminance signal YUV[0] is calculated based on the horizontal-vertical luminance signal HVYUVE[0], the vertical luminance high-frequency signal HVYUVE[5], and the target weight dir. Specifically, the product of the vertical luminance high-frequency signal HVYUVE[5] and the target weight dir is calculated, and the sum of this product and the horizontal-vertical luminance signal HVYUVE[0] is taken as the target luminance signal YUV[0].
The target luminance signal YUV[0] can be calculated by equation (18): YUV[0] = HVYUVE[0] + dir × HVYUVE[4] when the interpolation direction is horizontal, and YUV[0] = HVYUVE[0] + dir × HVYUVE[5] when it is vertical.
Specifically, whether the interpolation direction is the horizontal direction or the vertical direction, the calculation of the first target color difference signal YUV[1] is the same: the first target color difference signal YUV[1] is calculated from the horizontal-vertical color difference signal HVYUVE[1], which is used directly as the first target color difference signal YUV[1].
The computing manner of the first target color difference signal YUV [1] in the second computing method is the same as that of the first target color difference signal YUV [1] in the first computing method, and specific reference may be made to the description in the first computing method, and details are not repeated in this embodiment.
Specifically, when the interpolation direction is the horizontal direction, the average value of the horizontal color difference signal HVYUVE[2] and the vertical color difference signal HVYUVE[3] is calculated and taken as the first ratio. The product of the horizontal color difference signal HVYUVE[2] and the target weight dir is calculated as the second product, and the sum of the first ratio and the second product is taken as the second target color difference signal YUV[2]. When the interpolation direction is the vertical direction, the same average value is taken as the first ratio, the product of the vertical color difference signal HVYUVE[3] and the target weight dir is calculated as the second product, and the sum of the first ratio and the second product is taken as the second target color difference signal YUV[2].
The second target color difference signal YUV[2] can be calculated by equation (19): YUV[2] = (HVYUVE[2] + HVYUVE[3]) / 2 + dir × HVYUVE[2] when the interpolation direction is horizontal, and YUV[2] = (HVYUVE[2] + HVYUVE[3]) / 2 + dir × HVYUVE[3] when it is vertical.
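A sketch of equations (18) and (19), following the sums described above. Note that dir is used here on its 0 to 128 scale exactly as the text states; a practical implementation might normalize the weight (for example, divide by 128), which is left as an assumption here.

```python
def combine_second_method(hvyuve, direction: str, dir_weight: float):
    # dir_weight is the target weight dir from target_weight above.
    avg_c = (hvyuve[2] + hvyuve[3]) / 2              # first ratio in eq (19)
    if direction == "horizontal":
        yuv0 = hvyuve[0] + dir_weight * hvyuve[4]    # eq (18), horizontal branch
        yuv2 = avg_c + dir_weight * hvyuve[2]        # eq (19), horizontal branch
    else:
        yuv0 = hvyuve[0] + dir_weight * hvyuve[5]    # eq (18), vertical branch
        yuv2 = avg_c + dir_weight * hvyuve[3]        # eq (19), vertical branch
    yuv1 = hvyuve[1]                                 # unchanged, as in eq (14)
    return yuv0, yuv1, yuv2
```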
S105, converting the fourth group of image signals into red, green and blue RGB image signals.
Through the steps, a fourth group of image signals is obtained, wherein the fourth group of image signals comprises: a target luminance signal, a first target color difference signal, a second target color difference signal. Wherein the fourth set of image signals is color information represented in a YUV color coding method.
The fourth group of image signals is converted into RGB image signals by a color conversion method; specifically, the RGB image signals can be obtained by equation (20).
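Equation (20) itself is not reproduced in this text. As a stand-in, the following sketch applies the standard full-range BT.601 (JPEG) YUV-to-RGB matrix; the patent's actual coefficients may differ.

```python
import numpy as np

# Standard full-range BT.601 (JPEG) YUV-to-RGB matrix, used here only as
# an illustrative substitute for equation (20).
YUV2RGB = np.array([
    [1.0,  0.0,       1.402],
    [1.0, -0.344136, -0.714136],
    [1.0,  1.772,     0.0],
])

def yuv_to_rgb(yuv: np.ndarray) -> np.ndarray:
    """yuv has shape (3, height, width): the target luminance signal and
    the first and second target color difference signals."""
    return np.tensordot(YUV2RGB, yuv, axes=1)
```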
In one possible implementation, before converting the fourth set of image signals into the RGB image signals, abnormal pixels in the fourth set of image signals may be detected and corrected. The method mainly includes: traversing the fourth set of image signals using a sliding window of a preset size; for each image signal in the fourth set of image signals, selecting the maximum value and the minimum value of the image signals in a target area, where the target area is obtained by the sliding window traversing the image formed by the fourth set of image signals; if the signal value of the center point in the target area is larger than the maximum value, taking the maximum value as the signal value of the center point in the target area; and if the signal value of the center point in the target area is smaller than the minimum value, taking the minimum value as the signal value of the center point in the target area.
The fourth set of image signals is still a component of the image signal of each individual pixel in an image. The sliding window of the preset size may be a 3×3 sliding window, which traverses the image formed by the fourth set of image signals in sequence, starting from the upper left corner. The target luminance signal, the first target color difference signal, and the second target color difference signal of each pixel point in the sliding window are processed in turn. Specifically, abnormal pixel points of the target luminance signal, the first target color difference signal, and the second target color difference signal are detected and corrected separately.
The processing of the signals within the sliding window is mainly as follows: the maximum and minimum values within the sliding window, excluding the center point, are obtained. If the signal value of the center point is larger than the maximum value, the maximum value is used as the signal value of the center point in the target area. If the signal value of the center point is smaller than the minimum value, the minimum value is used as the signal value of the center point in the target area. If the signal value of the center point lies between the minimum value and the maximum value, no substitution is performed.
Take the target luminance signal as an example. As shown in fig. 22, for the center point of the window, the maximum value and the minimum value of the target luminance signals of the remaining 8 pixel points in the sliding window are acquired. If the target luminance signal of the center point is greater than the maximum value, the maximum value is used as the target luminance signal of the center point; if it is less than the minimum value, the minimum value is used as the target luminance signal of the center point; if it lies between the two, the original target luminance signal is kept.
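A minimal sketch of this correction for one signal plane, assuming border pixels are left untouched (the patent does not specify border handling):

```python
import numpy as np

def clamp_outliers(plane: np.ndarray) -> np.ndarray:
    """Clamp each pixel to the min/max of its 8 neighbors in a 3x3 window,
    per the correction described above. Borders are skipped for simplicity."""
    out = plane.copy()
    h, w = plane.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = plane[y - 1:y + 2, x - 1:x + 2]
            # Exclude the center point (index 4 of the flattened 3x3 window)
            # when taking the extrema.
            neighbors = np.delete(window.flatten(), 4)
            lo, hi = neighbors.min(), neighbors.max()
            if plane[y, x] > hi:
                out[y, x] = hi
            elif plane[y, x] < lo:
                out[y, x] = lo
    return out
```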
Thus, demosaicing of the RAW image is achieved. In the embodiment of the application, the interpolation of the R and B channels takes into account the influence of the G-channel variation in the two directions on the result. The interpolation method fills in the missing pixel information by linear interpolation; however, the high-frequency information in the RGGB components is easily lost during interpolation, so the RGGB data is converted to YUVE to separate the luminance signal, the color difference signals, and the high-frequency signal, in order to better keep the high-frequency information unaffected.
In an embodiment of the present application, another method for determining a third set of image signals is provided.
The Bayer RAW RGGB data is converted into horizontal and vertical YUVE signals; the conversion method follows the RGB-to-YUV matrix.
When converting a YUV image into an RGB image, matrix A is used for the conversion; when converting an RGB image into a YUV image, matrix B is used for the conversion.
Multiplying matrix A by matrix B gives:
It can be seen that the product A × B of the RGB/YUV inter-conversion matrices is close to the identity matrix; the individual values in A × B above that are not zero indicate that some information is lost in the data conversion.
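This can be checked numerically. The sketch below uses the standard full-range BT.601 (JPEG) matrices as an example pair, since the patent's own matrices are not reproduced here; the rounded constants make the round-trip product close to, but not exactly, the identity matrix.

```python
import numpy as np

# Full-range BT.601 (JPEG) matrices, used only to illustrate the point.
RGB2YUV = np.array([
    [ 0.299,     0.587,     0.114],
    [-0.168736, -0.331264,  0.5],
    [ 0.5,      -0.418688, -0.081312],
])
YUV2RGB = np.array([
    [1.0,  0.0,       1.402],
    [1.0, -0.344136, -0.714136],
    [1.0,  1.772,     0.0],
])

product = YUV2RGB @ RGB2YUV   # A x B
print(np.round(product, 6))   # near-identity; tiny nonzero off-diagonals
```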
When converting RGGB to YUVE, the normalization property of the RGB-to-YUV matrix multiplication is exploited, and the calculation is as follows:
Y: initial luminance signal; U: first initial color difference signal; V: second initial color difference signal; E: initial high-frequency signal.
Three ratios, Rcoef, Gcoef and Bcoef, are set for the RGGB-to-YUVE conversion, where Rcoef + Gcoef + Bcoef = 1. The coefficients are adjusted according to AWBGain: AWBGain is applied first, and is removed after the calculation is finished, to prevent it from interfering with the AWBGain applied later by the WB module.
The ratios for the YUVE-to-RGGB conversion are Ucoef = Bcoef/Gcoef and Vcoef = Rcoef/Gcoef. An RGGB image is converted into a YUVE image by equation (21):
A YUVE image is converted into an RGGB image by equation (22):
The two conversion formulas (21) and (22) are represented by a matrix A and a matrix B, respectively:
From the above YUV image and RGB image conversion, it can be seen that:
Substituting matrix A and matrix B into the product A × B and calculating gives the result Rcoef = 0, Bcoef = 0, Gcoef = 1. It can then be derived that:
The RGGB pattern is horizontally filled with the other two colors at the same pixel position by the color difference method to obtain a first group of image signals, and the first group of image signals is multiplied by matrix A to obtain HVYUVE, that is, the horizontal-vertical luminance signal, the horizontal-vertical color difference signal, the horizontal color difference signal, the vertical color difference signal, the horizontal luminance high-frequency signal, and the vertical luminance high-frequency signal containing edge high-frequency information.
Alternatively, the RGGB image is vertically filled with the other two colors at the same pixel position by the color difference method to obtain a first group of image signals, and the first group of image signals is multiplied by matrix A to obtain HVYUVE, that is, the horizontal-vertical luminance signal, the horizontal-vertical color difference signal, the horizontal color difference signal, the vertical color difference signal, the horizontal luminance high-frequency signal, and the vertical luminance high-frequency signal containing edge high-frequency information.
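A sketch of the coefficient setup described above. Only the constraint Rcoef + Gcoef + Bcoef = 1, the AWBGain adjustment, and the ratios Ucoef = Bcoef/Gcoef and Vcoef = Rcoef/Gcoef come from the text; the base values and the exact way the gains are folded in are assumptions made for this example.

```python
def make_yuve_coeffs(awb_r_gain: float, awb_b_gain: float):
    """Set up the RGGB/YUVE conversion ratios described above.

    The base values 0.25/0.5/0.25 are illustrative placeholders; the
    patent only requires that the three ratios sum to 1 and that they
    are adjusted according to AWBGain.
    """
    r, g, b = 0.25, 0.5, 0.25
    # Hypothetical AWB adjustment: scale the R/B ratios by the gains,
    # then renormalize so that Rcoef + Gcoef + Bcoef == 1 still holds.
    r, b = r * awb_r_gain, b * awb_b_gain
    s = r + g + b
    rcoef, gcoef, bcoef = r / s, g / s, b / s
    ucoef = bcoef / gcoef   # YUVE-to-RGGB ratio, as given in the text
    vcoef = rcoef / gcoef
    return rcoef, gcoef, bcoef, ucoef, vcoef
```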
It should be understood that, although the steps in the flowchart of fig. 3 are shown in sequence as indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the execution order of these steps is not strictly limited, and they may be performed in other orders. Moreover, at least some of the steps in fig. 3 may include multiple sub-steps or stages that are not necessarily performed at the same moment but may be performed at different moments, and the execution order of these sub-steps or stages is not necessarily sequential; they may be performed in turn or alternately with at least a portion of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 23, there is provided a demosaicing apparatus 230, comprising: a first processing module 231, a second processing module 232, a signal comparison module 233, a third processing module 234, and a signal conversion module 235, wherein:
A first processing module 231, configured to process RAW image signals to be processed to obtain a first group of image signals and a second group of image signals, where the first group of image signals includes four image signals in the horizontal direction, the second group of image signals includes four image signals in the vertical direction, and the four image signals include an initial brightness signal, a first initial color difference signal, a second initial color difference signal, and an initial high-frequency signal; a second processing module 232, configured to combine the first group of image signals with the second group of image signals to obtain a third group of image signals, where the third group of image signals includes a horizontal-vertical luminance signal, a horizontal-vertical color difference signal, a horizontal color difference signal, a vertical color difference signal, a horizontal luminance high-frequency signal, and a vertical luminance high-frequency signal; a signal comparison module 233, configured to compare the horizontal-vertical luminance signal with the horizontal-vertical color difference signal to determine an interpolation direction; a third processing module 234, configured to combine the multiple signals included in the third group of image signals based on the interpolation direction to obtain a fourth group of image signals, where the fourth group of image signals includes: a target luminance signal, a first target color difference signal, and a second target color difference signal; and a signal conversion module 235, configured to convert the fourth group of image signals into RGB image signals.
In a possible implementation manner, the first processing module 231 is configured to process the RAW image signal to be processed in a first direction according to a preset method, so as to obtain a first group of image signals; processing the RAW image signals to be processed in a second direction according to a preset method to obtain a second group of image signals; the preset method comprises the following steps: performing interpolation processing on each pixel point in the RAW image signal to be processed to obtain a red R signal, a green G signal and a blue B signal corresponding to the pixel point; and processing the R signal, the G signal and the B signal to obtain an initial brightness signal, a first initial color difference signal, a second initial color difference signal and an initial high-frequency signal corresponding to the pixel point.
In one possible implementation, the second processing module 232 is configured to determine a horizontal-vertical luminance signal in the third set of image signals based on the initial luminance signal in the first set of image signals and the initial luminance signal in the second set of image signals; determining a horizontal vertical color difference signal in the third set of image signals based on a first initial color difference signal in the first set of image signals and a first initial color difference signal in the second set of image signals; determining a horizontal color difference signal in the third set of image signals based on a second initial color difference signal in the first set of image signals; determining a vertical color difference signal in the third set of image signals based on a second initial color difference signal in the second set of image signals; determining a horizontal luminance high-frequency signal in the third group of image signals based on an initial luminance signal in the first group of image signals and an initial high-frequency signal in the first group of image signals, and the horizontal vertical luminance signal; a vertical luminance high-frequency signal in the third set of image signals is determined based on the initial luminance signal in the second set of image signals and the initial high-frequency signal in the second set of image signals, and the horizontal vertical luminance signal.
In one possible implementation, the third processing module 234 is configured to: determine a target brightness signal based on the horizontal and vertical brightness signal and the brightness high-frequency signal corresponding to the interpolation direction; take the horizontal and vertical color difference signals as the first target color difference signal; and determine a second target color difference signal based on the color difference signal corresponding to the interpolation direction.
In one possible implementation manner, the target luminance signal is a difference value of the horizontal and vertical luminance signals and a first product, and the first product is a product of the luminance high-frequency signal corresponding to the interpolation direction and a target weight; the second target color difference signal is the sum of a first ratio and a second product, the first ratio is the ratio of the sum of the horizontal color difference signal and the vertical color difference signal to a first set value, and the second product is the product of the color difference signal corresponding to the interpolation direction and the target weight.
In one possible implementation, the target weight includes at least one of: when the absolute value is smaller than or equal to a first preset value, the target weight is a second preset value; when the absolute value is larger than the first preset value and the maximum signal value is smaller than or equal to the product of the third preset value and the minimum signal value, the target weight and the fourth preset value are in a proportional relation; when the absolute value is greater than the first preset value and the maximum signal value is greater than the product of the third preset value and the minimum signal value, the target weight is the fourth preset value; the absolute value is the absolute value of the difference value between the horizontal vertical brightness signal and the horizontal vertical color difference signal, the maximum signal value is the maximum value of the horizontal vertical brightness signal and the horizontal vertical color difference signal, and the minimum signal value is the minimum value of the horizontal vertical brightness signal and the horizontal vertical color difference signal.
In one possible implementation, the apparatus further includes: an abnormal point correction module, configured to traverse the fourth group of image signals using a sliding window of a preset size; select, for each image signal in the fourth group of image signals, the maximum value and the minimum value of the image signals in a target area, where the target area is obtained by the sliding window traversing the image formed by the fourth group of image signals; if the signal value of the center point in the target area is larger than the maximum value, take the maximum value as the signal value of the center point in the target area; and if the signal value of the center point in the target area is smaller than the minimum value, take the minimum value as the signal value of the center point in the target area.
The demosaicing device provided by the embodiment of the application is used for executing the following process: processing RAW image signals to be processed to obtain a first group of image signals and a second group of image signals, where the first group of image signals includes four image signals in the horizontal direction, the second group of image signals includes four image signals in the vertical direction, and the four image signals include an initial brightness signal, a first initial color difference signal, a second initial color difference signal, and an initial high-frequency signal; combining the first group of image signals with the second group of image signals to obtain a third group of image signals, where the third group of image signals includes a horizontal-vertical brightness signal, a horizontal-vertical color difference signal, a horizontal color difference signal, a vertical color difference signal, a horizontal brightness high-frequency signal, and a vertical brightness high-frequency signal; comparing the horizontal-vertical brightness signal with the horizontal-vertical color difference signal to determine an interpolation direction; combining a plurality of signals included in the third group of image signals based on the interpolation direction to obtain a fourth group of image signals, where the fourth group of image signals includes: a target luminance signal, a first target color difference signal, and a second target color difference signal; and converting the fourth group of image signals into RGB image signals. The application extracts the high-frequency signals in the RAW image signals to be processed before interpolation, performs the interpolation processing, combines the extracted high-frequency signals into the fourth group of image signals after the interpolation processing, and then converts the fourth group of image signals into RGB image signals, so as to reduce negative effects such as moire, false color, and the zipper effect in high-frequency regions.
For specific limitations of the demosaicing device, reference may be made to the above limitations of the demosaicing method, and no further description is given here. The above-described respective modules in the demosaicing apparatus may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in hardware or may be independent of a processor in the computer device, or may be stored in software in a memory in the computer device, so that the processor may call and execute operations corresponding to the above modules.
In one embodiment, a computer device is provided, which may be a server, and the internal structure of which may be as shown in fig. 24. The computer device includes a processor, a memory, and a network interface connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer programs, and a database. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a demosaicing method.
It will be appreciated by those skilled in the art that the structure shown in FIG. 24 is merely a block diagram of some of the structures associated with the present inventive arrangements and is not limiting of the computer device to which the present inventive arrangements may be applied, and that a particular computer device may include more or fewer components than shown, or may combine some of the components, or have a different arrangement of components.
In one embodiment, the demosaicing apparatus provided by the application may be implemented in the form of a computer program executable on a computer device as shown in fig. 24. The memory of the computer device may store the various program modules constituting the demosaicing apparatus, such as the first processing module 231, the second processing module 232, the signal comparison module 233, the third processing module 234, and the signal conversion module 235 shown in fig. 23. The computer program comprising these program modules causes the processor to carry out the steps of the demosaicing method of the various embodiments of the application described in this specification.
For example, the computer device shown in fig. 24 may perform step S101 by the first processing module 231 in the demosaicing apparatus shown in fig. 23, step S102 via the second processing module 232, step S103 via the signal comparison module 233, step S104 via the third processing module 234, and step S105 via the signal conversion module 235.
In one embodiment, a computer device is provided, comprising a memory storing a computer program and a processor that, when executing the computer program, performs the steps of: processing RAW image signals to be processed to obtain a first group of image signals and a second group of image signals, wherein the first group of image signals comprises four image signals in the horizontal direction, the second group of image signals comprises four image signals in the vertical direction, and the four image signals comprise an initial brightness signal, a first initial color difference signal, a second initial color difference signal and an initial high-frequency signal; combining the first group of image signals with the second group of image signals to obtain a third group of image signals, wherein the third group of image signals comprises a horizontal-vertical brightness signal, a horizontal-vertical color difference signal, a horizontal color difference signal, a vertical color difference signal, a horizontal brightness high-frequency signal and a vertical brightness high-frequency signal; comparing the horizontal-vertical brightness signal with the horizontal-vertical color difference signal to determine an interpolation direction; combining a plurality of signals included in the third group of image signals based on the interpolation direction to obtain a fourth group of image signals, wherein the fourth group of image signals comprises: a target luminance signal, a first target color difference signal and a second target color difference signal; and converting the fourth group of image signals into red, green and blue RGB image signals.
In one embodiment, the processor when executing the computer program further performs the steps of: processing the RAW image signals to be processed in a first direction according to a preset method to obtain a first group of image signals; processing the RAW image signals to be processed in a second direction according to a preset method to obtain a second group of image signals; the preset method comprises the following steps: performing interpolation processing on each pixel point in the RAW image signal to be processed to obtain a red R signal, a green G signal and a blue B signal corresponding to the pixel point; and processing the R signal, the G signal and the B signal to obtain an initial brightness signal, a first initial color difference signal, a second initial color difference signal and an initial high-frequency signal corresponding to the pixel point.
In one embodiment, the processor when executing the computer program further performs the steps of: determining a horizontal-vertical luminance signal in the third set of image signals based on the initial luminance signal in the first set of image signals and the initial luminance signal in the second set of image signals; determining a horizontal vertical color difference signal in the third set of image signals based on a first initial color difference signal in the first set of image signals and a first initial color difference signal in the second set of image signals; determining a horizontal color difference signal in the third set of image signals based on a second initial color difference signal in the first set of image signals; determining a vertical color difference signal in the third set of image signals based on a second initial color difference signal in the second set of image signals; determining a horizontal luminance high-frequency signal in the third group of image signals based on an initial luminance signal in the first group of image signals and an initial high-frequency signal in the first group of image signals, and the horizontal vertical luminance signal; a vertical luminance high-frequency signal in the third set of image signals is determined based on the initial luminance signal in the second set of image signals and the initial high-frequency signal in the second set of image signals, and the horizontal vertical luminance signal.
In one embodiment, the processor when executing the computer program further performs the steps of: determining a target brightness signal based on the horizontal and vertical brightness signal and the brightness high-frequency signal corresponding to the interpolation direction; taking the horizontal and vertical color difference signals as first target color difference signals; and determining a second target color difference signal based on the color difference signal corresponding to the interpolation direction.
In one embodiment, the processor when executing the computer program further performs the steps of: the target brightness signal is the difference value of the horizontal and vertical brightness signal and a first product, and the first product is the product of the brightness high-frequency signal corresponding to the interpolation direction and a target weight; the second target color difference signal is the sum of a first ratio and a second product, the first ratio is the ratio of the sum of the horizontal color difference signal and the vertical color difference signal to a first set value, and the second product is the product of the color difference signal corresponding to the interpolation direction and the target weight.
In one embodiment, the processor when executing the computer program further performs the steps of: the target weight includes at least one of: when the absolute value is smaller than or equal to a first preset value, the target weight is a second preset value; when the absolute value is larger than the first preset value and the maximum signal value is smaller than or equal to the product of the third preset value and the minimum signal value, the target weight and the fourth preset value are in a proportional relation; when the absolute value is greater than the first preset value and the maximum signal value is greater than the product of the third preset value and the minimum signal value, the target weight is the fourth preset value; the absolute value is the absolute value of the difference value between the horizontal vertical brightness signal and the horizontal vertical color difference signal, the maximum signal value is the maximum value of the horizontal vertical brightness signal and the horizontal vertical color difference signal, and the minimum signal value is the minimum value of the horizontal vertical brightness signal and the horizontal vertical color difference signal.
In one embodiment, the processor when executing the computer program further performs the steps of: traversing the fourth group of image signals using a sliding window of a preset size; selecting, for each image signal in the fourth group of image signals, the maximum value and the minimum value of the image signals in a target area, wherein the target area is obtained by the sliding window traversing the image formed by the fourth group of image signals; if the signal value of the center point in the target area is larger than the maximum value, taking the maximum value as the signal value of the center point in the target area; and if the signal value of the center point in the target area is smaller than the minimum value, taking the minimum value as the signal value of the center point in the target area.
The demosaicing equipment provided by the embodiment of the application is used for executing the following process: processing RAW image signals to be processed to obtain a first group of image signals and a second group of image signals, where the first group of image signals includes four image signals in the horizontal direction, the second group of image signals includes four image signals in the vertical direction, and the four image signals include an initial brightness signal, a first initial color difference signal, a second initial color difference signal, and an initial high-frequency signal; combining the first group of image signals with the second group of image signals to obtain a third group of image signals, where the third group of image signals includes a horizontal-vertical brightness signal, a horizontal-vertical color difference signal, a horizontal color difference signal, a vertical color difference signal, a horizontal brightness high-frequency signal, and a vertical brightness high-frequency signal; comparing the horizontal-vertical brightness signal with the horizontal-vertical color difference signal to determine an interpolation direction; combining a plurality of signals included in the third group of image signals based on the interpolation direction to obtain a fourth group of image signals, where the fourth group of image signals includes: a target luminance signal, a first target color difference signal, and a second target color difference signal; and converting the fourth group of image signals into RGB image signals. The application extracts the high-frequency signals in the RAW image signals to be processed before interpolation, performs the interpolation processing, combines the extracted high-frequency signals into the fourth group of image signals after the interpolation processing, and then converts the fourth group of image signals into RGB image signals, so as to reduce negative effects such as moire, false color, and the zipper effect in high-frequency regions.
In one embodiment, a computer readable storage medium is provided, having a computer program stored thereon which, when executed by a processor, performs the steps of: processing RAW image signals to be processed to obtain a first group of image signals and a second group of image signals, wherein the first group of image signals comprises four image signals in the horizontal direction, the second group of image signals comprises four image signals in the vertical direction, and the four image signals comprise an initial brightness signal, a first initial color difference signal, a second initial color difference signal and an initial high-frequency signal; combining the first group of image signals with the second group of image signals to obtain a third group of image signals, wherein the third group of image signals comprises a horizontal-vertical brightness signal, a horizontal-vertical color difference signal, a horizontal color difference signal, a vertical color difference signal, a horizontal brightness high-frequency signal and a vertical brightness high-frequency signal; comparing the horizontal-vertical brightness signal with the horizontal-vertical color difference signal to determine an interpolation direction; combining a plurality of signals included in the third group of image signals based on the interpolation direction to obtain a fourth group of image signals, wherein the fourth group of image signals comprises: a target luminance signal, a first target color difference signal and a second target color difference signal; and converting the fourth group of image signals into red, green and blue RGB image signals.
In one embodiment, the computer program when executed by the processor further performs the steps of: processing the RAW image signals to be processed in a first direction according to a preset method to obtain a first group of image signals; processing the RAW image signals to be processed in a second direction according to a preset method to obtain a second group of image signals; the preset method comprises the following steps: performing interpolation processing on each pixel point in the RAW image signal to be processed to obtain a red R signal, a green G signal and a blue B signal corresponding to the pixel point; and processing the R signal, the G signal and the B signal to obtain an initial brightness signal, a first initial color difference signal, a second initial color difference signal and an initial high-frequency signal corresponding to the pixel point.
In one embodiment, the computer program when executed by the processor further performs the steps of: determining a horizontal-vertical luminance signal in the third set of image signals based on the initial luminance signal in the first set of image signals and the initial luminance signal in the second set of image signals; determining a horizontal vertical color difference signal in the third set of image signals based on a first initial color difference signal in the first set of image signals and a first initial color difference signal in the second set of image signals; determining a horizontal color difference signal in the third set of image signals based on a second initial color difference signal in the first set of image signals; determining a vertical color difference signal in the third set of image signals based on a second initial color difference signal in the second set of image signals; determining a horizontal luminance high-frequency signal in the third group of image signals based on an initial luminance signal in the first group of image signals and an initial high-frequency signal in the first group of image signals, and the horizontal vertical luminance signal; a vertical luminance high-frequency signal in the third set of image signals is determined based on the initial luminance signal in the second set of image signals and the initial high-frequency signal in the second set of image signals, and the horizontal vertical luminance signal.
In one embodiment, the computer program when executed by the processor further performs the steps of: determining a target brightness signal based on the horizontal and vertical brightness signal and the brightness high-frequency signal corresponding to the interpolation direction; taking the horizontal and vertical color difference signals as first target color difference signals; and determining a second target color difference signal based on the color difference signal corresponding to the interpolation direction.
In one embodiment, the computer program when executed by the processor further performs the steps of: the target brightness signal is the difference value of the horizontal and vertical brightness signal and a first product, and the first product is the product of the brightness high-frequency signal corresponding to the interpolation direction and a target weight; the second target color difference signal is the sum of a first ratio and a second product, the first ratio is the ratio of the sum of the horizontal color difference signal and the vertical color difference signal to a first set value, and the second product is the product of the color difference signal corresponding to the interpolation direction and the target weight.
In one embodiment, the computer program when executed by the processor further performs the steps of: the target weight includes at least one of: when the absolute value is smaller than or equal to a first preset value, the target weight is a second preset value; when the absolute value is larger than the first preset value and the maximum signal value is smaller than or equal to the product of the third preset value and the minimum signal value, the target weight and the fourth preset value are in a proportional relation; when the absolute value is greater than the first preset value and the maximum signal value is greater than the product of the third preset value and the minimum signal value, the target weight is the fourth preset value; the absolute value is the absolute value of the difference value between the horizontal vertical brightness signal and the horizontal vertical color difference signal, the maximum signal value is the maximum value of the horizontal vertical brightness signal and the horizontal vertical color difference signal, and the minimum signal value is the minimum value of the horizontal vertical brightness signal and the horizontal vertical color difference signal.
In one embodiment, the computer program when executed by the processor further performs the steps of: traversing the fourth group of image signals using a sliding window of a preset size; selecting, for each image signal in the fourth group of image signals, the maximum value and the minimum value of the image signals in a target area, wherein the target area is obtained by the sliding window traversing the image formed by the fourth group of image signals; if the signal value of the center point in the target area is larger than the maximum value, taking the maximum value as the signal value of the center point in the target area; and if the signal value of the center point in the target area is smaller than the minimum value, taking the minimum value as the signal value of the center point in the target area.
The computer program provided by the embodiment of the application is executed by a processor to perform the following process: processing RAW image signals to be processed to obtain a first group of image signals and a second group of image signals, where the first group of image signals includes four image signals in the horizontal direction, the second group of image signals includes four image signals in the vertical direction, and the four image signals include an initial brightness signal, a first initial color difference signal, a second initial color difference signal, and an initial high-frequency signal; combining the first group of image signals with the second group of image signals to obtain a third group of image signals, where the third group of image signals includes a horizontal-vertical brightness signal, a horizontal-vertical color difference signal, a horizontal color difference signal, a vertical color difference signal, a horizontal brightness high-frequency signal, and a vertical brightness high-frequency signal; comparing the horizontal-vertical brightness signal with the horizontal-vertical color difference signal to determine an interpolation direction; combining a plurality of signals included in the third group of image signals based on the interpolation direction to obtain a fourth group of image signals, where the fourth group of image signals includes: a target luminance signal, a first target color difference signal, and a second target color difference signal; and converting the fourth group of image signals into RGB image signals. The application extracts the high-frequency signals in the RAW image signals to be processed before interpolation, performs the interpolation processing, combines the extracted high-frequency signals into the fourth group of image signals after the interpolation processing, and then converts the fourth group of image signals into RGB image signals, so as to reduce negative effects such as moire, false color, and the zipper effect in high-frequency regions.
Those skilled in the art will appreciate that implementing all or part of the above described methods may be accomplished by way of a computer program stored on a non-transitory computer readable storage medium, which when executed, may comprise the steps of the embodiments of the methods described above. Any reference to memory, database, or other medium used in embodiments provided herein may include at least one of non-volatile and volatile memory. The nonvolatile Memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash Memory, optical Memory, or the like. Volatile memory can include random access memory (Random Access Memory, RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static random access memory (Static Random Access Memory, SRAM), dynamic random access memory (Dynamic Random Access Memory, DRAM), and the like.
The technical features of the above embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in a combination of these technical features, it should be considered to be within the scope of this specification.
The above examples illustrate only a few embodiments of the application, which are described in detail and are not to be construed as limiting the scope of the application. It should be noted that it will be apparent to those skilled in the art that several variations and modifications can be made without departing from the spirit of the application, which are all within the scope of the application. Accordingly, the scope of protection of the present application is to be determined by the appended claims.

Claims (10)

1. A method of demosaicing, comprising:
Processing RAW image signals to be processed to obtain a first group of image signals and a second group of image signals, wherein the first group of image signals comprises four image signals in the horizontal direction, the second group of image signals comprises four image signals in the vertical direction, and the four image signals comprise an initial brightness signal, a first initial color difference signal, a second initial color difference signal and an initial high-frequency signal;
Combining the first group of image signals with the second group of image signals to obtain a third group of image signals, wherein the third group of image signals comprises a horizontal-vertical brightness signal, a horizontal-vertical color difference signal, a horizontal color difference signal, a vertical color difference signal, a horizontal brightness high-frequency signal and a vertical brightness high-frequency signal;
Comparing the horizontal-vertical brightness signal with the horizontal-vertical color difference signal to determine an interpolation direction;
Combining a plurality of signals included in the third group of image signals based on the interpolation direction to obtain a fourth group of image signals, wherein the fourth group of image signals comprises: a target luminance signal, a first target color difference signal, a second target color difference signal;
Converting the fourth group of image signals into red, green and blue (RGB) image signals.
2. The method of claim 1, wherein processing the RAW image signal to be processed to obtain a first set of image signals and a second set of image signals comprises:
Processing the RAW image signals to be processed in a first direction according to a preset method to obtain a first group of image signals;
processing the RAW image signals to be processed in a second direction according to a preset method to obtain a second group of image signals;
The preset method comprises the following steps:
Performing interpolation processing on each pixel point in the RAW image signal to be processed to obtain a red R signal, a green G signal and a blue B signal corresponding to the pixel point;
And processing the R signal, the G signal and the B signal to obtain an initial brightness signal, a first initial color difference signal, a second initial color difference signal and an initial high-frequency signal corresponding to the pixel point.
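As an illustrative (non-limiting) reading of this preset method, assuming the directional interpolation has already produced full R, G and B planes for the given direction, the per-pixel signals might be formed as below; the Y/C1/C2 definitions and the 1-D high-pass are common conventions assumed for the sketch, not taken from the application:

```python
import numpy as np

def to_initial_signals(r, g, b, axis):
    # Per-pixel signals for one processing direction (axis=1: horizontal rows,
    # axis=0: vertical columns). The transforms below are assumptions.
    y = 0.25 * r + 0.5 * g + 0.25 * b        # initial luminance signal
    c1 = r - y                               # first initial color difference signal
    c2 = b - y                               # second initial color difference signal
    pad = [(1, 1), (0, 0)] if axis == 0 else [(0, 0), (1, 1)]
    p = np.pad(y, pad, mode="reflect")
    if axis == 0:
        smooth = 0.25 * p[:-2, :] + 0.5 * p[1:-1, :] + 0.25 * p[2:, :]
    else:
        smooth = 0.25 * p[:, :-2] + 0.5 * p[:, 1:-1] + 0.25 * p[:, 2:]
    hf = y - smooth                          # initial high-frequency signal
    return y, c1, c2, hf
```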
3. The method of claim 1, wherein combining the first set of image signals with the second set of image signals results in a third set of image signals, comprising:
Determining a horizontal-vertical luminance signal in the third set of image signals based on the initial luminance signal in the first set of image signals and the initial luminance signal in the second set of image signals;
Determining a horizontal-vertical color difference signal in the third set of image signals based on a first initial color difference signal in the first set of image signals and a first initial color difference signal in the second set of image signals;
Determining a horizontal color difference signal in the third set of image signals based on a second initial color difference signal in the first set of image signals;
determining a vertical color difference signal in the third set of image signals based on a second initial color difference signal in the second set of image signals;
Determining a horizontal luminance high-frequency signal in the third set of image signals based on the initial luminance signal in the first set of image signals, the initial high-frequency signal in the first set of image signals, and the horizontal-vertical luminance signal;
Determining a vertical luminance high-frequency signal in the third set of image signals based on the initial luminance signal in the second set of image signals, the initial high-frequency signal in the second set of image signals, and the horizontal-vertical luminance signal.
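The claim fixes which inputs feed each signal of the third set but not the arithmetic; a minimal sketch, assuming plain averages and differences, is:

```python
def merge_to_third_set(y_h, c1_h, c2_h, hf_h, y_v, c1_v, c2_v, hf_v):
    y_hv = 0.5 * (y_h + y_v)       # horizontal-vertical luminance signal
    c_hv = 0.5 * (c1_h + c1_v)     # horizontal-vertical color difference signal
    c_h = c2_h                     # horizontal color difference signal
    c_v = c2_v                     # vertical color difference signal
    hfy_h = (y_h + hf_h) - y_hv    # horizontal luminance high-frequency signal
    hfy_v = (y_v + hf_v) - y_hv    # vertical luminance high-frequency signal
    return y_hv, c_hv, c_h, c_v, hfy_h, hfy_v
```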
4. The method according to claim 1, wherein the combining the plurality of signals included in the third set of image signals based on the interpolation direction to obtain a fourth set of image signals includes:
Determining the target brightness signal based on the horizontal-vertical brightness signal and the brightness high-frequency signal corresponding to the interpolation direction;
Taking the horizontal-vertical color difference signal as the first target color difference signal;
And determining the second target color difference signal based on the color difference signal corresponding to the interpolation direction.
5. The method of claim 4, wherein:
The target brightness signal is the difference between the horizontal-vertical brightness signal and a first product, wherein the first product is the product of the brightness high-frequency signal corresponding to the interpolation direction and a target weight;
The second target color difference signal is the sum of a first ratio and a second product, wherein the first ratio is the ratio of the sum of the horizontal color difference signal and the vertical color difference signal to a first set value, and the second product is the product of the color difference signal corresponding to the interpolation direction and the target weight.
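Transcribing these two definitions directly, and taking the first set value to be 2 (an assumption, so the first ratio becomes a simple average), gives:

```python
def combine_fourth_group(y_hv, c_hv, c_h, c_v, hf_dir, c_dir, w):
    # hf_dir / c_dir: luminance high-frequency and color difference signals of
    # the chosen interpolation direction; w: the target weight of claim 6.
    y_target = y_hv - w * hf_dir                # difference of y_hv and the first product
    c1_target = c_hv                            # first target color difference signal
    c2_target = (c_h + c_v) / 2.0 + w * c_dir   # first ratio plus the second product
    return y_target, c1_target, c2_target
```

With w = 0 the outputs reduce to the smooth horizontal-vertical estimates; as w grows, the directional terms contribute at full strength.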
6. The method of claim 5, wherein the target weights comprise at least one of:
When the absolute value is smaller than or equal to a first preset value, the target weight is a second preset value;
When the absolute value is larger than the first preset value and the maximum signal value is smaller than or equal to the product of the third preset value and the minimum signal value, the target weight is proportional to the fourth preset value;
When the absolute value is greater than the first preset value and the maximum signal value is greater than the product of the third preset value and the minimum signal value, the target weight is the fourth preset value;
The absolute value is the absolute value of the difference between the horizontal-vertical brightness signal and the horizontal-vertical color difference signal, the maximum signal value is the maximum of the horizontal-vertical brightness signal and the horizontal-vertical color difference signal, and the minimum signal value is the minimum of the horizontal-vertical brightness signal and the horizontal-vertical color difference signal.
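A sketch of the three-case weight, with placeholder preset values t1 to t4 and the "proportional relation" of the middle case rendered as a simple ramp (both assumptions, since the claim leaves them open):

```python
import numpy as np

def target_weight(y_hv, c_hv, t1=0.05, t2=0.0, t3=1.5, t4=1.0):
    d = np.abs(y_hv - c_hv)                       # the absolute value
    s_max = np.maximum(y_hv, c_hv)                # the maximum signal value
    s_min = np.minimum(y_hv, c_hv)                # the minimum signal value
    w = np.full_like(np.asarray(y_hv, dtype=float), t2)  # case 1: d <= t1
    ramp = (d > t1) & (s_max <= t3 * s_min)
    w = np.where(ramp, t4 * d / (d + t1), w)      # case 2: proportional to t4
    hard = (d > t1) & (s_max > t3 * s_min)
    w = np.where(hard, t4, w)                     # case 3: exactly t4
    return w
```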
7. The method as recited in claim 1, further comprising:
Traversing the fourth set of image signals using a sliding window of a preset size;
Selecting, for each image signal in the fourth group of image signals, a maximum value and a minimum value of the image signal within a target area, wherein the target area is the image region covered by the sliding window as it traverses the fourth group of image signals;
if the signal value of the central point in the target area is larger than the maximum value, taking the maximum value as the signal value of the central point in the target area;
and if the signal value of the central point in the target area is smaller than the minimum value, taking the minimum value as the signal value of the central point in the target area.
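This is a standard min/max clamp for suppressing overshoot and false color. A minimal sketch, assuming a 3x3 window whose min/max exclude the center pixel (otherwise the center could never exceed the window extrema):

```python
import numpy as np

def clamp_to_neighborhood(plane, k=3):
    r = k // 2
    p = np.pad(plane, r, mode="reflect")
    h, w = plane.shape
    # Every shifted view inside the k x k window, excluding the center pixel.
    views = [p[dy:dy + h, dx:dx + w]
             for dy in range(k) for dx in range(k)
             if (dy, dx) != (r, r)]
    stack = np.stack(views)
    lo, hi = stack.min(axis=0), stack.max(axis=0)
    # Center values above the local max are pulled down to it; values below
    # the local min are pulled up to it.
    return np.clip(plane, lo, hi)
```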
8. A demosaicing apparatus, comprising:
A first processing module, configured to process a RAW image signal to be processed to obtain a first group of image signals and a second group of image signals, wherein the first group of image signals comprises four image signals in the horizontal direction, the second group of image signals comprises four image signals in the vertical direction, and the four image signals comprise an initial brightness signal, a first initial color difference signal, a second initial color difference signal and an initial high-frequency signal;
A second processing module, configured to combine the first group of image signals with the second group of image signals to obtain a third group of image signals, wherein the third group of image signals comprises a horizontal-vertical brightness signal, a horizontal-vertical color difference signal, a horizontal color difference signal, a vertical color difference signal, a horizontal brightness high-frequency signal and a vertical brightness high-frequency signal;
A signal comparison module, configured to compare the horizontal-vertical brightness signal with the horizontal-vertical color difference signal to determine an interpolation direction;
A third processing module, configured to combine, based on the interpolation direction, a plurality of signals included in the third group of image signals to obtain a fourth group of image signals, wherein the fourth group of image signals comprises: a target luminance signal, a first target color difference signal and a second target color difference signal;
And a signal conversion module, configured to convert the fourth group of image signals into red, green and blue (RGB) image signals.
9. An electronic device, comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any one of claims 1 to 7.
10. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 7.
CN202410301141.0A 2024-03-15 Demosaicing method, demosaicing device, electronic equipment and storage medium Pending CN118317211A (en)

Publications (1)

Publication Number Publication Date
CN118317211A 2024-07-09

Similar Documents

Publication Publication Date Title
US20200184598A1 (en) System and method for image demosaicing
US7082218B2 (en) Color correction of images
US20070159542A1 (en) Color filter array with neutral elements and color image formation
US20150363912A1 (en) Rgbw demosaic method by combining rgb chrominance with w luminance
JP7182907B2 (en) Camera image data processing method and camera
US9262805B2 (en) Method and device for processing image in Bayer format
US20090252411A1 (en) Interpolation system and method
US7072509B2 (en) Electronic image color plane reconstruction
TWI547169B (en) Image processing method and module
CN103905802A (en) Method and device for mosaic removal based on P-mode color filter array
CN107623844B (en) Determination of color values of pixels at intermediate positions
JP2000134634A (en) Image converting method
TW202044824A (en) Circuitry for image demosaicing and enhancement
US20140037207A1 (en) System and a method of adaptively suppressing false-color artifacts
WO2007082289A2 (en) Color filter array with neutral elements and color image formation
WO2019196109A1 (en) Method and apparatus for suppressing image pseudo-colour
CN118317211A (en) Demosaicing method, demosaicing device, electronic equipment and storage medium
CN114359050B (en) Image processing method, apparatus, computer device, storage medium, and program product
Wang et al. Demosaicing with improved edge direction detection
Guttosch Investigation of Color Aliasing of High Spatial Frequencies and Edges for Bayer-Pattern Sensors and Foveon X3® Direct Image Sensors
US8068145B1 (en) Method, systems, and computer program product for demosaicing images
CN111988592B (en) Image color reduction and enhancement circuit
Park Architectural analysis of a baseline isp pipeline
US20130038772A1 (en) Image processing apparatus and image processing method
WO2022027621A1 (en) Image processing method, chip, electronic device and storage medium

Legal Events

Date Code Title Description
PB01 Publication