CN116418970A - Method for detecting dead pixel, chip, electronic device and computer readable storage medium - Google Patents
- Publication number: CN116418970A
- Application number: CN202111653341.5A
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N17/00—Diagnosis, testing or measuring for television systems or their details
- H04N17/002—Diagnosis, testing or measuring for television systems or their details for television cameras
Abstract
The embodiment of the application relates to the technical field of image processing and discloses a dead pixel detection method, a chip, an electronic device and a computer readable storage medium. The dead pixel detection method comprises the following steps: taking a pixel to be detected in an image to be detected as the center of a detection window, and acquiring the pixel value of the pixel to be detected and the pixel value of each pixel in the detection window that is in the same channel as the pixel to be detected; determining the image category to which the image in the detection window belongs; if the image category is a flat region, determining that the pixel to be detected is a dead pixel when the offset value between the pixel value of the pixel to be detected and the pixel value of each pixel of the same channel is greater than a first preset threshold; if the image category is a texture region, determining that the pixel to be detected is a dead pixel when the offset value between the pixel value of the pixel to be detected and the pixel value of each pixel of the same channel is greater than a second preset threshold. The second preset threshold is greater than the first preset threshold, so that the accuracy of dead pixel detection can be improved.
Description
Technical Field
The embodiment of the application relates to the technical field of image processing, in particular to a dead pixel detection method, a chip, electronic equipment and a computer readable storage medium.
Background
In practical engineering, pixel points on an image sensor may be damaged by process defects, may produce errors during the conversion of optical signals into electrical signals under the influence of internal and external environments, or may be damaged after long-term use, so that certain points can no longer generate correct electrical signals. These abnormal points, known as image dead pixels, tend to have a large negative visual impact on the sharpness and integrity of the final image, so the Bayer image usually needs dead pixel detection and correction in the pipeline of image signal processing (Image Signal Processing, ISP).
However, current dead pixel detection methods adopt the same detection strategy for all pixels in the whole image, and their detection accuracy is low.
Disclosure of Invention
An object of an embodiment of the present application is to provide a method, a chip, an electronic device, and a computer-readable storage medium for detecting a dead pixel, so that the accuracy of dead pixel detection can be improved.
In a first aspect, an embodiment of the present application provides a method for detecting a dead pixel, including: taking a pixel to be detected in an image to be detected as the center of a detection window, and acquiring the pixel value of the pixel to be detected and the pixel value of each pixel in the detection window that is in the same channel as the pixel to be detected; determining the image category to which the image in the detection window belongs; in the case that the image category is a flat region, if the offset value between the pixel value of the pixel to be detected and the pixel value of each pixel of the same channel is greater than a first preset threshold, determining that the pixel to be detected is a dead pixel; in the case that the image category is a texture region, if the offset value between the pixel value of the pixel to be detected and the pixel value of each pixel of the same channel is greater than a second preset threshold, determining that the pixel to be detected is a dead pixel; wherein the second preset threshold is greater than the first preset threshold.
As a possible implementation manner, the determining the image category to which the image in the detection window belongs includes: determining a characteristic value according to the pixel values of the pixels of the same channel; calculating the average value of the offset values between the pixel values of the pixels of the same channel and the characteristic value; if the average value is greater than or equal to a preset texture region threshold, determining that the image category to which the image in the detection window belongs is a texture region; and if the average value is smaller than the preset texture region threshold, determining that the image category to which the image in the detection window belongs is a flat region.
As a possible implementation manner, the determining a feature value according to the pixel value of each pixel of the same channel includes: determining the maximum value and the minimum value in the pixel values of the pixels of the same channel; and determining a characteristic value according to the pixel values except the maximum value and the minimum value in the pixel values of the pixels in the same channel.
As one possible implementation manner, if the offset value between the pixel value of the pixel to be detected and the pixel value of each pixel of the same channel is less than or equal to the first preset threshold, or less than or equal to the second preset threshold, the method further includes: determining a characteristic value according to the pixel values of the pixels of the same channel; calculating the average value of the offset values between the pixel values of the pixels of the same channel and the characteristic value; determining the offset value between the pixel value of the pixel to be detected and the characteristic value; if the image category is a flat region, determining that the pixel to be detected is a dead pixel when the difference between the offset value of the pixel to be detected from the characteristic value and the average value is greater than a first adaptive threshold; if the image category is a texture region, determining that the pixel to be detected is a dead pixel when the difference between the offset value of the pixel to be detected from the characteristic value and the average value is greater than a second adaptive threshold; wherein the first adaptive threshold and the second adaptive threshold are both determined based on the characteristic value, and the second adaptive threshold is greater than the first adaptive threshold.
As a possible implementation manner, the first adaptive threshold is determined based on the characteristic value and a first preset coefficient, the second adaptive threshold is determined based on the characteristic value and a second preset coefficient, both the first preset coefficient and the second preset coefficient are greater than 0 and less than 1, and the second preset coefficient is greater than the first preset coefficient.
As a possible implementation manner, the determining a feature value according to the pixel value of each pixel of the same channel includes: and determining the median value of the pixel values of the pixels of the same channel, and taking the median value as the characteristic value.
As a possible implementation manner, in a case that the pixel to be detected is determined to be a dead pixel, the method further includes: and correcting the pixel value of the pixel to be detected according to the pixel value of each pixel in the same channel to obtain the corrected pixel value of the pixel to be detected.
As a possible implementation manner, the correcting the pixel value of the pixel to be detected according to the pixel value of each pixel in the same channel to obtain a corrected pixel value of the pixel to be detected includes: according to the pixel values of the pixels of the same channel, respectively determining pixel gradient values in N directions in the detection window; wherein N is an integer greater than or equal to 2; and determining corrected pixel values of the pixels to be detected according to the pixel gradient values in the N directions.
As a possible implementation manner, the determining pixel gradient values in N directions in the detection window according to the pixel values of each pixel of the same channel includes: according to the pixel values of the pixels of the same channel, respectively determining a plurality of groups of pixel gradient values in each of the N directions; and determining the characteristic values of the plurality of groups of pixel gradient values in each direction, and taking the characteristic values of the plurality of groups of pixel gradient values in each direction as the pixel gradient values in each direction.
As a possible implementation manner, the determining the characteristic values of the multiple sets of pixel gradient values in each direction includes: and determining the median value of the plurality of groups of pixel gradient values in each direction.
As a possible implementation manner, the determining the corrected pixel value of the pixel to be measured according to the pixel gradient values in the N directions includes: and if the pixel gradient values in the N directions are all 0, determining the corrected pixel value of the pixel to be detected as the pixel value of any one of the pixels in the same channel.
As a possible implementation manner, the determining the corrected pixel value of the pixel to be detected according to the pixel gradient values in the N directions includes: determining, according to the pixel gradient values in the N directions, the weight values corresponding to the N directions respectively; selecting, from among the pixels of the same channel, n reference pixels in each of the N directions, wherein the pixel to be detected is located on the connecting line of the n reference pixels in each direction, and n is an integer greater than or equal to 2; determining a corrected pixel value in each direction according to the pixel values of the n reference pixels selected in that direction; and determining the corrected pixel value of the pixel to be detected according to the weight values respectively corresponding to the N directions and the corrected pixel values in the respective directions.
As a possible implementation manner, the determining, according to the pixel gradient values in the N directions, the weight values corresponding to the N directions respectively includes:
and calculating the weight values corresponding to the N directions respectively through the following formula:
weight[k] = senParam / grad[k]
wherein weight[k] is the weight value corresponding to the kth direction, grad[k] is the pixel gradient value in the kth direction, and senParam is a preset sensitivity parameter; if grad[k] = 0, grad[k] is taken as 1 so that the corresponding weight value takes its maximum.
As a possible implementation manner, determining the corrected pixel value in each direction according to the pixel values of the n reference pixels selected in each direction includes: calculating pixel gradient values between any two adjacent pixels in the n selected reference pixels in each direction to obtain n-1 pixel gradient values in each direction; determining a smallest pixel gradient value of the n-1 pixel gradient values in each direction, and determining two target reference pixels with the smallest pixel gradient values in each direction; and determining the corrected pixel value in each direction according to the pixel values of the two target reference pixels in each direction.
As a possible implementation, n is equal to 3, and 2 normal pixels are included in the 3 reference pixels.
As a possible implementation manner, the 3 reference pixels include a first reference pixel with pixel value P1, a second reference pixel with pixel value P2, and a third reference pixel with pixel value P0; the pixel to be detected is located between the first reference pixel and the second reference pixel, and the first reference pixel is located between the third reference pixel and the pixel to be detected. The determining the corrected pixel value in each direction according to the pixel values of the two target reference pixels in that direction includes: if the two target reference pixels are the first reference pixel and the second reference pixel, the corrected pixel value is (P1 + P2)/2; if the two target reference pixels are the first reference pixel and the third reference pixel, the corrected pixel value is 2*P1 - P0.
As a possible implementation manner, determining the corrected pixel value of the pixel to be measured according to the weight values respectively corresponding to the N directions and the corrected pixel value in each direction includes:
calculating the corrected pixel value by the following formula:
P = ( sum over k of weight[k] * P_grad[k] ) / ( sum over k of weight[k] )
wherein P is the corrected pixel value of the pixel to be detected, weight[k] is the weight value corresponding to the kth direction, and P_grad[k] is the corrected pixel value in the kth direction.
As a possible implementation, N is equal to 4, and the 4 directions are respectively: 0 ° direction, 45 ° direction, 90 ° direction, 135 ° direction.
In a second aspect, an embodiment of the present application provides a chip, where the chip is located in an electronic device and connected to a memory in the electronic device, where the memory stores instructions executable by the chip, and the instructions are executed by the chip, so that the chip can perform the above-mentioned dead pixel detection method.
In a third aspect, embodiments of the present application provide an electronic device, including: the chip described above, and a memory connected to the chip.
In a fourth aspect, embodiments of the present application provide a computer readable storage medium storing a computer program, where the computer program implements the above-mentioned dead pixel detection method when executed by a processor.
In the embodiment of the application, a pixel to be detected in an image to be detected is taken as the center of a detection window, and the pixel value of the pixel to be detected and the pixel value of each pixel in the detection window that is in the same channel as the pixel to be detected are acquired; the image category to which the image in the detection window belongs is determined; if the image category is a flat region, the pixel to be detected is determined to be a dead pixel when the offset value between its pixel value and the pixel value of each pixel of the same channel is greater than a first preset threshold; if the image category is a texture region, the pixel to be detected is determined to be a dead pixel when that offset value is greater than a second preset threshold; wherein the second preset threshold is greater than the first preset threshold. That is, in the embodiment of the present application, the texture regions and flat regions in the image to be detected are distinguished, and different dead pixel detection strategies are adopted for them. Because human eyes are more sensitive to brightness changes in low-frequency image regions, a smaller threshold is adopted for a detection window in a flat region to strengthen detection and improve the detection rate, while a larger threshold is adopted for a detection window in a texture region to weaken detection and prevent false detection, thereby improving the accuracy of dead pixel detection.
Drawings
One or more embodiments are illustrated by way of example and not limitation in the figures of the accompanying drawings.
Fig. 1 is a flowchart of a dead pixel detection method mentioned in the present embodiment;
fig. 2 is a schematic diagram of a pixel with a pixel to be detected as the center of a detection window in the present embodiment;
Fig. 3 is a flowchart of one implementation of step 102 mentioned in this embodiment;
fig. 4 is a schematic diagram of the pixel to be tested in the present embodiment as a blue pixel or a red pixel;
fig. 5 is a schematic diagram of the pixel to be tested as a green pixel in the present embodiment;
Fig. 6 is a flowchart of the accurate detection mentioned in this embodiment;
fig. 7 is a schematic diagram of calculating gradient values of pixels in 4 directions when the pixel to be measured is a blue pixel or a red pixel in the present embodiment;
fig. 8 is a schematic diagram of calculating gradient values of pixels in 4 directions when the pixel to be measured is a green pixel in the embodiment;
fig. 9 is a schematic diagram of a correction flow of correcting a pixel value of a pixel to be detected mentioned in the present embodiment;
fig. 10 is a schematic diagram of reference pixels in 4 directions selected when the pixel to be measured is a blue pixel or a red pixel in the present embodiment;
Fig. 11 is a schematic diagram of reference pixels in 4 directions selected when the pixel to be measured mentioned in the present embodiment is a green pixel;
Fig. 12 is a flowchart of one implementation of step 303 mentioned in this embodiment;
fig. 13 is a flowchart of the flow of the dead pixel detection and correction mentioned in the present embodiment;
fig. 14 is a schematic structural view of the electronic device mentioned in the present embodiment.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present application more apparent, each embodiment of the present application is described in detail below with reference to the accompanying drawings. Those of ordinary skill in the art will understand that in the various embodiments of the present application, numerous technical details are set forth in order to provide a better understanding of the present application; nevertheless, the technical solutions claimed in the present application can be implemented without these technical details and with various changes and modifications based on the following embodiments. The division into the following embodiments is for convenience of description and should not be construed as limiting the specific implementation of the present application; the embodiments may be combined with and referred to each other without contradiction.
To facilitate understanding of the embodiments of the present application, the related art is first described below:
As a key device for providing visual information, the image sensor plays an important role in converting visible light into digital images. In general, each light collection point on an image sensor, i.e., each pixel point, is covered with a filter of a specific color. For example, the commonly used Bayer filter contains the three colors red, green and blue, so that only the light signal of the corresponding color is collected at a single collection point. The optical signals are processed inside the image sensor and converted successively into electrical signals and digital signals. Combining the digital signals obtained at all the collection points on the image sensor yields the raw Bayer image data, which is then processed by related hardware or software algorithms to obtain a digital image meeting visual requirements.
The dead pixels in an image sensor may be classified into static dead pixels and dynamic dead pixels. A static dead pixel is a pixel point that outputs a digital signal with the same value under different incident light brightness conditions. Static dead pixels can be further divided into bright spots (White Pixel) and dark spots (Black Pixel) according to their output values: as the names suggest, the brightness value output by a bright spot is close to or equal to the maximum value under any illumination, while the brightness value output by a dark spot is always close to or equal to 0. A dynamic dead pixel has a photoelectric conversion characteristic different from that of a normal pixel, so that the brightness value it generates under the same incident light condition is significantly larger or smaller.
In order to improve the accuracy of dead pixel detection, this embodiment provides a dead pixel detection method applied to an electronic device, which may be a device with an image processing function. A flowchart of the dead pixel detection method in this embodiment may refer to fig. 1, which includes:
step 101: and taking the pixel to be detected in the image to be detected as the center of the detection window, and acquiring the pixel value of the pixel to be detected and the pixel value of each pixel in the detection window, which is in the same channel as the pixel to be detected.
Step 102: and determining the image category to which the image in the detection window belongs.
Step 103: and under the condition that the image type is a flat area, if the offset value between the pixel value of the pixel to be detected and the pixel value of each pixel of the same channel is larger than a first preset threshold value, determining that the pixel to be detected is a dead pixel.
Step 104: if the image type is a texture region, determining that the pixel to be detected is a dead pixel if the offset value between the pixel value of the pixel to be detected and the pixel value of each pixel in the same channel is larger than a second preset threshold value; wherein the second preset threshold is greater than the first preset threshold.
In this embodiment, the texture regions and flat regions in the image to be detected are distinguished, and different dead pixel detection strategies are adopted for them. Because human eyes are more sensitive to brightness changes in low-frequency image regions, a smaller threshold is adopted for a detection window in a flat region to strengthen detection and improve the detection rate, while a larger threshold is adopted for a detection window in a texture region to weaken detection and prevent false detection, thereby improving the accuracy of dead pixel detection.
The implementation details of the dead pixel detection method of the present embodiment are specifically described below, and the following description is merely provided for convenience of understanding, and is not necessary to implement the present embodiment.
In step 101, the image to be detected may be an initial image acquired by the image sensor, for example a Bayer image or a Quad Bayer image. The size of the detection window may be set according to actual needs, for example to a 5×5 pixel area. Referring to fig. 2, the blue pixel B in the shaded area is the pixel to be detected, and the detection window centered on the blue pixel B is the dashed-box area in fig. 2, i.e., the 5×5 pixel area centered on the blue pixel B.
Taking fig. 2 as an example, acquiring the pixel value of the pixel to be detected and the pixel value of each pixel in the detection window that is in the same channel as the pixel to be detected means: acquiring the pixel value of the pixel to be detected, i.e., the pixel value of the blue pixel B, and acquiring the pixel values of all pixels in the detection window that are in the same channel as the blue pixel B, i.e., the pixel values of the other blue pixels in the 5×5 pixel area centered on the blue pixel B.
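As a rough illustration of this window extraction, the following is a minimal Python sketch under the assumption of a Bayer mosaic in which same-channel pixels of the red and blue channels repeat every two rows and columns; the function name and the NumPy representation are illustrative, not part of the patent:

```python
import numpy as np

def same_channel_neighbors(bayer, row, col, half=2):
    """Collect the pixel values inside the (2*half+1) x (2*half+1) detection
    window that share the Bayer channel of the center pixel at (row, col).
    Same-channel neighbors of a red or blue center sit at even row/column
    offsets; the green channel (two sites per 2x2 cell) would need the
    diagonal positions as well and is not covered by this sketch.
    """
    center = int(bayer[row, col])
    neighbors = []
    for dr in range(-half, half + 1, 2):      # step 2: same-channel rows
        for dc in range(-half, half + 1, 2):  # step 2: same-channel cols
            if dr == 0 and dc == 0:
                continue                      # skip the pixel under test
            r, c = row + dr, col + dc
            if 0 <= r < bayer.shape[0] and 0 <= c < bayer.shape[1]:
                neighbors.append(int(bayer[r, c]))
    return center, neighbors
```

For a 5×5 window (half = 2) fully inside the image, this yields the 8 same-channel neighbors surrounding the center pixel, matching the blue pixels around B in fig. 2.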
In step 102, the image category to which the image in the detection window belongs may be determined according to the pixel values of the pixels in the detection window that are in the same channel as the pixel to be detected; the image category is either a flat region or a texture region. For example, if the pixel values of these same-channel pixels deviate greatly from one another, the image in the detection window can be determined to belong to a texture region; if they deviate only slightly, it can be determined to belong to a flat region.
In step 103 and step 104, an offset value between the pixel value of the pixel to be detected and the pixel value of each pixel of the same channel may be calculated, and the number of pixels of the same channel as the pixel to be detected within the detection window is the same as the number of calculated offset values. The offset value may be understood as the absolute value of the difference between the pixel value of the pixel to be measured and the pixel value of each pixel of the same channel. And under the condition that the image type is a flat area, if the offset value between the pixel value of the pixel to be detected and the pixel value of each pixel of the same channel is larger than a first preset threshold value, determining that the pixel to be detected is a dead pixel. If the image type is a texture region, determining that the pixel to be detected is a dead pixel if the offset value between the pixel value of the pixel to be detected and the pixel value of each pixel in the same channel is larger than a second preset threshold value; wherein the second preset threshold is greater than the first preset threshold. The second preset threshold and the first preset threshold may be set according to actual needs under the condition that the above-mentioned magnitude relation is satisfied.
In some embodiments, in the case where the image category is a texture region (Texture Window), dead pixel detection can be performed by the following judgment:
P - P_i > Thr_high for every i (P is a bright spot), or P_i - P > Thr_high for every i (P is a dark spot)
wherein P represents the pixel value of the pixel to be detected (and may also denote the pixel to be detected itself), P_i represents the pixel value of the i-th pixel of the same channel as the pixel to be detected within the detection window (and may also denote that pixel itself), and Thr_high is the second preset threshold. The offset values between the pixel value of the pixel to be detected and the pixel values of the pixels of the same channel comprise the above P - P_i and P_i - P; whether the pixel P is judged to be a dark spot or a bright spot, it is a dead pixel.
In some embodiments, in the case where the image category is a flat region (Smooth Window), dead pixel detection may be performed by the same judgment with the first preset threshold:
P - P_i > Thr_low for every i (P is a bright spot), or P_i - P > Thr_low for every i (P is a dark spot)
wherein Thr_low is the first preset threshold. That is, the detection threshold of the texture region is Thr_high, the detection threshold of the flat region is Thr_low, and they satisfy Thr_high > Thr_low. In this embodiment, raising the dead pixel detection threshold in the texture region reduces the probability of falsely detecting high-frequency image information as dead pixels, while lowering the threshold in the flat region improves the dead pixel detection rate.
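The dual-threshold judgment described in this section can be sketched as follows (illustrative Python, not the patented implementation; it follows the reading in which a pixel is flagged only when it deviates from every same-channel neighbor in the same direction):

```python
def is_dead_pixel(center, neighbors, thr_low, thr_high, is_texture):
    # Pick the stricter (smaller) threshold in flat regions and the more
    # permissive (larger) one in texture regions: thr_high > thr_low.
    thr = thr_high if is_texture else thr_low
    # Bright spot: brighter than every same-channel neighbor by more than thr.
    bright = all(center - p > thr for p in neighbors)
    # Dark spot: darker than every same-channel neighbor by more than thr.
    dark = all(p - center > thr for p in neighbors)
    return bright or dark
```

With, say, thr_low = 20 and thr_high = 100, a stuck-bright pixel in a flat window is flagged while the same moderate deviation inside a texture window is tolerated as plausible high-frequency detail.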
In this embodiment, the detection window may be slid sequentially over the whole Bayer image, traversing every pixel as the pixel to be detected, so as to complete dead pixel detection for the whole Bayer image. By performing texture detection on the detection window, it is determined whether the window belongs to a flat region or a texture region. Because human eyes are more sensitive to brightness changes in low-frequency image regions, the dead pixel detection threshold is lowered for a detection window in a flat region to strengthen detection and improve the detection rate, and raised for a detection window in a texture region to weaken detection and prevent false detection. Compared with traditional dead pixel detection methods, setting different detection thresholds in the flat and texture regions of the image to be detected allows the detection strictness to be changed flexibly: it is relaxed in texture regions to avoid detecting high-frequency information as dead pixels, and tightened in flat regions to ensure that noise there can be detected as far as possible.
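The whole-image traversal mentioned above can be sketched as follows (a hedged Python sketch; the border handling and the callback interface are assumptions for illustration, not the patent's design):

```python
import numpy as np

def scan_bayer(bayer, classify_and_test, half=2):
    """Slide a (2*half+1)-wide detection window over a Bayer image and
    collect the coordinates flagged by classify_and_test(window, row, col),
    which decides whether the center pixel of that window is a dead pixel.
    Border pixels whose window would leave the image are simply skipped
    here; a real pipeline would pad or mirror the border instead.
    """
    h, w = bayer.shape
    flagged = []
    for row in range(half, h - half):
        for col in range(half, w - half):
            window = bayer[row - half:row + half + 1, col - half:col + half + 1]
            if classify_and_test(window, row, col):
                flagged.append((row, col))
    return flagged
```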
In some embodiments, the flowchart of the implementation procedure of step 102 for determining the image category to which the image in the detection window belongs may refer to fig. 3, including:
step 1021: the feature value is determined from the pixel values of the pixels of the same channel.
Step 1022: and calculating the average value of the offset values of the pixel values and the characteristic values of the pixels in the same channel.
Step 1023: if the average value is greater than or equal to a preset texture region threshold value, determining that the image category to which the image in the detection window belongs is a texture region.
Step 1024: if the average value is smaller than a preset texture region threshold value, determining that the image category of the image in the detection window is a flat region.
In some embodiments, determining the feature value from the pixel values of the pixels of the same channel as mentioned in step 1021 includes: determining the median of the pixel values of the pixels in the same channel, and taking the median as the feature value. Selecting the median as the feature value keeps the amount of calculation small and, if a dead pixel exists among the same-channel pixels, prevents its pixel value from influencing the feature value; the median also measures the central tendency of the same-channel pixel values well, which improves the accuracy of the subsequent image category judgment.
In some embodiments, determining the feature value from the pixel values of the pixels of the same channel as mentioned in step 1021 includes: an average value of pixel values of pixels in the same channel is determined, and the average value is used as a characteristic value. Alternatively, the mode of the pixel value of each pixel of the same channel may be determined, and the mode may be used as the feature value. Alternatively, the variance of the pixel values of the pixels in the same channel may be determined, and the variance may be used as the feature value.
In step 1022, the mean of the offset values between the pixel values of the same-channel pixels and the feature value may be calculated. For example, when the feature value is the median, the mean Diff_texture can be calculated by the following formula:

Diff_texture = (1/m) * Σ|P_i − P_med| (i = 1, …, m)
wherein m is the number of pixels in the same channel as the pixel to be detected in the detection window, P_med is the median of the pixel values of the m same-channel pixels, and P_i is the pixel value of the i-th of the m pixels.
In steps 1023 and 1024, the magnitude relationship between the mean Diff_texture and the texture region threshold Thr_texture may be determined: if Diff_texture ≥ Thr_texture, the image category of the image in the detection window is determined to be a texture region; otherwise it is determined to be a flat region. In this embodiment, the mean Diff_texture measures the degree of texture of the image within the detection window. The larger the mean, the larger the gradient changes within the detection window and the more texture information it contains; conversely, the smaller the mean, the smaller the gradient changes and the less texture information it contains.
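The texture classification of steps 1021 to 1024 with a median feature value can be sketched as follows; this is an illustrative sketch, with a made-up function name and example threshold, not the literal implementation of this disclosure:

```python
# Illustrative sketch: classify a detection window as "texture" or "flat" by
# comparing the mean absolute offset of the same-channel pixel values from
# their median (Diff_texture) against a threshold (Thr_texture).
# The threshold value used in the example calls is an invented figure.

def classify_window(same_channel_values, thr_texture):
    """Return 'texture' if mean |P_i - P_med| >= thr_texture, else 'flat'."""
    vals = sorted(same_channel_values)
    m = len(vals)
    # median of the same-channel pixel values (P_med)
    p_med = vals[m // 2] if m % 2 else (vals[m // 2 - 1] + vals[m // 2]) / 2
    # mean offset Diff_texture = (1/m) * sum(|P_i - P_med|)
    diff_texture = sum(abs(v - p_med) for v in same_channel_values) / m
    return "texture" if diff_texture >= thr_texture else "flat"

# A near-uniform window classifies as flat; a high-contrast one as texture.
print(classify_window([100, 101, 99, 100, 102, 100], thr_texture=8))  # flat
print(classify_window([100, 200, 40, 180, 60, 150], thr_texture=8))   # texture
```
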
In some embodiments, determining the feature value from the pixel values of the pixels of the same channel as mentioned in step 1021 includes: determining the maximum value and the minimum value among the pixel values of the same-channel pixels; and determining the feature value from the remaining pixel values, excluding the maximum and the minimum. Since the same-channel pixels with the maximum and minimum pixel values are the most likely to be dead pixels, removing these two values when determining the feature value avoids the influence of potential dead pixels in the same channel, preliminarily eliminating detection failures caused by multiple dead pixels.
For example, referring to fig. 4 or fig. 5, fig. 4 is a schematic diagram of a pixel to be detected being a blue or red pixel, in fig. 4, a pixel marked with P is a blue pixel or a red pixel, pixels marked with P0 to P5, pmax, pmin are all pixels in the same color channel as the pixel to be detected P in the detection window, and P0 to P5, pmax, pmin are respectively the pixel values of the pixels in the same color channel as the pixel to be detected P in the detection window. In fig. 5, the pixels marked with P are green pixels, and the pixels marked with P0 to P5, pmax, pmin are all pixels in the same color channel as the pixel P to be detected in the detection window. In this embodiment, the maximum value Pmax and the minimum value Pmin in the pixel values of the pixels in the same channel can be removed in the detection window, and the feature value is determined according to the remaining P0 to P5. For example, the median Pmed of the remaining 6 pixels { P0, P1, P2, P3, P4, P5} of the same channel within the detection window is calculated:
P_med = median(P0, P1, P2, P3, P4, P5)
Then the mean Diff_texture of the offset values between P0 to P5 and the median P_med is calculated:

Diff_texture = (1/6) * Σ|P_i − P_med| (i = 0, …, 5)
A texture region threshold Thr_texture is set for the calculated Diff_texture: if Diff_texture ≥ Thr_texture, the detection window is marked as a texture region; otherwise it is marked as a flat region.
In this embodiment, processing proceeds after the maximum and minimum pixel values in the detection window have been removed, so that interference from other bright or dark spots possibly present in the detection window with dead pixel detection and correction can be preliminarily eliminated.
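The max/min-trimmed feature value described above can be sketched as follows; the function name and example pixel values are illustrative assumptions:

```python
# Sketch of the variant described above: drop the maximum and minimum
# same-channel values before computing the feature value, so one or two
# potential dead pixels cannot skew the median.

def trimmed_median_offset(same_channel_values):
    """Remove one max and one min, then return (P_med, Diff_texture)."""
    vals = sorted(same_channel_values)
    trimmed = vals[1:-1]                      # drop Pmin and Pmax
    m = len(trimmed)
    p_med = (trimmed[m // 2] if m % 2
             else (trimmed[m // 2 - 1] + trimmed[m // 2]) / 2)
    diff = sum(abs(v - p_med) for v in trimmed) / m
    return p_med, diff

# A stuck-bright (255) and stuck-dark (0) pixel are excluded from the
# feature value, which stays at the normal level of the window.
p_med, diff = trimmed_median_offset([100, 101, 99, 255, 100, 102, 98, 0])
print(p_med)  # 100.0
```
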
In some embodiments, if the offset value between the pixel value of the pixel to be detected and the pixel value of each same-channel pixel is less than or equal to the first preset threshold, or less than or equal to the second preset threshold, the pixel to be detected may be preliminarily considered a normal pixel, and further accurate detection may then be performed on it. Performing further accurate detection after the preliminary detection has judged the pixel to be normal further improves the accuracy of dead pixel detection. The flowchart of the accurate detection in this embodiment may refer to fig. 6, and includes:
Step 201: the feature value is determined from the pixel values of the pixels of the same channel.
Step 202: and calculating the average value of the offset values of the pixel values and the characteristic values of the pixels in the same channel.
Step 203: and determining an offset value between the pixel value of the pixel to be detected and the characteristic value.
Step 204: in the case where the image category is a flat region, if the difference between the offset value of the pixel to be detected from the feature value and the mean value is greater than the first adaptive threshold, determining that the pixel to be detected is a dead pixel.
Step 205: in the case where the image category is a texture region, if the difference between the offset value of the pixel to be detected from the feature value and the mean value is greater than the second adaptive threshold, determining that the pixel to be detected is a dead pixel. The first adaptive threshold and the second adaptive threshold are both determined based on the feature value, and the second adaptive threshold is greater than the first adaptive threshold.
In this embodiment, further accurate detection is performed on the pixels preliminarily judged to be normal. The preliminary detection can complete detection of some isolated dead pixels, but if several continuous dead pixels exist in the detection window, or if the deviation between the dead pixel value and the surrounding pixels does not fully satisfy the threshold, the preliminary detection can still miss detections. Therefore, to avoid missed detections, the pixels judged normal in the preliminary detection are further accurately detected, so that dead pixels can be detected successfully even when continuous dead pixels exist in the detection window, further improving the accuracy of dead pixel detection.
In step 203, the offset value between the pixel value of the pixel to be detected and the feature value may be the absolute value of their difference. When the feature value is the median P_med of the same-channel pixel values and the pixel value of the pixel to be detected is denoted P, the offset value can be expressed as |P − P_med|.
In steps 204 and 205, the difference between the offset value of the pixel to be detected from the feature value and the mean value may be calculated. When the feature value is the median, the difference diff_offset can be calculated as:

diff_offset = |P − P_med| − (1/m) * Σ|P_i − P_med| (i = 1, …, m)
wherein m is the number of pixels in the same channel as the pixel to be detected in the detection window, P_med is the median of the pixel values of the m same-channel pixels, P_i is the pixel value of the i-th of the m pixels, and P is the pixel value of the pixel to be detected. This embodiment relies on the observation that if the pixel P to be detected is a dead pixel, its brightness changes abruptly within its detection window, so the offset of its pixel value from the median P_med should be much larger than the mean offset of the other same-channel pixels P_i from P_med. Therefore, this embodiment can perform further accurate detection based on the calculated diff_offset.
In the case where the image category is a flat region, if diff_offset is greater than the first adaptive threshold, the pixel to be detected is determined to be a dead pixel; otherwise it is determined to be a normal pixel. In the case where the image category is a texture region, if diff_offset is greater than the second adaptive threshold, the pixel to be detected is determined to be a dead pixel; otherwise it is determined to be a normal pixel. The first adaptive threshold and the second adaptive threshold are both determined based on the feature value, and the second adaptive threshold is greater than the first. That is, the two adaptive thresholds may change adaptively with the feature value of the same-channel pixel values, for example with their median, mean, or variance. Setting separate adaptive thresholds for the flat region and the texture region is favorable for improving the detection rate of dead pixels in flat regions and reducing the false detection rate in texture regions.
In this embodiment, when comparing the degree of offset between the pixel to be detected and the median, the comparison value (the difference between the pixel's offset from the feature value and the mean offset) is formed against the mean offset of the other same-channel pixels from the median within the detection window. Using the mean avoids, to a certain extent, the comparison value being inflated by other dead pixels that may exist in the detection window, which greatly improves the dead pixel detection rate when continuous dead pixels are present.
In some embodiments, the first adaptive threshold is determined based on the feature value and a first preset coefficient, the second adaptive threshold is determined based on the feature value and a second preset coefficient, both the first preset coefficient and the second preset coefficient are greater than 0 and less than 1, and the second preset coefficient is greater than the first preset coefficient.
In some embodiments, the feature value is the median P_med of the pixel values of the pixels in the same channel as the pixel to be detected in the detection window, the first adaptive threshold is Thr_selfadj1 = coef_low * P_med, and the second adaptive threshold is Thr_selfadj2 = coef_high * P_med, wherein coef_low is the first preset coefficient, coef_high is the second preset coefficient, and 0 < coef_low < coef_high < 1.
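The accurate-detection decision with these adaptive thresholds can be sketched as follows; the coefficient values coef_low and coef_high are invented examples within the stated range 0 < coef_low < coef_high < 1, and the function name is illustrative:

```python
# Hedged sketch of the accurate-detection step: compute
# diff_offset = |P - P_med| - mean(|P_i - P_med|) and compare it against an
# adaptive threshold coef * P_med that depends on the region category.

def is_dead_pixel(p, same_channel_values, is_texture,
                  coef_low=0.3, coef_high=0.6):
    vals = sorted(same_channel_values)
    m = len(vals)
    p_med = vals[m // 2] if m % 2 else (vals[m // 2 - 1] + vals[m // 2]) / 2
    mean_offset = sum(abs(v - p_med) for v in same_channel_values) / m
    diff_offset = abs(p - p_med) - mean_offset
    # the second (texture) threshold is larger than the first (flat) one
    threshold = (coef_high if is_texture else coef_low) * p_med
    return diff_offset > threshold

# A pixel far above its neighbours in a flat window is flagged as dead;
# a pixel close to its neighbours is not.
print(is_dead_pixel(250, [100, 101, 99, 100, 102, 100], is_texture=False))
print(is_dead_pixel(101, [100, 101, 99, 100, 102, 100], is_texture=False))
```
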
In this embodiment, dead pixels are accurately detected using the degree of offset between the pixel to be detected and the median together with adaptive thresholds that are linear in the median, which improves detection adaptability: the offset value and the adaptive thresholds change automatically with the pixel environment of the detection window, avoiding the over- or under-detection that can occur in some areas when a single fixed threshold is used over the whole image. This embodiment can thus adapt the detection strength to different pixel environments to a certain extent and also covers the detection of continuous dead pixels.
In some embodiments, in a case where the pixel to be measured is determined to be a dead pixel, the method further includes: and correcting the pixel value of the pixel to be detected according to the pixel value of each pixel in the same channel to obtain the corrected pixel value of the pixel to be detected.
Considering that both dynamic and static dead pixels affect the detail information of Bayer image data and produce false colors or overly dark/bright spots in the final digital image, dead pixels need to be detected and corrected based on their visually abnormal characteristics. Common dead pixel correction strategies are mainly divided into static and dynamic correction. Static dead pixel correction means that the manufacturer knows the dead pixel positions of the image sensor in advance and provides a static dead pixel table at the factory; in use, the pixel values at the corresponding positions are corrected by looking up this table. However, this method is limited, because the table occupies a large amount of memory and new dead pixels may appear later. Dynamic dead pixel correction detects and corrects the Bayer image data generated by the image sensor in real time and does not depend on prior information, so it has higher universality. The dead pixel correction in this embodiment can be understood as dynamic dead pixel correction; for example, the mean or median of the pixel values of the same-channel pixels of the pixel to be detected can be calculated and used to replace its pixel value to complete the correction.
In some embodiments, correcting the pixel value of the pixel to be measured according to the pixel value of each pixel in the same channel to obtain a corrected pixel value of the pixel to be measured, including: respectively determining pixel gradient values in N directions in a detection window according to pixel values of pixels in the same channel; wherein N is an integer greater than or equal to 2; and determining corrected pixel values of the pixels to be detected according to the pixel gradient values in the N directions. The pixel gradient value in each direction may be: the difference between the pixel values of any two adjacent pixels in each direction may also be: the average value of the differences of the pixel values of the adjacent pixels in each direction may also be: the maximum value or the minimum value of the difference in pixel values of the respective adjacent pixels in each direction. The differences of the pixel values mentioned above may be absolute values of the differences.
When N is equal to 2, the 2 directions may be the horizontal and vertical directions, i.e., the 0° direction and the 90° direction. When N is greater than 2, N may be 4, the 4 directions being the 0° direction, the 45° direction, the 90° direction, and the 135° direction. Combining pixel gradient values in 4 directions is advantageous for obtaining accurate and reasonable corrected pixel values.
In some embodiments, determining pixel gradient values in N directions in the detection window from pixel values of pixels of the same channel, respectively, includes: according to the pixel values of the pixels of the same channel, respectively determining a plurality of groups of pixel gradient values in each of N directions; and determining characteristic values of a plurality of groups of pixel gradient values in each direction, and taking the characteristic values of the plurality of groups of pixel gradient values in each direction as the pixel gradient values in each direction. For example, the characteristic value of the plurality of sets of pixel gradient values in each direction may be a mean value of the plurality of sets of pixel gradient values in each direction.
In some embodiments, determining the eigenvalues of the sets of pixel gradient values in each direction comprises: the median of the sets of pixel gradient values in each direction is determined. That is, the characteristic value of the plurality of groups of pixel gradient values in each direction can be the median value of the plurality of groups of pixel gradient values in each direction, which is beneficial to avoiding inaccurate gradient calculation caused by the existence of other dead pixels in the detection window.
For example, when N is equal to 4 and the 4 directions are the 0° direction, 45° direction, 90° direction, and 135° direction, reference may be made to fig. 7 and fig. 8. Fig. 7 is a schematic diagram of calculating the pixel gradient values in the 4 directions when the pixel to be detected is a blue or red pixel, and fig. 8 is the corresponding schematic diagram when the pixel to be detected is a green pixel.
In fig. 7, the pixels marked with P are blue pixels or red pixels, the pixels marked with P0 to P7 are all pixels in the same color channel as the pixel to be detected P in the detection window, and P0 to P7 are the pixel values of the pixels in the same color channel as the pixel to be detected P in the detection window. In fig. 8, the pixels marked with P are green pixels, and the pixels marked with P0 to P7 are all pixels in the same color channel as the pixel P to be detected in the detection window.
In order to avoid gradient calculation errors caused by other dead pixels in the detection window, in this embodiment 3 groups of gradients are computed for each direction, and the median of the 3 groups is taken as the gradient in that direction.
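The median-of-three-groups principle can be sketched generically as follows; the exact pixel pairings per direction come from figs. 7 and 8, which are not reproduced in this text, so the input pairs in the example are illustrative:

```python
# Sketch of the stated principle only: for one direction, compute three
# candidate gradients from three same-channel pixel pairs along that
# direction and keep their median, so a single extra dead pixel in the
# window cannot corrupt the direction's gradient.

def direction_gradient(pairs):
    """pairs: three (a, b) same-channel pixel-value pairs along one direction."""
    grads = sorted(abs(a - b) for a, b in pairs)
    return grads[1]                 # median of the 3 candidate gradients

# One outlier pair (containing a dead pixel at 255) does not dominate.
print(direction_gradient([(100, 102), (101, 99), (100, 255)]))  # 2
```
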
Referring to fig. 7, for the pixels to be measured in the R and B channels, pixel gradient values corresponding to the horizontal 0 ° (H), the vertical 90 ° (V), the 45 ° diagonal (D) and the 135 ° diagonal (a) are respectively:
referring to fig. 8, for a pixel to be measured in the G channel, pixel gradient values corresponding to a horizontal 0 ° (H), a vertical 90 ° (V), a 45 ° diagonal (D) and a 135 ° diagonal (a) are respectively:
in some embodiments, determining corrected pixel values for the pixel under test from pixel gradient values in N directions includes: if the pixel gradient values in the N directions are all 0, determining that the corrected pixel value of the pixel to be detected is the pixel value of any one of the pixels in the same channel.
For example, referring to fig. 7 and 8, if gradH = gradV = gradD = gradA = 0, the pixel values of the other same-channel pixels in the detection window are all equal, i.e., the window is a flat area, and the corrected pixel value of the pixel to be detected may be determined as the pixel value of any one of the same-channel pixels, for example P0.
In some embodiments, determining corrected pixel values for the pixel under test from pixel gradient values in N directions includes: if there are pixel gradient values in the N directions that are not 0, the pixel values of the pixel to be measured are corrected by a flowchart shown in fig. 9, where the correction procedure includes:
step 301: and determining weight values corresponding to the N directions respectively according to the pixel gradient values in the N directions.
Step 302: for each of the N directions, selecting n reference pixels in that direction from among the pixels of the same channel; the pixel to be detected lies on the line connecting the n reference pixels in each direction, and n is an integer greater than or equal to 2.
Step 303: the corrected pixel value in each direction is determined from the pixel values of the n reference pixels selected in each direction.
Step 304: and determining corrected pixel values of the pixels to be detected according to the weight values corresponding to the N directions and the corrected pixel values in each direction.
In this embodiment, the change characteristics of the pixel values in different areas are fully considered, and the corrected pixel value in each direction is obtained by a weighted calculation method according to the weight values corresponding to the N directions, so as to recover the dead pixel, which is beneficial to avoiding dead pixel residue or excessive loss of detail on visual effect.
In step 301, since the gradient perpendicular to the edge direction in the image to be measured is large and the gradient along the edge direction is small, pixel information in the direction with a smaller gradient needs to be utilized as much as possible when performing the dead pixel correction. Therefore, in this embodiment, the weight values corresponding to the N directions may be inversely proportional to the magnitudes of the pixel gradient values in the N directions.
In some embodiments, in step 301, the weight values corresponding to the N directions may be calculated by the following formula:
wherein weight[k] is the weight value corresponding to the k-th direction, grad[k] is the pixel gradient value in the k-th direction, and senParam is a preset sensitivity parameter. The sensitivity parameter can be set according to actual needs; for example, the larger the desired weight in the edge direction, the larger the sensitivity parameter can be set, and in a specific implementation it may be greater than or equal to 1. If grad[k] = 0, a suitable substitute value is assigned so that a reasonable weight value can still be determined for that direction and the corrected pixel value in that direction can be reasonably weighted later.
For example, for N = 4, grad[4] = {gradH, gradV, gradD, gradA}, and the weight values weight[4] = {weightH, weightV, weightD, weightA} corresponding to the 4 directions can be calculated by the above weight formula.
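Because the exact weight formula is not legible in this text, the sketch below is only an assumed form consistent with the surrounding description: weights decrease with gradient magnitude, senParam controls how strongly the small-gradient (edge) direction dominates, and a zero gradient receives the largest weight. The epsilon term is an added assumption to keep a zero gradient well-defined:

```python
# Assumed inverse-gradient weighting (NOT the patent's literal formula):
# smaller gradient (along-edge direction) -> larger weight, with sen_param
# sharpening the preference and eps handling grad == 0.

def direction_weights(grads, sen_param=2.0, eps=1.0):
    """grads: pixel gradient values for the N directions (e.g. H, V, D, A)."""
    return [1.0 / ((g + eps) ** sen_param) for g in grads]

w = direction_weights([0.0, 10.0, 40.0, 80.0])
print(w[0] > w[1] > w[2] > w[3])  # True: weight is largest along the edge
```
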
When the dead pixel value is corrected, the inverse proportion value of the pixel gradient value in each direction is used as the weight, the weight in the edge direction of the dead pixel is increased, the correction result value can form linear change along the edge direction of the dead pixel, and the method is more in line with the visual characteristics of human eyes.
In step 302, for each of the N directions, n reference pixels in that direction are selected from among the pixels in the same channel as the pixel to be detected; the pixel to be detected lies on the line connecting the n reference pixels in each direction, and n is an integer greater than or equal to 2.
In some embodiments, n is equal to 3, and 2 normal pixels are included in the 3 reference pixels. For example, referring to fig. 10 and 11, fig. 10 is a schematic diagram of reference pixels in 4 selected directions when the pixel to be measured is a blue pixel or a red pixel, and fig. 11 is a schematic diagram of reference pixels in 4 selected directions when the pixel to be measured is a green pixel.
In fig. 10 and 11, the 3 reference pixels selected horizontally are Ph0, Ph1, Ph2; the 3 reference pixels selected vertically are Pv0, Pv1, Pv2; the 3 reference pixels selected in the 45° diagonal (D) direction are Pd0, Pd1, Pd2; and the 3 reference pixels selected in the 135° diagonal (A) direction are Pa0, Pa1, Pa2. The line connecting the reference pixels selected in each direction passes through the pixel P to be detected in the middle, so that the pixel value of the pixel to be detected can be accurately calibrated.
It should be noted that the above example selects only 3 reference pixels in each direction, but the present invention is not limited thereto; 2 or more reference pixels may also be selected in each direction. In a specific implementation, selecting 3 reference pixels per direction, of which at least 2 are normal pixels, allows the pixel value of the pixel to be detected to be calibrated accurately. During dead pixel detection and calibration, the pixels of the image to be detected are traversed from left to right and from top to bottom, determining whether each pixel is a dead pixel and correcting it when it is. Thus, for example, among the 3 horizontally selected reference pixels Ph0, Ph1, Ph2, the pixels Ph0 and Ph1 were traversed before the pixel P to be detected, so they are either normal pixels to begin with or have already been corrected to normal even if they were dead pixels. This effectively avoids the influence of dead pixels on the calibration of the pixel to be detected; the pixels participating in the calibration are weighted in combination with the gradients of the directions to obtain a correction result that matches the visual effect, improving calibration accuracy while selecting as few reference pixels as possible to reduce the computational load.
In step 303, a modified pixel value in each direction is determined from the pixel values of the n reference pixels selected in each direction. For example, an average value of pixel values of two adjacent pixels among the pixel values of the n reference pixels selected in each direction may be used as the corrected pixel value in each direction.
In some embodiments, step 303 may refer to fig. 12 for an implementation procedure of determining a corrected pixel value in each direction according to the pixel values of the n reference pixels selected in each direction, including:
step 3031: calculating pixel gradient values between any two adjacent pixels in the n selected reference pixels in each direction to obtain n-1 pixel gradient values in each direction;
step 3032: determining a smallest pixel gradient value of the n-1 pixel gradient values in each direction, and determining two target reference pixels having the smallest pixel gradient values in each direction;
step 3033: the corrected pixel value in each direction is determined from the pixel values of the two target reference pixels in each direction.
In this embodiment, the two target reference pixels with the smallest pixel gradient value have close pixel values and are therefore very likely to be normal pixels, so an accurate corrected pixel value can be obtained in each direction from the pixel values of these two target reference pixels.
In step 3031, referring to fig. 10 and 11, the 3 horizontally selected reference pixels Ph0, Ph1, Ph2 yield 2 pixel gradient values between adjacent reference pixels: |Ph0 − Ph1| and |Ph1 − Ph2|. The 3 vertically selected reference pixels Pv0, Pv1, Pv2 yield 2 pixel gradient values: |Pv0 − Pv1| and |Pv1 − Pv2|. The 3 reference pixels Pa0, Pa1, Pa2 selected in the 135° diagonal (A) direction yield 2 pixel gradient values: |Pa0 − Pa1| and |Pa1 − Pa2|. The 3 reference pixels Pd0, Pd1, Pd2 selected in the 45° diagonal (D) direction yield 2 pixel gradient values: |Pd0 − Pd1| and |Pd1 − Pd2|.
In some embodiments, n = 3 and the 3 reference pixels include 2 normal pixels. That is, 3 reference pixels are selected in each direction, and 2 of them are normal either inherently or because, as dead pixels, they were corrected earlier, which avoids the influence of dead pixels on the corrected pixel value.
In some embodiments, n = 3 and the 3 reference pixels include a first reference pixel with pixel value P1, a second reference pixel with pixel value P2, and a third reference pixel with pixel value P0; the pixel to be detected is located between the first and second reference pixels, and the first reference pixel is located between the third reference pixel and the pixel to be detected. In step 3033, if the two target reference pixels are the first and second reference pixels, the corrected pixel value is (P1 + P2) / 2; if the two target reference pixels are the first and third reference pixels, the corrected pixel value is 2 * P1 − P0.
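Steps 3031 to 3033 for a single direction can be sketched as follows, with the reference values ordered (P0, P1, P2) = (third, first, second) so that the pixel to be detected lies between P1 and P2; the function itself is illustrative:

```python
# Sketch of one direction's correction: pick the adjacent reference pair
# with the smaller gradient, then interpolate (pair straddles the pixel)
# or extrapolate (pair lies on one side of it).

def corrected_value(p0, p1, p2):
    """p0/p1/p2: third, first, second reference pixel values in one direction."""
    if abs(p1 - p2) <= abs(p0 - p1):
        return (p1 + p2) / 2      # targets straddle the pixel: interpolate
    return 2 * p1 - p0            # targets on one side: extrapolate along edge

print(corrected_value(100, 102, 104))  # 103.0 (interpolate)
print(corrected_value(100, 102, 240))  # 104   (extrapolate past outlier P2)
```
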
For example, referring to fig. 10 and 11, if the two target reference pixels with the smallest pixel gradient value in the 45° diagonal (D) direction are those with pixel values Pd0 (the third reference pixel) and Pd1 (the first reference pixel), the corrected pixel value in the 45° diagonal (D) direction can be calculated from Pd0 and Pd1 as 2 * Pd1 − Pd0. If the two target reference pixels with the smallest pixel gradient value in that direction are those with pixel values Pd1 and Pd2 (the second reference pixel), the corrected pixel value can be calculated from Pd1 and Pd2 as (Pd1 + Pd2) / 2.
Taking the lateral (H) direction as an example, the corrected pixel value P_h in the lateral (H) direction is calculated as:

P_h = (P_h1 + P_h2)/2, if |P_h1 - P_h2| is the smaller pixel gradient value;
P_h = 2P_h1 - P_h0, if |P_h0 - P_h1| is the smaller pixel gradient value.
The corrected pixel values in the other directions are calculated in the same manner; to avoid repetition, they are not described here.
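The per-direction correction rule above can be sketched as follows; the tie-breaking choice when both gradients are equal is our assumption, since the patent does not specify it:

```python
def directional_correction(p0, p1, p2):
    """Corrected value along one direction from three reference pixels:
    p0 is the third (outermost) reference pixel, p1 the first (adjacent
    to the pixel under test), p2 the second (on the other side of the
    pixel under test).  The adjacent pair with the smaller gradient is
    used: averaging for (p1, p2), linear extrapolation for (p0, p1)."""
    if abs(p1 - p2) <= abs(p0 - p1):
        return (p1 + p2) / 2   # target pair: first and second
    return 2 * p1 - p0         # target pair: first and third

# If the pair around the pixel under test is smooth, average it:
v = directional_correction(20, 12, 10)  # |12-10| < |20-12| → 11.0
```

Extrapolating along the smoother outer pair (2P1 - P0) keeps the correction on the local linear trend, which is what makes the result follow an edge rather than blur across it.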
In this embodiment, when reference pixels are selected in each direction to calculate the corrected pixel value, two pairs of reference pixels are considered and the pixel gradient value serves as the selection criterion, which minimizes the risk that other dead pixels inside the detection window are used as references and distort the correction value. At the same time, more reasonable weight values can be calculated under the different pixel environments of the image to be detected, and normal pixels are selected as reference pixels for the weighted calculation wherever possible, so that dead pixels are corrected reliably.
In some embodiments, the determining, in step 304, of the corrected pixel value of the pixel under test according to the weight values respectively corresponding to the N directions and the corrected pixel value in each direction includes:
the corrected pixel value is calculated by the following formula:

P = Σ_{k=0}^{N-1} weight[k] · P_grad[k]

where P is the corrected pixel value, weight[k] is the weight value corresponding to the kth direction, and P_grad[k] is the corrected pixel value in the kth direction.
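The weighted fusion of the per-direction corrections can be sketched as follows; the function name is ours, and the weights are assumed to be already normalized:

```python
def fuse_corrections(weights, corrections):
    """P = sum_k weight[k] * P_grad[k]: combine the per-direction
    corrected pixel values using the per-direction weight values."""
    return sum(w * c for w, c in zip(weights, corrections))

# Example with 4 directions and normalized weights
p = fuse_corrections([0.4, 0.3, 0.2, 0.1], [100, 104, 96, 120])
```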
In this embodiment, the detection window is slid over the entire Bayer image, and each pixel is traversed from top to bottom and from left to right as the pixel to be detected, so as to complete the dead pixel detection and correction of the whole Bayer image.
In some embodiments, the flow of dead pixel detection and correction may refer to fig. 13, which includes:
step 401: judging whether non-traversed pixels exist in the image to be detected; if so, step 402 is performed, otherwise the flow ends.
Step 402: determining a detection window of the currently traversed pixel to be detected, and removing pixels with the maximum pixel value and the minimum pixel value from all pixels in the same channel with the pixel to be detected in the detection window.
Step 403: an image category of an image within the detection window is determined. The determination manner of the image category may refer to the related description in the above embodiment, and in order to avoid repetition, the description is omitted here.
Step 404: Determining whether the pixel to be detected is flagged as a dead pixel by the preliminary detection; if so, step 405 is performed, otherwise step 409 is performed. The process of the preliminary dead pixel detection may refer to the descriptions of steps 101 to 104 in the above embodiments; the description is omitted here to avoid repetition.
Step 405: Determining the pixel gradient values in 4 directions in the detection window according to the pixel values of the pixels in the same channel as the pixel to be detected.

Step 406: Determining the weight values corresponding to the 4 directions according to the pixel gradient values in the 4 directions.

Step 407: Determining the corrected pixel values in the 4 directions.

Step 408: Determining the corrected pixel value of the pixel to be detected according to the weight values corresponding to the 4 directions and the corrected pixel values in the 4 directions.
Step 409: determining whether the pixel to be detected is accurately detected as a dead pixel; if so, step 405 is performed, otherwise step 410 is performed. The process of accurately detecting the dead pixel may refer to the descriptions of steps 201 to 204 in the above embodiments, and the descriptions are omitted herein to avoid repetition.
Step 410: and determining the pixel to be detected as a normal pixel.
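The flow of fig. 13 can be sketched as a traversal loop; the three callbacks below stand in for steps 402-409 and are our assumptions, not interfaces defined by the patent:

```python
def detect_and_correct(image, preliminary, accurate, correct):
    """Traverse every pixel top-to-bottom, left-to-right (step 401):
    a pixel flagged by either the preliminary or the accurate detection
    is corrected (steps 404-409); otherwise it is kept as a normal
    pixel (step 410)."""
    out = [row[:] for row in image]
    for y in range(len(image)):
        for x in range(len(image[0])):
            if preliminary(image, x, y) or accurate(image, x, y):
                out[y][x] = correct(image, x, y)  # steps 405-408
    return out

# Toy example: flag values above 200 as dead and clamp them to 0.
img = [[10, 250], [30, 40]]
fixed = detect_and_correct(img,
                           lambda im, x, y: im[y][x] > 200,
                           lambda im, x, y: False,
                           lambda im, x, y: 0)
```

Note that the corrections here read from the original image and write to a copy; whether a real implementation lets corrected pixels feed into later windows (as the earlier "corrected dead pixels count as normal pixels" passage suggests) is a design choice.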
According to this embodiment, an accurate detection stage is added after the preliminary dead pixel detection, which further improves the dead pixel detection probability. During dead pixel value correction, the inverse of the gradient in each direction is used as the weight, which increases the weight along the edge direction through the dead pixel; the corrected value then changes linearly along that edge direction, which better matches the visual characteristics of the human eye. The variation of pixel values across different regions is fully taken into account, and the corrected pixel value in each direction is combined by weighted calculation using the weight values corresponding to the 4 directions, so that dead pixels are recovered without leaving residual dead pixels or losing excessive detail in the visual result.
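The inverse-gradient weighting described above might be sketched as follows; the epsilon guard and the normalization are our assumptions, since the patent only states that the inverse of the per-direction gradient is used as the weight:

```python
def gradient_weights(grads, eps=1e-6):
    """Weight each direction by the inverse of its gradient value, then
    normalize: a small gradient (i.e. an edge direction) gets a large
    weight, so the correction follows the edge through the dead pixel."""
    inv = [1.0 / (g + eps) for g in grads]
    total = sum(inv)
    return [v / total for v in inv]

w = gradient_weights([2.0, 8.0, 4.0, 16.0])  # smallest gradient gets the largest weight
```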
The above examples in this embodiment are provided for ease of understanding and do not limit the technical solution of the present application.
The steps of the above methods are divided for clarity of description; when implemented, they may be combined into one step or split into multiple steps, and as long as the same logical relationship is preserved, they fall within the protection scope of this patent. Adding insignificant modifications to the algorithm or flow, or introducing insignificant designs, without changing the core design of the algorithm and flow, also falls within the protection scope of this patent.
The embodiment of the present application further relates to a chip. Referring to fig. 14, the chip 501 is connected to a memory 502, and the memory stores instructions executable by the chip; when the instructions are executed by the chip, the chip can perform the method for detecting a dead pixel in the above embodiments.
The chip 501 and the memory 502 are connected by a bus, and the bus may include any number of interconnected buses and bridges that link together the various circuits of one or more chips 501 and the memory 502. The bus may also connect various other circuits, such as peripherals, voltage regulators, and power management circuits, which are well known in the art and therefore are not described further herein. A bus interface provides an interface between the bus and a transceiver. The transceiver may be a single element or a plurality of elements, such as multiple receivers and transmitters, providing a means for communicating with various other apparatuses over a transmission medium. Data processed by the chip 501 is transmitted over a wireless medium via an antenna, which further receives data and transmits it to the chip 501.
The chip 501 is responsible for managing the bus and general processing and may also provide various functions including timing, peripheral interfaces, voltage regulation, power management, and other control functions. And memory 502 may be used to store data used by chip 501 in performing operations.
The embodiment of the application also relates to an electronic device, referring to fig. 14, including: the chip 501 and a memory 502 connected to the chip 501.
Embodiments of the present application also relate to a computer-readable storage medium storing a computer program. The computer program implements the above-described method embodiments when executed by a processor.
That is, those skilled in the art will understand that all or part of the steps of the methods in the above embodiments may be implemented by a program stored in a storage medium; the program includes several instructions for causing a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to perform all or part of the steps of the methods described in the embodiments herein. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disc.
It will be understood by those of ordinary skill in the art that the foregoing embodiments are specific examples of implementing the present application and that various changes in form and details may be made therein without departing from the spirit and scope of the present application.
Claims (21)
1. The dead pixel detection method is characterized by comprising the following steps of:
taking a pixel to be detected in an image to be detected as the center of a detection window, and acquiring the pixel value of the pixel to be detected and the pixel value of each pixel in the detection window, which is in the same channel as the pixel to be detected;
determining the image category of the image in the detection window;
if the image type is a flat area, determining that the pixel to be detected is a dead pixel if the offset value between the pixel value of the pixel to be detected and the pixel value of each pixel in the same channel is larger than a first preset threshold value;
if the image category is a texture region, determining that the pixel to be detected is a dead pixel if the offset value between the pixel value of the pixel to be detected and the pixel value of each pixel in the same channel is larger than a second preset threshold value;
wherein the second preset threshold is greater than the first preset threshold.
2. The method of claim 1, wherein determining the image category to which the image within the detection window belongs comprises:
Determining a characteristic value according to the pixel value of each pixel of the same channel;
calculating the average value of the pixel value of each pixel of the same channel and the offset value of the characteristic value;
if the average value is larger than or equal to a preset texture region threshold value, determining that the image category of the image in the detection window is a texture region;
and if the average value is smaller than the preset texture region threshold value, determining that the image category of the image in the detection window is a flat region.
3. The dead pixel detection method according to claim 1, wherein if the offset value between the pixel value of the pixel to be detected and the pixel value of each pixel of the same channel is less than or equal to a first preset threshold value, or if the offset value between the pixel value of the pixel to be detected and the pixel value of each pixel of the same channel is less than or equal to a second preset threshold value, the method further comprises:
determining a characteristic value according to the pixel value of each pixel of the same channel;
calculating the average value of the pixel value of each pixel of the same channel and the offset value of the characteristic value;
determining an offset value between the pixel value of the pixel to be detected and the characteristic value;
if the image category is a flat area, determining that the pixel to be detected is a dead pixel if the difference between the offset value of the pixel value of the pixel to be detected from the characteristic value and the average value is greater than a first adaptive threshold;
If the image category is a texture region, determining that the pixel to be detected is a dead pixel if the difference between the offset value of the pixel value of the pixel to be detected from the characteristic value and the average value is greater than a second adaptive threshold;
wherein the first adaptive threshold and the second adaptive threshold are both determined based on the feature value, and the second adaptive threshold is greater than the first adaptive threshold.
4. The dead pixel detection method of claim 3 wherein the first adaptive threshold is determined based on the characteristic value and a first preset coefficient, the second adaptive threshold is determined based on the characteristic value and a second preset coefficient, the first preset coefficient and the second preset coefficient are both greater than 0 and less than 1, and the second preset coefficient is greater than the first preset coefficient.
5. A dead pixel detection method according to claim 2 or 3, wherein determining the feature value according to the pixel value of each pixel of the same channel comprises:
and determining the median value of the pixel values of the pixels of the same channel, and taking the median value as the characteristic value.
6. A dead pixel detection method according to claim 2 or 3, wherein determining the feature value according to the pixel value of each pixel of the same channel comprises:
Determining the maximum value and the minimum value in the pixel values of the pixels of the same channel;
and determining a characteristic value according to the pixel values except the maximum value and the minimum value in the pixel values of the pixels in the same channel.
7. The method according to claim 1, wherein in the case where the pixel to be detected is determined to be a dead pixel, the method further comprises:
and correcting the pixel value of the pixel to be detected according to the pixel value of each pixel in the same channel to obtain the corrected pixel value of the pixel to be detected.
8. The method of claim 7, wherein correcting the pixel value of the pixel to be detected according to the pixel value of each pixel of the same channel to obtain the corrected pixel value of the pixel to be detected comprises:
according to the pixel values of the pixels of the same channel, respectively determining pixel gradient values in N directions in the detection window; wherein N is an integer greater than or equal to 2;
and determining corrected pixel values of the pixels to be detected according to the pixel gradient values in the N directions.
9. The method of claim 8, wherein determining pixel gradient values in N directions in the detection window according to pixel values of pixels of the same channel, respectively, comprises:
According to the pixel values of the pixels of the same channel, respectively determining a plurality of groups of pixel gradient values in each of the N directions;
and determining the characteristic values of the plurality of groups of pixel gradient values in each direction, and taking the characteristic values of the plurality of groups of pixel gradient values in each direction as the pixel gradient values in each direction.
10. The method according to claim 9, wherein determining the characteristic values of the plurality of sets of pixel gradient values in each direction includes:
and determining the median value of the plurality of groups of pixel gradient values in each direction.
11. The dead pixel detection method according to any one of claims 8 to 10, wherein the determining the corrected pixel value of the pixel under test according to the pixel gradient values in the N directions includes:
and if the pixel gradient values in the N directions are all 0, determining the corrected pixel value of the pixel to be detected as the pixel value of any one of the pixels in the same channel.
12. The dead pixel detection method according to any one of claims 8 to 10, wherein the determining the corrected pixel value of the pixel under test according to the pixel gradient values in the N directions includes:
According to the pixel gradient values in the N directions, determining weight values corresponding to the N directions respectively;
selecting N reference pixels in each of the N directions among the pixels of the same channel for each of the N directions; the pixels to be detected are positioned on connecting lines among n reference pixels in each direction, and n is an integer greater than or equal to 2;
determining a corrected pixel value in each direction according to the pixel values of the n reference pixels selected in each direction;
and determining corrected pixel values of the pixels to be detected according to the weight values respectively corresponding to the N directions and the corrected pixel values in each direction.
13. The method of claim 12, wherein determining the weight values corresponding to the N directions according to the pixel gradient values in the N directions includes:
and calculating the weight values corresponding to the N directions respectively through the following formula:
14. The method according to claim 12, wherein the determining the corrected pixel value in each direction based on the pixel values of the n reference pixels selected in each direction includes:
calculating pixel gradient values between any two adjacent pixels in the n selected reference pixels in each direction to obtain n-1 pixel gradient values in each direction;
determining a smallest pixel gradient value of the n-1 pixel gradient values in each direction, and determining two target reference pixels with the smallest pixel gradient values in each direction;
and determining the corrected pixel value in each direction according to the pixel values of the two target reference pixels in each direction.
15. The method of claim 14, wherein n is equal to 3, and wherein 2 normal pixels are included in the 3 reference pixels.
16. The method according to claim 15, wherein the 3 reference pixels include a first reference pixel having a pixel value P1, a second reference pixel having a pixel value P2, and a third reference pixel having a pixel value P0, the pixel to be detected is located between the first reference pixel and the second reference pixel, and the first reference pixel is located between the third reference pixel and the pixel to be detected;
The determining the corrected pixel value in each direction according to the pixel values of the two target reference pixels in each direction comprises:
if the two target reference pixels are the first reference pixel and the second reference pixel, the modified pixel value is: (P1 + P2)/2;
if the two target reference pixels are the first reference pixel and the third reference pixel, the modified pixel value is: 2P1-P0.
17. The method according to claim 12, wherein determining the corrected pixel value of the pixel to be detected according to the weight values respectively corresponding to the N directions and the corrected pixel value in each direction includes:
calculating the corrected pixel value by the following formula:

P = Σ_{k=0}^{N-1} weight[k] · P_grad[k]

wherein P is the corrected pixel value, weight[k] is the weight value corresponding to the kth direction, and P_grad[k] is the modified pixel value in the kth direction.
18. The method of claim 12, wherein N is equal to 4, and 4 directions are respectively: 0 ° direction, 45 ° direction, 90 ° direction, 135 ° direction.
19. A chip, wherein the chip is located in an electronic device and is connected to a memory in the electronic device, the memory storing instructions executable by the chip to enable the chip to perform the method of detecting a dead pixel according to any one of claims 1 to 18.
20. An electronic device, comprising: the chip of claim 19, and a memory coupled to said chip.
21. A computer readable storage medium storing a computer program, wherein the computer program when executed by a processor implements the method of detecting a dead pixel according to any one of claims 1 to 18.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111653341.5A CN116418970A (en) | 2021-12-30 | 2021-12-30 | Method for detecting dead pixel, chip, electronic device and computer readable storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111653341.5A CN116418970A (en) | 2021-12-30 | 2021-12-30 | Method for detecting dead pixel, chip, electronic device and computer readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116418970A true CN116418970A (en) | 2023-07-11 |
Family
ID=87053371
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111653341.5A Pending CN116418970A (en) | 2021-12-30 | 2021-12-30 | Method for detecting dead pixel, chip, electronic device and computer readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116418970A (en) |
- 2021-12-30: CN CN202111653341.5A patent/CN116418970A/en active Pending
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116886894A (en) * | 2023-08-04 | 2023-10-13 | 上海宇勘科技有限公司 | Picture dead pixel detection and correction method for adjacent internal column dead pixels |
CN116886894B (en) * | 2023-08-04 | 2024-03-15 | 上海宇勘科技有限公司 | Picture dead pixel detection and correction method for adjacent internal column dead pixels |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7973850B2 (en) | Image processing apparatus and image processing method | |
US7636472B2 (en) | Image quality correction apparatus and image quality correction method | |
CN102835102B (en) | Image processing apparatus and control method for image processing apparatus | |
US7009644B1 (en) | Dynamic anomalous pixel detection and correction | |
TWI283833B (en) | Color image dead pixel calibration method and its system | |
EP2224726A2 (en) | Image processing apparatus, image processing method, and program | |
JP4194336B2 (en) | Semiconductor integrated circuit, defective pixel correction method, and image processor | |
CN101242542A (en) | An image detection method and device | |
KR20060080217A (en) | Method and system in a digital image processing chain for adjusting a colour balance, corresponding equipment, and software means for implementing the method | |
KR101639664B1 (en) | Photographing apparatus and photographing method | |
CN104935838A (en) | Image restoration method | |
US8619162B2 (en) | Image processing apparatus and method, and image processing program | |
KR100906606B1 (en) | Method and apparatus for processing dead pixel | |
CN116418970A (en) | Method for detecting dead pixel, chip, electronic device and computer readable storage medium | |
CN100425055C (en) | Method and system for correcting color-image bad point | |
CN114757853B (en) | Method and system for acquiring flat field correction function and flat field correction method and system | |
JP2012514371A (en) | Method for detecting and correcting defective pixels of image sensor | |
CN114170960B (en) | Custom Gamma correction method for silicon-based OLED micro-display screen | |
US7199822B2 (en) | Dynamic white balance control circuit and multi-screen display device | |
CN115334294A (en) | Video noise reduction method of local self-adaptive strength | |
KR100825821B1 (en) | Method of detecting Defect Pixel | |
JP3662514B2 (en) | Defective pixel detection and correction device, defective pixel detection and correction method, defective pixel detection and correction program, and video signal processing device | |
CN109615587A (en) | A kind of image singular point bearing calibration | |
US10991080B2 (en) | Image adjustment method and associated image processing circuit | |
CN111629121B (en) | Image adjusting method and related image processing circuit |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination |