US20090174797A1 - Method and apparatus for spatial processing of a digital image - Google Patents
- Publication number
- US20090174797A1 (application Ser. No. 12/003,922)
- Authority
- US
- United States
- Prior art keywords
- pixel
- value
- pixels
- target pixel
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/64—Circuits for processing colour signals
- H04N9/646—Circuits for processing colour signals for image enhancement, e.g. vertical detail restoration, cross-colour elimination, contour correction, chrominance trapping filters
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
- H04N23/84—Camera processing pipelines; Components thereof for processing colour signals
- H04N23/843—Demosaicing, e.g. interpolating colour pixel values
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/10—Circuitry of solid-state image sensors [SSIS]; Control thereof for transforming different wavelengths into image signals
- H04N25/11—Arrangement of colour filter arrays [CFA]; Filter mosaics
- H04N25/13—Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements
- H04N25/134—Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements based on three different wavelength filter elements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/60—Noise processing, e.g. detecting, correcting, reducing or removing noise
- H04N25/68—Noise processing, e.g. detecting, correcting, reducing or removing noise applied to defects
- H04N25/683—Noise processing, e.g. detecting, correcting, reducing or removing noise applied to defects by defect estimation performed on the scene signal, e.g. real time or on the fly detection
Definitions
- Embodiments relate to a method, apparatus and system for the spatial processing of a digital image.
- An imaging device 10 typically includes a plurality of imaging device pixel cells, each having an associated photosensor, arranged in an array 20.
- FIG. 1 illustrates a CMOS imaging device 10 which employs a column parallel readout to sample the signals generated by the imaging device pixel cells.
- In a column parallel readout, a column switch associated with a column driver 60 and associated column address decoder 70 for each column of the array selectively couples a column output line to a readout circuit while a row of the array is selected for readout by row address decoder 40 and row driver 30.
- A control circuit 50 typically controls operation of the pixel cells of the array 20 for image charge integration and signal readout.
- Each imaging device pixel cell in a CMOS imaging device 10 is sampled for a reset output signal (Vrst) and a photogenerated voltage output signal (Vsig) proportional to incident light from a scene to be captured.
- The output signals are sent to the readout circuit, which processes the imaging device pixel cell signals.
- The readout circuit typically includes a sample and hold circuit 72 for sampling and holding the reset output signal Vrst and photogenerated output signal Vsig, a differential amplifier 74 for subtracting the Vrst and Vsig signals to generate a pixel output signal (e.g., Vrst − Vsig), and an analog-to-digital converter (ADC) 77, which receives the analog pixel output signal and digitizes it.
- The output of the analog-to-digital converter 77 is supplied to an image processor 110, which processes the pixel output signals from array 20 to form a digital image.
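The readout chain just described can be sketched as follows. This is a minimal numeric illustration only; the 10-bit depth and unit reference voltage are assumptions, not values stated in the patent.

```python
def readout_pixel(v_rst, v_sig, v_ref=1.0, bits=10):
    # Differential amplifier: pixel output signal = Vrst - Vsig.
    analog = v_rst - v_sig
    # ADC: digitize against an assumed full-scale reference (10-bit here).
    full_scale = (1 << bits) - 1
    code = int(analog / v_ref * full_scale)
    # Clamp to the ADC's output range.
    return max(0, min(full_scale, code))
```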
- A color filter array may be used to detect separate color components of a scene to be captured so that the imaging device 10 may successfully reflect color details in a digitally produced color image.
- When a color filter array is placed over the pixel array 20, each imaging device pixel cell receives light through a respective color filter of the color filter array and detects only the color of its associated filter.
- A Bayer patterned color filter array 80 is a well known and commonly used color filter array that allows the passage of only red, green, or blue light.
- Imaging device pixels in an array 20 associated with a Bayer patterned color filter array 80 may be designated as red (R), green (G), or blue (B) pixels according to each pixel's associated filter.
- Color filters in a Bayer patterned color filter array 80 are arranged in a pattern of alternating rows 90, 95, 90, 95, etc., with each row having alternating colors, i.e., R,G,R,G,R,G, etc. in rows 90 and G,B,G,B,G, etc. in rows 95.
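The pixel naming used later (e.g., G34 = green at row 3, column 4) follows the Bayer layout of FIG. 2 and can be modeled with a small helper. The odd/even row-column convention below is inferred from the pixel labels appearing in the methods (G34, R35, B44, R55) and is an assumption about the figure's indexing, not stated explicitly in the patent.

```python
def bayer_color(row, col):
    # Rows of type 90 (odd rows in this convention): R,G,R,G,...
    if row % 2 == 1:
        return 'R' if col % 2 == 1 else 'G'
    # Rows of type 95 (even rows): G,B,G,B,...
    return 'G' if col % 2 == 1 else 'B'
```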
- The digital image output of the imaging device 10 using a Bayer patterned color filter array 80 is initially an array of red, green and blue image pixels, where "pixels" refers to the individual picture elements that together make up a digital image. Each pixel value is proportional to the intensity of the respective incident light from the captured scene as received by the imaging device pixel cell through an associated filter. This initial red/green/blue image is referred to as a "raw" image. A number of image processing tasks are required to transform a raw image into an image of a quality that accurately reflects the target scene by human visual standards.
- Spatial processing tasks are processing operations applied to raw image data that draw on pixel values from several pixels in a row or column of an image.
- Spatial processing includes color mosaic interpolation (i.e., demosaicing), pixel defect correction, image contrast enhancement, and image noise reduction, among other processing tasks.
- These tasks generally require the use of a plurality of line buffers to store lines of pixel values of the image so that proximal pixel values may be used in various processing calculations.
- Typically, the line buffers sequentially receive and store lines of pixel values of an image, and these pixel values are processed while in the line buffer. Processed lines of pixels exit each line buffer as a new line of unprocessed pixel values is stored.
- A line buffer stores a complete row or column of pixel values. Typically, as many as five or more separate line buffers may be used in spatial processing tasks. In a "camera-on-a-chip" implementation, these buffers occupy a significant portion of the silicon area used in the chip, which is a problem given cost and space limitations. It is therefore desirable to reduce the number of line buffers required and to keep the spatial processing operations efficient, low-cost, and simple.
- FIG. 1 is a simplified block diagram of an imaging device.
- FIG. 2 is a Bayer patterned color filter array.
- FIG. 3 is a flowchart illustrating an embodiment of a method of operating an imaging device.
- FIG. 4 is a system incorporating at least one imaging device configured to employ the method of FIG. 3.
- The term "pixel" hereinafter refers to a single picture element in a digital image.
- The term "pixel cell" hereinafter refers to a photosensitive cell in an imaging device pixel array.
- Embodiments discussed herein use the same line buffers for multiple spatial processing tasks.
- Referring to FIG. 3, a first embodiment of a spatial processing method 120 is now described.
- First, pixel values are stored in line buffers at step S1.
- The spatial processing method 120 continues with a progressive scanning of the stored data for defective pixel detection, for example, hot pixel detection, and correction at step S2.
- A demosaicing process is applied to pixels in the same line as the hot pixel correction process, delayed by a predetermined number of pixel data samples, at step S3.
- The demosaicing process also incorporates noise reduction and contrast enhancement tasks, as will be described further below.
- The processing of steps S1-S3 is repeated (steps S4, S6) until it is determined at step S4 that the scanning of all pixels of an image is complete (step S5).
- At step S1, the storing of the pixel values in line buffers may be executed using three full line buffers, or through a hardware optimization designed to realize three effective line buffers while using only two full line buffers coupled with a fixed number of temporary storage cells and applied logic, or by using another equivalent method to realize three effective line buffers.
- Any technique that provides for the storage of three effective lines of image pixel values may be used for the storage step.
- The invention could also be employed with line buffers which store more than three effective lines of image pixel values.
- Although the line buffers are described as storing rows of pixels, the method may be applied by storing columns of pixels as well.
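A three-effective-line store can be sketched with a rolling buffer. This is a minimal software illustration of the storage step only, not the patent's two-buffer-plus-temporary-cells hardware optimization.

```python
from collections import deque

class LineStore:
    """Holds the three most recently received image lines."""
    def __init__(self, depth=3):
        self.lines = deque(maxlen=depth)

    def push(self, line):
        # Appending beyond maxlen silently retires the oldest line,
        # mirroring processed lines exiting as new lines are stored.
        self.lines.append(list(line))
        # Spatial processing can begin once three lines are available.
        return len(self.lines) == self.lines.maxlen
```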
- The defective pixel detection and correction process of step S2 is now described in more detail with reference to correcting defective hot pixels, although correction of other types of pixel defects may also be performed in addition to or in lieu of defective hot pixel correction.
- The process is described for an application to Bayer patterned sampled RGB images; however, it should be understood that the process may be applied to other color systems with appropriate substitutions of color.
- The following pseudo-code expresses the hot pixel detection and correction steps for a target green pixel G34 shown in the image portion depicted in FIG. 2:
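The patent's Method 1 listing translates directly to Python; the comments map back to the pseudo-code line numbers referenced in the discussion below.

```python
def method1_green(g23, g45, g25, g43, g34):
    # Lines 1-4: surround = the greater of the two diagonal pair sums.
    surround = max(g23 + g45, g25 + g43)
    # Line 5: target exceeding surround marks excessive contrast (hot pixel).
    if g34 > surround:
        # Line 6: replace the hot pixel value with the surround value.
        g34 = surround
    return g34
```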
- a hot pixel typically exhibits high or excessive contrast in a digital image.
- To check whether a target pixel is a hot pixel, the target pixel is compared with local pixels of the same color from the rows directly above and beneath the target pixel. Excessive contrast may be determined if the target pixel exhibits a significantly higher value, preferably compared to the sum of diagonal neighboring pixels.
- In contrast to typical hot pixel detection and correction methods, no threshold value decision or complex computation is required, which allows the embodiment to be implemented at low cost.
- In Method 1, where the target pixel is G34, the neighboring same-color pixels comprise pixels G23, G45, G25 and G43.
- The target pixel G34 is compared to the sum value of the diagonal neighboring pixel pair having the greater summed value. If excessive contrast is detected, marked by the target pixel value exceeding the greater of the summed values (surround), the target pixel G34 is determined to be a hot pixel (Lines 1-5). Correction may be executed by replacing the hot pixel value with a value calculated from the same neighboring pixels (Line 6).
- The replacement value calculated in the expression above is equal to the greater of the two sum values of the diagonal neighboring pixels; however, a proportional value, e.g., 1/2, of the greater of the two sum values may be used instead.
- In a Bayer patterned sampled image, there are no red pixels in the row immediately above or immediately below a row containing red pixels, and the same is true for blue pixels. Accordingly, red or blue target pixel hot pixel detection uses the nearest neighboring same-color pixels in the same row, as expressed by the pseudo-code below for target pixels R35 and B44 (FIG. 2):
- The target pixels R35 and B44 are checked for excessive contrast by comparison to the sum of the two nearest neighboring pixels in the same row (Method 2, lines 1 and 3). If the target pixel value exceeds the sum value, the target pixel is determined to be a hot pixel and the pixel value is replaced with the sum value (Method 2, lines 2 and 4), or with a proportional value, e.g., 1/2, of the sum value.
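The Method 2 listing itself is not reproduced in this extract. The following sketch implements the comparison as described; the function name, argument order, and the particular neighbor labels in the comment are illustrative assumptions.

```python
def method2_red_blue(left, target, right):
    # Nearest same-color neighbors in the same row, e.g. R33/R37 for
    # a red target R35 (neighbor labels assumed from the Bayer layout).
    surround = left + right
    # Target exceeding the neighbor sum marks a hot pixel; correct by
    # replacing it with the sum (a proportion, e.g. 1/2, may also be used).
    return surround if target > surround else target
```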
- Step S3 (FIG. 3) is now described in more detail.
- Step S3 comprises a demosaicing process applied to pixel signals in the line buffer which have already undergone defect correction.
- The demosaicing process may also integrate contrast enhancement and noise reduction.
- The demosaicing process is explained as operating on a Bayer patterned sampled image, but may be executed on images utilizing a different color system wherein appropriate color substitutions are made with respect to the method. Due to the differences in pixel color layout, the demosaicing process is applied differently to green target pixels than to red or blue target pixels.
- The following pseudo-code expresses the demosaicing of a green target pixel G45 in a row 95 having green and blue pixels (FIG. 2):
- Each imaging device pixel cell detects a single color component of the captured scene.
- The demosaicing process interpolates missing color values for a target pixel from neighboring pixels.
- The blue color value missing from the target green pixel G45 is calculated based on neighboring blue pixels B44 and B46, which are in the same row as the target pixel G45.
- The missing red color value is calculated based on neighboring red pixels R35 and R55, which are in the rows immediately above and below the target pixel row, respectively.
- The hot pixel detection and correction process and the demosaicing process are both progressively applied through a scanning process across pixels held in a single line buffer.
- The demosaicing process is applied to a target pixel that trails the pixels undergoing hot pixel detection and correction by a number of pixel samples.
- In this example, the demosaicing process is applied to pixels a distance of two pixels from the application of the hot pixel detection and correction process. Accordingly, the target pixel, neighboring pixels in the same row as the target pixel, and pixels in the row above the target pixel have already been checked for hot pixel defects and, if necessary, corrected.
- The demosaicing process is applied to green pixel G34 (FIG. 2). Under this progression, the hot pixel detection and correction method has not yet been applied to pixels in the row beneath the target pixel.
- The first step of the demosaicing process is therefore to check the neighboring red pixel R55 in the row beneath the target pixel for hot pixel defect, and if necessary, store a temporary correction value temp_R55 for calculation purposes (Method 3, lines 1-4).
- The average local green avg_g, red avg_r, and blue avg_b values are calculated next (Method 3, lines 5-8).
- The local average color values may be calculated as a median value, mean value, adjusted average or some other form of average.
- The average blue value avg_b is equal to the average (mean) value of the two blue pixels B44, B46 located on either side of the target pixel (Method 3, line 8).
- The average red value avg_r is preferably equal to the average of the red pixel R35 located in the row directly above the target pixel G45 row and the temporarily stored hot pixel correction value temp_R55 representing the value of the red pixel R55 located in the row directly below the target pixel G45 (Method 3, line 7).
- The calculation for the representative local green value avg_g is based on an average of the four nearest green pixels G34, G36, G56, G54. Due to the increased number of samples, a more comprehensive averaging technique is utilized to determine a representative value avg_g.
- The calculation method for determining an average local value where four local samples are available is expressed herein as a function calcAvg, which receives the four local values as input parameters and returns an output value (average).
- The function calcAvg may be designed in various ways, depending on the desired bias and form of average used.
- The calcAvg function operates as expressed by the following pseudo-code:
- The average value average as calculated above is biased towards the smoother diagonal pair of the four like-colored proximal pixels, that is, biased towards the pair having the least variance. In this way, abnormally high contrast points or hard edges in the image are discounted.
- The variance of the first diagonal pair p1 and p3 is determined in Method 4, lines 4-6.
- The variance of the second diagonal pair p2 and p4 is determined in Method 4, lines 7-9.
- The pair having the least variance is accorded increased weight in calculating the average (Method 4, lines 10-13).
- The output value average is returned to the calling method.
- The returned result representing the local green average value average is averaged with the target pixel G45 value to determine a local average green value avg_g biased toward the target pixel G45.
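The Method 4 listing is likewise absent from this extract. The sketch below follows the description, using the absolute difference of each diagonal pair as its variance measure and a 3:1 weighting toward the smoother pair; both of those specifics are assumptions, since the patent states only that the least-variant pair receives increased weight.

```python
def calc_avg(p1, p2, p3, p4):
    # Variance of each diagonal pair (absolute difference as a proxy).
    var13 = abs(p1 - p3)
    var24 = abs(p2 - p4)
    # Weight the smoother pair more heavily, discounting hard edges
    # and abnormally high contrast points (assumed 3:1 weighting).
    if var13 <= var24:
        return (3 * (p1 + p3) + (p2 + p4)) / 8
    return ((p1 + p3) + 3 * (p2 + p4)) / 8
```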
- The next step is calculating the average local luminance (Method 3, line 9).
- The average green (avg_g), red (avg_r) and blue (avg_b) values calculated above may be used as parameters to calculate the average local luminance.
- Various methods may be used to calculate the average luminance.
- The average green value calculated above may be used as an approximation of average luminance. This is particularly viable in a Bayer sampled image, as green is the most represented color sample.
- Here, calcLuminance returns avg_g, which will accordingly be used as a representation of average luminance avg_Y going forward. It should be understood that local luminance may be calculated in different ways and the use of avg_g is not intended to be limiting.
- The red, green and blue color difference values Cr, Cg, Cb are calculated by subtracting the average luminance value avg_Y from each of the local average values avg_r, avg_g, avg_b.
- A local luminance value localY is defined at Method 3, line 13, as the difference between the target pixel G45 value and the green color difference value Cg.
- The color difference values and local luminance value are all used to calculate a sharpness parameter dY (Method 3, line 14).
- The sharpness parameter dY is determined by a localSharpness process.
- The basic operation of localSharpness is to stretch the effect of the local luminance with respect to the surrounding pixel values and to perform noise reduction if the local contrast is below a certain noise floor.
- The localSharpness process is preferably executed as expressed by the following pseudo-code:
- The parameters sharpness and sharpnessBase are parameters for adjusting the resolution of the image. These parameters also affect the amount of contrast boost the image will receive and may be configured to be controllable by the user or preset by the manufacturer.
- The threshold parameters thrShift and thrBias are used to determine the amount of contrast boost and to control noise reduction.
- The thrShift parameter is a signal-dependent proportional parameter that may be adjusted according to the quality of the image sensor. The higher the quality of the image sensor, the higher the thrShift parameter may be set.
- The thrBias parameter is used to specify a constant noise floor offset value. This parameter may be scaled by the user in accordance with the analog gain of the image sensor to set an appropriate noise floor.
- The difference diff between the local luminance localY and the average luminance avg_Y is determined in Method 5, lines 3-8.
- A temporary estimation of a noise value temp is calculated based on diff, avg_Y, thrShift and thrBias. If the temporary noise value temp is below the noise floor, the average luminance avg_Y is returned as the sharpness parameter dY at line 18. If the temporary noise value is above the noise floor, the sharpness parameters sharpness and sharpnessBase are applied to adjust the temp value. The temp value is then added to the average luminance avg_Y and the sum returned as the sharpness parameter dY. Accordingly, Method 5 operates to check and adjust for noise reduction prior to adjusting sharpness or contrast.
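The Method 5 listing is not reproduced in this extract. The sketch below follows the described behavior: compute the local contrast, compare it against a signal-dependent noise floor, and either flatten the detail (noise reduction) or stretch it (contrast boost). The exact arithmetic (shift-based noise floor, integer gain) and default parameter values are assumptions.

```python
def local_sharpness(local_y, avg_y, sharpness=2, sharpness_base=1,
                    thr_shift=4, thr_bias=8):
    # Local contrast relative to the surrounding average luminance.
    diff = local_y - avg_y
    # Noise floor: a signal-dependent term (thrShift) plus a constant
    # offset (thrBias), per the parameter roles described above.
    noise_floor = (avg_y >> thr_shift) + thr_bias
    if abs(diff) <= noise_floor:
        # Below the floor: treat the detail as noise and suppress it.
        return avg_y
    # Above the floor: boost the contrast of the local detail.
    return avg_y + diff * sharpness // sharpness_base
```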
- The final red, green and blue output values for the target pixel are determined by adding the sharpness parameter dY to the color difference value for each respective color, Cr, Cg, and Cb, at Method 3, lines 15-17.
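The tail of Method 3 as described above can be gathered into a minimal sketch. It assumes the green average as the luminance approximation and takes the sharpness function as an optional parameter (identity when omitted); the function name and signature are illustrative.

```python
def demosaic_target_green(g45, avg_r, avg_g, avg_b, sharpen=None):
    # Average luminance approximated by the local green average.
    avg_y = avg_g
    # Color difference values: local averages minus average luminance.
    cr, cg, cb = avg_r - avg_y, avg_g - avg_y, avg_b - avg_y
    # Local luminance at the target pixel.
    local_y = g45 - cg
    # Sharpness parameter dY (identity stand-in when no sharpener given).
    d_y = sharpen(local_y, avg_y) if sharpen else local_y
    # Final R, G, B outputs: dY plus each color difference value.
    return d_y + cr, d_y + cg, d_y + cb
```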
- A similar process may be used for a target green pixel in a line of pixels containing green and red pixels.
- The demosaicing process of Method 6 is applied to a target pixel (R35) that is a number of pixel samples following the application of the hot pixel correction.
- Although hot pixel detection has not yet been applied at this point to pixels in the row beneath the target pixel, which in this example would include pixels B44, G45 and B46, it is not necessary to check these pixels for hot pixel defects.
- The calcAvg method (Method 4) described above is biased against abnormally high contrast; therefore, potential hot pixels are sufficiently discounted.
- The average local red pixel value avg_r is calculated at line 1 based on the neighboring red pixels R33 and R37.
- The calcC3Avg method returns an average value of three input pixel values, which may be biased toward the target pixel R35 (p2), as shown below:
- The neighboring pixel values R33 and R37 are checked against high and low limit values (high and low).
- The high and low limit values are defined as proportional to the target pixel R35 (p2) at lines 3 and 4. If the R33 or R37 pixel value is higher than the upper limit value, then the upper limit value is used to calculate the average. If the R33 or R37 pixel value is lower than the lower limit value, then the lower limit value is used to calculate the average. If the R33 or R37 pixel value is between the upper and lower limits, then the pixel value itself is used to calculate the average. The accuracy of the calculated average value may therefore be increased by limiting overshooting or undershooting pixel values where a sharp transition occurs between pixels in an image.
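The calcC3Avg listing is absent from this extract. Below is a sketch of the described clamping; the limit factors (0.5x and 1.5x of the target) and the 2x weight on the target pixel are assumptions, as the patent gives only that the limits are proportional to p2 and that the result may be biased toward it.

```python
def calc_c3_avg(p1, p2, p3):
    # High and low limits proportional to the target pixel p2
    # (the 0.5x / 1.5x factors are assumed).
    low, high = p2 * 0.5, p2 * 1.5
    clamp = lambda v: min(max(v, low), high)
    # Clamping suppresses overshoot/undershoot at sharp transitions;
    # the target is weighted double (assumed bias toward p2).
    return (clamp(p1) + 2 * p2 + clamp(p3)) / 4
```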
- The average local blue avg_b and average local green avg_g values are determined next in lines 2-3, using the previously described calcAvg method (Method 4).
- Average luminance avg_Y is determined next as previously described and used to calculate the color difference values Cr, Cb, Cg.
- A local luminance value localY is calculated based on a difference between the target pixel value and the red color difference value.
- A sharpness parameter dY is calculated using the localSharpness method described above. The sharpness parameter is added to each color difference value respectively to produce the final adjusted output.
- A target blue pixel may be demosaiced in a similar fashion.
- The above-described spatial processing method 120 requires only line buffer storage of three effective lines to perform hot pixel correction, demosaicing, contrast enhancement, and noise reduction, and may be implemented in various image processing systems.
- The processing described herein can be performed by an on-chip image processor 110, as shown in FIG. 1, or by a separate processor which receives the pixel information.
- FIG. 4 shows an image processor system 400, for example, a still or video digital camera system, which may implement a demosaicing process in accordance with embodiments described herein.
- The imaging device 10 may receive control or other data from system 400.
- The imaging device 10 receives light on pixel array 20 through the lens 408 when shutter release button 416 is pressed.
- System 400 includes a processor 402 having a central processing unit (CPU) that communicates with various devices over a bus 404, including with imaging device 10. Some of the devices connected to the bus 404 provide communication into and out of the system 400, such as one or more input/output (I/O) devices 406, which may include input setting and display circuits.
- Other devices connected to the bus 404 provide memory, illustratively including a random access memory (RAM) 410 , and one or more peripheral memory devices such as a removable, e.g., flash, memory drive 414 .
- The imaging device 10 may be constructed as shown in FIG. 1.
- The imaging device 10 may, in turn, be coupled to processor 402 for image processing or other image handling operations.
- Examples of processor based systems, which may employ the imaging device 10 include, without limitation, computer systems, camera systems, scanners, machine vision systems, vehicle navigation systems, video telephones, surveillance systems, auto focus systems, star tracker systems, motion detection systems, image stabilization systems, and others.
Description
- Embodiments relate to a method, apparatus and system for the spatial processing of a digital image.
- An
imaging device 10, as shown inFIG. 1 , typically includes a plurality of imaging device pixel cells, each having an associated photosensor, arranged in anarray 20.FIG. 1 illustrates aCMOS imaging device 10 which employs a column parallel readout to sample the signals generated by the imaging device pixel cells. In a column parallel readout, a column switch associated with acolumn driver 60 and associatedcolumn address decoder 70 for each column of the array selectively couples a column output line to a readout circuit while a row of the array is selected for readout byrow address decoder 40 androw driver 30. - A
control circuit 50 typically controls operation of the pixel cells of thearray 20 for image charge integration and signal readout. Each imaging device pixel cell in aCMOS imaging device 10 is sampled for a reset output signal (Vrst) and a photogenerated voltage output signal (Vsig) proportional to incident light from a scene to be captured. The output signals are sent to the readout circuit which processes the imaging device pixel cell signals. The readout circuit typically includes a sample and holdcircuit 72 for sampling and holding the reset output signal Vrst and photogenerated output signal Vsig, adifferential amplifier 74 for subtracting the Vrst and Vsig signals to generate a pixel output signal (e.g., Vrst−Vsig), and an analog-to-digital converter (ADC) 77, which receives the analog pixel output signal and digitizes it. The output of the analog-to-digital converter 77 is supplied to animage processor 110, which processes the pixel output signals fromarray 20 to form a digital image. - A color filter array may be used to detect separate color components of a scene to be captured so that the
imaging device 10 may successfully reflect color details in a digitally produced color image. When a color filter array is placed over thepixel array 20, each imaging device pixel cell receives light through a respective color filter of the color filter array and detects only the color of its associated filter. - A Bayer patterned
color filter array 80, illustrated inFIG. 2 , is a well known and commonly used color filter array that allows the passage of only red, green, or blue light. Imaging device pixels in anarray 20 associated with a Bayer patternedcolor filter array 80 may be designated as red (R), green (G), or blue (B) pixels according to each pixel's associated filter. Color filters in a Bayer patternedcolor filter array 80 are arranged in a pattern ofalternating rows rows 90 and G,B,G,B,G, etc. inrows 95. - The digital image output of the
imaging device 10 using a Bayer patternedcolor filter array 80 is initially an array of red, green and blue image pixels, where “pixels” refers to individual picture elements that together comprise a digital image. Each pixel value is proportional to the intensity of the respective incident light from the captured scene as received by the imaging device pixel cell through an associated filter. This initial red/green/blue image is referred to as a “raw” image. A number of image processing tasks are required to transform a raw image into an image of a quality that accurately reflects the target scene by human visual standards. - Spatial processing tasks are a type of processing applied to raw image data which acquires pixel values from several pixels in a row or column of an image. Spatial processing includes color mosaic interpolation (i.e., demosaicing), pixel defect correction, image contrast enhancement, and image noise reduction, among other processing tasks. These tasks generally require the use of a plurality of line buffers to store lines of pixel values of the image so that proximal pixel values may be used in various processing calculations. Typically, the line buffers sequentially receive and store lines of pixel values in an image and these pixel values are processed while in the line buffer. Process lines of pixels exit each line buffer as a new next line of unprocessed pixel values are stored.
- A line buffer stores a complete row or column of pixel values. Typically, as many as five or more separate line buffers may be used in spatial processing tasks. In a “camera-on-a-chip”implementation, the number of buffers occupies a significant portion of the silicon area used in the chip, which can be a problem due to cost and space limitations.
- It is desirable to reduce the required number of line buffers for spatial processing tasks. It is also desirable for the spatial processing tasks to be efficient, low-cost and simple. What is needed is a spatial processing method that requires fewer line buffers and provides increased efficiency and decreased complexity of the spatial processing operations used to achieve, for example, defective pixel correction, interpolation, contrast enhancement and noise reduction.
-
FIG. 1 is a simplified block diagram of an imaging device. -
FIG. 2 is a Bayer patterned color filter array. -
FIG. 3 is a flowchart illustrating an embodiment of a method of operating an imaging device. -
FIG. 4 is a system incorporating at least one imaging device configured to employ the method ofFIG. 3 . - In the following detailed description, reference is made to the accompanying drawings which form a part hereof, and which illustrate specific embodiments. These embodiments are described in sufficient detail to enable those of ordinary skill in the art to make and use them. It is also understood that structural, logical, or procedural changes may be made to the specific embodiments disclosed herein.
- As described above, the term “pixel” hereinafter refers to a single picture element in a digital image.
- The term “pixel cell” hereinafter refers to a photosensitive cell in an imaging device pixel array.
- Embodiments discussed herein use the same line buffers for multiple spatial processing tasks. Referring to
FIG. 3 , a first embodiment of aspatial processing method 120 is now described. First, pixel values are stored in line buffers at step S1. Thespatial processing method 120 continues with a progressive scanning of the stored data for defective pixel detection, for example, hot pixel detection, and correction at step S2. A demosaicing process is applied to pixels in the same line as the hot pixel correction process, delayed by a predetermined number of pixel data samples at step S3. The demosaicing process also incorporates noise reduction and contrast enhancement tasks, as will be described further below. The processing of steps S1-S3 are repeated (steps S4, S6) until it is determined at step S4 that the scanning of all pixels of an image is complete (step S5). - At step S1, the storing of the pixel values in line buffers may be executed using three full line buffers, or may be executed through hardware optimization designed to realize three effective line buffers while using only two full line buffers coupled with a fixed number of temporary storage cells and applied logic, or by using another equivalent method to realize three effective line buffers. It should be understood that any technique that provides for the storage of three effective lines of image pixel values may be used for the storage step. However, the invention could be employed with line buffers which store more than three effective lines of image pixel values. It should also be understood that although the line buffers are described as storing rows of pixels, the method may be applied by storing columns of pixels as well.
- The defective pixel detection and correction process of step S2 is now described in more detail with reference to correcting defective hot pixels, although correction of other types of pixel defects may also be performed in addition to or in lieu of defective hot pixel correction. The process is described for an application to Bayer patterned sampled RGB images; however, it should be understood that the process may be applied to other color systems with appropriate substitutions of color. The following pseudo-code expresses the hot pixel detection and correction steps for a target green pixel G34 shown in the image portion depicted in
FIG. 2 : -
(Method 1: green pixel hot pixel detection and correction)
Line 1: if ((G23 + G45) > (G25 + G43))
Line 2:   surround = G23 + G45;
Line 3: else
Line 4:   surround = G25 + G43;
Line 5: if (G34 > surround)
Line 6:   G34 = surround;
- A hot pixel typically exhibits high or excessive contrast in a digital image. In the first embodiment, to check whether a target pixel is a hot pixel, the target pixel is compared with local pixels of the same color from the rows directly above and beneath the target pixel. Excessive contrast may be determined if the target pixel exhibits a significantly higher value, preferably compared to the sum of diagonal neighboring pixels. Unlike typical hot pixel detection and correction methods, no threshold value decision or complex computation is required, which allows the embodiment to be implemented at low cost.
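The detection and correction rule above can be rendered as a small C function. The function name and argument order are illustrative assumptions; the logic follows the pseudo-code of Method 1, lines 1-6.

```c
#include <assert.h>

/* Sketch of Method 1: the target green pixel is compared with the
 * larger of the two diagonal-pair sums; if it exceeds that sum it is
 * treated as a hot pixel and replaced by the sum. Arguments are the
 * target value and its four diagonal same-color neighbors. */
int correct_green_hot_pixel(int g34, int g23, int g45, int g25, int g43) {
    int surround;
    if ((g23 + g45) > (g25 + g43))
        surround = g23 + g45;   /* larger diagonal pair sum */
    else
        surround = g25 + g43;
    if (g34 > surround)         /* excessive contrast: hot pixel */
        g34 = surround;         /* replace with the surround value */
    return g34;
}
```

Note that because surround is a *sum* of two neighbors, a normal pixel in a flat region (roughly half the sum) is never flagged; only a value exceeding both neighbors combined is corrected.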
- In
method 1, where the target pixel is pixel G34, the neighboring same-color pixels comprise pixels G23, G45, G25 and G43. The target pixel G34 is compared to the greater of the two sums of its diagonal neighboring pixels. If excessive contrast is detected, indicated by the target pixel value exceeding this greater summed value (surround), the target pixel G34 is determined to be a hot pixel (Lines 1-5). Correction may be executed by replacing the hot pixel value with a value calculated from the same neighboring pixels (Line 6). The replacement value calculated in the expression above is equal to the greater of the two sums of the diagonal neighboring pixels; however, a proportional value, e.g., ½, of that sum may be used instead. - In a Bayer patterned sampled image, there are no red pixels in the row immediately above or immediately below a row containing red pixels. The same is true for blue pixels. Accordingly, the method for red or blue target pixel hot pixel detection uses the nearest neighboring same-color pixels in the same row, as expressed by the pseudo-code below for target pixels R35 and B44 (
FIG. 2 ): -
(Method 2: red and blue hot pixel detection and correction)
Line 1: if (R35 > (R33 + R37))
Line 2:   R35 = R33 + R37;
Line 3: if (B44 > (B42 + B46))
Line 4:   B44 = B42 + B46;
- In method 2, the target pixels R35 and B44 are checked for excessive contrast by comparison to the sum of the two nearest neighboring same-color pixels in the same row (Method 2, lines 1 and 3). If the target pixel value exceeds the sum value, the target pixel is determined to be a hot pixel and the pixel value is replaced with the sum value (Method 2, lines 2 and 4), or a proportional value, e.g., ½, of the sum value. - Step S3 (
FIG. 3) is now described in more detail. Step S3 comprises a demosaicing process applied to pixel signals in the line buffer which have already undergone defect correction. The demosaicing process may also have integrated contrast enhancement and noise reduction. The demosaicing process is explained as operating on a Bayer patterned sampled image, but may be executed on images utilizing a different color system wherein appropriate color substitutions are made. Due to the differences in pixel color layout, the demosaicing process is applied differently to green target pixels than to red or blue target pixels. The following pseudo-code expresses the demosaicing of a green target pixel G45 in a row 95 having green and blue pixels (FIG. 2 ):
(Method 3: green target pixel demosaicing)
Line 1: if (R55 > (R53 + R57))
Line 2:   temp_R55 = R53 + R57;
Line 3: else
Line 4:   temp_R55 = R55;
Line 5: avg_g = calcAvg(G34, G36, G56, G54);
Line 6: avg_g = (G45 + avg_g) / 2;
Line 7: avg_r = (R35 + temp_R55) / 2;
Line 8: avg_b = (B44 + B46) / 2;
Line 9: avg_Y = calcLuminance(avg_r, avg_g, avg_b);
Line 10: Cg = avg_g - avg_Y;
Line 11: Cr = avg_r - avg_Y;
Line 12: Cb = avg_b - avg_Y;
Line 13: localY = G45 - Cg;
Line 14: dY = localSharpness(localY, avg_Y, sharpness, sharpnessBase, lumaThrShift, lumaThrBias);
Line 15: r = Cr + dY;
Line 16: g = Cg + dY;
Line 17: b = Cb + dY;
- As previously explained, each imaging device pixel cell detects a single color component of the captured scene. The demosaicing process interpolates missing color values for a target pixel from neighboring pixels. In method 3, the blue color value missing from the target green pixel G45 is calculated based on the neighboring blue pixels B44 and B46, which are in the same row as the target pixel G45. The missing red color value is calculated based on the neighboring red pixels R35 and R55, which are in the rows immediately above and below the target pixel row, respectively.
- The hot pixel detection and correction process and the demosaicing process are both progressively applied through a scanning process across pixels held in a single line buffer. The demosaicing process is applied to a target pixel that trails the pixels undergoing hot pixel detection and correction by a number of pixel samples. Preferably, the demosaicing process is applied at a distance of two pixels behind the application of the hot pixel detection and correction process. Accordingly, the target pixel, neighboring pixels in the same row as the target pixel, and pixels in the row above the target pixel have already been checked for hot pixel defects and, if necessary, corrected. For example, as the hot pixel detection and correction process is applied to each pixel in a row, when it is applied to green pixel G36, the demosaicing process is applied to green pixel G34 (
FIG. 2 ). Under this progression, the hot pixel detection and correction method has not yet been applied to pixels in the row beneath the target pixel. The first step of the demosaicing process is therefore to check the neighboring red pixel R55 in the row beneath the target pixel for a hot pixel defect and, if necessary, store a temporary correction value temp_R55 for calculation purposes (Method 3, lines 1-4). - The average local green avg_g, red avg_r, and blue avg_b values are calculated next (Method 3, lines 5-8). The local average color values may be calculated as a median value, mean value, adjusted average or some other form of average. Preferably, the average blue value avg_b is equal to the average (mean) value of the two blue pixels B44, B46 located on either side of the target pixel (Method 3, line 8). Similarly, the average red value avg_r is preferably equal to the average of the red pixel R35 located in the row directly above the target pixel G45 row and the temporarily stored hot pixel correction value temp_R55 representing the value of the red pixel R55 located in the row directly below the target pixel G45 (Method 3, line 7).
- The calculation for the representative local green value avg_g is based on an average of the four nearest green pixels G34, G36, G56, G54. Due to the increased number of samples, a more comprehensive averaging technique is utilized to determine a representative value avg_g. The calculation method for determining an average local value where four local samples are available is expressed herein as a function calcAvg, which receives the four local values as input parameters and returns an output value (average). The function calcAvg may be designed in various ways, depending on the desired bias and form of average used. Preferably, calcAvg operates as expressed by the following pseudo-code:
-
(Method 4: calculate local average with four local values)
Line 1: calcAvg(int p1, int p2, int p3, int p4) {
Line 2:   int d1, d2;
Line 3:   int average;
Line 4:   d1 = p1 - p3;
Line 5:   if (d1 < 0)
Line 6:     d1 = -d1;
Line 7:   d2 = p2 - p4;
Line 8:   if (d2 < 0)
Line 9:     d2 = -d2;
Line 10:  if (d1 < d2)
Line 11:    average = ((p1 + p3) * 3 + (p2 + p4) * 1) / 8;
Line 12:  else
Line 13:    average = ((p1 + p3) * 1 + (p2 + p4) * 3) / 8;
Line 14:  return(average);
Line 15: }
- The average value average as calculated above is biased towards the smoother diagonal pair of the four like-colored proximal pixels, that is, towards the pair having the least variance. In this way, abnormally high contrast points or hard edges in the image are discounted. The variance of the first diagonal pair p1 and p3 is determined in method 4, lines 4-6. The variance of the second diagonal pair p2 and p4 is determined in method 4, lines 7-9. The pair having the least variance is accorded increased weight in calculating the average (Method 4, lines 10-13). The output value average is returned to the calling method.
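Method 4 translates almost directly into compilable C. The rendering below is a sketch that keeps the pseudo-code's integer arithmetic and its 3:1 weighting toward the smoother diagonal pair.

```c
#include <assert.h>

/* Method 4 (calcAvg): average of four like-colored diagonal neighbors,
 * weighted 3:1 toward the diagonal pair with the smaller absolute
 * difference (i.e., the pair with the least variance). */
int calcAvg(int p1, int p2, int p3, int p4) {
    int d1 = p1 - p3;
    if (d1 < 0) d1 = -d1;       /* |p1 - p3| */
    int d2 = p2 - p4;
    if (d2 < 0) d2 = -d2;       /* |p2 - p4| */
    int average;
    if (d1 < d2)                /* first pair is smoother: weight it 3x */
        average = ((p1 + p3) * 3 + (p2 + p4) * 1) / 8;
    else
        average = ((p1 + p3) * 1 + (p2 + p4) * 3) / 8;
    return average;
}
```

For example, with one diagonal pair at (10, 12) and the other at (100, 20), the smooth pair dominates and the result stays near its level rather than being pulled toward the outlier.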
- Referring back to method 3, lines 5-6, the returned result representing local green average value average is averaged with the target pixel G45 value to determine a local average green value avg_g biased toward the target pixel G45.
- Continuing in method 3, the next step is calculating the average local luminance (Method 3, line 9). The average green (avg_g), red (avg_r) and blue (avg_b) values calculated above may be used as parameters to calculate the average local luminance. Various methods may be used to calculate the average luminance. To simplify the demosaicing process, the average green value calculated above may be used as an approximation of average luminance. This is particularly viable in a Bayer sampled image, as green is the most represented color sample. In this case, calcLuminance returns avg_g, which will accordingly be used as a representation of average luminance avg_Y going forward. It should be understood that local luminance may be calculated in different ways and the use of avg_g is not intended to be limiting.
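Two variants of calcLuminance illustrate the simplification: the first simply returns avg_g as the patent describes, while the second uses a conventional integer luma weighting for comparison. The specific 77/150/29 coefficients (an integer approximation of 0.299/0.587/0.114) are an assumption for illustration, not taken from the patent.

```c
#include <assert.h>

/* Patent's simplification: green approximates luminance in a Bayer
 * image, since green is the most represented color sample. */
int calcLuminance_simple(int avg_r, int avg_g, int avg_b) {
    (void)avg_r;
    (void)avg_b;
    return avg_g;
}

/* Hypothetical alternative: integer-weighted luma,
 * (77 R + 150 G + 29 B) / 256, where 77 + 150 + 29 = 256. */
int calcLuminance_weighted(int avg_r, int avg_g, int avg_b) {
    return (77 * avg_r + 150 * avg_g + 29 * avg_b) >> 8;
}
```

The simple variant costs nothing in hardware; the weighted variant would only matter where the red/blue averages diverge strongly from green.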
- Next, in method 3, lines 10-12, red, green and blue color difference values Cr, Cg, Cb are calculated by subtracting the average luminance value avg_Y from each of the local average values avg_g, avg_r, avg_b. A local luminance value localY is defined at method 3, line 13, as the difference between the target pixel G45 value and the green color difference value Cg. The local luminance value and the average luminance are then used to calculate a sharpness parameter dY (Method 3, line 14), which is combined with the color difference values to produce the output.
- The sharpness parameter dY is determined by a localSharpness process. The basic operation of localSharpness is to stretch the effect of the local luminance with respect to the surrounding pixel values and to perform noise reduction if the local contrast is below a certain noise floor. The localSharpness process is preferably executed as expressed by the following pseudo-code:
-
(Method 5: sharpness and noise reduction)
Line 1: localSharpness(int localY, int avg_Y, int sharpness, int sharpnessBase, int thrShift, int thrBias) {
Line 2:   int diff, temp, sign; // integer variables
Line 3:   diff = localY - avg_Y;
Line 4:   if (diff < 0) {
Line 5:     diff = -diff;
Line 6:     sign = 1;
Line 7:   } else
Line 8:     sign = 0;
Line 9:   temp = diff - ((avg_Y >> thrShift) + thrBias);
Line 10:  if (temp < 0)
Line 11:    temp = 0;
Line 12:  else {
Line 13:    temp *= sharpness;
Line 14:    temp /= sharpnessBase;
Line 15:  }
Line 16:  if (sign)
Line 17:    temp = -temp;
Line 18:  return(temp + avg_Y);
Line 19: }
- The parameters sharpness and sharpnessBase are parameters for adjusting the resolution of the image. These parameters also affect the amount of contrast boost the image will receive and may be configured to be controllable by the user or preset by the manufacturer.
- The threshold parameters thrShift and thrBias are used to determine the amount of contrast boost and to control noise reduction. The thrShift parameter is a signal-dependent proportional parameter that may be adjusted according to the quality of the image sensor. The higher the quality of the image sensor, the higher the thrShift parameter may be set. The thrBias parameter is used to specify a constant noise floor offset value. This parameter may be scaled by the user in accordance with the analog gain of the image sensor to set an appropriate noise floor.
- In operation, the difference diff between the local luminance localY and the average luminance avg_Y is first determined in method 5, lines 3-8. At line 9, a temporary estimation of a noise value temp is calculated based on diff, avg_Y, thrShift and thrBias. If the temporary noise value temp is below the noise floor, the average luminance avg_Y is returned unchanged as the sharpness parameter dY at line 18. If the temporary noise value is above the noise floor, the sharpness parameters sharpness and sharpnessBase are applied to adjust the temp value. The adjusted temp value, with its original sign restored (lines 16-17), is added to the average luminance avg_Y and the sum is returned as the sharpness parameter dY. Accordingly, method 5 operates to check and adjust for noise reduction prior to adjusting sharpness or contrast.
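Method 5 likewise compiles essentially as written; the C rendering below keeps the shift-based noise floor (avg_Y >> thrShift) + thrBias and the sharpness/sharpnessBase scaling.

```c
#include <assert.h>

/* Method 5 (localSharpness): contrast below the noise floor is
 * flattened (the function returns avg_Y); contrast above it is scaled
 * by sharpness / sharpnessBase and re-applied around avg_Y with its
 * original sign. */
int localSharpness(int localY, int avg_Y, int sharpness,
                   int sharpnessBase, int thrShift, int thrBias) {
    int diff = localY - avg_Y;
    int sign = 0;
    if (diff < 0) {
        diff = -diff;           /* work with |diff|, remember sign */
        sign = 1;
    }
    int temp = diff - ((avg_Y >> thrShift) + thrBias);
    if (temp < 0) {
        temp = 0;               /* below the noise floor: suppress */
    } else {
        temp *= sharpness;      /* above the floor: apply gain */
        temp /= sharpnessBase;
    }
    if (sign)
        temp = -temp;           /* restore original direction */
    return temp + avg_Y;        /* dY */
}
```

With thrShift = 4 and thrBias = 10, a local deviation of 2 around an average of 100 sits under the floor of 16 and is flattened, while a deviation of 50 is amplified.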
- Referring back to method 3 (green target pixel demosaicing), the final red, green and blue output values for the target pixel are determined by adding the sharpness parameter dY to the color difference value for each respective color, Cr, Cg, and Cb, at method 3, lines 15-17. A similar process may be used for a target green pixel in a line of pixels containing green and red pixels.
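The overall green-target flow can be illustrated with a deliberately simplified C sketch: it substitutes plain means for calcAvg, uses avg_g as the luminance approximation, and omits the localSharpness step (dY = localY), so it shows only the color-difference arithmetic of Method 3, lines 5-17. The struct and all names here are illustrative assumptions, not the patent's implementation.

```c
#include <assert.h>

typedef struct { int r, g, b; } RGB;

/* Simplified Method 3 sketch for a green target pixel:
 *   g45    - target green value
 *   avg_g4 - average of the four diagonal green neighbors
 *   r35/r55, b44/b46 - red and blue neighbors (r55 already
 *                      defect-checked, per Method 3 lines 1-4). */
RGB demosaic_green(int g45, int avg_g4, int r35, int r55,
                   int b44, int b46) {
    int avg_g = (g45 + avg_g4) / 2;   /* bias toward the target pixel */
    int avg_r = (r35 + r55) / 2;
    int avg_b = (b44 + b46) / 2;
    int avg_Y = avg_g;                /* calcLuminance ~ avg_g */
    int Cg = avg_g - avg_Y;           /* = 0 under this approximation */
    int Cr = avg_r - avg_Y;
    int Cb = avg_b - avg_Y;
    int localY = g45 - Cg;
    int dY = localY;                  /* localSharpness step omitted */
    RGB out = { Cr + dY, Cg + dY, Cb + dY };
    return out;
}
```

In a flat region the sketch reproduces the local averages exactly: the color differences carry the chroma, and dY carries the (unsharpened) local luminance.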
- The following pseudo-code describes the process for demosaicing a red pixel, for example R35, in a line of pixels containing red and green pixels:
-
(Method 6: red target pixel demosaicing)
Line 1: avg_r = calcC3Avg(R33, R35, R37);
Line 2: avg_g = calcAvg(G34, G25, G36, G45);
Line 3: avg_b = calcAvg(B24, B26, B46, B44);
Line 4: avg_Y = calcLuminance(avg_r, avg_g, avg_b);
Line 5: Cg = avg_g - avg_Y;
Line 6: Cr = avg_r - avg_Y;
Line 7: Cb = avg_b - avg_Y;
Line 8: localY = R35 - Cr;
Line 9: dY = localSharpness(localY, avg_Y, sharpness, sharpnessBase, lumaThrShift, lumaThrBias);
Line 10: r = Cr + dY;
Line 11: g = Cg + dY;
Line 12: b = Cb + dY;
- Similar to the demosaicing process described in method 3 above, the demosaicing process of method 6 is applied to a target pixel (R35) that trails the application of the hot pixel correction by a number of pixel samples. Although hot pixel detection has not yet been applied at this point to pixels in the row beneath the target pixel, which in this example would include pixels B44, G45 and B46, it is not necessary to check for hot pixel defects in these pixels: the calcAvg method (Method 4) described above is biased against abnormally high contrast, so potential hot pixels are sufficiently discounted.
- Referring to method 6, the average local red pixel value avg_r is calculated at
line 1 based on the target pixel R35 and its neighboring red pixels R33 and R37. The calcC3Avg method returns an average of its three input pixel values, biased toward the target pixel R35 (p2), as shown below:
(Method 7: calculate average, three pixels, same row)
Line 1: calcC3Avg(int p1, int p2, int p3) {
Line 2:   int high, low, tmp1, tmp3, average;
Line 3:   low = p2 >> 2;
Line 4:   high = p2 + p2 - low;
Line 5:   if (p1 > high)
Line 6:     tmp1 = high;
Line 7:   else if (p1 < low)
Line 8:     tmp1 = low;
Line 9:   else
Line 10:    tmp1 = p1;
Line 11:  if (p3 > high)
Line 12:    tmp3 = high;
Line 13:  else if (p3 < low)
Line 14:    tmp3 = low;
Line 15:  else
Line 16:    tmp3 = p3;
Line 17:  average = ((p2 * 2) + tmp1 + tmp3) / 4;
Line 18:  return(average);
Line 19: }
- In lines 3-16, the neighboring pixel values R33 and R37 (p1 and p3, respectively) are checked against high and low limit values (high and low). The limit values are defined as proportional to the target pixel R35 (p2) at lines 3 and 4. If the R33 or R37 pixel value is higher than the upper limit value, the upper limit value is used to calculate the average. If the R33 or R37 pixel value is lower than the lower limit value, the lower limit value is used. If the R33 or R37 pixel value falls between the two limits, the pixel value itself is used. The accuracy of the calculated average value may therefore be increased by limiting overshooting or undershooting pixel values where a sharp transition occurs between pixels in an image.
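Method 7 also renders directly into compilable C. This sketch preserves the clamp limits low = p2/4 and high = 2*p2 - p2/4 and the 2x center weighting.

```c
#include <assert.h>

/* Method 7 (calcC3Avg): three-pixel same-row average biased toward
 * the center pixel p2. The outer pixels are clamped to
 * [p2 >> 2, 2*p2 - (p2 >> 2)] to limit over/undershoot at edges. */
int calcC3Avg(int p1, int p2, int p3) {
    int low  = p2 >> 2;             /* lower clamp: p2 / 4 */
    int high = p2 + p2 - low;       /* upper clamp: 2*p2 - p2/4 */
    int tmp1 = p1;
    if (p1 > high)      tmp1 = high;
    else if (p1 < low)  tmp1 = low;
    int tmp3 = p3;
    if (p3 > high)      tmp3 = high;
    else if (p3 < low)  tmp3 = low;
    return ((p2 * 2) + tmp1 + tmp3) / 4;  /* center weighted 2x */
}
```

Even with wildly over- and undershooting neighbors (e.g., 1000 and 0 around a center of 100), the clamps keep the result anchored at the center value.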
- Referring back to method 6, the average local green avg_g and average local blue avg_b values are determined next in lines 2 and 3, using the previously described calcAvg method (Method 4). Average luminance avg_Y is then determined as previously described and used to calculate the color difference values Cr, Cb, Cg. A local luminance value localY is calculated as the difference between the target pixel value and the red color difference value. A sharpness parameter dY is calculated using the localSharpness method described above. The sharpness parameter is added to each color difference value respectively to produce the final adjusted output. A target blue pixel may be demosaiced in a similar fashion.
- The above-described
spatial processing method 120 requires line buffers storing only three effective lines to perform hot pixel correction, demosaicing, contrast enhancement, and noise reduction, and may be implemented in various image processing systems. The processing described herein can be performed by an on-chip image processor 110, as shown in FIG. 1, or by a separate processor which receives the pixel information.
FIG. 4 shows an image processor system 400, for example, a still or video digital camera system, which may implement a demosaicing process in accordance with embodiments described herein. The imaging device 10 may receive control or other data from system 400. The imaging device 10 receives light on pixel array 20 through the lens 408 when shutter release button 416 is pressed. System 400 includes a processor 402 having a central processing unit (CPU) that communicates with various devices over a bus 404, including with imaging device 10. Some of the devices connected to the bus 404 provide communication into and out of the system 400, such as one or more input/output (I/O) devices 406, which may include input setting and display circuits. Other devices connected to the bus 404 provide memory, illustratively including a random access memory (RAM) 410 and one or more peripheral memory devices such as a removable, e.g., flash, memory drive 414. The imaging device 10 may be constructed as shown in FIG. 1. The imaging device 10 may, in turn, be coupled to processor 402 for image processing or other image handling operations. Examples of processor-based systems which may employ the imaging device 10 include, without limitation, computer systems, camera systems, scanners, machine vision systems, vehicle navigation systems, video telephones, surveillance systems, auto focus systems, star tracker systems, motion detection systems, image stabilization systems, and others. - While embodiments have been described in detail, it should be readily understood that the claimed invention is not limited to the disclosed embodiments. Rather, the embodiments can be modified to incorporate any number of variations, alterations, substitutions or equivalent arrangements not heretofore described. Accordingly, the invention is not limited by the foregoing description but only by the scope of the attached claims.
Claims (25)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/003,922 US8035704B2 (en) | 2008-01-03 | 2008-01-03 | Method and apparatus for processing a digital image having defective pixels |
Publications (2)
Publication Number | Publication Date |
---|---|
US20090174797A1 true US20090174797A1 (en) | 2009-07-09 |
US8035704B2 US8035704B2 (en) | 2011-10-11 |
Family
ID=40844262
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/003,922 Active 2030-08-12 US8035704B2 (en) | 2008-01-03 | 2008-01-03 | Method and apparatus for processing a digital image having defective pixels |
Country Status (1)
Country | Link |
---|---|
US (1) | US8035704B2 (en) |
Cited By (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100177203A1 (en) * | 2009-01-15 | 2010-07-15 | Aptina Imaging Corporation | Apparatus and method for local contrast enhanced tone mapping |
US20110019034A1 (en) * | 2009-07-27 | 2011-01-27 | Chan-Min Chou | Method for eliminating image noise and apparatus using the method |
WO2012044432A1 (en) * | 2010-09-30 | 2012-04-05 | Apple Inc. | Image signal processor line buffer configuration for processing raw image data |
US20120300021A1 (en) * | 2011-04-11 | 2012-11-29 | Honda Elesys Co., Ltd. | On-board camera system |
US20130050528A1 (en) * | 2011-08-30 | 2013-02-28 | Wei Hsu | Adaptive pixel compensation method |
US20130076939A1 (en) * | 2010-06-02 | 2013-03-28 | Shun Kaizu | Image processing apparatus, image processing method, and program |
US20130321679A1 (en) * | 2012-05-31 | 2013-12-05 | Apple Inc. | Systems and methods for highlight recovery in an image signal processor |
US8817120B2 (en) | 2012-05-31 | 2014-08-26 | Apple Inc. | Systems and methods for collecting fixed pattern noise statistics of image data |
US8872946B2 (en) | 2012-05-31 | 2014-10-28 | Apple Inc. | Systems and methods for raw image processing |
US8917336B2 (en) | 2012-05-31 | 2014-12-23 | Apple Inc. | Image signal processing involving geometric distortion correction |
US8953882B2 (en) | 2012-05-31 | 2015-02-10 | Apple Inc. | Systems and methods for determining noise statistics of image data |
US9025867B2 (en) | 2012-05-31 | 2015-05-05 | Apple Inc. | Systems and methods for YCC image processing |
US9031319B2 (en) | 2012-05-31 | 2015-05-12 | Apple Inc. | Systems and methods for luma sharpening |
US9077943B2 (en) | 2012-05-31 | 2015-07-07 | Apple Inc. | Local image statistics collection |
US9105078B2 (en) | 2012-05-31 | 2015-08-11 | Apple Inc. | Systems and methods for local tone mapping |
US9131196B2 (en) | 2012-05-31 | 2015-09-08 | Apple Inc. | Systems and methods for defective pixel correction with neighboring pixels |
US9142012B2 (en) | 2012-05-31 | 2015-09-22 | Apple Inc. | Systems and methods for chroma noise reduction |
US9332239B2 (en) | 2012-05-31 | 2016-05-03 | Apple Inc. | Systems and methods for RGB image processing |
US20160127667A1 (en) * | 2014-10-31 | 2016-05-05 | Silicon Optronics, Inc. | Image capture device, and defective pixel detection and correction method for image sensor array |
US20170154234A1 (en) * | 2015-12-01 | 2017-06-01 | Takuya Tanaka | Information processing device, information processing method, computer-readable recording medium, and inspection system |
US11089247B2 (en) | 2012-05-31 | 2021-08-10 | Apple Inc. | Systems and method for reducing fixed pattern noise in image data |
CN113781349A (en) * | 2021-09-16 | 2021-12-10 | Oppo广东移动通信有限公司 | Image processing method, image processing apparatus, electronic device, and storage medium |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8331688B2 (en) * | 2009-01-07 | 2012-12-11 | International Business Machines Corporation | Focus-based edge detection |
CN103442925B (en) | 2011-03-25 | 2016-08-17 | Tk控股公司 | For determining the system and method for driver's Vigilance |
US8849508B2 (en) | 2011-03-28 | 2014-09-30 | Tk Holdings Inc. | Driver assistance system and method |
US9207759B1 (en) * | 2012-10-08 | 2015-12-08 | Edge3 Technologies, Inc. | Method and apparatus for generating depth map from monochrome microlens and imager arrays |
CN106296726A (en) * | 2016-07-22 | 2017-01-04 | 中国人民解放军空军预警学院 | A kind of extraterrestrial target detecting and tracking method in space-based optical series image |
US10284800B2 (en) * | 2016-10-21 | 2019-05-07 | Canon Kabushiki Kaisha | Solid-state image pickup element, method of controlling a solid-state image pickup element, and image pickup apparatus |
US11375141B1 (en) * | 2021-02-09 | 2022-06-28 | Arthrex, Inc. | Endoscopic camera region of interest autoexposure |
Citations (29)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030222995A1 (en) * | 2002-06-04 | 2003-12-04 | Michael Kaplinsky | Method and apparatus for real time identification and correction of pixel defects for image sensor arrays |
US20040051798A1 (en) * | 2002-09-18 | 2004-03-18 | Ramakrishna Kakarala | Method for detecting and correcting defective pixels in a digital image sensor |
US20040085458A1 (en) * | 2002-10-31 | 2004-05-06 | Motorola, Inc. | Digital imaging system |
US20050099516A1 (en) * | 1999-04-26 | 2005-05-12 | Microsoft Corporation | Error calibration for digital image sensors and apparatus using the same |
US20050201616A1 (en) * | 2004-03-15 | 2005-09-15 | Microsoft Corporation | High-quality gradient-corrected linear interpolation for demosaicing of color images |
US20050244052A1 (en) * | 2004-04-29 | 2005-11-03 | Renato Keshet | Edge-sensitive denoising and color interpolation of digital images |
US20050280725A1 (en) * | 2004-06-11 | 2005-12-22 | Stmicroelectronics S.R.L. | Processing pipeline of pixel data of a color image acquired by a digital sensor |
US6989962B1 (en) * | 2000-09-26 | 2006-01-24 | Western Digital (Fremont), Inc. | Inductive write head having high magnetic moment poles and low magnetic moment thin layer in the back gap, and methods for making |
US20060017826A1 (en) * | 2004-07-20 | 2006-01-26 | Olympus Corporation | In vivo image pickup device and in vivo image pickup system |
US7015961B2 (en) * | 2002-08-16 | 2006-03-21 | Ramakrishna Kakarala | Digital image system and method for combining demosaicing and bad pixel correction |
US7030917B2 (en) * | 1998-10-23 | 2006-04-18 | Hewlett-Packard Development Company, L.P. | Image demosaicing and enhancement system |
US20060082675A1 (en) * | 2004-10-19 | 2006-04-20 | Eastman Kodak Company | Method and apparatus for capturing high quality long exposure images with a digital camera |
US20060104537A1 (en) * | 2004-11-12 | 2006-05-18 | Sozotek, Inc. | System and method for image enhancement |
US20060140507A1 (en) * | 2003-06-23 | 2006-06-29 | Mitsuharu Ohki | Image processing method and device, and program |
US7088392B2 (en) * | 2001-08-27 | 2006-08-08 | Ramakrishna Kakarala | Digital image system and method for implementing an adaptive demosaicing method |
US20060226337A1 (en) * | 2005-04-06 | 2006-10-12 | Lim Suk H | Digital image denoising |
US20060239580A1 (en) * | 2005-04-20 | 2006-10-26 | Bart Dierickx | Defect pixel correction in an image sensor |
US20060290711A1 (en) * | 2004-12-17 | 2006-12-28 | Peyman Milanfar | System and method for robust multi-frame demosaicing and color super-resolution |
US20070109430A1 (en) * | 2005-11-16 | 2007-05-17 | Carl Staelin | Image noise estimation based on color correlation |
US20070133902A1 (en) * | 2005-12-13 | 2007-06-14 | Portalplayer, Inc. | Method and circuit for integrated de-mosaicing and downscaling preferably with edge adaptive interpolation and color correlation to reduce aliasing artifacts |
US20070146511A1 (en) * | 2005-11-17 | 2007-06-28 | Sony Corporation | Signal processing apparatus for solid-state imaging device, signal processing method, and imaging system |
US20070165124A1 (en) * | 2006-01-13 | 2007-07-19 | Stmicroelectronics (Research & Development) Limited | Method of operating an image sensor |
US20070183681A1 (en) * | 2006-02-09 | 2007-08-09 | Hsiang-Tsun Li | Adaptive image filter for filtering image information |
US20070189603A1 (en) * | 2006-02-06 | 2007-08-16 | Microsoft Corporation | Raw image processing |
US20080056607A1 (en) * | 2006-08-30 | 2008-03-06 | Micron Technology, Inc. | Method and apparatus for image noise reduction using noise models |
US7366347B2 (en) * | 2004-05-06 | 2008-04-29 | Magnachip Semiconductor, Ltd. | Edge detecting method |
US20090027727A1 (en) * | 2007-07-25 | 2009-01-29 | Micron Technology, Inc. | Method, apparatus, and system for reduction of line processing memory size used in image processing |
US20090091645A1 (en) * | 2007-10-03 | 2009-04-09 | Nokia Corporation | Multi-exposure pattern for enhancing dynamic range of images |
US7715617B2 (en) * | 2002-07-25 | 2010-05-11 | Fujitsu Microelectronics Limited | Circuit and method for correction of defect pixel |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6989862B2 (en) | 2001-08-23 | 2006-01-24 | Agilent Technologies, Inc. | System and method for concurrently demosaicing and resizing raw data images |
JP4717371B2 (en) | 2004-05-13 | 2011-07-06 | オリンパス株式会社 | Image processing apparatus and image processing program |
EP1650979A1 (en) | 2004-10-21 | 2006-04-26 | STMicroelectronics S.r.l. | Method and system for demosaicing artifact removal |
US7706609B2 (en) | 2006-01-30 | 2010-04-27 | Microsoft Corporation | Bayesian demosaicing using a two-color image |
- 2008-01-03 US US12/003,922 patent/US8035704B2/en active Active
Cited By (39)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100177203A1 (en) * | 2009-01-15 | 2010-07-15 | Aptina Imaging Corporation | Apparatus and method for local contrast enhanced tone mapping |
US8363131B2 (en) * | 2009-01-15 | 2013-01-29 | Aptina Imaging Corporation | Apparatus and method for local contrast enhanced tone mapping |
US20110019034A1 (en) * | 2009-07-27 | 2011-01-27 | Chan-Min Chou | Method for eliminating image noise and apparatus using the method |
US8120678B2 (en) * | 2009-07-27 | 2012-02-21 | Altek Corporation | Method for eliminating image noise and apparatus using the method |
US8804012B2 (en) * | 2010-06-02 | 2014-08-12 | Sony Corporation | Image processing apparatus, image processing method, and program for executing sensitivity difference correction processing |
US20130076939A1 (en) * | 2010-06-02 | 2013-03-28 | Shun Kaizu | Image processing apparatus, image processing method, and program |
KR101296035B1 (en) | 2010-09-30 | 2013-09-03 | 애플 인크. | Image signal processor line buffer configuration for processing raw image data |
WO2012044432A1 (en) * | 2010-09-30 | 2012-04-05 | Apple Inc. | Image signal processor line buffer configuration for processing raw image data |
CN102547162A (en) * | 2010-09-30 | 2012-07-04 | 苹果公司 | Image signal processor line buffer configuration for processing raw image data |
US8508612B2 (en) | 2010-09-30 | 2013-08-13 | Apple Inc. | Image signal processor line buffer configuration for processing raw image data |
US8964058B2 (en) * | 2011-04-11 | 2015-02-24 | Honda Elesys Co., Ltd. | On-board camera system for monitoring an area around a vehicle |
US20120300021A1 (en) * | 2011-04-11 | 2012-11-29 | Honda Elesys Co., Ltd. | On-board camera system |
US8749668B2 (en) * | 2011-08-30 | 2014-06-10 | Novatek Microelectronics Corp. | Adaptive pixel compensation method |
US20130050528A1 (en) * | 2011-08-30 | 2013-02-28 | Wei Hsu | Adaptive pixel compensation method |
US9142012B2 (en) | 2012-05-31 | 2015-09-22 | Apple Inc. | Systems and methods for chroma noise reduction |
US9332239B2 (en) | 2012-05-31 | 2016-05-03 | Apple Inc. | Systems and methods for RGB image processing |
US8917336B2 (en) | 2012-05-31 | 2014-12-23 | Apple Inc. | Image signal processing involving geometric distortion correction |
US8953882B2 (en) | 2012-05-31 | 2015-02-10 | Apple Inc. | Systems and methods for determining noise statistics of image data |
US8817120B2 (en) | 2012-05-31 | 2014-08-26 | Apple Inc. | Systems and methods for collecting fixed pattern noise statistics of image data |
US9014504B2 (en) * | 2012-05-31 | 2015-04-21 | Apple Inc. | Systems and methods for highlight recovery in an image signal processor |
US9025867B2 (en) | 2012-05-31 | 2015-05-05 | Apple Inc. | Systems and methods for YCC image processing |
US9031319B2 (en) | 2012-05-31 | 2015-05-12 | Apple Inc. | Systems and methods for luma sharpening |
US9077943B2 (en) | 2012-05-31 | 2015-07-07 | Apple Inc. | Local image statistics collection |
US9105078B2 (en) | 2012-05-31 | 2015-08-11 | Apple Inc. | Systems and methods for local tone mapping |
US9131196B2 (en) | 2012-05-31 | 2015-09-08 | Apple Inc. | Systems and methods for defective pixel correction with neighboring pixels |
US20130321679A1 (en) * | 2012-05-31 | 2013-12-05 | Apple Inc. | Systems and methods for highlight recovery in an image signal processor |
US9317930B2 (en) | 2012-05-31 | 2016-04-19 | Apple Inc. | Systems and methods for statistics collection using pixel mask |
US8872946B2 (en) | 2012-05-31 | 2014-10-28 | Apple Inc. | Systems and methods for raw image processing |
US11689826B2 (en) | 2012-05-31 | 2023-06-27 | Apple Inc. | Systems and method for reducing fixed pattern noise in image data |
US9342858B2 (en) | 2012-05-31 | 2016-05-17 | Apple Inc. | Systems and methods for statistics collection using clipped pixel tracking |
US11089247B2 (en) | 2012-05-31 | 2021-08-10 | Apple Inc. | Systems and method for reducing fixed pattern noise in image data |
US9741099B2 (en) | 2012-05-31 | 2017-08-22 | Apple Inc. | Systems and methods for local tone mapping |
US9710896B2 (en) | 2012-05-31 | 2017-07-18 | Apple Inc. | Systems and methods for chroma noise reduction |
US9743057B2 (en) | 2012-05-31 | 2017-08-22 | Apple Inc. | Systems and methods for lens shading correction |
US9549133B2 (en) * | 2014-10-31 | 2017-01-17 | Silicon Optronics, Inc. | Image capture device, and defective pixel detection and correction method for image sensor array |
US20160127667A1 (en) * | 2014-10-31 | 2016-05-05 | Silicon Optronics, Inc. | Image capture device, and defective pixel detection and correction method for image sensor array |
US20170154234A1 (en) * | 2015-12-01 | 2017-06-01 | Takuya Tanaka | Information processing device, information processing method, computer-readable recording medium, and inspection system |
US10043090B2 (en) * | 2015-12-01 | 2018-08-07 | Ricoh Company, Ltd. | Information processing device, information processing method, computer-readable recording medium, and inspection system |
CN113781349A (en) * | 2021-09-16 | 2021-12-10 | Oppo广东移动通信有限公司 | Image processing method, image processing apparatus, electronic device, and storage medium |
Also Published As
Publication number | Publication date |
---|---|
US8035704B2 (en) | 2011-10-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8035704B2 (en) | Method and apparatus for processing a digital image having defective pixels | |
TWI458333B (en) | Method and apparatus for image noise reduction using noise models | |
US8218898B2 (en) | Method and apparatus providing noise reduction while preserving edges for imagers | |
US7015961B2 (en) | Digital image system and method for combining demosaicing and bad pixel correction | |
US7313288B2 (en) | Defect pixel correction in an image sensor | |
US8576309B2 (en) | Pixel defect correction device, imaging apparatus, pixel defect correction method, and program | |
US20070133893A1 (en) | Method and apparatus for image noise reduction | |
US7876363B2 (en) | Methods, systems and apparatuses for high-quality green imbalance compensation in images | |
US7830428B2 (en) | Method, apparatus and system providing green-green imbalance compensation | |
US7756355B2 (en) | Method and apparatus providing adaptive noise suppression | |
US20080278609A1 (en) | Imaging apparatus, defective pixel correcting apparatus, processing method in the apparatuses, and program | |
JP5060535B2 (en) | Image processing device | |
US20140125847A1 (en) | Image processing apparatus and control method therefor | |
JP2010258620A (en) | Image processor, image processing method, and program | |
US8400534B2 (en) | Noise reduction methods and systems for imaging devices | |
JP5256236B2 (en) | Image processing apparatus and method, and image processing program | |
US10791289B2 (en) | Image processing apparatus, image processing method, and non-transitory computer readable recording medium | |
US20090237530A1 (en) | Methods and apparatuses for sharpening images | |
JP2004159176A (en) | Noise elimination method, imaging apparatus, and noise elimination program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: MICRON TECHNOLOGY, INC., IDAHO Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:HU, SHANE C.;MULLIS, ROBERT;REEL/FRAME:020380/0732 Effective date: 20071212 |
|
AS | Assignment |
Owner name: APTINA IMAGING CORPORATION, CAYMAN ISLANDS Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MICRON TECHNOLOGY, INC.;REEL/FRAME:023245/0186 Effective date: 20080926 |
|
FEPP | Fee payment procedure |
Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
FPAY | Fee payment |
Year of fee payment: 4 |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 8 |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 12TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1553); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 12 |