US8259179B2 - Compensating for non-uniform illumination of object fields captured by a camera - Google Patents
- Publication number
- US8259179B2 (Application No. US11/383,406)
- Authority
- US
- United States
- Prior art keywords
- illumination
- image
- data
- imaging device
- correction data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related, expires
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/70—Circuitry for compensating brightness variation in the scene
- H04N23/74—Circuitry for compensating brightness variation in the scene by influencing the scene brightness using illuminating means
Definitions
- This invention relates generally to techniques of processing captured digital imaging data obtained using one or more illumination sources, and, more specifically, to processing binary digital image data obtained using one or more illumination sources to correct for variations across an imaged optical field such as, for example, to compensate for non-uniform illumination.
- Digital cameras image scenes onto a two-dimensional sensor such as a charge-coupled-device (CCD), a complementary metal-oxide-semiconductor (CMOS) device or other type of light sensor.
- These devices include a large number of photo-detectors (typically three, four, five or more million) arranged across a small two dimensional surface that individually generate a signal proportional to the intensity of light or other optical radiation (including infrared and ultra-violet regions of the spectrum adjacent the visible light wavelengths) striking the element.
- These elements, forming the pixels of an image, are typically scanned in a raster pattern to generate a serial stream of data representing the intensity of radiation striking one sensor element after another as they are scanned.
- Color data are most commonly obtained by using photo-detectors that are sensitive to each of distinct color components (such as red, green and blue), alternately distributed across the sensor.
- Non-uniform illumination, and potentially other factors, causes an uneven distribution of light across the photo-sensor, and thus image data signals from the sensor include data of the undesired intensity variation superimposed thereon.
- One or more illumination sources may be used to illuminate an image field.
- An illumination source may, as an example, be a flash illumination device.
- An illumination source will often be part of the imaging device but may also be a separate device.
- An illumination source may produce non-uniform illumination across an image field. Non-uniform illumination may be attributed to imperfections in or other characteristics of an illumination source, improper alignment of an illumination source in relation to the x-y position of the image plane of the photo-sensor employed, and possibly other factors that may be present in a particular system.
- the invention offers techniques for modifying image field data to compensate for non-uniformities in the illumination so as to minimize degradation of the final adjusted image by these non-uniformities in one or more illumination sources.
- the amount of compensation applied to the signal from each photo-detector element is dependent upon the position of the element in relationship to the pattern of non-uniform illumination of the image field across the surface of the image photo-sensor.
- Such non-uniform illumination compensation techniques have application to digital cameras and other types of digital image capturing devices employing one or more illumination sources but are not limited to such optical photo system applications.
- the techniques may be implemented at a low cost, require a minimum amount of memory, and operate at the same rate as the digital image data being modified is obtained from the photo-sensor, thereby not adversely affecting the performance of the digital image processing path. This is accomplished by applying correction factors in real time to the output signals of the photo-sensor in order to compensate for an undesired intensity variation across the photo-sensor.
- the camera or other optical system is calibrated by imaging a scene of uniform intensity onto the photo-sensor, capturing data of a resulting intensity variation across the photo-sensor.
- the pixel array is logically divided into a grid of blocks and then average rates of change of the intensity across each block are computed.
- the calibration data needed to correct for the intensity variation is computed as the inverse of the intensity variation.
- a reduced amount of data of the undesired non-uniform illumination pattern (or the inverse, the non-uniform illumination correction factors) may be stored in one or more sparse two-dimensional lookup tables. A separate lookup table can be used for each color.
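As an illustration of this calibration step, the sketch below (not taken from the patent; the function name `build_correction_table` and the `block_size` parameter are assumptions for illustration) averages a flat-field capture of a uniform target over a grid of blocks and inverts the measured variation to obtain a sparse, per-color table of multiplicative gains:

```python
import numpy as np

def build_correction_table(flat_field, block_size=64):
    """flat_field: (H, W, 3) capture of a uniformly reflective target.
    Returns a sparse (H//block_size, W//block_size, 3) table of gains."""
    h, w, c = flat_field.shape
    bh, bw = h // block_size, w // block_size
    blocks = flat_field[:bh * block_size, :bw * block_size].astype(float).reshape(
        bh, block_size, bw, block_size, c)
    block_mean = blocks.mean(axis=(1, 3))      # average intensity per block, per color
    reference = block_mean.max(axis=(0, 1))    # brightest block per color defines unity gain
    return reference / block_mean              # inverse of the measured intensity variation
```

A table of this form, one per color component, is small enough to be written to the device's non-volatile memory at the end of manufacture.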
- FIG. 1 schematically illustrates a digital camera in which the techniques of the present invention may be utilized;
- FIG. 2 is a block diagram of a portion of the electronic processing system of the device of FIG. 1 ;
- FIG. 3 schematically illustrates the calibration phase of a specific embodiment of the invention, employing a surface with uniform optical properties, a camera with an artificial illumination source, and showing the path of incident light emitted by the illumination source and reflected light reflected from the surface back to the camera lens;
- FIG. 4 is a block diagram setting forth steps in calibration of a camera or other optical system of interest using an illuminated surface with uniform optical properties
- FIG. 5 is a block diagram setting forth steps in applying stored illumination calibration data to obtain illumination-corrected output
- FIGS. 6A, 6B, 6C, 6D and 6E graphically illustrate the application of stored illumination calibration data to obtain illumination-corrected output;
- FIG. 7 schematically illustrates an image capture and modification phase in which an object of interest is imaged, employing the camera with an artificial illumination source, and showing the path of incident light emitted by the illumination source and reflected light reflected from the object of interest back to the camera lens;
- FIG. 8 graphically illustrates the selection of a relevant portion of the calibration information.
- non-uniform illumination across the image field may result in a variation of energy across each pixel of that light pattern.
- These energy variations are not related to the captured image or other picture data itself.
- the variation of illumination across the scene, assuming the objects in the scene are approximately the same distance from the source of the flash illumination, has fixed properties. These properties are directly related to the physical, optical and electronic characteristics of the illuminating flash source.
- each pixel value could be combined, such as by multiplication, with a non-uniform illumination correction factor. This factor is unique to each pixel in the image sensor according to the pixel's geographic location in the image sensor matrix.
- PixelOut The intensity output of the non-uniform illumination compensation module; in other words, the corrected pixel
- PixelIn The intensity input to the non-uniform illumination compensation module; in other words, the pixel before correction
- F(X,Y) An additive correction factor, having units of intensity, which depends on the pixel's position expressed in terms of X and Y rectangular coordinates
- F′(X,Y) A dimensionless multiplicative correction factor, which also depends on the pixel's position expressed in terms of X and Y rectangular coordinates.
- CT[x,y] is the illumination-corrected image data set of interest as a function of the position (x,y) of an image data point of interest
- T[x,y] is the un-corrected image data set of interest as a function of the position (x,y) of an image data point of interest
- IC[x,y] is an additive illumination correction factor of equation (2a) as a function of the position (x,y) of an image data point of interest.
- IC′[x,y] is a dimensionless multiplicative illumination correction factor as a function of the position (x,y) of an image data point of interest, in the alternative equation (2b).
- equations (2a) and (2b) represent the image-wide equivalent of equations (1a) and (1b), respectively, which are applied on a pixel by pixel (or pixel block by pixel block) basis.
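For concreteness, equations (1a) and (1b) can be written as a small helper; this is only a sketch, and `correct_pixel`, `F` and `F_prime` are illustrative names rather than anything defined in the patent:

```python
def correct_pixel(pixel_in, x, y, F=None, F_prime=None):
    """Apply equation (1a) when an additive factor F(X, Y) is supplied,
    otherwise equation (1b) with the dimensionless multiplicative F'(X, Y)."""
    if F is not None:
        return pixel_in + F(x, y)       # PixelOut = PixelIn + F(X, Y)
    return pixel_in * F_prime(x, y)     # PixelOut = PixelIn * F'(X, Y)
```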
- In FIG. 1, such a digital camera is schematically shown to include a case 11, an imaging optical system 13, user controls 15 that generate control signals 17, a video input-output receptacle 19 with internal electrical connections 21, and a card slot 23, with internal electrical connections 25, into which a non-volatile memory card 27 is removably inserted.
- Data of images captured by the camera may be stored on the memory card 27 or on an internal non-volatile memory (not shown). Image data may also be outputted to a video device, such as a television monitor, through the receptacle 19 .
- the memory card 27 can be a commercially available semiconductor flash electrically erasable and programmable read-only-memory (EEPROM), small removable rotating magnetic disk or other non-volatile memory to which digital image data can be stored by the camera.
- larger capacity storage media can be used instead, such as magnetic tape or a writable optical disk.
- the optical system 13 can be a single lens, as shown, but will normally be a set of lenses.
- An image 29 of a scene 31 is formed as visible optical radiation through a shutter 33 onto a two-dimensional surface of an image sensor 35 .
- An electrical output 37 of the sensor carries an analog signal resulting from scanning individual photo-detectors of the surface of the sensor 35 onto which the image 29 is projected.
- the sensor 35 typically contains a large number of individual photo-detectors arranged in a two-dimensional array of rows and columns to detect individual pixels of the image 29 .
- Signals proportional to the intensity of light striking the individual photo-detectors are obtained in the output 37 in time sequence, typically by scanning them in a raster pattern, where the rows of photo-detectors are scanned one at a time from left to right, beginning at the top row, to generate a frame of digital image data from which the image 29 may be reconstructed.
- the analog signal 37 is applied to an analog-to-digital converter circuit chip 39 that generates digital data of the image 29 in circuits 41.
- the signal in circuits 41 is a sequence of individual blocks of digital data representing the intensity of light striking the individual photo-detectors of the sensor 35 .
- Processing of the video data in circuits 41 and control of the camera operation are provided, in this embodiment, by a single integrated circuit chip 43 .
- the circuit chip 43 is connected to control and status lines 45 .
- the lines 45 are, in turn, connected with the shutter 33, sensor 35, analog-to-digital converter 39 and other components of the camera to provide their synchronous operation.
- a separate volatile random-access memory circuit chip 47 is also connected to the processor chip 43 for temporary data storage.
- a separate non-volatile re-programmable memory chip 49 is connected to the processor chip 43 for storage of the processor program, calibration data and the like.
- a usual clock circuit 51 is provided within the camera for providing clock signals to the circuit chips and other components. Rather than a separate component, the clock circuit for the system may alternatively be included on the processor chip 43 .
- An illumination source 53 is connected to, and operates in response to instructions from, the processor chip 43 .
- Sensor 35 may have its large number of pixels logically divided into rectangles of a grid pattern.
- One way to determine the correction factor for individual pixels, without having to store such factors for all pixels of the array, is to store them for a representative few of the pixels in each block and then calculate the correction for other individual pixels by interpolation, linear or otherwise. That is, the size of the blocks of the grid pattern are made small enough such that the intensity variation of the non-uniform illumination pattern across an individual block may be predicted from a few stored values in the block.
- the correction factor is extrapolated from this stored subset.
- the correction factor extrapolation formula is implemented as a two dimensional extrapolation responsive to the geometric distance between the pixel of interest at a current location, and neighboring pixels that are represented by a non-uniform illumination correction factor stored in a limited table of correction factors.
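The derivation of a per-pixel factor from the limited table can be sketched as follows. This is an assumption-laden illustration: where the patent describes extrapolation responsive to geometric distance from stored neighbors, the sketch uses simple bilinear weighting between the four nearest stored block values, and all names are hypothetical:

```python
import numpy as np

def correction_at(table, x, y, block_size=64):
    """Estimate the correction factor at pixel (x, y) from a sparse table that
    stores one value per block, by weighting the four nearest stored entries."""
    gy, gx = y / block_size - 0.5, x / block_size - 0.5
    y0 = int(np.clip(np.floor(gy), 0, table.shape[0] - 2))
    x0 = int(np.clip(np.floor(gx), 0, table.shape[1] - 2))
    fy = float(np.clip(gy - y0, 0.0, 1.0))
    fx = float(np.clip(gx - x0, 0.0, 1.0))
    top = table[y0, x0] * (1 - fx) + table[y0, x0 + 1] * fx
    bottom = table[y0 + 1, x0] * (1 - fx) + table[y0 + 1, x0 + 1] * fx
    return top * (1 - fy) + bottom * fy
```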
- a functional block diagram of the processor chip 43 is shown in FIG. 2 .
- a digital signal processor (DSP) 55 is a key component, controlling both the operation of the chip 43 and other components of the camera. But since the DSP 55 does not extensively process video data, as discussed below, it may be a relatively simple and inexpensive processor.
- a memory management unit 57 interfaces the DSP 55 to the external memory chips 47 and 49 , and to output interface circuits 59 that are connected to the input-output connector 19 and to the card slot 23 ( FIG. 1 ) through respective circuits 21 and 25 .
- the flow of digital image data through the block diagram of FIG. 2 from the analog-to-digital converter 39 ( FIG. 1 ) is now generally described.
- the input data in lines 41 is pre-processed in a block 61 and then provided as one input to a multiplier circuit 63 .
- Another input 65 to the multiplier 63 carries data that modifies the incoming video data, the modified video data appearing at an output 67 of the multiplier 63 .
- the intensity correction data in lines 65 correct for the effects of lens shading and intensity variations imparted across the image by camera elements.
- the digital image data are directed through the memory management unit 57 to the output interface circuits 59 and then through either lines 21 to the input-output receptacle 19 or through lines 25 to the memory card slot 23 ( FIG. 1 ), or both, of the camera for display and/or storage.
- the intensity correction data in lines 65 are generated by a block of dedicated processing circuits 71 .
- the block 71 includes circuits 73 that provide the (X, Y) position of each image pixel from which video data are currently being acquired. This pixel position is then used by an intensity correction data calculation circuit 75 to generate the modification factor applied to the multiplier 63 .
- a memory 77 stores a look-up table. In order to reduce the size of the memory 77 , only a small amount of correction data are stored in the look-up table and the circuits 75 calculate the correction values of individual pixels from such data.
- a set of registers 79 stores parameters and intermediate results that are used by both of the calculation circuits 73 and 75 .
- the calculation circuits 73 and 75 operate independently of the DSP 55 .
- the DSP could possibly be used to make these calculations instead, but this would require an extremely fast processor, if sufficient speed were even available, would be expensive and would take considerably more space on the chip 43.
- the circuits 73 and 75, dedicated to performing the required repetitive calculations without participation by the DSP 55, are quite straightforward in structure, take little space on the chip 43 and free up the DSP 55 to perform other functions.
- the memory or memories 77 and 79 storing the image modification data and parameters are preferably a volatile random-access type for access speed and process compatibility with other processor circuits so that they can all be included on a single cost effective chip.
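A software analogue of this dedicated datapath might look like the following sketch; it is hypothetical, with `gain_at` standing in for the correction that circuits 73 and 75 derive from the small table in memory 77:

```python
def correct_stream(samples, width, gain_at):
    """For each raw sensor value arriving in raster order, derive its (x, y)
    position, obtain a multiplicative correction for that position, and
    multiply -- the role played by multiplier 63 in FIG. 2."""
    for i, value in enumerate(samples):
        x, y = i % width, i // width
        yield value * gain_at(x, y)
```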
- a typical digital imaging system processes data for each of multiple distinct color components of the image.
- a typical commercial sensor alternates photo-detectors along the rows that are covered with red, green and blue filters.
- an image modification factor is generated for each image pixel from that set of data, regardless of the color. This is quite adequate in cases where the variation across the image that is being removed by the signal modification affects all colors to the same or nearly the same degree. However, where the variation is significantly color dependent, separate correction factors are preferably used for each color component.
- One desirable flash strobe module is an insulated gate bipolar transistor (IGBT) type, allowing for the intensity of the illumination level to be controlled.
- non-uniform illumination correction factors for an optical photo system of a digital camera, digital video capturing device or other type of digital imaging device are derived during a calibration procedure.
- This calibration is performed by imaging a surface having uniform optical properties onto the photo-sensor employed by the device being calibrated.
- One example of such a surface is a uniform mid-level gray target.
- the individual pixel intensity values of an image of such a target are captured and the slope values for the individual rectangles of the grid across the photo-sensor are calculated and stored in a memory within the device being calibrated.
- Image modification data and parameters are generated once for each camera at a final stage of its manufacture and then are permanently stored in the non-volatile memory 49 ( FIG. 2 ). These data are then loaded through lines 81 into the memories 77 and 79 each time the system is initialized, under control of the DSP 55 operating through control and status lines 83 .
- FIG. 3 schematically illustrates the calibration phase of the operation of the invention according to this embodiment.
- The calibration setup includes an imaging device (e.g., a camera), an illumination source (e.g., a flash) and an illumination balance reference, which is a uniformly reflecting target (such as a gray card).
- a camera (or other imaging device) 11 comprising a lens 13 and an illumination source 53 in the form of a flash is used to capture an image of surface 91 with uniform color, absorption/reflection, dispersion and other optical properties thereacross.
- Light rays 93 are generated by the flash 53 and emanate toward the surface 91 .
- Incident light rays 93 strike the surface 91 , which reflects rays 95 .
- the reflected rays 95 are imaged by a lens 13 onto the photo-detector within the camera 11 .
- the camera 11 then processes the information and uses it to calibrate the non-uniform illumination correction data that will compensate for non-uniform illumination across an image scene that is produced by the flash 53 .
- the focal length 97 in this embodiment is indicated as the perpendicular distance from the lens 13 to the surface 91.
- FIG. 4 is a block diagram setting forth steps of factory calibration.
- a surface with uniform optical properties is illuminated.
- image data from the image field of the illuminated uniform surface is captured.
- illumination correction data are generated as a function of position in the image field.
- the illumination correction data are stored in a non-volatile memory of the imaging device.
- the matrix of pixels of a photo-sensor can be logically divided into a grid of a large number of contiguous rectangular blocks that each contains a fixed number of pixels on a side.
- data of the non-uniform illumination pattern on the individual blocks are calculated and stored, from which a stored data correction factor is calculated for the individual pixels or blocks of pixels as picture data are scanned from the photo-sensor, in real time, typically in a raster scanning pattern.
- the calibration data may in some applications be captured and stored with a resolution that is less than that with which data of an image field are normally captured.
- data of individual pixels may be combined to provide one data point per color component for a block of pixels. This reduces the amount of calibration data that need to be stored in the camera and increases the speed with which correction of full resolution image data may be made.
- FIG. 5 is a block diagram setting forth steps in applying stored illumination correction calibration data to obtain illumination-corrected output.
- stored illumination correction data is retrieved.
- image data of the illuminated image field is captured.
- the captured image data is stored.
- the image modification parameters are combined with the image data to obtain improved, illumination-corrected output.
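Taken together, the steps of FIG. 5 might be sketched as below. This is a simplification under stated assumptions: the stored table is expanded to full resolution by nearest-neighbour repetition rather than the interpolation discussed earlier, the correction is multiplicative, and the 8-bit clipping and all names are illustrative:

```python
import numpy as np

def apply_stored_correction(image, table):
    """Expand a stored per-block gain table to the captured image's resolution
    and combine it with the image data to obtain illumination-corrected output."""
    h, w, _ = image.shape
    ry = int(np.ceil(h / table.shape[0]))
    rx = int(np.ceil(w / table.shape[1]))
    gains = np.repeat(np.repeat(table, ry, axis=0), rx, axis=1)[:h, :w]
    return np.clip(image.astype(float) * gains, 0, 255)
```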
- FIGS. 6A-6E graphically illustrate the application of stored illumination correction calibration data to obtain illumination-corrected output.
- Each figure provides a plot of intensity against position (in either the x or y direction) along a line across a photo-sensor through its optical center.
- FIG. 6A depicts the intensity relative to position of light reflected by a surface with uniform optical properties.
- FIG. 6B depicts illumination correction data 98 , which is designed to compensate for the non-uniformities seen in FIG. 6A .
- the arithmetic sum of intensity on the photo-sensor and the illumination correction data produces a flat line of constant intensity, as seen in FIG. 6C .
- In FIG. 6D, an example is provided of possible data for an image of interest.
- FIG. 6E provides the illumination-corrected data for this same intensity pattern.
- FIG. 7 schematically illustrates an example of the capture of the image of interest of step 510 .
- the camera 11 with its lens 13 and illumination source 53 , is used to capture data of an object scene of interest 94 .
- the object of interest 94 has a length 96 and typically subtends a segment 99 of the calibration non-uniform illumination correction data 98 .
- Incident light rays 93 are generated by the flash 53 and emanate toward the object 94 .
- Incident light rays 93 strike the object 94, which reflects rays 95. The reflected rays 95 are imaged by the lens 13 onto the photo-detector.
- the camera 11 then combines the image of interest with the calibration data 98 (step 530 ) to produce an illumination-corrected image of the object of interest 94 .
- Focal length 97 is the indicated distance from lens 13 to object 94 .
- segment 99 of the calibration data 98 is used to produce the corrected image.
- Calibration data 98 not contained within segment 99 have no effect on the corrected image generated of object of interest 94 .
- a higher zoom setting will result in a smaller portion of the image of interest being captured by the imaging device.
- the angle subtended by the captured image of interest, i.e., the zoom setting used, is incorporated into the calibration process.
- the calibration process also incorporates the distance of the imaging device from the image of interest, which bears an inverse square relationship to the intensity. It is assumed that the non-uniform illumination pattern is independent of distance from the imaging device, while varying in intensity. Either the correction process is performed pixel by pixel or else it is performed pixel block by pixel block.
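One plausible reading of this adaptation is sketched below, under explicit assumptions: an additive correction mask whose magnitude is scaled by the inverse-square ratio of calibration distance to scene distance (the pattern shape itself is assumed unchanged), and a central segment cropped for a zoom factor of 1 or greater. The names and the cropping rule are illustrative, not taken from the patent:

```python
import numpy as np

def adapt_correction(ic_mask, d_calibration, d_scene, zoom_factor=1.0):
    """Scale an additive correction mask IC[x, y] for scene distance and keep
    only the central segment subtended at the current zoom setting."""
    scaled = np.asarray(ic_mask, dtype=float) * (d_calibration / d_scene) ** 2
    h, w = scaled.shape[:2]
    ch, cw = int(h / zoom_factor), int(w / zoom_factor)
    y0, x0 = (h - ch) // 2, (w - cw) // 2
    return scaled[y0:y0 + ch, x0:x0 + cw]
```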
- FIG. 8 represents the calibration curve of FIG. 6B , with segment 99 indicated as the portion of the calibration curve lying between the dashed lines. Only segment 99 of the calibration data is used in correcting the image of interest.
- Calibration correction can be applied within the imaging device, thus permanently modifying the original, uncorrected image data.
- the calibration correction information can be stored within the imaging device as auxiliary data to be used in post-processing at a digital image processing service center, or by the user as part of image enhancement.
- the corrected image may be previewed on the imaging device's preview screen before it is permanently applied to the image data.
- the calibration can be carried out using low resolution images. Low resolution images will typically suffice for obtaining calibration correction information for a featureless object.
- each camera or other optical system is calibrated by imaging a scene of uniform intensity onto the photo-sensor, capturing data of each pixel of a resulting intensity variation across the photo-sensor, logically dividing the pixel array into a grid of blocks and then calculating average rates of change of the intensity across each block.
- These relatively few intensity slope values, the characteristics of the pixel grid and the absolute intensity of the first pixel of each scanned frame characterize the non-uniform illumination intensity variation across the photo-sensor with a reduced amount of data. It is usually desirable to acquire three sets of such data, one set for each primary color that is utilized in the picture processing.
- the created slope tables and the basic gain are stored in a digital camera's non-volatile memory 49 of FIGS. 1 and 2, for example, during the manufacturing process, and subsequently used to compensate for illumination non-uniformities as previously described.
- correction for non-uniform illumination can be achieved by using the principle of superposition.
- the composite non-uniform illumination pattern to be corrected is composed of several non-uniform illumination patterns superimposed on one another. These patterns are preferably separated at calibration time and multiple non-uniform illumination patterns are visualized, each with its own center of gravity. (The center of gravity is also known as the optical center or the anchor point.) These centers of gravity can then be combined into an “effective center of gravity” and used to form lookup table 77 of FIG. 2 , or each used individually to derive separate look up tables which are subsequently combined to form lookup table 77 .
- the algorithm employed to combine these shading correction factors for use in table 77 can be either linear, piece-wise linear, or non-linear.
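A minimal sketch of such a combination, assuming the simplest linear weighting of per-pattern correction masks (the patent also allows piece-wise linear or non-linear combination; the function name and equal default weights are illustrative):

```python
import numpy as np

def combine_correction_masks(masks, weights=None):
    """Superpose several correction masks, each derived from one illumination
    pattern with its own center of gravity, into the single table that would
    populate lookup table 77."""
    masks = [np.asarray(m, dtype=float) for m in masks]
    if weights is None:
        weights = [1.0 / len(masks)] * len(masks)
    return sum(w * m for w, m in zip(weights, masks))
```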
- the optical center of a pattern of illumination will not necessarily be aligned with or bear any particular relationship with the optical geometry of the imaging device.
- the pattern of illumination may, for instance, be incident from one side of the image of interest.
- Reflector devices can be employed to help, but typically cannot precisely resolve such misalignments.
- Illumination correction patterns provide a means of correcting for such issues. Effects of a varying focal length may also be taken into account.
- the correction data also include correction for any intensity variations across the image that are caused by lens shading, effects of the optical cavity, the image sensor and/or its interaction with the incident image light, and the like, in addition to providing correction for non-uniformities due to non-uniform illumination by the illumination source. It may be desirable to have separate correction data for the non-uniform illumination of an object scene. If so, correction data are separately captured for lens shading and the like by imaging the same screen used in acquiring illumination correction data with a non-uniform light source but this time with uniform illumination across it, such as by one of the methods described in the previously identified U.S. patent application publication numbers 2004-0032952, 2004-0257454 and 2005-0041806.
- correction data for the non-uniform light source are obtained without components of lens shading and the like.
- the present invention provides unique illumination compensation of digital images captured from a non-uniformly lit scene.
- a common instance where such compensation is beneficial is the capture of a scene illuminated by a digital camera's small, built-in, electronic flash unit.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Studio Devices (AREA)
- Color Television Image Signal Generators (AREA)
Abstract
Description
PixelOut=PixelIn+F(X,Y) (1a)
or
PixelOut=PixelIn*F′(X,Y) (1b)
where,
PixelOut=The intensity output of the non-uniform illumination compensation module; in other words, the corrected pixel;
PixelIn=The intensity input to the non-uniform illumination compensation module; in other words, the pixel before correction;
F(X,Y)=An additive correction factor, having units of intensity, which depends on the pixel's position expressed in terms of X and Y rectangular coordinates; and
F′(X,Y)=A dimensionless multiplicative correction factor, which also depends on the pixel's position expressed in terms of X and Y rectangular coordinates.
CT[x,y]=T[x,y]+IC[x,y], (2a)
or
CT[x,y]=T[x,y]*IC′[x,y], (2b)
where CT[x,y] is the illumination-corrected image data set of interest as a function of the position (x,y) of an image data point of interest, T[x,y] is the un-corrected image data set of interest as a function of the position (x,y) of an image data point of interest, and IC[x,y] is an additive illumination correction factor of equation (2a) as a function of the position (x,y) of an image data point of interest. IC′[x,y] is a dimensionless multiplicative illumination correction factor as a function of the position (x,y) of an image data point of interest, in the alternative equation (2b). Generally speaking, equations (2a) and (2b) represent the image-wide equivalent of equations (1a) and (1b), respectively, which are applied on a pixel by pixel (or pixel block by pixel block) basis. When all of the corrective factors IC[x,y] or IC′[x,y] for a particular image, depending upon which of equations (2a) or (2b) is being used, are listed according to their x,y coordinates, this list represents a two-dimensional mask. The values of that mask at positions x,y across the image are then combined with the image data at the same positions x,y across the image.
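In array form, equations (2a) and (2b) amount to combining such a two-dimensional mask element-wise with the image at the same positions; a sketch, assuming NumPy arrays and illustrative names:

```python
import numpy as np

def correct_image(T, IC=None, IC_prime=None):
    """T is the un-corrected image data set; IC is an additive mask and
    IC_prime a dimensionless multiplicative mask, both indexed by (x, y)."""
    T = np.asarray(T, dtype=float)
    if IC is not None:
        return T + np.asarray(IC, dtype=float)       # CT[x,y] = T[x,y] + IC[x,y]
    return T * np.asarray(IC_prime, dtype=float)     # CT[x,y] = T[x,y] * IC'[x,y]
```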
Claims (55)
CT[x,y]=T[x,y]+IC[x,y],
CT[x,y]=T[x,y]*IC′[x,y],
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/383,406 US8259179B2 (en) | 2006-05-15 | 2006-05-15 | Compensating for non-uniform illumination of object fields captured by a camera |
PCT/US2007/067215 WO2007133898A1 (en) | 2006-05-15 | 2007-04-23 | Compensating for non-uniform illumination of object fields captured by a camera |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/383,406 US8259179B2 (en) | 2006-05-15 | 2006-05-15 | Compensating for non-uniform illumination of object fields captured by a camera |
Publications (2)
Publication Number | Publication Date |
---|---|
US20070262235A1 (en) | 2007-11-15 |
US8259179B2 (en) | 2012-09-04 |
Family
ID=38441453
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/383,406 Expired - Fee Related US8259179B2 (en) | 2006-05-15 | 2006-05-15 | Compensating for non-uniform illumination of object fields captured by a camera |
Country Status (2)
Country | Link |
---|---|
US (1) | US8259179B2 (en) |
WO (1) | WO2007133898A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11606549B1 (en) * | 2019-06-26 | 2023-03-14 | Ball Aerospace & Technologies Corp. | Methods and systems for mitigating persistence in photodetectors |
Families Citing this family (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8237824B1 (en) * | 2007-03-28 | 2012-08-07 | Ambarella, Inc. | Fixed pattern noise and bad pixel calibration |
US8675101B1 (en) * | 2007-03-28 | 2014-03-18 | Ambarella, Inc. | Temperature-based fixed pattern noise and bad pixel calibration |
US8023013B1 (en) | 2007-03-28 | 2011-09-20 | Ambarella, Inc. | Fixed pattern noise correction with compressed gain and offset |
US8659685B2 (en) * | 2008-06-25 | 2014-02-25 | Aptina Imaging Corporation | Method and apparatus for calibrating and correcting shading non-uniformity of camera systems |
TW201007162A (en) * | 2008-08-04 | 2010-02-16 | Shanghai Microtek Technology Co Ltd | Optical carriage structure of inspection apparatus and its inspection method |
US8547457B2 (en) * | 2009-06-22 | 2013-10-01 | Empire Technology Development Llc | Camera flash mitigation |
US8571343B2 (en) | 2011-03-01 | 2013-10-29 | Sharp Laboratories Of America, Inc. | Methods and systems for document-image correction |
US10805523B2 (en) * | 2012-05-30 | 2020-10-13 | Easy Printing Network Limited | Article authentication apparatus having a built-in light emitting device and camera |
US9148573B2 (en) | 2013-03-15 | 2015-09-29 | Hewlett-Packard Development Company, L.P. | Non-uniform correction illumination pattern |
JP6390163B2 (en) * | 2014-05-16 | 2018-09-19 | 株式会社リコー | Information processing apparatus, information processing method, and program |
EP3175609B1 (en) * | 2014-07-31 | 2022-02-23 | Hewlett-Packard Development Company, L.P. | Processing data representing an image |
US9917955B2 (en) | 2016-02-03 | 2018-03-13 | Onyx Graphics, Inc. | Spectral transmissive measurement of media |
FR3062505B1 (en) * | 2017-01-27 | 2020-10-02 | Continental Automotive France | METHOD OF DETECTION OF A MOVING OBJECT FROM A VIDEO STREAM OF IMAGES |
DE102017125799A1 (en) * | 2017-11-06 | 2019-05-09 | Carl Zeiss Industrielle Messtechnik Gmbh | Reduction of picture disturbances in pictures |
US20240265500A1 (en) * | 2022-07-29 | 2024-08-08 | The Institute Of Optics And Electronics, The Chinese Academy Of Sciences | Illumination field non-uniformity detection system, detection method, correction method, and device |
CN115265772B (en) * | 2022-07-29 | 2024-09-27 | 中国科学院光电技术研究所 | Illumination field non-uniformity detection system and method |
- 2006-05-15: US US11/383,406 patent/US8259179B2/en not_active Expired - Fee Related
- 2007-04-23: WO PCT/US2007/067215 patent/WO2007133898A1/en active Application Filing
Patent Citations (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5815203A (en) * | 1993-05-20 | 1998-09-29 | Goldstar Co., Ltd. | Zoom tracking apparatus and method in a video camera |
US6819359B1 (en) * | 1999-02-03 | 2004-11-16 | Fuji Photo Film Co., Ltd. | Method and apparatus for controlling the processing of signals containing defective pixels in accordance with imaging operation mode |
US6670988B1 (en) | 1999-04-16 | 2003-12-30 | Eastman Kodak Company | Method for compensating digital images for light falloff and an apparatus therefor |
US6687400B1 (en) | 1999-06-16 | 2004-02-03 | Microsoft Corporation | System and process for improving the uniformity of the exposure and tone of a digital image |
US20020145769A1 (en) * | 2001-02-16 | 2002-10-10 | Pollard Stephen B. | Digital cameras |
US20020140836A1 (en) * | 2001-03-28 | 2002-10-03 | Mitsubishi Denki Kabushiki Kaisha | Imaging device and manufacturing method thereof |
EP1292128A2 (en) | 2001-09-06 | 2003-03-12 | Ricoh Company, Ltd. | Device and method for image pickup |
US20030044066A1 (en) * | 2001-09-06 | 2003-03-06 | Norihiro Sakaguchi | Device and method for image pickup |
US20030052991A1 (en) | 2001-09-17 | 2003-03-20 | Stavely Donald J. | System and method for simulating fill flash in photography |
JP2003304443A (en) | 2002-04-10 | 2003-10-24 | Nikon Corp | Device and method for processing image |
US20040032952A1 (en) | 2002-08-16 | 2004-02-19 | Zoran Corporation | Techniques for modifying image field data |
US20040257454A1 (en) | 2002-08-16 | 2004-12-23 | Victor Pinto | Techniques for modifying image field data |
US20050041806A1 (en) * | 2002-08-16 | 2005-02-24 | Victor Pinto | Techniques of modifying image field data by extrapolation |
US20040239782A1 (en) * | 2003-05-30 | 2004-12-02 | William Equitz | System and method for efficient improvement of image quality in cameras |
US20080043117A1 (en) * | 2004-08-18 | 2008-02-21 | Mtekvision Co., Ltd, | Method and Apparatus for Compensating Image Sensor Lens Shading |
US20060109358A1 (en) * | 2004-11-24 | 2006-05-25 | Dong-Seob Song | System on a chip camera system employing complementary color filter |
US7570881B2 (en) * | 2006-02-21 | 2009-08-04 | Nokia Corporation | Color balanced camera with a flash light unit |
Non-Patent Citations (2)
Title |
---|
EPO/ISA, "Notification of Transmittal of the International Search Report and the Written Opinion of the International Searching Authority, or the Declaration," corresponding International Patent Application No. PCT/US2007/067215, mailed on Sep. 12, 2007, 14 pages. |
Petschnigg et al., "Digital Photography with Flash and No-Flash Image Pairs," Microsoft Corporation, Aug. 2004, 9 pages. |
Also Published As
Publication number | Publication date |
---|---|
US20070262235A1 (en) | 2007-11-15 |
WO2007133898A1 (en) | 2007-11-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8259179B2 (en) | Compensating for non-uniform illumination of object fields captured by a camera | |
US7755672B2 (en) | Techniques for modifying image field data obtained using illumination sources | |
US7834921B1 (en) | Compensation techniques for variations in image field data | |
US7817196B1 (en) | Techniques of modifying image field data by extrapolation | |
US7408576B2 (en) | Techniques for modifying image field data as a function of radius across the image field | |
JP4161295B2 (en) | Color imaging system that expands the dynamic range of image sensors | |
US8934035B2 (en) | Correction of non-uniform sensitivity in an image array | |
US8737755B2 (en) | Method for creating high dynamic range image | |
JP2009512303A (en) | Method and apparatus for removing vignetting in digital images | |
US20110043674A1 (en) | Photographing apparatus and method | |
US20080279471A1 (en) | Methods, apparatuses and systems for piecewise generation of pixel correction values for image processing | |
CN114930136A (en) | Method and apparatus for determining wavelength deviation of images captured by multi-lens imaging system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: ZORAN CORPORATION, CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:PERTSEL, SHIMON;REEL/FRAME:017995/0343 Effective date: 20060601 |
|
AS | Assignment |
Owner name: CSR TECHNOLOGY INC., CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:ZORAN CORPORATION;REEL/FRAME:027550/0695 Effective date: 20120101 |
|
AS | Assignment |
Owner name: QUALCOMM TECHNOLOGIES, INC., CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:CSR TECHNOLOGY INC.;REEL/FRAME:033134/0007 Effective date: 20140608 |
|
FEPP | Fee payment procedure |
Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
AS | Assignment |
Owner name: CSR TECHNOLOGY INC., CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:ZORAN CORPORATION;REEL/FRAME:036642/0395 Effective date: 20150915 |
|
REMI | Maintenance fee reminder mailed | ||
LAPS | Lapse for failure to pay maintenance fees | ||
STCH | Information on status: patent discontinuation |
Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362 |
|
FP | Lapsed due to failure to pay maintenance fee |
Effective date: 20160904 |
|
AS | Assignment |
Owner name: QUALCOMM INCORPORATED, CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:QUALCOMM TECHNOLOGIES, INC.;REEL/FRAME:041694/0336 Effective date: 20170210 |