AU2009201617A1 - Linear high dynamic range image acquisition - Google Patents

Linear high dynamic range image acquisition

Info

Publication number
AU2009201617A1
Authority
AU
Australia
Prior art keywords
image
sensitivity
wrapped
wrapping
sensor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
AU2009201617A
Inventor
Ross Ashman
David John Battle
Donald James Bone
Peter Alleine Fletcher
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Canon Inc
Original Assignee
Canon Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Canon Inc filed Critical Canon Inc
Priority to AU2009201617A priority Critical patent/AU2009201617A1/en
Publication of AU2009201617A1 publication Critical patent/AU2009201617A1/en
Abandoned legal-status Critical Current

Classifications

    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03BAPPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B15/00Special procedures for taking photographs; Apparatus therefor
    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03BAPPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B41/00Special techniques not covered by groups G03B31/00 - G03B39/00; Apparatus therefor

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Studio Devices (AREA)

Description

S&F Ref: 888224

AUSTRALIA
PATENTS ACT 1990
COMPLETE SPECIFICATION FOR A STANDARD PATENT

Name and Address of Applicant: Canon Kabushiki Kaisha, of 30-2, Shimomaruko 3-chome, Ohta-ku, Tokyo, 146, Japan
Actual Inventor(s): David John Battle, Donald James Bone, Peter Alleine Fletcher, Ross Ashman
Address for Service: Spruson & Ferguson, St Martins Tower, Level 35, 31 Market Street, Sydney NSW 2000 (CCN 3710000177)
Invention Title: Linear high dynamic range image acquisition

The following statement is a full description of this invention, including the best method of performing it known to me/us:

LINEAR HIGH DYNAMIC RANGE IMAGE ACQUISITION

TECHNICAL FIELD OF THE INVENTION

This invention relates to devices for capturing optical images having a wide range of intensity or "brightness" values. Such images are also referred to as having a large dynamic range. These images commonly result when photographing scenes containing both indoor and outdoor elements, such as scenes having a substantial sky component, or when high intensity light in a scene is reflected from smooth surfaces such as water, metal or glass.

BACKGROUND

Conventional digital image sensors based on charge coupled devices (CCD) or complementary metal oxide semiconductor (CMOS) devices have limited dynamic range. This limitation typically results from high noise, inadequate analogue-to-digital converter (ADC) range, or the limited ability of sensors to integrate photo-electrons before saturating.

Several attempts have been made to find alternative approaches to using the number of stored photo-electrons as a measure of intensity. These alternatives have generally involved either replacing or recycling electrons as the counting elements in photo-sensors. In one arrangement, it has been suggested that the passage of time rather than the number of stored electrons should be used as a measure of intensity.
According to this approach, a measure of intensity is obtained by measuring the time taken for an image sensor to saturate, or alternatively, by measuring the frequency with which the image sensor saturates.

Alternatively, by "wrapping" (discharging or recharging) the image sensor at, or just before, saturation (effectively recycling electrons), the dynamic range of the image sensor can be extended, provided that the number of "wraps" is stored in an auxiliary register.

Both the above approaches have problems in that a counter is required for each pixel in order to track the number of wraps, which complicates the pixel electronics, leading to large pixels and poor fill factors.

Another approach addresses pixel sensor saturation by reducing sensor sensitivity (defined here as the number of photons measured in a given time period for a given luminous intensity) as incident luminous intensity increases, so that at least some contrast is retained in the very bright image regions. Rather than a single linear output characteristic, therefore, the pixel sensor response is composed of multiple linear segments. In the limit, such an approach can approximate a logarithmic-type response that compresses a wide input dynamic range down to a more manageable range.

An undesirable aspect of this approach is that local contrast in logarithmically compressed images becomes progressively worse with increasing brightness. The sudden onset of saturation may be smoothed, but image "washout" still occurs in proportion to local brightness. Other problems with logarithmic and pseudo-logarithmic compression relate to colour reproduction, where it is advantageous to interpolate discrete colour channels in a linear space.

In yet another approach, circuit topologies have recently been demonstrated which utilise so-called "overflow" electrons and additional parallel capacitance in which to store them.
This additional capacitance is switched in at strategic times to capture overflow current from the photo-sensor. The values of these capacitors are selected to achieve a similar sensitivity shaping to that discussed above in logarithmic compression. However, to retain overall linearity, measured charges or voltages are multiplied by an inverse sensitivity ratio before summation. A significant drawback of this approach is that, despite the linearity correction, the instantaneous signal-to-noise ratio (SNR) drops markedly at each switching point. Switching from high to low sensitivity usually implies a lower photon counting rate with commensurately worse statistics. While careful selection of sensitivity ratios and integration times can limit the SNR dip, this behaviour is far from ideal.

SUMMARY

It is an object of the present invention to substantially overcome, or at least ameliorate, one or more disadvantages of existing arrangements.

Disclosed are arrangements, referred to as "MSB LSB Fusion" (MLF) arrangements, which seek to ameliorate the disadvantages of the above-noted approaches by acquiring images of a scene using low sensitivity and high sensitivity sensors, and then fusing (i.e. combining) the most significant bit (MSB) planes and the least significant bit (LSB) planes of the respective low sensitivity and high sensitivity images of the scene to reconstruct a High Dynamic Range (HDR) image.

The disclosed examples of MLF arrangements enable the capture of high dynamic range (HDR) images using a combination of "wrapping" and "non-wrapping" photo-sensors with high and low respective sensitivities, where the dynamic range of the low sensitivity sensor ensures that the sensor does not saturate.
The present implementations of the MLF arrangements provide a means of combining the numeric data corresponding to the outputs of these sensors and correcting errors in the low sensitivity data such that the overall dynamic range is extended while maintaining high SNR. One advantage of the MLF method with respect to the prior art is that it permits the acquisition of HDR images on a linear scale of brightness with high SNR. The modifications required to existing sensor technology are relatively modest, and the process for fusing the data from each sensor type is numerically simple, potentially involving only integer operations.

According to a first aspect of the present invention, there is provided a method for constructing a high dynamic range image of a scene, said method comprising the steps of:
capturing a low sensitivity image of the scene using a non-saturating low sensitivity sensor, said low sensitivity image being represented by a first number of bit planes;
capturing a high sensitivity image of the scene using a wrapping high sensitivity sensor, said high sensitivity image being represented by a second number of bit planes;
comparing the most significant bit planes of the high sensitivity image and the least significant bit planes of the low sensitivity image to determine correction data;
correcting the most significant bit planes of the low sensitivity image according to the correction data; and
appending the corrected most significant bit planes of the low sensitivity image to the bit planes of the high sensitivity image to form the high dynamic range image of the scene.
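The five steps recited in this first aspect can be sketched in Python. The bit widths, the size of the bit-plane overlap, and the wrap-around correction rule below are illustrative assumptions made for this sketch, not details taken from the specification:

```python
def fuse_mlf(d_low, d_high, n_bits=12, m_bits=6, l_bits=8):
    """Fuse an M-bit non-wrapped (low sensitivity) reading with an
    L-bit wrapped (high sensitivity) reading into an N-bit HDR value.

    Assumes m_bits + l_bits - n_bits overlapping bit planes shared by
    the two readings; the overlap yields the correction data used to
    repair small errors in the low sensitivity reading.
    """
    overlap = m_bits + l_bits - n_bits            # shared bit planes
    # LSBs of the low sensitivity reading that should agree with the
    # MSBs of the high sensitivity (wrapped) reading.
    low_overlap = d_low & ((1 << overlap) - 1)
    high_overlap = d_high >> (l_bits - overlap)
    # Correction data: difference in the overlapping planes, wrapped
    # into the smallest-magnitude signed representative.
    diff = (high_overlap - low_overlap) % (1 << overlap)
    if diff >= (1 << overlap) // 2:
        diff -= 1 << overlap
    d_low_corrected = d_low + diff                # corrected MSB planes
    # Append the corrected MSB planes to the wrapped LSB planes.
    return ((d_low_corrected >> overlap) << l_bits) | d_high
```

For example, with N = 12, M = 6 and L = 8, a scene value of 1000 gives an ideal low sensitivity reading of 15 and a wrapped reading of 232; the fusion recovers 1000 even when the low sensitivity reading is off by one in either direction.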
According to another aspect of the invention, there is provided an apparatus for constructing a high dynamic range image of a scene, said apparatus comprising:
a non-saturating low sensitivity sensor for capturing a low sensitivity image of the scene;
a wrapping high sensitivity sensor for capturing a high sensitivity image of the scene;
means for comparing the most significant bit planes of the high sensitivity image and the least significant bit planes of the low sensitivity image to determine correction data;
means for correcting the most significant bit planes of the low sensitivity image according to the correction data; and
means for appending the corrected most significant bit planes of the low sensitivity image to the bit planes of the high sensitivity image to form the high dynamic range image of the scene.

According to another aspect of the invention, there is provided a sensing method for acquiring high dynamic range images, said method comprising the steps of:
capturing a non-wrapped non-saturated image via a low sensitivity intensity measurement;
capturing a wrapped image via a high sensitivity wrapping intensity measurement; and
fusing the wrapped and non-wrapped images to form a high dynamic range image.

According to another aspect of the invention, there is provided a sensing apparatus for acquiring high dynamic range images, said apparatus comprising:
a first sensor for capturing a non-wrapped non-saturated image via a low sensitivity intensity measurement;
a second sensor for capturing a wrapped image via a high sensitivity wrapping intensity measurement; and
means for fusing the wrapped and non-wrapped images to form a high dynamic range image.

According to another aspect of the invention, there is provided an image sensor for acquiring high dynamic range images comprising both low sensitivity non-wrapping pixel sensors and high sensitivity wrapping pixel sensors deployed in a regular interleaved pattern.
According to another aspect of the invention, there is provided an image sensor wherein each pixel includes a low-sensitivity wrapping sensor and a high-sensitivity wrapping sensor under the same lens.

Other aspects of the invention are also disclosed.

BRIEF DESCRIPTION OF THE DRAWINGS

One or more embodiments of the invention will now be described with reference to the drawings, in which:
Fig. 1 is a depiction of a camera showing the image sensor in relation to the camera lens;
Fig. 2 illustrates the relationship between the non-wrapped and wrapped analogue signals acquired during dual integration and their overlapping binary representations;
Fig. 3 is a schematic circuit diagram of an example of the augmented electronics required to implement wrapping pixels according to the present implementations of the MLF arrangements;
Fig. 4 is a collection of histograms typical of wrapping and non-wrapping sensors during HDR acquisition at two distinct levels of dynamic range extension;
Figs. 5a and 5b are a pair of signal-to-noise ratio (SNR) plots characteristic of the performance of the present implementations of the MLF arrangements at two levels of dynamic range extension;
Fig. 6 is a lookup table used to correct the most significant bits (MSB) of an HDR image with respect to its least significant bits (LSB);
Fig. 7 is a flowchart illustrating the process steps involved in acquiring and fusing non-wrapped MSB and wrapped LSB image data to form HDR images using the dual integration approach;
Fig. 8 illustrates an alternative sensor arrangement in which non-wrapped MSB and wrapped LSB data are acquired at separate, interleaved, locations, alongside a diagram showing the conceptual overlap of digits in the numeric representations of their outputs;
Fig. 9 illustrates adaptive interpolation of non-wrapped low sensitivity sensor data to estimate the most significant bits (MSB) of image intensity in an arrangement where LSB and MSB data are sensed at interleaved locations;
Fig. 10 illustrates another alternative MLF arrangement in which two light sensitive sensors housed under a single lens are used to sense the MSB and LSB data at effectively the same location;
Fig. 11A is a cross-section diagram of an exemplary image capture system upon which the various MLF arrangements described can be practiced; and
Fig. 11B is a schematic block diagram for the controller of Fig. 11A.

DETAILED DESCRIPTION INCLUDING BEST MODE

Where reference is made in any one or more of the accompanying drawings to steps and/or features which have the same reference numerals, those steps and/or features have for the purposes of this description the same function(s) or operation(s), unless the contrary intention appears.
It is to be noted that the discussions contained in the "Background" section and that above relating to prior art arrangements relate to discussions of devices which form public knowledge through their use. Such discussions should not be interpreted as a representation by the present inventor(s) or the patent applicant that such devices in any way form part of the common general knowledge in the art.

As noted, the MLF arrangements acquire a first version and a second version of an image using non-saturating low sensitivity and wrapping high sensitivity sensors, and then fuse the MSB bit planes and the LSB bit planes of the respective low sensitivity and high sensitivity versions of the image to construct a High Dynamic Range (HDR) version of the image. The disclosed examples of MLF arrangements enable the capture of high dynamic range (HDR) images using a combination of "wrapping" and "non-wrapping" photo-sensors with high and low respective sensitivities, where the dynamic range of the low sensitivity sensor ensures that the sensor does not saturate.

The aforementioned versions of the image in question may, in regard to each pixel sensor, be acquired in one arrangement from a single pixel sensor during different (typically contiguous) time intervals. The aforementioned versions of the image in question may, in another arrangement, be acquired from different (typically adjacent) pixel sensors during a single time interval. It is desirable to minimise the respective time interval disparity, and pixel sensor spatial disparity, in order to obtain the best performance of the present implementations of the MLF arrangements.

Fig. 1 is a depiction of a camera 130 showing an image sensor 110 in relation to a camera lens 120. The present implementations of the MLF arrangements are adapted for acquiring and reconstructing linear, low noise, high dynamic range (HDR) imagery using new digital image sensing principles incorporated into the camera 130.
The term "linear" in this context means that the output from the image sensor is a linear function of the input light intensity. Central to the present implementation of the MLF arrangements is a new type of image sensor 110, which is depicted in Fig. 1 in a conventional sensor position with respect to the lens 120 in the camera 130. The sensor 110 consists of an array of light sensitive pixel sensors sampling the intensity of the optical flux (also referred to as optical intensity) focussed by the lens, thus electronically recording an image of the scene in front of the camera 130.

When an image is to be captured using the camera 130 or an equivalent image capturing apparatus, in one arrangement referred to as a temporally multiplexed arrangement (see Fig. 2 for example), a shutter control 1128 (see Figs. 11A and 11B) is operated in order to open the shutter 1114, 1120, or the integration period for the image sensor is initiated by purely electronic means by the processing unit 1150 under control of the MLF program 1161. Subsequently, light traversing the lens 120 is sensed by the sensor 110.

The amount of time Ttot during which image capture occurs (also referred to as the image capture period) is determined by one or more camera control parameters relating to the amount of ambient light, the type of scene to be captured (e.g. landscape, sporting event, etc), the depth of field and so on. Ttot can typically be defined manually, automatically, or using a hybrid approach in which some camera parameters are determined manually and the remainder are determined automatically. The image capture period Ttot (ie 201) comprises two time interval components depicted as Tlow (ie 291) and Thigh (ie 250), as described in more detail in regard to Fig. 2. Tlow and Thigh are determined, in one example, by suitable definition of K as described below.
Other MLF arrangements are also described, including a spatially interleaved approach which is described in relation to arrangements depicted in Figs. 8-9, and a dual sensor approach in which high and low sensitivity sensors are housed under a single micro-lens, as illustrated in Fig. 10.

In essence, to avoid the problem of saturation, optical intensity is sampled using both high and low sensitivities, Shigh and Slow, in the following ratio:

K = Shigh / Slow    (1)

Measurement sensitivity can be controlled by several different means, including neutral density filtering, active pixel area adjustment during manufacture, or by varying the integration time between resetting a pixel and subsequently reading out its analogue value. With reference to Eq. (1), the sensitivity ratio K also equals the ratio of integration periods Thigh and Tlow introduced above if sensitivity is controlled in this way. Generally, K can be any value, but particular hardware advantages (discussed later) accrue by arranging for K to be a power of two, such that

K = 2^(N-M)    (2)

where N is the desired number of bit planes in the HDR image and M is the number of bits derived from image measurements made with low sensitivity Slow.

Fig. 11A is a cross-section diagram of an exemplary image capture system 1100, upon which the various MLF arrangements described can be practiced. In the general case the image capture system 1100 is a digital still camera or a digital video camera (also referred to as a camcorder).
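Equations (1) and (2) can be illustrated numerically. The values of N, M and Ttot below, and the assumption that Tlow and Thigh together make up the whole of Ttot, are hypothetical choices for this sketch:

```python
# Sensitivity ratio per Eq. (1) and Eq. (2): K = Shigh / Slow = 2**(N - M).
N = 12                    # desired number of bit planes in the HDR image
M = 6                     # bit planes measured at low sensitivity Slow
K = 2 ** (N - M)          # sensitivity ratio, chosen as a power of two

# If sensitivity is controlled by integration time, K also equals the
# ratio Thigh / Tlow. Assuming the two intervals partition a total
# capture period Ttot (an assumption of this sketch):
T_tot = 0.01              # total capture period in seconds (illustrative)
T_low = T_tot / (1 + K)   # low sensitivity integration interval
T_high = K * T_low        # high sensitivity integration interval
```

With these choices K = 64, so the high sensitivity interval is 64 times longer than the low sensitivity interval, and 6 MSB planes plus up to 8 wrapped LSB planes can together span a 12-bit range.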
As seen in Fig. 11A, the camera system 1100 comprises an optical system 1102 which receives light from a scene 1101 and forms an image on the sensor 110. The sensor 110 comprises a 2D array of pixel sensors which measure the intensity of the image formed on it by the optical system as a function of position. The operation of the camera, including user interaction and all aspects of reading, processing and storing image data from the sensor 110, is coordinated by a main controller 1122 which comprises a special purpose computer system. This system is considered in detail below.

The user is able to communicate with the controller 1122 via a set of buttons including a shutter release button 1128, used to initiate focus and capture of image data, and other general and special purpose buttons 1124, 1125, 1126 which may provide direct control over specific camera functions such as flash operation or support interaction with a graphical user interface presented on a display device 1123. The display device may also have a touch screen capability to further facilitate user interaction. Using the buttons and controls it is possible to control or modify the behaviour of the camera. Typically it is possible to control capture settings such as the priority of shutter speed or aperture size when achieving a required exposure level, or the area used for light metering, use of flash, ISO speed, options for automatic focusing and many other photographic control functions. Further, it is possible to control processing options such as the colour balance or compression quality. The display 1123 is typically also used to review the captured image or video data. It is common for a still image camera to use the display to provide a live preview of the scene, thereby providing an alternative to an optical viewfinder 1127 for composing prior to still image capture and during video capture.
The optical system comprises an arrangement of lens groups 1110, 1112, 1113 and 1117 which can be moved relative to each other along a line 1131 parallel to an optical axis 1103 under control of a lens controller 1118 to achieve a range of magnification levels and focus distances for the image formed at the sensor 110. The lens controller 1118 may also control a mechanism 1111 to vary the position, on any line 1132 in the plane perpendicular to the optical axis 1103, of a corrective lens group 1112, in response to input from one or more motion sensors 1115, 1116 or the controller 1122, so as to shift the position of the image formed by the optical system on the sensor 110. Typically the corrective optical element 1112 is used to effect an optical image stabilisation by correcting the image position on the sensor for small movements of the camera such as those caused by hand-shake. The optical system may further comprise an adjustable aperture 1114 and a shutter mechanism 1120 for restricting the passage of light through the optical system. Although both the aperture and shutter are typically implemented as mechanical devices, they may also be constructed using materials, such as liquid crystal, whose optical properties can be modified under the control of an electrical control signal. Such electro-optical devices have the advantage of allowing both the shape and the opacity of the aperture to be varied continuously under control of the controller 1122.

Fig. 11B is a schematic block diagram for the controller 1122 of Fig. 11A, in which other components of the camera system which communicate with the controller are depicted as functional blocks. In particular, the image sensor 110 and lens controller 1198 are depicted without reference to their physical organisation or the image forming process and are treated only as devices which perform specific pre-defined tasks and to which data and control signals can be passed. Fig. 11B also depicts a flash controller 1199, which is responsible for operation of a strobe light that can be used during image capture in low light conditions, as well as auxiliary sensors 1197 which may form part of the camera system. Auxiliary sensors may include orientation sensors that detect if the camera is in a landscape or portrait orientation during image capture; motion sensors that detect movement of the camera; and other sensors that detect the colour of the ambient illumination or assist with autofocus and so on. Although these are depicted as part of the controller 1122, they may in some implementations be implemented as separate components within the camera system.

The controller comprises a processing unit 1150 for executing program code, Read Only Memory (ROM) 1160 and Random Access Memory (RAM) 1170, as well as non-volatile mass data storage 1192. In addition, at least one communications interface 1193 is provided for communication with other electronic devices such as printers, displays and general purpose computers. Examples of communication interfaces include USB, IEEE 1394, HDMI and Ethernet. An audio interface 1194 comprises one or more microphones and speakers for capture and playback of digital audio data. A display controller 1195 and button interface 1196 are also provided to interface the controller to the physical display and controls present on the camera body. The components are interconnected by a data bus 1181 and control bus 1182.

In a capture mode, the controller 1122 operates to read data from the image sensor 110 and audio interface 1194 and manipulate that data to form a digital representation of the scene that can be stored to a non-volatile mass data storage 1192.
In the case of a still image camera, image data may be stored using a standard image file format such as JPEG or TIFF, or it may be encoded using a proprietary raw data format that is designed for use with a complementary software product that would provide conversion of the raw format data into a standard image file format. Such software would typically be run on a general purpose computer. For a video camera, the sequences of images that comprise the captured video are stored using a standard format such as DV, MPEG or H.264. Some of these formats are organised into files such as AVI or QuickTime, referred to as container files, while other formats such as DV, which are commonly used with tape storage, are written as a data stream. The non-volatile mass data storage 1192 is used to store the image or video data captured by the camera system and has a large number of realisations including but not limited to removable flash memory such as a compact flash (CF) or secure digital (SD) card, memory stick, multimedia card, miniSD or microSD card; optical storage media such as writable CD, DVD or Blu-ray disk; or magnetic media such as magnetic tape or hard disk drive (HDD), including very small form-factor HDDs such as microdrives. The choice of mass storage depends on the capacity, speed, usability, power and physical size requirements of the particular camera system.

In a playback or preview mode, the controller 1122 operates to read data from the mass storage 1192 and present that data using the display 1123 and audio interface 1194.

The processor 1150 is able to execute programs stored in one or both of the connected memories 1160 and 1170. When the camera system 1100 is initially powered up, system program code 1161, resident in ROM memory 1160, is executed. This system program, permanently stored in the camera system's ROM, is sometimes referred to as firmware.
Execution of the firmware by the processor fulfils various high level functions, including processor management, memory management, device management, storage management and user interface.

The processor 1150 includes a number of functional modules including a control unit (CU) 1151, an arithmetic logic unit (ALU) 1152, a digital signal processing engine (DSP) 1153 and a local or internal memory comprising a set of registers 1154 which typically contain atomic data elements 1156, 1157, along with internal buffer or cache memory 1155. One or more internal buses 1159 interconnect these functional modules. The processor 1150 typically also has one or more interfaces 1158 for communicating with external devices via the system data 1181 and control 1182 buses using a connection 1155.
The system program 1161 includes a sequence of instructions 1162 through 1163 that may include conditional branch and loop instructions. The program 1161 may also include data which is used in execution of the program. This data may be stored as part of the instruction or in a separate location 1164 within the ROM 1160 or RAM 1170.

In general, the processor 1150 is given a set of instructions which are executed therein. This set of instructions may be organised into blocks which perform specific tasks or handle specific events that occur in the camera system. Typically the system program will wait for events and subsequently execute the block of code associated with that event. This may involve setting into operation separate threads of execution running on independent processors in the camera system, such as the lens controller 1198, that will subsequently execute in parallel with the program running on the processor. Events may be triggered in response to input from a user as detected by the button interface 1196. Events may also be triggered in response to other sensors and interfaces in the camera system.

The execution of a set of the instructions may require numeric variables to be read and modified. Such numeric variables are stored in RAM 1170. The disclosed method uses input variables 1171 that are stored in known locations 1172, 1173 in the memory 1170. The input variables are processed to produce output variables 1177 that are stored in known locations 1178, 1179 in the memory 1170. Intermediate variables 1174 may be stored in additional memory locations 1175, 1176 of the memory 1170. Alternatively, some intermediate variables may only exist in the registers 1154 of the processor 1150.

The execution of a sequence of instructions is achieved in the processor 1150 by repeated application of a fetch-execute cycle.
The Control unit 1151 of the processor 25 maintains a register called the program counter which contains the address in memory 1160 of the next instruction to be executed. At the start of the fetch execute cycle, the contents of the memory address indexed by the program counter is loaded into the control unit. The instruction thus loaded controls the subsequent operation of the processor, causing for example, data to be loaded from memory into processor registers, the contents 30 of a register to be arithmetically combined with the contents of another register, the contents of a register to be written to the location stored in another register and so on. At the end of the fetch execute cycle the program counter is updated to point to the next instruction in the program. Depending on the instruction just executed this may involve -12 incrementing the address contained in the program counter or loading it with a new address in order to achieve a branch operation. Each step or sub-process in the processes of flow charts such as depicted in Fig. 7 are associated with one or more segments of the program 1161, and is performed by 5 repeated execution of a fetch-execute cycle in the processor 1150 or similar programmatic operation of other independent processor blocks in the camera system. Fig. 2 illustrates the relationship, in a temporal multiplexed MLF arrangement, between the non-wrapped and wrapped analogue signals acquired during dual integration and their overlapping binary representations. The image capture period Tot (ie 201), 10 during which light is received by each pixel sensor in the image sensor 110, is determined as described in regard to Fig. 1 above. The time interval Toi comprises the two time interval components Tiew (ie 291) and Thigh (ie 250). As described in more detail in regard to Fig. 3, during the time interval Tio, (ie 291) the pixel sensor in question operates in a low sensitivity mode and accumulates (i.e. 
integrates) charge or voltage (depending upon the type of sensor) at a rate depicted by the slope of a line segment 202, where this slope is, in turn, dependent upon the intensity of the light received by the sensor. After the interval T_low, the charge or voltage is read out and the pixel sensor is reset, thus bringing the accumulated charge or voltage from a point 203 back to zero. The integrated charge or voltage V_low at the reset point 203 is used later as described in regard, for example, to Fig. 7.

For the remainder of the time interval T_tot (i.e. T_high, which is depicted by the reference number 250), the pixel sensor in the present example operates in a high sensitivity mode and accumulates charge or voltage in a sawtooth fashion. Considering a first sawtooth segment 205 (also referred to as a wrapping cycle, wrapping segment or the like), the pixel sensor accumulates charge or voltage at a rate depicted by the slope of a line segment 204, where this slope is, in turn, dependent upon the intensity of the light received by the sensor. When the accumulated charge reaches a threshold voltage V_th (i.e. 270) at a point 206, the sensor is reset from the charge or voltage V_th to zero, and the subsequent sawtooth segment commences. The pixel sensor thus operates successively in a non-wrapping low sensitivity mode and then in a wrapping high sensitivity mode.

After a final reset point 207 during a final sawtooth segment 208 in the image capture period T_tot, the pixel sensor accumulates charge or voltage at a rate depicted by the slope of a line segment 280. When the elapsed time reaches T_tot, the value of the integrated charge or voltage V_high is captured for later processing as described in regard, for example, to Fig. 7. During the time interval T_high (i.e. 250) in the described example in Fig. 2, it is noted that the MLF arrangement wraps (i.e. resets) four times.
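The wrapped high sensitivity integration just described can be sketched numerically. The following is a toy model only (the function name and numeric values are mine, not from the patent): a pixel accumulating charge at a constant rate with autonomous resets at the threshold ends the period with a residue equal to the total accumulated charge modulo the threshold.

```python
def wrapped_readout(rate, t_high, v_th):
    """Model a wrapping pixel: integrate `rate * t_high` units of charge,
    resetting to zero each time the accumulated value reaches v_th.
    Returns (number of wraps, residual value read out at the end)."""
    total = rate * t_high           # total charge that would accumulate unwrapped
    return int(total // v_th), total % v_th

# 4.5 units of charge against a threshold of 1.0: four resets occur and a
# residue of 0.5 remains, matching the four wraps of the Fig. 2 example.
wraps, residue = wrapped_readout(rate=4.5, t_high=1.0, v_th=1.0)
```

The residue is what the L-bit ADC digitises; the number of wraps is never measured directly and must instead be inferred from the low sensitivity data, which is the point of the fusion scheme described below.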
With reference to Fig. 2 and equation (2), M is the number of most significant bits (MSB) 211 which are acquired during the period T_low (i.e. 291) using the low sensitivity S_low, while the L least significant bits (LSB) 212 are acquired during the period T_high (i.e. 250) with high sensitivity S_high.

Ordinarily, the high sensitivity measurements acquired during the period T_high (i.e. 250) with sensitivity S_high would be the first to reach a saturation level 210. However, according to the present implementations of the MLF arrangements, the internal circuitry associated with each pixel sensor is arranged to autonomously and asynchronously reset, or "wrap", at the prescribed common threshold 270. The wrapping is autonomous in the sense that it is performed dependent upon the charge or voltage integrated by the pixel sensor in question, and without requiring external control. The wrapping is asynchronous in the sense that it occurs for each pixel sensor independently of the wrapping occurring for other pixel sensors.

In an analogue sense, the photodiode voltage associated with each pixel sensor follows, during the time interval T_low, a linear ramp 202 until the end of the integration period 291, at which time its value 290 is read out and digitised via an M-bit ADC 213. In an analogue sense, the photodiode voltage associated with each pixel sensor follows, during the time interval T_high, the sawtooth waveform 280, until the end 215 of the integration period 250, at which time its value 260 is read out and digitised via an L-bit ADC 214.

This MLF arrangement effectively decouples the LSB 212 from hypothetical higher-order bits of its binary representation. The MLF arrangement also effectively decouples the MSB 211 from hypothetical lower-order bits of its binary representation. For an incident illuminance far exceeding the dynamic range of the sensor, therefore, the L bits 212 of the associated digital data retain their significance.
If the sensor had not reset, all information would have been lost at the saturation point.

Fig. 3 is a schematic circuit diagram of an example of a pixel sensor including the augmented electronics required to implement wrapping pixels according to the present implementations of the MLF arrangements. The additional components required to implement pixel wrapping in a standard 3/4-transistor CMOS active pixel sensor are shown in region 330. Electrically, the circuit in Fig. 3 operates with opposite polarity to that depicted in Fig. 2, by virtue of the photodiode 310 discharging as it captures incident photons 311.

Considering, for example, the sawtooth segment 205 in Fig. 2, a photodiode pixel sensor such as 310 in the image sensor 110 in the camera 130, upon which light 311 is being focussed by the lens 120, accumulates charge or voltage as depicted by the sawtooth line segment 204 in Fig. 2. The pixel sensor is recharged (i.e. reset or wrapped) to the supply voltage 360 at the point 206 via a momentary reset signal 350 generated by a mono-stable element 340 as soon as the comparator 320 senses an integrated voltage 280 less than the threshold voltage 370. Asynchronously to the aforementioned autonomous resetting behaviour, the photodiode 310 and the diffusion capacitance 312 can be reset as per conventional image sensor technology via transistors Q1 and Q2 to coordinate overall sensor operation.

The present implementations of the MLF arrangements can be applied to pixel electronics alternative to those illustrated in Fig. 3, including variations applicable to charge coupled devices (CCDs). The present implementations of the MLF arrangements sample LSB and MSB image data independently such that neither saturates. The resulting L least significant bits 212 and M most significant bits 211 are subsequently fused via a process (described in more detail in regard to Fig.
7) to reconstruct an N-bit HDR image.

In the first MLF arrangement to be discussed, it is assumed that both low and high sensitivity measurements are available from each pixel location, such that two complete images, i.e. low sensitivity and wrapped high sensitivity versions of the image to be captured, are acquired on each exposure. This arrangement uses a temporal multiplexing approach which acquires, in quick succession, both a non-wrapped low sensitivity image during T_low and a wrapped high sensitivity image during T_high from the same pixel sensor.

Returning to Fig. 2, one way of arranging for this is to employ dual integrations: one integration with a short duration T_low 291, corresponding to a low sensitivity S_low, followed by a second integration with a longer duration T_high 250, corresponding to a high sensitivity S_high. In this case, the fact that the pixel electronics are configured to wrap is inconsequential to acquiring the low sensitivity image, because the integration period T_low 291 is selected such that no pixel saturates for a given peak image intensity.

Conceptually, the fusion of MSB and LSB information occurs as illustrated in Fig. 2, in which the analogue signal levels 290 and 260, corresponding respectively to low and high sensitivity integrations, are shown in relation to their respective binary representations 211 and 212. The first step in fusing the M-bit MSB data 211 and the L-bit LSB data 212 to form N-bit HDR data is multiplying the MSB data 211 by K (see equation (1)) to correctly align the significance of its binary digits. Here, the advantage of maintaining K as a power of 2, as in equation (2), is evident, as multiplication by a power of two equates to a simple shifting operation which is easily implemented in hardware.
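The equivalence between multiplication by a power-of-two K and a bit shift can be shown in two lines (the particular values of K and the sample word below are illustrative assumptions, not taken from the patent):

```python
# When K = 2**s, multiplying the MSB word by K is identical to shifting it
# left by s bit positions - the cheap hardware alignment noted in the text.
K = 1 << 5          # example sensitivity ratio K = 2**5 = 32 (assumed value)
msb = 0b1011        # example 4-bit MSB sample (assumed value)
aligned = msb * K   # identical to msb << 5
```

Because only a shift is involved, the alignment step contributes no rounding error and no multiplier circuitry, which is why the text recommends constraining K to a power of two.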
Where K is maintained as a power of 2, the aforementioned "alignment of the significance of the binary digits" means shifting the MSB bits 211 by the desired shift amount 216 so that the fusion of the MSB data 211 and the LSB data 212 results in fused data 217 having the desired number of bits (i.e. the desired dynamic range).

Because of the V overlapping digits 230 in the binary numbers 211 and 212, redundancy is introduced such that inevitable disparities between the overlapping MSB and LSB data (211 and 212 respectively) can be resolved. These disparities are due to both deterministic and statistical uncertainties, including errors in obtaining the desired sensitivity ratio K, as well as the fundamentally worse Poisson statistics associated with reduced photon counts; i.e., as the MSB data 211 corresponds to fewer photons, it will be contaminated by Poisson noise to a greater extent than the LSB data 212. Put another way, the overlapping bits 230 measured during the high sensitivity (wrapping) integration period T_high are more reliable than those measured during the low sensitivity (non-wrapping) integration period T_low. For this reason, the MSB data 211 is always, in the described MLF arrangements, corrected with respect to the LSB data 212 rather than vice versa. It is essential, therefore, that a mechanism is provided for correcting the MSB with respect to the LSB such that their V overlapping bits agree.

In Fig. 2, the relative significance of the MSB data 211 is shown multiplied by K (i.e. shifted by the appropriate number 216 of bit positions). The impact of reduced sensitivity on the statistics of the MSB data for a given uniform illumination of the pixel sensor in question is illustrated in Figs. 4a and 4b, where M = L = 14 bits and the intended dynamic range of the output is initially N = 19 bits. With this specification, the number of overlapping bits V is given by

V = M + L - N = 9
(3)

where M and L are the numbers of bits associated with the MSB and LSB data respectively.

Figs. 4a and 4b are a collection of histograms typical of wrapping and non-wrapping sensors during HDR acquisition at two distinct levels of dynamic range extension. The histograms represent the statistical results of measuring, or simulating, the number of times particular bit patterns or "codes" arise during many simultaneous or sequential pixel measurements. The term "code", as used in regard to the MSB data 211 or the LSB data 212, means the value that the respective overlap bits 219, 218 take in a particular measurement, assuming a constant luminous intensity across all measurements. It is noted that wrapping and non-wrapping sensors may, in the described MLF arrangements, be implemented either using a temporal multiplexing approach applied to a single pixel sensor (as depicted in Fig. 2) or, alternately, using a spatially interleaved approach using different pixel sensors (as depicted in Figs. 8-10).

The vertical "frequency" axis in the histograms in Figs. 4a and 4b defines, for each code word referred to by the horizontal "code" axis, the number of times that the particular code word arises during the set of measurement/simulation runs considered. The horizontal "code" axis in the histograms relates to the possible values of the overlap bits 219, 218 in the example being considered. Accordingly, for M = L = 14 bits and an intended output dynamic range of N = 19 bits, and thus V = M + L - N = 9, as depicted in Fig. 4a the possible 9-bit code words (defined by V = 9) can take on 512 different values, as depicted by the range of the horizontal axis.

The "data variance" referred to below relates, in regard to the MSB data 211, to the statistical variation in the values of the code words 219 associated with the MSB data 211 for the set of measurements/simulations for a fixed level of incoming light intensity.
The data variance in regard to the LSB data 212 refers to the statistical variation between the values of the code words 218 associated with the LSB data 212 for the set of measurements/simulations for the same fixed level of incoming light intensity.

In Fig. 4a, the histogram width (and hence variance) of the MSB overlap data 410 can be seen to be larger than that of the LSB overlap data 420, despite their respective means 413, 414 agreeing as expected. Thus, for example, the width 411 of the distribution of MSB overlap data 219 is seen to be much larger than the width 412 of the distribution of LSB overlap data 218. Also noteworthy in Fig. 4a is the circular nature of the binary codes, which have wrapped around from the high end 415 to the low end 416 of the overlap range (0-511).

Fig. 4b represents a similar situation to Fig. 4a, except that the dynamic range has been extended to N = 22 bits and the overlap V has been commensurately reduced from 9 to 6 bits. Accordingly, for M = L = 14 bits and an intended output dynamic range of N = 22 bits, and thus V = M + L - N = 6, as depicted in Fig. 4b the possible 6-bit code words (defined by V = 6) can take on 64 different values, as depicted by the range of the horizontal axis. In this case, it is apparent that the distribution of MSB codes 430 is no longer contained within the range of the 6 overlapping bits (0-63), but has been spread out almost uniformly.

When it comes to associating the overlapping LSB bits with the MSB overlap data 219 which is nearest in a binary counting sense, the situations are quite different between Figs. 4a and 4b. The phrase "association in a binary counting sense" means seeking a matching LSB code word 218 whose numerical value is closest to the associated MSB code word 219 regardless of the counting direction or binary wrapping. In the former case in Fig.
4a, the MSB overlap 219 can be corrected to agree with any LSB overlap 218 by adding or subtracting a value less than half the binary range of the overlap, i.e., 2^(V-1). The aforementioned correction is intended to adjust the measured MSB value 211 such that its lower V bits coincide with the upper V bits of the measured LSB value 212. With the constraint that the distribution of MSB overlap codes fits within the 9-bit overlap range, there is a high probability that the correct MSB overlap code is within this distance. Taking the top N - L bits (depicted as 221 in Fig. 2) of the corrected MSB and appending these to the LSB (depicted as 212 in Fig. 2) yields the desired N bits (i.e. 217) of output dynamic range.

In the case of Fig. 4b, however, simply obtaining agreement between the MSB and LSB overlap codes, 219 and 218, cannot guarantee selection of the correct MSB, as the histogram of MSB overlap codes 430 has undergone an unknown number of wraps. While sub-optimal, this situation still has a statistical chance of success and leads, in practice, to an SNR which is degraded, but far better than if no correction had been applied.

To identify the approximate cross-over point between the two regimes described above, it can be noted that the histogram of MSB overlap codes should fall within, or at least fit as completely as possible within, the overlap range to prevent ambiguous wrapping. In the following, the condition

8 σ_M < 2^V  (4)

is imposed, where σ_M is the standard deviation of the MSB overlap codes. Assuming a standard Poisson model, the variance of the photon (and hence photo-electron) counts should equal the mean count, but σ_M also involves a scaling factor G due to the multiple electrons measured per ADC bit, which leads to an effective reduction in the standard deviation of the ADC output by a factor 1/G. The maximum expected value of σ_M, therefore, is given by

σ_M = √(2^M / G) = 2^(M/2) / √G  (5)

where M is the number of MSB bits.
Substituting (5) into (4) yields the approximate condition for combining wrapped and non-wrapped image data to form linear HDR images with a low probability of error:

V > M/2 + 3 - log2(G)/2  (6)

where the term "linear" in this context means that the output from the image sensor using the present implementations of the MLF arrangements is a linear function of the input light intensity.

Figs. 5a and 5b are a pair of signal-to-noise ratio (SNR) plots characteristic of the performance of the present implementations of the MLF arrangements at two levels of dynamic range extension. Figs. 5a and 5b illustrate the theoretical (Poisson only) noise performance for the two cases just discussed in relation to Figs. 4a and 4b. In all of the following, G was taken to be 10 electrons per bit.

Fig. 5a shows an unbroken 10 dB/decade SNR improvement 520 characteristic of Poisson-limited noise. The wrapping point for the LSB data is indicated by the vertical dashed line 510. The wrapping point 510 depicts the level of sensor output (and thus indirectly the level of sensor input) at which the integrated charge or voltage 204 in Fig. 2 intersects the threshold level 270 at the point 206. The results in Fig. 5a relate to Fig. 4a, in which M = L = 14 bits and the intended dynamic range of the output is N = 19 bits. Applying equation (3) gives V = 9 bits, which satisfies equation (6) in this case. In other words, since equation (6) is satisfied by virtue of having sufficient overlapping bits 230, a monotonic SNR characteristic is obtained.

Fig. 5b, however, illustrates a case where the dynamic range has been extended to N = 22 bits, reducing the overlap range to V = 6 bits. This value of V fails to satisfy equation (6), and the instantaneous SNR is seen to dip approximately 15 dB before again increasing monotonically at 10 dB/decade.
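The design rule of equation (6) is easy to evaluate directly. The sketch below (the function name is mine) checks the two worked cases from the text: with M = 14 bits and G = 10 electrons per bit, the bound on V is roughly 8.3 bits, so V = 9 (N = 19) satisfies the condition while V = 6 (N = 22) does not.

```python
import math

def overlap_sufficient(M, G, V):
    """Equation (6): the overlap V must exceed M/2 + 3 - log2(G)/2 for a
    low probability of ambiguous wrapping in the MSB overlap codes."""
    return V > M / 2 + 3 - math.log2(G) / 2

ok_19_bit = overlap_sufficient(M=14, G=10, V=9)   # the Fig. 4a/5a case
ok_22_bit = overlap_sufficient(M=14, G=10, V=6)   # the Fig. 4b/5b case
```

This is exactly the check a camera could embed to select M, L and V automatically, as suggested later in the text.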
While the undesirability of such an SNR reduction has already been mentioned, it should also be noted that, even with the dynamic range extended to 22 bits, the absolute average SNR never drops below 45 dB, which could be acceptable in many circumstances.

Fig. 6 is a lookup table used to correct the most significant bits (MSB) 211 of an HDR image with respect to its least significant bits (LSB) 212 on a per-pixel basis. Fig. 6 exemplifies the MSB corrections in the form of a lookup table (LUT) for the particular case V = 3. These corrections are used to increment or decrement the binary value of the entire MSB 211 such that its lower V bits 219 agree with the corresponding upper V bits 218 of the LSB 212. With the LSB overlap codes 218 addressing the rows of the LUT 610 and the MSB overlap codes 219 addressing its columns 620, the required corrections can be addressed from memory as shown.

The salient features of Fig. 6 include zero corrections on the main diagonal 630 and unit increments above and below. If the row and column addresses differ by 2^(V-1), such as depicted by regions 640 and 650, for example, there is ambiguity as to whether the corrections are positive or negative, as both are equal distances in a binary counting sense. In such cases the corrections in the regions 640 and 650 default to zero. For an overlap range of V bits, the maximum correction magnitude is 2^(V-1) - 1. Depending on the hardware architecture and computational resources available, the simplicity of the arithmetic involved may not justify the memory associated with the LUT for large overlaps. In such cases, alternative arrangements could employ explicit integer arithmetic in place of the LUT.

Fig. 7 is a flowchart illustrating the process steps involved in acquiring and fusing non-wrapped MSB and wrapped LSB image data to form HDR images using the dual integration implementations of the MLF arrangements.
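The circular correction that the Fig. 6 LUT tabulates can equally be computed with the explicit integer arithmetic mentioned above. The following sketch (the function name is mine, and this is one plausible reading of the scheme, not the patent's own code) reproduces the salient features described: zero on the diagonal, wrap-around differences, a zero default at the ambiguous half-range point, and a maximum magnitude of 2^(V-1) - 1.

```python
def msb_correction(msb_code, lsb_code, V):
    """Signed value to add to the full MSB word so that its lower V bits
    agree with the upper V bits of the LSB, using circular binary arithmetic."""
    size = 1 << V
    half = 1 << (V - 1)
    d = (lsb_code - msb_code) % size   # raw circular difference, 0 .. 2**V - 1
    if d == half:                      # equidistant in both directions:
        return 0                       # default to zero, as in Fig. 6
    return d - size if d > half else d # map into the range (-half, half)
```

For V = 3 this yields, for example, no correction for equal codes, +1 for adjacent codes (including the wrap from 7 to 0), and never a magnitude above 3.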
Following the sensor reset 710 at the beginning of the image capture period, there are two successive integration periods, T_low and T_high, during which the (non-wrapped) MSB and (wrapped) LSB data are acquired and stored in the memory 1170 by the processing unit 1150 under control of the MLF program 1161. In neither case do any pixels saturate. Following analogue-to-digital conversion by the processing unit 1150 of the two signals read out from each pixel at 740, the MSB data acquired during the T_low integration are multiplied by K in the alignment step 750 (or shifted relative to the LSB data if K is a power of 2). Next, the error correction step at 750 involves either calculating or addressing pre-calculated corrections by the processing unit 1150 under control of the MLF program 1161 and applying them to the MSB data. Finally, the top N - L bits of the corrected MSB are appended to the LSB data by the processing unit 1150 under control of the MLF program 1161 to form an N-bit high dynamic range (HDR) output image.

The first arrangement, just discussed, employed temporal multiplexing to acquire (non-wrapped) low sensitivity and (wrapped) high sensitivity images during a single exposure. Fig. 8 illustrates an alternative MLF arrangement in which non-wrapped MSB and wrapped LSB data are acquired at separate, spatially interleaved locations. Fig. 8 also illustrates the conceptual overlap of digits in the numeric representations of their outputs. This second MLF arrangement, which may be more resistant to the effects of movement during the exposure period T_tot than the first, temporally multiplexed, arrangement, involves spatial multiplexing and adaptive interpolation to achieve dynamic range extension at the expense of image resolution. In this alternative arrangement, two distinct types of light sensitive pixels are deployed in, for example, a regular interleaved pattern, to thus occupy spatially interleaved locations, which in Fig.
8 have been denoted types I and is m. Here, the set of m-type locations 820 contain the low-sensitivity (non-wrapping) sensors associated with the most significant bits of the image data and the set of 1-type locations 810 contain the high-sensitivity (wrapping) sensors associated with the least significant bits of the image data. As before, these two numbers overlap by V digits such that the MSB 830 can be corrected with respect to the LSB 840. 20 Fig. 9 illustrates adaptive interpolation of non-wrapped low sensitivity sensor data to estimate the most significant bits (MSB) of image intensity in an arrangement where LSB and MSB data are sensed at spatially interleaved locations. The adaptive interpolation is intended to reduce the effects arising from the fact that the pixel sensors are at different spatial locations. 25 The unit cell from Fig. 8, wherein each i-type location is completely surrounded by m-type locations is shown expanded in Fig. 9. This arrangement permits four paths of interpolation for estimating the most significant bits at the central i-type location 930, where they are not measured. In general, it can be expected that the MSB data, more representative of large scale structure, will change less rapidly than the LSB data, which is 30 more affected by noise, making the former more amenable to interpolation. The set of four interpolated MSB estimates 940 are then compared to the measured LSB value 950 in the same way discussed previously in relation to the first arrangement. The overlapping bits from each are either subtracted in circular binary arithmetic to generate MSB corrections, or the same overlapping bits can be used to -21 address the rows and columns of a pre-computed lookup table (LUT) in memory. With the set of four MSB estimates available, that requiring the smallest correction (and hence possessing the highest likelihood) is selected. As illustrated in Fig. 
9, the smallest correction is seen to be associated with estimate 960, which was interpolated from the measured MSB data 910 and 920.

From Figs. 8 and 9, it is apparent that HDR image values are estimated only at l-type locations, leading to a potential loss of image resolution. Thus, in this second arrangement, extended dynamic range is achieved at the expense of image resolution. On many cameras utilising high-resolution image sensors, this tradeoff would be completely acceptable, especially if it were implemented as an optional acquisition mode. On depressing a special "HDR" button, for example, the image sensor could be dynamically re-configured such that l-type pixels became wrapping pixels and m-type pixels remained non-wrapping pixels, though with a reduced integration period, as determined by the metering system of the camera.

Fig. 10 illustrates another alternative MLF arrangement in which two light sensitive sensors housed under a single lens are used to sense the MSB and LSB data at effectively the same location. In this third arrangement, low sensitivity (non-wrapping) pixel sensors 1010 and high sensitivity (wrapping) pixel sensors 1020 are effectively co-located, perhaps under a single micro-lens 1030 as illustrated in Fig. 10. This arrangement obviates the requirements for either dual sampling (as in the temporal multiplexing arrangement) or adaptive interpolation (as in Figs. 8-9), yet is amenable to the same HDR reconstruction algorithm described in relation to those arrangements. As discussed previously, the desired sensitivity ratio K between the sensor types can be achieved by controlling the relative sizes of their photodiodes during manufacture, by applying neutral density filtering, or by controlling their integration periods.

The disclosed MLF arrangements can be incorporated into image capture equipment in a number of different forms.
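Returning to the spatially interleaved arrangement of Fig. 9, the smallest-correction selection can be sketched as follows. The helper names and sample values below are mine; the point is only to show "pick the interpolated MSB estimate whose lower V bits are circularly closest to the measured LSB overlap code".

```python
def circular_distance(a, b, V):
    """Distance between two V-bit overlap codes on the circular code space."""
    d = (a - b) % (1 << V)
    return min(d, (1 << V) - d)

def best_msb_estimate(estimates, lsb_overlap, V):
    """Of the interpolated MSB estimates, return the one whose lower V bits
    need the smallest circular correction to match the LSB overlap code."""
    mask = (1 << V) - 1
    return min(estimates,
               key=lambda e: circular_distance(lsb_overlap, e & mask, V))

# Four hypothetical interpolated estimates, V = 3 overlap bits, and a
# measured LSB overlap code of 6: the estimate ending in ...110 wins.
estimates = [0b10101, 0b10110, 0b10011, 0b11000]
chosen = best_msb_estimate(estimates, lsb_overlap=6, V=3)
```

The same circular-distance machinery serves both this selection step and the per-pixel correction of the first arrangement, which is why the text notes that all three arrangements share one reconstruction algorithm.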
In one arrangement, a digital camera could be fitted with an HDR image sensor employing the time-division multiplexing arrangement introduced here. By redesigning the individual pixel sensors along the lines of Fig. 3, the sensitivity ratio K in equation (1) could be controlled dynamically by the camera's metering system to maximise the dynamic range of captured images. The MSB length M, the LSB length L, as well as the appropriate number of overlapping bits V, could all be automatically selected by the camera with an embedded form of equation (6). Following data acquisition of the MSB and LSB images as described above, the MSB image might be appropriate for an immediate preview of the HDR image obtained by on-camera fusion of the MSB and LSB images as described.

INDUSTRIAL APPLICABILITY

The arrangements described are applicable to the image capture and processing industries and particularly to the camera industry.

The foregoing describes only some embodiments of the present invention, and modifications and/or changes can be made thereto without departing from the scope and spirit of the invention, the embodiments being illustrative and not restrictive.

In the context of this specification, the word "comprising" means "including principally but not necessarily solely" or "having" or "including", and not "consisting only of". Variations of the word "comprising", such as "comprise" and "comprises", have correspondingly varied meanings.

Claims (15)

2. A method according to claim 1, wherein said comparing step comprises the further steps of:
determining a number of said least significant bit planes of the low sensitivity image whose histogram fits within the dynamic range of said number; and
matching said determined number of the least significant bit planes of the low sensitivity image to the same number of the most significant bit planes of the high sensitivity wrapped image to determine said correction data.

3. A method according to claim 2, wherein the matching step comprises determining, on a per-pixel basis, a difference between pixel values of the least significant bit planes of the low sensitivity image and corresponding pixel values of the most significant bit planes of the wrapped high sensitivity image.

4. A method according to claim 1, wherein the steps of capturing the low sensitivity image and the high sensitivity image are performed, in regard to each pixel, using a single pixel sensor successively operating in a low sensitivity mode and a high sensitivity mode during successive time intervals.

5. A method according to claim 1, wherein the steps of capturing the low sensitivity image and the high sensitivity image are performed, in regard to each pixel, using spatially distinct pixel sensors, respectively operating in a low sensitivity mode and a high sensitivity wrapping mode.

6. A method according to claim 1, wherein the dynamic range of the low sensitivity sensor ensures that the sensor does not saturate.

7. A method according to claim 1, wherein the high sensitivity sensor successively wraps prior to reaching saturation.

8. An apparatus for constructing a high dynamic range image of a scene, said apparatus comprising:
a non-saturating low sensitivity sensor for capturing a low sensitivity image of the scene;
a wrapping high sensitivity sensor for capturing a high sensitivity image of the scene;
means for comparing the most significant bit planes of the high sensitivity image and the least significant bit planes of the low sensitivity image to determine correction data;
means for correcting the most significant bit planes of the low sensitivity image according to the correction data; and
means for appending the corrected most significant bit planes of the low sensitivity image to the bit planes of the high sensitivity image to form the high dynamic range image of the scene.

9. An apparatus according to claim 8, wherein said comparing means comprises:
means for determining a number of said least significant bit planes of the low sensitivity image whose error histogram falls within the dynamic range of said number; and
means for matching said determined number of the least significant bit planes of the low sensitivity image to the same number of the most significant bit planes of the high sensitivity image to determine said correction data.

10. A method for acquiring high dynamic range images, said method comprising the steps of:
capturing a non-wrapped non-saturated image via a low sensitivity intensity measurement;
capturing a wrapped image via a high sensitivity wrapping intensity measurement; and
fusing the wrapped and non-wrapped images to form a high dynamic range image.

11. A method according to claim 10, wherein the fusing step includes determining a number of overlapping digits between numeric representations of the non-wrapped low sensitivity and wrapped high sensitivity images such that their error histograms are contained within the dynamic range of the overlapping digits.

12. A method according to claim 10, wherein the fusing step includes obtaining required corrections to the non-wrapped low sensitivity image with respect to the wrapped high sensitivity image based on calculating the differences in their overlapping digits.

13. A method according to claim 10, wherein the fusing step includes obtaining required corrections to the non-wrapped low sensitivity image with respect to the wrapped high sensitivity image based on a lookup table addressed by the overlapping digits from each image.

14. An image sensor for acquiring high dynamic range images comprising both low sensitivity non-wrapping pixel sensors and high sensitivity wrapping pixel sensors deployed in a regular interleaved pattern.

15. A system for constructing a high dynamic range image of a scene, the system comprising:
an image sensor for capturing a non-wrapped non-saturated image via a low sensitivity intensity measurement and a wrapped image via a high sensitivity wrapping intensity measurement; and
a processor for fusing the wrapped and non-wrapped images to form a high dynamic range image;
wherein the image sensor comprises both low sensitivity non-wrapping pixel sensors and high sensitivity wrapping pixel sensors deployed in a regular interleaved pattern; and
the most significant digits associated with the measured image intensity at each high sensitivity wrapping pixel are estimated from low sensitivity non-wrapping pixels around it.

16. A system according to claim 15, wherein multiple interpolated estimates of the most significant digits associated with the measured image intensity at each high sensitivity wrapping pixel are compared with the overlapping digits from the high sensitivity wrapping pixels to obtain a closest match.
17. An image sensor wherein each pixel includes a low-sensitivity non-wrapping sensor and a high-sensitivity wrapping sensor under the same lens.
18. A method for constructing a high dynamic range image of a scene, substantially as described herein with reference to any one of the described embodiments, as that embodiment is depicted in the enclosed drawings.
19. A system for constructing a high dynamic range image of a scene, substantially as described herein with reference to any one of the described embodiments, as that embodiment is depicted in the enclosed drawings.

20. An image sensor, substantially as described herein with reference to any one of the described embodiments, as that embodiment is depicted in the enclosed drawings.

DATED this 22nd Day of April 2009
CANON KABUSHIKI KAISHA
Patent Attorneys for the Applicant
SPRUSON & FERGUSON
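The fusion the claims above describe can be illustrated with a short sketch. This is a hypothetical reconstruction, not the patented implementation: it assumes the low-sensitivity image gives a coarse, non-wrapped radiance estimate; the high-sensitivity image is precise but wraps modulo the sensor's full-well range; and the "most significant digits" (the wrap count) are recovered by rounding the difference between the reading predicted from the coarse image and the observed wrapped reading. The function name, parameters, and units are all invented for illustration.

```python
def fuse_hdr(low, high_wrapped, gain, wrap):
    """Fuse a non-wrapped low-sensitivity image with a wrapped
    high-sensitivity image (both given as flat lists of pixel values).

    low          -- coarse, non-wrapped radiance estimates
    high_wrapped -- precise readings, wrapped modulo `wrap`
    gain         -- sensitivity ratio (high-sensitivity / low-sensitivity)
    wrap         -- modulus of the wrapping high-sensitivity sensor
    """
    fused = []
    for lo, hw in zip(low, high_wrapped):
        # Predict what the high-sensitivity sensor should have read.
        predicted = lo * gain
        # Wrap count = the correction to the overlapping digits;
        # recoverable as long as the coarse error is below wrap/(2*gain).
        k = round((predicted - hw) / wrap)
        # Unwrapped high-precision value, back in low-sensitivity units.
        fused.append((k * wrap + hw) / gain)
    return fused
```

The rounding step is where the two images' overlapping digits meet: the coarse image only needs to be accurate to within half a wrap period of the high-sensitivity sensor for the wrap count to be recovered exactly, after which the result carries the full precision of the wrapping measurement.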
AU2009201617A 2009-04-22 2009-04-22 Linear high dynamic range image acquisition Abandoned AU2009201617A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2009201617A AU2009201617A1 (en) 2009-04-22 2009-04-22 Linear high dynamic range image acquisition

Publications (1)

Publication Number Publication Date
AU2009201617A1 (en) 2010-11-11

Family

ID=43064868

Family Applications (1)

Application Number Title Priority Date Filing Date
AU2009201617A Abandoned AU2009201617A1 (en) 2009-04-22 2009-04-22 Linear high dynamic range image acquisition

Country Status (1)

Country Link
AU (1) AU2009201617A1 (en)

Similar Documents

Publication Publication Date Title
US10594973B2 (en) Conditional-reset, multi-bit read-out image sensor
TWI504257B (en) Exposing pixel groups in producing digital images
EP2636018B1 (en) Method for producing high dynamic range images
US20110242385A1 (en) Solid-state imaging device and camera system
JP6053447B2 (en) Imaging device
US20110149111A1 (en) Creating an image using still and preview
US20100020206A1 (en) Image sensing system and control method therefor
US11290648B2 (en) Image capture apparatus and control method thereof
US20090251580A1 (en) Circuit and Method for Reading Out and Resetting Pixels of an Image Sensor
US8842200B2 (en) Imaging device and imaging method capable of bright spot processing
US20200296307A1 (en) Information processing apparatus, image sensor, image capturing apparatus, and information processing method
US10044957B2 (en) Imaging device and imaging method
CN105993164B (en) Solid state image sensor, electronic equipment and auto focusing method
US10056421B2 (en) Imaging device and imaging method
US9538103B2 (en) Signal processing unit, signal processing method, image pickup device, and image pickup apparatus
US8049802B2 (en) CMOS camera adapted for forming images of moving scenes
JP6632580B2 (en) Imaging device and imaging device
AU2009201617A1 (en) Linear high dynamic range image acquisition
JP6559021B2 (en) Imaging apparatus and control program therefor
JP5043400B2 (en) Imaging apparatus and control method thereof
JP2018074209A (en) Imaging apparatus
JP2021129281A5 (en) Imaging element and imaging device
JP3915306B2 (en) Imaging apparatus and mechanical shutter response delay measuring method of imaging apparatus
US20230109210A1 (en) Image capturing apparatus and control method therefor
US20240031694A1 (en) Image sensing device and imaging device including the same

Legal Events

Date Code Title Description
MK4 Application lapsed section 142(2)(d) - no continuation fee paid for the application