US20070070223A1 - Image pickup apparatus and image processing method - Google Patents


Info

Publication number
US20070070223A1
Authority
US
Grant status
Application
Prior art keywords
image
signal
image data
high
circuit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US11606080
Inventor
Masaya Tamaru
Koichi Sakamoto
Koji Ichikawa
Masahiko Sugimoto
Manabu Hyodo
Kazuhiko Takemura
Hirokazu Kobayashi
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fujifilm Corp
Original Assignee
Fujifilm Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment; Cameras comprising an electronic image sensor, e.g. digital cameras, video cameras, TV cameras, camcorders, webcams, camera modules for embedding in other devices, e.g. mobile phones, computers or vehicles
    • H04N5/225Television cameras; Cameras comprising an electronic image sensor, e.g. digital cameras, video cameras, camcorders, webcams, camera modules for embedding in other devices, e.g. mobile phones, computers or vehicles
    • H04N5/235Circuitry or methods for compensating for variation in the brightness of the object, e.g. based on electric image signals provided by an electronic image sensor
    • H04N5/2355Circuitry or methods for compensating for variation in the brightness of the object, e.g. based on electric image signals provided by an electronic image sensor by increasing the dynamic range of the final image compared to the dynamic range of the electronic image sensor, e.g. by adding correct exposed portions of short and long exposed images
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/14Picture signal circuitry for video frequency region
    • H04N5/20Circuitry for controlling amplitude response
    • H04N5/202Gamma control
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment; Cameras comprising an electronic image sensor, e.g. digital cameras, video cameras, TV cameras, camcorders, webcams, camera modules for embedding in other devices, e.g. mobile phones, computers or vehicles
    • H04N5/225Television cameras; Cameras comprising an electronic image sensor, e.g. digital cameras, video cameras, camcorders, webcams, camera modules for embedding in other devices, e.g. mobile phones, computers or vehicles
    • H04N5/235Circuitry or methods for compensating for variation in the brightness of the object, e.g. based on electric image signals provided by an electronic image sensor
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/64Circuits for processing colour signals
    • H04N9/68Circuits for processing colour signals for controlling the amplitude of colour signals, e.g. automatic chroma control circuits
    • H04N9/69Circuits for processing colour signals for controlling the amplitude of colour signals, e.g. automatic chroma control circuits for modifying the colour signals by gamma correction
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/64Circuits for processing colour signals
    • H04N9/73Circuits for processing colour signals colour balance circuits, e.g. white balance circuits, colour temperature control
    • H04N9/735Circuits for processing colour signals colour balance circuits, e.g. white balance circuits, colour temperature control for picture signal generators

Abstract

A combined image of a high sensitivity image and a low sensitivity image is provided with well-adjusted white balance and a broad dynamic range. The image is obtained by multiplying the combined data by a total gain that depends on the scene. White balance is adjusted with a gain value calculated from the high output image data. An Lv value representing luminance is calculated and compared with a threshold to decide whether or not the high sensitivity image and the low sensitivity image should be combined. A first gamma correction unit performs gamma correction on the image signal derived from the high sensitivity signal with a first gamma characteristic, a second gamma correction unit performs gamma correction on the image signal derived from the low sensitivity signal with a second gamma characteristic that is different from the first, and an addition unit combines the image signal from the first gamma correction unit and the image signal from the second gamma correction unit.

Description

    BACKGROUND OF THE INVENTION
  • 1. Field of the Invention
  • The present invention relates to an image pickup apparatus and an image processing method of obtaining an image with a broad dynamic range.
  • 2. Description of the Related Art
  • [Prior Art]
  • The dynamic range of an image sensor such as a CCD used in image pickup devices such as widely prevailing digital cameras is generally narrower than that of film. Hence, when imaging a high luminance subject, the amount of received light exceeds the dynamic range. The output of the image sensor then saturates, causing loss of information about the imaged subject.
  • When, for example, an indoor scene is picked up by an imaging apparatus such as a digital still camera, it sometimes occurs that subjects in the interior are imaged well, but the blue sky observed through a window is imaged with saturation. The image picked up in this situation is unnatural as a whole. This problem comes from the narrow dynamic range of the image. To solve it, the dynamic range is expanded by combining two images picked up separately.
  • For example, a short exposure image (low sensitivity image) is first picked up with a high shutter speed, then a long exposure image (high sensitivity image) is successively picked up with a low shutter speed. These two images are combined so that the outdoor scene through the window in the low sensitivity image is superimposed on the high sensitivity image, in which the indoor scene is well captured.
  • In a conventional technique described in JP-A-2000-307963, since the parts containing moving subjects do not perfectly coincide between the low sensitivity image and the high sensitivity image when the two images are combined, the high sensitivity image is partly replaced by the low sensitivity image by using a mask.
  • In the above-described conventional technique of JP-A-2000-307963, although the image signals of the two images are combined with the use of a mask, a discrepancy in white balance is not considered. Therefore, because the white balances of the high sensitivity image and the low sensitivity image in the combined image differ from each other, the combined image becomes unnatural depending on the imaging scene.
  • Recently, an imaging apparatus such as a digital camera that includes a new image pickup device having both high sensitivity pixels and low sensitivity pixels has been proposed. The imaging apparatus combines a high sensitivity image (also called a “high output image” hereinafter) picked up by the high sensitivity pixels and a low sensitivity image (also called a “low output image” hereinafter) picked up by the low sensitivity pixels to output data of a single image. The above-described problem must also be solved when this image pickup apparatus combines images.
  • In the case of imaging a very bright subject under a high-contrast imaging condition, the image combination according to the conventional technique of JP-A-2000-307963 is effective for reproducing details in bright (highlight) portions, since the dynamic range is expanded. However, in the case of imaging under a low-contrast condition such as cloudy weather or an interior, similar preferable effects are hardly achieved; rather, tone reproduction is wasted.
  • FIG. 18 shows the relation between a high sensitivity signal obtained by imaging a scene with a digital camera and the combined signal derived by combining the high sensitivity signal with a low sensitivity signal by a conventional technique. This figure depicts data associated with the signal of any one color among R (red), G (green) and B (blue). The ordinate indicates tone value, while the abscissa indicates subject luminance. The thin line in the figure is the high sensitivity signal prior to combination, and the thick line is the combined signal.
  • As shown in the figure, the region from a to b of the tone values of the high sensitivity signal (indicated by the hatched zone) may go unused in the combination operation, depending on the luminance of the scene. Therefore, there are cases where it would be better not to perform the image combination under a low-contrast imaging condition.
  • In another conventional technique described in JP-A-6-141229, two images are combined by converting each of the image signals obtained at low and high shutter speeds through the same γ characteristic and then additively combining the two converted signals. However, since simple addition gives an image in which the middle tone region, which strongly influences image quality, appears unnatural, weighted addition depending on signal levels is usually carried out.
  • The conventional technique of JP-A-6-141229 aims to retain preferable tone reproduction in the middle tone region through weighted addition of the two image signals depending on signal levels, using only the high sensitivity image signal for the middle tone region. However, even if weighted addition depending on signal levels is performed, the white balance of the middle tones tends to differ from that of the highlights when the white balances of the high and low sensitivity images are not each accurately adjusted. As a result, the combined image becomes unnatural.
  • SUMMARY OF THE INVENTION
  • The first object of the present invention is to provide an image pickup apparatus and an image processing method of combining and outputting an image with a broad dynamic range while adjusting the white balance.
  • The second object of the present invention is to provide an image pickup apparatus for obtaining preferable images with tone values on the high luminance side while expanding the dynamic range, by displaying images based on image signals corresponding to the imaging condition.
  • The third object of the present invention is to provide an image pickup apparatus for combining and outputting images that are less unnatural, with a broad dynamic range, without precisely adjusting the white balance.
  • The invention provides an image combination method of combining high output image data and low output image data, having the step of multiplying the combined data of the high output image data and the low output image data by a total gain that depends on the scene. According to the method, it becomes possible to obtain combined images with a broad dynamic range and a well-adjusted white balance.
  • Preferably, the combined data of the high output image data and the low output image data is multiplied by the total gain in the range where the high output image data exceeds a certain value, namely the range where the total gain “p” exceeds the value [arbitrary numeral “α” − coefficient “k” × (high output image data after gamma correction “high”) / threshold “th”]. Further, the coefficient “k” = 0.2, the arbitrary numeral “α” = 1, and the total gain “p” = 0.8 for high contrast scenes, “p” = 0.86 for cloudy or shady scenes, and “p” = 0.9 for indoor scenes under fluorescent lamp illumination. Therefore, preferable combined images that fit the original scenes well can be obtained.
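  • The scene-dependent second term described above can be sketched as follows. This is an illustrative reading, not the patent's implementation: the function and dictionary names are hypothetical, while the constants k = 0.2, α = 1, the per-scene values of p, and the example threshold th = 219 are taken from the text.

```python
def total_gain_term(high, th, p, k=0.2, alpha=1.0):
    """Second combination term: max(alpha - k * high / th, p).

    Near black (high -> 0) the linear part dominates, so the gain is close
    to alpha = 1 (no attenuation); the scene gain p takes over once
    high > th * (alpha - p) / k.
    """
    return max(alpha - k * high / th, p)

# Per-scene total gains given in the text
SCENE_GAIN = {
    "high_contrast": 0.80,
    "cloudy_or_shady": 0.86,
    "indoor_fluorescent": 0.90,
}

# Crossover point for a cloudy scene with an 8-bit threshold th = 219:
th = 219
p = SCENE_GAIN["cloudy_or_shady"]
crossover = th * (1.0 - p) / 0.2  # high values above this use gain p
```

For p = 0.86 the crossover lies at high ≈ 153, so only the brighter part of the tone scale is attenuated by the scene gain.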
  • The invention provides an image pickup apparatus for combining high output image data and low output image data, having multiplying means for multiplying the combined data of the high output image data and the low output image data by a total gain that depends on the scene. According to the apparatus, it becomes possible to obtain combined images with a broad dynamic range and a well-adjusted white balance.
  • Preferably, the multiplying means multiplies the combined data of the high output image data and the low output image data by the total gain in the range where the high output image data exceeds a certain value, namely the range where the total gain “p” exceeds the value [arbitrary numeral “α” − coefficient “k” × (high output image data after gamma correction “high”) / threshold “th”]. Further, the coefficient “k” = 0.2, the arbitrary numeral “α” = 1, and the total gain “p” = 0.8 for high contrast scenes, “p” = 0.86 for cloudy or shady scenes, and “p” = 0.9 for indoor scenes under fluorescent lamp illumination. Therefore, preferable combined images that fit the original scenes well can be obtained.
  • The invention provides an image pickup apparatus for combining image data of a high output image and image data of a low output image, both of which are picked up by an imaging device, to produce combined image data, having: a calculating unit for calculating a gain value for white balance adjustment from the image data of the high output image; and a gain correcting unit for performing not only a first white balance adjustment for the image data of the high output image with the gain value calculated by the calculating unit but also a second white balance adjustment for the image data of the low output image with the same gain value. According to the apparatus, it becomes possible to generate a natural combined image with minimal discrepancy in white balance and a broad dynamic range.
  • The invention provides an image pickup apparatus for combining image data of a high output image and image data of a low output image, both of which are picked up by an imaging device, to produce combined image data, having: a calculating unit for calculating a gain value for white balance adjustment from the image data of the high output image; and a gain correcting unit for performing a white balance adjustment for the combined image data with the gain value calculated by the calculating unit. According to the apparatus, it also becomes possible to generate a natural combined image with minimal discrepancy in white balance and a broad dynamic range.
  • The invention provides an image processing method of combining image data of a high output image and image data of a low output image, both of which are picked up by an imaging device, to produce combined image data, having the step of: calculating, from the image data of the high output image, a gain value used for a first white balance adjustment for the image data of the high output image and a second white balance adjustment for the image data of the low output image. According to the method, it becomes possible to generate a natural combined image with minimal discrepancy in white balance and a broad dynamic range.
  • The invention provides an image processing method of combining image data of a high output image and image data of a low output image, both of which are picked up by an imaging device, to produce combined image data, having the step of: calculating, from the image data of the high output image, a gain value used for a white balance adjustment for the combined image data. According to the method, it also becomes possible to generate a natural combined image with minimal discrepancy in white balance and a broad dynamic range.
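  • The key point above is that a single set of white balance gains, computed from the high output image alone, is applied to both images. A minimal sketch of that idea follows; the gray-world estimator and the function names are illustrative assumptions, since the patent does not specify how the gain value itself is estimated.

```python
def wb_gains_from_high(high_pixels):
    """Estimate white balance gains from the HIGH output image only.

    high_pixels: list of (r, g, b) tuples. The gray-world estimator used
    here is an illustrative stand-in for the patent's (unspecified)
    gain calculation.
    """
    n = float(len(high_pixels))
    r = sum(p[0] for p in high_pixels) / n
    g = sum(p[1] for p in high_pixels) / n
    b = sum(p[2] for p in high_pixels) / n
    return g / r, 1.0, g / b  # scale R and B toward the G average

def apply_shared_wb(high_pixels, low_pixels):
    # The same gains are applied to BOTH images, so the combined result
    # carries a single, consistent white balance.
    gr, gg, gb = wb_gains_from_high(high_pixels)
    def scale(pixels):
        return [(r * gr, g * gg, b * gb) for r, g, b in pixels]
    return scale(high_pixels), scale(low_pixels)
```

Because the low output image is corrected with gains derived from the high output image, the two images cannot drift apart in white balance even if the low output image alone would suggest different gains.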
  • The invention provides an image pickup apparatus having: an imaging device including first photoreceptors which receive light from a subject with a first sensitivity and output signals corresponding to the amount of the received light, and second photoreceptors which receive light from a subject with a second sensitivity lower than the first sensitivity and output signals corresponding to the amount of the received light; received light calculating means for calculating the amount of light received by the imaging device; judging means for judging whether or not the amount of received light calculated by the received light calculating means exceeds a predetermined value; and display means for displaying an image based on combination signals derived from combining the signals output from the first photoreceptors and the signals output from the second photoreceptors when the judging means judges that the amount of received light exceeds the predetermined value, while displaying an image based on the signals output from the first photoreceptors when the judging means judges that the amount of received light is equal to or below the predetermined value.
  • The image pickup apparatus is provided with an imaging device including the first photoreceptors, which receive the light from a subject with a first sensitivity and output signals corresponding to the amount of received light, and the second photoreceptors, which receive the light from a subject with a second sensitivity lower than the first sensitivity and output signals corresponding to the amount of received light. The second photoreceptors may be arranged so as to lie between the first photoreceptors, or placed on the first photoreceptors, which are provided with means such as a channel stopper that prevents the amounts of received light from mixing. There is no limitation on the arrangement of the second photoreceptors.
  • The received light calculating means calculates the amount of light received by the imaging device. The judging means judges whether or not the amount of received light calculated by the received light calculating means exceeds a predetermined value. In the case where the amount of light received by the imaging device is judged to exceed the predetermined value, the display means displays an image by using the signal derived by combining the signals output from the first photoreceptors and the second photoreceptors; in the case where it is judged not to exceed the predetermined value, the display means displays an image by using the signals from the first photoreceptors.
  • Since an image is displayed by using the signal derived by combining the high sensitivity signals picked up by the first photoreceptors and the low sensitivity signals picked up by the second photoreceptors when the light amount received by the imaging device (i.e., the luminance of the imaging environment) exceeds the predetermined value, and by using the signals output from the first photoreceptors alone when it does not, preferable effects are achieved, including a broad dynamic range and an effective use of tone values towards the high luminance limit.
  • In displaying images with the display means, it is possible to always combine the signals output from the first photoreceptors and the signals output from the second photoreceptors, and to use the combined signals for image display depending on the result from the judging means. Alternatively, it is possible not to combine the two kinds of signals all the time, but to combine them for image display only when the judging means so judges.
  • Moreover, the received light calculating means calculates the amount of received light on the basis of an f number and shutter speed of the image pickup apparatus.
  • Therefore, the luminance of the imaging environment can be accurately estimated from the f number and the shutter speed. Thus, the judgment by the judging means becomes highly accurate.
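  • A minimal sketch of this estimate, assuming the standard APEX relation Lv ≈ log2(N²/t) at ISO 100 (the patent does not give its exact formula, and the threshold value and function names below are hypothetical):

```python
import math

def light_value(f_number, shutter_time_s):
    # APEX: Av = log2(N^2), Tv = log2(1/t); Lv ~ Av + Tv at ISO 100
    return math.log2(f_number ** 2 / shutter_time_s)

def should_combine(f_number, shutter_time_s, lv_threshold=12.0):
    # Combine the high and low sensitivity signals only when the estimated
    # scene luminance exceeds the threshold; otherwise display the image
    # from the first (high sensitivity) photoreceptors alone.
    return light_value(f_number, shutter_time_s) > lv_threshold
```

For example, f/8 at 1/125 s gives Lv ≈ 13 (bright daylight, combine), while f/2 at 1/30 s gives Lv ≈ 7 (dim interior, do not combine).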
  • The invention provides an image pickup apparatus for additively combining a low sensitivity image signal and a high sensitivity image signal to generate an image with a broad dynamic range, having: first gamma correction means for performing gamma correction on the high sensitivity image signal with a first gamma characteristic; second gamma correction means for performing gamma correction on the low sensitivity image signal with a second gamma characteristic which is different from the first gamma characteristic; and combining means for additively combining the image signals output from the first gamma correction means and the image signals output from the second gamma correction means.
  • According to the apparatus, it becomes possible to provide images that do not give an unnatural impression, without executing weighted addition depending on signal levels in the additive combination operation.
  • Preferably, the gamma value of the first gamma characteristic is larger than the gamma value of the second gamma characteristic; for example, the gamma value of the first gamma characteristic is 0.45, while the gamma value of the second gamma characteristic is 0.18. According to the apparatus, even more preferable images can be obtained, which remain free from an unnatural impression and show less deterioration in white balance as well as in middle tone rendition.
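  • The dual-gamma additive combination can be sketched as below, using the gamma values 0.45 and 0.18 from the text. The normalization to an 8-bit full scale and the function names are assumptions for illustration; in a real pipeline the sum would subsequently be rescaled to the output range.

```python
def gamma_correct(x, gamma, full_scale=255.0):
    # Power-law tone curve: y = full_scale * (x / full_scale) ** gamma
    return full_scale * (x / full_scale) ** gamma

def additive_combine(high_raw, low_raw, g_high=0.45, g_low=0.18):
    # The first gamma correction unit applies the larger gamma (0.45) to
    # the high sensitivity signal; the second applies 0.18 to the low
    # sensitivity signal. The two corrected signals are then simply
    # added, with no level-dependent weighting.
    return gamma_correct(high_raw, g_high) + gamma_correct(low_raw, g_low)
```

Because the low sensitivity branch uses the much smaller gamma (0.18), its contribution is strongly compressed in the shadows and midtones and mainly adds tone separation in the highlights, which is why plain addition can stay natural-looking.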
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a block diagram of a digital still camera associated with one embodiment of the invention;
  • FIG. 2 shows the pixel arrangement of the solid-state image sensor shown in FIG. 1;
  • FIG. 3 is a detailed block diagram of the digital signal-processing unit shown in FIG. 1;
  • FIG. 4 is a diagram showing how the dynamic range changes;
  • FIG. 5 shows the pixel arrangement of a solid-state image sensor associated with another embodiment;
  • FIG. 6 is a detailed block diagram of the digital signal-processing unit shown in FIG. 1;
  • FIG. 7 is a detailed block diagram of the digital signal-processing unit associated with another embodiment;
  • FIG. 8 is a block diagram showing the configuration of a digital camera associated with the embodiment 1 practicing the invention;
  • FIG. 9 is a diagram showing the outline configuration of a CCD;
  • FIG. 10 is a flowchart showing the flow of a combination judgment routine;
  • FIG. 11 is a diagram showing the relation of the threshold for the necessity of combination operation to the types of the ordinary shooting condition as well as the scene luminance (Lv value);
  • FIG. 12 is a diagram showing the relation between the high sensitivity signal prior to image combination and the combined signal obtained by image combination operation;
  • FIG. 13 shows the configuration of the digital signal-processing circuit in a digital camera associated with the embodiment 2 practicing the invention;
  • FIG. 14 shows the outline configuration of a CCD containing photoreceptors each of which can receive both of high sensitivity and low sensitivity signals;
  • FIG. 15 is a diagram describing the signal levels obtained by the high and low sensitivity pixels shown in FIG. 2;
  • FIG. 16 is a block diagram of the output circuit section in a dynamic range-expanded image-capturing unit according to one embodiment of the invention;
  • FIG. 17 is a block diagram of the gamma-correcting circuit shown in FIG. 16;
  • FIG. 18 shows the relation between the high sensitivity signal prior to combination and a combined signal obtained by a combination operation based on a conventional technique.
  • DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
  • In the following, four practical embodiments of the invention are described with reference to the drawings.
  • First Embodiment
  • FIG. 1 is a block diagram of a digital still camera of the first embodiment according to the invention. In the first embodiment, a digital still camera is described as an example, though the invention is applicable to other types of image pickup apparatus such as digital video cameras. The image combination operation in the embodiment is performed by software in a digital signal processing unit 26 described later, but it can also be performed by a hardware circuit.
  • The digital still camera shown in FIG. 1 has an imaging lens 10, a solid-state image sensor 11, a lens diaphragm 12 that is positioned between the imaging lens 10 and a solid-state image sensor 11, an infrared-cut filter 13 and an optical low pass filter 14. A CPU 15, which controls the entire digital still camera, controls a light-emitting unit 16 and a light-receiving unit 17 both for photoflash, adjusts a lens-driving unit 18 to regulate the position of the imaging lens 10 to the focus position, and controls via a diaphragm-driving unit 19 the aperture of the lens diaphragm 12 so as to achieve a correct amount of exposure.
  • Further, the CPU 15 drives the solid-state image sensor 11 via the image sensor-driving unit 20 whereby the image of the subject captured through the imaging lens 10 is outputted in the form of color signals. Separately, command signals from the camera user are input to the CPU 15 via an operating unit 21. The CPU 15 performs various controls in accordance with these commands.
  • The electric control system of the digital still camera has an analog signal processing unit 22 connected to an output terminal of the solid-state image sensor 11, and an A/D converting circuit 23 which converts RGB color signals output from this analog signal processing unit 22 to digital signals. These two components are controlled by the CPU 15.
  • The electric control system of the digital still camera has a memory control unit 25 connected to a main memory 24, a digital signal processing unit 26 which will be described in detail later, a compression-decompression unit 27 which compresses captured images to JPEG images or decompresses compressed images, an integration unit 28 which integrates measured light intensity data from the image data converted to digital data by the A/D converting circuit 23, an external memory control unit 30 to which a removable recording medium 29 is connected, and a display control unit 32 to which a display unit 31 provided on the rear surface of the camera is connected. These components are mutually connected through a control bus 33 and a data bus 34, and are controlled by commands from the CPU 15.
  • Though each of the digital signal processing unit 26, the analog signal processing unit 22, and the A/D converting circuit 23 may be installed as a separate circuit in the digital still camera, they are preferably fabricated on a semiconductor substrate together with the solid-state image sensor 11 by using LSI manufacturing techniques to give a unified solid-state image pickup device.
  • FIG. 2 shows the pixel arrangement in the solid-state image sensor 11 used in the present embodiment. Pixels 1 in the CCD section that captures dynamic range-expanded images are arranged as described in, for example, JP-A-10-136391 wherein each pixel in an odd-numbered line is shifted in the horizontal direction by a half pitch relative to each pixel in an even-numbered line and wherein the vertical transfer path (not shown) that transfers the signal charge read out of each pixel in the vertical direction runs zigzag so as to get away from every pixel lying in the vertical direction.
  • Further, each pixel 1 associated with the embodiment shown in the figure is structured so as to be divided into a low speed pixel 2 occupying about one fifth of the total area of the pixel 1 and a high speed pixel 3 occupying the remaining, about four fifths, of the total area. The system is constructed so that the signal charge from each low speed pixel 2 and that from each high speed pixel 3 are separately read out and transferred to the above-described vertical transfer path. The ratio and the position at which the pixel 1 is divided are design decisions; the structure shown in FIG. 2 is just an example.
  • The image pickup unit of the embodiment is designed and fabricated so that a low sensitivity image (the image captured by the low speed pixel 2) and a high sensitivity image (the image captured by the high speed pixel 3) are acquired simultaneously, and that image signals are read out sequentially from the individual pixels 2 and 3, which are additively combined for output as described in detail later.
  • The solid-state image sensor 11, which was described for the example of a honeycomb type pixel arrangement as shown in FIG. 2, may be of Bayer type CCD or CMOS sensor.
  • FIG. 3 is a detailed block diagram of the digital signal processing unit 26 shown in FIG. 1. This digital signal processing unit 26 adopts a logarithmic addition method in which the high sensitivity image signal and the low sensitivity image signal are added after respective gamma corrections. The digital signal processing unit 26 includes an offset correction circuit 41 a which takes in the RGB color signals consisting of digital signals of the high sensitivity image output from the A/D conversion circuit 23 shown in FIG. 1 and executes an offset correction operation on those signals, a gain control circuit 42 a which adjusts the white balance of the output signals from the offset correction circuit 41 a, a gamma correction circuit 43 a which executes gamma correction on the color signals after gain control, an offset correction circuit 41 b which takes in the RGB color signals consisting of digital signals of the low sensitivity image output from the A/D conversion circuit 23 shown in FIG. 1 and executes an offset correction operation on those signals, a gain control circuit 42 b which adjusts the white balance of the output signals from the offset correction circuit 41 b, and a gamma correction circuit 43 b which executes gamma correction on the color signals after gain control. In the cases where a linear matrix operation is executed on the offset-corrected signals, that operation is performed between the gain control circuits 42 a, 42 b and the gamma correction circuits 43 a, 43 b.
  • The digital signal processing unit 26 further includes an image combination operation circuit 44 which takes in both of the output signals from the gamma correction circuits 43 a, 43 b and performs the image combination operation described in detail later, an RGB interpolation unit 45 which calculates the three RGB color signals at each pixel position via interpolating calculation on the RGB color signals after image combination, an RGB/YC conversion circuit 46 which obtains a luminance signal Y and color difference signals Cr, Cb from the RGB signals, a noise filter 47 which reduces noise in the luminance signal Y and the color difference signals Cr, Cb, a contour correction circuit 48 which executes contour correction on the noise-reduced luminance signal Y, and a color difference matrix circuit 49 which executes color correction by multiplying the color difference signals Cr, Cb by a color difference matrix.
  • The RGB interpolation unit 45 is not needed for three plate-type image sensors. However, the solid-state image sensor 11 of the embodiment is a single plate-type sensor in which each pixel outputs only one color signal among the R, G and B signals. Therefore, the RGB interpolation unit 45 estimates the intensities of the colors whose signals a certain pixel does not output, i.e., the G and B color signals at the position of an R signal-outputting pixel, by interpolation using the G and B signals of the surrounding pixels.
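  • As an illustrative sketch (not part of the patent text), the interpolation performed by the RGB interpolation unit 45 for one missing color sample can be approximated by averaging the surrounding pixels that do carry that color; the function name and the simple-average rule below are assumptions for illustration only.

```python
def interpolate_missing_color(neighbor_values):
    """Estimate a color sample that a pixel does not output (e.g. G at an
    R-outputting pixel) by averaging the surrounding pixels that do output
    that color. A simple-average stand-in for the RGB interpolation unit 45."""
    return sum(neighbor_values) / len(neighbor_values)

# G value estimated at an R pixel from its four G-carrying neighbours.
g_at_r_pixel = interpolate_missing_color([100, 104, 98, 102])
```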
  • The image combination operation circuit 44 combines the high sensitivity image signal outputted from the gamma correction circuit 43 a and the low sensitivity image signal outputted from the gamma correction circuit 43 b on a per-pixel basis according to the following formula, and outputs the combined signal.
    data = [high + MIN(high/th, 1)×low] × MAX[(−k×high/th) + α, p]  (1)
    wherein,
  • high: gamma-corrected data of the high sensitivity (high output) image signal
  • low: gamma-corrected data of the low sensitivity (low output) image signal
  • p: total gain
  • k: coefficient
  • th: threshold, and
  • α: a value determined by the scene (≈1)
  • The threshold “th” is a value designated by the user or designer of the digital still camera, exemplified by “219” chosen from the values ranging from 0 to 255 in the case where the gamma-corrected data is 8-bit (containing 256 tonal steps).
  • The first term in the above formula (1) indicates that, when the high sensitivity image data high exceeds the threshold “th”, the low sensitivity image data low is added directly to the high sensitivity image data high, and that, when the high sensitivity image data high does not exceed the threshold “th”, a value obtained by multiplying the low sensitivity image data low by the ratio of the high sensitivity image data high to the threshold “th” is added to the high sensitivity image data high.
  • The embodiment is characterized in that the added data obtained from the first term is not used as the combined image data by itself; rather, the value derived by multiplying the first term value by the second term value (MAX[(−k×high/th) + α, p]) is used as the combined image data.
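  • Formula (1) and the roles of its two terms can be sketched per pixel as follows (a minimal illustration assuming floating-point data; the default values of th, k, α and p are the examples given in the text).

```python
def combine_log_addition(high, low, th=219.0, k=0.2, alpha=1.0, p=0.86):
    """Formula (1):
    data = [high + MIN(high/th, 1)*low] * MAX[(-k*high/th) + alpha, p].
    `high` and `low` are the gamma-corrected high/low sensitivity data."""
    first_term = high + min(high / th, 1.0) * low   # conditional addition
    second_term = max((-k * high / th) + alpha, p)  # scene-dependent gain
    return first_term * second_term

# When high >= th the low sensitivity data is added in full;
# below th it is scaled by high/th before being added.
```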
  • In the second term, the coefficient “k” is preferably “0.2” for the solid-state image sensor 11 used in the embodiment shown in FIG. 2. For a solid-state image sensor 11 as shown in FIG. 2, in which the saturation amounts of signal charge of the high sensitivity pixel 3 and the low sensitivity pixel 2 differ, the coefficient k can for convenience be calculated by the following formula (2).
    coefficient k = 1 − Sh/(Sh + Sl)  (2)
    wherein
  • Sh: the saturation amount of signal charge of the high sensitivity pixel
  • Sl: the saturation amount of signal charge of the low sensitivity pixel
  • Although, strictly speaking, the area ratio of the photodiodes does not always represent the ratio of the saturation charge amounts, let us approximate here, for convenience, that the area ratio represents the saturation ratio in the case shown in FIG. 2. Then, applying the above formula (2),
    k = 1 − 4/(4 + 1) = 1 − 0.8 = 0.2
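  • Under the same area-ratio approximation, formula (2) can be computed as follows (an illustrative sketch).

```python
def coefficient_k(sat_high, sat_low):
    """Formula (2): k = 1 - Sh/(Sh + Sl), where Sh and Sl are the
    saturation signal-charge amounts of the high and low sensitivity
    pixels (approximated here by photodiode area)."""
    return 1.0 - sat_high / (sat_high + sat_low)

# Area ratio 4:1, as in the FIG. 2 example, gives k = 0.2.
k = coefficient_k(4.0, 1.0)
```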
  • As a solid-state image sensor having high sensitivity and low sensitivity pixels, in addition to the one shown in FIG. 2, another arrangement, shown in FIG. 5, can be used, in which high sensitivity pixels 3′ and low sensitivity pixels 2′ are formed by changing the aperture area of the micro-lens provided on each of a large number of photodiodes (not shown) all fabricated with the same dimensions and shape. For this type of image sensor, the above formula (2) is not applicable, since the saturation amount of signal charge is the same for the high and low sensitivity pixels. However, formula (1) can still be applied by obtaining the value of the coefficient “k” experimentally or by estimating it from the aperture areas of the micro-lenses. The value of the coefficient “k” is determined by the configuration of the solid-state image sensor, and is set to a fixed value at the time of shipping which the user cannot alter at will.
  • As the value of the total gain “p” in formula (1), the embodiment adopts experimentally determined values. The total gain “p” represents the gain for the overall combined image data, and the embodiment controls the dynamic range of the image by controlling this value of “p”. When the total gain “p” is small, the dynamic range is broad; when the total gain “p” is large, the dynamic range narrows. Specifically, the value of “p” is changed depending on the scene, as exemplified by p=0.8 for high contrast scenes (those under a clear midsummer sky), p=0.86 for cloudy and shady scenes, and p=0.9 for indoor scenes under fluorescent lamp illumination. With such a scheme, the 8-bit quantization levels can be utilized more effectively when the gamma-corrected data are 8-bit.
  • The value of “p” may be determined automatically by a judgment which the digital still camera performs based on the data detected by its various sensors, or may be set by the camera user, who designates the type of scene via the operating unit 21 shown in FIG. 1.
  • FIG. 4 is a diagram showing how the dynamic range changes when the value of “p” is changed. In Curve A for a large total gain “p”, the dynamic range is small. Along with the diminution of the total gain “p”, the shape of the curve changes to finally reach Curve B having a broad dynamic range.
  • In formula (1), the value of “α” may be changed according to the type of scene to be captured by the camera, or may take a fixed value of “1”.
  • As described above, according to the embodiment, the high sensitivity image data and the low sensitivity image data are combined, and thereafter the combined data is multiplied by a total gain that depends on the scene. Images with a well-adjusted white balance and a broad dynamic range that match the scenes picked up can thereby be provided. Furthermore, since a logarithmic addition method, in which the number of bits for each of the high and low sensitivity image data is reduced prior to addition, is adopted for image combination, a small circuit size suffices, leading to cost reduction in camera manufacture.
  • In the embodiment described above, an example has been explained in which a high output image (high sensitivity image) and a low output image (low sensitivity image), both captured by a digital still camera, are combined within the camera. However, the concept of the invention is also applicable to cases wherein high sensitivity image data and low sensitivity image data captured by an image pickup device are first stored in a memory, the data (CCD-RAW data) are fed to, for example, a PC, and an image combination operation similar to the one executed by the digital signal processing unit 26 in the above-described embodiment is performed on those data, in order to provide a combined image having a well-adjusted white balance and a broad dynamic range.
  • Furthermore, in the embodiment described above, the image captured by the low sensitivity pixels was defined as a low sensitivity image, and the one captured by the high sensitivity pixels as a high sensitivity image. However, the invention is not restricted to cases where images of different sensitivities are combined, but is also applicable to cases where a plurality of images that have been captured by the same pixels with varied aperture values of the diaphragm are combined. For example, in the case of sequentially imaging multiple images of a high-contrast still life with varied exposure amounts, the image captured with a large aperture acts as a high output image, since the output signal levels from the individual pixels are high, while the image captured with a small aperture acts as a low output image, since the output signal levels from the individual pixels are lower than those of the high output image. The concept of the above-described embodiment is applicable to the combination of those images.
  • Second Embodiment
  • The configuration of a digital still camera according to the second embodiment is the same as that of the digital still camera according to the first embodiment shown in FIG. 1, except for the digital signal processing unit. Therefore, the digital signal processing unit of this embodiment is denoted by “26 a” in FIG. 6 and “26 b” in FIG. 7.
  • FIG. 6 is a detailed block diagram of the digital signal processing unit 26 a. This digital signal processing unit 26 a may be constructed in the form of a hardware circuit or software. In the embodiment, the digital signal processing unit 26 a will be described for the case where a low sensitivity image picked up with a high shutter speed (this image is also called a “low output image”, since the output from each pixel is low) and a high sensitivity image sequentially picked up with a low shutter speed (this image is also called a “high output image”, since the output from each pixel is higher than that of the low output image) are combined.
  • The digital signal processing unit 26 a has a white balance gain calculating circuit 40 that calculates the gain value for white balance adjustment by taking in the output data from the integration unit 28 shown in FIG. 1, a first offset correction circuit 41 a that takes in RGB color signals of the high output image output from the A/D conversion circuit 23 and executes an offset correction operation on those signals, a first gain control circuit 42 a that adjusts the white balance of the output signal from the first offset correction circuit 41 a with the gain value calculated by the gain calculating circuit 40, and a first gamma correction circuit 43 a that executes gamma correction on the high output image data output from the first gain control circuit 42 a with the “γ” value for the high output image.
  • The digital signal processing unit 26 a further has a second offset correction circuit 41 b that takes in RGB color signals of the low output image output from the A/D conversion circuit 23 and executes an offset correction operation on those signals, a second gain control circuit 42 b that adjusts the white balance of the output signal from the second offset correction circuit 41 b with the gain value calculated by the gain calculating circuit 40, a second gamma correction circuit 43 b that executes gamma correction on the low output image data output from the second gain control circuit 42 b with the “γ” value for the low output image, and an image combination circuit 44 that combines the output data from the first gamma correction circuit 43 a and the output data from the second gamma correction circuit 43 b on a per-pixel basis.
  • The digital signal processing unit 26 a further includes an RGB interpolation unit 45 which calculates the three RGB color signals at each pixel position via interpolating calculation of the RGB color signals in the combined image data (the output signal of the image combination circuit 44), an RGB/YC conversion circuit 46 which obtains a luminance signal Y and color difference signals Cr, Cb from the RGB signals, a noise filter 47 which reduces noise in the luminance signal Y and the color difference signals Cr, Cb, a contour correction circuit 48 which executes contour correction on the noise-reduced luminance signal Y, and a color difference matrix circuit 49 which executes color correction by multiplying the color difference signals Cr, Cb by a color difference matrix.
  • The RGB interpolation unit 45 is not needed for three plate-type image sensors. However, the solid-state image sensor 11 of the embodiment is a single plate-type sensor in which each pixel outputs only one color signal among the R, G and B signals. Therefore, the RGB interpolation unit 45 estimates the intensities of the colors whose signals a certain pixel does not output, i.e., the G and B color signals at the position of an R signal-outputting pixel, by interpolation using the G and B signals of the surrounding pixels.
  • Now, the operation of the digital still camera described above is explained. When the user inputs a command to capture a dynamic range-expanded combined image via the operating unit 21, the CPU 15 first picks up a low sensitivity image of a scene with a high shutter speed, and then picks up a high sensitivity image of the same scene with a low shutter speed. The digital signal processing unit 26 a takes in these two sets of image data and combines them.
  • The RGB signals (digital signals) of the high sensitivity image are taken into the digital signal processing unit 26 a on a pixel-by-pixel basis, first subjected to offset correction in the first offset correction circuit 41 a, and then subjected to white balance adjustment in the first gain control circuit 42 a. The gain value used for this white balance adjustment is calculated by the white balance gain calculating circuit 40, which derives this gain value from the integral value of the high sensitivity image integrated by the integration unit 28. The gain-corrected high sensitivity image data is subjected to gamma correction by the first gamma correction circuit 43 a with the “γ” value for the high sensitivity image, and then output to the image combination circuit 44.
  • On the other hand, the RGB signals (digital signals) of the low sensitivity image are taken into the digital signal processing unit 26 a on a pixel-by-pixel basis, first subjected to offset correction in the second offset correction circuit 41 b, and then subjected to white balance adjustment in the second gain control circuit 42 b. The gain value used for this white balance adjustment is also calculated by the white balance gain calculating circuit 40. In the embodiment, the gain value for the white balance adjustment of the low sensitivity image is likewise calculated from the integral value of the high sensitivity image integrated by the integration unit 28. The gain-corrected low sensitivity image data is subjected to gamma correction by the second gamma correction circuit 43 b with the “γ” value for the low sensitivity image, and then output to the image combination circuit 44.
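  • The patent does not spell out how the white balance gain calculating circuit 40 turns the integral values into gains; the sketch below assumes a common gray-world style rule (normalizing R and B to the G channel) purely for illustration. What it does reflect from the text is that only the high sensitivity image's integrals are used, and that the resulting gains are applied to both images.

```python
def wb_gains_from_high_image(r_integral, g_integral, b_integral):
    """Derive white-balance gains (Rg, Gg, Bg) from the per-channel
    integral values of the HIGH sensitivity image only. The gray-world
    rule (match R and B to G) is an assumed example, not from the patent."""
    return g_integral / r_integral, 1.0, g_integral / b_integral

# The same gain triple is then applied to both the high and the low
# sensitivity image data (gain control circuits 42 a and 42 b).
rg, gg, bg = wb_gains_from_high_image(500.0, 1000.0, 800.0)
```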
  • The image combination circuit 44 additively combines the high sensitivity image data and the low sensitivity image data on a pixel-by-pixel basis, and outputs image data, formed of the luminance signal Y and the color difference signals Cr, Cb, via the RGB interpolation unit 45, the RGB/YC conversion circuit 46, the noise filter 47, the contour correction circuit 48 and the color difference matrix circuit 49. By such processing, the image data acquires a broad dynamic range and a well-adjusted, natural white balance, and is then stored in the recording medium 29.
  • As described above, in the embodiment, when the high sensitivity image (high output image) and the low sensitivity image (low output image) are combined, the white balance adjustment for the low sensitivity image is performed with the gain value obtained from the image data of the high sensitivity image. Accordingly, no white balance discrepancy arises in the combined image of the low and high sensitivity images, which makes it possible to obtain a natural combined image.
  • Since the high and low sensitivity images of the same object, shot sequentially with a very short interval, are essentially picked up under the same light source, the gain value used for white balance adjustment should essentially be the same. However, it is highly probable that the image data of the low sensitivity image falls into a region where derivation of the gain value for white balance adjustment is almost impossible, and that a value largely deviating from the gain value obtained from the high sensitivity image data may therefore be derived. The embodiment accordingly performs the white balance adjustment for the low sensitivity image with the gain value derived from the high sensitivity image data, from the viewpoint of color reproduction and smooth continuity of color.
  • The present method can obtain a combined image substantially free of white balance discrepancy even though all the pixel data of the low and high sensitivity images are used for additive combination to produce the combined image, unlike some conventional techniques, as described in JP-A-2000-307963, in which a low sensitivity image and a high sensitivity image are combined region by region with the use of a mask.
  • The image combination executed by the digital signal processing unit 26 a in FIG. 6 follows the “logarithmic addition method”, in which the low and high sensitivity images are combined after respective gamma conversion. In contrast, there is another image combination method, the “arithmetic addition method”, in which the low and high sensitivity images are combined together prior to gamma conversion, followed by gamma conversion. FIG. 7 shows the block diagram of the digital signal processing unit 26 b associated with another embodiment, in which the invention is applied to a digital still camera carrying out the arithmetic addition method.
  • The digital signal processing unit 26 b associated with this other embodiment has a white balance gain calculating circuit 40, an image combination circuit 44 which takes in the RGB color signals of the high sensitivity image and the RGB color signals of the low sensitivity image on a pixel-by-pixel basis and additively combines them, a knee correction circuit 39 which executes a knee correction process on the output data (combined image data) from the image combination circuit 44, a gain control circuit 42 which executes gain corrections, including white balance adjustment, on the combined image data after the knee correction, a gamma correction circuit 43 which executes gamma correction on the gain-corrected combined image data, an RGB interpolation circuit 45, an RGB/YC conversion circuit 46, a noise filter 47, a contour correction circuit 48 and a color difference matrix circuit 49.
  • Again, in the digital signal processing unit 26 b of this other embodiment, the gain control circuit 42 adopts, for the white balance adjustment it executes, the gain value derived from the integrated value of the high sensitivity image data, while the low sensitivity image data is not used for deriving the gain value. The combined image prepared by the arithmetic addition method executed by the digital signal processing unit 26 b in this other embodiment is likewise provided with a well-adjusted white balance, free of discrepancy, and a broad dynamic range.
  • In the above-described embodiments, the image captured with a high shutter speed was called the low sensitivity image, and the one captured with a low shutter speed was called the high sensitivity image. However, the scope of the invention is not limited to image combination of different sensitivity images, but is also applicable to image combination of a plurality of images obtained with the same shutter speed and different diaphragm apertures.
  • For example, in the case of sequentially imaging multiple images of a high-contrast still life with varied exposure amounts, the image captured with a large aperture acts as a high output image since the output signal levels from individual pixels are high, while the image captured with a small aperture acts as a low output image since the output signal levels from individual pixels are lower than those of the high output image.
  • In the above-described embodiments, an example of image combination in which a plurality of images is sequentially captured with a high shutter speed and a low shutter speed was explained. However, the invention is also applicable to image combination in which a plurality of solid-state image sensors, one of which captures a high sensitivity image and another of which captures a low sensitivity image, are loaded in the image pickup device, and the captured plural images are combined.
  • Moreover, the invention is applicable to image combination in which both high sensitivity pixels and low sensitivity pixels are loaded in a single solid-state image sensor and the images read out of the high and low sensitivity pixels are combined. In such a case, the gain value for white balance adjustment is calculated on the basis of the high output image read out of the high sensitivity pixels or the high sensitivity solid-state image sensor, both of which store larger amounts of signal charge under the same shutter speed and the same diaphragm aperture.
  • In the above-described embodiments, an example has been explained in which a high output image and a low output image, both captured by a digital still camera, are combined within the digital still camera. However, the concept of the invention is also applicable to cases wherein high sensitivity image data and low sensitivity image data captured by an image pickup device are first stored in a memory, taken out of the image pickup device and fed to, for example, a PC, and an image combination operation similar to the one executed by the digital signal processing units 26 a, 26 b in the above-described embodiments is performed on those data, in order to provide a combined image having a well-adjusted white balance and a broad dynamic range.
  • Third Embodiment
  • In the third embodiment, descriptions will be given of cases where the invention is applied to a digital still camera.
  • As shown in FIG. 8, a digital camera 100 according to the embodiment has an optical lens 112, a diaphragm 114 which regulates the amount of light transmitted through the optical lens 112, a shutter 116 which regulates the light-passing period, and a CCD (Charge-Coupled Device) 118 as the image sensor, which picks up a scene with high sensitivity and low sensitivity photoreceptors based on the impinging light representing the scene that has passed through the optical lens 112, the diaphragm 114 and the shutter 116, and outputs R, G and B three-color analog image signals.
  • The CCD 118 is connected, in order, to an analog signal processing unit 120 which executes pre-determined analog signal processing on the high sensitivity and low sensitivity signals inputted from the CCD 118, and an analog/digital converter (hereafter called “A/D converter”) 122 which converts the high sensitivity and low sensitivity analog signals inputted from the analog signal processing unit 120 into digital signals.
  • The digital camera further has a driving unit 124 which drives the optical lens 112, a driving unit 126 which drives the diaphragm 114, a driving unit 128 which drives the shutter 116, a CCD control unit 130 which performs the timing control at imaging, a strobe control unit 158 which controls the light emission of a strobe 160 and a camera operating unit 162 such as a shutter switch.
  • The high sensitivity and low sensitivity digital signals (the digital values of the R, G and B signals) output from the A/D converter 122 are fed to both a control circuit 150 (the details of which will be described later) and a digital signal processing circuit 134. The digital signal processing circuit 134 has a combination operation circuit 136, a knee-processing circuit 138, a white balance (WB) adjusting circuit 140, a gamma correction circuit 142 and a memory 144.
  • The combination operation circuit 136, on receiving a combination command from the control circuit 150, combines the high sensitivity and low sensitivity signals inputted from the A/D converter 122 in the manner described below. The knee-processing circuit 138 modifies the input-output characteristic in the high luminance region as required. The WB adjusting circuit 140 has three multipliers (not shown) which modify the digital values of the R, G and B signals via multiplication by respective gains. Each of the R, G and B signals is inputted to the corresponding multiplier. Further, gain values Rg, Gg and Bg for controlling the white balance are inputted to the corresponding multipliers from the control circuit 150, and each multiplier multiplies its two inputted values. The R′, G′ and B′ signals, which have been white balance-adjusted by this operation, are fed to the gamma correction circuit 142.
  • The gamma correction circuit 142 modifies the input-output characteristics of the white balance-adjusted R′, G′ and B′ signals so as to have pre-determined gamma characteristics, and further converts the 10-bit signals to 8-bit ones. The signals thus processed are stored in the memory 144.
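  • The per-channel multiplication in the WB adjusting circuit 140 and the 10-bit to 8-bit gamma conversion in the gamma correction circuit 142 can be sketched as follows; the exponent 1/2.2 is an assumption, since the patent only specifies “pre-determined gamma characteristics”, and the clipping rule is likewise illustrative.

```python
def wb_and_gamma(value_10bit, gain, gamma=1 / 2.2):
    """Multiply one 10-bit color value (0..1023) by its WB gain (Rg, Gg
    or Bg), then apply a power-law gamma curve and requantize to 8 bits
    (0..255), clipping at full scale. An illustrative sketch only."""
    adjusted = min(value_10bit * gain, 1023.0)   # WB multiply, clip at 10-bit max
    normalized = adjusted / 1023.0
    return round(normalized ** gamma * 255)      # gamma + 10-bit -> 8-bit
```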
  • The RGB signals outputted from the memory 144 of the digital signal processing circuit 134 are recorded in a removable recording medium (not shown in the figure), such as a SmartMedia card or a Memory Stick, and at the same time displayed on a liquid crystal display (not shown).
  • In addition to the above-described components, the digital camera 100 is equipped with the control circuit 150 having a microcomputer containing a CPU (Central Processing Unit) 152, a ROM 154 and a RAM 156.
  • The control circuit 150 controls the entire operation of the digital camera 100. The ROM 154 stores a threshold “th” for the amount of light received by the CCD 118 (i.e., the scene luminance), derived from the f-number and shutter speed of the digital camera 100, and a program of a process routine for judging whether or not image combination is to be carried out based on this threshold “th” and for outputting a combination or no-combination command, depending on the judgment, to the combination operation circuit 136 of the digital signal processing circuit 134.
  • Moreover, the digital camera 100 has a photo-sensor 132 for the detection of scene luminance. The signal obtained from the light received by the photo-sensor 132 is fed to the control circuit 150. When the shutter switch is pushed halfway down, the control circuit 150 calculates the shutter speed and f-number corresponding to the mode setting (for example, automatic exposure mode, aperture priority mode or shutter speed priority mode) of the digital camera 100.
  • The following describes the configuration of the CCD 118 associated with the embodiment.
  • A honeycomb-type CCD as shown in FIG. 9 can be adopted as the CCD 118.
  • The image pickup part of this CCD 118, in which one color is allotted to each pixel, is provided with a plurality of two-dimensionally arranged photoreceptors PD1 with predetermined pitches (horizontal pitch = Ph (μm), vertical pitch = Pv (μm)) in a staggered arrangement in which adjacent photoreceptors PD1 are shifted in the vertical as well as the horizontal direction; vertical transfer electrodes VEL which are arranged so as to circumvent the aperture parts AP formed in the front surface of the photoreceptors PD1, take out signals (charge) from the photoreceptors PD1 and transfer the signals in the vertical direction; and horizontal transfer electrodes HEL which are placed at the bottom side of the lowest vertical transfer electrodes VEL, along the direction perpendicular to the paper plane, and transfer the signals delivered by the vertical transfer electrodes VEL outward. In the example depicted in the figure, the aperture parts AP are fabricated in the shape of an octagonal honeycomb.
  • Here, vertical transfer electrode groups each comprising a plurality of the vertical transfer electrodes VEL are constructed so that one of vertical transfer driving signals V1, V2, . . . , or V8 can be applied to each electrode simultaneously. In the example shown in the figure, vertical transfer-driving signal V3 is applied to the electrode group forming the first column, vertical transfer-driving signal V4 to the electrode group forming the second column, vertical transfer-driving signal V5 to the electrode group forming the third column, vertical transfer-driving signal V6 to the electrode group forming the fourth column, vertical transfer-driving signal V7 to the electrode group forming the sixth column, vertical transfer-driving signal V8 to the electrode group forming the seventh column, and vertical transfer-driving signal V1 to the electrode group forming the eighth column, respectively.
  • On the other hand, each photoreceptor PD1 is designed to be electrically connected to an adjacent vertical transfer electrode VEL by means of a transfer gate TG. In the example shown in the figure, the CCD is so constructed that each photoreceptor PD1 is connected to the adjacent vertical transfer electrode VEL lying to its lower right by means of a transfer gate TG.
  • In the figure, the aperture AP formed in front of the photoreceptor PD1 designated as ‘R’ is covered by a color separation filter (or color filter) transmitting red light, the aperture AP formed in front of the photoreceptor PD1 designated as ‘G’ is covered by a color separation filter (or color filter) transmitting green light, and the aperture AP formed in front of the photoreceptor PD1 designated as ‘B’ is covered by a color separation filter (or color filter) transmitting blue light. In other words, the photoreceptor designated as ‘R’ receives red light, the photoreceptor designated as ‘G’ receives green light, and the photoreceptor designated as ‘B’ receives blue light, respectively, and each photoreceptor outputs an analog signal corresponding to the amount of received light, respectively.
  • The CCD 118 further comprises photoreceptors PD2 that are less sensitive than the above-described photoreceptors PD1. Each photoreceptor PD2 is placed so as to be surrounded by plural photoreceptors PD1. As with the photoreceptor PD1, an aperture AP is provided in front of the photoreceptor PD2, but its area is smaller than that of the aperture for the photoreceptor PD1. The photoreceptor PD2 is likewise electrically connected to an adjacent vertical transfer electrode VEL by means of a transfer gate TG. Further, the aperture AP provided in front of the photoreceptor PD2 is covered with one of the R, G and B color filters, as for the photoreceptor PD1. Since the light-receiving area of the photoreceptor PD2 is thus made smaller than that of the photoreceptor PD1, RGB signals of sensitivities lower than those obtained by the photoreceptor PD1 can be acquired.
  • The electrode to which the transfer gate TG of the photoreceptor PD2 is connected is provided independently of the electrode to which the transfer gate TG of the adjacent photoreceptor PD1 is connected. In the present embodiment, the charge of the photoreceptor PD1 is read out first, and the charge of the photoreceptor PD2 is read out thereafter.
  • The photoreceptor PD1 of the embodiment corresponds to the first photoreceptor of the invention, while the photoreceptor PD2 of the embodiment corresponds to the second photoreceptor of the invention.
  • The combination command routine in the control circuit 150 of the digital camera 100 will now be described in detail using the flowchart in FIG. 10.
  • First of all, the light, which represents a scene image and has passed through the optical lens 112, the diaphragm 114 and the shutter 116, is received by both the photoreceptors PD1 and PD2 of the CCD 118, which have different sensitivities. An analog image signal representing the scene image is outputted to the analog signal processing unit 120. When the camera operator pushes the shutter switch halfway down in order to pick up this scene with the digital camera 100, a shutter speed (F) and an f-number (T) are calculated in step 100, based on the signal inputted from the photo-sensor 132 in this half-pushed state of the shutter switch, and the driving units 126 and 128 are controlled.
  • The analog signal processing unit 120 executes the pre-determined analog signal process for both of the high sensitivity and low sensitivity signals inputted from the CCD 118. These analog signals are converted to respective digital ones by means of the A/D converter 122. The digital signals outputted from the A/D converter 122 are inputted to the digital signal processing circuit 134 and the control circuit 150.
  • In the next step 102, an “Lv” value indicating the luminance of the imaging environment of the digital camera is derived on the basis of the calculated shutter speed (F) and f-number (T). One example of this derivation procedure is shown below.
  • From the f number “F”, an “Av” value expressing the f number in terms of APEX value is derived according to the following formula.
    F² = 2^Av
  • Further, from the shutter speed “T”, a “Tv” value expressing the shutter speed in terms of APEX value is derived according to the following formula.
    1/T = 2^Tv
  • Then, the “Av” value and the “Tv” value obtained by the two formulae above are combined to give an “Ev” value (representing the amount of received light).
    Ev = Av + Tv
  • The resulting “Ev” value is used directly as the “Lv” value.
    Lv = Ev
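The derivation in steps 100 through 102 can be sketched in a few lines. This is an illustrative helper, not the patent's actual firmware; the function name and the sample values are assumptions.

```python
import math

# Illustrative sketch of the Lv derivation above: convert an f number F and
# a shutter time T (in seconds) to APEX values and sum them.
# luminance_value is a hypothetical helper name, not from the patent.

def luminance_value(f_number: float, shutter_time: float) -> float:
    av = math.log2(f_number ** 2)       # F^2 = 2^Av
    tv = math.log2(1.0 / shutter_time)  # 1/T = 2^Tv
    return av + tv                      # Ev = Av + Tv, used directly as Lv
```

For example, F = 8 and T = 1/500 s give Av = 6 and Tv ≈ 8.97, hence Lv ≈ 14.97, which exceeds the preferred threshold of 14.5 discussed below.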
  • Now, the derived Lv value is compared in step 104 with the threshold “th” stored in the ROM 154. FIG. 11 shows the relation between the general class of imaging condition and the luminance Lv for each imaging condition. In the embodiment, the threshold “th” is established within the clear-sky range, under which the scene is highly contrasty (between 14 and 16 in terms of Lv, preferably Lv = 14.5). If the Lv value is larger than this threshold th (Lv > th), a judgment is made that image combination is necessary, since the imaging condition is highly contrasty, i.e., the scene contains high luminance subjects. On the other hand, if the derived Lv value is at or below the threshold th (Lv ≤ th), a judgment is made that image combination is not necessary, since the imaging condition is of low contrast, i.e., the scene contains low luminance subjects.
  • Accordingly, when Lv > th is judged in step 104, a combination command signal is output from the control circuit 150 to the combination operation circuit 136 in step 106.
  • When the combination command signal is inputted to the combination operation circuit 136, the combination operation circuit 136 combines the high sensitivity signal and the low sensitivity signal inputted from the A/D converter 122 by the arithmetic addition method represented by the following formula.
    data = {wh × high + wl × (low − th/S + th)}/{wh + wl}
  • In the formula, “S” represents the ratio (sensitivity ratio) of the high sensitivity signal to the low sensitivity signal, which takes values equal to or larger than “1”. “th” is the threshold indicating the level at which the calculation of the combined signal data for image formation starts. “high” is the value of the high sensitivity signal, and “wh” is a value representing the weight of the high sensitivity signal. “low” is the value of the low sensitivity signal, and “wl” is a value representing the weight of the low sensitivity signal.
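As a sanity check on the formula, note that it is continuous at the crossover: when the high sensitivity signal equals th, the low sensitivity signal equals th/S, and the combined output is again th. A minimal sketch, with variable names mirroring the formula (the default weights are assumptions, not values from the text):

```python
# Sketch of the arithmetic addition method above. The weights wh and wl
# default to 1 for simplicity (an assumption, not a value from the text).

def combine(high: float, low: float, s: float, th: float,
            wh: float = 1.0, wl: float = 1.0) -> float:
    return (wh * high + wl * (low - th / s + th)) / (wh + wl)

# At the crossover point high = th, low = th/s, the output is exactly th,
# so the combined curve joins the high sensitivity curve without a jump.
```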
  • FIG. 12 shows the relation between the high sensitivity signal obtained by receiving the light from the scene with the digital camera 100 under an imaging condition giving the judgment Lv > th and the combined signal obtained by combining the high sensitivity signal with the low sensitivity signal as described above. The thin line in the figure indicates the high sensitivity signal prior to image combination, and the thick line indicates the combined signal resulting from the combination operation. As for the scene luminance, “X1” indicates the maximum scene luminance expressible without image combination, while “X2” indicates the maximum scene luminance that becomes expressible after image combination. In this example, the luminance “X2” of the scene to be picked up is larger than “X1”. The figure shows the data for one color chosen from R, G and B.
  • As is evident from FIG. 12, the scene luminance level up to which tonal information is expressible is expanded from “X1” to “X2” owing to the above-described combination operation. Thus, in the case where the scene contains subjects having luminances higher than “X1”, the dynamic range can preferably be expanded via the execution of the combination operation, leading to an expansion of the expressible range. After the image combination operation, an image of the scene is formed with both the high sensitivity signal and the combined signal shown by the thick solid line in the figure, and the image is output to the Knee-processing circuit 138.
  • In contrast, for the judgment Lv ≤ th in step 104, image combination is not necessary. Thus, a command signal directing the circuit to pass only the high sensitivity signal, without the combination operation, is delivered to the combination operation circuit 136 in step 108. When the combination operation circuit 136 receives this command signal, it allows the high sensitivity signal to pass through and outputs it to the Knee-processing circuit 138. Since the high sensitivity signal is used up to the scene luminance “X1”, effective use of the tone values in the high luminance region is achieved.
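Steps 104 through 108 amount to a simple branch on the derived Lv value. A hedged sketch; the function name is an assumption, and the threshold constant uses the preferred Lv = 14.5 mentioned above:

```python
TH = 14.5  # preferred threshold within the clear-sky range Lv = 14-16

def select_signal(lv: float, high_signal, combined_signal):
    # Lv > th: highly contrasty scene, use the combined signal (step 106);
    # Lv <= th: low contrast, pass the high sensitivity signal through (step 108).
    return combined_signal if lv > TH else high_signal
```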
  • After the process by the Knee-processing circuit 138, the WB-adjusting circuit 140 and the gamma-correcting circuit 142 perform predetermined processes, and the resulting image is then displayed on the liquid crystal display as, for example, a through image. When the image is picked up with the shutter switch completely pushed down, the image data is stored in the memory 144 and at the same time recorded in a removable recording medium such as a SmartMedia card, a Memory Stick and the like.
  • As described above, whether or not the combination of the high sensitivity signal and the low sensitivity signal is necessary is judged based on the scene luminance, and the signals are combined according to the result of the judgment. Therefore, not only are preferable images formed, but the dynamic range can also be used effectively.
  • In the embodiment described above, the combination operation circuit 136 combines the high sensitivity signal with the low sensitivity signal inputted from the A/D converter 122 prior to white balance adjustment and gamma correction. In the following, another embodiment is described in which white balance adjustment and gamma correction are performed on the high sensitivity signal and the low sensitivity signal prior to image combination, followed by combination using the logarithmic addition method. FIG. 13 shows the configuration of a digital signal processing circuit 70 for this embodiment. The digital signal processing circuit 70 has a WB (white balance) adjusting circuit 72 which adjusts the white balance of the high sensitivity signal, a WB-adjusting circuit 74 which adjusts the white balance of the low sensitivity signal, a gamma-correcting circuit 76 which is connected to the WB-adjusting circuit 72 and executes the gamma correction of the high sensitivity signal, a gamma-correcting circuit 78 which is connected to the WB-adjusting circuit 74 and executes the gamma correction of the low sensitivity signal, an image combination circuit 80 and a memory 82.
  • According to the configuration shown in FIG. 13, the high sensitivity signal and the low sensitivity signal are subjected to respective white balance adjustments corresponding to their characteristics, performed by the WB-adjusting circuits 72 and 74, are further subjected to respective gamma corrections performed by the gamma-correcting circuits 76 and 78, and are delivered to the combination operation circuit 80. The combination operation circuit 80 combines the inputted high sensitivity and low sensitivity signals when it receives a combination command from the control circuit 150, in the same manner as in the previous embodiment. When it receives a command indicating that no combination is needed, the combination operation circuit 80 allows only the high sensitivity signal to pass through.
  • In the two embodiments described above, the high sensitivity photoreceptors PD1 and the low sensitivity photoreceptors PD2 were formed separately to obtain a high sensitivity signal and a low sensitivity signal, respectively. However, another configuration may be used in which a single type of photoreceptor PD is divided by means of a channel stopper 94 into a high sensitivity region 92 having a large photo-receptive area capable of high sensitivity photo-reception and a low sensitivity region 90 having a small photo-receptive area capable of low sensitivity photo-reception, and in which a high sensitivity signal and a low sensitivity signal are obtained from the two regions, respectively, as shown in FIG. 14. Owing to the channel stopper 94 provided in the photoreceptor PD, the two signals are not mixed, so the signal formed by the high sensitivity photo-reception can be separated from the one formed by the low sensitivity photo-reception.
  • In the above-described embodiments, the decision for or against combination was made based on the “Lv” value. Alternatively, the high sensitivity signal and the low sensitivity signal may always be combined according to the combination formula cited previously, and a decision based on the “Lv” value may be made on whether or not the combined signal should be used for image formation.
  • In the above-described embodiment, the judgment of whether image combination is necessary was made on the basis of the “Lv” value (representing the luminance of the imaging environment). However, the judgment on the type of the scene-illuminating light source, which is derived by integrating each of the R, G and B digital color signals for the purpose of acquiring the gain value for white balance adjustment, may be applied to the judgment on image combination instead of the “Lv” value. For example, as shown in FIG. 11, image combination is judged necessary in the case where the subject is placed under a clear sky (i.e., the scene illuminant is a clear sky), and for all other types of scene illuminants, image combination is judged unnecessary.
  • Furthermore, both the type of scene illuminant and the “Lv” value may be used for the judgment on the combination necessity.
  • The application of the invention is not limited to digital cameras of the above-described types, but extends to various image pickup devices.
  • As described above, according to the embodiments, since the signal output from the first photoreceptor or a combined signal is used depending on the imaging condition, preferable effects can be achieved, such as dynamic range expansion and the formation of preferable images owing to the use of tone values toward the highlight region.
  • Fourth Embodiment
  • A solid-state image sensor according to the fourth embodiment is the same as that of the first embodiment shown in FIG. 2.
  • In the dynamic range-expanded imaging apparatus of the fourth embodiment, a low sensitivity image (an image picked up by the low sensitivity pixels 2) and a high sensitivity image (an image picked up by the high sensitivity pixels 3) are obtained simultaneously, and the image signals are read out sequentially from the pixels 2 and 3 and additively combined for output.
  • FIG. 15 is a diagram showing the general tendency of the signal levels of the low and high sensitivity signals relative to the exposure amount. In the illustrated example, while the saturation signal level for the low sensitivity signal “L” is “n”, the saturation signal level for the high sensitivity signal “H” is “4n”, which is four times “n”. As for the exposure amount at which the signal level reaches saturation, the saturation exposure for the low sensitivity pixel 2 is four times as large as that for the high sensitivity pixel 3; thus, when the latter is represented by “m”, the former is equal to “4m”. Although these characteristics derive from the fact that the area of the high sensitivity pixel 3 is four times as large as that of the low sensitivity pixel 2, the actual saturation signal ratio or saturation exposure ratio does not always reflect the area ratio of the two kinds of pixels.
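The relations in FIG. 15 can be modeled numerically. The concrete numbers n = 1000 and m = 1.0 are illustrative assumptions; only the ratios (saturation level 4n versus n, saturation exposure m versus 4m) come from the text:

```python
# Linear-then-saturating model of FIG. 15 for one pixel pair.

def high_signal(exposure: float, n: float = 1000.0, m: float = 1.0) -> float:
    # High sensitivity pixel 3: saturates at level 4n at exposure m.
    return min(4.0 * n, (4.0 * n / m) * exposure)

def low_signal(exposure: float, n: float = 1000.0, m: float = 1.0) -> float:
    # Low sensitivity pixel 2: saturates at level n at exposure 4m.
    return min(n, (n / (4.0 * m)) * exposure)
```

Below saturation the two slopes differ by a factor of 16, which is the sensitivity ratio implied by the stated saturation levels and saturation exposures.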
  • FIG. 16 is a block diagram of the output circuit in the dynamic range-expanded imaging apparatus of the embodiment, which has a CCD section with the pixel arrangement described above. This output circuit includes an offset correction circuit 11 which takes in the digital RGB color signals that are output from each pixel (CCD) 1 shown in FIG. 2 and A/D converted, and executes an offset process; a gain control circuit 12 which adjusts white balance; a gamma correction circuit 13 which executes gamma correction (γ conversion) on the gain-corrected color signals in a manner to be described in detail later; and an addition operation circuit 14.
  • The output circuit further includes an RGB interpolation operator part 15 which executes an interpolation calculation on the RGB signals after the addition operation to derive the RGB three-color signals at each pixel position, an RGB/YC converting circuit 16 which obtains the luminance signal Y and the color-difference signal C from the RGB signals, a noise reduction circuit 17 which reduces noise in the luminance signal Y and the color-difference signal C, a contour correction circuit 18 which executes contour correction on the noise-reduced luminance signal Y, and a color difference matrix circuit 19 which executes color hue correction by multiplying the color-difference signal C after noise reduction by a color difference matrix.
  • FIG. 17 is a block diagram of the gamma correction circuit 13 and the addition operation circuit 14, both shown in FIG. 16. The gamma correction circuit 13 has a first gamma correction circuit 13a, a second gamma correction circuit 13b, and a switching circuit 13c which takes in the output signal from the gain control circuit 12 in FIG. 16 and outputs it to either of the gamma correction circuits 13a and 13b. The addition operation circuit 14 additively combines the output signal of the first gamma correction circuit 13a and the output signal of the second gamma correction circuit 13b, and outputs the combined signal to the subsequent RGB interpolating part 15.
  • The signal charge detected by the low sensitivity pixel 2 and the signal charge detected by the high sensitivity pixel 3 are read from each pixel 1 in the dynamic range-expanded imaging apparatus, as distinguished from each other. When an image signal read from the high sensitivity pixel 3 is inputted to the gamma correction circuit 13 via the offset correction circuit 11 and the gain control circuit 12, the switching circuit 13c delivers this input signal to the first gamma correction circuit 13a. When an image signal read from the low sensitivity pixel 2 is inputted to the gamma correction circuit 13 via the offset correction circuit 11 and the gain control circuit 12, the switching circuit 13c delivers this input signal to the second gamma correction circuit 13b.
  • In the γ conversion operation, an output signal is derived by raising the input signal value to the γ power. The “γ” value used for the operation is not set at a constant value over the entire input signal range, but is generally modified from the base “γ” value by about 10% according to the input signal range. Table data of a first gamma characteristic based on γ=0.45 is set in the first gamma correction circuit 13a. Table data of a second gamma characteristic based on γ=0.18 is set in the second gamma correction circuit 13b.
  • In the dynamic range-expanded imaging apparatus, the image signal read from the high sensitivity pixel 3 is subjected to γ conversion with a “γ” value of about 0.45 by the first gamma correction circuit 13a and output to the addition operation circuit 14. On the other hand, the image signal read from the low sensitivity pixel 2 is subjected to γ conversion with a “γ” value of about 0.18 by the second gamma correction circuit 13b and output to the addition operation circuit 14.
  • The addition operation circuit 14 adds, on a pixel-by-pixel basis, the image signal from the high sensitivity pixel 3 that was γ converted by the first gamma correction circuit 13a and the image signal from the low sensitivity pixel 2 that was γ converted by the second gamma correction circuit 13b, and then outputs the result.
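The path through FIG. 17 can be sketched as follows. The signals are assumed normalized to [0, 1], the roughly 10% per-range modification of γ mentioned above is omitted for brevity, and the function names are illustrative rather than taken from the patent:

```python
# Dual-gamma combination: γ-convert each signal with its own exponent,
# then add pixel by pixel (addition operation circuit 14).

def gamma_convert(x: float, g: float) -> float:
    return x ** g  # output = input raised to the γ power

def combine_pixel(high: float, low: float,
                  g_high: float = 0.45, g_low: float = 0.18) -> float:
    return gamma_convert(high, g_high) + gamma_convert(low, g_low)
```

Because the low sensitivity signal receives the smaller exponent (0.18 versus 0.45), it is compressed more strongly before the addition, which is what lets the sum avoid level-dependent weighting.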
  • As described above, according to the embodiment, image data from the low sensitivity pixel and image data from the high sensitivity pixel are each γ converted with γ characteristics that differ from each other over the entire input signal range and are then additively combined to generate reproduced images. Therefore, the deterioration of white balance and of tone reproduction in the middle luminance range can be avoided without executing weighted addition depending on signal levels, and thus natural images with an extended dynamic range can be obtained. Moreover, since the γ value for the image signal from the low sensitivity pixel is set lower than that for the image signal from the high sensitivity pixel, a more preferable image can be obtained.
  • By way of precaution, the invention is not restricted to the above-described embodiment at all, although the above embodiment dealt with a solid-state imaging apparatus equipped with both high and low sensitivity pixels. The concept of the invention can also be achieved by installing, in a digital still camera and the like, a control circuit that sequentially captures high and low sensitivity images through shutter speed control, etc., executes γ conversion on each of the image signals with mutually differing γ characteristics, and then adds the two kinds of signals.

Claims (4)

  1. An image pickup apparatus for combining image data of a high output image and image data of a low output image, both of which are picked up by an imaging device, to produce combined image data comprising:
    a calculating unit for calculating a gain value for white balance adjustment from the image data of the high output image;
    a gain correcting unit for performing not only first white balance adjustment for the image data of the high output image with the gain value calculated by the calculating unit but also second white balance adjustment for the image data of the low output image with the gain value.
  2. An image pickup apparatus for combining image data of a high output image and image data of a low output image, both of which are picked up by an imaging device, to produce combined image data comprising:
    a calculating unit for calculating a gain value for white balance adjustment from the image data of the high output image;
    a gain correcting unit for performing a white balance adjustment for the combined image data with the gain value calculated by the calculating unit.
  3. An image processing method of combining image data of a high output image and image data of a low output image, both of which are picked up by an imaging device, to produce combined image data comprising the step of:
    calculating a gain value used for first white balance adjustment for the image data of the high output image and second white balance adjustment for the image data of the low output image from the image data of the high output image.
  4. An image processing method of combining image data of a high output image and image data of a low output image, both of which are picked up by an imaging device, to produce combined image data comprising the step of:
    calculating a gain value used for a white balance adjustment for the combined image data from the image data of the high output image.
US11606080 2002-06-24 2006-11-30 Image pickup apparatus and image processing method Abandoned US20070070223A1 (en)

Priority Applications (10)

Application Number Priority Date Filing Date Title
JP2002182924A JP2004032171A (en) 2002-06-24 2002-06-24 Imaging apparatus
JPJP2002-182924 2002-06-24
JP2002205607A JP2004048563A (en) 2002-07-15 2002-07-15 Imaging device and image processing method
JPJP2002-205607 2002-07-15
JP2002212517A JP4015492B2 (en) 2002-07-22 2002-07-22 Image composition method and an imaging apparatus
JPJP2002-212517 2002-07-22
JP2002237320A JP3990230B2 (en) 2002-08-16 2002-08-16 Imaging device
JPJP2002-237320 2002-08-16
US10601654 US7508421B2 (en) 2002-06-24 2003-06-24 Image pickup apparatus and image processing method
US11606080 US20070070223A1 (en) 2002-06-24 2006-11-30 Image pickup apparatus and image processing method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US11606080 US20070070223A1 (en) 2002-06-24 2006-11-30 Image pickup apparatus and image processing method

Publications (1)

Publication Number Publication Date
US20070070223A1 true true US20070070223A1 (en) 2007-03-29

Family

ID=31999404

Family Applications (4)

Application Number Title Priority Date Filing Date
US10601654 Active 2025-06-21 US7508421B2 (en) 2002-06-24 2003-06-24 Image pickup apparatus and image processing method
US11606080 Abandoned US20070070223A1 (en) 2002-06-24 2006-11-30 Image pickup apparatus and image processing method
US11606087 Active 2024-07-23 US7750950B2 (en) 2002-06-24 2006-11-30 Image pickup apparatus and image processing method
US11606078 Abandoned US20070076103A1 (en) 2002-06-24 2006-11-30 Image pickup apparatus and image processing method

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US10601654 Active 2025-06-21 US7508421B2 (en) 2002-06-24 2003-06-24 Image pickup apparatus and image processing method

Family Applications After (2)

Application Number Title Priority Date Filing Date
US11606087 Active 2024-07-23 US7750950B2 (en) 2002-06-24 2006-11-30 Image pickup apparatus and image processing method
US11606078 Abandoned US20070076103A1 (en) 2002-06-24 2006-11-30 Image pickup apparatus and image processing method

Country Status (1)

Country Link
US (4) US7508421B2 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050219391A1 (en) * 2004-04-01 2005-10-06 Microsoft Corporation Digital cameras with luminance correction
US20130120620A1 (en) * 2011-11-11 2013-05-16 Canon Kabushiki Kaisha Image processing apparatus and image processing method
US20130229557A1 (en) * 2012-03-01 2013-09-05 Canon Kabushiki Kaisha Image pickup apparatus, image pickup system, driving method for image pickup apparatus, and driving method for image pickup system

Families Citing this family (62)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE10064678C1 (en) * 2000-12-22 2002-07-11 Kappa Opto Electronics Gmbh A method for improving the signal in a recorded with a digital color video camera picture sequence
JP4004943B2 (en) * 2002-12-25 2007-11-07 富士フイルム株式会社 Image composition method and an imaging apparatus
US7598986B2 (en) * 2003-01-10 2009-10-06 Fujifilm Corporation Image pick-up apparatus and white balance control method
US7236646B1 (en) * 2003-04-25 2007-06-26 Orbimage Si Opco, Inc. Tonal balancing of multiple images
JP4301890B2 (en) * 2003-07-25 2009-07-22 富士フイルム株式会社 Control method for an imaging apparatus and an imaging apparatus
JP2005072965A (en) * 2003-08-25 2005-03-17 Fuji Film Microdevices Co Ltd Image compositing method, solid-state image pickup device and digital camera
JP4393242B2 (en) 2004-03-29 2010-01-06 富士フイルム株式会社 The driving method of the solid-state imaging device and a solid-state imaging device
JP4500574B2 (en) * 2004-03-30 2010-07-14 富士フイルム株式会社 High dynamic range color solid-state imaging device and a digital camera equipped with the solid-state imaging device
JP4412720B2 (en) * 2004-06-24 2010-02-10 キヤノン株式会社 Image processing method and apparatus
KR100617781B1 (en) * 2004-06-29 2006-08-28 삼성전자주식회사 Apparatus and method for improving image quality in a image sensor
US7627435B2 (en) * 2004-08-04 2009-12-01 Agilent Technologies, Inc. Filtering of pixel signals during array scanning
US7477304B2 (en) * 2004-08-26 2009-01-13 Micron Technology, Inc. Two narrow band and one wide band color filter for increasing color image sensor sensitivity
KR100588744B1 (en) * 2004-09-09 2006-06-12 매그나칩 반도체 유한회사 Shutter module using line scan type image sensor and control method of it
JP2006081037A (en) * 2004-09-10 2006-03-23 Eastman Kodak Co Image pickup device
US7545421B2 (en) * 2004-10-28 2009-06-09 Qualcomm Incorporated Apparatus, system, and method for optimizing gamma curves for digital image devices
JP4452161B2 (en) * 2004-11-12 2010-04-21 パナソニック株式会社 Imaging device
JP2006238410A (en) * 2005-01-31 2006-09-07 Fuji Photo Film Co Ltd Imaging apparatus
US20060170790A1 (en) * 2005-01-31 2006-08-03 Richard Turley Method and apparatus for exposure correction in a digital imaging device
US7480421B2 (en) * 2005-05-23 2009-01-20 Canon Kabushiki Kaisha Rendering of high dynamic range images
US20070127909A1 (en) * 2005-08-25 2007-06-07 Craig Mowry System and apparatus for increasing quality and efficiency of film capture and methods of use thereof
US7999863B2 (en) * 2006-02-01 2011-08-16 Fujifilm Corporation Image correction apparatus and method
JP4241840B2 (en) * 2006-02-23 2009-03-18 富士フイルム株式会社 Imaging device
JP2007259344A (en) * 2006-03-24 2007-10-04 Fujifilm Corp Imaging apparatus and image processing method
JP4187004B2 (en) * 2006-04-17 2008-11-26 ソニー株式会社 Exposure control method for an imaging apparatus and an imaging apparatus
JP4949766B2 (en) * 2006-08-09 2012-06-13 オンセミコンダクター・トレーディング・リミテッド Image signal processing device
KR100849846B1 (en) * 2006-09-21 2008-08-01 삼성전자주식회사 Apparatus and method for compensating brightness of image
JP4994825B2 (en) * 2006-12-22 2012-08-08 キヤノン株式会社 An imaging apparatus and its control method and program, and storage medium
JP4984981B2 (en) * 2007-03-08 2012-07-25 ソニー株式会社 Imaging method and an imaging apparatus and a driving device
US8599282B2 (en) * 2007-04-26 2013-12-03 Samsung Electronics Co., Ltd. Method and apparatus for generating image
JP4325703B2 (en) * 2007-05-24 2009-09-02 ソニー株式会社 The solid-state imaging device, signal processing device and signal processing method of the solid-state imaging device, and imaging apparatus
US8169518B2 (en) * 2007-08-14 2012-05-01 Fujifilm Corporation Image pickup apparatus and signal processing method
US8073234B2 (en) * 2007-08-27 2011-12-06 Acushnet Company Method and apparatus for inspecting objects using multiple images having varying optical properties
US20090059039A1 (en) * 2007-08-31 2009-03-05 Micron Technology, Inc. Method and apparatus for combining multi-exposure image data
CN101394485B (en) * 2007-09-20 2011-05-04 华为技术有限公司 Image generating method, apparatus and image composition equipment
JP5163031B2 (en) * 2007-09-26 2013-03-13 株式会社ニコン Electronic camera
US20090086074A1 (en) * 2007-09-27 2009-04-02 Omnivision Technologies, Inc. Dual mode camera solution apparatus, system, and method
US20100194915A1 (en) * 2008-01-09 2010-08-05 Peter Bakker Method and apparatus for processing color values provided by a camera sensor
JP5009880B2 (en) * 2008-09-19 2012-08-22 富士フイルム株式会社 An imaging apparatus and an imaging method
JP5202211B2 (en) * 2008-09-25 2013-06-05 三洋電機株式会社 The image processing device and electronic equipment
JP5642344B2 (en) * 2008-11-21 2014-12-17 オリンパスイメージング株式会社 Image processing apparatus, image processing method, and image processing program
JP5251563B2 (en) * 2009-02-04 2013-07-31 日本テキサス・インスツルメンツ株式会社 Imaging device
US8013911B2 (en) * 2009-03-30 2011-09-06 Texas Instruments Incorporated Method for mixing high-gain and low-gain signal for wide dynamic range image sensor
JP5526673B2 (en) * 2009-09-16 2014-06-18 ソニー株式会社 The solid-state imaging device and electronic apparatus
US8269856B2 (en) * 2009-10-09 2012-09-18 Altek Corporation Automatic focusing system in low-illumination setting and method using the same
JP5115568B2 (en) * 2009-11-11 2013-01-09 カシオ計算機株式会社 Imaging apparatus, an imaging method, and an imaging program
KR101633893B1 (en) * 2010-01-15 2016-06-28 삼성전자주식회사 Apparatus and Method for Image Fusion
CN102254919B (en) * 2010-04-12 2013-08-14 成功大学 Distributed filtering and sensing structure and optical device
JP5622513B2 (en) * 2010-10-08 2014-11-12 オリンパスイメージング株式会社 Image processing apparatus, image processing method, and an imaging device
US8675984B2 (en) 2011-03-03 2014-03-18 Dolby Laboratories Licensing Corporation Merging multiple exposed images in transform domain
US9690997B2 (en) 2011-06-06 2017-06-27 Denso Corporation Recognition object detecting apparatus
KR101861767B1 (en) * 2011-07-08 2018-05-29 삼성전자주식회사 Image sensor, image processing apparatus including the same, and interpolation method of the image processing apparatus
JP6008148B2 (en) * 2012-06-28 2016-10-19 パナソニックIpマネジメント株式会社 Imaging device
JP6164867B2 (en) * 2013-02-21 2017-07-19 キヤノン株式会社 The solid-state imaging device, a control method, and control program
JP6226551B2 (en) * 2013-05-08 2017-11-08 キヤノン株式会社 Imaging device
US9402041B2 (en) 2013-07-11 2016-07-26 Canon Kabushiki Kaisha Solid-state image sensor and imaging apparatus using same
JP6176028B2 (en) * 2013-09-26 2017-08-09 株式会社デンソー Vehicle control system, an image sensor
US9147704B2 (en) * 2013-11-11 2015-09-29 Omnivision Technologies, Inc. Dual pixel-sized color image sensors and methods for manufacturing the same
JP2015177391A (en) * 2014-03-17 2015-10-05 株式会社リコー Image processing apparatus, imaging apparatus, image processing program, and image processing method
US20150348502A1 (en) * 2014-05-30 2015-12-03 Apple Inc. User Interface and Method for Directly Setting Display White Point
US9736390B2 (en) * 2014-07-07 2017-08-15 Canon Kabushiki Kaisha Image processing apparatus and image processing method
WO2016181398A1 (en) * 2015-05-11 2016-11-17 Vayyar Imaging Ltd System, device and methods for imaging of objects using electromagnetic array
US9871965B2 (en) * 2016-02-03 2018-01-16 Texas Instruments Incorporated Image processing for wide dynamic range (WDR) sensor data

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5128751A (en) * 1989-02-01 1992-07-07 Canon Kabushiki Kaisha Image sensing device arranged to perform a white compression process
US5455621A (en) * 1992-10-27 1995-10-03 Matsushita Electric Industrial Co., Ltd. Imaging method for a wide dynamic range and an imaging device for a wide dynamic range
US5694167A (en) * 1990-11-22 1997-12-02 Canon Kabushiki Kaisha Image pick up device using transfer registers in parallel with rows of light receiving cells
US6204881B1 (en) * 1993-10-10 2001-03-20 Canon Kabushiki Kaisha Image data processing apparatus which can combine a plurality of images at different exposures into an image with a wider dynamic range
US6211915B1 (en) * 1996-02-09 2001-04-03 Sony Corporation Solid-state imaging device operable in wide dynamic range
US20010001245A1 (en) * 1996-05-08 2001-05-17 Olympus Optical Co., Ltd., Tokyo, Japan Image processing apparatus
US20020145674A1 (en) * 2001-04-09 2002-10-10 Satoru Nakamura Imaging apparatus and signal processing method for the same
US6593970B1 (en) * 1997-11-21 2003-07-15 Matsushita Electric Industrial Co., Ltd. Imaging apparatus with dynamic range expanded, a video camera including the same, and a method of generating a dynamic range expanded video signal
US6747694B1 (en) * 1999-06-07 2004-06-08 Hitachi Denshi Kabushiki Kaisha Television signal processor for generating video signal of wide dynamic range, television camera using the same, and method for television signal processing
US6803946B1 (en) * 1998-07-27 2004-10-12 Matsushita Electric Industrial Co., Ltd. Video camera apparatus with preset operation and a video camera monitor system including the same
US6972800B2 (en) * 2000-01-14 2005-12-06 Matsushita Electric Industrial Co., Ltd. Solid state imaging apparatus for a video camera capable of independently performing a knee point adjustment and a gain control
US20060050163A1 (en) * 1999-06-15 2006-03-09 Wang Yibing M Dual sensitivity image sensor
US7502067B2 (en) * 1997-12-05 2009-03-10 Olympus Optical Co., Ltd. Electronic camera that synthesizes two images taken under different exposures

Family Cites Families (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5420635A (en) * 1991-08-30 1995-05-30 Fuji Photo Film Co., Ltd. Video camera, imaging method using video camera, method of operating video camera, image processing apparatus and method, and solid-state electronic imaging device
JP3115069B2 (en) 1991-12-27 2000-12-04 松下電器産業株式会社 Electronic component mounting apparatus
JP3400506B2 (en) * 1993-03-12 2003-04-28 オリンパス光学工業株式会社 Image processing apparatus
US5801773A (en) * 1993-10-29 1998-09-01 Canon Kabushiki Kaisha Image data processing apparatus for processing combined image signals in order to extend dynamic range
US5745808A (en) * 1995-08-21 1998-04-28 Eastman Kodak Company Camera exposure control system using variable-length exposure tables
JP3830590B2 (en) 1996-10-30 2006-10-04 富士写真フイルム株式会社 Solid-state imaging device
JPH11155108A (en) 1997-11-21 1999-06-08 Matsushita Electric Ind Co Ltd Video signal processor and processing method and video camera using the same
JP3999321B2 (en) 1997-12-05 2007-10-31 オリンパス株式会社 Electronic camera
JP4282113B2 (en) * 1998-07-24 2009-06-17 オリンパス株式会社 Imaging apparatus, imaging method, and recording medium storing an image pickup program
JP3208762B2 (en) 1998-11-18 2001-09-17 ソニー株式会社 Image processing apparatus and image processing method
JP3813362B2 (en) 1998-11-19 2006-08-23 ソニー株式会社 Image processing apparatus and image processing method
JP3458741B2 (en) * 1998-12-21 2003-10-20 ソニー株式会社 Imaging method and imaging apparatus, and image processing method and image processing apparatus
EP1017230B1 (en) * 1998-12-28 2010-02-10 SANYO ELECTRIC Co., Ltd. Imaging apparatus and digital camera
JP2000209492A (en) 1999-01-14 2000-07-28 Toshiba Ave Co Ltd Image pickup device
JP4356134B2 (en) 1999-04-16 2009-11-04 ソニー株式会社 Image processing apparatus and image processing method
JP2000350220A (en) 1999-06-07 2000-12-15 Hitachi Denshi Ltd Television camera
JP2001008104A (en) 1999-06-23 2001-01-12 Fuji Photo Film Co Ltd Wide dynamic range image pickup device
WO2001013171A1 (en) * 1999-08-17 2001-02-22 Applied Vision Systems, Inc. Improved dynamic range video camera, recording system, and recording method
JP3615454B2 (en) * 2000-03-27 2005-02-02 三洋電機株式会社 Digital camera
JP3982987B2 (en) 2000-10-18 2007-09-26 株式会社日立製作所 Imaging device
JP3607866B2 (en) * 2000-12-12 2005-01-05 オリンパス株式会社 Imaging device
JP2004032171A (en) 2002-06-24 2004-01-29 Fuji Photo Film Co Ltd Imaging apparatus

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5128751A (en) * 1989-02-01 1992-07-07 Canon Kabushiki Kaisha Image sensing device arranged to perform a white compression process
US5694167A (en) * 1990-11-22 1997-12-02 Canon Kabushiki Kaisha Image pick up device using transfer registers in parallel with rows of light receiving cells
US5455621A (en) * 1992-10-27 1995-10-03 Matsushita Electric Industrial Co., Ltd. Imaging method for a wide dynamic range and an imaging device for a wide dynamic range
US6204881B1 (en) * 1993-10-10 2001-03-20 Canon Kabushiki Kaisha Image data processing apparatus which can combine a plurality of images at different exposures into an image with a wider dynamic range
US6211915B1 (en) * 1996-02-09 2001-04-03 Sony Corporation Solid-state imaging device operable in wide dynamic range
US20010001245A1 (en) * 1996-05-08 2001-05-17 Olympus Optical Co., Ltd., Tokyo, Japan Image processing apparatus
US6593970B1 (en) * 1997-11-21 2003-07-15 Matsushita Electric Industrial Co., Ltd. Imaging apparatus with dynamic range expanded, a video camera including the same, and a method of generating a dynamic range expanded video signal
US7502067B2 (en) * 1997-12-05 2009-03-10 Olympus Optical Co., Ltd. Electronic camera that synthesizes two images taken under different exposures
US6803946B1 (en) * 1998-07-27 2004-10-12 Matsushita Electric Industrial Co., Ltd. Video camera apparatus with preset operation and a video camera monitor system including the same
US6747694B1 (en) * 1999-06-07 2004-06-08 Hitachi Denshi Kabushiki Kaisha Television signal processor for generating video signal of wide dynamic range, television camera using the same, and method for television signal processing
US20060050163A1 (en) * 1999-06-15 2006-03-09 Wang Yibing M Dual sensitivity image sensor
US6972800B2 (en) * 2000-01-14 2005-12-06 Matsushita Electric Industrial Co., Ltd. Solid state imaging apparatus for a video camera capable of independently performing a knee point adjustment and a gain control
US20020145674A1 (en) * 2001-04-09 2002-10-10 Satoru Nakamura Imaging apparatus and signal processing method for the same

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050219391A1 (en) * 2004-04-01 2005-10-06 Microsoft Corporation Digital cameras with luminance correction
US7463296B2 (en) * 2004-04-01 2008-12-09 Microsoft Corporation Digital cameras with luminance correction
US20130120620A1 (en) * 2011-11-11 2013-05-16 Canon Kabushiki Kaisha Image processing apparatus and image processing method
US8896729B2 (en) * 2011-11-11 2014-11-25 Canon Kabushiki Kaisha Image processing apparatus and image processing method
US20130229557A1 (en) * 2012-03-01 2013-09-05 Canon Kabushiki Kaisha Image pickup apparatus, image pickup system, driving method for image pickup apparatus, and driving method for image pickup system
US9077921B2 (en) * 2012-03-01 2015-07-07 Canon Kabushiki Kaisha Image pickup apparatus, image pickup system, driving method for image pickup apparatus, and driving method for image pickup system using two analog-to-digital conversions

Also Published As

Publication number Publication date Type
US20070076113A1 (en) 2007-04-05 application
US20040051790A1 (en) 2004-03-18 application
US20070076103A1 (en) 2007-04-05 application
US7508421B2 (en) 2009-03-24 grant
US7750950B2 (en) 2010-07-06 grant

Similar Documents

Publication Publication Date Title
US6906744B1 (en) Electronic camera
US6204881B1 (en) Image data processing apparatus which can combine a plurality of images at different exposures into an image with a wider dynamic range
US6952225B1 (en) Method and apparatus for automatic white balance adjustment based upon light source type
US6480226B1 (en) Image pickup apparatus having gradation control function for providing image signals definitive of backlighted objects
US20070085917A1 (en) Image pickup apparatus for preventing linearity defect
US20030030729A1 (en) Dual mode digital imaging and camera system
US5801773A (en) Image data processing apparatus for processing combined image signals in order to extend dynamic range
US7580061B2 (en) Image sensing apparatus which determines white balance correction information before photographing
US7158174B2 (en) Method for automatic white balance of digital images
US20010024237A1 (en) Solid-state honeycomb type image pickup apparatus using a complementary color filter and signal processing method therefor
US20070132858A1 (en) Imaging apparatus and image processor
US6813040B1 (en) Image processor, image combining method, image pickup apparatus, and computer-readable storage medium storing image combination program
US7236190B2 (en) Digital image processing using white balance and gamma correction
US20040085475A1 (en) Automatic exposure control system for a digital camera
US20030001958A1 (en) White balance adjustment method, image processing apparatus and electronic camera
US20080218597A1 (en) Solid-state imaging device and imaging apparatus
US20060119738A1 (en) Image sensor, image capturing apparatus, and image processing method
US6919924B1 (en) Image processing method and image processing apparatus
US20020085100A1 (en) Electronic camera
US20020118967A1 (en) Color correcting flash apparatus, camera, and method
US6831695B1 (en) Image pickup apparatus for outputting an image signal representative of an optical image and image pickup control method therefor
US20020140830A1 (en) Signal processing apparatus and method, and image sensing apparatus
US20020071041A1 (en) Enhanced resolution mode using color image capture device
US7358988B1 (en) Image signal processor for performing image processing appropriate for an output device and method therefor
US20020167596A1 (en) Image signal processing device, digital camera and computer program product for processing image signal

Legal Events

Date Code Title Description
AS Assignment

Owner name: FUJIFILM HOLDINGS CORPORATION, JAPAN

Free format text: CHANGE OF NAME;ASSIGNOR:FUJI PHOTO FILM CO., LTD.;REEL/FRAME:018898/0872

Effective date: 20061001


AS Assignment

Owner name: FUJIFILM CORPORATION, JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:FUJIFILM HOLDINGS CORPORATION;REEL/FRAME:018934/0001

Effective date: 20070130
